| repo_name (stringlengths 6–77) | path (stringlengths 8–215) | license (stringclasses, 15 values) | content (stringlengths 335–154k) |
|---|---|---|---|
hhain/sdap17
|
notebooks/solution_ueb01/02_Classification.ipynb
|
mit
|
# imports
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import time
import matplotlib.pyplot as plt
import seaborn as sns
"""
Explanation: Aufgabe 2: Classification
A short test to examine the performance gain from using multiple cores with sklearn's ensemble classifier, random forest.
Depending on the available system, the maximum number of jobs to test and the sample size can be adjusted by changing the respective parameters.
End of explanation
"""
num_samples = 500 * 1000
num_features = 40
X, y = make_classification(n_samples=num_samples, n_features=num_features)
"""
Explanation: First we create a training set of size num_samples and num_features.
End of explanation
"""
# test different number of cores: here max 8
max_cores = 8
num_cpu_list = list(range(1,max_cores + 1))
max_sample_list = [int(l * num_samples) for l in [0.1, 0.2, 1, 0.001]]
training_times_all = []
# the default setting for classifier
clf = RandomForestClassifier()
for max_sample in max_sample_list:
training_times = []
for num_cpu in num_cpu_list:
# change number of cores
clf.set_params(n_jobs=num_cpu)
# train classifier on training data
t = %timeit -o clf.fit(X[:max_sample+1], y[:max_sample+1])
# save the runtime to the list
training_times.append(t.best)
# print logging message
print("Computing for {} samples and {} cores DONE.".format(max_sample,num_cpu))
training_times_all.append(training_times)
print("All computations DONE.")
"""
Explanation: Next we run a performance test on the created data set. To do so, we train a random forest classifier multiple times and measure the training time. Each time we use a different number of jobs to train the classifier. We repeat the process on training sets of various sizes.
End of explanation
"""
plt.plot(num_cpu_list, training_times_all[0], 'ro', label="{}k".format(max_sample_list[0]//1000))
plt.plot(num_cpu_list, training_times_all[1], "bs" , label="{}k".format(max_sample_list[1]//1000))
plt.plot(num_cpu_list, training_times_all[2], "g^" , label="{}k".format(max_sample_list[2]//1000))
plt.axis([0, len(num_cpu_list)+1, 0, max(training_times_all[2])+1])
plt.title("Training time vs #CPU Cores")
plt.xlabel("#CPU Cores")
plt.ylabel("training time [s]")
plt.legend()
plt.show()
"""
Explanation: Finally we plot and evaluate our results.
End of explanation
"""
plt.plot(num_cpu_list, training_times_all[3], 'ro', label="{}k".format(max_sample_list[3]/1000))
plt.axis([0, len(num_cpu_list)+1, 0, max(training_times_all[3])+1])
plt.title("Training time vs #CPU Cores on small dataset")
plt.xlabel("#CPU Cores")
plt.ylabel("training time [s]")
plt.legend()
plt.show()
"""
Explanation: The training time is inversely proportional to the number of CPU cores used.
End of explanation
"""
|
tleonhardt/machine_learning
|
SL3_Neural_Networks.ipynb
|
apache-2.0
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.special import expit
line = np.linspace(-3, 3, 100)
plt.figure(figsize=(10,8))
plt.plot(line, np.tanh(line), label="tanh")
plt.plot(line, np.maximum(line, 0), label="relu")
plt.plot(line, expit(line), label='sigmoid')
plt.legend(loc="best")
plt.xlabel("x")
plt.ylabel("relu(x), tanh(x), sigmoid(x)")
"""
Explanation: Neural Networks (Deep Learning)
A family of algorithms known as neural networks has recently seen a revival under the name “deep learning.” While deep learning shows great promise in many machine learning applications, deep learning algorithms are often tailored very carefully to a specific use case. Here, we will only discuss some relatively simple methods, namely multilayer perceptrons for classification and regression, that can serve as a starting point for more involved deep learning methods. Multilayer perceptrons (MLPs) are also known as (vanilla) feed-forward neural networks, or sometimes just neural networks.
Disclaimer: Much of the code in this notebook was lifted from the excellent book Introduction to Machine Learning with Python by Andreas Muller and Sarah Guido.
The neural network model
MLPs can be viewed as generalizations of linear models that perform multiple stages of processing to come to a decision.
Remember that the prediction by a linear regressor is given as:
ŷ = w[0] * x[0] + w[1] * x[1] + ... + w[p] * x[p] + b
In plain English, ŷ is a weighted sum of the input features x[0] to x[p], weighted by the learned coefficients w[0] to w[p].
In an MLP this process of computing weighted sums is repeated multiple times, first computing hidden units that represent an intermediate processing step, which are again combined using weighted sums to yield the final result.
This model has a lot more coefficients (also called weights) to learn than a simple linear model: there is one between every input and every hidden unit (which make up the hidden layer), and one between every unit in the hidden layer and the output.
Computing a series of weighted sums is mathematically the same as computing just one weighted sum, so to make this model truly more powerful than a linear model, we need one extra trick. After computing a weighted sum for each hidden unit, a nonlinear function is applied to the result—usually the rectifying nonlinearity (also known as rectified linear unit or relu), the tangens hyperbolicus (tanh), or the sigmoid (also called the logistic function). The result of this function is then used in the weighted sum that computes the output, ŷ. The relu cuts off values below zero, while tanh saturates to –1 for low input values and +1 for high input values, and sigmoid saturates to 0 for low input values and +1 for high input values. Any of these nonlinear functions allows the neural network to learn much more complicated functions than a linear model could. Below is a plot of these three nonlinear functions:
End of explanation
"""
# Tweak some colormap stuff for Matplotlib
from matplotlib.colors import ListedColormap
cm2 = ListedColormap(['#0000aa', '#ff2020'])
# Helper function for classification plots
def plot_2d_separator(classifier, X, fill=False, ax=None, eps=None, alpha=1, cm=cm2, linewidth=None, threshold=None,
linestyle="solid"):
# binary?
if eps is None:
eps = X.std() / 2.
if ax is None:
ax = plt.gca()
x_min, x_max = X[:, 0].min() - eps, X[:, 0].max() + eps
y_min, y_max = X[:, 1].min() - eps, X[:, 1].max() + eps
xx = np.linspace(x_min, x_max, 100)
yy = np.linspace(y_min, y_max, 100)
X1, X2 = np.meshgrid(xx, yy)
X_grid = np.c_[X1.ravel(), X2.ravel()]
try:
decision_values = classifier.decision_function(X_grid)
levels = [0] if threshold is None else [threshold]
fill_levels = [decision_values.min()] + levels + [decision_values.max()]
except AttributeError:
# no decision_function
decision_values = classifier.predict_proba(X_grid)[:, 1]
levels = [.5] if threshold is None else [threshold]
fill_levels = [0] + levels + [1]
if fill:
ax.contourf(X1, X2, decision_values.reshape(X1.shape), levels=fill_levels, alpha=alpha, cmap=cm)
else:
ax.contour(X1, X2, decision_values.reshape(X1.shape), levels=levels, colors="black", alpha=alpha, linewidths=linewidth,
linestyles=linestyle, zorder=5)
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_min, y_max)
ax.set_xticks(())
ax.set_yticks(())
# Helper function for classification plots
import matplotlib as mpl
from matplotlib.colors import colorConverter
def discrete_scatter(x1, x2, y=None, markers=None, s=10, ax=None,
labels=None, padding=.2, alpha=1, c=None, markeredgewidth=None):
"""Adaption of matplotlib.pyplot.scatter to plot classes or clusters.
Parameters
----------
x1 : nd-array
input data, first axis
x2 : nd-array
input data, second axis
y : nd-array
input data, discrete labels
cmap : colormap
Colormap to use.
markers : list of string
List of markers to use, or None (which defaults to 'o').
s : int or float
Size of the marker
padding : float
Fraction of the dataset range to use for padding the axes.
alpha : float
Alpha value for all points.
"""
if ax is None:
ax = plt.gca()
if y is None:
y = np.zeros(len(x1))
unique_y = np.unique(y)
if markers is None:
markers = ['o', '^', 'v', 'D', 's', '*', 'p', 'h', 'H', '8', '<', '>'] * 10
if len(markers) == 1:
markers = markers * len(unique_y)
if labels is None:
labels = unique_y
# lines in the matplotlib sense, not actual lines
lines = []
current_cycler = mpl.rcParams['axes.prop_cycle']
for i, (yy, cycle) in enumerate(zip(unique_y, current_cycler())):
mask = y == yy
# if c is none, use color cycle
if c is None:
color = cycle['color']
elif len(c) > 1:
color = c[i]
else:
color = c
# use light edge for dark markers
if np.mean(colorConverter.to_rgb(color)) < .4:
markeredgecolor = "grey"
else:
markeredgecolor = "black"
lines.append(ax.plot(x1[mask], x2[mask], markers[i], markersize=s,
label=labels[i], alpha=alpha, c=color,
markeredgewidth=markeredgewidth,
markeredgecolor=markeredgecolor)[0])
if padding != 0:
pad1 = x1.std() * padding
pad2 = x2.std() * padding
xlim = ax.get_xlim()
ylim = ax.get_ylim()
ax.set_xlim(min(x1.min() - pad1, xlim[0]), max(x1.max() + pad1, xlim[1]))
ax.set_ylim(min(x2.min() - pad2, ylim[0]), max(x2.max() + pad2, ylim[1]))
return lines
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
X, y = make_moons(n_samples=100, noise=0.25, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
mlp = MLPClassifier(hidden_layer_sizes=[100], activation='relu', solver='lbfgs', random_state=0).fit(X_train, y_train)
plt.figure(figsize=(10,6))
plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
"""
Explanation: For a small neural network with a single hidden layer with three nodes, the full formula for computing ŷ in the case of regression would be (when using a tanh nonlinearity):
h[0] = tanh(w[0, 0] * x[0] + w[1, 0] * x[1] + w[2, 0] * x[2] + w[3, 0] * x[3])
h[1] = tanh(w[0, 1] * x[0] + w[1, 1] * x[1] + w[2, 1] * x[2] + w[3, 1] * x[3])
h[2] = tanh(w[0, 2] * x[0] + w[1, 2] * x[1] + w[2, 2] * x[2] + w[3, 2] * x[3])
ŷ = v[0] * h[0] + v[1] * h[1] + v[2] * h[2]
Here, w are the weights between the input x and the hidden layer h, and v are the weights between the hidden layer h and the output ŷ. The weights v and w are learned from data, x are the input features, ŷ is the computed output, and h are intermediate computations. An important parameter that needs to be set by the user is the number of nodes in the hidden layer. This can be as small as 10 for very small or simple datasets and as big as 10,000 for very complex data. It is also possible to add additional hidden layers. A small numeric sketch of this forward pass is shown right after this explanation.
Having large neural networks made up of many of these layers of computation is what inspired the term “deep learning.”
Advantages of Neural Networks
Able to capture information contained in large amounts of data and build incredibly complex models
Given enough computation time, data, and careful tuning of the parameters, neural networks often beat other machine learning algorithms (for classification and regression tasks)
Disadvantages of Neural Networks
Neural networks—particularly the large and powerful ones—often take a long time to train
Require careful preprocessing of the data
They work best with “homogeneous” data, where all the features have similar meanings
For data that has very different kinds of features, tree-based models might work better
Tuning neural network parameters is an art and generally more complex than tuning parameters for other algorithms
Neural Networks in scikit-learn
Recent versions of scikit-learn have added rudimentary support for neural networks. The implementation in scikit-learn is not intended for large-scale applications. In particular, scikit-learn offers no GPU support. For much faster, GPU-based implementations, as well as frameworks offering much more flexibility to build deep learning architectures, see either Tensorflow, Theano, or Keras. Using Keras (which can run on top of Tensorflow) will be covered below.
Advantage of MLP
Capability to learn non-linear models
Capability to learn models in real-time (on-line learning) using partial_fit
Disadvantages of MLP
MLPs with hidden layers have a non-convex loss function with more than one local minimum. Therefore, different random weight initializations can lead to different validation accuracy.
MLP requires tuning a number of hyperparameters such as the number of hidden neurons, layers, and iterations.
MLP is sensitive to feature scaling.
scikit-learn has support for Multi-layer Perceptron (MLP) networks only. The MLPClassifier and MLPRegressor classes in the neural_network module handle classification and regression, respectively.
Tuning neural networks
Let’s look into the workings of the MLP by applying the MLPClassifier on a synthetic dataset.
scikit-learn has the make_moons function in the datasets module for creating a toy dataset consisting of two interleaving half circles for use with clustering and classification algorithms.
This can create a nice dataset for classification problems because the groups created are generally not linearly separable.
End of explanation
"""
mlp = MLPClassifier(hidden_layer_sizes=[10], activation='relu', solver='lbfgs', random_state=0).fit(X_train, y_train)
plt.figure(figsize=(10,6))
plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
"""
Explanation: As you can see, the neural network learned a very nonlinear but relatively smooth decision boundary. We used solver='lbfgs', which we will discuss later.
By default, the MLP uses 100 hidden nodes, which is quite a lot for this small dataset. We can reduce the number (which reduces the complexity of the model) and still get a good result:
End of explanation
"""
# using two hidden layers, with 10 units each
mlp = MLPClassifier(hidden_layer_sizes=[10, 10], activation='relu', solver='lbfgs', random_state=0).fit(X_train, y_train)
plt.figure(figsize=(10,6))
plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
# using two hidden layers, with 10 units each, now with tanh nonlinearity
# using two hidden layers, with 10 units each
mlp = MLPClassifier(hidden_layer_sizes=[10, 10], activation='tanh', solver='lbfgs', random_state=0).fit(X_train, y_train)
plt.figure(figsize=(10,6))
plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
"""
Explanation: With only 10 hidden units, the decision boundary looks somewhat more ragged. The default nonlinearity is relu, shown above. With a single hidden layer, this means the decision function will be made up of 10 straight line segments. If we want a smoother decision boundary, we could add more hidden units, as shown two figures above, add a second hidden layer, or use the tanh or logistic nonlinearity:
End of explanation
"""
fig, axes = plt.subplots(2, 4, figsize=(20, 8))
for axx, n_hidden_nodes in zip(axes, [10, 100]):
for ax, alpha in zip(axx, [0.0001, 0.01, 0.1, 1]):
mlp = MLPClassifier(hidden_layer_sizes=[n_hidden_nodes, n_hidden_nodes], activation='relu', solver='lbfgs',
alpha=alpha, random_state=0)
mlp.fit(X_train, y_train)
plot_2d_separator(mlp, X_train, fill=True, alpha=.3, ax=ax)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train, ax=ax)
ax.set_title("n_hidden=[{}, {}]\nalpha={:.4f}".format(n_hidden_nodes, n_hidden_nodes, alpha))
"""
Explanation: Finally, we can also control the complexity of a neural network by using an l2 penalty to shrink the weights toward zero, as we did in ridge regression and the linear classifiers. The parameter for this in the MLPClassifier is alpha (as in the linear regression models), and it’s set to a very low value (little regularization) by default. The figure below shows the effect of different values of alpha on the two_moons dataset, using two hidden layers of 10 or 100 units each:
End of explanation
"""
fig, axes = plt.subplots(2, 4, figsize=(20, 8))
for i, ax in enumerate(axes.ravel()):
mlp = MLPClassifier(hidden_layer_sizes=[100, 100], solver='lbfgs', random_state=i)
mlp.fit(X_train, y_train)
plot_2d_separator(mlp, X_train, fill=True, alpha=.3, ax=ax)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train, ax=ax)
"""
Explanation: As you probably have realized by now, there are many ways to control the complexity of a neural network: the number of hidden layers, the number of units in each hidden layer, and the regularization (alpha). There are actually even more, which we won’t go into here.
An important property of neural networks is that their weights are set randomly before learning is started, and this random initialization affects the model that is learned. That means that even when using exactly the same parameters, we can obtain very different models when using different random seeds. If the networks are large, and their complexity is chosen properly, this should not affect accuracy too much, but it is worth keeping in mind (particularly for smaller networks). The figure below shows plots of several models, all learned with the same settings of the parameters:
End of explanation
"""
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
print("Cancer data per-feature maxima:\n{}".format(cancer.data.max(axis=0)))
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=0)
mlp = MLPClassifier(random_state=42)
mlp.fit(X_train, y_train)
print("Accuracy on training set: {:.2f}".format(mlp.score(X_train, y_train)))
print("Accuracy on test set: {:.2f}".format(mlp.score(X_test, y_test)))
"""
Explanation: To get a better understanding of neural networks on real-world data, let’s apply the MLPClassifier to the Breast Cancer dataset which is built into scikit-learn. We start with the default parameters:
End of explanation
"""
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
mlp = MLPClassifier(random_state=0)
mlp.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.3f}".format(mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))
"""
Explanation: The accuracy of the MLP is quite good, but not as good as some other models. This is likely due to scaling of the data. Neural networks expect all input features to vary in a similar way, and ideally to have a mean of 0, and a variance of 1. We must rescale our data so that it fulfills these requirements. We can do this semi-automatically using the StandardScaler.
End of explanation
"""
mlp = MLPClassifier(max_iter=250, random_state=0)
mlp.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.3f}".format(mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))
"""
Explanation: The results are much better after scaling, and already quite competitive. We got a warning from the model, though, that tells us that the maximum number of iterations has been reached. This is part of the default adam solver for learning the model, and tells us that we should increase the number of iterations:
End of explanation
"""
mlp = MLPClassifier(max_iter=1000, alpha=1, random_state=0)
mlp.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.3f}".format(mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))
"""
Explanation: Increasing the number of iterations only slightly increased the training and generalization performance. Still, the model is performing quite well. As there is some gap between the training and the test performance, we might try to decrease the model’s complexity to get better generalization performance. Here, we choose to increase the alpha parameter (quite aggressively, from 0.0001 to 1) to add stronger regularization of the weights:
End of explanation
"""
# Here's the Sequential model
from keras.models import Sequential
model = Sequential()
# Stacking layers is as easy as .add()
from keras.layers import Dense, Dropout
model.add(Dense(100, input_dim=30, init='uniform', activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# Once your model looks good, configure its learning process with .compile()
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
"""
Explanation: This didn't help, but the performance is already excellent.
While it is possible to analyze what a neural network has learned, this is usually much trickier than analyzing a linear model or a tree-based model.
While the MLPClassifier and MLPRegressor provide easy-to-use interfaces for the most common neural network architectures, they only capture a small subset of what is possible with neural networks. If you are interested in working with more flexible or larger models, you need to look beyond scikit-learn into the fantastic deep learning libraries that are out there. For Python users, the most well-established are tensorflow, theano, and keras. Theano is a mature low-level library, tensorflow is a new and up-and-coming mid-level library, and keras is a high-level library which can use either tensorflow or theano as a backend. These libraries provide a much more flexible interface to build neural networks and track the rapid progress in deep learning research. All of the popular deep learning libraries also allow the use of high-performance graphics processing units (GPUs), which scikit-learn does not support. Using GPUs allows us to accelerate computations by factors of 10x to 100x, and they are essential for applying deep learning methods to large-scale datasets.
A brief rundown of the advantages and disadvantages of the various libraries is as follows:
Tensorflow - an open source software library for numerical computation using data flow graphs
Rising star from Google which was built to be a replacement for Theano - Good API and rich visualization capabilities. Expected to become the best, but maybe not there yet.
Pros:
Faster compile times than Theano
Limitations:
Linux or Mac OS X (not available on Windows)
GPU acceleration only available for Nvidia CUDA GPUs
Theano - a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently
Mature product which is well optimized and has broad support, but maybe not the best future roadmap.
Pros:
Faster execution times than Tensorflow in most cases
Broader OS support: Linux, Mac OS, or Windows
Broader GPU support: supports Nvidia via CUDA and others via OpenCL
Keras - Deep Learning library for Theano and TensorFlow
a high-level neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation.
Pros:
Can switch between Tensorflow and Theano backends without modifying any code
Allows you to quickly experiment with using both
Allows for easy and fast prototyping (through total modularity, minimalism, and extensibility)
Supports both convolutional networks and recurrent networks, as well as combinations of the two
Supports arbitrary connectivity schemes (including multi-input and multi-output training)
Runs seamlessly on CPU and GPU
Cons:
Errors thrown are difficult to debug
Bad documentation
Not many good examples
Lack of pre-trained models
Going beyond surface-level customization is difficult
Neural Networks in Keras
Since we can use Keras on all operating systems, with or without a GPU, and with either Tensorflow or Theano as a backend, it is an excellent step up from scikit-learn for Neural Networks and deep learning. It also has an easier learning curve than either Tensorflow or Theano. It isn't as fully flexible as either of those, but it is easy to use and powerful.
The core data structure of Keras is a model, a way to organize layers. The main type of model is the Sequential model, a linear stack of layers. For more complex architectures, you should use the Keras functional API.
Let's try to use Keras to fit the same data as above ...
End of explanation
"""
# You can now iterate on your training data in batches
model.fit(X_train, y_train, nb_epoch=20, batch_size=32)
# Evaluate your performance in one line
loss_and_metrics = model.evaluate(X_test, y_test, batch_size=32)
print(loss_and_metrics)
"""
Explanation: If you need to, you can further configure your optimizer. A core principle of Keras is to make things reasonably simple, while allowing the user to be fully in control when they need to (the ultimate control being the easy extensibility of the source code).
End of explanation
"""
|
jdhp-docs/python_notebooks
|
nb_dev_python/python_keras_1d_linear_regression.ipynb
|
mit
|
import tensorflow as tf
tf.__version__
import keras
keras.__version__
import h5py
h5py.__version__
import pydot
pydot.__version__
"""
Explanation: Basic 1D linear regression with Keras
Install Keras
https://keras.io/#installation
Install dependencies
Install TensorFlow backend: https://www.tensorflow.org/install/
pip install tensorflow
Install h5py (required if you plan on saving Keras models to disk): http://docs.h5py.org/en/latest/build.html#wheels
pip install h5py
Install pydot (used by visualization utilities to plot model graphs): https://github.com/pydot/pydot#installation
pip install pydot
Install Keras
pip install keras
Import packages and check versions
End of explanation
"""
df_train = gen_1d_linear_samples(n_samples=100, noise_std=1.0)
x_train = df_train.x.values
y_train = df_train.y.values
plt.plot(x_train, y_train, ".k");
df_test = gen_1d_linear_samples(n_samples=100, noise_std=None)
x_test = df_test.x.values
y_test = df_test.y.values
plt.plot(x_test, y_test, ".k");
"""
Explanation: Make the dataset
End of explanation
"""
model = keras.models.Sequential()
model.add(keras.layers.Dense(units=1, activation='linear', input_dim=1))
model.compile(loss='mse',
optimizer='sgd')
model.summary()
hist = model.fit(x_train, y_train, epochs=200, verbose=False)
plt.plot(hist.history['loss']);
model.evaluate(x_test, y_test)
y_predicted = model.predict(x_test)
plt.plot(x_test, y_test, ".r")
plt.plot(x_test, y_predicted, ".k");
weights_list = model.get_weights()
print(weights_list)
"""
Explanation: Make the regressor
End of explanation
"""
from keras.utils import plot_model
plot_model(model, show_shapes=True, to_file="model.png")
"""
Explanation: Bonus: plot the regressor
End of explanation
"""
|
AutuanLiu/Python
|
nbs/numba_basic.ipynb
|
mit
|
%matplotlib inline
# Support printing multiple results per cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
"""
Explanation: Numba basics
Numba is a compiler for Python arrays and numerical functions that lets you speed up applications with high-performance functions written directly in Python.
Numba uses the LLVM compiler infrastructure to generate optimized machine code from pure Python code. With a few simple annotations, array-oriented and math-heavy Python code can be just-in-time optimized to performance comparable to C, C++ and Fortran, without switching languages or Python interpreters.
CPU and GPU
numpy based
1.1. Overview — Numba 0.36.1-py2.7-macosx-10.6-x86_64.egg documentation
Universal functions (ufunc) — NumPy v1.14 Manual
End of explanation
"""
import numpy as np
import numba
numba.__version__
# Declare with the jit decorator
@numba.jit
def sum2d(arr):
M, N = arr.shape
result = 0.0
for i in range(M):
for j in range(N):
result += arr[i,j]
return result
x = np.random.randn(8000, 8000) * 5000 ** 3
sum2d(x)
# Without acceleration
def sum2d(arr):
M, N = arr.shape
result = 0.0
for i in range(M):
for j in range(N):
result += arr[i,j]
return result
x = np.random.randn(8000, 8000) * 5000 ** 3
sum2d(x)
"""
Explanation: numpy example
End of explanation
"""
@numba.jit(nopython=True)
def f(x, y):
return x + y
f(1, 2)
f(2j, 3)
# Values exceeding 32 bits will be truncated
@numba.jit([numba.complex128(numba.complex128, numba.complex128)],
target='cpu')
def f(x, y):
return x + y
f(1, 2)
# Type error
f(2j, 2 + 3j)
"""
Explanation: As the comparison above shows, the difference becomes quite significant when the amount of data is large.
Using @numba.jit directly is the most general approach.
You can also tell jit the function's type signature.
End of explanation
"""
from numba import jit
jit1 = jit(nopython=True, parallel=True)
@jit1
def f(x, y):
return x + y
from numba import generated_jit, types
@generated_jit(nopython=True)
def is_missing(x):
"""
Return True if the value is missing, False otherwise.
"""
if isinstance(x, types.Float):
return lambda x: np.isnan(x)
elif isinstance(x, (types.NPDatetime, types.NPTimedelta)):
# The corresponding Not-a-Time value
missing = x('NaT')
return lambda x: x == missing
else:
return lambda x: False
# test
is_missing(np.NaN)
from numba import vectorize
from numba import (int16, int32, int64, float32, float64, complex128)
vec_cpu = vectorize([
int16(int16, int16),
int32(int32, int32),
int64(int64, int64),
float32(float32, float32),
float64(float64, float64),
complex128(complex128, complex128)
])
@vec_cpu
def f(x, y):
return x + y
a = np.arange(6)
a
a + a
f(a, a)
a = np.linspace(0, 1, 6)
a
f(a, a)
a = np.linspace(0, 1+1j, 6)
a
# complex128 was added to the signature list, so this works; without it this dtype would not be supported
f(a, a)
# from functools import reduce
a = np.arange(12).reshape(3, 4)
a
# numpy-style reduce
f.reduce(a, axis=0)
f.reduce(a, axis=1)
f.accumulate(a)
f.accumulate(a, axis=1)
"""
Explanation: Supported data types
array types can be specified by indexing any numeric type, e.g. float32[:] for a one-dimensional single-precision array or int8[:,:] for a two-dimensional array of 8-bit integers (a small signature sketch follows this list).
int32
void (returns None in Python)
intp
uintp (unsigned pointer)
intc, uintc (equivalent to C signed and unsigned int)
int8, uint8, int16, uint16, int32, uint32, int64, uint64 (fixed-width integers)
float32, float64 (single- and double-precision floating point)
complex64, complex128 (single- and double-precision complex floating point)
mode
nopython (highest performance)
object
End of explanation
"""
from numba import njit, prange
@njit(parallel=True)
def prange_test(A):
s = 0
for i in prange(A.shape[0]):
s += A[i]
return s
prange_test(np.array([1, 3, 4, 6, 8]))
from timeit import default_timer as timer
from matplotlib.pylab import imshow, jet, show, ion
import numpy as np
from numba import jit
@jit
def mandel(x, y, max_iters):
"""
Given the real and imaginary parts of a complex number,
determine if it is a candidate for membership in the Mandelbrot
set given a fixed number of iterations.
"""
i = 0
c = complex(x,y)
z = 0.0j
for i in range(max_iters):
z = z*z + c
if (z.real*z.real + z.imag*z.imag) >= 4:
return i
return 255
@jit
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
image = np.zeros((500 * 2, 750 * 2), dtype=np.uint8)
s = timer()
create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
e = timer()
print(e - s)
imshow(image)
jet()
# ion()
show()
import numba
@numba.jit()
def c(n):
count=0
for i in range(n):
for i in range(n):
count+=1
return count
n=99999
c(n)
def c(n):
count=0
for i in range(n):
for i in range(n):
count+=1
return count
n=99999
c(n)
"""
Explanation: jitclass compiles Python classes (see the sketch below)
njit for parallel computation
End of explanation
"""
|
tensorflow/docs-l10n
|
site/ja/addons/tutorials/optimizers_conditionalgradient.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
from matplotlib import pyplot as plt
# Hyperparameters
batch_size=64
epochs=10
"""
Explanation: TensorFlow Addons Optimizers: ConditionalGradient
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/addons/tutorials/optimizers_conditionalgradient"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> TensorFlow.orgで表示</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示{</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード/a0}</a></td>
</table>
Overview
This notebook demonstrates how to use the Conditional Gradient optimizer from the Addons package.
ConditionalGradient
Constraining the parameters of a neural network has been shown to be beneficial in training because of the underlying regularization effect. Parameters are often constrained via a soft penalty (which does not guarantee constraint satisfaction) or via a projection operation (which is computationally expensive). The Conditional Gradient (CG) optimizer, on the other hand, enforces the constraints strictly without the need for an expensive projection step. It works by minimizing a linear approximation of the objective within the constraint set. This guide shows how to apply a Frobenius norm constraint via the CG optimizer on the MNIST dataset. CG is now available as a tensorflow API. More details on the optimizer are available at https://arxiv.org/pdf/1803.06453.pdf.
Setup
End of explanation
"""
model_1 = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
"""
Explanation: Build the model
End of explanation
"""
# Load MNIST dataset as NumPy arrays
dataset = {}
num_validation = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Preprocess the data
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
"""
Explanation: Prepare the data
End of explanation
"""
def frobenius_norm(m):
"""This function is to calculate the frobenius norm of the matrix of all
layer's weight.
Args:
m: is a list of weights param for each layers.
"""
total_reduce_sum = 0
for i in range(len(m)):
total_reduce_sum = total_reduce_sum + tf.math.reduce_sum(m[i]**2)
norm = total_reduce_sum**0.5
return norm
CG_frobenius_norm_of_weight = []
CG_get_weight_norm = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lambda batch, logs: CG_frobenius_norm_of_weight.append(
frobenius_norm(model_1.trainable_weights).numpy()))
"""
Explanation: Define a custom callback function
End of explanation
"""
# Compile the model
model_1.compile(
optimizer=tfa.optimizers.ConditionalGradient(
learning_rate=0.99949, lambda_=203), # Utilize TFA optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
history_cg = model_1.fit(
x_train,
y_train,
batch_size=batch_size,
validation_data=(x_test, y_test),
epochs=epochs,
callbacks=[CG_get_weight_norm])
"""
Explanation: Train and evaluate: using CG as the optimizer
Simply replace a typical Keras optimizer with the new TFA optimizer.
End of explanation
"""
model_2 = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
SGD_frobenius_norm_of_weight = []
SGD_get_weight_norm = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lambda batch, logs: SGD_frobenius_norm_of_weight.append(
frobenius_norm(model_2.trainable_weights).numpy()))
# Compile the model
model_2.compile(
optimizer=tf.keras.optimizers.SGD(0.01), # Utilize SGD optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
history_sgd = model_2.fit(
x_train,
y_train,
batch_size=batch_size,
validation_data=(x_test, y_test),
epochs=epochs,
callbacks=[SGD_get_weight_norm])
"""
Explanation: Train and evaluate: using SGD as the optimizer
End of explanation
"""
plt.plot(
CG_frobenius_norm_of_weight,
color='r',
label='CG_frobenius_norm_of_weights')
plt.plot(
SGD_frobenius_norm_of_weight,
color='b',
label='SGD_frobenius_norm_of_weights')
plt.xlabel('Epoch')
plt.ylabel('Frobenius norm of weights')
plt.legend(loc=1)
"""
Explanation: Frobenius norm of the weights: CG vs. SGD
The current implementation of the CG optimizer is based on the Frobenius norm, treating the Frobenius norm as a regularizer of the target function. Here, we compare the regularizing effect of CG with the SGD optimizer, which has no Frobenius norm regularizer.
End of explanation
"""
plt.plot(history_cg.history['accuracy'], color='r', label='CG_train')
plt.plot(history_cg.history['val_accuracy'], color='g', label='CG_test')
plt.plot(history_sgd.history['accuracy'], color='pink', label='SGD_train')
plt.plot(history_sgd.history['val_accuracy'], color='b', label='SGD_test')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc=4)
"""
Explanation: Training and validation accuracy: CG vs. SGD
End of explanation
"""
|
KDD-OpenSource/geox-young-academy
|
day-3/solutions/solution_david_timo.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(123541312)
n = 100
z = np.zeros((n))
m = np.zeros((n))
mh = np.zeros((n))
y = np.zeros((n))
C = np.zeros((n))
Ch = np.zeros((n))
K = np.zeros((n))
A = .5
B = .2
C[0] = .4
R = .01
H = 1
zeta = np.random.normal(0, B, n)
nu = np.random.normal(0, R, n)
z[0] = np.random.normal(0, C[0])
#m[0] = z[0] + np.random.normal(0, C[0])
#y[0] = A * z[0] + np.random.normal(0, R)
#y = np.random.normal(0, R, n)
for i in range(1,n):
z[i] = A * z[i-1] + zeta[i-1]
y[i] = z[i] + nu[i]
mh[i] = A * m[i-1]
Ch[i] = A * C[i-1] * (A) + B
tmp = (R + H * Ch[i] * H)**-1
K[i] = Ch[i] * H * tmp
m[i] = mh[i] - K[i] * (H * mh[i] - y[i])
C[i] = Ch[i] - K[i] * H * Ch[i]
fig, ax = plt.subplots()
ax.plot(np.arange(n), z, label=r'$z$')
ax.plot(np.arange(n), m, label=r'$m$')
ax.legend()
plt.show()
"""
Explanation: Linear Model
\begin{equation}
z_n=0.5z_{n-1}+\xi_{n-1}
\end{equation}
with
$\xi_{n-1}\sim N(0,B)$ and $z_{0}\sim N(0,0.4)$
Observations
\begin{equation}
y_n=z_n+\eta_n
\end{equation}
$\eta_{n-1}\sim N(0,R)$
Kalman filter
Forecast formulas:
\begin{align}
\hat{m}_{n+1}&=Am_n\\
\hat{C}_{n+1}&=AC_nA^{\top}+B
\end{align}
Analysis formulas
\begin{align}
m_{n+1}&=\hat{m}_{n+1}-K_{n+1}(H\hat{m}_{n+1}-y_{n+1})\\
C_{n+1}&=\hat{C}_{n+1}-K_{n+1}H\hat{C}_{n+1}
\end{align}
with Kalman gain
\begin{equation}
K_{n+1}=\hat{C}_{n+1}H^{\top}(R+H\hat{C}_{n+1}H^{\top})^{-1}
\end{equation}
Exercise: Please implement the Kalman filter for the example above
End of explanation
"""
print(np.random.normal(0,.5))
print(np.random.normal(0,.5))
print(np.random.normal(0,.5))
print(np.random.normal(0,.5))
print(np.random.normal(0,.5))
print(np.random.normal(0,.5))
"""
Explanation: Lorenz equations
\begin{align}
\dot{x}&=\sigma(y-x)\\
\dot{y}&=x(\rho-z)-y\\
\dot{z}&=xy-\beta z
\end{align}
Ensemble Kalman Filter
\begin{equation}
z^i_{n+1}=\hat{z}^i_{n+1}-K_{n+1}(H\hat{z}^i_{n+1}-\tilde{y}^i_{n+1})
\end{equation}
\begin{align}
m_{n}&\approx\frac{1}{M}\sum^M_{i=1}z^i_{n}\\
C_{n}&\approx\frac{1}{M}\sum^M_{i=1}(z^i_{n}-m_{n})(z^i_{n}-m_{n})^{\top}
\end{align}
Exercise: Please implement the Ensemble Kalman filter for the Lorenz equation
End of explanation
"""
|
Ttl/scikit-rf
|
doc/source/examples/networktheory/Properties of Rectangular Waveguides.ipynb
|
bsd-3-clause
|
%matplotlib inline
import skrf as rf
rf.stylely()
# imports
from scipy.constants import mil,c
from skrf.media import RectangularWaveguide, Freespace
from skrf.frequency import Frequency
import matplotlib as mpl
# plot formatting
mpl.rcParams['lines.linewidth'] = 2
# create frequency objects for standard bands
f_wr5p1 = Frequency(140,220,1001, 'ghz')
f_wr3p4 = Frequency(220,330,1001, 'ghz')
f_wr2p2 = Frequency(330,500,1001, 'ghz')
f_wr1p5 = Frequency(500,750,1001, 'ghz')
f_wr1 = Frequency(750,1100,1001, 'ghz')
# create rectangular waveguide objects
wr5p1 = RectangularWaveguide(f_wr5p1.copy(), a=51*mil, b=25.5*mil, rho = 'au')
wr3p4 = RectangularWaveguide(f_wr3p4.copy(), a=34*mil, b=17*mil, rho = 'au')
wr2p2 = RectangularWaveguide(f_wr2p2.copy(), a=22*mil, b=11*mil, rho = 'au')
wr1p5 = RectangularWaveguide(f_wr1p5.copy(), a=15*mil, b=7.5*mil, rho = 'au')
wr1 = RectangularWaveguide(f_wr1.copy(), a=10*mil, b=5*mil, rho = 'au')
# add names to waveguide objects for use in plot legends
wr5p1.name = 'WR-5.1'
wr3p4.name = 'WR-3.4'
wr2p2.name = 'WR-2.2'
wr1p5.name = 'WR-1.5'
wr1.name = 'WR-1.0'
# create a list to iterate through
wg_list = [wr5p1, wr3p4,wr2p2,wr1p5,wr1]
# create a Freespace object too
freespace = Freespace(Frequency(125,1100, 1001))
freespace.name = 'Free Space'
"""
Explanation: Properties of Rectangular Waveguide
Introduction
This example demonstrates how to use scikit-rf to calculate some properties of rectangular waveguide. For more information regarding the theoretical basis for these calculations, see the References.
Object Creation
This first section imports necessary modules and creates several RectangularWaveguide objects for some standard waveguide bands.
End of explanation
"""
from pylab import *
for wg in wg_list:
wg.frequency.plot(rf.np_2_db(wg.alpha), label=wg.name )
legend()
xlabel('Frequency(GHz)')
ylabel('Loss (dB/m)')
title('Loss in Rectangular Waveguide (Au)');
xlim(100,1300)
resistivity_list = linspace(1,10,5)*1e-8 # ohm meter
for rho in resistivity_list:
wg = RectangularWaveguide(f_wr1.copy(), a=10*mil, b=5*mil,
rho = rho)
wg.frequency.plot(rf.np_2_db(wg.alpha),label=r'$ \rho $=%.e$ \Omega m$'%rho )
legend()
#ylim(.0,20)
xlabel('Frequency(GHz)')
ylabel('Loss (dB/m)')
title('Loss vs. Resistivity in\nWR-1.0 Rectangular Waveguide');
"""
Explanation: Conductor Loss
End of explanation
"""
for wg in wg_list:
wg.frequency.plot(100*wg.v_p.real/c, label=wg.name )
legend()
ylim(50,200)
xlabel('Frequency(GHz)')
ylabel('Phase Velocity (\%c)')
title('Phase Velocity in Rectangular Waveguide');
for wg in wg_list:
plt.plot(wg.frequency.f_scaled[1:],
100/c*diff(wg.frequency.w)/diff(wg.beta),
label=wg.name )
legend()
ylim(50,100)
xlabel('Frequency(GHz)')
ylabel('Group Velocity (\%c)')
title('Group Velocity in Rectangular Waveguide');
"""
Explanation: Phase Velocity
End of explanation
"""
for wg in wg_list+[freespace]:
wg.frequency.plot(wg.beta, label=wg.name )
legend()
xlabel('Frequency(GHz)')
ylabel('Propagation Constant (rad/m)')
title('Propagation Constant \nin Rectangular Waveguide');
semilogy();
"""
Explanation: Propagation Constant
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session02/Day5/ImageVizSolutions.ipynb
|
mit
|
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.wcs import WCS
from astropy.visualization import (MinMaxInterval,
LogStretch,
ImageNormalize)
%matplotlib inline
hdu = fits.open('./data/w5.fits')[0]
wcs = WCS(hdu.header)
hdu2 = fits.open('./data/0259p6031_1342192088_SpirePhoto_L20_PMP350_SPG14.0.fits.gz')[1]
norm = ImageNormalize(hdu.data, interval=MinMaxInterval(),
stretch=LogStretch())
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(projection=wcs)
overlay = ax.get_coords_overlay('galactic')
plt.imshow(hdu.data, norm=norm, origin="lower", cmap='Greys_r')
ax.coords['ra'].set_ticks(color='green')
ax.coords['dec'].set_ticks(color='green')
ax.coords['ra'].set_axislabel('Right Ascension')
ax.coords['dec'].set_axislabel('Declination')
ax.coords.grid(color='green', linestyle='solid', alpha=1.0)
overlay['l'].set_ticks(color='cyan')
overlay['b'].set_ticks(color='cyan')
overlay['l'].set_axislabel('Galactic Longitude')
overlay['b'].set_axislabel('Galactic Latitude')
overlay.grid(color='cyan', linestyle='solid', alpha=1.0)
ax.contour(hdu2.data, transform=ax.get_transform(WCS(hdu2.header)),
levels=[0.7,1.4,3], colors='white');
"""
Explanation: Solutions for image visualization
Some exercises make use of code in the notebook. Other exercises will require a GUI interaction. In those cases, one or more images of results are included.
1. matplotlib: Contours from another image, and secondary axes
Using matplotlib and astropy:
display the file .data/w5.fits as a bitmap with log stretch and min-max scaling
overlay the data in image extension 1 of ./data/0259p6031_1342192088_SpirePhoto_L20_PMP350_SPG14.0.fits.gz as white contours with levels drawn at [0.7, 1.4, 3] image units (Jy/beam)
display coordinate axes and grid (green, alpha=1) in (RA, Dec)
overlay a coordinate grid (cyan, alpha=1) and axis labels in Galactic longitude and latitude
End of explanation
"""
import numpy as np
from astropy.visualization import make_lupton_rgb
from astropy.io import fits
from reproject import reproject_interp
# Read in the three images downloaded from here:
g = fits.open('http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-g-001737-5-0039.fits.bz2')[0]
r = fits.open('http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-r-001737-5-0039.fits.bz2')[0]
i = fits.open('http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-i-001737-5-0039.fits.bz2')[0]
# remap r and i onto g
r_new, r_mask = reproject_interp(r, g.header)
i_new, i_mask = reproject_interp(i, g.header)
# zero out the unmapped values
i_new[np.logical_not(i_mask)] = 0
r_new[np.logical_not(r_mask)] = 0
# red=i, green=r, blue=g
# make a file with the default scaling
rgb_default = make_lupton_rgb(i_new, r_new, g.data, filename="ngc6976-default.jpeg")
# this scaling is very similar to the one used in Lupton et al. (2004)
rgb = make_lupton_rgb(i_new, r_new, g.data, Q=10, stretch=0.5, filename="ngc6976.jpeg")
"""
Explanation: 2. RGB-3-color images
Using astropy and reproject (installable with pip install reproject), follow these instructions in the Astropy documentation to make color RGB images. Compare the second one to Figure 1 of Lupton et al 2004.
End of explanation
"""
w5_250 = fits.open('./data/0259p6031_1342192088_SpirePhoto_L20_PMP250_SPG14.0.fits.gz')[1]
w5_350 = fits.open('./data/0259p6031_1342192088_SpirePhoto_L20_PMP350_SPG14.0.fits.gz')[1]
w5_500 = fits.open('./data/0259p6031_1342192088_SpirePhoto_L20_PMP500_SPG14.0.fits.gz')[1]
im250, msk250 = reproject_interp(w5_250, w5_500.header)
im350, msk350 = reproject_interp(w5_350, w5_500.header)
# zero out the unmapped values
im250[np.logical_not(msk250)] = 0
im350[np.logical_not(msk350)] = 0
rgb_w5_default = make_lupton_rgb(im250, im350, w5_500.data, filename="w5-default.jpeg")
"""
Explanation: 3. RGB colors of Herschel-SPIRE images
Reproject the 250 um image of W5 in ./data/0259p6031_1342192088_SpirePhoto_L20_PMP250_SPG14.0.fits.gz (extension 1) and the 350 micron image in ./data/0259p6031_1342192088_SpirePhoto_L20_PMP350_SPG14.0.fits.gz onto the 500 micron image in that directory, and try the same 3-color procedures.
End of explanation
"""
rgb_w5 = make_lupton_rgb(im250, im350, w5_500.data, Q=10, stretch=0.5, filename="w5.jpeg")
"""
Explanation:
End of explanation
"""
|
probml/pyprobml
|
deprecated/gp_spectral_mixture.ipynb
|
mit
|
try:
import tinygp
except ImportError:
%pip install -q tinygp
try:
import optax
except ImportError:
%pip install -q optax
import tinygp
import jax
import jax.numpy as jnp
class SpectralMixture(tinygp.kernels.Kernel):
def __init__(self, weight, scale, freq):
self.weight = jnp.atleast_1d(weight)
self.scale = jnp.atleast_1d(scale)
self.freq = jnp.atleast_1d(freq)
def evaluate(self, X1, X2):
tau = jnp.atleast_1d(jnp.abs(X1 - X2))[..., None]
return jnp.sum(
self.weight
* jnp.prod(
jnp.exp(-2 * jnp.pi**2 * tau**2 / self.scale**2) * jnp.cos(2 * jnp.pi * self.freq * tau),
axis=-1,
)
)
"""
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/gp_spectral_mixture.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Spectral mixture kernel in 1d for GP
https://tinygp.readthedocs.io/en/latest/tutorials/kernels.html#example-spectral-mixture-kernel
In this section, we will implement the "spectral mixture kernel" proposed by Gordon Wilson & Adams (2013).
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
def build_gp(theta):
kernel = SpectralMixture(
jnp.exp(theta["log_weight"]),
jnp.exp(theta["log_scale"]),
jnp.exp(theta["log_freq"]),
)
return tinygp.GaussianProcess(kernel, t, diag=jnp.exp(theta["log_diag"]), mean=theta["mean"])
params = {
"log_weight": np.log([1.0, 1.0]),
"log_scale": np.log([10.0, 20.0]),
"log_freq": np.log([1.0, 1.0 / 3.0]),
"log_diag": np.log(0.1),
"mean": 0.0,
}
random = np.random.default_rng(546)
t = np.sort(random.uniform(0, 10, 50))
true_gp = build_gp(params)
y = true_gp.sample(jax.random.PRNGKey(123))
plt.plot(t, y, ".k")
plt.ylim(-4.5, 4.5)
plt.title("simulated data")
plt.xlabel("x")
_ = plt.ylabel("y")
"""
Explanation: Now let's simulate some data from this model:
End of explanation
"""
import optax
@jax.jit
@jax.value_and_grad
def loss(theta):
return -build_gp(theta).condition(y)
opt = optax.sgd(learning_rate=3e-4)
opt_state = opt.init(params)
for i in range(1000):
loss_val, grads = loss(params)
updates, opt_state = opt.update(grads, opt_state)
params = optax.apply_updates(params, updates)
opt_gp = build_gp(params)
tau = np.linspace(0, 5, 500)
plt.plot(tau, true_gp.kernel(tau[:1], tau)[0], "--k", label="true kernel")
plt.plot(tau, opt_gp.kernel(tau[:1], tau)[0], label="inferred kernel")
plt.legend()
plt.xlabel(r"$\tau$")
plt.ylabel(r"$k(\tau)$")
_ = plt.xlim(tau.min(), tau.max())
plt.savefig("gp-spectral-mixture-learned-kernel.pdf")
"""
Explanation: One thing to note here is that we've used named parameters in a dictionary, instead of an array of parameters as in some of the other examples.
This would be awkward (but not impossible) to fit using scipy, so instead we'll use optax for optimization:
End of explanation
"""
x = np.linspace(-2, 12, 500)
plt.plot(t, y, ".k", label="data")
mu, var = opt_gp.predict(y, x, return_var=True)
plt.fill_between(
x,
mu + np.sqrt(var),
mu - np.sqrt(var),
color="C0",
alpha=0.5,
label="prediction",
)
plt.plot(x, mu, color="C0", lw=2)
plt.xlim(x.min(), x.max())
plt.ylim(-4.5, 4.5)
plt.legend(loc=2)
plt.xlabel("x")
_ = plt.ylabel("y")
plt.savefig("gp-spectral-mixture-pred.pdf")
"""
Explanation: Using our optimized model, over-plot the conditional predictions:
End of explanation
"""
|
TomTranter/OpenPNM
|
examples/tutorials/Creating a custom phase with pore-scale models.ipynb
|
mit
|
import numpy as np
import openpnm as op
pn = op.network.Cubic(shape=[3, 3, 3], spacing=1e-4)
print(pn)
"""
Explanation: Creating a custom fluid using GenericPhase
OpenPNM comes with a small selection of pre-written phases (Air, Water, Mercury). In many cases users will want different options, but it is not feasible or productive to include a wide variety of fluids. Consequently, OpenPNM has a mechanism for creating custom phases for this scenario. This requires that the user have correlations for the properties of interest, such as the viscosity as a function of temperature in the form of a polynomial, for instance. This process is described in the following tutorial:
Import the usual packages and instantiate a small network for demonstration purposes:
End of explanation
"""
oil = op.phases.GenericPhase(network=pn)
print(oil)
"""
Explanation: Now that a network is defined, we can create a GenericPhase object associated with it. For this demo we'll make an oil phase, so let's call it oil:
End of explanation
"""
oil['pore.molecular_mass'] = 100.0 # g/mol
print(oil['pore.molecular_mass'])
"""
Explanation: As can be seen in the above printout, this phase has a temperature and pressure set at all locations, but has no other physical properties.
There are 2 ways add physical properties. They can be hard-coded, or added as a 'pore-scale model'.
- Some are suitable as hard coded values, such as molecular mass
- Others should be added as a model, such as viscosity, which is a function of temperature so could vary spatially and should be updated depending on changing conditions in the simulation.
Start with hard-coding:
End of explanation
"""
oil['pore.molecular_mass'] = np.ones(shape=[pn.Np, ])*120.0
print(oil['pore.molecular_mass'])
"""
Explanation: As can be seen, this puts the value of 100.0 g/mol in every pore. Note that you could also assign each pore explicitly with a numpy array. OpenPNM automatically assigns a scalar value to every location as shown above.
End of explanation
"""
oil['pore.viscosity'] = 1600.0 # cP
"""
Explanation: You can also specify something like viscosity this way as well, but it's not recommended:
End of explanation
"""
oil['pore.temperature'] = 100.0 # C
print(oil['pore.viscosity'])
"""
Explanation: The problem with specifying the viscosity as a hard-coded value is that viscosity is a function of temperature (among other things), so if we adjust the temperature on the oil object it will have no effect on the hard-coded viscosity:
End of explanation
"""
mod = op.models.misc.polynomial
oil.add_model(propname='pore.viscosity', model=mod,
a=[1600, 12, -0.05], prop='pore.temperature')
"""
Explanation: The correct way to specify something like viscosity is to use pore-scale models. There is a large library of pre-written models in the openpnm.models submodule. For instance, a polynomial can be used as follows:
$$ viscosity = a_0 + a_1 \cdot T + a_2 \cdot T^2 = 1600 + 12 T - 0.05 T^2$$
End of explanation
"""
print(oil['pore.viscosity'])
"""
Explanation: We can now see that our previously written values of viscosity (1600.0) have been overwritten by the values coming from the model:
End of explanation
"""
oil['pore.temperature'] = 40.0 # C
oil.regenerate_models()
print(oil['pore.viscosity'])
"""
Explanation: And moreover, if we change the temperature the model will update the viscosity values:
End of explanation
"""
print(oil.models)
"""
Explanation: Note the call to regenerate_models, which is necessary to actually re-run the model using the new temperature.
When a pore-scale model is added to an object, it is stored under the models attribute, which is a dictionary with names corresponding to the property that is being calculated (i.e. 'pore.viscosity'):
End of explanation
"""
oil.models['pore.viscosity']['a'] = [1200, 10, -0.02]
oil.regenerate_models()
print(oil['pore.viscosity'])
"""
Explanation: We can reach into this dictionary and alter the parameters of the model if necessary:
End of explanation
"""
|
chinapnr/python_study
|
Python 基础课程/Python Basic Lesson 14 - 访问网络.ipynb
|
gpl-3.0
|
# Get information from a website
import requests
r = requests.get('http://www.huifu.com')
print(r.content)
print(r.headers)
"""
Explanation: Lesson 14: Introduction to network access and the requests package
v1.0.0 2016.11 by David.Yi
v1.1 2020.5 2020.6 edit by David Yi
Key points of this lesson
Introduction to the requests package
Accessing web pages
Calling APIs
Something to think about: what do you need to consider when writing data-synchronization software?
The requests package
The requests package is currently the most user-friendly Python package for accessing web content; its human-centred design greatly reduces code complexity.
Web access, API calls, network checks, web crawlers and the like all rely on the requests package.
Generally speaking, Python's built-in packages are very good, but there are exceptions, one of which is network access. Rather than Python's native network-access packages being insufficiently user-friendly, it is more that the requests package is exceptionally well designed; combined with Python's own elegance, it played a positive role during Python's rapid growth. After requests, many packages advertised themselves as "user-friendly", and some even abused the term.
You need to install requests before using the requests package.
End of explanation
"""
# Download a file using requests
# Baidu logo file: http://home.baidu.com/resource/r/home/img/logo-yy.gif
import requests
url = 'http://home.baidu.com/resource/r/home/img/logo-yy.gif'
r = requests.get(url)
with open("files/baidu_logo.gif", "wb") as code:
code.write(r.content)
print('download ok')
"""
Explanation: Downloading files
With requests it is very easy to fetch images, files, etc. from a website. The following is just a simple example that downloads Baidu's logo file.
End of explanation
"""
# demo for infection/region
# input region, start_date, then get data
# API: infection / country or region
import requests
# API url
url = 'https://covid-19.adapay.tech/api/v1/'
# token, can call register function get the API token
token = '497115d0c2ff9586bf0fe03088cfdbe2'
# region or country
region='US'
# headers, need the API token
headers = {
'token': token
}
# the params
payload = {
'region': region,
'start_date':'2020-06-04'
}
# call requets to load
r = requests.get(url+'infection/region', params=payload, headers=headers)
data = r.json()
print(data)
print(type(data))
# Get the value for a given key; the response is a nested dict, so it can be accessed level by level
print(data['data']['region']['US']['2020-06-04']['confirmed'])
# Simulate a more realistic use case: fetch 10 days of data
# demo for infection/region
# input region, start_date, end_date, then get data
# API: infection / country or region
import requests
# API url
url = 'https://covid-19.adapay.tech/api/v1/'
# token, can call register function get the API token
token = '497115d0c2ff9586bf0fe03088cfdbe2'
# region or country
region='US'
# headers, need the API token
headers = {
'token': token
}
# the params
payload = {
'region': region,
'start_date':'2020-04-24',
'end_date':'2020-05-03'
}
# call requets to load
r = requests.get(url+'infection/region', params=payload, headers=headers)
data = r.json()
print(data)
# extract the part of the dict we need
dict1 = data['data']['region']['US']
print(dict1)
print('---')
# iterate over the dict
list1 = []
list2 = []
for key, value in dict1.items():
print(key,value)
list1.append(value['confirmed'])
list2.append(key[5:10])
print('---')
print(list1)
print('---')
print(list2)
# draw a line chart
import matplotlib.pyplot as plt
plt.plot(list2,list1)
plt.show()
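# A small illustrative wrapper (not part of the original lesson) around the same
# endpoint, token, and response structure used above, with basic error handling.
def fetch_confirmed(region_name, start_date, end_date):
    resp = requests.get(url + 'infection/region',
                        params={'region': region_name,
                                'start_date': start_date,
                                'end_date': end_date},
                        headers=headers,
                        timeout=10)
    resp.raise_for_status()  # raise an exception for HTTP error codes
    return resp.json()['data']['region'][region_name]

print(fetch_confirmed('US', '2020-04-24', '2020-05-03'))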
"""
Explanation: Reading an API
Let's build a demo that reads COVID-19 epidemic data.
What we usually call an API can be understood simply: once you satisfy the required authentication, you pass in parameter values and get back the content you need. The authentication scheme, input parameters, and output parameters are all agreed upon in advance. In current Python API development, things like API documentation, parameter lists, and automated integration testing can all be generated automatically with newer tooling. Building an API in Python is itself quite easy and deserves a separate lesson; by comparison, reading and calling APIs is the more common task.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.20/_downloads/131324ab94fb4e4c09fa41f4692da130/plot_custom_inverse_solver.ipynb
|
bsd-3-clause
|
import numpy as np
from scipy import linalg
import mne
from mne.datasets import sample
from mne.viz import plot_sparse_source_estimates
data_path = sample.data_path()
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
subjects_dir = data_path + '/subjects'
condition = 'Left Auditory'
# Read noise covariance matrix
noise_cov = mne.read_cov(cov_fname)
# Handling average file
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked.crop(tmin=0.04, tmax=0.18)
evoked = evoked.pick_types(eeg=False, meg=True)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
"""
Explanation: Source localization with a custom inverse solver
The objective of this example is to show how to plug a custom inverse solver
in MNE in order to facilitate empirical comparison with the methods MNE already
implements (wMNE, dSPM, sLORETA, eLORETA, LCMV, DICS, (TF-)MxNE etc.).
This script is educational and shall be used for methods
evaluations and new developments. It is not meant to be an example
of good practice to analyse your data.
The example makes use of 2 functions apply_solver and solver
so changes can be limited to the solver function (which only takes three
parameters: the whitened data, the gain matrix and the number of orientations)
in order to try out another inverse algorithm.
End of explanation
"""
def apply_solver(solver, evoked, forward, noise_cov, loose=0.2, depth=0.8):
"""Call a custom solver on evoked data.
This function does all the necessary computation:
- to select the channels in the forward given the available ones in
the data
- to take into account the noise covariance and do the spatial whitening
- to apply loose orientation constraint as MNE solvers
- to apply a weighting of the columns of the forward operator as in the
weighted Minimum Norm formulation in order to limit the problem
of depth bias.
Parameters
----------
solver : callable
The solver takes 3 parameters: data M, gain matrix G, number of
dipoles orientations per location (1 or 3). A solver shall return
2 variables: X which contains the time series of the active dipoles
and an active set which is a boolean mask to specify what dipoles are
present in X.
evoked : instance of mne.Evoked
The evoked data
forward : instance of Forward
The forward solution.
noise_cov : instance of Covariance
The noise covariance.
loose : float in [0, 1] | 'auto'
Value that weights the source variances of the dipole components
that are parallel (tangential) to the cortical surface. If loose
is 0 then the solution is computed with fixed orientation.
If loose is 1, it corresponds to free orientations.
The default value ('auto') is set to 0.2 for surface-oriented source
space and set to 1.0 for volumic or discrete source space.
depth : None | float in [0, 1]
Depth weighting coefficients. If None, no depth weighting is performed.
Returns
-------
stc : instance of SourceEstimate
The source estimates.
"""
# Import the necessary private functions
from mne.inverse_sparse.mxne_inverse import \
(_prepare_gain, is_fixed_orient,
_reapply_source_weighting, _make_sparse_stc)
all_ch_names = evoked.ch_names
# Handle depth weighting and whitening (no weights are used here)
forward, gain, gain_info, whitener, source_weighting, mask = _prepare_gain(
forward, evoked.info, noise_cov, pca=False, depth=depth,
loose=loose, weights=None, weights_min=None, rank=None)
# Select channels of interest
sel = [all_ch_names.index(name) for name in gain_info['ch_names']]
M = evoked.data[sel]
# Whiten data
M = np.dot(whitener, M)
n_orient = 1 if is_fixed_orient(forward) else 3
X, active_set = solver(M, gain, n_orient)
X = _reapply_source_weighting(X, source_weighting, active_set)
stc = _make_sparse_stc(X, active_set, forward, tmin=evoked.times[0],
tstep=1. / evoked.info['sfreq'])
return stc
"""
Explanation: Auxiliary function to run the solver
End of explanation
"""
def solver(M, G, n_orient):
"""Run L2 penalized regression and keep 10 strongest locations.
Parameters
----------
M : array, shape (n_channels, n_times)
The whitened data.
G : array, shape (n_channels, n_dipoles)
The gain matrix a.k.a. the forward operator. The number of locations
is n_dipoles / n_orient. n_orient will be 1 for a fixed orientation
constraint or 3 when using a free orientation model.
n_orient : int
Can be 1 or 3 depending if one works with fixed or free orientations.
If n_orient is 3, then ``G[:, 2::3]`` corresponds to the dipoles that
are normal to the cortex.
Returns
-------
X : array, (n_active_dipoles, n_times)
The time series of the dipoles in the active set.
active_set : array (n_dipoles)
Array of bool. Entry j is True if dipole j is in the active set.
We have ``X_full[active_set] == X`` where X_full is the full X matrix
such that ``M = G X_full``.
"""
inner = np.dot(G, G.T)
trace = np.trace(inner)
K = linalg.solve(inner + 4e-6 * trace * np.eye(G.shape[0]), G).T
K /= np.linalg.norm(K, axis=1)[:, None]
X = np.dot(K, M)
indices = np.argsort(np.sum(X ** 2, axis=1))[-10:]
active_set = np.zeros(G.shape[1], dtype=bool)
for idx in indices:
idx -= idx % n_orient
active_set[idx:idx + n_orient] = True
X = X[active_set]
return X, active_set
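# A quick synthetic sanity check (not from the original example) using random data
# of plausible shape, just to illustrate the solver's interface.
rng = np.random.RandomState(0)
M_demo = rng.randn(10, 5)    # 10 channels, 5 time points
G_demo = rng.randn(10, 30)   # 30 dipoles (10 locations x 3 free orientations)
X_demo, active_demo = solver(M_demo, G_demo, n_orient=3)
print(X_demo.shape)          # one row per active dipole, one column per time point
print(active_demo.sum())     # number of dipoles kept in the active set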
"""
Explanation: Define your solver
End of explanation
"""
# loose, depth = 0.2, 0.8 # corresponds to loose orientation
loose, depth = 1., 0. # corresponds to free orientation
stc = apply_solver(solver, evoked, forward, noise_cov, loose, depth)
"""
Explanation: Apply your custom solver
End of explanation
"""
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1)
"""
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
End of explanation
"""
|
takahish/deep-learning
|
first-neural-network/Your_first_neural_network.ipynb
|
mit
|
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides.describe()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
data.head()
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
"""
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1.0 / (1.0 + np.exp(-1.0 * x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(error, self.weights_hidden_to_output.T)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
hidden_error_term = hidden_error * hidden_outputs * (1.0 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += X[:, None] * hidden_error_term[None, :]
# Weight step (hidden to output)
delta_weights_h_o += hidden_outputs[:, None] * output_error_term
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
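# Quick sanity check of the loss helper (added for illustration): identical
# predictions give zero error, a constant offset of 1 gives an MSE of 1.
print(MSE(np.array([1., 2., 3.]), np.array([1., 2., 3.])))  # 0.0
print(MSE(np.array([2., 3., 4.]), np.array([1., 2., 3.])))  # 1.0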
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
"""
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
"""
import sys
### Set the hyperparameters here ###
output_nodes = 1
# Grid search
iterations_list = [100, 1000, 10000]
learning_rate_list = [0.01, 0.03, 0.1, 0.3]
hidden_nodes_list = [2, 5, 10, 20]
N_i = train_features.shape[1]
for iterations in iterations_list:
for learning_rate in learning_rate_list:
for hidden_nodes in hidden_nodes_list:
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
sys.stdout.write("iterations: {0}, learning_rate: {1}, hidden_nodes: {2}\n".format(
iterations, learning_rate, hidden_nodes))
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
sys.stdout.write("\n")
import sys
### Set the hyperparameters here ###
iterations = 10000 # Best
learning_rate = 0.3 # Best
hidden_nodes = 20 # Best
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
import sys
### Set the hyperparameters here ###
iterations = 10000 # Best
learning_rate = 0.3 # Best
hidden_nodes = 10 # Best is 20, but validation loss has more variance, so I alter hidden_nodes to 10.
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
|
WNoxchi/Kaukasos
|
quantum/grove_QAOA_overview_maxcut_codealong.ipynb
|
mit
|
import numpy as np
from grove.pyqaoa.maxcut_qaoa import maxcut_qaoa
from functools import reduce
barbell = [(0,1)] # graph is defined by a list of edges. Edge weights are assumed to be 1.0
steps = 1 # evolution path length between the ref and cost hamiltonians
inst = maxcut_qaoa(barbell, steps=steps) # initializing problem instance
"""
Explanation: The Quantum Approximate Optimization Algorithm for MAX-CUT
2018/6/6:7 –– WNixalo. Code along of QAOA_overview_maxcut.ipynb
I have no idea what I'm doing
The following is a step-by-step guide to running QAOA on the MaxCut problem. In the debut paper on QAOA (arXiv: 1411.4028), Farhi, Goldstone, and Gutmann demonstrate that the lowest order approximation of the algorithm produced an approximation ratio of 0.6946 for the MaxCut problem on 3-regular graphs. You can use this notebook to set up an arbitrary graph for MaxCut and solve it using the QAOA algorithm via the Rigetti Forest service.
pyQAOA is a python library that implements the QAOA. It uses the PauliTerm and PauliSum objects from the pyQuil library for expressing the cost and driver Hamiltonians. These operators are used to create a parametric pyQuil program and passed to the variational quantum eigensolver (VQE) in Grove. VQE calls the Rigetti Forest QVM to execute the Quil program that prepares the angle parameterized state. There are multiple ways to construct the MAX-CUT problem for the QAOA library. We include a method that accepts a graph and returns a QAOA instance where the cost and driver Hamiltonians have been constructed. The graph is either an undirected NetworkX graph or a list of tuples where each tuple represents an edge between a pair of nodes.
We start by demonstrating the QAOA algorithm with the simplest instance of MAX-CUT –– partitioning the nodes on a barbell graph. The barbell graph corresponds to a single edge connecting 2 nodes. The solution is a partitioning of the nodes into different sets ${0, 1}$.
End of explanation
"""
cost_list, ref_list = inst.cost_ham, inst.ref_ham
cost_ham = reduce(lambda x,y: x + y, cost_list)
ref_ham = reduce(lambda x,y: x + y, ref_list)
print(cost_ham)
print(ref_ham)
"""
Explanation: The cost and driver Hamiltonians corresponding to the barbell graph are stored in QAOA object fields in the form of lists of PauliSums.
End of explanation
"""
param_prog = inst.get_parameterized_program()
prog = param_prog([1.2, 4.2])
print(prog)
"""
Explanation: The identity term above is not necessary to the computation since global phase rotations on the wavefunction don't change the expectation value. We include it here purely as a demonstration. The cost function printed above is the negative of the traditional Max Cut operator. This is because QAOA is formulated as the maximization of the cost operator but the VQE algorithm in the pyQuil library performs a minimization.
QAOA requires the construction of a state parameterized by β and γ rotation angles:
<img src="https://render.githubusercontent.com/render/math?math=%5Cbegin%7Balign%7D%0A%5Cmid%20%5Cbeta%2C%20%5Cgamma%20%5Crangle%20%3D%20%5Cprod_%7Bp%3D0%7D%5E%7B%5Cmathrm%7Bsteps%7D%7D%5Cleft%28%20U%28%5Chat%7BH%7D_%7B%5Cmathrm%7Bdrive%7D%7D%2C%20%5Cbeta_%7Bp%7D%29U%28%5Chat%7BH%7D_%7B%5Cmathrm%7BMAXCUT%7D%7D%2C%20%5Cgamma_%7Bp%7D%29%20%5Cright%29%5E%7B%5Cmathrm%7Bsteps%7D%7D%20%28%5Cmid%20%2B%5Crangle_%7BN-1%7D%5Cotimes%5Cmid%20%2B%20%5Crangle_%7BN-2%7D...%5Cotimes%5Cmid%20%2B%20%5Crangle_%7B0%7D%29.%0A%5Cend%7Balign%7D&mode=display">
The unitaries $U(\hat{H}_{\mathrm{drive}}, \beta_{p})$ and $U(\hat{H}_{\mathrm{MAXCUT}}, \gamma_{p})$ are exponentiations of the driver and cost Hamiltonians, respectively.
$$ U(\hat{H}_{\mathrm{drive}}, \beta_{p}) = e^{-i \beta_{p} \hat{H}_{\mathrm{drive}}} \qquad U(\hat{H}_{\mathrm{MAXCUT}}, \gamma_{p}) = e^{-i \gamma_{p} \hat{H}_{\mathrm{MAXCUT}}} $$
The QAOA algorithm relies on many constructions of a wavefunction via parameterized Quil and measurements on all qubits to evaluate an expectation value. In order to avoid needless classical computation, QAOA constructs this parametric program once at the beginning of the calculation and then uses this same program object throughout the computation. This is accomplished using the ParametricProgram object in pyQuil that allows us to slot in a symbolic value for a parameterized gate.
The parameterized program object can be accessed through the QAOA method get_parameterized_program(). Calling this on an instantiated QAOA object returns a closure with a precomputed set of Quil Programs (wtf does that mean). Calling this closure with the parameters β and γ returns the circuit that has parameterized rotations (what).
End of explanation
"""
betas, gammas = inst.get_angles()
print(betas, gammas)
"""
Explanation: The above printout is a Quil program that can be executed on a QVM. QAOA has 2 methods of operation:
1. pre-computing the angles of rotation classically and using the quantum computer to measure expectation values through repeated experiments and,
2. installing a classical optimization loop on top of step 1 to optimally determine the angles.
Mode 2 is known as the Variational Quantum Eigensolver Algorithm. The QAOA object wraps the instantiation of the VQE algorithm with a get_angles() method.
End of explanation
"""
param_prog = inst.get_parameterized_program()
t = np.hstack((betas, gammas))
prog = param_prog(t)
wf = inst.qvm.wavefunction(prog)
wf = wf.amplitudes
for i in range(2**inst.n_qubits):
print(inst.states[i], np.conj(wf[i])*wf[i])
"""
Explanation: get_angles() returns optimal β and γ angles. To view the probabilities of the state, you can call QAOA.probabilities(t) where t is a concatenation of β and γ, in that order. probabilities(t) takes β & γ, reconstructs the wave function, and returns the coefficients of each basis state. A modified version can be used to print off the probabilities:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
from grove.pyqaoa.qaoa import QAOA
import pyquil.quil as pq
from pyquil.paulis import PauliSum, PauliTerm
from pyquil.gates import H
from pyquil.api import QVMConnection
# wonder why they call it "CXN". pyQuil docs called it "quantum_simulator"
CXN = QVMConnection() # heh, CXN --> "connection"?
# define 6-qubit ring
ring_size = 6
graph = nx.Graph()
for i in range(ring_size):
graph.add_edge(i, (i + 1) % ring_size)
nx.draw_circular(graph, node_color="#6CAFB7")
"""
Explanation: As expected the bipartitioning of a graph with a single edge connecting 2 nodes corresponds to the state ${ \rvert 01 \rangle, \rvert 10 \rangle }$
oh... cool it actually does. Great, so far so good.
In this trivial example the QAOA finds angles that construct a distribution peaked around the 2 degenerate solutions.
MAXCUT on larger graphs and alternative optimizers
Larger graph instances and different classical optimizers can be used with the QAOA. Here we consider a 6-node ring of disagrees (eh?). For an even-numbered ring graph, the ring of disagrees corresponds to the antiferromagnet ground state –– i.e. alternating spin-up spin-down.
do we have to analogize everything to a physical QM phenom or is that just narrative-momentum?
End of explanation
"""
cost_operators = []
driver_operators = []
for i,j in graph.edges():
cost_operators.append(PauliTerm("Z", i, 0.5) *
PauliTerm("Z", j) +
PauliTerm("I", 0, -0.5))
for i in graph.nodes():
driver_operators.append(PauliSum([PauliTerm("X", i, 1.0)]))
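# Inspect what was just built (simple print, added for illustration): each edge
# contributes a PauliSum like 0.5*Z_i*Z_j - 0.5*I, and each node contributes a
# single X term for the driver.
print(cost_operators[0])
print(driver_operators[0])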
"""
Explanation: This graph could be passed to the maxcut_qaoa method, and a QAOA instance with the correct driver & cost Hamiltonian could be generated as before. In order to demonstrate the more general approach, along with some VQE options, we'll construct the cost and driver Hamiltonians directly with PauliSum and PauliTerm objects. To do this we parse the edges and nodes of the graph to construct the relevant operators:
<img src="https://render.githubusercontent.com/render/math?math=%5Cbegin%7Balign%7D%0A%5Chat%7BH%7D_%7B%5Cmathrm%7Bcost%7D%7D%20%3D%20%5Csum_%7B%5Clangle%20i%2C%20j%5Crangle%20%5Cin%20E%7D%5Cfrac%7B%5Csigma_%7Bi%7D%5E%7Bz%7D%5Csigma_%7Bj%7D%5E%7Bz%7D%20-%201%7D%7B2%7D%20%5C%5C%0A%5Chat%7BH%7D_%7B%5Cmathrm%7Bdrive%7D%7D%20%3D%20%5Csum_%7Bi%7D%5E%7Bn%7D-%5Csigma_%7Bi%7D%5E%7Bx%7D%0A%5Cend%7Balign%7D&mode=display">
where $\langle i, j \rangle \in E$ refers to the pairs of nodes that form the edges of the graph.
End of explanation
"""
prog = pq.Program()
for i in graph.nodes():
prog.inst(H(i))
"""
Explanation: We'll also construct the initial state and pass this to the QAOA object. By default, QAOA uses the $\rvert + \rangle$ tensor product state. In other notebooks we'll demonstrate that you can use the driver_ref optional argument to pass a different starting state for QAOA.
End of explanation
"""
ring_cut_inst = QAOA(CXN, len(graph.nodes()), steps=1, ref_hamiltonian=driver_operators,
cost_ham=cost_operators, driver_ref=prog, store_basis=True,
rand_seed=42)
betas, gammas = ring_cut_inst.get_angles()
"""
Explanation: We're now ready to instantiate the QAOA object! 🎉
End of explanation
"""
from collections import Counter
# get the parameterized program
param_prog = ring_cut_inst.get_parameterized_program()
sampling_prog = param_prog(np.hstack((betas, gammas)))
# use the run_and_measure QVM API to prepare a circuit and then measure on the qubits
bitstring_samples = CXN.run_and_measure(quil_program=sampling_prog, qubits=range(len(graph.nodes())), trials=1000)
bitstring_tuples = map(tuple, bitstring_samples)
# aggregate the statistics
freq = Counter(bitstring_tuples)
most_frequent_bit_string = max(freq, key=lambda x: freq[x])
print(freq) ##for f in freq.items(): (print(f"{f[0]}, {f[1]}"))
print(f"The most frequently sampled string is {most_frequent_bit_string}")
"""
Explanation: We're interested in the bit strings returned from the QAOA algorithm. The get_angles() routine calls the VQE algorithm to find the best angles. We can then manually query the bit strings by rerunning the program and sampling many outputs.
End of explanation
"""
# plot strings!
n_qubits = len(graph.nodes())
def plot(inst, probs):
probs = probs.real
states = inst.states
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel("state",fontsize=20)
ax.set_ylabel("Probability",fontsize=20)
ax.set_xlim([0, 2**n_qubits])
rec = ax.bar(range(2**n_qubits), probs[:,0],)
num_states = [0,
int("".join(str(x) for x in [0,1] * (n_qubits//2)), 2),
int("".join(str(x) for x in [1,0] * (n_qubits//2)), 2),
2**n_qubits - 1]
ax.set_xticks(num_states)
ax.set_xticklabels(map(lambda x: inst.states[x], num_states), rotation=90)
plt.grid(True)
plt.tight_layout()
plt.show()
t = np.hstack((betas, gammas))
probs = ring_cut_inst.probabilities(t)
plot(ring_cut_inst, probs)
"""
Explanation: We can see that the first 2 most frequently sampled strings are the alternating solutions to the ring graph (well damn, they are). Since we have access to the wave function, we can go one step further and view the probability distribution over the bit strings produced by our $p = 1$ circuit.
End of explanation
"""
# get the angles from the last run
beta = ring_cut_inst.betas
gamma = ring_cut_inst.gammas
# form new beta/gamma angles from the old angles
betas = np.hstack((beta[0]/3, beta[0]*2/3))
gammas = np.hstack((gamma[0]/3, gamma[0]*2/3))
# set up a new QAOA instance
ring_cut_inst_2 = QAOA(CXN, len(graph.nodes()), steps=2,
ref_hamiltonian=driver_operators, cost_ham=cost_operators,
driver_ref=prog, store_basis=True,
init_betas=betas, init_gammas=gammas)
# run VQE to determine the optimal angles
betas, gammas = ring_cut_inst_2.get_angles()
t = np.hstack((betas, gammas))
probs = ring_cut_inst_2.probabilities(t)
plot(ring_cut_inst_2, probs)
"""
Explanation: For larger graphs the probability of sampling the correct string could be significantly smaller, though still peaked around the solution. Therefore we'd want to increase the probability of sampling the solution relative to any other string. To do this we simply increase the number of steps $p$ in the algorithm. We might want to bootstrap the algorithm with angles from a lower number of steps. We can pass initial angles to the solver as optional arguments:
End of explanation
"""
from scipy.optimize import fmin_bfgs
ring_cut_inst_3 = QAOA(CXN, len(graph.nodes()), steps=3,
ref_hamiltonian=driver_operators, cost_ham=cost_operators,
driver_ref=prog, store_basis=True,
minimizer=fmin_bfgs, minimizer_kwargs={'gtol':1.0e-3},
rand_seed=42)
betas,gammas = ring_cut_inst_3.get_angles()
t = np.hstack((betas, gammas))
probs = ring_cut_inst_3.probabilities(t)
plot(ring_cut_inst_3, probs)
"""
Explanation: We could also change the optimizer passed down to VQE via the QAOA interface. Let's say we want to use BFGS or another optimizer that can be wrapped in python. Simply pass it to QAOA via the minimizer, minimizer_args, and minimizer_kwargs keywords:
End of explanation
"""
|
infilect/ml-course1
|
week3/word2vec/notebook/Skip-Grams-Solution.ipynb
|
mit
|
import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
"""
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
"""
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
"""
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
"""
Explanation: And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
"""
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
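# Quick numeric illustration (added for clarity): with t = 1e-5, a very frequent
# word with f(w) = 0.01 is dropped with probability 1 - sqrt(1e-5/0.01) ~= 0.97,
# while a word with f(w) <= 1e-5 is never dropped.
print(1 - np.sqrt(threshold / 0.01))
print("Words kept after subsampling: {}".format(len(train_words)))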
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
"""
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
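# Small usage check (added for illustration): grab the targets around index 5 in a
# toy sequence; the exact output varies because the window radius R is random.
print(get_target(list(range(10)), idx=5, window_size=3))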
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
"""
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
"""
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
End of explanation
"""
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
"""
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
"""
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs)
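# Tiny numpy illustration (added for clarity) of why the lookup works: multiplying
# a one-hot row vector by a weight matrix simply selects the corresponding row.
demo_w = np.arange(12).reshape(4, 3)   # pretend embedding matrix: 4 words x 3 features
one_hot = np.array([0, 0, 1, 0])       # one-hot encoding of word index 2
print(one_hot @ demo_w)                # same result as...
print(demo_w[2])                       # ...a direct row lookup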
"""
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
"""
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed,
n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
"""
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
"""
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
"""
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
"""
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
"""
Explanation: Restore the trained network if you need to:
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
magenta/ddsp
|
ddsp/colab/tutorials/4_core_functions.ipynb
|
apache-2.0
|
# Copyright 2021 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: <a href="https://colab.research.google.com/github/magenta/ddsp/blob/main/ddsp/colab/tutorials/4_core_functions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2021 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
#@title Install and import dependencies
%tensorflow_version 2.x
!pip install -qU ddsp
# Ignore a bunch of deprecation warnings
import warnings
warnings.filterwarnings("ignore")
import ddsp
import ddsp.training
from ddsp.colab.colab_utils import (play, specplot, transfer_function,
plot_impulse_responses, DEFAULT_SAMPLE_RATE)
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
sample_rate = DEFAULT_SAMPLE_RATE # 16000
"""
Explanation: DDSP Core Functions
This notebook provides some simple demonstrations of using DDSP primitives for synthesis, filtering, and interpolation. Keep in mind that all of these components are fully differentiable and can be integrated with neural networks and trained end-to-end.
While the preferred API is to use the Synthesizer and Effect Processors that are built around these central components, it is of course possible to call the core functions directly as well.
End of explanation
"""
n_samples = int(sample_rate * 4.0)
n_components = 3
# Amplitudes [n_batch, n_samples, n_components].
# Linearly decay in time.
amps = np.linspace(0.3, 0.0, n_samples)
amps = np.tile(amps[np.newaxis, :, np.newaxis], [1, 1, n_components])
# Frequencies in Hz [n_batch, n_samples, n_components].
frequencies = np.ones([1, n_samples, 1]) * np.array([[[220, 440, 660]]])
# Synthesize.
audio = ddsp.core.oscillator_bank(frequencies, amps, sample_rate)
# Listen.
play(audio)
specplot(audio)
"""
Explanation: Generation
oscillator_bank()
Synthesize audio with an array of sinusoidal oscillators. Frequencies and amplitudes must be provided at audio rate.
Ex: Simple harmonic sound
End of explanation
"""
n_samples = int(sample_rate * 4.0)
n_components = 6
n_frames = 100
# Amplitudes [n_batch, n_samples, n_components].
# Linearly decay in time.
amps = np.linspace(0.3, 0.0, n_samples)
amps = np.tile(amps[np.newaxis, :, np.newaxis], [1, 1, n_components])
# Frequencies in Hz [n_batch, n_samples, n_components].
frequencies = []
for _ in range(n_components):
f_start = np.random.uniform(20, 4000)
f_end = np.random.uniform(20, 4000)
frequencies.append(np.linspace(f_start, f_end, n_frames))
frequencies = np.stack(frequencies).T[np.newaxis, ...]
frequencies = ddsp.core.resample(frequencies, n_samples)
# Synthesize.
audio = ddsp.core.oscillator_bank(frequencies, amps, sample_rate)
audio /= np.abs(audio).max()
# Listen.
play(audio)
specplot(audio)
"""
Explanation: Ex: Random frequencies
End of explanation
"""
def smooth(x, window_size=2000):
"""Smooth signal with box filter. For random frequency modulation."""
window = np.ones(window_size) / window_size
return np.convolve(window, x, mode='same')
n_samples = int(sample_rate * 6.0)
n_components = 100
# Time points for the frequency ramp.
n_start = int(sample_rate * 1.5)
n_stop = int(sample_rate * 4.0)
n_ramp = n_stop - n_start
n_level = n_samples - n_stop
# Amplitudes [n_batch, n_samples, n_components].
# Decrease amplitude for higher components.
amps = np.ones([1, n_samples, 1])
amps = amps * np.logspace(0, -2, n_components)[np.newaxis, np.newaxis, :]
# Fade in at the start, out at end.
amps[:, :n_start, :] *= np.logspace(-2, 0, n_start)[np.newaxis, :, np.newaxis]
amps[:, -2000:, :] *= np.logspace(0, -2, 2000)[np.newaxis, :, np.newaxis]
# Frequencies in Hz [n_batch, n_samples, n_components].
# Sweep frequencies from random initial frequenices to fixed final frequencies.
freq_initial = np.random.uniform(low=240.0, high=280.0, size=10)
harmonics = np.arange(1, 11)
f0 = np.array([0.5, 1, 1, 2.5, 3, 3.5, 4, 4.5, 5, 5.25])
freq_final = 150 * f0
# Treat each frequency sweep separately.
frequencies = []
for i, f in zip(freq_initial, freq_final):
# Sweep the frequency.
freq = np.concatenate(
[i * np.ones(n_start), np.linspace(i, f, n_ramp), f * np.ones(n_level),])
# Modulate the frequency.
d_freq = smooth(np.concatenate([
np.random.uniform(low=0.1, high=1.9, size=n_start),
np.random.uniform(low=0.5, high=1.5, size=n_ramp + n_level),
]))
freq *= d_freq
# Add harmonics for each fundamental.
frequencies.append([freq * h for h in harmonics])
# Rearrange to [n_batch, n_samples, n_components].
frequencies = np.transpose(np.stack(frequencies), (2, 1, 0))
frequencies = np.reshape(frequencies, [1, n_samples, -1])
# Synthesize.
audio = ddsp.core.oscillator_bank(frequencies, amps, sample_rate)
# Listen.
audio /= np.abs(audio).max()
play(audio)
specplot(audio)
"""
Explanation: Ex: Swarm of sinusoids
Just for fun...
End of explanation
"""
n_samples = int(sample_rate * 1.0)
n_wavetable = 2048
n_cycles = 440
# Sin wave
wavetable = tf.sin(tf.linspace(0.0, 2.0 * np.pi, n_wavetable))
wavetable = wavetable[tf.newaxis, tf.newaxis, :]
phase = tf.linspace(0.0, n_cycles, n_samples) % 1.0
phase = phase[tf.newaxis, :, tf.newaxis]
output = ddsp.core.linear_lookup(phase, wavetable)
target = np.sin(np.linspace(0.0, 2.0 * np.pi * n_cycles, n_samples))
# For plotting.
output = output[0]
phase = phase[0, :, 0]
# Plot the results
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.plot(wavetable[0, 0, :])
plt.title('Wavetable')
plt.subplot(122)
plt.plot(output[:200], label='Output')
plt.plot(phase[:200], label='Oscillator Phase')
plt.plot(target[:200], label='Target')
plt.title('Wavetable lookup')
plt.legend(loc='lower left')
print('Target')
play(target)
print('Output')
play(output)
"""
Explanation: linear_lookup()
Look up values from a (possibly time-varying) wavetable by linear interpolation of a phase signal in [0, 1].
Ex: Sinusoidal lookup
As a simple example, looking up from a sine-wave wavetable produces a sine wave at the lookup frequency.
End of explanation
"""
plt.plot(target[:200] - output[:200])
"""
Explanation: There are small artifacts due to the linear interpolation and implicit resampling of the signal.
End of explanation
"""
modulation = tf.linspace(0.0, 0.5, n_samples)
modulation = modulation[tf.newaxis, :, tf.newaxis]
phase2 = (tf.sin(np.pi * phase[tf.newaxis, :, tf.newaxis]) + modulation)**2.0 % 1.0
output2 = ddsp.core.linear_lookup(phase2, wavetable)
# For plotting
output2 = output2[0]
phase2 = phase2[0, :, 0]
# Plot the results
plt.figure(figsize=(6, 6))
plt.plot(output2[:200], label='Output')
plt.plot(phase2[:200], label='Oscillator Phase')
plt.title('Wavetable lookup')
plt.ylim(-1.5, 1.5)
plt.legend(loc='lower left')
print('Output')
play(output2)
"""
Explanation: You can also use any arbitrary waveform as the lookup signal to get more interesting outputs
End of explanation
"""
n_secs = 3
n_samples = int(sample_rate * n_secs)
n_wavetable = 2048
n_cycles = 110 * n_secs
phase = tf.linspace(0.0, n_cycles, n_samples) % 1.0
phase = phase[tf.newaxis, :, tf.newaxis]
# Sin wave
wavetable_sin = tf.sin(tf.linspace(0.0, 2.0 * np.pi, n_wavetable))
wavetable_sin = wavetable_sin[tf.newaxis, tf.newaxis, :]
# Square wave
wavetable_square = tf.cast(wavetable_sin > 0.0, tf.float32) * 2.0 - 1.0
# Combine them
wavetables = tf.concat([wavetable_sin, wavetable_square], axis=1)
wavetables = ddsp.core.resample(wavetables, n_samples)
wavetables *= 0.5
output_multiwave = ddsp.core.linear_lookup(phase, wavetables)
# For plotting
wavetables = wavetables[0]
output_multiwave = output_multiwave[0]
phase = phase[0, :, 0]
# Plot the results
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.plot(wavetables[0, :])
plt.plot(wavetables[16000, :])
plt.plot(wavetables[32000, :])
plt.title('Wavetable')
plt.subplot(122)
plt.plot(output_multiwave[:200], label='Output')
plt.plot(phase[:200], label='Oscillator Phase')
plt.title('Wavetable lookup')
plt.legend(loc='lower left')
print('Output')
play(output_multiwave)
"""
Explanation: Ex: Wavetable Synthesis
We can also use this linear lookup to build a wavetable synthesizer. Here, we pass in a series of wavetables (one for each timestep) and look up from the changing wavetables over time
End of explanation
"""
n_frames = 100
frequencies = 110 * tf.linspace(1.5, 1, n_frames)[tf.newaxis, :, tf.newaxis]
amplitudes = 0.5 * tf.linspace(0.7, 0.001, n_frames)[tf.newaxis, :, tf.newaxis]
n_secs = 3
n_samples = int(sample_rate * n_secs)
n_wavetable = 2048
# Sin wave
wavetable_sin = tf.sin(tf.linspace(0.0, 2.0 * np.pi, n_wavetable))
wavetable_sin = wavetable_sin[tf.newaxis, tf.newaxis, :]
# Square wave
wavetable_square = tf.cast(wavetable_sin > 0.0, tf.float32) * 2.0 - 1.0
# Combine them
wavetables = tf.concat([wavetable_sin, wavetable_square, wavetable_sin], axis=1)
wavetables = ddsp.core.resample(wavetables, n_samples)
output_multiwave = ddsp.core.wavetable_synthesis(frequencies,
amplitudes,
wavetables,
n_samples=n_samples,
sample_rate=sample_rate)
# For plotting
wavetables = wavetables[0]
output_multiwave = output_multiwave[0]
# Plot the results
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.plot(wavetables[0, :])
plt.plot(wavetables[16000, :])
plt.plot(wavetables[32000, :])
plt.title('Wavetable')
print('Output')
play(output_multiwave)
"""
Explanation: wavetable_synthesis()
We also have a convenience function to make wavetable synthesis easier. wavetable_synthesis() takes a frame-based frequency and amplitude of the oscillator.
End of explanation
"""
# Get a single example from NSynth.
# Takes a few seconds to load from GCS.
data_provider = ddsp.training.data.NSynthTfds(split='train')
batch = data_provider.get_batch(batch_size=1, shuffle=False).skip(1)
audio = next(iter(tfds.as_numpy(batch)))['audio']
specplot(audio)
play(audio)
n_samples = audio.shape[1]
n_seconds = n_samples / sample_rate
def sin_phase(mod_rate):
phase = tf.sin(tf.linspace(0.0, mod_rate * n_seconds * 2.0 * np.pi, n_samples))
phase = (phase[tf.newaxis, :, tf.newaxis] + 1.0) / 2.0 # Scale to [0, 1.0]
return phase
"""
Explanation: variable_length_delay()
If we instead treat a moving window of the signal as a "wavetable", we can implement a variable time delay in a single forward pass using linear_lookup(). Variable time delays are the key component of time-modulation effects such as vibrato, chorus, and flanging.
End of explanation
"""
mod_rate = 0.25 # Hz
mod_ms = 1.5
center_ms = 0.0
delay_ms = mod_ms + center_ms
max_length = int(sample_rate / 1000.0 * delay_ms)
phase = sin_phase(mod_rate) * (mod_ms / delay_ms) + (center_ms / delay_ms)
audio_wet = ddsp.core.variable_length_delay(phase,
audio,
max_length=max_length)
audio_out = 0.5 * (audio + audio_wet)
# Listen.
play(audio_out)
specplot(audio_out)
"""
Explanation: Ex. Flanger
End of explanation
"""
mod_rate = 2.0 # Hz
mod_ms = 1.0
center_ms = 25.0
delay_ms = mod_ms + center_ms
max_length = int(sample_rate / 1000.0 * delay_ms)
phase = sin_phase(mod_rate) * (mod_ms / delay_ms) + (center_ms / delay_ms)
audio_wet = ddsp.core.variable_length_delay(phase,
audio,
max_length=max_length)
audio_out = 0.5 * (audio + audio_wet)
# Listen.
play(audio_out)
specplot(audio_out)
"""
Explanation: Ex. Chorus
End of explanation
"""
mod_rate = 1.0 # Hz
mod_ms = 20.0
center_ms = 00.0
delay_ms = mod_ms + center_ms
max_length = int(sample_rate / 1000.0 * delay_ms)
phase = sin_phase(mod_rate) * (mod_ms / delay_ms) + (center_ms / delay_ms)
audio_wet = ddsp.core.variable_length_delay(phase,
audio,
max_length=max_length)
audio_out = audio_wet
# Listen.
play(audio_out)
specplot(audio_out)
"""
Explanation: Ex. Vibrato
End of explanation
"""
## Low-pass sweep (normalized cutoff frequency from 0 to 1).
noise = np.random.uniform(-0.5, 0.5, [1, sample_rate * 4])
f_cutoff = np.linspace(0., 1.0, 200)[np.newaxis, :, np.newaxis]
ir = ddsp.core.sinc_impulse_response(f_cutoff, 2048)
filtered = ddsp.core.fft_convolve(noise, ir)
specplot(noise)
specplot(filtered)
play(noise)
play(filtered)
"""
Explanation: Filtering
Time-varying differentiable linear filters (parameterized in frequency space). Impulse responses are designed by sinc_impulse_response() and frequency_impulse_response() and then applied by fft_convolve().
sinc_filter() and frequency_filter() are thin wrappers around filter design and fft_convolve().
fft_convolve()
Time-varying filter. Given audio [batch, n_samples], and a series of impulse responses [batch, n_frames, n_impulse_response], splits the audio into frames, applies filters, and then overlap-and-adds audio back together.
Ex: Low-pass sweep
End of explanation
"""
# Brick-wall filter
f_cutoff = 4000
window_size = 2000
# True filter.
impulse_response = ddsp.core.sinc_impulse_response(f_cutoff,
window_size,
sample_rate)
# Ideal brick-wall filter
half_nyquist = int(window_size / 2)
desired_magnitudes = np.concatenate([np.ones([half_nyquist]),
np.zeros([half_nyquist]) + 1e-6], axis=0)
plot_impulse_responses(impulse_response, desired_magnitudes)
## Normalized frequency [0, 1] works as well, without needing sample_rate.
f_cutoff = 0.5
# True filter.
impulse_response = ddsp.core.sinc_impulse_response(f_cutoff, window_size)
plot_impulse_responses(impulse_response, desired_magnitudes)
# Changing window size changes the time-frequency characteristics.
impulse_response = ddsp.core.sinc_impulse_response(f_cutoff, window_size=250)
plot_impulse_responses(impulse_response, desired_magnitudes)
"""
Explanation: sinc_impulse_response()
Simple FIR low-pass filter design using sinc functions.
Ex: Brick-wall filter
End of explanation
"""
original_sample_rate = 10000
n_samples = sample_rate + 1
# Let's start with a sawtooth wave (a linear ramp that repeats 100 times).
time = tf.linspace(0.0, 1.0, n_samples)
signal = (tf.linspace(0.0, 100.0, n_samples) % 1.0) - 0.5
# Look at FFT of signal.
frequencies, magnitudes = transfer_function(signal[tf.newaxis, tf.newaxis, :],
sample_rate=original_sample_rate)
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.plot(time[:200], signal[:200])
plt.title('Amplitude (time)')
plt.subplot(122)
plt.semilogy(frequencies, magnitudes[0, 0, :])
plt.title('Magnitude (frequency)')
print('Original')
play(signal, sample_rate=original_sample_rate)
"""
Explanation: sinc_filter()
Thin wrapper around sinc_impulse_response() and fft_convolve. Filter audio with a low-pass filter.
Ex: Bandlimited Upsampling
Let's start with a triangle wave at 100 Hz, sampled at 10kHz.
End of explanation
"""
upsample = 2
upsample_rate = int(original_sample_rate * upsample)
n_upsample = int(n_samples * upsample)
time_up = tf.linspace(0.0, 1.0, n_upsample)
# Box upsampling
signal_up = tf.compat.v1.image.resize_nearest_neighbor(
signal[tf.newaxis, :, tf.newaxis, tf.newaxis], [n_upsample, 1]
)[0, :, 0, 0]
frequencies_up, magnitudes_up = transfer_function(signal_up[tf.newaxis, tf.newaxis, :],
sample_rate=upsample_rate)
# Bilinear upsampling
signal_up_bl = ddsp.core.resample(signal[tf.newaxis, :, tf.newaxis], n_upsample)[0, :, 0]
frequencies_up_bl, magnitudes_up_bl = transfer_function(signal_up_bl[tf.newaxis, tf.newaxis, :],
sample_rate=upsample_rate)
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.semilogy(frequencies_up, magnitudes_up[0, 0, :], label='box upsample')
plt.semilogy(frequencies_up_bl, magnitudes_up_bl[0, 0, :], label='bilinear upsample')
plt.semilogy(frequencies, magnitudes[0, 0, :], label='original')
plt.ylim(1e-3, 1e4)
plt.title('Magnitude (frequency)')
plt.legend()
print('Box upsample')
play(signal_up, sample_rate=upsample_rate)
print('Bilinear upsample')
play(signal_up_bl, sample_rate=upsample_rate)
print('Original')
play(signal, sample_rate=original_sample_rate)
"""
Explanation: If we naively double the sample rate to 20kHz, we introduce upsampling artifacts.
End of explanation
"""
n_frequencies = 1024
half_nyquist = int(n_frequencies / 2)
# Cutoff frequency (normalized), [n_batch, n_frames, 1].
cutoff_frequency = tf.ones([1, 1, 1]) * 0.5
signal_filt = ddsp.core.sinc_filter(signal_up[tf.newaxis, :],
cutoff_frequency,
window_size=1024)[0]
frequencies_filt, magnitudes_filt = transfer_function(signal_filt[tf.newaxis, tf.newaxis, :],
sample_rate=upsample_rate)
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.semilogy(frequencies_up, magnitudes_up[0, 0, :], label='box upsample')
plt.semilogy(frequencies_filt, magnitudes_filt[0, 0, :], label='anti-aliased')
plt.ylim(1e-3, 1e4)
plt.title('Magnitude (frequency)')
plt.legend()
print('Box upsample')
play(signal_up, sample_rate=upsample_rate)
print('Anti-aliased')
play(signal_filt, sample_rate=upsample_rate)
print('Original')
play(signal, sample_rate=original_sample_rate)
"""
Explanation: By applying a brick-wall low-pass filter as above, we can remove aliasing artifacts.
End of explanation
"""
# Brick-wall filter
n_frequencies = 512
half_nyquist = int(n_frequencies / 2)
# Bandpass filters, [n_batch, n_frames, n_frequencies].
magnitudes = (tf.linspace(1.0, 0.001, n_frequencies) +
0.1 * tf.sin(tf.linspace(0.0, 2.0 * np.pi * 8, n_frequencies)))
magnitudes = magnitudes[tf.newaxis, tf.newaxis, :]
desired_magnitudes = magnitudes[0, 0, :]
# Designed filter.
impulse_response = ddsp.core.frequency_impulse_response(magnitudes, window_size=0)
plot_impulse_responses(impulse_response, desired_magnitudes)
# Changing window size changes the time-frequency characteristics.
impulse_response = ddsp.core.frequency_impulse_response(magnitudes,
window_size=80)
plot_impulse_responses(impulse_response, desired_magnitudes)
"""
Explanation: frequency_impulse_response()
FIR filter design method used by ddsp.frequency_filter(). Uses the frequency sampling method of filter design as described here.
Ex: Arbitrary filter design
End of explanation
"""
n_samples = int(sample_rate * 4.0)
n_frequencies = 1000
# White noise.
audio_in = tf.random.uniform([1, n_samples], -0.5, 0.5)
# Bandpass filters, [n_batch, n_frames, n_frequencies].
magnitudes = tf.sin(tf.linspace(0.0, 10.0, n_frequencies))**4.0
magnitudes = magnitudes[tf.newaxis, tf.newaxis, :]
# Filter.
audio_out = ddsp.core.frequency_filter(audio_in, magnitudes)
# Listen.
print('Original')
play(audio_in)
specplot(audio_in)
print('Filtered')
play(audio_out)
specplot(audio_out)
"""
Explanation: frequency_filter()
Thin wrapper around frequency_impulse_response() and fft_convolve. Filter audio with a finite impulse response linear time-varying filter, designed using the frequency sampling method.
Ex: Arbitrary time-varying filter
Let's try a time-invariant filter. The magnitudes have a single frame and n_frequency bands linearly spaced between 0 and Nyquist.
End of explanation
"""
# Fewer frequencies, less frequency resolution.
n_frequencies = 32
# Bandpass filters, [n_batch, n_frames, n_frequencies].
magnitudes = tf.sin(tf.linspace(0.0, 10.0, n_frequencies))**4.0
magnitudes = magnitudes[tf.newaxis, tf.newaxis, :]
# Filter.
audio_out = ddsp.core.frequency_filter(audio_in, magnitudes, window_size=0)
# Listen.
print('Less frequency resolution')
play(audio_out)
specplot(audio_out)
# Smaller window_size, less frequency resolution (more temporal resolution).
n_frequencies = 1000
# Bandpass filters, [n_batch, n_frames, n_frequencies].
magnitudes = tf.sin(tf.linspace(0.0, 10.0, n_frequencies))**4.0
magnitudes = magnitudes[tf.newaxis, tf.newaxis, :]
# Filter.
audio_out = ddsp.core.frequency_filter(audio_in, magnitudes, window_size=32)
# Listen.
print('Smaller window')
play(audio_out)
specplot(audio_out)
# Now let's try a time-varying filter.
n_frames = 250
n_frequencies = 1000
# Bandpass filters, [n_batch, n_frames, n_frequencies].
magnitudes = [tf.sin(tf.linspace(0.0, w, n_frequencies))**4.0 for w in np.linspace(4.0, 40.0, n_frames)]
magnitudes = tf.stack(magnitudes)
magnitudes = magnitudes[tf.newaxis, :, :]
# Filter.
audio_out = ddsp.core.frequency_filter(audio_in, magnitudes)
# Listen.
print('Time-varying Filter')
play(audio_out)
specplot(audio_out)
# The filter equally spaces the frames in time, so if you don't have enough, you'll hear transitions.
n_frames = 15
# Bandpass filters, [n_batch, n_frames, n_frequencies].
magnitudes = [tf.sin(tf.linspace(0.0, w, n_frequencies))**4.0 for w in np.linspace(4.0, 40.0, n_frames)]
magnitudes = tf.stack(magnitudes)
magnitudes = magnitudes[tf.newaxis, :, :]
# Filter.
audio_out = ddsp.core.frequency_filter(audio_in, magnitudes)
# Listen.
print('Time-varying Filter, Low temporal resolution')
play(audio_out)
specplot(audio_out)
"""
Explanation: ddsp.frequency_filter() uses the frequency sampling method of filter design as described here.
Reducing n_frequencies thus reduces frequency resolution.
window_size crops the impulse responses to also determine the time-frequency tradeoff.
window_size must be > the fft_size which is the power of 2 >= n_frequencies * 2.
Setting window_size < 1 automatically sets it to n_frequencies.
End of explanation
"""
n_coarse = 9
n_fine = 16000
coarse = 1.0 - np.sin(np.linspace(0, np.pi, n_coarse))[np.newaxis, :, np.newaxis]
fine = ddsp.core.resample(coarse, n_fine, add_endpoint=False)
plt.plot(np.linspace(0, n_fine, n_coarse), coarse[0, :, 0], 'o', label='coarse')
plt.plot(np.linspace(0, n_fine, n_fine), fine[0, :, 0], label='fine')
plt.title('Bilinear upsampling ({} points, {} intervals)'.format(n_coarse, n_coarse - 1))
plt.legend(loc='lower right')
_ = plt.ylim(-0.1, 1.1)
"""
Explanation: Resampling
Many functions require controls to be provided at the audio sample rate, but often one will want the network to output controls at a coarser rate.
resample()
Simple bilinear upsampling of control signal based on tf.image.resize().
With add_endpoint=False, uses the last timestep as the endpoint, producing n_frames - 1 segments, each with a length of n_timesteps / (n_frames - 1).
End of explanation
"""
fine = ddsp.core.resample(coarse, n_fine)
n_adjusted = int(n_fine / n_coarse * (n_coarse - 1))
plt.plot(np.linspace(0, n_adjusted, n_coarse), coarse[0, :, 0], 'o', label='coarse')
plt.plot(np.linspace(0, n_fine, n_fine), fine[0, :, 0], label='fine')
plt.title('Bilinear upsampling ({} points, {} intervals)'.format(n_coarse, n_coarse))
plt.legend(loc='lower right')
_ = plt.ylim(-0.1, 1.1)
"""
Explanation: With add_endpoint=True, holds the last timestep for an additional step as the endpoint.
Then, n_timesteps is divided evenly into n_frames segments of size n_timesteps / n_frames. This is the default behavior, as it matches the default behavior of fft_convolve.
End of explanation
"""
fine = ddsp.core.resample(coarse, n_fine, method='cubic', add_endpoint=False)
plt.plot(np.linspace(0, n_fine, n_coarse), coarse[0, :, 0], 'o', label='coarse')
plt.plot(np.linspace(0, n_fine, n_fine), fine[0, :, 0], label='fine')
plt.title('Bicubic upsampling ({} points, {} intervals)'.format(n_coarse, n_coarse - 1))
plt.legend(loc='lower right')
_ = plt.ylim(-0.1, 1.1)
"""
Explanation: You can also do cubic interpolation
End of explanation
"""
n_coarse = 9
n_fine = 16000
fine = 1.0 - np.sin(np.linspace(0, np.pi, n_fine))[np.newaxis, :, np.newaxis]
coarse = ddsp.core.resample(fine, n_coarse, add_endpoint=False)
plt.plot(np.linspace(0, n_coarse, n_fine), fine[0, :, 0], label='fine')
plt.plot(np.linspace(0, n_coarse, n_coarse), coarse[0, :, 0], 'o', label='coarse')
plt.title('Bilinear downsampling ({} points, {} intervals)'.format(n_coarse, n_coarse - 1))
plt.legend(loc='lower right')
plt.xlim(-0.5, 10.5)
_ = plt.ylim(-0.1, 1.1)
"""
Explanation: Resampling also works for downsampling
End of explanation
"""
n_intervals = (n_fine - 1)
n_forward = int(n_coarse / n_fine * n_intervals)
fine = 1.0 - np.sin(np.linspace(0, np.pi, n_fine))[np.newaxis, :, np.newaxis]
coarse = ddsp.core.resample(fine, n_coarse)
plt.plot(np.linspace(0, n_coarse, n_fine), fine[0, :, 0], label='fine')
plt.plot(np.linspace(0, n_coarse - 1, n_coarse), coarse[0, :, 0], 'o', label='coarse')
plt.title('Bilinear downsampling ({} points, {} intervals)'.format(n_coarse, n_coarse))
plt.legend(loc='lower right')
plt.xlim(-0.5, 10.5)
_ = plt.ylim(-0.1, 1.1)
"""
Explanation: For downsampling, add_endpoint=True interpolates up to an added endpoint, which effectively drops the final point from the downsampled signal. This still results in the same number of points and segments.
End of explanation
"""
n_coarse = 5
n_fine = 16000
coarse = 1.0 - np.sin(np.linspace(0, np.pi, n_coarse))[np.newaxis, :, np.newaxis]
fine = ddsp.core.upsample_with_windows(coarse, n_fine, add_endpoint=False)
plt.plot(np.linspace(0, n_fine, n_coarse), coarse[0, :, 0], 'o', label='coarse')
plt.plot(np.linspace(0, n_fine, n_fine), fine[0, :, 0], label='fine')
plt.title('Upsample with windows ({} points, {} intervals)'.format(n_coarse, n_coarse - 1))
plt.legend(loc='lower right')
_ = plt.ylim(-0.1, 1.1)
"""
Explanation: upsample_with_windows()
Upsample signal with overlapping hann windows (like an inverse STFT). Good for smooth amplitude envelopes.
End of explanation
"""
coarse = 1.0 - np.sin(np.linspace(0, np.pi, n_coarse))[np.newaxis, :, np.newaxis]
fine = ddsp.core.upsample_with_windows(coarse, n_fine)
n_intervals = (n_coarse - 1)
n_forward = int(n_fine / n_coarse * n_intervals)
plt.plot(np.linspace(0, n_forward, n_coarse), coarse[0, :, 0], 'o', label='coarse')
plt.plot(np.linspace(0, n_fine, n_fine), fine[0, :, 0], label='fine')
plt.title('Upsample with windows ({} points, {} intervals)'.format(n_coarse, n_coarse))
plt.legend(loc='lower right')
_ = plt.ylim(-0.1, 1.1)
"""
Explanation: add_endpoint has the same behavior and defaults to True, as it matches the behavior of fft_convolve.
End of explanation
"""
fine = ddsp.core.resample(coarse, n_fine, method='window')
plt.plot(np.linspace(0, n_forward, n_coarse), coarse[0, :, 0], 'o', label='coarse')
plt.plot(np.linspace(0, n_fine, n_fine), fine[0, :, 0], label='fine')
plt.title('Upsample with windows ({} points, {} intervals)'.format(n_coarse, n_coarse))
plt.legend(loc='lower right')
_ = plt.ylim(-0.1, 1.1)
"""
Explanation: You can also call upsample_with_windows() by calling resample(method='window').
End of explanation
"""
fine = ddsp.core.resample(coarse, n_fine, method='cubic')
plt.plot(np.linspace(0, n_forward, n_coarse), coarse[0, :, 0], 'o', label='coarse')
plt.plot(np.linspace(0, n_fine, n_fine), fine[0, :, 0], label='fine')
plt.title('Bicubic upsampling ({} points, {} intervals)'.format(n_coarse, n_coarse))
plt.legend(loc='lower right')
_ = plt.ylim(-0.1, 1.1)
"""
Explanation: The Hann window transitions are smooth like bicubic, but more gradual and don't overshoot.
End of explanation
"""
|
dpshelio/2015-EuroScipy-pandas-tutorial
|
solved - 02 - Data structures.ipynb
|
bsd-2-clause
|
s = pd.Series([0.1, 0.2, 0.3, 0.4])
s
"""
Explanation: Data structures
Pandas does this through two fundamental object types, both built upon NumPy arrays: the Series object, and the DataFrame object.
Series
A Series is a basic holder for one-dimensional labeled data. It can be created much as a NumPy array is created:
End of explanation
"""
s.index
"""
Explanation: Attributes of a Series: index and values
The series has a built-in concept of an index, which by default is the numbers 0 through N - 1
End of explanation
"""
s.values
"""
Explanation: You can access the underlying numpy array representation with the .values attribute:
End of explanation
"""
s[0]
"""
Explanation: We can access series values via the index, just like for NumPy arrays:
End of explanation
"""
s2 = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd'])
s2
s2['c']
"""
Explanation: Unlike the NumPy array, though, this index can be something other than integers:
End of explanation
"""
pop_dict = {'Germany': 81.3,
'Belgium': 11.3,
'France': 64.3,
'United Kingdom': 64.9,
'Netherlands': 16.9}
population = pd.Series(pop_dict)
population
"""
Explanation: In this way, a Series object can be thought of as similar to an ordered dictionary mapping one typed value to another typed value.
In fact, it's possible to construct a series directly from a Python dictionary:
End of explanation
"""
population['France']
"""
Explanation: We can index the populations like a dict as expected:
End of explanation
"""
population * 1000
"""
Explanation: but with the power of numpy arrays:
End of explanation
"""
population['Belgium':'Germany']
"""
Explanation: Many things we have seen for numpy can also be used with pandas objects.
Slicing:
End of explanation
"""
population[['France', 'Netherlands']]
population[population > 20]
"""
Explanation: Fancy indexing, like indexing with a list or boolean indexing:
End of explanation
"""
population / 100
"""
Explanation: Element-wise operations:
End of explanation
"""
population.mean()
"""
Explanation: A range of methods:
End of explanation
"""
population / population['Belgium'].mean()
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Calculate the population numbers relative to Belgium
</div>
End of explanation
"""
s1 = population[['Belgium', 'France']]
s2 = population[['France', 'Germany']]
s1
s2
s1 + s2
"""
Explanation: Alignment!
One thing to pay attention to is alignment: operations between two series will align on the index:
End of explanation
"""
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
"""
Explanation: DataFrames: Multi-dimensional Data
A DataFrame is a tabular data structure (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series objects which share the same index.
<img src="img/dataframe.png" width=110%>
One of the most common ways of creating a dataframe is from a dictionary of arrays or lists.
Note that in the IPython notebook, the dataframe will display in a rich HTML view:
End of explanation
"""
countries.index
countries.columns
"""
Explanation: Attributes of the DataFrame
A DataFrame has besides a index attribute, also a columns attribute:
End of explanation
"""
countries.dtypes
"""
Explanation: To check the data types of the different columns:
End of explanation
"""
countries.info()
"""
Explanation: An overview of that information can be given with the info() method:
End of explanation
"""
countries.values
"""
Explanation: A DataFrame also has a values attribute, but be careful: when you have heterogeneous data, all values will be upcast to a common dtype:
End of explanation
"""
countries = countries.set_index('country')
countries
"""
Explanation: If we don't like the default integer index, we can set one of our columns as the index:
End of explanation
"""
countries['area']
"""
Explanation: To access a Series representing a column in the data, use typical indexing syntax:
End of explanation
"""
countries['population']*1000000 / countries['area']
"""
Explanation: As you play around with DataFrames, you'll notice that many operations which work on NumPy arrays will also work on dataframes.
For example there's arithmetic. Let's compute density of each country:
End of explanation
"""
countries['density'] = countries['population']*1000000 / countries['area']
countries
"""
Explanation: Adding a new column to the dataframe is very simple:
End of explanation
"""
countries[countries['density'] > 300]
"""
Explanation: We can use masking the way we did in NumPy to select certain data:
End of explanation
"""
countries.sort_values('density', ascending=False)
"""
Explanation: And we can do things like sorting the rows by the values of a column, here by density in descending order:
End of explanation
"""
countries.describe()
"""
Explanation: One useful method to use is the describe method, which computes summary statistics for each column:
End of explanation
"""
countries.plot()
"""
Explanation: The plot method can be used to quickly visualize the data in different ways:
End of explanation
"""
countries['population'].plot(kind='bar')
"""
Explanation: However, for this dataset, it does not say that much:
End of explanation
"""
# Use tab completion to explore the many available readers and writers, e.g.:
# pd.read_csv, pd.read_excel, pd.read_sql, pd.read_json, ...
# countries.to_csv, countries.to_excel, countries.to_html, ...
"""
Explanation: You can play with the kind keyword: 'line', 'bar', 'hist', 'density', 'area', 'pie', 'scatter', 'hexbin'
Importing and exporting data
A wide range of input/output formats are natively supported by pandas:
CSV, text
SQL database
Excel
HDF5
json
html
pickle
...
End of explanation
"""
|
dotsdl/msmbuilder
|
examples/gmrq-model-selection.ipynb
|
lgpl-2.1
|
from __future__ import print_function
import numpy as np
from msmbuilder.example_datasets import load_doublewell
from msmbuilder.cluster import NDGrid
from msmbuilder.msm import MarkovStateModel
from sklearn.pipeline import Pipeline
from sklearn.cross_validation import KFold
"""
Explanation: This example demonstrates the use of cross-validation and the generalized matrix Rayleigh quotient (GMRQ) for selecting
MSM hyperparameters. The GMRQ is a criterion which "scores" how well the MSM eigenvectors generated on the training dataset
serve as slow coordinates for the test dataset [1].
[1] McGibbon, R. T. and V. S. Pande, Variational cross-validation of slow dynamical modes in molecular kinetics (2014)
End of explanation
"""
trajectories = load_doublewell(random_state=0).trajectories
# sub-sample a little bit, by taking only every 100th data point in each trajectory.
trajectories = [t[::100] for t in trajectories]
print([t.shape for t in trajectories])
"""
Explanation: This example uses the doublewell dataset, which consists of ten trajectories in 1D with $x \in [-\pi, \pi]$.
End of explanation
"""
model = Pipeline([
('grid', NDGrid(min=-np.pi, max=np.pi)),
('msm', MarkovStateModel(n_timescales=1, lag_time=1, reversible_type='transpose', verbose=False))
])
"""
Explanation: A pipeline is a way of connecting together multiple estimators, so that we can create a custom model that
performs a sequence of steps. This model is relatively simple. It will first discretize the trajectory data
onto an evenly spaced grid between $-\pi$ and $\pi$, and then build an MSM.
End of explanation
"""
def fit_and_score(trajectories, model, n_states):
cv = KFold(len(trajectories), n_folds=5)
results = []
for n in n_states:
model.set_params(grid__n_bins_per_feature=n)
for fold, (train_index, test_index) in enumerate(cv):
train_data = [trajectories[i] for i in train_index]
test_data = [trajectories[i] for i in test_index]
# fit model with a subset of the data (training data).
# then we'll score it on both this training data (which
# will give an overly-rosy picture of its performance)
# and on the test data.
model.fit(train_data)
train_score = model.score(train_data)
test_score = model.score(test_data)
results.append({
'train_score': train_score,
'test_score': test_score,
'n_states': n,
'fold': fold})
return results
results = fit_and_score(trajectories, model, [5, 10, 25, 50, 100, 200, 500, 750])
import pandas as pd
results = pd.DataFrame(results)
results.head()
%matplotlib inline
from matplotlib import pyplot as plt
plt.figure(figsize=(8,6))
plt.plot(results['n_states'], results['train_score'], c='b', marker='.', ls='')
plt.plot(results['n_states'], results['test_score'], c='r', marker='.', ls='')
mean_over_folds = results.groupby('n_states').aggregate(np.mean)
plt.plot(mean_over_folds.index, mean_over_folds['test_score'], c='r', marker='.', ls='-', label='Mean test')
plt.plot(mean_over_folds.index, mean_over_folds['train_score'], c='b', marker='.', ls='-', label='Mean train')
plt.semilogx()
plt.ylabel('Generalized Matrix Rayleigh Quotient (Score)')
plt.xlabel('Number of states')
best_n_states = mean_over_folds['test_score'].idxmax()
best_test_score = mean_over_folds.loc[best_n_states, 'test_score']
plt.plot(best_n_states, best_test_score, marker='*', ms=20, c='w', label='n_states=%d' % best_n_states)
plt.legend(loc='best', numpoints=1)
plt.show()
"""
Explanation: Cross validation
To get an accurate indication of how well our MSMs are doing at finding the dominant eigenfunctions
of our stochastic process, we need to consider the tendency of statistical models to overfit their
training data. Our MSMs might build transition matrices which fit the noise in training data as opposed
to the underlying signal.
One way to combat overfitting in a data-efficient way is with cross validation. This example uses 5-fold
cross validation.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.21/_downloads/03c9d71de135994dbf45db72856a1f9a/plot_mne_inverse_envelope_correlation.ipynb
|
bsd-3-clause
|
# Authors: Eric Larson <larson.eric.d@gmail.com>
# Sheraz Khan <sheraz@khansheraz.com>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.connectivity import envelope_correlation
from mne.minimum_norm import make_inverse_operator, apply_inverse_epochs
from mne.preprocessing import compute_proj_ecg, compute_proj_eog
data_path = mne.datasets.brainstorm.bst_resting.data_path()
subjects_dir = op.join(data_path, 'subjects')
subject = 'bst_resting'
trans = op.join(data_path, 'MEG', 'bst_resting', 'bst_resting-trans.fif')
src = op.join(subjects_dir, subject, 'bem', subject + '-oct-6-src.fif')
bem = op.join(subjects_dir, subject, 'bem', subject + '-5120-bem-sol.fif')
raw_fname = op.join(data_path, 'MEG', 'bst_resting',
'subj002_spontaneous_20111102_01_AUX.ds')
"""
Explanation: Compute envelope correlations in source space
Compute envelope correlations of orthogonalized activity [1] [2] in source
space using resting state CTF data.
End of explanation
"""
raw = mne.io.read_raw_ctf(raw_fname, verbose='error')
raw.crop(0, 60).pick_types(meg=True, eeg=False).load_data().resample(80)
raw.apply_gradient_compensation(3)
projs_ecg, _ = compute_proj_ecg(raw, n_grad=1, n_mag=2)
projs_eog, _ = compute_proj_eog(raw, n_grad=1, n_mag=2, ch_name='MLT31-4407')
raw.info['projs'] += projs_ecg
raw.info['projs'] += projs_eog
raw.apply_proj()
cov = mne.compute_raw_covariance(raw) # compute before band-pass of interest
"""
Explanation: Here we do some things in the name of speed, such as crop (which will
hurt SNR) and downsample. Then we compute SSP projectors and apply them.
End of explanation
"""
raw.filter(14, 30)
events = mne.make_fixed_length_events(raw, duration=5.)
epochs = mne.Epochs(raw, events=events, tmin=0, tmax=5.,
baseline=None, reject=dict(mag=8e-13), preload=True)
del raw
"""
Explanation: Now we band-pass filter our data and create epochs.
End of explanation
"""
src = mne.read_source_spaces(src)
fwd = mne.make_forward_solution(epochs.info, trans, src, bem)
inv = make_inverse_operator(epochs.info, fwd, cov)
del fwd, src
"""
Explanation: Compute the forward and inverse
End of explanation
"""
labels = mne.read_labels_from_annot(subject, 'aparc_sub',
subjects_dir=subjects_dir)
epochs.apply_hilbert() # faster to apply in sensor space
stcs = apply_inverse_epochs(epochs, inv, lambda2=1. / 9., pick_ori='normal',
return_generator=True)
label_ts = mne.extract_label_time_course(
stcs, labels, inv['src'], return_generator=True)
corr = envelope_correlation(label_ts, verbose=True)
# let's plot this matrix
fig, ax = plt.subplots(figsize=(4, 4))
ax.imshow(corr, cmap='viridis', clim=np.percentile(corr, [5, 95]))
fig.tight_layout()
"""
Explanation: Compute label time series and do envelope correlation
End of explanation
"""
threshold_prop = 0.15 # percentage of strongest edges to keep in the graph
degree = mne.connectivity.degree(corr, threshold_prop=threshold_prop)
stc = mne.labels_to_stc(labels, degree)
stc = stc.in_label(mne.Label(inv['src'][0]['vertno'], hemi='lh') +
mne.Label(inv['src'][1]['vertno'], hemi='rh'))
brain = stc.plot(
clim=dict(kind='percent', lims=[75, 85, 95]), colormap='gnuplot',
subjects_dir=subjects_dir, views='dorsal', hemi='both',
smoothing_steps=25, time_label='Beta band')
"""
Explanation: Compute the degree and plot it
End of explanation
"""
|
ptpro3/ptpro3.github.io
|
Projects/AptListingsAnalysis.ipynb
|
mit
|
# imports
import pandas as pd
import dateutil.parser
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import log_loss
import matplotlib.pyplot as plt
import seaborn as sns
from wordcloud import WordCloud
%matplotlib inline
"""
Explanation: Rental Listings Analysis
By Prashant Tatineni
Project Overview
In this project, I attempt to predict the popularity (target variable: interest_level) of apartment rental listings in New York City based on listing characteristics. The data itself comes from a Kaggle Competition hosted in conjunction with renthop.com.
The dataset was provided as a single file train.json (49,352 rows).
An additional file, test.json (74,659 rows) contains the same columns as train.json, except that the target variable, interest_level, is missing. Predictions of the target variable are to be made on the test.json file and submitted to Kaggle.
Summary of Solution Steps
Load data from JSON
Build initial predictor variables, with interest_level as the target.
Initial run of classification models.
Add category indicators and aggregated features based on manager_id.
Run new Random Forest model.
An attempt to use the images for classification.
Further opportunities with this dataset.
End of explanation
"""
# Load the training dataset from Kaggle.
df = pd.read_json('train.json')
print df.shape
df.head(2)
"""
Explanation: Step 1: Load Data
End of explanation
"""
# Distribution of target value: interest_level
s = df.groupby('interest_level')['listing_id'].count()
s.plot.bar();
df_high = df.loc[df['interest_level'] == 'high']
df_medium = df.loc[df['interest_level'] == 'medium']
df_low = df.loc[df['interest_level'] == 'low']
plt.figure(figsize=(6,10))
plt.scatter(df_low.longitude, df_low.latitude, color='yellow', alpha=0.2, marker='.', label='Low')
plt.scatter(df_medium.longitude, df_medium.latitude, color='green', alpha=0.2, marker='.', label='Medium')
plt.scatter(df_high.longitude, df_high.latitude, color='purple', alpha=0.2, marker='.', label='High')
plt.xlim(-74.04,-73.80)
plt.ylim(40.6,40.9)
plt.title('Map of the listings in NYC')
plt.ylabel('N Lat.')
plt.xlabel('W Long.')
plt.legend(loc=2);
"""
Explanation: Total number of columns is 14 + 1 target:
- 1 target variable (interest_level), with classes low, medium, high
- 1 list of photo links
- lat/long, street address, display address
- listing_id, building_id, manager_id
- numerical (price, bathrooms, bedrooms)
- created date
- text (description, features)
Features for modeling:
- bathrooms
- bedrooms
- created date (calculate age of posting in days)
- description (number of words in description)
- features (number of features)
- photos (number of photos)
- price
- features (split into category indicators)
- manager_id (with manager skill level)
Further opportunities for modeling:
- description (with NLP)
- building_id (with a building popularity level)
- photos (quality, discussed in Step 6)
End of explanation
"""
(pd.to_datetime(df['created'])).sort_values(ascending=False).head()
# The most recent records are 6/29/2016. Computing days old from 6/30/2016.
df['days_old'] = (dateutil.parser.parse('2016-06-30') - pd.to_datetime(df['created'])).apply(lambda x: x.days)
# Add other "count" features
df['num_words'] = df['description'].apply(lambda x: len(x.split()))
df['num_features'] = df['features'].apply(len)
df['num_photos'] = df['photos'].apply(len)
"""
Explanation: Step 2: Initial Features
End of explanation
"""
X = df[['bathrooms','bedrooms','price','latitude','longitude','days_old','num_words','num_features','num_photos']]
y = df['interest_level']
# Scaling is necessary for Logistic Regression and KNN
X_scaled = pd.DataFrame(preprocessing.scale(X))
X_scaled.columns = X.columns
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, random_state=42)
"""
Explanation: Step 3: Modeling, First Pass
End of explanation
"""
lr = LogisticRegression()
lr.fit(X_train, y_train)
y_test_predicted_proba = lr.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
lr = LogisticRegression(solver='newton-cg', multi_class='multinomial')
lr.fit(X_train, y_train)
y_test_predicted_proba = lr.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
"""
Explanation: Logistic Regression
End of explanation
"""
for i in [95,100,105]:
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, y_train)
y_test_predicted_proba = knn.predict_proba(X_test)
print log_loss(y_test, y_test_predicted_proba)
"""
Explanation: KNN
End of explanation
"""
rf = RandomForestClassifier(n_estimators=500, n_jobs=-1)
rf.fit(X_train, y_train)
y_test_predicted_proba = rf.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
"""
Explanation: Random Forest
End of explanation
"""
y_test_predicted = rf.predict(X_test)
accuracy_score(y_test, y_test_predicted)
precision_recall_fscore_support(y_test, y_test_predicted)
rf.classes_
plt.figure(figsize=(5,5))
pd.Series(index = X_train.columns, data = rf.feature_importances_).sort_values().plot(kind= 'bar');
"""
Explanation: Random Forest performs the best with respect to Log Loss.
End of explanation
"""
bnb = BernoulliNB()
bnb.fit(X_train, y_train)
y_test_predicted_proba = bnb.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
gnb = GaussianNB()
gnb.fit(X_train, y_train)
y_test_predicted_proba = gnb.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
"""
Explanation: The above bar plot shows feature importance for the Random Forest classifier. "Price" is the most informative feature related to the target variable "interest_level".
Naive Bayes
End of explanation
"""
clf = MLPClassifier(hidden_layer_sizes=(100,50,10))
clf.fit(X_train, y_train)
y_test_predicted_proba = clf.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
"""
Explanation: Neural Network
End of explanation
"""
# Reduce 1556 unique category text values into 34 main categories
def reduce_categories(full_list):
reduced_list = []
for i in full_list:
item = i.lower()
if 'cats allowed' in item:
reduced_list.append('cats')
if 'dogs allowed' in item:
reduced_list.append('dogs')
if 'elevator' in item:
reduced_list.append('elevator')
if 'hardwood' in item:
reduced_list.append('hardwood')
if 'doorman' in item or 'concierge' in item:
reduced_list.append('doorman')
if 'dishwasher' in item:
reduced_list.append('dishwasher')
if 'laundry' in item or 'dryer' in item:
if 'unit' in item:
reduced_list.append('laundry_in_unit')
else:
reduced_list.append('laundry')
if 'no fee' in item:
reduced_list.append('no_fee')
if 'reduced fee' in item:
reduced_list.append('reduced_fee')
if 'fitness' in item or 'gym' in item:
reduced_list.append('gym')
if 'prewar' in item or 'pre-war' in item:
reduced_list.append('prewar')
if 'dining room' in item:
reduced_list.append('dining')
if 'pool' in item:
reduced_list.append('pool')
if 'internet' in item:
reduced_list.append('internet')
if 'new construction' in item:
reduced_list.append('new_construction')
if 'wheelchair' in item:
reduced_list.append('wheelchair')
if 'exclusive' in item:
reduced_list.append('exclusive')
if 'loft' in item:
reduced_list.append('loft')
if 'simplex' in item:
reduced_list.append('simplex')
if 'fire' in item:
reduced_list.append('fireplace')
if 'lowrise' in item or 'low-rise' in item:
reduced_list.append('lowrise')
if 'midrise' in item or 'mid-rise' in item:
reduced_list.append('midrise')
if 'highrise' in item or 'high-rise' in item:
reduced_list.append('highrise')
if 'ceiling' in item:
reduced_list.append('high_ceiling')
if 'garage' in item or 'parking' in item:
reduced_list.append('parking')
if 'furnished' in item:
reduced_list.append('furnished')
if 'multi-level' in item:
reduced_list.append('multilevel')
if 'renovated' in item:
reduced_list.append('renovated')
if 'super' in item:
reduced_list.append('live_in_super')
if 'green building' in item:
reduced_list.append('green_building')
if 'appliances' in item:
reduced_list.append('new_appliances')
if 'luxury' in item:
reduced_list.append('luxury')
if 'penthouse' in item:
reduced_list.append('penthouse')
if 'deck' in item or 'terrace' in item or 'balcony' in item or 'outdoor' in item or 'roof' in item or 'garden' in item or 'patio' in item:
reduced_list.append('outdoor_space')
return list(set(reduced_list))
df['categories'] = df['features'].apply(reduce_categories)
text = ''
for index, row in df.iterrows():
for i in row.categories:
text = text + i + ' '
plt.figure(figsize=(12,6))
wc = WordCloud(background_color='white', width=1200, height=600).generate(text)
plt.title('Reduced Categories', fontsize=30)
plt.axis("off")
wc.recolor(random_state=0)
plt.imshow(wc);
# Create indicators
X_dummies = pd.get_dummies(df['categories'].apply(pd.Series).stack()).sum(level=0)
"""
Explanation: Step 4: More Complex Features
Splitting out categories into 0/1 dummy variables
End of explanation
"""
# Choose features for modeling (and sorting)
df = df.sort_values('listing_id')
X = df[['bathrooms','bedrooms','price','latitude','longitude','days_old','num_words','num_features','num_photos','listing_id','manager_id']]
y = df['interest_level']
# Merge indicators to X dataframe and sort again to match sorting of y
X = X.merge(X_dummies, how='outer', left_index=True, right_index=True).fillna(0)
X = X.sort_values('listing_id')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
# compute ratios and count for each manager
mgr_perf = pd.concat([X_train.manager_id,pd.get_dummies(y_train)], axis=1).groupby('manager_id').mean()
mgr_perf.head(2)
# Apply weighting for each manager's listings: +1 for High, 0 for Medium, -1 for Low.
mgr_perf['manager_count'] = X_train.groupby('manager_id').count().iloc[:,1]
mgr_perf['manager_skill'] = mgr_perf['high']*1 + mgr_perf['medium']*0 + mgr_perf['low']*-1
# for training set
X_train = X_train.merge(mgr_perf.reset_index(), how='left', left_on='manager_id', right_on='manager_id')
# for test set
X_test = X_test.merge(mgr_perf.reset_index(), how='left', left_on='manager_id', right_on='manager_id')
# Fill na's with mean skill and median count
X_test['manager_skill'] = X_test.manager_skill.fillna(X_test.manager_skill.mean())
X_test['manager_count'] = X_test.manager_count.fillna(X_test.manager_count.median())
# Delete unnecessary columns before modeling
del X_train['listing_id']
del X_train['manager_id']
del X_test['listing_id']
del X_test['manager_id']
del X_train['high']
del X_train['medium']
del X_train['low']
del X_test['high']
del X_test['medium']
del X_test['low']
"""
Explanation: Aggregate manager_id to get features representing manager performance
Note: Need to aggregate manager performance ONLY over a training subset in order to validate against test subset. Otherwise, for any given manager, a portion of their calculated skill level might have been due to listings from the test set. So the train-test split is being performed in this step before creating the columns for manager performance.
End of explanation
"""
rf = RandomForestClassifier(n_estimators=500, n_jobs=-1)
rf.fit(X_train, y_train)
y_test_predicted_proba = rf.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
y_test_predicted = rf.predict(X_test)
accuracy_score(y_test, y_test_predicted)
precision_recall_fscore_support(y_test, y_test_predicted)
rf.classes_
plt.figure(figsize=(15,5))
pd.Series(index = X_train.columns, data = rf.feature_importances_).sort_values().plot(kind = 'bar');
"""
Explanation: Step 5: Modeling, second pass with Random Forest
End of explanation
"""
import numpy as np
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
from keras.layers.core import Flatten, Dense, Activation, Dropout
from keras.preprocessing import image
# My neural network layer sequence is based on the original LeNet architecture
model = Sequential()
# Layer 1
model.add(Convolution2D(32, 5, 5, input_shape=(192, 192, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Layer 2
model.add(Convolution2D(64, 5, 5))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
# Layer 3
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(Dropout(0.5))
# Layer 4
model.add(Dense(512))
model.add(Activation("relu"))
model.add(Dropout(0.5))
# Layer 5
model.add(Dense(2))
model.add(Activation("softmax"))
# "lenet_weights.h5" is the file containing weights from my trained neural network.
model.load_weights('lenet_weights.h5')
# Loading three images from the dataset.
# img_1 & img_2 are from the same High popularity listing
# img_3 is from a Low popularity listing.
pics = []
img_1 = image.load_img('6811966_1.jpg', target_size=(192,192))
pics.append(np.asarray(img_1))
img_2 = image.load_img('6811966_2.jpg', target_size=(192,192))
pics.append(np.asarray(img_2))
img_3 = image.load_img('6812150_1.jpg', target_size=(192,192))
pics.append(np.asarray(img_3))
pics_array = np.stack(pics)/255.
plt.figure(figsize=(12,12))
plt.subplot(131),plt.imshow(img_1),plt.title('6811966, interest_level: High')
plt.xticks([]), plt.yticks([])
plt.subplot(132),plt.imshow(img_2),plt.title('6811966, interest_level: High')
plt.xticks([]), plt.yticks([])
plt.subplot(133),plt.imshow(img_3),plt.title('6812150, interest_level: Low')
plt.xticks([]), plt.yticks([])
plt.show()
model.predict_classes(pics_array)
"""
Explanation: As seen here, introducing feature categories and manager performance has improved the model. In particular, manager_skill shows up as the dominant feature in terms of importance in this Random Forest model.
Step 6: Image Classification Attempt
I did not use the actual listing images in my model. Here, I outline an attempt at classifying the listing based on image quality using a "blurry image" detector that I created with a Convolutional Neural Network. For more details see my discussion of that project.
End of explanation
"""
df['num_photos'].mode()
df['num_photos'].median()
df[df.listing_id == 6811966][['listing_id','description','interest_level','num_photos']]
df[df.listing_id == 6812150][['listing_id','description','interest_level','num_photos']]
"""
Explanation: My model classified the first image for listing 6811966 as 0 = clear, but the second as 1 = blurry. This is likely due to the larger prevalance of a white wash-out effect in the second image from sunlight.
The third image was classified correctly as 1 = blurry; it is indeed a blurry image. However, this alone is probably not enough to decide listing popularity.
As seen below, the typical listing has 5 photos attached, while the High popularity listing we are discussing here has a total of 7 photos. The Low popularity listing meanwhile has only this 1 photo. So the number of photos is just as likely as blurriness to affect listing popularity in this case.
End of explanation
"""
|
GoogleCloudPlatform/practical-ml-vision-book
|
09_deploying/09b_rest.ipynb
|
apache-2.0
|
!cat ./vertex_deploy.sh
!./vertex_deploy.sh
"""
Explanation: Predictions using a REST endpoint
In this notebook, we start from an already trained and saved model (as in Chapter 7).
For convenience, we have put this model in a public bucket in gs://practical-ml-vision-book/flowers_5_trained
We deploy this model to a REST endpoint, and then show how to invoke the model using POST operations.
REST endpoint
End of explanation
"""
# CHANGE THESE TO REFLECT WHERE YOU DEPLOYED THE MODEL
import os
os.environ['ENDPOINT_ID'] = '4327589805996113920' # CHANGE
os.environ['MODEL_ID'] = '1963683786742824960' # CHANGE
os.environ['PROJECT'] = 'ai-analytics-solutions' # CHANGE
os.environ['BUCKET'] = 'ai-analytics-solutions-mlvisionbook' # CHANGE
os.environ['REGION'] = 'us-central1' # CHANGE
"""
Explanation: IMPORTANT: CHANGE THIS CELL
Note the endpoint ID and deployed model ID above. Set it in the cell below.
End of explanation
"""
%%writefile request.json
{
"instances": [
{
"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg"
},
{
"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg"
},
{
"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg"
},
{
"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg"
},
{
"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg"
}
]
}
%%bash
gcloud ai endpoints predict ${ENDPOINT_ID} \
--region=${REGION} \
--json-request=request.json \
--format=json
"""
Explanation: JSON request
End of explanation
"""
# Invoke from Python.
import json
from oauth2client.client import GoogleCredentials
import requests
PROJECT = os.environ['PROJECT']
REGION = os.environ['REGION']
ENDPOINT_ID = os.environ['ENDPOINT_ID']
token = GoogleCredentials.get_application_default().get_access_token().access_token
api = "https://{}-aiplatform.googleapis.com/v1/projects/{}/locations/{}/endpoints/{}:predict".format(
REGION, PROJECT, REGION, ENDPOINT_ID)
headers = {"Authorization": "Bearer " + token }
data = {
"instances": [
{
"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg"
},
{
"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg"
},
{
"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg"
},
{
"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg"
},
{
"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg"
}
]
}
response = requests.post(api, json=data, headers=headers)
print(response.content)
"""
Explanation: Sending over HTTP Post
End of explanation
"""
%%writefile batchinputs.jsonl
{"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg"}
{"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg"}
{"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg"}
{"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg"}
{"filenames": "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg"}
%%bash
gsutil cp batchinputs.jsonl gs://${BUCKET}
# Invoke from Python.
import json
from oauth2client.client import GoogleCredentials
import requests
PROJECT = os.environ['PROJECT']
REGION = os.environ['REGION']
ENDPOINT_ID = os.environ['ENDPOINT_ID']
MODEL_ID = os.environ['MODEL_ID']
BUCKET = os.environ['BUCKET'] # used for staging
BATCH_JOB_NAME = "batch_pred_job"
INPUT_FORMAT = "jsonl"
INPUT_URI = "gs://{}/batchinputs.jsonl".format(BUCKET)
OUTPUT_DIRECTORY = "gs://{}/batch_predictions".format(BUCKET)
MACHINE_TYPE = "n1-standard-2"
STARTING_REPLICA_COUNT = 1
BATCH_SIZE = 64
token = GoogleCredentials.get_application_default().get_access_token().access_token
api = "https://{}-aiplatform.googleapis.com/v1/projects/{}/locations/{}/batchPredictionJobs".format(
REGION, PROJECT, REGION
)
headers = {"Authorization": "Bearer " + token}
data = {
"displayName": BATCH_JOB_NAME,
"model": "projects/{}/locations/{}/models/{}".format(
PROJECT, REGION, MODEL_ID
),
"inputConfig": {
"instancesFormat": INPUT_FORMAT,
"gcsSource": {
"uris": [INPUT_URI],
},
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": OUTPUT_DIRECTORY,
},
},
"dedicatedResources" : {
"machineSpec" : {
"machineType": MACHINE_TYPE
},
"startingReplicaCount": STARTING_REPLICA_COUNT
},
"manualBatchTuningParameters": {
"batch_size": BATCH_SIZE,
}
}
response = requests.post(api, json=data, headers=headers)
print(response.content)
"""
Explanation: [Optional] CAIP Batch prediction
End of explanation
"""
import apache_beam as beam
import json
from oauth2client.client import GoogleCredentials
import requests
class ModelPredict:
def __init__(self, project, region, endpoint_id):
self._api = "https://{}-aiplatform.googleapis.com/v1/projects/{}/locations/{}/endpoints/{}:predict".format(
region, project, region, endpoint_id)
def __call__(self, filenames):
token = GoogleCredentials.get_application_default().get_access_token().access_token
if isinstance(filenames, str):
# Only one element, put it into a batch of 1.
data = {
"instances": [
{"filenames": filenames}
]
}
else:
data = {
"instances": []
}
for f in filenames:
data["instances"].append({
"filenames" : f
})
# print(data)
headers = {"Authorization": "Bearer " + token }
response = requests.post(self._api, json=data, headers=headers)
response = json.loads(response.content.decode("utf-8"))
# print(response)
if isinstance(filenames, str):
result = response["predictions"][0]
result["filename"] = filenames
yield result
else:
for (a,b) in zip(filenames, response["predictions"]):
result = b
result["filename"] = a
yield result
PROJECT = os.environ['PROJECT']
REGION = os.environ['REGION']
ENDPOINT_ID = os.environ['ENDPOINT_ID']
with beam.Pipeline() as p:
(p
| "input" >> beam.Create([
"gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg",
"gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg",
"gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg",
"gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg",
"gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg"
])
| "batch" >> beam.BatchElements(min_batch_size=2, max_batch_size=3)
| "addpred" >> beam.FlatMap(ModelPredict(PROJECT, REGION, ENDPOINT_ID))
| "write" >> beam.Map(print)
)
"""
Explanation: [Optional] Invoking online predictions from Apache Beam
End of explanation
"""
|
rdempsey/web-scraping-data-mining-course
|
week7/2_data_exploration/4. Create Basic Plots.ipynb
|
mit
|
# To show matplotlib plots in iPython Notebook we can use an iPython magic function
%matplotlib inline
# Import everything we need
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
"""
Explanation: Create Basic Charts (Plots)
In this notebook we'll be creating a number of basic charts from our data, including a histogram, box plot, and scatterplot.
End of explanation
"""
# Import the dataset from the CSV file
accidents_data_file = '/Users/robert.dempsey/Dropbox/Private/Art of Skill Hacking/' \
'Books/Python Business Intelligence Cookbook/Data/Stats19-Data1979-2004/Accidents7904.csv'
accidents = pd.read_csv(accidents_data_file,
sep=',',
header=0,
index_col=False,
parse_dates=['Date'],
dayfirst=True,
tupleize_cols=False,
error_bad_lines=True,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False,
nrows=1000000
)
accidents.head()
"""
Explanation: Import The Data
End of explanation
"""
# Create a frequency table of casualty counts from the previous recipe
casualty_count = accidents.groupby('Date').agg({'Number_of_Casualties': np.sum})
# Create a histogram from the casualty count dataframe
plt.hist(casualty_count['Number_of_Casualties'],
bins=30)
plt.title('Number of Casualties Histogram')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.show()
"""
Explanation: Create a Histogram for a Column
Create a histogram of the number of casualties
End of explanation
"""
# Show the probability of finding a number in a bin
plt.hist(casualty_count['Number_of_Casualties'],
bins=30,
normed=True)
plt.title('Probability Distribution')
plt.xlabel('Value')
plt.ylabel('Probability')
plt.show()
"""
Explanation: Plot the Data as a Probability Distribution
End of explanation
"""
# Shows the probability of finding a number in a bin or any lower bin
plt.hist(casualty_count['Number_of_Casualties'],
bins=20,
normed=True,
cumulative=True)
plt.title('Cumulative Distribution')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.show()
"""
Explanation: Plot a Cumulative Distribution Function
End of explanation
"""
plt.hist(casualty_count['Number_of_Casualties'],
bins=20,
histtype='step')
plt.title('Number of Casualties Histogram')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.show()
"""
Explanation: Show the Histogram as a Stepped Line
End of explanation
"""
# Create a frequency table of vehicle counts
vehicle_count = accidents.groupby('Date').agg({'Number_of_Vehicles': np.sum})
# Plot the two dataframes
plt.hist(casualty_count['Number_of_Casualties'], bins=20, histtype='stepfilled', normed=True, color='b', label='Casualties')
plt.hist(vehicle_count['Number_of_Vehicles'], bins=20, histtype='stepfilled', normed=True, color='r', alpha=0.5, label='Vehicles')
plt.title("Casualties/Vehicles Histogram")
plt.xlabel("Value")
plt.ylabel("Probability")
plt.legend()
plt.show()
"""
Explanation: Plot Two Sets of Values in a Probability Distribution
End of explanation
"""
data_to_plot = [casualty_count['Number_of_Casualties'],
vehicle_count['Number_of_Vehicles']]
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axis instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(data_to_plot)
# Change the color and linewidth of the caps
for cap in bp['caps']:
cap.set(color='#7570b3', linewidth=2)
# Change the color and linewidth of the medians
for median in bp['medians']:
median.set(color='#b2df8a', linewidth=2)
# Change the style of the fliers and their fill
for flier in bp['fliers']:
flier.set(marker='o', color='#e7298a', alpha=0.5)
# Add x-axis labels
ax.set_xticklabels(['Casualties', 'Vehicles'])
# Show the figure
fig.savefig('fig1.png', bbox_inches='tight')
"""
Explanation: Create a Customized Box Plot with Whiskers
End of explanation
"""
# Create a figure instance
fig = plt.figure()
# Create an axis instance
ax = fig.add_subplot(111)
# Create the bar chart
ax.bar(range(len(casualty_count.index.values)), casualty_count['Number_of_Casualties'])
# Save the figure
fig.savefig('fig2.png')
"""
Explanation: Create a Basic Bar Chart of Casualties Over Time
End of explanation
"""
|
maxhutch/sem
|
Ducts.ipynb
|
gpl-3.0
|
alphs = list(np.linspace(0,pi/2, 16, endpoint=False))
Re=2000;
N = 7
Nl = 257
Nz = 2049
yms = []; y1s = []; y10s = []; zms = []
for alph in alphs:
yl=mesh(alph, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
yms.append(ym)
y1s.append(y1)
y10s.append(y10)
zms.append(zm)
alpha = 0.1
plot_units(yms, y1s, y10s, zms, alpha)
yl=mesh(alpha, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
print("Final values:", ym, y1, y10, zm, cm)
yl=mesh(.1, 257)
plt.figure(figsize=(256,1))
plot_mesh(yl)
plt.savefig('mesh_8.png')
print(yl)
"""
Explanation: Find the best $\alpha$ for $p = 7, N_y = 256, N_z = 2048$
End of explanation
"""
alphs = list(np.linspace(0,pi/2, 16, endpoint=False))
Re=2000;
N = 16
Nl = 129
Nz = 1025
yms = []; y1s = []; y10s = []; zms = []; cms = []
for alph in alphs:
yl=mesh(alph, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
yms.append(ym)
y1s.append(y1)
y10s.append(y10)
zms.append(zm)
cms.append(cm)
alpha = 0.22
plot_units(yms, y1s, y10s, zms, alpha)
yl=mesh(alpha, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
print("Final values:", ym, y1, y10, zm, cm)
yl=mesh(0.22, 129)
plt.figure(figsize=(128,1))
plot_mesh(yl)
plt.savefig('mesh_16.png')
print(yl)
"""
Explanation: Find the best $\alpha$ for $p = 15, N_y = 128, N_z = 1024$
End of explanation
"""
alphs = list(np.linspace(0,pi/2, 16, endpoint=False))
Re=2000;
N = 31
Nl = 65
Nz = 513
yms = []; y1s = []; y10s = []; zms = []
for alph in alphs:
yl=mesh(alph, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
yms.append(ym)
y1s.append(y1)
y10s.append(y10)
zms.append(zm)
alpha = 0.36
plot_units(yms, y1s, y10s, zms, alpha)
yl=mesh(alpha, Nl)
ym, y1, y10, zm, cm = wall_units(yl,Nz, N,Re)
print("Final values:", ym, y1, y10, zm, cm)
yl=mesh(.36, 65)
plt.figure(figsize=(64,1))
plot_mesh(yl)
plt.savefig('mesh_32.png')
print(yl)
"""
Explanation: Find the best $\alpha$ for $p = 31, N_y = 64, N_z = 512$
End of explanation
"""
|
OceanPARCELS/parcels
|
parcels/examples/tutorial_NestedFields.ipynb
|
mit
|
%matplotlib inline
from parcels import Field, NestedField, FieldSet, ParticleSet, JITParticle, plotTrajectoriesFile, AdvectionRK4
import numpy as np
"""
Explanation: Tutorial on how to combine different Fields into a NestedField object
In some applications, you may have access to different fields that each cover only part of the region of interest. Then, you would like to combine them all together. You may also have a field covering the entire region and another one only covering part of it, but with a higher resolution. The set of those fields form what we call nested fields.
It is possible to combine all those fields with kernels, either with different if/else statements depending on particle position, or using recovery kernels (if only two levels of nested fields).
However, an easier way to work with nested fields in Parcels is to combine all those fields into one NestedField object. The Parcels code will then try to successively interpolate the different fields.
For each Particle, the algorithm is the following:
1. Interpolate the particle onto the first Field in the NestedFields list.
2. If the interpolation succeeds or if an error other than ErrorOutOfBounds is thrown, the function is stopped.
3. If an ErrorOutOfBounds is thrown, try step 1) again with the next Field in the NestedFields list.
4. If interpolation on the last Field in the NestedFields list also returns an ErrorOutOfBounds, then the Particle is flagged as OutOfBounds.
This algorithm means that the order of the fields in the NestedField matters. In particular, the smallest/finest resolution fields have to be listed before the larger/coarser resolution fields.
This tutorial shows how to use these NestedField with a very idealised example.
End of explanation
"""
dim = 21
lon = np.linspace(0., 2e3, dim, dtype=np.float32)
lat = np.linspace(0., 2e3, dim, dtype=np.float32)
lon_g, lat_g = np.meshgrid(lon, lat)
V1_data = np.cos(lon_g / 200 * np.pi/2)
U1 = Field('U1', np.ones((dim, dim), dtype=np.float32), lon=lon, lat=lat)
V1 = Field('V1', V1_data, grid=U1.grid)
"""
Explanation: First define zonal and meridional velocity fields on a high-resolution (dx = 100 m) 2 km x 2 km grid with a flat mesh. The zonal velocity is a uniform 1 m/s, and the meridional velocity is equal to cos(lon / 200 * pi / 2) m/s.
End of explanation
"""
xdim = 11
ydim = 3
lon = np.linspace(-2e3, 18e3, xdim, dtype=np.float32)
lat = np.linspace(-1e3, 3e3, ydim, dtype=np.float32)
lon_g, lat_g = np.meshgrid(lon, lat)
V2_data = np.cos(lon_g / 200 * np.pi/2)
U2 = Field('U2', np.ones((ydim, xdim), dtype=np.float32), lon=lon, lat=lat)
V2 = Field('V2', V2_data, grid=U2.grid)
"""
Explanation: Now define the same velocity field on a low resolution (dx = 2km) 20kmx4km grid.
End of explanation
"""
U = NestedField('U', [U1, U2])
V = NestedField('V', [V1, V2])
fieldset = FieldSet(U, V)
pset = ParticleSet(fieldset, pclass=JITParticle, lon=[0], lat=[1000])
output_file = pset.ParticleFile(name='NestedFieldParticle.nc', outputdt=50)
pset.execute(AdvectionRK4, runtime=14000, dt=10, output_file=output_file)
output_file.export() # export the trajectory data to a netcdf file
plt = plotTrajectoriesFile('NestedFieldParticle.nc', show_plt=False)
plt.plot([0,2e3,2e3,0,0],[0,0,2e3,2e3,0], c='orange')
plt.plot([-2e3,18e3,18e3,-2e3,-2e3],[-1e3,-1e3,3e3,3e3,-1e3], c='green');
"""
Explanation: We now combine those fields into a NestedField and create the fieldset
End of explanation
"""
fieldset = FieldSet(U, V) # Need to redefine fieldset because FieldSets need to be constructed before ParticleSets
F1 = Field('F1', np.ones((U1.grid.ydim, U1.grid.xdim), dtype=np.float32), grid=U1.grid)
F2 = Field('F2', 2*np.ones((U2.grid.ydim, U2.grid.xdim), dtype=np.float32), grid=U2.grid)
F = NestedField('F', [F1, F2])
fieldset.add_field(F)
from parcels import Variable
def SampleNestedFieldIndex(particle, fieldset, time):
particle.f = fieldset.F[time, particle.depth, particle.lat, particle.lon]
class SampleParticle(JITParticle):
f = Variable('f', dtype=np.int32)
pset = ParticleSet(fieldset, pclass= SampleParticle, lon=[1000], lat=[500])
pset.execute(SampleNestedFieldIndex, runtime=0, dt=0)
print('Particle (%g, %g) interpolates Field #%d' % (pset[0].lon, pset[0].lat, pset[0].f))
pset[0].lon = 10000
pset.execute(SampleNestedFieldIndex, runtime=0, dt=0)
print('Particle (%g, %g) interpolates Field #%d' % (pset[0].lon, pset[0].lat, pset[0].f))
"""
Explanation: As we observe, there is a change in the dynamics at lon=2000, which corresponds to the change of grid.
The analytical solution to the problem:
\begin{align}
dx/dt &= 1;\\
dy/dt &= \cos(x \pi/400);\\
\text{with } x(0) &= 0, y(0) = 1000
\end{align}
is
\begin{align}
x(t) &= t;\\
y(t) &= 1000 + 400/\pi \sin(t \pi / 400)
\end{align}
which is captured by the High Resolution field (orange area) but not the Low Resolution one (green area).
Keep track of the field interpolated
For different reasons, you may want to keep track of the field you have interpolated. You can do that easily by creating another field that share the grid with original fields.
Watch out that this operation has a cost of a full interpolation operation.
End of explanation
"""
|
google/applied-machine-learning-intensive
|
content/xx_misc/activation_functions/colab.ipynb
|
apache-2.0
|
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/xx_misc/activation_functions/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
def linear(x):
return x
inputs = np.linspace(-10, 10, 10)
outputs = [linear(x) for x in inputs]
_ = plt.plot(inputs, outputs)
"""
Explanation: Activation Functions
Activation functions are core components of neural networks. These functions are used in every node of the network to reduce a vector of inputs into an output value.
Learning when to apply specific activation functions is a critical skill for building deep learning models.
What Is an Activation Function?
Picture yourself as a node in a neural network. On one side of you there are multiple input streams passing data from the prior layer. On the other side there are multiple output streams that we use to pass data to every node in the next layer.
We expect the data from our input layer to contain many different values since we get data from different nodes. On the output side we'll give every node in the next layer the same value. Distilling the multiple diverse inputs into a single value that we can hand to the next layer is the job of an activation function.
In mathematical terms it looks something like this:
$$a = activation(bias + \sum_{i=0}^{n}{x_i})$$
We sum our inputs from the prior nodes, $x_i$, and our bias. We then pass that summation through an activation function to get our output value, $a$, which we then pass to every node in the next layer of the network.
Though activation functions are used in every layer of a network, it is particularly important to understand how they behave at the output layer of a model.
Pass-Through Activation
The most basic activation function is the linear activation function. This function takes the sum of inputs and bias, does nothing to it, and hands the result to the next layer of the network.
Let's plot the linear activation function in the code block below.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
def relu(x):
if x < 0:
return 0
return x
inputs = np.linspace(-10, 10, 100)
outputs = [relu(x) for x in inputs]
_ = plt.plot(inputs, outputs)
"""
Explanation: That's a pretty simple activation function to understand. But what value does it provide?
This function can be useful, especially in your output layer, if you want your model to produce large or negative values. Many of the activation functions we'll see greatly restrict the range of values that they output. The linear activation function does not restrict its output range at all. Any real number can be produced by a node with this activation function.
Rectified Linear Unit (ReLU)
There is another linear activation function that turns out to be quite useful: the Rectified Linear Unit (ReLU).
ReLU simply returns the input value unless that value is less than zero. In that case it returns zero.
$$a = \begin{cases}
x \ , &x \geq 0 \\
0 \ , &x < 0 \\
\end{cases}$$
Let's take a look at ReLU:
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
def leaky_relu(x):
pass # Your code goes here
inputs = np.linspace(-10, 10, 100)
outputs = [leaky_relu(x) for x in inputs]
_ = plt.plot(inputs, outputs)
"""
Explanation: This is also quite a simple activation function, but it turns out to be very useful in practice. Many powerful neural networks utilize ReLU activation, at least in part. It has the advantage of making training very fast; however, nodes using ReLU do run the risk of "dying" during the training process. A node dies when it gets into a state where it always produces a zero output.
Let's also think about the use of a ReLU node in a network. If the output layer consists of ReLU values, then the output of the network will be from 0 to infinity.
This works fine for models that are predicting positive values, but what if your model is predicting celsius temperatures in Antarctica or some other potentially negative value?
In this case you would need to adjust the target training data to all be positive, say by adding 100 to it, and then do the reverse to the output of the model, subtract 100 from each value.
You'll find that you'll need to do this type of adjustment quite often when building models. Understanding your activation functions, especially in your output layer, is critically important. When you know the range of values that your model can produce you can adjust your training data to fall within that range.
Leaky ReLU
We talked about dead nodes when discussing the ReLU activation function. One strategy that helps mitigate the dead node issue is a "leaky" ReLU. Leaky ReLUs are ReLU functions that pass through any value zero or greater. For values less than zero they multiply the value by an alpha coefficient and return the result.
$$a = \begin{cases}
x \ , &x \geq 0 \\
x * \alpha \ , &x < 0 \\
\end{cases}$$
TensorFlow Keras doesn't make a distinction between ReLU and Leaky ReLU; it simply provides an alpha parameter to relu.
Exercise 1: Leaky ReLU
Write a leaky_relu function that passes through any value zero or greater and applies an alpha of 0.1 to values less than zero.
Student Solution
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
def binary_step(x):
if x < 0:
return 0
return 1
inputs = np.linspace(-10, 10, 100)
outputs = [binary_step(x) for x in inputs]
_ = plt.plot(inputs, outputs)
"""
Explanation: Binary Step
The binary step activation function serves as an on/off switch for a node. This function returns zero if its input is on one side of a threshold and returns one if it is on the other.
$$a = \begin{cases}
1 \ , &x \geq 0 \\
0 \ , &x < 0 \\
\end{cases}$$
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
def sigmoid(x):
return 1 / (1 + np.exp(-x))
inputs = np.linspace(-10, 10, 100)
outputs = [sigmoid(x) for x in inputs]
_ = plt.plot(inputs, outputs)
"""
Explanation: At the output layer this function can be useful when you need to make a yes/no decision and don't care about the confidence of the model in that decision.
Sigmoid
Activation functions can also be non-linear. The sigmoid function works using a logistic curve.
$$a=\frac{1}{1+e^{-x}}$$
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
def tanh(x):
return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
inputs = np.linspace(-10, 10, 100)
outputs = [tanh(x) for x in inputs]
_ = plt.plot(inputs, outputs)
"""
Explanation: You'll notice that the sigmoid function restricts its output range to $(0.0, 1.0)$. This is typically not a concern in hidden layers, but needs to be considered in the output layer. You'll likely need to scale your training targets down to this range and expand your predictions back to your actual data range.
Sigmoids in the output layer can be very useful for predicting continuous values. They can also be useful when making binary classification decisions. You can build a model that outputs values from $(0.0, 1.0)$ and treat the output as a confidence in a decision where values closer to 0.0 show no confidence and values closer to 1.0 show extreme confidence. You then experiment and set a threshold where you make your binary decision.
For example, if you were making a classifier to determine if an image contained a cat you might find that any time the model returned a value over 0.85 there was typically a cat in the image. Before making this decision, you'd need to experiment, find the precision and recall for different thresholds, and choose the one that fits your use case the best.
Hyperbolic Tangent (tanh)
Similar to sigmoid, the hyperbolic tangent, tanh is a non-linear activation function that can be used in your models. The biggest difference between sigmoid and tanh is that tanh has an output range of $(-1.0, 1.0)$
$$a=\frac{e^x-e^{-x}}{e^x+e^{-x}}$$
End of explanation
"""
|
tensorflow/model-remediation
|
docs/min_diff/tutorials/min_diff_keras.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
#@title Installs
!pip install --upgrade tensorflow-model-remediation
!pip install --upgrade fairness-indicators
"""
Explanation: Model Remediation Case Study
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/responsible_ai/model_remediation/min_diff/tutorials/min_diff_keras">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-remediation/blob/master/docs/min_diff/tutorials/min_diff_keras.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-remediation/blob/master/docs/min_diff/tutorials/min_diff_keras.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td>
<a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/model-remediation/docs/min_diff/tutorials/min_diff_keras.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table></div>
In this notebook, we’ll train a text classifier to identify written content that could be considered toxic or harmful, and apply MinDiff to remediate some fairness concerns. In our workflow, we will:
1. Evaluate our baseline model’s performance on text containing references to sensitive groups.
2. Improve performance on any underperforming groups by training with MinDiff.
3. Evaluate the new model’s performance on our chosen metric.
Our purpose is to demonstrate usage of the MinDiff technique with a very minimal workflow, not to lay out a principled approach to fairness in machine learning. As such, our evaluation will only focus on one sensitive category and a single metric. We also don’t address potential shortcomings in the dataset, nor tune our configurations. In a production setting, you would want to approach each of these with rigor. For more information on evaluating for fairness, see this guide.
Setup
We begin by installing Fairness Indicators and TensorFlow Model Remediation.
End of explanation
"""
#@title Imports
import copy
import os
import requests
import tempfile
import zipfile
import tensorflow_model_remediation.min_diff as md
from tensorflow_model_remediation.tools.tutorials_utils import min_diff_keras_utils
from fairness_indicators.tutorial_utils import util as fi_util
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow_model_analysis.addons.fairness.view import widget_view
"""
Explanation: Import all necessary components, including MinDiff and Fairness Indicators for evaluation.
End of explanation
"""
# We use a helper utility to preprocessed data for convenience and speed.
data_train, data_validate, validate_tfrecord_file, labels_train, labels_validate = min_diff_keras_utils.download_and_process_civil_comments_data()
"""
Explanation: We use a utility function to download the preprocessed data and prepare the labels to match the model’s output shape. The function also downloads the data as TFRecords to make later evaluation quicker. Alternatively, you may convert the Pandas DataFrame into TFRecords with any available utility conversion function.
End of explanation
"""
TEXT_FEATURE = 'comment_text'
LABEL = 'toxicity'
BATCH_SIZE = 512
"""
Explanation: We define a few useful constants. We will train the model on the ’comment_text’ feature, with our target label as ’toxicity’. Note that the batch size here is chosen arbitrarily, but in a production setting you would need to tune it for best performance.
End of explanation
"""
#@title Seeds
np.random.seed(1)
tf.random.set_seed(1)
"""
Explanation: Set random seeds. (Note that this does not fully stabilize results.)
End of explanation
"""
use_pretrained_model = True #@param {type:"boolean"}
if use_pretrained_model:
URL = 'https://storage.googleapis.com/civil_comments_model/baseline_model.zip'
BASE_PATH = tempfile.mkdtemp()
ZIP_PATH = os.path.join(BASE_PATH, 'baseline_model.zip')
MODEL_PATH = os.path.join(BASE_PATH, 'tmp/baseline_model')
r = requests.get(URL, allow_redirects=True)
open(ZIP_PATH, 'wb').write(r.content)
with zipfile.ZipFile(ZIP_PATH, 'r') as zip_ref:
zip_ref.extractall(BASE_PATH)
baseline_model = tf.keras.models.load_model(
MODEL_PATH, custom_objects={'KerasLayer' : hub.KerasLayer})
else:
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
loss = tf.keras.losses.BinaryCrossentropy()
baseline_model = min_diff_keras_utils.create_keras_sequential_model()
baseline_model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
baseline_model.fit(x=data_train[TEXT_FEATURE],
y=labels_train,
batch_size=BATCH_SIZE,
epochs=20)
"""
Explanation: Define and train the baseline model
To reduce runtime, we use a pretrained model by default. It is a simple Keras sequential model with an initial embedding and convolution layers, outputting a toxicity prediction. If you prefer, you can change this and train from scratch using our utility function to create the model. (Note that since your environment is likely different from ours, you would need to customize the tuning and evaluation thresholds.)
End of explanation
"""
base_dir = tempfile.mkdtemp(prefix='saved_models')
baseline_model_location = os.path.join(base_dir, 'model_export_baseline')
baseline_model.save(baseline_model_location, save_format='tf')
"""
Explanation: We save the model in order to evaluate using Fairness Indicators.
End of explanation
"""
# We use a helper utility to hide the evaluation logic for readability.
base_dir = tempfile.mkdtemp(prefix='eval')
eval_dir = os.path.join(base_dir, 'tfma_eval_result')
eval_result = fi_util.get_eval_results(
baseline_model_location, eval_dir, validate_tfrecord_file)
"""
Explanation: Next we run Fairness Indicators. As a reminder, we’re just going to perform sliced evaluation for comments referencing one category, religious groups. In a production environment, we recommend taking a thoughtful approach to determining which categories and metrics to evaluate across.
To compute model performance, the utility function makes a few convenient choices for metrics, slices, and classifier thresholds.
End of explanation
"""
widget_view.render_fairness_indicator(eval_result)
"""
Explanation: Render Evaluation Results
End of explanation
"""
# Create masks for the sensitive and nonsensitive groups
minority_mask = data_train.religion.apply(
lambda x: any(religion in x for religion in ('jewish', 'muslim')))
majority_mask = data_train.religion.apply(lambda x: x == "['christian']")
# Select nontoxic examples, so MinDiff will be able to reduce sensitive FP rate.
true_negative_mask = data_train['toxicity'] == 0
data_train_main = copy.copy(data_train)
data_train_sensitive = data_train[minority_mask & true_negative_mask]
data_train_nonsensitive = data_train[majority_mask & true_negative_mask]
"""
Explanation: Let’s look at the evaluation results. Try selecting the metric false positive rate (FPR) with threshold 0.450. We can see that the model does not perform as well for some religious groups as for others, displaying a much higher FPR. Note the wide confidence intervals on some groups because they have too few examples. This makes it difficult to say with certainty that there is a significant difference in performance for these slices. We may want to collect more examples to address this issue. We can, however, attempt to apply MinDiff for the two groups that we are confident are underperforming.
We’ve chosen to focus on FPR, because a higher FPR means that comments referencing these identity groups are more likely to be incorrectly flagged as toxic than other comments. This could lead to inequitable outcomes for users engaging in dialogue about religion, but note that disparities in other metrics can lead to other types of harm.
Define and Train the MinDiff Model
Now, we’ll try to improve the FPR for underperforming religious groups. We’ll attempt to do so using MinDiff, a remediation technique that seeks to balance error rates across slices of your data by penalizing disparities in performance during training. When we apply MinDiff, model performance may degrade slightly on other slices. As such, our goals with MinDiff will be:
* Improved performance for underperforming groups
* Limited degradation for other groups and overall performance
Prepare your data
To use MinDiff, we create two additional data splits:
* A split for non-toxic examples referencing minority groups: In our case, this will include comments with references to our underperforming identity terms. We don’t include some of the groups because there are too few examples, leading to higher uncertainty with wide confidence interval ranges.
* A split for non-toxic examples referencing the majority group.
It’s important to have sufficient examples belonging to the underperforming classes. Based on your model architecture, data distribution, and MinDiff configuration, the amount of data needed can vary significantly. In past applications, we have seen MinDiff work well with 5,000 examples in each data split.
In our case, the groups in the minority splits have example quantities of 9,688 and 3,906. Note the class imbalances in the dataset; in practice, this could be cause for concern, but we won’t seek to address them in this notebook since our intention is just to demonstrate MinDiff.
We select only negative examples for these groups, so that MinDiff can optimize on getting these examples right. It may seem counterintuitive to carve out sets of ground truth negative examples if we’re primarily concerned with disparities in false positive rate, but remember that a false positive prediction is a ground truth negative example that’s incorrectly classified as positive, which is the issue we’re trying to address.
Create MinDiff DataFrames
End of explanation
"""
# Convert the pandas DataFrames to Datasets.
dataset_train_main = tf.data.Dataset.from_tensor_slices(
(data_train_main['comment_text'].values,
data_train_main.pop(LABEL).values.reshape(-1,1) * 1.0)).batch(BATCH_SIZE)
dataset_train_sensitive = tf.data.Dataset.from_tensor_slices(
(data_train_sensitive['comment_text'].values,
data_train_sensitive.pop(LABEL).values.reshape(-1,1) * 1.0)).batch(BATCH_SIZE)
dataset_train_nonsensitive = tf.data.Dataset.from_tensor_slices(
(data_train_nonsensitive['comment_text'].values,
data_train_nonsensitive.pop(LABEL).values.reshape(-1,1) * 1.0)).batch(BATCH_SIZE)
"""
Explanation: We also need to convert our Pandas DataFrames into Tensorflow Datasets for MinDiff input. Note that unlike the Keras model API for Pandas DataFrames, using Datasets means that we need to provide the model’s input features and labels together in one Dataset. Here we provide the 'comment_text' as an input feature and reshape the label to match the model's expected output.
We batch the Dataset at this stage, too, since MinDiff requires batched Datasets. Note that we tune the batch size selection the same way it is tuned for the baseline model, taking into account training speed and hardware considerations while balancing with model performance. Here we have chosen the same batch size for all three datasets but this is not a requirement, although it’s good practice to have the two MinDiff batch sizes be equivalent.
Create MinDiff Datasets
End of explanation
"""
use_pretrained_model = True #@param {type:"boolean"}
base_dir = tempfile.mkdtemp(prefix='saved_models')
min_diff_model_location = os.path.join(base_dir, 'model_export_min_diff')
if use_pretrained_model:
BASE_MIN_DIFF_PATH = tempfile.mkdtemp()
MIN_DIFF_URL = 'https://storage.googleapis.com/civil_comments_model/min_diff_model.zip'
ZIP_PATH = os.path.join(BASE_PATH, 'min_diff_model.zip')
MIN_DIFF_MODEL_PATH = os.path.join(BASE_MIN_DIFF_PATH, 'tmp/min_diff_model')
DIRPATH = '/tmp/min_diff_model'
r = requests.get(MIN_DIFF_URL, allow_redirects=True)
open(ZIP_PATH, 'wb').write(r.content)
with zipfile.ZipFile(ZIP_PATH, 'r') as zip_ref:
zip_ref.extractall(BASE_MIN_DIFF_PATH)
min_diff_model = tf.keras.models.load_model(
MIN_DIFF_MODEL_PATH, custom_objects={'KerasLayer' : hub.KerasLayer})
min_diff_model.save(min_diff_model_location, save_format='tf')
else:
min_diff_weight = 1.5 #@param {type:"number"}
# Create the dataset that will be passed to the MinDiffModel during training.
dataset = md.keras.utils.input_utils.pack_min_diff_data(
dataset_train_main, dataset_train_sensitive, dataset_train_nonsensitive)
# Create the original model.
original_model = min_diff_keras_utils.create_keras_sequential_model()
# Wrap the original model in a MinDiffModel, passing in one of the MinDiff
# losses and using the set loss_weight.
min_diff_loss = md.losses.MMDLoss()
min_diff_model = md.keras.MinDiffModel(original_model,
min_diff_loss,
min_diff_weight)
# Compile the model normally after wrapping the original model. Note that
# this means we use the baseline model's loss here.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
loss = tf.keras.losses.BinaryCrossentropy()
min_diff_model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
min_diff_model.fit(dataset, epochs=20)
min_diff_model.save_original_model(min_diff_model_location, save_format='tf')
"""
Explanation: Train and evaluate the model
To train with MinDiff, simply take the original model and wrap it in a MinDiffModel with a corresponding loss and loss_weight. We are using 1.5 as the default loss_weight, but this is a parameter that needs to be tuned for your use case, since it depends on your model and product requirements. You can experiment with changing the value to see how it impacts the model, noting that increasing it pushes the performance of the minority and majority groups closer together but may come with more pronounced tradeoffs.
Then we compile the model normally (using the regular non-MinDiff loss) and fit to train.
Train MinDiffModel
End of explanation
"""
min_diff_eval_subdir = os.path.join(base_dir, 'tfma_eval_result')
min_diff_eval_result = fi_util.get_eval_results(
min_diff_model_location,
min_diff_eval_subdir,
validate_tfrecord_file,
slice_selection='religion')
"""
Explanation: Next we evaluate the results.
End of explanation
"""
widget_view.render_fairness_indicator(min_diff_eval_result)
"""
Explanation: To ensure we evaluate a new model correctly, we need to select a threshold the same way that we would the baseline model. In a production setting, this would mean ensuring that evaluation metrics meet launch standards. In our case, we will pick the threshold that results in a similar overall FPR to the baseline model. This threshold may be different from the one you selected for the baseline model. Try selecting false positive rate with threshold 0.400. (Note that the subgroups with very low quantity examples have very wide confidence range intervals and don’t have predictable results.)
End of explanation
"""
|
leoferres/prograUDD
|
labs/17_Funciones.ipynb
|
mit
|
def nompropio(texto):
resultado = ""
for i in range(len(texto)):
if i == 0:
resultado += texto[i].upper()
elif texto[i-1] == " ":
resultado += texto[i].upper()
else:
resultado += texto[i].lower()
return resultado
nombre = "jUaN pErEz"
nompropio(nombre)
nompropio('rodrigo TRIGO')
"""
Explanation: Introduction to Functions
Exercise 1: Replicate the nompropio function from Excel.
End of explanation
"""
def sumatoria_k2(hasta):
suma = 0
while hasta > 0:
suma += hasta**2
hasta -= 1
return suma
sumatoria_k2(100)
100*101*201/6
"""
Explanation: Exercise 2. Write a function that computes: $ \sum_{k=1}^{n} k^2 $
To check whether your function is correct, you can compute $ \frac{n(n+1)(2n+1)}{6} $
End of explanation
"""
def calcula_nota(notas, ponderaciones):
nota = 0
for i in range(len(notas)):
nota += notas[i] * ponderaciones[i]
return nota
misnotas = [2.0, 7.0, 3.5, 3.8]
misponderaciones = [0.2, 0.2, 0.3, 0.3]
calcula_nota(misnotas, misponderaciones)
"""
Explanation: Exercise 3: Write a function that computes a weighted average.
End of explanation
"""
def promedio(valores):
suma = 0
for i in valores:
suma += i
return suma/len(valores)
promedio(misnotas)
def varianza(valores):
suma = 0
prom = promedio(valores)
for i in valores:
suma += (i-prom)**2
return suma/len(valores)
varianza(misnotas)
def curtosis(valores):
suma = 0
prom = promedio(valores)
var = varianza(valores)
for i in valores:
suma += (i-prom)**4
suma /= len(valores)*var**2
suma -= 3
return suma
curtosis(misnotas)
"""
Explanation: Exercise 4. Write a function that computes the kurtosis of a list of values, where $$ \text{Kurtosis} = \frac{\sum_{i=1}^{n} (X_i - \overline{X})^4}{n\sigma^4} - 3 $$ where $\overline{X}$ is the sample mean and $\sigma$ is the standard deviation.
To do this we will write three functions: promedio (the mean), varianza (the variance, which is the square of the standard deviation), and then curtosis (the kurtosis).
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/ipsl/cmip6/models/ipsl-cm6a-lr/seaice.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'ipsl-cm6a-lr', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: IPSL
Source ID: IPSL-CM6A-LR
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: CMIP5:IPSL-CM5A-LR
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
DOC.set_value("Other: sea ice [thickness, concentration, velocity, temperature, heat content], snow thickness, snow temperature")
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
DOC.set_value("Ocean grid")
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step of the sea ice model dynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution is used and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
DOC.set_value("Hibler 1979")
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
DOC.set_value("Visco-plastic")
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
DOC.set_value("Other: multi-layer on a regular vertical grid")
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
DOC.set_value("Other: parametrized (calculated in ocean)")
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Ice formed with from prescribed thickness")
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
DOC.set_value("Other: no")
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Snow-ice")
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
DOC.set_value("Other: one layer")
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
DOC.set_value("Other: fonction of temperature and sea ice + snow thickness")
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
JasonMDev/guidedprojects
|
jupyter-files/GP02.ipynb
|
mit
|
csv_list = open("../data/GP02/US_births_1994-2003_CDC_NCHS.csv").read().split("\n")
csv_list[0:10]
"""
Explanation: GP02: Explore U.S. Births
The raw data behind the story Some People Are Too Superstitious To Have A Baby On Friday The 13th, which you can read here.
We'll be working with the data set from the Centers for Disease Control and Prevention's National Center for Health Statistics.
The data set has the following structure:
* year - Year
* month - Month
* date_of_month - Day number of the month
* day_of_week - Day of week, where 1 is Monday and 7 is Sunday
* births - Number of births
1: Introduction To The Dataset
Let's explore the data and see how it looks.
End of explanation
"""
def read_csv(filename):
string_data = open(filename).read()
string_list = string_data.split("\n")[1:]
final_list = []
for row in string_list:
string_fields = row.split(",")
int_fields = []
for value in string_fields:
int_fields.append(int(value))
final_list.append(int_fields)
return final_list
cdc_list = read_csv("../data/GP02/US_births_1994-2003_CDC_NCHS.csv")
cdc_list[0:10]
"""
Explanation: 2: Converting Data Into A List Of Lists
The list needs to be converted into a more structured format so we can analyze it.
End of explanation
"""
def month_births(data):
births_per_month = {}
for row in data:
month = row[1]
births = row[4]
if month in births_per_month:
births_per_month[month] = births_per_month[month] + births
else:
births_per_month[month] = births
return births_per_month
cdc_month_births = month_births(cdc_list)
cdc_month_births
"""
Explanation: 3: Calculating Number Of Births Each Month
Now that the data is in a more usable format, we can start to analyze it.
End of explanation
"""
def dow_births(data):
births_per_dow = {}
for row in data:
dow = row[3]
births = row[4]
if dow in births_per_dow:
births_per_dow[dow] = births_per_dow[dow] + births
else:
births_per_dow[dow] = births
return births_per_dow
cdc_dow_births = dow_births(cdc_list)
cdc_dow_births
"""
Explanation: 4: Calculating Number Of Births Each Day Of Week
Let's now create a function that calculates the total number of births for each unique day of the week.
End of explanation
"""
def calc_counts(data, column):
sums_dict = {}
for row in data:
col_value = row[column]
births = row[4]
if col_value in sums_dict:
sums_dict[col_value] = sums_dict[col_value] + births
else:
sums_dict[col_value] = births
return sums_dict
cdc_year_births = calc_counts(cdc_list, 0)
cdc_month_births = calc_counts(cdc_list, 1)
cdc_dom_births = calc_counts(cdc_list, 2)
cdc_dow_births = calc_counts(cdc_list, 3)
cdc_year_births
cdc_month_births
cdc_dom_births
cdc_dow_births
"""
Explanation: 5: Creating A More General Function
It's better to create a single function that works for any column and specify the column we want as a parameter each time we call the function.
End of explanation
"""
|
QuantScientist/Deep-Learning-Boot-Camp
|
day03/0. Preamble.ipynb
|
mit
|
!python --version
"""
Explanation: Deep Learning Tutorial with Keras and Tensorflow
<div>
<img style="text-align: left" src="imgs/keras-tensorflow-logo.jpg" width="40%" />
<div>
## Get the Materials
<img src="imgs/github.jpg" />
```shell
git clone https://github.com/ypeleg/Deep-Learning-Keras-Tensorflow-PyCon-Israel-2017
```
---
# Tentative Outline
# Outline at a glance
- **Part I**: **Introduction**
- Intro to Deep Learning and ANN
- Perceptron and MLP
- naive pure-Python implementation
- fast forward, sgd, backprop
- Intro to Tensorflow
- Model + SGD with Tensorflow
- Introduction to Keras
- Overview and main features
- Keras Backend
- Overview of the `core` layers
- Multi-Layer Perceptron and Fully Connected
- Examples with `keras.models.Sequential` and `Dense`
- HandsOn: FC with keras
- **Part II**: **Supervised Learning and Convolutional Neural Nets**
- Intro: Focus on Image Classification
- Intro to ConvNets
- meaning of convolutional filters
- examples from ImageNet
- Visualising ConvNets
- Advanced CNN
- Dropout
- MaxPooling
- Batch Normalisation
- HandsOn: MNIST Dataset
- FC and MNIST
- CNN and MNIST
- Deep Convolutional Neural Networks with Keras (ref: `keras.applications`)
- VGG16
- VGG19
- ResNet50
- Transfer Learning and FineTuning
- Hyperparameters Optimisation
- **Part III**: **Unsupervised Learning**
- AutoEncoders and Embeddings
- AutoEncoders and MNIST
- word2vec and doc2vec (gensim) with `keras.datasets`
- word2vec and CNN
- **Part IV**: **Recurrent Neural Networks**
- Recurrent Neural Network in Keras
- `SimpleRNN`, `LSTM`, `GRU`
- **PartV**: **Additional Materials**:
- Quick tutorial on `theano`
- Perceptron and Adaline (pure-python) implementations
- MLP and MNIST (pure-python)
- LSTM for Sentence Generation
- Custom Layers in Keras
- Multi modal Network Topologies with Keras
---
# Requirements
This tutorial requires the following packages:
- Python version 3.5
- Python 3.4 should be fine as well
- likely Python 2.7 would be also fine, but *who knows*? :P
- `numpy` version 1.10 or later: http://www.numpy.org/
- `scipy` version 0.16 or later: http://www.scipy.org/
- `matplotlib` version 1.4 or later: http://matplotlib.org/
- `pandas` version 0.16 or later: http://pandas.pydata.org
- `scikit-learn` version 0.15 or later: http://scikit-learn.org
- `keras` version 2.0 or later: http://keras.io
- `tensorflow` version 1.0 or later: https://www.tensorflow.org
- `ipython`/`jupyter` version 4.0 or later, with notebook support
(Optional but recommended):
- `pyyaml`
- `hdf5` and `h5py` (required if you use model saving/loading functions in keras)
- **NVIDIA cuDNN** if you have NVIDIA GPUs on your machines.
[https://developer.nvidia.com/rdp/cudnn-download]()
The easiest way to get (most of) these is to use an all-in-one installer such as [Anaconda](http://www.continuum.io/downloads) from Continuum. These are available for multiple architectures.
---
### Python Version
I'm currently running this tutorial with **Python 3** on **Anaconda**
End of explanation
"""
!cat ~/.keras/keras.json
"""
Explanation: Configure Keras with tensorflow
1) Create the keras.json (if it does not exist):
shell
touch $HOME/.keras/keras.json
2) Copy the following content into the file:
{
"epsilon": 1e-07,
"backend": "tensorflow",
"floatx": "float32",
"image_data_format": "channels_last"
}
End of explanation
"""
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
import keras
"""
Explanation: Test if everything is up&running
1. Check import
End of explanation
"""
import numpy
print('numpy:', numpy.__version__)
import scipy
print('scipy:', scipy.__version__)
import matplotlib
print('matplotlib:', matplotlib.__version__)
import IPython
print('iPython:', IPython.__version__)
import sklearn
print('scikit-learn:', sklearn.__version__)
import keras
print('keras: ', keras.__version__)
# optional
import theano
print('Theano: ', theano.__version__)
import tensorflow as tf
print('Tensorflow: ', tf.__version__)
"""
Explanation: 2. Check installeded Versions
End of explanation
"""
|
xtr33me/deep-learning
|
embeddings/Skip-Gram_word2vec.ipynb
|
mit
|
import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
"""
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
"""
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
"""
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
"""
Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
"""
import collections, random
wordCounts = collections.Counter(int_words)
totWords = len(int_words)
## Your code here
def subsample(training_set, threshold):
retarr = []
for word in training_set:
occurence = wordCounts[word]/totWords
if(1-np.sqrt(threshold/occurence) < random.random()):
retarr.append(word)
return retarr
#need to look at best way to figure out below threshold
train_words = subsample(int_words, 1e-5)# The final subsampled word list
print(len(train_words))
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge than about deep learning specifically, but being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
"""
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    # Pick a random reduced window size R in [1, window_size], then grab the R
    # words before and the R words after the target word. The target itself is
    # excluded, since a word should not be its own context.
    R = random.randint(1, window_size)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = words[start:idx] + words[idx+1:stop+1]
    return target_words
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
"""
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
"""
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
"""
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32,[None], name='inputs')
labels = tf.placeholder(tf.int32,[None, None], name='labels')
"""
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
"""
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))# create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs)# use tf.nn.embedding_lookup to get the hidden layer output
"""
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix:
You don't actually need to do the matrix multiplication, you just need to select the row in the embedding matrix that corresponds to the input word. Then, the embedding matrix becomes a lookup table, you're looking up a vector the size of the hidden layer that represents the input word.
<img src="assets/word2vec_weight_matrix_lookup_table.png" width=500>
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. This TensorFlow tutorial will help if you get stuck.
End of explanation
"""
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed,
n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
"""
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
"""
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
"""
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
"""
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
"""
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
"""
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
"""
Explanation: Restore the trained network if you need to:
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
macks22/gensim
|
docs/notebooks/topic_coherence-movies.ipynb
|
lgpl-2.1
|
from __future__ import print_function
import re
import os
from scipy.stats import pearsonr
from datetime import datetime
from gensim.models import CoherenceModel
from gensim.corpora.dictionary import Dictionary
"""
Explanation: Benchmark testing of coherence pipeline on Movies dataset
How to find how well coherence measure matches your manual annotators
Introduction: For the validation of any model adapted from a paper, it is of utmost importance that the results of benchmark testing on the datasets listed in the paper match between the actual implementation (palmetto) and gensim. This coherence pipeline has been implemented from the work done by Roeder et al. The paper can be found here.
Approach:
1. In this notebook, we'll use the Movies dataset mentioned in the paper. This dataset along with the topics on which the coherence is calculated and the gold (human) ratings on these topics can be found here.
2. We will then calculate the coherence on these topics using the pipeline implemented in gensim.
3. Once we have all our coherence values on these topics we will calculate the correlation with the human ratings using pearson's r.
4. We will compare this final correlation value with the values listed in the paper and see if the pipeline is working as expected.
End of explanation
"""
base_dir = os.path.join(os.path.expanduser('~'), "workshop/nlp/data/")
data_dir = os.path.join(base_dir, 'wiki-movie-subset')
if not os.path.exists(data_dir):
raise ValueError("SKIP: Please download the movie corpus.")
ref_dir = os.path.join(base_dir, 'reference')
topics_path = os.path.join(ref_dir, 'topicsMovie.txt')
human_scores_path = os.path.join(ref_dir, 'goldMovie.txt')
%%time
texts = []
file_num = 0
preprocessed = 0
listing = os.listdir(data_dir)
for fname in listing:
file_num += 1
if 'disambiguation' in fname:
continue # discard disambiguation and redirect pages
elif fname.startswith('File_'):
continue # discard images, gifs, etc.
elif fname.startswith('Category_'):
continue # discard category articles
# Not sure how to identify portal and redirect pages,
# as well as pages about a single year.
# As a result, this preprocessing differs from the paper.
with open(os.path.join(data_dir, fname)) as f:
for line in f:
# lower case all words
lowered = line.lower()
#remove punctuation and split into seperate words
words = re.findall(r'\w+', lowered, flags = re.UNICODE | re.LOCALE)
texts.append(words)
preprocessed += 1
if file_num % 10000 == 0:
print('PROGRESS: %d/%d, preprocessed %d, discarded %d' % (
file_num, len(listing), preprocessed, (file_num - preprocessed)))
%%time
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
"""
Explanation: Download the dataset (movie.zip) and gold standard data (topicsMovie.txt and goldMovie.txt) from the link and plug in the locations below.
End of explanation
"""
print(len(corpus))
print(dictionary)
topics = [] # list of 100 topics
with open(topics_path) as f:
topics = [line.split() for line in f if line]
len(topics)
human_scores = []
with open(human_scores_path) as f:
for line in f:
human_scores.append(float(line.strip()))
len(human_scores)
"""
Explanation: Cross validate the numbers
According to the paper the number of documents should be 108,952 with a vocabulary of 1,625,124. The difference is because of a difference in preprocessing. However the results obtained are still very similar.
End of explanation
"""
# We first need to filter out any topics that contain terms not in our dictionary
# These may occur as a result of preprocessing steps differing from those used to
# produce the reference topics. In this case, this only occurs in one topic.
invalid_topic_indices = set(
i for i, topic in enumerate(topics)
if any(t not in dictionary.token2id for t in topic)
)
print("Topics with out-of-vocab terms: %s" % ', '.join(map(str, invalid_topic_indices)))
usable_topics = [topic for i, topic in enumerate(topics) if i not in invalid_topic_indices]
"""
Explanation: Deal with any vocabulary mismatch.
End of explanation
"""
%%time
cm = CoherenceModel(topics=usable_topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')
u_mass = cm.get_coherence_per_topic()
print("Calculated u_mass coherence for %d topics" % len(u_mass))
"""
Explanation: Start off with u_mass coherence measure.
End of explanation
"""
%%time
cm = CoherenceModel(topics=usable_topics, texts=texts, dictionary=dictionary, coherence='c_v')
c_v = cm.get_coherence_per_topic()
print("Calculated c_v coherence for %d topics" % len(c_v))
"""
Explanation: Start c_v coherence measure
This is expected to take much more time since c_v uses a sliding window to perform probability estimation and uses the cosine similarity indirect confirmation measure.
End of explanation
"""
%%time
cm.coherence = 'c_uci'
c_uci = cm.get_coherence_per_topic()
print("Calculated c_uci coherence for %d topics" % len(c_uci))
%%time
cm.coherence = 'c_npmi'
c_npmi = cm.get_coherence_per_topic()
print("Calculated c_npmi coherence for %d topics" % len(c_npmi))
final_scores = [
score for i, score in enumerate(human_scores)
if i not in invalid_topic_indices
]
len(final_scores)
"""
Explanation: Start c_uci and c_npmi coherence measures
c_v and c_uci and c_npmi all use the boolean sliding window approach of estimating probabilities. Since the CoherenceModel caches the accumulated statistics, calculation of c_uci and c_npmi are practically free after calculating c_v coherence. These two methods are simpler and were shown to correlate less with human judgements than c_v but more so than u_mass.
End of explanation
"""
for our_scores in (u_mass, c_v, c_uci, c_npmi):
print(pearsonr(our_scores, final_scores)[0])
"""
Explanation: The values in the paper were:
u_mass correlation : 0.093
c_v correlation : 0.548
c_uci correlation : 0.473
c_npmi correlation : 0.438
Our values are also very similar to these values which is good. This validates the correctness of our pipeline, as we can reasonably attribute the differences to differences in preprocessing.
End of explanation
"""
|
flaviostutz/datascience-snippets
|
kaggle-lung-cancer-approach2/.ipynb_checkpoints/LungCancerDetection-checkpoint.ipynb
|
mit
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#import seaborn as sns
import os
import glob
import SimpleITK as sitk
from PIL import Image
from scipy.misc import imread
%matplotlib inline
from IPython.display import clear_output
pd.options.mode.chained_assignment = None
"""
Explanation: I am using the LUNA16 competition dataset
Lung cancer is the leading cause of cancer-related death worldwide. Screening high risk individuals for lung cancer with low-dose CT scans is now being implemented in the United States and other countries are expected to follow soon. In CT lung cancer screening, many millions of CT scans will have to be analyzed, which is an enormous burden for radiologists. Therefore there is a lot of interest in developing computer algorithms to optimize screening.
The upcoming high-profile Coding4Cancer challenge invites coders to create the best computer algorithm that can identify a person as having lung cancer based on one or multiple low-dose CT images.
To be able to solve the Coding4Cancer challenge, and detect lung cancer in an early stage, pulmonary nodules, the early manifestation of lung cancers, have to be located. Many Computer-aided detection (CAD) systems have already been proposed for this task. The LUNA16 challenge will focus on a large-scale evaluation of automatic nodule detection algorithms on the publicly available LIDC/IDRI dataset.
Things to do!
Monday
.mhd files (extract 3D image data) - done
Tuesday
Extract 2D image slice based on the coordinates
Preprocess data
Day after + Thursday
Train a CNN
Validate using their evaluation
Uncertainty quantification
Import necessary libraries
End of explanation
"""
annotations = pd.read_csv('../../input/luna16/annotations.csv')
candidates = pd.read_csv('../../input/luna16/candidates.csv')
annotations.head()
candidates['class'].sum()
len(annotations)
"""
Explanation: Let us import annotations
End of explanation
"""
candidates.info()
print(len(candidates[candidates['class'] == 1]))
print(len(candidates[candidates['class'] == 0]))
import multiprocessing
num_cores = multiprocessing.cpu_count()
print num_cores
"""
Explanation: Candidates have two classes, one with nodules, one without
End of explanation
"""
class CTScan(object):
def __init__(self, filename = None, coords = None):
self.filename = filename
self.coords = coords
self.ds = None
self.image = None
def reset_coords(self, coords):
self.coords = coords
def read_mhd_image(self):
path = glob.glob('../data/raw/*/'+ self.filename + '.mhd')
self.ds = sitk.ReadImage(path[0])
self.image = sitk.GetArrayFromImage(self.ds)
def get_resolution(self):
return self.ds.GetSpacing()
def get_origin(self):
return self.ds.GetOrigin()
def get_ds(self):
return self.ds
def get_voxel_coords(self):
origin = self.get_origin()
resolution = self.get_resolution()
voxel_coords = [np.absolute(self.coords[j]-origin[j])/resolution[j] \
for j in range(len(self.coords))]
return tuple(voxel_coords)
def get_image(self):
return self.image
def get_subimage(self, width):
self.read_mhd_image()
x, y, z = self.get_voxel_coords()
subImage = self.image[z, y-width/2:y+width/2, x-width/2:x+width/2]
return subImage
def normalizePlanes(self, npzarray):
maxHU = 400.
minHU = -1000.
npzarray = (npzarray - minHU) / (maxHU - minHU)
npzarray[npzarray>1] = 1.
npzarray[npzarray<0] = 0.
return npzarray
def save_image(self, filename, width):
image = self.get_subimage(width)
image = self.normalizePlanes(image)
Image.fromarray(image*255).convert('L').save(filename)
positives = candidates[candidates['class']==1].index
negatives = candidates[candidates['class']==0].index
"""
Explanation: Classes are heavily unbalanced; hardly 0.2% of the candidates are positive.
The best way to move forward will be to undersample the negative class and then augment the positive class heavily to balance out the samples.
Plan of attack:
Get an initial subsample of the negative class and keep all of the positives such that we have an 80/20 class distribution
Create a training set such that we augment the minority class heavily by rotating to get a 50/50 class distribution
End of explanation
"""
scan = CTScan(np.asarray(candidates.iloc[negatives[600]])[0], \
np.asarray(candidates.iloc[negatives[600]])[1:-1])
scan.read_mhd_image()
x, y, z = scan.get_voxel_coords()
image = scan.get_image()
dx, dy, dz = scan.get_resolution()
x0, y0, z0 = scan.get_origin()
"""
Explanation: Check if my class works
End of explanation
"""
filename = '1.3.6.1.4.1.14519.5.2.1.6279.6001.100398138793540579077826395208'
coords = (70.19, -140.93, 877.68)#[877.68, -140.93, 70.19]
scan = CTScan(filename, coords)
scan.read_mhd_image()
x, y, z = scan.get_voxel_coords()
image = scan.get_image()
dx, dy, dz = scan.get_resolution()
x0, y0, z0 = scan.get_origin()
"""
Explanation: Try it on a test set you know works
End of explanation
"""
positives
np.random.seed(42)
negIndexes = np.random.choice(negatives, len(positives)*5, replace = False)
candidatesDf = candidates.iloc[list(positives)+list(negIndexes)]
"""
Explanation: Ok the class to get image data works
The next thing to do is to undersample the negative class drastically. Since only 1351 of the 551065 candidates in the dataset are positives and the rest are negatives, I plan to make the dataset less skewed, roughly a 70%/30% split.
End of explanation
"""
# train_test_split now lives in sklearn.model_selection (sklearn.cross_validation was removed)
from sklearn.model_selection import train_test_split
X = candidatesDf.iloc[:,:-1]
y = candidatesDf.iloc[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 42)
"""
Explanation: Now split it into test train set
End of explanation
"""
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size = 0.20, random_state = 42)
len(X_train)
X_train.to_pickle('traindata')
X_test.to_pickle('testdata')
X_val.to_pickle('valdata')
def normalizePlanes(npzarray):
maxHU = 400.
minHU = -1000.
npzarray = (npzarray - minHU) / (maxHU - minHU)
npzarray[npzarray>1] = 1.
npzarray[npzarray<0] = 0.
return npzarray
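# Quick check of the HU windowing above (illustrative): air (-1000 HU) maps to 0,
# water (0 HU) maps to ~0.714, and anything above 400 HU clips to 1.
# Assumes numpy is available as np from the setup cell.
print(normalizePlanes(np.array([-1000., 0., 400., 1000.])))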
"""
Explanation: Create a validation dataset
End of explanation
"""
print('number of positive cases are ' + str(y_train.sum()))
print('total set size is ' + str(len(y_train)))
print('percentage of positive cases are ' + str(y_train.sum()*1.0/len(y_train)))
"""
Explanation: Focus on training data
End of explanation
"""
tempDf = X_train[y_train == 1]
tempDf = tempDf.set_index(X_train[y_train == 1].index + 1000000)
X_train_new = X_train.append(tempDf)
tempDf = tempDf.set_index(X_train[y_train == 1].index + 2000000)
X_train_new = X_train_new.append(tempDf)
ytemp = y_train.reindex(X_train[y_train == 1].index + 1000000)
ytemp.loc[:] = 1
y_train_new = y_train.append(ytemp)
ytemp = y_train.reindex(X_train[y_train == 1].index + 2000000)
ytemp.loc[:] = 1
y_train_new = y_train_new.append(ytemp)
print(len(X_train_new), len(y_train_new))
X_train_new.index
"""
Explanation: There are 845 positive cases out of 5187 cases in the training set. We will need to augment the positive dataset heavily.
Add new keys to X_train and y_train for augmented data
End of explanation
"""
from scipy.misc import imresize
from PIL import ImageEnhance
class PreProcessing(object):
def __init__(self, image = None):
self.image = image
def subtract_mean(self):
self.image = (self.image/255.0 - 0.25)*255
return self.image
def downsample_data(self):
self.image = imresize(self.image, size = (40, 40), interp='bilinear', mode='L')
return self.image
def enhance_contrast(self):
self.image = ImageEnhance.Contrast(self.image)
return self.image
dirName = '../src/data/train/'
plt.figure(figsize = (10,10))
inp = imread(dirName + 'image_'+ str(30517) + '.jpg')
plt.subplot(221)
plt.imshow(inp)
plt.grid(False)
Pp = PreProcessing(inp)
inp2 = Pp.subtract_mean()
plt.subplot(222)
plt.imshow(inp2)
plt.grid(False)
#inp4 = Pp.enhance_contrast()
#plt.subplot(224)
#plt.imshow(inp4)
#plt.grid(False)
inp3 = Pp.downsample_data()
plt.subplot(223)
plt.imshow(inp3)
plt.grid(False)
#inp4 = Pp.enhance_contrast()
#plt.subplot(224)
#plt.imshow(inp4)
#plt.grid(False)
dirName
"""
Explanation: Preprocessing
End of explanation
"""
import tflearn
"""
Explanation: Convnet stuff
I am planning to use tflearn, which is a wrapper around TensorFlow.
End of explanation
"""
y_train_new.values.astype(int)
train_filenames =\
X_train_new.index.to_series().apply(lambda x:\
'../src/data/train/image_'+str(x)+'.jpg')
train_filenames.values.astype(str)
dataset_file = 'traindatalabels.txt'
train_filenames =\
X_train_new.index.to_series().apply(lambda x:\
'../src/data/train/image_'+str(x)+'.jpg')
filenames = train_filenames.values.astype(str)
labels = y_train_new.values.astype(int)
traindata = np.zeros(filenames.size,\
dtype=[('var1', 'S36'), ('var2', int)])
traindata['var1'] = filenames
traindata['var2'] = labels
np.savetxt(dataset_file, traindata, fmt="%10s %d")
# Build a HDF5 dataset (only required once)
from tflearn.data_utils import build_hdf5_image_dataset
build_hdf5_image_dataset(dataset_file, image_shape=(50, 50), mode='file', output_path='traindataset.h5', categorical_labels=True, normalize=True)
# Load HDF5 dataset
import h5py
h5f = h5py.File('traindataset.h5', 'r')
X_train_images = h5f['X']
Y_train_labels = h5f['Y']
h5f2 = h5py.File('../src/data/valdataset.h5', 'r')
X_val_images = h5f2['X']
Y_val_labels = h5f2['Y']
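# Optional sanity check (illustrative): h5py datasets expose .shape without
# loading the full arrays into memory.
print(X_train_images.shape, Y_train_labels.shape)
print(X_val_images.shape, Y_val_labels.shape)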
"""
Explanation: Loading image data on the fly is inefficient, so I am building HDF5 datasets up front and reading from those instead.
End of explanation
"""
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation
# Make sure the data is normalized
img_prep = ImagePreprocessing()
img_prep.add_featurewise_zero_center()
img_prep.add_featurewise_stdnorm()
# Create extra synthetic training data by flipping, rotating and blurring the
# images on our data set.
img_aug = ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_rotation(max_angle=25.)
img_aug.add_random_blur(sigma_max=3.)
# Input is a 50x50 image with 1 color channels (grayscale)
network = input_data(shape=[None, 50, 50, 1],
data_preprocessing=img_prep,
data_augmentation=img_aug)
# Step 1: Convolution
network = conv_2d(network, 50, 3, activation='relu')
# Step 2: Max pooling
network = max_pool_2d(network, 2)
# Step 3: Convolution again
network = conv_2d(network, 64, 3, activation='relu')
# Step 4: Convolution yet again
network = conv_2d(network, 64, 3, activation='relu')
# Step 5: Max pooling again
network = max_pool_2d(network, 2)
# Step 6: Fully-connected 512 node neural network
network = fully_connected(network, 512, activation='relu')
# Step 7: Dropout - throw away some data randomly during training to prevent over-fitting
network = dropout(network, 0.5)
# Step 8: Fully-connected neural network with two outputs (0=isn't a nodule, 1=is a nodule) to make the final prediction
network = fully_connected(network, 2, activation='softmax')
# Tell tflearn how we want to train the network
network = regression(network, optimizer='adam',
loss='categorical_crossentropy',
learning_rate=0.001)
# Wrap the network in a model object
model = tflearn.DNN(network, tensorboard_verbose=0, checkpoint_path='nodule-classifier.tfl.ckpt')
# Train it! We'll do 100 training passes and monitor it as it goes.
model.fit(X_train_images, Y_train_labels, n_epoch=100, shuffle=True, validation_set=(X_val_images, Y_val_labels),
show_metric=True, batch_size=96,
snapshot_epoch=True,
run_id='nodule-classifier')
# Save model when training is complete to a file
model.save("nodule-classifier.tfl")
print("Network trained and saved as nodule-classifier.tfl!")
h5f2 = h5py.File('../src/data/testdataset.h5', 'r')
X_test_images = h5f2['X']
Y_test_labels = h5f2['Y']
model.predict(X_test_images)
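# Illustrative follow-up (not in the original notebook): turn the softmax outputs
# into hard labels and compare them with the one-hot test labels.
# Assumes numpy is available as np from the setup cell.
pred_labels = np.argmax(np.asarray(model.predict(X_test_images)), axis=1)
true_labels = np.argmax(Y_test_labels[:], axis=1)
print("test accuracy: {:.3f}".format(np.mean(pred_labels == true_labels)))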
"""
Explanation: loading tflearn packages
End of explanation
"""
|
tcmoore3/mbuild
|
docs/tutorials/tutorial_monolayer.ipynb
|
mit
|
import mbuild as mb
from mbuild.examples import Alkane
from mbuild.lib.moieties import Silane
class AlkylSilane(mb.Compound):
"""A silane functionalized alkane chain with one Port. """
def __init__(self, chain_length):
super(AlkylSilane, self).__init__()
alkane = Alkane(chain_length, cap_end=False)
self.add(alkane, 'alkane')
silane = Silane()
self.add(silane, 'silane')
mb.force_overlap(self['alkane'], self['alkane']['down'], self['silane']['up'])
# Hoist silane port to AlkylSilane level.
self.add(silane['down'], 'down', containment=False)
AlkylSilane(5).visualize()
"""
Explanation: Monolayer: Complex hierarchies, patterns, tiling and writing to files
Note: mBuild expects all distance units to be in nanometers.
In this example, we'll cover assembling more complex hierarchies of components using patterns, tiling and how to output systems to files. To illustrate these concepts, let's build an alkane monolayer on a crystalline substrate.
First, let's build our monomers and functionalize them with a silane group which we can then attach to the substrate. The Alkane example uses the polymer tool to combine CH2 and CH3 repeat units. You also have the option to cap the front and back of the chain or to leave a CH2 group with a dangling port. The Silane compound is a Si(OH)<sub>2</sub> group with two ports facing out from the central Si. Lastly, we combine the alkane with the silane and add a label to AlkylSilane which points to silane['down']. This allows us to reference it later using AlkylSilane['down'] rather than AlkylSilane['silane']['down'].
Note: In Compounds with multiple Ports, by convention, we try to label every Port successively as 'up', 'down', 'left', 'right', 'front', 'back' which should roughly correspond to their relative orientations. This is a bit tricky to enforce because the system is so flexible so use your best judgement and try to be consistent! The more components we collect in our library with the same labeling conventions, the easier it becomes to build ever more complex structures.
End of explanation
"""
import mbuild as mb
from mbuild.lib.surfaces import Betacristobalite
surface = Betacristobalite()
tiled_surface = mb.TiledCompound(surface, n_tiles=(2, 1, 1))
"""
Explanation: Now let's create a substrate to which we can later attach our monomers:
End of explanation
"""
from mbuild.lib.atoms import H
alkylsilane = AlkylSilane(chain_length=10)
hydrogen = H()
"""
Explanation: Here we've imported a beta-cristobalite surface from our component library. The TiledCompound tool allows you to replicate any Compound in the x-, y-,
and z-directions any number of times - 2, 1 and 1 in our case.
Next, let's create our monomer and a hydrogen atom that we'll place on unoccupied surface sites:
End of explanation
"""
pattern = mb.Grid2DPattern(8, 8) # Evenly spaced, 2D grid of points.
# Attach chains to specified binding sites. Other sites get a hydrogen.
chains, hydrogens = pattern.apply_to_compound(host=tiled_surface, guest=alkylsilane, backfill=hydrogen)
"""
Explanation: Then we need to tell mBuild how to arrange the chains on the surface. This is accomplished with the "pattern" tools. Every pattern is just a collection of points. There are all kinds of patterns like spherical, 2D, regular, irregular etc. When you use the apply_pattern command, you effectively superimpose the pattern onto the host compound, mBuild figures out what the closest ports are to the pattern points and then attaches copies of the guest onto the binding sites identified by the pattern:
End of explanation
"""
monolayer = mb.Compound([tiled_surface, chains, hydrogens])
monolayer.visualize() # Warning: may be slow in IPython notebooks
# Save as .mol2 file
monolayer.save('monolayer.mol2', overwrite=True)
"""
Explanation: Also note the backfill optional argument which allows you to place a different compound on any unused ports. In this case we want to backfill with hydrogen atoms on every port without a chain.
And that's it! Check out examples.alkane_monolayer for the fully wrapped class.
End of explanation
"""
|
Ccaccia73/semimonocoque
|
07_CorrectiveSolutions-7nodes-non-symmetric.ipynb
|
mit
|
from pint import UnitRegistry
import sympy
import networkx as nx
#import numpy as np
import matplotlib.pyplot as plt
#import sys
%matplotlib inline
from IPython.display import display
"""
Explanation: Semi-Monocoque Theory: corrective solutions
End of explanation
"""
from Section import Section
"""
Explanation: Import Section class, which contains all calculations
End of explanation
"""
ureg = UnitRegistry()
sympy.init_printing()
"""
Explanation: Initialization of the sympy symbolic tool and pint for dimensional analysis (not fully implemented right now, as pint is not directly compatible with sympy)
End of explanation
"""
A, A0, t, t0, a, b, h, L, E, G = sympy.symbols('A A_0 t t_0 a b h L E G', positive=True)
"""
Explanation: Define sympy parameters used for geometric description of sections
End of explanation
"""
values = [(A, 150 * ureg.millimeter**2),(A0, 250 * ureg.millimeter**2),(a, 80 * ureg.millimeter), \
(b, 20 * ureg.millimeter),(h, 35 * ureg.millimeter),(L, 2000 * ureg.millimeter), \
(t, 0.8 *ureg.millimeter),(E, 72e3 * ureg.MPa), (G, 27e3 * ureg.MPa)]
datav = [(v[0],v[1].magnitude) for v in values]
"""
Explanation: We also define numerical values for each symbol in order to plot scaled section and perform calculations
End of explanation
"""
stringers = {1:[(3*a,h),A],
2:[(2*a,h),A],
3:[(a,h),A],
4:[(sympy.Integer(0),h),A],
5:[(sympy.Integer(0),sympy.Integer(0)),A],
6:[(sympy.Rational(7,4)*a,sympy.Integer(0)),A],
7:[(3*a,sympy.Integer(0)),A]}
panels = {(1,2):t,
(2,3):t,
(3,4):t,
(4,5):t,
(5,6):t,
(6,7):t,
(7,1):t}
"""
Explanation: Third example: Simple rectangular section with 7 nodes (non-symmetric)
Define graph describing the section:
1) stringers are nodes with parameters:
- x coordinate
- y coordinate
- Area
2) panels are oriented edges with parameters:
- thickness
- lenght which is automatically calculated
End of explanation
"""
S1 = Section(stringers, panels)
S1.cycles
"""
Explanation: Define section and perform first calculations
End of explanation
"""
start_pos={ii: [float(S1.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }
plt.figure(figsize=(12,8),dpi=300)
nx.draw(S1.g,with_labels=True, arrows= True, pos=start_pos)
plt.arrow(0,0,20,0)
plt.arrow(0,0,0,20)
#plt.text(0,0, 'CG', fontsize=24)
plt.axis('equal')
plt.title("Section in starting reference Frame",fontsize=16);
"""
Explanation: Plot of S1 section in original reference frame
Define a dictionary of coordinates used by Networkx to plot section as a Directed graph.
Note that arrows are actually just thicker stubs
End of explanation
"""
positions={ii: [float(S1.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }
x_ct, y_ct = S1.ct.subs(datav)
plt.figure(figsize=(12,8),dpi=300)
nx.draw(S1.g,with_labels=True, pos=positions)
plt.plot([0],[0],'o',ms=12,label='CG')
plt.plot([x_ct],[y_ct],'^',ms=12, label='SC')
#plt.text(0,0, 'CG', fontsize=24)
#plt.text(x_ct,y_ct, 'SC', fontsize=24)
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
plt.title("Section in pricipal reference Frame",fontsize=16);
"""
Explanation: Plot of S1 section in inertial reference Frame
The section is plotted with respect to the center of gravity and rotated (if necessary) so that x and y are principal axes.
Center of Gravity and Shear Center are drawn
End of explanation
"""
sympy.simplify(S1.Ixx), sympy.simplify(S1.Iyy), sympy.simplify(S1.Ixy), sympy.simplify(S1.θ)
S1.symmetry
S1.compute_L()
S1.L
S1.compute_H()
S1.H.subs(datav)
S1.compute_KM(A,h,t)
S1.Ktilde
S1.Mtilde.subs(datav)
sol_data = (S1.Ktilde.inv()*(S1.Mtilde.subs(datav))).eigenvects()
sol_data
β2 = [sol[0] for sol in sol_data]
β2
X = []
for sol in sol_data:
for i in range(len(sol[2])):
X.append(sympy.N(sol[2][i]/sol[2][i].norm()))
X
λ = [sympy.N(sympy.sqrt(E*A*h/(G*t)*βi).subs(datav)) for βi in β2]
λ
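# Quick illustrative readout (assumes the `values` list defined above): compare
# each exponential decay length lambda_i with the overall length L to see how
# far each corrective solution penetrates along the structure.
L_num = [v[1].magnitude for v in values if v[0] == L][0]
for i, li in enumerate(λ):
    print("lambda_{} = {:.1f} mm ({:.2f} L)".format(i + 1, float(li), float(li)/L_num))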
"""
Explanation: Expression of inertial properties in principal reference frame
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
quests/serverlessml/05_feateng/labs/feateng_bqml.ipynb
|
apache-2.0
|
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
"""
Explanation: BigQuery ML models with feature engineering
In this notebook, we will use BigQuery ML to build more sophisticated models for taxifare prediction.
This is a continuation of our first models we created earlier with BigQuery ML but now with more feature engineering.
Learning Objectives
Create and train a new Linear Regression model with BigQuery ML
Evaluate and predict with the linear model
Apply transformations using SQL to prune the taxi cab dataset
Create a feature cross for day-hour combination using SQL
Examine ways to reduce model overfitting with regularization
Create and train a DNN model with BigQuery ML
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
"""
%%bash
## Create a BigQuery dataset for serverlessml if it doesn't exist
datasetexists=$(bq ls -d | grep -w serverlessml)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: serverlessml"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:serverlessml
echo "\nHere are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${PROJECT}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${PROJECT}
echo "\nHere are your current buckets:"
gsutil ls
fi
"""
Explanation: Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called serverlessml if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE serverlessml.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers
FROM `nyc-tlc.yellow.trips`
# The full dataset has 1+ Billion rows, let's take only 1 out of 1,000 (or 1 Million total)
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
# placeholder for additional filters as part of TODO 3 later
%%bigquery
# Tip: You can CREATE MODEL IF NOT EXISTS as well
CREATE OR REPLACE MODEL serverlessml.model4_feateng
TRANSFORM(
* EXCEPT(pickup_datetime)
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek
, CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday
)
# TODO 1: Specify the BigQuery ML options for a linear model to predict fare amount
# OPTIONS()
AS
SELECT * FROM serverlessml.feateng_training_data
"""
Explanation: Model 4: With some transformations
BigQuery ML automatically scales the inputs, so we don't need to do scaling ourselves, but human insight can still help.
Since we'll repeat this quite a bit, let's make a dataset with 1 million rows.
End of explanation
"""
%%bigquery
SELECT *, SQRT(loss) AS rmse FROM ML.TRAINING_INFO(MODEL serverlessml.model4_feateng)
%%bigquery
# TODO 2: Evaluate and predict with the linear model
# Write a SQL query to take the SQRT() of the Mean Squared Error as your loss metric for evaluation
# Hint: Use ML.EVALUATE on your newly trained model
"""
Explanation: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. We can look at eval statistics on that held-out data:
End of explanation
"""
%%bigquery
SELECT * FROM ML.PREDICT(MODEL serverlessml.model4_feateng, (
SELECT
-73.982683 AS pickuplon,
40.742104 AS pickuplat,
-73.983766 AS dropofflon,
40.755174 AS dropofflat,
3.0 AS passengers,
TIMESTAMP('2019-06-03 04:21:29.769443 UTC') AS pickup_datetime
))
"""
Explanation: What is the RMSE? Could we do any better?
Try re-creating the above feateng_training_data table with additional filters and re-running training and evaluation.
TODO 3: Apply transformations using SQL to prune the taxi cab dataset
Now let's reduce the noise in our training dataset by only training on trips with a non-zero distance and fares above $2.50. Additionally, we will apply some geo location boundaries for New York City. Copy the below into your previous feateng_training_data table creation and re-train your model.
sql
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
Yippee! We're now below our target of 6 dollars in RMSE.
We are now beating our goals, and with just a linear model.
Making predictions with BigQuery ML
This is how the prediction query would look that we saw earlier heading 1.3 miles uptown in New York City.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL serverlessml.model5_featcross
TRANSFORM(
* EXCEPT(pickup_datetime)
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
# TODO 4: Create a feature cross for day-hour combination using SQL
, ML.( # <--- Enter the correct function for a BigQuery ML feature cross ahead of the (
STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday)
) AS day_hr
)
OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg')
AS
SELECT * FROM serverlessml.feateng_training_data
%%bigquery
SELECT *, SQRT(loss) AS rmse FROM ML.TRAINING_INFO(MODEL serverlessml.model5_featcross)
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model5_featcross)
"""
Explanation: Improving the model with feature crosses
Let's do a feature cross of the day-hour combination instead of using them raw
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL serverlessml.model6_featcross_l2
TRANSFORM(
* EXCEPT(pickup_datetime)
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday)) AS day_hr
)
# TODO 5: Set the model options for a linear regression model to predict fare amount with 0.1 L2 Regularization
# Tip: Refer to the documentation for syntax:
# https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create
OPTIONS()
AS
SELECT * FROM serverlessml.feateng_training_data
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model6_featcross_l2)
"""
Explanation: Sometimes (not the case above), the training RMSE is quite reasonable, but the evaluation RMSE is terrible. This is an indication of overfitting.
When we do feature crosses, we run into the risk of overfitting (for example, when a particular day-hour combo doesn't have enough taxirides).
Reducing overfitting
Let's add L2 regularization to help reduce overfitting. Let's set it to 0.1
End of explanation
"""
%%bigquery
SELECT * FROM ML.PREDICT(MODEL serverlessml.model6_featcross_l2, (
SELECT
-73.982683 AS pickuplon,
40.742104 AS pickuplat,
-73.983766 AS dropofflon,
40.755174 AS dropofflat,
3.0 AS passengers,
TIMESTAMP('2019-06-03 04:21:29.769443 UTC') AS pickup_datetime
))
"""
Explanation: These sorts of experiments would have taken days to do otherwise. We did it in minutes, thanks to BigQuery ML! The advantage of doing all this in the TRANSFORM is that the client code doing the PREDICT doesn't change. Our model improvement is transparent to client code.
End of explanation
"""
%%bigquery
-- BQML chooses the wrong gradient descent strategy here. It will get fixed in (b/141429990)
-- But for now, as a workaround, explicitly specify optimize_strategy='BATCH_GRADIENT_DESCENT'
CREATE OR REPLACE MODEL serverlessml.model7_geo
TRANSFORM(
fare_amount
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek,
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday), 2) AS day_hr
, CONCAT(
ML.BUCKETIZE(pickuplon, GENERATE_ARRAY(-78, -70, 0.01)),
ML.BUCKETIZE(pickuplat, GENERATE_ARRAY(37, 45, 0.01)),
ML.BUCKETIZE(dropofflon, GENERATE_ARRAY(-78, -70, 0.01)),
ML.BUCKETIZE(dropofflat, GENERATE_ARRAY(37, 45, 0.01))
) AS pickup_and_dropoff
)
OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg', l2_reg=0.1, optimize_strategy='BATCH_GRADIENT_DESCENT')
AS
SELECT * FROM serverlessml.feateng_training_data
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model7_geo)
"""
Explanation: Let's try feature crossing the locations too
Because the lat and lon by themselves don't have meaning, but only in conjunction, it may be useful to treat the fields as a pair instead of just using them as numeric values. However, lat and lon are continuous numbers, so we have to discretize them first. That's what ML.BUCKETIZE does.
Here are some of the preprocessing functions in BigQuery ML:
* ML.FEATURE_CROSS(STRUCT(features)) does a feature cross of all the combinations
* ML.POLYNOMIAL_EXPAND(STRUCT(features), degree) creates x, x^2, x^3, etc.
* ML.BUCKETIZE(f, split_points) where split_points is an array
End of explanation
"""
%%bigquery
-- This is alpha and may not work for you.
CREATE OR REPLACE MODEL serverlessml.model8_dnn
TRANSFORM(
fare_amount
, ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean
, CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING),
CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS day_hr
, CONCAT(
ML.BUCKETIZE(pickuplon, GENERATE_ARRAY(-78, -70, 0.01)),
ML.BUCKETIZE(pickuplat, GENERATE_ARRAY(37, 45, 0.01)),
ML.BUCKETIZE(dropofflon, GENERATE_ARRAY(-78, -70, 0.01)),
ML.BUCKETIZE(dropofflat, GENERATE_ARRAY(37, 45, 0.01))
) AS pickup_and_dropoff
)
-- at the time of writing, l2_reg wasn't supported yet.
# TODO 6: Create a DNN model (dnn_regressor) with hidden_units [32,8]
OPTIONS()
AS
SELECT * FROM serverlessml.feateng_training_data
%%bigquery
SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model8_dnn)
"""
Explanation: Yippee! We're now below our target of 6 dollars in RMSE.
DNN
You could, of course, train a more sophisticated model. Change "linear_reg" above to "dnn_regressor" and see if it improves things.
Note: This takes 20 - 25 minutes to run.
End of explanation
"""
|
WormLabCaltech/mprsq
|
src/9 Decorrelation Within Pathways.ipynb
|
mit
|
# important stuff:
import os
import pandas as pd
import numpy as np
import morgan as morgan
import genpy
import gvars
# Graphics
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import rc
rc('text', usetex=True)
rc('text.latex', preamble=r'\usepackage{cmbright}')
rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']})
# Magic function to make matplotlib inline;
%matplotlib inline
# This enables SVG graphics inline.
%config InlineBackend.figure_formats = {'png', 'retina'}
# JB's favorite Seaborn settings for notebooks
rc = {'lines.linewidth': 2,
'axes.labelsize': 18,
'axes.titlesize': 18,
'axes.facecolor': 'DFDFE5'}
sns.set_context('notebook', rc=rc)
sns.set_style("dark")
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['legend.fontsize'] = 14
genvar = gvars.genvars()
# Specify the genotypes to refer to:
single_mutants = ['b', 'c', 'd', 'e', 'g']
# Specify which genotypes are double mutants
double_mutants = {'a' : 'bd', 'f':'bc'}
# initialize the morgan.hunt object:
thomas = morgan.hunt('target_id', 'b', 'tpm', 'qval')
# input the genmap file:
thomas.add_genmap('../input/library_genotype_mapping.txt',
comment='#')
# add the names of the single mutants
thomas.add_single_mutant(single_mutants)
# add the names of the double mutants
thomas.add_double_mutants(['a', 'f'], ['bd', 'bc'])
# set the q-value threshold
thomas.set_qval()
# Add the tpm files:
kallisto_loc = '../input/kallisto_all/'
thomas.add_tpm(kallisto_loc, '/kallisto/abundance.tsv', '')
# Make all possible combinations of WT, X
combs = {}
for gene in thomas.genmap.genotype.unique():
if gene != 'wt':
combs[gene] = 'WT_'+gene+'/'
# load all the beta values for each genotype:
sleuth_loc = '../sleuth/kallisto/'
for file in os.listdir("../sleuth/kallisto"):
if file[:4] == 'beta':
letter = file[-5:-4].lower()
thomas.add_beta(sleuth_loc + file, letter)
thomas.beta[letter].sort_values('target_id',
inplace=True)
thomas.beta[letter].reset_index(inplace=True)
thomas.filter_data()
barbara = morgan.mcclintock('bayesian', thomas, True)
"""
Explanation: Table of Contents
1. Figure 7
In this notebook, I show that decorrelation could help order a pathway. The approach I will take is as follows:
* Calculate primary pairwise correlations between each mutant transcriptome
* Weight all correlations by the number of isoforms that are DE in both transcriptomes, divided by the total number of isoforms in either transcriptome.
* Plot
End of explanation
"""
def tidy_df(df, corr='corr', morgan_obj=thomas):
"""
A function that returns a tidied up dataframe.
Dataframe provided must be the result of morgan.robust_regression()
or morgan.robust_regression_secondary()
df - dataframe to tidy up
corr - a string indicating whether to use 'corr' or 'outliers'
outputs:
df - a tidied dataframe with columns 'corr_wit', 'variable',
'fraction' and 'pair'
"""
# make a copy of the df
df = df.copy()
# append a column called corr_with
if 'corr_with' not in df:
df['corr_with'] = morgan_obj.single_mutants
# melt it so that each row has a single correlation
df = pd.melt(df, id_vars='corr_with')
# drop any observations where the correlated letters are the same
df = df[df.corr_with != df.variable]
def calculate_fraction(x, fraction='corr'):
"""Fraction of genes that participate in a given interaction."""
if (x.corr_with, x.variable) in barbara.correlated_genes.keys():
dd = barbara.correlated_genes[(x.corr_with, x.variable)]
outliers = len(dd['outliers'])
corr = len(dd['corr'])
total = outliers + corr
if fraction == 'corr':
return corr/total
else:
return outliers/total
else:
return np.nan
# calculate the fraction of genes participating in any interaction
df['fraction'] = df.apply(calculate_fraction, args=(corr,), axis=1)
    # generate a new variable 'pair' that is the concatenation of 'variable' and 'corr_with'
df['pair'] = df.variable + df.corr_with
# return the damned thing:
return df
def different(x, d):
"""
Returns an indicator variable if the primary regression
is different in sign from the secondary.
"""
# extract the pair in question:
p = x.pair
# search for the primary interaction in the dataframe
primary = d[(d.pair == p) &
(d.regression == 'primary')].value.values[0]
# search for the secondary
secondary = d[(d.pair == p) &
(d.regression == 'secondary')].value.values[0]
# if the interactions are 0, return 0
if primary == 0 or secondary == 0:
return 0
# if they have the same sign, return -1
elif (primary*secondary > 0):
return -1
# otherwise return 1
else:
return 1
def special_add(x):
"""
If the primary and secondary have the same sign,
returns the addition of both.
"""
# if the current row is a secondary row
# and the primary and secondary rows are the same
# then return np.nan since we will want to ignore
# the secondary correlation
# if they are different in sign, return the current value
if x.regression == 'secondary':
if x.different == -1:
return np.nan
else:
return x.value
# if the regression is primary,
# then add the values if the correlations have the same sign
# otherwise just return the current value:
check = d[(d.regression=='secondary') & \
(d.pair == x.pair)].different.values
if check == -1:
to_add = d[(d.regression=='secondary') &
(d.pair == x.pair)].value.values[0]
return x.value + to_add
else:
return x.value
"""
Explanation: Next, I define some functions that will help me clean up the matrix I just generated with the above command and place it into a tidy dataframe.
End of explanation
"""
# tidy up the dataframe w/bayesian primary interactions:
d_pos = tidy_df(barbara.robust_slope)
d_pos['regression'] = 'primary'
# tidy up the secondary interactions
d_minus = tidy_df(barbara.secondary_slope, corr='outliers')
d_minus['regression'] = 'secondary'
frames = [d_pos, d_minus]
d = pd.concat(frames)
# identify whether primary and secondary
# interactions have different signs
d['different'] = d.apply(different, args=(d,), axis=1)
# drop any fractions that are NAN
d.dropna(subset=['fraction'], inplace=True)
# calculate corrected coefficients
d['corrected'] = d.apply(special_add, axis=1)
# drop any NAN corrected columns
d.dropna(subset=['corrected'], inplace=True)
# sort the pairs according to functional distance
d['sort_pairs'] = d.pair.map(genvar.sort_pairs)
d.sort_values('sort_pairs', inplace=True)  # DataFrame.sort was removed in newer pandas; sort_values is the replacement
# add the labels for plotting:
d['genes'] = d.pair.map(genvar.decode_pairs)
# extract the standard error for each correlation
e_plus = tidy_df(barbara.errors_primary)
# add a sort pairs column
e_plus['sort_pairs'] = e_plus.pair.map(genvar.sort_pairs)
# decode the gene pairs
e_plus['genes'] = e_plus.pair.map(genvar.decode_pairs)
# sort
e_plus.sort_values('sort_pairs', inplace=True)
# drop nonnumeric values
e_plus.dropna(inplace=True)
# repeat for secondary errors
e_minus = tidy_df(barbara.errors_secondary)
e_minus['sort_pairs'] = e_minus.pair.map(genvar.sort_pairs)
e_minus['genes'] = e_minus.pair.map(genvar.decode_pairs)
e_minus.sort_values('sort_pairs', inplace=True)
e_minus.dropna(inplace=True)
"""
Explanation: tidy up the dataframes:
End of explanation
"""
# generate a stripplot with all the primary interactions
sns.stripplot(x='genes', y='corrected',
data=d[d.regression=='primary'], size=15,
color='g', alpha=0.7)
# add errorbars:
# for each xtick and xticklabel
for x, xlabel in zip(plt.gca().get_xticks(),
plt.gca().get_xticklabels()):
# get the data
temp = d[d.regression=='primary']
# get the gene ID
f = temp.genes == xlabel.get_text()
# get the error bar gene ID
f2 = e_plus.genes == xlabel.get_text()
# plot the errorbar
plt.gca().errorbar(np.ones_like(temp[f].corrected.values)*x,
temp[f].corrected.values,
yerr=e_plus[f2].value.values,
ls='none', color='g')
# prettify:
plt.xticks(rotation=90, fontsize=20)
# plt.yticks([-0.1, 0, 0.5], fontsize=20)
plt.yticks(fontsize=20)
plt.axhline(0, lw=2, ls='--', color='gray')
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.xlabel('Gene Pairs, Ordered By Decreasing Functional Distance', fontsize=20)
plt.ylabel('Weighted Correlation', fontsize=20)
# save
plt.savefig('../output/weighted_corr_decreases_w_distance.svg')
"""
Explanation: Figure 7
End of explanation
"""
# plot secondary interactions
sns.stripplot(x='genes', y='corrected',
data=d[(d.regression=='secondary') &
(d.different == 1)],
size=10, color='k')
# add errorbars:
for x, xlabel in zip(plt.gca().get_xticks(),
plt.gca().get_xticklabels()):
temp = d[d.regression=='secondary']
f = temp.genes == xlabel.get_text()
f2 = e_minus.genes == xlabel.get_text()
plt.gca().errorbar(np.ones_like(temp[f].corrected.values)*x,
temp[f].corrected.values,
yerr=e_minus[f2].value.values,
ls='none', color='k')
# prettify
plt.axhline(0, ls='--', color='0.5')
plt.xticks(rotation=45, fontsize=20)
plt.yticks([-0.1, 0, 0.1], fontsize=20)
plt.axhline(0, lw=2, ls='--', color='gray')
plt.ylabel('Secondary Correlation, Normalized to Overlap')
"""
Explanation: Secondary correlations do not seem to have this property. That may be a result of the low number of genes (we should have sequenced deeper) or a result of other things that may be occurring. I don't really know.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/niwa/cmip6/models/sandbox-3/seaice.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: NIWA
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:30
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specificed for the following parameters if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution for which fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
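# Illustration only (hypothetical, not this model's actual value): a constant
# bulk sea ice salinity of a few PSU would be entered as a float, e.g.
# DOC.set_value(4.0)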
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
getsmarter/bda
|
module_4/M4_NB1_NetworkX_Introduction.ipynb
|
mit
|
# Load relevant libraries.
import networkx as nx
import matplotlib.pylab as plt
%matplotlib inline
import pygraphviz as pgv
import random
from IPython.display import Image, display
"""
Explanation: <div align="right">Python 3.6 Jupyter Notebook</div>
Introduction to NetworkX
Your completion of the notebook exercises will be graded based on your ability to do the following:
Understand: Do your pseudo-code and comments show evidence that you recall and understand technical concepts?
Apply: Are you able to execute code (using the supplied examples) that performs the required functionality on supplied or generated data sets?
Analyze: Are you able to pick the relevant method or library to resolve specific stated questions?
Notebook objectives
By the end of this notebook, you will be expected to:
Create basic graphs using NetworkX;
Use graph generators to explore classic graphs;
Visualize graph objects; and
Compute neighborhood information from a NetworkX graph object.
List of exercises
Exercise 1: Create and manipulate NetworkX graphs.
Notebook introduction
This week, the practical assessments will focus on the study of networks. In this notebook, you will start with an introduction to NetworkX.
NetworkX is a Python language software package used to create, manipulate, and study the structure, dynamics, and function of complex networks. The first version of this software package was designed and written by Aric Hagberg, Dan Schult, and Pieter Swart between 2002 and 2003.
A network or graph is a set of vertices or nodes, with relationships between nodes represented by a set of lines. These lines can include arrows to depict a directional relationship.
With NetworkX you can load and store networks in standard and nonstandard data formats, generate numerous types of random and classic networks, analyze network structure, build network models, design new network algorithms, draw networks, and much more.
To access and use the NetworkX module functionality, it first needs to be imported into your notebook.
Here are some additional links that will provide you with solid foundational knowledge of NetworkX:
NetworkX documentation
NetworkX examples
NetworkX tutorial
<div class="alert alert-warning">
<b>Note</b>:<br>
It is strongly recommended that you save and checkpoint after applying significant changes or completing exercises. This allows you to return the notebook to a previous state should you wish to do so. On the Jupyter menu, select "File", then "Save and Checkpoint" from the dropdown menu that appears.
</div>
Load libraries and set options
End of explanation
"""
# Instantiate an empty network undirected graph object, and assign to variable G.
G=nx.Graph()
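# Aside (not used in the rest of this notebook): a directed graph, in which
# edges carry a direction, would be instantiated with nx.DiGraph() instead.
D = nx.DiGraph()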
"""
Explanation: 1. Graph creation
With NetworkX, graph objects can be created in one of three ways:
Adding edges and nodes explicitly.
Importing data from data sources.
Graph generators.
This notebook predominantly investigates graph exploration using the first approach, with a few remarks made on the other graph creation approaches.
1.1 Adding edges and nodes explicitly
First, create a graph object by explicitly adding nodes to said object.
1.1.1 Instantiate an empty, undirected graph object
End of explanation
"""
# Add a node (1) to G.
G.add_node(1)
# Add another node ('x') to G.
G.add_node('x')
"""
Explanation: 1.1.2 Add nodes
End of explanation
"""
def pydot(G):
'''
A function for graph visualization using the dot framework
'''
pdot = nx.drawing.nx_pydot.to_pydot(G)
display(Image(pdot.create_png()))
"""
Explanation: 1.1.3 Visualize the graph structure
A graph is an abstract mathematical object without a specific representation in the Cartesian coordinate space. Therefore, graph visualization is somewhat arbitrary. Notebook 2 will look at algorithms that have been proposed to aid in presenting graph objects. This notebook, however, will use a function - "pydot" - defined below, which has some appealing aesthetics.
End of explanation
"""
pydot(G)
"""
Explanation: You can now visualize the simple graph that you have defined and populated with node information.
End of explanation
"""
# Add an edge between two nodes, 1 and 3.
# Note that nodes are automatically added if they do not exist.
G.add_edge(1,3)
# Add edge information, and specify the value of the weight attribute.
G.add_edge(2,'x',weight=0.9)
G.add_edge(1,'x',weight=3.142)
# Add edges from a list of tuples.
# In each tuple, the first 2 elements are nodes, and third element is value of the weight attribute.
edgelist=[('a','b',5.0),('b','c',3.0),('a','c',1.0),('c','d',7.3)]
G.add_weighted_edges_from(edgelist)
"""
Explanation: 1.1.4 Use edge information to populate a graph object
Alternatively, you can populate node information by starting off from an edge pair or a list of edge pairs. Such a pairing may or may not include the strength, or other attributes, that describe the relationship between the pair(s). The special edge attribute "weight" should always be numerical, and holds values used by algorithms requiring weighted edges. When specifying edges as tuples, the optional third argument refers to the weight.
End of explanation
"""
# Visualize the graph object.
pydot(G)
# Visualize the graph object, including weight information.
for u,v,d in G.edges(data=True):
d['label'] = d.get('weight','')
pydot(G)
"""
Explanation: 1.1.5 Graph visualization
End of explanation
"""
# Add a sine function, imported from the math module, as a node.
from math import sin
G.add_node(sin)
# Add file handle object as node.
fh = open("../data/CallLog.csv","r") # handle to file object.
G.add_node(fh)
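# A quick illustration of the hashability requirement discussed in the
# explanation that follows: mutable containers such as lists are not
# hashable and therefore cannot be used as nodes.
try:
    G.add_node([1, 2, 3])
except TypeError as err:
    print("Cannot add a list as a node:", err)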
"""
Explanation: 1.1.6 A node can be any hashable object
A node can be any of the so-called hashable objects:
An object is hashable if it has a hash value that never changes during its lifetime (it needs a __hash__() method), and can be compared to other objects (it needs an __eq__() or __cmp__() method). Hashable objects that compare equal must have the same hash value.
Hashability makes an object usable as a dictionary key and a set member, because these data structures use the hash value internally.
While all of Python’s immutable built-in objects are hashable, no mutable containers (such as lists or dictionaries) are. Objects that are instances of user-defined classes are hashable by default; they all compare unequal, and their hash value is their id().
Examples of hashable objects in Python include strings, numbers, files, functions, etc. In the following two examples, a node that is a math function and a node that is a file object are added to the graph object.
End of explanation
"""
# List the nodes in your graph object.
list(G.nodes())
# How many nodes are contained within your graph model?
G.number_of_nodes()
# Alternative method for getting the number nodes.
G.order()
"""
Explanation: 1.2 Examining the Graph() object
You can examine the nodes and edges in your graph using various commands.
1.2.1 Getting node information
End of explanation
"""
# List the edges in the graph object.
list(G.edges())
# How many edges do you have?
G.number_of_edges()
# Alternative method for getting number of edges.
G.size()
"""
Explanation: 1.2.2 Getting edge information
End of explanation
"""
for (u, v, wt) in G.edges.data('weight'):
    if wt is not None:
        print('(%s, %s, %.3f)' % (u, v, wt))
    else:
        print(u, v, wt)
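# The explanation that follows mentions the "get_edge_data" method; as a brief
# illustration (the edge between 1 and 'x' was added earlier with weight 3.142),
# it returns the attribute dictionary associated with a single edge pairing.
print(G.get_edge_data(1, 'x'))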
"""
Explanation: 1.2.3 Getting edge weight information
The most direct way to get edge weight data is by using the "get_edge_data" method, which returns the attribute dictionary associated with an edge pairing.
End of explanation
"""
for n1,n2,attr in G.edges(data=True):
print(n1,n2,attr)
"""
Explanation: In the dict output, the label key was added above when you wanted to show weight attribute information when visualizing the graph. By default, it is not included when adding weight information to the edges.
Print the weight information for all of the edges in your graph object.
End of explanation
"""
list(G.neighbors('x'))
"""
Explanation: 1.2.4 Getting neighbor information
It is also possible to get a list of the neighbors associated with a given node. In the following cell, invoke the graph method "neighbors" and specify the node whose neighbors you are interested in.
End of explanation
"""
for node in G.nodes():
print(node, list(G.neighbors(node)))
"""
Explanation: You can also print the list of all nodes and their corresponding neighbors. The code below prints each node, and the node's neighbors as a list (that is, enclosed between two square brackets).
End of explanation
"""
# Add a set of edges from a list of tuples.
e = [(1 ,2) ,(1 ,3)]
G.add_edges_from(e)
# Remove edge (1,2).
G.remove_edge(1,2)
# Similarly, you can also remove a node, and all edges linked to that node will also fall away.
G.remove_node(3)
# Multiple edge or node removal is also possible.
G.remove_edges_from(e)
"""
Explanation: 1.2.5 Removing nodes or edges
Removing edges and nodes from a graph is very simple, and is illustrated in the following cell.
End of explanation
"""
# Trying to remove a node not in the graph raises an error.
G.remove_node(3)
"""
Explanation: <div class="alert alert-danger">
<b>Important</b>: Removing a node not in the graph raises an error.
</div>
End of explanation
"""
# Close the file handle object used above.
fh.close()
# Remove the graph object from the workspace.
del G
"""
Explanation: 1.2.6 Cleaning up
End of explanation
"""
# Generate some of the small, famous graphs.
petersen=nx.petersen_graph()
tutte=nx.tutte_graph()
maze=nx.sedgewick_maze_graph()
tet=nx.tetrahedral_graph()
# Plot one of the small, famous graphs.
pydot(petersen)
"""
Explanation: 1.3 Graph generators
NetworkX also has standard algorithms to create network topologies. The following cells include some examples that you are encouraged to build, analyze, and visualize, using the tools described above, as well as other tools that will be introduced later.
1.3.1 Small, famous graphs
End of explanation
"""
# Generate some classic graphs.
K_5=nx.complete_graph(5)
K_3_5=nx.complete_bipartite_graph(3,5)
barbell=nx.barbell_graph(10,10)
lollipop=nx.lollipop_graph(10,20)
# Plot one of the classic graphs.
pydot(barbell)
"""
Explanation: 1.3.2 Classic graphs
End of explanation
"""
# Generate some random graphs for arbitrary parameter values.
er=nx.erdos_renyi_graph(10,0.15)
ws=nx.watts_strogatz_graph(30,3,0.1)
ba=nx.barabasi_albert_graph(10,5)
red=nx.random_lobster(20,0.9,0.9)
# Plot one of the random graphs.
pydot(ba)
"""
Explanation: 1.3.3 Random graphs
End of explanation
"""
# Your answer here.
"""
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 1 Start.</b>
</div>
Note:
This exercise contains five sections. It is broken up into these sections in order to make it easier to follow. Complete all five sections before saving and submitting your notebook.
Exercise 1.1: Instructions
Create an Erdos Renyi random graph.
Your graph should have 30 nodes, where each of the edges is chosen with a probability of 0.15, using NetworkX's graph generator methods. Set the argument for the seed parameter to "random.seed(10)". Assign your graph to a variable "G".
Hint:
An Erdos Renyi random graph is generated with NetworkX using the following:
G = nx.erdos_renyi_graph(nodes, probability, seed_value)
End of explanation
"""
# Your answer here.
"""
Explanation: Exercise 1.2: Instructions
Compute the number of edges in the graph, using one of the methods provided by NetworkX.
End of explanation
"""
# Your answer here.
"""
Explanation: Exercise 1.3: Instructions
Write a piece of code that prints the node label and neighbors for each node in the graph "G" that you created. Your code should be reusable for other graph objects.
End of explanation
"""
# Your answer here.
"""
Explanation: Exercise 1.4: Instructions
Write code to find a node with the largest number of neighbors in the graph. Your code should print both the node label and the number of neighbors it has.
End of explanation
"""
# Your answer here.
"""
Explanation: Exercise 1.5: Instructions
Remove the node with the most neighbors (found in Exercise 1.4) from the graph.
End of explanation
"""
|
sjev/talks
|
pythonMeetupDec16/slides.ipynb
|
mit
|
# matplotlib example
# plot 5-sec data
import pandas as pd  # import added here for completeness
price = pd.DataFrame.from_csv('data/SPY_20160411205955.csv')  # newer pandas: pd.read_csv(..., index_col=0, parse_dates=True)
price.close.plot()
# bokeh example
from bokeh.io import output_notebook, show
from bokeh.plotting import figure
from bokeh.charts import Line
output_notebook()
line = Line(price.close, plot_width=800, plot_height=400)
show(line)
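# Hedged note (added): the bokeh.charts API used above was removed in newer
# Bokeh releases; with a recent Bokeh a similar plot could be built with the
# standard plotting interface, e.g.:
# p = figure(x_axis_type='datetime')
# p.line(price.index, price.close)
# show(p)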
"""
Explanation: <img src="files/img/cover_sheet.svg">
Outline
Intro
Scientific Python tools
Getting data
Example strategy
About
Programming since 1992
Background in Applied Physics (TU Delft)
Working at Oce since 2005
algorithm development
machine vision
image processing
Trading stocks as a hobby since 2009
see my adventures at tradingWithPython blog
Why Python
Perfect all-round tool
(web) application development
scientific calculations
massive community
Scientific python - dev tools
(see http://github.com/jrjohansson/scientific-python-lectures for more)
IPython - interactive python - hacking
Jupyter Notebook - code & document - fantastic research tool
Spyder - IDE - good development tool
Eclipse & others - engineering tools
Libraries - general
numpy - matlab-like matrix calculations
scipy - scientific libraries (interpolation, transformations etc)
scikit-learn - machine learning
keras - deep learning
Libraries - Finance
pandas - data analysys library
trading-with-python - my toolbox
zipline - backtesting (I don't use it)
ibpy - interfacing with InteractiveBrokers API
Jupyter notebook
( previously called IPython notebook )
Project found on jupyter.org
Combine code, equations, visualisations , html etc
Explore & document
Share with others
<img src="img/jupyterpreview.png" width="800">
Spyder
Spyder is a MATLAB-like IDE for scientific computing with python. It has the many advantages of a traditional IDE environment, for example that everything from code editing, execution and debugging is carried out in a single environment, and work on different calculations can be organized as projects in the IDE environment.
<!-- <img src="files/images/spyder-screenshot.jpg" width="800"> -->
<img src="img/spyder-screenshot.jpg" width="800">
Some advantages of Spyder:
Powerful code editor, with syntax high-lighting, dynamic code introspection and integration with the python debugger.
Variable explorer, IPython command prompt.
Integrated documentation and help.
Plotting
matplotlib - Matlab plotting clone
bokeh - interactive javascript plots (see tutorial )
End of explanation
"""
# get data from yahoo finance
price = twp.yahooFinance.getHistoricData('SPY')
price['adj_close'].plot()
"""
Explanation: Getting the data
Yahoo Finance
free daily OHLC data
Interactive Brokers
free (for clients) intraday data
CBOE - Chicago Board Options Exchange
daily volatility data
Quandl
subscription packages, easy interface
End of explanation
"""
# get data
import tradingWithPython.lib.cboe as cboe # cboe data
symbols = ['SPY','VXX','VXZ']
priceData = twp.yahooFinance.getHistoricData(symbols).minor_xs('adj_close')
volData = cboe.getHistoricData(['VIX','VXV']).dropna()
volData.plot();
delta = volData.VIX - volData.VXV
delta.plot()
title('delta')
# prepare data
df = pd.DataFrame({'VXX':priceData.VXX, 'delta':delta}).dropna()
df.plot(subplots='True');
# strategy simulation function
def backtest(thresh):
""" backtest strategy with a threshold value"""
df['dir'] = 0 # init with zeros
df['ret'] = df.VXX.pct_change()
long = df.delta > thresh
short = df.delta < thresh
#df.ix[long,'dir'] = 1 # set long positions
df.ix[short,'dir'] = -1 # set short positions
df['dir'] = df['dir'].shift(1) # dont forget to shift one day forward!
df['pnl'] = df['dir'] * df['ret']
return df
df = backtest(0)
df
# check relationship delta-returns
df.plot(kind='scatter',x='delta',y='ret')
"""
Explanation: Interactive brokers
has a decent API for historic & realtime data and order submission
provides data down to 1 s resolution
historic data - see downloader code
realtime quotes - see tick logger
Simple volatility strategy
trade VXX
use VIX-VXV as inidicator
very simple approximation
no transaction cost
simple summation of percent returns
... this one actually makes money.
Disclaimer : you will lose money. Don't blame me for anything.
End of explanation
"""
T = np.linspace(-3,0,10)
h = ['%.2f' % t for t in T] # make table header
pnl ={} # pnl dict
PNL = pd.DataFrame(index=df.index, columns=h)
for i, t in enumerate(T):
PNL[h[i]] = backtest(thresh=t)['pnl']
PNL.cumsum().plot()
# evaluate performance
twp.sharpe(PNL).plot(kind='bar')
xlabel('threshold')
ylabel('sharpe')
title('strategy performance')
# plot best strategy
PNL['-2.00'].cumsum().plot()
"""
Explanation: Do a parameter scan
... simulate for different values of thresh variable
End of explanation
"""
|
Hvass-Labs/TensorFlow-Tutorials
|
10_Fine-Tuning.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import PIL
import tensorflow as tf
import numpy as np
import os
"""
Explanation: TensorFlow Tutorial #10
Fine-Tuning
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
We have previously seen in Tutorials #08 and #09 how to use a pre-trained Neural Network on a new dataset using so-called Transfer Learning, by re-routing the output of the original model just prior to its classification layers and instead use a new classifier that we had created. Because the original model was 'frozen' its weights could not be further optimized, so whatever had been learned by all the previous layers in the model, could not be fine-tuned to the new data-set.
This tutorial shows how to do both Transfer Learning and Fine-Tuning using the Keras API for Tensorflow. We will once again use the Knifey-Spoony dataset introduced in Tutorial #09. We previously used the Inception v3 model but we will use the VGG16 model in this tutorial because its architecture is easier to work with.
NOTE: It takes around 15 minutes to execute this Notebook on a laptop PC with a 2.6 GHz CPU and a GTX 1070 GPU. Running it on the CPU alone is estimated to take around 10 hours!
Flowchart
The idea is to re-use a pre-trained model, in this case the VGG16 model, which consists of several convolutional layers (actually blocks of multiple convolutional layers), followed by some fully-connected / dense layers and then a softmax output layer for the classification.
The dense layers are responsible for combining features from the convolutional layers and this helps in the final classification. So when the VGG16 model is used on another dataset we may have to replace all the dense layers. In this case we add another dense-layer and a dropout-layer to avoid overfitting.
The difference between Transfer Learning and Fine-Tuning is that in Transfer Learning we only optimize the weights of the new classification layers we have added, while we keep the weights of the original VGG16 model. In Fine-Tuning we optimize both the weights of the new classification layers we have added, as well as some or all of the layers from the VGG16 model.
Imports
End of explanation
"""
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, Flatten, Dropout
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input, decode_predictions
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam, RMSprop
"""
Explanation: These are the imports from the Keras API.
End of explanation
"""
tf.__version__
"""
Explanation: This was developed using Python 3.6 and TensorFlow version:
End of explanation
"""
def path_join(dirname, filenames):
return [os.path.join(dirname, filename) for filename in filenames]
"""
Explanation: Helper Functions
Helper-function for joining a directory and list of filenames.
End of explanation
"""
def plot_images(images, cls_true, cls_pred=None, smooth=True):
assert len(images) == len(cls_true)
# Create figure with sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing.
if cls_pred is None:
hspace = 0.3
else:
hspace = 0.6
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# Interpolation type.
if smooth:
interpolation = 'spline16'
else:
interpolation = 'nearest'
for i, ax in enumerate(axes.flat):
# There may be less than 9 images, ensure it doesn't crash.
if i < len(images):
# Plot image.
ax.imshow(images[i],
interpolation=interpolation)
# Name of the true class.
cls_true_name = class_names[cls_true[i]]
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true_name)
else:
# Name of the predicted class.
cls_pred_name = class_names[cls_pred[i]]
xlabel = "True: {0}\nPred: {1}".format(cls_true_name, cls_pred_name)
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
"""
Explanation: Helper-function for plotting images
Function used to plot at most 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
"""
# Import a function from sklearn to calculate the confusion-matrix.
from sklearn.metrics import confusion_matrix
def print_confusion_matrix(cls_pred):
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_test, # True class for test-set.
y_pred=cls_pred) # Predicted class.
print("Confusion matrix:")
# Print the confusion matrix as text.
print(cm)
# Print the class-names for easy reference.
for i, class_name in enumerate(class_names):
print("({0}) {1}".format(i, class_name))
"""
Explanation: Helper-function for printing confusion matrix
End of explanation
"""
def plot_example_errors(cls_pred):
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Boolean array whether the predicted class is incorrect.
incorrect = (cls_pred != cls_test)
# Get the file-paths for images that were incorrectly classified.
image_paths = np.array(image_paths_test)[incorrect]
# Load the first 9 images.
images = load_images(image_paths=image_paths[0:9])
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = cls_test[incorrect]
# Plot the 9 images we have loaded and their corresponding classes.
# We have only loaded 9 images so there is no need to slice those again.
plot_images(images=images,
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
"""
Explanation: Helper-function for plotting example errors
Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
"""
def example_errors():
# The Keras data-generator for the test-set must be reset
# before processing. This is because the generator will loop
# infinitely and keep an internal index into the dataset.
# So it might start in the middle of the test-set if we do
# not reset it first. This makes it impossible to match the
# predicted classes with the input images.
# If we reset the generator, then it always starts at the
# beginning so we know exactly which input-images were used.
generator_test.reset()
# Predict the classes for all images in the test-set.
y_pred = new_model.predict(generator_test, steps=steps_test)
# Convert the predicted classes from arrays to integers.
cls_pred = np.argmax(y_pred,axis=1)
# Plot examples of mis-classified images.
plot_example_errors(cls_pred)
# Print the confusion matrix.
print_confusion_matrix(cls_pred)
"""
Explanation: Function for calculating the predicted classes of the entire test-set and calling the above function to plot a few examples of mis-classified images.
End of explanation
"""
def load_images(image_paths):
# Load the images from disk.
images = [plt.imread(path) for path in image_paths]
# Convert to a numpy array and return it.
return np.asarray(images)
"""
Explanation: Helper-function for loading images
The data-set is not loaded into memory, instead it has a list of the files for the images in the training-set and another list of the files for the images in the test-set. This helper-function loads some image-files.
End of explanation
"""
def plot_training_history(history):
# Get the classification accuracy and loss-value
# for the training-set.
acc = history.history['categorical_accuracy']
loss = history.history['loss']
# Get it for the validation-set (we only use the test-set).
val_acc = history.history['val_categorical_accuracy']
val_loss = history.history['val_loss']
# Plot the accuracy and loss-values for the training-set.
plt.plot(acc, linestyle='-', color='b', label='Training Acc.')
plt.plot(loss, 'o', color='b', label='Training Loss')
# Plot it for the test-set.
plt.plot(val_acc, linestyle='--', color='r', label='Test Acc.')
plt.plot(val_loss, 'o', color='r', label='Test Loss')
# Plot title and legend.
plt.title('Training and Test Accuracy')
plt.legend()
# Ensure the plot shows correctly.
plt.show()
"""
Explanation: Helper-function for plotting training history
This plots the classification accuracy and loss-values recorded during training with the Keras API.
End of explanation
"""
import knifey
"""
Explanation: Dataset: Knifey-Spoony
The Knifey-Spoony dataset was introduced in Tutorial #09. It was generated from video-files by taking individual frames and converting them to images.
End of explanation
"""
knifey.maybe_download_and_extract()
"""
Explanation: Download and extract the dataset if it hasn't already been done. It is about 22 MB.
End of explanation
"""
knifey.copy_files()
"""
Explanation: This dataset has a different directory structure than the Keras API requires, so copy the files into separate directories for the training- and test-sets.
End of explanation
"""
train_dir = knifey.train_dir
test_dir = knifey.test_dir
"""
Explanation: The directories where the images are now stored.
End of explanation
"""
model = VGG16(include_top=True, weights='imagenet')
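# As the explanation below notes, the convolutional part alone (roughly 57 MB)
# can be downloaded instead if bandwidth is limited; a hedged alternative:
# model = VGG16(include_top=False, weights='imagenet')
# Note that the ImageNet prediction examples further down require the full
# model with its classification layers.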
"""
Explanation: Pre-Trained Model: VGG16
The following creates an instance of the pre-trained VGG16 model using the Keras API. This automatically downloads the required files if you don't have them already. Note how simple this is in Keras compared to Tutorial #08.
The VGG16 model contains a convolutional part and a fully-connected (or dense) part which is used for classification. If include_top=True then the whole VGG16 model is downloaded which is about 528 MB. If include_top=False then only the convolutional part of the VGG16 model is downloaded which is just 57 MB.
We will try and use the pre-trained model for predicting the class of some images in our new dataset, so we have to download the full model, but if you have a slow internet connection, then you can modify the code below to use the smaller pre-trained model without the classification layers.
End of explanation
"""
input_shape = model.layers[0].output_shape[0][1:3]
input_shape
"""
Explanation: Input Pipeline
The Keras API has its own way of creating the input pipeline for training a model using files.
First we need to know the shape of the tensors expected as input by the pre-trained VGG16 model. In this case it is images of shape 224 x 224 x 3.
End of explanation
"""
datagen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=180,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.1,
zoom_range=[0.9, 1.5],
horizontal_flip=True,
vertical_flip=True,
fill_mode='nearest')
"""
Explanation: Keras uses a so-called data-generator for inputting data into the neural network, which will loop over the data for eternity.
We have a small training-set so it helps to artificially inflate its size by making various transformations to the images. We use a built-in data-generator that can make these random transformations. This is also called an augmented dataset.
End of explanation
"""
datagen_test = ImageDataGenerator(rescale=1./255)
"""
Explanation: We also need a data-generator for the test-set, but this should not do any transformations to the images because we want to know the exact classification accuracy on those specific images. So we just rescale the pixel-values so they are between 0.0 and 1.0 because this is expected by the VGG16 model.
End of explanation
"""
batch_size = 20
"""
Explanation: The data-generators will return batches of images. Because the VGG16 model is so large, the batch-size cannot be too large, otherwise you will run out of RAM on the GPU.
End of explanation
"""
if True:
save_to_dir = None
else:
save_to_dir='augmented_images/'
"""
Explanation: We can save the randomly transformed images during training, so as to inspect whether they have been overly distorted and whether the parameters for the data-generator above should be adjusted. Saving is controlled by the boolean switch and the save_to_dir variable.
End of explanation
"""
generator_train = datagen_train.flow_from_directory(directory=train_dir,
target_size=input_shape,
batch_size=batch_size,
shuffle=True,
save_to_dir=save_to_dir)
"""
Explanation: Now we create the actual data-generator that will read files from disk, resize the images and return a random batch.
It is somewhat awkward that the construction of the data-generator is split into these two steps, but it is probably because there are different kinds of data-generators available for different data-types (images, text, etc.) and sources (memory or disk).
End of explanation
"""
generator_test = datagen_test.flow_from_directory(directory=test_dir,
target_size=input_shape,
batch_size=batch_size,
shuffle=False)
"""
Explanation: The data-generator for the test-set should not transform and shuffle the images.
End of explanation
"""
steps_test = generator_test.n / batch_size
steps_test
"""
Explanation: Because the data-generators will loop for eternity, we need to specify the number of steps to perform during evaluation and prediction on the test-set. Because our test-set contains 530 images and the batch-size is set to 20, the number of steps is 26.5 for one full processing of the test-set. This is why we need to reset the data-generator's counter in the example_errors() function above, so it always starts processing from the beginning of the test-set.
This is another slightly awkward aspect of the Keras API which could perhaps be improved.
End of explanation
"""
image_paths_train = path_join(train_dir, generator_train.filenames)
image_paths_test = path_join(test_dir, generator_test.filenames)
"""
Explanation: Get the file-paths for all the images in the training- and test-sets.
End of explanation
"""
cls_train = generator_train.classes
cls_test = generator_test.classes
"""
Explanation: Get the class-numbers for all the images in the training- and test-sets.
End of explanation
"""
class_names = list(generator_train.class_indices.keys())
class_names
"""
Explanation: Get the class-names for the dataset.
End of explanation
"""
num_classes = generator_train.num_classes
num_classes
"""
Explanation: Get the number of classes for the dataset.
End of explanation
"""
# Load the first images from the train-set.
images = load_images(image_paths=image_paths_train[0:9])
# Get the true classes for those images.
cls_true = cls_train[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true, smooth=True)
"""
Explanation: Plot a few images to see if data is correct
End of explanation
"""
from sklearn.utils.class_weight import compute_class_weight
class_weight = compute_class_weight(class_weight='balanced',
classes=np.unique(cls_train),
y=cls_train)
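# Hedged addition (not part of the original tutorial): tf.keras documents the
# class_weight argument of model.fit() as a dict mapping class index to weight,
# so convert the array returned by scikit-learn; the values are unchanged.
class_weight = dict(enumerate(class_weight))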
"""
Explanation: Class Weights
The Knifey-Spoony dataset is quite imbalanced because it has few images of forks, more images of knives, and many more images of spoons. This can cause a problem during training because the neural network will be shown many more examples of spoons than forks, so it might become better at recognizing spoons.
Here we use scikit-learn to calculate weights that will properly balance the dataset. These weights are applied to the gradient for each image in the batch during training, so as to scale their influence on the overall gradient for the batch.
End of explanation
"""
class_weight
class_names
"""
Explanation: Note how the weight is about 1.398 for the forky-class and only 0.707 for the spoony-class. This is because there are fewer images for the forky-class so the gradient should be amplified for those images, while the gradient should be lowered for spoony-images.
End of explanation
"""
def predict(image_path):
# Load and resize the image using PIL.
img = PIL.Image.open(image_path)
img_resized = img.resize(input_shape, PIL.Image.LANCZOS)
# Plot the image.
plt.imshow(img_resized)
plt.show()
# Convert the PIL image to a numpy-array with the proper shape.
img_array = np.expand_dims(np.array(img_resized), axis=0)
# Use the VGG16 model to make a prediction.
# This outputs an array with 1000 numbers corresponding to
# the classes of the ImageNet-dataset.
pred = model.predict(img_array)
# Decode the output of the VGG16 model.
pred_decoded = decode_predictions(pred)[0]
# Print the predictions.
for code, name, score in pred_decoded:
print("{0:>6.2%} : {1}".format(score, name))
"""
Explanation: Example Predictions
Here we will show a few examples of using the pre-trained VGG16 model for prediction.
We need a helper-function for loading and resizing an image so it can be input to the VGG16 model, as well as doing the actual prediction and showing the result.
End of explanation
"""
predict(image_path='images/parrot_cropped1.jpg')
"""
Explanation: We can then use the VGG16 model on a picture of a parrot which is classified as a macaw (a parrot species) with a fairly high score of 79%.
End of explanation
"""
predict(image_path=image_paths_train[0])
"""
Explanation: We can then use the VGG16 model to predict the class of one of the images in our new training-set. The VGG16 model is very confused about this image and cannot make a good classification.
End of explanation
"""
predict(image_path=image_paths_train[1])
"""
Explanation: We can try it for another image in our new training-set and the VGG16 model is still confused.
End of explanation
"""
predict(image_path=image_paths_test[0])
"""
Explanation: We can also try an image from our new test-set, and again the VGG16 model is very confused.
End of explanation
"""
model.summary()
"""
Explanation: Transfer Learning
The pre-trained VGG16 model was unable to classify images from the Knifey-Spoony dataset. The reason is perhaps that the VGG16 model was trained on the so-called ImageNet dataset which may not have contained many images of cutlery.
The lower layers of a Convolutional Neural Network can recognize many different shapes or features in an image. It is the last few fully-connected layers that combine these features into classification of a whole image. So we can try and re-route the output of the last convolutional layer of the VGG16 model to a new fully-connected neural network that we create for doing classification on the Knifey-Spoony dataset.
First we print a summary of the VGG16 model so we can see the names and types of its layers, as well as the shapes of the tensors flowing between the layers. This is one of the major reasons we are using the VGG16 model in this tutorial, because the Inception v3 model has so many layers that it is confusing when printed out.
End of explanation
"""
transfer_layer = model.get_layer('block5_pool')
"""
Explanation: We can see that the last convolutional layer is called 'block5_pool' so we use Keras to get a reference to that layer.
End of explanation
"""
transfer_layer.output
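# For the 224 x 224 x 3 input used here this is a tensor of shape
# (None, 7, 7, 512), i.e. 7 * 7 * 512 = 25088 features once flattened
# (a clarifying note added here, not output copied from the original run).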
"""
Explanation: We refer to this layer as the Transfer Layer because its output will be re-routed to our new fully-connected neural network which will do the classification for the Knifey-Spoony dataset.
The output of the transfer layer has the following shape:
End of explanation
"""
conv_model = Model(inputs=model.input,
outputs=transfer_layer.output)
"""
Explanation: Using the Keras API it is very simple to create a new model. First we take the part of the VGG16 model from its input-layer to the output of the transfer-layer. We may call this the convolutional model, because it consists of all the convolutional layers from the VGG16 model.
End of explanation
"""
# Start a new Keras Sequential model.
new_model = Sequential()
# Add the convolutional part of the VGG16 model from above.
new_model.add(conv_model)
# Flatten the output of the VGG16 model because it is from a
# convolutional layer.
new_model.add(Flatten())
# Add a dense (aka. fully-connected) layer.
# This is for combining features that the VGG16 model has
# recognized in the image.
new_model.add(Dense(1024, activation='relu'))
# Add a dropout-layer which may prevent overfitting and
# improve generalization ability to unseen data e.g. the test-set.
new_model.add(Dropout(0.5))
# Add the final layer for the actual classification.
new_model.add(Dense(num_classes, activation='softmax'))
"""
Explanation: We can then use Keras to build a new model on top of this.
End of explanation
"""
optimizer = Adam(lr=1e-5)
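# Hedged note (added): recent tf.keras releases use the keyword learning_rate
# instead of lr, e.g. Adam(learning_rate=1e-5); lr was the accepted spelling
# when this tutorial was written.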
"""
Explanation: We use the Adam optimizer with a fairly low learning-rate. The learning-rate could perhaps be larger. But if you try and train more layers of the original VGG16 model, then the learning-rate should be quite low otherwise the pre-trained weights of the VGG16 model will be distorted and it will be unable to learn.
End of explanation
"""
loss = 'categorical_crossentropy'
"""
Explanation: We have 3 classes in the Knifey-Spoony dataset so Keras needs to use this loss-function.
End of explanation
"""
metrics = ['categorical_accuracy']
"""
Explanation: The only performance metric we are interested in is the classification accuracy.
End of explanation
"""
def print_layer_trainable():
for layer in conv_model.layers:
print("{0}:\t{1}".format(layer.trainable, layer.name))
"""
Explanation: Helper-function for printing whether a layer in the VGG16 model should be trained.
End of explanation
"""
print_layer_trainable()
"""
Explanation: By default all the layers of the VGG16 model are trainable.
End of explanation
"""
conv_model.trainable = False
for layer in conv_model.layers:
layer.trainable = False
print_layer_trainable()
"""
Explanation: In Transfer Learning we are initially only interested in reusing the pre-trained VGG16 model as it is, so we will disable training for all its layers.
End of explanation
"""
new_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
"""
Explanation: Once we have changed whether the model's layers are trainable, we need to compile the model for the changes to take effect.
End of explanation
"""
epochs = 20
steps_per_epoch = 100
"""
Explanation: An epoch normally means one full processing of the training-set. But the data-generator that we created above, will produce batches of training-data for eternity. So we need to define the number of steps we want to run for each "epoch" and this number gets multiplied by the batch-size defined above. In this case we have 100 steps per epoch and a batch-size of 20, so the "epoch" consists of 2000 random images from the training-set. We run 20 such "epochs".
The reason these particular numbers were chosen, was because they seemed to be sufficient for training with this particular model and dataset, and it didn't take too much time, and resulted in 20 data-points (one for each "epoch") which can be plotted afterwards.
End of explanation
"""
history = new_model.fit(x=generator_train,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
class_weight=class_weight,
validation_data=generator_test,
validation_steps=steps_test)
"""
Explanation: Training the new model is just a single function call in the Keras API. This takes about 6-7 minutes on a GTX 1070 GPU.
End of explanation
"""
plot_training_history(history)
"""
Explanation: Keras records the performance metrics at the end of each "epoch" so they can be plotted later. This shows that the loss-value for the training-set generally decreased during training, but the loss-values for the test-set were a bit more erratic. Similarly, the classification accuracy generally improved on the training-set while it was a bit more erratic on the test-set.
End of explanation
"""
result = new_model.evaluate(generator_test, steps=steps_test)
print("Test-set classification accuracy: {0:.2%}".format(result[1]))
"""
Explanation: After training we can also evaluate the new model's performance on the test-set using a single function call in the Keras API.
End of explanation
"""
example_errors()
"""
Explanation: We can plot some examples of mis-classified images from the test-set. Some of these images are also difficult for a human to classify.
The confusion matrix shows that the new model is especially having problems classifying the forky-class.
End of explanation
"""
conv_model.trainable = True
"""
Explanation: Fine-Tuning
In Transfer Learning the original pre-trained model is locked or frozen during training of the new classifier. This ensures that the weights of the original VGG16 model will not change. One advantage of this, is that the training of the new classifier will not propagate large gradients back through the VGG16 model that may either distort its weights or cause overfitting to the new dataset.
But once the new classifier has been trained we can try and gently fine-tune some of the deeper layers in the VGG16 model as well. We call this Fine-Tuning.
It is a bit unclear whether Keras uses the trainable boolean in each layer of the original VGG16 model or if it is overridden by the trainable boolean of the "meta-layer" we call conv_model. So we will enable the trainable boolean for both conv_model and all the relevant layers in the original VGG16 model.
End of explanation
"""
for layer in conv_model.layers:
# Boolean whether this layer is trainable.
trainable = ('block5' in layer.name or 'block4' in layer.name)
# Set the layer's bool.
layer.trainable = trainable
"""
Explanation: We want to train the layers of the last two convolutional blocks, i.e. those whose names contain 'block5' or 'block4'.
End of explanation
"""
print_layer_trainable()
"""
Explanation: We can check that this has updated the trainable boolean for the relevant layers.
End of explanation
"""
optimizer_fine = Adam(lr=1e-7)
"""
Explanation: We will use a lower learning-rate for the fine-tuning so the weights of the original VGG16 model only get changed slowly.
End of explanation
"""
new_model.compile(optimizer=optimizer_fine, loss=loss, metrics=metrics)
"""
Explanation: Because we have defined a new optimizer and have changed the trainable boolean for many of the layers in the model, we need to recompile the model so the changes can take effect before we continue training.
End of explanation
"""
history = new_model.fit(x=generator_train,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
class_weight=class_weight,
validation_data=generator_test,
validation_steps=steps_test)
"""
Explanation: The training can then be continued so as to fine-tune the VGG16 model along with the new classifier.
End of explanation
"""
plot_training_history(history)
result = new_model.evaluate(generator_test, steps=steps_test)
print("Test-set classification accuracy: {0:.2%}".format(result[1]))
"""
Explanation: We can then plot the loss-values and classification accuracy from the training. Depending on the dataset, the original model, the new classifier, and hyper-parameters such as the learning-rate, this may improve the classification accuracies on both training- and test-set, or it may improve on the training-set but worsen it for the test-set in case of overfitting. It may require some experimentation with the parameters to get this right.
End of explanation
"""
example_errors()
"""
Explanation: We can plot some examples of mis-classified images again, and we can also see from the confusion matrix that the model is still having problems classifying forks correctly.
A part of the reason might be that the training-set contains only 994 images of forks, while it contains 1210 images of knives and 1966 images of spoons. Even though we have weighted the classes to compensate for this imbalance, and we have also augmented the training-set by randomly transforming the images in different ways during training, it may not be enough for the model to properly learn to recognize forks.
End of explanation
"""
|
QasimMuhammad/Ipython_WorkFlow
|
arundo-take_home_challenge.ipynb
|
mit
|
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn import svm
import matplotlib.pyplot as plt
import seaborn as sns
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils
import time
"""
Explanation: Arundo's Take home challenge
Given Data:
Arundo_take_home_challenge_training_set.csv
Arundo_take_home_challenge_test_set.csv
Task:
Arundo_take_home_challenge_training_set.csv will be used to train a model to predict request_count (target variable).
Predict request_count that is missing in Arundo_take_home_challenge_test_set.csv.
Note: the given test file (loaded as test_data below) should not be confused with the test or validation data of the ML model; the validation/test data will actually come from splitting Arundo_take_home_challenge_training_set.csv.
The problem is solved in this Jupyter notebook with the following main steps:
Data visualization
Feature selection and preprocessing
Testing with different ML models
Output csv files with predicted request_count values using the trained ML models
End of explanation
"""
# Also remember to parse the date column. This will be helpful in the next step.
data=pd.read_csv('Arundo_take_home_challenge_training_set.csv',sep=',',parse_dates=['date'])
#Have a look at the data
data.head(15)
data.tail(5)
print(data.isnull().any())
"""
Explanation: Read the given dataset
End of explanation
"""
data.hist('max_temp',weights=data['request_count'])
data.hist('min_temp',weights=data['request_count'])
data.hist('precipitation',weights=data['request_count'])
plt.show()
"""
Explanation: Quick observations
Daily data related to weather variations
Covers mainly the winter months
A better visualization of the correlation of request_count with the other variables is needed
No nulls
Let us try to get a better insight into the data. First let us have a look at the dependence of request counts on the float variables.
End of explanation
"""
#Sort request_count with events
data.groupby('events').request_count.agg(['mean','max','min']).plot(kind='bar')
plt.show()
data.groupby('events').request_count.agg(['count','mean','max','min'])
"""
Explanation: From the above histograms we see that most of the requests come when
the maximum temperature is below 10C
the minimum temperature is below 2C
there is zero precipitation
All in all, the distribution of request count is strongly correlated with the floating-point variables, and hence all of them are to be considered as features.
Let's now evaluate the correlation of the two variables 'events' and 'calendar_code' with request_count; these are clearly categorical variables.
End of explanation
"""
# Now sort request_count with calendar code
data.groupby('calendar_code').request_count.agg(['mean','max','min']).plot(kind='bar')
plt.show()
data.groupby('calendar_code').request_count.agg(['count','mean','max','min'])
"""
Explanation: Clearly, more support requests come in when the weather condition is overcast, which is understandable. However, few data instances are available for most weather events except 'None', 'Rain' and 'Snow', which will make it challenging to split the data fairly, train the ML model and test its accuracy.
End of explanation
"""
var_name = "events"
col_order = np.sort(data[var_name].unique()).tolist()
plt.figure(figsize=(16,6))
sns.violinplot(x=var_name, y='request_count', data=data, order=col_order)
plt.xlabel(var_name, fontsize=12)
plt.ylabel('y', fontsize=12)
plt.title("Distribution of request count with "+var_name, fontsize=15)
plt.show()
"""
Explanation: Calendar code probably refers to the intensity of weather variations within a single day, so the distribution of request_count over the calendar codes would be interesting to see.
We further use violin plots to visualize the dependence on the categorical variables (https://blog.modeanalytics.com/violin-plot-examples/)
End of explanation
"""
var_name = "calendar_code"
col_order = np.sort(data[var_name].unique()).tolist()
plt.figure(figsize=(16,6))
sns.violinplot(x=var_name, y='request_count', data=data, order=col_order)
plt.xlabel(var_name, fontsize=12)
plt.ylabel('y', fontsize=12)
plt.title("Distribution of request count with "+var_name, fontsize=15)
plt.show()
"""
Explanation: For events with few data points it is difficult to see the distribution, which may eventually show up as error in the trained model, or make it difficult or even impossible to split the data into training, test and validation sets. More data points for these events would certainly help to train a better model for these weather conditions.
End of explanation
"""
data['day_of_week'] = data['date'].dt.dayofweek
data['week_day'] = data['date'].dt.weekday_name
data.head()
# We again choose the groupby and violin plots to see the underlying behaviour
data.groupby('week_day').request_count.agg(['mean','max','min']).plot(kind='bar')
plt.show()
data.groupby('week_day').request_count.agg(['count','mean','max','min'])
"""
Explanation: Significant distribution of request count over calendar code is visible and hence would be the part of feature metrix.
Next, lets analayze the impact of date. The dates covers mainly the winter months and may not be represented well if it is used as it is given. I realized may be the site maintenence is dependant on the working day or weekend instead. We start by adding an additional column with the week day (0: Monday, ... 6: Sunday)
End of explanation
"""
data['events_code'] = pd.Categorical(data["events"]).codes
data.head()
"""
Explanation: Clearly, the weekend (Friday-Sunday) is the most active period, while Monday-Thursday show a nearly uniform mean request_count. This indicates that the best way to reflect the effect of the date as a numeric feature is through the week day.
Next, we convert the events into unique integer identifiers. This results in an additional column "events_code".
End of explanation
"""
y=data["request_count"]
print("Shape of y ",y.shape)
"""
Explanation: Since request_count is the target variable, we store it separately as "y" for the ML model
End of explanation
"""
data_orig = data #Save data in data_orig before droping reduntant variables
data = data.drop(["date","events","request_count","week_day"],axis=1)
data.head()
"""
Explanation: Now drop the redundant columns "date", "events", "request_count" and "week_day".
End of explanation
"""
data= pd.get_dummies(data,columns=["calendar_code","events_code","day_of_week"],prefix=["calendar","event","week"])
data.head()
"""
Explanation: The categorical variables day_of_week, events_code and calendar_code need to be one-hot-encoded to be used in the feature input vector.
End of explanation
"""
X=data.values
X.shape
plt.figure(1)
plt.plot(X[:,0],y[:],'r.')
plt.xlabel("No. of sites")
plt.ylabel("No. of requests")
plt.show()
plt.figure(1)
plt.plot((X[:,1]+X[:,2])**2/2.0,y[:],'r.')
plt.xlabel("Mean temperature")
plt.ylabel("No. of requests")
plt.show()
plt.figure(1)
plt.plot(X[:,3],y[:],'r.')
plt.xlabel("Precipitation")
plt.ylabel("No. of requests")
plt.show()
"""
Explanation: The DataFrame is now ready to be used as the feature matrix. Let's assign its values to X.
End of explanation
"""
X=np.column_stack([X,(X[:,1]+X[:,2])**2.0])
X.shape
#Split the data into training and validation test, shuffling of data won't be necessary since data seems to be already random
X_train, X_val, y_train, y_val = train_test_split(X,y,test_size=0.2,random_state = 0)
"""
Explanation: It appears that the number of requests has a roughly quadratic dependence on the mean temperature, so in addition to the max and min temperatures we construct a new feature $\left((min_{temp}+max_{temp})/2\right)^2$ (the code drops the constant factor, which does not matter for a linear regression).
End of explanation
"""
# Multivariate regression
regr = linear_model.LinearRegression()
start_time =time.time()
regr.fit(X_train, y_train)
print("--- %s seconds ---" % (time.time() - start_time))
y_train_pred=regr.predict(X_train)
print("Mean squared error: %.2f" % np.mean((regr.predict(X_train) - y_train) ** 2))
print("Mean squared error with validation set: %.2f" % np.mean((regr.predict(X_val) - y_val) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_train, y_train))
print('Variance score with validation set: %.2f' % regr.score(X_val, y_val))
"""
Explanation: Conduct a multivariate linear regression on the dataset.
End of explanation
"""
#Delete the least appeared events
plt.figure(1)
plt.plot((X[:,1]+X[:,2])**2/2.0,y[:],'r.')
plt.xlabel("Mean temperature")
plt.ylabel("No. of requests")
plt.show()
data_orig = data_orig[(data_orig['events'] != 'Fog') & (data_orig['events'] != 'Fog-Rain-Snow') & (data_orig['events'] != 'Rain-Thunderstorm')]
data_orig.shape
#Preprocess data before sending to multivariant regression
data_orig['day_of_week'] = data_orig['date'].dt.dayofweek
data_orig['events_code'] = pd.Categorical(data_orig["events"]).codes
data_orig= pd.get_dummies(data_orig,columns=["calendar_code","events_code","day_of_week"],prefix=["calendar","event","week"])
y_red=data_orig["request_count"]
data_orig = data_orig.drop(["date","events","request_count","week_day"],axis=1)
data_orig.head()
#Split the data into training and validation test, shuffling of data won't be necessary since data seems to be already random
X_red = data_orig.values
X_red_train, X_red_val, y_red_train, y_red_val = train_test_split(X_red,y_red,test_size=0.2,random_state = 0)
# Multivariate regression
regr = linear_model.LinearRegression()
regr.fit(X_red_train, y_red_train)
y_red_train_pred=regr.predict(X_red_train)
print("Mean squared error: %.2f" % np.mean((regr.predict(X_red_train) - y_red_train) ** 2))
print("Mean squared error with validation set: %.2f" % np.mean((regr.predict(X_red_val) - y_red_val) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_red_train, y_red_train))
print('Variance score with validation set: %.2f' % regr.score(X_red_val, y_red_val))
"""
Explanation: I also checked that the variance score decreases if the new mean-temperature-squared column is not included. Recursive Feature Elimination (RFE) is not used here, since the analysis above shows that each variable has its own impact; it would be more useful for a larger feature set (a small sketch of how it could be applied follows below for reference).
Now we apply the regression after ignoring the least frequent events, to see whether that gives a better result.
End of explanation
"""
# Multivariate regression
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
y_train_pred=regr.predict(X_train)
print("Mean squared error: %.2f" % np.mean((regr.predict(X_train) - y_train) ** 2))
print("Mean squared error with validation set: %.2f" % np.mean((regr.predict(X_val) - y_val) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_train, y_train))
print('Variance score with validation set: %.2f' % regr.score(X_val, y_val))
"""
Explanation: This has actually increased the error, so removing the least frequent events does not help in reducing the error at all. Let's go back to our original regression fit.
End of explanation
"""
test_data=pd.read_csv('Arundo_take_home_challenge_test_set.csv',sep=',',parse_dates=['date'])
# Sort by events to see if all the events in training data set
test_data.groupby('events').site_count.agg(['mean','max','min']).plot(kind='bar')
plt.show()
test_data.groupby('events').site_count.agg(['count','mean','max','min'])
"""
Explanation: Load the CSV file with the missing request_count and predict request_count for it. This is one of the given tasks.
End of explanation
"""
# We must process the test csv data in the same way before predicting request_count.
# To ensure consistent codes for the categorical variables, we merge the given test data with the training csv
# and, after all event codes and week-day codes are included, remove the training rows again.
# (An alternative reindex-based approach is sketched just before the neural-network section below.)
data=pd.read_csv('Arundo_take_home_challenge_training_set.csv',sep=',',parse_dates=['date'])
data['key1'] = 1
data = data.drop(["request_count"],axis=1)
data.head()
test_data['key2'] = 0
frames = [test_data,data]
merg_frame = pd.concat(frames)
merg_frame['day_of_week'] = merg_frame['date'].dt.dayofweek
merg_frame['events_code'] = pd.Categorical(merg_frame["events"]).codes
merg_frame.head()
# Drop data and Events columns
merg_frame = merg_frame.drop(["date","events"],axis=1)
merg_frame.head()
merg_frame= pd.get_dummies(merg_frame,columns=["calendar_code","events_code","day_of_week"],prefix=["calendar","event","week"])
test_data = merg_frame[merg_frame['key2'] == 0]
test_data = test_data.drop(["key1","key2"],axis=1)
test_data.head()
# Assign test data to X_test and add ((minx+maxx)/2)^2 as an additional column
X_test=test_data.values
X_test=np.column_stack([X_test,(X_test[:,1]+X_test[:,2])**2.0])
X_test.shape
y_test_pred=regr.predict(X_test)
np.shape(y_test_pred)
pred = pd.DataFrame({'Predicted_request_counts':y_test_pred})
pred.head()
pred.to_csv('predicted_request_counts_regression.csv', index=False)
"""
Explanation: Clearly, four events (Fog-Rain-Snow, Fog-Snow, Rain-Snow and Snow) are not listed. I checked this to see whether the least frequent events, which are not needed for predicting request_count in the given test csv, could simply be removed. The only rare event not listed here is Fog-Rain-Snow, so removing it would probably not help the fit much, and as checked above, removing rare events did not give any improvement.
End of explanation
"""
m,input_layer_size=X.shape
hidden_layer_size = input_layer_size
ANN_classifier = Sequential()
ANN_classifier.add(Dense(units = 10, kernel_initializer = 'uniform', activation = 'relu', input_dim = input_layer_size))
ANN_classifier.add(Dense(units = 10, kernel_initializer = 'uniform', activation = 'relu'))
ANN_classifier.add(Dense(units = 1, kernel_initializer = 'normal'))
start_time = time.time()
ANN_classifier.compile(loss='mean_squared_error', optimizer='adam')
history=ANN_classifier.fit(X_train, y_train, batch_size = 15, epochs = 4000,verbose=0)
print("--- %s seconds ---" % (time.time() - start_time))
pred_train = ANN_classifier.predict(X_train)
pred = ANN_classifier.predict(X_val)
print("Mean squared error: ", np.mean((pred_train - y_train.values.reshape(-1,1)) ** 2))
print("Mean squared error validation: ", np.mean((pred - y_val.values.reshape(-1,1)) ** 2))
"""
Explanation: Now let's try a neural network to see if it can decrease the mean squared error
End of explanation
"""
y_test_pred_ANN=ANN_classifier.predict(X_test)
pred_ANN = pd.DataFrame({'Predicted_request_counts': y_test_pred_ANN.flatten()})  # flatten the (n, 1) Keras output into a 1-D column
pred_ANN.head()
pred_ANN.to_csv('predicted_request_counts_ANN.csv', index=False)
"""
Explanation: Clearly the neural network has improved the fit, but it is much slower than the regression
Now create the CSV with the request counts predicted by the neural-network fit
End of explanation
"""
|
bjodah/pyodesys
|
examples/transformations.ipynb
|
bsd-2-clause
|
from __future__ import print_function, division, absolute_import
import numpy as np
import matplotlib.pyplot as plt
import sympy as sp
from pyodesys import OdeSys
from pyodesys.symbolic import SymbolicSys, symmetricsys
sp.init_printing()
%matplotlib inline
print(sp.__version__)
"""
Explanation: Solving a transformed system of first order ordinary differential equations
In this notebook we explore how we can use different convenient subclasses of SymbolicSys to reformulate a problem (for other stability/accuracy/performance characteristics)
End of explanation
"""
f = lambda t, x, k: [-k[0]*x[0], k[0]*x[0] - k[1]*x[1], k[1]*x[1] - k[2]*x[2], k[2]*x[2] - k[3]*x[3]]
k = [7, 3, 2, 0]
y0 = [1, 0, 0, 0]
tend = 1.0
"""
Explanation: We will consider a chain of three coupled decays:
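In terms of the state vector $x$ and rate constants $k$ passed to the callback f, this reads
$$\dot x_0 = -k_0 x_0,\quad \dot x_1 = k_0 x_0 - k_1 x_1,\quad \dot x_2 = k_1 x_1 - k_2 x_2,\quad \dot x_3 = k_2 x_2 - k_3 x_3,$$
with $k = (7, 3, 2, 0)$ (so the last species only accumulates) and initial condition $x(0) = (1, 0, 0, 0)$.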
End of explanation
"""
ref = [9.11881965554516244e-04, 8.55315762040415040e-02, 3.07983556726319885e-01, 6.05572985104084083e-01]
def rmsd(x):
diff = np.array(ref[4-len(x):]) - x
return np.sum(diff**2)**0.5
"""
Explanation: The above system has an analytic solution; evaluated at t = 1 it is:
End of explanation
"""
odesys = OdeSys(f)
res = odesys.integrate(np.linspace(0, tend), y0, params=k)
res.plot(names='abcd')
plt.legend()
{k: v for k, v in res.info.items() if not k.startswith('internal')}, rmsd(res.yout[-1, :])
"""
Explanation: We can integrate this using e.g. lsoda from scipy:
End of explanation
"""
odesys = SymbolicSys.from_callback(f, 4, 4, names='abcd')
odesys.exprs, odesys.get_jac(), odesys.get_dfdx()
"""
Explanation: For a stiff system (requiring an implicit method) we would also have needed to define the Jacobian, and here symbolic manipulation is very useful (it is less error prone and less user code needs to be written):
End of explanation
"""
def integrate_and_plot(odesys, integrator, tout, y0, k, interpolate=False, stiffness=False, **kwargs):
plt.figure(figsize=(14,5))
xout, yout, info = odesys.integrate(tout, y0, k, integrator=integrator, **kwargs)
plt.subplot(1, 3 if stiffness else 2, 1)
odesys.plot_result(interpolate=interpolate)
plt.legend(loc='best')
plt.subplot(1, 3 if stiffness else 2, 2)
plt.gca().set_xscale('log')
plt.gca().set_yscale('log')
odesys.plot_result(interpolate=interpolate)
plt.legend(loc='best')
if stiffness:
ratios = odesys.stiffness()
plt.subplot(1, 3, 3)
plt.yscale('linear')
plt.plot(odesys._internal[0], ratios)
info.pop('internal_xout')
info.pop('internal_yout')
return len(xout), info, rmsd(yout[-1, :])
"""
Explanation: Let's define a convenience function for plotting with both linear and logarithmic scales:
End of explanation
"""
integrate_and_plot(odesys, 'scipy', np.linspace(0, tend), y0, k, first_step=1e-14, name='vode', method='bdf')
"""
Explanation: Let's use the vode integrator this time and an implicit algorithm:
End of explanation
"""
print(odesys.f_cb(0, y0, k))
print()
print(odesys.j_cb(0, y0, k))
"""
Explanation: We see that the Jacobian was evaluated twice. The final error is slightly higher (which might be expected since the system is not particularly stiff).
SymbolicSys has provided us with callbacks in case we want to evaluate f or its Jacobian manually:
End of explanation
"""
integrate_and_plot(odesys, 'cvode', tend, y0, k, atol=1e-8, rtol=1e-8, method='bdf')
"""
Explanation: We can use the cvode integrator (through the use of pycvodes)
End of explanation
"""
y0aug = [1, 1e-20, 1e-20, 1e-20]
logexp = sp.log, sp.exp
LogLogSys = symmetricsys(
logexp, logexp, exprs_process_cb=lambda exprs: [
sp.powsimp(expr.expand(), force=True) for expr in exprs])
"""
Explanation: Since we are using symbolic manipulation it is very easy to perform a variable transformation.
We will look at how this system behaves in logarithmic space (we need $y>0$ so we'll add a $\delta$ much smaller than our original linear absolute tolerance)
First we need to define our transformation, we use the helper function symmetricsys for this:
End of explanation
"""
tsys = LogLogSys.from_callback(f, 4, 4, names=True)
tsys.exprs, tsys.get_jac(), tsys.get_dfdx()
"""
Explanation: Alright, so our newly defined LogLogSys class can now take the same input and give back transformed expressions:
End of explanation
"""
integrate_and_plot(tsys, 'cvode', [1e-12, tend], y0aug, k, atol=1e-7, rtol=1e-7, first_step=1e-4, nsteps=5000)
integrate_and_plot(tsys, 'odeint', [1e-12, tend], y0aug, k, atol=1e-8, rtol=1e-8, first_step=1e-4, nsteps=50000)
"""
Explanation: Now let us integrate the transformed system:
End of explanation
"""
def f2(t, x, p):
y0 = p[0]*sp.exp(-p[1]*t)
return [p[1]*y0 - p[2]*x[0]] + [
p[i+2]*x[i] - p[i+3]*x[i+1] for i in range(len(x) - 1)
]
tsys2 = LogLogSys.from_callback(f2, 3, 5, names=True)
tsys2.exprs, tsys2.get_jac()
integrate_and_plot(tsys2, 'cvode', [1e-12, tend], y0aug[1:], [y0aug[0], 7, 3, 2, 1],
atol=1e-7, rtol=1e-7, first_step=1e-4, stiffness=True, nsteps=5000)
integrate_and_plot(odesys, 'cvode', [1e-12, tend], y0aug, [7, 3, 2, 1], atol=1e-7, rtol=1e-7, first_step=1e-4, stiffness=True)
"""
Explanation: Substantially more work was needed to solve the transformed system, and the accuracy suffered. So this transformation was of no help for this particular problem, choice of initial conditions and length of integration, although there may be situations where it is useful.
Stiffness
End of explanation
"""
from pyodesys.symbolic import PartiallySolvedSystem
psys = PartiallySolvedSystem(odesys, lambda x0, y0, p0: {odesys.dep[0]: y0[0]*sp.exp(-p0[0]*(odesys.indep-x0))})
psys.exprs, psys.get_jac()
integrate_and_plot(psys, 'cvode', [1e-12, tend], y0aug, [7, 3, 2, 1], atol=1e-7, rtol=1e-7, first_step=1e-4, stiffness=True)
"""
Explanation: Using PartiallySolvedSystem
Sometimes we can solve a system partially by integrating some of the dependent variables analytically. The ODE system then needs to be reformulated in terms of the new analytic function. PartiallySolvedSystem helps us with this task:
End of explanation
"""
from pyodesys.integrators import RK4_example_integrator
integrate_and_plot(odesys, RK4_example_integrator, [1e-12, tend], y0aug, k, atol=1e-8, rtol=1e-8, first_step=1e-1)
integrate_and_plot(psys, RK4_example_integrator, [1e-12, tend], y0aug, [7, 3, 2, 1], atol=1e-8, rtol=1e-8, first_step=1e-1)
"""
Explanation: Using fixed RK4 stepper
pyodesys provides an example class of an integrator (RK4_example_integrator) in pyodesys.integrators. It can be used as a model integrator when designing custom steppers.
End of explanation
"""
|
beangoben/HistoriaDatos_Higgs
|
Dia2/5_Estadistica_Basica.ipynb
|
gpl-2.0
|
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
"""
Explanation: A bit of statistics
Before tackling the classification problem, we are going to look at some basics of Gaussians. Through them we will understand some concepts from statistics and probability.
In particular we will look at the concepts of:
Types of randomness.
Normal or Gaussian distributions.
What the mean and the variance/standard deviation (var/std) mean.
What the sigmas mean for an experiment.
First, the libraries
End of explanation
"""
Edades = np.array([10,15, 16, 17, 18, 19, 20, 21, 22, 23, 24,25,26,30,32,33,34,37,38])
Frecuencia = np.array([2,1,20,12,12,20,16,17,9,4,1,2,1,2,1,1,1,1,3])
print(sum(Frecuencia))
plt.bar(Edades, Frecuencia)
plt.show()
"""
Explanation: Age survey
We make two lists: the first will contain the ages of the science club students and the second the number of people with each age
End of explanation
"""
x = np.random.rand(100)
sns.distplot(x,kde=False)
plt.xlim(0,1)
plt.show()
"""
Explanation: Uniform distribution
Cacho!
What other phenomena follow a uniform distribution?
End of explanation
"""
s = np.random.poisson(10,20000)
sns.distplot(s,kde=False)
plt.show()
"""
Explanation: Poisson distribution
Number of Facebook friend requests in one week
End of explanation
"""
x=np.random.randn(50)
sns.distplot(x)
plt.show()
"""
Explanation: Normal distribution
Distribution of marathon times for runners of various ages:
Change the number of data points and see what happens:
End of explanation
"""
numeros = [10,50,100,1000,10000]
for n in numeros:
x = np.random.randn(n)
sns.distplot(x)
plt.title('n = %d'%n)
plt.show()
"""
Explanation: One way to automate this is:
End of explanation
"""
numeros = np.random.normal(loc=2.0,scale=0.1,size=1000)
sns.distplot(numeros)
plt.xlim(0,4)
plt.show()
"""
Explanation: We can also do this in another way with np.random.normal(), which takes 3 arguments:
loc, the location ... or rather, the mean.
scale, the scale ... or rather, the standard deviation.
size ... how many numbers we are going to generate.
End of explanation
"""
x = np.random.normal(loc=2.0,scale=0.5,size=100)
y = np.random.normal(loc=2.0,scale=0.5,size=100)
plt.scatter(x,y,c='r',label='rojos')
plt.legend()
plt.show()
"""
Explanation: What does the standard deviation mean?
And the sigmas!
$1 \sigma$ = 68.26%
$2 \sigma$ = 95.44%
$3 \sigma$ = 99.74%
$4 \sigma$ = 99.995%
$5 \sigma$ = 99.99995%
(A quick numerical check of these percentages is sketched at the end of this notebook.)
Activities
Do the following:
Create 3 distributions varying the mean
Create 3 distributions varying the standard deviation (std)
Create 2 distributions with some overlap, i.e. that touch each other
Gaussian bells in Nature
Exit exams in high schools in Poland:
What could this deviation be?
Normal distribution in 2D
End of explanation
"""
|
hannorein/variations
|
Figure5.ipynb
|
gpl-3.0
|
import rebound
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
"""
Explanation: Figure 5
This notebook recreates Figure 5 in Rein & Tamayo 2016. The figure illustrates the accuracy of second order variational equations compared to finite differences.
When using finite differences to calculate approximations to a derivative, the finite difference needs to be fine-tuned. It has to be small enough to stay in the linear regime, but large enough to avoid floating point rounding issues. As we will see, there is only a small range in which finite differences give accurate approximations to the local derivative. The problem is significantly exaggerated for second-order derivatives. Even for fine-tuned values of the finite differences (which is in general not possible to do), one cannot achieve an accuracy better than $10^{-6}$ with finite differences.
When variational equations are used to calculate derivatives, none of these problems exists. There is no small parameter that needs to be fine-tuned. Variational equations just work. Better yet, because we use the high-accuracy IAS15 integrator, the derivatives are accurate to machine precision in all cases.
We start by importing the REBOUND, numpy and matplotlib packages.
End of explanation
"""
sample_times = np.array([0.,0.1,0.3,1.2,1.5,1.9,2.3,2.8,3.3,9.5,11.5,12.5,15.6,16.7,20.])
def generatedata(x):
a, e = x
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(primary=sim.particles[0],m=1e-3, a=a, e=e)
sim.add(primary=sim.particles[0],m=1e-3, a=1.3,f=1.4)
sim.move_to_com()
samples = np.zeros((len(sample_times)))
for i,t in enumerate(sample_times):
sim.integrate(t)
samples[i] = sim.particles[0].vx
return samples
x_true = (1.0,0.2500)
samples_true = generatedata(x_true)
"""
Explanation: We recreate the same problem as for Figures 3 and 4. Two planets are orbiting one star. We sample the radial velocity of the star at random intervals.
End of explanation
"""
def chi2_derivatives(x):
a, e = x
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(primary=sim.particles[0],m=1e-3, a=a, e=e)
sim.add(primary=sim.particles[0],m=1e-3, a=1.3,f=1.4)
var_da = sim.add_variation()
var_dda = sim.add_variation(order=2,first_order=var_da)
var_de = sim.add_variation()
var_dde = sim.add_variation(order=2,first_order=var_de)
var_da_de = sim.add_variation(order=2,first_order=var_da,first_order_2=var_de)
var_da.vary(1,"a")
var_de.vary(1,"e")
var_dda.vary(1,"a")
var_dde.vary(1,"e")
var_da_de.vary(1,"a","e")
sim.move_to_com()
l = 0.
d = np.zeros((2))
dd = np.zeros((2,2))
for i, t in enumerate(sample_times):
sim.integrate(t)
rvobs = samples_true[i]
rv = sim.particles[0].vx
l += (rv-rvobs)*(rv-rvobs)
d[0] += 2. * var_da.particles[0].vx*(rv-rvobs)
d[1] += 2. * var_de.particles[0].vx*(rv-rvobs)
dd[0][0] += 2. * var_dda.particles[0].vx*(rv-rvobs)
dd[0][0] += 2. * var_da.particles[0].vx*var_da.particles[0].vx
dd[1][0] += 2. * var_da_de.particles[0].vx*(rv-rvobs)
dd[1][0] += 2. * var_da.particles[0].vx*var_de.particles[0].vx
dd[1][1] += 2. * var_dde.particles[0].vx*(rv-rvobs)
dd[1][1] += 2. * var_de.particles[0].vx*var_de.particles[0].vx
dd[0][1] = dd[1][0]
return l, d, dd
"""
Explanation: The following function calculates a goodness of fit, the $\chi^2$, and the derivatives of $\chi^2$ with respect to the inner planet's initial semi-major axis and eccentricity, using the variational equation approach.
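Spelled out, the quantities accumulated in the function are
$$\chi^2 = \sum_i \left(rv_i - rv_i^{\rm obs}\right)^2, \qquad \frac{\partial \chi^2}{\partial a} = 2\sum_i \left(rv_i - rv_i^{\rm obs}\right)\frac{\partial rv_i}{\partial a},$$
$$\frac{\partial^2 \chi^2}{\partial a\,\partial e} = 2\sum_i\left[\frac{\partial rv_i}{\partial a}\,\frac{\partial rv_i}{\partial e} + \left(rv_i - rv_i^{\rm obs}\right)\frac{\partial^2 rv_i}{\partial a\,\partial e}\right],$$
where $rv_i$ is the stellar radial velocity at sample time $t_i$, the derivatives of $rv_i$ are read off the variational particles, and the remaining first and second derivatives follow analogously.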
End of explanation
"""
def chi2_derivatives_finite_differences(x, d=(1e-6,1e-6)):
a, e = x
d_a, d_e = d
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(primary=sim.particles[0],m=1e-3, a=a, e=e)
sim.add(primary=sim.particles[0],m=1e-3, a=1.3,f=1.4)
sim.move_to_com()
sim_a = rebound.Simulation()
sim_a.add(m=1.)
sim_a.add(primary=sim_a.particles[0],m=1e-3, a=a+d_a, e=e)
sim_a.add(primary=sim_a.particles[0],m=1e-3, a=1.3,f=1.4)
sim_a.move_to_com()
sim_e = rebound.Simulation()
sim_e.add(m=1.)
sim_e.add(primary=sim_e.particles[0],m=1e-3, a=a, e=e+d_e)
sim_e.add(primary=sim_e.particles[0],m=1e-3, a=1.3,f=1.4)
sim_e.move_to_com()
sim_aa = rebound.Simulation()
sim_aa.add(m=1.)
sim_aa.add(primary=sim_aa.particles[0],m=1e-3, a=a-d_a, e=e)
sim_aa.add(primary=sim_aa.particles[0],m=1e-3, a=1.3,f=1.4)
sim_aa.move_to_com()
sim_ee = rebound.Simulation()
sim_ee.add(m=1.)
sim_ee.add(primary=sim_ee.particles[0],m=1e-3, a=a, e=e-d_e)
sim_ee.add(primary=sim_ee.particles[0],m=1e-3, a=1.3,f=1.4)
sim_ee.move_to_com()
sim_ea1 = rebound.Simulation()
sim_ea1.add(m=1.)
sim_ea1.add(primary=sim_ea1.particles[0],m=1e-3, a=a+d_a, e=e-d_e)
sim_ea1.add(primary=sim_ea1.particles[0],m=1e-3, a=1.3,f=1.4)
sim_ea1.move_to_com()
sim_ea2 = rebound.Simulation()
sim_ea2.add(m=1.)
sim_ea2.add(primary=sim_ea2.particles[0],m=1e-3, a=a-d_a, e=e+d_e)
sim_ea2.add(primary=sim_ea2.particles[0],m=1e-3, a=1.3,f=1.4)
sim_ea2.move_to_com()
sim_ea3 = rebound.Simulation()
sim_ea3.add(m=1.)
sim_ea3.add(primary=sim_ea3.particles[0],m=1e-3, a=a+d_a, e=e+d_e)
sim_ea3.add(primary=sim_ea3.particles[0],m=1e-3, a=1.3,f=1.4)
sim_ea3.move_to_com()
sim_ea4 = rebound.Simulation()
sim_ea4.add(m=1.)
sim_ea4.add(primary=sim_ea4.particles[0],m=1e-3, a=a-d_a, e=e-d_e)
sim_ea4.add(primary=sim_ea4.particles[0],m=1e-3, a=1.3,f=1.4)
sim_ea4.move_to_com()
d = np.zeros((2))
dd = np.zeros((2,2))
for i, t in enumerate(sample_times):
sim.integrate(t)
sim_a.integrate(t)
sim_e.integrate(t)
sim_aa.integrate(t)
sim_ee.integrate(t)
sim_ea1.integrate(t)
sim_ea2.integrate(t)
sim_ea3.integrate(t)
sim_ea4.integrate(t)
rvobs = samples_true[i]
rv = sim.particles[0].vx
rv_da = (sim_a.particles[0].vx-sim.particles[0].vx)/d_a
rv_de = (sim_e.particles[0].vx-sim.particles[0].vx)/d_e
rv_da_de = (sim_ea3.particles[0].vx-sim_ea1.particles[0].vx-sim_ea2.particles[0].vx+sim_ea4.particles[0].vx)/(4.*d_e*d_a)
rv_daa = (sim_a.particles[0].vx-2.*sim.particles[0].vx+sim_aa.particles[0].vx)/(d_a*d_a)
rv_dee = (sim_e.particles[0].vx-2.*sim.particles[0].vx+sim_ee.particles[0].vx)/(d_e*d_e)
dd[0][0] += 2. * rv_daa*(rv-rvobs)
dd[0][0] += 2. * rv_da*rv_da
dd[1][0] += 2. * rv_da_de*(rv-rvobs)
dd[1][0] += 2. * rv_da*rv_de
dd[1][1] += 2. * rv_dee*(rv-rvobs)
dd[1][1] += 2. * rv_de*rv_de
d[0] += 2. * rv_da*(rv-rvobs)
d[1] += 2. * rv_de*(rv-rvobs)
dd[0][1] = dd[1][0]
return d,dd
"""
Explanation: The following function calculates the same as chi2_derivatives, but uses the finite difference approach.
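Concretely, the stencils used are a forward first difference and central second differences,
$$\frac{\partial rv}{\partial a} \approx \frac{rv(a+\delta a)-rv(a)}{\delta a},\qquad \frac{\partial^2 rv}{\partial a^2} \approx \frac{rv(a+\delta a)-2\,rv(a)+rv(a-\delta a)}{\delta a^2},$$
$$\frac{\partial^2 rv}{\partial a\,\partial e} \approx \frac{rv(a+\delta a,e+\delta e)-rv(a+\delta a,e-\delta e)-rv(a-\delta a,e+\delta e)+rv(a-\delta a,e-\delta e)}{4\,\delta a\,\delta e},$$
and analogously for $e$, which is why so many auxiliary simulations have to be set up and integrated.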
End of explanation
"""
N=50
grid_a = np.logspace(-16.,-1.5,N)
grid_e = np.logspace(-16.,-1.5,N)
d_var = chi2_derivatives((0.951,0.12))[1]
dd_var = chi2_derivatives((0.951,0.12))[2]
d_sha = np.zeros((N,N,2))
dd_sha = np.zeros((N,N,3))
for i, a in enumerate(grid_a):
for j, e in enumerate(grid_e):
res = chi2_derivatives_finite_differences((0.951,0.12),d=(a,e))
resabs = np.abs((res[0]-d_var))
#print(("%e %e %e %e") %(a,e,resabs[0]/np.abs(d_var[0]), resabs[1]/np.abs(d_var[1])))
d_sha[i][j][0] = resabs[0]/np.abs(d_var[0])
d_sha[i][j][1] = resabs[1]/np.abs(d_var[1])
dd_sha[i][j][0] = np.abs((res[1][0][0]-dd_var[0][0])/dd_var[0][0])
dd_sha[i][j][1] = np.abs((res[1][1][1]-dd_var[1][1])/dd_var[1][1])
dd_sha[i][j][2] = np.abs((res[1][0][1]-dd_var[0][1])/dd_var[0][1])
"""
Explanation: We now test both approaches for a given set of initial conditions, $a=0.951, e=0.12$. We vary the finite difference parameters $\delta a$ and $\delta e$ on a 50x50 grid (this may take a few minutes to run, depending on the speed of your computer).
End of explanation
"""
fig = plt.figure(figsize=(22,4))
ax = plt.subplot(131)
extent = [min(grid_a),max(grid_a)]
#ax.set_xlim(extent[0],extent[1])
ax.set_xlabel("$\delta a$")
ax.set_ylim(1e-8,1e1)
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_ylabel("relative error")
ax.plot(grid_a, d_sha[:,0,0],label="first derivative")
ax.plot(grid_a, dd_sha[:,0,0],"--",label="second derivative")
legend = plt.legend(loc=2)
ax = plt.gca().add_artist(legend)
ax = plt.subplot(132)
extent = [min(grid_a),max(grid_a)]
#ax.set_xlim(extent[0],extent[1])
ax.set_xlabel("$\delta e$")
ax.set_ylim(1e-8,1e1)
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_ylabel("relative error")
ax.plot(grid_e, d_sha[0,:,1])
ax.plot(grid_e, dd_sha[0,:,1]+2.*grid_e,"--")
ax = plt.subplot(133)
extent = [min(grid_a),max(grid_a),min(grid_e),max(grid_e)]
#ax.set_xlim(extent[0],extent[1])
ax.set_ylabel("$\delta e$")
ax.set_xlabel("$\delta a$")
ax.set_xscale("log")
ax.set_yscale("log")
import matplotlib.ticker as plticker
loc = plticker.LogLocator(numticks=10)
ax.xaxis.set_major_locator(loc)
ax.yaxis.set_major_locator(loc)
im = ax.imshow((dd_sha[:,:,2]), vmax=1e1, vmin=1e-6, cmap="viridis_r", origin="lower",norm=LogNorm(),aspect='auto', extent=extent) #interpolation="none",
cb = plt.colorbar(im, ax=ax)
cb.set_label("relative error of second derivative (cross term)")
plt.savefig('paper_test4.pdf',bbox_inches='tight')
"""
Explanation: We now create a series of plots that show the relative error of the finite difference method for both 1st and 2nd order derivatives of $\chi^2$.
End of explanation
"""
|
hetland/python4geosciences
|
materials/6_xarray.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import cartopy
import cmocean.cm as cmo
import pandas as pd
import xarray as xr
"""
Explanation: xarray
xarray expands the utility of the time series analysis package pandas into more than one dimension. It is actively being developed in conjunction with many other packages under the Pangeo umbrella. For example, you can run with Dask to use multiple cores on your laptop when you are working with data read in with xarray.
NetCDF files and other formats
NetCDF is a binary storage format for many different kinds of rectangular data. Examples include atmosphere and ocean model output, satellite images, and timeseries data. NetCDF files are intended to be device independent, and the dataset may be queried in a fast, random-access way. More information about NetCDF files can be found here. The CF conventions are used for storing NetCDF data for earth system models, so that programs can be aware of the coordinate axes used by the data cubes.
We will read in netCDF files using xarray; a variety of other file formats will work too.
End of explanation
"""
ds = xr.open_dataset('../data/sst.mnmean.v4.nc')
# look at overview of metadata for file
ds
# metadata for sst variable
ds['sst']
# look at shape of sst variable
ds.sst.shape, ds.sst.units
# view any of the metadata
ds.history
"""
Explanation: Sea surface temperature example
An example NetCDF file containing monthly means of sea surface temperature over 160 years can be found here. We'll use the xarray package to read this file, which has already been saved into the data directory.
One of the useful things about xarray is that it doesn't deal with the numbers in the file until it has to. This is called "lazy evaluation". It will note the operations you want done, but won't actually perform them until it needs to spit out numbers.
Viewing metadata is instantaneous since no calculations need to be done, even if the file is huge.
An xarray data object is a "dataset" or "data array".
End of explanation
"""
ds.lat.values
"""
Explanation: Exercise
Inspect the xarray dataset object.
What are the units of the time variable?
What are the dimensions of the latitude variable?
Extract numbers
Note that you can always extract the actual numbers from a call to your dataset by appending .values. Be careful when you use this since it might be a lot of information. Always check the metadata without using .values first to see how large the arrays you'll be reading in are.
End of explanation
"""
ds.coords
"""
Explanation: Select data
Analogously to how we selected data from pandas dataframes using .loc and .iloc, we extract data from xarray datasets using .sel and .isel. The commands are much longer when using xarray because we have multi-dimensional data now.
When files are read in, data arrays are read in as variables and the coordinates that they are in reference to are called "coordinates". For example, in the present dataset, we have the following coordinates:
End of explanation
"""
ds.data_vars
"""
Explanation: We also have the following data variables, which are the main data of the file:
End of explanation
"""
ds.sst.coords
"""
Explanation: This means that we should subselect from the data variable "sst" with respect to the coordinates. We can select on anywhere from none up to all of the coordinates that the sst variable depends on. As we can see in the following cell, the variable "sst" has coordinates "lat", "lon", and "time".
End of explanation
"""
ds.sst.sel(time='1954-6-1')
"""
Explanation: We'll start with a small example: let's choose a single time to plot. Here is how to choose a specific time:
End of explanation
"""
proj = cartopy.crs.Mollweide(central_longitude=180)
pc = cartopy.crs.PlateCarree()
fig = plt.figure(figsize=(14,6))
ax = fig.add_subplot(111, projection=proj)
mappable = ax.contourf(ds.lon, ds.lat, ds.sst.sel(time='1954-6-1'), 10, cmap=cmo.thermal, transform=pc)
"""
Explanation: Now let's plot it! Note that we are still using cartopy to plot our maps and we therefore still need to input the projection information with the "transform" keyword argument.
End of explanation
"""
ds['sst'].sel(time='1954-05-23', method='nearest')
"""
Explanation: You can also select a "nearest" point in time if you aren't sure exactly when your time slices are:
End of explanation
"""
ds.sst.sel(time=slice('1900','1950'), lon=slice(-100+360, -80+360), lat=slice(30,16))
"""
Explanation: We can either select by coordinate value, such as in the following cell where we choose all times between (and including) the years 1900 and 1950, longitudes between 260 and 280 degrees, and latitudes between 16 and 30 degrees.
End of explanation
"""
ds.sst.isel(time=0, lon=0, lat=0)
"""
Explanation: .... or by index, such as in the following cell where we select the first index of data in terms of with time, longitude, and latitude:
End of explanation
"""
ds.sst.mean('time')
ds.sst.sum(('lat','lon'))
"""
Explanation: Calculations
You can do basic operations using xarray, such as taking the mean. You can pass the dimension or dimensions you want to take the operation over in the function call.
End of explanation
"""
loc = 'http://apdrc.soest.hawaii.edu/dods/public_data/Reanalysis_Data/NOAA_20th_Century/V2c/daily/monolevel/cprat'
ds2 = xr.open_dataset(loc)
ds2
ds2['cprat'].long_name
proj = cartopy.crs.Sinusoidal(central_longitude=180)
fig = plt.figure(figsize=(14,6))
ax = fig.add_subplot(111, projection=proj)
ax.coastlines(linewidth=0.25)
# use the last time available
mappable = ax.contourf(ds2.lon, ds2.lat, ds2.cprat.isel(time=-1), 20, cmap=cmo.tempo, transform=pc)
ax.set_title(pd.Timestamp(ds2.time[-1].values).isoformat()[:10]) # or use .strftime instead of .isoformat
fig.colorbar(mappable).set_label('%s' % ds2['cprat'].long_name)
"""
Explanation: Loading remote data
THREDDS example. Loading data from a remote dataset.
The netCDF library can be compiled such that it is 'THREDDS enabled', which means that you can put in a URL instead of a filename. This allows access to large remote datasets, without having to download the entire file. You can find a large list of datasets served via an OpenDAP/THREDDs server here.
Let's look at the ESRL/NOAA 20th Century Reanalysis – Version 2. You can access the data by the following link (this is the link of the .dds and .das files without the extension.):
End of explanation
"""
ds.sst.sel(time='1954-6-1').plot()#transform=pc) # the plot's projection
proj = cartopy.crs.Mollweide(central_longitude=180)
fig = plt.figure(figsize=(14,6))
ax = fig.add_subplot(111, projection=proj)
ds.sst.sel(time='1954-6-1').plot(transform=pc) # the plot's projection
"""
Explanation: Exercise
Pick another variable from this dataset. Inspect and plot the variable in a similar manner to precipitation.
Find another dataset on a THREDDS server at SOEST (or elsewhere), pick a variable, and plot it.
Note that you can also just plot against the included coordinates with built-in convenience functions (this is analogous to pandas which was for one dimension). The sst is being plotted against longitude and latitude, which is flattening it out.
End of explanation
"""
seasonal_mean = ds.groupby('time.season').mean('time')
seasonal_mean
"""
Explanation: GroupBy
Like in pandas, we can use the groupby method to do some neat things. Let's group by season and save a new file.
End of explanation
"""
fname = 'test.nc'
seasonal_mean.to_netcdf(fname)
xr.open_dataset(fname)
"""
Explanation: Saving NetCDF files
Creating netCDF files is tedious if doing it from scratch, but it is very easy when starting from data that has been read in using xarray.
End of explanation
"""
|
cliburn/sta-663-2017
|
scratch/Test17.ipynb
|
mit
|
[x*x for x in range(3)]
"""
Explanation: Working with large data sets
Lazy evaluation, pure functions and higher order functions
Lazy and eager evaluation
A list comprehension is eager.
End of explanation
"""
(x*x for x in range(3))
"""
Explanation: A generator expression is lazy.
End of explanation
"""
g = (x*x for x in range(3))
next(g)
next(g)
next(g)
next(g)
"""
Explanation: You can use generators as iterators.
End of explanation
"""
for i in g:
print(i, end=", ")
g = (x*x for x in range(3))
for i in g:
print(i, end=", ")
"""
Explanation: A generator is single use.
End of explanation
"""
list(x*x for x in range(3))
"""
Explanation: The list constructor forces evaluation of the generator.
End of explanation
"""
def eager_updown(n):
xs = []
for i in range(n):
xs.append(i)
for i in range(n, -1, -1):
xs.append(i)
return xs
eager_updown(3)
"""
Explanation: An eager function.
End of explanation
"""
def lazy_updown(n):
for i in range(n):
yield i
for i in range(n, -1, -1):
yield i
lazy_updown(3)
list(lazy_updown(3))
"""
Explanation: A lazy generator.
End of explanation
"""
def pure(alist):
return [x*x for x in alist]
"""
Explanation: Pure and impure functions
A pure function is like a mathematical function. Given the same inputs, it always returns the same output, and has no side effects.
End of explanation
"""
def impure(alist):
for i in range(len(alist)):
alist[i] = alist[i]*alist[i]
return alist
xs = [1,2,3]
ys = pure(xs)
print(xs, ys)
ys = impure(xs)
print(xs, ys)
"""
Explanation: An impure function has side effects.
End of explanation
"""
def f1(n):
return n//2 if n % 2==0 else n*3+1
def f2(n):
return np.random.random(n)
def f3(n):
n = 23
return n
def f4(a, n=[]):
n.append(a)
return n
"""
Explanation: Quiz
Say if the following functions are pure or impure.
End of explanation
"""
list(map(f1, range(10)))
list(filter(lambda x: x % 2 == 0, range(10)))
from functools import reduce
reduce(lambda x, y: x + y, range(10), 0)
reduce(lambda x, y: x + y, [[1,2], [3,4], [5,6]], [])
"""
Explanation: Higher order functions
End of explanation
"""
import operator as op
reduce(op.mul, range(1, 6), 1)
list(map(op.itemgetter(1), [[1,2,3],[4,5,6],[7,8,9]]))
"""
Explanation: Using the operator module
The operator module provides all the Python operators as functions.
End of explanation
"""
import itertools as it
list(it.combinations(range(1,6), 3))
"""
Explanation: Using itertools
End of explanation
"""
list(it.product([0,1], repeat=3))
list(it.starmap(op.add, zip(range(5), range(5))))
list(it.takewhile(lambda x: x < 3, range(10)))
data = sorted('the quick brown fox jumps over the lazy dog'.split(), key=len)
for k, g in it.groupby(data, key=len):
print(k, list(g))
"""
Explanation: Generate all Boolean combinations
End of explanation
"""
import numpy as np
import toolz as tz
list(tz.partition(3, range(10)))
list(tz.partition(3, range(10), pad=None))
n = 30
dna = ''.join(np.random.choice(list('ACTG'), n))
dna
tz.frequencies(tz.sliding_window(2, dna))
"""
Explanation: Using toolz
End of explanation
"""
from toolz import curried as c
tz.pipe(
dna,
c.sliding_window(2), # using curry
c.frequencies,
)
composed = tz.compose(
c.frequencies,
c.sliding_window(2),
)
composed(dna)
"""
Explanation: Using pipes and the curried namespace
End of explanation
"""
m = 10000
n = 300
dnas = (''.join(np.random.choice(list('ACTG'), n, p=[.1, .2, .3, .4]))
for i in range(m))
dnas
tz.merge_with(sum,
tz.map(
composed,
dnas
)
)
"""
Explanation: Processing many sets of DNA strings without reading into memory
End of explanation
"""
|
clausherther/public
|
Dirichlet Multinomial Example.ipynb
|
cc0-1.0
|
y = np.asarray([20, 21, 17, 19, 17, 28])
k = len(y)
p = 1/k
n = y.sum()
n, p
"""
Explanation: Dice, Polls & Dirichlet Multinomials
As part of a longer term project to learn Bayesian Statistics, I'm currently reading Bayesian Data Analysis, 3rd Edition by Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin, commonly known as BDA3.
Although I've been using Bayesian statistics and probabilistic programming languages, like PyMC3, in projects for the last year or so, this book forces me to go beyond a pure practitioner's approach to modeling, while still delivering very practical value.
Below are a few take aways from the earlier chapters in the book I found interesting. They are meant to hopefully inspire others to learn about Bayesian statistics, without trying to be overly formal about the math. If something doesn't look 100% to the trained mathematicians in the room, please let me know, or just squint a little harder. ;)
We'll cover:
- Some common conjugate distributions
- An example of the Dirichlet-Multinomial distribution using dice rolls
- Two examples involing polling data from BDA3
Conjugate Distributions
In Chapter 2 of the book, the authors introduce several choices for prior probability distributions, along with the concept of conjugate distributions in section 2.4.
From Wikipedia
In Bayesian probability theory, if the posterior distributions p(θ | x) are in the same probability distribution family as the prior probability distribution p(θ), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function
John Cook has this helpful diagram on his website that shows some common families of conjugate distributions:
<img src=https://www.johndcook.com/conjugate_prior_diagram.png width="300">
Conjugate distributions are a very important concept in probability theory, owing to a large degree to some nice mathematical properties that make computing the posteriors more tractable. Even with increasingly better computational tools, such as MCMC, models based on conjugate distributions are advantageous.
Beta-Binomial
One of the better-known examples of conjugate distributions is the Beta-Binomial distribution, which is often used to model series of coin flips (the ever-present topic in posts about probability). While the $Binomial$ distribution represents the probability of success in a series of Bernoulli trials, the $Beta$ distribution here represents the prior probability distribution of the probability of success for each trial.
Thus, the probability $p$ of a coin landing on head is modeled to be $Beta$ distributed (with parameters $\alpha$ and $\beta$), while the likelihood of heads and tails is assumed to follow a $Binomial$ distribution with parameters $n$ (representing the number of flips) and the $Beta$ distributed $p$, thus creating the link.
$$p \sim Beta(\alpha, \beta)$$
$$y \sim Binomial(n, p)$$
Gamma-Poisson
Another often-used conjugate distribution is the Gamma-Poisson distribution, so named because the rate parameter $\lambda$ that parameterizes the Poisson distributed is modeled as a Gamma distribution:
$$\lambda \sim Gamma(k, \theta)$$
$$y \sim Poisson(\lambda)$$
While the discrete $Poisson$ distribution is often used in applications with count data, such as store customers, eCommerce orders or website visits, the $Gamma$ distribution serves as a useful distribution to model the rate at which these events occur ($\lambda$), since the $Gamma$ distribution models positive continuous values only but is otherwise quite flexible:
<img src=https://upload.wikimedia.org/wikipedia/commons/e/e6/Gamma_distribution_pdf.svg width="500">
Dirichlet-Multinomial
A perhaps more interesting and seemingly less talked-about example of conjugate distributions is the Dirichlet-Multinomial distribution, introduced in chapter 3 of BDA3.
One way to think about the $Dirichlet-Multinomial$ distribution is that while the $Multinomial$ (-> multiple choices) distribution is a generalization of the $Binomial$ distribution (-> binary choice), the $Dirichlet$ distribution is a generalization of the $Beta$ distribution. That is, while the $Beta$ distribution models the distribution of a single probability $p$, the $Dirichlet$ models the probabilities of multiple, mutually exclusive choices, parameterized by $a$, which is referred to as the concentration parameter and represents the weights for each choice (we'll see more on that later).
In other words, think coins for $Beta-Binomial$ and dice for $Dirichlet-Multinomial$.
$$\theta \sim Dirichlet(a)$$
$$y \sim Multinomial(n, \theta)$$
In the wild, we might encounter the Dirichlet distribution these days mostly in the context of topic modeling in natural language processing, where it's commonly used as part of a Latent Dirichlet Allocation (or LDA) model, which is a fancy way of saying we're trying to figure out the probability of an article belonging to a certain topic given its text.
However, for our purposes, let's look at the Dirichlet-Multinomial in the context of multiple choices, and let's start by throwing dice as a motivating example:
Throwing Dice
Let's first create some data representing 122 rolls of a six-sided die, where $p$ represents the expected probability for each side, $1/6$
End of explanation
"""
sns.barplot(x=np.arange(1, k+1), y=y);
"""
Explanation: Just looking at a simple bar plot, we suspect that we might not be dealing with a fair die!
However, students of Bayesian statistics that we are, we'd like to go further and quantify our uncertainty in the fairness of the die and calculate the probability that someone slipped us loaded dice.
End of explanation
"""
n, y
with pm.Model() as dice_model:
# initializes the Dirichlet distribution with a uniform prior:
a = np.ones(k)
theta = pm.Dirichlet("theta", a=a)
# Since theta[5] will hold the posterior probability of rolling a 6
# we'll compare this to the reference value p = 1/6
six_bias = pm.Deterministic("six_bias", theta[k-1] - p)
results = pm.Multinomial("results", n=n, p=theta, observed=y)
dice_model
"""
Explanation: Let's set up a simple model in PyMC3 that not only calculates the posterior probability for $theta$ (i.e. the probability for each side of the die), but also estimates the bias for throwing a $6$.
We will use a Deterministic variable, in addition to our unobserved (theta) and observed (results) variables.
For the prior on $theta$, we'll use a non-informative uniform distribution, by initializing the $Dirichlet$ prior with a series of 1s for the parameter a, one for each of the k possible outcomes. This is similar to initializing a $Beta$ distribution as $Beta(1, 1)$, which corresponds to the Uniform distribution.
End of explanation
"""
pm.model_to_graphviz(dice_model)
dice_model.check_test_point()
"""
Explanation: Starting with version 3.5, PyMC3 includes a handy function to plot models in plate notation:
End of explanation
"""
with dice_model:
dice_trace = pm.sample(draws=1000)
"""
Explanation: Let's draw 1,000 samples from the joint posterior using the default NUTS sampler:
End of explanation
"""
with dice_model:
pm.traceplot(dice_trace, combined=True, lines={"theta": p})
"""
Explanation: From the traceplot, we can already see that one of the $theta$ posteriors isn't in line with the rest:
End of explanation
"""
axes = pm.plot_posterior(dice_trace, varnames=["theta"], ref_val=np.round(p, 3))
for i, ax in enumerate(axes):
ax.set_title(f"{i+1}")
"""
Explanation: We'll plot the posterior distributions for each $theta$ and compare them to our reference value $p$ to see if the 95% HPD (Highest Posterior Density) interval includes $p = 1/6$.
End of explanation
"""
ax = pm.plot_posterior(dice_trace, varnames=["six_bias"], ref_val=[0])
ax.set_title(f"P(Theta[Six] - {p:.2%})");
"""
Explanation: We can clearly see that the HPD for the posterior probability for rolling a $6$ barely includes what we'd expect from a fair die.
To be more precise, let's plot the probability of our die being biased on $6$, by comparing $theta[Six]$ to $p$
End of explanation
"""
six_bias_perc = len(dice_trace["six_bias"][dice_trace["six_bias"]>0])/len(dice_trace["six_bias"])
print(f'P(Six is biased) = {six_bias_perc:.2%}')
"""
Explanation: Lastly, we can calculate the probability that the die is biased on $6$ by calculating the density to the right of our reference line at $0$:
End of explanation
"""
y = np.asarray([727, 583, 137])
n = y.sum()
k = len(y)
n, k
"""
Explanation: Better get some new dice...!
Polling #1
Let's turn our review of the Dirichlet-Multinomial distribution to another example, concerning polling data.
In section 3.4 of BDA3 on multivariate models, and specifically the section on Multinomial Models for Categorical Data, the authors include a somewhat dated example of polling data from the 1988 Presidential race between George H.W. Bush and Michael Dukakis.
Here's the setup:
1,447 likely voters were surveyed about their preferences in the upcoming presidential election
Their responses were:
Bush: 727
Dukakis: 583
Other: 137
What is the probability that more people will vote for Bush over Dukakis?
i.e. what is the difference in support for the two major candidates?
We set up the data, where $k$ represents the number of choices the respondents had:
End of explanation
"""
with pm.Model() as polling_model:
# initializes the Dirichlet distribution with a uniform prior:
a = np.ones(k)
theta = pm.Dirichlet("theta", a=a)
bush_dukakis_diff = pm.Deterministic("bush_dukakis_diff", theta[0] - theta[1])
likelihood = pm.Multinomial("likelihood", n=n, p=theta, observed=y)
pm.model_to_graphviz(polling_model)
with polling_model:
polling_trace = pm.sample(draws=1000)
with polling_model:
pm.traceplot(polling_trace, combined=True)
"""
Explanation: We, again, set up a simple Dirichlet-Multinomial model and include a Deterministic variable that calculates the metric of interest - the difference in probability of respondents for Bush vs. Dukakis.
End of explanation
"""
_, ax = plt.subplots(1,1, figsize=(10, 6))
sns.distplot(polling_trace["bush_dukakis_diff"], bins=20, ax=ax, kde=False, fit=stats.beta)
ax.axvline(0, c='g', linestyle='dotted')
ax.set_title("% Difference Bush vs Dukakis")
ax.set_xlabel("% Difference");
"""
Explanation: Looking at the % difference between respondents for Bush vs Dukakis, we can see that most of the density is greater than 0%, signifying a strong advantage for Bush in this poll.
We've also fit a $Beta$ distribution to this data via scipy.stats, and we can see that the posterior of the difference of the 2 $theta$ values is a pretty good match.
End of explanation
"""
bush_dukakis_diff_perc = len(polling_trace["bush_dukakis_diff"][polling_trace["bush_dukakis_diff"]>0])/len(polling_trace["bush_dukakis_diff"])
print(f'P(More Responses for Bush) = {bush_dukakis_diff_perc:.0%}')
"""
Explanation: Percentage of samples with bush_dukakis_diff > 0:
End of explanation
"""
data = pd.DataFrame([
{"candidate": "bush", "pre": 294, "post": 288},
{"candidate": "dukakis", "pre": 307, "post": 332},
{"candidate": "other", "pre": 38, "post": 10}
], columns=["candidate", "pre", "post"])
data
"""
Explanation: Polling #2
As an extension to the previous model, the authors of BDA include an exercise in chapter 3.10 (Exercise 2) that presents us with polling data from the 1988 Presidential race, taken before and after one of the debates.
Comparison of two multinomial observations: on September 25, 1988, the evening of a
presidential campaign debate, ABC News conducted a survey of registered voters in the
United States; 639 persons were polled before the debate, and 639 different persons were
polled after. The results are displayed in Table 3.2. Assume the surveys are independent
simple random samples from the population of registered voters. Model the data with
two different multinomial distributions. For $j = 1, 2$, let $\alpha_j$ be the proportion of voters
who preferred Bush, out of those who had a preference for either Bush or Dukakis at
the time of survey $j$. Plot a histogram of the posterior density for $\alpha_2 − \alpha_1$. What is the
posterior probability that there was a shift toward Bush?
Let's copy the data from the exercise and model the problem as a probabilistic model, again using PyMC3:
End of explanation
"""
y = data[["pre", "post"]].T.values
y
"""
Explanation: Convert to 2x3 array
End of explanation
"""
n = y.sum(axis=1)
n
"""
Explanation: Number of respondents in each survey
End of explanation
"""
m = y[:, :2].sum(axis=1)
m
"""
Explanation: Number of respondents for the 2 major candidates in each survey
End of explanation
"""
n_debates, n_candidates = y.shape
n_debates, n_candidates
"""
Explanation: For this model, we'll need to set up the priors slightly differently. Instead of 1 set of thetas, we need 2, one for each survey (pre/post debate).
To do that without creating specific pre/post versions of each variable, we'll take advantage of PyMC3's shape parameter, available for most (all?) distributions.
In this case, we'll need a 2-dimensional shape parameter, representing the number of debates n_debates and the number of choices in candidates n_candidates
End of explanation
"""
with pm.Model() as polling_model_debates:
# initializes the Dirichlet distribution with a uniform prior:
shape = (n_debates, n_candidates)
a = np.ones(shape)
# This creates a separate Dirichlet distribution for each debate
# where sum of probabilities across candidates = 100% for each debate
theta = pm.Dirichlet("theta", a=a, shape=shape)
# get the "Bush" theta for each debate, at index=0
bush_pref = pm.Deterministic("bush_pref", theta[:, 0] * n / m)
# to calculate probability that support for Bush shifted from debate 1 [0] to 2 [1]
bush_shift = pm.Deterministic("bush_shift", bush_pref[1]-bush_pref[0])
# because of the shapes of the inputs, this essentially creates 2 multinomials,
# one for each debate
responses = pm.Multinomial("responses", n=n, p=theta, observed=y)
"""
Explanation: Thus, we need to initialize a Dirichlet distribution prior with shape (2,3) and then refer to the relevant parameters by index where needed.
End of explanation
"""
for v in polling_model_debates.unobserved_RVs:
print(v, v.tag.test_value.shape)
"""
Explanation: For models with multi-dimensional shapes, it's always good to check the shapes of the various parameters before sampling:
End of explanation
"""
pm.model_to_graphviz(polling_model_debates)
"""
Explanation: The plate notation visual can also help with that:
End of explanation
"""
with polling_model_debates:
polling_trace_debates = pm.sample(draws=3000, tune=1500)
with polling_model_debates:
pm.traceplot(polling_trace_debates, combined=True)
"""
Explanation: Let's sample with a slightly higher number of draws and tuning steps:
End of explanation
"""
s = ["pre", "post"]
candidates = data["candidate"].values
pd.DataFrame(polling_trace_debates["theta"].mean(axis=0), index=s, columns=candidates)
"""
Explanation: We'll take a look at the means of the posteriors for theta, indicating the % of support for each candidate pre & post debate:
End of explanation
"""
pd.DataFrame(polling_trace_debates["bush_pref"].mean(axis=0), index=s, columns=["bush_pref"])
"""
Explanation: Just from the means, we can see that the number of Bush supporters has likely decreased post debate from 48.8% to 46.3% (as a % of supporters of the 2 major candidates):
End of explanation
"""
_, ax = plt.subplots(2,1, figsize=(10, 10))
sns.distplot(polling_trace_debates["bush_pref"][:,0], hist=False, ax=ax[0], label="Pre-Debate")
sns.distplot(polling_trace_debates["bush_pref"][:,1], hist=False, ax=ax[0], label="Post-Debate")
ax[0].set_title("% Responses for Bush vs Dukakis")
ax[0].set_xlabel("% Responses");
sns.distplot(polling_trace_debates["bush_shift"], hist=True, ax=ax[1], label="P(Bush Shift)")
ax[1].axvline(0, c='g', linestyle='dotted')
ax[1].set_title("% Shift Pre/Post Debate")
ax[1].set_xlabel("% Shift");
"""
Explanation: Let's compare the results visually, by plotting the posterior distributions of the pre/post debate values for % responses for Bush and the posterior for pre/post difference in Bush supporters:
End of explanation
"""
perc_shift = (len(polling_trace_debates["bush_shift"][polling_trace_debates["bush_shift"] > 0])
/len(polling_trace_debates["bush_shift"])
)
print(f'P(Shift Towards Bush) = {perc_shift:.1%}')
"""
Explanation: From the second plot, we can already see that a large portion of the posterior density is below 0, but let's be precise and actually calculate the probability that support shifted towards Bush after the debate:
End of explanation
"""
|
KrisCheng/ML-Learning
|
archive/MOOC/Deeplearning_AI/NeuralNetworksandDeepLearning/BuildingyourDeepNeuralNetworkStepbyStep/Deep+Neural+Network+-+Application+v3.ipynb
|
mit
|
import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v2 import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
"""
Explanation: Deep Neural Network for Image Classification: Application
When you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course!
You will use the functions you'd implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation.
After this assignment you will be able to:
- Build and apply a deep neural network to supervised learning.
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- PIL and scipy are used here to test your model with your own picture at the end.
- dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
End of explanation
"""
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
"""
Explanation: 2 - Dataset
You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform better!
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labelled as cat (1) or non-cat (0)
- a test set of m_test images labelled as cat and non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).
Let's get more familiar with the dataset. Load the data by running the cell below.
End of explanation
"""
# Example of a picture
index = 2
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
"""
Explanation: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
End of explanation
"""
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
"""
Explanation: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
<img src="images/imvectorkiank.png" style="width:450px;height:300px;">
<caption><center> <u>Figure 1</u>: Image to vector conversion. <br> </center></caption>
End of explanation
"""
### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
"""
Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (n_x, number of examples)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- dimensions of the layers (n_x, n_h, n_y)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- If set to True, this will print the cost every 100 iterations
Returns:
parameters -- a dictionary containing W1, W2, b1, and b2
"""
np.random.seed(1)
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
(n_x, n_h, n_y) = layers_dims
# Initialize parameters dictionary, by calling one of the functions you'd previously implemented
### START CODE HERE ### (≈ 1 line of code)
parameters = initialize_parameters(n_x, n_h, n_y)
### END CODE HERE ###
# Get W1, b1, W2 and b2 from the dictionary parameters.
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2".
### START CODE HERE ### (≈ 2 lines of code)
A1, cache1 = linear_activation_forward(X, W1, b1, activation = "relu")
A2, cache2 = linear_activation_forward(A1, W2, b2, activation = "sigmoid")
### END CODE HERE ###
# Compute cost
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(A2, Y)
### END CODE HERE ###
# Initializing backward propagation
dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
# Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
### START CODE HERE ### (≈ 2 lines of code)
dA1, dW2, db2 = linear_activation_backward(dA2, cache2, activation = "sigmoid")
dA0, dW1, db1 = linear_activation_backward(dA1, cache1, activation = "relu")
### END CODE HERE ###
# Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
grads['dW1'] = dW1
grads['db1'] = db1
grads['dW2'] = dW2
grads['db2'] = db2
# Update parameters.
### START CODE HERE ### (approx. 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Retrieve W1, b1, W2, b2 from parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
        # Print the cost every 100 training examples
if print_cost and i % 100 == 0:
print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
"""
Explanation: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
3 - Architecture of your model
Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.
You will build two different models:
- A 2-layer neural network
- An L-layer deep neural network
You will then compare the performance of these models, and also try out different values for $L$.
Let's look at the two architectures.
3.1 - 2-layer neural network
<img src="images/2layerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT. </center></caption>
<u>Detailed Architecture of figure 2</u>:
- The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$.
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
- You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.
- You then repeat the same process.
- You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias).
- Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat.
3.2 - L-layer deep neural network
It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation:
<img src="images/LlayerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID</center></caption>
<u>Detailed Architecture of figure 3</u>:
- The input is a (64,64,3) image which is flattened to a vector of size (12288,1).
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.
- Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.
- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat.
3.3 - General methodology
As usual you will follow the Deep Learning methodology to build the model:
1. Initialize parameters / Define hyperparameters
2. Loop for num_iterations:
a. Forward propagation
b. Compute cost function
c. Backward propagation
d. Update parameters (using parameters, and grads from backprop)
3. Use trained parameters to predict labels
Let's now implement those two models!
4 - Two-layer neural network
Question: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you may need and their inputs are:
python
def initialize_parameters(n_x, n_h, n_y):
...
return parameters
def linear_activation_forward(A_prev, W, b, activation):
...
return A, cache
def compute_cost(AL, Y):
...
return cost
def linear_activation_backward(dA, cache, activation):
...
return dA_prev, dW, db
def update_parameters(parameters, grads, learning_rate):
...
return parameters
End of explanation
"""
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
"""
Explanation: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
End of explanation
"""
predictions_train = predict(train_x, train_y, parameters)
"""
Explanation: Expected Output:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.6930497356599888 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.6464320953428849 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.048554785628770206 </td>
</tr>
</table>
Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.
Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
End of explanation
"""
predictions_test = predict(test_x, test_y, parameters)
"""
Explanation: Expected Output:
<table>
<tr>
<td> **Accuracy**</td>
<td> 1.0 </td>
</tr>
</table>
End of explanation
"""
### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 5-layer model
# GRADED FUNCTION: L_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
"""
    Implements an L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
    X -- data, numpy array of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimization loop
print_cost -- if True, it prints the cost every 100 steps
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(1)
costs = [] # keep track of cost
# Parameters initialization.
### START CODE HERE ###
parameters = initialize_parameters_deep(layers_dims)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
### START CODE HERE ### (≈ 1 line of code)
AL, caches = L_model_forward(X, parameters)
### END CODE HERE ###
# Compute cost.
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(AL, Y)
### END CODE HERE ###
# Backward propagation.
### START CODE HERE ### (≈ 1 line of code)
grads = L_model_backward(AL, Y, caches)
### END CODE HERE ###
# Update parameters.
### START CODE HERE ### (≈ 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
        # Print the cost every 100 training examples
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
"""
Explanation: Expected Output:
<table>
<tr>
<td> **Accuracy**</td>
<td> 0.72 </td>
</tr>
</table>
Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting.
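A rough sketch of the idea, purely for illustration (one_training_step, dev_x and dev_y are hypothetical placeholders for a single gradient-descent step and a held-out validation set; this is not part of the graded code):
python
import copy
best_cost, best_parameters, patience = float('inf'), None, 0
for i in range(num_iterations):
    parameters = one_training_step(parameters)      # hypothetical helper: one forward/backward/update pass
    val_AL, _ = L_model_forward(dev_x, parameters)  # evaluate on the held-out set
    val_cost = compute_cost(val_AL, dev_y)
    if val_cost < best_cost:                        # keep the best parameters seen so far
        best_cost, best_parameters, patience = val_cost, copy.deepcopy(parameters), 0
    else:
        patience += 1
        if patience >= 100:                         # no improvement for a while -> stop early
            break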
Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.
5 - L-layer Neural Network
Question: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: [LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID. The functions you may need and their inputs are:
python
def initialize_parameters_deep(layer_dims):
...
return parameters
def L_model_forward(X, parameters):
...
return AL, caches
def compute_cost(AL, Y):
...
return cost
def L_model_backward(AL, Y, caches):
...
return grads
def update_parameters(parameters, grads, learning_rate):
...
return parameters
End of explanation
"""
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
"""
Explanation: You will now train the model as a 5-layer neural network.
Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
End of explanation
"""
pred_train = predict(train_x, train_y, parameters)
"""
Explanation: Expected Output:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.771749 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.672053 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.092878 </td>
</tr>
</table>
End of explanation
"""
pred_test = predict(test_x, test_y, parameters)
"""
Explanation: <table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.985645933014
</td>
</tr>
</table>
End of explanation
"""
print_mislabeled_images(classes, test_x, test_y, pred_test)
"""
Explanation: Expected Output:
<table>
<tr>
<td> **Test Accuracy**</td>
<td> 0.8 </td>
</tr>
</table>
Congrats! It seems that your 5-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set.
This is good performance for this task. Nice job!
Though in the next course on "Improving deep neural networks" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course).
6) Results Analysis
First, let's take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images.
End of explanation
"""
## START CODE HERE ##
my_image = "my_image.jpg" # change this to the name of your image file
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))
my_predicted_image = predict(my_image, my_label_y, parameters)
plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
"""
Explanation: A few types of images the model tends to do poorly on include:
- Cat body in an unusual position
- Cat appears against a background of a similar color
- Unusual cat color and species
- Camera Angle
- Brightness of the picture
- Scale variation (cat is very large or small in image)
7) Test with your own image (optional/ungraded exercise)
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
End of explanation
"""
|
brooksandrew/simpleblog
|
_ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb
|
mit
|
import mplleaflet
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter
# can be found in https://github.com/brooksandrew/postman_problems_examples
from osm2nx import read_osm, haversine
from graph import contract_edges, create_rpp_edgelist
from postman_problems.tests.utils import create_mock_csv_from_dataframe
from postman_problems.solver import rpp, cpp
from postman_problems.stats import calculate_postman_solution_stats
"""
Explanation: This problem originated from a blog post I wrote for DataCamp on graph optimization here. The algorithm I sketched out there for solving the Chinese Problem on the Sleeping Giant state park trail network has since been formalized into the postman_problems python library. I've also added the Rural Postman solver that is implemented here.
So the three main enhancements in this post from the original DataCamp article and my second iteration published here updating to networkx 2.0 are:
1. OpenStreetMap for graph data and visualization.
2. Implementing the Rural Postman algorithm to consider optional edges.
3. Leveraging the postman_problems library.
This code, notebook and data for this post can be found in the postman_problems_examples repo.
The motivation and background around this problem is written up more thoroughly in the previous posts and postman_problems.
Table of Contents
Table of Contents
{:toc}
End of explanation
"""
# load OSM to a directed NX
g_d = read_osm('sleepinggiant.osm')
# create an undirected graph
g = g_d.to_undirected()
"""
Explanation: Create Graph from OSM
End of explanation
"""
g.add_edge('2318082790', '2318082832', id='white_horseshoe_fix_1')
"""
Explanation: Adding edges that don't exist on OSM, but should
End of explanation
"""
for e in g.edges(data=True):
e[2]['distance'] = haversine(g.node[e[0]]['lon'],
g.node[e[0]]['lat'],
g.node[e[1]]['lon'],
g.node[e[1]]['lat'])
"""
Explanation: Adding distance to OSM graph
Using the haversine formula to calculate distance between each edge.
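The helper itself comes from the repo's osm2nx module; a minimal sketch of the formula it presumably implements (returning metres, which is what the distance figures below assume) looks roughly like this:
python
from math import radians, sin, cos, asin, sqrt

def haversine_sketch(lon1, lat1, lon2, lat2):
    # great-circle distance between two (lon, lat) points in metres
    lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
    return 2 * 6371000 * asin(sqrt(a))  # 6,371,000 m is an approximate Earth radius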
End of explanation
"""
g_t = g.copy()
for e in g.edges(data=True):
# remove non trails
name = e[2]['name'] if 'name' in e[2] else ''
if ('Trail' not in name.split()) or (name is None):
g_t.remove_edge(e[0], e[1])
# remove non Sleeping Giant trails
elif name in [
'Farmington Canal Linear Trail',
'Farmington Canal Heritage Trail',
'Montowese Trail',
'(white blazes)']:
g_t.remove_edge(e[0], e[1])
# cleaning up nodes left without edges
for n in nx.isolates(g_t.copy()):
g_t.remove_node(n)
"""
Explanation: Create graph of required trails only
A simple heuristic with a couple tweaks is all we need to create the graph with required edges:
Keep any edge with 'Trail' in the name attribute.
Manually remove the handful of trails that are not part of the required Giant Master route.
End of explanation
"""
fig, ax = plt.subplots(figsize=(1,8))
pos = {k: (g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes()}
nx.draw_networkx_edges(g_t, pos, width=2.5, edge_color='black', alpha=0.7)
mplleaflet.save_html(fig, 'maps/sleepinggiant_trails_only.html')
"""
Explanation: Viz Sleeping Giant Trails
All trails required for the Giant Master:
End of explanation
"""
edge_ids_to_add = [
'223082783',
'223077827',
'40636272',
'223082785',
'222868698',
'223083721',
'222947116',
'222711152',
'222711155',
'222860964',
'223083718',
'222867540',
'white_horseshoe_fix_1'
]
edge_ids_to_remove = [
'17220599'
]
"""
Explanation: <iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/sleepinggiant_trails_only.html" height="400" width="750"></iframe>
Connect Edges
In order to run the RPP algorithm from postman_problems, the required edges of the graph must form a single connected component. We're almost there with the Sleeping Giant trail map as-is, so we'll just connect a few components manually.
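A quick way to see how close we are is to look at the component sizes (a small sanity check, not part of the original workflow):
python
component_sizes = sorted((len(c) for c in nx.connected_components(g_t)), reverse=True)
print(len(component_sizes), component_sizes[:5])  # number of components and the largest few sizes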
Here's an example of a few floating components (southwest corner of park):
<img src="https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/sleepinggiant_disconnected_components.png" width="500">
OpenStreetMap makes finding these edge (way) IDs simple. Once grabbing the ? cursor, you can click on any edge to retrieve IDs and attributes.
<img src="https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/osm_edge_lookup.png" width="1000">
Define OSM edges to add and remove from graph
End of explanation
"""
for e in g.edges(data=True):
way_id = e[2].get('id').split('-')[0]
if way_id in edge_ids_to_add:
g_t.add_edge(e[0], e[1], **e[2])
g_t.add_node(e[0], lat=g.node[e[0]]['lat'], lon=g.node[e[0]]['lon'])
g_t.add_node(e[1], lat=g.node[e[1]]['lat'], lon=g.node[e[1]]['lon'])
if way_id in edge_ids_to_remove:
if g_t.has_edge(e[0], e[1]):
g_t.remove_edge(e[0], e[1])
for n in nx.isolates(g_t.copy()):
g_t.remove_node(n)
"""
Explanation: Add attributes for supplementary edges
End of explanation
"""
len(list(nx.connected_components(g_t)))
"""
Explanation: Ensuring that we're left with one single connected component:
End of explanation
"""
fig, ax = plt.subplots(figsize=(1,12))
# edges
pos = {k: (g_t.node[k].get('lon'), g_t.node[k].get('lat')) for k in g_t.nodes()}
nx.draw_networkx_edges(g_t, pos, width=3.0, edge_color='black', alpha=0.6)
# nodes (intersections and dead-ends)
pos_x = {k: (g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes() if (g_t.degree(k)==1) | (g_t.degree(k)>2)}
nx.draw_networkx_nodes(g_t, pos_x, nodelist=pos_x.keys(), node_size=35.0, node_color='red', alpha=0.9)
mplleaflet.save_html(fig, 'maps/trails_only_intersections.html')
"""
Explanation: Viz Connected Component
The map below visualizes the required edges and nodes of interest (intersections and dead-ends where degree != 2):
End of explanation
"""
name2color = {
'Green Trail': 'green',
'Quinnipiac Trail': 'blue',
'Tower Trail': 'black',
'Yellow Trail': 'yellow',
'Red Square Trail': 'red',
'White/Blue Trail Link': 'lightblue',
'Orange Trail': 'orange',
'Mount Carmel Avenue': 'black',
'Violet Trail': 'violet',
'blue Trail': 'blue',
'Red Triangle Trail': 'red',
'Blue Trail': 'blue',
'Blue/Violet Trail Link': 'purple',
'Red Circle Trail': 'red',
'White Trail': 'gray',
'Red Diamond Trail': 'red',
'Yellow/Green Trail Link': 'yellowgreen',
'Nature Trail': 'forestgreen',
'Red Hexagon Trail': 'red',
None: 'black'
}
fig, ax = plt.subplots(figsize=(1,10))
pos = {k: (g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes()}
e_color = [name2color[e[2].get('name')] for e in g_t.edges(data=True)]
nx.draw_networkx_edges(g_t, pos, width=3.0, edge_color=e_color, alpha=0.5)
nx.draw_networkx_nodes(g_t, pos_x, nodelist=pos_x.keys(), node_size=30.0, node_color='black', alpha=0.9)
mplleaflet.save_html(fig, 'maps/trails_only_color.html', tiles='cartodb_positron')
"""
Explanation: <iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/trails_only_intersections.html" height="400" width="750"></iframe>
Viz Trail Color
Because we can and it's pretty.
End of explanation
"""
print('{:0.2f} miles of required trail.'.format(sum([e[2]['distance']/1609.34 for e in g_t.edges(data=True)])))
"""
Explanation: <iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/trails_only_color.html" height="400" width="750"></iframe>
Check distance
This is strikingly close (within 0.25 miles) to what I calculated manually with some guess work from the SG trail map on the first pass at this problem here, before leveraging OSM.
End of explanation
"""
print('Number of edges in trail graph: {}'.format(len(g_t.edges())))
# initialize contracted graph
g_tc = nx.MultiGraph()
# add contracted edges to graph
for ce in contract_edges(g_t, 'distance'):
start_node, end_node, distance, path = ce
contracted_edge = {
'start_node': start_node,
'end_node': end_node,
'distance': distance,
'name': g[path[0]][path[1]].get('name'),
'required': 1,
'path': path
}
g_tc.add_edge(start_node, end_node, **contracted_edge)
g_tc.node[start_node]['lat'] = g.node[start_node]['lat']
g_tc.node[start_node]['lon'] = g.node[start_node]['lon']
g_tc.node[end_node]['lat'] = g.node[end_node]['lat']
g_tc.node[end_node]['lon'] = g.node[end_node]['lon']
"""
Explanation: Contract Edges
We could run the RPP algorithm on the graph as-is with >5000 edges. However, we can simplify computation by contracting edges into logical trail segments first. More details on the intuition and methodology in the 50 states post.
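As a toy illustration of the idea (the real work is done by the contract_edges helper from the repo's graph.py): a chain a-b-c-d whose interior nodes b and c have degree 2 collapses into a single edge a-d that keeps the summed distance and remembers the original path.
python
toy = nx.Graph()
toy.add_edge('a', 'b', distance=1.0)
toy.add_edge('b', 'c', distance=2.0)
toy.add_edge('c', 'd', distance=1.5)
print(sum(d['distance'] for _, _, d in toy.edges(data=True)))  # 4.5 -> contracted edge a-d: distance=4.5, path=['a', 'b', 'c', 'd']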
End of explanation
"""
print('Number of edges in contracted trail graph: {}'.format(len(g_tc.edges())))
"""
Explanation: Edge contraction reduces the number of edges fed to the RPP algorithm by a factor of ~40.
End of explanation
"""
# create list with edge attributes and "from" & "to" nodes
tmp = []
for e in g_tc.edges(data=True):
tmpi = e[2].copy() # so we don't mess w original graph
tmpi['start_node'] = e[0]
tmpi['end_node'] = e[1]
tmp.append(tmpi)
# create dataframe w node1 and node2 in order
eldf = pd.DataFrame(tmp)
eldf = eldf[['start_node', 'end_node'] + list(set(eldf.columns)-{'start_node', 'end_node'})]
# create edgelist mock CSV
elfn = create_mock_csv_from_dataframe(eldf)
"""
Explanation: Solve CPP
First, let's see how well the Chinese Postman solution works.
Create CPP edgelist
End of explanation
"""
circuit_cpp, gcpp = cpp(elfn, start_node='735393342')
"""
Explanation: Start node
The route is designed to start at the far east end of the park on the Blue trail (node '735393342'). While the CPP and RPP solutions will return an Eulerian circuit (loop back to the starting node), we could truncate this last long double-backing segment when actually running the route.
<img src="https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/sleepinggiant_starting_node.png" width="600">
Solve
End of explanation
"""
cpp_stats = calculate_postman_solution_stats(circuit_cpp)
cpp_stats
print('Miles in CPP solution: {:0.2f}'.format(cpp_stats['distance_walked']/1609.34))
"""
Explanation: CPP Stats
(distances in meters)
End of explanation
"""
%%time
dfrpp = create_rpp_edgelist(g_tc,
graph_full=g,
edge_weight='distance',
max_distance=2500)
"""
Explanation: Solve RPP
With the CPP as benchmark, let's see how well we do when we allow for optional edges in the route.
End of explanation
"""
Counter( dfrpp['required'])
"""
Explanation: Required vs optional edge counts
(1=required and 0=optional)
End of explanation
"""
# create mockfilename
elfn = create_mock_csv_from_dataframe(dfrpp)
%%time
# solve
circuit_rpp, grpp = rpp(elfn, start_node='735393342')
"""
Explanation: Solve RPP
End of explanation
"""
rpp_stats = calculate_postman_solution_stats(circuit_rpp)
rpp_stats
"""
Explanation: RPP Stats
(distances in meters)
End of explanation
"""
print('Miles in RPP solution: {:0.2f}'.format(rpp_stats['distance_walked']/1609.34))
"""
Explanation: Leveraging the optional roads and trails, we're able to shave about 3 miles off the CPP route. Total mileage checks in at 30.71, just under a 50K (30.1 miles).
End of explanation
"""
# hack to convert 'path' from str back to list. Caused by `create_mock_csv_from_dataframe`
for e in circuit_rpp:
if type(e[3]['path']) == str:
exec('e[3]["path"]=' + e[3]["path"])
"""
Explanation: Viz RPP Solution
End of explanation
"""
g_tcg = g_tc.copy()
# calc shortest path between optional nodes and add to graph
for e in circuit_rpp:
granular_type = 'trail' if e[3]['required'] else 'optional'
# add granular optional edges to g_tcg
path = e[3]['path']
for pair in list(zip(path[:-1], path[1:])):
if (g_tcg.has_edge(pair[0], pair[1])) and (g_tcg[pair[0]][pair[1]][0].get('granular_type') == 'optional'):
g_tcg[pair[0]][pair[1]][0]['granular_type'] = 'trail'
else:
g_tcg.add_edge(pair[0], pair[1], granular='True', granular_type=granular_type)
# add granular nodes from optional edge paths to g_tcg
for n in path:
g_tcg.add_node(n, lat=g.node[n]['lat'], lon=g.node[n]['lon'])
"""
Explanation: Create graph from RPP solution
End of explanation
"""
fig, ax = plt.subplots(figsize=(1,8))
pos = {k: (g_tcg.node[k].get('lon'), g_tcg.node[k].get('lat')) for k in g_tcg.nodes()}
el_opt = [e for e in g_tcg.edges(data=True) if e[2].get('granular_type') == 'optional']
nx.draw_networkx_edges(g_tcg, pos, edgelist=el_opt, width=6.0, edge_color='blue', alpha=1.0)
el_tr = [e for e in g_tcg.edges(data=True) if e[2].get('granular_type') == 'trail']
nx.draw_networkx_edges(g_tcg, pos, edgelist=el_tr, width=3.0, edge_color='black', alpha=0.8)
mplleaflet.save_html(fig, 'maps/rpp_solution_opt_edges.html', tiles='cartodb_positron')
"""
Explanation: Viz: RPP optional edges
The RPP algorithm picks up some logical shortcuts using the optional trails and a couple short stretches of road.
<font color='black'>black</font>: required trails
<font color='blue'>blue</font>: optional trails and roads
End of explanation
"""
## Create graph directly from rpp_circuit and original graph w lat/lon (g)
color_seq = [None, 'black', 'magenta', 'orange', 'yellow']
grppviz = nx.MultiGraph()
for e in circuit_rpp:
for n1, n2 in zip(e[3]['path'][:-1], e[3]['path'][1:]):
if grppviz.has_edge(n1, n2):
grppviz[n1][n2][0]['linewidth'] += 2
grppviz[n1][n2][0]['cnt'] += 1
else:
grppviz.add_edge(n1, n2, linewidth=2.5)
grppviz[n1][n2][0]['color_st'] = 'black' if g_t.has_edge(n1, n2) else 'red'
grppviz[n1][n2][0]['cnt'] = 1
grppviz.add_node(n1, lat=g.node[n1]['lat'], lon=g.node[n1]['lon'])
grppviz.add_node(n2, lat=g.node[n2]['lat'], lon=g.node[n2]['lon'])
for e in grppviz.edges(data=True):
e[2]['color_cnt'] = color_seq[1] if 'cnt' not in e[2] else color_seq[e[2]['cnt'] ]
"""
Explanation: <iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/rpp_solution_opt_edges.html" height="400" width="750"></iframe>
Viz: RPP edges counts
End of explanation
"""
fig, ax = plt.subplots(figsize=(1,10))
pos = {k: (grppviz.node[k]['lon'], grppviz.node[k]['lat']) for k in grppviz.nodes()}
e_width = [e[2]['linewidth'] for e in grppviz.edges(data=True)]
e_color = [e[2]['color_cnt'] for e in grppviz.edges(data=True)]
nx.draw_networkx_edges(grppviz, pos, width=e_width, edge_color=e_color, alpha=0.7)
mplleaflet.save_html(fig, 'maps/rpp_solution_edge_cnts.html', tiles='cartodb_positron')
"""
Explanation: Edge walks per color:
<font color='black'>black</font>: 1 <br>
<font color='magenta'>magenta</font>: 2 <br>
End of explanation
"""
import json  # json.dump is used at the end of this cell but json was not imported with the other packages

geojson = {'features': [], 'type': 'FeatureCollection'}
time = 0
path = list(reversed(circuit_rpp[0][3]['path']))
for e in circuit_rpp:
if e[3]['path'][0] != path[-1]:
path = list(reversed(e[3]['path']))
else:
path = e[3]['path']
for n in path:
time += 1
doc = {'type': 'Feature',
'properties': {
'latitude': g.node[n]['lat'],
'longitude': g.node[n]['lon'],
'time': time,
'id': e[3].get('id')
},
'geometry':{
'type': 'Point',
'coordinates': [g.node[n]['lon'], g.node[n]['lat']]
}
}
geojson['features'].append(doc)
with open('circuit_rpp.geojson','w') as f:
json.dump(geojson, f)
"""
Explanation: <iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/rpp_solution_edge_cnts.html" height="400" width="750"></iframe>
Create geojson solution
Used for the forthcoming D3 route animation.
End of explanation
"""
|
bmeaut/python_nlp_2017_fall
|
course_material/13_Semantics_2/13_Semantics_2_lab.ipynb
|
mit
|
!wget http://sandbox.hlt.bme.hu/~recski/stuff/4a.tgz
"""
Explanation: 12. Semantics 2 - Lab exercise
Improving a baseline Sentiment Analysis algorithm
Below is a small system for training and testing a Support Vector classifier on sentiment analysis data from the 2017 Semeval Task 4a, containing English tweets.
Currently the system only contains a single feature type: each tweet is represented by the set of words it contains. More specifically, a binary feature is created for each word in the vocabulary of the full training set, and the value of each feature for any given tweet is 1 if the word is present and 0 otherwise.
Your task will be to improve the performance of the system by implementing other binary features. (If you want to include non-binary features, you will also have to change the provided code)
Before we start, let's download the dataset:
End of explanation
"""
!tar xvvf 4a.tgz
"""
Explanation: And extract the files:
End of explanation
"""
import numpy as np
import scipy
from nltk.tokenize import word_tokenize
import nltk
nltk.download('punkt')
class Featurizer():
@staticmethod
def bag_of_words(text):
for word in word_tokenize(text):
yield word
feature_functions = [
'bag_of_words']
def __init__(self):
self.labels = {}
self.labels_by_id = {}
self.features = {}
self.features_by_id = {}
self.next_feature_id = 0
self.next_label_id = 0
def to_sparse(self, events):
"""convert sets of ints to a scipy.sparse.csr_matrix"""
data, row_ind, col_ind = [], [], []
for event_index, event in enumerate(events):
for feature in event:
data.append(1)
row_ind.append(event_index)
col_ind.append(feature)
n_features = self.next_feature_id
n_events = len(events)
matrix = scipy.sparse.csr_matrix(
(data, (row_ind, col_ind)), shape=(n_events, n_features))
return matrix
def featurize(self, dataset, allow_new_features=False):
events, labels = [], []
n_events = len(dataset)
for c, (text, label) in enumerate(dataset):
if c % 2000 == 0:
print("{0:.0%}...".format(c/n_events), end='')
if label not in self.labels:
self.labels[label] = self.next_label_id
self.labels_by_id[self.next_label_id] = label
self.next_label_id += 1
labels.append(self.labels[label])
events.append(set())
for function_name in Featurizer.feature_functions:
function = getattr(Featurizer, function_name)
for feature in function(text):
if feature not in self.features:
if not allow_new_features:
continue
self.features[feature] = self.next_feature_id
self.features_by_id[self.next_feature_id] = feature
self.next_feature_id += 1
feat_id = self.features[feature]
events[-1].add(feat_id)
print('done, sparsifying...', end='')
events_sparse = self.to_sparse(events)
labels_array = np.array(labels)
print('done!')
return events_sparse, labels_array
"""
Explanation: 4a.train and 4a.dev are the full datasets for training and testing, test.train and test.dev are small samples from these that you may want to use while debugging your solution
Before you get started, let's walk through the main components of the system.
The Featurizer class implements features as static methods and also converts train and test data to data structures handled by sklearn, the library we use for training an SVC model.
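As a sketch of how extra binary features could be wired in without touching the class definition (the feature choices here are just examples, not part of the provided system; equivalently, you could add the static methods and names directly inside the class):
python
def contains_exclamation(text):
    # binary feature: does the tweet contain an exclamation mark?
    if '!' in text:
        yield 'HAS_EXCLAMATION'

def bag_of_bigrams(text):
    # one binary feature per adjacent word pair
    tokens = word_tokenize(text)
    for w1, w2 in zip(tokens, tokens[1:]):
        yield 'BIGRAM_' + w1 + '_' + w2

Featurizer.contains_exclamation = staticmethod(contains_exclamation)
Featurizer.bag_of_bigrams = staticmethod(bag_of_bigrams)
Featurizer.feature_functions = Featurizer.feature_functions + ['contains_exclamation', 'bag_of_bigrams']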
End of explanation
"""
from collections import defaultdict
def evaluate(predictions, dev_labels):
stats_by_label = defaultdict(lambda: defaultdict(int))
for i, gold in enumerate(dev_labels):
auto = predictions[i]
# print(auto, gold)
if auto == gold:
stats_by_label[auto]['tp'] += 1
else:
stats_by_label[auto]['fp'] += 1
stats_by_label[gold]['fn'] += 1
print("{:>8} {:>8} {:>8} {:>8} {:>8} {:>8}".format(
'label', 'n_true', 'n_tagged', 'precision', 'recall', 'F-score'))
for label, stats in stats_by_label.items():
all_tagged = stats['tp'] + stats['fp']
stats['prec'] = stats['tp'] / all_tagged if all_tagged else 0
all_true = stats['tp'] + stats['fn']
stats['rec'] = stats['tp'] / all_true if all_true else 0
stats['f'] = (2 / ((1/stats['prec']) + (1/stats['rec']))
if stats['prec'] > 0 and stats['rec'] > 0 else 0)
print("{:>8} {:>8} {:>8} {:>8.2f} {:>8.2f} {:>8.2f}".format(
label, all_true, all_tagged, stats['prec'], stats['rec'],
stats['f']))
accuracy = (
sum([stats_by_label[label]['tp'] for label in stats_by_label]) /
len(predictions)) if predictions else 0
av_rec = sum([stats['rec'] for stats in stats_by_label.values()]) / 3
f_pn = (stats_by_label['positive']['f'] +
stats_by_label['negative']['f']) / 2
print()
print("{:>10} {:>.4f}".format('Acc:', accuracy))
print("{:>10} {:>.4f}".format('P/N av. F:', f_pn))
print("{:>10} {:>.4f}".format('Av.rec:', av_rec))
"""
Explanation: We'll need to evaluate our output against the gold data, using the metrics defined for the competition:
End of explanation
"""
import sys
def read_data(fn):
data = []
with open(fn) as f:
for line in f:
if not line:
continue
fields = line.strip().split('\t')
if line.strip() == '"':
continue
answer, text = fields[1:3]
data.append((text, answer))
return data
"""
Explanation: We need a small function to read the data from file:
End of explanation
"""
from sklearn import svm
def sa_exp(train_file, dev_file):
print('reading data...')
train_data = read_data(train_file)
dev_data = read_data(dev_file)
print('featurizing train...')
featurizer = Featurizer()
train_events, train_labels = featurizer.featurize(
train_data, allow_new_features=True)
print('featurizing dev...')
dev_events, dev_labels = featurizer.featurize(
dev_data, allow_new_features=False)
print('training...')
model = svm.LinearSVC()
model.fit(train_events, train_labels)
print('predicting...')
predictions = model.predict(dev_events)
predicted_labels = [
featurizer.labels_by_id[label] for label in predictions]
dev_labels = [
featurizer.labels_by_id[label] for label in dev_labels]
print('evaluating...')
print()
evaluate(predicted_labels, dev_labels)
"""
Explanation: And finally a main function to run an experiment:
End of explanation
"""
sa_exp('4a.train', '4a.dev')
"""
Explanation: Let's see how the system performs currently:
End of explanation
"""
|
josh-gree/maths-with-python
|
06-numpy-plotting.ipynb
|
mit
|
x = [1, 2, 3]
y = [4, 9, 16]
print(x+y)
"""
Explanation: A lot of computational algorithms are expressed using Linear Algebra terminology - vectors and matrices. This is thanks to the wide range of methods within Linear Algebra for solving the sort of problems that computers are good at solving!
Within Python, our first thought may be to represent a vector as a list. But there is a downside: lists do not naturally behave as vectors. For example:
End of explanation
"""
import numpy
"""
Explanation: Similarly, we cannot apply algebraic operations or functions to lists in a straightforward manner that matches our expectations.
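For instance, sticking to plain lists (numpy is introduced just below):
python
print(3*x)   # [1, 2, 3, 1, 2, 3, 1, 2, 3]: repetition, not scalar multiplication
try:
    x**2     # elementwise squaring is what we would like...
except TypeError as error:
    print(error)  # ...but plain lists do not support it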
However, there is a Python package - numpy - that does give us the behaviour we want.
numpy
End of explanation
"""
x_numpy = numpy.array(x)
y_numpy = numpy.array(y)
print(x_numpy + y_numpy)
print(x_numpy[0])
print(y_numpy[1:])
"""
Explanation: numpy is used so frequently that in a lot of cases and online explanations you will see it abbreviated, using import numpy as np. Here we try to avoid that - auto-completion inside spyder means that the additional typing is trivial, and using the full name is clearer.
numpy defines a special type, an array, which can represent vectors, matrices, and other higher-rank objects. Unlike standard Python lists, an array can only contain objects of a single type. The notation to create these objects is straightforward: one easy way is to start with a list:
End of explanation
"""
print(3*x_numpy)
print(numpy.log(x_numpy))
print(x_numpy*y_numpy)
print((x_numpy-1)**2)
"""
Explanation: We see that the array objects behave as we would expect, and accessing elements is exactly the same as for a list. We can also perform other mathematical operations on the whole vector:
End of explanation
"""
A_numpy = numpy.array([ [1, 2, 3], [4, 5, 6], [7, 8, 0]])
print(A_numpy**2)
"""
Explanation: Think about these carefully.
The first case is straightforward: all elements of the vector are multiplied by a constant.
The second case applies a function to each element separately. numpy implements a version of most interesting mathematical functions, which are applied directly to each element.
The third case is elementwise multiplication of the vectors. The first component of the answer is the product of the first component of x_numpy with the first component of y_numpy. The second component of the answer is the product of the second component of x_numpy with the second component of y_numpy. We cannot use the * operator to represent matrix multiplication, but must use a function (see below; note that there will be an operator in Python 3.5+, but using it will mean your code is, for now, not widely useable).
The fourth case shows a combination of cases above. The answer is given by elementwise subtraction of 1, then squaring (elementwise) that result.
Defining a matrix can be done by applying the array function to a list of lists:
End of explanation
"""
x_squared = numpy.dot(x_numpy, x_numpy)
A_times_x = numpy.dot(A_numpy, x_numpy)
print(x_squared)
print(A_times_x)
"""
Explanation: We see that for higher rank objects such as matrices, the operations are still performed elementwise.
To multiply a matrix by a vector, or a vector by a vector, in the standard linear algebra sense, we use the numpy.dot function:
End of explanation
"""
print(x_numpy.size)
print(x_numpy.shape)
print(A_numpy.size)
print(A_numpy.shape)
"""
Explanation: Note that we have appeared to multiply a matrix by a row vector, and get a row vector back. This is because numpy does not distinguish between row and column vectors, so everything appears as a row vector. (You could define an $n \times 1$ array instead, but there is no advantage).
To actually check the shape and size of numpy arrays, you can directly check their attributes:
End of explanation
"""
from scipy import linalg
print(linalg.solve(A_numpy, x_numpy))
print(linalg.det(A_numpy))
"""
Explanation: numpy contains a number of very efficient functions for working with arrays, for finding extreme values, and performing linear algebra tasks. Particular functions that are worth knowing, or starting from, are
arange: constructs an array containing increasing integers
linspace: constructs a linearly spaced array
zeros and ones: constructs arrays containing just ones or zeros
diag: extracts the diagonal of a matrix, or builds a matrix with just diagonal entries
mgrid: constructs matrices from vectors for 3d plots
random.rand: constructs an array of random numbers.
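A few quick calls illustrating the list above (expected outputs are indicated in the comments):
python
print(numpy.arange(4))              # [0 1 2 3]
print(numpy.linspace(0.0, 1.0, 5))  # [ 0.    0.25  0.5   0.75  1.  ]
print(numpy.zeros((2, 3)))          # 2x3 matrix of zeros
print(numpy.diag([1, 2, 3]))        # 3x3 matrix with 1, 2, 3 on the diagonal
print(numpy.random.rand(2, 2))      # 2x2 matrix of uniform random numbers in [0, 1)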
Linear algebra
numpy also defines a number of linear algebra functions. However, a more comprehensive set of functions, which is better maintained and often more efficient, is given by scipy:
End of explanation
"""
numpy.savetxt('A_numpy.txt', A_numpy)
"""
Explanation: In addition to solving linear systems and computing determinants, you can also factorize matrices and generally do most linear algebra operations that you need. The scipy documentation is comprehensive, and has a specific section on Linear Algebra, as well as a section in the tutorial. Johansson also has a tutorial on scipy in general.
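For instance, a couple of factorizations applied to the matrix defined earlier (a brief sketch; scipy.linalg offers many more routines):
python
eigenvalues, eigenvectors = linalg.eig(A_numpy)  # eigendecomposition
P, L, U = linalg.lu(A_numpy)                     # LU factorization with partial pivoting
print(eigenvalues)
print(numpy.allclose(A_numpy, numpy.dot(P, numpy.dot(L, U))))  # True: the factors rebuild A_numpy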
Working with files
Often we will want to work with data - constants, parameters, initial conditions, measurements, and so on. numpy provides ways to work with data stored in files - either reading them in or writing them out. A list of "File I/O routines" is available, but the two key routines are loadtxt and savetxt.
As a simple example we take our matrix A_numpy above and save it to a file:
End of explanation
"""
!cat A_numpy.txt
"""
Explanation: We can then check the contents of that file (you should open the file on your machine to check):
End of explanation
"""
A_from_file = numpy.loadtxt('A_numpy.txt')
print(A_from_file == A_numpy)
"""
Explanation: Finally, we can read the contents of that file into a new variable and check that it matches:
End of explanation
"""
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['figure.figsize']=(12,9)
x = numpy.linspace(0, 2.0)
y = numpy.sin(numpy.pi*x)**2
pyplot.plot(x, y)
pyplot.show()
"""
Explanation: Plotting
There are many Python plotting libraries depending on your purpose. However, the standard general-purpose library is matplotlib. This is often used through its pyplot interface.
This is a quick recap of the basic plotting commands, but using numpy as well.
End of explanation
"""
x = numpy.linspace(0, 2.0)
y = numpy.sin(numpy.pi*x)**2
pyplot.plot(x, y, marker='x', markersize=10, linestyle=':', linewidth=3,
color='b', label=r'$\sin^2(\pi x)$')
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('A basic plot')
pyplot.show()
"""
Explanation: This plotting interface is straightforward, but the results are not particularly nice. The following commands illustrate some of the ways of improving the plot:
End of explanation
"""
x = numpy.linspace(0, 2.0)
y = numpy.sin(numpy.pi*x)**2
pyplot.plot(x, y, marker='^', markersize=10, linestyle='-.', linewidth=3,
color='b', label=r'$\sin^2(\pi x)$')
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('A basic plot')
pyplot.savefig('simple_plot.png')
"""
Explanation: Whilst most of the commands are self-explanatory, a brief note should be made of strings like r'$x$'. These strings are in LaTeX format, which is the standard typesetting method for professional-level mathematics. The $ symbols surround mathematics. The r before the definition of the string says that the following string will be "raw": backslash characters are left alone. Then, special LaTeX commands have a backslash in front of them: here we use \pi and \sin. We can also use ^ to denote superscripts (used here), _ to denote subscripts, and use {} to group terms.
By combining these basic commands with other plotting types (semilogx and loglog, for example), most simple plots can be produced quickly.
Saving figures
If you want to save the figure to a file, instead of printing it to the screen, use the savefig command instead of the show command. For example, try:
End of explanation
"""
from IPython.display import Image
Image('simple_plot.png')
"""
Explanation: We can then check the file on disk (you should open the file on your machine to check):
End of explanation
"""
fig = pyplot.figure(figsize=(12, 9))
"""
Explanation: The type of the file is taken from the extension. Here we have used a png file, but svg and pdf output will also work.
Object-based approach
To get a more detailed control over the plot it's better to look at the objects that matplotlib is producing. Remember, when we talked about classes we said that it is an object with attributes and methods (functions) that are accessed using dot notation. Here are steps to completely control the plot.
First we define a figure object. We do not have to define the figure class - it is defined within matplotlib itself, along with a lot of useful methods. We call the constructor of the figure object in the same way as in the previous section, by calling pyplot.figure(). We can and will control its size (the units default to inches) by passing additional arguments to the constructor:
End of explanation
"""
axis1 = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # left, bottom, width, height
axis2 = fig.add_axes([0.4, 0.7, 0.2, 0.15])
"""
Explanation: We will then define two axes on this figure. The numbers refer to the positions of the edges of the axes with respect to the figure window (between 0 and 1):
End of explanation
"""
axis1.plot(x, y)
axis2.plot(x, y)
"""
Explanation: We will then add data to the both axes:
End of explanation
"""
axis2.set_xbound(0.7, 0.8)
axis2.set_ybound(0.3, 0.7)
"""
Explanation: We will then set the range of the second axis:
End of explanation
"""
fig
"""
Explanation: Finally, we'll see what it looks like:
End of explanation
"""
axis2.set_xscale('log')
axis1.set_xlabel(r'$x$', fontsize=16)
fig
axis1.set_xticks([0, 1, 2])
axis1.set_xticklabels(['Start', 'Middle', 'End'])
fig
"""
Explanation: Each axis contains additional objects that can be modified.
End of explanation
"""
fig = pyplot.figure(figsize=(12, 9))
x = numpy.linspace(0.0, 1.0)
for subplot in range(1, 7):
axis = fig.add_subplot(2, 3, subplot)
axis.plot(x, numpy.sin(numpy.pi*x*subplot))
axis.set_xlabel(r'$x$')
axis.set_ylabel(r'$y$')
axis.set_title(r'$\sin({} \pi x)$'.format(subplot))
fig.tight_layout();
"""
Explanation: Adding multiple axes by hand is often annoying (although sometimes necessary). There are a number of tools that can be used to simplify this in standard cases: add_subplot is the standard one. When you want a figure containing multiple subplots all the same size, with r rows and c columns, the command is add_subplot(r, c, <subplot_number>). For example:
End of explanation
"""
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = pyplot.figure(figsize=(12, 9))
axis = fig.add_axes([0.1, 0.1, 0.8, 0.8], projection='3d')
"""
Explanation: The tight_layout function call at the end ensures that the axis labels and titles do not overlap with other subplots.
Higher dimensions
To plot three-dimensional objects, we need to modify the axis so that it knows a third dimension is required. To do this, we import another module and modify the command that sets up the axis object.
End of explanation
"""
t = numpy.linspace(0.0, 10.0, 500)
x = numpy.cos(2.0*numpy.pi*t)
y = numpy.sin(2.0*numpy.pi*t)
z = 0.1*t
axis.plot(x, y, z)
axis.set_xlabel(r'$x$')
axis.set_ylabel(r'$y$')
axis.set_zlabel(r'$z$')
fig
"""
Explanation: We can then construct, for example, a parametric spiral:
End of explanation
"""
fig = pyplot.figure(figsize=(12, 9))
axis = fig.add_axes([0.1, 0.1, 0.8, 0.8], projection='3d')
x = numpy.linspace(0.0, 1.0)
y = numpy.linspace(0.0, 1.0)
X, Y = numpy.meshgrid(x, y)
# x, y are vectors
# X, Y are 2d arrays
phi = numpy.sin(numpy.pi*X*Y)**2 * numpy.cos(2.0*numpy.pi*Y**2)
axis.plot_surface(X, Y, phi)
axis.set_xlabel(r'$x$')
axis.set_ylabel(r'$y$')
axis.set_zlabel(r'$\phi$');
"""
Explanation: If we want to plot a surface, then we need to construct 2d arrays containing the locations of the $x$ and $y$ coordinates, and a 2d array containing the "height" of the surface. For structured data (ie, where the $x$ and $y$ coordinates lie on a regular grid) the meshgrid function helps. For example, the function
\begin{equation}
\phi(x, y) = \sin^2 ( \pi x y ) \cos( 2 \pi y^2 ), \qquad x \in [0, 1], \quad y \in [0, 1],
\end{equation}
would be plotted using
End of explanation
"""
from matplotlib import cm
fig = pyplot.figure(figsize=(12, 9))
axis = fig.add_axes([0.1, 0.1, 0.8, 0.8], projection='3d')
p = axis.plot_surface(X, Y, phi, rstride=1, cstride=2, cmap = cm.coolwarm)
axis.set_xlabel(r'$x$')
axis.set_ylabel(r'$y$')
axis.set_zlabel(r'$\phi$')
fig.colorbar(p, shrink=0.5);
"""
Explanation: There are a lot of options to modify the appearance of this plot. Important ones include the colormap (note the US spelling), which requires importing the cm module from matplotlib, and the stride parameters changing the appearance of the grid. For example
End of explanation
"""
from numpy import sin
from scipy.integrate import quad
def integrand(x):
"""
The integrand \sin^2(x).
Parameters
----------
x : real (list)
The point(s) at which the integrand is evaluated
Returns
-------
integrand : real (list)
The integrand evaluated at x
"""
return sin(x)**2
result = quad(integrand, 0.0, numpy.pi)
print("The result is {}.".format(result))
"""
Explanation: Further reading
As noted earlier, the matplotlib documentation contains a lot of details, and the gallery contains a lot of examples that can be adapted to fit. There is also an extremely useful document as part of Johansson's lectures on scientific Python.
scipy
scipy is a package for scientific Python, and contains many functions that are essential for mathematics. It works particularly well with numpy. We briefly introduced it above for tackling Linear Algebra problems, but it also includes
Scientific constants
Integration and ODE solvers
Interpolation
Optimization and root finding
Statistical functions
and much more.
Integration
The numerical quadrature problem involves solving the definite integral
\begin{equation}
\int_a^b f(x) \, \text{d} x,
\end{equation}
or a suitable generalization. scipy has a module, scipy.integrate, that includes a number of functions to solve these types of problems. For example, to solve
\begin{equation}
I = \int_0^{\pi} \sin^2(x) \, \text{d} x,
\end{equation}
the quad function can be used as:
End of explanation
"""
from numpy import sin
from scipy.integrate import quad
def integrand_param(x, a):
"""
The integrand \sin^2(a x).
Parameters
----------
x : real (list)
The point(s) at which the integrand is evaluated
a : real
The parameter for the integrand
Returns
-------
integrand : real (list)
The integrand evaluated at x
"""
return sin(a*x)**2
for a in range(1, 6):
result, accuracy = quad(integrand_param, 0.0, numpy.pi, args=(a,))
print("For a={}, the result is {}.".format(a, result))
"""
Explanation: The steps we have taken are:
Define the integrand by defining a function. This function takes the points at which the integrand is evaluated. By using numpy we can do this with a single command.
Import the quad function.
Call the quad function, passing the function defining the integrand, and the lower and upper limits.
The result we get back, as seen from the screen output, is not just $I$. It is a tuple containing both $I$, and also the accuracy with which quad believes it has computed the result. The quadrature is a numerical approximation, so can never be perfect. You should check this error estimate to ensure the result is "good enough" for your purposes.
We can also pass additional parameters if needed. Consider the problem
\begin{equation}
I_a = \int_0^{\pi} \sin^2 (a x) \, \text{d} x.
\end{equation}
If we wanted to solve this for many values of $a$, say $a = 1, 2, \dots, 5$, we could create a function taking a parameter, and then pass that parameter through:
End of explanation
"""
from numpy import sin
from scipy.integrate import quad
def integrand_param2(x, a, b):
"""
The integrand \sin^2(a x + b).
Parameters
----------
x : real (list)
The point(s) at which the integrand is evaluated
a : real
The parameter for the integrand
b : real
The second parameter for the integrand
Returns
-------
integrand : real (list)
The integrand evaluated at x
"""
return sin(a*x+b)**2
for a in range(1, 3):
for b in range(3):
result, accuracy = quad(integrand_param2, 0.0, numpy.pi, args=(a, b))
print("For a={}, b={}, the result is {}.".format(a, b, result))
"""
Explanation: Note that when passing the parameters using the args keyword argument, we put the parameters in a tuple. This shows how to pass more than one parameter: keep adding parameters to the argument list, and add them to the tuple. For example, to solve
\begin{equation}
I_{a,b} = \int_0^{\pi} \sin^2(ax + b) \, \text{d} x
\end{equation}
we write
End of explanation
"""
from numpy import exp
from scipy.integrate import odeint
def dydt(y, t):
"""
Defining the ODE dy/dt = e^{-t} - y.
Parameters
----------
y : real
The value of y at time t (the current numerical approximation)
t : real
The current time t
Returns
-------
dydt : real
The RHS function defining the ODE.
"""
return exp(-t) - y
t = numpy.linspace(0.0, 1.0)
y0 = [1.0]
y = odeint(dydt, y0, t)
print("The shape of the result is {}.".format(y.shape))
print("The value of y at t=1 is {}.".format(y[-1,0]))
"""
Explanation: Solving ODEs
There is a link between the solution of integrals and the solution of differential equations. Unfortunately, the numerical solution of an ODE is more complex than the solution of an integral. Fortunately, scipy contains a number of methods for these as well.
The methods in scipy solve ODEs of the form
\begin{equation}
\frac{\text{d} \vec{y}}{\text{d} t} = \vec{f} \left( \vec{y}, t \right), \qquad \vec{y}(0) = \vec{y}_0.
\end{equation}
For example, the ODE
\begin{equation}
\frac{\text{d} y}{\text{d} t} = e^{-t} - y, \qquad y(0) = 1
\end{equation}
has $f(y, t) = e^{-t} - y$.
The method for using scipy is similar to the integration case.
Define a function that specifies the system, by defining the RHS.
Import the function that solves ODEs (odeint)
Call the function, passing the RHS function, the initial data $\vec{y}_0$, the times at which the solution is needed, and any parameters.
To solve our example, we use:
End of explanation
"""
pyplot.plot(t, y[:,0])
pyplot.xlabel(r'$t$')
pyplot.ylabel(r'$y$')
pyplot.show()
"""
Explanation: Note that the result for $y$ is not a vector, but a two dimensional array. This is because scipy will solve a general system of ODEs. This scalar case is a system of size $1$, but it still returns an array. To solve a system, the RHS function must take a vector for y, return a vector for dydt, and the initial data y0 must be a vector. All these vectors must be the same size.
The output is the numerical approximation to $y$ at the input times $t$, and can be immediately plotted:
End of explanation
"""
import numpy
from scipy.integrate import odeint
def dzdt(z, t, alpha):
"""
Defining the ODE dz/dt.
Parameters
----------
z : real, list
The value of z at time t (the current numerical approximation)
t : real
The current time t
alpha : real
Parameter
Returns
-------
dzdt : real
The RHS function defining the ODE.
"""
dzdt = numpy.zeros_like(z)
x, y = z
dzdt[0] = -y + alpha
dzdt[1] = x
return dzdt
t = numpy.linspace(0.0, 50.0, 1000)
z0 = [1.0, 0.0]
alpha = 1e-5
z = odeint(dzdt, z0, t, args=(alpha,))
fig = pyplot.figure(figsize=(12,12))
ax = fig.add_subplot(1,1,1)
ax.plot(z[:,0], z[:,1])
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$y$')
ax.set_xlim(-1.1, 1.1)
ax.set_ylim(-1.1, 1.1);
"""
Explanation: Passing parameters is also similar to the integration case. For example, consider the problem
\begin{equation}
\frac{\text{d}}{\text{d} t} \begin{pmatrix} x \ y \end{pmatrix} = \begin{pmatrix} -y + \alpha \ x \end{pmatrix}, \qquad \begin{pmatrix} x \ y \end{pmatrix}(0) = \begin{pmatrix} 1 \ 0 \end{pmatrix}.
\end{equation}
If $\alpha$ is zero, the solution is a circle in the $x, y$ plane. We solve this using odeint, denoting the state vector $\vec{z} = (x, y)^T$:
End of explanation
"""
|
MarkWieczorek/SHTOOLS
|
examples/notebooks/spherical-harmonic-normalizations.ipynb
|
bsd-3-clause
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pyshtools as pysh
pysh.utils.figstyle(rel_width=0.75)
%config InlineBackend.figure_format = 'retina' # if you are not using a retina display, comment this line
lmax = 100
coeffs = pysh.SHCoeffs.from_zeros(lmax)
coeffs.set_coeffs(values=[1], ls=[5], ms=[2])
"""
Explanation: More about spherical harmonic normalizations and Parseval's theorem
The variance of a single spherical harmonic
Here we will demonstrate the relationship between a function expressed in spherical harmonics and its variance. To make things simple, we will consider only a single harmonic, and note that the results are easily extended to more complicated functions given that the spherical harmonics are orthogonal.
We start by initializing a new coefficient class to zero and setting a single coefficient to 1.
End of explanation
"""
grid = coeffs.expand(grid='GLQ')
fig, ax = grid.plot(show=False) # show=False is used to avoid a warning when plotting in inline mode
"""
Explanation: Given that we will perform some numerical integrations with this function below, we expand it onto a grid appropriate for integration by Gauss-Legendre quadrature:
End of explanation
"""
N = ((grid.data[:, :grid.nlon-grid.extend]**2) * grid.weights[np.newaxis,:].T).sum() * (2. * np.pi / (grid.nlon-grid.extend))
print('N = ', N)
print('Variance of Ylm = ', N / (4. * np.pi))
"""
Explanation: Next, we would like to calculate the variance of this single spherical harmonic. Since each spherical harmonic has a zero mean, the variance is equal to the integral of the function squared (i.e., its norm $N_{lm}$) divided by the surface area of the sphere ($4\pi$):
$$N_{lm} = \int_\Omega Y^2_{lm}(\mathbf{\theta, \phi})~d\Omega$$
$$Var(Y_{lm}) = \frac{N_{lm}}{4 \pi}$$
When the spherical harmonics are $4\pi$ normalized, $N_{lm}$ is equal to $4\pi$ for all values of l and m. Thus, by definition, the variance of each harmonic is 1 for $4\pi$-normalized harmonics.
We can verify the mathematical value of $N_{lm}$ by doing the integration manually. For this, we will perform a Gauss-Legendre Quadrature, making use of the latitudinal weighting function that is stored in the SHGrid class instance. Note that it is necessary to ignore the redundant longitudinal band at 360 E.
End of explanation
"""
grid_dh = coeffs.expand(grid='DH')
weights = pysh.utils.DHaj(grid_dh.n)
N = ((grid_dh.data[:grid_dh.nlat-grid_dh.extend, :grid_dh.nlon-grid_dh.extend]**2) * weights[np.newaxis,:].T).sum() * 2. * np.sqrt(2.) * np.pi / (grid_dh.nlon-grid_dh.extend)
print('N = ', N)
print('Variance of Ylm = ', N / (4. * np.pi))
"""
Explanation: Alternatively, we could have done the integration with a 'DH' grid instead. In this case, it is necessary to ignore the redundant longitudinal band at 360 E and the latitudinal band at 90 S.
End of explanation
"""
power = coeffs.spectrum()
print('Total power is ', power.sum())
"""
Explanation: Parseval's theorem
We have seen in the previous section that a single $4\pi$-normalized spherical harmonic has unit variance. In spectral analysis, the word power is often used to mean the value of the function squared divided by the area it spans, and if the function has zero mean, power is equivalent to variance. Since the spherical harmonics are orthogonal functions on the sphere, there exists a simple relationship between the power of the function and its spherical harmonic coefficients:
$$\frac{1}{4 \pi} \int_{\Omega} f^2(\mathbf{\theta, \phi})~d\Omega = \sum_{lm} C_{lm}^2 \frac{N_{lm}}{4 \pi}$$
This is Parseval's theorem for data on the sphere. For $4\pi$ normalized harmonics, the last fraction on the right hand side is unity, and the total variance (power) of the function is the sum of the coefficients squared. Knowing this, we can confirm the result of the previous section by showing that the total power of the l=5, m=2 harmonic is unity:
End of explanation
"""
|
derrowap/MA490-MachineLearning-FinalProject
|
.ipynb_checkpoints/project-checkpoint.ipynb
|
mit
|
data_inorder = pd.read_csv('Data\\adder_inorder_data.csv')
data_inorder = data_inorder[['Steps', 'MSE']]
data_inorder = data_inorder.sort_values(['Steps'])
data_inorder.head(9)
data_rnd_0 = pd.read_csv('Data\\adder_random_0_data.csv')
data_rnd_0 = data_rnd_0[['Steps', 'MSE']]
data_rnd_0 = data_rnd_0.sort_values(['Steps'])
data_rnd_1 = pd.read_csv('Data\\adder_random_1_data.csv')
data_rnd_1 = data_rnd_1[['Steps', 'MSE']]
data_rnd_1 = data_rnd_1.sort_values(['Steps'])
data_rnd_2 = pd.read_csv('Data\\adder_random_2_data.csv')
data_rnd_2 = data_rnd_2[['Steps', 'MSE']]
data_rnd_2 = data_rnd_2.sort_values(['Steps'])
data_rnd_3 = pd.read_csv('Data\\adder_random_3_data.csv')
data_rnd_3 = data_rnd_3[['Steps', 'MSE']]
data_rnd_3 = data_rnd_3.sort_values(['Steps'])
data_rnd_4 = pd.read_csv('Data\\adder_random_4_data.csv')
data_rnd_4 = data_rnd_4[['Steps', 'MSE']]
data_rnd_4 = data_rnd_4.sort_values(['Steps'])
plt.plot(data_inorder['Steps'].ix[:20], data_inorder['MSE'].ix[:20], 'bo',
data_rnd_0['Steps'].ix[:20], data_rnd_0['MSE'].ix[:20],
data_rnd_1['Steps'].ix[:20], data_rnd_1['MSE'].ix[:20],
data_rnd_2['Steps'].ix[:20], data_rnd_2['MSE'].ix[:20],
data_rnd_3['Steps'].ix[:20], data_rnd_3['MSE'].ix[:20],
data_rnd_4['Steps'].ix[:20], data_rnd_4['MSE'].ix[:20])
plt.show()
plt.plot(data_inorder['Steps'].ix[30:], data_inorder['MSE'].ix[30:], 'bo',
data_rnd_0['Steps'].ix[30:], data_rnd_0['MSE'].ix[30:],
data_rnd_1['Steps'].ix[30:], data_rnd_1['MSE'].ix[30:],
data_rnd_2['Steps'].ix[30:], data_rnd_2['MSE'].ix[30:],
data_rnd_3['Steps'].ix[30:], data_rnd_3['MSE'].ix[30:],
data_rnd_4['Steps'].ix[30:], data_rnd_4['MSE'].ix[30:])
plt.show()
plt.plot(data_rnd_1['Steps'].ix[30:], data_rnd_1['MSE'].ix[30:],
data_rnd_2['Steps'].ix[30:], data_rnd_2['MSE'].ix[30:],
data_rnd_4['Steps'].ix[30:], data_rnd_4['MSE'].ix[30:])
plt.show()
"""
Explanation: After several tests we found in almost all cases that training a function on every possibility is much more effective than randomly generating data.
End of explanation
"""
data_inorder = pd.read_csv('Data\\adder_inorder_data.csv')
data_inorder = data_inorder[['Steps', 'MSE']]
data_inorder = data_inorder.sort_values(['Steps'])
arr = np.zeros(5)
arr[0] = 5
arr = ['100', '200', '300', '400', '500', '600', '700',
'1000','1100','1200','1300', '1400','1500', '1600','1700','1800', '1900',
'2000', '2100', '2300', '2400', '2500']
df_arr = []
for i in range(len(arr)):
temp = pd.read_csv('Data\\determinant_' + arr[i] +'_layer_by_100.csv', header=None)
temp = temp.T
temp.columns=['Second', 'MSE']
temp['First'] = arr[i]
temp = temp.sort_values(['First', 'Second'])
df_arr.append(temp)
len(df_arr)
temp = pd.read_csv('Data\\determinant_layer_by_100.csv', header=None)
temp = temp.T
temp.columns=['Second', 'MSE']
frames = [df_arr[0], df_arr[1], df_arr[2], df_arr[3], df_arr[4], df_arr[5],
df_arr[6], df_arr[7], df_arr[8], df_arr[9], df_arr[10], df_arr[11],
df_arr[12], df_arr[13], df_arr[14], df_arr[15], df_arr[16], df_arr[17],
df_arr[18], df_arr[19], df_arr[20], df_arr[21]]
result = pd.concat(frames)
result = result.reset_index(drop=True)
result.sort_values(['MSE'])
res1 = result.as_matrix(columns=['First'])
res2 = result.as_matrix(columns=['Second'])
res3 = result.as_matrix(columns=['MSE'])
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_trisurf(res1[:,0], res2[:,0], res3[:,0], cmap=cm.jet, linewidth=0.2)
plt.show()
min = 30
for i in range(len(arr)):
plt.plot(df_arr[i]['Second'], df_arr[i]['MSE'])
plt.show()
num = 5
df_arr[5].as_matrix(columns=['MSE'])
df_arr[5].head(21)
for i in range(len(arr)):
plt.plot(df_arr[i]['Second'], df_arr[i]['MSE'])
plt.ylim(.3, .5)
plt.show()
plt.plot(df_arr[0]['Second'], df_arr[0]['MSE'])
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
n_angles = 36
n_radii = 8
# An array of radii
# Does not include radius r=0, this is to eliminate duplicate points
radii = np.linspace(0.125, 1.0, n_radii)
# An array of angles
angles = np.linspace(0, 2*np.pi, n_angles, endpoint=False)
# Repeat all angles for each radius
angles = np.repeat(angles[...,np.newaxis], n_radii, axis=1)
# Convert polar (radii, angles) coords to cartesian (x, y) coords
# (0, 0) is added here. There are no duplicate points in the (x, y) plane
x = np.append(0, (radii*np.cos(angles)).flatten())
y = np.append(0, (radii*np.sin(angles)).flatten())
# Pringle surface
z = np.sin(-x*y)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_trisurf(x, y, z, cmap=cm.jet, linewidth=0.2)
plt.show()
z
arr = np.array(z)
"""
Explanation: adder(n): Adds 42 to n
Process:
* Tried different methods of generating data to find which worked the best
* All possible values 0-100
* 1000 random values 0-100
* All possible values 0-100 10 times for a total of 1000 data points
* All possible values 0-100 100 times for a total of 10000 data points
* Experimented with single layer hidden units 1-20
* Experimented with two layer hidden units [1-20, 1-20]
* Found how well each different neural net extrapolated to other values
* Tried to scale data up to learn 1-1000
Results:
<table>
<thead>
<td>Data</td>
<td>MSE</td>
</thead>
<tr>
<td>100 data: 1-100</td>
<td>.065861</td>
</tr>
<tr>
<td>1000 data points: random(100)</td>
<td>.028475</td>
</tr>
<tr>
<td>1000 data points: 1-100 10 times</td>
<td>.007759</td>
</tr>
<tr>
<td>10000 data points: 1-100 100 times</td>
<td>409.116</td>
</tr>
</table>
The results seemed to show that iterating through every possibility multiple times and then training on that is the best method of data gathering. As with many things in machine learning, you have to strike a balance between having enough data and having too much.
I was interested in how far a neural net could extrapolate to numbers that it had never seen before. Although this should have been an easy problem because it is linear, it wasn't, because SkFlow doesn't let you choose the activation function for your regressor. Most of the single- and double-layer neural nets I trained failed around 200-300; however, one stood out and was able to correctly predict 1-1145 after training only on 1-100.
End of explanation
"""
|
ESO-python/ESOPythonTutorials
|
notebooks/ESO Code Coffee Dec 7, 2015.ipynb
|
bsd-3-clause
|
import StringIO  # Python 2 module; on Python 3 use io.StringIO instead
import numpy as np
x = StringIO.StringIO()
arr = np.arange(10)
np.savetxt(x,arr, header='test', comments="")
x.seek(0)
print(x.read())
with open('file.txt','w') as f:
f.write(x.getvalue())
%%bash
cat file.txt
"""
Explanation: Q1:
Saving a table to text with a header with no preceding "#"
Also, demo StringIO
End of explanation
"""
from astropy.convolution import convolve, convolve_fft
"""
Explanation: astropy convolution
How do you convolve fast?
see, e.g., http://keflavich.github.io/blog/fft-comparisons-in-python.html
End of explanation
"""
import scipy.fftpack, scipy.ndimage, scipy.signal
scipy.ndimage.convolve
scipy.signal.fftconvolve??
%%bash
factor 9216
"""
Explanation: Speed of DFT: $O(n^2)$
Speed of FFT: $O(n log(n))$
End of explanation
"""
%matplotlib inline
import pylab as pl
x = np.random.randn(64) + 5 + np.sin(np.arange(64))*3
f = np.fft.fft(x)
pl.plot(x)
pl.plot(np.abs(f))
f[0] = 0
f[10] = 0
f[64-10] = 0
pl.plot(np.abs(f))
xi = np.fft.ifft(f)
pl.plot(xi.real)
pl.plot(x)
"""
Explanation: faster fftw: --enable-avx for "advanced vector instructions". 8x FLOPs at a time!
What does it mean to "remove" fft modes?
End of explanation
"""
%%bash
cd ~/repos/astropy
ls
ls build/
"""
Explanation: Q3
Install a module, then keep editing it.
python setup.py develop
Use https://github.com/astropy/package-template to get everything set up in a cool way. develop doesn't do much good for C code.
Within an interactive session, use reload(package) (Python 2) or import importlib; importlib.reload(package) (Python 3) to reload the package. This is finicky.
Other option, which works with C extensions: python setup.py build_ext --inplace. Or you can use python setup.py build to build into the build/ directory, which will then be accessible using import if you've used python setup.py develop
End of explanation
"""
|
dxl0632/deeplearning_nd_udacity
|
intro-to-tflearn/TFLearn_Digit_Recognition_Solution.ipynb
|
mit
|
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
"""
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
"""
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
"""
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
"""
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def display_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
display_digit(0)
"""
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function display_digit will display that training image along with its corresponding label in the title.
End of explanation
"""
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, trainX.shape[1]])
# Hidden layer(s)
net = tflearn.fully_connected(net, 128, activation='ReLU')
net = tflearn.fully_connected(net, 32, activation='ReLU')
# Output layer and training model
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=100)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
"""
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation
"""
|
Kaggle/learntools
|
notebooks/python/raw/ex_2.ipynb
|
apache-2.0
|
# SETUP. You don't need to worry for now about what this code does or how it works.
from learntools.core import binder; binder.bind(globals())
from learntools.python.ex2 import *
print('Setup complete.')
"""
Explanation: Functions are powerful. Try writing some yourself.
As before, don't forget to run the setup code below before jumping into question 1.
End of explanation
"""
def round_to_two_places(num):
"""Return the given number rounded to two decimal places.
>>> round_to_two_places(3.14159)
3.14
"""
# Replace this body with your own code.
# ("pass" is a keyword that does literally nothing. We used it as a placeholder
# because after we begin a code block, Python requires at least one line of code)
pass
# Check your answer
q1.check()
#%%RM_IF(PROD)%%
q1.assert_check_unattempted()
#%%RM_IF(PROD)%%
def round_to_two_places(num):
"""Return the given number rounded to two decimal places.
>>> round_to_two_places(3.14159)
3.14
"""
return round(num, 2)
q1.assert_check_passed()
#%%RM_IF(PROD)%%
def round_to_two_places(num):
"""Return the given number rounded to two decimal places.
>>> round_to_two_places(3.14159)
3.14
"""
return round(num, 3)
q1.assert_check_failed()
# Uncomment the following for a hint
#_COMMENT_IF(PROD)_
q1.hint()
# Or uncomment the following to peek at the solution
#_COMMENT_IF(PROD)_
q1.solution()
"""
Explanation: 1.
Complete the body of the following function according to its docstring.
HINT: Python has a built-in function round.
End of explanation
"""
# Put your test code here
"""
Explanation: 2.
The help for round says that ndigits (the second argument) may be negative.
What do you think will happen when it is? Try some examples in the following cell.
End of explanation
"""
# Check your answer (Run this code cell to receive credit!)
q2.solution()
"""
Explanation: Can you think of a case where this would be useful? Once you're ready, run the code cell below to see the answer and to receive credit for completing the problem.
End of explanation
"""
def to_smash(total_candies):
"""Return the number of leftover candies that must be smashed after distributing
the given number of candies evenly between 3 friends.
>>> to_smash(91)
1
"""
return total_candies % 3
# Check your answer
q3.check()
#%%RM_IF(PROD)%%
def to_smash(total_candies, n_friends=3):
return n_friends % total_candies
q3.assert_check_failed()
#%%RM_IF(PROD)%%
def to_smash(total_candies, n_friends=3):
return total_candies % n_friends
q3.assert_check_passed()
#_COMMENT_IF(PROD)_
q3.hint()
#_COMMENT_IF(PROD)_
q3.solution()
"""
Explanation: 3.
In the previous exercise, the candy-sharing friends Alice, Bob and Carol tried to split candies evenly. For the sake of their friendship, any candies left over would be smashed. For example, if they collectively bring home 91 candies, they'll take 30 each and smash 1.
Below is a simple function that will calculate the number of candies to smash for any number of total candies.
Modify it so that it optionally takes a second argument representing the number of friends the candies are being split between. If no second argument is provided, it should assume 3 friends, as before.
Update the docstring to reflect this new behaviour.
End of explanation
"""
# ruound_to_two_places(9.9999)
# x = -10
# y = 5
# # Which of the two variables above has the smallest absolute value?
# smallest_abs = min(abs(x, y))
# def f(x):
# y = abs(x)
# return y
# print(f(5))
"""
Explanation: 4. (Optional)
It may not be fun, but reading and understanding error messages will be an important part of your Python career.
Each code cell below contains some commented buggy code. For each cell...
Read the code and predict what you think will happen when it's run.
Then uncomment the code and run it to see what happens. (Tip: In the kernel editor, you can highlight several lines and press ctrl+/ to toggle commenting.)
Fix the code (so that it accomplishes its intended purpose without throwing an exception)
<!-- TODO: should this be autochecked? Delta is probably pretty small. -->
End of explanation
"""
|
tombstone/models
|
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
|
apache-2.0
|
!pip install -U --pre tensorflow=="2.*"
!pip install tf_slim
"""
Explanation: Context R-CNN Demo
<table align="left"><td>
<a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/models/blob/master/research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab
</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td></table>
This notebook will walk you step by step through the process of using a pre-trained model to build up a contextual memory bank for a set of images, and then detect objects in those images+context using Context R-CNN.
Setup
Important: If you're running on a local machine, be sure to follow the installation instructions. This notebook includes only what's necessary to run in Colab.
Install
End of explanation
"""
!pip install pycocotools
"""
Explanation: Make sure you have pycocotools installed
End of explanation
"""
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
!git clone --depth 1 https://github.com/tensorflow/models
"""
Explanation: Get tensorflow/models or cd to parent directory of the repository.
End of explanation
"""
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
%%bash
cd models/research
pip install .
"""
Explanation: Compile protobufs and install the object_detection package
End of explanation
"""
import numpy as np
import os
import six
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import pathlib
import json
import datetime
import matplotlib.pyplot as plt
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
"""
Explanation: Imports
End of explanation
"""
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_utils
"""
Explanation: Import the object detection module.
End of explanation
"""
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
"""
Explanation: Patches:
End of explanation
"""
def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
model = model.signatures['serving_default']
return model
"""
Explanation: Model preparation
Loader
End of explanation
"""
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/snapshot_serengeti_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=False)
"""
Explanation: Loading label map
Label maps map indices to category names, so that when our convolutional network predicts 5, we know that this corresponds to zebra. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
End of explanation
"""
# If you want to test the code with your images, just add path to the images to
# the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images/snapshot_serengeti')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpeg")))
TEST_IMAGE_PATHS
"""
Explanation: We will test on a context group of images from one month at one camera from the Snapshot Serengeti val split defined on LILA.science, which was not seen during model training:
End of explanation
"""
test_data_json = 'models/research/object_detection/test_images/snapshot_serengeti/context_rcnn_demo_metadata.json'
with open(test_data_json, 'r') as f:
test_metadata = json.load(f)
image_id_to_datetime = {im['id']:im['date_captured'] for im in test_metadata['images']}
image_path_to_id = {im['file_name']: im['id']
for im in test_metadata['images']}
image_path_to_id
"""
Explanation: Load the metadata for each image
End of explanation
"""
faster_rcnn_model_name = 'faster_rcnn_resnet101_snapshot_serengeti_2020_06_10'
faster_rcnn_model = load_model(faster_rcnn_model_name)
"""
Explanation: Generate Context Features for each image
End of explanation
"""
faster_rcnn_model.inputs
"""
Explanation: Check the model's input signature: it expects a batch of 3-color images of type uint8.
End of explanation
"""
faster_rcnn_model.output_dtypes
faster_rcnn_model.output_shapes
"""
Explanation: And it returns several outputs. Note this model has been exported with additional output 'detection_features' which will be used to build the contextual memory bank.
End of explanation
"""
def run_inference_for_single_image(model, image):
'''Run single image through tensorflow object detection saved_model.
This function runs a saved_model on a (single) provided image and returns
inference results in numpy arrays.
Args:
model: tensorflow saved_model. This model can be obtained using
export_inference_graph.py.
image: uint8 numpy array with shape (img_height, img_width, 3)
Returns:
output_dict: a dictionary holding the following entries:
`num_detections`: an integer
`detection_boxes`: a numpy (float32) array of shape [N, 4]
`detection_classes`: a numpy (uint8) array of shape [N]
`detection_scores`: a numpy (float32) array of shape [N]
`detection_features`: a numpy (float32) array of shape [N, 7, 7, 2048]
'''
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
output_dict = model(input_tensor)
  # All outputs are batched tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_dets = output_dict.pop('num_detections')
num_detections = int(num_dets)
for key,value in output_dict.items():
output_dict[key] = value[0, :num_detections].numpy()
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(
np.int64)
return output_dict
"""
Explanation: Add a wrapper function to call the model, and cleanup the outputs:
End of explanation
"""
def embed_date_captured(date_captured):
"""Encodes the datetime of the image.
Takes a datetime object and encodes it into a normalized embedding of shape
[5], using hard-coded normalization factors for year, month, day, hour,
minute.
Args:
date_captured: A datetime object.
Returns:
A numpy float32 embedding of shape [5].
"""
embedded_date_captured = []
month_max = 12.0
day_max = 31.0
hour_max = 24.0
minute_max = 60.0
min_year = 1990.0
max_year = 2030.0
year = (date_captured.year-min_year)/float(max_year-min_year)
embedded_date_captured.append(year)
month = (date_captured.month-1)/month_max
embedded_date_captured.append(month)
day = (date_captured.day-1)/day_max
embedded_date_captured.append(day)
hour = date_captured.hour/hour_max
embedded_date_captured.append(hour)
minute = date_captured.minute/minute_max
embedded_date_captured.append(minute)
return np.asarray(embedded_date_captured)
def embed_position_and_size(box):
"""Encodes the bounding box of the object of interest.
Takes a bounding box and encodes it into a normalized embedding of shape
[4] - the center point (x,y) and width and height of the box.
Args:
box: A bounding box, formatted as [ymin, xmin, ymax, xmax].
Returns:
A numpy float32 embedding of shape [4].
"""
ymin = box[0]
xmin = box[1]
ymax = box[2]
xmax = box[3]
w = xmax - xmin
h = ymax - ymin
x = xmin + w / 2.0
y = ymin + h / 2.0
return np.asarray([x, y, w, h])
def get_context_feature_embedding(date_captured, detection_boxes,
detection_features, detection_scores):
"""Extracts representative feature embedding for a given input image.
Takes outputs of a detection model and focuses on the highest-confidence
detected object. Starts with detection_features and uses average pooling to
remove the spatial dimensions, then appends an embedding of the box position
and size, and an embedding of the date and time the image was captured,
returning a one-dimensional representation of the object.
Args:
date_captured: A datetime string of format '%Y-%m-%d %H:%M:%S'.
    detection_boxes: A numpy (float32) array of shape [N, 4].
    detection_features: A numpy (float32) array of shape [N, 7, 7, 2048].
detection_scores: A numpy (float32) array of shape [N].
Returns:
A numpy float32 embedding of shape [2057].
"""
date_captured = datetime.datetime.strptime(date_captured,'%Y-%m-%d %H:%M:%S')
temporal_embedding = embed_date_captured(date_captured)
embedding = detection_features[0]
pooled_embedding = np.mean(np.mean(embedding, axis=1), axis=0)
box = detection_boxes[0]
position_embedding = embed_position_and_size(box)
bb_embedding = np.concatenate((pooled_embedding, position_embedding))
embedding = np.expand_dims(np.concatenate((bb_embedding,temporal_embedding)),
axis=0)
score = detection_scores[0]
return embedding, score
"""
Explanation: Functions for embedding context features
End of explanation
"""
def run_inference(model, image_path, date_captured, resize_image=True):
"""Runs inference over a single input image and extracts contextual features.
Args:
model: A tensorflow saved_model object.
image_path: Absolute path to the input image.
date_captured: A datetime string of format '%Y-%m-%d %H:%M:%S'.
resize_image: Whether to resize the input image before running inference.
Returns:
context_feature: A numpy float32 array of shape [2057].
score: A numpy float32 object score for the embedded object.
output_dict: The saved_model output dictionary for the image.
"""
with open(image_path,'rb') as f:
image = Image.open(f)
if resize_image:
image.thumbnail((640,640),Image.ANTIALIAS)
image_np = np.array(image)
# Actual detection.
output_dict = run_inference_for_single_image(model, image_np)
context_feature, score = get_context_feature_embedding(
date_captured, output_dict['detection_boxes'],
output_dict['detection_features'], output_dict['detection_scores'])
return context_feature, score, output_dict
context_features = []
scores = []
faster_rcnn_results = {}
for image_path in TEST_IMAGE_PATHS:
image_id = image_path_to_id[str(image_path)]
date_captured = image_id_to_datetime[image_id]
context_feature, score, results = run_inference(
faster_rcnn_model, image_path, date_captured)
faster_rcnn_results[image_id] = results
context_features.append(context_feature)
scores.append(score)
# Concatenate all extracted context embeddings into a contextual memory bank.
context_features_matrix = np.concatenate(context_features, axis=0)
"""
Explanation: Run it on each test image and use the output detection features and metadata to build up a context feature bank:
End of explanation
"""
context_rcnn_model_name = 'context_rcnn_resnet101_snapshot_serengeti_2020_06_10'
context_rcnn_model = load_model(context_rcnn_model_name)
"""
Explanation: Run Detection With Context
Load a context r-cnn object detection model:
End of explanation
"""
context_padding_size = 2000
"""
Explanation: We need to define the expected context padding size for the
model; this must match the definition in the model config (max_num_context_features).
End of explanation
"""
context_rcnn_model.inputs
"""
Explanation: Check the model's input signature: it expects a batch of 3-color images of type uint8, plus context_features padded to the maximum context feature size for this model (2000) and valid_context_size giving the number of non-padded context features:
End of explanation
"""
context_rcnn_model.output_dtypes
context_rcnn_model.output_shapes
def run_context_rcnn_inference_for_single_image(
model, image, context_features, context_padding_size):
'''Run single image through a Context R-CNN saved_model.
This function runs a saved_model on a (single) provided image and provided
contextual features and returns inference results in numpy arrays.
Args:
model: tensorflow Context R-CNN saved_model. This model can be obtained
using export_inference_graph.py and setting side_input fields.
Example export call -
python export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path /path/to/context_rcnn_model.config \
--trained_checkpoint_prefix /path/to/context_rcnn_model.ckpt \
--output_directory /path/to/output_dir \
--use_side_inputs True \
--side_input_shapes 1,2000,2057/1 \
--side_input_names context_features,valid_context_size \
--side_input_types float,int \
--input_shape 1,-1,-1,3
image: uint8 numpy array with shape (img_height, img_width, 3)
context_features: A numpy float32 contextual memory bank of shape
[num_context_examples, 2057]
context_padding_size: The amount of expected padding in the contextual
memory bank, defined in the Context R-CNN config as
max_num_context_features.
Returns:
output_dict: a dictionary holding the following entries:
`num_detections`: an integer
`detection_boxes`: a numpy (float32) array of shape [N, 4]
`detection_classes`: a numpy (uint8) array of shape [N]
`detection_scores`: a numpy (float32) array of shape [N]
'''
image = np.asarray(image)
# The input image needs to be a tensor, convert it using
# `tf.convert_to_tensor`.
image_tensor = tf.convert_to_tensor(
image, name='image_tensor')[tf.newaxis,...]
context_features = np.asarray(context_features)
valid_context_size = context_features.shape[0]
valid_context_size_tensor = tf.convert_to_tensor(
valid_context_size, name='valid_context_size')[tf.newaxis,...]
padded_context_features = np.pad(
context_features,
((0,context_padding_size-valid_context_size),(0,0)), mode='constant')
padded_context_features_tensor = tf.convert_to_tensor(
padded_context_features,
name='context_features',
dtype=tf.float32)[tf.newaxis,...]
# Run inference
output_dict = model(
inputs=image_tensor,
context_features=padded_context_features_tensor,
valid_context_size=valid_context_size_tensor)
  # All outputs are batched tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_dets = output_dict.pop('num_detections')
num_detections = int(num_dets)
for key,value in output_dict.items():
output_dict[key] = value[0, :num_detections].numpy()
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
return output_dict
def show_context_rcnn_inference(
model, image_path, context_features, faster_rcnn_output_dict,
context_padding_size, resize_image=True):
"""Runs inference over a single input image and visualizes Faster R-CNN vs.
Context R-CNN results.
Args:
model: A tensorflow saved_model object.
image_path: Absolute path to the input image.
context_features: A numpy float32 contextual memory bank of shape
[num_context_examples, 2057]
faster_rcnn_output_dict: The output_dict corresponding to this input image
from the single-frame Faster R-CNN model, which was previously used to
build the memory bank.
context_padding_size: The amount of expected padding in the contextual
memory bank, defined in the Context R-CNN config as
max_num_context_features.
resize_image: Whether to resize the input image before running inference.
Returns:
context_rcnn_image_np: Numpy image array showing Context R-CNN Results.
faster_rcnn_image_np: Numpy image array showing Faster R-CNN Results.
"""
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
with open(image_path,'rb') as f:
image = Image.open(f)
if resize_image:
image.thumbnail((640,640),Image.ANTIALIAS)
image_np = np.array(image)
image.thumbnail((400,400),Image.ANTIALIAS)
context_rcnn_image_np = np.array(image)
faster_rcnn_image_np = np.copy(context_rcnn_image_np)
# Actual detection.
output_dict = run_context_rcnn_inference_for_single_image(
model, image_np, context_features, context_padding_size)
# Visualization of the results of a context_rcnn detection.
vis_utils.visualize_boxes_and_labels_on_image_array(
context_rcnn_image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
use_normalized_coordinates=True,
line_thickness=2)
# Visualization of the results of a faster_rcnn detection.
vis_utils.visualize_boxes_and_labels_on_image_array(
faster_rcnn_image_np,
faster_rcnn_output_dict['detection_boxes'],
faster_rcnn_output_dict['detection_classes'],
faster_rcnn_output_dict['detection_scores'],
category_index,
use_normalized_coordinates=True,
line_thickness=2)
return context_rcnn_image_np, faster_rcnn_image_np
"""
Explanation: And returns several outputs:
End of explanation
"""
%matplotlib inline
plt.rcParams['axes.grid'] = False
plt.rcParams['xtick.labelsize'] = False
plt.rcParams['ytick.labelsize'] = False
plt.rcParams['xtick.top'] = False
plt.rcParams['xtick.bottom'] = False
plt.rcParams['ytick.left'] = False
plt.rcParams['ytick.right'] = False
plt.rcParams['figure.figsize'] = [15,10]
"""
Explanation: Define Matplotlib parameters for pretty visualizations
End of explanation
"""
for image_path in TEST_IMAGE_PATHS:
image_id = image_path_to_id[str(image_path)]
faster_rcnn_output_dict = faster_rcnn_results[image_id]
context_rcnn_image, faster_rcnn_image = show_context_rcnn_inference(
context_rcnn_model, image_path, context_features_matrix,
faster_rcnn_output_dict, context_padding_size)
plt.subplot(1,2,1)
plt.imshow(faster_rcnn_image)
plt.title('Faster R-CNN')
plt.subplot(1,2,2)
plt.imshow(context_rcnn_image)
plt.title('Context R-CNN')
plt.show()
"""
Explanation: Run Context R-CNN inference and compare results to Faster R-CNN
End of explanation
"""
|
yashdeeph709/Algorithms
|
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/List Comprehensions-checkpoint.ipynb
|
apache-2.0
|
# Grab every letter in string
lst = [x for x in 'word']
# Check
lst
"""
Explanation: Comprehensions
In addition to sequence operations and list methods, Python includes a more advanced operation called a list comprehension.
List comprehensions allow us to build out lists using a different notation. You can think of it as essentially a one line for loop built inside of brackets. For a simple example:
Example 1
End of explanation
"""
# Square numbers in range and turn into list
lst = [x**2 for x in range(0,11)]
lst
"""
Explanation: This is the basic idea of a list comprehension. If you're familiar with mathematical notation, this format should feel familiar, for example: { x^2 : x in {0, 1, 2, ..., 10} }
Let's see a few more examples of list comprehensions in Python:
Example 2
End of explanation
"""
# Check for even numbers in a range
lst = [x for x in range(11) if x % 2 == 0]
lst
"""
Explanation: Example 3
Let's see how to add in if statements:
End of explanation
"""
# Convert Celsius to Fahrenheit
celsius = [0,10,20.1,34.5]
fahrenheit = [ ((float(9)/5)*temp + 32) for temp in celsius ]
fahrenheit
"""
Explanation: Example 4
We can also do more complicated arithmetic:
End of explanation
"""
lst = [ x**2 for x in [x**2 for x in range(11)]]
lst
"""
Explanation: Example 5
We can also perform nested list comprehensions, for example:
End of explanation
"""
|
ghvn7777/ghvn7777.github.io
|
content/fluent_python/2_1_listcomp.ipynb
|
apache-2.0
|
symbols = "a%b&c$de$"
beyond_ascii = [ord(s) for s in symbols if ord(s) > 50]
beyond_ascii
beyond_ascii = list(filter(lambda c: c > 50, map(ord, symbols)))
beyond_ascii
"""
Explanation: List comprehensions and generator expressions
We can use map and filter to achieve the same effect as a list comprehension
End of explanation
"""
colors = ['black', 'white']
sizes = ['S', 'M', 'L']
tshirts = [(color, size) for color in colors  # build the list from two colors and three sizes
for size in sizes]
tshirts
"""
Explanation: For the example above, filter/map is not faster than the listcomp (list comprehension); we will discuss this further in a later chapter.
Nested loops in list comprehensions
Suppose we want to make T-shirts in two colors and three sizes; the following code builds the sequence of T-shirts
End of explanation
"""
symbols = "a%b&c$de$"
tuple(ord(s) for s in symbols)  # if the genexp is the single argument of a function call, no extra parentheses are needed
import array
array.array('I', (ord(symbol) for symbol in symbols))  # here the genexp is not the only argument, so parentheses are required
"""
Explanation: In Python, line breaks inside [], {}, and () are ignored, so when building lists, dicts, and so on we don't need \ to continue a line. A list comprehension does only one thing: it builds a list. If we want to generate another kind of sequence, a generator expression is the tool to use.
Generator expressions
A generator expression saves memory because it follows the iterator protocol, yielding items one by one instead of building the whole list up front and then passing it to some constructor.
A genexp (generator expression) uses the same syntax as a listcomp (list comprehension), but with parentheses instead of square brackets
End of explanation
"""
lax_coordinates = (33.942, -118.4080)  # latitude and longitude
city, year, pop, chg, area = ('Tokyo', 2003, 32450, 0.66, 8014)  # data about Tokyo: name, year, population, population change, area
traveler_ids = [('USA', '31195855'), ('BRA', 'CE342567'), ('ESP', 'XDA205856')] #country_code,passport_number
for passport in sorted(traveler_ids):
print('%s/%s' % passport)
for country, _ in traveler_ids:
print(country)
"""
Explanation: Tuple
Tuples are not just immutable lists
A tuple can be used as an immutable sequence, but also as a record with no field names. A tuple really is a record of data: each element holds the data for one field, and the position of that element gives the data its meaning. When a tuple is used as a record, reordering it destroys the information it carries
End of explanation
"""
lax_coordinates = (33.348, -117.932)
latitude, longitude = lax_coordinates # tuple 拆解
print(latitude)
print(longitude)
"""
Explanation: Tuple unpacking
Above we unpacked a tuple into city, year, ... and then printed a tuple with "%s/%s" % passport; both are examples of tuple unpacking. Any iterable can be unpacked; the number of elements in the iterable must match the number of slots in the receiving tuple, unless you use a star (*) to capture the extra elements
The most common form of tuple unpacking is parallel assignment, that is, assigning the items of an iterable to a tuple of variables, as shown below:
End of explanation
"""
a = 3
b = 5
a, b = b, a
a
"""
Explanation: Swapping two values without a temporary variable is another application of tuple unpacking
End of explanation
"""
divmod(20, 8) #a 除 b,返回商和余数
t = (20, 8)
divmod(*t)
quotient, remainder = divmod(*t)
quotient, remainder
"""
Explanation: Another form of tuple unpacking: prefixing an argument with a star when calling a function
End of explanation
"""
import os
_, filename = os.path.split('/home/kaka/.ssh/idrsa.pub')
filename
"""
Explanation: The code above is also a form of tuple unpacking: when calling a function, * unpacks an iterable into separate arguments.
The example below shows how a function can return multiple values as a tuple: os.path.split() returns a tuple (path, last_part) made up of the path and the final component of the filename
End of explanation
"""
a, b, *rest = range(5)
a, b, rest
a, b, *rest = range(3)
a, b, rest
a, b, *rest = range(2)
a, b, rest
"""
Explanation: If you are writing internationalized software, _ is not a good placeholder name, because by convention it is used as an alias for the gettext.gettext function; otherwise, _ is a fine placeholder
Using * to grab the remaining elements
Declaring a function parameter as *args to capture an arbitrary number of arguments is a classic Python idiom
In Python 3, this idea was extended to parallel assignment.
End of explanation
"""
a, *body, c, d = range(5)
a, body, c, d
*head, b, c, d = range(5)
head, b, c, d
"""
Explanation: In parallel assignment, the * prefix can be applied to exactly one variable, but that variable can appear in any position
End of explanation
"""
metro_areas = [
('Tokyo', 'JP', 36.933, (35.689722, 139.691667)),
('Delhi NCR', 'IN', 21.935, (28.613889, 77.208889)),
('Mexico City', 'MX', 20.142, (19.43333, -99.13333)),
('Net York-Newark', 'US', 20.104, (40.808611, -74.020386)),
('Sao Paulo', 'BR', 19.649, (-23.547778, -46.635883))
]
print('{:15} | {:^9} | {:^9}'.format('', 'lat.', 'long.'))  # ^9 means the field is 9 characters wide, centered
fmt = '{:15} | {:^9} | {:^9}'
for name, cc, pop, (latitude, longitude) in metro_areas:
    if longitude <= 0:  # western hemisphere
print(fmt.format(name, latitude, longitude))
"""
Explanation: Nested tuple unpacking
The tuple that receives the expression to unpack can contain nested tuples, e.g. (a, b, (c, d)); as long as the expression matches the nesting structure, Python will do the right thing
End of explanation
"""
from collections import namedtuple
City = namedtuple('City', 'name country population coordinates')
tokyo = City('Tokyo', 'JP', 36.933, (35.689722, 139.691667))
tokyo
tokyo.coordinates
tokyo[1]
"""
Explanation: Before Python 3 you could use a nested tuple as a formal parameter when defining a function, e.g. def func(a, (b, c), d). Python 3 no longer supports this, but that has no effect on callers of a function; it only changes how some functions are declared
By design, tuples are very convenient and can be used as records, but they cannot carry field names, which is why namedtuple was created
Tuples with named fields (named tuples)
collections.namedtuple is a factory function that builds a tuple subclass with field names and a class name. The named class is a big help when debugging. An instance created with namedtuple uses exactly as much memory as a plain tuple, because the field names are stored in the class; it is also smaller than a regular object instance, because Python does not use __dict__ to store the instance attributes
The example below shows how to use a named tuple to record information about a city.
End of explanation
"""
City._fields  # a tuple containing the field names
LatLong = namedtuple('LatLong', 'lat long')
delhi_data = ('Delhi NCR', 'IN', 21.935, LatLong(28.613889, 77.208889))
delhi = City._make(delhi_data)  # _make() builds an instance of the class from an iterable; City(*delhi_data) would do the same
delhi._asdict()  # returns the named tuple as a collections.OrderedDict, handy for displaying the city data in a friendly way
"""
Explanation: Building a namedtuple requires two arguments: a class name and the field names, which can be given as an iterable of strings (e.g. a list) or as a single space-delimited string.
The data stored in the fields is passed to the constructor as separate positional arguments (note that the tuple constructor, by contrast, accepts a single iterable).
The data can then be read by field name or by position
Below are some commonly used namedtuple attributes:
End of explanation
"""
|
Unidata/unidata-python-workshop
|
notebooks/XArray/XArray Introduction.ipynb
|
mit
|
# Convention for import to get shortened namespace
import numpy as np
import xarray as xr
# Create some sample "temperature" data
data = 283 + 5 * np.random.randn(5, 3, 4)
data
"""
Explanation: <div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>XArray Introduction</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="http://xarray.pydata.org/en/stable/_static/dataset-diagram-logo.png" alt="NumPy Logo" style="height: 250px;"></div>
Overview:
Teaching: 25 minutes
Exercises: 20 minutes
Questions
What is XArray?
How does XArray fit in with Numpy and Pandas?
Objectives
Create a DataArray.
Open netCDF data using XArray
Subset the data.
XArray
XArray expands on the capabilities of NumPy arrays, providing a lot of streamlined data manipulation. It is similar in that respect to Pandas, but whereas Pandas excels at working with tabular data, XArray is focused on N-dimensional arrays of data (i.e. grids). Its interface is based largely on the netCDF data model (variables, attributes, and dimensions), but it goes beyond the traditional netCDF interfaces to provide functionality similar to netCDF-java's Common Data Model (CDM).
DataArray
The DataArray is one of the basic building blocks of XArray. It provides a NumPy ndarray-like object that expands to provide two critical pieces of functionality:
Coordinate names and values are stored with the data, making slicing and indexing much more powerful
It has a built-in container for attributes
End of explanation
"""
temp = xr.DataArray(data)
temp
"""
Explanation: Here we create a basic DataArray by passing it just a numpy array of random data. Note that XArray generates some basic dimension names for us.
End of explanation
"""
temp = xr.DataArray(data, dims=['time', 'lat', 'lon'])
temp
"""
Explanation: We can also pass in our own dimension names:
End of explanation
"""
# Use pandas to create an array of datetimes
import pandas as pd
times = pd.date_range('2018-01-01', periods=5)
times
# Sample lon/lats
lons = np.linspace(-120, -60, 4)
lats = np.linspace(25, 55, 3)
"""
Explanation: This is already an improvement over a plain NumPy array, because we have names for each of the dimensions (or axes in NumPy parlance). Even better, we can take arrays representing the values for the coordinates for each of these dimensions and associate them with the data when we create the DataArray.
End of explanation
"""
temp = xr.DataArray(data, coords=[times, lats, lons], dims=['time', 'lat', 'lon'])
temp
"""
Explanation: When we create the DataArray instance, we pass in the arrays we just created:
End of explanation
"""
temp.attrs['units'] = 'kelvin'
temp.attrs['standard_name'] = 'air_temperature'
temp
"""
Explanation: ...and we can also set some attribute metadata:
End of explanation
"""
# For example, convert Kelvin to Celsius
temp - 273.15
"""
Explanation: Notice what happens if we perform a mathematical operation with the DataArray: the coordinate values persist, but the attributes are lost. This is done because it is very challenging to know if the attribute metadata is still correct or appropriate after arbitrary arithmetic operations.
End of explanation
"""
temp.sel(time='2018-01-02')
"""
Explanation: Selection
We can use the .sel method to select portions of our data based on these coordinate values, rather than using indices (this is similar to the CDM).
End of explanation
"""
from datetime import timedelta
temp.sel(time='2018-01-07', method='nearest', tolerance=timedelta(days=2))
"""
Explanation: .sel has the flexibility to also perform nearest neighbor sampling, taking an optional tolerance:
End of explanation
"""
# Your code goes here
"""
Explanation: Exercise
.interp() works similarly to .sel(). Using .interp(), get an interpolated time series "forecast" for Boulder (40°N, 105°W) or your favorite latitude/longitude location. (Documentation for <a href="http://xarray.pydata.org/en/stable/interpolation.html">interp</a>).
End of explanation
"""
# %load solutions/interp_solution.py
"""
Explanation: Solution
End of explanation
"""
temp.sel(time=slice('2018-01-01', '2018-01-03'), lon=slice(-110, -70), lat=slice(25, 45))
"""
Explanation: Slicing with Selection
End of explanation
"""
# As done above
temp.loc['2018-01-02']
temp.loc['2018-01-01':'2018-01-03', 25:45, -110:-70]
# This *doesn't* work however:
#temp.loc[-110:-70, 25:45,'2018-01-01':'2018-01-03']
"""
Explanation: .loc
All of these operations can also be done within square brackets on the .loc attribute of the DataArray. This permits a much more numpy-looking syntax, though you lose the ability to specify the names of the various dimensions. Instead, the slicing must be done in the correct order.
End of explanation
"""
# Open sample North American Reanalysis data in netCDF format
ds = xr.open_dataset('../../data/NARR_19930313_0000.nc')
ds
"""
Explanation: Opening netCDF data
With its close ties to the netCDF data model, XArray also supports netCDF as a first-class file format. This means it has easy support for opening netCDF datasets, so long as they conform to some of XArray's limitations (such as 1-dimensional coordinates).
End of explanation
"""
ds.isobaric1
"""
Explanation: This returns a Dataset object, which is a container that contains one or more DataArrays, which can also optionally share coordinates. We can then pull out individual fields:
End of explanation
"""
ds['isobaric1']
"""
Explanation: or
End of explanation
"""
ds_1000 = ds.sel(isobaric1=1000.0)
ds_1000
ds_1000.Temperature_isobaric
"""
Explanation: Datasets also support much of the same subsetting operations as DataArray, but will perform the operation on all data:
End of explanation
"""
u_winds = ds['u-component_of_wind_isobaric']
u_winds.std(dim=['x', 'y'])
"""
Explanation: Aggregation operations
Not only can you use the named dimensions for manual slicing and indexing of data, but you can also use it to control aggregation operations, like sum:
End of explanation
"""
# %load solutions/mean_profile.py
"""
Explanation: Exercise
Using the sample dataset, calculate the mean temperature profile (temperature as a function of pressure) over Colorado within this dataset. For this exercise, consider the bounds of Colorado to be:
* x: -182km to 424km
* y: -1450km to -990km
(37°N to 41°N and 102°W to 109°W projected to Lambert Conformal projection coordinates)
Solution
End of explanation
"""
|
yandex-load/volta
|
firmware/arduino_due_1MHz/sync.ipynb
|
mpl-2.0
|
df_r1000 = df.groupby(df.index//1000).mean()
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000.plot(ax=ax)
"""
Explanation: Group the samples by millisecond and average them:
End of explanation
"""
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000[:12000].plot(ax=ax)
"""
Explanation: The consumption spikes we are interested in end somewhere around the 10000th millisecond (there are five of them in a row; we blinked the flashlight five times).
End of explanation
"""
import numpy as np
import pandas as pd
from scipy import signal
from scipy import interpolate
from scipy.stats import pearsonr
import logging
from datetime import datetime  # needed by torch_status / parse_torch_events below
log = logging.getLogger(__name__)
def torch_status(lines):
"""
Parse torch statuses from lines
"""
for line in lines:
if "newStatus=2" in line:
yield (
datetime.strptime(
line.split()[1], "%H:%M:%S.%f"),
1)
elif "newStatus=1" in line:
yield (
datetime.strptime(
line.split()[1], "%H:%M:%S.%f"),
0)
def parse_torch_events(filename, sps=1000):
"""
Parse torch events from file, considering target sample rate.
Offset is the number of sample
"""
log.info("Parsing torch events...")
with open(filename) as eventlog:
df = pd.DataFrame.from_records(
torch_status(eventlog), columns=["offset", "status"])
df["offset"] = df["offset"].map(
lambda x: int(np.round((x - df["offset"][0]).total_seconds() * sps)))
return df
def ref_signal(torch, trailing_zeros=1000):
"""
Generate square reference signal with trailing zeroes
"""
log.info("Generating ref signal...")
f = interpolate.interp1d(torch["offset"], torch["status"], kind="zero")
X = np.linspace(0, torch["offset"].values[-1], torch["offset"].values[-1])
return np.append(f(X), np.zeros(trailing_zeros))
def cross_correlate(sig, ref, first=30000):
"""
Calculate cross-correlation with lag. Take only first n lags.
"""
log.info("Calculating cross-correlation...")
lags = np.arange(len(sig) - len(ref))
if len(lags) > first:
lags = lags[:first]
return pd.DataFrame.from_records(
[pearsonr(sig[lag:lag+len(ref)], ref) for lag in lags],
columns=["corr", "p_value"])
def sync(sig, eventlog, sps=1000, trailing_zeros=1000, first=30000):
rs = ref_signal(
parse_torch_events(eventlog, sps=sps),
trailing_zeros=trailing_zeros)
cc = cross_correlate(sig, rs)
sync_point = np.argmax(cc["corr"])
if cc["p_value"][sync_point] > 0.05:
        raise RuntimeError("P-value is too big: %g" % cc["p_value"][sync_point])
log.info(
"Pearson's coef: %d, p-value: %d",
cc["corr"][sync_point],
cc["p_value"][sync_point])
return sync_point
te = parse_torch_events("browser_download.log", sps=1000)
rs = ref_signal(te)
cc = cross_correlate(df_r1000[0], rs)
sync_point = np.argmax(cc["corr"])  # find the best lag before plotting the reference signal
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
sns.plt.plot(df_r1000[0][:20000], label="signal")
sns.plt.plot(cc["corr"][:20000]*1000 + 500, label="cross-correlation")
sns.plt.plot(np.append(np.zeros(sync_point), rs * 500 + 500), label="reference")
#sns.plt.plot(cc["p_value"][:20000]*1000, label="p-value")
sns.plt.axvline(sync_point)
ax.legend()
fig = sns.plt.figure(figsize=(10, 6))
ax = sns.plt.subplot()
sns.plt.scatter(np.arange(0, 30, 2), np.zeros(15), label="One")
sns.plt.scatter(np.arange(1, 31, 2), np.zeros(15), label="Other", color="red")
ax.legend()
"""
Explanation: Functions for parsing events from the log and finding the synchronization point:
End of explanation
"""
|
jtyberg/interactive-insights-workbench
|
notebook/samples/python/Query_MongoDB.ipynb
|
bsd-3-clause
|
import pandas as pd
from pymongo import MongoClient
from bson.objectid import ObjectId
from urth.widgets.widget_channels import channel
"""
Explanation: Query MongoDB Database Collection
This notebook demonstrates how to:
Connect to a MongoDB instance
List the databases for the instance
List the collections for a database
Query a database collection and convert the result to a pandas.DataFrame
Bind a DataFrame to an jupyter-incubator/declarativewidgets urth-viz-table widget to display the results
End of explanation
"""
client = MongoClient('192.168.99.100', 27017)
"""
Explanation: Connect to a MongoDB instance.
End of explanation
"""
client.database_names()
"""
Explanation: List the databases in the instance.
End of explanation
"""
db = client.demo
"""
Explanation: Get a reference to a database.
End of explanation
"""
db.collection_names()
"""
Explanation: List the collections in the database.
End of explanation
"""
features = db.client_features
"""
Explanation: Get a reference to a collection.
End of explanation
"""
def query_collection(limit=100):
cursor = features.find({}).limit(limit)
df = pd.DataFrame(list(cursor))
# Remove the MongoDB _id column
del df['_id']
return df
df = query_collection()
df.head()
"""
Explanation: Query collection.
End of explanation
"""
df.iloc[0].to_dict()
"""
Explanation: Show a single record as dict.
End of explanation
"""
%%html
<link rel="import" href="urth_components/urth-core-function/urth-core-function.html">
<link rel="import" href="urth_components/urth-viz-table/urth-viz-table.html" is="urth-core-import">
<template is="dom-bind">
<urth-core-function id="fc" ref="query_collection"
arg-limit="{{ limit }}"
result="{{ data }}"></urth-core-function>
<div class="heading layout horizontal justified">
<button onClick="fc.invoke()">Run Query</button>
</div>
<template is="dom-if" if="[[data]]">
<urth-viz-table
datarows="{{ data.data }}"
columns="{{ data.columns }}"
selection="{{ selected }}"
rowsVisible=10></urth-viz-table>
</template>
</template>
"""
Explanation: Bind the above function to a table widget.
End of explanation
"""
|
gatmeh/Udacity-deep-learning
|
language-translation/dlnd_language_translation.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
# TODO: Implement Function
return None, None, None, None, None, None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
"""
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
    :param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
"""
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
"""
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
"""
# Number of Epochs
epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Number of Layers
num_layers = None
# Embedding Size
encoding_embedding_size = None
decoding_embedding_size = None
# Learning Rate
learning_rate = None
# Dropout Keep Probability
keep_probability = None
display_step = None
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
"""
Explanation: Batch and pad the source and target sequences
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
4dsolutions/Python5
|
Dimensions of Python.ipynb
|
mit
|
from keyword import kwlist
print(", ".join(kwlist))
"""
Explanation: Oregon Curriculum Network <br />
Discovering Math with Python
Five Dimensions of Python
Keywords: basic syntax, reserved terms
Builtins: available on bootup of Python, no import required
Special Names: hooks for tying code to syntax (e.g. obj.attr, obj[index])
Standard Library: "batteries included"
Third Party Packages: vast assortment of libraries, frameworks, distributions
Keywords
End of explanation
"""
exceptions, callables, not_callables = [], [], []
for name in dir(__builtins__): # cycle through a bunch of strings
obj = eval(name) # turn strings back into objects
if type(obj) == type and issubclass(obj, BaseException):
exceptions.append(name) # types of exception
elif callable(obj):
callables.append(name) # callable
else:
not_callables.append(name) # whatever is left over
print("Exceptions\n", exceptions)
print("------")
print("Callables\n", callables)
print("------")
print("Other\n", not_callables)
"""
Explanation: Builtins
When you boot into a Python shell (REPL), you're not entering a blank canvas or blank slate. Lots of names are already defined, and ready for calling or using.
End of explanation
"""
"""
This type of object gets along with nobody!
or·ner·y
ˈôrn(ə)rē/
adjective North American informal
adjective: ornery
bad-tempered and combative.
"some hogs are just mean and ornery"
synonyms: grouchy, grumpy, cranky, crotchety, cantankerous, bad-tempered,
ill-tempered, dyspeptic, irascible, waspish
自 = self in Chinese (showing that self is not a keyword, just a placeholder)
"""
class Ornery:
def __init__(自, name="Fred"):
自.name = name
print("A sourpuss is born!")
def __getitem__(自, key):
return "How dare you touch me with those brackets!"
def __call__(自, *args, **kwargs):
return "Don't call me at home!"
def __getattr__(自, attr):
return "I'm insulted you'd suppose I'd have {}".format(attr)
def __repr__(自):
return "Don't bother me! Go away."
def __invert__(自):
return "I can't invert, are you kidding?"
grumpy = Ornery()
print(grumpy("Hello?")) # __call__
print(grumpy.mood) # __getattr__
print(grumpy[3]) # __getitem__
print(~grumpy) # __invert__
print(grumpy) # __repr__
"""
Explanation: Special Names
Special names all have a similar look: __name__
In other words: two underlines, a name, two underlines. I sometimes think of the special names as the Python's __ribs__ (snakes have lots of ribs).
You don't make up or define these special names yourself. They come with the language. Newer versions of Python may have more of them, but within a given version, they're fixed.
Think of special names as being like the strings a puppeteer uses to control marionettes. Each string connects to a specific aspect of Python's grammar, such as calling an object, getting or setting an object's attribute, or getting and setting items using index notation (square brackets).
Many object types come with predefined behaviors for any or all of the above. When you write your own classes, or subclass an existing type, that's when you have the ability to define or override some special name behavior.
End of explanation
"""
|
jhprinz/openpathsampling
|
examples/alanine_dipeptide_tps/AD_tps_2b_run_fixed.ipynb
|
lgpl-2.1
|
import openpathsampling as paths
"""
Explanation: This file runs the main calculation for the fixed length TPS simulation. It requires the file alanine_dipeptide_fixed_tps_traj.nc, which is written in the notebook alanine_dipeptide_fixed_tps_traj.ipynb.
In this file, you will learn:
* how to set up and run a fixed length TPS simulation
NB: This is a long calculation. In practice, it would be best to export the Python from this notebook, remove the live_visualizer, and run non-interactively on a computing node.
End of explanation
"""
old_storage = paths.Storage("tps_nc_files/alanine_dipeptide_fixed_tps_traj.nc", "r")
engine = old_storage.engines[0]
C_7eq = old_storage.volumes.find('C_7eq')
alpha_R = old_storage.volumes.find('alpha_R')
traj = old_storage.trajectories[0]
phi = old_storage.cvs.find('phi')
psi = old_storage.cvs.find('psi')
template = old_storage.snapshots[0]
print engine.name
print engine.snapshot_timestep
"""
Explanation: Load engine, trajectory, and states from file
End of explanation
"""
network = paths.FixedLengthTPSNetwork(C_7eq, alpha_R, length=400)
scheme = paths.OneWayShootingMoveScheme(network, selector=paths.UniformSelector(), engine=engine)
initial_conditions = scheme.initial_conditions_from_trajectories(traj)
sampler = paths.PathSampling(storage=paths.Storage("tps_nc_files/alanine_dipeptide_fixed_tps.nc", "w", template),
move_scheme=scheme,
sample_set=initial_conditions)
#sampler.live_visualizer = paths.StepVisualizer2D(network, phi, psi, [-3.14, 3.14], [-3.14, 3.14])
#sampler.live_visualizer = None
sampler.run(10000)
"""
Explanation: TPS
The only difference between this and the flexible path length example in alanine_dipeptide_tps_run.ipynb is that we used a FixedLengthTPSNetwork. We selected the length=400 (8 ps) as a maximum length based on the results from a flexible path length run.
End of explanation
"""
|
goodwordalchemy/thinkstats_notes_and_exercises
|
code/chap12_time_series_analysis.ipynb
|
gpl-3.0
|
# Imports needed by this notebook (standard libraries plus the ThinkStats2 support modules)
import numpy as np
import pandas
from matplotlib import pyplot
import statsmodels.formula.api as smf
import thinkplot
import thinkstats2
import timeseries
import regression

transactions = pandas.read_csv('mj-clean.csv', parse_dates=[5])
dailies = timeseries.GroupByQualityAndDay(transactions)
def PlotDailies(dailies):
thinkplot.PrePlot(rows=3)
for i, (name, daily) in enumerate(dailies.items()):
thinkplot.SubPlot(i+1)
title = 'price per gram ($)' if i == 0 else ''
thinkplot.Config(ylim=[0, 20], title=title)
thinkplot.Scatter(daily.ppg, s=10, label=name)
if i == 2:
pyplot.xticks(rotation=30)
else:
thinkplot.Config(xticks=[])
thinkplot.Show()
PlotDailies(dailies)
def RunLinearModel(daily):
model = smf.ols('ppg ~ years', data=daily)
results = model.fit()
return model, results
for name, daily in dailies.items():
model, results = RunLinearModel(daily)
print
print name
regression.SummarizeResults(results)
def PlotFittedValues(model, results, label=''):
years = model.exog[:, 1]
values = model.endog
thinkplot.Scatter(years, values, s=15, label=label)
thinkplot.Plot(years, results.fittedvalues, label='model')
high = dailies['high']
model, results = RunLinearModel(high)
PlotFittedValues(model, results, "high")
thinkplot.Show()
"""
Explanation: A time series is a sequence of measurements taken from a system that varies in time.
End of explanation
"""
series = np.arange(10)
pandas.rolling_mean(series, 3)
def PlotRollingMean(daily):
##since there are missing dates, we have to reindex the df
dates = pandas.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
roll_mean = pandas.rolling_mean(reindexed.ppg, 30)
thinkplot.Plot(roll_mean.index, roll_mean)
pyplot.xticks(rotation=30)
PlotRollingMean(high)
"""
Explanation: Even though this is a good linear fit, we shouldn't trust it, because:
* There is no reason to expect a long-term trend to be a line or any other simple function.
* The linear regression model gives equal weight to all data. We should probably give more weight to recent data.
* One of the assumptions of linear regression is that the residuals are uncorrelated noise. That is probably false here, because successive values are probably correlated.
Moving Averages:
A common modeling assumption in time series analysis is that the observed series is the sum of:
* Trend: a smooth function that captures persistent changes
* Seasonality: periodic variation, possibly including daily, weekly, monthly or yearly cycles
* Noise: random variation around the long-term trend
A moving average divides the series into overlapping regions, called windows, and computes the average of the values in each window.
The rolling mean is the simplest moving average: it computes the mean of the values in each window.
End of explanation
"""
def PlotEWMA(daily):
##since there are missing dates, we have to reindex the df
dates = pandas.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
ewma = pandas.ewma(reindexed.ppg, span=30)
thinkplot.Plot(ewma.index, ewma)
pyplot.xticks(rotation=30)
PlotEWMA(high)
"""
Explanation: exponentially-weighted moving average - most recent value has highest weight and weights of previous values drop off exponentially.
End of explanation
"""
def FillMissing(daily, span=30):
dates = pandas.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
ewma = pandas.ewma(reindexed.ppg, span=span)
resid = (reindexed.ppg - ewma).dropna()
fake_data = ewma + thinkstats2.Resample(resid, len(reindexed))
reindexed.ppg.fillna(fake_data, inplace=True)
return reindexed
high2 = FillMissing(high)
PlotEWMA(high2)
"""
Explanation: Missing Values
End of explanation
"""
## returns the serial correlation for the given lag (a value between -1 and 1)
def SerialCorr(series, lag=1):
xs = series[lag:]
ys = series.shift(lag)[lag:]
corr = thinkstats2.Corr(xs, ys)
return corr
ewma = pandas.ewma(high2.ppg, span=30)
resid = high2.ppg - ewma
corr = SerialCorr(resid, 365)
corr
"""
Explanation: Serial correlation: each value in the series is correlated with the next one. To compute it, we can shift the series by an interval called a lag and then compute the correlation of the shifted series with the original.
End of explanation
"""
import statsmodels.tsa.stattools as smtsa
##unbiased corrects for sample size
acf = smtsa.acf(resid, nlags=365, unbiased=True)
def GenerateSimplePrediction(results, years):
n = len(years)
inter = np.ones(n)
d = dict(Intecept=inter, years=years)
predict_df = pandas.DataFrame(d)
predict = results.predict(predict_df)
return predict
years = np.linspace(0, 5, 101)
simplePrediction = GenerateSimplePrediction(results, years)
thinkplot.Plot(years, simplePrediction)
"""
Explanation: The autocorrelation function maps from a lag to the serial correlation with that lag.
End of explanation
"""
"""
to quantify sampling error:
assume estimated parameters are correct
but residual could have been different
use resampling to rerun experiment
"""
def SimulateResults(daily, iters=101, func=RunLinearModel):
model, results = func(daily)
fake = daily.copy()
result_seq = []
for i in range(iters):
fake.ppg = results.fittedvalues + thinkstats2.Resample(results.resid)
        _, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
def GeneratePredictions(result_seq, years, add_resid=False):
n = len(years)
d = dict(Intercept=np.ones(n), years=years, years2=years**2)
predict_df = pandas.DataFrame(d)
predict_seq = []
for fake_results in result_seq:
predict = fake_results.predict(predict_df)
if add_resid:
predict += thinkstats2.Resample(fake_results.resid, n)
predict_seq.append(predict)
return predict_seq
def PlotPredictions(daily, years, iters=101, percent=90, func=RunLinearModel):
result_seq = SimulateResults(daily, iters=iters, func=func)
p = (100-percent) / 2
percents = p, 100-p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.3, color='gray')
predict_seq = GeneratePredictions(result_seq, years, add_resid=False)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.5, color='gray')
years = np.linspace(0, 5, 101)
print "high"
PlotPredictions(dailies['high'], years)
thinkplot.figure()
print "mid"
PlotPredictions(dailies['medium'], years)
thinkplot.figure()
print "low"
PlotPredictions(dailies['low'], years)
"""
Explanation: This graph needs to take into account:
* Sampling Error: the prediction is based on estimated parameters, which are liable to change if we run the experiment again.
* Random Variation: even if the estimated parameters are perfect, the observed data varies randomly around the long-term trend.
* Modeling Error: predictions based on a linear model will eventually fail.
* Unexpected future events.
End of explanation
"""
dailies.keys()
"""
Explanation: the dark gray region represents a 90% confidence interval for sampling error (that is uncertainty about the estimated slope and intercept due to sampling)
the lighter region shows confidence interval for prediction error., which is the sum of the sampling error due to random variation.
End of explanation
"""
# def RunLinearModel(daily):
# model = smf.ols('ppg ~ years', data=daily)
# results = model.fit()
# return model, results
def RunQuadraticModel(daily):
daily['years2'] = daily.years**2
formula = 'ppg ~ years + years2'
model = smf.ols(formula, data=daily)
results = model.fit()
return model, results
thinkplot.PrePlot(2, rows=2)
thinkplot.SubPlot(1)
PlotPredictions(dailies['high'], years)
thinkplot.SubPlot(2)
PlotPredictions(dailies['high'], years, iters=101, percent=90,
func=RunQuadraticModel)
m, r = RunQuadraticModel(dailies['high'])
regression.SummarizeResults(r)
"""
Explanation: Exercise 12.1
Use a quadratic model, as in Section 11.3, to fit the time series of daily prices and generate predictions. Basically this entails writing a new version of RunLinearModel.
End of explanation
"""
class SerialCorrelationTest(thinkstats2.HypothesisTest):
"""
takes a series and a lag as data
"""
def TestStatistic(self, data):
series, lag = data
return thinkstats2.SerialCorr(series, lag)
def RunModel(self):
series, lag = self.data
index = series.index
shuffle = thinkstats2.Resample(series, len(series))
## a cool way to do this:
# shuffle = series.reindex(np.random.permutation(series.index))
new_series = pandas.Series(shuffle, index)
return new_series, lag
##raw price data
qualities = ["low", "medium","high"]
for q in qualities:
sct = SerialCorrelationTest((dailies[q].ppg, 300))
pvalue = sct.PValue()
print q, pvalue
##linear residuals
filled = timeseries.FillMissing(dailies['high'])
model, results = RunLinearModel(filled)
# lin_res = filled - results
sct = SerialCorrelationTest((results.resid, 100))
pvalue = sct.PValue()
print 'rsquared', results.rsquared
pvalue
#quadratic residuals
filled = timeseries.FillMissing(dailies['high'])
model, results = RunQuadraticModel(filled)
# lin_res = filled - results
sct = SerialCorrelationTest((results.resid, 100))
pvalue = sct.PValue()
print 'rsquared', results.rsquared
pvalue
"""
Explanation: Exercise 12.2
Write a class SerialCorrelationTest that extends HypothesisTest from Section 9.2. It should take a series and a lag as data, compute the serial correlation of the series with the given lag, and then compute the p-value of the observed correlation.
Also test the residuals of the linear and quadratic models.
End of explanation
"""
daily = dailies['high']
filled = timeseries.FillMissing(daily)
filled['slope'] = pandas.ewma(filled.ppg.diff(), span=180)
slope = filled.slope[-1]
inter = filled.ewma[-1]
start = filled.index[-1]
end = filled.index.max() + pandas.DateOffset(days=365)
dates = pandas.date_range(filled.index.min(), end)
predicted = filled.reindex(dates)
predicted['date'] = predicted.index
one_day = np.timedelta64(1,'D')
predicted['days'] = (predicted.date - start) / one_day
predict = inter + slope * predicted.days
predicted.ewma.fillna(predict, inplace=True)
thinkplot.Scatter(daily.ppg, alpha=0.1, label=name)
thinkplot.Plot(predicted.ewma)
pyplot.xticks(rotation=30)
"""
Explanation: Exercise 12.3
* fill missing
* compute diffs
* reindex
* fillna
End of explanation
"""
|
tensorflow/docs-l10n
|
site/en-snapshot/tensorboard/get_started.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
# Load the TensorBoard notebook extension
%load_ext tensorboard
import tensorflow as tf
import datetime
# Clear any logs from previous runs
!rm -rf ./logs/
"""
Explanation: Get started with TensorBoard
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tensorboard/get_started"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/get_started.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorboard/blob/master/docs/get_started.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorboard/docs/get_started.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In machine learning, to improve something you often need to be able to measure it. TensorBoard is a tool for providing the measurements and visualizations needed during the machine learning workflow. It enables tracking experiment metrics like loss and accuracy, visualizing the model graph, projecting embeddings to a lower dimensional space, and much more.
This quickstart will show how to quickly get started with TensorBoard. The remaining guides in this website provide more details on specific capabilities, many of which are not included here.
End of explanation
"""
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
"""
Explanation: Using the MNIST dataset as the example, normalize the data and write a function that creates a simple Keras model for classifying the images into 10 classes.
End of explanation
"""
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback])
"""
Explanation: Using TensorBoard with Keras Model.fit()
When training with Keras's Model.fit(), adding the tf.keras.callbacks.TensorBoard callback ensures that logs are created and stored. Additionally, enable histogram computation every epoch with histogram_freq=1 (this is off by default)
Place the logs in a timestamped subdirectory to allow easy selection of different training runs.
End of explanation
"""
%tensorboard --logdir logs/fit
"""
Explanation: Start TensorBoard through the command line or within a notebook experience. The two interfaces are generally the same. In notebooks, use the %tensorboard line magic. On the command line, run the same command without "%".
End of explanation
"""
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
train_dataset = train_dataset.shuffle(60000).batch(64)
test_dataset = test_dataset.batch(64)
"""
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/quickstart_model_fit.png?raw=1"/> -->
A brief overview of the dashboards shown (tabs in top navigation bar):
The Scalars dashboard shows how the loss and metrics change with every epoch. You can use it to also track training speed, learning rate, and other scalar values.
The Graphs dashboard helps you visualize your model. In this case, the Keras graph of layers is shown which can help you ensure it is built correctly.
The Distributions and Histograms dashboards show the distribution of a Tensor over time. This can be useful to visualize weights and biases and verify that they are changing in an expected way.
Additional TensorBoard plugins are automatically enabled when you log other types of data. For example, the Keras TensorBoard callback lets you log images and embeddings as well. You can see what other plugins are available in TensorBoard by clicking on the "inactive" dropdown towards the top right.
Using TensorBoard with other methods
When training with methods such as tf.GradientTape(), use tf.summary to log the required information.
Use the same dataset as above, but convert it to tf.data.Dataset to take advantage of batching capabilities:
End of explanation
"""
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
"""
Explanation: The training code follows the advanced quickstart tutorial, but shows how to log metrics to TensorBoard. Choose loss and optimizer:
End of explanation
"""
# Define our metrics
train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32)
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('train_accuracy')
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('test_accuracy')
"""
Explanation: Create stateful metrics that can be used to accumulate values during training and logged at any point:
End of explanation
"""
def train_step(model, optimizer, x_train, y_train):
with tf.GradientTape() as tape:
predictions = model(x_train, training=True)
loss = loss_object(y_train, predictions)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_loss(loss)
train_accuracy(y_train, predictions)
def test_step(model, x_test, y_test):
predictions = model(x_test)
loss = loss_object(y_test, predictions)
test_loss(loss)
test_accuracy(y_test, predictions)
"""
Explanation: Define the training and test functions:
End of explanation
"""
current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_log_dir = 'logs/gradient_tape/' + current_time + '/train'
test_log_dir = 'logs/gradient_tape/' + current_time + '/test'
train_summary_writer = tf.summary.create_file_writer(train_log_dir)
test_summary_writer = tf.summary.create_file_writer(test_log_dir)
"""
Explanation: Set up summary writers to write the summaries to disk in a different logs directory:
End of explanation
"""
model = create_model() # reset our model
EPOCHS = 5
for epoch in range(EPOCHS):
for (x_train, y_train) in train_dataset:
train_step(model, optimizer, x_train, y_train)
with train_summary_writer.as_default():
tf.summary.scalar('loss', train_loss.result(), step=epoch)
tf.summary.scalar('accuracy', train_accuracy.result(), step=epoch)
for (x_test, y_test) in test_dataset:
test_step(model, x_test, y_test)
with test_summary_writer.as_default():
tf.summary.scalar('loss', test_loss.result(), step=epoch)
tf.summary.scalar('accuracy', test_accuracy.result(), step=epoch)
template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
print (template.format(epoch+1,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100))
# Reset metrics every epoch
train_loss.reset_states()
test_loss.reset_states()
train_accuracy.reset_states()
test_accuracy.reset_states()
"""
Explanation: Start training. Use tf.summary.scalar() to log metrics (loss and accuracy) during training/testing within the scope of the summary writers to write the summaries to disk. You have control over which metrics to log and how often to do it. Other tf.summary functions enable logging other types of data.
End of explanation
"""
%tensorboard --logdir logs/gradient_tape
"""
Explanation: Open TensorBoard again, this time pointing it at the new log directory. We could have also started TensorBoard to monitor training while it progresses.
End of explanation
"""
!tensorboard dev upload \
--logdir logs/fit \
--name "(optional) My latest experiment" \
--description "(optional) Simple comparison of several hyperparameters" \
--one_shot
"""
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/quickstart_gradient_tape.png?raw=1"/> -->
That's it! You have now seen how to use TensorBoard both through the Keras callback and through tf.summary for more custom scenarios.
TensorBoard.dev: Host and share your ML experiment results
TensorBoard.dev is a free public service that enables you to upload your TensorBoard logs and get a permalink that can be shared with everyone in academic papers, blog posts, social media, etc. This can enable better reproducibility and collaboration.
To use TensorBoard.dev, run the following command:
End of explanation
"""
littlewizardLI/Udacity-ML-nanodegrees | Project0-titanic_survival_exploration/titanic_survival_exploration.ipynb | apache-2.0 |
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
# 数据可视化代码
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
# 加载数据集
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
# 显示数据列表中的前几项乘客数据
display(full_data.head())
"""
Explanation: 机器学习工程师纳米学位
入门
项目 0: 预测泰坦尼克号乘客生还率
1912年,泰坦尼克号在第一次航行中就与冰山相撞沉没,导致了大部分乘客和船员身亡。在这个入门项目中,我们将探索部分泰坦尼克号旅客名单,来确定哪些特征可以最好地预测一个人是否会生还。为了完成这个项目,你将需要实现几个基于条件的预测并回答下面的问题。我们将根据代码的完成度和对问题的解答来对你提交的项目的进行评估。
提示:这样的文字将会指导你如何使用 iPython Notebook 来完成项目。
点击这里查看本文件的英文版本。
开始
当我们开始处理泰坦尼克号乘客数据时,会先导入我们需要的功能模块以及将数据加载到 pandas DataFrame。运行下面区域中的代码加载数据,并使用 .head() 函数显示前几项乘客数据。
提示:你可以通过单击代码区域,然后使用键盘快捷键 Shift+Enter 或 Shift+ Return 来运行代码。或者在选择代码后使用播放(run cell)按钮执行代码。像这样的 MarkDown 文本可以通过双击编辑,并使用这些相同的快捷键保存。Markdown 允许你编写易读的纯文本并且可以转换为 HTML。
End of explanation
"""
# Store the 'Survived' feature in a new variable and remove it from the dataset
# 从数据集中移除 'Survived' 这个特征,并将它存储在一个新的变量中。
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
# 显示已移除 'Survived' 特征的数据集
display(data.head())
"""
Explanation: 从泰坦尼克号的数据样本中,我们可以看到船上每位旅客的特征
Survived:是否存活(0代表否,1代表是)
Pclass:社会阶级(1代表上层阶级,2代表中层阶级,3代表底层阶级)
Name:船上乘客的名字
Sex:船上乘客的性别
Age:船上乘客的年龄(可能存在 NaN)
SibSp:乘客在船上的兄弟姐妹和配偶的数量
Parch:乘客在船上的父母以及小孩的数量
Ticket:乘客船票的编号
Fare:乘客为船票支付的费用
Cabin:乘客所在船舱的编号(可能存在 NaN)
Embarked:乘客上船的港口(C 代表从 Cherbourg 登船,Q 代表从 Queenstown 登船,S 代表从 Southampton 登船)
因为我们感兴趣的是每个乘客或船员是否在事故中活了下来。可以将 Survived 这一特征从这个数据集移除,并且用一个单独的变量 outcomes 来存储。它也做为我们要预测的目标。
运行该代码,从数据集中移除 Survived 这个特征,并将它存储在变量 outcomes 中。
End of explanation
"""
def accuracy_score(truth, pred):
""" Returns accuracy score for input truth and predictions. """
# Ensure that the number of predictions matches number of outcomes
# 确保预测的数量与结果的数量一致
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
# 计算预测准确率(百分比)
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
# 测试 'accuracy_score' 函数
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
"""
Explanation: 这个例子展示了如何将泰坦尼克号的 Survived 数据从 DataFrame 移除。注意到 data(乘客数据)和 outcomes (是否存活)现在已经匹配好。这意味着对于任何乘客的 data.loc[i] 都有对应的存活的结果 outcome[i]。
为了验证我们预测的结果,我们需要一个标准来给我们的预测打分。因为我们最感兴趣的是我们预测的准确率,既正确预测乘客存活的比例。运行下面的代码来创建我们的 accuracy_score 函数以对前五名乘客的预测来做测试。
思考题:从第六个乘客算起,如果我们预测他们全部都存活,你觉得我们预测的准确率是多少?
End of explanation
"""
def predictions_0(data):
""" Model with no features. Always predicts a passenger did not survive. """
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
# 预测 'passenger' 的生还率
predictions.append(0)
# Return our predictions
# 返回预测结果
return pd.Series(predictions)
# Make the predictions
# 进行预测
predictions = predictions_0(data)
"""
Explanation: 提示:如果你保存 iPython Notebook,代码运行的输出也将被保存。但是,一旦你重新打开项目,你的工作区将会被重置。请确保每次都从上次离开的地方运行代码来重新生成变量和函数。
预测
如果我们要预测泰坦尼克号上的乘客是否存活,但是我们又对他们一无所知,那么最好的预测就是船上的人无一幸免。这是因为,我们可以假定当船沉没的时候大多数乘客都遇难了。下面的 predictions_0 函数就预测船上的乘客全部遇难。
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: 问题1
对比真实的泰坦尼克号的数据,如果我们做一个所有乘客都没有存活的预测,你认为这个预测的准确率能达到多少?
提示:运行下面的代码来查看预测的准确率。
End of explanation
"""
survival_stats(data, outcomes, 'Sex')
"""
Explanation: 回答: Predictions have an accuracy of 61.62%
我们可以使用 survival_stats 函数来看看 Sex 这一特征对乘客的存活率有多大影响。这个函数定义在名为 titanic_visualizations.py 的 Python 脚本文件中,我们的项目提供了这个文件。传递给函数的前两个参数分别是泰坦尼克号的乘客数据和乘客的 生还结果。第三个参数表明我们会依据哪个特征来绘制图形。
运行下面的代码绘制出依据乘客性别计算存活率的柱形图。
End of explanation
"""
def predictions_1(data):
""" Model with one feature:
- Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# 移除下方的 'pass' 声明
# and write your prediction conditions here
# 输入你自己的预测条件
if passenger['Sex'] == 'female':
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
# 返回预测结果
return pd.Series(predictions)
# Make the predictions
# 进行预测
predictions = predictions_1(data)
"""
Explanation: 观察泰坦尼克号上乘客存活的数据统计,我们可以发现大部分男性乘客在船沉没的时候都遇难了。相反的,大部分女性乘客都在事故中生还。让我们在先前推断的基础上继续创建:如果乘客是男性,那么我们就预测他们遇难;如果乘客是女性,那么我们预测他们在事故中活了下来。
将下面的代码补充完整,让函数可以进行正确预测。
提示:您可以用访问 dictionary(字典)的方法来访问船上乘客的每个特征对应的值。例如, passenger['Sex'] 返回乘客的性别。
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: 问题2
当我们预测船上女性乘客全部存活,而剩下的人全部遇难,那么我们预测的准确率会达到多少?
提示:运行下面的代码来查看我们预测的准确率。
End of explanation
"""
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
"""
Explanation: 回答: Predictions have an accuracy of 78.68%.
仅仅使用乘客性别(Sex)这一特征,我们预测的准确性就有了明显的提高。现在再看一下使用额外的特征能否更进一步提升我们的预测准确度。例如,综合考虑所有在泰坦尼克号上的男性乘客:我们是否找到这些乘客中的一个子集,他们的存活概率较高。让我们再次使用 survival_stats 函数来看看每位男性乘客的年龄(Age)。这一次,我们将使用第四个参数来限定柱形图中只有男性乘客。
运行下面这段代码,把男性基于年龄的生存结果绘制出来。
End of explanation
"""
def predictions_2(data):
""" Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# 移除下方的 'pass' 声明
# and write your prediction conditions here
# 输入你自己的预测条件
if passenger['Sex'] == 'female':
predictions.append(1)
elif passenger['Age'] < 10:
predictions.append(1)
else :
predictions.append(0)
# Return our predictions
# 返回预测结果
return pd.Series(predictions)
# Make the predictions
# 进行预测
predictions = predictions_2(data)
"""
Explanation: 仔细观察泰坦尼克号存活的数据统计,在船沉没的时候,大部分小于10岁的男孩都活着,而大多数10岁以上的男性都随着船的沉没而遇难。让我们继续在先前预测的基础上构建:如果乘客是女性,那么我们就预测她们全部存活;如果乘客是男性并且小于10岁,我们也会预测他们全部存活;所有其它我们就预测他们都没有幸存。
将下面缺失的代码补充完整,让我们的函数可以实现预测。
提示: 您可以用之前 predictions_1 的代码作为开始来修改代码,实现新的预测函数。
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: 问题3
当预测所有女性以及小于10岁的男性都存活的时候,预测的准确率会达到多少?
提示:运行下面的代码来查看预测的准确率。
End of explanation
"""
survival_stats(data, outcomes, 'Pclass')
"""
Explanation: 回答: Predictions have an accuracy of 79.35%.
添加年龄(Age)特征与性别(Sex)的结合比单独使用性别(Sex)也提高了不少准确度。现在该你来做预测了:找到一系列的特征和条件来对数据进行划分,使得预测结果提高到80%以上。这可能需要多个特性和多个层次的条件语句才会成功。你可以在不同的条件下多次使用相同的特征。Pclass,Sex,Age,SibSp 和 Parch 是建议尝试使用的特征。
使用 survival_stats 函数来观测泰坦尼克号上乘客存活的数据统计。
提示: 要使用多个过滤条件,把每一个条件放在一个列表里作为最后一个参数传递进去。例如: ["Sex == 'male'", "Age < 18"]
End of explanation
"""
def predictions_3(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'female':
if passenger['Pclass'] == 3 and passenger['Age'] > 40:
predictions.append(0)
else:
predictions.append(1)
elif passenger['Age'] < 10:
predictions.append(1)
elif passenger['Pclass'] < 2 and passenger['Age'] < 18:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
"""
Explanation: 当查看和研究了图形化的泰坦尼克号上乘客的数据统计后,请补全下面这段代码中缺失的部分,使得函数可以返回你的预测。
在到达最终的预测模型前请确保记录你尝试过的各种特征和条件。
提示: 您可以用之前 predictions_2 的代码作为开始来修改代码,实现新的预测函数。
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: 结论
请描述你实现80%准确度的预测模型所经历的步骤。您观察过哪些特征?某些特性是否比其他特征更有帮助?你用了什么条件来预测生还结果?你最终的预测的准确率是多少?
提示:运行下面的代码来查看你的预测准确度。
End of explanation
"""
cabreraj/sjcc_sacnas | Homework3/hw3.ipynb | mit |
!python --version
"""
Explanation: CIS024C - Fall 2017 - Thursday 5:30-9:25pm
Homework 3
Homework 3 covers exercises in String Manipulation.
For a list of features supported in the string module, please refer to this URL https://docs.python.org/2/library/string.html
You will need to download this notebook and use this as a starting point for your homework. You will just need to fill in the content of each code-block (cell) and execute. Once you have completed all the exercises, you will need to save and upload this to your github repository under a folder called hw3.
Note also that the exercises build on top of one another, so you might not be able to do the next exercise if you have not completed the previous exercise.
Post any questions you have on our Slack at cis-024c1.slack.com
<h3><font color='red'>
ALL THE WORK THAT WE DID IN CLASS DURING WEEK 3 IS NOW IN GITHUB AT THE BELOW LINK
</font></h3>
https://github.com/cis024c/fall2017classwork/blob/master/week3/week3_classwork.ipynb
Slides for Week 3 can be found at https://docs.google.com/presentation/d/16z1-Ln71MiXMRfgnZJB60mnXWnwCiUn0gHURe9KMMWg/edit?usp=sharing
Please refer back to hw1 and slack for instructions on how to setup your computer for developing using Python.
Helpful Jupyter Commands
Below are some useful commands to know when using Jupyter
You can add a new cell by clicking on the "+" icon on top.
You can delete a cell by selecting that cell and clicking on the "scissors" icon on top.
You can execute a cell by either pressing shift+enter or selecting the "play" button on top.
You can create a new file in Jupyter via the File menu->New Notebook option. Make sure to select Python 2 when creating your notebook.
Also, for your code blocks make sure that Code is selected instead of another option like Markdown.
Use the Enter key to go to the next line in a cell to enter the next statement.
You can clear results by clicking on the Cell menu item and selecting Current Output->Clear or All Output->Clear depending on whether you are trying to just clear the output for one cell or for all cells.
In case your program has crashed for some reason (infinite loop, for example), you can restart your Python session by select Kernel in the menu and selecting Restart.
Check Python Version
End of explanation
"""
### YOUR CODE GOES
harryAge = int(raw_input("Enter Harry's Age: "))
sallyAge = int(raw_input("Enter Sally's Age: "))
maryAge = int(raw_input("Enter Mary's Age: "))
if harryAge < 20 and sallyAge < 20:
print "Harry and Sally are less than 20 years old"
else:
if sallyAge > 30 or maryAge > 30:
print "Either Sally or Mary is older than 30"
### END CODE
"""
Explanation: Sample Exercises with conditionals and repetitions
Refer to Week 2 classwork 2 for sample exercises - https://github.com/cis024c/fall2017classwork/blob/master/week2/week2.ipynb
Exercise 1 - Using logical operators - and, or and not
Get the ages of three persons Harry, Sally and Mary from the user. Check the below conditions and display the results
If Harry and Sally are both less than 20 years old, display the message saying "Harry and Sally are less than 20 years old"
If either Sally or Mary is older than 30, then display the message saying "Either Sally or Mary is older than 30"
Remember that to do this you will need to use different variables to store the respective ages and then evaluate those ages using the if statement and logical operators.
End of explanation
"""
### YOUR CODE GOES BELOW
userName = raw_input("Enter your First Name: ")
lengthString = len(userName)
print lengthString
### END CODE
"""
Explanation: Exercise 2 - Find the length of a given string
Ask the user to enter their first name. Compute the number of characters in the first name and print the result.
Note that you will need to use the len function to obtain the number of characters in the string.
End of explanation
"""
### YOUR CODE GOES BELOW
favMovie = raw_input("Enter the name of your favorite movie: ")
print favMovie
reversedStr = favMovie[::-1]
print reversedStr
### END CODE
"""
Explanation: Exercise 3 - Reversing a String
Ask the user to enter the name of their favorite movie. Reverse the name of the movie and print it out.
End of explanation
"""
### YOUR CODE GOES BELOW
stringName = raw_input("Enter string:")
searchString = raw_input("Enter a search string: ")
stringNameLower = stringName.lower()
print "This the user string in lowercase:", stringNameLower
stringLen = len(stringNameLower)
searchStringLen = len(searchString)
found = False
for index in range(0,stringLen-searchStringLen+1):
#print("substring:",stringName[index:index+searchStringLen])
subString = stringNameLower[index:index+searchStringLen]
if searchString == subString:
print "We have found our searchString:",searchString
found = True
break
if found == False:
print "Sorry, could not find our searchString:",searchString
### END CODE
"""
Explanation: Exercise 4 - Converting an input string to lower case and looking for a match
Ask the user to enter a line of text and a search string. Convert the line of text that the user entered to lower case. Search the resulting text for the search string. Print "Search String Found" if the search string was found, otherwise, print "Search String not found"
For example, the user could enter "Jack and Jill went up the Hill" and the search string "jill". You first need to convert the input string to lower case like so - "jack and jill went up the hill".
Next you will need to look for the search string in the input string. You can use the "if searchString in text" form of query to determine if the text contains the search string. See week 3 classowork for an example
End of explanation
"""
### YOUR CODE GOES BELOW
userList = raw_input("Enter your grocery list: ")
myList = userList.split(",")
print type(myList)  # split returns a list
lengthList = len(myList)
print myList[-1]
#print myList
### END CODE
"""
Explanation: Exercise 5 - Parsing a comma separated set of values
Ask the user to type in a grocery list. Ensure that each item in the grocery list is separated by a comma. Use the split command to extract each token (item) in the grocery list. Print the last item in the list.
For example, if the user enters "milk,bananas,sugar,eggs,cheese", you will need to read this into a variable, parse the contents using the split command and print "cheese"
End of explanation
"""
### YOUR CODE GOES BELOW
### END CODE
"""
Explanation: OPTIONAL EXERCISES
Below is a set of optional exercises. These will not be graded but the solutions will be posted. I would strongly encourage you to try these out if you are done with the mandatory homework exercises to improve your understanding of python.
Exercise 6
Ask the user to type in a grocery list and a search item. Ensure that each item in the grocery list is separated by a comma and an arbitrary number of spaces. Use the split command to search for the search item in this list.
For example, let us say that the user enters ""milk , bananas, sugar, eggs, cheese " (notice the arbitrary spaces between items) and the search term is "eggs". You will need to look for "eggs" in the grocery list and if found, print the message "Item found", otherwise, print "Item not found"
End of explanation
"""
### YOUR CODE GOES BELOW
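# One possible sketch for the optional "shortest word" exercise described below
# (hedged -- not an official solution): split the input on whitespace and keep the
# first word of minimal length.
words = raw_input("Enter a list of words: ").split()
shortest = words[0]
for word in words:
    if len(word) < len(shortest):
        shortest = word
print "Shortest word:", shortest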
### END CODE
"""
Explanation: Exercise 7
Write a python program that takes in a list of words from the user and prints the shortest word in the list. If two words are equal short, just pick the first one that you see.
End of explanation
"""
### YOUR CODE GOES BELOW
### END CODE
"""
Explanation: Exercise 8
Accept a line of text from the user with some repeating words. Ask the user to enter a search term (one of the repeating words). Count the number of times the search term repeats in the text.
For example, if the sentence is - "She sells sea shells on the sea shore" and the search term is "sea", then the program should print the result 2, indicating that two occurrences of the word "sea" were found in the text
End of explanation
"""
### YOUR CODE GOES BELOW
### END CODE
"""
Explanation: Exercise 9
Write a python program to get text from the user. Create a new text from the original text with the word " stranger " inserted in the middle of the text. Print the resulting new text.
End of explanation
"""
### YOUR CODE GOES BELOW
### END CODE
"""
Explanation: Exercise 10
Write a python program to get a line of text from the user. Sort each word in the text alphabetically and print it out.
For example, if the user enters "Jack and Jill went up the hill", the result should be "and Jack Jill hill the up went"
End of explanation
"""
zenotech/zPost | ipynb/CYLINDER/CYLINDER.ipynb | bsd-3-clause |
remote_data = True
remote_server_auto = True
case_name = 'cylinder'
data_dir='/gpfs/thirdparty/zenotech/home/dstandingford/VALIDATION/CYLINDER'
data_host='dstandingford@vis03'
paraview_cmd='mpiexec /gpfs/cfms/apps/zCFD/bin/pvserver'
if not remote_server_auto:
paraview_cmd=None
if not remote_data:
data_host='localhost'
paraview_cmd=None
"""
Explanation: 2D Cylinder
Overview
The periodic shedding of laminar flow over a 2D cylinder at a Reynolds Number of 150 can be used to verify the time accuracy of the solver. For this case the shedding frequency is measured by monitoring the variation in pressure downstream of the cylinder and away from the centre of the wake so that it is not affected by the vortex shed by the opposite side.
References
http://www.grc.nasa.gov/WWW/wind/valid/lamcyl/Study1_files/Study1.html
Define Data Location
For remote data the interaction will use ssh to securely interact with the data
This uses the reverse connection capability in paraview so that the paraview server can be submitted to a job scheduler
Note: The default paraview server connection will use port 11111
End of explanation
"""
# Validation criteria setup for cylinder - note that the timestep (dt=0.002) chosen is just
# small enough to capture the correct shedding frequency. A smaller timestep (dt=0.001) gives a more
# accurate output.
validate = True
regression = True
if (validate):
valid = True
valid_lower_strouhal = 0.1790
valid_upper_strouhal = 0.1820
print 'VALIDATING CYLINDER CASE'
if (regression):
print 'REGRESSION CYLINDER CASE'
"""
Explanation: zCFD Validation and Regression
End of explanation
"""
%pylab inline
from paraview.simple import *
paraview.simple._DisableFirstRenderCameraReset()
import pylab as pl
import math
"""
Explanation: Initialise Environment
End of explanation
"""
from zutil.post import pvserver_connect
if remote_data:
pvserver_connect(data_host=data_host,data_dir=data_dir,paraview_cmd=paraview_cmd)
"""
Explanation: Data Connection
This starts paraview server on remote host and connects
End of explanation
"""
from zutil.post import get_case_parameters,print_html_parameters
parameters=get_case_parameters(case_name,data_host=data_host,data_dir=data_dir)
"""
Explanation: Get control dictionary¶
End of explanation
"""
from zutil.post import get_status_dict
status=get_status_dict(case_name,data_host=data_host,data_dir=data_dir)
num_procs = str(status['num processor'])
"""
Explanation: Get status file
End of explanation
"""
# print parameters
from IPython.display import HTML
HTML(print_html_parameters(parameters))
diameter = 1.0
time_step = parameters['time marching']['unsteady']['time step']
cycles = parameters['time marching']['cycles']
mach = parameters['IC_1']['V']['Mach']
print 'mach = %.2f'%(mach)
kappa = 1.402
print 'kappa = %.3f'%(kappa)
R = 287.058
print 'R = %.3f'%(R)
temperature = parameters['IC_1']['temperature']
print 'temperature = %.2f'%(temperature) + ' Kelvin'
pressure = parameters['IC_1']['pressure']
print 'pressure = %.2f'%(pressure) + ' Pascals'
density = pressure/(R*temperature)
print 'density = %.2f'%(density) + ' kg/m^3'
speed_of_sound = sqrt(kappa*pressure/density)
print 'speed_of_sound = %.2f'%(speed_of_sound) + ' m/s'
u_ref = mach*speed_of_sound
print 'u_ref = %.2f'%(u_ref) + ' m/s'
"""
Explanation: Define test conditions
End of explanation
"""
from zutil.post import get_case_root, get_case_report, get_monitor_data
monitor_data = get_monitor_data(get_case_report(case_name),'probe','cp')
# clean up the probe history - remove the pseudo-timestep data
probe_data_x = []
probe_data_y = []
for i in range(0,len(monitor_data[0])):
if ((float(monitor_data[0][i])/float(cycles)) == int(monitor_data[0][i]/cycles)):
probe_data_x.append(float(monitor_data[0][i])*float(time_step)/float(cycles))
probe_data_y.append(float(monitor_data[1][i]))
# Find local maxima after 1 second
maxima_x = []
maxima_y = []
time_start = 1.0
for i in range(1,len(probe_data_x)-1):
time = probe_data_x[i]
if (time > time_start):
val_im1 = probe_data_y[i-1]
val_i = probe_data_y[i]
val_ip1 = probe_data_y[i+1]
if ((val_i > val_im1) and (val_i > val_ip1)):
maxima_x.append(probe_data_x[i])
maxima_y.append(probe_data_y[i])
# Calculate the Strouhal number
num_periods = len(maxima_x)-1
if (num_periods > 1):
frequency = num_periods/(maxima_x[len(maxima_x)-1]-maxima_x[0])
strouhal = frequency*diameter/u_ref
else:
print 'INSUFFICIENT NUMBER OF PERIODS'
strouhal = -100.0
if (validate):
valid = False
fig = pl.figure(figsize=(12, 8), dpi=150, facecolor='w', edgecolor='#E48B25')
fig.suptitle('2D Laminar Cylinder - Strouhal Number = ' + '%.4f'%strouhal,
fontsize=24, fontweight='normal', color = '#E48B25')
ax = fig.add_subplot(1,1,1)
ax.grid(True)
ax.set_xlabel('Time (seconds)', fontsize=18, fontweight='normal', color = '#5D5858')
ax.set_ylabel(r'$\mathbf{C_P}$' + ' at [1.07, 0.313]', fontsize=18, fontweight='normal', color = '#5D5858')
ax.set_xlim((0.0,2.0))
ax.set_ylim((-1.5,0.0))
ax.plot(probe_data_x, probe_data_y, color='r', label='Probe at [1.07, 0.313]')
ax.scatter(maxima_x, maxima_y, color='g', label='Local maxima ' + '(t > %.1f seconds)'%time_start)
legend = ax.legend(loc='best', scatterpoints=1, numpoints=1, shadow=False, fontsize=16)
legend.get_frame().set_facecolor('white')
ax.tick_params(axis='x', pad=8)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(18)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(18)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
fig.savefig("images/cylinder_probe.png")
show()
from IPython.display import FileLink, display
display(FileLink('images/cylinder_probe.png'))
"""
Explanation: Plot pressure time-history at probe point
End of explanation
"""
from zutil.post import residual_plot, get_case_report
residual_plot(get_case_report(case_name))
show()
"""
Explanation: Convergence
End of explanation
"""
# define function to help with validation check
def validate_data(name, value, valid_lower, valid_upper):
if ((value < valid_lower) or (value > valid_upper)):
print 'INVALID: ' + name + ' %.4f '%valid_lower + '%.4f '%value + ' %.4f'%valid_upper
return False
else:
return True
if (validate):
valid = valid and validate_data('strouhal', strouhal, valid_lower_strouhal, valid_upper_strouhal)
if (valid):
print 'VALIDATION = PASS :-)'
else:
print 'VALIDATION = FAIL :-('
if (regression):
import pandas as pd
pd.options.display.float_format = '{:,.6f}'.format
print 'REGRESSION DATA'
regress = {'version' : ['dt=0.001', 'dt=0.002', 'dt=0.005', 'v0.1 (dt=0.001)', 'v0.1 (dt=0.002)', 'CURRENT (dt=%.3f)'%time_step],
'Strouhal': [0.179974 , 0.179189 , 0.149542, 0.179974 , 0.179189, strouhal]}
regression_table = pd.DataFrame(regress, columns=['version','Strouhal'])
print regression_table
"""
Explanation: Validation and regression
End of explanation
"""
if remote_data:
print 'Disconnecting from remote paraview server connection'
Disconnect()
"""
Explanation: Cleaning up
End of explanation
"""
robertoalotufo/ia898 | deliver/tutorial-numpy.ipynb | mit |
import numpy as np
a = np.array( [2,3,4,-1,-2] )
print('Dimensões: a.shape=', a.shape )
print('Tipo dos elementos: a.dtype=', a.dtype )
print('Imprimindo o array completo:\n a=',a )
"""
Explanation: Introdução ao NumPy
O tipo ndarray
O tipo ndarray, ou apenas array é um arranjo de itens homogêneos de dimensionalidade N, indexados por uma tupla de N inteiros. Existem 3 informações essenciais associadas ao ndarray: o tipo dos dados, suas dimensões e seus dados em si. A propriedade dtype permite saber o tipo de dados
enquanto que shape é uma tupla que indica o tamanho de cada dimensão do arranjo. O acesso aos dados em si deve ser feito por indexação, por fatiamento ou pela variável em si.
Existem várias maneiras de criar uma variável do tipo ndarray.
Por exemplo, é possível criar uma a partir de uma lista (1D) ou lista de listas usando a função array.
O tipo de matriz resultante é deduzida a partir do tipo de elementos nas sequências.
Veja a seguir um vetor de inteiros com 5 elementos. Note que o vetor é uma linha com 5 colunas. Observe também que o shape é uma tupla de um único elemento (veja a vírgula que aparece por ser uma tupla).
End of explanation
"""
b = np.array( [ [1.5, 2.3, 5.2],
[4.2, 5.6, 4.4] ] )
print('Um array bidimensional, dimensões: b.shape=', b.shape )
print('Tipo dos elementos: b.dtype', b.dtype )
print('Número de colunas:', b.shape[-1] )
print('Número de linhas:', b.shape[-2] )
print('Elementos, b=\n', b )
"""
Explanation: Veja a seguir uma matriz bidimensional de dados ponto flutuante de 2 linhas e 3 colunas. Observe que a tupla do shape aumenta para a esquerda,
isto é, se eu tiver um vetor de 3 elementos, o seu shape é (3,) e se uma nova dimensão for adicionada, por exemplo 2 linhas e 3 colunas, o
shape passa a ser (2,3). O que antes shape[0] no vetor unidimensional era colunas, já na matriz bidimensional shape[0] passou a ser o número
de linhas.
Assim o último elemento da tupla do shape indica o número de colunas, a penúltima o número de linhas. Assim se quisermos sempre buscar
o número de colunas, independentemente do número de dimensões, shape[-1] informa sempre o número de colunas, shape[-2], o número de linhas.
End of explanation
"""
d = np.zeros( (2,4) )
print('Array de 0s: \n', d )
d = np.ones( (3,2,5), dtype='int16' )
print('\n\nArray de 1s: \n', d )
d = np.empty( (2,3), 'bool' )
print('Array não inicializado (vazio):\n', d )
"""
Explanation: Manipulação de arrays
Criando arrays inicializados
É possível criar arrays de zeros, uns ou valores não inicializados usando as funções zeros, ones ou empty. As dimensões
do array são obrigatórias e é dado por uma tupla e o tipo dos elementos é opcional, sendo que o default é tipo float.
O código a seguir cria 3 arrays. O primeiro possui 2 linhas e 4 colunas. O segundo tem 3 dimensões: 3 fatias (ou imagens) onde cada
uma tem 2 linhas e 5 colunas. Por último, é criado uma matriz booleana (True e False) de valores não inicializados
de 2 linhas e 3 colunas. A vantagem do empty é que ele é mais rápido que o zeros ou ones. No caso abaixo, os valores
mostrados na matrix criada pelo empty são aleatórios.
End of explanation
"""
print('np.arange( 10) = ', np.arange(10) )
print('np.arange( 3, 8) = ', np.arange(3,8) )
print('np.arange( 0, 2, 0.5) = ', np.arange(0, 2, 0.5) )
print('np.linspace( 0, 2, 5 ) = ', np.linspace( 0, 2, 5 ) )
"""
Explanation: Note que o Numpy permite arrays n-dimensionais. Em imagens em níveis de cinza iremos trabalhar com matrizes bidimensionais mas
se a imagem for colorida, iremos representá-la em 3 canais, R, G e B, representados na estrutura com 3 dimensões.
Se for um vídeo, isto é, uma sequência de imagens, teremos que adicionar mais uma dimensão.
Se for uma tomografia, também podemos representar em 3 dimensões: largura, altura e profundidade.
Criando arrays com valores sequenciais
As funções arange e linspace geram um vetor sequencial. Eles diferem apenas nos parâmetros. Enquanto o arange gera uma sequência a partir dos valores inicial (includente e opcional), final( excludente) e passo (opcional), linspace gera uma sequência com valores inicial e final e número de elementos. Note as diferenças nos exemplos a seguir:
End of explanation
"""
a = np.arange(20) # a é um vetor de dimensão 20
print('a = \n', a )
"""
Explanation: Veja que no último caso, usando o linspace, a sequência começa em 0 e termina em 2 e deve possuir 5 elementos. Veja que
para isto o passo a ser utilizado será 0.5, calculado automaticamente. Já no exemplo anterior, a sequência começa em 0 e deve terminar antes de 2 e o passo é 0.5.
Fatiamento em narray unidimensional
Um recurso importante do numpy é o fatiamento no qual é possível acessar
um subconjunto do array de diversas formas. O fatiamento define os índices
onde o array será acessado definindo o ponto inicial, final e o passo entre
eles, nesta ordem: [inicial:final:passo].
Inicializando um array unidimensional
End of explanation
"""
a = np.arange(20)
print('Resultado da operação a[1:15:2]' )
print(a[1:15:2] )
"""
Explanation: Exemplo simples de fatiamento
Para a realização do fatiamento são utilizados 3 parâmetros, colocados no local do índice do array.
Os 3 parâmetros são separados por dois pontos ":". Todos os 3 parâmetros podem ser opcionais que ocorrem
quando o valor inicial é 0, o valor final é o tamanho do array e o passo é 1. Lembrar que a ordem deles
é: [inicial:final:passo]. Se o passo for 1 fica: [inicial:final]. Se o início for 0 fica: [:final] e se
o final for o último fica: [inicio:] e se forem todos [:].
O fatiamento é feito começando pelo primeiro valor, adicionando-se o passo até antes do último valor. Três
aspectos são extremamente importantes de serem lembrados: O índice inicial começa em zero, o índice final
nunca é atingido, o último índice utilizado é sempre o imediatamente anterior e o Numpy admite índices negativos, que
é uma indexação do último (-1) até o primeiro elemento (-W).
Os exemplos a seguir ajudam a fixar estes conceitos.
O código abaixo acessa os elementos ímpares começando de 1 até 14:
End of explanation
"""
a = np.arange(20)
print('Resultado da operação a[1:-1:2]' )
print(a[1:-1:2] )
print('Note que o fatiamento termina antes do último elemento (-1)' )
"""
Explanation: Exemplo de fatiamento com indices negativos
Acessando o último elemento com índice negativo
O código abaixo acessa os elementos ímpares até antes do último elemento:
End of explanation
"""
a = np.arange(20)
print('Resultado da operação a[-3:2:-1]' )
print(a[-3:2:-1] )
print( 'Note que o fatiamento retorna o array invertido' )
print( 'Antepenúltimo até o terceiro elemento com step = -1' )
"""
Explanation: Inversão do array com step negativo (step = -1)
End of explanation
"""
a = np.arange(20)
print('Resultado da operação a[:15:2]' )
print(a[:15:2] )
print('Note que o fatiamento inicia do primeiro elemento' )
print('Primeiro elemento até antes do 15o com passo duplo' )
"""
Explanation: Fatiamento avançado
É possível realizar o fatiamento utilizando os 3 parâmetros explícitos
( o limite inferior, limite superior e o step), ou podemos suprimir algum
desses parâmetros. Nestes casos a função toma o valor defaut: limite
inferior = primeiro elemento, limite superior = último elemento e step = 1.
|Proposta inicial | Equivalente |
|---------------------|-------------|
|a[0:len(a):1] | a[:] |
|a[0:10:1] | a[:10] |
|a[0:10:2] | a[:10:2] |
|a[2:len(a):1] | a[2::] |
|a[2:len(a):2] | a[2::2] |
Supressão do indice limite inferior
Quando o índice do limite inferior é omitido, é subentendido que é 0:
End of explanation
"""
a = np.arange(20)
print('Resultado da operação a[1::2]' )
print(a[1::2] )
print('Note que o fatiamento termina último elemento' )
print('Primeiro elemento até o último com passo duplo' )
"""
Explanation: Supressão do indice limite superior
Quando o índice do limite superior é omitido, fica implícito
que é o último elemento:
End of explanation
"""
a = np.arange(20)
print('Resultado da operação a[1:15]' )
print(a[1:15] )
print('Note que o fatiamento tem step unitário' )
print('Primeiro elemento até antes do 15o com passo um' )
"""
Explanation: Supressão do indice do step
O índice do step é opcional e quando não é indicado, seu valor é 1:
End of explanation
"""
a = np.arange(20)
print('Resultado da operação a[:]' )
print(a[:] )
print('Todos os elementos com passo unitário' )
"""
Explanation: Todos os elementos com passo unitário
End of explanation
"""
a = np.arange(20) # a é um vetor unidimensional de 20 elementos
print('a = \n', a )
a = a.reshape(4,5) # a agora é um array 4x5 (4 linhas por 5 colunas)
print('a.reshape(4,5) = \n', a )
"""
Explanation: Fatiamento no ndarray bidimensional
Um recurso importante do numpy é o fatiamento no qual é possível acessar
partes do array de diversas formas, como pode ser visto abaixo:
Inicializando um array e mudando o seu shape
End of explanation
"""
print('A segunda linha do array: \n', a[1,:] ) # A segunda linha é o índice 1
print(' A primeira coluna do array: \n', a[:,0] ) # A primeira coluna corresponde ao índice 0
"""
Explanation: Fatiamento de linhas e colunas de um array
O operador : indica que todos os elementos naquela dimensão devem ser acessados.
End of explanation
"""
print('Acessando as linhas do array de 2 em 2 começando pelo índice 0: \n', a[0::2,:] )
print(' Acessando as linhas e colunas do array de 2 em 2 começando \
pela linha 0 e coluna 1: \n', a[0::2,1::2] )
"""
Explanation: Fatiamento de elementos específicos de um array
End of explanation
"""
b = a[-1:-3:-1,:]
print('Acesso as duas últimas linhas do array em ordem reversa, \
b = a[-1:-3:-1,:] = \n',a[-1:-3:-1,:] )
print('Acesso elemento na última linha e coluna do array, a[-1,-1] =', a[-1,-1] )
c = a[::-1,:]
print('Invertendo a ordem das linhas do array: c = a[::-1,:] = \n', a[::-1,:] )
"""
Explanation: Fatiamento com índices invertidos
End of explanation
"""
a = np.arange(6)
b = a
print("a =\n",a )
print("b =\n",b )
b.shape = (2,3) # mudança no shape de b,
print("\na shape =",a.shape ) # altera o shape de a
b[0,0] = -1 # mudança no conteúdo de b
print("a =\n",a ) # altera o conteudo de a
print("\nid de a = ",id(a) ) # id é um identificador único de objeto
print("id de b = ",id(b) ) # a e b possuem o mesmo id
print('np.may_share_memory(a,b):',np.may_share_memory(a,b) )
"""
Explanation: Copiando variáveis ndarray
O ndarray foi projetado para acesso otimizado a uma grande quantidade de dados. Neste sentido, os conceitos
descritos a seguir sobre as três formas de cópias entre variáveis ditas sem cópia, cópia rasa (shallow) e
cópia profunda (deep) são fundamentais para uma codificação eficiente. Podemos dizer que um ndarray possui
o cabeçalho que contém dados pelas informações sobre o tipo do elemento, a dimensionalidade (shape) e
passo ou deslocamento para o próximo elemento (strides) e os dados raster em si. A tabela
a seguir mostra a situação do cabeçalho e dos dados nos três tipos de cópias.
|Tipo | Cabeçalho:Type,Shape,Strides|Dados raster | Exemplo |
|---------------------|-----------------------------|------------------|------------|
|Sem cópia, apenas ref| apontador original |apontador original| a = b |
|Cópia rasa | novo |apontador original|b=a.reshape |
| | | |slicing, a.T|
|Cópia profunda | novo |novo |a = b.copy()|
Sem cópia explícita, apenas referência
No caso abaixo, usaremos o comando normal de igual como atribuição do array a para o array b.
Verifica-se que tanto o shape como os dados de b são os mesmos de a. Tudo se passa como b
fosse apenas um apontador para a. Qualquer modificação em b é refletida em a.
End of explanation
"""
def cc(a):
return a
b = cc(a)
print("id de a = ",id(a) )
print("id de b = ",id(b) )
print('np.may_share_memory(a,b):',np.may_share_memory(a,b) )
"""
Explanation: Observe que mesmo no retorno de uma função, a cópia explícita pode não acontecer. Veja o exemplo a
seguir de uma função que apenas retorna a variável de entrada:
End of explanation
"""
a = np.arange(30)
print("a =\n", a )
b = a.reshape( (5, 6))
print("b =\n", b )
b[:, 0] = -1
print("a =\n", a )
c = a.reshape( (2, 3, 5) )
print("c =\n", c )
print('c.base is a:',c.base is a )
print('np.may_share_memory(a,c):',np.may_share_memory(a,c) )
"""
Explanation: Cópia rasa
A cópia rasa é muito útil e extensivamente utilizada. É usada quando se quer indexar o array original
através da mudança de dimensionalidade ou do
refatiamento, porém sem a necessidade de realizar uma cópia dos dados raster. Desta forma consegue-se
uma otimização no acesso ao array n-dimensional. Existem várias formas onde a cópia rasa acontece,
sendo as principais:
1) no caso do reshape onde o número de elementos do ndarray é o mesmo, porém sua dimensionalidade
é alterada; 2) no caso de fatiamento onde um subarray é indexado; 3) no caso de transposição do array;
4) no caso de linearização do raster através do ravel().
entre outros.
Reshape
O exemplo a seguir mostra inicialmente a criação de um vetor unidimensional sequencial sendo "visto" de
forma bidimensional ou tridimensional.
End of explanation
"""
a = np.zeros( (5, 6))
print('%s %s %s %s %s' % (type(a), np.shape(a), a.dtype, a.min(), a.max()) )
b = a[::2,::2]
print('%s %s %s %s %s' % (type(b), np.shape(b), b.dtype, b.min(), b.max()) )
b[:,:] = 1.
print('b=\n', b )
print('a=\n', a )
print('b.base is a:',b.base is a )
print('np.may_share_memory(a,b):',np.may_share_memory(a,b) )
"""
Explanation: Slice - Fatiamento
O exemplo a seguir mostra a cópia rasa no uso de fatiamento. No exemplo, todos os elementos de linhas
e colunas pares são modificados para 1. CUIDADO: quando é feita a atribuição de b = 1., é importante
que b seja referenciado como ndarray na forma b[:,:], caso contrário, se fizermos b = 1., uma nova
variável é criada.
End of explanation
"""
a = np.arange(25).reshape((5,5))
print('a=\n',a )
b = a[:,0]
print('b=',b )
b[:] = np.arange(5)
print('b=',b )
print('a=\n',a )
"""
Explanation: Este outro exemplo é uma forma atraente de processar uma coluna de uma matriz bidimensional,
porém é preciso CUIDADO, pois o uso de b deve ser com b[:] se for atribuído um novo valor para
ele, caso contrário, se fizermos b = arange(5), uma nova variável é criada.
End of explanation
"""
a = np.arange(24).reshape((4,6))
print('a:\n',a )
print('a.T:\n',a.T )
print('np.may_share_memory(a,a.T):',np.may_share_memory(a,a.T) )
"""
Explanation: Transposto
A operação matricial de transposição que troca linhas por colunas produz também um view
da imagem, sem necessidade de cópia:
End of explanation
"""
a = np.arange(24).reshape((4,6))
print('a:\n',a )
av = a.ravel()
print('av.shape:',av.shape )
print('av:\n',av )
print('np.may_share_memory(a,av):',np.may_share_memory(a,av) )
"""
Explanation: Ravel
Aplicando-se o método ravel() a um ndarray, gera-se um view do raster
linearizado (i.e. uma única dimensão) do ndarray.
End of explanation
"""
b = a.copy()
c = np.array(a, copy=True)
print("id de a = ",id(a) )
print("id de b = ",id(b) )
print("id de c = ",id(c) )
"""
Explanation: Cópia profunda
Cria uma copia completa do array, do seu shape e conteúdo. A recomendação é utilizar a
função copy() para realizar a copia profunda, entretanto é possível conseguir a
copia profunda pelo np.array.
End of explanation
"""
a = np.arange(20).reshape(5,4)
b = 2 * np.ones((5,4))
c = np.arange(12,0,-1).reshape(4,3)
print('a=\n', a )
print('b=\n', b )
print('c=\n', c )
"""
Explanation: Operações matriciais
Uma das principais vantagens da estrutura ndarray é sua habilidade de processamento matricial.
Assim, para se multiplicar todos os elementos de um array por um escalar basta escrever a * 5 por
exemplo. Para se fazer qualquer operação lógica ou aritmética entre arrays, basta escrever a <oper> b:
End of explanation
"""
b5 = 5 * b
print('b5=\n', b5 )
"""
Explanation: Multiplicação de array por escalar: b x 5
End of explanation
"""
amb = a + b
print('amb=\n', amb )
"""
Explanation: Soma de arrays: a + b
End of explanation
"""
at = a.T
print('a.shape=',a.shape )
print('a.T.shape=',a.T.shape )
print('a=\n', a )
print('at=\n', at )
"""
Explanation: Transposta de uma matriz: a.T
A transposta de uma matriz, troca os eixos das coordenadas. O elemento que
estava na posição (r,c) vai agora estar na posição (c,r). O shape
da matriz resultante ficará portanto com os valores trocados. A operação
de transposição é feita através de cópia rasa, portanto é uma operação
muito eficiente e deve ser utilizada sempre que possível.
Veja o exemplo a seguir:
End of explanation
"""
ac = a.dot(c)
print('a.shape:',a.shape )
print('c.shape:',c.shape )
print('a=\n',a )
print('c=\n',c )
print('ac=\n', ac )
print('ac.shape:',ac.shape )
"""
Explanation: Multiplicação de matrizes: a x c
A multiplicação de matrizes é feita através do operador dot.
Para que a multiplicação seja possível é importante que o número de
colunas do primeiro ndarray seja igual ao número de linhas do
segundo. As dimensões do resultado será o número de linhas do
primeiro ndarray pelo número de colunas do segundo ndarray. Confira:
End of explanation
"""
# gera um numpy.array de 10 elementos, linearmente espaçados entre 0 a 1
print(np.linspace(0, 1.0, num=10).round(2) )
"""
Explanation: Linspace e Arange
As funções do numpy linspace e arange tem o mesmo objetivo: gerar numpy.arrays linearmente
espaçados em um intervalo indicado como parâmetro.
A diferença primordial entre essas funções é como será realizada a divisão no intervalo especificado.
Na função linspace essa divisão é feita através da definição do intervalo fechado [inicio,fim], isto é, contém o
início e o fim, e da quantidade de
elementos que o numpy.array final terá. O passo portanto é calculado como (fim - inicio)/(n - 1).
Dessa forma, se queremos gerar um numpy.array entre 0 e 1 com 10 elementos, utilizaremos o linspace da seguinte forma
End of explanation
"""
# gera um numpy.array linearmente espaçados entre 0 a 1 com passo 0.1
print(np.arange(0, 1.0, 0.1) )
"""
Explanation: Já na função arange, define-se o intervalo semi-aberto [inicio,fim) e o passo que será dado entre um elemento e outro.
Dessa forma, para gerar
um numpy.array entre 0 e 1 com 10 elementos, temos que calcular o passo (0.1) e passar esse passo como parâmetro.
End of explanation
"""
r,c = np.indices( (5, 10) )
print('r=\n', r )
print('c=\n', c )
"""
Explanation: Confirme que a principal diferença entre os dois que pode ser verificada nos exemplos acima é que
no linspace o limite superior da distribuição é inclusivo (intervalo fechado),
enquanto no arange isso não ocorre (intervalo semi-aberto).
Funções indices e meshgrid
As funções indices e meshgrid são extremamente úteis na geração de imagens sintéticas e o seu aprendizado permite também
entender as vantagens de programação matricial, evitando-se a varredura seqüencial da imagem muito usual na programação na linguagem C.
Operador indices em pequenos exemplos numéricos
A função indices recebe como parâmetros uma tupla com as dimensões (H,W) das matrizes a serem criadas. No exemplo a seguir, estamos
gerando matrizes de 5 linhas e 10 colunas. Esta função retorna uma tupla de duas matrizes que podem ser obtidas fazendo suas atribuições
como no exemplo a seguir onde criamos as matrizes r e c, ambas de tamanho (5,10), isto é, 5 linhas e 10 colunas:
End of explanation
"""
f = r + c
print('f=\n', f )
"""
Explanation: Note que a matriz r é uma matriz onde cada elemento é a sua coordenada linha e a matriz c é uma matriz onde cada elemento é
a sua coordenada coluna. Desta forma, qualquer operação matricial feita com r e c, na realidade você está processando as
coordenadas da matriz. Assim, é possível gerar diversas imagens sintéticas a partir de uma função de suas coordenadas.
Como o NumPy processa as matrizes diretamente, sem a necessidade de fazer um for explícito, a notação do programa fica bem simples
e a eficiência também. O único inconveniente é o uso da memória para se calcular as matrizes de índices r e c. Iremos
ver mais à frente que isto pode ser minimizado.
Por exemplo seja a função que seja a soma de suas coordenadas $f(r,c) = r + c$:
End of explanation
"""
f = r - c
print('f=\n', f )
"""
Explanation: Ou ainda a função diferença entre a coordenada linha e coluna $f(r,c) = r - c$:
End of explanation
"""
f = (r + c) % 2
print('f=\n', f )
"""
Explanation: Ou ainda a função $f(r,c) = (r + c) \% 2$ onde % é operador módulo. Esta função retorna 1 se a soma das coordenadas for ímpar e 0 caso contrário.
É uma imagem no estilo de um tabuleiro de xadrez de valores 0 e 1:
End of explanation
"""
f = (r == c//2)
print('f=\n', f )
"""
Explanation: Ou ainda a função de uma reta $f(r,c) = (r = \frac{1}{2}c)$:
End of explanation
"""
f = r**2 + c**2
print('f=\n', f )
"""
Explanation: Ou ainda a função parabólica dada pela soma do quadrado de suas coordenadas $f(r,c) = r^2 + c^2$:
End of explanation
"""
f = ((r**2 + c**2) < 4**2)
print('f=\n', f * 1 )
"""
Explanation: Ou ainda a função do círculo de raio 4, com centro em (0,0) $f(r,c) = (r^2 + c^2 < 4^2)$:
End of explanation
"""
# Diretiva para mostrar gráficos inline no notebook
%matplotlib inline
import matplotlib.pylab as plt
r,c = np.indices( (200, 300) )
plt.subplot(121)
plt.imshow(r,cmap = 'gray')
plt.title("linhas")
plt.axis('off')
plt.subplot(122)
plt.imshow(c,cmap = 'gray')
plt.axis('off')
plt.title("colunas");
"""
Explanation: Operador indices em exemplo de imagens sintéticas
Vejamos os exemplos acima, porém gerados em imagens. A diferença será no tamanho da matriz, iremos utilizar matriz (200,300), e
a forma de visualizá-la através do adshow, ao invés de imprimir os valores como fizemos acima. Gerando as coordenadas utilizando indices:
Observe que o parâmetro de indices é uma tupla. Verifique o número de parêntesis utilizados:
End of explanation
"""
f = r + c
plt.imshow(f,cmap = 'gray')
plt.title("r+c")
plt.axis("off")
"""
Explanation: Soma
Função soma: $f(r,c) = r + c$:
End of explanation
"""
f = r - c
plt.imshow(f,cmap = 'gray')
plt.title("r-c")
plt.axis("off")
"""
Explanation: Subtração
Função subtração $f(r,c) = r - c$:
End of explanation
"""
f = (r//8 + c//8) % 2
plt.imshow(f,cmap = 'gray')
plt.title("(r+c)%2")
plt.axis("off")
"""
Explanation: Xadrez
Função xadrez $f(r,c) = (\lfloor r/8 \rfloor + \lfloor c/8 \rfloor) \,\%\, 2$. Aqui foi feita a divisão inteira por 8 para que o tamanho das casas do xadrez fique 8 x 8, caso
contrário é muito difícil de visualizar o efeito xadrez pois a imagem possui muitos pixels:
End of explanation
"""
f = (r == c//2)
plt.imshow(f,cmap = 'gray')
plt.title('r == c//2')
plt.axis("off")
"""
Explanation: Reta
Ou ainda a função de uma reta $f(r,c) = (r = \frac{1}{2} c)$:
End of explanation
"""
f = r**2 + c**2
plt.imshow(f,cmap = 'gray')
plt.title('r**2 + c**2')
plt.axis("off")
"""
Explanation: Parábola
Função parabólica: $f(r,c) = r^2 + c^2$:
End of explanation
"""
f = (((r-100)**2 + (c-100)**2) < 19**2)
plt.imshow(f,cmap = 'gray')
plt.title('((r-100)**2 + (c-100)**2) < 19**2')
plt.axis("off")
"""
Explanation: Círculo
Função do círculo de raio 19 com centro em (100,100), $f(r,c) = ((r-100)^2 + (c-100)^2 < 19^2)$:
End of explanation
"""
import numpy as np
r, c = np.meshgrid( np.array([-1.5, -1.0, -0.5, 0.0, 0.5]),
np.array([-20, -10, 0, 10, 20, 30]), indexing='ij')
print('r=\n',r )
print('c=\n',c )
"""
Explanation: Meshgrid
A função meshgrid é semelhante à função indices visto
anteriormente, porém, enquanto indices gera as coordenadas inteiras não negativas a partir de um shape(H,W),
o meshgrid gera os valores das matrizes a partir de dois vetores de valores reais quaisquer, um para as linhas e outro para as colunas.
Veja a seguir um pequeno exemplo numérico. Para que o meshgrid fique compatível com a nossa convenção de (linhas,colunas), deve-se
usar o parâmetro indexing='ij'.
End of explanation
"""
rows = np.linspace(-1.5, 0.5, 5)
cols = np.linspace(-20, 30, 6)
print('rows:', rows )
print('cols:', cols )
"""
Explanation: Gerando os vetores com linspace
A função linspace gera vetor em ponto flutuante recebendo os parâmetro de valor inicial, valor final e número de pontos do vetor.
Desta forma ele é bastante usado para gerar os parâmetro para o meshgrid.
Repetindo os mesmos valores do exemplo anterior, porém usando linspace. Observe que o primeiro vetor possui 5 pontos,
começando com valor -1.5 e o valor final é 0.5 (inclusive). O segundo vetor possui 6 pontos, começando de -20 até 30:
End of explanation
"""
r, c = np.meshgrid(rows, cols, indexing='ij')
print('r = \n', r )
print('c = \n', c )
"""
Explanation: Usando os dois vetores gerados pelo linspace no meshgrid:
End of explanation
"""
f = r * c
print('f=\n', f )
"""
Explanation: Podemos agora gerar uma matriz ou imagem que seja função destes valores. Por exemplo ser o produto deles:
End of explanation
"""
e = np.spacing(1) # epsilon to avoid 0/0
rows = np.linspace(-5.0, 5.0, 150) # coordenadas das linhas
cols = np.linspace(-6.0, 6.0, 180) # coordenadas das colunas
r, c = np.meshgrid(rows, cols, indexing='ij') # Grid de coordenadas estilo numpy
z = np.sin(r**2 + c**2 + e) / (r**2 + c**2 + e) # epsilon is added to avoid 0/0
plt.imshow(z,cmap = 'gray')
plt.title('Função sinc: sen(r² + c²)/(r²+c²) em duas dimensões')
plt.axis("off")
"""
Explanation: Exemplo na geração da imagem sinc com meshgrid
Neste exemplo, geramos a imagem da função $sinc(r,c)$ em duas dimensões, nos intervalos na vertical, de -5 a 5 e na
horizontal de -6 a 6. A função sinc é uma função trigonométrica que pode ser utilizada para filtragens.
A equação é dada por:
$$ sinc(r,c) = \frac{\sin(r^2 + c^2)}{r^2 + c^2}, \text{para\ } -5 \leq r \leq 5, -6 \leq c \leq 6
$$
Na origem, tanto r como c são zeros, resultando uma divisão por zero. Entretanto pela teoria dos limites, $\frac{sin(x)}{x}$ é
igual a 1 quando $x$ é igual a zero.
Uma forma de se obter isto em ponto flutuante é somar tanto no numerador como no denominador um epsilon, que é a
menor valor em ponto flutuante. Epsilon pode ser obtido pela função np.spacing.
End of explanation
"""
n_rows = len(rows)
n_cols = len(cols)
r,c = np.indices((n_rows,n_cols))
r = -5. + 10.*r.astype(float)/(n_rows-1)
c = -6. + 12.*c.astype(float)/(n_cols-1)
zi = np.sin(r**2 + c**2 + e) / (r**2 + c**2 + e) # epsilon is added to avoid 0/0
plt.imshow(zi,cmap = 'gray')
plt.title('Função sinc: sin(r² + c²)/(r²+c²) em duas dimensões')
plt.axis("off")
"""
Explanation: Exemplo na geração da imagem sinc com indices
Outra forma de gerar a mesma imagem, usando a função indices é processar os
indices de modo a gerar os mesmos valores relativos à grade de espaçamento regular
acima, conforme ilustrado abaixo:
End of explanation
"""
print('Máxima diferença entre z e zi?', abs(z - zi).max() )
"""
Explanation: Verificando que as duas funções são iguais:
End of explanation
"""
a = np.array([0, 1, 2])
print('a = \n', a )
print()
print('Resultado da operação np.tile(a,2): \n',np.tile(a,2) )
"""
Explanation: Para usuários avançados
Na realidade a função indices retorna um único array n-dimensional com uma dimensão a mais que o indicado pelo
shape usado como parâmetro. Assim, quando é feito r,c = np.indices((rows,cols)), r é atribuído para o elemento 0 e
c é atribuído para o elemento 1 do ndarray. No caso do meshgrid, ele retorna tantos arrays quanto forem o número
de vetores passados como parâmetro para meshgrid.
Tile
Uma função importante da biblioteca numpy é a tile, que gera repetições
do array passado com parâmetro. A quantidade de repetições é dada pelo
parâmetro reps
Exemplo unidimensional - replicando as colunas
End of explanation
"""
a = np.array([0, 1, 2])
print('a = \n', a )
print()
print('Resultado da operação np.tile(a,(2,1)): \n',np.tile(a,(2,1)) )
"""
Explanation: One-dimensional example - replicating the rows
To change the dimensions along which the replication is performed,
modify the reps parameter, passing a tuple
with the desired dimensions instead of an int.
End of explanation
"""
a = np.arange(4).reshape(2,2)
print('a = \n', a )
print()
print('Result of np.tile(a,2): \n',np.tile(a,2) )
"""
Explanation: Two-dimensional example - replicating the columns
End of explanation
"""
a = np.arange(4).reshape(2,2)
print('a = \n', a )
print()
print('Result of np.tile(a,(3,1)): \n',np.tile(a,(3,1)) )
"""
Explanation: Two-dimensional example - replicating the rows
End of explanation
"""
a = np.arange(4).reshape(2,2)
print('a = \n', a )
print()
print('Result of np.tile(a,(2,2)): \n',np.tile(a,(2,2)) )
"""
Explanation: Two-dimensional example - replicating rows and columns simultaneously
End of explanation
"""
a = np.array([[0,1],[2,3]])
print('a = \n', a )
print()
print('np.resize(a,(1,7)) = \n', np.resize(a,(1,7)) )
print()
print('np.resize(a,(2,5)) = \n', np.resize(a,(2,5)) )
"""
Explanation: Resize
The np.resize function takes an array a and returns an array with the desired shape. If the new array
is larger than the original array, the new array is filled with copies of a.
End of explanation
"""
a = np.array([11,1,2,3,4,5,12,-3,-4,7,4])
print('a = ',a )
print('np.clip(a,0,10) = ', np.clip(a,0,10) )
"""
Explanation: Clip
The clip function replaces the values of an array that are below a minimum threshold or above a maximum threshold
with those minimum and maximum thresholds, respectively. This function is especially useful in image processing to
keep indices from running past the image limits.
Examples
End of explanation
"""
a = np.arange(10).astype(int)
print('a=',a )
print('np.clip(a,2.5,7.5)=',np.clip(a,2.5,7.5) )
"""
Explanation: Floating-point example
Note that if the clip parameters are floating-point values, the result will also be floating point:
End of explanation
"""
A = np.exp(np.linspace(0.1,10,32)).reshape(4,8)/3000.
print('A: \n', A )
"""
Explanation: Formatting arrays for printing
Printing floating-point arrays
When printing arrays with floating-point values, NumPy generally prints the array with many decimal
places and in scientific notation, which makes it hard to read.
End of explanation
"""
np.set_printoptions(suppress=True, precision=3)
print('A: \n', A )
"""
Explanation: The number of decimal places can be reduced and the exponential notation suppressed by using
NumPy's set_printoptions function:
End of explanation
"""
A = np.random.rand(5,10) > 0.5
print('A = \n', A )
"""
Explanation: Printing binary arrays
Boolean arrays are printed with the words True and False, as in the following example:
End of explanation
"""
print ('A = \n', A.astype(int))
"""
Explanation: To make these arrays easier to read, the values can be converted to integers using
the astype(int) method:
End of explanation
"""
|
ajgpitch/qutip-notebooks
|
docs/guide/Visualization.ipynb
|
lgpl-3.0
|
%matplotlib inline
import numpy as np
from pylab import *
from qutip import *
"""
Explanation: Visualization of Quantum States and Processes
Contents
Introduction
Fock-Basis Probability Distributions
Quasi-Probability Distributions
Visualizing Operators
Quantum Process Tomography
End of explanation
"""
N = 20
rho_coherent = coherent_dm(N, np.sqrt(2))
rho_thermal = thermal_dm(N, 2)
rho_fock = fock_dm(N, 2)
"""
Explanation: <a id='intro'></a>
Introduction
Visualization is often an important complement to a simulation of a quantum
mechanical system. The first method of visualization that comes to mind might be
to plot the expectation values of a few selected operators. But on top of that,
it can often be instructive to visualize for example the state vectors or
density matrices that describe the state of the system, or how the state is
transformed as a function of time (see process tomography below). In this
section we demonstrate how QuTiP and matplotlib can be used to perform a few
types of visualizations that often can provide additional understanding of
a quantum system.
<a id='fock'></a>
Fock-Basis Probability Distributions
In quantum mechanics probability distributions plays an important role, and as
in statistics, the expectation values computed from a probability distribution
do not reveal the full story. For example, consider a quantum harmonic
oscillator mode with Hamiltonian $H = \hbar\omega a^\dagger a$, which is
in a state described by its density matrix $\rho$, and which on average
is occupied by two photons, $\mathrm{Tr}[\rho a^\dagger a] = 2$. Given
this information we cannot say whether the oscillator is in a Fock state,
a thermal state, a coherent state, etc. By visualizing the photon distribution
in the Fock state basis important clues about the underlying state can be
obtained.
One convenient way to visualize a probability distribution is to use histograms.
Consider the following histogram visualization of the number-basis probability
distribution, which can be obtained from the diagonal of the density matrix,
for a few possible oscillator states with an average occupation of two photons.
First we generate the density matrices for the coherent, thermal and fock states, respectively.
End of explanation
"""
fig, axes = subplots(1, 3, figsize=(12,3))
bar0 = axes[0].bar(np.arange(0, N)-.5, rho_coherent.diag())
lbl0 = axes[0].set_title("Coherent state")
lim0 = axes[0].set_xlim([-.5, N])
bar1 = axes[1].bar(np.arange(0, N)-.5, rho_thermal.diag())
lbl1 = axes[1].set_title("Thermal state")
lim1 = axes[1].set_xlim([-.5, N])
bar2 = axes[2].bar(np.arange(0, N)-.5, rho_fock.diag())
lbl2 = axes[2].set_title("Fock state")
lim2 = axes[2].set_xlim([-.5, N])
"""
Explanation: Next, we plot histograms of the diagonals of the density matrices:
End of explanation
"""
fig, axes = subplots(1, 3, figsize=(10,3))
plot_fock_distribution(rho_coherent, fig=fig, ax=axes[0], title="Coherent state")
plot_fock_distribution(rho_thermal, fig=fig, ax=axes[1], title="Thermal state")
plot_fock_distribution(rho_fock, fig=fig, ax=axes[2], title="Fock state")
fig.tight_layout()
show()
"""
Explanation: All these states correspond to an average of two photons, but by visualizing
the photon distribution in the Fock basis the differences between these states are
easily appreciated.
One frequently needs to visualize the Fock-distribution in the way described
above, so QuTiP provides a convenience function for doing this, see
plot_fock_distribution, and the following example:
End of explanation
"""
xvec = np.linspace(-5,5,200)
W_coherent = wigner(rho_coherent, xvec, xvec)
W_thermal = wigner(rho_thermal, xvec, xvec)
W_fock = wigner(rho_fock, xvec, xvec)
fig, axes = subplots(1, 3, figsize=(12,3))
cont0 = axes[0].contourf(xvec, xvec, W_coherent, 100)
lbl0 = axes[0].set_title("Coherent state")
cont1 = axes[1].contourf(xvec, xvec, W_thermal, 100)
lbl1 = axes[1].set_title("Thermal state")
cont0 = axes[2].contourf(xvec, xvec, W_fock, 100)
lbl2 = axes[2].set_title("Fock state")
"""
Explanation: <a id='quasi'></a>
Quasi-Probability Distributions
The probability distribution in the number (Fock) basis only describes the
occupation probabilities for a discrete set of states. More complete
phase-space probability-distribution-like functions for harmonic modes are
the Wigner and Husimi Q-functions, which are full descriptions of the
quantum state (equivalent to the density matrix). These are called
quasi-distribution functions because unlike real probability distribution
functions they can for example be negative. In addition to being more complete descriptions of a state (compared to only the occupation probabilities plotted above),
these distributions are also great for demonstrating if a quantum state is
quantum mechanical, since for example a negative Wigner function
is a definite indicator that a state is distinctly nonclassical.
Wigner Function
In QuTiP, the Wigner function for a harmonic mode can be calculated with the
function wigner. It takes a ket or a density matrix as input, together with arrays that define the ranges of the phase-space coordinates (in the x-y plane). In the following example the Wigner functions are calculated and plotted for the same three states as in the previous section.
End of explanation
"""
import matplotlib as mpl
from matplotlib import cm
psi = (basis(10, 0) + basis(10, 3) + basis(10, 9)).unit()
xvec = np.linspace(-5, 5, 500)
W = wigner(psi, xvec, xvec)
wmap = wigner_cmap(W) # Generate Wigner colormap
nrm = mpl.colors.Normalize(-W.max(), W.max())
fig, axes = subplots(1, 2, figsize=(10, 4))
plt1 = axes[0].contourf(xvec, xvec, W, 100, cmap=cm.RdBu, norm=nrm)
axes[0].set_title("Standard Colormap")
cb1 = fig.colorbar(plt1, ax=axes[0])
plt2 = axes[1].contourf(xvec, xvec, W, 100, cmap=wmap) # Apply Wigner colormap
axes[1].set_title("Wigner Colormap")
cb2 = fig.colorbar(plt2, ax=axes[1])
fig.tight_layout()
show()
"""
Explanation: Custom Color Maps
The main objective when plotting a Wigner function is to demonstrate that the underlying
state is nonclassical, as indicated by negative values in the Wigner function. Therefore,
making these negative values stand out in a figure is helpful for both analysis and publication
purposes. Unfortunately, all of the color schemes used in Matplotlib (or any other plotting software)
are linear colormaps where small negative values tend to be near the same color as the zero values, and
are thus hidden. To fix this dilemma, QuTiP includes a nonlinear colormap function wigner_cmap
that colors all negative values differently than positive or zero values. Below is a demonstration of how to use
this function in your Wigner figures:
End of explanation
"""
Q_coherent = qfunc(rho_coherent, xvec, xvec)
Q_thermal = qfunc(rho_thermal, xvec, xvec)
Q_fock = qfunc(rho_fock, xvec, xvec)
fig, axes = subplots(1, 3, figsize=(12,3))
cont0 = axes[0].contourf(xvec, xvec, Q_coherent, 100)
lbl0 = axes[0].set_title("Coherent state")
cont1 = axes[1].contourf(xvec, xvec, Q_thermal, 100)
lbl1 = axes[1].set_title("Thermal state")
cont0 = axes[2].contourf(xvec, xvec, Q_fock, 100)
lbl2 = axes[2].set_title("Fock state")
show()
"""
Explanation: Husimi Q-function
The Husimi Q function is, like the Wigner function, a quasiprobability
distribution for harmonic modes. It is defined as
$$
Q(\alpha) = \frac{1}{\pi}\left<\alpha|\rho|\alpha\right>
$$
where $\left|\alpha\right>$ is a coherent state and $\alpha = x + iy$. In QuTiP, the Husimi Q function can be computed given a state ket or density matrix using the function qfunc, as demonstrated below.
End of explanation
"""
N = 5
a = tensor(destroy(N), qeye(2))
b = tensor(qeye(N), destroy(2))
sx = tensor(qeye(N), sigmax())
H = a.dag() * a + sx - 0.5 * (a * b.dag() + a.dag() * b)
# visualize H
lbls_list = [[str(d) for d in range(N)], ["u", "d"]]
xlabels = []
for inds in tomography._index_permutations([len(lbls) for lbls in lbls_list]):
xlabels.append("".join([lbls_list[k][inds[k]] for k in range(len(lbls_list))]))
fig, ax = matrix_histogram(H, xlabels, xlabels, limits=[-4,4])
ax.view_init(azim=-55, elev=45)
show()
"""
Explanation: <a id='visual'></a>
Visualizing Operators
Sometimes, it may also be useful to directly visualize the underlying matrix
representation of an operator. The density matrix, for example, is an operator
whose elements can give insights about the state it represents, but one might
also be interested in plotting the matrix of a Hamiltonian to inspect the
structure and relative importance of various elements.
QuTiP offers a few functions for quickly visualizing matrix data in the
form of histograms, matrix_histogram and matrix_histogram_complex, and as Hinton diagrams of weighted squares, hinton. These functions take a Qobj as their first argument, and optional arguments to, for example, set the axis labels and figure title (see each function's documentation for details).
For example, to illustrate the use of matrix_histogram, let's visualize the Jaynes-Cummings Hamiltonian:
End of explanation
"""
rho_ss = steadystate(H, [np.sqrt(0.1) * a, np.sqrt(0.4) * b.dag()])
fig, ax = hinton(rho_ss) # xlabels=xlabels, ylabels=xlabels)
show()
"""
Explanation: Similarly, we can use the function hinton, which is used below to visualize the corresponding steadystate density matrix:
End of explanation
"""
U_psi = iswap()
"""
Explanation: <a id='qpt'></a>
Quantum Process Tomography
Quantum process tomography (QPT) is a useful technique for characterizing experimental implementations of quantum gates involving a small number of qubits. It can also be a useful theoretical tool that can give insight into how a process transforms states, and it can be used for example to study how noise or other imperfections deteriorate a gate. Whereas a fidelity or distance measure can give a single number that indicates how far from ideal a gate is, a quantum process tomography analysis can give detailed information about exactly what kind of errors various imperfections introduce.
The idea is to construct a transformation matrix for a quantum process (for example a quantum gate) that describes how the density matrix of a system is transformed by the process. We can then decompose the transformation in some operator basis that represent well-defined and easily interpreted transformations of the input states.
To see how this works, consider a process that is described by quantum map $\epsilon(\rho_{\rm in}) = \rho_{\rm out}$, which can be written
$$
\epsilon(\rho_{\rm in}) = \rho_{\rm out} = \sum_{i}^{N^2} A_i \rho_{\rm in} A_i^\dagger,
$$
where $N$ is the number of states of the system (that is, $\rho$ is represented by an $[N\times N]$ matrix). Given an orthogonal operator basis of our choice $\{B_i\}_{i=1}^{N^2}$, which satisfies ${\rm Tr}[B_i^\dagger B_j] = N\delta_{ij}$, we can write the map as
$$
\epsilon(\rho_{\rm in}) = \rho_{\rm out} = \sum_{mn} \chi_{mn} B_m \rho_{\rm in} B_n^\dagger.
$$
where $\chi_{mn} = \sum_{i} b_{im}b_{in}^*$ and $A_i = \sum_{m} b_{im}B_{m}$. Here, the matrix $\chi$ is the transformation matrix we are after, since it describes how much $B_m \rho_{\rm in} B_n^\dagger$ contributes to $\rho_{\rm out}$.
In a numerical simulation of a quantum process we usually do not have access to the quantum map in the above form. Instead, what we usually can do is to calculate the propagator $U$ for the density matrix in superoperator form, using for example the QuTiP function propagator. We can then write
$$
\epsilon(\tilde{\rho}_{\rm in}) = U \tilde{\rho}_{\rm in} = \tilde{\rho}_{\rm out}
$$
where $\tilde{\rho}$ is the vector representation of the density matrix $\rho$. If we write in superoperator form we obtain
$$
\tilde{\rho}_{\rm out} = \sum_{mn} \chi_{mn} \tilde{B}_m \tilde{B}_n^\dagger \tilde{\rho}_{\rm in} = U \tilde{\rho}_{\rm in}.
$$
so we can identify
$$
U = \sum_{mn} \chi_{mn} \tilde{B}_m \tilde{B}_n^\dagger.
$$
Now this is a linear equation system for the $N^2 \times N^2$ elements in $\chi$. We can solve it by writing $\chi$ and the superoperator propagator as $[N^4]$ vectors, and likewise write the superoperator product $\tilde{B}_m\tilde{B}_n^\dagger$ as a $[N^4\times N^4]$ matrix $M$:
$$
U_I = \sum_{J}^{N^4} M_{IJ} \chi_{J}
$$
with the solution
$$
\chi = M^{-1}U.
$$
Note that to obtain $\chi$ with this method we have to construct a matrix $M$ with a size that is the square of the size of the superoperator for the system. Obviously, this scales very badly with increasing system size, but this method can still be very useful for small systems (such as systems comprised of a small number of coupled qubits).
Implementation in QuTiP
In QuTiP, the procedure described above is implemented in the function qpt, which returns the $\chi$ matrix given a density matrix propagator. To illustrate how to use this function, let's consider the $i$-SWAP gate for two qubits. In QuTiP the function iswap generates the unitary transformation for the state kets:
End of explanation
"""
U_rho = spre(U_psi) * spost(U_psi.dag())
"""
Explanation: To be able to use this unitary transformation matrix as input to the function qpt, we first need to convert it to a transformation matrix for the corresponding density matrix:
End of explanation
"""
op_basis = [[qeye(2), sigmax(), sigmay(), sigmaz()]] * 2
op_label = [["i", "x", "y", "z"]] * 2
"""
Explanation: Next, we construct a list of operators that define the basis $\{B_i\}$ in the form of a list of operators for each composite system. At the same time, we also construct a list of corresponding labels that will be used when plotting the $\chi$ matrix.
End of explanation
"""
chi = qpt(U_rho, op_basis)
fig = qpt_plot_combined(chi, op_label, r'$i$SWAP')
show()
"""
Explanation: We are now ready to compute $\chi$ using qpt, and to plot it using qpt_plot_combined.
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/guide.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: For a slightly more advanced example, where the density matrix propagator is calculated from the dynamics of a system defined by its Hamiltonian and collapse operators using the function propagator, see the notebook Time-dependent master equation: Landau-Zener transitions on the tutorials section on the QuTiP web site.
End of explanation
"""
|
chungjjang80/FRETBursts
|
notebooks/Example - 2CDE Method.ipynb
|
gpl-2.0
|
from fretbursts import *
from fretbursts.phtools import phrates
sns = init_notebook(apionly=True)
sns.__version__
# Tweak here matplotlib style
import matplotlib as mpl
mpl.rcParams['font.sans-serif'].insert(0, 'Arial')
mpl.rcParams['font.size'] = 12
%config InlineBackend.figure_format = 'retina'
"""
Explanation: Example - 2CDE Method
This notebook is part of smFRET burst analysis software FRETBursts.
This notebook implements the 2CDE method from Tomov 2012.
For a complete tutorial on burst analysis see
FRETBursts - us-ALEX smFRET burst analysis.
End of explanation
"""
url = 'http://files.figshare.com/2182601/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'
download_file(url, save_dir='./data')
filename = "data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5"
d = loader.photon_hdf5(filename)
loader.alex_apply_period(d)
d.calc_bg(fun=bg.exp_fit, time_s=20, tail_min_us='auto', F_bg=1.7)
d.burst_search()
ds1 = d.select_bursts(select_bursts.size, th1=30)
ds = ds1.select_bursts(select_bursts.naa, th1=30)
alex_jointplot(ds)
ph = d.ph_times_m[0]
tau = 100e-6/d.clk_p
tau
"""
Explanation: Load Data
End of explanation
"""
tau = 1
tau2 = 2 * (tau**2)
xx = np.arange(-4*tau, 4*tau, tau/100.)
y1 = np.exp(-np.abs(xx) / tau)
y2 = np.exp(-xx**2 / tau2)
plt.plot(xx,y1, label=r'$\exp \left( - \frac{|t|}{\tau} \right)$')
plt.plot(xx, y2, label=r'$\exp \left( - \frac{t^2}{2\tau^2} \right)$')
plt.axvline(2*tau, color='k')
plt.axvline(-2*tau, color='k')
plt.xlabel('t')
plt.legend(fontsize=22, bbox_to_anchor=(1.05, 1), loc=2)
plt.title(r'$\tau = %d$' % tau, fontsize=22);
"""
Explanation: KDE considerations
In computing a KDE, the kernel can have different shapes. In the original
2CDE publication the "laplace distribution" kernel is used.
In the next sections we will show the 2CDE results
using both the "laplace distribution" and the Gaussian kernel.
Here, we simply plot the two kernels for comparison:
End of explanation
"""
def calc_fret_2cde(tau, ph, mask_d, mask_a, bursts):
"""
Compute FRET-2CDE for each burst.
FRET-2CDE is a quantity that tends to be around 10 for bursts which have no
dynamics, while it has larger values (e.g. 30..100) for bursts with
millisecond dynamics.
References:
Tomov et al. BJ (2012) doi:10.1016/j.bpj.2011.11.4025
Arguments:
tau (scalar): time-constant of the exponential KDE
ph (1D array): array of all-photons timestamps.
mask_d (bool array): mask for DexDem photons
mask_a (bool array): mask for DexAem photons
bursts (Bursts object): object containing burst data
(start-stop indexes are relative to `ph`).
Returns:
FRET_2CDE (1D array): array of FRET_2CDE quantities, one element
per burst. This array contains NaN for bursts containing too
few photons to compute FRET-2CDE.
"""
# Computing KDE burst-by-burst would cause inaccuracies at the burst edges.
# Therefore, we first compute KDE on the full timestamps array and then
# we take slices for each burst.
# These KDEs are evaluated on all-photons array `ph` (hence the Ti suffix)
# using D or A photons during D-excitation (argument ph[mask_d] or ph[mask_a]).
KDE_DTi = phrates.kde_laplace(ph[mask_d], tau, time_axis=ph)
KDE_ATi = phrates.kde_laplace(ph[mask_a], tau, time_axis=ph)
FRET_2CDE = []
for ib, burst in enumerate(bursts):
burst_slice = slice(int(burst.istart), int(burst.istop) + 1)
if ~mask_d[burst_slice].any() or ~mask_a[burst_slice].any():
# Either D or A photon stream has no photons in current burst,
# thus FRET_2CDE cannot be computed. Fill position with NaN.
FRET_2CDE.append(np.nan)
continue
# Take slices of KDEs for current burst
kde_adi = KDE_ATi[burst_slice][mask_d[burst_slice]]
kde_ddi = KDE_DTi[burst_slice][mask_d[burst_slice]]
kde_dai = KDE_DTi[burst_slice][mask_a[burst_slice]]
kde_aai = KDE_ATi[burst_slice][mask_a[burst_slice]]
# nbKDE does not include the "center" timestamp which contributes 1.
# We thus subtract 1 from the precomputed KDEs.
# The N_CHD (N_CHA) value in the correction factor is the number of
# timestamps in DexDem (DexAem) stream falling within the current burst.
N_CHD = mask_d[burst_slice].sum()
N_CHA = mask_a[burst_slice].sum()
nbkde_ddi = (1 + 2/N_CHD) * (kde_ddi - 1)
nbkde_aai = (1 + 2/N_CHA) * (kde_aai - 1)
# N_CHD (N_CHA) in eq. 6 (eq. 7) of (Tomov 2012) is the number of photons
# in DexDem (DexAem) in current burst. Thus the sum is a mean.
ED = np.mean(kde_adi / (kde_adi + nbkde_ddi)) # (E)_D
EA = np.mean(kde_dai / (kde_dai + nbkde_aai)) # (1 - E)_A
# Compute fret_2cde for current burst
fret_2cde = 110 - 100 * (ED + EA)
FRET_2CDE.append(fret_2cde)
return np.array(FRET_2CDE)
def calc_fret_2cde_gauss(tau, ph, mask_d, mask_a, bursts):
"""
Compute a modification of FRET-2CDE using a Gaussian kernel.
Reference: Tomov et al. BJ (2012) doi:10.1016/j.bpj.2011.11.4025
Instead of using the exponential kernel (i.e. laplace distribution)
of the original paper, here we use a Gaussian kernel.
Photon density using Gaussian kernel provides a smooth estimate
regardless of the evaluation time. On the contrary, the
laplace-distribution kernel has discontinuities in the derivative
(cusps) at each time point corresponding to a timestamp.
Using a Gaussian kernel removes the need of using the heuristic
correction (pre-factor) of nbKDE.
Arguments:
tau (scalar): time-constant (width) of the Gaussian KDE kernel
ph (1D array): array of all-photons timestamps.
mask_d (bool array): mask for DexDem photons
mask_a (bool array): mask for DexAem photons
bursts (Bursts object): object containing burst data
Returns:
FRET_2CDE (1D array): array of FRET_2CDE quantities, one element
per burst. This array contains NaN for bursts containing too
few photons to compute FRET-2CDE.
"""
# Computing KDE burst-by-burst would cause inaccuracies at the edges
# So, we compute KDE for the full timestamps
KDE_DTi = phrates.kde_gaussian(ph[mask_d], tau, time_axis=ph)
KDE_ATi = phrates.kde_gaussian(ph[mask_a], tau, time_axis=ph)
FRET_2CDE = []
for ib, burst in enumerate(bursts):
burst_slice = slice(int(burst.istart), int(burst.istop) + 1)
if ~mask_d[burst_slice].any() or ~mask_a[burst_slice].any():
# Either D or A photon stream has no photons in current burst,
# thus FRET_2CDE cannot be computed.
FRET_2CDE.append(np.nan)
continue
kde_ddi = KDE_DTi[burst_slice][mask_d[burst_slice]]
kde_adi = KDE_ATi[burst_slice][mask_d[burst_slice]]
kde_dai = KDE_DTi[burst_slice][mask_a[burst_slice]]
kde_aai = KDE_ATi[burst_slice][mask_a[burst_slice]]
ED = np.mean(kde_adi / (kde_adi + kde_ddi)) # (E)_D
EA = np.mean(kde_dai / (kde_dai + kde_aai)) # (1 - E)_A
fret_2cde = 110 - 100 * (ED + EA)
FRET_2CDE.append(fret_2cde)
return np.array(FRET_2CDE)
"""
Explanation: Notes on Kernel Shape
The Gaussian kernel gives a more accurate rate estimation with very little dependence on the position where the KDE is evaluated. On the contrary, with a symmetric exponential kernel (laplace distribution), there is always a strong dependence on the evaluation position. In particular, when rates are estimated at the timestamp positions, the rates are systematically over-estimated (i.e. the peak is always sampled).
For a Gaussian kernel, given a $\tau$, the rate estimation will be accurate for rates higher than $1/(2\,\tau)$ counts-per-second. For lower rates, the estimation will strongly depend on where the KDE is evaluated. A similar condition can also be found for the exponential kernel, but in this case the rate will always be strongly dependent on the position.
2CDE
KDE and nbKDE Definitions
Following Tomov 2012 notation, we define KDE as (Tomov 2012, eq. 4):
$$KDE_{X_i}^Y \left(t_{(CHX)_i}, t_{\{CHY\}} \right) =
\sum_j^{N_{CHY}} \exp \left( - \frac{\lvert t_{(CHX)_i} - t_{(CHY)_j} \rvert}{\tau}\right) $$
and nbKDE as (Tomov 2012, eq. 5):
$$nbKDE_{X_i}^X \left(t_{\{CHX\}} \right) = \left(1 + \frac{2}{N_{CHX}} \right) \cdot
\sum_{j, \;j\ne i}^{N_{CHX}} \exp \left( - \frac{\lvert t_{(CHX)_i} - t_{(CHX)_j} \rvert}{\tau}\right) $$
These quantities can be computed for any slice of the timestamp arrays.
In the implementation of FRET-2CDE, they will be computed on slices of
timestamps corresponding to each burst.
In this context, $N_{CHX}$, (in the multiplicative correction factor of nbKDE),
is the number of photons in the current burst (selecting only photons in the $X$ channel).
FRET-2CDE Definition
To compute FRET-2CDE we need to define (Tomov 2012, eq. 6):
$$(E)_D = \frac{1}{N_{CHD}} \sum_{i=1}^{N_{CHD}} \frac{KDE_{D_i}^A}{KDE_{D_i}^A + nbKDE_{D_i}^D} $$
and the symmetric estimator (Tomov 2012, eq. 7):
$$(1 - E)_A = \frac{1}{N_{CHA}} \sum_{i=1}^{N_{CHA}} \frac{KDE_{A_i}^D}{KDE_{A_i}^D + nbKDE_{A_i}^A} $$
Then FRET-2CDE is defined as (Tomov 2012, eq. 8):
$$ FRET-2CDE \left( t_{CHD}, t_{CHA} \right) =
110 - 100 \cdot \left[ (E)_D + (1 - E)_A \right]
$$
These quantities are computed for each burst, so that $N_{CHD}$ ($N_{CHA}$) are
the number of photons in the DexDem (DexAem) channel during the current burst.
FRET-2CDE Functions
To implement the FRET-2CDE, we use the following FRETBursts function:
phrates.kde_laplace() (documentation)
This function computes the local photon rate using KDE with a laplace distribution kernel.
FRETBursts provides similar functions to use a Gaussian or rectangular kernel (kde_gaussian and
kde_rect
respectively).
Here we define two functions to compute FRET-2CDE using the laplace kernel
(as in the original paper) and Gaussian kernel:
End of explanation
"""
tau_s = 50e-6 # in seconds
tau = int(tau_s/d.clk_p) # in raw timestamp units
tau
"""
Explanation: FRET-2CDE Results
Let's define $\tau$, the kernel parameter which defines the "time range"
of the photon density estimation:
End of explanation
"""
ph = d.get_ph_times(ph_sel=Ph_sel('all'))
mask_d = d.get_ph_mask(ph_sel=Ph_sel(Dex='Dem'))
mask_a = d.get_ph_mask(ph_sel=Ph_sel(Dex='Aem'))
bursts = ds.mburst[0]
"""
Explanation: Next, we get the timestamps and selection masks for DexDem and DexAem photon streams,
as well as the burst data:
End of explanation
"""
fret_2cde = calc_fret_2cde(tau, ph, mask_d, mask_a, bursts)
fret_2cde_gauss = calc_fret_2cde_gauss(tau, ph, mask_d, mask_a, bursts)
len(fret_2cde), len(fret_2cde_gauss), bursts.num_bursts, ds.num_bursts
"""
Explanation: We can finally compute the FRET-2CDE for each burst:
End of explanation
"""
plt.figure(figsize=(4.5, 4.5))
hist_kws = dict(edgecolor='k', linewidth=0.2,
facecolor=sns.color_palette('Spectral_r', 100)[7])
valid = np.isfinite(fret_2cde)
sns.kdeplot(ds.E[0][valid], fret_2cde[valid],
cmap='Spectral_r', shade=True, shade_lowest=False, n_levels=20)
plt.xlabel('E', fontsize=16)
plt.ylabel('FRET-2CDE', fontsize=16);
plt.ylim(-10, 50);
plt.axhline(10, ls='--', lw=2, color='k')
plt.text(0.05, 0.95, '2CDE', va='top', fontsize=22, transform=plt.gca().transAxes)
plt.text(0.95, 0.95, '# Bursts: %d' % valid.sum(),
va='top', ha='right', transform=plt.gca().transAxes)
plt.savefig('2cde.png', bbox_inches='tight', dpi=200, transparent=False)
valid = np.isfinite(fret_2cde)
x, y = ds.E[0][valid], fret_2cde[valid]
hist_kws = dict(edgecolor='k', linewidth=0.2,
facecolor=sns.color_palette('Spectral_r', 100)[7])
g = sns.JointGrid(x=x, y=y, ratio=3)
g.plot_joint(sns.kdeplot, cmap='Spectral_r', shade=True, shade_lowest=False, n_levels=20)
g.ax_marg_x.hist(x, bins=np.arange(-0.2, 1.2, 0.0333), **hist_kws)
g.ax_marg_y.hist(y, bins=70, orientation="horizontal", **hist_kws)
g.ax_joint.set_xlabel('E', fontsize=16)
g.ax_joint.set_ylabel('FRET-2CDE', fontsize=16);
g.ax_joint.set_ylim(-10, 50);
g.ax_joint.set_xlim(-0.1, 1.1);
g.ax_joint.axhline(10, ls='--', lw=2, color='k')
g.ax_joint.text(0.05, 0.95, '2CDE', va='top', fontsize=22, transform=g.ax_joint.transAxes)
g.ax_joint.text(0.95, 0.95, '# Bursts: %d' % valid.sum(),
va='top', ha='right', transform=g.ax_joint.transAxes)
plt.savefig('2cde_joint.png', bbox_inches='tight', dpi=200, transparent=False)
"""
Explanation: And visualize the results with some plots:
End of explanation
"""
bursts = ds1.mburst[0]
ph_dex = d.get_ph_times(ph_sel=Ph_sel(Dex='DAem'))
ph_aex = d.get_ph_times(ph_sel=Ph_sel(Aex='Aem'))
mask_dex = d.get_ph_mask(ph_sel=Ph_sel(Dex='DAem'))
mask_aex = d.get_ph_mask(ph_sel=Ph_sel(Aex='Aem'))
KDE_DexTi = phrates.kde_laplace(ph_dex, tau, time_axis=ph)
KDE_AexTi = phrates.kde_laplace(ph_aex, tau, time_axis=ph)
ALEX_2CDE = []
BRDex, BRAex = [], []
for ib, burst in enumerate(bursts):
burst_slice = slice(int(burst.istart), int(burst.istop) + 1)
if ~mask_dex[burst_slice].any() or ~mask_aex[burst_slice].any():
# Either D or A photon stream has no photons in current burst,
# thus ALEX_2CDE cannot be computed.
ALEX_2CDE.append(np.nan)
continue
kde_dexdex = KDE_DexTi[burst_slice][mask_dex[burst_slice]]
kde_aexdex = KDE_AexTi[burst_slice][mask_dex[burst_slice]]
N_chaex = mask_aex[burst_slice].sum()
BRDex.append(np.sum(kde_aexdex / kde_dexdex) / N_chaex)
kde_aexaex = KDE_AexTi[burst_slice][mask_aex[burst_slice]]
kde_dexaex = KDE_DexTi[burst_slice][mask_aex[burst_slice]]
N_chdex = mask_dex[burst_slice].sum()
BRAex.append(np.sum(kde_dexaex / kde_aexaex) / N_chdex)
alex_2cde = 100 - 50*(BRDex[-1] - BRAex[-1])
ALEX_2CDE.append(alex_2cde)
ALEX_2CDE = np.array(ALEX_2CDE)
ALEX_2CDE.size, np.isfinite(ALEX_2CDE).sum(), np.isfinite(ds1.E[0]).sum()
"""
Explanation: ALEX-2CDE Definition
To compute ALEX-2CDE we need to define (Tomov 2012, eq. 10):
$$BR_{D_{EX}} = \frac{1}{ N_{CHA_{EX}} }
\sum_{i=1}^{N_{CHD_{EX}}} \frac{ KDE_{D_{EX}i}^A }{ KDE_{D_{EX}i}^D }$$
and the analogous (Tomov 2012, eq. 11):
$$BR_{A_{EX}} = \frac{1}{ N_{CHD_{EX}} }
\sum_{i=1}^{N_{CHA_{EX}}} \frac{ KDE_{A_{EX}i}^D }{ KDE_{A_{EX}i}^A }$$
Finally, ALEX-2CDE is defined as (Tomov 2012, eq. 12):
$$ ALEX-2CDE \left( t_{CHD}, t_{CHA} \right) =
110 - 50 \cdot \left[ BR_{D_{EX}} + BR_{A_{EX}} \right]
$$
ALEX-2CDE Implementation
End of explanation
"""
hist_kws = dict(edgecolor='k', linewidth=0.2,
facecolor=sns.color_palette('Spectral_r', 100)[7])
valid = np.isfinite(ALEX_2CDE)
g = sns.JointGrid(x=ds1.E[0][valid], y=ALEX_2CDE[valid], ratio=3)
g = g.plot_joint(plt.hexbin, **{'cmap': 'Spectral_r', 'mincnt': 1, 'gridsize': 40})
_ = g.ax_marg_x.hist(ds1.E[0][valid], bins=np.arange(-0.2, 1.2, 0.0333), **hist_kws)
_ = g.ax_marg_y.hist(ALEX_2CDE[valid], bins=40, orientation="horizontal", **hist_kws)
g.ax_joint.set_xlabel('E', fontsize=16)
g.ax_joint.set_ylabel('ALEX-2CDE', fontsize=16);
g.ax_joint.text(0.95, 0.95, '# Bursts: %d' % valid.sum(),
va='top', ha='right', transform=g.ax_joint.transAxes);
valid = np.isfinite(ALEX_2CDE)
print('Number of bursts (removing NaNs/Infs):', valid.sum())
g = sns.JointGrid(x=ds1.S[0][valid], y=ALEX_2CDE[valid], ratio=3)
g = g.plot_joint(plt.hexbin, **{'cmap': 'Spectral_r', 'mincnt': 1, 'gridsize': 40})
_ = g.ax_marg_x.hist(ds1.S[0][valid], bins=np.arange(0, 1.2, 0.0333), **hist_kws)
_ = g.ax_marg_y.hist(ALEX_2CDE[valid], bins=40, orientation="horizontal", **hist_kws)
g.ax_joint.set_xlabel('S', fontsize=16)
g.ax_joint.set_ylabel('ALEX-2CDE', fontsize=16)
g.ax_joint.text(0.95, 0.95, '# Bursts: %d' % valid.sum(),
va='top', ha='right', transform=g.ax_joint.transAxes);
masks = [valid * (ALEX_2CDE < 88) * (ds1.S[0] > 0.9)]
ds2 = ds1.select_bursts_mask_apply(masks)
alex_jointplot(ds2, vmax_fret=False)
"""
Explanation: And some final plots of ALEX-2CDE:
End of explanation
"""
|
robertclf/FAFT
|
FAFT_64-points_R2C/nbFAFT128_2D.ipynb
|
bsd-3-clause
|
import numpy as np
import ctypes
from ctypes import *
import pycuda.gpuarray as gpuarray
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import math
#To put images inside the notebook
%matplotlib inline
"""
Explanation: 2D Fourier Transform
with an extra gpu array for the 33rd complex values
End of explanation
"""
gridDIM = 64
size = gridDIM*gridDIM
axes0 = 0
axes1 = 1
makeC2C = 0
makeR2C = 1
makeC2R = 1
axesSplit_0 = 0
axesSplit_1 = 1
m = size
segment_axes0 = 0
segment_axes1 = 0
DIR_BASE = "/home/robert/Documents/new1/FFT/code/"
# FAFT
_faft128_2D = ctypes.cdll.LoadLibrary( DIR_BASE+'FAFT128_2D_R2C.so' )
_faft128_2D.FAFT128_2D_R2C.restype = int
_faft128_2D.FAFT128_2D_R2C.argtypes = [ctypes.c_void_p, ctypes.c_void_p,
ctypes.c_float, ctypes.c_float, ctypes.c_int,
ctypes.c_int, ctypes.c_int, ctypes.c_int]
cuda_faft = _faft128_2D.FAFT128_2D_R2C
# Inv FAFT
_ifaft128_2D = ctypes.cdll.LoadLibrary( DIR_BASE+'IFAFT128_2D_C2R.so' )
_ifaft128_2D.IFAFT128_2D_C2R.restype = int
_ifaft128_2D.IFAFT128_2D_C2R.argtypes = [ctypes.c_void_p, ctypes.c_void_p,
ctypes.c_float, ctypes.c_float, ctypes.c_int,
ctypes.c_int, ctypes.c_int, ctypes.c_int]
cuda_ifaft = _ifaft128_2D.IFAFT128_2D_C2R
"""
Explanation: Loading FFT routines
End of explanation
"""
def Gaussian(x,sigma):
return np.exp( - x**2/sigma**2/2. )/(sigma*np.sqrt( 2*np.pi ))
def fftGaussian(p,sigma):
return np.exp( - p**2*sigma**2/2. )
# Gaussian parameters
mu = 0
sigma = 1.
# Grid parameters
x_amplitude = 5.
p_amplitude = 6. # With the traditional method p amplitude is fixed to: 2 * np.pi /( 2*x_amplitude )
dx = 2*x_amplitude/float(gridDIM) # This is dx in Bailey's paper
dp = 2*p_amplitude/float(gridDIM) # This is gamma in Bailey's paper
delta = dx*dp/(2*np.pi)
x_range = np.linspace( -x_amplitude, x_amplitude-dx, gridDIM)
p = np.linspace( -p_amplitude, p_amplitude-dp, gridDIM)
x = x_range[ np.newaxis, : ]
y = x_range[ :, np.newaxis ]
f = Gaussian(x,sigma)*Gaussian(y,sigma)
plt.imshow( f, extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )
axis_font = {'size':'24'}
plt.text( 0., 5.1, '$W$' , **axis_font)
plt.colorbar()
#plt.ylim(0,0.44)
print ' Amplitude x = ',x_amplitude
print ' Amplitude p = ',p_amplitude
print ' '
print 'sigma = ', sigma
print 'n = ', x.size
print 'dx = ', dx
print 'dp = ', dp
print ' standard fft dp = ',2 * np.pi /( 2*x_amplitude ) , ' '
print ' '
print 'delta = ', delta
print ' '
print 'The Gaussian extends to the numerical error in single precision:'
print ' min = ', np.min(f)
"""
Explanation: Initializing Data
Gaussian
End of explanation
"""
f33 = np.zeros( [1 ,64], dtype = np.complex64 )
# One gpu array.
f_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) )
f33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) )
"""
Explanation: $W$ TRANSFORM FROM AXES-0
After the transform, f_gpu[:32, :] contains the real values and f_gpu[32:, :] contains the imaginary values. f33_gpu contains the 33rd complex values
End of explanation
"""
# Executing FFT
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeR2C, axesSplit_0 )
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeC2C, axesSplit_0 )
plt.imshow( np.append( f_gpu.get()[:32, :], f33_gpu.get().real, axis=0 )/float(np.sqrt(size)),
extent=[-p_amplitude , p_amplitude-dp, 0, p_amplitude-dp] )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( 0., 6.2, '$Re \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , p_amplitude-dp)
plt.ylim(0 , p_amplitude)
plt.imshow( np.append( f_gpu.get()[32:, :], f33_gpu.get().imag, axis=0 )/float(np.sqrt(size)),
extent=[-p_amplitude , p_amplitude-dp, 0, p_amplitude-dp] )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( 0., 6.2, '$Im \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , p_amplitude-dp)
plt.ylim(0 , p_amplitude)
"""
Explanation: Forward Transform
End of explanation
"""
plt.figure(figsize=(10,10))
plt.plot( p[:33], np.append( f_gpu.get()[:32, 32], f33_gpu.get()[:, 32].real, axis=0 )/(f_gpu.get().size) ,
'o', label='numerical')
plt.plot( p[:33], 4*fftGaussian(p,sigma)[:33] , label = 'analytical')
plt.legend(loc='upper left')
#plt.ylim(0,1.1)
plt.ylabel('$e^{- \\frac{\\sigma x^2}{2} }$',**axis_font)
plt.xlabel('$p$',**axis_font)
plt.figure(figsize=(10,10))
plt.semilogy( p[:33], np.append( f_gpu.get()[:32, 32], f33_gpu.get()[:, 32].real, axis=0 )/ (f_gpu.get().size),
'o', label='numerical')
plt.semilogy( p[:33], 4*fftGaussian(p,sigma)[:33] , label = 'analytical')
plt.legend(loc='upper left')
plt.ylim(0,1.1)
plt.ylabel('$e^{- \\frac{\\sigma x^2}{2} }$',**axis_font)
plt.xlabel('$p$',**axis_font)
plt.plot( p[:33], np.append(f_gpu.get()[32:, 32], f33_gpu.get()[:, 32].imag, axis=0)/(f_gpu.get().size), 'o',
label='numerical imag error')
plt.legend(loc='upper left')
#plt.ylim(0,1.1)
plt.xlabel('$p$',**axis_font)
"""
Explanation: Central Section $p_x =0$
End of explanation
"""
# Executing iFFT
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2C, axesSplit_0 )
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2R, axesSplit_0 )
plt.imshow( f_gpu.get()/(float(size*size)) ,
extent=[-x_amplitude , x_amplitude-dx, -x_amplitude, x_amplitude-dx] )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -1, 6.2, '$W$', **axis_font )
plt.xlim(-x_amplitude , x_amplitude-dx)
plt.ylim(-x_amplitude , x_amplitude-dx)
plt.figure(figsize=(10,10))
plt.plot( y, f_gpu.get()[:, 32]/(f_gpu.get().size) /(43.8*f_gpu.get().size),
'o', label='numerical')
plt.plot( y, 4*Gaussian(y, sigma) , label = 'analytical')
plt.legend(loc='upper left')
#plt.ylim(0,1.1)
plt.ylabel('$e^{- \\frac{\\sigma x^2}{2} }$',**axis_font)
plt.xlabel('$p$',**axis_font)
# LOG PLOT
plt.figure(figsize=(10,10))
plt.semilogy( y, f_gpu.get()[:, 32]/(f_gpu.get().size) /(450*f_gpu.get().size) , 'o', label='numerical')
plt.semilogy( y, f[:, 32] , '.', label='original')
plt.semilogy( y, Gaussian(y, sigma)/2.5 , label = 'analytical')
plt.legend(loc='upper left')
plt.ylim(1e-7,0.5)
plt.ylabel('$e^{- \\frac{x^2}{2 \\sigma^2 } }$',**axis_font)
plt.xlabel('$p$',**axis_font)
"""
Explanation: Inverse Transform
End of explanation
"""
f33 = np.zeros( [64, 1], dtype = np.complex64 )
# One gpu array.
f_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) )
f33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) )
"""
Explanation: $W$ TRANSFORM FROM AXES-1
After the transfom, f_gpu[:, :32] contains real values and f_gpu[:, 32:] contains imaginary values. f33_gpu contains the 33th. complex values
End of explanation
"""
# Executing FFT
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeR2C, axesSplit_1 )
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeC2C, axesSplit_1 )
plt.imshow( np.append( f_gpu.get()[:, :32], f33_gpu.get().real, axis=1 )/float(np.sqrt(size)),
extent=[-p_amplitude , 0, -p_amplitude, p_amplitude-dp] )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -3.0, 6.2, '$Re \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , 0)
plt.ylim(-p_amplitude , p_amplitude-dp)
plt.imshow( np.append( f_gpu.get()[:, 32:], f33_gpu.get().imag, axis=1 )/float(np.sqrt(size)),
extent=[-p_amplitude , 0, -p_amplitude, p_amplitude-dp] )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -3.0, 6.2, '$Im \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , 0)
plt.ylim(-p_amplitude , p_amplitude-dp)
"""
Explanation: Forward Transform
End of explanation
"""
plt.figure(figsize=(10,10))
plt.plot( p[:33], np.append( f_gpu.get()[32, :32], f33_gpu.get()[32, :].real, axis=0 )/(f_gpu.get().size) ,
'o', label='numerical')
plt.plot( p[:33], 4*fftGaussian(p,sigma)[:33] , label = 'analytical')
plt.legend(loc='upper left')
#plt.ylim(0,1.1)
plt.ylabel('$e^{- \\frac{\\sigma x^2}{2} }$',**axis_font)
plt.xlabel('$p$',**axis_font)
plt.figure(figsize=(10,10))
plt.semilogy( p[:33], np.append( f_gpu.get()[32, :32], f33_gpu.get()[32, :].real, axis=0 )/ (f_gpu.get().size),
'o', label='numerical')
plt.semilogy( p[:33], 4*fftGaussian(p,sigma)[:33] , label = 'analytical')
plt.legend(loc='upper left')
plt.ylim(0,1.1)
plt.ylabel('$e^{- \\frac{\\sigma x^2}{2} }$',**axis_font)
plt.xlabel('$p$',**axis_font)
plt.plot( p[:33], np.append(f_gpu.get()[32, 32:], f33_gpu.get()[32, :].imag, axis=0)/(f_gpu.get().size), 'o',
label='numerical imag error')
plt.legend(loc='upper left')
#plt.ylim(0,1.1)
plt.xlabel('$p$',**axis_font)
"""
Explanation: Central Section $p_y =0$
End of explanation
"""
# Executing iFFT
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2C, axesSplit_1 )
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2R, axesSplit_1 )
plt.imshow( f_gpu.get()/float(size) ,
extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -1, 5.2, '$W$', **axis_font )
plt.xlim(-x_amplitude , x_amplitude-dx)
plt.ylim(-x_amplitude , x_amplitude-dx)
plt.figure(figsize=(10,10))
plt.plot( y, f_gpu.get()[32, :]/(f_gpu.get().size) /(43.8*f_gpu.get().size),
'o', label='numerical')
plt.plot( y, 4*Gaussian(y, sigma) , label = 'analytical')
plt.legend(loc='upper left')
#plt.ylim(0,1.1)
plt.ylabel('$e^{- \\frac{\\sigma x^2}{2} }$',**axis_font)
plt.xlabel('$p$',**axis_font)
# LOG PLOT
plt.figure(figsize=(10,10))
plt.semilogy( y, f_gpu.get()[32, :]/(f_gpu.get().size) /(450*f_gpu.get().size) , 'o', label='numerical')
plt.semilogy( y, f[32, :] , '.', label='original')
plt.semilogy( y, Gaussian(y, sigma)/2.5 , label = 'analytical')
plt.legend(loc='upper left')
plt.ylim(1e-7,0.5)
plt.ylabel('$e^{- \\frac{x^2}{2 \\sigma^2 } }$',**axis_font)
plt.xlabel('$p$',**axis_font)
"""
Explanation: Inverse Transform
End of explanation
"""
|
turi-code/tutorials
|
notebooks/datas_messy_clean_it.ipynb
|
apache-2.0
|
import os
import graphlab as gl
"""
Explanation: <h1>Data's messy - clean it up!</h1>
Data cleaning is a critical process for improving data quality and ultimately the accuracy of machine learning model output. In this notebook we show how the GraphLab Create Data Matching toolkit can be used to get your data shiny clean.
Auto-tagging Stack Overflow questions and answers
Record linkage of house listings
Composite distances and choosing neighborhood parameters
Note: this notebook requires GraphLab Create 1.6 or higher.
<h2>Auto-tagging Stack Overflow questions *and* answers</h2>
End of explanation
"""
if os.path.exists('statistics_topics.csv'):
stats_topics = gl.SFrame.read_csv('statistics_topics.csv', header=False)
else:
stats_topics = gl.SFrame.read_csv('https://static.turi.com/datasets//statistics_topics.csv',
header=False)
stats_topics.save('statistics_topics', format='csv')
stats_topics = stats_topics.rename({'X1': 'tag'})
stats_topics.tail(10)
"""
Explanation: In the first section of this notebook we autotag posts from CrossValidated, the statistics section of the Stack Exchange network. Questions posted on this forum are typically annotated with tags by the authors but responses are not, making it more difficult to quickly scan responses for the most useful information. The raw data is available from the Stack Exchange data dump. For convenience we provide a preprocessed subsample (7.8MB) in the public Turi datasets bucket on Amazon S3, which is downloaded and saved locally with the first code snippet below.
For reference tags we use a lightly-curated list of statistics topics from Wikipedia. The preprocessed list is also available in the datasets S3 bucket.
A more extensive explanation of the code can be found in the Autotagger chapter of the User Guide.
<h3>Read in the metadata</h3>
The data is also saved locally to avoid repeated downloads.
End of explanation
"""
model = gl.autotagger.create(stats_topics)
model.list_fields()
model.tag?
"""
Explanation: <h3>Create the autotagger model</h3>
End of explanation
"""
if os.path.exists('stats_overflow_clean'):
posts = gl.SFrame('stats_overflow_clean')
else:
posts = gl.SFrame('https://static.turi.com/datasets/stats_overflow_clean')
posts.save('stats_overflow_clean')
print "Number of posts:", posts.num_rows()
posts[['Body', 'Title', 'PostTypeId', 'Tags']].tail(5)
posts['doc'] = posts['Title'] + ' ' + posts['Body']
"""
Explanation: <h3>Read in the document data</h3>
End of explanation
"""
tags = model.tag(posts, query_name='doc', k=5, similarity_threshold=0.1)
tags.print_rows(10, max_row_width=110, max_column_width=40)
"""
Explanation: <h3>Query the model</h3>
There are two key parameters when querying the model: k, which indicates the maximum number of tags to return for each query, and similarity_threshold, which indicates the maximum distance from a query document to the tag. The most typical usage is to get preliminary results by setting k to 5 and leaving similarity_threshold unspecified. Use the similarity_threshold parameter to tune the final results for optimal precision and recall.
End of explanation
"""
col_types = {'street_number': str, 'postcode': str}
address_features = ['street_number', 'address_1', 'suburb', 'state', 'postcode']
if os.path.exists('febrl_F_org_5000.csv'):
post_address = gl.SFrame.read_csv('febrl_F_org_5000.csv', column_type_hints=col_types)
else:
url = 'https://static.turi.com/datasets/febrl_synthetic/febrl_F_org_5000.csv'
post_address = gl.SFrame.read_csv(url, column_type_hints=col_types)
post_address.save('febrl_F_org_5000.csv')
post_address = post_address[address_features]
post_address.print_rows(5)
"""
Explanation: <h2>Record linkage of house listings</h2>
To illustrate usage of the record linker tool, we use synthetic address data generated by and packaged with the FEBRL program, another data matching tool. For the sake of illustration suppose the dataset called "post_address" is a relatively error free set of reference addresses (say, from the Australian postal service). The dataset called "agent_listings" contains data with the same schema, but it has many errors; imagine this is data created by real estate agencies.
<h3>Read in the reference data</h3>
As with the autotagger data, the datasets downloaded in this section are saved locally for repeated usage. From prior experience, we know only a handful of features are useful for this illustration, and they are enumerated in the address_features list.
End of explanation
"""
model = gl.record_linker.create(post_address, distance='jaccard')
model.summary()
model.list_fields()
"""
Explanation: <h3>Create the record linker model</h3>
End of explanation
"""
if os.path.exists('febrl_F_dup_5000.csv'):
agent_listings = gl.SFrame.read_csv('febrl_F_dup_5000.csv',
column_type_hints=col_types)
else:
url = 'https://static.turi.com/datasets/febrl_synthetic/febrl_F_dup_5000.csv'
agent_listings = gl.SFrame.read_csv(url, column_type_hints=col_types)
agent_listings.save('febrl_F_dup_5000.csv')
agent_listings = agent_listings[address_features]
agent_listings.print_rows(5)
"""
Explanation: <h3>Read in the query data</h3>
End of explanation
"""
model.link?
matches = model.link(agent_listings, k=None, radius=0.5)
matches.head(5)
"""
Explanation: <h3>Query the model</h3>
Results are obtained with the model's link method, which matches a new set of queries to the reference data passed in above to the create function. For our first pass, we set the radius parameter to 0.5, which means that matches must share at least roughly 50% of the information contained in both the post_address and agent_listings records.
End of explanation
"""
print agent_listings[1]
print post_address[2438]
"""
Explanation: <h3>Evaluate</h3>
The results mean that the address in query row 1 matches the address in refs row number 2438, although the Jaccard distance is relatively high at 0.42. Inspecting these records manually, we see this is in fact not a good match.
End of explanation
"""
print agent_listings[3]
print post_address[2947]
"""
Explanation: On the other hand, the match between query number 3 and reference number 2947 has a distance of 0.045, indicating these two records are far more similar. By pulling these records we confirm this to be the case.
End of explanation
"""
address_dist = [
[['street_number'], 'levenshtein', 1],
[address_features, 'jaccard', 1]
]
model2 = gl.record_linker.create(post_address, distance=address_dist)
model2.summary()
model2['distance']
"""
Explanation: Unfortunately, these records are still not a true match because the street numbers are different (in a way that is not likely to be a typo). Ideally we would like street number differences to be weighted heavily in our distance function, while still allowing for typos and misspellings in the street and city names. To do this we can build a composite distance function.
<h2>Composite distances and choosing neighborhood parameters</h2>
<h3>Create a composite distance and a new model</h3>
In this case we'll use Levenshtein distance to measure the dissimilarity in street number, in addition to our existing Jaccard distance measured over all of the address features. Both of these components will be given equal weight. In the summary of the created model, we see the number of distance components is now two---Levenshtein and Jaccard distances---instead of one in our first model.
End of explanation
"""
pre_match = model2.link(agent_listings, k=10, verbose=False)
pre_match['distance'].show()
from IPython.display import Image
Image(url='https://static.turi.com/datasets/house_link_distances.png')
"""
Explanation: <h3>Query the model for a large number of neighbors</h3>
One tricky aspect of using a composite distance is figuring out the best threshold for match quality. A simple way to do this is to first return a relatively high number of matches for each query, then look at the distribution of distances for good thresholds using the radius parameter. For this notebook, I've captured a screenshot of the canvas output and display it below.
End of explanation
"""
matches = model2.link(agent_listings, k=None, radius=0.64, verbose=False)
matches.head(5)
"""
Explanation: <h3>Calibrate the parameters for results quality</h3>
In this distribution we see a stark jump at 0.636 in the distribution of distances for the 10-nearest neighbors of every query (remember this is no longer simple Jaccard distance, but a sum of Jaccard and Levenshtein distances over different sets of features). In our final pass, we set the k parameter to None, but enforce this distance threshold with the radius parameter.
End of explanation
"""
print agent_listings[6]
print post_address[1266]
"""
Explanation: There are far fewer results now, but they are much more likely to be true matches than with our first model, even while allowing for typos in many of the address fields.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nerc/cmip6/models/ukesm1-0-mmh/land.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'ukesm1-0-mmh', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NERC
Source ID: UKESM1-0-MMH
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculation*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
param411singh/inf1340-2015-notebooks
|
Week 2.ipynb
|
mit
|
print("Hello world")
"""
Explanation: Preamble
This software is iPython Notebook. From the command line, change to the directory where your Notebooks (.ipynb) are located and type
ipython notebook
A Notebook contains "cells". Edit a cell by double clicking on it.
Some of the cells, like this one, contain text. The text is formatted using a language called "Markdown." It's similar to what's used in Wikipedia. You can find out more by going to the Help menu above.
Other cells, like the one below, contain Python code and can be run.
Both types of cells are "run" using the "play" button, above.
End of explanation
"""
#Input the hours worked
hours_worked = float(input("Enter the hours worked: "))
#Input the hourly pay
pay_rate = float(input("Enter the hourly pay rate: "))
#Calculate the gross pay as hours worked multiplied by pay rate
gross_pay = hours_worked * pay_rate
#Display the gross pay
print("Gross pay:", gross_pay)
"""
Explanation: Rule 3
The best way to improve your programming and problem skills is to practice.
End of explanation
"""
def first_function():
    print("\"Beware the Jabberwock, my son!")
    print("The jaws that bite, the claws that catch!")
    print("Beware the Jubjub bird, and shun")
    print("The frumious Bandersnatch!\"")
first_function()
"""
Explanation: Input, Processing, and Output
Typically, a computer performs a three-step process
Retrieve input
Examples: keyboard input, text file, database
Perform some process on the input
Examples: mathematical calculation, summarize
Produce output
Example: a number, a report, write to file
We'll talk about output first, as that's the most straightforward and we did this in tutorial last week.
Output
The print function displays output on the screen.
A function is a collection of lines of code that is referred to by the function name
"Calling" a function causes all the lines of code inside it to run
Here is an example of a function. Don't worry about the details now. We'll be talking about them more in a couple of weeks.
End of explanation
"""
"This is a string literal"
'This is also a string literal'
"""He took his vorpal sword in hand:
Long time the manxome foe he sought --
So rested he by the Tumtum tree,
And stood awhile in thought.
"""
"""
Explanation: The print function outputs strings and variables. So let's look at those now.
Strings
A string is a sequence of characters that is used as data
A string literal is a string that appears in the actual code of a program
I usually say just "string"
String Delimiters
Must be enclosed in single (‘) or double (“) quotations marks
Can also be enclosed in triple quotes (‘’’ or “””)
Use with strings that span multiple lines, or contain both single and double quotes
Recommended style for Python is double quotation marks
End of explanation
"""
# Varible assignment
# Let age be equal to 25
# Let 25 be assigned to age
age = 25
# Comparison
# Is age equal to 25?
age == 25
# Evaluates to True or False
"""
Explanation: Things to Notice
The above code doesn't do anything. We've just declared some strings.
The triple quotation string helps with Rule 2.
Rule 2
A program is a human-readable essay on problem solving that also happens to execute on a computer.
Variables
A variable is a container that has a label and can hold data.
An assignment statement has the following general form
variable = expression
The left hand side creates the container and gives it a label.
The equal sign (=) puts the value on the right hand side into the container.
End of explanation
"""
"age" = 25
"""
Explanation: A variable name cannot be enclosed in quotation marks.
End of explanation
"""
age = 25
print(age)
"""
Explanation: Can be passed into a function.
End of explanation
"""
print(temperature)   # NameError: 'temperature' has not been assigned a value yet
temperature = 74.5
print(tmperature)    # NameError: 'tmperature' is a misspelling and was never assigned
# This program demonstrates a variable
room = 503
print('I am staying in room number')
print(room)
room = 503
# Using print () is proper Python 3 Syntax
# It's acceptable in Python 2
print('I am staying in room number', room)
# Usual way in Python 2 is without ()
# This is not legal in Python 3
print 'I am staying in room number', room
print('I am staying in room number ' + str(room))
# This gives a type error
# print('I am staying in room number ' + room)
print('I am staying in room number %s' % room)  # keep the % inside the parentheses so this also works in Python 3
print('I am staying in room number'),
# In Python 3
# print('I am staying in room number', end="")
print(room)
"""
Explanation: A variable can be used only after a value has been assigned.
End of explanation
"""
# This is a Python comment
# It's actually pretty similar to a hashtag
"""
Explanation: Comments
Notes of explanation within a program
Begin with a # character
Ignored by Python interpreter
Intended for people
Meant to help with Rule 2
End of explanation
"""
# Are these variable names legal or illegal?
units_per_day
daysOfWeek
3dGraph
June1997
Mixture%3
"""
Explanation: In general, comments should provide rationale
Design decisions or explain why code is a particular way
Should NOT repeat code
Increases work when both must be kept in sync
Variable Naming Rules
Cannot be a Python keyword
Cannot contain spaces
First character must be a letter or an underscore
Subsequent characters may use letters, digits, or underscores
Case sensitive
Variable name should reflect its use
Recommended style for Python is to use nouns separated by underscores
- called snake case
End of explanation
"""
name = raw_input('What is your name? ') # For Python 3, use input()
print(name)
"""
Explanation: Rule 4
A foolish consistency is the hobgoblin of little minds.
Input
There are a number of ways to specify input to a program.
As an argument on the command line
In a file
From the keyboard
Let's start with the last one.
variable = input(prompt) or variable = raw_input(prompt)
input() is a function
prompt is the string to be displayed on the screen
variable holds the data input from the keyboard
End of explanation
"""
# A program that repeats a user-inputed string twice
jacob_said = raw_input("What did you say?")
jacob_said = jacob_said + " " + jacob_said
print(jacob_said)
"""
Explanation: Exercise: Name and Age
Write a program that asks the user for their first name, last name, and age. Print out the names on one line. Print out the age on the next line. Include labels on your output.
Processing
This middle step involves manipulating the input in some way before outputting.
Exercise: Jacob Two-Two
Write a program that takes a string as input and prints it out twice.
End of explanation
"""
#Input the hours worked
#Input the hourly pay
#Calculate the gross pay as hours worked multiplied by pay rate
#Display the gross pay
"""
Explanation: A lot of computer processing involves math, because computers are good with numbers. They're less good with strings.
Math Operators
Addition (+)
Subtraction (-)
Multiplication (*)
Division (/)
Integer division (//)
Remainder or modulo (%)
Exponent (**)
Order of Operations Matter
BEDMAS
Brackets
Exponents
Division and Multiplication
Addition and Subtraction
What do the following expressions evaluate to? (Worked answers are sketched after this cell.)
6 * 3 + 7 * 4
5 + 3 / 4
5 - 2 * 3 ** 4
Exercise: Calculate Pay
Calculate and display the gross pay for an hourly paid employee
End of explanation
"""
|
scottquiring/Udacity_Deeplearning
|
image-classification/dlnd_image_classification.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 2
sample_id = 18
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
from skimage import color
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
# def to_hsi(x):
# r = x[:,:,:,0]
# g = x[:,:,:,1]
# b = x[:,:,:,2]
# theta = np.acos((0.5 * ((r-g) + (r-b))) / (((r-g)**2) + (r-b *)))
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
x = x / 255.0
for i in range(x.shape[0]):
x[i,:,:,:] = color.rgb2hsv(x[i,:,:,:])
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
m = (x.shape[0] if hasattr(x, 'shape') else len(x))
rv = np.zeros((m, 10))
rv[range(m),x] = 1
return rv
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
%%time
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
tensor_shape = tuple([None] + list(image_shape))
return tf.placeholder(tf.float32, shape=tensor_shape, name='x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
return tf.placeholder(tf.float32, shape=(None,n_classes), name='y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
return tf.placeholder(tf.float32, name='keep_prob')
def neural_net_learn_rate_input():
"""
    Return a Tensor for the learning rate
    : return: Tensor for learning rate input.
"""
return tf.placeholder(tf.float32, name='learn_rate')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
conv_ksize = list(conv_ksize)
conv_strides = list(conv_strides)
pool_ksize = list(pool_ksize)
pool_strides = list(pool_strides)
conv_num_inputs = int(x_tensor.shape[3])
conv_weights = tf.Variable(tf.truncated_normal(
conv_ksize + [conv_num_inputs,conv_num_outputs], stddev=0.01))
conv_strides = [1] + conv_strides + [1]
rv = tf.nn.conv2d(x_tensor, conv_weights, conv_strides, 'SAME')
conv_bias = tf.Variable(tf.zeros(rv.shape[1:]))
rv = tf.nn.relu(rv + conv_bias)
pool_ksize = [1] + pool_ksize + [1]
pool_strides = [1] + pool_strides + [1]
rv = tf.nn.max_pool(rv, pool_ksize, pool_strides, 'SAME')
return rv
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
return tf.contrib.layers.flatten(x_tensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
dimensions = [int(x_tensor.shape[1]), num_outputs]
weights = tf.Variable(tf.truncated_normal(dimensions, stddev=0.01))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.nn.relu(tf.matmul(x_tensor, weights) + bias)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
dimensions = [int(x_tensor.shape[1]), num_outputs]
weights = tf.Variable(tf.truncated_normal(dimensions, stddev=0.01))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.matmul(x_tensor, weights) + bias
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
#with tf.device('/gpu:0'):
x_tensor = x
x_tensor = conv2d_maxpool(x_tensor, 128, [8,8], [1,1], [2,2], [1,1])
x_tensor = conv2d_maxpool(x_tensor, 128, [4,4], [2,2], [2,2], [2,2])
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
#with tf.device('/gpu:1'):
x_tensor = conv2d_maxpool(x_tensor, 64, [4,4], [1,1], [2,2], [2,2])
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_tensor = flatten(x_tensor)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x_tensor = tf.nn.dropout(fully_conn(x_tensor, 512), keep_prob)
x_tensor = fully_conn(x_tensor, 256)
x_tensor = tf.nn.dropout(fully_conn(x_tensor, 256), keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
x_tensor = output(x_tensor, 10)
# TODO: return output
return x_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
# x = (neural_net_image_input((32, 32, 3)), neural_net_image_input((32, 32, 3)))
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
learn_rate = neural_net_learn_rate_input()
# Model
logits = []
with tf.variable_scope(tf.get_variable_scope()):
logits.append(conv_net(x, keep_prob))
logits = tf.concat(logits, 0)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learn_rate).minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
# def split(t,n=2):
# if n == 1: return (t,)
# dimSize = int(t.shape[0])
# partSize = dimSize/n
# maxIdx = int(partSize)
# rv = [t[:maxIdx,...]]
# for i in range(n-2):
# myMin = int(maxIdx)
# nextMax = min(dimSize,float(maxIdx)+partSize)
# myMax = int(nextMax)
# rv.append(t[myMin:myMax,...])
# maxIdx = nextMax
# rv.append(t[int(maxIdx):,...])
# return tuple(rv)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability,
feature_batch, label_batch, epoch=0):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
if epoch < 125:
learning_rate=0.001
elif epoch < 175:
learning_rate=0.0003
elif epoch < 225:
learning_rate=0.0001
else:
learning_rate=0.00003
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch,
learn_rate: learning_rate,
keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: x = np.zeros((100,2,1,5))
xs = split(x,6)
print([x.shape for x in xs])
End of explanation
"""
import datetime
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
train_cost, train_acc = session.run((cost, accuracy),
feed_dict={x: feature_batch,
y: label_batch,
keep_prob: 1.0})
valid_cost, valid_acc = session.run((cost, accuracy),
feed_dict={x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print("Training loss: {0:.02}, accuracy: {1:.02}".format(train_cost, train_acc))
print(datetime.datetime.now(),"Validation loss: {0:.02}, accuracy: {1:.02}".format(valid_cost, valid_acc))
return valid_acc
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 250
batch_size = 256
keep_probability = 0.4
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
fig, axis = plt.subplots(figsize=(13,13))
axis.plot(full_val_accuracy)  # assumes a previous training run populated full_val_accuracy (see the training cell below)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
full_val_accuracy = []
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels, epoch)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
full_val_accuracy.append(print_stats(sess, batch_features, batch_labels, cost, accuracy))
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
fig, axis = plt.subplots(figsize=(13,13))
axis.plot(np.array(range(len(full_val_accuracy)))/5, full_val_accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# file_writer = tf.summary.FileWriter('tensorboard', sess.graph)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch,
loaded_y: test_label_batch,
loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
development/examples/spot_transit.ipynb
|
gpl-3.0
|
#!pip install -I "phoebe>=2.4,<2.5"
"""
Explanation: Spot Transit
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
import numpy as np
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
"""
b.flip_constraint('mass@secondary', solve_for='q')
b.set_value(qualifier='mass', component='secondary', value=0.2)
b.set_value(qualifier='requiv', component='secondary', value=0.2)
b.set_value(qualifier='teff', component='secondary', value=300)
"""
Explanation: Let's set reasonable (although not necessarily physical) values for the secondary component.
End of explanation
"""
b.add_spot(component='primary',
relteff=0.90,
long=0,
colat=90,
radius=20,
feature='spot01')
"""
Explanation: We'll add a spot to the primary component.
End of explanation
"""
b.add_dataset('lc', compute_times=phoebe.linspace(-0.1, 0.1, 201))
"""
Explanation: Adding Datasets
End of explanation
"""
b.set_value(qualifier='atm', component='secondary', value='blackbody')
b.set_value(qualifier='ld_mode', component='secondary', value='manual')
anim_times = phoebe.linspace(-0.1, 0.1, 101)
b.add_dataset('mesh', compute_times=anim_times, coordinates='uvw', columns='teffs')
"""
Explanation: Because we have such a cool transiting object, we'll have to use blackbody atmospheres and manually provide limb-darkening.
End of explanation
"""
b.run_compute(distortion_method='sphere', irrad_method='none')
"""
Explanation: Running Compute
End of explanation
"""
print(np.min(b.get_value('teffs', time=0.0, component='primary')), np.max(b.get_value('teffs', time=0.0, component='primary')))
"""
Explanation: Plotting
End of explanation
"""
afig, mplfig = b.plot(time=0.0,
fc='teffs', fcmap='plasma', fclim=(5000, 6000),
ec='face',
tight_layout=True,
show=True)
"""
Explanation: Let's go through these options (see also the plot API docs):
* time: make the plot at this single time
* fc: (will be ignored by everything but the mesh): set the facecolor to the teffs column.
* fcmap: use 'plasma' colormap instead of the default to avoid whites.
* fclim: set the limits on facecolor so that the much cooler transiting object doesn't drive the entire range.
* ec: disable drawing the edges of the triangles in a separate color. We could also set this to 'none', but then we'd be able to "see-through" the triangle edges.
* tight_layout: use matplotlib's tight layout to ensure we have enough padding between axes to see the labels.
End of explanation
"""
afig, mplfig = b.plot(times=anim_times,
fc='teffs', fcmap='plasma', fclim=(5000, 6000),
ec='face',
consider_for_limits={'primary': True, 'secondary': False},
tight_layout=True, pad_aspect=False,
animate=True,
save='spot_transit.gif',
save_kwargs={'writer': 'imagemagick'})
"""
Explanation: Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions:
times: pass our array of times that we want the animation to loop over.
consider_for_limits: for the mesh panel, keep the primary star centered and allow the transiting object to move in and out of the frame.
pad_aspect: pad_aspect doesn't work with animations, so we'll disable to avoid the warning messages.
animate: self-explanatory.
save: we could use show=True, but that doesn't always play nice with jupyter notebooks
save_kwargs: may need to change these for your setup, to create a gif, passing {'writer': 'imagemagick'} is often useful.
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.0/tutorials/distance.ipynb
|
gpl-3.0
|
!pip install -I "phoebe>=2.0,<2.1"
"""
Explanation: Distance
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
print b.get_parameter(qualifier='distance', context='system')
print b.get_parameter(qualifier='t0', context='system')
"""
Explanation: Relevant Parameters
The 'distance' parameter lives in the 'system' context and is simply the distance between the center of the coordinate system and the observer (at t0)
End of explanation
"""
b.add_dataset('orb', times=np.linspace(0,3,101), dataset='orb01')
b.set_value('distance', 1.0)
b.run_compute(model='dist1')
b.set_value('distance', 2.0)
b.run_compute(model='dist2')
fig = plt.figure()
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['orb01'].plot(model='dist1', ax=ax1)
axs, artists = b['orb01'].plot(model='dist2', ax=ax2)
"""
Explanation: Influence on Orbits (Positions)
The distance has absolutely NO effect on the synthetic orbit as the origin of the orbit's coordinate system is such that the barycenter of the system is at 0,0,0 at t0.
To demonstrate this, let's create an 'orb' dataset and compute models at both 1 m and 2 m and then plot the resulting synthetic models.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,3,101), dataset='lc01')
"""
Explanation: Influence on Light Curves (Fluxes)
Fluxes are, however, affected by distance exactly as you'd expect: they scale as the inverse of the distance squared.
To illustrate this, let's add an 'lc' dataset and compute synthetic fluxes at 1 and 2 m.
End of explanation
"""
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.,0.])
b.set_value('distance', 1.0)
b.run_compute(model='dist1')
b.set_value('distance', 2.0)
b.run_compute(model='dist2')
"""
Explanation: To make things easier to compare, let's disable limb darkening
End of explanation
"""
fig = plt.figure()
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['lc01'].plot(model='dist1', ax=ax1)
axs, artists = b['lc01'].plot(model='dist2', ax=ax2)
"""
Explanation: Since we doubled the distance from 1 to 2 m, we expect the entire light curve at 2 m to be divided by 4 (note the y-scales on the plots below).
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01')
b.set_value('distance', 1.0)
b.run_compute(model='dist1')
b.set_value('distance', 2.0)
b.run_compute(model='dist2')
print "dist1 abs_intensities: ", b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='dist1').mean()
print "dist2 abs_intensities: ", b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='dist2').mean()
print "dist1 intensities: ", b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='dist1').mean()
print "dist2 intensities: ", b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='dist2').mean()
"""
Explanation: Note that 'pblum' is defined such that a (spherical, non-eclipsed, non-limb darkened) star with a pblum of 4pi will contribute a flux of 1.0 at 1.0 m (the default distance).
For more information, see the pblum tutorial
Influence on Meshes (Intensities)
Distance does not affect the intensities stored in the mesh (including those in relative units). In other words, like third light, distance only scales the fluxes.
NOTE: this is different than pblums which DO affect the relative intensities. Again, see the pblum tutorial for more details.
To see this we can run both of our distances again and look at the values of the intensities in the mesh.
End of explanation
"""
|
AndreySheka/dl_ekb
|
hw5/Seminar5.ipynb
|
mit
|
from __future__ import print_function
from sys import version_info
import matplotlib.pyplot as plt
import numpy as np
import os
import scipy
import theano
import theano.tensor as T
import lasagne
try:
import cPickle as pickle
except ImportError:
import pickle
%matplotlib inline
from scipy.misc import imread, imsave, imresize
from lasagne.utils import floatX
"""
Explanation: Week6
In this part, we'll load a pre-trained network and play with it.
End of explanation
"""
!wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg16.pkl -O weights.pkl
# copyright: see http://www.robots.ox.ac.uk/~vgg/research/very_deep/
from lasagne.layers import InputLayer
from lasagne.layers import DenseLayer
from lasagne.layers import NonlinearityLayer
from lasagne.layers import DropoutLayer
from lasagne.layers import Pool2DLayer as PoolLayer
from lasagne.layers import Conv2DLayer as ConvLayer
from lasagne.nonlinearities import softmax
def build_model():
<paste network architecture here>
return net
#classes' names are stored here
classes = pickle.load(open('classes.pkl', 'rb'))
#for example, 10th class is ostrich:
print(classes[9])
"""
Explanation: Model Zoo (4 pts)
Lasagne has a plethora of pre-trained networks in the model zoo
* Even more models within the community (neighbor repos, PRs, etc.)
We'll start by picking VGG16 and deploying it in our notebook.
Warning! The VGG16 network requires around 3GB of memory to predict even for a single-image batch. If you don't have that luxury, try binder or azure notebooks.
End of explanation
"""
MEAN_VALUES = np.array([104, 117, 123])
IMAGE_W = 224
def preprocess(img):
img = <convert RGB to BGR>
img = <substract mean>
#convert from [w,h,3 to 1,3,w,h]
img = np.transpose(img, (2, 0, 1))[None]
return floatX(img)
def deprocess(img):
img = img.reshape(img.shape[1:]).transpose((1, 2, 0))
for i in range(3):
img[:,:, i] += MEAN_VALUES[i]
return img[:, :, :: -1].astype(np.uint8)
img = (np.random.rand(IMAGE_W, IMAGE_W, 3) * 256).astype(np.uint8)
print(np.linalg.norm(deprocess(preprocess(img)) - img))
"""
Explanation: You have to implement two functions in the cell below.
The preprocess function should take an image with shape (w, h, 3) and transform it into a tensor with shape (1, 3, 224, 224). Without this transformation, our net won't be able to digest the input image.
Additionally, your preprocessing function has to rearrange the channels RGB -> BGR and subtract the mean values from every channel (one possible implementation is sketched just below this cell).
End of explanation
"""
net = build_model()
with open('weights.pkl', 'rb') as f:
if version_info.major == 2:
weights = pickle.load(f)
elif version_info.major == 3:
weights = pickle.load(f, encoding='latin1')
<load weights into the network>
input_image = T.tensor4('input')
output = lasagne.layers.get_output(net[<which layer>], input_image)
prob = theano.function([input_image], output)
"""
Explanation: If your implementation is correct, the number above will be small, because deprocess function is the inverse of preprocess function
Deploy the network
End of explanation
"""
img = imread('sample_images/albatross.jpg')
plt.imshow(img)
plt.show()
p = prob(preprocess(img))
labels = p.ravel().argsort()[-1:-6:-1]
print('top-5 classes are:')
for l in labels:
print('%3f\t%s' % (p.ravel()[l], classes[l].split(',')[0]))
"""
Explanation: Sanity check
Let's make sure our network actually works.
To do so, we'll feed it with some example images.
End of explanation
"""
!wget https://www.dropbox.com/s/d61lupw909hc785/dogs_vs_cats.train.zip?dl=1 -O data.zip
!unzip data.zip
#you may need to adjust paths in the next section, depending on your OS
"""
Explanation: Ouch!
Try running the network 2-3 times. If the output changes, we've probably done something wrong.
Figure out what the problem with the network is.
Hint: there are two such 'problematic' layers in vgg16. Both are near the end.
You can make network deterministic by giving it such flag in the lasagne.layers.get_output function above.
Fun opportunity
ImageNet does not contain any human classes, so if you feed the network with some human photo, it will most likely hallucinate something which is closest to your image.
Try feeding the network with something peculiar: your avatar, Donald Trump, Victor Lempitsky or anyone.
Grand-quest: Dogs Vs Cats (6 pts)
original competition
https://www.kaggle.com/c/dogs-vs-cats
25k JPEG images of various size, 2 classes (guess what)
Your main objective
In this seminar your goal is to fine-tune a pre-trained model to distinguish between the two rivaling animals
The first step is to just reuse some network layer as features
End of explanation
"""
#extract features from images
from tqdm import tqdm
from scipy.misc import imresize
X = []
Y = []
#this may be a tedious process. If so, store the results in some pickle and re-use them.
for fname in tqdm(os.listdir('train/')):
y = fname.startswith("cat")
img = imread("train/"+fname)
img = preprocess(imresize(img,(IMAGE_W,IMAGE_W)))
    features = <preprocess the image into features>  # e.g. with a compiled feature function like the one sketched above
Y.append(y)
X.append(features)
X = np.concatenate(X) #stack all [1xfeature] matrices into one.
assert X.ndim==2
#WARNING! the concatenate works for [1xN] matrices. If you have other format, stack them yourself.
#crop if we ended prematurely
Y = Y[:len(X)]
from sklearn.cross_validation import train_test_split
<split data either here or by cross-validation>
"""
Explanation: for starters
Train an sklearn model and evaluate validation accuracy (it should be >80%); a minimal baseline is sketched a couple of cells below.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier,ExtraTreesClassifier,GradientBoostingClassifier,AdaBoostClassifier
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
"""
Explanation: load our dakka
End of explanation
"""
print("I can do it!")
"""
Explanation: Main quest
Get the score improved!
No methods are illegal: ensembling, data augmentation, NN hacks.
Just don't let test data slip into training.
The main requirement is that you implement the NN fine-tuning recipe:
Split the raw image data
please do train/validation/test instead of just train/test
reasonable but not optimal split is 20k/2.5k/2.5k or 15k/5k/5k
Choose which vgg layers are you going to use
Anything but for prob is okay
Do not forget that vgg16 uses dropout
Build a few layers on top of chosen "neck" layers.
a good idea is to just stack more layers inside the same network
alternative: stack on top of get_output
Train the newly added layers for some iterations
you can selectively train some weights by only sending them to your optimizer
lasagne.updates.mysupermegaoptimizer(loss, only_those_weights_i_wanna_train)
selecting all weights from the head but not below the neck:
all_params = lasagne.layers.get_all_params(new_output_layer_or_layers,trainable=True)
old_params= lasagne.layers.get_all_params(neck_layers,trainable=True)
new_params = [w for w in all_params if w not in old_params]
it's crucial to monitor the network performance at this and the following steps
Fine-tune the network body
probably a good idea to SAVE your new network weights now 'cuz it's easy to mess things up.
Moreover, saving weights periodically is a no-nonsense idea
even more crucial to monitor validation performance
main network body may need a separate, much lower learning rate
since updates are dictionaries, one can just compute union
updates = {}
updates.update(lasagne.updates.how_i_optimize_old_weights())
updates.update(lasagne.updates.how_i_optimize_new_weights())
make sure they do not have overlapping keys. Otherwise, the earlier one will be forgotten.
assert len(updates) == len(old_updates) + len(new_updates)
PROFIT!!!
Evaluate the final score
Submit to kaggle
competition page https://www.kaggle.com/c/dogs-vs-cats
get test data https://www.kaggle.com/c/dogs-vs-cats/data
Some ways to get bonus points
explore other networks from the model zoo
play with architecture
85%/90%/93%/95%/97% kaggle score (screen pls).
data augmentation, prediction-time data augmentation
use any more advanced fine-tuning technique you know/read anywhere
ml hacks that benefit the final score
End of explanation
"""
|
dtamayo/reboundx
|
ipython_examples/IntegrateForce.ipynb
|
gpl-3.0
|
import rebound
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def system():
sim = rebound.Simulation()
sim.G = 4*np.pi**2
sim.add(m=0.93)
sim.add(m=4.5*3.e-7, P=0.571/365.25, e=0.01)
sim.add(m=41.*3.e-7, P=13.34/365.25, e=0.01)
sim.move_to_com()
sim.dt = 0.07*sim.particles[1].P
return sim
"""
Explanation: Integrating a Force Step
Here we show how to add force steps before and after a REBOUND timestep. This can be important if the force is velocity-dependent and you are using WHFast, since it assumes that additional forces are only position-dependent (see Sec. 5.1 of Tamayo et al. 2019, where an ultra-short period planet is spuriously ejected after 3 million years by velocity-dependent GR forces). Velocity-dependent forces should be integrated across the timestep, especially if they're non-dissipative, like GR.
End of explanation
"""
sim = system()
sim.integrator = "whfast"
import reboundx
rebx = reboundx.Extras(sim)
gr = rebx.load_force("gr")
rebx.add_force(gr)
gr.params["c"] = 63197.8# AU/yr
Nout = 1000
times = np.linspace(0, 1e4*sim.particles[1].P, Nout)
E0 = rebx.gr_hamiltonian(gr)
Eerr = np.zeros(Nout)
for i, time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
E = rebx.gr_hamiltonian(gr)
Eerr[i] = np.abs((E-E0)/E0)
fig, ax = plt.subplots(figsize=(9,6))
ax.loglog(times/sim.particles[1].P, Eerr, '.')
ax.set_xlabel('Time (Inner Planet Orbits)', fontsize=24)
ax.set_ylabel('Relative Energy Error', fontsize=24)
"""
Explanation: If we simply add the GR force and integrate as normal with WHFast, the energy error will grow secularly:
End of explanation
"""
sim = system()
sim.integrator = "whfast"
rebx = reboundx.Extras(sim)
gr = rebx.load_force("gr")
gr.params["c"] = 63197.8# AU/yr
intforce = rebx.load_operator('integrate_force')
rebx.add_operator(intforce)
intforce.params['force'] = gr
intforce.params['integrator'] = reboundx.integrators['implicit_midpoint']
"""
Explanation: If instead we add an operator step before and after the timestep that integrates the GR force across it, we fix the problem. We again load the gr force, but instead of adding it to rebx, we instead add an 'integrate_force' operator, and add the gr force as a parameter to that operator step:
End of explanation
"""
Nout = 1000
times = np.logspace(0, np.log10(1e4*sim.particles[1].P), Nout)
E0 = rebx.gr_hamiltonian(gr)
Eerr = np.zeros(Nout)
for i, time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
E = rebx.gr_hamiltonian(gr)
Eerr[i] = np.abs((E-E0)/E0)
fig, ax = plt.subplots(figsize=(9,6))
ax.loglog(times/sim.particles[1].P, Eerr, '.')
ax.set_xlabel('Time (Inner Planet Orbits)', fontsize=24)
ax.set_ylabel('Relative Energy Error', fontsize=24)
"""
Explanation: Above we have also chosen which integrator to use across the timestep, here implicit midpoint. See the REBOUNDx paper for more discussion
End of explanation
"""
|
dsacademybr/PythonFundamentos
|
Cap08/Notebooks/DSA-Python-Cap08-07-StatsModels.ipynb
|
gpl-3.0
|
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
"""
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Chapter 8</font>
Download: http://github.com/dsacademybr
End of explanation
"""
# For plotting and chart visualization
from pylab import *
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels as st
import sys
import warnings
if not sys.warnoptions:
warnings.simplefilter("ignore")
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
warnings.filterwarnings("ignore")
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std
np.random.seed(9876789)
np.__version__
pd.__version__
st.__version__
# Creating artificial data
nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
X = sm.add_constant(X)
y = np.dot(X, beta) + e
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
print('Parameters: ', results.params)
print('R2: ', results.rsquared)
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
res = sm.OLS(y, X).fit()
print(res.summary())
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('Predicted values: ', res.predict())
prstd, iv_l, iv_u = wls_prediction_std(res)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res.fittedvalues, 'r--.', label="OLS")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
ax.legend(loc='best')
"""
Explanation: Statsmodels
Linear Regression Models
End of explanation
"""
from statsmodels.tsa.arima_process import arma_generate_sample
# Generating data
np.random.seed(12345)
arparams = np.array([.75, -.25])
maparams = np.array([.65, .35])
# Parameters
arparams = np.r_[1, -arparams]
maparams = np.r_[1, maparams]
nobs = 250
y = arma_generate_sample(arparams, maparams, nobs)
dates = sm.tsa.datetools.dates_from_range('1980m1', length=nobs)
y = pd.Series(y, index=dates)
arma_mod = sm.tsa.ARMA(y, order=(2,2))
arma_res = arma_mod.fit(trend='nc', disp=-1)
print(arma_res.summary())
"""
Explanation: Time-Series Analysis
End of explanation
"""
|
sdpython/ensae_teaching_cs
|
_doc/notebooks/sklearn_ensae_course/05_measuring_prediction_performance.ipynb
|
mit
|
# Get the data
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
# Instantiate and train the classifier
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)
# Check the results using metrics
from sklearn import metrics
y_pred = clf.predict(X)
print(metrics.confusion_matrix(y_pred, y))
"""
Explanation: 2A.ML101.5: Measuring prediction performance
Source: Course on machine learning with scikit-learn by Gaël Varoquaux
Using the K-neighbors classifier
Here we'll continue to look at the digits data, but we'll switch to the
K-Neighbors classifier. The K-neighbors classifier is an instance-based
classifier. The K-neighbors classifier predicts the label of
an unknown point based on the labels of the K nearest points in the
parameter space.
End of explanation
"""
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from sklearn.datasets import load_boston
from sklearn.tree import DecisionTreeRegressor
data = load_boston()
clf = DecisionTreeRegressor().fit(data.data, data.target)
predicted = clf.predict(data.data)
expected = data.target
plt.scatter(expected, predicted)
plt.plot([0, 50], [0, 50], '--k')
plt.axis('tight')
plt.xlabel('True price ($1000s)')
plt.ylabel('Predicted price ($1000s)');
"""
Explanation: Apparently, we've found a perfect classifier! But this is misleading
for the reasons we saw before: the classifier essentially "memorizes"
all the samples it has already seen. To really test how well this
algorithm does, we need to try some samples it hasn't yet seen.
This problem can also occur with regression models. In the following we fit an other instance-based model named "decision tree" to the Boston Housing price dataset we introduced previously:
End of explanation
"""
from sklearn.model_selection import train_test_split
X = digits.data
y = digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
print("%r, %r, %r" % (X.shape, X_train.shape, X_test.shape))
"""
Explanation: Here again the predictions are seemingly perfect as the model was able to perfectly memorize the training set.
A Better Approach: Using a validation set
Learning the parameters of a prediction function and testing it on the
same data is a methodological mistake: a model that would just repeat
the labels of the samples that it has just seen would have a perfect
score but would fail to predict anything useful on yet-unseen data.
To avoid over-fitting, we have to define two different sets:
a training set X_train, y_train which is used for learning the parameters of a predictive model
a testing set X_test, y_test which is used for evaluating the fitted predictive model
In scikit-learn such a random split can be quickly computed with the
train_test_split helper function. It can be used this way:
End of explanation
"""
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(metrics.confusion_matrix(y_test, y_pred))
print(metrics.classification_report(y_test, y_pred))
"""
Explanation: Now we train on the training data, and test on the testing data:
End of explanation
"""
metrics.f1_score(y_test, y_pred, average="macro")
"""
Explanation: The averaged f1-score is often used as a convenient measure of the
overall performance of an algorithm. It appears in the bottom row
of the classification report; it can also be accessed directly:
End of explanation
"""
metrics.f1_score(y_train, clf.predict(X_train), average="macro")
"""
Explanation: The over-fitting we saw previously can be quantified by computing the
f1-score on the training data itself:
End of explanation
"""
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
import warnings # suppress warnings from older versions of KNeighbors
warnings.filterwarnings('ignore', message='kneighbors*')
X = digits.data
y = digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
for Model in [LinearSVC, GaussianNB, KNeighborsClassifier]:
clf = Model().fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(Model.__name__,
metrics.f1_score(y_test, y_pred, average="macro"))
print('------------------')
# test SVC loss
for loss, p, dual in [('squared_hinge', 'l1', False), ('squared_hinge', 'l2', True)]:
clf = LinearSVC(penalty=p, loss=loss, dual=dual)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("LinearSVC(penalty='{0}', loss='{1}')".format(p, loss),
metrics.f1_score(y_test, y_pred, average="macro"))
print('-------------------')
# test K-neighbors
for n_neighbors in range(1, 11):
clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("KNeighbors(n_neighbors={0})".format(n_neighbors),
metrics.f1_score(y_test, y_pred, average="macro"))
"""
Explanation: Regression metrics
In the case of regression models, we need to use different metrics, such as explained variance; a short example is sketched below.
Application: Model Selection via Validation
In the previous notebook, we saw Gaussian Naive Bayes classification of the digits.
Here we saw K-neighbors classification of the digits. We've also seen support vector
machine classification of digits. Now that we have these
validation tools in place, we can ask quantitatively which of the three estimators
works best for the digits dataset.
With the default hyper-parameters for each estimator, which gives the best f1 score
on the validation set? Recall that hyperparameters are the parameters set when
you instantiate the classifier: for example, the n_neighbors in
clf = KNeighborsClassifier(n_neighbors=1)
For each classifier, which value for the hyperparameters gives the best results for
the digits data? For LinearSVC, use loss='l2' and loss='l1'. For
KNeighborsClassifier we use n_neighbors between 1 and 10. Note that GaussianNB
does not have any adjustable hyperparameters.
End of explanation
"""
clf = KNeighborsClassifier()
from sklearn.model_selection import cross_val_score
cross_val_score(clf, X, y, cv=5)
"""
Explanation: Cross-validation
Cross-validation consists of repeatedly splitting the data into pairs of train and test sets, called 'folds'. Scikit-learn comes with a function to automatically compute the score on all these folds. Here we do 'K-fold' with k=5.
End of explanation
"""
from sklearn.model_selection import ShuffleSplit
cv = ShuffleSplit(n_splits=5)
cross_val_score(clf, X, y, cv=cv)
"""
Explanation: We can use different splitting strategies, such as random splitting
End of explanation
"""
from sklearn.datasets import load_diabetes
data = load_diabetes()
X, y = data.data, data.target
print(X.shape)
"""
Explanation: There exist many different cross-validation strategies in scikit-learn. They are often useful to take into account non-IID datasets.
Hyperparameter optimization with cross-validation
Consider regularized linear models, such as
Ridge Regression, which uses $\ell_2$ regularization,
and Lasso Regression, which uses $\ell_1$ regularization. Choosing their regularization parameter is important.
Let us set these parameters on the Diabetes dataset, a simple regression problem. The diabetes data consists of 10 physiological variables (age, sex, weight, blood pressure) measured on 442 patients, and an indication of disease progression after one year:
End of explanation
"""
from sklearn.linear_model import Ridge, Lasso
for Model in [Ridge, Lasso]:
model = Model()
print(Model.__name__, cross_val_score(model, X, y).mean())
"""
Explanation: With the default hyper-parameters, we use the cross-validation score to determine goodness-of-fit:
End of explanation
"""
alphas = np.logspace(-3, -1, 30)
for Model in [Lasso, Ridge]:
scores = [cross_val_score(Model(alpha), X, y, cv=3).mean()
for alpha in alphas]
plt.plot(alphas, scores, label=Model.__name__)
plt.legend(loc='lower left');
"""
Explanation: Basic Hyperparameter Optimization
We compute the cross-validation score as a function of alpha, the strength of the regularization for Lasso and Ridge. We choose 30 values of alpha between 0.001 and 0.1, spaced logarithmically:
End of explanation
"""
from sklearn.model_selection import GridSearchCV
"""
Explanation: Can we trust our results to be actually useful?
Automatically Performing Grid Search
End of explanation
"""
for Model in [Ridge, Lasso]:
gscv = GridSearchCV(Model(), dict(alpha=alphas), cv=3).fit(X, y)
print(Model.__name__, gscv.best_params_)
"""
Explanation: GridSearchCV is constructed with an estimator, as well as a dictionary
of parameter values to be searched. We can find the optimal parameters this
way:
End of explanation
"""
from sklearn.linear_model import RidgeCV, LassoCV
for Model in [RidgeCV, LassoCV]:
model = Model(alphas=alphas, cv=3).fit(X, y)
print(Model.__name__, model.alpha_)
"""
Explanation: Built-in Hyperparameter Search
For some models within scikit-learn, cross-validation can be performed more efficiently
on large datasets. In this case, a cross-validated version of the particular model is
included. The cross-validated versions of Ridge and Lasso are RidgeCV and
LassoCV, respectively. The grid search on these estimators can be performed as
follows:
End of explanation
"""
for Model in [RidgeCV, LassoCV]:
scores = cross_val_score(Model(alphas=alphas, cv=3), X, y, cv=3)
print(Model.__name__, np.mean(scores))
"""
Explanation: We see that the results match those returned by GridSearchCV
Nested cross-validation
How do we measure the performance of these estimators? We have used data to set the hyperparameters, so we need to test on genuinely new data. We can do this by running cross_val_score on our CV objects. Here there are two cross-validation loops going on; this is called 'nested cross-validation':
End of explanation
"""
|
ruchika05/demo
|
Notebook/Anomaly-detection-DSWB.ipynb
|
epl-1.0
|
from pyspark.sql import SQLContext
# adding the PySpark module to SparkContext
sc.addPyFile("https://raw.githubusercontent.com/seahboonsiew/pyspark-csv/master/pyspark_csv.py")
import pyspark_csv as pycsv
# you may need to modify this line if the filename or path is different.
sqlContext = SQLContext(sc)
data = sc.textFile("/resources/sample-data.csv")
def skip_header(idx, iterator):
if (idx == 0):
next(iterator)
return iterator
body = data.mapPartitionsWithIndex(skip_header)
header = data.first()
header_list = header.split(",")
# create Spark DataFrame using pyspark-csv
data_df = pycsv.csvToDataFrame(sqlContext, body, sep=",", columns=header_list)
data_df.cache()
data_df.printSchema()
"""
Explanation: Introduction
This Notebook helps you identify anomalies in your historical time-series (IoT) data in a few simple steps. It also derives a threshold value for your historical data, which can be used to set rules in Watson IoT Platform so that you get an alert when your IoT device reports an abnormal reading in the future.
Accepted file format
Note that this Notebook accepts the CSV file in one of the following file formats:
2 column format: <Date and time in DD/MM/YYYY or MM/DD/YYYY format, Numeric value>
1 column format: <Numeric value>
Sample data
If you don’t have a file, try downloading the sample file from this link. The sample file contains temperature data recorded every 15 minutes. The sample data also contains spikes to demonstrate an abnormal situation.
Load data
Drag and drop your CSV file into this Notebook. Once the file is uploaded successfully, you can see the file in the Recent Data section. Then expand the file name and click the Insert Path link to get the location of the file. It should look like /resources/file-name.
The next step is to create the SQL DataFrame from the CSV file. Instead of specifying the schema for a Spark DataFrame programmatically, you can use the pyspark-csv module. It is an external PySpark module and works like the pandas read_csv function.
Enter the following lines of code into your Notebook to create a Spark SQL DataFrame from the given CSV file. Modify the path of the file if it's different and click Run. Observe that it prints the schema.
End of explanation
"""
# retrieve the first row
data_df.take(1)
"""
Explanation: Enter the following command in the next cell to look at the first record and click Run
End of explanation
"""
# retrieve the number of rows
data_df.count()
"""
Explanation: Enter the following command in the next cell to get the number of rows in the CSV file (DataFrame) and click Run,
End of explanation
"""
# create a pandas dataframe from the SQL dataframe
import pprint
import pandas as pd
pandaDF = data_df.toPandas()
#Fill NA/NaN values to 0
pandaDF.fillna(0, inplace=True)
pandaDF.columns
"""
Explanation: Create Pandas DataFrame
Enter the following commands in the next cell to create a Pandas DataFrame from the Spark SQL DataFrame and click Run. This prints the columns of the newly created Pandas DataFrame, which are the same as in the Spark SQL DataFrame.
The Python Data Analysis Library (a.k.a. pandas) provides high-performance, easy-to-use data structures and data analysis tools that are designed to make working with “relational” or “labeled” data both easy and intuitive. Also, plotting is very easy with Pandas DataFrame.
End of explanation
"""
# change index to time if its present
valueHeaderName = 'value'
timeHeaderName = 'null'
if (len(header_list) == 2):
timeHeaderName = header_list[0]
valueHeaderName = header_list[1]
else:
valueHeaderName = header_list[0]
# Drop the timestamp column as the index is replaced with timestamp now
if (len(header_list) == 2):
pandaDF.index = pandaDF[timeHeaderName]
pandaDF = pandaDF.drop([timeHeaderName], axis=1)
# Also, sort the index with the timestamp
pandaDF.sort_index(inplace=True)
pandaDF.head(n=5)
"""
Explanation: Enter the following commands in the next cell to set the timestamp as the index if it's present, and click Run.
End of explanation
"""
# calculate z-score and populate a new column
pandaDF['zscore'] = (pandaDF[valueHeaderName] - pandaDF[valueHeaderName].mean())/pandaDF[valueHeaderName].std(ddof=0)
pandaDF.head(n=5)
"""
Explanation: Calculate z-score
We detect anomaly events using the z-score, a.k.a. the standard score, which indicates how many standard deviations an element is from the mean.
Enter the following commands to calculate the z-score for each value and add it as a new column in the same DataFrame:
End of explanation
"""
# ignore warnings if any
import warnings
warnings.filterwarnings('ignore')
# render the results as inline charts:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
'''
This function detects spikes and dips by returning a non-zero value
when the z-score is above 3 (spike) or below -3 (dip). In case you
want to capture smaller spikes and dips, lower the z-score cutoff from
3 to 2 in this function.
'''
def spike(row):
if(row['zscore'] >=3 or row['zscore'] <=-3):
return row[valueHeaderName]
else:
return 0
pandaDF['spike'] = pandaDF.apply(spike, axis=1)
# select rows that are required for plotting
plotDF = pandaDF[[valueHeaderName,'spike']]
#calculate the y minimum value
y_min = (pandaDF[valueHeaderName].max() - pandaDF[valueHeaderName].min()) / 10
fig, ax = plt.subplots(num=None, figsize=(14, 6), dpi=80, facecolor='w', edgecolor='k')
ax.set_ylim(plotDF[valueHeaderName].min() - y_min, plotDF[valueHeaderName].max() + y_min)
x_filt = plotDF.index[plotDF.spike != 0]
plotDF['xyvaluexy'] = plotDF[valueHeaderName]
y_filt = plotDF.xyvaluexy[plotDF.spike != 0]
#Plot the raw data in blue colour
line1 = ax.plot(plotDF.index, plotDF[valueHeaderName], '-', color='blue', animated = True, linewidth=1)
#plot the anomalies in red circle
line2 = ax.plot(x_filt, y_filt, 'ro', color='red', linewidth=2, animated = True)
#Fill the raw area
ax.fill_between(plotDF.index, (pandaDF[valueHeaderName].min() - y_min), plotDF[valueHeaderName], interpolate=True, color='blue',alpha=0.6)
# Label the axis
ax.set_xlabel("Sequence",fontsize=20)
ax.set_ylabel(valueHeaderName,fontsize=20)
plt.tight_layout()
plt.legend()
plt.show()
"""
Explanation: Plot Anomalies
When we work in notebooks, we can decide how to present our analysis results and derived information. So far, we have used normal print functions, which are informative. However, we can also show the results visually by using the popular matplotlib package to create plots.
Enter the following snippet of code in the next cell to view the anomaly events in your data and click Run. Observe that the values for which the z-score is above 3 or below -3 are marked as abnormal events in the graph shown below.
End of explanation
"""
# calculate the value that is corresponding to z-score 3
(pandaDF[valueHeaderName].std(ddof=0) * 3) + pandaDF[valueHeaderName].mean()
"""
Explanation: As shown, the red marks are the unexpected spikes and dips whose z-score is greater than 3 or less than -3. In case you want to detect smaller spikes, lower the cutoff to 2 or below and rerun. Similarly, if you want to detect only the larger spikes, try increasing the z-score cutoff from 3 to 4 and beyond.
Derive thresholds
Enter the following command into the next cell to derive the spike threshold value corresponding to a z-score of 3 and click Run.
End of explanation
"""
# calculate the value that is corresponding to z-score -3
(pandaDF[valueHeaderName].std(ddof=0) * -3) + pandaDF[valueHeaderName].mean()
"""
Explanation: Similarly, enter the following command into the next cell to derive the dip threshold value corresponding to a z-score of -3.
End of explanation
"""
|
fjaviersanchez/JupyterTutorial
|
QuickTutorial.ipynb
|
mit
|
# E.g., write/read a table with data
min_x = 0 #Let's assume this is right ascension
max_x = 360
nsamples = 10000
min_y = -90 #Let's assume this is declination
max_y = 90
rnd_x = min_x+(max_x-min_x)*np.random.random(size=nsamples)
rnd_y = np.degrees(np.arcsin(np.sin(np.radians(min_y))+(np.sin(np.radians(max_y))-np.sin(np.radians(min_y)))*np.random.random(size=nsamples)))
"""
Explanation: Sky coordinates
There are several coordinate systems to express the position of an object in the sky. See more at: https://en.wikipedia.org/wiki/Celestial_coordinate_system
Equatorial/celestial: This is the coordinate system that we are going to use most often. The position of an object is given by a pair of numbers: right ascension (RA -- longitude, $\alpha$) and declination (Dec -- latitude, $\delta$). The equatorial plane is located at the celestial equator (i.e., the projection of the Earth's equator) and the zero longitude mark is aligned with the vernal equinox. It is usually centered on Earth's center, and since Earth's vernal equinox changes position (due to nutation, etc.) we usually use an epoch to fix the equinox. The most common is J2000 (i.e., the equinox in the year 2000).
Galactic: This coordinate system is also widely used in astronomy, especially in the case of Cosmic Microwave Background experiments. The position of any point in the sky is again given by two numbers: galactic longitude ($l$) and galactic latitude ($b$). The fundamental plane corresponds to the galactic plane, and the zero longitude corresponds to the position of the galactic center.
There are other coordinate systems but we are going to use mostly these two.
Practical coding
Let's walk through the most typical operations we use, starting with reading and writing data.
End of explanation
"""
import pandas as pd
df = pd.DataFrame({'x': rnd_x, 'y': rnd_y})
df
"""
Explanation: We now have a couple of arrays that we can save in many, many ways (numpy arrays, ASCII files...). Data scientists like to use pandas: it is a convenient and powerful tool to read and write data in multiple formats.
Reading/Writing data
End of explanation
"""
import astropy.table
tab = astropy.table.Table.from_pandas(df) # We can create it from a pandas dataframe
tab_from_dict = astropy.table.Table({'x': rnd_x, 'y':rnd_y}) # From a dictionary
tab_from_dict == tab
tab_from_arrays = astropy.table.Table([rnd_x, rnd_y], names=('x', 'y')) # Or from arrays
tab_from_arrays == tab
tab.write('test_table.fits', overwrite=True) # So now we write the table
"""
Explanation: Astronomers, however, like the FITS format, and we use a library called astropy to read/write it. In particular, since this is a data table, we are going to use astropy.table.
End of explanation
"""
?tab.write
"""
Explanation: There are many formats available using astropy
End of explanation
"""
tab_read = astropy.table.Table.read('test_table.fits')
"""
Explanation: Let's see how we can read a table
End of explanation
"""
#Typical plots: plt.plot, plt.hist, plt.hist2d, plt.scatter
plt.scatter(rnd_x, rnd_y, s=0.01)
plt.xlabel('RA [deg]')
plt.ylabel('Dec [deg]');
plt.hist2d(rnd_x, rnd_y);
plt.xlabel('RA [deg]')
plt.ylabel('Dec [deg]');
plt.hist(rnd_x, histtype='step', label='RA')
plt.hist(rnd_y, histtype='step', label='Dec')
plt.xlabel('X [deg]')
plt.ylabel('Number of datapoints')
plt.legend(loc='best')
"""
Explanation: If the format of the table is not FITS we have to specify the format, e.g.:
tab_read = astropy.table.Table.read('test_table.txt', format='ascii')
Plot data
Data visualization is very important since it allows us to understand the data and think of useful ways to manipulate them to obtain more information, or to compress the information somehow. Let's try to represent our dataset.
End of explanation
"""
import healpy as hp
"""
Explanation: There are multiple tools to visualize data! Check for example seaborn. And there are even interactive tools like bokeh
Sky map representation (optional)
End of explanation
"""
nside = 64 # This parameter (which should be a power of 2) controls the resolution of the map, the bigger the number, the smaller the pixels
print(hp.nside2resol(nside, arcmin=True)) # This is the pixel resolution in arcminutes
pixel_numbers = hp.ang2pix(nside, rnd_x, rnd_y, lonlat=True) # This function tells us the number of pixel that corresponds to each pair x, y. We use the lonlat=True option to pass them as RA, Dec
print(pixel_numbers)
"""
Explanation: healpy is a library that allows us to decompose (tessellate) the sky into equal-area cells. It is pretty widely used in the cosmology community, and there are some nice mathematical properties of that particular way of breaking the sky into pieces, but for the purposes of this notebook we are just going to use it as a tool to make 2D histograms of the sky.
End of explanation
"""
hp_map = np.bincount(pixel_numbers, minlength=hp.nside2npix(nside))
hp.mollview(hp_map, title='Sky plot', unit='Galaxies/pixel')
#Our first sky plot!
"""
Explanation: Now we have a bunch of pixel numbers (one per RA, Dec pair), so we count them to see how many pairs fall in each cell/pixel.
End of explanation
"""
|
ivannz/crossing_paper2017
|
experiments/plots_analysis.ipynb
|
mit
|
import os
import re
import time
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
BASE_PATH = "/Volumes/LaCie/from_macHD/Github/crossing_paper2017"
# BASE_PATH = ".."
"""
Explanation: Plots and analysis
End of explanation
"""
def offspring_empirical(Dmnk, levels, laplace=False):
# Get pooled frequencies
Djk = Dmnk[:, levels].sum(axis=1, keepdims=False, dtype=np.float)
Dj = Djk.sum(axis=1, keepdims=True)
# Compute the empirical probabilities
Pjk = Djk / Dj if not laplace else (Djk + 1.0) / (Dj + Djk.shape[1])
levels = np.arange(Dmnk.shape[1], dtype=np.int)[levels]
return levels + 1, np.nanmean(Pjk, axis=0), np.nanstd(Pjk, axis=0)
"""
Explanation: Compute the empirical probabilities by averaging across all replications
End of explanation
"""
from math import log
def offspring_prob(Z_max, hurst):
Z = np.arange(2, Z_max, 2)
theta = 2.0 ** (1.0 - 1.0 / hurst)
return Z, theta * np.exp((Z // 2 - 1) * log(1 - theta))
"""
Explanation: Get theoretical values of the probability according to the conjectured distribution:
$$ Z \sim \text{Geom}\bigl(4^{\frac{1}{2}-\frac{1}{2h}}\bigr) \text{ over } \{2n\,:\,n\geq 1\} \,. $$
For $\theta = 2^{1-h^{-1}}$, the law, once again, is
$$ \mathbb{P}(Z=2k) = \theta \cdot (1-\theta)^{k-1}\,. $$
End of explanation
"""
def offspring_hurst(Dmnk, levels, laplace=False):
# Get pooled frequencies
Dmj = Dmnk[:, levels].sum(axis=2, dtype=np.float)
# Compute the sum of the left-closed tails sums,
# and divide by the total number of offspring.
Mmj = 2 * Dmnk[:, levels, ::-1].cumsum(axis=-1).sum(axis=-1) / Dmj
Hmj = np.log(2) / np.log(Mmj)
levels = np.arange(Dmnk.shape[1], dtype=np.int)[levels]
return levels + 1, np.nanmean(Hmj, axis=0), np.nanstd(Hmj, axis=0)
"""
Explanation: Use the geometric distribution's mean value to estimate the hurst exponent:
$$ \mathbb{E} Z
= 2 \theta \sum_{k\geq 1} k (1 - \theta)^{k-1}
= 2 \theta \sum_{k\geq 1} \sum_{j\geq k} (1 - \theta)^{j-1}
= 2 \theta \sum_{k\geq 1} \theta^{-1} (1 - \theta)^{k-1}
= 2 \theta^{-1} \,, $$
whence
$$ 2^{1-h^{-1}} = \frac{2}{\mathbb{E} Z} \Leftrightarrow h = \frac{\log 2}{\log \mathbb{E}Z}\,. $$
End of explanation
"""
output_path = os.path.join("../plots", time.strftime("%Y%m%d_%H%M%S"))
if not os.path.exists(output_path):
os.mkdir(output_path)
print(output_path)
"""
Explanation: Experiments
Create the output folder.
End of explanation
"""
from crossing_tree.manager import ExperimentManager
experiment = ExperimentManager(name_format=re.compile(
r"^(?P<class>[^-]+)"+
r"-(?P<size>\d+)" +
r"-(?P<hurst>(\d*\.)?\d+)" +
r"-(?P<replications>\d+x\d+)" + # r"-(?P<n_batch>\d+)x(?P<n_jobs>\d+)" +
r"_(?:[\d-]+)" + # r"_(?P<dttm>[\d-]+)" +
r".gz$", flags=re.I | re.U))
experiment.update(os.path.join(BASE_PATH, "results/version_2"))
"""
Explanation: Load the experiment manager
End of explanation
"""
print(experiment.keys_)
"""
Explanation: Print the keys of the experiment
End of explanation
"""
method = "med" # needs bytes encoding
experiments = [# (8388608, "125x8", "FBM", method),
(33554432, "334x3", "FBM", method),
(8388608, "125x8", "HRP2_1", method),
(8388608, "125x8", "HRP3_1", method),
(8388608, "125x8", "HRP4_1", method),
# (524288, "125x8", "HRP2_16", method),
# (524288, "125x8", "HRP3_16", method),
# (524288, "125x8", "HRP4_16", method),
(8388608, "125x8", "WEI_1.2", method),
]
exponents = [0.500, 0.550, 0.600, 0.650, 0.700, 0.750, 0.800, 0.850, 0.900,
0.910, 0.915, 0.920, 0.925, 0.930, 0.935, 0.940, 0.945, 0.950,
0.990]
"""
Explanation: Choose a particular instance
End of explanation
"""
def figure_01(fig, generator, size, replications, method, p=6, q=7, bars=True, legend=True):
ax = fig.add_subplot(111)
results = experiment[generator, size, :, replications]
data = {float(info_[2]): data_[method] for info_, start_, finish_, seeds_, data_ in results}
color_ = plt.cm.rainbow(np.linspace(0, 1, num=len(exponents)))[::-1]
for col_, hurst_ in zip(color_, exponents):
try:
try:
scale_m, Nmn, Dmnk, Cmnkk, Vmnde, Wmnp, Wavgmn, Wstdmn = data[hurst_]
except ValueError:
scale_m, Nmn, Dmnk, Vmnde, Wmnp, Wavgmn, Wstdmn = data[hurst_]
except KeyError:
continue
levels, Pk_avg, Pk_std = offspring_empirical(Dmnk, slice(p, q), laplace=False)
k, Pk = offspring_prob(2*(Pk_avg.shape[0] + 1), hurst=hurst_)
ax.plot(k, Pk, linestyle='-', color='black', alpha=0.5, zorder=-99)
if bars:
ax.errorbar(k, Pk_avg, yerr=Pk_std, fmt='-s',
color=col_, markersize=3, alpha=1.0,
label="%s %0.3f"%(generator, hurst_))
else:
ax.plot(k, Pk_avg, "-s", color=col_, markersize=3,
alpha=1.0, label="%s %0.3f"%(generator, hurst_))
ax.set_xticks(np.arange(2, 43, 2))
ax.grid(alpha=0.5, linestyle=":", color="grey")
ax.set_xlim(1.9, 12.1)
ax.set_yscale("log", basey=2)
ax.set_ylim(.5e-4, 1.1)
ax.set_ylabel("probability")
ax.set_xlabel("number of offspring")
if legend:
legend_ = ax.legend(loc="lower left", frameon=True,
ncol=2, fontsize=7)
legend_.get_frame() #.set_facecolor("whitesmoke")
"""
Explanation: FIGURE 01
for label fig:fbm_offspring_distribution
End of explanation
"""
p, q = 6, 10 # 5, 8
for experiment_ in experiments:
size, replications, generator, method_ = experiment_
name_ = "fig_01-%d_%s-%s-%d-%s-%s.pdf"%(p, str(q) if isinstance(q, int) else "X",
generator, size, replications, method_,)
fig = plt.figure(figsize=(6, 5))
figure_01(fig, str(generator), str(size), str(replications), method_,
p, q, bars=False, legend=True)
fig.savefig(os.path.join(output_path, name_), format="pdf")
plt.close()
"""
Explanation: Generate a figure-01 for different sizes and numbers of replications.
End of explanation
"""
# exponents = [0.5, 0.6, 0.7, 0.8, 0.9]
# exponents = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
def figure_04(fig, generator, size, replications, method, p=6, q=7, bars=False, legend=True):
ax = fig.add_subplot(111)
results = experiment[generator, size, :, replications]
data = {float(info_[2]): data_[method] for info_, start_, finish_, seeds_, data_ in results}
first_, last_ = np.inf, -np.inf
color_ = plt.cm.rainbow(np.linspace(0, 1, num=len(exponents)))[::-1]
for col_, hurst_ in zip(color_, exponents):
try:
try:
scale_m, Nmn, Dmnk, Cmnkk, Vmnde, Wmnp, Wavgmn, Wstdmn = data[hurst_]
except ValueError:
scale_m, Nmn, Dmnk, Vmnde, Wmnp, Wavgmn, Wstdmn = data[hurst_]
except KeyError:
continue
levels, Hj_avg, Hj_std = offspring_hurst(Dmnk, slice(p, q))
ax.axhline(y=hurst_, color='black', linestyle='-', alpha=0.25, zorder=-99)
mask = Hj_avg < hurst_ * 1.35
if bars:
ax.errorbar(levels[mask], Hj_avg[mask], yerr=Hj_std[mask],
fmt="-s", color=col_, markersize=3, alpha=1.0,
label="%s %0.3f"%(generator, hurst_))
else:
ax.plot(levels[mask], Hj_avg[mask], "-s",
color=col_, markersize=3, alpha=1.0,
label="%s %0.3f"%(generator, hurst_))
first_ = min(levels[mask][0], first_)
last_ = max(levels[mask][-1], last_)
last_ = 20 # min(last_, 20)
ax.set_xticks(np.arange(first_, last_ + 1))
ax.grid(color="grey", linestyle=":", alpha=0.5)
ax.set_xlim(first_ - 0.1, last_ + 1.1)
ax.set_ylim(0.45, 1.01)
## Add a legend with white opaque background.
# ax.set_title( 'Crossing tree estimates of the Hurst exponent' )
ax.set_xlabel("level $\\delta 2^k$")
ax.set_ylabel("$H$")
if legend:
legend_ = ax.legend(loc="lower right", frameon=1,
ncol=2, fontsize=7)
legend_.get_frame() #.set_facecolor("whitesmoke")
"""
Explanation: FIGURE 04
for label fig:fbm_hurst_crossing_tree
End of explanation
"""
p, q = 0, None
for experiment_ in experiments:
size, replications, generator, method_ = experiment_
name_ = "fig_04-%d_%s-%s-%d-%s-%s.pdf"%(p, str(q) if isinstance(q, int) else "X",
generator, size, replications, method_,)
fig = plt.figure(figsize=(6, 5))
figure_04(fig, str(generator), str(size), str(replications), method_,
p, q, bars=False, legend=True)
fig.savefig(os.path.join(output_path, name_), format="pdf")
plt.close()
"""
Explanation: Create a figure-04 plot of mean-based Hurst estimates
End of explanation
"""
# exponents = [0.5, 0.6, 0.7, 0.8, 0.9]
# exponents = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
def figure_08(fig, generator, size, replications, method, bars=False, legend=True):
ax = fig.add_subplot(111)
results = experiment[generator, size, :, replications]
data = {float(info_[2]): data_[method] for info_, start_, finish_, seeds_, data_ in results}
color_ = plt.cm.rainbow(np.linspace(0, 1, num=len(exponents)))[::-1]
for col_, hurst_ in zip(color_, exponents):
try:
try:
scale_m, Nmn, Dmnk, Cmnkk, Vmnde, Wmnp, Wavgmn, Wstdmn = data[hurst_]
except ValueError:
scale_m, Nmn, Dmnk, Vmnde, Wmnp, Wavgmn, Wstdmn = data[hurst_]
except KeyError:
continue
level = np.arange(Wavgmn.shape[-1], dtype=np.float128)
scale_ = (2 ** (-level / hurst_))
# scale_ *= (2 * hurst_ - 1) * 2 * hurst_
Wavgn_ = np.nanmean(Wavgmn / (scale_m[:, np.newaxis] ** (1 / hurst_)), axis=0) * scale_
if bars:
Wstdn_ = np.nanstd(Wavgmn / (scale_m[:, np.newaxis] ** (1 / hurst_)), axis=0) * scale_
ax.errorbar(1+level, Wavgn_, yerr=Wstdn_, fmt="-s", color=col_,
markersize=3, alpha=1.0, label="%s %0.3f"%(generator, hurst_))
else:
ax.plot(1+level, Wavgn_, "-s", color=col_, markersize=3,
alpha=1.0, label="%s %0.3f"%(generator, hurst_))
ax.set_xticks(range(1, 21))
ax.grid(color="grey", linestyle=":", alpha=0.5)
ax.set_yscale("log", basey=2)
ax.set_xlim(0.9, 20.1)
ax.set_xlabel("level")
ax.set_ylabel("$\\left(\\delta 2^n \\right)^{-H^{-1}} {\\mathbb{E}W^n}$")
if legend:
legend_ = ax.legend(loc="lower left", frameon=1,
ncol=3, fontsize=7)
legend_.get_frame() #.set_facecolor("whitesmoke")
"""
Explanation: FIGURE 08
for label fig:fbm_avg_crossing_durations
End of explanation
"""
for experiment_ in experiments:
size, replications, generator, method_ = experiment_
name_ = "fig_08-%s-%d-%s-%s.pdf"%(generator, size, replications, method_,)
fig = plt.figure(figsize=(6, 5))
figure_08(fig, str(generator), str(size), str(replications), method_,
bars=False, legend=True)
fig.savefig(os.path.join(output_path, name_), format="pdf")
plt.close()
"""
Explanation: Create a figure-08 plot of scaled average crossing durations.
End of explanation
"""
from math import floor
full_table = list()
for experiment_ in experiments:
size, replications, generator, method = experiment_
results = experiment[str(generator), str(size), :, str(replications)]
data = {float(info_[2]): data_[method] for info_, start_, finish_, seeds_, data_ in results}
table = list()
for hurst_ in exponents:
try:
try:
scale_m, Nmn, Dmnk, Cmnkk, Vmnde, Wmnp, Wavgmn, Wstdmn = data[hurst_]
except ValueError:
scale_m, Nmn, Dmnk, Vmnde, Wmnp, Wavgmn, Wstdmn = data[hurst_]
except KeyError:
continue
# Compute the average number of offspring and the standard deviation
# df_ = pd.DataFrame(dict(average=Nmn.mean(axis=0), std=Nmn.std(axis=0)),
# index=pd.RangeIndex(stop=Nmn.shape[1],name='Level'))
df_ = pd.Series(["$%1.1f\pm%0.2f\\%%$"%(m/1000, 100*s/m) if floor(m/100) > 0 else "--"
for m, s in zip(Nmn.mean(axis=0), Nmn.std(axis=0))],
index=pd.RangeIndex(stop=Nmn.shape[1],name='Level'), name=hurst_)
table.append((hurst_, df_))
table = pd.concat([tab_ for hurst_, tab_ in table], axis=1,
keys=[hurst_ for hurst_, tab_ in table], names=["hurst"])
full_table.append((experiment_, table))
table = pd.concat([tab_ for hurst_, tab_ in full_table], axis=1, join="inner",
keys=[hurst_ for hurst_, tab_ in full_table],
names=["size", "replications", "generator", "method"])
"""
Explanation: TABLE 01
For the table tab:avg_offspring showing the average number
of offspring at each tree level.
End of explanation
"""
for hurst_ in exponents:
name_ = "tab_01-%s-%0.3f.tex"%(method_, hurst_,)
out_ = table.xs(method, axis=1, level=3).xs(hurst_, axis=1, level=-1)
out_.columns = out_.columns.droplevel(0).droplevel(0)
# .style.format({"average":"{:1.0f}", "std":"±{:1.0f}"})
body_ = out_.to_latex(escape=False, na_rep="--", bold_rows=True)\
.replace("_", "\\_")
body_ += """\\caption{The average number of offspring at each level (in\n"""\
""" thousands; $\\pm$1 std. dev. in percent) for processes\n"""\
""" with $H=%0.3f$.} \n"""%(hurst_,)
body_ += """\\label{tab:avg_offspring_%0.3f}\n"""%(hurst_,)
with open(os.path.join(output_path, name_), "w") as fout_:
fout_.write(body_)
"""
Explanation: Might want to use \usepackage{booktabs} or \usepackage{lscape}
Output .tex files with name format "tab_01-%s-%0.3f.tex"%(method, hurst,)
End of explanation
"""
selector = np.s_[:12]
levels_ = np.r_[selector].astype(float)
log2ed_list = []
check_list_ = [
(33554432, "334x3", "FBM", 1.0),
(8388608, "125x8", "HRP2_1", 2.0),
(8388608, "125x8", "HRP3_1", 3.0),
(8388608, "125x8", "HRP4_1", 4.0),
]
for size, replications, name, degree in check_list_:
results = experiment[name, str(size), :, str(replications)]
data = {float(info_[2]): data_[method]
for info_, start_, finish_, seeds_, data_ in results
if float(info_[2]) > 0.5}
slices_ = {hurst_: (res_[0], res_[-2][:, selector]) for hurst_, res_ in data.items()}
log2ed_ = np.stack([(np.log2(dur_) - (np.log2(delta_[:, np.newaxis]) + levels_) / hurst_).mean(axis=0)
for hurst_, (delta_, dur_) in slices_.items()], axis=0)
hursts_ = np.array([*slices_.keys()])[:, np.newaxis]
order_ = hursts_.argsort(axis=0)[:, 0]
hursts_ = hursts_[order_]
log2ed_ = log2ed_[order_]
log2ed_ /= (1.5 - hursts_)
# h0_ = (hursts_ - 1) / degree + 1
# log2ed_ /= (1.5 - h0_)
log2ed_list.append(log2ed_)
# log2ed_ /= ((hursts_ - 1) / degree + 0.5) ** (-2 / float(degree))
log2ed_ = np.stack(log2ed_list, axis=0).mean(axis=-1)
plt.plot(hursts_, log2ed_[0], "r") # d - 1
plt.plot(hursts_, log2ed_[1], "g") # d - 1 - 0
plt.plot(hursts_, log2ed_[2], "b") # d - 1 - 0.75
plt.plot(hursts_, log2ed_[3], "k")# d - 1 -
dlog2ed_ = np.diff(log2ed_, axis=-1) / np.diff(hursts_.T, axis=-1)
dlog2ed_[:, :-2].mean(axis=-1)
plt.plot(hursts_[1:], dlog2ed_[0], "r")
plt.plot(hursts_[1:], dlog2ed_[1], "g")
plt.plot(hursts_[1:], dlog2ed_[2], "b")
plt.plot(hursts_[1:], dlog2ed_[3], "k")
plt.plot(hursts_[1:], dlog2ed_[0] + 0, "r") # d - 1
plt.plot(hursts_[1:], dlog2ed_[1] + 18.5, "g") # d - 1 - 0
plt.plot(hursts_[1:], dlog2ed_[2] + 25, "b") # d - 1 - 0.75
plt.plot(hursts_[1:], dlog2ed_[3] + 28.25, "k") # d - 1 -
fig = plt.figure(figsize=(16, 9))
ax = fig.add_subplot(111)
color_ = plt.cm.rainbow(np.linspace(0, 1, num=5))[::-1]
for hurst_, Wavgmn in slices_.items():
ax.hist(np.log2(Wavgmn[:, 0]),
bins=200, alpha=0.5, lw=0, normed=True, color="red")
# for level, (Wavgn, col_) in enumerate(zip(Wavgmn.T, color_), 7):
# ax.hist(np.log2(Wavgn) - (float(level) / hurst_)**(1-hurst_),
# bins=200, alpha=0.5, lw=0, normed=True, color=col_)
log_Wavghn = np.stack([np.nanmean(np.diff(np.log2(Wavgmn), axis=1), axis=0)
for hurst_, Wavgmn in slices_.items()])
hursts_ = np.array([*slices_.keys()])  # list the keys explicitly, as above
log_Wavghn.shape
plt.plot(hursts_[np.newaxis, :] * log_Wavghn.T)
colors_ = plt.cm.rainbow_r(np.linspace(0, 1, num=log_Wavghn.shape[1]))
for col_, log_Wavgh in zip(colors_, log_Wavghn.T):
plt.scatter(hursts_, log_Wavgh * hursts_, lw=0, color=col_, alpha=0.5);
log_Wavgh * hursts_ - 1
y = np.log2(np.diff(log_Wavghn, axis=1).mean(axis=1))
X = hursts_
1.0 / (np.diff(y) / np.diff(X))
1.0 / np.diff(log_Wavghn, axis=1).mean(axis=1) - hursts_
plt.scatter(hursts_, np.diff(log_Wavghn, axis=1).mean(axis=1) - 1.0 / hursts_)
plt.scatter(hursts_, np.log2(np.diff(log_Wavghn, axis=1).mean(axis=1)))
plt.plot(np.diff(log_Wavghn, axis=1).T)
# / hursts_[np.newaxis]
"""
Explanation: FIGURE
Take the average crossing duration at each level
$(\delta 2^n)^{-H^{-1}} \mathbb{E}W^n = 2^{f(H, d)}$
$\log_2 \mathbb{E}W^n = f(H, d) + \frac1H \log_2 \delta + \frac{n}{H}$
$\log_2 (\delta 2^n)^{-H^{-1}} \mathbb{E}W^n = f(H, d)$
$F(H, d) = d \bigl(H - \frac12\bigr)^{-\frac2{d}}$
End of explanation
"""
|
ViralLeadership/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
|
Chapter2_MorePyMC/Chapter2.ipynb
|
mit
|
import pymc as pm
parameter = pm.Exponential("poisson_param", 1)
data_generator = pm.Poisson("data_generator", parameter)
data_plus_one = data_generator + 1
"""
Explanation: Chapter 2
This chapter introduces more PyMC syntax and design patterns, and ways to think about how to model a system from a Bayesian perspective. It also contains tips and data visualization techniques for assessing goodness-of-fit for your Bayesian model.
A little more on PyMC
Parent and Child relationships
To assist with describing Bayesian relationships, and to be consistent with PyMC's documentation, we introduce parent and child variables.
parent variables are variables that influence another variable.
child variables are variables that are affected by other variables, i.e. are the subject of parent variables.
A variable can be both a parent and child. For example, consider the PyMC code below.
End of explanation
"""
print "Children of `parameter`: "
print parameter.children
print "\nParents of `data_generator`: "
print data_generator.parents
print "\nChildren of `data_generator`: "
print data_generator.children
"""
Explanation: parameter controls the parameter of data_generator, hence influences its values. The former is a parent of the latter. By symmetry, data_generator is a child of parameter.
Likewise, data_generator is a parent to the variable data_plus_one (hence making data_generator both a parent and child variable). Although it does not look like one, data_plus_one should be treated as a PyMC variable as it is a function of another PyMC variable, hence is a child variable to data_generator.
This nomenclature is introduced to help us describe relationships in PyMC modeling. You can access a variable's children and parent variables using the children and parents attributes attached to variables.
End of explanation
"""
print "parameter.value =", parameter.value
print "data_generator.value =", data_generator.value
print "data_plus_one.value =", data_plus_one.value
"""
Explanation: Of course a child can have more than one parent, and a parent can have many children.
PyMC Variables
All PyMC variables also expose a value attribute. This attribute produces the current (possibly random) internal value of the variable. If the variable is a child variable, its value changes given the variable's parents' values. Using the same variables from before:
End of explanation
"""
lambda_1 = pm.Exponential("lambda_1", 1) # prior on first behaviour
lambda_2 = pm.Exponential("lambda_2", 1) # prior on second behaviour
tau = pm.DiscreteUniform("tau", lower=0, upper=10) # prior on behaviour change
print "lambda_1.value = %.3f" % lambda_1.value
print "lambda_2.value = %.3f" % lambda_2.value
print "tau.value = %.3f" % tau.value
print
lambda_1.random(), lambda_2.random(), tau.random()
print "After calling random() on the variables..."
print "lambda_1.value = %.3f" % lambda_1.value
print "lambda_2.value = %.3f" % lambda_2.value
print "tau.value = %.3f" % tau.value
"""
Explanation: PyMC is concerned with two types of programming variables: stochastic and deterministic.
stochastic variables are variables that are not deterministic, i.e., even if you knew all the values of the variables' parents (if it even has any parents), it would still be random. Included in this category are instances of classes Poisson, DiscreteUniform, and Exponential.
deterministic variables are variables that are not random if the variables' parents were known. This might be confusing at first: a quick mental check is if I knew all of variable foo's parent variables, I could determine what foo's value is.
We will detail each below.
Initializing Stochastic variables
Initializing a stochastic variable requires a name argument, plus additional parameters that are class specific. For example:
some_variable = pm.DiscreteUniform("discrete_uni_var", 0, 4)
where 0, 4 are the DiscreteUniform-specific lower and upper bound on the random variable. The PyMC docs contain the specific parameters for stochastic variables. (Or use object??, for example pm.DiscreteUniform?? if you are using IPython!)
The name attribute is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, I use the Python variable's name as the name.
For multivariable problems, rather than creating a Python array of stochastic variables, addressing the size keyword in the call to a Stochastic variable creates multivariate array of (independent) stochastic variables. The array behaves like a Numpy array when used like one, and references to its value attribute return Numpy arrays.
The size argument also solves the annoying case where you may have many variables $\beta_i, \; i = 1,...,N$ you wish to model. Instead of creating arbitrary names and variables for each one, like:
beta_1 = pm.Uniform("beta_1", 0, 1)
beta_2 = pm.Uniform("beta_2", 0, 1)
...
we can instead wrap them into a single variable:
betas = pm.Uniform("betas", 0, 1, size=N)
Calling random()
We can also call on a stochastic variable's random() method, which (given the parent values) will generate a new, random value. Below we demonstrate this using the texting example from the previous chapter.
End of explanation
"""
type(lambda_1 + lambda_2)
"""
Explanation: The call to random stores a new value into the variable's value attribute. In fact, this new value is stored in the computer's cache for faster recall and efficiency.
Warning: Don't update stochastic variables' values in-place.
Straight from the PyMC docs, we quote [4]:
Stochastic objects' values should not be updated in-place. This confuses PyMC's caching scheme... The only way a stochastic variable's value should be updated is using statements of the following form:
A.value = new_value
The following are in-place updates and should never be used:
A.value += 3
A.value[2,1] = 5
A.value.attribute = new_attribute_value
Deterministic variables
Since most variables you will be modeling are stochastic, we distinguish deterministic variables with a pymc.deterministic wrapper. (If you are unfamiliar with Python wrappers (also called decorators), that's no problem. Just prepend the pymc.deterministic decorator before the variable declaration and you're good to go. No need to know more. ) The declaration of a deterministic variable uses a Python function:
@pm.deterministic
def some_deterministic_var(v1=v1,):
#jelly goes here.
For all purposes, we can treat the object some_deterministic_var as a variable and not a Python function.
Prepending with the wrapper is the easiest way, but not the only way, to create deterministic variables: elementary operations, like addition, exponentials etc. implicitly create deterministic variables. For example, the following returns a deterministic variable:
End of explanation
"""
import numpy as np
n_data_points = 5 # in CH1 we had ~70 data points
@pm.deterministic
def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
out = np.zeros(n_data_points)
out[:tau] = lambda_1 # lambda before tau is lambda1
out[tau:] = lambda_2 # lambda after tau is lambda2
return out
"""
Explanation: The use of the deterministic wrapper was seen in the previous chapter's text-message example. Recall the model for $\lambda$ looked like:
$$
\lambda =
\begin{cases}
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$
And in PyMC code:
End of explanation
"""
%matplotlib inline
from IPython.core.pylabtools import figsize
from matplotlib import pyplot as plt
figsize(12.5, 4)
samples = [lambda_1.random() for i in range(20000)]
plt.hist(samples, bins=70, normed=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
"""
Explanation: Clearly, if $\tau, \lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable.
Inside the deterministic decorator, the Stochastic variables passed in behave like scalars or Numpy arrays (if multivariable), and not like Stochastic variables. For example, running the following:
@pm.deterministic
def some_deterministic(stoch=some_stochastic_var):
return stoch.value**2
will return an AttributeError detailing that stoch does not have a value attribute. It simply needs to be stoch**2. During the learning phase, it's the variable's value that is repeatedly passed in, not the actual variable.
Notice in the creation of the deterministic function we added defaults to each variable used in the function. This is a necessary step, and all variables must have default values.
Including observations in the Model
At this point, it may not look like it, but we have fully specified our priors. For example, we can ask and answer questions like "What does my prior distribution of $\lambda_1$ look like?"
End of explanation
"""
data = np.array([10, 5])
fixed_variable = pm.Poisson("fxd", 1, value=data, observed=True)
print "value: ", fixed_variable.value
print "calling .random()"
fixed_variable.random()
print "value: ", fixed_variable.value
"""
Explanation: To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.
PyMC stochastic variables have a keyword argument observed which accepts a boolean (False by default). The keyword observed has a very simple role: fix the variable's current value, i.e. make value immutable. We have to specify an initial value in the variable's creation, equal to the observations we wish to include, typically an array (and it should be an Numpy array for speed). For example:
End of explanation
"""
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
obs = pm.Poisson("obs", lambda_, value=data, observed=True)
print obs.value
"""
Explanation: This is how we include data into our models: initializing a stochastic variable to have a fixed value.
To complete our text message example, we fix the PyMC variable observations to the observed dataset.
End of explanation
"""
model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])
"""
Explanation: Finally...
We wrap all the created variables into a pm.Model class. With this Model class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a Model class. I may or may not use this class in future examples ;)
End of explanation
"""
tau = pm.rdiscrete_uniform(0, 80)
print tau
"""
Explanation: Modeling approaches
A good starting point in Bayesian modeling is to think about how your data might have been generated. Put yourself in an omniscient position, and try to imagine how you would recreate the dataset.
In the last chapter we investigated text message data. We begin by asking how our observations may have been generated:
We started by thinking "what is the best random variable to describe this count data?" A Poisson random variable is a good candidate because it can represent count data. So we model the number of sms's received as sampled from a Poisson distribution.
Next, we think, "Ok, assuming sms's are Poisson-distributed, what do I need for the Poisson distribution?" Well, the Poisson distribution has a parameter $\lambda$.
Do we know $\lambda$? No. In fact, we have a suspicion that there are two $\lambda$ values, one for the earlier behaviour and one for the latter behaviour. We don't know when the behaviour switches though, but call the switchpoint $\tau$.
What is a good distribution for the two $\lambda$s? The exponential is good, as it assigns probabilities to positive real numbers. Well the exponential distribution has a parameter too, call it $\alpha$.
Do we know what the parameter $\alpha$ might be? No. At this point, we could continue and assign a distribution to $\alpha$, but it's better to stop once we reach a set level of ignorance: whereas we have a prior belief about $\lambda$, ("it probably changes over time", "it's likely between 10 and 30", etc.), we don't really have any strong beliefs about $\alpha$. So it's best to stop here.
What is a good value for $\alpha$ then? We think that the $\lambda$s are between 10-30, so if we set $\alpha$ really low (which corresponds to larger probability on high values) we are not reflecting our prior well. Similar, a too-high alpha misses our prior belief as well. A good idea for $\alpha$ as to reflect our belief is to set the value so that the mean of $\lambda$, given $\alpha$, is equal to our observed mean. This was shown in the last chapter.
We have no expert opinion of when $\tau$ might have occurred. So we will suppose $\tau$ is from a discrete uniform distribution over the entire timespan.
Below we give a graphical visualization of this, where arrows denote parent-child relationships. (provided by the Daft Python library )
<img src="http://i.imgur.com/7J30oCG.png" width = 700/>
PyMC, and other probabilistic programming languages, have been designed to tell these data-generation stories. More generally, B. Cronin writes [5]:
Probabilistic programming will unlock narrative explanations of data, one of the holy grails of business analytics and the unsung hero of scientific persuasion. People think in terms of stories - thus the unreasonable power of the anecdote to drive decision-making, well-founded or not. But existing analytics largely fails to provide this kind of story; instead, numbers seemingly appear out of thin air, with little of the causal context that humans prefer when weighing their options.
Same story; different ending.
Interestingly, we can create new datasets by retelling the story.
For example, if we reverse the above steps, we can simulate a possible realization of the dataset.
1. Specify when the user's behaviour switches by sampling from $\text{DiscreteUniform}(0, 80)$:
End of explanation
"""
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
print lambda_1, lambda_2
"""
Explanation: 2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution:
End of explanation
"""
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
"""
Explanation: 3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example:
End of explanation
"""
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
"""
Explanation: 4. Plot the artificial dataset:
End of explanation
"""
def plot_artificial_sms_dataset():
tau = pm.rdiscrete_uniform(0, 80)
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlim(0, 80)
figsize(12.5, 5)
plt.title("More example of artificial datasets")
for i in range(1, 5):
plt.subplot(4, 1, i)
plot_artificial_sms_dataset()
"""
Explanation: It is okay that our fictional dataset does not look like our observed dataset: the probability is incredibly small it indeed would. PyMC's engine is designed to find good parameters, $\lambda_i, \tau$, that maximize this probability.
The ability to generate artificial datasets is an interesting side effect of our modeling, and we will see that this ability is a very important method of Bayesian inference. We produce a few more datasets below:
End of explanation
"""
import pymc as pm
# The parameters are the bounds of the Uniform.
p = pm.Uniform('p', lower=0, upper=1)
"""
Explanation: Later we will see how we use this to make predictions and test the appropriateness of our models.
Example: Bayesian A/B testing
A/B testing is a statistical design pattern for determining the difference of effectiveness between two different treatments. For example, a pharmaceutical company is interested in the effectiveness of drug A vs drug B. The company will test drug A on some fraction of their trials, and drug B on the other fraction (this fraction is often 1/2, but we will relax this assumption). After performing enough trials, the in-house statisticians sift through the data to determine which drug yielded better results.
Similarly, front-end web developers are interested in which design of their website yields more sales or some other metric of interest. They will route some fraction of visitors to site A, and the other fraction to site B, and record if the visit yielded a sale or not. The data is recorded (in real-time), and analyzed afterwards.
Often, the post-experiment analysis is done using something called a hypothesis test like difference of means test or difference of proportions test. This involves often misunderstood quantities like a "Z-score" and even more confusing "p-values" (please don't ask). If you have taken a statistics course, you have probably been taught this technique (though not necessarily learned this technique). And if you were like me, you may have felt uncomfortable with their derivation -- good: the Bayesian approach to this problem is much more natural.
A Simple Case
As this is a hacker book, we'll continue with the web-dev example. For the moment, we will focus on the analysis of site A only. Assume that there is some true $0 \lt p_A \lt 1$ probability that users who, upon shown site A, eventually purchase from the site. This is the true effectiveness of site A. Currently, this quantity is unknown to us.
Suppose site A was shown to $N$ people, and $n$ people purchased from the site. One might conclude hastily that $p_A = \frac{n}{N}$. Unfortunately, the observed frequency $\frac{n}{N}$ does not necessarily equal $p_A$ -- there is a difference between the observed frequency and the true frequency of an event. The true frequency can be interpreted as the probability of an event occurring. For example, the true frequency of rolling a 1 on a 6-sided die is $\frac{1}{6}$. Knowing the true frequency of events like:
fraction of users who make purchases,
frequency of social attributes,
percent of internet users with cats etc.
are common requests we ask of Nature. Unfortunately, often Nature hides the true frequency from us and we must infer it from observed data.
The observed frequency is then the frequency we observe: say rolling the die 100 times you may observe 20 rolls of 1. The observed frequency, 0.2, differs from the true frequency, $\frac{1}{6}$. We can use Bayesian statistics to infer probable values of the true frequency using an appropriate prior and observed data.
With respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be.
To set up a Bayesian model, we need to assign prior distributions to our unknown quantities. A priori, what do we think $p_A$ might be? For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]:
End of explanation
"""
# set constants
p_true = 0.05 # remember, this is unknown.
N = 1500
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurrences = pm.rbernoulli(p_true, N)
print occurrences # Remember: Python treats True == 1, and False == 0
print occurrences.sum()
"""
Explanation: Had we had stronger beliefs, we could have expressed them in the prior above.
For this example, consider $p_A = 0.05$, and $N = 1500$ users shown site A, and we will simulate whether the user made a purchase or not. To simulate this from $N$ trials, we will use a Bernoulli distribution: if $X\ \sim \text{Ber}(p)$, then $X$ is 1 with probability $p$ and 0 with probability $1 - p$. Of course, in practice we do not know $p_A$, but we will use it here to simulate the data.
End of explanation
"""
# Occurrences.mean is equal to n/N.
print "What is the observed frequency in Group A? %.4f" % occurrences.mean()
print "Does this equal the true frequency? %s" % (occurrences.mean() == p_true)
"""
Explanation: The observed frequency is:
End of explanation
"""
# include the observations, which are Bernoulli
obs = pm.Bernoulli("obs", p, value=occurrences, observed=True)
# To be explained in chapter 3
mcmc = pm.MCMC([p, obs])
mcmc.sample(18000, 1000)
"""
Explanation: We combine the observations into the PyMC observed variable, and run our inference algorithm:
End of explanation
"""
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(mcmc.trace("p")[:], bins=25, histtype="stepfilled", normed=True)
plt.legend()
"""
Explanation: We plot the posterior distribution of the unknown $p_A$ below:
End of explanation
"""
import pymc as pm
figsize(12, 4)
# these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
# notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
# generate some observations
observations_A = pm.rbernoulli(true_p_A, N_A)
observations_B = pm.rbernoulli(true_p_B, N_B)
print "Obs from Site A: ", observations_A[:30].astype(int), "..."
print "Obs from Site B: ", observations_B[:30].astype(int), "..."
print observations_A.mean()
print observations_B.mean()
# Set up the pymc model. Again assume Uniform priors for p_A and p_B.
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
@pm.deterministic
def delta(p_A=p_A, p_B=p_B):
return p_A - p_B
# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, value=observations_A, observed=True)
obs_B = pm.Bernoulli("obs_B", p_B, value=observations_B, observed=True)
# To be explained in chapter 3.
mcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])
mcmc.sample(20000, 1000)
"""
Explanation: Our posterior distribution puts most weight near the true value of $p_A$, but also some weight in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.
A and B Together
A similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the difference between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, and $\text{delta} = p_A - p_B$, all at once. We can do this using PyMC's deterministic variables. (We'll assume for this exercise that $p_B = 0.04$, so $\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$) and we will simulate site B's data like we did for site A's data )
End of explanation
"""
p_A_samples = mcmc.trace("p_A")[:]
p_B_samples = mcmc.trace("p_B")[:]
delta_samples = mcmc.trace("delta")[:]
figsize(12.5, 10)
# histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", normed=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", normed=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", normed=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right");
"""
Explanation: Below we plot the posterior distributions for the three unknowns:
End of explanation
"""
# Count the number of samples less than 0, i.e. the area under the curve
# before 0, represent the probability that site A is worse than site B.
print "Probability site A is WORSE than site B: %.3f" % \
(delta_samples < 0).mean()
print "Probability site A is BETTER than site B: %.3f" % \
(delta_samples > 0).mean()
"""
Explanation: Notice that because N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
With respect to the posterior distribution of $\text{delta}$, we can see that the majority of the distribution is above $\text{delta}=0$, implying that site A's response is likely better than site B's response. The probability that this inference is incorrect is easily computable:
End of explanation
"""
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
N, p = parameters[i]
_x = np.arange(N + 1)
plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
edgecolor=colors[i],
alpha=0.6,
label="$N$: %d, $p$: %.1f" % (N, p),
linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables");
"""
Explanation: If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has fewer samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
Try playing with the parameters true_p_A, true_p_B, N_A, and N_B, to see what the posterior of $\text{delta}$ looks like. Notice that in all of this, the difference in sample sizes between site A and site B never had to be mentioned: it fits naturally into Bayesian analysis.
I hope the readers feel this style of A/B testing is more natural than hypothesis testing, which has probably confused more than helped practitioners. Later in this book, we will see two extensions of this model: the first to help dynamically adjust for bad sites, and the second will improve the speed of this computation by reducing the analysis to a single equation.
An algorithm for human deceit
Social data has an additional layer of interest as people are not always honest with responses, which adds a further complication into inference. For example, simply asking individuals "Have you ever cheated on a test?" will surely contain some rate of dishonesty. What you can say for certain is that the true rate is at least as high as your observed rate (assuming individuals lie only about not cheating; I cannot imagine one who would admit "Yes" to cheating when in fact they hadn't cheated).
To present an elegant solution to circumventing this dishonesty problem, and to demonstrate Bayesian modeling, we first need to introduce the binomial distribution.
The Binomial Distribution
The binomial distribution is one of the most popular distributions, mostly because of its simplicity and usefulness. Unlike the other distributions we have encountered thus far in the book, the binomial distribution has 2 parameters: $N$, a positive integer representing $N$ trials or number of instances of potential events, and $p$, the probability of an event occurring in a single trial. Like the Poisson distribution, it is a discrete distribution, but unlike the Poisson distribution, it only weighs integers from $0$ to $N$. The mass distribution looks like:
$$P( X = k ) = {{N}\choose{k}} p^k(1-p)^{N-k}$$
If $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \sim \text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \le X \le N$), and $p$ is the probability of a single event. The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the mass probability distribution for varying parameters.
End of explanation
"""
import pymc as pm
N = 100
p = pm.Uniform("freq_cheating", 0, 1)
"""
Explanation: The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
The expected value of a Bernoulli random variable is $p$. This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$.
Example: Cheating among students
We will use the binomial distribution to determine the frequency of students cheating during an exam. If we let $N$ be the total number of students who took the exam, and assuming each student is interviewed post-exam (answering without consequence), we will receive integer $X$ "Yes I did cheat" answers. We then find the posterior distribution of $p$, given $N$, some specified prior on $p$, and observed data $X$.
This is a completely absurd model. No student, even with a free-pass against punishment, would admit to cheating. What we need is a better algorithm to ask students if they had cheated. Ideally the algorithm should encourage individuals to be honest while preserving privacy. The following proposed algorithm is a solution I greatly admire for its ingenuity and effectiveness:
In the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers "Yes, I did cheat" if the coin flip lands heads, and "No, I did not cheat", if the coin flip lands tails. This way, the interviewer does not know if a "Yes" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers.
I call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some Yes's are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC to dig through this noisy model, and find a posterior distribution for the true frequency of liars.
Suppose 100 students are being surveyed for cheating, and we wish to find $p$, the proportion of cheaters. There are a few ways we can model this in PyMC. I'll demonstrate the most explicit way, and later show a simplified version. Both versions arrive at the same inference. In our data-generation model, we sample $p$, the true proportion of cheaters, from a prior. Since we are quite ignorant about $p$, we will assign it a $\text{Uniform}(0,1)$ prior.
End of explanation
"""
true_answers = pm.Bernoulli("truths", p, size=N)
"""
Explanation: Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students: 1 implies they cheated and 0 implies they did not.
End of explanation
"""
first_coin_flips = pm.Bernoulli("first_flips", 0.5, size=N)
print first_coin_flips.value
"""
Explanation: If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$: denote a 1 as a Heads and 0 a Tails.
End of explanation
"""
second_coin_flips = pm.Bernoulli("second_flips", 0.5, size=N)
"""
Explanation: Although not everyone flips a second time, we can still model the possible realization of second coin-flips:
End of explanation
"""
@pm.deterministic
def observed_proportion(t_a=true_answers,
fc=first_coin_flips,
sc=second_coin_flips):
observed = fc * t_a + (1 - fc) * sc
return observed.sum() / float(N)
"""
Explanation: Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC deterministic variable:
End of explanation
"""
observed_proportion.value
"""
Explanation: The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails and the second is heads, and are 0 otherwise. Finally, the last line sums this vector and divides by float(N), producing a proportion.
End of explanation
"""
X = 35
observations = pm.Binomial("obs", N, observed_proportion, observed=True,
value=X)
"""
Explanation: Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
The researchers observe a Binomial random variable, with N = 100 and p = observed_proportion with value = 35:
End of explanation
"""
model = pm.Model([p, true_answers, first_coin_flips,
second_coin_flips, observed_proportion, observations])
# To be explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(40000, 15000)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.legend();
"""
Explanation: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
End of explanation
"""
p = pm.Uniform("freq_cheating", 0, 1)
@pm.deterministic
def p_skewed(p=p):
return 0.5 * p + 0.25
"""
Explanation: With regard to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 and 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad, since the true value most likely lives in a window of length 0.3. Have we even gained anything, or are we still too uncertain about the true frequency?
I would argue, yes, we have discovered something. It is implausible, according to our posterior, that there are no cheaters, i.e. the posterior assigns low probability to $p=0$. We started with a uniform prior, treating all values of $p$ as equally plausible, but the data ruled out $p=0$ as a possibility, so we can be confident that there were cheaters.
This kind of algorithm can be used to gather private information from users and be reasonably confident that the data, though noisy, is truthful.
Alternative PyMC Model
Given a value for $p$ (which from our god-like position we know), we can find the probability the student will answer yes:
\begin{align}
P(\text{"Yes"}) &= P( \text{Heads on first coin} )P( \text{cheater} ) + P( \text{Tails on first coin} )P( \text{Heads on second coin} ) \\
& = \frac{1}{2}p + \frac{1}{2}\frac{1}{2}\\
& = \frac{p}{2} + \frac{1}{4}
\end{align}
Thus, knowing $p$ we know the probability a student will respond "Yes". In PyMC, we can create a deterministic function to evaluate the probability of responding "Yes", given $p$:
End of explanation
"""
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed,
value=35, observed=True)
"""
Explanation: I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
If we know the probability of respondents saying "Yes", which is p_skewed, and we have $N=100$ students, the number of "Yes" responses is a binomial random variable with parameters N and p_skewed.
This is where we include our observed 35 "Yes" responses. In the declaration of the pm.Binomial, we include value = 35 and observed = True.
End of explanation
"""
model = pm.Model([yes_responses, p_skewed, p])
# To Be Explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(25000, 2500)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend();
"""
Explanation: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
End of explanation
"""
N = 10
x = np.empty(N, dtype=object)
for i in range(0, N):
x[i] = pm.Exponential('x_%i' % i, (i + 1) ** 2)
"""
Explanation: More PyMC Tricks
Protip: Lighter deterministic variables with Lambda class
Sometimes writing a deterministic function using the @pm.deterministic decorator can seem like a chore, especially for a small function. I have already mentioned that elementary math operations can produce deterministic variables implicitly, but what about operations like indexing or slicing? Built-in Lambda functions can handle this with the elegance and simplicity required. For example,
beta = pm.Normal("coefficients", 0, 1, size=(N, 1))
x = np.random.randn(N, 1)
linear_combination = pm.Lambda(lambda x=x, beta=beta: np.dot(x.T, beta))
Protip: Arrays of PyMC variables
There is no reason why we cannot store multiple heterogeneous PyMC variables in a Numpy array. Just remember to set the dtype of the array to object upon initialization. For example:
End of explanation
"""
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
# drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
# plot it, as a function of temperature (the first column)
print "Temp (F), O-Ring failure?"
print challenger_data
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature")
"""
Explanation: The remainder of this chapter examines some practical examples of PyMC and PyMC modeling:
Example: Challenger Space Shuttle Disaster <span id="challenger"/>
On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23, (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend. The data are shown below (see [1]):
End of explanation
"""
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.legend();
"""
Explanation: It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
We need a function of temperature, call it $p(t)$, that is bounded between 0 and 1 (so as to model a probability) and changes from 1 to 0 as we increase temperature. There are actually many such functions, but the most popular choice is the logistic function.
$$p(t) = \frac{1}{ 1 + e^{ \;\beta t } } $$
In this model, $\beta$ is the variable we are uncertain about. Below is the function plotted for $\beta = 1, 3, -5$.
End of explanation
"""
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.legend(loc="lower left");
"""
Explanation: But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function:
$$p(t) = \frac{1}{ 1 + e^{ \;\beta t + \alpha } } $$
Some plots are below, with differing $\alpha$.
End of explanation
"""
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
    _sigma = 1. / np.sqrt(_tau)  # precision tau corresponds to sigma = 1/sqrt(tau)
    plt.plot(x, nor.pdf(x, _mu, scale=_sigma),
             label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
    plt.fill_between(x, nor.pdf(x, _mu, scale=_sigma), color=_color,
                     alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
"""
Explanation: Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
Let's start modeling this in PyMC. The $\beta, \alpha$ parameters have no reason to be positive, bounded or relatively large, so they are best modeled by a Normal random variable, introduced next.
Normal distributions
A Normal random variable, denoted $X \sim N(\mu, 1/\tau)$, has a distribution with two parameters: the mean, $\mu$, and the precision, $\tau$. Those familiar with the Normal distribution already have probably seen $\sigma^2$ instead of $\tau^{-1}$. They are in fact reciprocals of each other. The change was motivated by simpler mathematical analysis and is an artifact of older Bayesian methods. Just remember: the smaller $\tau$, the larger the spread of the distribution (i.e. we are more uncertain); the larger $\tau$, the tighter the distribution (i.e. we are more certain). Regardless, $\tau$ is always positive.
The probability density function of a $N( \mu, 1/\tau)$ random variable is:
$$ f(x | \mu, \tau) = \sqrt{\frac{\tau}{2\pi}} \exp\left( -\frac{\tau}{2} (x-\mu)^2 \right) $$
We plot some different density functions below.
End of explanation
"""
import pymc as pm
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
# notice the `value` here. We explain why below.
beta = pm.Normal("beta", 0, 0.001, value=0)
alpha = pm.Normal("alpha", 0, 0.001, value=0)
@pm.deterministic
def p(t=temperature, alpha=alpha, beta=beta):
return 1.0 / (1. + np.exp(beta * t + alpha))
"""
Explanation: A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter:
$$ E[ X | \mu, \tau] = \mu$$
and its variance is equal to the inverse of $\tau$:
$$Var( X | \mu, \tau ) = \frac{1}{\tau}$$
Below we continue our modeling of the Challenger space craft:
End of explanation
"""
p.value
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
observed = pm.Bernoulli("bernoulli_obs", p, value=D, observed=True)
model = pm.Model([observed, beta, alpha])
# Mysterious code to be explained in Chapter 3
map_ = pm.MAP(model)
map_.fit()
mcmc = pm.MCMC(model)
mcmc.sample(120000, 100000, 2)
"""
Explanation: We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like:
$$ \text{Defect Incident, $D_i$} \sim \text{Ber}( \;p(t_i)\; ), \;\; i=1..N$$
where $p(t)$ is our logistic function and $t_i$ are the temperatures we have observations about. Notice in the above code we had to set the values of beta and alpha to 0. The reason for this is that if beta and alpha are very large, they make p equal to 1 or 0. Unfortunately, pm.Bernoulli does not like probabilities of exactly 0 or 1, though they are mathematically well-defined probabilities. So by setting the coefficient values to 0, we set the variable p to be a reasonable starting value. This has no effect on our results, nor does it mean we are including any additional information in our prior. It is simply a computational caveat in PyMC.
End of explanation
"""
alpha_samples = mcmc.trace('alpha')[:, None] # best to make them 1d
beta_samples = mcmc.trace('beta')[:, None]
figsize(12.5, 6)
# histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\beta$", color="#7A68A6", normed=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\alpha$", color="#A60628", normed=True)
plt.legend();
"""
Explanation: We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$:
End of explanation
"""
t = np.linspace(temperature.min() - 5, temperature.max() + 5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature");
"""
Explanation: All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
Similarly, all $\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\alpha$ is significantly less than 0.
Regarding the spread of the data, we are very uncertain about what the true parameters might be (though considering the low sample size and the large overlap of defects-to-nondefects this behaviour is perhaps expected).
Next, let's look at the expected probability for a specific value of the temperature. That is, we average over all samples from the posterior to get a likely value for $p(t_i)$.
End of explanation
"""
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 2.5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.025, 0.975], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$");
"""
Explanation: Above we also plotted two possible realizations of what the actual underlying system might be. Each is as likely as any other draw. The blue line is what occurs when we average all 20,000 possible dotted lines together.
An interesting question to ask is for what temperatures are we most uncertain about the defect-probability? Below we plot the expected value line and the associated 95% intervals for each temperature.
End of explanation
"""
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occurring in O-ring");
"""
Explanation: The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
More generally, we can see that as the temperature nears 60 degrees, the CIs spread out over [0,1] quickly. As we pass 70 degrees, the CIs tighten again. This can give us insight about how to proceed next: we should probably test more O-rings at temperatures around 60-65 degrees to get a better estimate of probabilities in that range. Similarly, when reporting your estimates to scientists, you should be very cautious about simply telling them the expected probability, as we can see this does not reflect how wide the posterior distribution is.
What about the day of the Challenger disaster?
On the day of the Challenger disaster, the outside temperature was 31 degrees Fahrenheit. What is the posterior distribution of a defect occurring, given this temperature? The distribution is plotted below. It looks almost guaranteed that the Challenger was going to be subject to defective O-rings.
End of explanation
"""
simulated = pm.Bernoulli("bernoulli_sim", p)
N = 10000
mcmc = pm.MCMC([simulated, alpha, beta, observed])
mcmc.sample(N)
figsize(12.5, 5)
simulations = mcmc.trace("bernoulli_sim")[:]
print simulations.shape
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
ax = plt.subplot(4, 1, i + 1)
plt.scatter(temperature, simulations[1000 * i, :], color="k",
s=50, alpha=0.6)
"""
Explanation: Is our model appropriate?
The skeptical reader will say "You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\; \forall t$, which guarantees a defect always occurring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. On the other hand, if I did choose the logistic function for $p(t)$, but specified all my priors to be very tight around 0, likely we would have very different posterior distributions. How do we know our model is an expression of the data? This encourages us to measure the model's goodness of fit.
We can think: how can we test whether our model is a bad fit? An idea is to compare observed data (which, if we recall, is a fixed stochastic variable) with an artificial dataset that we can simulate. The rationale is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then our model likely does not accurately represent the observed data.
Previously in this Chapter, we simulated artificial datasets for the SMS example. To do this, we sampled values from the priors. We saw how varied the resulting datasets looked, and rarely did they mimic our observed dataset. In the current example, we should sample from the posterior distributions to create very plausible datasets. Luckily, our Bayesian framework makes this very easy. We only need to create a new Stochastic variable that is exactly the same as the variable that stored the observations, but minus the observations themselves. If you recall, our Stochastic variable that stored our observed data was:
observed = pm.Bernoulli( "bernoulli_obs", p, value=D, observed=True)
Hence we create:
simulated_data = pm.Bernoulli("simulation_data", p)
Let's simulate 10,000 datasets:
End of explanation
"""
posterior_probability = simulations.mean(axis=0)
print "posterior prob of defect | realized defect "
for i in range(len(D)):
print "%.2f | %d" % (posterior_probability[i], D[i])
"""
Explanation: Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).
We wish to assess how good our model is. "Good" is a subjective term of course, so results must be relative to other models.
We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use Bayesian p-values. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.
The following graphical test is a novel data-viz approach to logistic regression. The plots are called separation plots[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible original paper, but I'll summarize their use here.
For each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \;\text{Defect} = 1 | t, \alpha, \beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. For example, for the model we used above:
End of explanation
"""
ix = np.argsort(posterior_probability)
print "probb | defect "
for i in range(len(D)):
print "%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]])
"""
Explanation: Next we sort each column by the posterior probabilities:
End of explanation
"""
from separation_plot import separation_plot
figsize(11., 1.5)
separation_plot(posterior_probability, D)
"""
Explanation: We can present the above data better in a figure: I've wrapped this up into a separation_plot function.
End of explanation
"""
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the probability of defect is equal to if a defect occurred or not.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7. / 23 * np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model")
"""
Explanation: The snaking-line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions.
The black vertical line is the expected number of defects we should observe, given this model. This allows the user to see how the total number of events predicted by the model compares to the actual number of events in the data.
It is much more informative to compare this to separation plots for other models. Below we compare our model (top) versus three others:
the perfect model, which predicts the posterior probability to be equal to 1 if a defect did occur.
a completely random model, which predicts random probabilities regardless of temperature.
a constant model: where $P(D = 1 \; | \; t) = c, \;\; \forall t$. The best choice for $c$ is the observed frequency of defects, in this case 7/23.
End of explanation
"""
# type your code here.
figsize(12.5, 4)
plt.scatter(alpha_samples, beta_samples, alpha=0.1)
plt.title("Why does the plot look like this?")
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$")
"""
Explanation: In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
For the perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course the perfect model is only for demonstration, and we cannot draw any scientific inference from it.
Exercises
1. Try putting in extreme values for our observations in the cheating example. What happens if we observe 25 affirmative responses? 10? 50?
2. Try plotting $\alpha$ samples versus $\beta$ samples. Why might the resulting plot look like this?
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: References
[1] Dalal, Fowlkes and Hoadley (1989),JASA, 84, 945-957.
[2] German Rodriguez. Datasets. In WWS509. Retrieved 30/01/2013, from http://data.princeton.edu/wws509/datasets/#smoking.
[3] McLeish, Don, and Cyntha Struthers. STATISTICS 450/850 Estimation and Hypothesis Testing. Winter 2012. Waterloo, Ontario: 2012. Print.
[4] Fonnesbeck, Christopher. "Building Models." PyMC-Devs. N.p., n.d. Web. 26 Feb 2013. http://pymc-devs.github.com/pymc/modelbuilding.html.
[5] Cronin, Beau. "Why Probabilistic Programming Matters." 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. https://plus.google.com/u/0/107971134877020469960/posts/KpeRdJKR6Z1.
[6] S.P. Brooks, E.A. Catchpole, and B.J.T. Morgan. Bayesian animal survival estimation. Statistical Science, 15: 357–376, 2000
[7] Gelman, Andrew. "Philosophy and the practice of Bayesian statistics." British Journal of Mathematical and Statistical Psychology. (2012): n. page. Web. 2 Apr. 2013.
[8] Greenhill, Brian, Michael D. Ward, and Audrey Sacks. "The Separation Plot: A New Visual Method for Evaluating the Fit of Binary Models." American Journal of Political Science. 55.No.4 (2011): n. page. Web. 2 Apr. 2013.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/pcmdi/cmip6/models/pcmdi-test-1-0/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'pcmdi-test-1-0', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: PCMDI
Source ID: PCMDI-TEST-1-0
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:36
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
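# Illustrative sketch only (not a real model entry): an INTEGER property is set with a
# plain number; 3600 below is a made-up placeholder, not a documented value.
# DOC.set_value(3600)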
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embeded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
queq/calibpy
|
docs/ipynb-samples/Rich Output.ipynb
|
mit
|
from IPython.display import display
"""
Explanation: Rich Output
In Python, objects can declare their textual representation using the __repr__ method. IPython expands on this idea and allows objects to declare other, rich representations including:
HTML
JSON
PNG
JPEG
SVG
LaTeX
A single object can declare some or all of these representations; all are handled by IPython's display system. This Notebook shows how you can use this display system to incorporate a broad range of content into your Notebooks.
Basic display imports
The display function is a general purpose tool for displaying different representations of objects. Think of it as print for these rich representations.
End of explanation
"""
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg
)
"""
Explanation: A few points:
Calling display on an object will send all possible representations to the Notebook.
These representations are stored in the Notebook document.
In general the Notebook will use the richest available representation.
If you want to display a particular representation, there are specific functions for that:
End of explanation
"""
from IPython.display import Image
i = Image(filename='../images/ipython_logo.png')
"""
Explanation: Images
To work with images (JPEG, PNG) use the Image class.
End of explanation
"""
i
"""
Explanation: Returning an Image object from an expression will automatically display it:
End of explanation
"""
display(i)
"""
Explanation: Or you can pass an object with a rich representation to display:
End of explanation
"""
Image(url='http://python.org/images/python-logo.gif')
"""
Explanation: An image can also be displayed from raw data or a URL.
End of explanation
"""
from IPython.display import SVG
SVG(filename='../images/python_logo.svg')
"""
Explanation: SVG images are also supported out of the box.
End of explanation
"""
from IPython.display import Image
img_url = 'http://www.lawrencehallofscience.org/static/scienceview/scienceview.berkeley.edu/html/view/view_assets/images/newview.jpg'
# by default Image data are embedded
Embed = Image(img_url)
# if kwarg `url` is given, the embedding is assumed to be false
SoftLinked = Image(url=img_url)
# In each case, embed can be specified explicitly with the `embed` kwarg
# ForceEmbed = Image(url=img_url, embed=True)
"""
Explanation: Embedded vs non-embedded Images
By default, image data is embedded in the notebook document so that the images can be viewed offline. However it is also possible to tell the Image class to only store a link to the image. Let's see how this works using a webcam at Berkeley.
End of explanation
"""
Embed
"""
Explanation: Here is the embedded version. Note that this image was pulled from the webcam when this code cell was originally run and stored in the Notebook. Unless we rerun this cell, this is not today's image.
End of explanation
"""
SoftLinked
"""
Explanation: Here is today's image from the same webcam at Berkeley (refreshed every minute if you reload the notebook). It is visible only with an active internet connection and should be different from the previous one. Notebooks saved with this kind of image will be smaller and always reflect the current version of the source, but the image won't display offline.
End of explanation
"""
from IPython.display import HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
display(h)
"""
Explanation: Of course, if you re-run this Notebook, the two images will be the same again.
HTML
Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.
End of explanation
"""
%%html
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
"""
Explanation: You can also use the %%html cell magic to accomplish the same thing.
End of explanation
"""
from IPython.display import Javascript
"""
Explanation: JavaScript
The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.
End of explanation
"""
js = Javascript('alert("hi")');
display(js)
"""
Explanation: Pass a string of JavaScript source code to the JavaScript object and then display it.
End of explanation
"""
%%javascript
alert("hi");
"""
Explanation: The same thing can be accomplished using the %%javascript cell magic:
End of explanation
"""
Javascript(
"""$.getScript('//cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')"""
)
%%html
<style type="text/css">
circle {
fill: rgb(31, 119, 180);
fill-opacity: .25;
stroke: rgb(31, 119, 180);
stroke-width: 1px;
}
.leaf circle {
fill: #ff7f0e;
fill-opacity: 1;
}
text {
font: 10px sans-serif;
}
</style>
%%javascript
// element is the jQuery element we will append to
var e = element.get(0);
var diameter = 600,
format = d3.format(",d");
var pack = d3.layout.pack()
.size([diameter - 4, diameter - 4])
.value(function(d) { return d.size; });
var svg = d3.select(e).append("svg")
.attr("width", diameter)
.attr("height", diameter)
.append("g")
.attr("transform", "translate(2,2)");
d3.json("data/flare.json", function(error, root) {
var node = svg.datum(root).selectAll(".node")
.data(pack.nodes)
.enter().append("g")
.attr("class", function(d) { return d.children ? "node" : "leaf node"; })
.attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });
node.append("title")
.text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });
node.append("circle")
.attr("r", function(d) { return d.r; });
node.filter(function(d) { return !d.children; }).append("text")
.attr("dy", ".3em")
.style("text-anchor", "middle")
.text(function(d) { return d.name.substring(0, d.r / 3); });
});
d3.select(self.frameElement).style("height", diameter + "px");
"""
Explanation: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs ones of the d3.js examples.
End of explanation
"""
from IPython.display import Math
Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx')
"""
Explanation: LaTeX
The IPython display system also has builtin support for the display of mathematical expressions typeset in LaTeX, which is rendered in the browser using MathJax.
You can pass raw LaTeX test as a string to the Math object:
End of explanation
"""
from IPython.display import Latex
Latex(r"""\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}""")
"""
Explanation: With the Latex class, you have to include the delimiters yourself. This allows you to use other LaTeX modes such as eqnarray:
End of explanation
"""
%%latex
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
"""
Explanation: Or you can enter LaTeX directly with the %%latex cell magic:
End of explanation
"""
from IPython.display import Audio
Audio(url="http://www.nch.com.au/acm/8k16bitpcm.wav")
"""
Explanation: Audio
IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
End of explanation
"""
import numpy as np
max_time = 3
f1 = 220.0
f2 = 224.0
rate = 8000.0
L = 3
times = np.linspace(0, L, int(rate*L))  # the number of samples must be an integer
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
Audio(data=signal, rate=rate)
"""
Explanation: A NumPy array can be auralized automatically. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as beats occurs. This can be auralized as follows:
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('sjfsUzECqK0')
"""
Explanation: Video
More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load:
End of explanation
"""
from IPython.display import HTML
from base64 import b64encode
video = open("../images/animation.m4v", "rb").read()
video_encoded = b64encode(video).decode('ascii')
video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)
HTML(data=video_tag)
"""
Explanation: Using the nascent video capabilities of modern browsers, you may also be able to display local
videos. At the moment this doesn't work very well in all browsers, so it may or may not work for you;
we will continue testing this and looking for ways to make it more robust.
The following cell loads a local file called animation.m4v, encodes the raw video as base64 for http
transport, and uses the HTML5 video tag to load it. On Chrome 15 it works correctly, displaying a control bar at the bottom with a play/pause button and a location slider.
End of explanation
"""
from IPython.display import IFrame
IFrame('http://jupyter.org', width='100%', height=350)
"""
Explanation: External sites
You can even embed an entire page from another site in an iframe; for example, this
embeds the Jupyter project homepage:
End of explanation
"""
from IPython.display import FileLink, FileLinks
FileLink('Cell Magics.ipynb')
"""
Explanation: Links to local files
IPython provides builtin display classes for generating links to local files. Create a link to a single file using the FileLink object:
End of explanation
"""
FileLinks('.')
"""
Explanation: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well.
End of explanation
"""
|
DS-100/sp17-materials
|
sp17/labs/lab05/lab05.ipynb
|
gpl-3.0
|
# Run this cell to set up the notebook.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from client.api.notebook import Notebook
ok = Notebook('lab05.ok')
"""
Explanation: Lab 5: Relational Algebra in Pandas
End of explanation
"""
young_sailors = pd.DataFrame({
"sid": [2701, 18869, 63940, 21869, 17436],
"sname": ["Jerry", "Morgan", "Danny", "Jack", "Dustin"],
"rating": [8, 6, 4, 9, 3],
"age": [25, 26, 21, 27, 22],
})
salty_sailors = pd.DataFrame({
"sid": [2701, 17436, 45433, 22689, 46535],
"sname": ["Jerry", "Dustin", "Balon", "Euron", "Victarion"],
"rating": [8, 3, 7, 10, 2],
"age": [25, 22, 39, 35, 37],
})
boats = pd.DataFrame({
"bid": [41116, 54505, 50041, 35168, 58324],
"bname": ["The Black Sparrow", "The Great Kraken", "The Prophetess", "Silence", "Iron Victory"],
"color": ["Black", "Orange", "Silver", "Red", "Grey"],
})
reservations = pd.DataFrame({
"sid": [21869, 45433, 18869, 22689, 21869, 17436, 63940, 45433, 21869, 18869],
"bid": [41116, 35168, 50041, 41116, 58324, 50041, 54505, 41116, 50041, 41116],
"day": ["3/1", "3/1", "3/2", "3/2", "3/2", "3/3", "3/3", "3/3", "3/3", "3/4"],
})
"""
Explanation: Boat Club
The Berkeley Boat Club wants to better organize their user data, and they've hired you to do it. Your first job is to implement code for relational algebra operators in python (unlike you, they don't know how to use pandas).
You may want to refer to these slides, to remember what each operation does. You may also want to refer to the pandas documentation.
Here are the Boat Club's databases. Your job is to implement a variety of unary and binary relational algebra operators.
End of explanation
"""
def project(df, columns):
...
project(salty_sailors, ["sname", "age"])
_ = ok.grade('qproject')
_ = ok.backup()
"""
Explanation: Question 1: Projection
Our arguments are a dataframe and a list of columns to select. This should be a simple one :)
End of explanation
"""
def select(df, condition):
...
select(young_sailors, lambda x: x["rating"] > 6)
_ = ok.grade('qselect')
_ = ok.backup()
"""
Explanation: Question 2: Selection
For selection, our arguments are a dataframe and a function that determines which rows we select. For instance,
good_sailors = select(young_sailors, lambda x: x["rating"] > 6)
End of explanation
"""
def union(df1, df2):
...
union(young_sailors, salty_sailors)
_ = ok.grade('qunion')
_ = ok.backup()
"""
Explanation: Question 3: Union
This is a binary operator, so we pass in two dataframes as our arguments. You can assume that the two dataframes are union compatible - that is, that they have the same number of columns, and their columns have the same types.
End of explanation
"""
def intersection(df1, df2):
...
intersection(young_sailors, salty_sailors)
_ = ok.grade('qintersection')
_ = ok.backup()
"""
Explanation: Question 4: Intersection
Similar to Union, this is also a binary operator.
End of explanation
"""
def difference(df1, df2):
return df1.where(df1.apply(lambda x: ~x.isin(df2[x.name]))).dropna()
difference(young_sailors, salty_sailors)
_ = ok.grade('qdifference')
_ = ok.backup()
"""
Explanation: Question 5: Set-difference
This one is a bit harder. You might just want to convert the rows of the dataframes to tuples, if you're having trouble.
End of explanation
"""
def cross_product(df1, df2):
# add a column "tmp-key" of zeros to df1 and df2
df1 = pd.concat([df1, pd.Series(0, index=df1.index, name="tmp-key")], axis=1)
df2 = pd.concat([df2, pd.Series(0, index=df2.index, name="tmp-key")], axis=1)
# use Pandas merge functionality along with drop
# to compute outer product and remove extra column
return (pd
.merge(df1, df2, on="tmp-key")
...
cross_product(young_sailors, salty_sailors)
_ = ok.grade('qcross_product')
_ = ok.backup()
"""
Explanation: Question 6: Cross-product
This one is also tricky, so we've provided some help for you. Think about how the new key column could be used...
End of explanation
"""
def theta_join(df1, df2, condition):
return select(cross_product(df1, df2), condition)
theta_join(young_sailors, salty_sailors, lambda x: x["age_x"] > x["age_y"])
_ = ok.grade('qtheta_join')
_ = ok.backup()
"""
Explanation: Question 7: Theta-Join
Can you do this by using two other relational operators?
End of explanation
"""
def natural_join(df1, df2, attr):
return select(cross_product(df1, df2), lambda x: x[attr+"_x"] == x[attr+"_y"])
all_sailors = union(young_sailors, salty_sailors)
sailor_reservations = natural_join(all_sailors, reservations, "sid")
sailors_and_boats = natural_join(sailor_reservations, boats, "bid")
project(sailors_and_boats, ["sname", "bname", "day"])
_ = ok.grade('qnatural_join')
_ = ok.backup()
"""
Explanation: Question 8: Natural Join
Similar to above, try to implement this using two relational operators.
End of explanation
"""
i_finished_the_lab = False
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
"""
Explanation: Submitting your assignment
If you made a good-faith effort to complete the lab, change i_finished_the_lab to True in the cell below. In any case, run the cells below to submit the lab.
End of explanation
"""
|
ajaybhat/DLND
|
Project 1/dlnd-your-first-neural-network.ipynb
|
apache-2.0
|
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
"""
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = lambda x : 1/(1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden,inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error
hidden_errors = np.dot(output_errors, self.weights_hidden_to_output).T # errors propagated to the hidden layer
hidden_grad = hidden_outputs * (1 - hidden_outputs) # hidden layer gradients
# TODO: Update the weights
self.weights_hidden_to_output += self.lr*np.dot(output_errors,hidden_outputs.T) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot((hidden_grad*hidden_errors), inputs.T) # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden , inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output , hidden_outputs)# signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as its input. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
"""
import sys
### Set the hyperparameters here ###
epochs = 1000
learning_rate = 0.008
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values,
                              train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\nProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
# plt.ylim(ymax=0.5)
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods)
unittest.TextTestRunner().run(suite)
"""
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation
"""
|
richiebful/scotusbot
|
.ipynb_checkpoints/JudicialRulings-checkpoint.ipynb
|
gpl-3.0
|
import csv
with open("judicialMetadata.csv", "w+") as metadata:
header = allRecentRecords[0].keys()
writer = csv.DictWriter(metadata, fieldnames=header)
writer.writerows(allRecentRecords)
"""
Explanation: Next block caches metadata for retrieving Supreme Court transcripts into a csv
End of explanation
"""
import pdfquery
import requests
def getPDFTree(url, tempURL):
pdf = requests.get(url)
with open(tempURL, "wb+", buffering=0) as fp:
        fp.write(pdf.content)
        fp.seek(0)  # rewind so PDFQuery parses the file from the start
        pdfTree = pdfquery.PDFQuery(fp)
pdfTree.load()
return pdfTree
url = "https://www.supremecourt.gov/oral_arguments/argument_transcripts/2000/00-6374.pdf"
tempURL = "temp/argument_transcripts/2004/04-603.x,l"
pdfTree = getPDFTree(url, tempURL)
pdfTree.tree.write("temp/argument_transcripts/2006/05-85.xml", pretty_print=True, encoding="utf-8")
TOContents = pdfTree.pq('LTTextLineHorizontal:contains("C O N T E N T S ")')
assert TOContents != None, "Table of contents is formatted differently"
import re
with open("temp/argument_transcripts/2004/04-603.xml") as fp:
text = fp.read()
tagRegex = r"\</?[^\<\>/]*\>"
text = re.sub(tagRegex, "", text)
text = re.sub(r'\n\s*[0-9]+', "\n", text)
text = text.replace('FOURTEENTH STREET, N.W.WASHINGTON, D.C. 20005(800) FOR DEPO ALDERSON REPORTING COMPANY, INC. 1111 FOURTEENTH STREET, N.W. SUITE 400 WASHINGTON, D.C. 20005 (202)289-2260 (800) FOR DEPO', '')
tocLength = text.find("C O N T E N T S ")
print(text[tocLength : ])
"""
Explanation: Should put a csv reading block here for faster processing
End of explanation
"""
|
SimonBiggs/poc-brachyoptimisation
|
Proof of concept with probability minimisation.ipynb
|
agpl-3.0
|
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
%matplotlib inline
from utilities import BasinhoppingWrapper, create_green_cm
green_cm = create_green_cm()
"""
Explanation: Proof of concept
This is a proof of concept for the inclusion of positional uncertainty within the dwell time optimisation algorthim. The focus is on the optimisation method, and as such it is assumed that each seed radially deposits its energy purely based upon the inverse square law.
Two regions are defined, a central target region and a off centre avoid region.
Initialisation
End of explanation
"""
grid_spacing = 0.1 # Was 0.1, using 0.2 to speed up for testing
x_ = np.arange(-0.5, 0.5 + grid_spacing, grid_spacing)
y_ = np.arange(-0.5, 0.5 + grid_spacing, grid_spacing)
z_ = np.arange(-0.5, 0.5 + grid_spacing, grid_spacing)
x, y, z = np.meshgrid(x_, y_, z_)
x = np.ravel(x)
y = np.ravel(y)
z = np.ravel(z)
"""
Explanation: Create the calculation grid
End of explanation
"""
def target(x, y, z):
return (
(x < 0.45) & (x > -0.45) &
(y < 0.45) & (y > -0.45) &
(z < 0.45) & (z > -0.45))
def avoid(x, y, z):
return (
(x < 0.25) & (x > 0.05) &
(y < 0.15) & (y > -0.15) &
(z < 2) & (z > -2))
target_cube = target(x, y, z)
avoid_cube = avoid(x, y, z)
target_cube = target_cube & ~avoid_cube
"""
Explanation: Define the target and avoid cubes
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(
x[target_cube], y[target_cube], z[target_cube],
alpha=0.2, s=2, color='blue')
ax.scatter(
x[avoid_cube], y[avoid_cube], z[avoid_cube],
alpha=0.2, s=2, color='red')
ax.set_xlim([np.min(x_), np.max(x_)])
ax.set_ylim([np.min(y_), np.max(y_)])
ax.set_zlim([np.min(z_), np.max(z_)])
ax.view_init(azim=-90, elev=90)
fig
"""
Explanation: Display the target and avoid cubes
End of explanation
"""
number_of_lines = 16 # Was 16, reduced to 9 for testing
line_start = np.meshgrid(
[-0.39, -0.13, 0.13, 0.39],
[-0.39, -0.13, 0.13, 0.39],
[1])
line_finish = np.array([
line_start[0] + np.random.normal(scale=0.02, size=[4, 4, 1]),
line_start[1] + np.random.normal(scale=0.02, size=[4, 4, 1]),
-line_start[2]])
line_start = np.array([np.ravel(mesh) for mesh in line_start])
line_finish = np.array([np.ravel(mesh) for mesh in line_finish])
"""
Explanation: Create initial equidistant parallel lines to represent catheters. Build in a slight random skew to emulate what occurs physically
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(
x[target_cube], y[target_cube], z[target_cube],
alpha=0.2, s=2, color='blue')
ax.scatter(
x[avoid_cube], y[avoid_cube], z[avoid_cube],
alpha=0.2, s=2, color='red')
ax.scatter(*line_start)
ax.scatter(*line_finish)
for i in range(len(line_start[0])):
plt_coords = [
[line_start[j][i], line_finish[j][i]]
for j in range(len(line_start))]
ax.plot(*plt_coords, color='black', alpha=0.5)
ax.set_xlim([np.min(x_), np.max(x_)])
ax.set_ylim([np.min(y_), np.max(y_)])
ax.set_zlim([np.min(z_), np.max(z_)])
"""
Explanation: Display the lines overlaid
End of explanation
"""
diff = (line_finish - line_start)
line_length = np.sqrt(diff[0]**2 + diff[1]**2 + diff[2]**2)
def find_distance_coords(line_num=None, distance=None):
relative_dist = distance / line_length[line_num]
if (relative_dist > 1) | (relative_dist < 0):
return np.array([np.nan]*3)
x = (
line_start[0][line_num] * (1 - relative_dist) +
line_finish[0][line_num] * relative_dist)
y = (
line_start[1][line_num] * (1 - relative_dist) +
line_finish[1][line_num] * relative_dist)
z = (
line_start[2][line_num] * (1 - relative_dist) +
line_finish[2][line_num] * relative_dist)
coords = np.array([x, y, z])
return coords
"""
Explanation: Create a function to return x, y, z coords when a distance along a line is requested
End of explanation
"""
dwell_spacing = 0.1 # Was 0.1, increased to 0.3 for testing
dwell_distances_from_initial = np.arange(0, 2, dwell_spacing)
number_of_dwells = len(dwell_distances_from_initial)
inital_dwell_position = np.random.uniform(
low=0, high=dwell_spacing, size=number_of_lines)
inital_dwell_position
dwell_distances = np.reshape(inital_dwell_position, (-1, 1)) + np.reshape(dwell_distances_from_initial, (1, -1))
def find_dwell_coords(line_num=None, dwell_num=None):
distance = dwell_distances[line_num, dwell_num]
coords = find_distance_coords(
line_num=line_num, distance=distance)
return coords
"""
Explanation: Pick dwell positions starting at a random position along the line
End of explanation
"""
dwell_positions = np.array([
[
find_dwell_coords(
line_num=line_num, dwell_num=dwell_num)
for dwell_num in range(number_of_dwells)]
for line_num in range(number_of_lines)])
"""
Explanation: Find all the dwell positions that are on the grid
End of explanation
"""
line_colours = np.random.uniform(size=(number_of_lines,3))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(
x[target_cube], y[target_cube], z[target_cube],
alpha=0.2, s=2, color='blue')
ax.scatter(
x[avoid_cube], y[avoid_cube], z[avoid_cube],
alpha=0.2, s=2, color='red')
for line_num in range(number_of_lines):
ax.scatter(*np.transpose(dwell_positions[line_num]),
c=line_colours[line_num], alpha=0.7)
ax.set_xlim([np.min(x_), np.max(x_)])
ax.set_ylim([np.min(y_), np.max(y_)])
ax.set_zlim([np.min(z_), np.max(z_)])
"""
Explanation: Plot the dwell positions
End of explanation
"""
dwell_positions_to_be_filtered = np.reshape(dwell_positions, (-1, 3))
distance_to_dwell_pos_initial = np.array([
np.sqrt(
(x[i] - dwell_positions_to_be_filtered[:,0])**2 +
(y[i] - dwell_positions_to_be_filtered[:,1])**2 +
(z[i] - dwell_positions_to_be_filtered[:,2])**2
)
for i in range(len(x))
])
closest_voxel_to_dwell = np.reshape(np.argmin(distance_to_dwell_pos_initial, axis=0), [1, -1])
target_cube_voxels = np.reshape(np.where(target_cube)[0], [-1, 1])
is_dwell_closest_to_target = np.any(closest_voxel_to_dwell == target_cube_voxels, axis=0)
relevant_dwell_positions = dwell_positions_to_be_filtered[is_dwell_closest_to_target]
line_number_index = np.reshape(np.arange(number_of_lines), (number_of_lines, 1))
line_number_index = np.ones([number_of_lines, number_of_dwells]) * line_number_index
line_number_index = np.ravel(line_number_index)[is_dwell_closest_to_target].astype(int)  # integer line ids for later indexing
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(
x[target_cube], y[target_cube], z[target_cube],
alpha=0.2, s=2, color='blue')
ax.scatter(
x[avoid_cube], y[avoid_cube], z[avoid_cube],
alpha=0.2, s=2, color='red')
for line_num in range(number_of_lines):
ref = line_number_index == line_num
ax.scatter(*np.transpose(relevant_dwell_positions[ref]),
c=line_colours[line_num], alpha=0.7)
ax.set_xlim([np.min(x_), np.max(x_)])
ax.set_ylim([np.min(y_), np.max(y_)])
ax.set_zlim([np.min(z_), np.max(z_)])
"""
Explanation: Select the dwell positions that fall within the target region. Only use these dwell positions.
End of explanation
"""
distance_to_dwell_pos = np.array([
np.sqrt(
(x[i] - relevant_dwell_positions[:,0])**2 +
(y[i] - relevant_dwell_positions[:,1])**2 +
(z[i] - relevant_dwell_positions[:,2])**2
)
for i in range(len(x))
])
exposure_per_unit_time = 1 / distance_to_dwell_pos**2
"""
Explanation: Initial optimisation
Create an array containing the distance to each dwell position for each voxel and translate this to exposure at that voxel per unit dwell time at each dwell position. This is defined in this way so that a reasonable portion of the calculation of exposure can be done outside of the optimisation.
End of explanation
"""
def calculate_exposure(dwell_times):
exposure = np.sum(dwell_times * exposure_per_unit_time, axis=1)
return exposure
"""
Explanation: Create exposure calculation function
End of explanation
"""
num_relevant_dwells = len(relevant_dwell_positions)
random_pick = np.random.uniform(
size=2, high=num_relevant_dwells, low=0).astype(int)
dwell_times = np.zeros([1, num_relevant_dwells])
dwell_times[0, random_pick] = 10
exposure = calculate_exposure(dwell_times)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
reference = exposure > 80
colour = exposure[reference]
colour[colour > 200] = 200
ax.scatter(
x[reference], y[reference], z[reference],
alpha=0.2, s=1, c=colour, cmap=cm.jet)
ax.set_xlim([np.min(x_), np.max(x_)])
ax.set_ylim([np.min(y_), np.max(y_)])
ax.set_zlim([np.min(z_), np.max(z_)])
"""
Explanation: Run a test of arbitrary dwell times
End of explanation
"""
def display_results(dwell_times):
dwell_times = np.reshape(dwell_times, (1, num_relevant_dwells))
exposure = calculate_exposure(dwell_times)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
reference = exposure > 25
colour = exposure[reference]
colour[colour > 100] = 100
small = exposure[reference] < 50
large = ~small
ax.scatter(
x[reference][small], y[reference][small], z[reference][small],
alpha=0.2, s=3, c=colour[small], cmap=green_cm)
ax.scatter(
x[reference][large], y[reference][large], z[reference][large],
alpha=0.4, s=20, c=colour[large], cmap=green_cm)
ax.set_xlim([np.min(x_), np.max(x_)])
ax.set_ylim([np.min(y_), np.max(y_)])
ax.set_zlim([np.min(z_), np.max(z_)])
cost_function(dwell_times, debug=True)
plt.show()
"""
Explanation: Create function to display the results of the optimisation as it is being calculated.
End of explanation
"""
def hot_exterior_cost_function(max_target_exterior):
return ((max_target_exterior)/30)**4
testx = np.linspace(0, 120)
testy = hot_exterior_cost_function(testx)
plt.plot(testx, testy)
plt.ylim([0, 20])
"""
Explanation: Create the optimisation cost function
The maximum exposure to a grid position not within the target is aimed to be less than 60. This cost function aims to achieve this.
End of explanation
"""
def cold_target_cost_function(min_target):
return ((min_target-80)/15)**4
testx = np.linspace(0, 120)
testy = cold_target_cost_function(testx)
plt.plot(testx, testy)
plt.ylim([0, 20])
"""
Explanation: No grid point within the target is to be less than about 50. This cost function aims to achieve this.
End of explanation
"""
def hot_avoid_cost_function(max_avoid):
return ((max_avoid)/25)**4
testx = np.linspace(0, 120)
testy = hot_avoid_cost_function(testx)
plt.plot(testx, testy)
plt.ylim([0, 20])
"""
Explanation: The avoid cost function aims to make no point within the avoid structure more than 45.
End of explanation
"""
def cost_function(dwell_times, debug=False):
dwell_times = np.reshape(dwell_times, (1, num_relevant_dwells))
exposure = calculate_exposure(dwell_times)
min_target = np.min(exposure[target_cube])
max_avoid = np.max(exposure[avoid_cube])
max_target_exterior = np.max(exposure[~target_cube])
cold_target_cost = cold_target_cost_function(min_target)
hot_exterior_cost = hot_exterior_cost_function(max_target_exterior)
hot_avoid_cost = hot_avoid_cost_function(max_avoid)
total_cost = hot_exterior_cost + cold_target_cost + hot_avoid_cost
if debug:
print("Minimum target = %0.4f, resulting cost = %0.4f" %
(min_target, cold_target_cost))
print("Maximum exterior = %0.4f, resulting cost = %0.4f" %
(max_target_exterior, hot_exterior_cost))
print("Maximum avoid = %0.4f, resulting cost = %0.4f" %
(max_avoid, hot_avoid_cost))
print("Total cost = %0.4f" % (total_cost))
return total_cost
"""
Explanation: Create the cost function to be used by the optimiser
End of explanation
"""
num_relevant_dwells
initial_conditions = np.ones(num_relevant_dwells)*0.1
"""
Explanation: Create initial conditions
End of explanation
"""
step_noise = np.ones(num_relevant_dwells) * 0.3
"""
Explanation: Step noise
End of explanation
"""
bounds = ((0, None),)*num_relevant_dwells
"""
Explanation: Bounds
End of explanation
"""
optimisation = BasinhoppingWrapper(
to_minimise=cost_function,
initial=initial_conditions,
step_noise=step_noise,
basinhopping_confidence=2,
optimiser_confidence=0.0001,
n=5,
debug=display_results,
bounds=bounds
)
"""
Explanation: Run the optimiser
End of explanation
"""
display_results(optimisation.result)
"""
Explanation: Presentation of results
End of explanation
"""
plt.hist(optimisation.result)
"""
Explanation: Display histogram of resulting dwell times
End of explanation
"""
for line_num in range(number_of_lines):
print("Line number %d" % (line_num))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(
x[target_cube], y[target_cube], z[target_cube],
alpha=0.2, s=2, color='blue')
ax.scatter(
x[avoid_cube], y[avoid_cube], z[avoid_cube],
alpha=0.2, s=2, color='red')
ref = line_number_index == line_num
ax.scatter(*np.transpose(relevant_dwell_positions[ref]),
c=line_colours[line_num], alpha=0.7)
ax.set_xlim([np.min(x_), np.max(x_)])
ax.set_ylim([np.min(y_), np.max(y_)])
ax.set_zlim([np.min(z_), np.max(z_)])
plt.show()
plt.scatter(np.arange(np.sum(ref)), optimisation.result[ref],
c=line_colours[line_num])
plt.xlabel("Dwell number")
plt.ylabel("Units of dwell time")
plt.ylim(bottom=0)
plt.show()
"""
Explanation: Give overview of dwell times segmented by catheter
End of explanation
"""
dwell_times = np.reshape(optimisation.result, (1, num_relevant_dwells))
"""
Explanation: Convert result into a row vector for post analysis
End of explanation
"""
def create_calculate_exposure(dwell_positions,
x_new=None, y_new=None, z_new=None):
x, y, z = np.meshgrid(
x_new, y_new, z_new)
x = np.ravel(x)
y = np.ravel(y)
z = np.ravel(z)
distance_to_dwell_pos = np.array([
np.sqrt(
(x[i] - dwell_positions[:,0])**2 +
(y[i] - dwell_positions[:,1])**2 +
(z[i] - dwell_positions[:,2])**2
)
for i in range(len(x))
])
exposure_per_unit_time = 1 / distance_to_dwell_pos**2
def calculate_exposure(dwell_times, reshape=False):
exposure = np.sum(dwell_times * exposure_per_unit_time, axis=1)
if reshape:
exposure = np.reshape(exposure, (
len(x_new), len(y_new), len(z_new)))
return exposure
return calculate_exposure
dx = 0.01; dy = 0.01
contour_x_ = np.arange(np.min(x_), np.max(x_) + dx, dx)
contour_y_ = np.arange(np.min(y_), np.max(y_) + dy, dy)
contour_z_ = z_
contour_calculate_exposure = create_calculate_exposure(
    relevant_dwell_positions, x_new=contour_x_, y_new=contour_y_, z_new=contour_z_)
contour_exposure = contour_calculate_exposure(dwell_times, reshape=True)
contour_exposure[contour_exposure > 250] = 250
for i, z_value in enumerate(z_):
plt.figure(figsize=(6,6))
    c = plt.contourf(contour_x_, contour_y_,
contour_exposure[:, :, i], 250,
vmin=0, vmax=100, cmap=green_cm)
reference = z[target_cube] == z_value
x_target = x[target_cube][reference]
y_target = y[target_cube][reference]
plt.scatter(x_target, y_target, alpha=0.3, color='blue')
reference = z[avoid_cube] == z_value
x_avoid = x[avoid_cube][reference]
y_avoid = y[avoid_cube][reference]
plt.scatter(x_avoid, y_avoid, alpha=0.3, color='red')
plt.title("Slice z = %0.1f" % (z_value))
# plt.colorbar(c)
plt.xlim([np.min(x_), np.max(x_)])
plt.ylim([np.min(y_), np.max(y_)])
plt.show()
"""
Explanation: Create a custom resolution exposure calculation function for post analysis
End of explanation
"""
dx = 0.02
dy = 0.02
dz = 0.02
post_x_ = np.arange(np.min(x_), np.max(x_) + dx, dx)
post_y_ = np.arange(np.min(y_), np.max(y_) + dy, dy)
post_z_ = np.arange(np.min(z_), np.max(z_) + dz, dz)
post_x, post_y, post_z = np.meshgrid(
post_x_, post_y_, post_z_)
post_x = np.ravel(post_x)
post_y = np.ravel(post_y)
post_z = np.ravel(post_z)
post_calculate_exposure = create_calculate_exposure(
relevant_dwell_positions, x_new=post_x_, y_new=post_y_, z_new=post_z_)
post_exposure = post_calculate_exposure(dwell_times)
"""
Explanation: Analysis of results
Create analysis grid
End of explanation
"""
post_target_cube = target(post_x, post_y, post_z)
post_avoid_cube = avoid(post_x, post_y, post_z)
"""
Explanation: Define structures on new finer grid
End of explanation
"""
def create_dvh(reference, exposure, ax=None):
if ax is None:
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
results = exposure[reference]
results[results>150] = 150
hist = np.histogram(results, 100)
freq = hist[0]
bin_edge = hist[1]
bin_mid = (bin_edge[1::] + bin_edge[:-1:])/2
cumulative = np.cumsum(freq[::-1])
cumulative = cumulative[::-1]
bin_mid = np.append([0], bin_mid)
cumulative = np.append(cumulative[0], cumulative)
percent_cumulative = cumulative / cumulative[0] * 100
ax.plot(bin_mid, percent_cumulative)
"""
Explanation: Make a function to plot DVHs given a reference
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
create_dvh(post_target_cube, post_exposure, ax=ax)
create_dvh(post_avoid_cube, post_exposure, ax=ax)
create_dvh(~post_target_cube, post_exposure, ax=ax)
plt.legend(["Target", "Avoid", "Patient"])
plt.xlim([0, 150])
"""
Explanation: Plotting of relevant DVHs
End of explanation
"""
relevant_dwell_distances = np.ravel(dwell_distances)[is_dwell_closest_to_target]
shift_uncertainty = 0.05
line_displacement = np.random.normal(scale=shift_uncertainty, size=number_of_lines)
shifted_dwell_positions = np.array([
find_distance_coords(
distance=relevant_dwell_distances[i] + line_displacement[line_number_index[i]],
line_num=line_number_index[i])
for i in range(len(relevant_dwell_distances))
])
shifted_calculate_exposure = create_calculate_exposure(
shifted_dwell_positions, x_new=post_x_, y_new=post_y_, z_new=post_z_)
shifted_exposure = shifted_calculate_exposure(dwell_times)
"""
Explanation: Add a random shift to each catheter and see the effect it has on the target DVH.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
create_dvh(post_target_cube, post_exposure, ax=ax)
create_dvh(post_target_cube, shifted_exposure, ax=ax)
plt.legend(["Original", "With position error"], loc='lower left')
plt.xlim([0, 110])
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
create_dvh(post_avoid_cube, post_exposure, ax=ax)
create_dvh(post_avoid_cube, shifted_exposure, ax=ax)
plt.legend(["Original", "With position error"], loc='upper right')
plt.xlim([0, 110])
"""
Explanation: With the shift, the DVHs change as follows. For the optimisation given here there is very little change in the DVHs.
End of explanation
"""
|
kubeflow/pipelines
|
samples/core/parameterized_tfx_oss/taxi_pipeline_notebook.ipynb
|
apache-2.0
|
!python3 -m pip install pip --upgrade --quiet --user
!python3 -m pip install kfp --upgrade --quiet --user
pip install tfx==1.4.0 tensorflow==2.5.1 --quiet --user
"""
Explanation: TFX pipeline example - Chicago Taxi tips prediction
Overview
Tensorflow Extended (TFX) is a Google-production-scale machine
learning platform based on TensorFlow. It provides a configuration framework to express ML pipelines
consisting of TFX components, which brings the user large-scale ML task orchestration, artifact lineage, as well as the power of various TFX libraries. Kubeflow Pipelines can be used as the orchestrator supporting the
execution of a TFX pipeline.
This sample demonstrates how to author a ML pipeline in TFX and run it on a KFP deployment.
Permission
This pipeline requires Google Cloud Storage permission to run.
If KFP was deployed through K8S marketplace, please make sure "Allow access to the following Cloud APIs" is checked when creating the cluster. <img src="check_permission.png">
Otherwise, follow the instructions in the guideline to guarantee, at a minimum, that the service account has the storage.admin role.
End of explanation
"""
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
"""
Explanation: Note: if you're warned by
WARNING: The script {LIBRARY_NAME} is installed in '/home/jupyter/.local/bin' which is not on PATH.
You might need to fix this by running the next cell and restarting the kernel.
End of explanation
"""
import json
import os
import kfp
import tensorflow_model_analysis as tfma
from tfx import v1 as tfx
# In TFX MLMD schema, pipeline name is used as the unique id of each pipeline.
# Assigning workflow ID as part of pipeline name allows the user to bypass
# some schema checks which are redundant for experimental pipelines.
pipeline_name = 'taxi_pipeline_with_parameters'
# Path of pipeline data root, should be a GCS path.
# Note that when running on KFP, the pipeline root is always a runtime parameter.
# The value specified here will be its default.
pipeline_root = os.path.join('gs://{{kfp-default-bucket}}', 'tfx_taxi_simple',
kfp.dsl.RUN_ID_PLACEHOLDER)
# Location of input data, should be a GCS path under which there is a csv file.
data_root = '/opt/conda/lib/python3.7/site-packages/tfx/examples/chicago_taxi_pipeline/data/simple'
# Path to the module file, GCS path.
# Module file is one of the recommended way to provide customized logic for component
# includeing Trainer and Transformer.
# See https://github.com/tensorflow/tfx/blob/93ea0b4eda5a6000a07a1e93d93a26441094b6f5/tfx/components/trainer/component.py#L38
taxi_module_file_param = tfx.dsl.experimental.RuntimeParameter(
name='module-file',
default='/opt/conda/lib/python3.7/site-packages/tfx/examples/chicago_taxi_pipeline/taxi_utils_native_keras.py',
ptype=str,
)
# Path that ML models are pushed, should be a GCS path.
# TODO: CHANGE the GCS bucket name to yours.
serving_model_dir = os.path.join('gs://your-bucket', 'serving_model', 'tfx_taxi_simple')
push_destination = tfx.dsl.experimental.RuntimeParameter(
name='push_destination',
default=json.dumps({'filesystem': {'base_directory': serving_model_dir}}),
ptype=str,
)
"""
Explanation: In this example we'll need TFX SDK later than 0.21 to leverage the RuntimeParameter feature.
RuntimeParameter in TFX DSL
Currently, TFX DSL only supports parameterizing fields in the PARAMETERS section of ComponentSpec, see here. This prevents runtime-parameterizing the pipeline topology. Also, if the declared type of the field is a protobuf, the user needs to pass in a dictionary with exactly the same names for each field, and specify one or more values as RuntimeParameter objects. In other words, the dictionary should be able to be passed to the ParseDict() method and produce the correct pb message.
End of explanation
"""
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
statistics_gen = tfx.components.StatisticsGen(examples=example_gen.outputs['examples'])
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'], infer_feature_shape=False)
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
# The module file used in Transform and Trainer component is paramterized by
# _taxi_module_file_param.
transform = tfx.components.Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=taxi_module_file_param)
# The numbers of training and evaluation steps are set directly in the proto args
# below (10 training steps and 5 evaluation steps, to keep this sample fast).
trainer = tfx.components.Trainer(
module_file=taxi_module_file_param,
examples=transform.outputs['transformed_examples'],
schema=schema_gen.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=tfx.proto.TrainArgs(num_steps=10),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Set the TFMA config for Model Evaluation and Validation.
eval_config = tfma.EvalConfig(
model_specs=[
tfma.ModelSpec(
signature_name='serving_default', label_key='tips_xf',
preprocessing_function_names=['transform_features'])
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
metrics=[
tfma.MetricConfig(class_name='ExampleCount')
],
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
thresholds = {
'binary_accuracy': tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.5}),
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-10}))
}
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced along feature column trip_start_hour.
tfma.SlicingSpec(feature_keys=['trip_start_hour'])
])
# The evaluator uses the eval_config defined above, including the trip_start_hour slice.
evaluator = tfx.components.Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config=eval_config)
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=push_destination)
# Create the DSL pipeline object.
# This pipeline obj carries the business logic of the pipeline, but no runner-specific information
# was included.
dsl_pipeline = tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=[
example_gen, statistics_gen, schema_gen, example_validator, transform,
trainer, evaluator, pusher
],
enable_cache=True,
beam_pipeline_args=['--direct_num_workers=%d' % 0],
)
# Specify a TFX docker image. For the full list of tags please see:
# https://hub.docker.com/r/tensorflow/tfx/tags
tfx_image = 'gcr.io/tfx-oss-public/tfx:1.4.0'
config = tfx.orchestration.experimental.KubeflowDagRunnerConfig(
kubeflow_metadata_config=tfx.orchestration.experimental
.get_default_kubeflow_metadata_config(),
tfx_image=tfx_image)
kfp_runner = tfx.orchestration.experimental.KubeflowDagRunner(config=config)
# KubeflowDagRunner compiles the DSL pipeline object into KFP pipeline package.
# By default it is named <pipeline_name>.tar.gz
kfp_runner.run(dsl_pipeline)
run_result = kfp.Client(
host='1234567abcde-dot-us-central2.pipelines.googleusercontent.com' # Put your KFP endpoint here
).create_run_from_pipeline_package(
pipeline_name + '.tar.gz',
arguments={
# Uncomment following lines in order to use custom GCS bucket/module file/training data.
# 'pipeline-root': 'gs://<your-gcs-bucket>/tfx_taxi_simple/' + kfp.dsl.RUN_ID_PLACEHOLDER,
# 'module-file': '<gcs path to the module file>', # delete this line to use default module file.
# 'data-root': '<gcs path to the data>' # delete this line to use default data.
})
"""
Explanation: TFX Components
Please refer to the official guide for the detailed explanation and purpose of each TFX component.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.24/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb
|
bsd-3-clause
|
# Authors: Britta Westner <britta.wstnr@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample, fetch_fsaverage
from mne.beamformer import make_lcmv, apply_lcmv
"""
Explanation: Source reconstruction using an LCMV beamformer
This tutorial gives an overview of the beamformer method
and shows how to reconstruct source activity using an LCMV beamformer.
End of explanation
"""
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
# Read the raw data
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443'] # bad MEG channel
# Set up the epoching
event_id = 1 # those are the trials with left-ear auditory stimuli
tmin, tmax = -0.2, 0.5
events = mne.find_events(raw)
# pick relevant channels
raw.pick(['meg', 'eog']) # pick channels of interest
# Create epochs
proj = False # already applied
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=proj,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
# for speed purposes, cut to a window of interest
evoked = epochs.average().crop(0.05, 0.15)
# Visualize averaged sensor space data
evoked.plot_joint()
del raw # save memory
"""
Explanation: Introduction to beamformers
A beamformer is a spatial filter that reconstructs source activity by
scanning through a grid of pre-defined source points and estimating activity
at each of those source points independently. A set of weights is
constructed for each defined source location which defines the contribution
of each sensor to this source.
Beamformers are often used for their focal reconstructions and their ability
to reconstruct deeper sources. They can also suppress external noise sources.
The beamforming method applied in this tutorial is the linearly constrained
minimum variance (LCMV) beamformer :footcite:VanVeenEtAl1997 operates on
time series.
Frequency-resolved data can be reconstructed with the dynamic imaging of
coherent sources (DICS) beamforming method :footcite:GrossEtAl2001.
As we will see in the following, the spatial filter is computed from two
ingredients: the forward model solution and the covariance matrix of the
data.
Data processing
We will use the sample data set for this tutorial and reconstruct source
activity on the trials with left auditory stimulation.
End of explanation
"""
data_cov = mne.compute_covariance(epochs, tmin=0.01, tmax=0.25,
method='empirical')
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0,
method='empirical')
data_cov.plot(epochs.info)
del epochs
"""
Explanation: Computing the covariance matrices
Spatial filters use the data covariance to estimate the filter
weights. The data covariance matrix will be inverted during the spatial
filter computation, so it is valuable to plot the covariance matrix and its
eigenvalues to gauge whether matrix inversion will be possible.
Also, because we want to combine different channel types (magnetometers and
gradiometers), we need to account for the different amplitude scales of these
channel types. To do this we will supply a noise covariance matrix to the
beamformer, which will be used for whitening.
The data covariance matrix should be estimated from a time window that
includes the brain signal of interest,
and incorporate enough samples for a stable estimate. A rule of thumb is to
use more samples than there are channels in the data set; see
:footcite:BrookesEtAl2008 for more detailed advice on covariance estimation
for beamformers. Here, we use a time
window incorporating the expected auditory response at around 100 ms post
stimulus and extend the period to account for a low number of trials (72) and
low sampling rate of 150 Hz.
End of explanation
"""
# Read forward model
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-vol-7-fwd.fif'
forward = mne.read_forward_solution(fwd_fname)
"""
Explanation: When looking at the covariance matrix plots, we can see that our data is
slightly rank-deficient as the rank is not equal to the number of channels.
Thus, we will have to regularize the covariance matrix before inverting it
in the beamformer calculation. This can be achieved by setting the parameter
reg=0.05 when calculating the spatial filter with
:func:~mne.beamformer.make_lcmv. This corresponds to loading the diagonal
of the covariance matrix with 5% of the sensor power.
The forward model
The forward model is the other important ingredient for the computation of a
spatial filter. Here, we will load the forward model from disk; more
information on how to create a forward model can be found in this tutorial:
tut-forward.
Note that beamformers are usually computed in a :class:volume source space
<mne.VolSourceEstimate>, because estimating only cortical surface
activation can misrepresent the data.
End of explanation
"""
filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
noise_cov=noise_cov, pick_ori='max-power',
weight_norm='unit-noise-gain', rank=None)
# You can save the filter for later use with:
# filters.save('filters-lcmv.h5')
"""
Explanation: Handling depth bias
The forward model solution is inherently biased toward superficial sources.
When analyzing single conditions it is best to mitigate the depth bias
somehow. There are several ways to do this:
- :func:mne.beamformer.make_lcmv has a depth parameter that normalizes the
  forward model prior to computing the spatial filters. See the docstring for
  details.
- Unit-noise gain beamformers handle depth bias by normalizing the weights of
  the spatial filter. Choose this by setting weight_norm='unit-noise-gain'.
- When computing the Neural activity index, the depth bias is handled by
  normalizing both the weights and the estimated noise (see
  :footcite:VanVeenEtAl1997). Choose this by setting weight_norm='nai'.
Note that when comparing conditions, the depth bias will cancel out and it is
possible to set both parameters to None.
Compute the spatial filter
Now we can compute the spatial filter. We'll use a unit-noise gain beamformer
to deal with depth bias, and will also optimize the orientation of the
sources such that output power is maximized.
This is achieved by setting pick_ori='max-power'.
This gives us one source estimate per source (i.e., voxel), which is known
as a scalar beamformer.
End of explanation
"""
filters_vec = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
noise_cov=noise_cov, pick_ori='vector',
weight_norm='unit-noise-gain', rank=None)
# save a bit of memory
src = forward['src']
del forward
"""
Explanation: It is also possible to compute a vector beamformer, which gives back three
estimates per voxel, corresponding to the three direction components of the
source. This can be achieved by setting
pick_ori='vector' and will yield a :class:volume vector source estimate
<mne.VolVectorSourceEstimate>. So we will compute another set of filters
using the vector beamformer approach:
End of explanation
"""
stc = apply_lcmv(evoked, filters, max_ori_out='signed')
stc_vec = apply_lcmv(evoked, filters_vec, max_ori_out='signed')
del filters, filters_vec
"""
Explanation: Apply the spatial filter
The spatial filter can be applied to different data types: raw, epochs,
evoked data or the data covariance matrix to gain a static image of power.
The function to apply the spatial filter to :class:~mne.Evoked data is
:func:~mne.beamformer.apply_lcmv which is
what we will use here. The other functions are
:func:~mne.beamformer.apply_lcmv_raw,
:func:~mne.beamformer.apply_lcmv_epochs, and
:func:~mne.beamformer.apply_lcmv_cov.
End of explanation
"""
lims = [0.3, 0.45, 0.6]
kwargs = dict(src=src, subject='sample', subjects_dir=subjects_dir,
initial_time=0.087, verbose=True)
"""
Explanation: Visualize the reconstructed source activity
We can visualize the source estimate in different ways, e.g. as a volume
rendering, an overlay onto the MRI, or as an overlay onto a glass brain.
The plots for the scalar beamformer show brain activity in the right temporal
lobe around 100 ms post stimulus. This is expected given the left-ear
auditory stimulation of the experiment.
End of explanation
"""
stc.plot(mode='stat_map', clim=dict(kind='value', pos_lims=lims), **kwargs)
"""
Explanation: On MRI slices (orthoview; 2D)
End of explanation
"""
stc.plot(mode='glass_brain', clim=dict(kind='value', lims=lims), **kwargs)
"""
Explanation: On MNI glass brain (orthoview; 2D)
End of explanation
"""
brain = stc_vec.plot_3d(
clim=dict(kind='value', lims=lims), hemi='both', size=(600, 600),
views=['sagittal'],
# Could do this for a 3-panel figure:
# view_layout='horizontal', views=['coronal', 'sagittal', 'axial'],
brain_kwargs=dict(silhouette=True),
**kwargs)
"""
Explanation: Volumetric rendering (3D) with vectors
These plots can also be shown using a volumetric rendering via
:meth:~mne.VolVectorSourceEstimate.plot_3d. Let's try visualizing the
vector beamformer case. Here we get three source time courses out per voxel
(one for each component of the dipole moment: x, y, and z), which appear
as small vectors in the visualization (in the 2D plotters, only the
magnitude can be shown):
End of explanation
"""
peak_vox, _ = stc_vec.get_peak(tmin=0.08, tmax=0.1, vert_as_index=True)
ori_labels = ['x', 'y', 'z']
fig, ax = plt.subplots(1)
for ori, label in zip(stc_vec.data[peak_vox, :, :], ori_labels):
ax.plot(stc_vec.times, ori, label='%s component' % label)
ax.legend(loc='lower right')
ax.set(title='Activity per orientation in the peak voxel', xlabel='Time (s)',
ylabel='Amplitude (a. u.)')
mne.viz.utils.plt_show()
del stc_vec
"""
Explanation: Visualize the activity of the maximum voxel with all three components
We can also visualize all three components in the peak voxel. For this, we
will first find the peak voxel and then plot the time courses of this voxel.
End of explanation
"""
fetch_fsaverage(subjects_dir) # ensure fsaverage src exists
fname_fs_src = subjects_dir + '/fsaverage/bem/fsaverage-vol-5-src.fif'
src_fs = mne.read_source_spaces(fname_fs_src)
morph = mne.compute_source_morph(
src, subject_from='sample', src_to=src_fs, subjects_dir=subjects_dir,
niter_sdr=[5, 5, 2], niter_affine=[5, 5, 2], zooms=7, # just for speed
verbose=True)
stc_fs = morph.apply(stc)
del stc
stc_fs.plot(
src=src_fs, mode='stat_map', initial_time=0.085, subjects_dir=subjects_dir,
clim=dict(kind='value', pos_lims=lims), verbose=True)
"""
Explanation: Morph the output to fsaverage
We can also use volumetric morphing to get the data to fsaverage space. This
is for example necessary when comparing activity across subjects. Here, we
will use the scalar beamformer example.
We pass the fsaverage source space (src_fs) as the src argument to
mne.VolSourceEstimate.plot. To save some computational load when applying
the morph, we will crop the stc:
End of explanation
"""
|
agile-geoscience/gio
|
docs/userguide/Read_OpendTect_horizons.ipynb
|
apache-2.0
|
import gio
ds = gio.read_odt('data/OdT/3d_horizon/Segment_ILXL_Single-line-header.dat')
ds
ds['twt'].plot()
"""
Explanation: Read OpendTect horizons
The best way to export horizons from OpendTect is with these options:
- x/y and inline/crossline
- with header (single or multi-line, it doesn't matter)
- choose all the attributes you want
On the last point, if you choose multiple horizons in one file, you can only have one attribute in the file.
IL/XL only, single-line header, multiple attributes
End of explanation
"""
ds = gio.read_odt('../data/OdT/3d_horizon/Segment_XY-and-ILXL_Multi-line-header.dat')
ds
import matplotlib.pyplot as plt
plt.scatter(ds.coords['cdp_x'], ds.coords['cdp_y'], s=5)
"""
Explanation: IL/XL and XY, multi-line header, multiple attributes
Load everything (default)
X and Y are loaded as cdp_x and cdp_y, to be consistent with the seisnc standard in segysak.
End of explanation
"""
fname = '../data/OdT/3d_horizon/Segment_XY-and-ILXL_Multi-line-header.dat'
names = ['Inline', 'Crossline', 'Z'] # Must match OdT DAT file.
ds = gio.read_odt(fname, names=names)
ds
"""
Explanation: Load only inline, crossline, TWT
There is only one attribute here: Z, which is the two-way time of the horizon.
Note that when loading data from OpendTect, you always get an xarray.Dataset, even if there's only a single attribute. This is because the format supports multiple grids and we didn't want you to have to guess what a given file would produce.
End of explanation
"""
fname = '../data/OdT/3d_horizon/Segment_XY_Single-line-header.dat'
ds = gio.read_odt(fname, origin=(376, 812), step=(2, 2))
ds
ds['twt'].plot()
"""
Explanation: XY only
If you have a file with no IL/XL, gio can try to load data using only X and Y:
- If there's a header you can load any number of attributes.
- If there's no header, you can load only one attribute (e.g. TWT) automagically...
- OR, if there's no header, you can provide names to tell gio what everything is.
gio must create fake inline and crossline numbers; you can provide an origin and a step size. For example, notice above that the true inline and crossline numbers are:
- inline: 376, 378, 380, etc.
- crossline: 812, 814, 816, etc.
So we can pass an origin of (376, 812) and a step of (2, 2) to mimic these.
Header present
End of explanation
"""
fname = '../data/OdT/3d_horizon/Segment_XY_No-header.dat'
ds = gio.read_odt(fname)
ds
# Raises an error:
fname = '../data/OdT/3d_horizon/Segment_XY_No-header.dat'
ds = gio.read_odt(fname, names=['X', 'Y', 'TWT'])
ds
ds['twt'].plot()
"""
Explanation: No header, more than one attribute: raises an error
End of explanation
"""
fname = '../data/OdT/3d_horizon/Nimitz_Salmon_XY-and-ILXL_Single-line-header.dat'
ds = gio.read_odt(fname)
ds
ds['twt'].plot.imshow()
"""
Explanation: Sparse data
Sometimes a surface only exists at a few points, e.g. a 3D seismic interpretation grid. In general, loading data like this is completely safe if you have inline and xline locations. If you only have (x, y) locations, gio will attempt to load it, but you should inspect the result carefully.
End of explanation
"""
ds['twt'].plot()
"""
Explanation: There's some sort of artifact with the default plot style, which uses pcolormesh I think.
End of explanation
"""
fname = '../../gio-dev/data/OdT/3d_horizon/multi_horizon/Multi_header_H2_and_H4_X_Y_iL_xL_Z_in_sec.dat'
ds = gio.read_odt(fname)
ds
ds['F3_Demo_2_FS6'].plot()
ds['F3_Demo_4_Truncation'].plot()
"""
Explanation: Multiple horizons in one file
You can export multiple horizons from OpendTect. These will be loaded as one xarray.Dataset as different Data variables. (The actual attribute you exported from OdT is always called Z; this information is not retained in the xarray.)
End of explanation
"""
import gio
fname = '../data/OdT/3d_horizon/Test_Multi_XY-and-ILXL_Z-only.dat'
ds = gio.read_odt(fname, names=['Horizon', 'X', 'Y', 'Inline', 'Crossline', 'Z'])
ds
"""
Explanation: Multi-horizon, no header
Unfortunately, OdT exports (x, y) in the first two columns, meaning you can't assume that columns 3 and 4 are inline, crossline. So if there's no header, and XY as well as inline/xline, you have to give the column names:
End of explanation
"""
fname = '../data/OdT/3d_horizon/Segment_XY_No-header_NULLs.dat'
ds = gio.read_odt(fname, names=['X', 'Y', 'TWT'])
ds
ds['twt'].plot()
"""
Explanation: Undefined values
These are exported as '1e30' by default. You can override this (not add to it, which is the default pandas behaviour) by passing one or more na_values.
End of explanation
"""
|
flaviostutz/datascience-snippets
|
study/udacity-deep-learning/assignment3-regularization.ipynb
|
mit
|
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
"""
Explanation: Deep Learning
Assignment 3
Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.
The goal of this assignment is to explore regularization techniques.
End of explanation
"""
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
"""
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
"""
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
"""
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
"""
batch_size = 128
hidden_nodes = 1024
learning_rate = 0.5
beta = 0.005
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights_1 = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_nodes]))
biases_1 = tf.Variable(tf.zeros([hidden_nodes]))
weights_2 = tf.Variable(
tf.truncated_normal([hidden_nodes, num_labels]))
biases_2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
def forward_prop(input):
h1 = tf.nn.relu(tf.matmul(input, weights_1) + biases_1)
return tf.matmul(h1, weights_2) + biases_2
logits = forward_prop(tf_train_dataset)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Add the regularization term to the loss.
loss += beta * (tf.nn.l2_loss(weights_1) + tf.nn.l2_loss(weights_2))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset))
test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset))
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
"""
Explanation: Problem 1
Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
End of explanation
"""
train_dataset_restricted = train_dataset[:130, :]
train_labels_restricted = train_labels[:130, :]
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels_restricted.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset_restricted[offset:(offset + batch_size), :]
batch_labels = train_labels_restricted[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
"""
Explanation: Problem 2
Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
End of explanation
"""
batch_size = 128
hidden_nodes_1 = 1024
hidden_nodes_2 = 1024
learning_rate = 0.0001
beta = 0.005
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Placeholder to control dropout probability.
keep_prob = tf.placeholder(tf.float32)
# Variables.
weights_1 = tf.Variable(tf.random_normal([image_size * image_size, hidden_nodes_1]))
biases_1 = tf.Variable(tf.zeros([hidden_nodes_1]))
weights_2 = tf.Variable(tf.random_normal([hidden_nodes_1, hidden_nodes_2]))
biases_2 = tf.Variable(tf.zeros([hidden_nodes_2]))
weights_out = tf.Variable(tf.random_normal([hidden_nodes_2, num_labels]))
biases_out = tf.Variable(tf.zeros([num_labels]))
# Training computation.
def forward_prop(input):
h1 = tf.nn.dropout(tf.nn.relu(tf.matmul(input, weights_1) + biases_1), keep_prob)
h2 = tf.nn.dropout(tf.nn.relu(tf.matmul( h1, weights_2) + biases_2), keep_prob)
return tf.matmul(h2, weights_out) + biases_out
logits = forward_prop(tf_train_dataset)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Add the regularization term to the loss.
loss += beta * (tf.nn.l2_loss(weights_1) + tf.nn.l2_loss(weights_2) + tf.nn.l2_loss(weights_out))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset))
test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset))
num_steps = 5001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob: 1.0}
feed_dict_w_drop = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob: 0.5}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict_w_drop)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(feed_dict=feed_dict), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(feed_dict=feed_dict), test_labels))
"""
Explanation: Problem 3
Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.
What happens to our extreme overfitting case?
End of explanation
"""
|
christophe-pouzat/LASCON2016
|
AreTwoPSTHsIdentical.ipynb
|
cc0-1.0
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
import scipy
import h5py
"""
Explanation: Setting up Python
The analysis presented in the manuscript and detailed next is carried
out with Python 3 (the following code runs and gives identical results
with Python 2). We are going to use the 3 classical modules of
Python's scientific ecosystem: numpy, scipy and matplotlib. We are
also going to use two additional modules: sympy as well as h5py. We
start by importing these modules:
End of explanation
"""
try:
from urllib.request import urlretrieve # Python 3
except ImportError:
from urllib import urlretrieve # Python 2
name_on_disk = 'CockroachDataJNM_2009_181_119.h5'
urlretrieve('https://zenodo.org/record/14281/files/'+
name_on_disk,
name_on_disk)
"""
Explanation: Python 2 users must also type:
from __future__ import print_function, division, unicode_literals, absolute_import
Getting the data
Our data (Pouzat and Chaffiol, 2015) are stored in
HDF5 format on the
zenodo server (DOI:10.5281/zenodo.1428145).
They are all contained in a file named
CockroachDataJNM_2009_181_119.h5. The data within this file have a
hierarchical organization similar to that of a file system (one of the
main ideas of the HDF5 format). The first organization level is the
experiment; there are 4 experiments in the file: e060517, e060817,
e060824 and e070528. Each experiment is organized by neuron: Neuron1,
Neuron2, etc. (the number of recorded neurons depends on the
experiment). Each neuron contains a dataset (in the HDF5 terminology)
named spont containing the spike train of that neuron recorded during a
period of spontaneous activity. Each neuron also contains one or several
further sub-levels named after the odor used for stimulation:
citronellal, terpineol, mixture, etc. Each of these sub-levels contains
as many datasets (stim1, stim2, etc.) as stimulations were applied, and
each of these datasets contains the spike train of that neuron for the
corresponding stimulation. Another dataset, named stimOnset, contains
the onset time of the stimulus (one for each stimulation). All these
times are measured in seconds.
(for each of the stimulations). All these times are measured in seconds.
The data can be downloaded with Python as follows:
End of explanation
"""
f = h5py.File("CockroachDataJNM_2009_181_119.h5","r")
"""
Explanation: The file is opened with:
End of explanation
"""
def raster_plot(train_list,
stim_onset=None,
color = 'black'):
"""Create a raster plot.
Parameters
----------
train_list: a list of spike trains (each a 1d vector with strictly
increasing elements).
stim_onset: a number giving the time of stimulus onset. If
specified, the times are realigned so that the stimulus
comes at 0.
color: the color of the ticks representing the spikes.
Side effect:
A raster plot is created.
"""
import numpy as np
import matplotlib.pyplot as plt
if stim_onset is None:
stim_onset = 0
for idx,trial in enumerate(train_list):
plt.vlines(trial-stim_onset,
idx+0.6,idx+1.4,
color=color)
plt.ylim(0.5,len(train_list))
"""
Explanation: Making the raster plots
We make our raster plots with a short function raster_plot that we define with:
End of explanation
"""
citron_onset = f["e060817/Neuron1/citronellal/stimOnset"][...][0]
e060817citron = [[f[y][...] for y in
["e060817/Neuron"+str(i)+"/citronellal/stim"+str(x)
for x in range(1,21)]]
for i in range(1,4)]
fig = plt.figure(figsize=(16,8))
plt.subplot(121)
raster_plot(e060817citron[0],citron_onset)
plt.xlim(-2,6)
plt.ylim(0,21)
plt.grid(True)
plt.xlabel("Time (s)",fontdict={'fontsize':15})
plt.ylabel("Trial",fontdict={'fontsize':15})
plt.title("Citronellal",fontdict={'fontsize':20})
plt.subplot(122)
terpi_onset = f["e060817/Neuron1/terpineol/stimOnset"][...][0]
e060817terpi = [[f[y][...] for y in
["e060817/Neuron"+str(i)+"/terpineol/stim"+str(x)
for x in range(1,21)]]
for i in range(1,4)]
raster_plot(e060817terpi[0],terpi_onset)
plt.xlim(-2,6)
plt.ylim(0,21)
plt.grid(True)
plt.xlabel("Time (s)",fontdict={'fontsize':15})
plt.ylabel("Trial",fontdict={'fontsize':15})
plt.title("Terpineol",fontdict={'fontsize':20})
plt.subplots_adjust(wspace=0.4,hspace=0.4)
"""
Explanation: The raster plots of the responses of Neuron 1 to citronellal and terpineol are then obtained with:
End of explanation
"""
e060817_spont_nu = [len(f["e060817/Neuron"+str(i)+"/spont"])/60
for i in range(1,4)]
print("The spontaneous discharge rates are:")
for i in range(len(e060817_spont_nu)):
print(" Neuron {0}: {1:.2f} (Hz)".format(i+1,e060817_spont_nu[i]))
"""
Explanation: Choosing the initial bin size
The bin width should be chosen large enough to have a few events per bin most of the time. We set this width such that an expected number of 3 events per bin was obtained using the spontaneous frequency estimated from 60 seconds long recordings without stimulus presentation—more specifically, the width was set to the smallest millisecond larger or equal to the targeted count divided by the product of the spontaneous frequency and the number of trials.
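Restated in symbols (this is just the rule above, not a change to the method): with target count $n = 3$, $N$ stimulations and spontaneous rate $\nu$ in Hz, the bin width in seconds is $\delta = \lceil 1000\, n/(N\nu)\rceil/1000$; this is exactly the np.ceil expression used below when the PSTHs are built.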
For the neurons of the data set we get the following spontaneous discharge rates:
End of explanation
"""
target_mean = 3
n_stim_citron = len(e060817citron[0])
n1citron = np.sort(np.concatenate(e060817citron[0]))
left = -5+citron_onset
right = 6+citron_onset
n1citron = n1citron[np.logical_and(left <= n1citron,n1citron <= right)]-citron_onset
bin_width_citron = np.ceil(target_mean/n_stim_citron/e060817_spont_nu[0]*1000)/1000
n1citron_bin = np.arange(-5,6+bin_width_citron,bin_width_citron)
n1citron_counts, n1citron_bin = np.histogram(n1citron,n1citron_bin)
n1citron_stab = 2*np.sqrt(n1citron_counts+0.25)
n1citron_x = n1citron_bin[:-1]+bin_width_citron/2
"""
Explanation: Building the initial PSTHs and stabilizing them
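(An aside, not in the original text: the $2\sqrt{y+1/4}$ transform applied below is the usual variance-stabilizing transformation for Poisson counts; if $Y$ is Poisson with a mean that is not too small, $2\sqrt{Y+1/4}$ has a variance close to 1, which is what makes the bin-by-bin differences of the two PSTHs comparable later on.)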
For the citronellal response of Neuron 1 we do:
End of explanation
"""
n_stim_terpi = len(e060817terpi[0])
n1terpi = np.sort(np.concatenate(e060817terpi[0]))
left = -5+terpi_onset
right = 6+terpi_onset
n1terpi = n1terpi[np.logical_and(left <= n1terpi,n1terpi <= right)]-terpi_onset
bin_width_terpi = np.ceil(target_mean/n_stim_terpi/e060817_spont_nu[0]*1000)/1000
n1terpi_bin = np.arange(-5,6+bin_width_terpi,bin_width_terpi)
n1terpi_counts, n1terpi_bin = np.histogram(n1terpi,n1terpi_bin)
n1terpi_stab = 2*np.sqrt(n1terpi_counts+0.25)
n1terpi_x = n1terpi_bin[:-1]+bin_width_terpi/2
"""
Explanation: For the terpineol response we do:
End of explanation
"""
n1terpi_even = np.sort(np.concatenate(e060817terpi[0][0:20:2]))
n1terpi_even = n1terpi_even[np.logical_and(left <= n1terpi_even,n1terpi_even <= right)]-terpi_onset
bin_width_terpi_even = np.ceil(target_mean/n_stim_terpi/e060817_spont_nu[0]*1000)/1000/2
n1terpi_even_bin = np.arange(-5,6+bin_width_terpi_even,bin_width_terpi_even)
n1terpi_even_counts, n1terpi_even_bin = np.histogram(n1terpi_even,n1terpi_even_bin)
n1terpi_even_stab = 2*np.sqrt(n1terpi_even_counts+0.25)
n1terpi_odd = np.sort(np.concatenate(e060817terpi[0][1:20:2]))
n1terpi_odd = n1terpi_odd[np.logical_and(left <= n1terpi_odd,n1terpi_odd <= right)]-terpi_onset
bin_width_terpi_odd = np.ceil(target_mean/n_stim_terpi/e060817_spont_nu[0]*1000)/1000/2
n1terpi_odd_bin = np.arange(-5,6+bin_width_terpi_odd,bin_width_terpi_odd)
n1terpi_odd_counts, n1terpi_odd_bin = np.histogram(n1terpi_odd,n1terpi_odd_bin)
n1terpi_odd_stab = 2*np.sqrt(n1terpi_odd_counts+0.25)
n1terpi_odd_x = n1terpi_odd_bin[:-1]+bin_width_terpi_odd/2
"""
Explanation: For the terpineol responses to the even- and odd-numbered stimuli we do:
End of explanation
"""
def c95(x): return 0.299958+2.348443*np.sqrt(x)
def c99(x): return 0.312456+2.890606*np.sqrt(x)
"""
Explanation: We define two functions returning the 95% and 99% boundaries against which the cumulative statistic is compared (the dashed and solid grey curves in the figure below):
End of explanation
"""
n1_diff_y = (n1terpi_stab-n1citron_stab)/np.sqrt(2)
X1 = np.arange(1,len(n1terpi_x)+1)/len(n1terpi_x)
Y1 = np.cumsum(n1_diff_y)/np.sqrt(len(n1_diff_y))
n1_diff_terpi_y = (n1terpi_even_stab-n1terpi_odd_stab)/np.sqrt(2)
X2 = np.arange(1,len(n1terpi_odd_x)+1)/len(n1terpi_odd_x)
Y2 = np.cumsum(n1_diff_terpi_y)/np.sqrt(len(n1_diff_terpi_y))
xx = np.linspace(0,1,201)
fig = plt.figure(figsize=(14,8))
plt.plot(xx,c95(xx),color='grey',lw=2,linestyle='dashed')
plt.plot(xx,-c95(xx),color='grey',lw=2,linestyle='dashed')
plt.plot(xx,c99(xx),color='grey',lw=2)
plt.plot(xx,-c99(xx),color='grey',lw=2)
plt.plot(X2,Y2,color='grey',lw=2)
plt.plot(X1,Y1,color='black',lw=2)
plt.xlabel("Normalized time",fontdict={'fontsize':20})
plt.ylabel("$S_k(t)$",fontdict={'fontsize':20})
"""
Explanation: We just have to make the graph after computing the cumsum of the differences:
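In symbols (just the code above rewritten), with $K$ bins and stabilized counts $Y_k^{(1)}$ and $Y_k^{(2)}$ from the two PSTHs being compared, the plotted statistic is $S_k = \frac{1}{\sqrt{K}}\sum_{j=1}^{k}\frac{Y_j^{(1)}-Y_j^{(2)}}{\sqrt{2}}$, drawn against the normalized time $k/K$; the grey curves are the c95 and c99 boundaries defined above.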
End of explanation
"""
f.close()
"""
Explanation: Before closing the session we close our file:
End of explanation
"""
|