| repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (15 classes) | content (string, 335-154k chars) |
|---|---|---|---|
dsg-bielefeld/deep_disfluency
|
deep_disfluency/rnn/dev/Keras_experimentation.ipynb
|
mit
|
#Try decoding with just the one step as in training
model.reset_states()
model.predict([np.array([[[0.2]]],dtype="float")] )
model.predict([np.array([[[0.1]]],dtype="float")] )
model.predict([np.array([[[0.1]]],dtype="float")] )
model.predict([np.array([[[0.4]]],dtype="float")] )
model.predict([np.array([[[0.1]]],dtype="float")] )
#Now try with multiple inputs and see if, for input lengths > 2, the results are the same
model.reset_states()
model.predict([np.array([[[0.2],[0.1]]],dtype="float")] )
model.reset_states()
model.predict([np.array([[[0.2],[0.1],[0.1]]],dtype="float")] )
model.reset_states()
model.predict([np.array([[[0.2],[0.1],[0.1],[0.4]]],dtype="float")] )
model.reset_states()
model.predict([np.array([[[0.2],[0.1],[0.1],[0.4],[0.1]]],dtype="float")] )
"""
Explanation: Problem 1
Does the stateful method work for decoding, i.e. does the model start where it left off if the sequence is not fed in all at once?
Can we get varying-length input, i.e. not just the training batch size?
End of explanation
"""
print(model.layers)
print(len(model.layers))
layer_index = 3 #output from the 4th layer Dropout
get_activations = theano.function([model.layers[0].input], model.layers[layer_index].get_output(), allow_input_downcast=True)
model.reset_states()
get_activations([np.array([[0.2],[0.1],[0.1]],dtype="float")] )
layer_index = 5 #equiv to prediction?
get_activations = theano.function([model.layers[0].input], model.layers[layer_index].get_output(), allow_input_downcast=True)
model.reset_states()
get_activations([np.array([[0.2],[0.1],[0.1]],dtype="float")] )
"""
Explanation: Conclusion 1:
As the result is the same whether the input is fed in one step at a time or as a whole sequence, we do get statefulness until reset_states() is called.
As a consequence we can use varying length input in prediction.
Problem 2:
Can we access the activations/state of the hidden layers output from the network at run time?
End of explanation
"""
model.reset_states()
tic = time.clock()
print('warming up on training data') # Predict on all training data in order to warm up for testing data
warmupPredictions = []
warm_up_training = []
for i in range(0,totalTimeSteps-testingSize):
warm_up_training.append(X[:, numOfPrevSteps*i:(i+1)*numOfPrevSteps, :])
pred = model.predict(X[:, numOfPrevSteps*i:(i+1)*numOfPrevSteps, :] )
warmupPredictions.append(pred)
print(len(warmupPredictions))
print(time.clock() - tic)
model.reset_states()
tic = time.clock()
layer_index = 5 #equiv to prediction?
get_activations = theano.function([model.layers[0].input], model.layers[layer_index].get_output(), allow_input_downcast=True)
print('warming up on training data') # Predict on all training data in order to warm up for testing data
warmupPredictions2 = []
warm_up_training2 = []
for i in range(0,totalTimeSteps-testingSize):
warm_up_training2.append(X[:, numOfPrevSteps*i:(i+1)*numOfPrevSteps, :])
pred = get_activations(X[:, numOfPrevSteps*i:(i+1)*numOfPrevSteps, :] )
warmupPredictions2.append(pred)
print(len(warmupPredictions2))
print(time.clock() - tic)
for a,b,c,d in zip(warm_up_training,warm_up_training2,warmupPredictions,warmupPredictions2):
print(a,b,a==b,c,d)
# Conclusion 3:
# Getting the internal states is faster than the overall prediction.
"""
Explanation: Conclusion 2:
Yes, we can get the internal activations of any layer at run time, though is it as fast as prediction?
Problem 3:
Speed comparisons: (how much) does statefulness slow things down, and is getting the hidden layer activations slower than prediction?
End of explanation
"""
|
parrt/dtreeviz
|
notebooks/classifier-boundary-animations.ipynb
|
mit
|
! pip install --quiet -U pltvid # simple animation support by parrt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \
load_breast_cancer, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, precision_score, recall_score
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
%config InlineBackend.figure_format = 'svg' # Looks MUCH better than retina
# %config InlineBackend.figure_format = 'retina'
from rfpimp import * # pip install rfpimp
from sklearn import tree
import dtreeviz
from dtreeviz import clfviz
"""
Explanation: Animations showing feature space and classification boundaries
While dtreeviz is dedicated primarily to showing decision trees, we have also provided a way to show the decision boundaries for one- and two-variable classifiers. The clfviz() function works with any model that implements the predict_proba() method, and with Keras, for which we provide a special adapter (since that method is deprecated there).
Using a silly little pltvid library I built, we can do some simple animations. I think it doesn't work on Windows because I directly relied on /tmp dir. Sorry.
Requirements
This notebook requires poppler lib due to pltvid lib
On mac:
brew install poppler
Also needs my helper lib:
End of explanation
"""
wine = load_wine()
X = wine.data
X = X[:,[12,6]]
y = wine.target
rf = RandomForestClassifier(n_estimators=50, min_samples_leaf=20, n_jobs=-1)
rf.fit(X, y)
import pltvid
dpi = 300
camera = pltvid.Capture(dpi=dpi)
max = 10
for depth in range(1,max+1):
t = DecisionTreeClassifier(max_depth=depth)
t.fit(X,y)
fig,ax = plt.subplots(1,1, figsize=(4,3.5), dpi=dpi)
clfviz(t, X, y,
feature_names=['proline', 'flavanoid'], target_name="wine",
ax=ax)
plt.title(f"Wine tree depth {depth}")
plt.tight_layout()
if depth>=max:
camera.snap(8)
else:
camera.snap()
# plt.show()
camera.save("wine-dtree-maxdepth.png", duration=500) # animated png
"""
Explanation: Wine data set
End of explanation
"""
def smiley(n = 1000):
# mouth
x1 = np.random.normal(1.0,.2,n).reshape(-1,1)
x2 = np.random.normal(0.4,.05,n).reshape(-1,1)
cl = np.full(shape=(n,1), fill_value=0, dtype=int)
d = np.hstack([x1,x2,cl])
data = d
# left eye
x1 = np.random.normal(.7,.2,n).reshape(-1,1)
x2 = x1 + .3 + np.random.normal(0,.1,n).reshape(-1,1)
cl = np.full(shape=(n,1), fill_value=1, dtype=int)
d = np.hstack([x1,x2,cl])
data = np.vstack([data, d])
# right eye
x1 = np.random.normal(1.3,.2,n).reshape(-1,1)
x2 = np.random.normal(0.8,.1,n).reshape(-1,1)
x2 = x1 - .5 + .3 + np.random.normal(0,.1,n).reshape(-1,1)
cl = np.full(shape=(n,1), fill_value=2, dtype=int)
d = np.hstack([x1,x2,cl])
data = np.vstack([data, d])
# face outline
noise = np.random.normal(0,.1,n).reshape(-1,1)
x1 = np.linspace(0,2,n).reshape(-1,1)
x2 = (x1-1)**2 + noise
cl = np.full(shape=(n,1), fill_value=3, dtype=int)
d = np.hstack([x1,x2,cl])
data = np.vstack([data, d])
df = pd.DataFrame(data, columns=['x1','x2','class'])
return df
"""
Explanation: Synthetic data set
End of explanation
"""
import pltvid
df = smiley(n=100)
X = df[['x1','x2']]
y = df['class']
rf = RandomForestClassifier(n_estimators=10, min_samples_leaf=1, n_jobs=-1)
rf.fit(X, y)
dpi = 300
camera = pltvid.Capture(dpi=dpi)
max = 100
tree_sizes = [*range(1,10)]+[*range(10,max+1,5)]
for nt in tree_sizes:
np.random.seed(1) # use same bagging sets for animation
rf = RandomForestClassifier(n_estimators=nt, min_samples_leaf=1, n_jobs=-1)
rf.fit(X, y)
fig,ax = plt.subplots(1,1, figsize=(5,3.5), dpi=dpi)
clfviz(rf, X.values, y, feature_names=['x1', 'x2'],
ntiles=70, dot_w=15, boundary_markersize=.4, ax=ax)
plt.title(f"Synthetic dataset, {nt} trees")
plt.tight_layout()
if nt>=tree_sizes[-1]:
camera.snap(5)
else:
camera.snap()
# plt.show()
camera.save("smiley-numtrees.png", duration=500)
"""
Explanation: Animate num trees in RF
End of explanation
"""
import pltvid
df = smiley(n=100) # more stark changes with fewer
X = df[['x1','x2']]
y = df['class']
dpi = 300
camera = pltvid.Capture(dpi=dpi)
max = 10
for depth in range(1,max+1):
t = DecisionTreeClassifier(max_depth=depth)
t.fit(X,y)
fig,ax = plt.subplots(1,1, figsize=(5,3.5), dpi=dpi)
clfviz(t, X, y,
feature_names=['x1', 'x2'], target_name="class",
colors={'scatter_edge': 'black',
'tesselation_alpha':.6},
ax=ax)
plt.title(f"Synthetic dataset, tree depth {depth}")
plt.tight_layout()
if depth>=max:
camera.snap(8)
else:
camera.snap()
# plt.show()
camera.save("smiley-dtree-maxdepth.png", duration=500)
"""
Explanation: Animate decision tree max depth
End of explanation
"""
import pltvid
df = smiley(n=100)
X = df[['x1','x2']]
y = df['class']
dpi = 300
camera = pltvid.Capture(dpi=dpi)
max = 20
for leafsz in range(1,max+1):
t = DecisionTreeClassifier(min_samples_leaf=leafsz)
t.fit(X,y)
fig,ax = plt.subplots(1,1, figsize=(5,3.5), dpi=dpi)
clfviz(t, X, y,
feature_names=['x1', 'x2'], target_name="class",
colors={'scatter_edge': 'black',
'tesselation_alpha':.4},
ax=ax)
plt.title(f"Synthetic dataset, {leafsz} samples/leaf")
plt.tight_layout()
if leafsz>=max:
camera.snap(8)
else:
camera.snap()
# plt.show()
camera.save("smiley-dtree-minsamplesleaf.png", duration=500)
"""
Explanation: Animate decision tree min samples per leaf
End of explanation
"""
|
newlawrence/poliastro
|
docs/source/examples/Natural and artificial perturbations.ipynb
|
mit
|
# Temporary hack, see https://github.com/poliastro/poliastro/issues/281
from IPython.display import HTML
HTML('<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.1.10/require.min.js"></script>')
import numpy as np
from plotly.offline import init_notebook_mode
init_notebook_mode(connected=True)
%matplotlib inline
import matplotlib.pyplot as plt
import functools
import numpy as np
from astropy import units as u
from astropy.time import Time
from astropy.coordinates import solar_system_ephemeris
from poliastro.twobody.propagation import cowell
from poliastro.ephem import build_ephem_interpolant
from poliastro.core.elements import rv2coe
from poliastro.core.util import norm
from poliastro.util import time_range
from poliastro.core.perturbations import (
atmospheric_drag, third_body, J2_perturbation
)
from poliastro.bodies import Earth, Moon
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter, plot, OrbitPlotter3D
"""
Explanation: Natural and artificial perturbations
End of explanation
"""
R = Earth.R.to(u.km).value
k = Earth.k.to(u.km**3 / u.s**2).value
orbit = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format='jd', scale='tdb'))
# parameters of a body
C_D = 2.2 # dimensionless (any value would do)
A = ((np.pi / 4.0) * (u.m**2)).to(u.km**2).value # km^2
m = 100 # kg
B = C_D * A / m
# parameters of the atmosphere
rho0 = Earth.rho0.to(u.kg / u.km**3).value # kg/km^3
H0 = Earth.H0.to(u.km).value
tof = (100000 * u.s).to(u.day).value
tr = time_range(0.0, periods=2000, end=tof, format='jd', scale='tdb')
cowell_with_ad = functools.partial(cowell, ad=atmospheric_drag,
R=R, C_D=C_D, A=A, m=m, H0=H0, rho0=rho0)
rr = orbit.sample(tr, method=cowell_with_ad)
plt.ylabel('h(t)')
plt.xlabel('t, days')
plt.plot(tr.value, rr.data.norm() - Earth.R)
"""
Explanation: Atmospheric drag
The poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor the decay of a near-Earth orbit over time using our new module poliastro.twobody.perturbations!
End of explanation
"""
r0 = np.array([-2384.46, 5729.01, 3050.46]) # km
v0 = np.array([-7.36138, -2.98997, 1.64354]) # km/s
k = Earth.k.to(u.km**3 / u.s**2).value
orbit = Orbit.from_vectors(Earth, r0 * u.km, v0 * u.km / u.s)
tof = (48.0 * u.h).to(u.s).value
rr, vv = cowell(orbit, np.linspace(0, tof, 2000), ad=J2_perturbation, J2=Earth.J2.value, R=Earth.R.to(u.km).value)
raans = [rv2coe(k, r, v)[3] for r, v in zip(rr, vv)]
plt.ylabel('RAAN(t)')
plt.xlabel('t, s')
plt.plot(np.linspace(0, tof, 2000), raans)
"""
Explanation: Evolution of RAAN due to the J2 perturbation
We can also see how the J2 perturbation changes RAAN over time!
End of explanation
"""
# database keeping positions of bodies in Solar system over time
solar_system_ephemeris.set('de432s')
j_date = 2454283.0 * u.day # setting the exact event date is important
tof = (60 * u.day).to(u.s).value
# create interpolant of 3rd body coordinates (calling it on every iteration would be just too slow)
body_r = build_ephem_interpolant(Moon, 28 * u.day, (j_date, j_date + 60 * u.day), rtol=1e-2)
epoch = Time(j_date, format='jd', scale='tdb')
initial = Orbit.from_classical(Earth, 42164.0 * u.km, 0.0001 * u.one, 1 * u.deg,
0.0 * u.deg, 0.0 * u.deg, 0.0 * u.rad, epoch=epoch)
# multiply Moon gravity by 400 so that effect is visible :)
cowell_with_3rdbody = functools.partial(cowell, rtol=1e-6, ad=third_body,
k_third=400 * Moon.k.to(u.km**3 / u.s**2).value,
third_body=body_r)
tr = time_range(j_date.value, periods=1000, end=j_date.value + 60, format='jd', scale='tdb')
rr = initial.sample(tr, method=cowell_with_3rdbody)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label='orbit influenced by Moon')
frame.show()
"""
Explanation: 3rd body
Apart from time-independent perturbations such as atmospheric drag and J2/J3, we have time-dependent perturbations. Let's see how the Moon changes the orbit of a GEO satellite over time!
End of explanation
"""
from poliastro.twobody.thrust import change_inc_ecc
ecc_0, ecc_f = 0.4, 0.0
a = 42164 # km
inc_0 = 0.0 # rad, baseline
inc_f = (20.0 * u.deg).to(u.rad).value # rad
argp = 0.0 # rad, the method is efficient for 0 and 180
f = 2.4e-6 # km / s2
k = Earth.k.to(u.km**3 / u.s**2).value
s0 = Orbit.from_classical(
Earth,
a * u.km, ecc_0 * u.one, inc_0 * u.deg,
0 * u.deg, argp * u.deg, 0 * u.deg,
epoch=Time(0, format='jd', scale='tdb')
)
a_d, _, _, t_f = change_inc_ecc(s0, ecc_f, inc_f, f)
cowell_with_ad = functools.partial(cowell, rtol=1e-6, ad=a_d)
tr = time_range(0.0, periods=1000, end=(t_f * u.s).to(u.day).value, format='jd', scale='tdb')
rr = s0.sample(tr, method=cowell_with_ad)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label='orbit with artificial thrust')
frame.show()
"""
Explanation: Thrusts
Apart from natural perturbations, there are artificial thrusts aimed at intentionally changing orbit parameters. One such change is a simultaneous change of eccentricity and inclination.
End of explanation
"""
|
tritemio/multispot_paper
|
out_notebooks/usALEX-5samples-PR-leakage-dir-ex-all-ph-out-17d.ipynb
|
mit
|
ph_sel_name = "None"
data_id = "17d"
# data_id = "7d"
"""
Explanation: Executed: Mon Mar 27 11:38:52 2017
Duration: 7 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
"""
from fretbursts import *
init_notebook()
from IPython.display import display
"""
Explanation: Load software and filenames definitions
End of explanation
"""
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
"""
Explanation: Data folder:
End of explanation
"""
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
data_id
"""
Explanation: List of data files:
End of explanation
"""
d = loader.photon_hdf5(filename=files_dict[data_id])
"""
Explanation: Data load
Initial loading of the data:
End of explanation
"""
leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv'
leakage = np.loadtxt(leakage_coeff_fname)
print('Leakage coefficient:', leakage)
"""
Explanation: Load the leakage coefficient from disk:
End of explanation
"""
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'
dir_ex_aa = np.loadtxt(dir_ex_coeff_fname)
print('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)
"""
Explanation: Load the direct excitation coefficient ($d_{exAA}$) from disk:
End of explanation
"""
d.leakage = leakage
d.dir_ex = dir_ex_aa
"""
Explanation: Update d with the correction coefficients:
End of explanation
"""
d.ph_times_t, d.det_t
"""
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
"""
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
"""
Explanation: We need to define some parameters: the donor and acceptor channels, the alternation period, and the donor and acceptor excitation windows:
End of explanation
"""
plot_alternation_hist(d)
"""
Explanation: We should check that everything is OK with an alternation histogram:
End of explanation
"""
loader.alex_apply_period(d)
"""
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
"""
d
"""
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
"""
d.time_max
"""
Explanation: Or check the measurements duration:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
"""
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
"""
d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all'))
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
"""
Explanation: Burst search and selection
End of explanation
"""
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size',
x_range=E_range_do, x_ax=E_ax, save_fitter=True)
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E_ax.max(): bandwidth])
plt.xlim(-0.3, 0.5)
print("%s: E_peak = %.2f%%" % (ds.ph_sel, E_pr_do_kde*100))
"""
Explanation: Donor Leakage fit
End of explanation
"""
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
"""
Explanation: Burst sizes
End of explanation
"""
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
"""
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
ds_fret.fit_E_m(weights='size')
"""
Explanation: Weighted mean of $E$ of each burst:
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
"""
Explanation: Gaussian fit (no weights):
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err
"""
Explanation: Gaussian fit (using burst size as weights):
End of explanation
"""
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_kde, S_gauss, S_gauss_sig, S_gauss_err
"""
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
"""
Explanation: The maximum likelihood fit for a Gaussian population is simply the sample mean and standard deviation:
End of explanation
"""
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
"""
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
"""
sample = data_id
"""
Explanation: Save data to file
End of explanation
"""
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err S_kde S_gauss S_gauss_sig S_gauss_err '
'E_pr_do_kde nt_mean\n')
"""
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
"""
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-leakage-dir-ex-all-ph.csv', 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
"""
Explanation: This is just a trick to format the different variables:
End of explanation
"""
|
informatics-isi-edu/deriva-py
|
docs/derivapy-datapath-example-2.ipynb
|
apache-2.0
|
# Import deriva modules
from deriva.core import ErmrestCatalog, get_credential
# Connect with the deriva catalog
protocol = 'https'
hostname = 'www.facebase.org'
catalog_number = 1
credential = get_credential(hostname)
catalog = ErmrestCatalog(protocol, hostname, catalog_number, credential)
# Get the path builder interface for this catalog
pb = catalog.getPathBuilder()
"""
Explanation: DataPath Example 2
This notebook gives a very basic example of how to access data.
It assumes that you understand the concepts presented in the
example 1 notebook.
End of explanation
"""
path = pb.schemas['isa'].tables['dataset'].path
"""
Explanation: DataPaths
The PathBuilder object allows you to begin DataPaths from the base Tables. A DataPath begins with a Table (or a TableAlias, to be discussed later) as its "root" from which one can "link", "filter", and fetch its "entities".
Start a path rooted at a table from the catalog
We will reference a table from the PathBuilder pb variable from above. Using the PathBuilder, we will reference the "isa" schema, then the "dataset" table, and from that table start a path.
End of explanation
"""
path = pb.isa.dataset.path
"""
Explanation: We could have used the more compact dot-notation to start the same path.
End of explanation
"""
print(path.uri)
"""
Explanation: Getting the URI of the current path
All DataPaths have URIs for the referenced resources in ERMrest. The URI identifies the resources which are available through "RESTful" Web protocols supported by ERMrest.
End of explanation
"""
results = path.entities()
"""
Explanation: ResultSets
The data from a DataPath are accessed through a pythonic container object, the ResultSet. The ResultSet is returned by the DataPath's entities() and other methods.
End of explanation
"""
results.fetch()
"""
Explanation: Fetch entities from the catalog
Now we can get entities from the server using the ResultSet's fetch() method.
End of explanation
"""
len(results)
"""
Explanation: ResultSets behave like python containers. For example, we can check the count of rows in this ResultSet.
End of explanation
"""
results[9]
"""
Explanation: Note: If we had not explicitly called the fetch() method, then it would have been called implicitly on the first container operation such as len(...), list(...), iter(...) or get item [...].
Get an entity
To get one entity from the set, use the usual container operator to get an item.
End of explanation
"""
dataset = pb.schemas['isa'].tables['dataset']
print(results[9][dataset.accession.name])
"""
Explanation: Get a specific attribute value from an entity
To get one attribute value from an entity, get the item using its Column's name property.
End of explanation
"""
results.fetch(limit=3)
len(results)
"""
Explanation: Fetch a Limited Number of Results
To set a limit on the number of results fetched from the catalog, use the explicit fetch(limit=...) method with the desired upper limit.
End of explanation
"""
for entity in results:
print(entity[dataset.accession.name])
"""
Explanation: Iterate over the ResultSet
ResultSets are iterable like a typical container.
End of explanation
"""
from pandas import DataFrame
DataFrame(results)
"""
Explanation: Convert to Pandas DataFrame
ResultSets can be transformed into the popular Pandas DataFrame.
End of explanation
"""
results = path.attributes(dataset.accession, dataset.title, dataset.released.alias('is_released')).fetch(limit=5)
"""
Explanation: Selecting Attributes
It is also possible to fetch only a subset of attributes from the catalog. The attributes(...) method accepts a variable argument list followed by keyword arguments. Each argument must be a Column object from the table's columns container.
Renaming selected attributes
To rename the selected attributes, use the alias(...) method on the column object. For example, attributes(table.column.alias('new_name')) will rename table.column to new_name in the entities returned from the server. (It will not change anything in the stored catalog data.)
End of explanation
"""
list(results)
"""
Explanation: Convert to list
Now we can look at the results from the above fetch. To demonstrate a different access mode, we can convert the entities to a standard python list and dump to the console.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/snu/cmip6/models/sandbox-1/land.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-1', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: SNU
Source ID: SANDBOX-1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify what the snow albedo is a function of*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (horizontal, vertical, etc.)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
bblais/Classy
|
examples/Example Bio.ipynb
|
mit
|
print("original sequence:")
print("\t",sequence_data.data[0])
print("the first few chunks:")
for vector in data.vectors[:10]:
print("\t",bio.vector_to_sequence(vector,data.letters))
"""
Explanation: here's a little sanity check...
End of explanation
"""
save_csv('small sequence dataset.csv',data)
"""
Explanation: you only need to save to CSV if you want to inspect the vectors in Excel; usually this is not necessary
End of explanation
"""
sequence_data_train=bio.load_sequences('data/small sequence dataset.xlsx')
sequence_data_test=bio.load_sequences('data/another small sequence dataset.xlsx')
data_train,data_test=bio.sequences_to_vectors(sequence_data_train,sequence_data_test,chunksize=5)
"""
Explanation: Separate files for train and test, rather than split one file
End of explanation
"""
|
brettavedisian/phys202-2015-work
|
midterm/InteractEx06.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
"""
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
"""
Image('fermidist.png')
"""
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
"""
def fermidist(energy, mu, kT):
"""Compute the Fermi distribution at energy, mu and kT."""
return (np.exp((energy-mu)/kT)+1)**-1
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
"""
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
\begin{equation}
F(\epsilon) = {\displaystyle \frac{1}{e^{(\epsilon - \mu)/kT}+1}}
\end{equation}
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
"""
np.arange(0,10.01,0.01)
def plot_fermidist(mu, kT):
energy=np.arange(0,10.01,0.01)
plt.figure(figsize=(10,6))
plt.plot(energy,fermidist(energy,mu,kT))
plt.tick_params(axis='x', top='off')
plt.tick_params(axis='y', right='off')
plt.xlabel('Energy')
plt.xlim(left=0, right=10)
plt.ylim(bottom=0.0,top=1.0)
plt.ylabel('Fermi Distribution')
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
"""
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
"""
interact(plot_fermidist, mu=(0.0,5.0,0.1), kT=(0.1,10.0,0.1));
"""
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation
"""
|
HazyResearch/snorkel
|
tutorials/cdr/CDR_Tutorial_3.ipynb
|
apache-2.0
|
%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
from snorkel import SnorkelSession
session = SnorkelSession()
from snorkel.models import candidate_subclass
ChemicalDisease = candidate_subclass('ChemicalDisease', ['chemical', 'disease'])
train = session.query(ChemicalDisease).filter(ChemicalDisease.split == 0).all()
dev = session.query(ChemicalDisease).filter(ChemicalDisease.split == 1).all()
test = session.query(ChemicalDisease).filter(ChemicalDisease.split == 2).all()
print('Training set:\t{0} candidates'.format(len(train)))
print('Dev set:\t{0} candidates'.format(len(dev)))
print('Test set:\t{0} candidates'.format(len(test)))
"""
Explanation: Chemical-Disease Relation (CDR) Tutorial
In this example, we'll be writing an application to extract mentions of chemical-induced-disease relationships from Pubmed abstracts, as per the BioCreative CDR Challenge. This tutorial will show off some of the more advanced features of Snorkel, so we'll assume you've followed the Intro tutorial.
Let's start by reloading from the last notebook.
End of explanation
"""
from snorkel.annotations import load_marginals
train_marginals = load_marginals(session, split=0)
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
from snorkel.learning.pytorch import LSTM
train_kwargs = {
'lr': 0.01,
'embedding_dim': 100,
'hidden_dim': 100,
'n_epochs': 20,
'dropout': 0.5,
'rebalance': 0.25,
'print_freq': 5,
'seed': 1701
}
lstm = LSTM(n_threads=None)
lstm.train(train, train_marginals, X_dev=dev, Y_dev=L_gold_dev, **train_kwargs)
"""
Explanation: Part V: Training an LSTM extraction model
In the intro tutorial, we automatically featurized the candidates and trained a linear model over these features. Here, we'll train a more complicated model for relation extraction: an LSTM network. You can read more about LSTMs here or here. An LSTM is a type of recurrent neural network that automatically generates a numerical representation of the candidate based on the sentence text, so there is no need to featurize explicitly as in the intro tutorial. LSTMs take longer to train, and Snorkel doesn't currently support hyperparameter searches for them. We'll train a single model here, but feel free to try out other parameter sets. Just make sure to use the development set - and not the test set - for model selection.
Note: Again, training for more epochs than below will greatly improve performance; try it out!
End of explanation
"""
from load_external_annotations import load_external_labels
load_external_labels(session, ChemicalDisease, split=2, annotator='gold')
L_gold_test = load_gold_labels(session, annotator_name='gold', split=2)
L_gold_test
lstm.score(test, L_gold_test)
"""
Explanation: Scoring on the test set
Finally, we'll evaluate our performance on the blind test set of 500 documents. We'll load labels similarly to how we did for the development set, and use the score function of our extraction model to see how we did.
End of explanation
"""
|
openclimatedata/pymagicc
|
notebooks/Diagnose-TCR-ECS-TCRE.ipynb
|
agpl-3.0
|
# NBVAL_IGNORE_OUTPUT
from datetime import datetime
from pymagicc.core import MAGICC6, MAGICC7
%matplotlib inline
from matplotlib import pyplot as plt
plt.style.use("ggplot")
"""
Explanation: Diagnosing MAGICC's TCR, ECS and TCRE
End of explanation
"""
with MAGICC6() as magicc:
# you can tweak whatever parameters you want in
# MAGICC6/run/MAGCFG_DEFAULTALL.CFG, here's a few
# examples that might be of interest
results = magicc.diagnose_tcr_ecs_tcre(
CORE_CLIMATESENSITIVITY=2.75,
CORE_DELQ2XCO2=3.65,
CORE_HEATXCHANGE_LANDOCEAN=1.5,
)
print(
"TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}".format(
**results
)
)
"""
Explanation: Basic usage
The simplest option is to call the diagnose_tcr_ecs_tcre method of the MAGICC instance and read out the results.
End of explanation
"""
with MAGICC6() as magicc:
results_default = magicc.diagnose_tcr_ecs_tcre()
results_low_ecs = magicc.diagnose_tcr_ecs_tcre(CORE_CLIMATESENSITIVITY=1.5)
results_high_ecs = magicc.diagnose_tcr_ecs_tcre(
CORE_CLIMATESENSITIVITY=4.5
)
print(
"Default TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}".format(
**results_default
)
)
print(
"Low TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}".format(
**results_low_ecs
)
)
print(
"High TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}".format(
**results_high_ecs
)
)
"""
Explanation: If we wish, we can alter the MAGICC instance's parameters before calling the diagnose_tcr_ecs_tcre method.
End of explanation
"""
# NBVAL_IGNORE_OUTPUT
join_year = 1900
pdf = (
results["timeseries"]
.filter(region="World")
.to_iamdataframe()
.swap_time_for_year()
.data
)
for variable, df in pdf.groupby("variable"):
fig, axes = plt.subplots(1, 2, sharey=True, figsize=(16, 4.5))
unit = df["unit"].unique()[0]
for scenario, scdf in df.groupby("scenario"):
scdf.plot(x="year", y="value", ax=axes[0], label=scenario)
scdf.plot(x="year", y="value", ax=axes[1], label=scenario)
axes[0].set_xlim([1750, join_year])
axes[0].set_ylabel("{} ({})".format(variable, unit))
axes[1].set_xlim(left=join_year)
axes[1].legend_.remove()
fig.tight_layout()
# NBVAL_IGNORE_OUTPUT
results["timeseries"].filter(
scenario="abrupt-2xCO2", region="World", year=range(1795, 1905)
).timeseries()
"""
Explanation: Making a plot
The output also includes the timeseries that were used in the diagnosis experiment. Hence we can use the output to make a plot.
End of explanation
"""
|
vadim-ivlev/STUDY
|
handson-data-science-python/DataScience-Python3/DecisionTree.ipynb
|
mit
|
import numpy as np
import pandas as pd
from sklearn import tree
input_file = "e:/sundog-consult/udemy/datascience/PastHires.csv"
df = pd.read_csv(input_file, header = 0)
df.head()
"""
Explanation: Decision Trees
First we'll load some fake data on past hires I made up. Note how we use pandas to convert a csv file into a DataFrame:
End of explanation
"""
d = {'Y': 1, 'N': 0}
df['Hired'] = df['Hired'].map(d)
df['Employed?'] = df['Employed?'].map(d)
df['Top-tier school'] = df['Top-tier school'].map(d)
df['Interned'] = df['Interned'].map(d)
d = {'BS': 0, 'MS': 1, 'PhD': 2}
df['Level of Education'] = df['Level of Education'].map(d)
df.head()
"""
Explanation: scikit-learn needs everything to be numerical for decision trees to work. So, we'll map Y,N to 1,0 and levels of education to some scale of 0-2. In the real world, you'd need to think about how to deal with unexpected or missing data! By using map(), we know we'll get NaN for unexpected values.
End of explanation
"""
features = list(df.columns[:6])
features
"""
Explanation: Next we need to separate the features from the target column that we're trying to build a decision tree for.
End of explanation
"""
y = df["Hired"]
X = df[features]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X,y)
"""
Explanation: Now actually construct the decision tree:
End of explanation
"""
from IPython.display import Image
from sklearn.externals.six import StringIO
import pydotplus
dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data,
feature_names=features)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
"""
Explanation: ... and display it. Note you need to have pydotplus installed for this to work. (!pip install pydotplus)
To read this decision tree, each condition branches left for "true" and right for "false". When you end up at a value, the value array represents how many samples exist in each target value. So value = [0. 5.] means there are 0 "no hires" and 5 "hires" by the time we get to that point. value = [3. 0.] means 3 no-hires and 0 hires.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=10)
clf = clf.fit(X, y)
#Predict employment of an employed 10-year veteran
print (clf.predict([[10, 1, 4, 0, 0, 0]]))
#...and an unemployed 10-year veteran
print (clf.predict([[10, 0, 4, 0, 0, 0]]))
"""
Explanation: Ensemble learning: using a random forest
We'll use a random forest of 10 decision trees to predict employment of specific candidate profiles:
End of explanation
"""
|
Xilinx/PYNQ
|
boards/Pynq-Z1/base/notebooks/pmod/pmod_tc1.ipynb
|
bsd-3-clause
|
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
from pynq.lib import Pmod_TC1
# TC1 sensor is on PMODB
my_tc1 = Pmod_TC1(base.PMODB)
print('Raw Register Value: %08x hex' % my_tc1.read_raw())
print('Ref Junction Temp: %.4f' % my_tc1.read_junction_temperature())
print('Thermocouple Temp: %.2f' % my_tc1.read_thermocouple_temperature())
print('Alarm flags: %08x hex' % my_tc1.read_alarm_flags())
"""
Explanation: PMOD TC1 Sensor demonstration
This demonstration shows how to use the PmodTC1. You will also see how to plot a graph using matplotlib.
The PmodTC1 is required.
The thermocouple sensor is initialized and set to log a reading every 1 second. The temperature of the sensor
can be changed by touching it with warm fingers or by blowing on it.
1. Use TC1 to read the current temperature
Connect the TC1 sensor to PMODB.
End of explanation
"""
my_tc1.start_log()
"""
Explanation: 2. Starting logging temperature once every second
Users can use set_log_interval_ms to set the interval between two consecutive samples. By default it is set to 1 second.
End of explanation
"""
my_tc1.stop_log()
log = my_tc1.get_log()
"""
Explanation: 3. Modifying the temperature
Touch the thermocouple with warm fingers; or
Blow on the thermocouple with cool air
Stop the logging whenever you are finished trying to change the sensor's value.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from pynq.lib.pmod.pmod_tc1 import reg_to_tc
from pynq.lib.pmod.pmod_tc1 import reg_to_ref
tc = [reg_to_tc(v) for v in log]
ref = [reg_to_ref(v) for v in log]
plt.plot(range(len(tc)), tc, 'ro', label='Thermocouple')
plt.plot(range(len(ref)), ref, 'bo', label='Ref Junction')
plt.title('TC1 Sensor log')
plt.axis([0, len(log), min(tc+ref)*0.9, max(tc+ref)*1.1])
plt.legend()
plt.xlabel('Sample Number')
plt.ylabel('Temperature (C)')
plt.grid()
plt.show()
"""
Explanation: 4. Plot values over time
End of explanation
"""
|
dmlc/mxnet
|
example/svrg_module/benchmarks/svrg_benchmark.ipynb
|
apache-2.0
|
import os
import json
import sys
import tempfile
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import mxnet as mx
from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.datasets import load_svmlight_file
sys.path.insert(0, "../linear_regression")
from data_reader import get_year_prediction_data
%matplotlib inline
"""
Explanation: Linear Regression Using SVRGModule on YearPredictionMSD Dataset
In this notebook, a linear regression model will be fit on the YearPredictionMSD dataset, which is used to predict the release year of a song from its audio features. The dataset has 90 features and over 400,000 samples; it is downsampled to 5,000 samples in this experiment.
End of explanation
"""
feature_dim, train_features, train_labels = get_year_prediction_data()
train_features = train_features[-5000:]
train_labels = train_labels[-5000:]
"""
Explanation: Read Data
The first step is to get the training features and labels and normalize the data. In this example, we will use 5000 data samples.
End of explanation
"""
def create_lin_reg_network(batch_size=100):
train_iter = mx.io.NDArrayIter(train_features, train_labels, batch_size=batch_size, shuffle=True,
data_name='data', label_name='label')
data = mx.sym.Variable("data")
label = mx.sym.Variable("label")
weight = mx.sym.Variable("fc_weight", shape=(1, 90))
net = mx.sym.dot(data, weight.transpose())
bias = mx.sym.Variable("fc_bias", shape=(1,), wd_mult=0.0, lr_mult=10.0)
net = mx.sym.broadcast_plus(net, bias)
net = mx.sym.LinearRegressionOutput(data=net, label=label)
return train_iter, net
"""
Explanation: Create Linear Regression Network
End of explanation
"""
def train_svrg_lin_reg(num_epoch=100, batch_size=100, update_freq=2, output='svrg_lr.json',
optimizer_params=None):
di, net = create_lin_reg_network(batch_size=batch_size)
#Create a SVRGModule
mod = SVRGModule(symbol=net, context=mx.cpu(0), data_names=['data'], label_names=['label'], update_freq=update_freq)
mod.bind(data_shapes=di.provide_data, label_shapes=di.provide_label)
mod.init_params(initializer=mx.init.Zero(), allow_missing=False, force_init=False, allow_extra=False)
mod.init_optimizer(kvstore='local', optimizer='sgd', optimizer_params=optimizer_params)
metrics = mx.metric.create("mse")
results = {}
for e in range(num_epoch):
results[e] = {}
metrics.reset()
if e % mod.update_freq == 0:
mod.update_full_grads(di)
di.reset()
for batch in di:
mod.forward_backward(data_batch=batch)
mod.update()
mod.update_metric(metrics, batch.label)
results[e]["mse"] = metrics.get()[1]
f = open(output, 'w+')
f.write(json.dumps(results, indent=4, sort_keys=True))
f.close()
"""
Explanation: SVRGModule with SVRG Optimization
In this example, we will use the intermediate-level API of SVRGModule and dump the MSE per epoch to a JSON file for plotting.
End of explanation
"""
def train_sgd_lin_reg(num_epoch=100, batch_size=100, update_freq=2, output='sgd_lr.json',
optimizer_params=None):
di, net = create_lin_reg_network(batch_size=batch_size)
#Create a standard module
mod = mx.mod.Module(symbol=net, context=mx.cpu(0), data_names=['data'], label_names=['label'])
mod.bind(data_shapes=di.provide_data, label_shapes=di.provide_label)
mod.init_params(initializer=mx.init.Zero(), allow_missing=False, force_init=False, allow_extra=False)
mod.init_optimizer(kvstore='local', optimizer='sgd', optimizer_params=optimizer_params)
metrics = mx.metric.create("mse")
results = {}
for e in range(num_epoch):
results[e] = {}
metrics.reset()
di.reset()
for batch in di:
mod.forward_backward(data_batch=batch)
mod.update()
mod.update_metric(metrics, batch.label)
results[e]["mse"] = metrics.get()[1]
f = open(output, 'w+')
f.write(json.dumps(results, indent=4, sort_keys=True))
f.close()
"""
Explanation: Module with SGD Optimization
End of explanation
"""
train_svrg_lin_reg(optimizer_params={'lr_scheduler': mx.lr_scheduler.FactorScheduler(step=10, factor=0.99)})
train_sgd_lin_reg(optimizer_params={'lr_scheduler': mx.lr_scheduler.FactorScheduler(step=10, factor=0.99)})
# plot graph
#Plot training loss over Epochs:
color = sns.color_palette()
#Draw Weight Variance Ratio
dataplot3 = {"svrg_mse": [], "sgd_mse": []}
with open('sgd_lr.json') as sgd_data, open('svrg_lr.json') as svrg_data:
sgd = json.load(sgd_data)
svrg = json.load(svrg_data)
for epoch in range(100):
dataplot3["svrg_mse"].append(svrg[str(epoch)]["mse"])
dataplot3["sgd_mse"].append(sgd[str(epoch)]["mse"])
x3 = list(range(100))
plt.figure(figsize=(20, 12))
plt.title("Training Loss Over Epochs")
sns.pointplot(x3, dataplot3['svrg_mse'], color=color[9])
sns.pointplot(x3, dataplot3['sgd_mse'], color=color[8])
color_patch1 = mpatches.Patch(color=color[9], label="svrg_mse")
color_patch2 = mpatches.Patch(color=color[8], label="sgd_mse")
plt.legend(handles=[color_patch1, color_patch2])
plt.ylabel('Training Loss', fontsize=12)
plt.xlabel('Epochs', fontsize=12)
"""
Explanation: Training Loss over 100 Epochs Using lr_scheduler
When a large learning rate is used with SGD, the training loss drops fast but oscillates above the minimum and never converges. With a small learning rate, it will eventually reach the minimum after many iterations. A common practice is to use learning rate scheduling, starting with a large learning rate and gradually decreasing it.
End of explanation
"""
train_svrg_lin_reg(output="svrg_0.025.json", optimizer_params=(('learning_rate', 0.025),))
train_sgd_lin_reg(output="sgd_0.001.json", optimizer_params=(("learning_rate", 0.001),))
train_sgd_lin_reg(output="sgd_0.0025.json", optimizer_params=(("learning_rate", 0.0025),))
train_sgd_lin_reg(output="sgd_0.005.json", optimizer_params=(("learning_rate", 0.005),))
#Plot training loss over Epochs:
color = sns.color_palette()
#Draw Weight Variance Ratio
dataplot3 = {"svrg_mse": [], "sgd_mse_lr_0.001": [], "sgd_mse_lr_0.0025": [], "sgd_mse_lr_0.005":[]}
with open('sgd_0.001.json') as sgd_data, open('svrg_0.025.json') as svrg_data, open('sgd_0.0025.json') as sgd_data_2, open('sgd_0.005.json') as sgd_data_3:
sgd = json.load(sgd_data)
svrg = json.load(svrg_data)
sgd_lr = json.load(sgd_data_2)
sgd_lr_2 = json.load(sgd_data_3)
for epoch in range(100):
dataplot3["svrg_mse"].append(svrg[str(epoch)]["mse"])
dataplot3["sgd_mse_lr_0.001"].append(sgd[str(epoch)]["mse"])
dataplot3["sgd_mse_lr_0.0025"].append(sgd_lr[str(epoch)]["mse"])
dataplot3["sgd_mse_lr_0.005"].append(sgd_lr_2[str(epoch)]["mse"])
x3 = list(range(100))
plt.figure(figsize=(20, 12))
plt.title("Training Loss Over Epochs")
sns.pointplot(x3, dataplot3['svrg_mse'], color=color[9])
sns.pointplot(x3, dataplot3['sgd_mse_lr_0.001'], color=color[8])
sns.pointplot(x3, dataplot3['sgd_mse_lr_0.0025'], color=color[3])
sns.pointplot(x3, dataplot3['sgd_mse_lr_0.005'], color=color[7])
color_patch1 = mpatches.Patch(color=color[9], label="svrg_mse_lr_0.025")
color_patch2 = mpatches.Patch(color=color[8], label="sgd_mse_lr_0.001")
color_patch3 = mpatches.Patch(color=color[3], label="sgd_mse_lr_0.0025")
color_patch4 = mpatches.Patch(color=color[7], label="sgd_mse_lr_0.005")
plt.legend(handles=[color_patch1, color_patch2, color_patch3, color_patch4])
plt.ylabel('Training Loss', fontsize=12)
plt.xlabel('Epochs', fontsize=12)
"""
Explanation: Training Loss Comparison with SGD with fixed learning rates
Choosing fixed learning rates (0.001, 0.0025, 0.005) for SGD and a relatively large learning rate of 0.025 for SVRG, we can see that the SVRG training loss decreases faster and more smoothly than SGD's. The learning rate for SVRG does not need to decay to zero, which means we can start with a larger learning rate.
End of explanation
"""
|
mauroalberti/gsf
|
checks/Test divergence and curl module.ipynb
|
gpl-3.0
|
from pygsf.mathematics.arrays import *
from pygsf.spatial.rasters.geotransform import *
from pygsf.spatial.rasters.fields import *
"""
Explanation: Test divergence and curl module
Created by Mauro Alberti
Last run: 2019-06-22
This document presents tests of divergence and curl module calculation using pygsf.
Preliminary settings
The modules to import for dealing with grids are:
End of explanation
"""
def z_func_fx(x, y):
return 0.0001 * x * y**3
def z_func_fy(x, y):
return - 0.0002 * x**2 * y
"""
Explanation: Divergence in 2D
The definition of divergence for our 2D case is:
\begin{align}
divergence = \nabla \cdot \vec{\mathbf{v}} & = \frac{\partial{v_x}}{\partial x} + \frac{\partial{v_y}}{\partial y}
\end{align}
Curl module in 2D
The definition of curl module in our 2D case is:
\begin{equation}
\nabla \times \vec{\mathbf{v}} = \begin{vmatrix}
\mathbf{i} & \mathbf{j} & \mathbf{k} \\
\frac{\partial }{\partial x} & \frac{\partial }{\partial y} & \frac{\partial }{\partial z} \\
{v_x} & {v_y} & 0
\end{vmatrix}
\end{equation}
so that the module of the curl is:
\begin{equation}
|curl| = \frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y}
\end{equation}
The implementation of the curl module calculation has been debugged using the code at [2] by Johnny Lin. Deviations from the expected theoretical values are the same for both implementations.
Vector field parameters: testing divergence
We calculate a theoretical, 2D vector field and check that the parameters calculated by pygsf is equal to the expected one.
We use a modified example from p. 67 in [3].
\begin{equation}
\vec{\mathbf{v}} = 0.0001 x y^3 \vec{\mathbf{i}} - 0.0002 x^2 y \vec{\mathbf{j}} + 0 \vec{\mathbf{k}}
\end{equation}
In order to create the two grids that represent the x- and the y-components, we therefore define the following two "transfer" functions from coordinates to z values:
End of explanation
"""
rows=100; cols=200
size_x = 10; size_y = 10
tlx = 500.0; tly = 250.0
"""
Explanation: The above functions define the value of the cells, using the given x and y geographic coordinates.
geotransform and grid definitions
Gridded field values are calculated for the theoretical source vector field x- and y- components using the provided number of rows and columns for the grid:
End of explanation
"""
gt1 = GeoTransform(
inTopLeftX=tlx,
inTopLeftY=tly,
inPixWidth=size_x,
inPixHeight=size_y)
"""
Explanation: Arrays components are defined in terms of indices i and j, so to transform array indices to geographical coordinates we use a geotransform. The one chosen is:
End of explanation
"""
fx1 = array_from_function(
row_num=rows,
col_num=cols,
geotransform=gt1,
z_transfer_func=z_func_fx)
"""
Explanation: Note that the chosen geotransform has no axis rotation, as is the case for most geographic grids.
vector field x-component
End of explanation
"""
fy1 = array_from_function(
row_num=rows,
col_num=cols,
geotransform=gt1,
z_transfer_func=z_func_fy)
"""
Explanation: vector field y-component
End of explanation
"""
def z_func_div(x, y):
return 0.0001 * y**3 - 0.0002 * x**2
"""
Explanation: theoretical divergence
the theoretical divergence transfer function is:
End of explanation
"""
theor_div = array_from_function(
row_num=rows,
col_num=cols,
geotransform=gt1,
z_transfer_func=z_func_div)
"""
Explanation: The theoretical divergence field can be created using the function expressing the analytical derivatives z_func_div:
End of explanation
"""
div = divergence(
fld_x=fx1,
fld_y=fy1,
cell_size_x=size_x,
cell_size_y=size_y)
"""
Explanation: pygsf-estimated divergence
Divergence as resulting from pygsf calculation:
End of explanation
"""
assert np.allclose(theor_div, div)
"""
Explanation: We check whether the theoretical and the estimated divergence fields are close:
End of explanation
"""
def z_func_fx(x, y):
return y
def z_func_fy(x, y):
return - x
"""
Explanation: Vector field parameters: testing curl module
We test another theoretical, 2D vector field, maintaining the same geotransform and other grid parameters as in the previous example. We use the field described in example 1 in [4]:
\begin{equation}
\vec{\mathbf{v}} = y \vec{\mathbf{i}} - x \vec{\mathbf{j}} + 0 \vec{\mathbf{k}}
\end{equation}
The "transfer" functions from coordinates to z values are:
End of explanation
"""
rows=200; cols=200
size_x = 10; size_y = 10
tlx = -1000.0; tly = 1000.0
"""
Explanation: geotransform and grid definitions
Gridded field values are calculated for the theoretical source vector field x- and y- components using the provided number of rows and columns for the grid:
End of explanation
"""
gt1 = GeoTransform(
inTopLeftX=tlx,
inTopLeftY=tly,
inPixWidth=size_x,
inPixHeight=size_y)
"""
Explanation: Array components are defined in terms of indices i and j, so to transform array indices to geographical coordinates we use a geotransform. The one chosen is:
End of explanation
"""
fx2 = array_from_function(
row_num=rows,
col_num=cols,
geotransform=gt1,
z_transfer_func=z_func_fx)
"""
Explanation: Note that the chosen geotransform has no axis rotation, as is the case for most geographic grids.
vector field x-component
End of explanation
"""
fy2 = array_from_function(
row_num=rows,
col_num=cols,
geotransform=gt1,
z_transfer_func=z_func_fy)
"""
Explanation: vector field y-component
End of explanation
"""
curl_mod = curl_module(
fld_x=fx2,
fld_y=fy2,
cell_size_x=size_x,
cell_size_y=size_y)
"""
Explanation: theoretical curl module
The theoretical curl module is a constant value:
\begin{equation}
curl = -2
\end{equation}
pygsf-estimated module of curl
The module of curl as resulting from pygsf calculation is:
End of explanation
"""
assert np.allclose(-2.0, curl_mod)
"""
Explanation: We check whether the theoretical and the estimated curl module fields are close:
End of explanation
"""
|
MarsUniversity/ece387
|
website/block_1_basics/lsn3/lsn3.ipynb
|
mit
|
from __future__ import print_function
from __future__ import division
import numpy as np
"""
Explanation: Python
Kevin J. Walchko
created 16 Nov 2017
Here we will use python as our programming language. Python, like any other language, is really vast and complex. We will just cover the basics we need.
Objectives
Understand
general syntax
for/while loops
if/elif/else
functions
data types: tuples, list, strings, etc
intro to classes
References
Python tutorialspoint
Python classes/objects
Setup
End of explanation
"""
from __future__ import division # fix division
from __future__ import print_function # use print function
print('hello world') # single quotes
print("hello world") # double quotes
print('3/4 is', 3/4) # this prints 0.75
print('I am {} ... for {} yrs I have been training Jedhi'.format("Yoda", 853))
print('float: {:5.1f}'.format(3.1424567)) # prints float: 3.1
"""
Explanation: Python
Python is a widely used high-level programming language for general-purpose programming, created by Guido van Rossum and first released in 1991. An interpreted language, Python has a design philosophy which emphasizes code readability (notably using whitespace indentation to delimit code blocks rather than curly brackets or keywords), and a syntax which allows programmers to express concepts in fewer lines of code than might be used in languages such as C++ or Java. The language provides constructs intended to enable writing clear programs on both a small and large scale.
<img src="rossum.png" width="300px">
Python’s Benevolent Dictator For Life!
“Python is an experiment in how much freedom programmers need. Too much freedom and nobody can read another's code; too little and expressiveness is endangered.”
- Guido van Rossum
Why Use It?
Simple and easy to use and very efficient
What you can do in 100 lines of python could take you 1000 in C++ … this is the reason many startups (e.g., Instagram) use python and keep using it
90% of robotics uses either C++ or python
Although C++ is faster in run-time, development (write, compile, link, etc) is much slower due to complex syntax, memory management, pointers (they can be fun!) and difficulty in debugging any sort of real program
Java is dying (or dead)
Microsoft is still struggling to get people outside of the Windows OS to embrace C#
Apple's swift is too new and constantly making major changes ... maybe some day
Who Uses It?
Industrial Light & Magic (Stars Wars people): used in post production scripting to tie together outputs from other C++ programs
Eve-Online (big MMORGP game): used for both client and server aspects of the game
Instagram, Spotify, SurveyMonkey, The Onion, Bitbucket, Pinterest, and more use Django (python website template framework) to create/serve millions of users
Dropbox, Paypal, Walmart and Google (YouTube)
Note: Guido van Rossum worked for Google and now works for Dropbox
Running Programs on UNIX (or your robot)
Call python program via the python interpreter: python my_program.py
This is kind of the stupid way
Make a python file directly executable
Add a shebang (it’s a Unix thing) to the top of your program: #!/usr/bin/env python
Make the file executable: chmod a+x my_program.py
Invoke file from Unix command line: ./my_program.py (a minimal example script is sketched after this cell)
Enough to Understand Code (Short Version)
Indentation matters for functions, loops, classes, etc
First assignment to a variable creates it
Variable types (int, float, etc) don’t need to be declared.
Assignment is = and comparison is ==
For numbers + - * % are as expected
modulus (%) returns the remainder: 5%3 => 2
Logical operators are words (and, or, not) not symbols
We are using __future__ for python 2 / 3 compatibility
The basic printing command is print(‘hello’)
Division works as expected:
Float division: 5/2 = 2.5
Integer division: 5//2 = 2
Start comments with #, rest of line is ignored
Can include a “documentation string” as the first line of a new function or class you define
```python
def my_function(n):
"""
my_function(n) takes a positive integer and returns n + 5
"""
# assert ... remember this from ECE281?
assert n>0, "crap, n is 0 or negative!"
return n+5
```
Printing
Again, to have Python 3 compatibility and help you in the future, we are going to print things using the print function. Python 2 by default uses a print statement. Also, it is good form to use the newer format() function on strings rather than the old C style %s for a string or %d for an integer. There are lots of cool things you can do with format() but we won't dive too far into it ... just the basics.
WARNING: Your homework with Code Academy uses the old way to print, just do it for that and get through it. For this class we are doing it this way!
End of explanation
"""
print(u'\u21e6 \u21e7 \u21e8 \u21e9')
print(u'\u2620')
# this is a dictionary, we will talk about it next ... sorry for the out of order
uni = {
'left': u'\u21e6',
'up': u'\u21e7',
'right': u'\u21e8',
'down': u'\u21e9',
}
print(u'\nYou must go {}'.format(uni['up'])) # notice all strings have u on the front
"""
Explanation: Unicode
Unicode sucks in python 2.7, but if you want to use it:
alphabets
arrows
emoji
End of explanation
"""
# bool
z = True # or False
# integers (default)
z = 3
# floats
z = 3.124
z = 5/2
print('z =', z)
# dictionary or hash tables
bob = {'a': 5, 'b': 6}
print('bob["a"]:', bob['a'])
# you can assign a new key/values pair
bob['c'] = 'this is a string!!'
print(bob)
print('len(bob) =', len(bob))
# you can also access what keys are in a dict
print('bob.keys() =', bob.keys())
# let's get crazy and do different types and have a key that is an int
bob = {'a': True, 11: [1,2,3]}
print('bob = ', bob)
print('bob[11] = ', bob[11]) # don't do this, it is confusing!!
# arrays or lists are mutable (changable)
# the first element is 0 like all good programming languages
bob = [1,2,3,4,5]
bob[2] = 'tom'
print('bob list', bob)
print('bob list[3]:', bob[3]) # remember it is zero indexed
# or ... tuple will do this too
bob = [1]*5
print('bob one-liner version 2:', bob)
print('len(bob) =', len(bob))
# strings
z = 'hello world!!'
z = 'hello' + ' world' # concatinate
z = 'hhhello world!@#$'[2:13] # strings are just an array of letters
print('my crazy string:', z)
print('{}: {} {:.2f}'.format('formatting', 3.1234, 6.6666))
print('len(z) =', len(z))
# tuples are immutable (not changable which makes them faster/smaller)
bob = (1,2,3,4)
print('bob tuple', bob)
print('bob tuple*3', bob*3) # repeats tuple 3x
print('len(bob) =', len(bob))
# since tuples are immutable, this will throw an error
bob[1] = 'tom'
# assign multiple variables at once
bob = (4,5,)
x,y = bob
print(x,y)
# wait, I changed by mind ... easy to swap
x,y = y,x
print(x,y)
"""
Explanation: Data Types
Python isn't typed, so you don't really need to keep track of variables and delare them as ints, floats, doubles, unsigned, etc. There are a few places where this isn't true, but we will deal with those as we encounter them.
End of explanation
"""
# range(start, stop, step) # this only works for integer values
range(3,10) # jupyter cell will always print the last thing
# iterates from start (default 0) to less than the highest number
for i in range(5):
print(i)
# you can also create simple arrays like this:
bob = [2*x+3 for x in range(4)]
print('bob one-liner:', bob)
for i in range(2,8,2): # start=2, stop<8, step=2, so notice the last value is 6 NOT 8
print(i)
# I have a list of things ... maybe images or something else.
# A for loop can iterate through the list. Here, each time
# through, i is set to the next letter in my array 'dfec'
things = ['d', 'e', 'f', 'c']
for ltr in things:
print(ltr)
# enumerate()
# sometimes you need a counter in your for loop, use enumerate
things = ['d', 'e', 'f', 3.14] # LOOK! the last element is a float not a letter ... that's OK
for i, ltr in enumerate(things):
print('things[{}]: {}'.format(i, ltr))
# zip()
# somethimes you have a couple arrays that you want to work on at the same time, use zip
# to combine them together
# NOTE: all arrays have to have the SAME LENGTH
a = ['bob', 'tom', 'sally']
b = ['good', 'evil', 'nice']
c = [10, 20, 15]
for name, age, status in zip(a, c, b): # notice I mixed up a, b, c
status = status.upper()
name = name[0].upper() + name[1:] # strings are immutable
print('{} is {} yrs old and totally {}'.format(name, age, status))
"""
Explanation: Flow Control
Logic Operators
Flow control is generally driven by comparison operators and the boolean operators and, or, not (a short demo follows this cell).
For Loop
End of explanation
"""
# classic if/then statements work the same as other languages.
# if the statement is True, then do something, if it is False, then skip over it.
if False:
print('should not get here')
elif True:
print('this should print')
else:
print('this is the default if all else fails')
n = 5
n = 3 if n==1 else n-1 # one line if/then statement
print(n)
"""
Explanation: if / elif / else
End of explanation
"""
x = 3
while True: # while loop runs while value is True
if not x: # I will enter this if statement when x = False or 0
break # breaks me out of a loop
else:
print(x)
x -= 1
"""
Explanation: While
End of explanation
"""
# exception handling ... use in your code in smart places
try:
    a = (1,2,) # tuple ... notice the extra comma after the 2
a[0] = 1 # this won't work!
except: # this catches any exception thrown
print('you idiot ... you cannot modify a tuple!!')
# error
5/0
try:
5/0
except ZeroDivisionError as e:
print(e)
    # raise # this raises the error to the next
# level so i don't have to handle it here
try:
5/0
except ZeroDivisionError as e:
print(e)
    raise # this raises the error to the next (in this case, the Jupyter GUI handles it)
# level so i don't have to handle it here
"""
Explanation: Exception Handling
When you write code you should think about how you could break it, then design it so you can't. Now, you don't necessary need to write bullet proof code ... that takes a lot of time (and time is money), but you should make an effort to reduce your debug time.
A list of Python 2.7 exceptions is here. KeyboardInterrupt is a common one, raised when a user presses Ctrl-C to quit the program. Some others:
BaseException
+-- SystemExit
+-- KeyboardInterrupt
+-- GeneratorExit
+-- Exception
+-- StopIteration
+-- StandardError
| +-- BufferError
| +-- ArithmeticError
| | +-- FloatingPointError
| | +-- OverflowError
| | +-- ZeroDivisionError
| +-- AssertionError
| +-- AttributeError
| +-- EnvironmentError
| | +-- IOError
| | +-- OSError
| | +-- WindowsError (Windows)
| | +-- VMSError (VMS)
| +-- EOFError
| +-- ImportError
| +-- LookupError
| | +-- IndexError
| | +-- KeyError
| +-- MemoryError
| +-- NameError
| | +-- UnboundLocalError
| +-- ReferenceError
| +-- RuntimeError
| | +-- NotImplementedError
| +-- SyntaxError
| | +-- IndentationError
| | +-- TabError
| +-- SystemError
| +-- TypeError
| +-- ValueError
| +-- UnicodeError
| +-- UnicodeDecodeError
| +-- UnicodeEncodeError
| +-- UnicodeTranslateError
+-- Warning
+-- DeprecationWarning
+-- PendingDeprecationWarning
+-- RuntimeWarning
+-- SyntaxWarning
+-- UserWarning
+-- FutureWarning
+-- ImportWarning
+-- UnicodeWarning
+-- BytesWarning
End of explanation
"""
# Honestly, I generally just use Exception, from which most other exceptions
# are derived, but I am lazy and it works fine for what I do
try:
5/0
except Exception as e:
print(e)
# all is right with the world ... these will work, nothing will print
assert True
assert 3 > 1
# this will fail ... and we can add a message if we want to
assert 3 < 1, 'hello ... this should fail'
"""
Explanation: When would you want to use raise?
Why not always handle the error here?
What is different when the raise command is used?
End of explanation
"""
import math
print('messy', math.cos(math.pi/4))
# that looks clumsy ... let's do this instead
from math import cos, pi
print('simpler math:', cos(pi/4))
# or we just want to shorten the name to reduce typing ... good programmers are lazy!
import numpy as np
# well what is in the math library I might want to use????
dir(math)
# what is tanh???
help(math.tanh)
print(math.__doc__) # print the doc string for the library ... what does it do?
"""
Explanation: Libraries
We will need to import math to have access to trig and other functions. There will be other libraries like numpy, cv2, etc you will need to.
End of explanation
"""
def my_cool_function(x):
"""
This is my cool function which takes an argument x
and returns a value
"""
return 2*x/3
my_cool_function(6) # 2*6/3 = 4
"""
Explanation: Functions
There isn't too much that is special about python functions, just the format.
End of explanation
"""
class ClassName(object):
"""
So this is my cool class
"""
def __init__(self, x):
"""
This is called a constructor in OOP. When I make an object
this function is called.
self = contains all of the objects values
x = an argument to pass something into the constructor
"""
self.x = x
print('> Constructor called', x)
def my_cool_function(self, y):
"""
This is called a method (function) that works on
the class. It always needs self to access class
values, but can also have as many arguments as you want.
I only have 1 arg called y"""
self.x = y
print('> called function: {}'.format(self.x))
def __del__(self):
"""
Destructor. This is called when the object goes out of scope
        and is destroyed. It takes NO arguments other than self.
        Note, this is hard to call in jupyter, because it will probably
        get called when the program (notebook) ends (shuts down)
"""
pass
a = ClassName('bob')
a.my_cool_function(3.14)
b = ClassName(28)
b.my_cool_function('tom')
for i in range(3):
a = ClassName('bob')
"""
Explanation: Classes and Object Oriented Programming (OOP)
Ok, we don't have time to really teach you how to do this. It would be better if your real programming classes did this. So we are just going to kludge this together here, because these could be useful in this class. In fact I generally (and 99% of the world) do OOP.
Classes are awesome for a few reasons. First, they help you reuse code instead of duplicating it in other places all over your program. Classes will save your life when you realize you want to change a function and you only have to change it in one spot instead of 10 different spots with slightly different code. You can also group a bunch of related functions together because they make sense. Another important part of classes is that they allow you to create more flexible functions.
We are going to keep it simple and basically show you how to do OOP in python very simply. This will be a little familiar from ECE382 with structs (sort of)
End of explanation
"""
class Ball(object):
def __init__(self, color, radius):
# this ball always has this color and raduis below
self.radius = radius
self.color = color
def __str__(self):
"""
When something tries to turn this object into a string,
this function gets called
"""
s = 'Ball {}, radius: {:.1f}'.format(self.color, self.radius)
return s
def __add__(self, a):
c = Ball('gray', a.radius + self.radius)
return c
r = Ball('red', 3)
g = Ball('green', radius=4)
b = Ball(radius=5, color='blue')
print(r)
print(g)
print(b)
print('total size:', r.radius+b.radius+g.radius)
print('Add method:', r+b+g)
# the base class of all objects in Python should be
# object. It comes with these methods already defined.
dir(object)
"""
Explanation: There are tons of things you can do with objects. Here is one example. Say we have a ball class and for some reason we want to be able to add balls together.
End of explanation
"""
|
jamessdixon/Kaggle.HomeDepot
|
ProjectSearchRelevance.Python/Home Depot Product Search Relevance TF-IDF.ipynb
|
mit
|
import graphlab as gl
"""
Explanation: Home Depot Product Search Relevance
The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters.
GraphLab Create
This notebook uses the GraphLab Create machine learning Python module. You need a personal licence to run this code.
End of explanation
"""
train = gl.SFrame.read_csv("../data/train.csv")
test = gl.SFrame.read_csv("../data/test.csv")
desc = gl.SFrame.read_csv("../data/product_descriptions.csv")
"""
Explanation: Load data from CSV files
End of explanation
"""
# merge train with description
train = train.join(desc, on = 'product_uid', how = 'left')
# merge test with description
test = test.join(desc, on = 'product_uid', how = 'left')
"""
Explanation: Data merging
End of explanation
"""
first_doc = train[0]
first_doc
"""
Explanation: Let's explore some data
Let's examine 3 different queries and products:
* first from the training set
somewhere in the middle of the training set
* the last one from the training set
End of explanation
"""
middle_doc = train[37033]
middle_doc
"""
Explanation: The search term 'angle bracket' is not contained in the description: 'angle' would match after stemming, but 'bracket' is not present.
End of explanation
"""
last_doc = train[-1]
last_doc
"""
Explanation: Only 'wood' from the search term is present.
End of explanation
"""
train['search_term_word_count'] = gl.text_analytics.count_words(train['search_term'])
ranked3doc = train[train['relevance'] == 3]
print ranked3doc.head()
len(ranked3doc)
words_search = gl.text_analytics.tokenize(ranked3doc['search_term'], to_lower = True)
words_description = gl.text_analytics.tokenize(ranked3doc['product_description'], to_lower = True)
words_title = gl.text_analytics.tokenize(ranked3doc['product_title'], to_lower = True)
wordsdiff_desc = []
wordsdiff_title = []
puid = []
search_term = []
ws_count = []
ws_count_used_desc = []
ws_count_used_title = []
for item in xrange(len(ranked3doc)):
ws = words_search[item]
pd = words_description[item]
pt = words_title[item]
diff = set(ws) - set(pd)
if diff is None:
diff = 0
wordsdiff_desc.append(diff)
diff2 = set(ws) - set(pt)
if diff2 is None:
diff2 = 0
wordsdiff_title.append(diff2)
puid.append(ranked3doc[item]['product_uid'])
search_term.append(ranked3doc[item]['search_term'])
ws_count.append(len(ws))
ws_count_used_desc.append(len(ws) - len(diff))
ws_count_used_title.append(len(ws) - len(diff2))
differences = gl.SFrame({"puid" : puid,
"search term": search_term,
"diff desc" : wordsdiff_desc,
"diff title" : wordsdiff_title,
"ws count" : ws_count,
"ws count used desc" : ws_count_used_desc,
"ws count used title" : ws_count_used_title})
differences.sort(['ws count used desc', 'ws count used title'])
print "No terms used in description : " + str(len(differences[differences['ws count used desc'] == 0]))
print "No terms used in title : " + str(len(differences[differences['ws count used title'] == 0]))
print "No terms used in description and title : " + str(len(differences[(differences['ws count used desc'] == 0) &
(differences['ws count used title'] == 0)]))
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: 'sheer' and 'courtain' are present, and that's all.
How many search terms are not present in description and title for documents ranked 3
Documents ranked 3 are the most relevant matches, but how many of their search queries have terms that appear in neither the description nor the title?
End of explanation
"""
train_search_tfidf = gl.text_analytics.tf_idf(train['search_term_word_count'])
train['search_tfidf'] = train_search_tfidf
train['product_desc_word_count'] = gl.text_analytics.count_words(train['product_description'])
train_desc_tfidf = gl.text_analytics.tf_idf(train['product_desc_word_count'])
train['desc_tfidf'] = train_desc_tfidf
train['product_title_word_count'] = gl.text_analytics.count_words(train['product_title'])
train_title_tfidf = gl.text_analytics.tf_idf(train['product_title_word_count'])
train['title_tfidf'] = train_title_tfidf
train['distance'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['desc_tfidf']))
train['distance2'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['title_tfidf']))
model1 = gl.linear_regression.create(train, target = 'relevance', features = ['distance', 'distance2'], validation_set = None)
#let's take a look at the weights before we plot
model1.get("coefficients")
test['search_term_word_count'] = gl.text_analytics.count_words(test['search_term'])
test_search_tfidf = gl.text_analytics.tf_idf(test['search_term_word_count'])
test['search_tfidf'] = test_search_tfidf
test['product_desc_word_count'] = gl.text_analytics.count_words(test['product_description'])
test_desc_tfidf = gl.text_analytics.tf_idf(test['product_desc_word_count'])
test['desc_tfidf'] = test_desc_tfidf
test['product_title_word_count'] = gl.text_analytics.count_words(test['product_title'])
test_title_tfidf = gl.text_analytics.tf_idf(test['product_title_word_count'])
test['title_tfidf'] = test_title_tfidf
test['distance'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['desc_tfidf']))
test['distance2'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['title_tfidf']))
'''
predictions_test = model1.predict(test)
test_errors = predictions_test - test['relevance']
RSS_test = sum(test_errors * test_errors)
print RSS_test
'''
output = model1.predict(test)  # predicted relevance used to build the submission below
submission = gl.SFrame(test['id'])
submission.add_column(output)
submission.rename({'X1': 'id', 'X2':'relevance'})
submission['relevance'] = submission.apply(lambda x: 3.0 if x['relevance'] > 3.0 else x['relevance'])
submission['relevance'] = submission.apply(lambda x: 1.0 if x['relevance'] < 1.0 else x['relevance'])
submission['relevance'] = submission.apply(lambda x: str(x['relevance']))
submission.export_csv('../data/submission.csv', quote_level = 3)
#gl.canvas.set_target('ipynb')
"""
Explanation: TF-IDF with linear regression
End of explanation
"""
|
steinam/teacher
|
jup_notebooks/data-science-ipython-notebooks-master/scikit-learn/scikit-learn-gmm.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
"""
Explanation: Density Estimation: Gaussian Mixture Models
Credits: Forked from PyCon 2015 Scikit-learn Tutorial by Jake VanderPlas
Here we'll explore Gaussian Mixture Models, which is an unsupervised clustering & density estimation technique.
We'll start with our standard set of initial imports
End of explanation
"""
np.random.seed(2)
x = np.concatenate([np.random.normal(0, 2, 2000),
np.random.normal(5, 5, 2000),
np.random.normal(3, 0.5, 600)])
plt.hist(x, 80, normed=True)
plt.xlim(-10, 20);
"""
Explanation: Introducing Gaussian Mixture Models
We previously saw an example of K-Means, which is a clustering algorithm which is most often fit using an expectation-maximization approach.
Here we'll consider an extension to this which is suitable for both clustering and density estimation.
For example, imagine we have some one-dimensional data in a particular distribution:
End of explanation
"""
from sklearn.mixture import GMM
clf = GMM(4, n_iter=500, random_state=3).fit(x)
xpdf = np.linspace(-10, 20, 1000)
density = np.exp(clf.score(xpdf))
plt.hist(x, 80, normed=True, alpha=0.5)
plt.plot(xpdf, density, '-r')
plt.xlim(-10, 20);
"""
Explanation: Gaussian mixture models will allow us to approximate this density:
End of explanation
"""
clf.means_
clf.covars_
clf.weights_
plt.hist(x, 80, normed=True, alpha=0.3)
plt.plot(xpdf, density, '-r')
for i in range(clf.n_components):
pdf = clf.weights_[i] * stats.norm(clf.means_[i, 0],
np.sqrt(clf.covars_[i, 0])).pdf(xpdf)
plt.fill(xpdf, pdf, facecolor='gray',
edgecolor='none', alpha=0.3)
plt.xlim(-10, 20);
"""
Explanation: Note that this density is fit using a mixture of Gaussians, which we can examine by looking at the means_, covars_, and weights_ attributes:
End of explanation
"""
print(clf.bic(x))
print(clf.aic(x))
"""
Explanation: These individual Gaussian distributions are fit using an expectation-maximization method, much as in K means, except that rather than explicit cluster assignment, the posterior probability is used to compute the weighted mean and covariance.
Somewhat surprisingly, this algorithm provably converges to the optimum (though the optimum is not necessarily global).
How many Gaussians?
Given a model, we can use one of several means to evaluate how well it fits the data.
For example, there is the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)
End of explanation
"""
n_estimators = np.arange(1, 10)
clfs = [GMM(n, n_iter=1000).fit(x) for n in n_estimators]
bics = [clf.bic(x) for clf in clfs]
aics = [clf.aic(x) for clf in clfs]
plt.plot(n_estimators, bics, label='BIC')
plt.plot(n_estimators, aics, label='AIC')
plt.legend();
"""
Explanation: Let's take a look at these as a function of the number of gaussians:
End of explanation
"""
np.random.seed(0)
# Add 20 outliers
true_outliers = np.sort(np.random.randint(0, len(x), 20))
y = x.copy()
y[true_outliers] += 50 * np.random.randn(20)
clf = GMM(4, n_iter=500, random_state=0).fit(y)
xpdf = np.linspace(-10, 20, 1000)
density_noise = np.exp(clf.score(xpdf))
plt.hist(y, 80, normed=True, alpha=0.5)
plt.plot(xpdf, density_noise, '-r')
#plt.xlim(-10, 20);
"""
Explanation: It appears that for both the AIC and BIC, 4 components is preferred.
Example: GMM For Outlier Detection
GMM is what's known as a Generative Model: it's a probabilistic model from which a dataset can be generated.
One thing that generative models can be useful for is outlier detection: we can simply evaluate the likelihood of each point under the generative model; the points with a suitably low likelihood (where "suitable" is up to your own bias/variance preference) can be labeled outliers.
Let's take a look at this by defining a new dataset with some outliers:
End of explanation
"""
log_likelihood = clf.score_samples(y)[0]
plt.plot(y, log_likelihood, '.k');
detected_outliers = np.where(log_likelihood < -9)[0]
print("true outliers:")
print(true_outliers)
print("\ndetected outliers:")
print(detected_outliers)
"""
Explanation: Now let's evaluate the log-likelihood of each point under the model, and plot these as a function of y:
End of explanation
"""
set(true_outliers) - set(detected_outliers)
"""
Explanation: The algorithm misses a few of these points, which is to be expected (some of the "outliers" actually land in the middle of the distribution!)
Here are the outliers that were missed:
End of explanation
"""
set(detected_outliers) - set(true_outliers)
"""
Explanation: And here are the non-outliers which were spuriously labeled outliers:
End of explanation
"""
from sklearn.neighbors import KernelDensity
kde = KernelDensity(0.15).fit(x[:, None])
density_kde = np.exp(kde.score_samples(xpdf[:, None]))
plt.hist(x, 80, normed=True, alpha=0.5)
plt.plot(xpdf, density, '-b', label='GMM')
plt.plot(xpdf, density_kde, '-r', label='KDE')
plt.xlim(-10, 20)
plt.legend();
"""
Explanation: Finally, we should note that although all of the above is done in one dimension, GMM does generalize to multiple dimensions, as we'll see in the breakout session.
Other Density Estimators
The other main density estimator that you might find useful is Kernel Density Estimation, which is available via sklearn.neighbors.KernelDensity. In some ways, this can be thought of as a generalization of GMM where there is a gaussian placed at the location of every training point!
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.13/_downloads/plot_evoked_whitening.ipynb
|
bsd-3-clause
|
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Denis A. Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import sample
from mne.cov import compute_covariance
print(__doc__)
"""
Explanation: Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise from which we expect values around
0 with less than 2 standard deviations. Covariance estimation and diagnostic
plots are based on [1].
References
[1] Engemann D. and Gramfort A. (2015) Automated model selection in covariance
estimation and spatial whitening of MEG and EEG signals, vol. 108,
328-342, NeuroImage.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 40, method='iir', n_jobs=1)
raw.info['bads'] += ['MEG 2443'] # bads + 1 more
events = mne.read_events(event_fname)
# let's look at rare events, button presses
event_id, tmin, tmax = 2, -0.2, 0.5
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, exclude='bads')
reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
# Uncomment next line to use fewer samples and study regularization effects
# epochs = epochs[:20] # For your data, use as many samples as you can!
"""
Explanation: Set parameters
End of explanation
"""
noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',
return_estimators=True, verbose=True, n_jobs=1,
projs=None)
# With "return_estimator=True" all estimated covariances sorted
# by log-likelihood are returned.
print('Covariance estimates sorted from best to worst')
for c in noise_covs:
print("%s : %s" % (c['method'], c['loglik']))
"""
Explanation: Compute covariance using automated regularization
End of explanation
"""
evoked = epochs.average()
evoked.plot() # plot evoked response
# plot the whitened evoked data to see if baseline signals match the
# assumption of Gaussian white noise from which we expect values around
# 0 with less than 2 standard deviations. For the Global field power we expect
# a value of 1.
evoked.plot_white(noise_covs)
"""
Explanation: Show whitening
End of explanation
"""
|
relopezbriega/mi-python-blog
|
content/notebooks/LinearAlgebraPython.ipynb
|
gpl-2.0
|
# Vector como lista de Python
v1 = [2, 4, 6]
v1
# Vectores con numpy
import numpy as np
v2 = np.ones(3) # vector de solo unos.
v2
v3 = np.array([1, 3, 5]) # pasando una lista a las arrays de numpy
v3
v4 = np.arange(1, 8) # utilizando la funcion arange de numpy
v4
"""
Explanation: Linear Algebra with Python
This notebook was originally created as a blog post by Raúl E. López Briega on Mi blog sobre Python. The content is under the BSD license.
<img alt="Algebra lineal" title="Algebra lineal" src="https://relopezbriega.github.io/images/lin-alg.jpg">
Introduction
One of the mathematical tools most heavily used in machine learning and data mining is linear algebra; therefore, if we want to venture into the fascinating world of machine learning and data analysis, it is important to strengthen the concepts that form its foundations.
Linear algebra is a branch of mathematics that is widely used in a great variety of sciences, such as engineering, finance and operations research, among others. It is an extension of the algebra we learn in secondary school to a larger number of dimensions; instead of working with unknowns at the level of <a href="https://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>, we start working with <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> and vectors.
The study of linear algebra involves working with several mathematical objects, namely:
Scalars: A scalar is a single number, in contrast with most of the other objects studied in linear algebra, which are generally collections of multiple numbers.
Vectors: A vector is a series of numbers. The numbers have a preset order, and we can identify each individual number by its index in that order. We can think of vectors as identifying points in space, with each element giving the coordinate along a different axis. There are two kinds of vectors, row vectors and column vectors. We can represent them as follows, where f is a row vector and c is a column vector:
$$f=\begin{bmatrix}0&1&-1\end{bmatrix} ; c=\begin{bmatrix}0\\1\\-1\end{bmatrix}$$
Matrices: A matrix is a two-dimensional array of numbers (called the entries of the matrix) arranged in rows and columns, where a row is each of the horizontal lines of the matrix and a column is each of the vertical lines. In a matrix every element can be identified by two indices, one for the row and one for the column in which it sits. We can represent them as follows, where A is a 3x2 matrix.
$$A=\begin{bmatrix}0 & 1 \\ -1 & 2 \\ -2 & 3\end{bmatrix}$$
Tensors: In some cases we will need an array with more than two axes. In general, an array of numbers arranged on a regular grid with a variable number of axes is known as a tensor (a short NumPy illustration of these objects follows this cell).
On these objects we can perform the basic mathematical operations, such as addition, multiplication, subtraction and <a href="https://es.wikipedia.org/wiki/Divisi%C3%B3n_(matem%C3%A1tica)">division</a>; that is, we will be able to add vectors to matrices, multiply scalars by vectors, and so on.
Python libraries for linear algebra
The main modules that Python offers for linear algebra operations are the following:
Numpy: Python's popular mathematical package; it lets us create vectors, matrices and tensors with great ease.
numpy.linalg: This is a submodule within Numpy with a large number of functions for solving linear algebra equations.
scipy.linalg: This submodule of the scientific package Scipy is very similar to the previous one, but with a few more functions and optimizations.
Sympy: This library lets us work with symbolic mathematics; it turns Python into a computer algebra system. It allows us to work with equations and formulas symbolically, instead of numerically.
CVXOPT: This module lets us solve linear programming optimization problems.
PuLP: This library lets us build linear programming models very easily with Python.
Basic operations
Vectors
A vector of length n is a sequence (or array, or tuple) of n numbers. We usually write it as $x=(x1,...,xn)$ or $x=[x1,...,xn]$
In Python, a vector can be represented with a plain list, or with a Numpy array; the latter option is preferable.
End of explanation
"""
import matplotlib.pyplot as plt
from warnings import filterwarnings
%matplotlib inline
filterwarnings('ignore') # Ignorar warnings
def move_spines():
"""Crea la figura de pyplot y los ejes. Mueve las lineas de la izquierda y de abajo
para que se intersecten con el origen. Elimina las lineas de la derecha y la de arriba.
Devuelve los ejes."""
fix, ax = plt.subplots()
for spine in ["left", "bottom"]:
ax.spines[spine].set_position("zero")
for spine in ["right", "top"]:
ax.spines[spine].set_color("none")
return ax
def vect_fig():
"""Genera el grafico de los vectores en el plano"""
ax = move_spines()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.grid()
vecs = [[2, 4], [-3, 3], [-4, -3.5]] # lista de vectores
for v in vecs:
ax.annotate(" ", xy=v, xytext=[0, 0],
arrowprops=dict(facecolor="blue",
shrink=0,
alpha=0.7,
width=0.5))
ax.text(1.1 * v[0], 1.1 * v[1], v)
vect_fig() # crea el gráfico
"""
Explanation: Graphical representation
Traditionally, vectors are represented visually as arrows that start at the origin and point towards a given point.
For example, if we wanted to plot the vectors $v1=[2, 4]$, $v2=[-3, 3]$ and $v3=[-4, -3.5]$, we could do it as follows.
End of explanation
"""
# Ejemplo en Python
x = np.arange(1, 5)
y = np.array([2, 4, 6, 8])
x, y
# sumando dos vectores numpy
x + y
# restando dos vectores
x - y
# multiplicando por un escalar
x * 2
y * 3
"""
Explanation: Operations with vectors
The most common operations we use when working with vectors are addition, subtraction and multiplication by <a href="https://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>.
When we add two vectors, we add them element by
element.
$$ \begin{split}x + y = \left[
\begin{array}{c}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{array}
\right] + \left[
\begin{array}{c}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{array}
\right] := \left[
\begin{array}{c}
x_1 + y_1 \\
x_2 + y_2 \\
\vdots \\
x_n + y_n
\end{array}
\right]\end{split}$$
Subtraction works in a similar way.
$$ \begin{split}x - y = \left[
\begin{array}{c}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{array}
\right] - \left[
\begin{array}{c}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{array}
\right] := \left[
\begin{array}{c}
x_1 - y_1 \\
x_2 - y_2 \\
\vdots \\
x_n - y_n
\end{array}
\right]\end{split}$$
Multiplication by scalars is an operation that takes a number $\gamma$ and a vector $x$ and produces a new vector in which every element of $x$ is multiplied by $\gamma$.
$$\begin{split}\gamma x := \left[
\begin{array}{c}
\gamma x_1 \\
\gamma x_2 \\
\vdots \\
\gamma x_n
\end{array}
\right]\end{split}$$
In Python we can carry out these operations very easily:
End of explanation
"""
# Calculando el producto escalar de los vectores x e y
x @ y
# o lo que es lo mismo, que:
sum(x * y), np.dot(x, y)
# Calculando la norma del vector X
np.linalg.norm(x)
# otra forma de calcular la norma de x
np.sqrt(x @ x)
# vectores ortogonales
v1 = np.array([3, 4])
v2 = np.array([4, -3])
v1 @ v2
"""
Explanation: Scalar (inner) product
The scalar product of two vectors is defined as the sum of the products of their elements; it is usually written mathematically as < x, y > or x'y, where x and y are two vectors.
$$< x, y > := \sum_{i=1}^n x_i y_i$$
Two vectors are <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a> or perpendicular when they form a right angle with each other. If the scalar product of two vectors is zero, the two vectors are orthogonal.
Additionally, every scalar product induces a norm on the space on which it is defined, as follows:
$$\| x \| := \sqrt{< x, x>} := \left( \sum_{i=1}^n x_i^2 \right)^{1/2}$$
In Python we can compute it as follows:
End of explanation
"""
# Ejemplo en Python
A = np.array([[1, 3, 2],
[1, 0, 0],
[1, 2, 2]])
B = np.array([[1, 0, 5],
[7, 5, 0],
[2, 1, 1]])
# suma de las matrices A y B
A + B
# resta de matrices
A - B
# multiplicando matrices por escalares
A * 2
B * 3
# ver la dimension de una matriz
A.shape
# ver cantidad de elementos de una matriz
A.size
"""
Explanation: Matrices
<a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">Matrices</a> are a clear and simple way of organizing data for use in linear operations.
An n × k matrix is a rectangular arrangement of numbers with n rows and k columns; it is written as follows:
$$\begin{split}A = \left[
\begin{array}{cccc}
a_{11} & a_{12} & \cdots & a_{1k} \\
a_{21} & a_{22} & \cdots & a_{2k} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nk}
\end{array}
\right]\end{split}$$
In the matrix A, the symbol $a_{nk}$ denotes the element in the n-th row and the k-th column. The matrix A can also be called a vector if either n or k equals 1. In the case n=1, A is called a row vector, while in the case k=1 it is called a column vector.
Matrices are used in many applications and serve, in particular, to represent the coefficients of systems of linear equations or to represent linear transformations given a basis. They can be added, multiplied and decomposed in several ways.
Operations with matrices
Just like vectors, which are nothing more than a particular case, matrices can be added, subtracted and multiplied by scalars.
Multiplication by scalars:
$$\begin{split}\gamma A
\left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right] := \left[
\begin{array}{ccc}
\gamma a_{11} & \cdots & \gamma a_{1k} \\
\vdots & \vdots & \vdots \\
\gamma a_{n1} & \cdots & \gamma a_{nk} \\
\end{array}
\right]\end{split}$$
Matrix addition: $$\begin{split}A + B = \left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right]
+
\left[
\begin{array}{ccc}
b_{11} & \cdots & b_{1k} \\
\vdots & \vdots & \vdots \\
b_{n1} & \cdots & b_{nk} \\
\end{array}
\right]
:=
\left[
\begin{array}{ccc}
a_{11} + b_{11} & \cdots & a_{1k} + b_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} + b_{n1} & \cdots & a_{nk} + b_{nk} \\
\end{array}
\right]\end{split}$$
Matrix subtraction: $$\begin{split}A - B = \left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right]-
\left[
\begin{array}{ccc}
b_{11} & \cdots & b_{1k} \\
\vdots & \vdots & \vdots \\
b_{n1} & \cdots & b_{nk} \\
\end{array}
\right] := \left[
\begin{array}{ccc}
a_{11} - b_{11} & \cdots & a_{1k} - b_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} - b_{n1} & \cdots & a_{nk} - b_{nk} \\
\end{array}
\right]\end{split}$$
For addition and subtraction, keep in mind that we can only add or subtract matrices that have the same dimensions; that is, if I have a 3x2 matrix (3 rows and 2 columns) I can only add or subtract a matrix B if it also has 3 rows and 2 columns.
End of explanation
"""
# Ejemplo multiplicación de matrices
A = np.arange(1, 13).reshape(3, 4) #matriz de dimension 3x4
A
B = np.arange(8).reshape(4,2) #matriz de dimension 4x2
B
# Multiplicando A x B
A @ B #resulta en una matriz de dimension 3x2
# Multiplicando B x A
B @ A
"""
Explanation: Matrix multiplication (the matrix product)
The rule for multiplying matrices generalizes the idea of the inner product we saw with vectors, and is designed to make the basic linear operations easy to express.
When we multiply matrices, the number of columns of the first <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> must equal the number of rows of the second matrix, and the result of this multiplication has the same number of rows as the first matrix and the same number of columns as the second matrix. That is, if I have a matrix A of dimension 3x4 and I multiply it by a matrix B of dimension 4x2, the result is a matrix C of dimension 3x2.
Something to keep in mind when multiplying matrices is that the commutative property does not hold: AxB is not the same as BxA.
Let's see the examples in Python.
End of explanation
"""
# Creando una matriz identidad de 2x2
I = np.eye(2)
I
# Multiplicar una matriz por la identidad nos da la misma matriz
A = np.array([[4, 7],
[2, 6]])
A
A @ I # AxI = A
# Calculando el determinante de la matriz A
np.linalg.det(A)
# Calculando la inversa de A.
A_inv = np.linalg.inv(A)
A_inv
# A x A_inv nos da como resultado I.
A @ A_inv
# Trasponiendo una matriz
A = np.arange(6).reshape(3, 2)
A
np.transpose(A)
"""
Explanation: In this last example we see that the commutative property does not hold; moreover, Python throws an error, since the number of columns of B does not match the number of rows of A, so the multiplication B x A cannot even be carried out.
For a more detailed explanation of the matrix multiplication procedure, you can consult the following tutorial.
The identity matrix, the inverse matrix, the transpose and the determinant
The identity matrix is the neutral element of matrix multiplication; it is the equivalent of the number 1. Any matrix multiplied by the identity matrix gives back the same matrix. The identity matrix is a square matrix (it always has the same number of rows as columns); its main diagonal is made up entirely of 1s and the remaining elements are 0s. It is usually denoted by the letter I.
For example, the 3x3 identity matrix is the following:
$$I=\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}$$
Now that we know the concept of the identity matrix, we can move on to the concept of the inverse matrix. If we have a matrix A, the inverse of A, written $A^{-1}$, is the square matrix that makes the product $A$x$A^{-1}$ equal to the identity matrix I. That is, it is the reciprocal <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> of A.
$$A × A^{-1} = A^{-1} × A = I$$
Keep in mind that in many cases this inverse matrix may not exist. In that case the matrix is said to be singular or degenerate. A matrix is singular if and only if its <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is zero.
The determinant is a special number that can be computed for square matrices. It is computed as the sum of the products of the diagonals of the matrix in one direction minus the sum of the products of the diagonals in the other direction. It is written with the symbol |A| (a quick numerical check of this formula appears right after this cell).
$$A=\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix}$$
$$|A| = (a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} ) - (a_{31} a_{22} a_{13} + a_{32} a_{23} a_{11} + a_{33} a_{21} a_{12})
$$
Finally, the transpose is the matrix in which the rows become columns and the columns become rows. It is written with the symbol $A^\intercal$
$$\begin{bmatrix}a & b \\ c & d \\ e & f\end{bmatrix}^T:=\begin{bmatrix}a & c & e \\ b & d & f\end{bmatrix}$$
Examples in Python:
End of explanation
"""
# graficando el sistema de ecuaciones.
x_vals = np.linspace(0, 5, 50) # crea 50 valores entre 0 y 5
plt.plot(x_vals, (1 - x_vals)/-2) # grafica x - 2y = 1
plt.plot(x_vals, (11 - (3*x_vals))/2) # grafica 3x + 2y = 11
plt.axis(ymin = 0)
"""
Explanation: Systems of linear equations
One of the main applications of linear algebra is solving problems involving systems of linear equations.
A linear equation is an equation that only involves sums and differences of one or more variables to the first power; it is the equation of a straight line. When our problem is represented by more than one linear equation, we speak of a system of linear equations. For example, we could have a system of two equations in two unknowns like the following:
$$ x - 2y = 1$$
$$3x + 2y = 11$$
The idea is to find the values of $x$ and $y$ that satisfy both equations. One way to do this is to plot both lines and look for the points where they cross.
In Python this can be done very easily with the help of matplotlib.
End of explanation
"""
# Comprobando la solucion con la multiplicación de matrices.
A = np.array([[1., -2.],
[3., 2.]])
x = np.array([[3.],[1.]])
A @ x
"""
Explanation: After plotting the functions, we can see that both lines cross at the point (3, 1), so the solution of our system is $x=3$ and $y=1$. In this case, since the system is simple and has only two unknowns, the graphical solution can be useful, but for more complicated systems a numerical solution is needed, and this is where <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> come into play.
That same system can be written as a matrix equation as follows:
$$\begin{bmatrix}1 & -2 \\ 3 & 2\end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}1 \\ 11\end{bmatrix}$$
Which is the same as saying that the matrix A times the matrix $x$ gives the vector b as a result.
$$ Ax = b$$
In this case, since we already know the value of $x$, we can check that our solution is correct by performing the matrix multiplication.
End of explanation
"""
# Creando matriz de coeficientes
A = np.array([[1, 2, 3],
[2, 5, 2],
[6, -3, 1]])
A
# Creando matriz de resultados
b = np.array([6, 4, 2])
b
# Resolviendo sistema de ecuaciones
x = np.linalg.solve(A, b)
x
# Comprobando la solucion
A @ x == b
"""
Explanation: To solve systems of equations numerically, several methods exist:
The substitution method: It consists of isolating any unknown in one of the equations, preferably the one with the smallest coefficient, and then substituting its value into another equation.
The equalization method: It can be understood as a particular case of the substitution method in which the same unknown is isolated in two equations and the right-hand sides of both equations are then set equal to each other.
The reduction method: The procedure consists of transforming one of the equations (generally by multiplying it by some factor) so that we obtain two equations in which the same unknown appears with the same coefficient and opposite sign. The two equations are then added, cancelling that unknown and leaving an equation with a single unknown, which is easy to solve.
The graphical method: It consists of plotting the graph of each equation of the system. This method (applied by hand) is only practical in the Cartesian plane (only two unknowns).
Gaussian elimination: The Gaussian elimination method consists of turning a linear system of n equations in n unknowns into an echelon system, in which the first equation has n unknowns, the second has n - 1 unknowns, ..., and the last equation has 1 unknown. In this way it is easy to start from the last equation and move upward to compute the values of the remaining unknowns.
Gauss-Jordan elimination: This is a variant of the previous method, which consists of triangulating the augmented matrix of the system by elementary transformations until equations with a single unknown are obtained.
Cramer's method: It consists of applying Cramer's rule to solve the system. This method can only be applied when the coefficient matrix of the system is square and has a non-zero determinant.
The point here is not to explain each of these methods, but to know that they exist and that Python makes our lives much easier: to solve a system of equations we simply call the solve() function.
For example, to solve this system of 3 equations in 3 unknowns:
$$ x + 2y + 3z = 6$$
$$ 2x + 5y + 2z = 4$$
$$ 6x - 3y + z = 2$$
We first build the coefficient <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> A and the matrix b of results, and then use solve() to solve it.
End of explanation
"""
# Solving the optimization with pulp
from pulp import *
# declaring the variables
x1 = LpVariable("x1", 0, 800)   # 0 <= x1 <= 800
x2 = LpVariable("x2", 0, 1000)  # 0 <= x2 <= 1000
# defining the problem
prob = LpProblem("problem", LpMaximize)
# defining the constraints
prob += x1+1.5*x2 <= 750
prob += 2*x1+x2 <= 1000
prob += x1>=0
prob += x2>=0
# defining the objective function to maximize
prob += 50*x1+40*x2
# solving the problem
status = prob.solve(GLPK(msg=0))
LpStatus[status]
# printing the results
(value(x1), value(x2))
# Solving the problem with cvxopt
# (cvxopt minimizes c'x subject to Ax <= b, so the maximization is recast by
#  writing the constraints for -x1 and -x2; the original solution is recovered
#  with the sign change in the final print.)
from cvxopt import matrix, solvers
A = matrix([[-1., -2., 1., 0.],   # column of x1
            [-1.5, -1., 0., 1.]]) # column of x2
b = matrix([750., 1000., 0., 0.]) # right-hand sides
c = matrix([50., 40.])            # objective function
# solving the problem
sol = solvers.lp(c, A, b)
# printing the solution
print('{0:.2f}, {1:.2f}'.format(sol['x'][0]*-1, sol['x'][1]*-1))
# Solving the optimization graphically
x_vals = np.linspace(0, 800, 10)        # 10 values between 0 and 800
plt.plot(x_vals, ((750 - x_vals)/1.5))  # plots x1 + 1.5x2 = 750
plt.plot(x_vals, (1000 - 2*x_vals))     # plots 2x1 + x2 = 1000
plt.axis(ymin = 0)
plt.show()
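# Optional cross-check, not part of the original cell: the same LP can be
# solved with scipy.optimize.linprog (assuming SciPy is available). linprog
# minimizes, so the objective coefficients are negated to maximize 50*x1 + 40*x2.
from scipy.optimize import linprog
res = linprog(c=[-50., -40.],
              A_ub=[[1., 1.5], [2., 1.]],
              b_ub=[750., 1000.],
              bounds=[(0, None), (0, None)])
print(res.x)  # should match the optimum found above, x1 = 375, x2 = 250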
"""
Explanation: Linear programming
Linear programming studies situations in which a function must be maximized or minimized subject to certain constraints.
It consists of optimizing (minimizing or maximizing) a linear function, called the objective function, in such a way that its variables are subject to a set of constraints expressed as a system of linear inequalities.
To solve a linear programming problem, we follow these steps:
Choose the unknowns.
Write the objective function in terms of the data of the problem.
Write the constraints as a system of inequalities.
Determine the set of feasible solutions by graphing the constraints.
Compute the coordinates of the vertices of the feasible region (if there are few of them).
Evaluate the objective function at each vertex to see which one gives the maximum or minimum value required by the problem (keeping in mind that a solution may not exist).
Let's look at an example and see how Python helps us solve it in a simple way.
Suppose we have the following objective function:
$$f(x_{1},x_{2})= 50x_{1} + 40x_{2}$$
and the following constraints:
$$x_{1} + 1.5x_{2} \leq 750$$
$$2x_{1} + x_{2} \leq 1000$$
$$x_{1} \geq 0$$
$$x_{2} \geq 0$$
We can solve it using PuLP, CVXOPT, or graphically (with matplotlib) in the following way.
End of explanation
"""
| ES-DOC/esdoc-jupyterhub | notebooks/pcmdi/cmip6/models/sandbox-2/ocean.ipynb | gpl-3.0 |
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-2', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: PCMDI
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:36
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
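# Hypothetical illustration of the expected call -- the name and email below
# are placeholders, not values from the source document:
# DOC.set_author("Jane Doe", "jane.doe@example.org")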
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
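# Hypothetical illustration only -- "OGCM" is simply one of the valid choices
# listed above, not a value recorded for this (sandbox) model:
# DOC.set_value("OGCM")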
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
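# Hypothetical illustration for a 1.N property: presumably one DOC.set_value()
# call is made per value. The choices below are examples taken from the list
# above, not this model's actual configuration:
# DOC.set_value("Potential temperature")
# DOC.set_value("Salinity")
# DOC.set_value("SSH")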
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
geektoni/shogun
|
doc/ipython-notebooks/classification/Classification.ipynb
|
bsd-3-clause
|
import numpy as np
import matplotlib.pyplot as plt
import os
import shogun as sg
%matplotlib inline
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
#Needed lists for the final plot
classifiers_linear = []
classifiers_non_linear = []
classifiers_names = []
fadings = []
"""
Explanation: Visual Comparison Between Different Classification Methods in Shogun
Notebook by Youssef Emad El-Din (Github ID: <a href="https://github.com/youssef-emad/">youssef-emad</a>)
This notebook demonstrates different classification methods in Shogun. The point is to compare and visualize the decision boundaries of different classifiers on two different datasets, one linearly separable and one not.
<a href ="#section1">Data Generation and Visualization</a>
<a href ="#section2">Support Vector Machine</a>
<a href ="#section2a">Linear SVM</a>
<a href ="#section2b">Gaussian Kernel</a>
<a href ="#section2c">Sigmoid Kernel</a>
<a href ="#section2d">Polynomial Kernel</a>
<a href ="#section3">Naive Bayes</a>
<a href ="#section4">Nearest Neighbors</a>
<a href ="#section5">Linear Discriminant Analysis</a>
<a href ="#section6">Quadratic Discriminat Analysis</a>
<a href ="#section7">Gaussian Process</a>
<a href ="#section7a">Logit Likelihood model</a>
<a href ="#section7b">Probit Likelihood model</a>
<a href ="#section8">Putting It All Together</a>
End of explanation
"""
shogun_feats_linear = sg.create_features(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_features_train.dat')))
shogun_labels_linear = sg.create_labels(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_labels_train.dat')))
shogun_feats_non_linear = sg.create_features(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_features_train.dat')))
shogun_labels_non_linear = sg.create_labels(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_labels_train.dat')))
feats_linear = shogun_feats_linear.get('feature_matrix')
labels_linear = shogun_labels_linear.get('labels')
feats_non_linear = shogun_feats_non_linear.get('feature_matrix')
labels_non_linear = shogun_labels_non_linear.get('labels')
"""
Explanation: <a id = "section1">Data Generation and Visualization</a>
Transformation of features to Shogun format using <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1DenseFeatures.html">RealFeatures</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1BinaryLabels.html">BinaryLabels</a> classes.
End of explanation
"""
def plot_binary_data(plot,X_train, y_train):
"""
This function plots 2D binary data with different colors for different labels.
"""
plot.xlabel(r"$x$")
plot.ylabel(r"$y$")
plot.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro')
plot.plot(X_train[0, np.argwhere(y_train == -1)], X_train[1, np.argwhere(y_train == -1)], 'bo')
def compute_plot_isolines(classifier,feats,size=200,fading=True):
"""
This function computes the classification of points on the grid
to get the decision boundaries used in plotting
"""
x1 = np.linspace(1.2*min(feats[0]), 1.2*max(feats[0]), size)
x2 = np.linspace(1.2*min(feats[1]), 1.2*max(feats[1]), size)
x, y = np.meshgrid(x1, x2)
plot_features = sg.create_features(np.array((np.ravel(x), np.ravel(y))))
if fading == True:
plot_labels = classifier.apply_binary(plot_features).get('current_values')
else:
plot_labels = classifier.apply(plot_features).get('labels')
z = plot_labels.reshape((size, size))
return x,y,z
def plot_model(plot,classifier,features,labels,fading=True):
"""
This function plots an input classification model
"""
x,y,z = compute_plot_isolines(classifier,features,fading=fading)
plot.pcolor(x,y,z,cmap='RdBu_r')
plot.contour(x, y, z, linewidths=1, colors='black')
plot_binary_data(plot,features, labels)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Linear Features")
plot_binary_data(plt,feats_linear, labels_linear)
plt.subplot(122)
plt.title("Non Linear Features")
plot_binary_data(plt,feats_non_linear, labels_non_linear)
"""
Explanation: Data visualization methods.
End of explanation
"""
plt.figure(figsize=(15,5))
c = 0.5
epsilon = 1e-3
svm_linear = sg.create_machine("LibLinear", C1=c, C2=c,
labels=shogun_labels_linear,
epsilon=epsilon,
liblinear_solver_type="L2R_L2LOSS_SVC")
svm_linear.train(shogun_feats_linear)
classifiers_linear.append(svm_linear)
classifiers_names.append("SVM Linear")
fadings.append(True)
plt.subplot(121)
plt.title("Linear SVM - Linear Features")
plot_model(plt,svm_linear,feats_linear,labels_linear)
svm_non_linear = sg.create_machine("LibLinear", C1=c, C2=c,
labels=shogun_labels_non_linear,
epsilon=epsilon,
liblinear_solver_type="L2R_L2LOSS_SVC")
svm_non_linear.train(shogun_feats_non_linear)
classifiers_non_linear.append(svm_non_linear)
plt.subplot(122)
plt.title("Linear SVM - Non Linear Features")
plot_model(plt,svm_non_linear,feats_non_linear,labels_non_linear)
"""
Explanation: <a id="section2" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1SVM.html">Support Vector Machine</a>
<a id="section2a" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html">Linear SVM</a>
Shogun provides <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html">LibLinear</a>, a library for large-scale linear learning with a focus on SVMs for classification
End of explanation
"""
gaussian_c = 0.7
gaussian_kernel_linear = sg.create_kernel("GaussianKernel", width=20)
gaussian_svm_linear = sg.create_machine('LibSVM', C1=gaussian_c, C2=gaussian_c,
kernel=gaussian_kernel_linear, labels=shogun_labels_linear)
gaussian_svm_linear.train(shogun_feats_linear)
classifiers_linear.append(gaussian_svm_linear)
fadings.append(True)
gaussian_kernel_non_linear = sg.create_kernel("GaussianKernel", width=10)
gaussian_svm_non_linear=sg.create_machine('LibSVM', C1=gaussian_c, C2=gaussian_c,
kernel=gaussian_kernel_non_linear, labels=shogun_labels_non_linear)
gaussian_svm_non_linear.train(shogun_feats_non_linear)
classifiers_non_linear.append(gaussian_svm_non_linear)
classifiers_names.append("SVM Gaussian Kernel")
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Gaussian Kernel - Linear Features")
plot_model(plt,gaussian_svm_linear,feats_linear,labels_linear)
plt.subplot(122)
plt.title("SVM Gaussian Kernel - Non Linear Features")
plot_model(plt,gaussian_svm_non_linear,feats_non_linear,labels_non_linear)
"""
Explanation: SVM - Kernels
Shogun provides many options for using kernel functions. Kernels in Shogun are built on two base classes: <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Kernel.html">Kernel</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1KernelMachine.html">KernelMachine</a>.
<a id ="section2b" href = "http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianKernel.html">Gaussian Kernel</a>
End of explanation
"""
sigmoid_c = 0.9
sigmoid_kernel_linear = sg.create_kernel("SigmoidKernel", cache_size=200, gamma=1, coef0=0.5)
sigmoid_kernel_linear.init(shogun_feats_linear, shogun_feats_linear)
sigmoid_svm_linear = sg.create_machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c,
kernel=sigmoid_kernel_linear, labels=shogun_labels_linear)
sigmoid_svm_linear.train()
classifiers_linear.append(sigmoid_svm_linear)
classifiers_names.append("SVM Sigmoid Kernel")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Sigmoid Kernel - Linear Features")
plot_model(plt,sigmoid_svm_linear,feats_linear,labels_linear)
sigmoid_kernel_non_linear = sg.create_kernel("SigmoidKernel", cache_size=400, gamma=2.5, coef0=2)
sigmoid_kernel_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear)
sigmoid_svm_non_linear = sg.create_machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c,
kernel=sigmoid_kernel_non_linear, labels=shogun_labels_non_linear)
sigmoid_svm_non_linear.train()
classifiers_non_linear.append(sigmoid_svm_non_linear)
plt.subplot(122)
plt.title("SVM Sigmoid Kernel - Non Linear Features")
plot_model(plt,sigmoid_svm_non_linear,feats_non_linear,labels_non_linear)
"""
Explanation: <a id ="section2c" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSigmoidKernel.html">Sigmoid Kernel</a>
End of explanation
"""
poly_c = 0.5
degree = 4
poly_kernel_linear = sg.create_kernel('PolyKernel', degree=degree, c=1.0)
poly_kernel_linear.init(shogun_feats_linear, shogun_feats_linear)
poly_svm_linear = sg.create_machine('LibSVM', C1=poly_c, C2=poly_c,
kernel=poly_kernel_linear, labels=shogun_labels_linear)
poly_svm_linear.train()
classifiers_linear.append(poly_svm_linear)
classifiers_names.append("SVM Polynomial kernel")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Polynomial Kernel - Linear Features")
plot_model(plt,poly_svm_linear,feats_linear,labels_linear)
poly_kernel_non_linear = sg.create_kernel('PolyKernel', degree=degree, c=1.0)
poly_kernel_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear)
poly_svm_non_linear = sg.create_machine('LibSVM', C1=poly_c, C2=poly_c,
kernel=poly_kernel_non_linear, labels=shogun_labels_non_linear)
poly_svm_non_linear.train()
classifiers_non_linear.append(poly_svm_non_linear)
plt.subplot(122)
plt.title("SVM Polynomial Kernel - Non Linear Features")
plot_model(plt,poly_svm_non_linear,feats_non_linear,labels_non_linear)
"""
Explanation: <a id ="section2d" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CPolyKernel.html">Polynomial Kernel</a>
End of explanation
"""
multiclass_labels_linear = shogun_labels_linear.get('labels')
for i in range(0,len(multiclass_labels_linear)):
if multiclass_labels_linear[i] == -1:
multiclass_labels_linear[i] = 0
multiclass_labels_non_linear = shogun_labels_non_linear.get('labels')
for i in range(0,len(multiclass_labels_non_linear)):
if multiclass_labels_non_linear[i] == -1:
multiclass_labels_non_linear[i] = 0
shogun_multiclass_labels_linear = sg.MulticlassLabels(multiclass_labels_linear)
shogun_multiclass_labels_non_linear = sg.MulticlassLabels(multiclass_labels_non_linear)
naive_bayes_linear = sg.create_machine("GaussianNaiveBayes")
naive_bayes_linear.put('features', shogun_feats_linear)
naive_bayes_linear.put('labels', shogun_multiclass_labels_linear)
naive_bayes_linear.train()
classifiers_linear.append(naive_bayes_linear)
classifiers_names.append("Naive Bayes")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Naive Bayes - Linear Features")
plot_model(plt,naive_bayes_linear,feats_linear,labels_linear,fading=False)
naive_bayes_non_linear = sg.create_machine("GaussianNaiveBayes")
naive_bayes_non_linear.put('features', shogun_feats_non_linear)
naive_bayes_non_linear.put('labels', shogun_multiclass_labels_non_linear)
naive_bayes_non_linear.train()
classifiers_non_linear.append(naive_bayes_non_linear)
plt.subplot(122)
plt.title("Naive Bayes - Non Linear Features")
plot_model(plt,naive_bayes_non_linear,feats_non_linear,labels_non_linear,fading=False)
"""
Explanation: <a id ="section3" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianNaiveBayes.html">Naive Bayes</a>
End of explanation
"""
number_of_neighbors = 10
distances_linear = sg.create_distance('EuclideanDistance')
distances_linear.init(shogun_feats_linear, shogun_feats_linear)
knn_linear = sg.create_machine("KNN", k=number_of_neighbors, distance=distances_linear,
labels=shogun_labels_linear)
knn_linear.train()
classifiers_linear.append(knn_linear)
classifiers_names.append("Nearest Neighbors")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Nearest Neighbors - Linear Features")
plot_model(plt,knn_linear,feats_linear,labels_linear,fading=False)
distances_non_linear = sg.create_distance('EuclideanDistance')
distances_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear)
knn_non_linear = sg.create_machine("KNN", k=number_of_neighbors, distance=distances_non_linear,
labels=shogun_labels_non_linear)
knn_non_linear.train()
classifiers_non_linear.append(knn_non_linear)
plt.subplot(122)
plt.title("Nearest Neighbors - Non Linear Features")
plot_model(plt,knn_non_linear,feats_non_linear,labels_non_linear,fading=False)
"""
Explanation: <a id ="section4" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1KNN.html">Nearest Neighbors</a>
End of explanation
"""
gamma = 0.1
lda_linear = sg.create_machine('LDA', gamma=gamma, labels=shogun_labels_linear)
lda_linear.train(shogun_feats_linear)
classifiers_linear.append(lda_linear)
classifiers_names.append("LDA")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("LDA - Linear Features")
plot_model(plt,lda_linear,feats_linear,labels_linear)
lda_non_linear = sg.create_machine('LDA', gamma=gamma, labels=shogun_labels_non_linear)
lda_non_linear.train(shogun_feats_non_linear)
classifiers_non_linear.append(lda_non_linear)
plt.subplot(122)
plt.title("LDA - Non Linear Features")
plot_model(plt,lda_non_linear,feats_non_linear,labels_non_linear)
"""
Explanation: <a id ="section5" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CLDA.html">Linear Discriminant Analysis</a>
End of explanation
"""
qda_linear = sg.create_machine("QDA", labels=shogun_multiclass_labels_linear)
qda_linear.train(shogun_feats_linear)
classifiers_linear.append(qda_linear)
classifiers_names.append("QDA")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("QDA - Linear Features")
plot_model(plt,qda_linear,feats_linear,labels_linear,fading=False)
qda_non_linear = sg.create_machine("QDA", labels=shogun_multiclass_labels_non_linear)
qda_non_linear.train(shogun_feats_non_linear)
classifiers_non_linear.append(qda_non_linear)
plt.subplot(122)
plt.title("QDA - Non Linear Features")
plot_model(plt,qda_non_linear,feats_non_linear,labels_non_linear,fading=False)
"""
Explanation: <a id ="section6" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1QDA.html">Quadratic Discriminant Analysis</a>
End of explanation
"""
# create Gaussian kernel with width = 5.0
kernel = sg.create_kernel("GaussianKernel", width=5.0)
# create zero mean function
zero_mean = sg.create_gp_mean("ZeroMean")
# create logit likelihood model
likelihood = sg.create_gp_likelihood("LogitLikelihood")
# specify EP approximation inference method
inference_model_linear = sg.create_gp_inference("EPInferenceMethod",kernel=kernel,
features=shogun_feats_linear,
mean_function=zero_mean,
labels=shogun_labels_linear,
likelihood_model=likelihood)
# create and train GP classifier, which uses Laplace approximation
gaussian_logit_linear = sg.create_gaussian_process("GaussianProcessClassification", inference_method=inference_model_linear)
gaussian_logit_linear.train()
classifiers_linear.append(gaussian_logit_linear)
classifiers_names.append("Gaussian Process Logit")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Gaussian Process - Logit - Linear Features")
plot_model(plt,gaussian_logit_linear,feats_linear,labels_linear)
inference_model_non_linear = sg.create_gp_inference("EPInferenceMethod", kernel=kernel,
features=shogun_feats_non_linear,
mean_function=zero_mean,
labels=shogun_labels_non_linear,
likelihood_model=likelihood)
gaussian_logit_non_linear = sg.create_gaussian_process("GaussianProcessClassification",
inference_method=inference_model_non_linear)
gaussian_logit_non_linear.train()
classifiers_non_linear.append(gaussian_logit_non_linear)
plt.subplot(122)
plt.title("Gaussian Process - Logit - Non Linear Features")
plot_model(plt,gaussian_logit_non_linear,feats_non_linear,labels_non_linear)
"""
Explanation: <a id ="section7" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1GaussianProcessBinaryClassification.html">Gaussian Process</a>
<a id ="section7a">Logit Likelihood model</a>
Shogun's <a href= "http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1LogitLikelihood.html">LogitLikelihood</a> and <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1EPInferenceMethod.html">EPInferenceMethod</a> classes are used.
End of explanation
"""
likelihood = sg.create_gp_likelihood("ProbitLikelihood")
inference_model_linear = sg.create_gp_inference("EPInferenceMethod", kernel=kernel,
features=shogun_feats_linear,
mean_function=zero_mean,
labels=shogun_labels_linear,
likelihood_model=likelihood)
gaussian_probit_linear = sg.create_gaussian_process("GaussianProcessClassification",
inference_method=inference_model_linear)
gaussian_probit_linear.train()
classifiers_linear.append(gaussian_probit_linear)
classifiers_names.append("Gaussian Process Probit")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Gaussian Process - Probit - Linear Features")
plot_model(plt,gaussian_probit_linear,feats_linear,labels_linear)
inference_model_non_linear = sg.create_gp_inference("EPInferenceMethod", kernel=kernel,
features=shogun_feats_non_linear,
mean_function=zero_mean,
labels=shogun_labels_non_linear,
likelihood_model=likelihood)
gaussian_probit_non_linear = sg.create_gaussian_process("GaussianProcessClassification",
inference_method=inference_model_non_linear)
gaussian_probit_non_linear.train()
classifiers_non_linear.append(gaussian_probit_non_linear)
plt.subplot(122)
plt.title("Gaussian Process - Probit - Non Linear Features")
plot_model(plt,gaussian_probit_non_linear,feats_non_linear,labels_non_linear)
"""
Explanation: <a id ="section7b">Probit Likelihood model</a>
Shogun's <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1ProbitLikelihood.html">ProbitLikelihood</a> class is used.
End of explanation
"""
figure = plt.figure(figsize=(30,9))
plt.subplot(2,11,1)
plot_binary_data(plt,feats_linear, labels_linear)
for i in range(0,10):
plt.subplot(2,11,i+2)
plt.title(classifiers_names[i])
plot_model(plt,classifiers_linear[i],feats_linear,labels_linear,fading=fadings[i])
plt.subplot(2,11,12)
plot_binary_data(plt,feats_non_linear, labels_non_linear)
for i in range(0,10):
plt.subplot(2,11,13+i)
plot_model(plt,classifiers_non_linear[i],feats_non_linear,labels_non_linear,fading=fadings[i])
"""
Explanation: <a id="section8">Putting It All Together</a>
End of explanation
"""
|
tensorflow/docs
|
site/en/r1/guide/eager.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import tensorflow.compat.v1 as tf
"""
Explanation: Eager Execution
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This is an archived TF1 notebook. These are configured
to run in TF2's
compatibility mode
but will run in TF1 as well. To use TF1 in Colab, use the
%tensorflow_version 1.x
magic.
TensorFlow's eager execution is an imperative programming environment that
evaluates operations immediately, without building graphs: operations return
concrete values instead of constructing a computational graph to run later. This
makes it easy to get started with TensorFlow and debug models, and it
reduces boilerplate as well. To follow along with this guide, run the code
samples below in an interactive python interpreter.
Eager execution is a flexible machine learning platform for research and
experimentation, providing:
An intuitive interface—Structure your code naturally and use Python data
structures. Quickly iterate on small models and small data.
Easier debugging—Call ops directly to inspect running models and test
changes. Use standard Python debugging tools for immediate error reporting.
Natural control flow—Use Python control flow instead of graph control
flow, simplifying the specification of dynamic models.
Eager execution supports most TensorFlow operations and GPU acceleration. For a
collection of examples running in eager execution, see:
tensorflow/contrib/eager/python/examples.
Note: Some models may experience increased overhead with eager execution
enabled. Performance improvements are ongoing, but please
file a bug if you find a
problem and share your benchmarks.
Setup and basic usage
To start eager execution, add `tf.enable_eager_execution()` to the beginning of
the program or console session. Do not add this operation to other modules that
the program calls.
End of explanation
"""
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
"""
Explanation: Now you can run TensorFlow operations and the results will return immediately:
End of explanation
"""
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
"""
Explanation: Enabling eager execution changes how TensorFlow operations behave—now they
immediately evaluate and return their values to Python. tf.Tensor objects
reference concrete values instead of symbolic handles to nodes in a computational
graph. Since there isn't a computational graph to build and run later in a
session, it's easy to inspect results using print() or a debugger. Evaluating,
printing, and checking tensor values does not break the flow for computing
gradients.
Eager execution works nicely with NumPy. NumPy
operations accept tf.Tensor arguments. TensorFlow
math operations convert
Python objects and NumPy arrays to tf.Tensor objects. The
tf.Tensor.numpy method returns the object's value as a NumPy ndarray.
End of explanation
"""
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
"""
Explanation: Dynamic control flow
A major benefit of eager execution is that all the functionality of the host
language is available while your model is executing. So, for example,
it is easy to write fizzbuzz:
End of explanation
"""
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
"""
Explanation: This has conditionals that depend on tensor values and it prints these values
at runtime.
Build a model
Many machine learning models are represented by composing layers. When
using TensorFlow with eager execution you can either write your own layers or
use a layer provided in the tf.keras.layers package.
While you can use any Python object to represent a layer,
TensorFlow has tf.keras.layers.Layer as a convenient base class. Inherit from
it to implement your own layer:
End of explanation
"""
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
"""
Explanation: Use the tf.keras.layers.Dense layer instead of MySimpleLayer above, as it provides
a superset of its functionality (it can also add a bias).
When composing layers into models you can use tf.keras.Sequential to represent
models which are a linear stack of layers. It is easy to use for basic models:
End of explanation
"""
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
"""Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
"""
Explanation: Alternatively, organize models in classes by inheriting from tf.keras.Model.
This is a container for layers that is a layer itself, allowing tf.keras.Model
objects to contain other tf.keras.Model objects.
End of explanation
"""
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
"""
Explanation: It's not required to set an input shape for the tf.keras.Model class since
the parameters are set the first time input is passed to the layer.
tf.keras.layers classes create and contain their own model variables that
are tied to the lifetime of their layer objects. To share layer variables, share
their objects.
Eager training
Computing gradients
Automatic differentiation
is useful for implementing machine learning algorithms such as
backpropagation for training
neural networks. During eager execution, use tf.GradientTape to trace
operations for computing gradients later.
tf.GradientTape is an opt-in feature to provide maximal performance when
not tracing. Since different operations can occur during each call, all
forward-pass operations get recorded to a "tape". To compute the gradient, play
the tape backwards and then discard. A particular tf.GradientTape can only
compute one gradient; subsequent calls throw a runtime error.
End of explanation
"""
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
"""
Explanation: Train a model
The following example creates a multi-layer model that classifies the standard
MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build
trainable graphs in an eager execution environment.
End of explanation
"""
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
"""
Explanation: Even without training, call the model and inspect the output in eager execution:
End of explanation
"""
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 10 == 0:
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
"""
Explanation: While Keras models have a built-in training loop (using the fit method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
End of explanation
"""
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
"""
Explanation: Variables and optimizers
tf.Variable objects store mutable tf.Tensor values accessed during
training to make automatic differentiation easier. The parameters of a model can
be encapsulated in classes as variables.
Better encapsulate model parameters by using tf.Variable with
tf.GradientTape. For example, the automatic differentiation example above
can be rewritten:
End of explanation
"""
if tf.config.list_physical_devices('GPU'):
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
"""
Explanation: Use objects for state during eager execution
With graph execution, program state (such as the variables) is stored in global
collections and their lifetime is managed by the tf.Session object. In
contrast, during eager execution the lifetime of state objects is determined by
the lifetime of their corresponding Python object.
Variables are objects
During eager execution, a variable persists until the last reference to the object
is removed, at which point it is deleted.
End of explanation
"""
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
"""
Explanation: Object-based saving
tf.train.Checkpoint can save and restore tf.Variables to and from
checkpoints:
End of explanation
"""
import os
import tempfile
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
"""
Explanation: To save and load models, tf.train.Checkpoint stores the internal state of objects,
without requiring hidden variables. To record the state of a model,
an optimizer, and a global step, pass them to a tf.train.Checkpoint:
End of explanation
"""
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
"""
Explanation: Object-oriented metrics
tf.metrics are stored as objects. Update a metric by passing the new data to
the callable, and retrieve the result using the tf.metrics.result method,
for example:
End of explanation
"""
from tensorflow.compat.v2 import summary
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# your model code goes here
summary.scalar('global_step', global_step, step=global_step)
!ls tb/
"""
Explanation: Summaries and TensorBoard
TensorBoard is a visualization tool for
understanding, debugging and optimizing the model training process. It uses
summary events that are written while executing the program.
TensorFlow 1 summaries do not work with eager execution, but summaries can still be written with the compat.v2 module:
End of explanation
"""
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
"""
Explanation: Advanced automatic differentiation topics
Dynamic models
tf.GradientTape can also be used in dynamic models. This example for a
backtracking line search
algorithm looks like normal NumPy code, except there are gradients and is
differentiable, despite the complex control flow:
End of explanation
"""
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
"""
Explanation: Custom gradients
Custom gradients are an easy way to override gradients in eager and graph
execution. Within the forward function, define the gradient with respect to the
inputs, outputs, or intermediate results. For example, here's an easy way to clip
the norm of the gradients in the backward pass:
End of explanation
"""
def log1pexp(x):
return tf.log(1 + tf.exp(x))
class Grad(object):
def __init__(self, f):
self.f = f
def __call__(self, x):
x = tf.convert_to_tensor(x)
with tf.GradientTape() as tape:
tape.watch(x)
r = self.f(x)
g = tape.gradient(r, x)
return g
grad_log1pexp = Grad(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.).numpy()
"""
Explanation: Custom gradients are commonly used to provide a numerically stable gradient for a
sequence of operations:
End of explanation
"""
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = Grad(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.).numpy()
"""
Explanation: Here, the log1pexp function can be analytically simplified with a custom
gradient. The implementation below reuses the value for tf.exp(x) that is
computed during the forward pass—making it more efficient by eliminating
redundant calculations:
End of explanation
"""
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tf.config.list_physical_devices('GPU'):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
"""
Explanation: Performance
Computation is automatically offloaded to GPUs during eager execution. If you
want control over where a computation runs you can enclose it in a
tf.device('/gpu:0') block (or the CPU equivalent):
End of explanation
"""
if tf.config.list_physical_devices('GPU'):
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
"""
Explanation: A tf.Tensor object can be copied to a different device to execute its
operations:
End of explanation
"""
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tf.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
"""
Explanation: Benchmarks
For compute-heavy models, such as
ResNet50
training on a GPU, eager execution performance is comparable to graph execution.
But this gap grows larger for models with less computation and there is work to
be done for optimizing hot code paths for models with lots of small operations.
Work with graphs
While eager execution makes development and debugging more interactive,
TensorFlow graph execution has advantages for distributed training, performance
optimizations, and production deployment. However, writing graph code can feel
different than writing regular Python code and more difficult to debug.
For building and training graph-constructed models, the Python program first
builds a graph representing the computation, then invokes Session.run to send
the graph for execution on the C++-based runtime. This provides:
Automatic differentiation using static autodiff.
Simple deployment to a platform independent server.
Graph-based optimizations (common subexpression elimination, constant-folding, etc.).
Compilation and kernel fusion.
Automatic distribution and replication (placing nodes on the distributed system).
Deploying code written for eager execution is more difficult: either generate a
graph from the model, or run the Python runtime and code directly on the server.
Write compatible code
The same code written for eager execution will also build a graph during graph
execution. Do this by simply running the same code in a new Python session where
eager execution is not enabled.
Most TensorFlow operations work during eager execution, but there are some things
to keep in mind:
Use tf.data for input processing instead of queues. It's faster and easier.
Use object-oriented layer APIs—like tf.keras.layers and
tf.keras.Model—since they have explicit storage for variables.
Most model code works the same during eager and graph execution, but there are
exceptions. (For example, dynamic models using Python control flow to change the
computation based on inputs.)
Once eager execution is enabled with tf.enable_eager_execution, it
cannot be turned off. Start a new Python session to return to graph execution.
It's best to write code for both eager execution and graph execution. This
gives you eager's interactive experimentation and debuggability with the
distributed performance benefits of graph execution.
Write, debug, and iterate in eager execution, then import the model graph for
production deployment. Use tf.train.Checkpoint to save and restore model
variables, this allows movement between eager and graph execution environments.
See the examples in:
tensorflow/contrib/eager/python/examples.
Use eager execution in a graph environment
Selectively enable eager execution in a TensorFlow graph environment using
tfe.py_func. This is used when `tf.enable_eager_execution()` has not
been called.
End of explanation
"""
|
uber/pyro
|
tutorial/source/bayesian_regression.ipynb
|
apache-2.0
|
%reset -s -f
import os
from functools import partial
import torch
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import pyro
import pyro.distributions as dist
# for CI testing
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.7.0')
pyro.set_rng_seed(1)
# Set matplotlib settings
%matplotlib inline
plt.style.use('default')
"""
Explanation: Bayesian Regression - Introduction (Part 1)
Regression is one of the most common and basic supervised learning tasks in machine learning. Suppose we're given a dataset $\mathcal{D}$ of the form
$$ \mathcal{D} = { (X_i, y_i) } \qquad \text{for}\qquad i=1,2,...,N$$
The goal of linear regression is to fit a function to the data of the form:
$$ y = w X + b + \epsilon $$
where $w$ and $b$ are learnable parameters and $\epsilon$ represents observation noise. Specifically $w$ is a matrix of weights and $b$ is a bias vector.
In this tutorial, we will first implement linear regression in PyTorch and learn point estimates for the parameters $w$ and $b$. Then we will see how to incorporate uncertainty into our estimates by using Pyro to implement Bayesian regression. Additionally, we will learn how to use Pyro's utility functions to do predictions and serve our model using TorchScript.
Tutorial Outline
Setup
Dataset
Linear Regression
Training with PyTorch Optimizers
Regression Fit
Bayesian Regression with Pyro's SVI
Model
Using an AutoGuide
Optimizing the Evidence Lower Bound
Model Evaluation
Serving the Model using TorchScript
Setup
Let's begin by importing the modules we'll need.
End of explanation
"""
DATA_URL = "https://d2hg8soec8ck9v.cloudfront.net/datasets/rugged_data.csv"
data = pd.read_csv(DATA_URL, encoding="ISO-8859-1")
df = data[["cont_africa", "rugged", "rgdppc_2000"]]
df = df[np.isfinite(df.rgdppc_2000)]
df["rgdppc_2000"] = np.log(df["rgdppc_2000"])
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
african_nations = df[df["cont_africa"] == 1]
non_african_nations = df[df["cont_africa"] == 0]
sns.scatterplot(non_african_nations["rugged"],
non_african_nations["rgdppc_2000"],
ax=ax[0])
ax[0].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="Non African Nations")
sns.scatterplot(african_nations["rugged"],
african_nations["rgdppc_2000"],
ax=ax[1])
ax[1].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="African Nations");
"""
Explanation: Dataset
The following example is adapted from [1]. We would like to explore the relationship between topographic heterogeneity of a nation as measured by the Terrain Ruggedness Index (variable rugged in the dataset) and its GDP per capita. In particular, it was noted by the authors in [2] that terrain ruggedness or bad geography is related to poorer economic performance outside of Africa, but rugged terrains have had a reverse effect on income for African nations. Let us look at the data and investigate this relationship. We will be focusing on three features from the dataset:
rugged: quantifies the Terrain Ruggedness Index
cont_africa: whether the given nation is in Africa
rgdppc_2000: Real GDP per capita for the year 2000
The response variable GDP is highly skewed, so we will log-transform it.
End of explanation
"""
from torch import nn
from pyro.nn import PyroModule
assert issubclass(PyroModule[nn.Linear], nn.Linear)
assert issubclass(PyroModule[nn.Linear], PyroModule)
"""
Explanation: Linear Regression
We would like to predict log GDP per capita of a nation as a function of two features from the dataset - whether the nation is in Africa, and its Terrain Ruggedness Index. We will create a trivial class called PyroModule[nn.Linear] that subclasses PyroModule and torch.nn.Linear. PyroModule is very similar to PyTorch's nn.Module, but additionally supports Pyro primitives as attributes that can be modified by Pyro's effect handlers (see the next section on how we can have module attributes that are pyro.sample primitives). Some general notes:
Learnable parameters in PyTorch modules are instances of nn.Parameter, in this case the weight and bias parameters of the nn.Linear class. When declared inside a PyroModule as attributes, these are automatically registered in Pyro's param store. While this model does not require us to constrain the value of these parameters during optimization, this can also be easily achieved in PyroModule using the PyroParam statement.
Note that while the forward method of PyroModule[nn.Linear] inherits from nn.Linear, it can also be easily overridden. e.g. in the case of logistic regression, we apply a sigmoid transformation to the linear predictor.
End of explanation
"""
# Dataset: Add a feature to capture the interaction between "cont_africa" and "rugged"
df["cont_africa_x_rugged"] = df["cont_africa"] * df["rugged"]
data = torch.tensor(df[["cont_africa", "rugged", "cont_africa_x_rugged", "rgdppc_2000"]].values,
dtype=torch.float)
x_data, y_data = data[:, :-1], data[:, -1]
# Regression model
linear_reg_model = PyroModule[nn.Linear](3, 1)
# Define loss and optimize
loss_fn = torch.nn.MSELoss(reduction='sum')
optim = torch.optim.Adam(linear_reg_model.parameters(), lr=0.05)
num_iterations = 1500 if not smoke_test else 2
def train():
# run the model forward on the data
y_pred = linear_reg_model(x_data).squeeze(-1)
# calculate the mse loss
loss = loss_fn(y_pred, y_data)
# initialize gradients to zero
optim.zero_grad()
# backpropagate
loss.backward()
# take a gradient step
optim.step()
return loss
for j in range(num_iterations):
loss = train()
if (j + 1) % 50 == 0:
print("[iteration %04d] loss: %.4f" % (j + 1, loss.item()))
# Inspect learned parameters
print("Learned parameters:")
for name, param in linear_reg_model.named_parameters():
print(name, param.data.numpy())
"""
Explanation: Training with PyTorch Optimizers
Note that in addition to the two features rugged and cont_africa, we also include an interaction term in our model, which lets us separately model the effect of ruggedness on the GDP for nations within and outside Africa.
We use the mean squared error (MSE) as our loss and Adam as our optimizer from the torch.optim module. We would like to optimize the parameters of our model, namely the weight and bias parameters of the network, which correspond to our regression coefficients and the intercept.
End of explanation
"""
fit = df.copy()
fit["mean"] = linear_reg_model(x_data).detach().cpu().numpy()
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
african_nations = fit[fit["cont_africa"] == 1]
non_african_nations = fit[fit["cont_africa"] == 0]
fig.suptitle("Regression Fit", fontsize=16)
ax[0].plot(non_african_nations["rugged"], non_african_nations["rgdppc_2000"], "o")
ax[0].plot(non_african_nations["rugged"], non_african_nations["mean"], linewidth=2)
ax[0].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="Non African Nations")
ax[1].plot(african_nations["rugged"], african_nations["rgdppc_2000"], "o")
ax[1].plot(african_nations["rugged"], african_nations["mean"], linewidth=2)
ax[1].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="African Nations");
"""
Explanation: Plotting the Regression Fit
Let us plot the regression fit for our model, separately for countries outside and within Africa.
End of explanation
"""
from pyro.nn import PyroSample
class BayesianRegression(PyroModule):
def __init__(self, in_features, out_features):
super().__init__()
self.linear = PyroModule[nn.Linear](in_features, out_features)
self.linear.weight = PyroSample(dist.Normal(0., 1.).expand([out_features, in_features]).to_event(2))
self.linear.bias = PyroSample(dist.Normal(0., 10.).expand([out_features]).to_event(1))
def forward(self, x, y=None):
sigma = pyro.sample("sigma", dist.Uniform(0., 10.))
mean = self.linear(x).squeeze(-1)
with pyro.plate("data", x.shape[0]):
obs = pyro.sample("obs", dist.Normal(mean, sigma), obs=y)
return mean
"""
Explanation: We notice that terrain ruggedness has an inverse relationship with GDP for non-African nations, but it positively affects the GDP for African nations. It is however unclear how robust this trend is. In particular, we would like to understand how the regression fit would vary due to parameter uncertainty. To address this, we will build a simple Bayesian model for linear regression. Bayesian modeling offers a systematic framework for reasoning about model uncertainty. Instead of just learning point estimates, we're going to learn a distribution over parameters that are consistent with the observed data.
Bayesian Regression with Pyro's Stochastic Variational Inference (SVI)
Model
In order to make our linear regression Bayesian, we need to put priors on the parameters $w$ and $b$. These are distributions that represent our prior belief about reasonable values for $w$ and $b$ (before observing any data).
Making a Bayesian model for linear regression is very intuitive using PyroModule as earlier. Note the following:
The BayesianRegression module internally uses the same PyroModule[nn.Linear] module. However, note that we replace the weight and the bias of this module with PyroSample statements. These statements allow us to place a prior over the weight and bias parameters, instead of treating them as fixed learnable parameters. For the bias component, we set a reasonably wide prior since it is likely to be substantially above 0.
The BayesianRegression.forward method specifies the generative process. We generate the mean value of the response by calling the linear module (which, as you saw, samples the weight and bias parameters from the prior and returns a value for the mean response). Finally we use the obs argument to the pyro.sample statement to condition on the observed data y_data with a learned observation noise sigma. The model returns the regression line given by the variable mean.
End of explanation
"""
from pyro.infer.autoguide import AutoDiagonalNormal
model = BayesianRegression(3, 1)
guide = AutoDiagonalNormal(model)
"""
Explanation: Using an AutoGuide
In order to do inference, i.e. learn the posterior distribution over our unobserved parameters, we will use Stochastic Variational Inference (SVI). The guide determines a family of distributions, and SVI aims to find an approximate posterior distribution from this family that has the lowest KL divergence from the true posterior.
Users can write arbitrarily flexible custom guides in Pyro, but in this tutorial, we will restrict ourselves to Pyro's autoguide library. In the next tutorial, we will explore how to write guides by hand.
To begin with, we will use the AutoDiagonalNormal guide that models the distribution of unobserved parameters in the model as a Gaussian with diagonal covariance, i.e. it assumes that there is no correlation amongst the latent variables (quite a strong modeling assumption as we shall see in Part II). Under the hood, this defines a guide that uses a Normal distribution with learnable parameters corresponding to each sample statement in the model. e.g. in our case, this distribution should have a size of (5,), corresponding to the 3 regression coefficients for each of the terms, and one component each for the intercept term and sigma in the model.
Autoguide also supports learning MAP estimates with AutoDelta or composing guides with AutoGuideList (see the docs for more information).
End of explanation
"""
from pyro.infer import SVI, Trace_ELBO
adam = pyro.optim.Adam({"lr": 0.03})
svi = SVI(model, guide, adam, loss=Trace_ELBO())
"""
Explanation: Optimizing the Evidence Lower Bound
We will use stochastic variational inference (SVI) (for an introduction to SVI, see SVI Part I) for doing inference. Just like in the non-Bayesian linear regression model, each iteration of our training loop will take a gradient step, with the difference that in this case, we'll use the Evidence Lower Bound (ELBO) objective instead of the MSE loss by constructing a Trace_ELBO object that we pass to SVI.
End of explanation
"""
pyro.clear_param_store()
for j in range(num_iterations):
# calculate the loss and take a gradient step
loss = svi.step(x_data, y_data)
if j % 100 == 0:
print("[iteration %04d] loss: %.4f" % (j + 1, loss / len(data)))
"""
Explanation: Note that we use the Adam optimizer from Pyro's optim module and not the torch.optim module as earlier. Here Adam is a thin wrapper around torch.optim.Adam (see here for a discussion). Optimizers in pyro.optim are used to optimize and update parameter values in Pyro's parameter store. In particular, you will notice that we do not need to pass in learnable parameters to the optimizer since that is determined by the guide code and happens behind the scenes within the SVI class automatically. To take an ELBO gradient step we simply call the step method of SVI. The data argument we pass to SVI.step will be passed to both model() and guide(). The complete training loop is as follows:
End of explanation
"""
guide.requires_grad_(False)
for name, value in pyro.get_param_store().items():
print(name, pyro.param(name))
"""
Explanation: We can examine the optimized parameter values by fetching from Pyro's param store.
End of explanation
"""
guide.quantiles([0.25, 0.5, 0.75])
"""
Explanation: As you can see, instead of just point estimates, we now have uncertainty estimates (AutoDiagonalNormal.scale) for our learned parameters. Note that Autoguide packs the latent variables into a single tensor, in this case, one entry per variable sampled in our model. Both the loc and scale parameters have size (5,), one for each of the latent variables in the model, as we had remarked earlier.
To look at the distribution of the latent parameters more clearly, we can make use of the AutoDiagonalNormal.quantiles method which will unpack the latent samples from the autoguide, and automatically constrain them to the site's support (e.g. the variable sigma must lie in (0, 10)). We see that the median values for the parameters are quite close to the Maximum Likelihood point estimates we obtained from our first model.
End of explanation
"""
from pyro.infer import Predictive
def summary(samples):
site_stats = {}
for k, v in samples.items():
site_stats[k] = {
"mean": torch.mean(v, 0),
"std": torch.std(v, 0),
"5%": v.kthvalue(int(len(v) * 0.05), dim=0)[0],
"95%": v.kthvalue(int(len(v) * 0.95), dim=0)[0],
}
return site_stats
predictive = Predictive(model, guide=guide, num_samples=800,
return_sites=("linear.weight", "obs", "_RETURN"))
samples = predictive(x_data)
pred_summary = summary(samples)
mu = pred_summary["_RETURN"]
y = pred_summary["obs"]
predictions = pd.DataFrame({
"cont_africa": x_data[:, 0],
"rugged": x_data[:, 1],
"mu_mean": mu["mean"],
"mu_perc_5": mu["5%"],
"mu_perc_95": mu["95%"],
"y_mean": y["mean"],
"y_perc_5": y["5%"],
"y_perc_95": y["95%"],
"true_gdp": y_data,
})
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
african_nations = predictions[predictions["cont_africa"] == 1]
non_african_nations = predictions[predictions["cont_africa"] == 0]
african_nations = african_nations.sort_values(by=["rugged"])
non_african_nations = non_african_nations.sort_values(by=["rugged"])
fig.suptitle("Regression line 90% CI", fontsize=16)
ax[0].plot(non_african_nations["rugged"],
non_african_nations["mu_mean"])
ax[0].fill_between(non_african_nations["rugged"],
non_african_nations["mu_perc_5"],
non_african_nations["mu_perc_95"],
alpha=0.5)
ax[0].plot(non_african_nations["rugged"],
non_african_nations["true_gdp"],
"o")
ax[0].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="Non African Nations")
idx = np.argsort(african_nations["rugged"])
ax[1].plot(african_nations["rugged"],
african_nations["mu_mean"])
ax[1].fill_between(african_nations["rugged"],
african_nations["mu_perc_5"],
african_nations["mu_perc_95"],
alpha=0.5)
ax[1].plot(african_nations["rugged"],
african_nations["true_gdp"],
"o")
ax[1].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="African Nations");
"""
Explanation: Model Evaluation
To evaluate our model, we'll generate some predictive samples and look at the posteriors. For this we will make use of the Predictive utility class.
We generate 800 samples from our trained model. Internally, this is done by first generating samples for the unobserved sites in the guide, and then running the model forward by conditioning the sites to values sampled from the guide. Refer to the Model Serving section for insight on how the Predictive class works.
Note that in return_sites, we specify both the outcome ("obs" site) as well as the return value of the model ("_RETURN") which captures the regression line. Additionally, we would also like to capture the regression coefficients (given by "linear.weight") for further analysis.
The remaining code is simply used to plot the 90% CI for the two variables from our model.
End of explanation
"""
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
fig.suptitle("Posterior predictive distribution with 90% CI", fontsize=16)
ax[0].plot(non_african_nations["rugged"],
non_african_nations["y_mean"])
ax[0].fill_between(non_african_nations["rugged"],
non_african_nations["y_perc_5"],
non_african_nations["y_perc_95"],
alpha=0.5)
ax[0].plot(non_african_nations["rugged"],
non_african_nations["true_gdp"],
"o")
ax[0].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="Non African Nations")
idx = np.argsort(african_nations["rugged"])
ax[1].plot(african_nations["rugged"],
african_nations["y_mean"])
ax[1].fill_between(african_nations["rugged"],
african_nations["y_perc_5"],
african_nations["y_perc_95"],
alpha=0.5)
ax[1].plot(african_nations["rugged"],
african_nations["true_gdp"],
"o")
ax[1].set(xlabel="Terrain Ruggedness Index",
ylabel="log GDP (2000)",
title="African Nations");
"""
Explanation: The above figure shows the uncertainty in our estimate of the regression line, and the 90% CI around the mean. We can also see that most of the data points actually lie outside the 90% CI, and this is expected because we have not plotted the outcome variable which will be affected by sigma! Let us do so next.
End of explanation
"""
weight = samples["linear.weight"]
weight = weight.reshape(weight.shape[0], 3)
gamma_within_africa = weight[:, 1] + weight[:, 2]
gamma_outside_africa = weight[:, 1]
fig = plt.figure(figsize=(10, 6))
sns.distplot(gamma_within_africa, kde_kws={"label": "African nations"},)
sns.distplot(gamma_outside_africa, kde_kws={"label": "Non-African nations"})
fig.suptitle("Density of Slope : log(GDP) vs. Terrain Ruggedness");
"""
Explanation: We observe that the outcome from our model and the 90% CI accounts for the majority of the data points that we observe in practice. It is usually a good idea to do such posterior predictive checks to see if our model gives valid predictions.
Finally, let us revisit our earlier question of how robust the relationship between terrain ruggedness and GDP is against any uncertainty in the parameter estimates from our model. For this, we plot the distribution of the slope of the log GDP given terrain ruggedness for nations within and outside Africa. As can be seen below, the probability mass for African nations is largely concentrated in the positive region and vice-versa for other nations, lending further credence to the original hypothesis.
End of explanation
"""
from collections import defaultdict
from pyro import poutine
from pyro.poutine.util import prune_subsample_sites
import warnings
class Predict(torch.nn.Module):
def __init__(self, model, guide):
super().__init__()
self.model = model
self.guide = guide
def forward(self, *args, **kwargs):
samples = {}
guide_trace = poutine.trace(self.guide).get_trace(*args, **kwargs)
model_trace = poutine.trace(poutine.replay(self.model, guide_trace)).get_trace(*args, **kwargs)
for site in prune_subsample_sites(model_trace).stochastic_nodes:
samples[site] = model_trace.nodes[site]['value']
return tuple(v for _, v in sorted(samples.items()))
predict_fn = Predict(model, guide)
predict_module = torch.jit.trace_module(predict_fn, {"forward": (x_data,)}, check_trace=False)
"""
Explanation: Model Serving via TorchScript
Finally, note that the model, guide and the Predictive utility class are all torch.nn.Module instances, and can be serialized as TorchScript.
Here, we show how we can serve a Pyro model as a torch.jit.ScriptModule, which can be run separately as a C++ program without a Python runtime.
To do so, we will rewrite our own simple version of the Predictive utility class using Pyro's effect handling library. This uses:
the trace poutine to capture the execution trace from running the model/guide code.
the replay poutine to condition the sites in the model to values sampled from the guide trace.
End of explanation
"""
torch.jit.save(predict_module, '/tmp/reg_predict.pt')
pred_loaded = torch.jit.load('/tmp/reg_predict.pt')
pred_loaded(x_data)
"""
Explanation: We use torch.jit.trace_module to trace the forward method of this module and save it using torch.jit.save. This saved model reg_predict.pt can be loaded with PyTorch's C++ API using torch::jit::load(filename), or using the Python API as we do below.
End of explanation
"""
weight = []
for _ in range(800):
# index = 1 corresponds to "linear.weight"
weight.append(pred_loaded(x_data)[1])
weight = torch.stack(weight).detach()
weight = weight.reshape(weight.shape[0], 3)
gamma_within_africa = weight[:, 1] + weight[:, 2]
gamma_outside_africa = weight[:, 1]
fig = plt.figure(figsize=(10, 6))
sns.distplot(gamma_within_africa, kde_kws={"label": "African nations"},)
sns.distplot(gamma_outside_africa, kde_kws={"label": "Non-African nations"})
fig.suptitle("Loaded TorchScript Module : log(GDP) vs. Terrain Ruggedness");
"""
Explanation: Let us check that our Predict module was indeed serialized correctly, by generating samples from the loaded module and regenerating the previous plot.
End of explanation
"""
|
Upward-Spiral-Science/team1
|
code/Assignment11_Group.ipynb
|
apache-2.0
|
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import urllib2
import scipy.stats as stats
np.set_printoptions(precision=3, suppress=True)
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels)
# chopping data based on thresholds on x and y coordinates
x_bounds = (409, 3529)
y_bounds = (1564, 3124)
def check_in_bounds(row, x_bounds, y_bounds):
if row[0] < x_bounds[0] or row[0] > x_bounds[1]:
return False
if row[1] < y_bounds[0] or row[1] > y_bounds[1]:
return False
if row[3] == 0:
return False
return True
indices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv,
x_bounds, y_bounds))
data_thresholded = csv[indices_in_bound]
n = data_thresholded.shape[0]
def synapses_over_unmasked(row):
s = (row[4]/row[3])*(64**3)
return [row[0], row[1], row[2], s]
syn_unmasked = np.apply_along_axis(synapses_over_unmasked, 1, data_thresholded)
syn_normalized = syn_unmasked
print 'end setup'
"""
Explanation: Group
End of explanation
"""
# syn_unmasked_T = syn_unmasked.values.T.tolist()
# columns = [syn_unmasked[i] for i in [4]]
plt.boxplot(syn_unmasked[:,3], 0, 'gD')
plt.xticks([1], ['Set'])
plt.ylabel('Density Distribution')
plt.title('Density Distribution Boxplot')
plt.show()
"""
Explanation: 1) Boxplot of General Density
End of explanation
"""
figure = plt.figure()
plt.hist(data_thresholded[:,4],5000)
plt.title('Histogram of Synapses in Brain Sample')
plt.xlabel('Synapses')
plt.ylabel('frequency')
"""
Explanation: 2) Is the spike noise? More evidence.
We saw from Emily's analysis that there is strong evidence against the spike being noise. If the spike is noticeable in the histogram of raw synapse counts as well as in the histogram of synapse density, we will gain even more evidence that the spike is not noise.
End of explanation
"""
plt.hist(data_thresholded[:,3],5000)
plt.title('Histogram of Unmasked Values')
plt.xlabel('unmasked')
plt.ylabel('frequency')
"""
Explanation: Since we don't see the spike in the histogram of synapses, the spike may be some artifact of the unmasked value. Let's take a look!
3) What is the spike? We still don't know.
End of explanation
"""
# Spike
a = np.apply_along_axis(lambda x:x[4]/x[3], 1, data_thresholded)
spike = a[np.logical_and(a <= 0.0015, a >= 0.0012)]
print "Average Density: ", np.mean(spike)
print "Std Deviation: ", np.std(spike)
# Histogram
n, bins, _ = plt.hist(spike, 2000)
plt.title('Histogram of Synaptic Density')
plt.xlabel('Synaptic Density (syn/voxel)')
plt.ylabel('frequency')
bin_max = np.where(n == n.max())
print 'maxbin', bins[bin_max][0]
bin_width = bins[1]-bins[0]
syn_normalized[:,3] = syn_normalized[:,3]/(64**3)
spike = syn_normalized[np.logical_and(syn_normalized[:,3] <= 0.00131489435301+bin_width, syn_normalized[:,3] >= 0.00131489435301-bin_width)]
print "There are ", len(spike), " points in the 'spike'"
spike_thres = data_thresholded[np.logical_and(syn_normalized[:,3] <= 0.00131489435301+bin_width, syn_normalized[:,3] >= 0.00131489435301-bin_width)]
print spike_thres
import math
fig, ax = plt.subplots(1,2,sharey = True, figsize=(20,5))
weights = np.ones_like(spike_thres[:,3])/len(spike_thres[:,3])
weights2 = np.ones_like(data_thresholded[:,3])/len(data_thresholded[:,3])
ax[0].hist(data_thresholded[:,3], bins = 100, alpha = 0.5, weights = weights2, label = 'all data')
ax[0].hist(spike_thres[:,3], bins = 100, alpha = 0.5, weights = weights, label = 'spike')
ax[0].legend(loc='upper right')
ax[0].set_title('Histogram of Unmasked values in the Spike vs All Data')
weights = np.ones_like(spike_thres[:,4])/len(spike_thres[:,4])
weights2 = np.ones_like(data_thresholded[:,4])/len(data_thresholded[:,4])
ax[1].hist(data_thresholded[:,4], bins = 100, alpha = 0.5, weights = weights2, label = 'all data')
ax[1].hist(spike_thres[:,4], bins = 100, alpha = 0.5, weights = weights, label = 'spike')
ax[1].legend(loc='upper right')
ax[1].set_title('Histogram of Synapses in the Spike vs All Data')
plt.show()
"""
Explanation: 4) Synapses and unmasked: Spike vs Whole Data Set
End of explanation
"""
import sklearn.mixture as mixture
n_clusters = 4
gmm = mixture.GMM(n_components=n_clusters, n_iter=1000, covariance_type='diag')
labels = gmm.fit_predict(syn_unmasked)
clusters = []
for l in range(n_clusters):
a = np.where(labels == l)
clusters.append(syn_unmasked[a,:])
print len(clusters)
print clusters[0].shape
counter = 0
indx = 0
indy = 0
for cluster in clusters:
s = cluster.shape
cluster = cluster.reshape((s[1], s[2]))
counter += 1
print
print'Working on cluster: ' + str(counter)
plt.boxplot(cluster[:,-1], 0, 'gD', showmeans=True)
plt.xticks([1])
plt.ylabel('Density')
plt.title('Boxplot of density \n at cluster = ' + str(int(counter)))
plt.show()
print "Done with cluster"
plt.show()
"""
Explanation: 5) Boxplot of different clusters by coordinates and densities
Cluster 4 has relatively high density
End of explanation
"""
data_uniques, UIndex, UCounts = np.unique(syn_unmasked[:,2], return_index = True, return_counts = True)
'''
print 'uniques'
print 'index: ' + str(UIndex)
print 'counts: ' + str(UCounts)
print 'values: ' + str(data_uniques)
'''
fig, ax = plt.subplots(3,4,figsize=(10,20))
counter = 0
for i in np.unique(syn_unmasked[:,2]):
# print 'calcuating for z: ' + str(int(i))
def check_z(row):
if row[2] == i:
return True
return False
counter += 1
xind = (counter%3) - 1
yind = (counter%4) - 1
index_true = np.where(np.apply_along_axis(check_z, 1, syn_unmasked))
syn_uniqueZ = syn_unmasked[index_true]
ax[xind,yind].boxplot(syn_uniqueZ[:,3], 0, 'gD')
ax[xind,yind].set_xticks([1], i)
ax[xind,yind].set_ylabel('Density')
ax[xind,yind].set_title('Boxplot at \n z = ' + str(int(i)))
#print 'yind = %d, xind = %d' %(yind,xind)
#print i
ax[xind+1,yind+1].boxplot(syn_uniqueZ[:,3], 0, 'gD',showmeans=True)
ax[xind+1,yind+1].set_xticks([1], 'set')
ax[xind+1,yind+1].set_ylabel('Density')
ax[xind+1,yind+1].set_title('Boxplot for \n All Densities')
print "Density Distrubtion Boxplots:"
plt.tight_layout()
plt.show()
"""
Explanation: 5 (OLD) Boxplot distributions of each Z layer
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive/06_structured/labs/6_deploy.ipynb
|
apache-2.0
|
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '2.1'
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
# copy solution to Lab #5 (skip this step if you still have results from Lab 5 in your bucket)
gsutil -m cp -R gs://cloud-training-demos/babyweight/trained_model gs://${BUCKET}/babyweight/trained_model
"""
Explanation: <h1> Deploying and predicting with model </h1>
This notebook illustrates:
<ol>
<li> Deploying model
<li> Predicting with model
</ol>
End of explanation
"""
%%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/ | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ai-platform models delete ${MODEL_NAME}
#gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
#gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION
"""
Explanation: Task 1
What files are present in the trained model directory (gs://${BUCKET}/babyweight/trained_model)?
Hint (highlight to see): <p style='color:white'>
Run gsutil ls in a bash cell.
Answer: model checkpoints are in the trained model directory and several exported models (model architecture + weights) are in the export/exporter subdirectory
</p>
<h2> Task 2: Deploy trained, exported model </h2>
Uncomment and run the appropriate gcloud lines ONE-BY-ONE to
deploy the trained model to act as a REST web service.
Hint (highlight to see): <p style='color:white'>
The very first time, you need only the last two gcloud calls to create the model and the version.
To experiment later, you might need to delete any deployed version, but should not have to recreate the model
</p>
End of explanation
"""
from oauth2client.client import GoogleCredentials
import requests
import json
MODEL_NAME = 'babyweight'
MODEL_VERSION = 'ml_on_gcp'
token = GoogleCredentials.get_application_default().get_access_token().access_token
api = 'https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict' \
.format(PROJECT, MODEL_NAME, MODEL_VERSION)
headers = {'Authorization': 'Bearer ' + token }
data = {
'instances': [
# TODO: complete
{
'key': 'b1',
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Single(1)',
'gestation_weeks': 39
},
]
}
response = requests.post(api, json=data, headers=headers)
print(response.content)
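# Optional (not part of the original lab): decode the JSON response and print each
# prediction; the service is expected to return a top-level "predictions" list.
for pred in json.loads(response.content).get("predictions", []):
    print(pred)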
"""
Explanation: Task 3: Write Python code to invoke the deployed model (online prediction)
<p>
Send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses matches the order of the instances.
The deployed model requires the input instances to be formatted as follows:
<pre>
{
'key': 'b1',
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Single(1)',
'gestation_weeks': 39
},
</pre>
The key is an arbitrary string. Allowed values for is_male are True, False and Unknown.
Allowed values for plurality are Single(1), Twins(2), Triplets(3), Multiple(2+)
End of explanation
"""
%%writefile inputs.json
{"key": "b1", "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "g1", "is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
%%bash
INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json
OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs
gsutil cp inputs.json $INPUT
gsutil -m rm -rf $OUTPUT
gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \
--data-format=TEXT --region ${REGION} \
--input-paths=$INPUT \
--output-path=$OUTPUT \
--model=babyweight --version=ml_on_gcp
"""
Explanation: <h2> Task 4: Try out batch prediction </h2>
<p>
Batch prediction is commonly used when you have thousands to millions of predictions to make.
Create a file with one instance per line and submit it using gcloud.
End of explanation
"""
|
dolittle007/dolittle007.github.io
|
notebooks/dependent_density_regression.ipynb
|
gpl-3.0
|
%matplotlib inline
from IPython.display import HTML
from matplotlib import animation as ani, pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from theano import shared, tensor as tt
plt.rc('animation', writer='avconv')
blue, *_ = sns.color_palette()
SEED = 972915 # from random.org; for reproducibility
np.random.seed(SEED)
"""
Explanation: Dependent density regression
Author: Austin Rochford
In another example, we showed how to use Dirichlet processes to perform Bayesian nonparametric density estimation. This example expands on the previous one, illustrating dependent density regression.
Just as Dirichlet process mixtures can be thought of as infinite mixture models that select the number of active components as part of inference, dependent density regression can be thought of as infinite mixtures of experts that select the active experts as part of inference. Their flexibility and modularity make them powerful tools for performing nonparametric Bayesian Data analysis.
End of explanation
"""
DATA_URI = 'http://www.stat.cmu.edu/~larry/all-of-nonpar/=data/lidar.dat'
def standardize(x):
return (x - x.mean()) / x.std()
df = (pd.read_csv(DATA_URI, sep=' *', engine='python')
.assign(std_range=lambda df: standardize(df.range),
std_logratio=lambda df: standardize(df.logratio)))
df.head()
"""
Explanation: We will use the LIDAR data set from Larry Wasserman's excellent book, All of Nonparametric Statistics. We standardize the data set to improve the rate of convergence of our samplers.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(df.std_range, df.std_logratio,
c=blue);
ax.set_xticklabels([]);
ax.set_xlabel("Standardized range");
ax.set_yticklabels([]);
ax.set_ylabel("Standardized log ratio");
"""
Explanation: We plot the LIDAR data below.
End of explanation
"""
fig, (scatter_ax, hist_ax) = plt.subplots(ncols=2, figsize=(16, 6))
scatter_ax.scatter(df.std_range, df.std_logratio,
c=blue, zorder=2);
scatter_ax.set_xticklabels([]);
scatter_ax.set_xlabel("Standardized range");
scatter_ax.set_yticklabels([]);
scatter_ax.set_ylabel("Standardized log ratio");
bins = np.linspace(df.std_range.min(), df.std_range.max(), 25)
hist_ax.hist(df.std_logratio, bins=bins,
color='k', lw=0, alpha=0.25,
label="All data");
hist_ax.set_xticklabels([]);
hist_ax.set_xlabel("Standardized log ratio");
hist_ax.set_yticklabels([]);
hist_ax.set_ylabel("Frequency");
hist_ax.legend(loc=2);
endpoints = np.linspace(1.05 * df.std_range.min(), 1.05 * df.std_range.max(), 15)
frame_artists = []
for low, high in zip(endpoints[:-1], endpoints[2:]):
interval = scatter_ax.axvspan(low, high,
color='k', alpha=0.5, lw=0, zorder=1);
*_, bars = hist_ax.hist(df[df.std_range.between(low, high)].std_logratio,
bins=bins,
color='k', lw=0, alpha=0.5);
frame_artists.append((interval,) + tuple(bars))
animation = ani.ArtistAnimation(fig, frame_artists,
interval=500, repeat_delay=3000, blit=True)
plt.close(); # prevent the intermediate figure from showing
HTML(animation.to_html5_video())
"""
Explanation: This data set has two interesting properties that make it useful for illustrating dependent density regression.
The relationship between range and log ratio is nonlinear, but has locally linear components.
The observation noise is heteroskedastic; that is, the magnitude of the variance varies with the range.
The intuitive idea behind dependent density regression is to reduce the problem to many (related) density estimates, conditioned on fixed values of the predictors. The following animation illustrates this intuition.
End of explanation
"""
def norm_cdf(z):
return 0.5 * (1 + tt.erf(z / np.sqrt(2)))
def stick_breaking(v):
return v * tt.concatenate([tt.ones_like(v[:, :1]),
tt.extra_ops.cumprod(1 - v, axis=1)[:, :-1]],
axis=1)
N, _ = df.shape
K = 20
std_range = df.std_range.values[:, np.newaxis]
std_logratio = df.std_logratio.values[:, np.newaxis]
x_lidar = shared(std_range, broadcastable=(False, True))
with pm.Model() as model:
alpha = pm.Normal('alpha', 0., 5., shape=K)
beta = pm.Normal('beta', 0., 5., shape=K)
v = norm_cdf(alpha + beta * x_lidar)
w = pm.Deterministic('w', stick_breaking(v))
"""
Explanation: As we slice the data with a window sliding along the x-axis in the left plot, the empirical distribution of the y-values of the points in the window varies in the right plot. An important aspect of this approach is that the density estimates that correspond to close values of the predictor are similar.
In the previous example, we saw that a Dirichlet process estimates a probability density as a mixture model with infinitely many components. In the case of normal component distributions,
$$y \sim \sum_{i = 1}^{\infty} w_i \cdot N(\mu_i, \tau_i^{-1}),$$
where the mixture weights, $w_1, w_2, \ldots$, are generated by a stick-breaking process.
Dependent density regression generalizes this representation of the Dirichlet process mixture model by allowing the mixture weights and component means to vary conditioned on the value of the predictor, $x$. That is,
$$y\ |\ x \sim \sum_{i = 1}^{\infty} w_i\ |\ x \cdot N(\mu_i\ |\ x, \tau_i^{-1}).$$
In this example, we will follow Chapter 23 of Bayesian Data Analysis and use a probit stick-breaking process to determine the conditional mixture weights, $w_i\ |\ x$. The probit stick-breaking process starts by defining
$$v_i\ |\ x = \Phi(\alpha_i + \beta_i x),$$
where $\Phi$ is the cumulative distribution function of the standard normal distribution. We then obtain $w_i\ |\ x$ by applying the stick breaking process to $v_i\ |\ x$. That is,
$$w_i\ |\ x = v_i\ |\ x \cdot \prod_{j = 1}^{i - 1} (1 - v_j\ |\ x).$$
For the LIDAR data set, we use independent normal priors $\alpha_i \sim N(0, 5^2)$ and $\beta_i \sim N(0, 5^2)$. We now express this this model for the conditional mixture weights using pymc3.
End of explanation
"""
with model:
gamma = pm.Normal('gamma', 0., 10., shape=K)
delta = pm.Normal('delta', 0., 10., shape=K)
mu = pm.Deterministic('mu', gamma + delta * x_lidar)
"""
Explanation: We have defined x_lidar as a theano shared variable in order to use pymc3's posterior prediction capabilities later.
While the dependent density regression model theoretically has infinitely many components, we must truncate the model to finitely many components (in this case, twenty) in order to express it using pymc3. After sampling from the model, we will verify that truncation did not unduly influence our results.
Since the LIDAR data seems to have several linear components, we use the linear models
$$
\begin{align}
\mu_i\ |\ x
& \sim \gamma_i + \delta_i x \\
\gamma_i
& \sim N(0, 10^2) \\
\delta_i
& \sim N(0, 10^2)
\end{align}
$$
for the conditional component means.
End of explanation
"""
with model:
tau = pm.Gamma('tau', 1., 1., shape=K)
obs = pm.NormalMixture('obs', w, mu, tau=tau, observed=std_logratio)
"""
Explanation: Finally, we place the prior $\tau_i \sim \textrm{Gamma}(1, 1)$ on the component precisions.
End of explanation
"""
SAMPLES = 20000
BURN = 10000
THIN = 10
with model:
step = pm.Metropolis()
trace_ = pm.sample(SAMPLES, step, random_seed=SEED)
trace = trace_[BURN::THIN]
"""
Explanation: We now sample from the dependent density regression model.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8, 6))
ax.bar(np.arange(K) + 1,
trace['w'].mean(axis=0).max(axis=0));
ax.set_xlim(1 - 0.5, K + 0.5);
ax.set_xticks(np.arange(0, K, 2) + 1);
ax.set_xlabel('Mixture component');
ax.set_ylabel('Largest posterior expected\nmixture weight');
"""
Explanation: To verify that truncation did not unduly influence our results, we plot the largest posterior expected mixture weight for each component. (In this model, each point has a mixture weight for each component, so we plot the maximum mixture weight for each component across all data points in order to judge if the component exerts any influence on the posterior.)
End of explanation
"""
PP_SAMPLES = 5000
lidar_pp_x = np.linspace(std_range.min() - 0.05, std_range.max() + 0.05, 100)
x_lidar.set_value(lidar_pp_x[:, np.newaxis])
with model:
pp_trace = pm.sample_ppc(trace, PP_SAMPLES, random_seed=SEED)
"""
Explanation: Since only three mixture components have appreciable posterior expected weight for any data point, we can be fairly certain that truncation did not unduly influence our results. (If most components had appreciable posterior expected weight, truncation may have influenced the results, and we would have increased the number of components and sampled again.)
Visually, it is reasonable that the LIDAR data has three linear components, so these posterior expected weights seem to have identified the structure of the data well. We now sample from the posterior predictive distribution to get a better understanding of the model's performance.
End of explanation
"""
fig, ax = plt.subplots()
ax.scatter(df.std_range, df.std_logratio,
c=blue, zorder=10,
label=None);
low, high = np.percentile(pp_trace['obs'], [2.5, 97.5], axis=0)
ax.fill_between(lidar_pp_x, low, high,
color='k', alpha=0.35, zorder=5,
label='95% posterior credible interval');
ax.plot(lidar_pp_x, pp_trace['obs'].mean(axis=0),
c='k', zorder=6,
label='Posterior expected value');
ax.set_xticklabels([]);
ax.set_xlabel('Standardized range');
ax.set_yticklabels([]);
ax.set_ylabel('Standardized log ratio');
ax.legend(loc=1);
ax.set_title('LIDAR Data');
"""
Explanation: Below we plot the posterior expected value and the 95% posterior credible interval.
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/migration/UJ6 AutoML for natural language with Vertex AI Text Classification.ipynb
|
apache-2.0
|
! pip3 install -U google-cloud-aiplatform --user
"""
Explanation: Vertex SDK: AutoML natural language text classification model
Installation
Install the latest (preview) version of Vertex SDK.
End of explanation
"""
! pip3 install google-cloud-storage
"""
Explanation: Install the Google cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the Kernel
Once you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
Google Cloud SDK is already installed in Google Cloud Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend, when possible, choosing the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see Region support for Vertex AI services
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session and append it to the names of the resources you create in this tutorial.
End of explanation
"""
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Vertex, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
"""
Explanation: Authenticate your GCP account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on a Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
"""
! gsutil mb -l $REGION gs://$BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al gs://$BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import base64
import json
import os
import sys
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex SDK
Import the Vertex SDK into our Python environment.
End of explanation
"""
# API Endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex AI location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
"""
Explanation: Vertex AI constants
Set up the following constants for Vertex AI:
API_ENDPOINT: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex AI location root path for dataset, model and endpoint resources.
End of explanation
"""
# Text Dataset type
TEXT_SCHEMA = "google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
# Text Labeling type
IMPORT_SCHEMA_TEXT_CLASSIFICATION = "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_classification_single_label_io_format_1.0.0.yaml"
# Text Training task
TRAINING_TEXT_CLASSIFICATION_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml"
"""
Explanation: AutoML constants
Next, set up constants unique to AutoML Text Classification datasets and training:
Dataset Schemas: Tells the managed dataset service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the managed dataset service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Vertex AI Pipelines service the task (e.g., classification) to train the model for.
End of explanation
"""
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
clients["job"] = create_job_client()
for client in clients.items():
print(client)
IMPORT_FILE = "gs://cloud-ml-data/NL-classification/happiness.csv"
! gsutil cat $IMPORT_FILE | head -n 10
"""
Explanation: Clients Vertex AI
The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).
You will use several clients in this tutorial, so set them all up upfront.
Dataset Service for managed datasets.
Model Service for managed models.
Pipeline Service for training.
Endpoint Service for deployment.
Job Service for batch prediction.
Prediction Service for serving. Note: Prediction has a different service endpoint.
End of explanation
"""
DATA_SCHEMA = TEXT_SCHEMA
dataset = {
"display_name": "happiness_" + TIMESTAMP,
"metadata_schema_uri": "gs://" + DATA_SCHEMA,
}
print(
MessageToJson(
aip.CreateDatasetRequest(parent=PARENT, dataset=dataset).__dict__["_pb"]
)
)
"""
Explanation: Example output:
I went on a successful date with someone I felt sympathy and connection with.,affection
I was happy when my son got 90% marks in his examination,affection
I went to the gym this morning and did yoga.,exercise
We had a serious talk with some friends of ours who have been flaky lately. They understood and we had a good evening hanging out.,bonding
I went with grandchildren to butterfly display at Crohn Conservatory,affection
I meditated last night.,leisure
"I made a new recipe for peasant bread, and it came out spectacular!",achievement
I got gift from my elder brother which was really surprising me,affection
YESTERDAY MY MOMS BIRTHDAY SO I ENJOYED,enjoy_the_moment
Watching cupcake wars with my three teen children,affection
Create a dataset
projects.locations.datasets.create
Request
End of explanation
"""
request = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
"""
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"dataset": {
"displayName": "happiness_20210226015238",
"metadataSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
}
}
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/datasets/574578388396670976",
"displayName": "happiness_20210226015238",
"metadataSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml",
"labels": {
"aiplatform.googleapis.com/dataset_metadata_schema": "TEXT"
},
"metadata": {
"dataItemSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/dataitem/text_1.0.0.yaml"
}
}
End of explanation
"""
LABEL_SCHEMA = IMPORT_SCHEMA_TEXT_CLASSIFICATION
import_config = {
"gcs_source": {"uris": [IMPORT_FILE]},
"import_schema_uri": LABEL_SCHEMA,
}
print(
MessageToJson(
aip.ImportDataRequest(
name=dataset_short_id, import_configs=[import_config]
).__dict__["_pb"]
)
)
"""
Explanation: projects.locations.datasets.import
Request
End of explanation
"""
request = clients["dataset"].import_data(
name=dataset_id, import_configs=[import_config]
)
"""
Explanation: Example output:
{
"name": "574578388396670976",
"importConfigs": [
{
"gcsSource": {
"uris": [
"gs://cloud-ml-data/NL-classification/happiness.csv"
]
},
"importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_classification_single_label_io_format_1.0.0.yaml"
}
]
}
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
TRAINING_SCHEMA = TRAINING_TEXT_CLASSIFICATION_SCHEMA
task = json_format.ParseDict(
{
"multi_label": False,
},
Value(),
)
training_pipeline = {
"display_name": "happiness_" + TIMESTAMP,
"input_data_config": {"dataset_id": dataset_short_id},
"model_to_upload": {"display_name": "happiness_" + TIMESTAMP},
"training_task_definition": TRAINING_SCHEMA,
"training_task_inputs": task,
}
print(
MessageToJson(
aip.CreateTrainingPipelineRequest(
parent=PARENT, training_pipeline=training_pipeline
).__dict__["_pb"]
)
)
"""
Explanation: Example output:
{}
Train a model
projects.locations.trainingPipelines.create
Request
End of explanation
"""
request = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
"""
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"trainingPipeline": {
"displayName": "happiness_20210226015238",
"inputDataConfig": {
"datasetId": "574578388396670976"
},
"trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml",
"trainingTaskInputs": {
"multi_label": false
},
"modelToUpload": {
"displayName": "happiness_20210226015238"
}
}
}
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The full unique ID for the training pipeline
training_pipeline_id = request.name
# The short numeric ID for the training pipeline
training_pipeline_short_id = training_pipeline_id.split("/")[-1]
print(training_pipeline_id)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/trainingPipelines/2903115317607661568",
"displayName": "happiness_20210226015238",
"inputDataConfig": {
"datasetId": "574578388396670976"
},
"trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml",
"trainingTaskInputs": {},
"modelToUpload": {
"displayName": "happiness_20210226015238"
},
"state": "PIPELINE_STATE_PENDING",
"createTime": "2021-02-26T02:23:54.166560Z",
"updateTime": "2021-02-26T02:23:54.166560Z"
}
End of explanation
"""
request = clients["pipeline"].get_training_pipeline(name=training_pipeline_id)
"""
Explanation: projects.locations.trainingPipelines.get
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
while True:
response = clients["pipeline"].get_training_pipeline(name=training_pipeline_id)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_name = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
break
else:
model_id = response.model_to_upload.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(20)
print(model_id)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/trainingPipelines/2903115317607661568",
"displayName": "happiness_20210226015238",
"inputDataConfig": {
"datasetId": "574578388396670976"
},
"trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml",
"trainingTaskInputs": {},
"modelToUpload": {
"name": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"displayName": "happiness_20210226015238"
},
"state": "PIPELINE_STATE_SUCCEEDED",
"createTime": "2021-02-26T02:23:54.166560Z",
"startTime": "2021-02-26T02:23:54.396088Z",
"endTime": "2021-02-26T06:08:06.548524Z",
"updateTime": "2021-02-26T06:08:06.548524Z"
}
End of explanation
"""
request = clients["model"].list_model_evaluations(parent=model_id)
"""
Explanation: Evaluate the model
projects.locations.models.evaluations.list
Call
End of explanation
"""
model_evaluations = [json.loads(MessageToJson(mel.__dict__["_pb"])) for mel in request]
print(json.dumps(model_evaluations, indent=2))
# The evaluation slice
evaluation_slice = request.model_evaluations[0].name
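# Optional (not in the original notebook): derive per-class recall from the confusion
# matrix of the first evaluation, assuming the structure shown in the example output below.
cm = model_evaluations[0]["metrics"].get("confusionMatrix", {})
class_names = [spec["displayName"] for spec in cm.get("annotationSpecs", [])]
for i, row in enumerate(cm.get("rows", [])):
    total = sum(row)
    print(class_names[i], "recall:", row[i] / total if total else float("nan"))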
"""
Explanation: Response
End of explanation
"""
request = clients["model"].get_model_evaluation(name=evaluation_slice)
"""
Explanation: Example output:
```
[
{
"name": "projects/116273516712/locations/us-central1/models/2369051733671280640/evaluations/1541152463304785920",
"metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml",
"metrics": {
"confusionMatrix": {
"annotationSpecs": [
{
"displayName": "exercise",
"id": "952213353537732608"
},
{
"id": "1528674105841156096",
"displayName": "achievement"
},
{
"id": "3258056362751426560",
"displayName": "leisure"
},
{
"id": "3834517115054850048",
"displayName": "bonding"
},
{
"id": "5563899371965120512",
"displayName": "enjoy_the_moment"
},
{
"id": "6140360124268544000",
"displayName": "nature"
},
{
"id": "8446203133482237952",
"displayName": "affection"
}
],
"rows": [
[
19.0,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0
],
[
0.0,
342.0,
5.0,
2.0,
13.0,
2.0,
13.0
],
[
2.0,
10.0,
42.0,
1.0,
12.0,
0.0,
2.0
],
[
0.0,
4.0,
0.0,
121.0,
1.0,
0.0,
4.0
],
[
2.0,
29.0,
3.0,
2.0,
98.0,
0.0,
6.0
],
[
0.0,
3.0,
0.0,
1.0,
0.0,
21.0,
1.0
],
[
0.0,
7.0,
0.0,
1.0,
6.0,
0.0,
409.0
]
]
},
"confidenceMetrics": [
{
"f1Score": 0.25,
"recall": 1.0,
"f1ScoreAt1": 0.88776374,
"precisionAt1": 0.88776374,
"precision": 0.14285715,
"recallAt1": 0.88776374
},
{
"confidenceThreshold": 0.05,
"recall": 0.9721519,
"f1Score": 0.8101266,
"recallAt1": 0.88776374,
"f1ScoreAt1": 0.88776374,
"precisionAt1": 0.88776374,
"precision": 0.69439423
},
# REMOVED FOR BREVITY
{
"f1Score": 0.0033698399,
"recall": 0.0016877637,
"confidenceThreshold": 1.0,
"recallAt1": 0.0016877637,
"f1ScoreAt1": 0.0033698399,
"precisionAt1": 1.0,
"precision": 1.0
}
],
"auPrc": 0.95903283,
"logLoss": 0.08260541
},
"createTime": "2021-02-26T06:07:48.967028Z",
"sliceDimensions": [
"annotationSpec"
]
}
]
```
projects.locations.models.evaluations.get
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
test_item = ! gsutil cat $IMPORT_FILE | head -n1
test_item, test_label = str(test_item[0]).split(",")
print(test_item, test_label)
"""
Explanation: Example output:
```
{
"name": "projects/116273516712/locations/us-central1/models/2369051733671280640/evaluations/1541152463304785920",
"metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml",
"metrics": {
"confusionMatrix": {
"annotationSpecs": [
{
"displayName": "exercise",
"id": "952213353537732608"
},
{
"displayName": "achievement",
"id": "1528674105841156096"
},
{
"id": "3258056362751426560",
"displayName": "leisure"
},
{
"id": "3834517115054850048",
"displayName": "bonding"
},
{
"displayName": "enjoy_the_moment",
"id": "5563899371965120512"
},
{
"displayName": "nature",
"id": "6140360124268544000"
},
{
"id": "8446203133482237952",
"displayName": "affection"
}
],
"rows": [
[
19.0,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0
],
[
0.0,
342.0,
5.0,
2.0,
13.0,
2.0,
13.0
],
[
2.0,
10.0,
42.0,
1.0,
12.0,
0.0,
2.0
],
[
0.0,
4.0,
0.0,
121.0,
1.0,
0.0,
4.0
],
[
2.0,
29.0,
3.0,
2.0,
98.0,
0.0,
6.0
],
[
0.0,
3.0,
0.0,
1.0,
0.0,
21.0,
1.0
],
[
0.0,
7.0,
0.0,
1.0,
6.0,
0.0,
409.0
]
]
},
"logLoss": 0.08260541,
"confidenceMetrics": [
{
"precision": 0.14285715,
"precisionAt1": 0.88776374,
"recall": 1.0,
"f1ScoreAt1": 0.88776374,
"recallAt1": 0.88776374,
"f1Score": 0.25
},
{
"f1Score": 0.8101266,
"recall": 0.9721519,
"precision": 0.69439423,
"confidenceThreshold": 0.05,
"recallAt1": 0.88776374,
"precisionAt1": 0.88776374,
"f1ScoreAt1": 0.88776374
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 1.0,
"f1Score": 0.0033698399,
"f1ScoreAt1": 0.0033698399,
"precisionAt1": 1.0,
"precision": 1.0,
"recall": 0.0016877637,
"recallAt1": 0.0016877637
}
],
"auPrc": 0.95903283
},
"createTime": "2021-02-26T06:07:48.967028Z",
"sliceDimensions": [
"annotationSpec"
]
}
```
Make batch predictions
Prepare files for batch prediction
End of explanation
"""
import json
import tensorflow as tf
test_item_uri = "gs://" + BUCKET_NAME + "/test.txt"
with tf.io.gfile.GFile(test_item_uri, "w") as f:
f.write(test_item + "\n")
gcs_input_uri = "gs://" + BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": test_item_uri, "mime_type": "text/plain"}
f.write(json.dumps(data) + "\n")
! gsutil cat $gcs_input_uri
! gsutil cat $test_item_uri
"""
Explanation: Example output:
I went on a successful date with someone I felt sympathy and connection with. affection
Make the batch input file
Let's now make a batch input file, which you store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each text file. The dictionary contains the key/value pairs:
content: The Cloud Storage path to the text file.
mimeType: The content type. In our example, it is a text/plain file.
End of explanation
"""
batch_prediction_job = {
"display_name": "happiness_" + TIMESTAMP,
"model": model_id,
"input_config": {
"instances_format": "jsonl",
"gcs_source": {"uris": [gcs_input_uri]},
},
"output_config": {
"predictions_format": "jsonl",
"gcs_destination": {
"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
},
},
"dedicated_resources": {
"machine_spec": {
"machine_type": "n1-standard-2",
"accelerator_count": 0,
},
"starting_replica_count": 1,
"max_replica_count": 1,
},
}
print(
MessageToJson(
aip.CreateBatchPredictionJobRequest(
parent=PARENT, batch_prediction_job=batch_prediction_job
).__dict__["_pb"]
)
)
"""
Explanation: Example output:
{"content": "gs://migration-ucaip-trainingaip-20210226015238/test.txt", "mime_type": "text/plain"}
I went on a successful date with someone I felt sympathy and connection with.
projects.locations.batchPredictionJobs.create
Request
End of explanation
"""
request = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
"""
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"batchPredictionJob": {
"displayName": "happiness_20210226015238",
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210226015238/test.jsonl"
]
}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015238/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
}
}
}
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The fully qualified ID for the batch job
batch_job_id = request.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/4770983263059574784",
"displayName": "happiness_20210226015238",
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210226015238/test.jsonl"
]
}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015238/batch_output/"
}
},
"state": "JOB_STATE_PENDING",
"completionStats": {
"incompleteCount": "-1"
},
"createTime": "2021-02-26T09:37:44.471843Z",
"updateTime": "2021-02-26T09:37:44.471843Z"
}
End of explanation
"""
request = clients["job"].get_batch_prediction_job(name=batch_job_id)
"""
Explanation: projects.locations.batchPredictionJobs.get
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
def get_latest_predictions(gcs_out_dir):
""" Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
response = clients["job"].get_batch_prediction_job(name=batch_job_id)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", response.state)
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
folder = get_latest_predictions(
response.output_config.gcs_destination.output_uri_prefix
)
! gsutil ls $folder/prediction*.jsonl
! gsutil cat $folder/prediction*.jsonl
break
time.sleep(60)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/4770983263059574784",
"displayName": "happiness_20210226015238",
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210226015238/test.jsonl"
]
}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015238/batch_output/"
}
},
"state": "JOB_STATE_PENDING",
"completionStats": {
"incompleteCount": "-1"
},
"createTime": "2021-02-26T09:37:44.471843Z",
"updateTime": "2021-02-26T09:37:44.471843Z"
}
End of explanation
"""
endpoint = {"display_name": "happiness_" + TIMESTAMP}
print(
MessageToJson(
aip.CreateEndpointRequest(parent=PARENT, endpoint=endpoint).__dict__["_pb"]
)
)
"""
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210226015238/batch_output/prediction-happiness_20210226015238-2021-02-26T09:37:44.261133Z/predictions_00001.jsonl
{"instance":{"content":"gs://migration-ucaip-trainingaip-20210226015238/test.txt","mimeType":"text/plain"},"prediction":{"ids":["8446203133482237952","3834517115054850048","1528674105841156096","5563899371965120512","952213353537732608","3258056362751426560","6140360124268544000"],"displayNames":["affection","bonding","achievement","enjoy_the_moment","exercise","leisure","nature"],"confidences":[0.9183423,0.045685068,0.024327256,0.0057157497,0.0040851077,0.0012627868,5.8173126E-4]}}
Make online predictions
projects.locations.endpoints.create
Request
End of explanation
"""
request = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
"""
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"endpoint": {
"displayName": "happiness_20210226015238"
}
}
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The fully qualified ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/endpoints/7367713068517687296"
}
End of explanation
"""
deployed_model = {
"model": model_id,
"display_name": "happiness_" + TIMESTAMP,
"automatic_resources": {"min_replica_count": 1, "max_replica_count": 1},
}
traffic_split = {"0": 100}
print(
MessageToJson(
aip.DeployModelRequest(
endpoint=endpoint_id,
deployed_model=deployed_model,
traffic_split=traffic_split,
).__dict__["_pb"]
)
)
"""
Explanation: projects.locations.endpoints.deployModel
Request
End of explanation
"""
request = clients["endpoint"].deploy_model(
endpoint=endpoint_id, deployed_model=deployed_model, traffic_split=traffic_split
)
"""
Explanation: Example output:
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/7367713068517687296",
"deployedModel": {
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"displayName": "happiness_20210226015238",
"automaticResources": {
"minReplicaCount": 1,
"maxReplicaCount": 1
}
},
"trafficSplit": {
"0": 100
}
}
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The unique ID for the deployed model
deployed_model_id = result.deployed_model.id
print(deployed_model_id)
"""
Explanation: Example output:
{
"deployedModel": {
"id": "418518105996656640"
}
}
End of explanation
"""
test_item = ! gsutil cat $IMPORT_FILE | head -n1
test_item, test_label = str(test_item[0]).split(",")
instances_list = [{"content": test_item}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
request = aip.PredictRequest(
endpoint=endpoint_id,
)
request.instances.append(instances)
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: projects.locations.endpoints.predict
Request
End of explanation
"""
request = clients["prediction"].predict(endpoint=endpoint_id, instances=instances)
"""
Explanation: Example output:
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/7367713068517687296",
"instances": [
[
{
"content": "I went on a successful date with someone I felt sympathy and connection with."
}
]
]
}
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
request = clients["endpoint"].undeploy_model(
endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={}
)
"""
Explanation: Example output:
{
"predictions": [
{
"confidences": [
0.8867673277854919,
0.024743923917412758,
0.0034913308918476105,
0.07936617732048035,
0.0013463868526741862,
0.0002393187169218436,
0.0040455833077430725
],
"displayNames": [
"affection",
"achievement",
"enjoy_the_moment",
"bonding",
"leisure",
"nature",
"exercise"
],
"ids": [
"8446203133482237952",
"1528674105841156096",
"5563899371965120512",
"3834517115054850048",
"3258056362751426560",
"6140360124268544000",
"952213353537732608"
]
}
],
"deployedModelId": "418518105996656640"
}
projects.locations.endpoints.undeployModel
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
delete_dataset = True
delete_model = True
delete_endpoint = True
delete_pipeline = True
delete_batchjob = True
delete_bucket = True
# Delete the dataset using the Vertex AI fully qualified identifier for the dataset
try:
if delete_dataset:
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the Vertex AI fully qualified identifier for the model
try:
if delete_model:
clients["model"].delete_model(name=model_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
if delete_endpoint:
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex AI fully qualified identifier for the training pipeline
try:
if delete_pipeline:
clients["pipeline"].delete_training_pipeline(name=training_pipeline_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex AI fully qualified identifier for the batch job
try:
if delete_batchjob:
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
"""
Explanation: Example output:
{}
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation
"""
|
vanheck/blog-notes
|
QuantTrading/creating_trading_strategy_03-zipline.ipynb
|
mit
|
NB_VERSION = 1,0
import sys
import datetime
import pandas as pd
import zipline
%load_ext zipline
print('Verze notebooku:', '.'.join(map(str, NB_VERSION)))
print('Verze pythonu:', '.'.join(map(str, sys.version_info[0:3])))
print('---')
print('Zipline:', zipline.__version__)
print('Pandas:', pd.__version__)
"""
Explanation: Zipline
About this notebook
End of explanation
"""
%%zipline --start 2008-1-1 --end 2017-3-1
from zipline.api import order_target, record, symbol
import matplotlib.pyplot as plt
def initialize(context):
context.i = 0
context.my_smb = 'AAPL'
context.asset = symbol(context.my_smb)
context.short_period = 30
context.long_period = 90
def handle_data(context, data):
    # Skip the first 90 days so that the moving average
    # with the longer period can be computed correctly
context.i += 1
if context.i < context.long_period:
return
short_mavg = data.history(context.asset, 'price', bar_count=context.short_period, frequency="1d").mean()
long_mavg = data.history(context.asset, 'price', bar_count=context.long_period, frequency="1d").mean()
    # Trading logic - buy 100 shares when the short moving average crosses above the long one
if short_mavg > long_mavg:
order_target(context.asset, 100)
elif short_mavg < long_mavg:
order_target(context.asset, 0)
    # Use record to store values that can be processed later
record(MARKET=data.current(context.asset, 'price'),
short_mavg=short_mavg,
long_mavg=long_mavg)
def analyze(context, perf):
fig = plt.figure(figsize=(16,14))
ax1 = fig.add_subplot(211)
perf.portfolio_value.plot(ax=ax1)
ax1.set_ylabel('Hodnota portfolia v $')
ax2 = fig.add_subplot(212)
perf['MARKET'].plot(ax=ax2)
perf[['short_mavg', 'long_mavg']].plot(ax=ax2)
perf_trans = perf.ix[[t != [] for t in perf.transactions]]
buys = perf_trans.ix[[t[0]['amount'] > 0 for t in perf_trans.transactions]]
sells = perf_trans.ix[
[t[0]['amount'] < 0 for t in perf_trans.transactions]]
ax2.plot(buys.index, perf.short_mavg.ix[buys.index],
'^', markersize=10, color='m')
ax2.plot(sells.index, perf.short_mavg.ix[sells.index],
'v', markersize=10, color='k')
ax2.set_ylabel('Vývoj ceny $')
plt.legend(loc=0)
plt.show()
"""
Explanation: Zipline info
In the previous post about backtesting I looked at how to run a backtest with pandas. Pandas is a very powerful helper for algorithmic trading, but backtesting with pandas is error-prone, if only because pandas is aimed at data analysis in general. Every data column has to be created and computed there manually with formulas. There are, however, tools that build on pandas and already have functionality aimed at algorithmic trading built in. You can then focus directly on the trading itself (building portfolios and the entry/exit logic) and let such a tool run the backtest and report its results. One of these tools is Zipline.
Zipline is an open-source Python library developed by the people around Quantopian and their community. It supports both backtesting and live trading, and Quantopian uses this library as the backend for their notebooks and algorithms.
Installation
It is installed via pip - just run the following command on the command line:
sh
pip install zipline
After installation you still need to tell zipline which data source to use. This is done with the zipline ingest command, which activates the default Quandl data source:
sh
zipline ingest
The initialize and handle_data functions
Every zipline algorithm uses two functions:
* initialize(context), which is called first when the algorithm starts; the context parameter is used to define the variables that are needed and do not change as new data arrive.
* handle_data(context, data), which is called every time new market data are ready.
So, in short, in initialize() I define which market I want to trade - which data I am interested in - and, if needed, any settings I want to keep for the whole run of the algorithm. A minimal skeleton of this structure is sketched below.
In handle_data() I implement my trading system based on moving-average crossovers; it is practically the same strategy I wrote about in the previous post.
Finally, I add the optional analyze() function, which is called at the end of the whole process and displays the results.
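For reference, the minimal structure zipline expects looks roughly like this (a sketch only; the symbol and the fixed target position are arbitrary placeholders, not the strategy from this post):
python
from zipline.api import order_target, record, symbol

def initialize(context):
    # Runs once at the start: pick the asset and any fixed settings.
    context.asset = symbol('AAPL')

def handle_data(context, data):
    # Runs on every new bar: read the current price and place orders.
    price = data.current(context.asset, 'price')
    order_target(context.asset, 10)   # hold a fixed position of 10 shares
    record(price=price)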
End of explanation
"""
|
sdpython/ensae_teaching_cs
|
_doc/notebooks/td2a_algo/knn_high_dimension_correction.ipynb
|
mit
|
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
"""
Explanation: 2A.algo - Nearest neighbors in high dimension - correction
The k-nearest-neighbors method is a fairly simple algorithm that becomes very slow in high dimension. This notebook proposes a way to speed it up (PCA) at the cost of a small loss in accuracy.
End of explanation
"""
import time
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
def what_to_measure(n, n_features, n_classes=3, n_clusters_per_class=2, n_informative=8,
neighbors=5, algorithm="brute"):
datax, datay = make_classification(n, n_features=n_features, n_classes=n_classes,
n_clusters_per_class=n_clusters_per_class,
n_informative=n_informative)
model = KNeighborsClassifier(neighbors, algorithm=algorithm)
model.fit(datax, datay)
t1 = time.perf_counter()
y = model.predict(datax)
t2 = time.perf_counter()
return t2 - t1, y
dt, y = what_to_measure(2000, 10)
dt
"""
Explanation: Q1: k-nn: measuring performance
End of explanation
"""
x = []
y = []
ys = []
for nf in [10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000]:
x.append(nf)
dt, _ = what_to_measure(5000, n_features=nf)
y.append(dt)
if nf <= 100:
dt2, _ = what_to_measure(5000, n_features=nf, algorithm="ball_tree")
else:
dt2 = None
ys.append(dt2)
print("nf={0} dt={1} dt2={2}".format(nf, dt, dt2))
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
ax.plot(x, y, "o-", label="brute")
ax.plot(x, ys, "o-", label="ball_tree")
ax.set_xlabel("number of features")
ax.set_ylabel("prediction time in seconds")
ax.legend()
"""
Explanation: dimension
End of explanation
"""
x = []
y = []
ys = []
for nobs in [1000, 2000, 5000, 10000, 12000, 15000, 17000, 20000]:
x.append(nobs)
dt, _ = what_to_measure(nobs, n_features=200)
y.append(dt)
if nobs <= 5000:
dt2, _ = what_to_measure(nobs, n_features=200, algorithm="ball_tree")
else:
dt2 = None
ys.append(dt2)
print("nobs={0} dt={1} dt2={2}".format(nobs, dt, dt2))
fig, ax = plt.subplots(1, 1)
ax.plot(x, y, "o-", label="brute")
ax.plot(x, ys, "o-", label="ball_tree")
ax.set_xlabel("number of observations")
ax.set_ylabel("prediction time in seconds")
ax.legend()
"""
Explanation: observations
End of explanation
"""
import numpy
import numpy.random
import random
import scipy.sparse
def random_sparse_matrix(shape, ratio_sparse=0.2):
rnd = numpy.random.rand(shape[0] * shape[1])
sparse = 0
for i in range(0, len(rnd)):
x = random.random()
if x <= ratio_sparse - sparse:
sparse += 1 - ratio_sparse
else:
rnd[i] = 0
sparse -= ratio_sparse
rnd.resize(shape[0], shape[1])
return scipy.sparse.csr_matrix(rnd)
mat = random_sparse_matrix((20, 20))
"% non null coefficient", 1. * mat.nnz / (mat.shape[0] * mat.shape[1]), "shape", mat.shape
import random
from scipy.sparse import hstack
def what_to_measure_sparse(n, n_features, n_classes=3, n_clusters_per_class=2, n_informative=8,
neighbors=5, algorithm="brute", nb_sparse=20, ratio_sparse=0.2):
datax, datay = make_classification(n, n_features=n_features, n_classes=n_classes,
n_clusters_per_class=n_clusters_per_class,
n_informative=n_informative)
sp = random_sparse_matrix((datax.shape[0], (nb_sparse - n_features)), ratio_sparse=ratio_sparse)
datax = hstack([datax, sp])
model = KNeighborsClassifier(neighbors, algorithm=algorithm)
model.fit(datax, datay)
t1 = time.perf_counter()
y = model.predict(datax)
t2 = time.perf_counter()
return t2 - t1, y, datax.nnz / (datax.shape[0] * datax.shape[1])
dt, y, sparse_ratio = what_to_measure_sparse(2000, 10, nb_sparse=100, ratio_sparse=0.2)
dt, sparse_ratio
"""
Explanation: Q2: k-nn with sparse features
We repeat the timing measurement, but this time with sparse datasets. We reuse the previous data and append random sparse coordinates to it. The first function, random_sparse_matrix, builds a random sparse matrix.
End of explanation
"""
x = []
y = []
nfd = 200
for nf in [10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000]:
x.append(nf)
dt, _, ratio = what_to_measure_sparse(2000, n_features=nfd, nb_sparse=nfd+nf,
ratio_sparse=1.*nfd/(nfd+nf))
y.append(dt)
print("nf={0} dt={1} ratio={2}".format(nf, dt, ratio))
fig, ax = plt.subplots(1, 1)
ax.plot(x, y, "o-", label="brute")
ax.set_xlabel("number of dimensions")
ax.set_ylabel("prediction time in seconds")
ax.legend()
"""
Explanation: Only the brute algorithm accepts sparse features.
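A quick way to check that statement (a sketch added for illustration, not part of the original exercise; the shapes and density below are arbitrary): fitting a sparse matrix with algorithm="ball_tree" is expected to raise an error, while "brute" accepts it.
python
import scipy.sparse
from sklearn.neighbors import KNeighborsClassifier

Xs = scipy.sparse.random(100, 20, density=0.2, format="csr")
ys = [i % 3 for i in range(100)]

try:
    KNeighborsClassifier(5, algorithm="ball_tree").fit(Xs, ys)
except Exception as e:
    print("ball_tree rejects sparse input:", e)

KNeighborsClassifier(5, algorithm="brute").fit(Xs, ys)  # works with sparse input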
End of explanation
"""
from sklearn.model_selection import train_test_split
def what_to_measure_perf(n, n_features, n_classes=3, n_clusters_per_class=2, n_informative=8,
neighbors=5, algorithm="brute"):
datax, datay = make_classification(n, n_features=n_features, n_classes=n_classes,
n_clusters_per_class=n_clusters_per_class,
n_informative=n_informative)
X_train, X_test, y_train, y_test = train_test_split(datax, datay)
model = KNeighborsClassifier(neighbors, algorithm=algorithm)
model.fit(X_train, y_train)
t1 = time.perf_counter()
y = model.predict(X_test)
t2 = time.perf_counter()
good = (y_test == y).sum() / len(y_test)
return t2 - t1, good
what_to_measure_perf(5000, 100)
x = []
y = []
for nf in [10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000]:
x.append(nf)
dt, perf = what_to_measure_perf(5000, n_features=nf)
y.append(perf)
print("nf={0} perf={1} dt={2}".format(nf, perf, dt))
fig, ax = plt.subplots(1, 1)
ax.plot(x, y, "o-", label="brute")
ax.set_xlabel("number of dimensions")
ax.set_ylabel("% good classification")
ax.legend()
"""
Explanation: The dimension increases but the number of non-zero features stays constant. Since the algorithm depends heavily on the distance between two elements, and the cost of that distance depends only on the number of non-zero coefficients, the prediction time barely grows.
Q3: Can you think of a way to go faster?
The cost of a nearest-neighbors algorithm is linear in the dimension, because most of the time is spent in the distance function, which is itself linear. Let us measure the classification performance as a function of the dimension. This is not entirely rigorous, since the data change and do not have the same properties, but it gives an idea.
End of explanation
"""
from sklearn.decomposition import PCA
def what_to_measure_perf_acp(n, n_features, acp_dim=10,
n_classes=3, n_clusters_per_class=2, n_informative=8,
neighbors=5, algorithm="brute"):
datax, datay = make_classification(n, n_features=n_features, n_classes=n_classes,
n_clusters_per_class=n_clusters_per_class,
n_informative=n_informative)
X_train, X_test, y_train, y_test = train_test_split(datax, datay)
# without PCA
model = KNeighborsClassifier(neighbors, algorithm=algorithm)
model.fit(X_train, y_train)
t1o = time.perf_counter()
y = model.predict(X_test)
t2o = time.perf_counter()
goodo = (y_test == y).sum() / len(y_test)
# with PCA
model = KNeighborsClassifier(neighbors, algorithm=algorithm)
pca = PCA(n_components=acp_dim)
t0 = time.perf_counter()
X_train_pca = pca.fit_transform(X_train)
model.fit(X_train_pca, y_train)
t1 = time.perf_counter()
X_test_pca = pca.transform(X_test)
y = model.predict(X_test_pca)
t2 = time.perf_counter()
good = (y_test == y).sum() / len(y_test)
return t2o - t1o, goodo, t2 - t1, t1 - t0, good
what_to_measure_perf_acp(5000, 100)
x = []
y = []
yp = []
p = []
p_noacp = []
y_noacp = []
for nf in [10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000]:
x.append(nf)
dt_noacp, perf_noacp, dt, dt_train, perf = what_to_measure_perf_acp(5000, n_features=nf)
p.append(perf)
y.append(dt)
yp.append(dt_train)
y_noacp.append(dt_noacp)
p_noacp.append(perf_noacp)
print("nf={0} perf={1} dt_predict={2} dt_train={3}".format(nf, perf, dt, dt_train))
fig, ax = plt.subplots(1, 2, figsize=(12,5))
ax[0].plot(x, y, "o-", label="prediction time with PCA")
ax[0].plot(x, yp, "o-", label="training time with PCA")
ax[0].plot(x, y_noacp, "o-", label="prediction time no PCA")
ax[0].set_xlabel("number of dimensions")
ax[0].set_ylabel("time")
ax[1].plot(x, p, "o-", label="with PCA")
ax[1].plot(x, p_noacp, "o-", label="no PCA")
ax[1].set_xlabel("number of dimensions")
ax[1].set_ylabel("% good classification")
ax[0].legend()
ax[1].legend()
"""
Explanation: Even if the performances are not strictly comparable, it is true that building a distance-based classifier is harder in high dimension. The reason is simple: the more dimensions there are, the more binary the distance becomes: either the coordinates agree on the same dimensions, or they do not and the distance is almost equal to the sum of the squared coordinates.
Back to the main problem: speeding up the nearest-neighbors computation.
The idea is to use a PCA: PCA has the property of finding a hyperplane that reduces the number of dimensions while preserving as much as possible of the inertia of a point cloud (here $G$ denotes the centroid), which can be written as:
$$I = \frac{1}{n} \sum_{i=1}^n \left\Vert X_i - G \right\Vert^2 = \frac{1}{2n^2} \sum_{i=1}^n\sum_{j=1}^n \left\Vert X_i - X_j \right\Vert^2$$
In short, PCA largely preserves distances. This means a PCA reduces the number of dimensions, hence the prediction time, while keeping the distance between two points as close as possible to its original value.
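As a quick sanity check of that property (a sketch added for illustration, not part of the original correction; the synthetic low-rank data below is an assumption chosen to make the effect visible), we can compare pairwise distances before and after a PCA:
python
import numpy
from sklearn.decomposition import PCA
from sklearn.metrics import pairwise_distances

rng = numpy.random.RandomState(0)
# Data that mostly lives on a 10-dimensional subspace of a 100-dimensional space.
X = rng.randn(500, 10) @ rng.randn(10, 100) + 0.01 * rng.randn(500, 100)
X_reduced = PCA(n_components=10).fit_transform(X)

d_full = pairwise_distances(X).ravel()
d_reduced = pairwise_distances(X_reduced).ravel()
# A correlation close to 1 means the pairwise distances are essentially preserved.
print(numpy.corrcoef(d_full, d_reduced)[0, 1])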
End of explanation
"""
|
guyk1971/deep-learning
|
batch-norm/Batch_Normalization_Lesson.ipynb
|
mit
|
# Import necessary packages
import tensorflow as tf
import tqdm
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import MNIST data so we have something for our experiments
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
"""
Explanation: Batch Normalization – Lesson
What is it?
What are it's benefits?
How do we add it to a network?
Let's see it work!
What are you hiding?
What is Batch Normalization?<a id='theory'></a>
Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called "batch" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch.
Why might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network.
For example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network.
Likewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3.
When you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).
Beyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models.
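To make the idea concrete, here is a minimal NumPy sketch of what batch normalization computes for one layer's outputs during training (added for illustration; it is not the TensorFlow implementation used below, and the toy batch is arbitrary):
python
import numpy as np

def batch_norm_forward(x, gamma, beta, epsilon=1e-3):
    """Normalize a mini-batch of layer outputs, then scale and shift.

    x has shape (batch_size, num_features); gamma and beta are learned
    per-feature parameters with shape (num_features,).
    """
    mu = x.mean(axis=0)                        # per-feature batch mean
    var = x.var(axis=0)                        # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + epsilon)  # normalized values
    return gamma * x_hat + beta                # learned scale and shift

# Example: a batch of 4 samples with 3 features each.
x = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0],
              [4.0, 8.0, 12.0]])
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0), out.std(axis=0))  # roughly zero mean, unit std per feature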
Benefits of Batch Normalization<a id="benefits"></a>
Batch normalization optimizes network training. It has been shown to have several benefits:
1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall.
2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train.
3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights.
4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. Because batch normalization regulates the values going into each activation function, non-linearlities that don't seem to work well in deep networks actually become viable again.
5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great.
6. Provides a bit of regularlization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network.
7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization.
Batch Normalization in TensorFlow<a id="implementation_1"></a>
This section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow.
The following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization.
End of explanation
"""
class NeuralNet:
def __init__(self, initial_weights, activation_fn, use_batch_norm):
"""
Initializes this object, creating a TensorFlow graph using the given parameters.
:param initial_weights: list of NumPy arrays or Tensors
Initial values for the weights for every layer in the network. We pass these in
so we can create multiple networks with the same starting weights to eliminate
training differences caused by random initialization differences.
The number of items in the list defines the number of layers in the network,
and the shapes of the items in the list define the number of nodes in each layer.
e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would
create a network with 784 inputs going into a hidden layer with 256 nodes,
followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activation function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param use_batch_norm: bool
Pass True to create a network that uses batch normalization; False otherwise
Note: this network will not use batch normalization on layers that do not have an
activation function.
"""
# Keep track of whether or not this network uses batch normalization.
self.use_batch_norm = use_batch_norm
self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm"
# Batch normalization needs to do different calculations during training and inference,
# so we use this placeholder to tell the graph which behavior to use.
self.is_training = tf.placeholder(tf.bool, name="is_training")
# This list is just for keeping track of data we want to plot later.
# It doesn't actually have anything to do with neural nets or batch normalization.
self.training_accuracies = []
# Create the network graph, but it will not actually have any real values until after you
# call train or test
self.build_network(initial_weights, activation_fn)
def build_network(self, initial_weights, activation_fn):
"""
Build the graph. The graph still needs to be trained via the `train` method.
:param initial_weights: list of NumPy arrays or Tensors
See __init__ for description.
:param activation_fn: Callable
See __init__ for description.
"""
self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])
layer_in = self.input_layer
for weights in initial_weights[:-1]:
layer_in = self.fully_connected(layer_in, weights, activation_fn)
self.output_layer = self.fully_connected(layer_in, initial_weights[-1])
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
"""
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
"""
# Since this class supports both options, only use batch normalization when
# requested. However, do not use it on the final layer, which we identify
# by its lack of an activation function.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
# (See later in the notebook for more details.)
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
# Apply batch normalization to the linear combination of the inputs and weights
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
# Now apply the activation function, *after* the normalization.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):
"""
Trains the model on the MNIST training dataset.
:param session: Session
Used to run training graph operations.
:param learning_rate: float
Learning rate used during gradient descent.
:param training_batches: int
Number of batches to train.
:param batches_per_sample: int
How many batches to train before sampling the validation accuracy.
:param save_model_as: string or None (default None)
Name to use if you want to save the trained model.
"""
# This placeholder will store the target labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define loss and optimizer
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
if self.use_batch_norm:
# If we don't include the update ops as dependencies on the train step, the
# tf.layers.batch_normalization layers won't update their population statistics,
# which will cause the model to fail at inference time
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
# Train for the appropriate number of batches. (tqdm is only for a nice timing display)
for i in tqdm.tqdm(range(training_batches)):
# We use batches of 60 just because the original paper did. You can use any size batch you like.
batch_xs, batch_ys = mnist.train.next_batch(60)
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
# Periodically test accuracy against the 5k validation images and store it for plotting later.
if i % batches_per_sample == 0:
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
self.training_accuracies.append(test_accuracy)
# After training, report the final accuracy on the validation set
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))
# If you want to use this model later for inference instead of having to retrain it,
# just construct it with the same parameters and then pass this file to the 'test' function
if save_model_as:
tf.train.Saver().save(session, save_model_as)
def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):
"""
Tests a trained model on the MNIST testing dataset.
:param session: Session
Used to run the testing graph operations.
:param test_training_accuracy: bool (default False)
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
Note: in real life, *always* perform inference using the population mean and variance.
This parameter exists just to support demonstrating what happens if you don't.
:param include_individual_predictions: bool (default False)
This function always performs an accuracy test against the entire test set. But if this parameter
is True, it performs an extra test, doing 200 predictions one at a time, and displays the results
and accuracy.
:param restore_from: string or None (default None)
Name of a saved model if you want to test with previously saved weights.
"""
# This placeholder will store the true labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# If provided, restore from a previously saved model
if restore_from:
tf.train.Saver().restore(session, restore_from)
# Test against all of the MNIST test data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,
labels: mnist.test.labels,
self.is_training: test_training_accuracy})
print('-'*75)
print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))
# If requested, perform tests predicting individual values rather than batches
if include_individual_predictions:
predictions = []
correct = 0
# Do 200 predictions, 1 at a time
for i in range(200):
# This is a normal prediction using an individual test case. However, notice
# we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.
# Remember that will tell it whether it should use the batch mean & variance or
# the population estimates that were calculated while training the model.
pred, corr = session.run([tf.argmax(self.output_layer, 1), accuracy],
feed_dict={self.input_layer: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
self.is_training: test_training_accuracy})
correct += corr
predictions.append(pred[0])
print("200 Predictions:", predictions)
print("Accuracy on 200 samples:", correct/200)
"""
Explanation: Neural network classes for testing
The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.
About the code:
This class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.
It's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.
End of explanation
"""
def plot_training_accuracies(*args, **kwargs):
"""
Displays a plot of the accuracies calculated during training to demonstrate
how many iterations it took for the model(s) to converge.
:param args: One or more NeuralNet objects
You can supply any number of NeuralNet objects as unnamed arguments
and this will display their training accuracies. Be sure to call `train`
on the NeuralNets before calling this function.
:param kwargs:
You can supply any named parameters here, but `batches_per_sample` is the only
one we look for. It should match the `batches_per_sample` value you passed
to the `train` function.
"""
fig, ax = plt.subplots()
batches_per_sample = kwargs['batches_per_sample']
for nn in args:
ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),
nn.training_accuracies, label=nn.name)
ax.set_xlabel('Training steps')
ax.set_ylabel('Accuracy')
ax.set_title('Validation Accuracy During Training')
ax.legend(loc=4)
ax.set_ylim([0,1])
plt.yticks(np.arange(0, 1.1, 0.1))
plt.grid(True)
plt.show()
def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):
"""
Creates two networks, one with and one without batch normalization, then trains them
with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.
:param use_bad_weights: bool
If True, initialize the weights of both networks to wildly inappropriate weights;
if False, use reasonable starting weights.
:param learning_rate: float
Learning rate used during gradient descent.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activation function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param training_batches: (default 50000)
Number of batches to train.
:param batches_per_sample: (default 500)
How many batches to train before sampling the validation accuracy.
"""
# Use identical starting weights for each network to eliminate differences in
# weight initialization as a cause for differences seen in training performance
#
# Note: The networks will use these weights to define the number of and shapes of
# its layers. The original batch normalization paper used 3 hidden layers
# with 100 nodes in each, followed by a 10 node output layer. These values
# build such a network, but feel free to experiment with different choices.
# However, the input size should always be 784 and the final output should be 10.
if use_bad_weights:
# These weights should be horrible because they have such a large standard deviation
weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,10), scale=5.0).astype(np.float32)
]
else:
# These weights should be good because they have such a small standard deviation
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
# Just to make sure the TensorFlow's default graph is empty before we start another
# test, because we don't bother using different graphs or scoping and naming
# elements carefully in this sample code.
tf.reset_default_graph()
# build two versions of same network, 1 without and 1 with batch normalization
nn = NeuralNet(weights, activation_fn, False)
bn = NeuralNet(weights, activation_fn, True)
# train and test the two models
with tf.Session() as sess:
tf.global_variables_initializer().run()
nn.train(sess, learning_rate, training_batches, batches_per_sample)
bn.train(sess, learning_rate, training_batches, batches_per_sample)
nn.test(sess)
bn.test(sess)
# Display a graph of how validation accuracies changed during training
# so we can compare how the models trained and when they converged
plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)
"""
Explanation: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.
We add batch normalization to layers inside the fully_connected function. Here are some important points about that code:
1. Layers with batch normalization do not include a bias term.
2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.)
3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later.
4. We add the normalization before calling the activation function.
In addition to that code, the training step is wrapped in the following with statement:
python
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
This line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference.
Finally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
We'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization.
Batch Normalization Demos<a id='demos'></a>
This section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier.
We'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights.
Code to support testing
The following two functions support the demos we run in the notebook.
The first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.
The second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really important thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.relu)
"""
Explanation: Comparisons between identical networks, with and without batch normalization
The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.relu, 2000, 50)
"""
Explanation: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max acuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.
If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time over all. (We only trained for 50 thousand batches here so we could plot the comparison.)
The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.sigmoid)
"""
Explanation: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)
In the above example, you should also notice that the networks trained fewer batches per second than what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.relu)
"""
Explanation: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches.
The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.relu)
"""
Explanation: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.
The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.
End of explanation
"""
train_and_test(False, 1, tf.nn.sigmoid)
"""
Explanation: In both of the previous examples, the network with batch normalization manages to gets over 98% accuracy, and get near that result almost immediately. The higher learning rate allows the network to train extremely fast.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)
"""
Explanation: In this example, we switched to a sigmoid activation function. It appears to hande the higher learning rate well, with both networks achieving high accuracy.
The cell below shows a similar pair of networks trained for only 2000 iterations.
End of explanation
"""
train_and_test(False, 2, tf.nn.relu)
"""
Explanation: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.
The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 2, tf.nn.sigmoid)
"""
Explanation: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)
"""
Explanation: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.
However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.
End of explanation
"""
train_and_test(True, 0.01, tf.nn.relu)
"""
Explanation: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
"""
train_and_test(True, 0.01, tf.nn.sigmoid)
"""
Explanation: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
"""
train_and_test(True, 1, tf.nn.relu)
"""
Explanation: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a>
End of explanation
"""
train_and_test(True, 1, tf.nn.sigmoid)
"""
Explanation: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.
End of explanation
"""
train_and_test(True, 2, tf.nn.relu)
"""
Explanation: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time tro train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a>
End of explanation
"""
train_and_test(True, 2, tf.nn.sigmoid)
"""
Explanation: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.
End of explanation
"""
train_and_test(True, 1, tf.nn.relu)
"""
Explanation: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.
Full Disclosure: Batch Normalization Doesn't Fix Everything
Batch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run.
This section includes two examples that show runs when batch normalization did not help at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.
End of explanation
"""
train_and_test(True, 2, tf.nn.relu)
"""
Explanation: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.
End of explanation
"""
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
"""
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
"""
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
num_out_nodes = initial_weights.shape[-1]
# Batch normalization adds additional trainable variables:
# gamma (for scaling) and beta (for shifting).
gamma = tf.Variable(tf.ones([num_out_nodes]))
beta = tf.Variable(tf.zeros([num_out_nodes]))
# These variables will store the mean and variance for this layer over the entire training set,
# which we assume represents the general population distribution.
# By setting `trainable=False`, we tell TensorFlow not to modify these variables during
# back propagation. Instead, we will assign values to these variables ourselves.
pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)
# Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.
# This is the default value TensorFlow uses.
epsilon = 1e-3
def batch_norm_training():
# Calculate the mean and variance for the data coming out of this layer's linear-combination step.
# The [0] defines an array of axes to calculate over.
batch_mean, batch_variance = tf.nn.moments(linear_output, [0])
# Calculate a moving average of the training data's mean and variance while training.
# These will be used during inference.
# Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
# "momentum" to accomplish this and defaults it to 0.99
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
# and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
# This is necessary because those two operations are not actually in the graph
# connecting the linear_output and batch_normalization layers,
# so TensorFlow would otherwise just skip them.
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
# During inference, use our estimated population mean and variance to normalize the layer
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
# Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute
# the operation returned from `batch_norm_training`; otherwise it will execute the graph
# operation returned from `batch_norm_inference`.
batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)
# Pass the batch-normalized layer output through the activation function.
# The literature states there may be cases where you want to perform the batch normalization *after*
# the activation function, but it is difficult to find any uses of that in practice.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
"""
Explanation: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning.
Note: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.
Batch Normalization: A Detailed Look<a id='implementation_2'></a>
The layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization.
In order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.
We represent the average as $\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$
$$
\mu_B \leftarrow \frac{1}{m}\sum_{i=1}^m x_i
$$
We then need to calculate the variance, or mean squared deviation, represented as $\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\mu_B$), which gives us what's called the "deviation" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.
$$
\sigma_{B}^{2} \leftarrow \frac{1}{m}\sum_{i=1}^m (x_i - \mu_B)^2
$$
Once we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
Above, we said "(almost) standard deviation". That's because the real standard deviation for the batch is calculated by $\sqrt{\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch.
Why increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account.
At this point, we have a normalized value, represented as $\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\gamma$, and then add a beta value, $\beta$. Both $\gamma$ and $\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate.
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.
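To make these equations concrete, here is a minimal NumPy sketch (an illustration only, not part of NeuralNet) that applies them to one small batch, with gamma and beta left at their initial values of one and zero:
python
import numpy as np
x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # one batch: 3 examples, 2 outputs per example
epsilon = 1e-3
mu_B = x.mean(axis=0)                             # batch mean, one value per output node
sigma2_B = ((x - mu_B) ** 2).mean(axis=0)         # batch variance (mean squared deviation)
x_hat = (x - mu_B) / np.sqrt(sigma2_B + epsilon)  # normalized values
gamma, beta = np.ones(2), np.zeros(2)             # learnable scale and shift
y = gamma * x_hat + beta                          # final batch-normalized output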
In NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
The next section shows you how to implement the math directly.
Batch normalization without the tf.layers package
Our implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.
However, if you would like to implement batch normalization at a lower level, the following code shows you how.
It uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.
1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before.
End of explanation
"""
def batch_norm_test(test_training_accuracy):
"""
:param test_training_accuracy: bool
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
"""
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
tf.reset_default_graph()
# Train the model
bn = NeuralNet(weights, tf.nn.relu, True)
# First train the network
with tf.Session() as sess:
tf.global_variables_initializer().run()
bn.train(sess, 0.01, 2000, 2000)
bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)
"""
Explanation: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:
It explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.
It initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \leftarrow \gamma \hat{x_i} + \beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.
Unlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly.
TensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block.
The actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization.
tf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.
We use the tf.nn.moments function to calculate the batch mean and variance.
2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. However, it uses these lines to ensure population statistics are updated when using batch normalization:
python
if self.use_batch_norm:
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
Our new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line:
python
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:
python
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)
return gamma * normalized_linear_output + beta
And replace this line in batch_norm_inference:
python
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)
return gamma * normalized_linear_output + beta
As you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\hat{x_i}$:
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
And the second line is a direct translation of the following equation:
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you.
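If you do want that last piece too, one possible sketch (written in the same TensorFlow 1.x style as the rest of this notebook, and offered as an illustration rather than the official solution) replaces the tf.nn.moments call in batch_norm_training with explicit reductions:
python
# Hypothetical replacement for: batch_mean, batch_variance = tf.nn.moments(linear_output, [0])
batch_mean = tf.reduce_mean(linear_output, axis=0)
batch_variance = tf.reduce_mean(tf.square(linear_output - batch_mean), axis=0)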
Why the difference between training and inference?
In the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
And that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
If you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?
First, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at a time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training).
End of explanation
"""
batch_norm_test(True)
"""
Explanation: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.
End of explanation
"""
batch_norm_test(False)
"""
Explanation: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer.
Note: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.
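A tiny NumPy sketch (illustration only) shows the arithmetic behind this collapse: with a batch of one, the value is its own mean, the variance is zero, and the normalized result is zero no matter what the input was.
python
import numpy as np
single_input = np.array([3.7])                            # a "batch" containing a single value
mean, variance = single_input.mean(), single_input.var()  # mean == 3.7, variance == 0
normalized = (single_input - mean) / np.sqrt(variance + 1e-3)
print(normalized)                                         # [0.] -- the original value is gone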
To overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it "normalize" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training.
So in the following example, we pass False for test_training_accuracy, which tells the network that we want it to perform inference with the population statistics it calculated during training.
End of explanation
"""
|
jomavera/Work
|
Interior_Point_Method_Example.ipynb
|
mit
|
x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
"""
Explanation: Given the following LP
$\begin{gather}
\min\quad -x_1 - 4x_2\\
\begin{aligned}
\text{s.t.}\quad
2x_1 - x_2 &\geq 0\\
x_1 - 3x_2 &\leq 0 \\
x_1 + x_2 &\leq 4 \\
\quad x_1, x_2 &\geq 0
\end{aligned}
\end{gather}$
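As a quick sanity check on this problem (not part of the interior-point procedure itself, and assuming scipy is available), the LP can be handed to an off-the-shelf solver after rewriting the first constraint as $-2x_1 + x_2 \leq 0$:
python
from scipy.optimize import linprog
res = linprog(c=[-1, -4],
              A_ub=[[-2, 1], [1, -3], [1, 1]],
              b_ub=[0, 0, 4],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimum should be near x = (4/3, 8/3) with objective value -12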
End of explanation
"""
x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
plt.scatter([1],[1],color='black')
plt.annotate('x_0',(1.05,1.05))
"""
Explanation: LP in standard form
$\begin{gather}
\min \quad -x_1 - 4x_2\\
\begin{aligned}
\text{s.t.}\quad
2x_1 - x_2 - x_3 &= 0\\
x_1 - 3x_2 + x_4 &= 0 \\
x_1 + x_2 + x_5 &= 4 \\
\quad x_1, x_2, x_3, x_4, x_5 &\geq 0
\end{aligned}
\end{gather}$
We see that $(x_1, x_2) = (1, 1)$ is an interior point, so we choose it as the initial point $x_0$:
$$x_0 = \begin{bmatrix}1\\1\\1\\2\\2\end{bmatrix}, \qquad
A = \begin{bmatrix}2 & -1 & -1 & 0 & 0\\ 1 & -3 & 0 & 1 & 0\\ 1 & 1 & 0 & 0 & 1\end{bmatrix}$$
Initial solution: $z = -5$
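Before iterating, a small check (a sketch assuming numpy is imported as np, as in the cells below) confirms that $x_0$ satisfies $Ax = b$ with $b = (0, 0, 4)^T$ and yields the stated objective value:
python
A = np.array([[2, -1, -1, 0, 0], [1, -3, 0, 1, 0], [1, 1, 0, 0, 1]])
c = np.array([-1, -4, 0, 0, 0])
x0 = np.array([1, 1, 1, 2, 2])
print(A @ x0)  # [0 0 4] -> matches b, so x_0 is feasible
print(c @ x0)  # -5      -> the initial objective value z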
End of explanation
"""
mu = 100
gamma = 0.8
A = np.array([[2,-1,-1,0,0],[1,-3,0,1,0],[1,1,0,0,1]])
X = np.array([[1,0,0,0,0],[0,1,0,0,0],[0,0,1,0,0],[0,0,0,2,0],[0,0,0,0,2]])
vector_1 = np.ones((5,1))
c = np.array([[-1],[-4],[0],[0],[0]])
x_0 = np.array([[1],[1],[1],[2],[2]]) # Initial point
#SOLVE EQUATION 4
#------Left-hand side
izq_ec_4 = np.matmul( A, np.matmul( np.power(X,2),A.T ) )
#------Right-hand side
# -mu*A*X*1 + AX^2c
der_ec_4 = -mu*np.matmul( A,np.matmul( X,vector_1 ) ) + np.matmul( A,np.matmul( np.power(X,2),c ) )
#------Solve for dy
dy = np.linalg.solve(izq_ec_4, der_ec_4)
#SOLVE EQUATION 3
ds = np.matmul(-1*A.T,dy) #ds=-A^T*dy
#SOLVE EQUATION 1
izq_ec_1 = mu*np.power(np.linalg.inv(X),2) #mu*X^-2
der_ec_1 = mu*np.matmul(np.linalg.inv(X),vector_1)-c-ds #mu*X^-1*1-c-ds
dx = np.linalg.solve(izq_ec_1,der_ec_1)
#UPDATE x_0
x_1 = x_0 + dx
x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
plt.scatter([1],[1],color='black')
plt.scatter(x_1[0,0],x_1[1,0],color='black') # plot x_1
plt.annotate('x_0',(1.05,1.05))
plt.annotate('x_1',(x_1[0,0]+0.05,x_1[1,0]+0.05)) # annotate x_1
"""
Explanation: Iteration 1
End of explanation
"""
mu = mu*gamma
X = np.array([[x_1[0,0],0,0,0,0],[0,x_1[1,0],0,0,0],
[0,0,x_1[2,0],0,0],[0,0,0,x_1[3,0],0],[0,0,0,0,x_1[4,0]]])
#SOLVE EQUATION 4
#------Left-hand side
izq_ec_4 = np.matmul( A, np.matmul( np.power(X,2),A.T ) )
#------Right-hand side
# -mu*A*X*1 + AX^2c
der_ec_4 = -mu*np.matmul( A,np.matmul( X,vector_1 ) ) + np.matmul( A,np.matmul( np.power(X,2),c ) )
#------Solve for dy
dy = np.linalg.solve(izq_ec_4, der_ec_4)
#SOLVE EQUATION 3
ds = np.matmul(-1*A.T,dy) #ds=-A^T*dy
#SOLVE EQUATION 1
izq_ec_1 = mu*np.power(np.linalg.inv(X),2) #mu*X^-2
der_ec_1 = mu*np.matmul(np.linalg.inv(X),vector_1)-c-ds #mu*X^-1*1-c-ds
dx = np.linalg.solve(izq_ec_1,der_ec_1)
#UPDATE x_1
x_2 = x_1 + dx
x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
plt.scatter([1],[1],color='black')
plt.scatter(x_1[0,0],x_1[1,0],color='black') # plot x_1
plt.scatter(x_2[0,0],x_2[1,0],color='black') # plot x_2
plt.annotate('x_0',(1.05,1.05))
plt.annotate('x_1',(x_1[0,0]+0.05,x_1[1,0]+0.05)) # annotate x_1
plt.annotate('x_2',(x_2[0,0]+0.05,x_2[1,0]+0.05)) # annotate x_2
"""
Explanation: Iteration 2
End of explanation
"""
mu = 100
gamma = 0.8
A = np.array([[2,-1,-1,0,0],[1,-3,0,1,0],[1,1,0,0,1]])
vector_1 = np.ones((5,1))
c = np.array([[-1],[-4],[0],[0],[0]])
x = np.array([[1],[1],[1],[2],[2]]) # Initial point
x1s = [] # Empty list to store x_1's
x2s = [] # Empty list to store x_2's
x1s.append(x[0,0])
x2s.append(x[1,0])
for iteracion in range(100):
X = np.array([[x[0,0],0,0,0,0],[0,x[1,0],0,0,0],
[0,0,x[2,0],0,0],[0,0,0,x[3,0],0],[0,0,0,0,x[4,0]]])
#SOLVE EQUATION 4
#------Left-hand side
izq_ec_4 = np.matmul( A, np.matmul( np.power(X,2),A.T ) )
#------Right-hand side
# -mu*A*X*1 + AX^2c
der_ec_4 = -mu*np.matmul( A,np.matmul( X,vector_1 ) ) + np.matmul( A,np.matmul( np.power(X,2),c ) )
#------Solve for dy
dy = np.linalg.solve(izq_ec_4, der_ec_4)
#SOLVE EQUATION 3
ds = np.matmul(-1*A.T,dy) #ds=-A^T*dy
#SOLVE EQUATION 1
izq_ec_1 = mu*np.power(np.linalg.inv(X),2) #mu*X^-2
der_ec_1 = mu*np.matmul(np.linalg.inv(X),vector_1)-c-ds #mu*X^-1*1-c-ds
dx = np.linalg.solve(izq_ec_1,der_ec_1)
#UPDATE vector x
x = x + dx
mu = mu*gamma
x1s.append( x[0,0] )
x2s.append( x[1,0] )
x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
for iteracion in range(100):
plt.scatter(x1s[iteracion],x2s[iteracion],color='black')
if iteracion % 10 == 0:
nombre = 'x_'+str(iteracion)
plt.annotate(nombre,(x1s[iteracion]+0.05,x2s[iteracion]+0.05))
"""
Explanation: Now let's write a loop to run $n$ iterations
End of explanation
"""
|
cgpotts/cs224u
|
hw_rel_ext.ipynb
|
apache-2.0
|
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
"""
Explanation: Homework and bake-off: Relation extraction using distant supervision
End of explanation
"""
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
"""
Explanation: Contents
Overview
Set-up
Baselines
Hand-build feature functions
Distributed representations
Homework questions
Different model factory [1 points]
Directional unigram features [1.5 points]
The part-of-speech tags of the "middle" words [1.5 points]
Bag of Synsets [2 points]
Your original system [3 points]
Bake-off [1 point]
Overview
This homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision.
As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off.
Set-up
See the first notebook in this unit for set-up instructions.
End of explanation
"""
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
"""
Explanation: As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
End of explanation
"""
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
"""
Explanation: You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in dataset is fair game:
End of explanation
"""
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
"""
Explanation: Baselines
Hand-build feature functions
End of explanation
"""
rel_ext.examine_model_weights(baseline_results)
"""
Explanation: Studying model weights might yield insights:
End of explanation
"""
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
"""
Explanation: Distributed representations
This simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard LogisticRegression-based model_factory. The crucial parameter that enables this is vectorize=False. This essentially says to rel_ext.experiment that your featurizer or your model will do the work of turning examples into vectors; in that case, rel_ext.experiment just organizes these representations by relation type.
End of explanation
"""
def run_svm_model_factory():
##### YOUR CODE HERE
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
"""
Explanation: With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding.
Homework questions
Please embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)
Different model factory [1 points]
The code in rel_ext makes it very easy to experiment with other classifier models: one need only redefine the model_factory argument. This question asks you to assess a Support Vector Classifier.
To submit: A wrapper function run_svm_model_factory that does the following:
Uses rel_ext.experiment with the model factory set to one based in an SVC with kernel='linear' and all other arguments left with default values.
Trains on the 'train' part of splits.
Assesses on the dev part of splits.
Uses featurizers as defined above.
Returns the return value of rel_ext.experiment for this set-up.
The function test_run_svm_model_factory will check that your function conforms to these general specifications.
End of explanation
"""
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
"""
Explanation: Directional unigram features [1.5 points]
The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between X and his son Y and Y and his son X. This question asks you to modify simple_bag_of_words_featurizer to capture these differences.
To submit:
A feature function directional_bag_of_words_featurizer that is just like simple_bag_of_words_featurizer except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function test_directional_bag_of_words_featurizer should help verify that you've done this correctly.
A call to rel_ext.experiment with directional_bag_of_words_featurizer as the only featurizer. (Aside from this, use all the default values for rel_ext.experiment as exemplified above in this notebook.)
rel_ext.experiment returns some of the core objects used in the experiment. How many feature names does the vectorizer have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
End of explanation
"""
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
"""
Explanation: The part-of-speech tags of the "middle" words [1.5 points]
Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on middle_POS.
To submit:
A feature function middle_bigram_pos_tag_featurizer that is just like simple_bag_of_words_featurizer except that it creates a feature for bigram POS sequences. For example, given
The/DT dog/N napped/V
we obtain the list of bigram POS sequences
b = ['<s> DT', 'DT N', 'N V', 'V </s>'].
Of course, middle_bigram_pos_tag_featurizer should return count dictionaries defined in terms of such bigram POS lists, on the model of simple_bag_of_words_featurizer. Don't forget the start and end tags, to model those environments properly! The included function test_middle_bigram_pos_tag_featurizer should help verify that you've done this correctly.
A call to rel_ext.experiment with middle_bigram_pos_tag_featurizer as the only featurizer. (Aside from this, use all the default values for rel_ext.experiment as exemplified above in this notebook.)
End of explanation
"""
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
"""
Explanation: Bag of Synsets [2 points]
The following allows you to use NLTK's WordNet API to get the synsets compatible with dog as used as a noun:
from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')
dog
[Synset('dog.n.01'),
Synset('frump.n.01'),
Synset('dog.n.03'),
Synset('cad.n.01'),
Synset('frank.n.02'),
Synset('pawl.n.01'),
Synset('andiron.n.01')]
This question asks you to create synset-based features from the word/tag pairs in middle_POS.
To submit:
A feature function synset_featurizer that is just like simple_bag_of_words_featurizer except that it returns a list of synsets derived from middle_POS. Stringify these objects with str so that they can be dict keys. Use convert_tag (included below) to convert tags to pos arguments usable by wn.synsets. The included function test_synset_featurizer should help verify that you've done this correctly.
A call to rel_ext.experiment with synset_featurizer as the only featurizer. (Aside from this, use all the default values for rel_ext.experiment.)
End of explanation
"""
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM
# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
# My peak score was: MY_NUMBER
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# STOP COMMENT: Please do not remove this comment.
"""
Explanation: Your original system [3 points]
There are many options, and this could easily grow into a project. Here are a few ideas:
Try out different classifier models, from sklearn and elsewhere.
Add a feature that indicates the length of the middle.
Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).
Introduce features based on the entity mentions themselves. <!-- \[SPOILER: it helps a lot, maybe 4% in F-score. And combines nicely with the directional features.\] -->
Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.
Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The NLTK toolkit contains a variety of parsing algorithms that may help.
The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as GloVe?
In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
End of explanation
"""
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
"""
Explanation: Bake-off [1 point]
For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function rel_ext.bake_off_experiment. Rules:
Only one evaluation is permitted.
No additional system tuning is permitted once the bake-off has started.
The cells below this one constitute your bake-off entry.
People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.
Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.
The announcement will include the details on where to submit your entry.
End of explanation
"""
|
dynaryu/rmtk
|
rmtk/vulnerability/model_generator/SPBELA_approach/SPBELA.ipynb
|
agpl-3.0
|
import SPBELA
from rmtk.vulnerability.common import utils
%matplotlib inline
"""
Explanation: Generation of capacity curves using SP-BELA
The Simplified Pushover-based Earthquake Loss Assessment (SP-BELA) methodology allows the calculation of the displacement capacity (i.e. spectral displacement) and collapse multiplier (i.e. spectral acceleration) using a simplified mechanics-based procedure, similar to what has been proposed by Cosenza et al. 2005. The methodology currently implemented in the Risk Modeller's Toolkit only supports reinforced concrete frames.
<img src="../../../../figures/synthethic_capacity_curves.png" width="350" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
"""
building_model_file = "../../../../../rmtk_data/SPBELA/bare_frames.csv"
damage_model_file = "../../../../../rmtk_data/damage_model_spbela.csv"
"""
Explanation: Load geometric and material properties
In order to use this methodology it is necessary to define a building model, which specifies the probabilistic distribution of the geometrical and material properties. These models need to be defined according to the format described in the RMTK manual. Please specify below the paths for the input files containing the building model and damage model:
End of explanation
"""
no_assets = 200
"""
Explanation: Number of samples
The parameter no_assets below controls the number of synthetic structural models or assets (each one with unique geometrical and material properties) that will be generated using a Monte Carlo sampling process:
End of explanation
"""
building_class_model = SPBELA.read_building_class_model(building_model_file)
assets = SPBELA.generate_assets(building_class_model, no_assets)
damage_model = utils.read_damage_model(damage_model_file)
capacity_curves = SPBELA.generate_capacity_curves(assets, damage_model)
"""
Explanation: Generate the capacity curves
End of explanation
"""
utils.plot_capacity_curves(capacity_curves)
"""
Explanation: Plot the capacity curves
End of explanation
"""
gamma = 1.2
yielding_point_index = 1.0
capacity_curves = utils.add_information(capacity_curves, "gamma", "value", gamma)
capacity_curves = utils.add_information(capacity_curves, "yielding point", "point", yielding_point_index)
"""
Explanation: Adding additional information
Additional information can be added to the capacity curves generated using the above method. For instance, by setting appropriate values for the parameters gamma and yielding_point_index in the cell below, the add_information function can be used to include this data in the previously generated capacity curves.
End of explanation
"""
output_file = "../../../../../rmtk_data/capacity_curves_spbela.csv"
utils.save_SdSa_capacity_curves(capacity_curves, output_file)
"""
Explanation: Save capacity curves
Please specify below the path for the output file to save the capacity curves:
End of explanation
"""
|
d-k-b/udacity-deep-learning
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
unique_words = set(text)
vocab_to_int = { word : idx for idx, word in enumerate(unique_words) }
int_to_vocab = { vocab_to_int[word] : word for word in unique_words }
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
punctuation_tokens = \
{
'.' : '<<period>>',
',' : '<<comma>>',
'"' : '<<quote>>',
';' : '<<semi>>',
'!' : '<<exclam>>',
'?' : '<<ques>>',
'(' : '<<left_par>>',
')' : '<<rght_par>>',
'--' : '<<dash>>',
'\n' : '<<ret>>'
}
return punctuation_tokens
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
inputs = tf.placeholder(tf.int32, [None, None], name= 'input')
targets = tf.placeholder(tf.int32, [None, None], name = 'targets')
learning_rate = tf.placeholder(tf.float32, name = 'learning_rate')
return inputs, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# lstm = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.75)
lstm = tf.contrib.rnn.MultiRNNCell([lstm] * 1)
lstm_initial_state = lstm.zero_state(batch_size, tf.float32)  # LSTM state tensors are floating point
lstm_initial_state = tf.identity(lstm_initial_state, name = 'initial_state')
return lstm, lstm_initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
word_embeddings = tf.get_variable('embeddings', [vocab_size, embed_dim])
embedded_words = tf.nn.embedding_lookup(word_embeddings, input_data)
return embedded_words
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype = tf.float32)
final_state = tf.identity(final_state, name = 'final_state')
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
embeddings = get_embed(input_data, vocab_size, embed_dim)  # use the embedding size, not the RNN size
output, final_state = build_rnn(cell, embeddings)
output = tf.contrib.layers.fully_connected(output, vocab_size, activation_fn = None)
return output, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
noop_func = lambda x : None
def get_batches(int_text, batch_size, seq_length, func = noop_func):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
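:param func: Optional debugging callback (a no-op by default) invoked with intermediate values while batching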
:return: Batches as a Numpy array
"""
import math
block_size = batch_size * seq_length; func(block_size)
number_of_batches = math.floor(len(int_text) / block_size); func(number_of_batches)
qualified_length = number_of_batches * block_size
int_input = np.array(int_text[:qualified_length]); func(int_input)
int_target = np.concatenate((int_input[1:qualified_length], int_input[0:1])); func(int_target)
int_input = np.reshape(int_input, [batch_size, number_of_batches, seq_length]); func(int_input)
int_target = np.reshape(int_target, [batch_size, number_of_batches, seq_length]); func(int_target)
int_input = np.transpose(int_input, axes = [1, 0, 2]); func(int_input)
int_target = np.transpose(int_target, axes = [1, 0, 2]); func(int_target)
return np.array(list(zip(int_input, int_target)))
# get_batches([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 ], 2, 2, print)
# get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2, print)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = 512
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 128
# Sequence Length
seq_length = 8
# Learning Rate
learning_rate = .001
# Show stats for every n number of batches
show_every_n_batches = 32
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to how often, in batches, the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
input_tensor = loaded_graph.get_tensor_by_name('input:0')
initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
    import random
    num_words = len(probabilities)
    # Rejection sampling: draw a candidate word id uniformly at random and
    # accept it with probability equal to its predicted probability
    while True:
        idx = random.randint(0, num_words - 1)
        if probabilities[idx] < random.random():
            continue
        selected_word = int_to_vocab[idx]
        break
    return selected_word
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
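The cell above uses simple rejection sampling; an equivalent (hedged) alternative is to sample the word id directly with np.random.choice, assuming probabilities is a 1-D array of softmax outputs:
```python
import numpy as np

def pick_word_alt(probabilities, int_to_vocab):
    # Normalize to guard against floating-point drift, then sample a word id
    # with probability given by the network's softmax output
    p = np.asarray(probabilities, dtype=np.float64)
    p = p / p.sum()
    idx = np.random.choice(len(p), p=p)
    return int_to_vocab[idx]
```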
End of explanation
"""
gen_length = 256
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.1/tutorials/LP.ipynb
|
gpl-3.0
|
!pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: 'lp' (Line Profile) Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('lp', times=[0,1,2], wavelengths=np.linspace(549, 551, 101))
print b.filter(kind='lp')
"""
Explanation: Dataset Parameters
Let's create the ParameterSets which would be added to the Bundle when calling add_dataset. Later we'll call add_dataset, which will create and attach both these ParameterSets for us.
components
Line profiles will be computed for each component in which the wavelengths are provided.  If we wanted to expose the line profile for the binary as a whole, we'd set the wavelengths for wavelengths@binary.  If instead we wanted to expose per-star line profiles, we could set the wavelengths for both wavelengths@primary and wavelengths@secondary.
If you're passing wavelengths to the b.add_dataset call, it will default to filling the wavelengths at the system-level. To override this, pass components=['primary', 'secondary'], as well. For example: b.add_dataset('lp', wavelengths=np.linspace(549,551,101), components=['primary', 'secondary']).
times
Line profiles have an extra dimension compared to LC and RV datasets, which have times as their only independent variable.  For that reason, the parameters in the LP dataset are tagged with individual times instead of having a separate times array.  This allows the flux_densities and sigmas to be per-time.  Because of this, times is not a Parameter that can be changed later, but instead must be passed when you call b.add_dataset.  To change the times after that point, you would need to remove and re-add the dataset.
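For example, a rough sketch of swapping in new times (this assumes the dataset was added with the default tag 'lp01'; adjust to your own dataset tag):
```python
b.remove_dataset('lp01')
b.add_dataset('lp', times=[0, 0.5, 1, 1.5, 2],
              wavelengths=np.linspace(549, 551, 101), dataset='lp01')
```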
End of explanation
"""
print b.filter(kind='lp_dep')
"""
Explanation: Here we see that there are three wavelengths Parameters, with the wavelengths@primary being filled with the input array (since we didn't override the components or manually pass a dictionary). Because of this, the flux_densities and sigmas are only visible for the binary component as well. (If we were to fill either of the two other arrays, the corresponding Parameters would become visible as well). We can see, however, that there is an entry for flux_densities and sigmas for each of the times we passed.
In addition, there are some Parameters in the dataset not related directly to observations. These include information about the line profile, as well as passband-dependent parameters.
End of explanation
"""
print b.filter('wavelengths')
print b.get_parameter('wavelengths', component='binary')
"""
Explanation: For information on the passband-dependent parameters, see the section on the lc dataset (these are used only to compute fluxes when rv_method=='flux-weighted')
wavelengths
End of explanation
"""
print b.filter('flux_densities')
print b.get_parameter('flux_densities', time=0)
"""
Explanation: flux_densities
End of explanation
"""
print b.filter('sigmas')
print b.get_parameter('sigmas', time=0)
"""
Explanation: sigmas
End of explanation
"""
print b.get_parameter('profile_func')
"""
Explanation: profile_func
End of explanation
"""
print b.get_parameter('profile_rest')
"""
Explanation: profile_rest
End of explanation
"""
print b.get_parameter('profile_sv')
"""
Explanation: profile_sv
End of explanation
"""
b.run_compute(irrad_method='none')
b['lp@model'].twigs
"""
Explanation: Synthetics
End of explanation
"""
print b.filter('flux_densities', context='model')
print b.get_parameter('flux_densities', context='model', time=0)
"""
Explanation: The model for a line profile dataset will expose flux-densities at each time and for each component where the corresponding wavelengths Parameter was not empty. Here since we used the default and exposed line-profiles for the entire system, we have a single entry per-time.
End of explanation
"""
afig, mplfig = b.filter(dataset='lp01', context='model', time=0).plot(show=True)
"""
Explanation: Plotting
By default, LP datasets plot as 'flux_densities' vs 'wavelengths' for a single time.
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01')
print b['columns'].choices
"""
Explanation: Mesh Fields
Let's add a single mesh and see which columns from the line profile dataset are available to expose as a column in the mesh.
End of explanation
"""
|
letsgoexploring/economicData
|
us-convergence/python/state_income_data.ipynb
|
mit
|
# Import BEA API key or set manually to variable api_key
try:
items = os.getcwd().split('/')[:3]
items.append('bea_api_key.txt')
path = '/'.join(items)
with open(path,'r') as api_key_file:
api_key = api_key_file.readline()
except:
api_key = None
# Dictionary of state abbreviations
stateAbbr = {
u'Alabama':u'AL',
u'Alaska *':u'AK',
u'Arizona':u'AZ',
u'Arkansas':u'AR',
u'California':u'CA',
u'Colorado':u'CO',
u'Connecticut':u'CT',
u'Delaware':u'DE',
u'District of Columbia':u'DC',
u'Florida':u'FL',
u'Georgia':u'GA',
u'Hawaii *':u'HI',
u'Idaho':u'ID',
u'Illinois':u'IL',
u'Indiana':u'IN',
u'Iowa':u'IA',
u'Kansas':u'KS',
u'Kentucky':u'KY',
u'Louisiana':u'LA',
u'Maine':u'ME',
u'Maryland':u'MD',
u'Massachusetts':u'MA',
u'Michigan':u'MI',
u'Minnesota':u'MN',
u'Mississippi':u'MS',
u'Missouri':u'MO',
u'Montana':u'MT',
u'Nebraska':u'NE',
u'Nevada':u'NV',
u'New Hampshire':u'NH',
u'New Jersey':u'NJ',
u'New Mexico':u'NM',
u'New York':u'NY',
u'North Carolina':u'NC',
u'North Dakota':u'ND',
u'Ohio':u'OH',
u'Oklahoma':u'OK',
u'Oregon':u'OR',
u'Pennsylvania':u'PA',
u'Rhode Island':u'RI',
u'South Carolina':u'SC',
u'South Dakota':u'SD',
u'Tennessee':u'TN',
u'Texas':u'TX',
u'Utah':u'UT',
u'Vermont':u'VT',
u'Virginia':u'VA',
u'Washington':u'WA',
u'West Virginia':u'WV',
u'Wisconsin':u'WI',
u'Wyoming':u'WY'
}
# List of states in the US
stateList = [s for s in stateAbbr]
"""
Explanation: State Income Data
Constructs a data set of real income per capita for the continental United States from 1840 to the present.
Nominal income per capita for 1840, 1880, and 1900 was found in Appendix A of "Interregional Differences in Per Capita Income, Population, and Total Income, 1840-1950" by Richard Easterlin in <ins>Trends in the American Economy in the Nineteenth Century</ins> (https://www.nber.org/books-and-chapters/trends-american-economy-nineteenth-century).
The CPI for 1840, 1880, and 1900 was taken from <ins>Bicentennial Edition: Historical Statistics of the United States, Colonial Times to 1970</ins> (https://www.census.gov/library/publications/1975/compendia/hist_stats_colonial-1970.html).
Income data from 1929 are obtained from the BEA.
Preliminaries
End of explanation
"""
# Obtain data from BEA
gdp_deflator = urlopen('http://apps.bea.gov/api/data/?UserID='+api_key+'&method=GetData&datasetname=NIPA&TableName=T10109&TableID=13&Frequency=A&Year=X&ResultFormat=JSON&')
# Parse result
result = gdp_deflator.read().decode('utf-8')
json_response = json.loads(result)
# Import to DataFrame and organize
df = pd.DataFrame(json_response['BEAAPI']['Results']['Data'])
df['DataValue'] = df['DataValue'].astype(float)
df = df.set_index(['LineDescription',pd.to_datetime(df['TimePeriod'])])
df.index.names = ['line description','Year']
# Extract price level data
data_p = df['DataValue'].loc['Gross domestic product']/100
data_p.name = 'price level'
data_p = data_p.sort_index()
data_p
base_year = json_response['BEAAPI']['Results']['Notes'][0]['NoteText'].split('Index numbers, ')[-1].split('=')[0]
with open('../csv/state_income_metadata.csv','w') as newfile:
newfile.write(',Values\n'+'base_year,'+base_year)
"""
Explanation: Deflator data
End of explanation
"""
# Obtain data from BEA
state_y_pc = urlopen('http://apps.bea.gov/api/data/?UserID='+api_key+'&method=GetData&DataSetName=Regional&TableName=SAINC1&LineCode=3&Year=ALL&GeoFips=STATE&ResultFormat=JSON')
# Parse result
result = state_y_pc.read().decode('utf-8')
json_response = json.loads(result)
# Import to DataFrame and organize
df = pd.DataFrame(json_response['BEAAPI']['Results']['Data'])
df.GeoName = df.GeoName.replace(stateAbbr)
df = df.set_index(['GeoName',pd.DatetimeIndex(df['TimePeriod'])])
df.index.names = ['State','Year']
df['DataValue'] = df['DataValue'].replace('(NA)',np.nan)
# Extract income data
data_y = df['DataValue'].str.replace(',','').astype(float)
data_y.name = 'income'
data_y = data_y.unstack('State')
data_y = data_y.sort_index()
data_y = data_y.divide(data_p,axis=0)
data_y
"""
Explanation: Per capita income data
End of explanation
"""
# Import Easterlin's income data
easterlin_data = pd.read_csv('../historic_data/Historical Statistics of the US - Easterlin State Income Data.csv',index_col=0)
# Import historic CPI data
historic_cpi_data=pd.read_csv('../historic_data/Historical Statistics of the US - cpi.csv',index_col=0)
historic_cpi_data = historic_cpi_data/historic_cpi_data.loc[1929]*float(data_p.loc['1929'])
# Construct series for real incomes in 1840, 1880, and 1900
df_1840 = easterlin_data['Income per capita - 1840 - A [cur dollars]']/float(historic_cpi_data.loc[1840])
df_1880 = easterlin_data['Income per capita - 1880 [cur dollars]']/float(historic_cpi_data.loc[1880])
df_1900 = easterlin_data['Income per capita - 1900 [cur dollars]']/float(historic_cpi_data.loc[1900])
# Put into a DataFrame and concatenate with previous data beginning in 1929
df = pd.DataFrame({pd.to_datetime('1840'):df_1840,pd.to_datetime('1880'):df_1880,pd.to_datetime('1900'):df_1900}).transpose()
df = pd.concat([data_y,df]).sort_index()
# Export data to csv
series = df.sort_index()
dropCols = [u'AK', u'HI', u'New England', u'Mideast', u'Great Lakes', u'Plains', u'Southeast', u'Southwest', u'Rocky Mountain', u'Far West']
for c in dropCols:
series = series.drop([c],axis=1)
series.to_csv('../csv/state_income_data.csv',na_rep='NaN')
# Export notebook to .py
runProcs.exportNb('state_income_data')
"""
Explanation: Load Easterlin's data
End of explanation
"""
|
mbuchove/analysis-tools-m
|
GC/flux_lvl_detection_rough.ipynb
|
mit
|
sig = 34. # roughly
time = 6406. / 60. # hours
sens = sig / np.sqrt(time) # sensitivity
t = (5./sens)**2 # time required to find 5 sigma
print(sens)
print(t)
"""
Explanation: Finding rough minimum flux level required to detect
End of explanation
"""
gam_rate = 0.1582244 * 60 # gamma / hour
gam_err = 0.006188022 * 60
#bg_rate =
f = 1.75
print(gam_rate)
print(gam_err)
print(gam_rate*f)
max_exp = 3.0 # hours / night
exposure = 3
assert(exposure <= max_exp)
num_evts = exposure * gam_rate
print(num_evts)
print(np.sqrt(num_evts))
# if flux increased
f = 1.75
print(num_evts*f)
stdev_f = np.sqrt(num_evts*f)
print(stdev_f)
n_sigma = (num_evts*f-num_evts)/stdev_f
print(" a flux level of " + str(f) + " measured for " + str(exposure) + " hours"\
" times the average rate would be " + str(n_sigma) \
+ " stdevs away from the average rate")
"""
Explanation: $$S = s / \sqrt{t}$$
$$t = \left( \frac{s}{S} \right) ^2$$
The Poisson distribution can be approximated as a Gaussian for $n>10$ events
End of explanation
"""
print(np.sqrt(gam_rate*85))
print(gam_err*85)
"""
Explanation: $$ \frac{fN-N}{\sqrt{fN}} = t $$
$$ \left(f-1\right)^2N^2 = t^2fN $$
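As a hedged back-of-the-envelope helper (not part of the original calculation), the quadratic above can be solved for the flux factor $f$ needed to reach a $t$-sigma excess given $N$ expected events at the average rate:
```python
import numpy as np

def required_flux_factor(N, t=5.0):
    # (f - 1)^2 N = t^2 f  =>  N f^2 - (2 N + t^2) f + N = 0
    b = 2 * N + t**2
    return (b + np.sqrt(b**2 - 4 * N**2)) / (2 * N)  # take the root with f > 1

# e.g. for the ~28.5 gammas expected in a 3 hour exposure at the average rate
print(required_flux_factor(0.1582244 * 60 * 3))
```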
End of explanation
"""
|
google/tf-quant-finance
|
tf_quant_finance/examples/jupyter_notebooks/Introduction_to_TensorFlow_Part_2_-_Debugging_and_Control_Flow.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
#@title Upgrade to TensorFlow 2.5+
!pip install --upgrade tensorflow
#@title Install and import Libraries for this colab. RUN ME FIRST!
!pip install matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.summary.writer.writer import FileWriter
%load_ext tensorboard
"""
Explanation: Introduction to TensorFlow Part 2 - Debugging and Control Flow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Introduction_to_TensorFlow_Part_2_-_Debugging_and_Control_Flow.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Introduction_to_TensorFlow_Part_2_-_Debugging_and_Control_Flow.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
End of explanation
"""
def plus_one(x):
print("input has type %s, value %s"%(type(x), x))
output = x + 1.0
print("output has type %s, value %s"%(type(output), output))
return output
# Let us create a graph where `plus_one` is invoked during the graph construction
g = tf.Graph()
with g.as_default():
x = tf.constant([1.0,2.0,3.0])
# Notice that print statements are not called during the graph construction
a = tf.py_function(plus_one, inp = [x], Tout=tf.float32)
with tf.compat.v1.Session(graph=g) as sess:
# During the runtime, input `x` is passed as an EagerTensor to `plus_one`
print(sess.run(a))
"""
Explanation: What this notebook covers
This notebook carries on from part 1, and covers the basics of control flow and debugging in tensorflow:
* various debugging aids
* loops
* conditionals
These features are used in graph mode or inside a tf.function. For simplicity, eager execution is disabled throughout this notebook.
Debugging
tf.py_function
Full documentation
This allows you to wrap a python function as an op. The function can make further calls into tensorflow, allowing e.g. a subset of tensorflow operations to be wrapped up inside the function and inspected using pdb.
There are various restrictions involved with this op, in particular around:
* serializing execution graphs
* executing across distributed machines
Read the documentation for more information. As such, this should be viewed as more of a debugging tool, and its use should be avoided in performance-sensitive code.
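As a minimal sketch of that pdb workflow (the set_trace call is left commented out so the cell still runs non-interactively; debuggable_plus_one is just an illustrative name):
```python
import tensorflow as tf

def debuggable_plus_one(x):
    # import pdb; pdb.set_trace()  # uncomment to inspect the EagerTensor `x` at runtime
    return x + 1.0

y = tf.py_function(debuggable_plus_one, inp=[tf.constant([1.0, 2.0])], Tout=tf.float32)
```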
End of explanation
"""
# Define a TensorFlow function
@tf.function
def print_fn(x):
# Note that `print_trace` is a TensorFlow Op. See the next section for details
print_trace = tf.print(
"`input` has value", x, ", type", type(x), "and shape", tf.shape(x))
# Create some inputs
a = tf.constant([1, 2])
# Call the function
print_fn(a)
"""
Explanation: tf.print
Full Documentation
The tf.print op is another useful debugging tool. It takes any number of tensors and python objects, and prints them to stdout. There are a few optional parameters, to control formatting and where it prints to. See the documentation for details.
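For instance, a small sketch of a few of those optional arguments (output_stream, summarize and sep are documented parameters of tf.print):
```python
import sys
import tensorflow as tf

x = tf.range(100)
# Show only the first/last few elements, change the separator,
# and send the output to stdout instead of the default stderr.
tf.print("x =", x, summarize=3, sep=" | ", output_stream=sys.stdout)
```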
End of explanation
"""
g = tf.Graph()
with g.as_default():
a = tf.constant(1) + tf.constant(1)
print_trace = tf.print("a is set to ", a)
b = a * 2
with tf.compat.v1.Session(graph=g) as sess:
results = sess.run(b)
"""
Explanation: If you're using eager execution mode, that's all you need to know. For deferred execution however there are some significant complications that we'll discuss in the next section.
tf.control_dependencies
Full Documentation
One of the easiest optimisations tensorflow makes when in deferred mode is to eliminate unused ops. So if we run this:
End of explanation
"""
g = tf.Graph()
with g.as_default():
a = tf.constant(1) + tf.constant(1)
print_trace = tf.compat.v1.print("a is set to", a)
b = a * 2
with tf.compat.v1.Session(graph=g) as sess:
results = sess.run((b, print_trace))
"""
Explanation: Then we don't get any output. Nothing depends on print_trace (in fact nothing can depend on it: tf.print doesn't return anything to depend on), so it gets dropped from the graph before execution occurs. If you want print_trace to be evaluated, then you need to ask for it explicitly:
End of explanation
"""
g = tf.Graph()
with g.as_default():
a = tf.constant(1) + tf.constant(1)
print_trace = tf.print("a is set to", a)
hello_world = tf.print("hello world")
with tf.control_dependencies((print_trace, hello_world)):
# print_trace and hello_world will always be evaluated
# before b can be evaluated
b = a * 2
c = a * 3
with tf.compat.v1.Session(graph=g) as sess:
results = sess.run(b)
"""
Explanation: That's fine for our noddy sample above. But it obviously has problems as your graph grows larger or the sess.run method gets further removed from the graph definition. The solution for that is tf.control_dependencies. This signals to tensorflow that the given set of prerequisite ops must be evaluated before a set of dependent ops.
End of explanation
"""
# Nothing gets printed
with tf.compat.v1.Session(graph=g) as sess:
results = sess.run(c)
"""
Explanation: Note that if all of the dependent ops are pruned from the dependency tree and thus not evaluated, then the prerequisites will not be evaluated either: e.g. if we call sess.run(c) in the example above,then print_trace and hello_world won't be evaluated
End of explanation
"""
g = tf.Graph()
with g.as_default():
x = tf.compat.v1.placeholder(tf.float32, shape=[])
with tf.control_dependencies([
tf.debugging.Assert(tf.not_equal(x, 0), ["Invalid value for x:",x])]):
y = 2.0 / x
with tf.compat.v1.Session(graph=g) as sess:
try:
results = sess.run(y, feed_dict={x: 0.0})
except tf.errors.InvalidArgumentError as e:
print('Value of x is zero\nError message:')
print(e.message)
"""
Explanation: tf.debugging.Assert
Full Documentation
In addition to tf.print, the other common use of control_dependencies is tf.debugging.Assert. This op does what you'd expect: checks a boolean condition and aborts execution with an InvalidArgumentError if the condition is not true. Just like tf.print, it is likely to be pruned from the dependency tree and ignored if run in deferred execution mode without control_dependencies.
End of explanation
"""
g = tf.Graph()
with g.as_default():
x = tf.compat.v1.placeholder(tf.float32, shape=[])
with tf.control_dependencies([tf.debugging.assert_none_equal(x, 0.0)]):
y = 2.0 / x
with tf.compat.v1.Session(graph=g) as sess:
try:
results = sess.run(y, feed_dict={x: 0.0})
except tf.errors.InvalidArgumentError as e:
print('Value of x is zero\nError message:')
print(e.message)
"""
Explanation: There are also a bunch of helper methods, such as
* assert_equal
* assert_positive
* assert_rank_at_least
* etc.
to simplify common uses of tf.Assert.
So our sample above could have been written as:
End of explanation
"""
# This won't work
try:
tf.cond(tf.constant(True), tf.constant(1), tf.constant(2))
except TypeError as e:
pass
# You need a callable:
tf.cond(tf.constant(True), lambda: tf.constant(1), lambda: tf.constant(2))
"""
Explanation: Control Flow
tf.cond
Full Documentation
The cond op is the TensorFlow equivalent of if-else. It takes
* a condition, which must resolve down to a single scalar boolean
* true_fn: a python callable to generate one or more tensors that will be evaluated if the condition is true
* false_fn: same as true_fn, but the resulting tensors will only be evaluated if the condition is false
The condition is evaluated, and if its result is true, then the tensors generated by true_fn are evaluated and those generated by false_fn are abandoned (and vice versa if the result is false).
Note that true_fn and false_fn must be python functions (or lambdas), not just tensors:
End of explanation
"""
def dependency_fn():
print ("DEPENDENCY: I'm always evaluated at execution time because I'm a dependency\n")
return tf.constant(2)
dependency = tf.py_function(dependency_fn, inp=[], Tout=tf.int32)
def true_op_fn():
print ("TRUE_OP_FN: I'm evaluated at execution time because condition is True\n")
return 1
def true_fn():
print ("TRUE_FN: I'm evaluated at graph building time")
return tf.py_function(true_op_fn, inp=[], Tout=tf.int32)
def false_op_fn(input):
print ("FALSE_OP_FN: I'm never evaluated because condition isn't False\n")
return 1 + input
def false_fn():
print ("FALSE_FN: I'm evaluated at graph building time")
return tf.py_function(false_op_fn, inp=[dependency], Tout=tf.int32)
def predicate_fn():
print("\n****** Executing the graph")
print("PREDICATE: I'm evaluated at execution time\n")
return tf.constant(True)
@tf.function
def test_fn():
print("****** Building graph")
tf.cond(tf.py_function(predicate_fn, inp=[], Tout=tf.bool),
true_fn, false_fn)
test_fn()
"""
Explanation: The exact order of execution is a little complicated
* true_fn and false_fn are executed just once, when the graph is being built.
* the tensors created by true_fn (if the condition is true) or false_fn (if the condition is false) will be evaluated strictly after the condition has been evaluated
* the tensors created by false_fn (if the condition is true) or true_fn (if the condition is false) will never be evaluated
* any tensors depended on by the tensors generated by true_fn or false_fn will always be evaluated, regardless of what the condition evaluates to
End of explanation
"""
g = tf.Graph()
with g.as_default():
index = tf.constant(1)
accumulator = tf.constant(0)
loop = tf.while_loop(
loop_vars=[index, accumulator],
cond = lambda idx, acc: idx < 4,
body = lambda idx, acc: [idx+1, acc + idx] )
with tf.compat.v1.Session() as sess:
with FileWriter("logs", sess.graph):
results = sess.run(loop)
# Graph visualization
%tensorboard --logdir logs
"""
Explanation: tf.while_loop
Full Documentation
This is one of the more complicated ops in TensorFlow, so we'll take things step by step.
The most important parameter is loop_vars. This is a tuple/list of tensors.
Next up is cond: a python callable that should take the same number of arguments as loop_vars contains and return a single boolean.
The third important parameter is body. This is a python callable that should take the same number of arguments as loop_vars contains, and return a tuple/list of values with the same size as loop_vars, whose members are of the same type/arity/shape as those in loop_vars.
Note that like the true_fn and false_fn parameters discussed in tf.cond above, body and cond are callables that are called once during graph definition.
To a first level approximation, the behaviour is then roughly akin to the following pseudo-code:
python
working_vars = loop_vars
while(cond(*working_vars)):
working_vars = body(*working_vars)
return working_vars
There are optional complications:
* By default, each loop variable must have exactly the same shape/size/arity in all iterations. If you don't want that (e.g. because you want to increase the size in a particular dimension by 1 each iteration), then you can use shape_invariants to loosen the checks (see the sketch after this list).
* maximum_iterations can be used to put an upper bound on the number of times the loop is executed, even if cond still returns true
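As a minimal, hedged sketch of the shape_invariants mechanism, the following loop lets acc grow by one element per iteration:
```python
i0 = tf.constant(0)
acc0 = tf.zeros([1], dtype=tf.int32)
_, acc = tf.while_loop(
    cond=lambda i, acc: i < 5,
    body=lambda i, acc: (i + 1, tf.concat([acc, tf.reshape(i, [1])], axis=0)),
    loop_vars=(i0, acc0),
    shape_invariants=(i0.get_shape(), tf.TensorShape([None])))  # acc may have any length
```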
The documentation contains some warnings about parallel executions, race conditions and variables getting out of sync. To understand these, we need to go beyond the first level approximation above. Consider the following:
```python
index = tf.constant(1)
accumulator = tf.constant(0)
loop = tf.while_loop(
loop_vars=[index, accumulator],
cond = lambda idx, acc: idx < 4,
body = lambda idx, acc: [idx+1, acc + idx] )
```
To a second level approximation, this is equivalent to
```python
index_initial = tf.constant(1)
accumalator_initial = tf.constant(0)
index_iteration_1 = tf.add(index_initial, 1)
index_iteration_2 = tf.add(index_iteration_1, 1)
index_iteration_3 = tf.add(index_iteration_2, 1)
accumulator_iteration_1 = tf.add(
accumalator_initial, index_initial)
accumulator_iteration_2 = tf.add(
accumulator_iteration_1, index_iteration_1)
accumulator_iteration_3 = tf.add(
accumulator_iteration_2, index_iteration_2)
loop = [index_iteration_3, accumulator_iteration_3]
```
(for the full, unapproximated, gory details of the graph, run the code below
End of explanation
"""
# First let us explicitly disable Autograph
@tf.function(autograph=False)
def loop_fn(index, max_iterations):
for index in range(max_iterations):
index += 1
if index == 4:
tf.print('index is equal to 4')
return index
# Create some inputs
index = tf.constant(0)
max_iterations = tf.constant(5)
# Try calling the loop
try:
loop_fn(index, max_iterations)
except TypeError as e:
print(e)
# Autograph is enabled by default
@tf.function
def loop_fn(index, max_iterations):
for index in range(max_iterations):
index += 1
if index == 4:
tf.print('index is equal to 4')
return index
# Note that Autograph sucessfully converted Python code to TF graph
print(loop_fn(index, max_iterations))
"""
Explanation: but our second-level approximation will do for now).
Notice how index_iteration_3 doesn't depend on the accumulator values at all. Thus assuming our accumulator tensors were doing something more complicated than adding two integers, and assuming we were running on hardware with plenty of execution units, then it's possible that index_iteration_3 could be fully calculated in one thread, while accumulator_iteration_1 is still being calculated in another.
It usually doesn't matter, because loop depends on both index_iteration_3 and accumulator_iteration_3, so it and any dependencies can't start their evaluation before all the accumulation steps have completed. But if you're depending on side effects, or clever operations depending on global state that tensorflow is unaware of (e.g. in custom kernels, or py_function ops), then it's something to be aware of. You can use the while_loop's parallel_iterations parameter to restrict the number of iterations that can be calculated in parallel if this does become an issue.
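For example (a sketch; parallel_iterations is a documented argument of tf.while_loop):
```python
loop = tf.while_loop(
    loop_vars=[tf.constant(1), tf.constant(0)],
    cond=lambda idx, acc: idx < 4,
    body=lambda idx, acc: [idx + 1, acc + idx],
    parallel_iterations=1)  # compute iterations strictly one at a time
```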
Autograph to deal with for-loops and if-statements
One can be tempted to use Python for-loops and if-statements and expect TensorFlow to correctly map them to control flow. Autograph is a utility that comes with tf.function and allows you to treat Python loops and conditionals as TensorFlow control flow ops. Autograph is enabled by default and is capable of converting Python logic into TensorFlow graph code. See more details on the official TF page here.
End of explanation
"""
MAX_ITERATIONS = 64
NUM_PIXELS = 512
def GenerateGrid(nX, nY, bottom_left=(-1.0, -1.0), top_right=(1.0, 1.0)):
"""Generates a complex matrix of shape [nX, nY].
Generates an evenly spaced grid of complex numbers spanning the rectangle
between the supplied diagonal points.
Args:
nX: A positive integer. The number of points in the horizontal direction.
nY: A positive integer. The number of points in the vertical direction.
bottom_left: The coordinates of the bottom left corner of the rectangle to
cover.
top_right: The coordinates of the top right corner of the rectangle to
cover.
Returns:
A constant tensor of type complex64 and shape [nX, nY].
"""
x = tf.linspace(bottom_left[0], top_right[0], nX)
y = tf.linspace(bottom_left[1], top_right[1], nY)
real, imag = tf.meshgrid(x, y)
return tf.cast(tf.complex(real, imag), tf.complex128)
c_values = GenerateGrid(NUM_PIXELS, NUM_PIXELS)
initial_Z_values = tf.zeros_like(c_values, dtype=tf.complex128)
initial_diverged_after = tf.ones_like(c_values, dtype=tf.int32) * MAX_ITERATIONS
# You need to put the various values you want to change inside the loop here
loop_vars = ()
# this needs to take the same number of arguments as loop_vars contains and
# return a tuple of equal size with the next iteration's values
def body():
# hint: tf.abs will give the magnitude of a complex value
return ()
# this just needs to take the same number of arguments as loop_vars contains and
# return true (we'll use maximum_iterations to exit the loop)
def cond():
return True
results = tf.while_loop(
loop_vars=loop_vars,
body = body,
cond = cond,
maximum_iterations=MAX_ITERATIONS)
## extract the final value of diverged_after from the tuple
final_diverged_after = results[-1]
plt.matshow(final_diverged_after)
pass
#@title Solution: Mandelbrot set (Double-click to reveal)
MAX_ITERATIONS = 64
NUM_PIXELS = 512
def GenerateGrid(nX, nY, bottom_left=(-1.0, -1.0), top_right=(1.0, 1.0)):
"""Generates a complex matrix of shape [nX, nY].
Generates an evenly spaced grid of complex numbers spanning the rectangle
between the supplied diagonal points.
Args:
nX: A positive integer. The number of points in the horizontal direction.
nY: A positive integer. The number of points in the vertical direction.
bottom_left: The coordinates of the bottom left corner of the rectangle to
cover.
top_right: The coordinates of the top right corner of the rectangle to
cover.
Returns:
A constant tensor of type complex64 and shape [nX, nY].
"""
x = tf.linspace(bottom_left[0], top_right[0], nX)
y = tf.linspace(bottom_left[1], top_right[1], nY)
real, imag = tf.meshgrid(x, y)
return tf.cast(tf.complex(real, imag), tf.complex128)
c_values = GenerateGrid(NUM_PIXELS, NUM_PIXELS)
initial_Z_values = tf.zeros_like(c_values, dtype=tf.complex128)
initial_diverged_after = tf.ones_like(c_values, dtype=tf.int32) * MAX_ITERATIONS
# You need to put the various values you want to change inside the loop here
loop_vars = (0, initial_Z_values, initial_diverged_after)
# this needs to take the same number of arguments as loop_vars contains and
# return a tuple of equal size with the next iteration's values
def body(iteration_count, Z_values, diverged_after):
new_Z_values = Z_values * Z_values + c_values
has_diverged = tf.abs(new_Z_values) > 2.0
new_diverged_after = tf.minimum(diverged_after, tf.where(
has_diverged, iteration_count, MAX_ITERATIONS))
return (iteration_count+1, new_Z_values, new_diverged_after)
# this just needs to take the same number of arguments as loop_vars contains and
# return true (we'll use maximum_iterations to exit the loop)
def cond(iteration_count, Z_values, diverged_after):
return True
results = tf.while_loop(
loop_vars=loop_vars,
body = body,
cond = cond,
maximum_iterations=MAX_ITERATIONS)
## extract the final value of diverged_after from the tuple
final_diverged_after = results[-1]
plt.matshow(final_diverged_after)
plt.show()
"""
Explanation: Exercise: Mandelbrot set
Recall that the Mandelbrot set is defined as the set of values of $c$ in the complex plane such that the recursion:
$z_{n+1} = z_{n}^2 + c$ does not diverge.
It is known that all such values of $c$ lie inside the circle of radius $2$ around the origin.
So what we'll do is
Create a 2-d tensor c_values ranging from -1-1i to 1+1i
Create a matching 2-d tensor Z_values with initial values of 0
Create a third 2-d tensor diverged_after which contains the iteration number that the matching Z_value's absolute value was > 2 (or MAX_ITERATIONS, if it always stayed below 2)
Update the above using a while_loop
display the final values of diverged_after as an image to see the famous shape
End of explanation
"""
|
letsgoexploring/teaching
|
winter2017/econ129/python/Econ129_Class_04.ipynb
|
mit
|
# Define T and g
T = 40
y0 =50
g = 0
# Compute yT using the direct approach and print
# Initialize a 1-dimensional array called y that has T+1 zeros
# Set the initial value of y to equal y0
# Use a for loop to update the values of y one at a time
# Print the final value in the array y
"""
Explanation: Class 4: matplotlib (and a quick Numpy example)
Brief introduction to the matplotlib module.
Preliminary example: Economic growth
A country with GDP in year $t-1$ denoted by $y_{t-1}$ and an annual GDP growth rate of $g$, will have GDP in year $t$ given by the recursive equation:
\begin{align}
y_{t} & = (1+g)y_{t-1}
\end{align}
Given an initial value of $y_0$, we can find $y_t$ for any given $t$ in one of two ways:
1. By iterating on the equation
2. Or by using substitution and deriving:
\begin{align}
y_t & = (1+g)^t y_0
\end{align}
In this example we'll do both.
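A hedged sketch of both approaches (using an assumed growth rate of 2 percent rather than the g = 0 set in the cell above):
```python
import numpy as np

T, y0, g = 40, 50, 0.02            # g = 0.02 is an assumption for illustration
y = np.zeros(T + 1)
y[0] = y0
for t in range(1, T + 1):          # approach 1: iterate the recursion
    y[t] = (1 + g) * y[t - 1]
y_direct = (1 + g)**T * y0         # approach 2: closed-form solution
print(y[-1], y_direct)             # the two agree
```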
End of explanation
"""
# Import matplotlib.pyplot
"""
Explanation: matplotlib
matplotlib is a powerful plotting library for Python. The website for matplotlib is at http://matplotlib.org/. And you can find a bunch of examples at the following two locations: http://matplotlib.org/examples/index.html and http://matplotlib.org/gallery.html.
matplotlib contains a module called pyplot that was written to provide a Matlab-style plotting interface.
End of explanation
"""
# Magic command for the Jupyter Notebook
"""
Explanation: Next, we want to make sure that the plots that we create are displayed in this notebook. To achieve this we have to issue a command to be interpreted by Jupyter -- called a magic command. A magic command is preceded by a % character. Magics are not Python and will create errors if used outside of the Jupyter notebook.
End of explanation
"""
# Import numpy as np
# Create an array of x values from -6 to 6
# Create a variable y equal to the sin of x
# Use the plot function to plot the sine curve
# Add a title and axis labels
"""
Explanation: A quick matplotlib example
Create a plot of the sine function for x values between -6 and 6. Add axis labels and a title.
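One possible sketch (hedged; the exercise is meant to be filled in during class):
```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-6, 6, 200)
y = np.sin(x)
plt.plot(x, y)
plt.title('Sine function')
plt.xlabel('$x$')
plt.ylabel(r'$\sin(x)$')
plt.show()
```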
End of explanation
"""
# Use the help function to see the documentation for plot
"""
Explanation: The plot function
The plot function creates a two-dimensional plot of one variable against another.
End of explanation
"""
# Create an array of x values from -6 to 6
# Create a variable y equal to the x squared
# Use the plot function to plot the line
# Add a title and axis labels
# Add grid
"""
Explanation: Example
Create a plot of $f(x) = x^2$ with $x$ between -2 and 2.
* Set the linewidth to 3 points
* Set the line transparency (alpha) to 0.6
* Set axis labels and title
* Add a grid to the plot
End of explanation
"""
# Create an array of x values from -6 to 6
# Create y variables
# Use the plot function to plot the lines
# Add a title and axis labels
# Set axis limits
# legend
# Add grid
"""
Explanation: Example
Create plots of the functions $f(x) = \log x$ (natural log) and $g(x) = 1/x$ between 0.01 and 5
* Set the limits for the $x$-axis to (0,5)
* Set the limits for the $y$-axis to (-2,5)
* Make the line for $log(x)$ solid blue
* Make the line for $1/x$ dashed magenta
* Set the linewidth of each line to 3 points
* Set the line transparency (alpha) for each line to 0.6
* Set axis labels and title
* Add a legend
* Add a grid to the plot
End of explanation
"""
# Set betas
# Create x values
# create epsilon values from the standard normal distribution
# create y
# plot
# Add a title and axis labels
# Set axis limits
# Add grid
"""
Explanation: Example
Consider the linear regression model:
\begin{align}
y_i = \beta_0 + \beta_1 x_i + \epsilon_i
\end{align}
where $x_i$ is the independent variable, $\epsilon_i$ is a random regression error term, $y_i$ is the dependent variable and $\beta_0$ and $\beta_1$ are constants.
Let's simulate the model
* Set values for $\beta_0$ and $\beta_1$
* Create an array of $x_i$ values from -5 to 5
* Create an array of $\epsilon_i$ values from the standard normal distribution equal in length to the array of $x_i$s
* Create an array of $y_i$s
* Plot y against x with either a circle ('o'), triangle ('^'), or square ('s') marker and transparency (alpha) set to 0.5
* Add axis lables, a title, and a grid to the plot
End of explanation
"""
# Create an array of x values from -6 to 6
# Create y variables
# Use the plot function to plot the lines
# Add a title and axis labels
# Add grid
# legend
"""
Explanation: Example
Create plots of the functions $f(x) = x$, $g(x) = x^2$, and $h(x) = x^3$ for $x$ between -2 and 2
* Use the optional string format argument to format the lines:
- $x$: solid blue line
- $x^2$: dashed green line
- $x^3$: dash-dot magenta line
* Set the linewidth of each line to 3 points
* Set transparency (alpha) for each line to 0.6
* Add a legend to lower right with 3 columns
* Set axis labels and title
* Add a grid to the plot
End of explanation
"""
# Create data
# Create a new figure
# Create axis
# Plot
# Add grid
"""
Explanation: Figures, axes, and subplots
Often we want to create plots with multiple axes or we want to modify the size and shape of the plot areas. To be able to do these things, we need to explicity create a figure and then create the axes within the figure. The best way to see how this works is by example.
Example: A single plot with double width
The default dimensions of a matplotlib figure are 6 inches by 4 inches. As we saw above, this leaves some whitespace on the right side of the figure. Suppose we want to remove that by making the plot area twice as wide.
Plot the sine function on -6 to 6 using a figure with dimensions 12 inches by 4 inches
End of explanation
"""
# Create data
# Create a new figure
# Create axis 1 and plot with title
# Create axis 2 and plot with title
"""
Explanation: In the previous example the figure() function creates a new figure and add_subplot() puts a new axis on the figure. The command fig.add_subplot(1,1,1) means divide the figure fig into a 1 by 1 grid and assign the first component of that grid to the variable ax1.
Example: Two plots side-by-side
Create a new figure with two axes side-by-side and plot the sine function on -6 to 6 on the left axis and the cosine function on -6 to 6 on the right axis.
End of explanation
"""
# Create data
# Create a new figure
# Create axis 1 and plot with title
# Create axis 2 and plot with title
# Create axis 3 and plot with title
# Create axis 4 and plot with title
# Adjust margins
"""
Explanation: Example: Block of four plots
Create a new figure with four axes in a two-by-two grid. Plot the following functions on the interval -2 to 2:
* $y = x$
* $y = x^2$
* $y = x^3$
* $y = x^4$
Leave the figure size at the default (6in. by 4in.) but run the command plt.tight_layout() to adjust the figure's margins after creating your figure, axes, and plots.
End of explanation
"""
# Create data
x = np.arange(-6,6,0.001)
y = np.sin(x)
# Create a new figure, axis, and plot
fig = plt.figure()
ax1 = fig.add_subplot(1,1,1)
ax1.plot(x,y,lw=3,alpha = 0.6)
ax1.grid()
# Save
plt.savefig('fig_econ129_class04_sine.png',dpi=120)
"""
Explanation: Exporting figures to image files
Use the plt.savefig() function to save figures to images.
End of explanation
"""
|
cgpotts/cs224u
|
vsm_03_retrofitting.ipynb
|
apache-2.0
|
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2022"
"""
Explanation: Vector-space models: retrofitting
End of explanation
"""
from collections import defaultdict
from nltk.corpus import wordnet as wn
import numpy as np
import os
import pandas as pd
import retrofitting
from retrofitting import Retrofitter
import utils
data_home = 'data'
"""
Explanation: Contents
Overview
Set-up
The retrofitting model
Examples
Only node 0 has outgoing edges
All nodes connected to all others
As before, but now 2 has no outgoing edges
All nodes connected to all others, but $\alpha = 0$
WordNet
Background on WordNet
WordNet and VSMs
Reproducing the WordNet synonym graph experiment
Other retrofitting models and ideas
Overview
Thus far, all of the information in our word vectors has come solely from co-occurrences patterns in text. This information is often very easy to obtain – though one does need a lot of text – and it is striking how rich the resulting representations can be.
Nonetheless, it seems clear that there is important information that we will miss this way – relationships that just aren't encoded at all in co-occurrences or that get distorted by such patterns.
For example, it is probably straightforward to learn representations that will support the inference that all puppies are dogs (puppy entails dog), but it might be difficult to learn that dog entails mammal because of the unusual way that very broad taxonomic terms like mammal are used in text.
The question then arises: how can we bring structured information – labels – into our representations? If we can do that, then we might get the best of both worlds: the ease of using co-occurrence data and the refinement that comes from using labeled data.
In this notebook, we look at one powerful method for doing this: the retrofitting model of Faruqui et al. 2016. In this model, one learns (or just downloads) distributed representations for nodes in a knowledge graph and then updates those representations to bring connected nodes closer to each other.
This is an incredibly fertile idea; the final section of the notebook reviews some recent extensions, and new ones are likely appearing all the time.
Set-up
End of explanation
"""
import nltk
nltk.download("wordnet")
"""
Explanation: Note: To make full use of this notebook, you will need the NLTK data distribution – or, at the very least, its WordNet files. Anaconda comes with NLTK but not with its data distribution. The following will download WordNet and make it available (if it's not already available):
End of explanation
"""
Q_hat = pd.DataFrame(
[[0.0, 0.0],
[0.0, 0.5],
[0.5, 0.0]],
columns=['x', 'y'])
Q_hat
"""
Explanation: If you decide to download the data to a different directory than the default, then you'll have to set NLTK_DATA in your shell profile. (If that doesn't make sense to you, then we recommend choosing the default download directory!)
The retrofitting model
For an an existing VSM $\widehat{Q}$ of dimension $m \times n$, and a set of edges $E$ (pairs of indices into rows in $\widehat{Q}$), the retrofitting objective is to obtain a new VSM $Q$ (also dimension $m \times n$) according to the following objective:
$$\sum_{i=1}^{m} \left[
\alpha_{i}\|q_{i} - \widehat{q}_{i}\|_{2}^{2}
+
\sum_{j : (i,j) \in E}\beta_{ij}\|q_{i} - q_{j}\|_{2}^{2}
\right]$$
The left term encodes a pressure to stay like the original vector. The right term encodes a pressure to be more like one's neighbors. In minimizing this objective, we should be able to strike a balance between old and new, VSM and graph.
Definitions:
$\|u - v\|_{2}^{2}$ gives the squared euclidean distance from $u$ to $v$.
$\alpha$ and $\beta$ are weights we set by hand, controlling the relative strength of the two pressures. In the paper, they use $\alpha_{i}=1$ and $\beta_{ij} = \frac{1}{|\{j : (i, j) \in E\}|}$, i.e., one over the number of neighbors of node $i$.
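A minimal NumPy sketch of the coordinate update implied by this objective (an illustration only, not the Retrofitter class used below; it assumes $\alpha_i = 1$ and $\beta_{ij} = 1/\mathrm{deg}(i)$):
```python
import numpy as np

def retrofit_once(Q_hat, edges, alpha=1.0):
    # One sweep of updates over a dict mapping row index -> set of neighbor indices
    Q = Q_hat.copy()
    for i, neighbors in edges.items():
        if not neighbors:
            continue
        beta = 1.0 / len(neighbors)
        numerator = alpha * Q_hat[i] + beta * sum(Q[j] for j in neighbors)
        Q[i] = numerator / (alpha + beta * len(neighbors))
    return Q
```
In practice one iterates this sweep a handful of times until the vectors stop moving appreciably.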
Examples
To get a feel for what's happening, it's helpful to visualize the changes that occur in small, easily understood VSMs and graphs. The function retrofitting.plot_retro_path helps with this.
End of explanation
"""
edges_0 = {0: {1, 2}, 1: set(), 2: set()}
_ = retrofitting.plot_retro_path(Q_hat, edges_0)
"""
Explanation: Only node 0 has outgoing edges
End of explanation
"""
edges_all = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
_ = retrofitting.plot_retro_path(Q_hat, edges_all)
"""
Explanation: All nodes connected to all others
End of explanation
"""
edges_isolated = {0: {1, 2}, 1: {0, 2}, 2: set()}
_ = retrofitting.plot_retro_path(Q_hat, edges_isolated)
"""
Explanation: As before, but now 2 has no outgoing edges
End of explanation
"""
_ = retrofitting.plot_retro_path(
Q_hat, edges_all,
retrofitter=Retrofitter(alpha=lambda x: 0))
"""
Explanation: All nodes connected to all others, but $\alpha = 0$
End of explanation
"""
lems = wn.lemmas('crane', pos=None)
for lem in lems:
ss = lem.synset()
print("="*70)
print("Lemma name: {}".format(lem.name()))
print("Lemma Synset: {}".format(ss))
print("Synset definition: {}".format(ss.definition()))
"""
Explanation: WordNet
Faruqui et al. conduct experiments on three knowledge graphs: WordNet, FrameNet, and the Penn Paraphrase Database (PPDB). The repository for their paper includes the graphs that they derived for their experiments.
Here, we'll reproduce just one of the two WordNet experiments they report, in which the graph is formed based on synonymy.
Background on WordNet
WordNet is an incredible, hand-built lexical resource capturing a wealth of information about English words and their inter-relationships. (Here is a collection of WordNets in other languages.) For a detailed overview using NLTK, see this tutorial.
The core concepts:
A lemma is something like our usual notion of word. Lemmas are highly sense-disambiguated. For instance, there are six lemmas that are consistent with the string crane: the bird, the machine, the poets, ...
A synset is a collection of lemmas that are synonymous in the WordNet sense (which is WordNet-specific; words with intuitively different meanings might still be grouped together into synsets.).
WordNet is a graph of relations between lemmas and between synsets, capturing things like hypernymy, antonymy, and many others. For the most part, the relations are defined between nouns; the graph is sparser for other areas of the lexicon.
End of explanation
"""
def get_wordnet_edges():
edges = defaultdict(set)
for ss in wn.all_synsets():
lem_names = {lem.name() for lem in ss.lemmas()}
for lem in lem_names:
edges[lem] |= lem_names
return edges
wn_edges = get_wordnet_edges()
"""
Explanation: WordNet and VSMs
A central challenge of working with WordNet is that one doesn't usually encounter lemmas or synsets in the wild. One probably gets just strings, or maybe strings with part-of-speech tags. Mapping these objects to lemmas is incredibly difficult.
For our experiments with VSMs, we simply collapse together all the senses that a given string can have. This is expedient, of course. It might also be a good choice linguistically: senses are flexible and thus hard to individuate, and we might hope that our vectors can model multiple senses at the same time.
(That said, there is excellent work on creating sense-vectors; see Reisinger and Mooney 2010; Huang et al 2012.)
The following code uses the NLTK WordNet API to create the edge dictionary we need for using the Retrofitter class:
End of explanation
"""
glove_dict = utils.glove2dict(
os.path.join(data_home, 'glove.6B', 'glove.6B.300d.txt'))
"""
Explanation: Reproducing the WordNet synonym graph experiment
For our VSM, let's use the 300d file included in this distribution from the GloVe team, as it is close to or identical to the one used in the paper:
http://nlp.stanford.edu/data/glove.6B.zip
If you download this archive, place it in vsmdata, and unpack it, then the following will load the file into a dictionary for you:
End of explanation
"""
X_glove = pd.DataFrame(glove_dict).T
X_glove.T.shape
"""
Explanation: This is the initial embedding space $\widehat{Q}$:
End of explanation
"""
def convert_edges_to_indices(edges, Q):
lookup = dict(zip(Q.index, range(Q.shape[0])))
index_edges = defaultdict(set)
for start, finish_nodes in edges.items():
s = lookup.get(start)
if s:
f = {lookup[n] for n in finish_nodes if n in lookup}
if f:
index_edges[s] = f
return index_edges
wn_index_edges = convert_edges_to_indices(wn_edges, X_glove)
"""
Explanation: Now we just need to replace all of the strings in edges with indices into X_glove:
End of explanation
"""
wn_retro = Retrofitter(verbose=True)
X_retro = wn_retro.fit(X_glove, wn_index_edges)
"""
Explanation: And now we can retrofit:
End of explanation
"""
# Optionally write `X_retro` to disk for use elsewhere:
#
# X_retro.to_csv(
# os.path.join(data_home, 'glove6B300d-retrofit-wn.csv.gz'),
# compression='gzip')
"""
Explanation: You can now evaluate X_retro using the homework/bake-off notebook hw_wordrelatedness.ipynb!
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
python/pandas_long_to_wide.ipynb
|
mit
|
import pandas as pd
"""
Explanation: Title: Pandas: Long To Wide Format
Slug: pandas_long_to_wide
Summary: Pandas: Long To Wide Format
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
import modules
End of explanation
"""
raw_data = {'patient': [1, 1, 1, 2, 2],
'obs': [1, 2, 3, 1, 2],
'treatment': [0, 1, 0, 1, 0],
'score': [6252, 24243, 2345, 2342, 23525]}
df = pd.DataFrame(raw_data, columns = ['patient', 'obs', 'treatment', 'score'])
df
"""
Explanation: Create "long" dataframe
End of explanation
"""
df.pivot(index='patient', columns='obs', values='score')
"""
Explanation: Make a "wide" dataframe
Now we will create a "wide" dataframe with the rows by patient number, the columns being by observation number, and the cell values being the score values.
End of explanation
"""
|
sot/aca_stats
|
mult_stars_flag_impact.ipynb
|
bsd-3-clause
|
from __future__ import division
import os
import matplotlib.pyplot as plt
from astropy.table import Table
import numpy as np
from Ska.DBI import DBI
%matplotlib inline
# Use development version of chandra_aca which has the new acq stats fit parameters
import sys
import os
sys.path.insert(0, os.path.join(os.environ['HOME'], 'git', 'chandra_aca'))
from chandra_aca import star_probs
"""
Explanation: THIS NOTEBOOK HAS BEEN MOVED
See https://github.com/sot/mult_stars_flag/blob/master/mult_stars_flag_impact.ipynb for the current version. This one is left purely for the redirect for existing links in email.
Impact of disabling multiple stars status flag filtering
Prior to uplink of the image status flag patch and subsequent operational use starting in the FEB0816 loads, if the multiple stars flag was set on the readout prior to acquisition then the star would be rejected in ACA data processing. This would result in a failed acquisition even if the correct star was in fact acquired.
It was previously recognized that disabling the multiple stars status flag would produce a notable improvement in acquisition success rate.  However, now that a new model of acquisition success has been created and a detailed analysis carried out, the improvement turns out to be quite substantial.
This provides cautious optimism of significant relief for ACA-related thermal constraints for the near future.
Note that this assumes that MS-filtering can be disable for guide star tracking as well. A rather complex (and potentially incorrect) analysis has been done which demonstrates that this should not lead to unexpected safing actions (NSM or BSH). The SSAWG and community will need to evaluate to what extent we require that analysis to be independently verified versus accepting the risk of occasional safing actions.
End of explanation
"""
def get_trak_stats(date='2014:180'):
"""
Get relevant info from guide star tracking statistics from Sybase database.
This returns one record per guide star per obsid.
"""
db = DBI(dbi='sybase', server='sybase', user='aca_read')
stats = db.fetchall('SELECT mult_star_samples, n_samples, aoacmag_median, obsid FROM trak_stats_data '
'WHERE kalman_datestart > "{}" '
'AND aoacmag_median is not NULL'
.format(date))
stats = Table(stats)
db.conn.close()
return stats
# Reading data from the database is slow, so cache in a FITS file
filename = 'mult_stars_flag_trak_stats.fits.gz'
if os.path.exists(filename):
stats = Table.read(filename)
else:
stats = get_trak_stats()
stats.write(filename)
# Select only stars in range 9.0 < mag < 11.0
mags = stats['aoacmag_median']
ok = (mags > 9) & (mags < 11)
stats = stats[ok]
mags = mags[ok]
# Compute fraction of samples
stats['frac_ms'] = stats['mult_star_samples'] / stats['n_samples']
# Bin the data using mean aggregation in 0.2 mag bins
stats['mag_bin'] = np.round(mags / 0.2) * 0.2
sg = stats.group_by('mag_bin')
sgm = sg.groups.aggregate(np.mean)
# Make the plot
plt.figure(1, figsize=(8, 5))
plt.clf()
randx = np.random.uniform(-0.05, 0.05, size=len(stats))
plt.plot(mags + randx, stats['frac_ms'], '.', alpha=0.5,
label='MS flag rate per obsid')
plt.plot(sgm['mag_bin'], sgm['frac_ms'], 'r', linewidth=5, alpha=0.7,
label='MS flag rate (0.2 mag bins)')
p_acqs = star_probs.acq_success_prob('2016:001', t_ccd=-15.0, mag=sgm['mag_bin'])
plt.plot(sgm['mag_bin'], 1 - p_acqs, 'g', linewidth=5,
label='Acq fail rate (model 2016:001, T=-15C)')
plt.legend(loc='upper left', fontsize='medium')
plt.xlabel('Magnitude')
plt.title('Acq fail rate compared to MS flag rate')
plt.grid()
plt.tight_layout()
"""
Explanation: Acquisition failure rate and multiple stars flag rate
Here we examine available statistics on the mean rate of the MS flag being set during guide star tracking and compares this to the model prediction of acquisition failure rate.
The time span used is from 2014:180 to the present (around 2016:030 in the original iteration). During that epoch the ACA CCD planning limit was -14 C and temperatures were relatively stable.
End of explanation
"""
star_probs.__file__
star_probs.set_fit_pars(ms_enabled=False)
p_acqs_no_ms = star_probs.acq_success_prob('2016:001', t_ccd=-15.0, mag=sgm['mag_bin'])
plt.figure(1, figsize=(8, 5))
plt.clf()
plt.plot(sgm['mag_bin'], 1 - p_acqs, 'g', linewidth=5,
label='Acq fail rate (model 2016:001, T=-15C)')
plt.plot(sgm['mag_bin'], 1 - p_acqs_no_ms, 'r', linewidth=5,
label='Acq fail rate NO MS (model 2016:001, T=-15C)')
plt.arrow(10.7, 0.4, -0.4, 0.0, head_width=0.05, head_length=0.1, fc='k', ec='k')
plt.legend(loc='upper left', fontsize='medium')
plt.xlabel('Magnitude')
plt.title('Acq fail rate with (green) and without (red) MS-flag filtering')
plt.grid()
plt.tight_layout();
"""
Explanation: Figure 1: the plot above demonstrates that (statistically) most of the acquisition failures below 11th mag are actually due to the multiple stars flag being set. Below about 10.0 mag nearly all failures can be attributed to the MS flag.
Acquisition failure probabilities with and without MS-flag filtering
The SOTA model for acquisition probabilities was re-fit using acquisition
statistics that did a post-facto removal of the MS-flag filtering on board.
It was assumed that if a star were acquired at the correct position (within 5 arcsec)
and did not have ionizing radiation or saturated pixel flags set, then the
OBC would have identified it (aka successful acquisition).
Refitting is done in the fit_sota_model_probit_no_ms Jupyter notebook in this directory.
End of explanation
"""
# Star catalog for obsid 17728
dat_str = """
type agasc_id ra dec mag yag zag notes
BOT 646185704 173.7895 -0.7888 10.571 -1.01622E-02 -1.12012E-02 a3g4
BOT 646190208 174.5589 -1.1497 10.463 -6.67958E-03 3.21581E-03 a3g4
BOT 646190528 173.8661 -1.2423 10.549 -2.67450E-03 -8.30566E-03 a3g4
BOT 646190912 173.9066 -1.5759 9.349 2.88817E-03 -6.44565E-03 a1g1
BOT 646192600 174.4886 -1.0417 10.305 -8.28102E-03 1.63611E-03 a3g3
BOT 646193600 174.1094 -2.0234 10.045 9.83073E-03 -1.41511E-03 a3g4
GUI 646189648 174.8442 -1.9800 10.757 6.52457E-03 1.09910E-02 g5
GUI 646191600 174.9020 -1.7752 10.576 2.81966E-03 1.12644E-02 g5
ACQ 646189528 174.0391 -1.7808 10.536 5.92820E-03 -3.46607E-03 a4
ACQ 646190064 174.3127 -1.3096 10.629 -3.08574E-03 -4.35447E-04 a4
"""
dat = Table.read(dat_str, format='ascii')
dat = dat[dat['type'] != 'GUI']
dat
"""
Explanation: Figure 2 - this plot shows that disabling MS-flag filtering is similar to making the star around 0.4 mags brighter (at a given acq fail rate). This is a significant improvement.
Worst case star catalog obsid 17728 currently in the LTS for around early April
ra, dec, roll = 174.301751, -1.487777, 258.450000
date = 2016:092
maneuver error=30
dither = 8
End of explanation
"""
# MS enabled case
star_probs.set_fit_pars(ms_enabled=True)
t_ccd = star_probs.t_ccd_warm_limit(dat['mag'], min_n_acq=(2, 0.008))[0]
print('CCD temperature must be below {:.2f} C'.format(t_ccd))
"""
Explanation: MS filtering enabled (as per past operations before FEB0816)
End of explanation
"""
# MS disabled case
star_probs.set_fit_pars(ms_enabled=False)
t_ccd = star_probs.t_ccd_warm_limit(dat['mag'], min_n_acq=(2, 0.008))[0]
print('CCD temperature must be below {:.2f} C'.format(t_ccd))
"""
Explanation: MS filtering disabled (as per current operations)
End of explanation
"""
# MS enabled case
star_probs.set_fit_pars(ms_enabled=True)
mags = [10.0, 10.2, 10.2, 9.3, 10.3, 10.0, 10.0, 10.0]
t_ccd = star_probs.t_ccd_warm_limit(mags, min_n_acq=(2, 0.008))[0]
print('CCD temperature must be below {:.2f} C'.format(t_ccd))
# MS enabled case
star_probs.set_fit_pars(ms_enabled=False)
t_ccd = star_probs.t_ccd_warm_limit(mags, min_n_acq=(2, 0.008))[0]
print('CCD temperature must be below {:.2f} C'.format(t_ccd))
"""
Explanation: Takeaway -- acquisition seems quite feasible with MS filtering disabled
IMPORTANT CAVEAT - no statement made about guide star tracking. To do this catalog we would definitely need MS filtering disabled for the whole observation.
Run-of-the-mill synthetic constrained catalog
This represents a more typical case of a catalog that requires a temperature cooler than -14.9 C.
End of explanation
"""
|
HarshaDevulapalli/foundations-homework
|
05/05-Homework-Devulapalli-NYT_graded.ipynb
|
mit
|
import requests
date='2009-05-08' #Replace with 2010-05-09,2009-06-21,2010-06-20
url="https://api.nytimes.com/svc/books/v2/lists/"+date+"/hardcover-fiction.json?&num_results=1&api-key=4182fa9aca904ae18f4a1f6bef2fc7e9"
response=requests.get(url)
data=response.json()
print("The Best Sellers on",date,"are the following:")
print("")
for n in range(0,len(data['results'])):
for title in data['results'][n]['book_details']:
print(n+1,".",title['title'],"by",title['author'])
import requests
date='2010-05-09' #Replace with 2010-05-09,2009-06-21,2010-06-20
url="https://api.nytimes.com/svc/books/v2/lists/"+date+"/hardcover-fiction.json?&num_results=1&api-key=4182fa9aca904ae18f4a1f6bef2fc7e9"
response=requests.get(url)
data=response.json()
print("The Best Sellers on",date,"are the following:")
print("")
for n in range(0,len(data['results'])):
for title in data['results'][n]['book_details']:
print(n+1,".",title['title'],"by",title['author'])
import requests
date='2009-06-21' #Replace with 2010-05-09,2009-06-21,2010-06-20
url="https://api.nytimes.com/svc/books/v2/lists/"+date+"/hardcover-fiction.json?&num_results=1&api-key=4182fa9aca904ae18f4a1f6bef2fc7e9"
response=requests.get(url)
data=response.json()
print("The Best Sellers on",date,"are the following:")
print("")
for n in range(0,len(data['results'])):
for title in data['results'][n]['book_details']:
print(n+1,".",title['title'],"by",title['author'])
import requests
date='2010-06-20' #Replace with 2010-05-09,2009-06-21,2010-06-20
url="https://api.nytimes.com/svc/books/v2/lists/"+date+"/hardcover-fiction.json?&num_results=1&api-key=4182fa9aca904ae18f4a1f6bef2fc7e9"
response=requests.get(url)
data=response.json()
print("The Best Sellers on",date,"are the following:")
print("")
for n in range(0,len(data['results'])):
for title in data['results'][n]['book_details']:
print(n+1,".",title['title'],"by",title['author'])
"""
Explanation: graded = 7/8
1) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
End of explanation
"""
import requests
date='2009-06-06'
url="https://api.nytimes.com/svc/books/v3/lists/overview.json?&api-key=4182fa9aca904ae18f4a1f6bef2fc7e9"
response=requests.get(url)
data=response.json()
print("The different book categories NYT ranked on",date,"are:")
print("")
for n in range(0,len(data['results'])):
print(data['results']['lists'][n]['list_name'])
import requests
date='2015-06-06'
url="https://api.nytimes.com/svc/books/v3/lists/overview.json?&api-key=4182fa9aca904ae18f4a1f6bef2fc7e9"
response=requests.get(url)
data=response.json()
print("The different book categories NYT ranked on",date,"are:")
print("")
for n in range(0,len(data['results'])):
print(data['results']['lists'][n]['list_name'])
#Ta-Stephan: you need to specify a date in the API. Here is how you would do it.
url ="https://api.nytimes.com/svc/books/v3/lists/overview.json?published_date=2009-06-06&api-key=9bddd887c630b8078e017396214a150a:15:61062085"
response = requests.get(url)
data = response.json()
for book_list in data['results']['lists']:
print(book_list['list_name'])
"""
Explanation: 2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?
End of explanation
"""
despot_names = ['Gadafi', 'Gaddafi', 'Kadafi', 'Qaddafi']
for name in despot_names:
despot_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=' + name +'+Libya&api-key=0c3ba2a8848c44eea6a3443a17e57448')
despot_data = despot_response.json()
despot_hits_meta = despot_data['response']['meta']
despot_hit_count = despot_hits_meta['hits']
print("The NYT has referred to the Libyan despot", despot_hit_count, "times using the spelling", name)
"""
Explanation: 3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?
Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy.
End of explanation
"""
import requests
term='hipster'
url='http://api.nytimes.com/svc/search/v2/articlesearch.json?q='+ term +'&begin_date=19950101&end_date=19951231&api-key=0c3ba2a8848c44eea6a3443a17e57448'
response=requests.get(url)
data=response.json()
#print(data.keys())
#print(data['response'].keys())
print("The main headline for the first article in 1995 mentioning the term hipster is:",data['response']['docs'][0]['headline']['main'])
print("The kicker headline for the first article in 1995 mentioning the term hipster is:",data['response']['docs'][0]['headline']['kicker'])
print("")
print("The first paragrah for the first article in 1995 mentioning the term hipster is:",data['response']['docs'][0]['lead_paragraph'])
"""
Explanation: 4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
End of explanation
"""
search_term='gay marriage'
begin_date='19500101'
end_date='19591231'
gay_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q='+search_term+'&begin_date='+begin_date+'&end_date='+end_date+'&api-key=0c3ba2a8848c44eea6a3443a17e57448')
gay_data=gay_response.json()
print(gay_data['response']['meta']['hits'],"is the number of times the term",search_term,",appears between",begin_date,"and",end_date)
search_term='gay marriage'
begin_date='19700101'
end_date='19791231'
gay_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q='+search_term+'&begin_date='+begin_date+'&end_date='+end_date+'&api-key=0c3ba2a8848c44eea6a3443a17e57448')
gay_data=gay_response.json()
print(gay_data['response']['meta']['hits'],"is the number of times the term",search_term,",appears between",begin_date,"and",end_date)
search_term='gay marriage'
begin_date='19800101'
end_date='19891231'
gay_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q='+search_term+'&begin_date='+begin_date+'&end_date='+end_date+'&api-key=0c3ba2a8848c44eea6a3443a17e57448')
gay_data=gay_response.json()
print(gay_data['response']['meta']['hits'],"is the number of times the term",search_term,",appears between",begin_date,"and",end_date)
search_term='gay marriage'
begin_date='19900101'
end_date='19991231'
gay_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q='+search_term+'&begin_date='+begin_date+'&end_date='+end_date+'&api-key=0c3ba2a8848c44eea6a3443a17e57448')
gay_data=gay_response.json()
print(gay_data['response']['meta']['hits'],"is the number of times the term",search_term,",appears between",begin_date,"and",end_date)
search_term='gay marriage'
begin_date='20000101'
end_date='20091231'
gay_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q='+search_term+'&begin_date='+begin_date+'&end_date='+end_date+'&api-key=0c3ba2a8848c44eea6a3443a17e57448')
gay_data=gay_response.json()
print(gay_data['response']['meta']['hits'],"is the number of times the term",search_term,",appears between",begin_date,"and",end_date)
"""
Explanation: 5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1979, 1980-1989, 1990-1999, 2000-2009, and 2010-present?
Tip: You'll want to put quotes around the search term so it isn't just looking for "gay" and "marriage" in the same article.
Tip: Write code to find the number of mentions between Jan 1, 1950 and Dec 31, 1959.
End of explanation
"""
search_term='motorcycles'
motor_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycles&facet_field=section_name&api-key=0c3ba2a8848c44eea6a3443a17e57448')
motor_data=motor_response.json()
print("The section that talks about,",search_term,"the most is",motor_data['response']['facets']['section_name']['terms'][0]['term'])
print("It is mentioned",motor_data['response']['facets']['section_name']['terms'][0]['count'],"times in the section.")
"""
Explanation: 6) What section talks about motorcycles the most? Tip: You'll be using facets
End of explanation
"""
critics_pick_count=0
meh_movie=0
offset_value = 0
movie_list=[]
critical_acclaimed_movies=[]
meh_movies=[]
for page in range(3):
movie_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?publication_date=20160611&api-key=07c67436f1864abc8a144c14adff69c8&'+ str(offset_value))
movie_data=movie_response.json()
n=0
n1=0
movie_list=movie_list+movie_data['results']
for count in movie_data['results']:
if(movie_data['results'][n]['critics_pick']==1):
critics_pick_count=critics_pick_count+1
#print(movie_data['results'][n]['display_title'],".This movie is critically acclaimed")
critical_acclaimed_movies=critical_acclaimed_movies+movie_data['results']
else:
meh_movie=meh_movie+1
#print(movie_data['results'][n]['display_title'],".This movie is meh.")
n=n+1
print("The number of critically picked movies is",critics_pick_count)
print("The number of meh movies is",meh_movie)
"""
Explanation: 7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?
Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.
End of explanation
"""
critics_pick_count=0
meh_movie=0
offset= 0
page=offset+20
reviewers=[]
reviewers1=[]
reviewers2=[]
x=0
import requests
for page in range(40):
movie_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?publication_date=20160611&&offset='+str(offset)+'&api-key=07c67436f1864abc8a144c14adff69c8')
movie_data=movie_response.json()
movie_results=movie_data['results']
byline=movie_results[x]['byline']
reviewers1.insert(x,byline)
offset=offset+20
x=x+1
if(x>19):
x=0
reviewers2.insert(x,byline)
reviewers = reviewers1 + reviewers2
print("The list of all the reviewers are",reviewers)
print("")
print("")
from collections import Counter
most_common,num_most_common = Counter(reviewers).most_common(1)[0]
print("The reviewer who has reviewed the most in the last 40 films is",most_common,"and that person has reviewed",num_most_common,"times")
"""
Explanation: 8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?
End of explanation
"""
|
ghvn7777/ghvn7777.github.io
|
content/fluent_python/14_iter.ipynb
|
apache-2.0
|
import re
import reprlib
RE_WORD = re.compile('\w+')
class Sentence:
def __init__(self, text):
self.text = text
        # findall returns a list of strings with all non-overlapping matches of the regular expression
        self.words = RE_WORD.findall(text)
    def __getitem__(self, index):
        return self.words[index]
    # __len__ completes the sequence protocol, but it is not needed to make the object iterable
    def __len__(self):
        return len(self.words)
    def __repr__(self):
        # reprlib.repr generates abridged string representations of large data structures
return 'Sentence(%s)' % reprlib.repr(self.text)
s = Sentence('"The time has come,", the Walrus said')
s
for word in s:
print(word)
list(s)
s[0], s[-1]
"""
Explanation: All generators are iterators, because generators fully implement the iterator interface. However, an iterator is normally used to pull items out of a collection, whereas a generator produces items "out of thin air." The Fibonacci sequence illustrates the difference: it contains infinitely many numbers, so no collection can hold them all.
In Python 3 generators are used everywhere. Even the built-in range() function now returns a generator-like object instead of a full list; if you really need a list from range(), you must ask for it explicitly (e.g., list(range(100))).
Every collection in Python is iterable. Internally, iterators are used to support:
for loops
building and extending collection types
looping over text files line by line
list, dict and set comprehensions
tuple unpacking
unpacking actual parameters with * in function calls
This chapter covers the following topics:
how the iter(...) built-in is used internally to handle iterable objects
how to implement the classic Iterator pattern in Python
how a generator function works, in detail
how to replace a classic iterator with a generator function or generator expression
how to use the general-purpose generator functions in the standard library
how to combine generators with the yield from statement
a case study: using generators to process a large dataset in a database-conversion utility
why generators and coroutines look alike but are very different and should not be confused
Sentence take #1: a sequence of words
We'll create a class that takes a string of text and lets us iterate over it word by word. The first version implements the sequence protocol; its objects are iterable because all sequences are iterable, as mentioned before, and now we explain the real reason why.
The following class can extract words from a text by index:
End of explanation
"""
from collections import abc
class Foo:
def __iter__(self):
pass
issubclass(Foo, abc.Iterable)
f = Foo()
isinstance(f, abc.Iterable)
"""
Explanation: We all know that sequences are iterable; here is the concrete reason why: the iter function.
Whenever the interpreter needs to iterate over an object x, it automatically calls iter(x).
The built-in iter function does the following:
It checks whether the object implements __iter__, and calls it to obtain an iterator if so.
If __iter__ is not implemented but __getitem__ is, Python creates an iterator that tries to fetch items in order, starting from index 0.
If that also fails, Python raises TypeError, usually saying "C object is not iterable", where C is the class of the target object.
Any Python sequence is iterable because it implements __getitem__. The standard sequences also implement __iter__, and yours should too; the special handling of __getitem__ exists only for backward compatibility and may be dropped in the future.
As mentioned in chapter 11, this is an extreme form of duck typing: an object is considered iterable not only when it implements the special __iter__ method, but also when it implements __getitem__ taking integer keys starting from 0.
In the goose-typing approach, the definition of an iterable is simpler but less flexible: an object is iterable if it implements __iter__. No subclassing or registration is required, because abc.Iterable implements the __subclasshook__ method. Here is an example:
End of explanation
"""
s = 'ABC'
for char in s:
print(char)
"""
Explanation: Note, however, that although the Sentence class defined earlier is iterable, it does not pass an issubclass(Sentence, abc.Iterable) test.
As of Python 3.4, the most accurate way to check whether an object x is iterable is to call iter(x) and handle the TypeError if it is not. This is more accurate than isinstance(x, abc.Iterable), because iter(x) also takes the __getitem__ method into account.
Explicitly checking before iterating is usually unnecessary, because trying to iterate over a non-iterable raises an obvious error. If you need to do more than just propagate the TypeError, use a try/except block instead of an explicit check. The explicit check may make sense when the object is stored to be iterated later, because then it is useful to catch the error early.
Iterables versus iterators
Iterable:
An object from which the iter built-in can obtain an iterator. Objects implementing an __iter__ method that returns an iterator are iterable. Sequences are always iterable, and so are objects implementing a __getitem__ method that takes 0-based indexes.
It is important to be clear about the relationship between the two: Python obtains iterators from iterable objects.
Here is a for loop iterating over a string. The string 'ABC' is the iterable; behind the scenes there is an iterator, but we never see it:
End of explanation
"""
s = 'ABC'
it = iter(s)
while True:
try:
print(next(it))
    except StopIteration:  # this exception signals that the iterator is exhausted
del it
break
"""
Explanation: If we had to emulate it with a while loop, it would look like this:
End of explanation
"""
s3 = Sentence('Pig and Pepper')
it = iter(s3)
it
next(it)
next(it)
next(it)
next(it)
list(it) # 到头后,迭代器没用了
list(s3) # 如果想再次迭代,要重新构建迭代器
"""
Explanation: The standard iterator interface has two methods:
__next__: returns the next available item, raising StopIteration when there are no more items
__iter__: returns self, so the iterator can be used wherever an iterable is expected, for example in a for loop
This interface is formalized in the collections.abc.Iterator abstract base class, which declares the __next__ abstract method and inherits from Iterable, where the __iter__ abstract method is defined.
The __subclasshook__ method of the abc.Iterator ABC simply checks for the presence of the __iter__ and __next__ attributes.
The best way to check whether an object x is an iterator is to call isinstance(x, abc.Iterator). Thanks to Iterator.__subclasshook__, this test works even when the class of x is not a real or virtual subclass of Iterator.
Below you can see how the Sentence class builds an iterator with the iter function and how the next function consumes it:
End of explanation
"""
import re
import reprlib
RE_WORD = re.compile('\w+')
class Sentence:
def __init__(self, text):
self.text = text
self.words = RE_WORD.findall(text)
def __repr__(self):
return 'Sentence(%s)' % reprlib.repr(self.text)
def __iter__(self):
return SentenceIterator(self.words)
class SentenceIterator:
def __init__(self, words):
self.words = words
self.index = 0
def __next__(self):
try:
word = self.words[self.index]
except IndexError:
raise StopIteration
self.index += 1
return word
def __iter__(self):
return self
"""
Explanation: Because the only required methods of an iterator are __next__ and __iter__, there is no way to check whether items remain other than calling next() and catching StopIteration. Nor is there any way to "reset" an iterator: if you want to iterate again, call iter(...) on the iterable that built the iterator in the first place. Passing the iterator itself is useless because, as just noted, Iterator.__iter__ returns the instance itself, so it cannot restore an exhausted iterator.
We can therefore define an iterator as follows: an object that implements a no-argument __next__ method returning the next item in a series, raising StopIteration when there are no more items. Python iterators also implement __iter__, so iterators are themselves iterable. The first version of Sentence was iterable because of the special treatment the built-in iter(...) gives to sequences.
Sentence take #2: a classic iterator
This version implements the classic Iterator design pattern from Design Patterns: Elements of Reusable Object-Oriented Software. Note that this is not idiomatic Python; the later refactoring will explain why. Working through it, however, makes the distinction between an iterable collection and an iterator object explicit.
The class below is iterable because its __iter__ method builds and returns a SentenceIterator instance; that is how the Design Patterns book describes the Iterator pattern.
We do it this way here only to make clear the crucial distinction between an iterable and an iterator, and how the two are related.
End of explanation
"""
import re
import reprlib
RE_WORD = re.compile('\w+')
class Sentence:
def __init__(self, text):
self.text = text
self.words = RE_WORD.findall(text)
def __repr__(self):
return 'Sentence(%s)' % reprlib.repr(self.text)
def __iter__(self):
for word in self.words:
yield word
        # This return is not required: a generator function does not raise StopIteration itself;
        # it simply exits after producing all of its values
return
a = Sentence('hello world')
one = iter(a)
print(next(one))
two = iter(a)
print(next(two))  # the two iterators do not interfere with each other
"""
Explanation: Note that implementing __iter__ in SentenceIterator is not strictly needed for this example to work, but it is the right thing to do: an iterator is supposed to implement both __next__ and __iter__, and doing so makes it pass an issubclass(SentenceIterator, abc.Iterator) check. Had we subclassed abc.Iterator, we would have inherited the concrete abc.Iterator.__iter__ method.
Note also that most of the code in SentenceIterator deals with managing the internal state of the iterator. We will see shortly how to avoid that bookkeeping, but first let's discuss a shortcut that looks reasonable and is in fact wrong.
Making Sentence an iterator: bad idea
A common cause of errors when building iterables and iterators is confusing the two. Remember: an iterable has an __iter__ method that instantiates a new iterator every time it is called; an iterator implements a __next__ method that returns individual items, plus an __iter__ method that returns the iterator itself.
Therefore iterators are iterable, but iterables are not iterators.
It may be tempting to implement __next__ in addition to __iter__ on Sentence, making each instance both an iterable and an iterator over itself. This is a terrible idea and a common anti-pattern.
The Iterator pattern is meant to:
access the contents of an aggregate object without exposing its internal representation
support multiple traversals of aggregate objects
provide a uniform interface for traversing different aggregate structures (that is, support polymorphic iteration)
To "support multiple traversals" it must be possible to obtain several independent iterators from the same iterable instance, each maintaining its own internal state. The proper way to implement the pattern is therefore to build a new, independent iterator on every call to iter(my_iterable), which is why this example needs the SentenceIterator class.
An iterable must never act as an iterator over itself: it must implement __iter__ but not __next__. Iterators, on the other hand, should always be iterable, and their __iter__ should return self.
Sentence take #3: a generator function
A Pythonic implementation of the same functionality replaces the SentenceIterator class with a generator function. Look at the following example first:
End of explanation
"""
def gen_123():
yield 1
yield 2
yield 3
gen_123
gen_123()
for i in gen_123():
print(i)
g = gen_123()
next(g)
next(g)
next(g)
next(g)  # after the generator function body completes, StopIteration is raised
"""
Explanation: In this example the iterator is in fact a generator object, built automatically each time __iter__ is called, because __iter__ here is a generator function.
How a generator function works
Any Python function whose body contains the yield keyword is a generator function: calling it returns a generator object. In other words, a generator function is a generator factory.
The following very simple function illustrates the behaviour of a generator:
End of explanation
"""
def gen_AB():
print('start')
yield 'A'
print('continue')
yield 'B'
print('end')
for c in gen_AB():
print('-->', c)
"""
Explanation: A generator function builds a generator object that wraps the body of the function. When we pass the generator to next(...), execution advances to the next yield in the body, the yielded value is returned, and the function body is suspended at that point. Finally, when the body returns, the enclosing generator object raises StopIteration, in line with the Iterator protocol.
The following example makes the flow through a generator function body more explicit:
End of explanation
"""
import re
import reprlib
RE_WORD = re.compile('\w+')
class Sentence:
def __init__(self, text):
self.text = text
def __repr__(self):
return 'Sentence(%s)' % reprlib.repr(self.text)
def __iter__(self):
for match in RE_WORD.finditer(self.text):
            yield match.group()  # extract the matched text from the MatchObject instance
"""
Explanation: Now we can see what Sentence.__iter__ does: it is a generator function which, when called, builds a generator object that implements the iterator interface, so the SentenceIterator class is no longer needed.
This version of Sentence is much shorter than the previous one, but it is not as lazy as it could be. A lazy implementation postpones producing values as long as possible; that saves memory and may avoid useless processing.
Sentence take #4: a lazy implementation
The Iterator interface was designed with laziness in mind: next(my_iterator) produces one item at a time. Lazy evaluation and eager evaluation are the technical terms from programming-language theory.
Our current Sentence is not lazy, because __init__ eagerly builds a list of all words in the text and binds it to self.words. The entire processed text is kept around, and the list may use as much memory as the text itself (or more, depending on how many non-word characters the text has). Most of that work is wasted if the user iterates over only the first few words.
re.finditer is a lazy version of re.findall: instead of a list it returns a generator that produces re.MatchObject instances on demand. If there are many matches, re.finditer saves a lot of memory. Using it, the previous Sentence becomes lazy: it only produces the next word when it is needed, as shown in the code below:
End of explanation
"""
def gen_AB():
print('start')
yield 'A'
print('continue')
yield 'B'
print('end')
res1 = [x * 3 for x in gen_AB()]
for i in res1:
print('-->', i)
res2 = (x * 3 for x in gen_AB())
res2
for i in res2:
print('-->', i)
"""
Explanation: Generator expressions
Simple generator functions, like the one used in the previous example, can be replaced by generator expressions.
A generator expression can be understood as a lazy version of a list comprehension: it does not eagerly build a list, but returns a generator that lazily produces the items on demand. In other words, if a list comprehension is a factory of lists, a generator expression is a factory of generators.
The following compares a generator expression with a list comprehension:
End of explanation
"""
import re
import reprlib
RE_WORD = re.compile('\w+')
class Sentence:
def __init__(self, text):
self.text = text
def __repr__(self):
return 'Sentence(%s)' % reprlib.repr(self.text)
def __iter__(self):
return (match.group() for match in RE_WORD.finditer(self.text))
"""
Explanation: As you can see, a generator expression produces a generator, so a generator expression can be used to further reduce the code of the Sentence class:
End of explanation
"""
class ArithmeticProgression:
def __init__(self, begin, step, end=None):
self.begin = begin
self.step = step
        self.end = end  # None makes the series unbounded
    def __iter__(self):
        # assign self.begin to result, but first coerce it to the type of the sum begin + step
        # (adding two objects that support addition returns an object of the resulting type)
        result = type(self.begin + self.step)(self.begin)
forever = self.end is None
index = 0
while forever or result < self.end:
yield result
index += 1
result = self.begin + self.step * index
ap = ArithmeticProgression(0, 1, 3)
list(ap)
ap = ArithmeticProgression(1, 5, 3)
list(ap)
ap = ArithmeticProgression(0, 1 / 3, 1)
list(ap)
"""
Explanation: Here a generator expression builds the generator, which is then returned; the net effect is the same: calling __iter__ produces a generator object.
Generator expressions are syntactic sugar: they can always be replaced by generator functions, but sometimes a generator expression is more convenient.
When to use generator expressions
For simple cases a generator expression is fine, because the code can be understood at a glance.
If the generator expression would span several lines, a generator function is preferable for readability.
When a function or constructor takes a single argument, a generator expression passed to it does not need its own enclosing parentheses; the call's parentheses are enough. If there are further arguments after the generator expression, however, it must be wrapped in parentheses, otherwise a SyntaxError is raised (a small illustration appears after this explanation).
Another example: an arithmetic progression generator
End of explanation
"""
def aritprog_gen(begin, step, end=None):
result = type(begin + step)(begin)
forever = end is None
index = 0
while forever or result < end:
yield result
index += 1
result = begin + step * index
"""
Explanation: The class above can be replaced entirely by a generator function:
End of explanation
"""
import itertools
gen = itertools.count(1, .5)
next(gen)
next(gen)
next(gen)
next(gen)
"""
Explanation: That implementation is nice, but remember that the standard library already ships many ready-to-use generators; the version below, built with the itertools module, is even better.
Arithmetic progressions with itertools
itertools provides 19 generator functions that can be combined in interesting ways.
For example, the generator returned by itertools.count produces numbers indefinitely. With no arguments it yields a sequence of integers starting from 0, but we can provide start and step values to get behaviour similar to our aritprog_gen function:
End of explanation
"""
gen = itertools.takewhile(lambda n: n < 3, itertools.count(1, .5))
list(gen)
"""
Explanation: However, itertools.count never stops, so calling list(count()) would try to build a list larger than the available memory.
itertools.takewhile is different: it produces a generator that consumes another generator and stops when a given predicate evaluates to False. The two can therefore be combined:
End of explanation
"""
import itertools
def aritprog_gen(begin, step, end=None):
first = type(begin+step)(begin)
ap_gen = itertools.count(first, step)
if end is not None:
ap_gen = itertools.takewhile(lambda n: n < end, ap_gen)
return ap_gen
"""
Explanation: So the arithmetic progression generator can be written like this:
End of explanation
"""
def vowel(c):
return c.lower() in 'aeiou'
# pass each character of the string to vowel; keep those for which it returns True
list(filter(vowel, 'Aardvark'))
import itertools
# the opposite of the above: keep the characters for which vowel returns False
list(itertools.filterfalse(vowel, 'Aardvark'))
# skip characters while vowel is true, then yield every remaining character with no further checks
list(itertools.dropwhile(vowel, 'Aardvark'))
# yield characters while vowel is true, then stop immediately with no further checks
list(itertools.takewhile(vowel, 'Aardvark'))
# consume two iterables in parallel; yield items from the first whenever the corresponding item in the second is truthy
list(itertools.compress('Aardvark', (1, 0, 1, 1, 0, 1)))
list(itertools.islice('Aardvark', 4))
list(itertools.islice('Aardvark', 4, 7))
list(itertools.islice('Aardvark', 1, 7, 2))
"""
Explanation: Note that aritprog_gen is not a generator function (it has no yield keyword), but it returns a generator, so it is a generator factory, just like a generator function is.
Generator functions in the standard library
The standard library offers many generators, from objects for iterating over text files line by line to the excellent os.walk function. This section focuses on the general-purpose ones: they take arbitrary iterables as arguments and return generators that produce selected, computed or rearranged items.
The first group are the filtering generator functions:
End of explanation
"""
sample = [5, 4, 2, 8, 7, 6, 3, 0, 9, 1]
import itertools
# yield accumulated sums
list(itertools.accumulate(sample))
# if a function is provided, it is applied to the first pair of items, then to that result and the next item, and so on
list(itertools.accumulate(sample, min))
list(itertools.accumulate(sample, max))
import operator
list(itertools.accumulate(sample, operator.mul))  # running product
list(itertools.accumulate(range(1, 11), operator.mul))
list(enumerate('albatroz', 1))  # number the letters, starting from 1
import operator
list(map(operator.mul, range(11), range(11)))
# combine corresponding items from the two iterables; stops when the shortest iterable is exhausted
list(map(operator.mul, range(11), [2, 4, 8]))
list(map(lambda a, b: (a, b), range(11), [2, 4, 8]))
import itertools
# starmap applies func (the first argument) to each item produced by the second argument;
# the input iterable should yield iterable items iit,
# and func is called as func(*iit)
list(itertools.starmap(operator.mul, enumerate('albatroz', 1)))
sample = [5, 4, 2, 8, 7, 6, 3, 0, 9, 1]
# running average
list(itertools.starmap(lambda a, b: b / a,
enumerate(itertools.accumulate(sample), 1)))
"""
Explanation: Next, the mapping generator functions:
End of explanation
"""
# yield all items from the first iterable, then all items from the next, and so on, seamlessly chained
list(itertools.chain('ABC', range(2)))
list(itertools.chain(enumerate('ABC')))
# chain.from_iterable takes each item from the iterable it receives
# and chains them in sequence, provided each item is itself iterable
list(itertools.chain.from_iterable(enumerate('ABC')))
list(zip('ABC', range(5), [10, 20, 30, 40]))  # stops as soon as one of the iterables is exhausted
# runs until the longest iterable is exhausted, padding the shorter ones with None
list(itertools.zip_longest('ABC', range(5)))
list(itertools.zip_longest('ABC', range(5), fillvalue='?'))  # pad with question marks instead
"""
Explanation: Next come the generator functions that merge multiple iterables:
End of explanation
"""
list(itertools.product('ABC', range(2)))
suits = 'spades hearts diamonds clubs'.split()
list(itertools.product('AK', suits))
# with a single iterable, product yields a series of one-item tuples, which is not very useful
list(itertools.product('ABC'))
# repeat=N tells product to consume each input iterable N times
list(itertools.product('ABC', repeat=2))
list(itertools.product(range(2), repeat=3))
rows = itertools.product('AB', range(2), repeat=2)
for row in rows: print(row)
"""
Explanation: The itertools.product generator is a lazy way of computing Cartesian products: it pulls items from the input iterables and combines them into N-item tuples, with the same effect as nested for loops. The repeat keyword argument tells product to consume each input iterable that many times. The following demonstrates how itertools.product is used:
End of explanation
"""
ct = itertools.count()
next(ct)  # we cannot build a list from ct, because ct is endless
next(ct), next(ct), next(ct)
list(itertools.islice(itertools.count(1, .3), 3))
cy = itertools.cycle('ABC')
next(cy)
list(itertools.islice(cy, 7))
rp = itertools.repeat(7)  # yields the given item over and over
next(rp), next(rp)
list(itertools.repeat(8, 4))  # the number 8, four times
list(map(operator.mul, range(11), itertools.repeat(5)))
"""
Explanation: Generator functions that expand each input item into multiple output items:
End of explanation
"""
# all combinations of two items (len() == 2) from 'ABC'
list(itertools.combinations('ABC', 2))
# all two-item combinations, including those with repeated items
list(itertools.combinations_with_replacement('ABC', 2))
# all two-item permutations
list(itertools.permutations('ABC', 2))
list(itertools.product('ABC', repeat=2))
"""
Explanation: The combinations, combinations_with_replacement and permutations generator functions in itertools, together with product, are known as the combinatoric generators. itertools.product is closely related to the other combinatoric functions, as shown below:
End of explanation
"""
# yields 2-tuples of the form (key, group), where key is the grouping criterion
# and group is a generator producing the items in that group
list(itertools.groupby('LLLAAGGG'))
for char, group in itertools.groupby('LLLLAAAGG'):
print(char, '->', list(group))
animals = ['duck', 'eagle', 'rat', 'giraffe', 'bear',
'bat', 'dolphin', 'shark', 'lion']
animals.sort(key=len)
animals
for length, group in itertools.groupby(animals, len):
print(length, '->', list(group))
# use the reversed generator to iterate over animals from right to left
for length, group in itertools.groupby(reversed(animals), len):
    print(length, '->', list(group))
# itertools.tee yields multiple generators, each producing every item of the input
list(itertools.tee('abc'))
g1, g2 = itertools.tee('abc')
next(g1)
next(g2)
next(g2)
list(g1)
list(g2)
list(zip(*itertools.tee('ABC')))
"""
Explanation: Generator functions used to rearrange the items of the input iterables:
End of explanation
"""
def chain(*iterables):  # a hand-written chain; the chain in the standard library is written in C
for it in iterables:
for i in it:
yield i
s = 'ABC'
t = tuple(range(3))
list(chain(s, t))
"""
Explanation: New syntax in Python 3.3: yield from
When a generator function needs to produce the values generated by another generator, the traditional way is a nested for loop:
End of explanation
"""
def chain(*iterables):
for i in iterables:
        yield from i  # the full semantics of yield from are covered in chapter 16
list(chain(s, t))
"""
Explanation: The chain generator function delegates, in turn, to each iterable it receives. For this purpose Python 3.3 introduced new syntax, as follows:
End of explanation
"""
all([1, 2, 3])  # returns True if every element is truthy
all([1, 0, 3])
any([1, 2, 3])  # returns True if any element is truthy
any([1, 0, 3])
any([0, 0, 0])
any([])
g = (n for n in [0, 0.0, 7, 8])
any(g)
next(g)  # any stops consuming the generator as soon as it finds a truthy value
"""
Explanation: Iterable reducing functions
Functions that take an iterable and return a single result are known as reducing functions.
End of explanation
"""
from random import randint
def d6():
return randint(1, 6)
d6_iter = iter(d6, 1)
d6_iter
for roll in d6_iter:
print(roll)
"""
Explanation: There is another built-in that takes an iterable and returns something else: sorted. Unlike reversed, which is a generator function, sorted builds and returns an actual list; after all, it has to read every item in order to sort them, and it returns the items in a sorted list. sorted is mentioned here because it accepts any iterable.
Of course, sorted and the reducing functions only work with iterables that eventually stop; otherwise they would keep collecting items forever and never return a result.
Taking a closer look at the iter function
iter has a little-known second use: called with two arguments, it creates an iterator from a regular function or any other callable object. Used this way, the first argument must be a callable that is invoked repeatedly (with no arguments) to produce values; the second argument is a sentinel, a marker value which, when returned by the callable, causes the iterator to raise StopIteration instead of yielding the sentinel.
The following rolls a six-sided die until a 1 is rolled:
End of explanation
"""
# for line in iter(fp.readline, '\n'):
# process_line(line)
"""
Explanation: The documentation of the iter built-in includes a useful example: reading lines from a file until a blank line is found or the end of file is reached:
End of explanation
"""
def f():
x=0
while True:
x += 1
yield x
"""
Explanation: Generators as coroutines
Python 2.2 introduced generator functions, implemented with the yield keyword; Python 2.5 added extra methods and functionality to generator objects, most notably the .send() method.
Like .__next__(), .send() causes the generator to advance to the next yield statement. But .send() also allows the client using the generator to send data into it: whatever argument is passed to .send() becomes the value of the corresponding yield expression inside the generator function body. In other words, .send() allows two-way data exchange between the client code and the generator, whereas .__next__() only lets the client retrieve data from it. (A small sketch follows below.)
This was such an important "improvement" that it even changed the nature of generators: used this way, they become coroutines. So a reminder is in order:
generators produce data for iteration
coroutines are consumers of data
to keep your brain from exploding, do not mix the two concepts together
coroutines are not related to iteration
note that although yield is used in a coroutine to produce a value, this has nothing to do with iteration
Further reading
Here is a simple generator function example:
End of explanation
"""
def f():
def do_yield(n):
yield n
x = 0
while True:
x += 1
do_yield(x)
"""
Explanation: We cannot abstract away this process with an ordinary function call. The following looks as if it might do so:
End of explanation
"""
def f():
def do_yield(n):
yield n
x = 0
while True:
x += 1
yield from do_yield(x)
"""
Explanation: Calling f() gives an infinite loop, not a generator, because yield only turns its immediately enclosing function into a generator function. Although generator functions look like functions, we cannot delegate to another generator function through a plain function call.
Python's newer yield from syntax allows a generator or coroutine to delegate work to a third party, so nested for loops are no longer needed as a workaround. Adding yield from before the function call "solves" the problem above, as follows:
End of explanation
"""
|
QuantStack/quantstack-talks
|
2019-06-04-deRSE19-widgets/notebooks/5 - Custom.ipynb
|
bsd-3-clause
|
import ipywidgets as widgets
from traitlets import Unicode
class HelloWidget(widgets.DOMWidget):
_view_name = Unicode('HelloView').tag(sync=True)
_view_module = Unicode('hello').tag(sync=True)
"""
Explanation: Custom Jupyter Widgets
The Hello World Example of the Cookie Cutter
The widget framework is built on top of the Comm framework (short for communication), which allows you to send and receive JSON messages to/from the front end (as seen below).
To create a custom widget, you need to define the widget both in the browser and on the kernel side.
Python Kernel
DOMWidget and Widget
DOMWidget: Intended to be displayed in the Jupyter notebook
Widget: A terrible name for a synchronized object. It need not have any visual representation.
_view_name
Inheriting from the DOMWidget does not tell the widget framework what front end widget to associate with your back end widget. Instead, you must tell it yourself by defining a specially named traitlet, _view_name (as seen below).
End of explanation
"""
%%javascript
require.undef('hello');
define('hello', ["@jupyter-widgets/base"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
console.log(this)
this.el.innerText = 'Hello World!';
},
});
return {
HelloView: HelloView
};
});
"""
Explanation: Front end (JavaScript)
Models and Views
Jupyter widgets rely on Backbone.js.
Backbone.js is an MVC (model view controller) framework.
Widgets defined in the back end are automatically synchronized with generic Backbone.js models in the front end. The traitlets are added to the front end instance automatically on first state push. The _view_name trait that you defined earlier is used by the widget framework to create the corresponding Backbone.js view and link that view to the model.
Import the @jupyter-widgets/base module, define the view, and implement the render method
End of explanation
"""
HelloWidget()
"""
Explanation: Test
You should be able to display your widget just like any other widget now.
End of explanation
"""
class HelloWidget(widgets.DOMWidget):
_view_name = Unicode('HelloView').tag(sync=True)
_view_module = Unicode('hello').tag(sync=True)
value = Unicode('Hello World!').tag(sync=True)
%%javascript
require.undef('hello');
define('hello', ["@jupyter-widgets/base"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
render: function() {
this.el.innerText = this.model.get('value');
},
});
return {
HelloView : HelloView
};
});
"""
Explanation: Making the widget stateful
Instead of displaying a static "hello world" message, we can display a string set by the back end.
First you need to add a traitlet in the back end.
(Use the name of value to stay consistent with the rest of the widget framework and to allow your widget to be used with interact.)
End of explanation
"""
%%javascript
require.undef('hello');
define('hello', ["@jupyter-widgets/base"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
render: function() {
this.value_changed();
this.model.on('change:value', this.value_changed, this);
},
value_changed: function() {
this.el.innerText = this.model.get('value');
},
});
return {
HelloView : HelloView
};
});
w = HelloWidget()
w
w.value = 'Hello!'
"""
Explanation: Dynamic updates
Adding and registering a change handler.
End of explanation
"""
from traitlets import CInt
class SpinnerWidget(widgets.DOMWidget):
_view_name = Unicode('SpinnerView').tag(sync=True)
_view_module = Unicode('spinner').tag(sync=True)
value = CInt().tag(sync=True)
%%javascript
requirejs.undef('spinner');
define('spinner', ["@jupyter-widgets/base"], function(widgets) {
var SpinnerView = widgets.DOMWidgetView.extend({
render: function() {
var that = this;
this.$input = $('<input />');
this.$el.append(this.$input);
this.$spinner = this.$input.spinner({
change: function( event, ui ) {
that.handle_spin(that.$spinner.spinner('value'));
},
spin: function( event, ui ) {
//ui.value is the new value of the spinner
that.handle_spin(ui.value);
}
});
this.value_changed();
this.model.on('change:value', this.value_changed, this);
},
value_changed: function() {
this.$spinner.spinner('value', this.model.get('value'));
},
handle_spin: function(value) {
this.model.set('value', value);
this.touch();
},
});
return {
SpinnerView: SpinnerView
};
});
"""
Explanation: An example including bidirectional communication: A Spinner Widget
End of explanation
"""
w = SpinnerWidget(value=5)
w
w.value = 7
"""
Explanation: Test of the spinner widget
End of explanation
"""
from IPython.display import display
w1 = SpinnerWidget(value=0)
w2 = widgets.IntSlider()
display(w1,w2)
from traitlets import link
mylink = link((w1, 'value'), (w2, 'value'))
"""
Explanation: Wiring the spinner with another widget
End of explanation
"""
|
ptpro3/ptpro3.github.io
|
Projects/Project2/Project2_Prashant.ipynb
|
mit
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import Image
import requests
from bs4 import BeautifulSoup
import dateutil.parser
import statsmodels.api as sm
import patsy
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
import sys, sklearn
from sklearn import linear_model, preprocessing
from sklearn import metrics
%matplotlib inline
"""
Explanation: Project: Project 2: Luther
Date: 02/03/2017
Name: Prashant Tatineni
Project Overview
For Project Luther, I gathered the set of all films listed under movie franchises on boxofficemojo.com. My goal was to predict the success of a movie sequel (i.e., domestic gross in USD) based on the performance of other sequels, and especially based on previous films in that particular franchise. I saw some linear correlation between certain variables, like number of theaters, and the total domestic gross, but the predictions from my final model were not entirely reasonable. More time could be spent on better addressing the various outliers in the dataset.
Summary of Solution Steps
Retrieve data from boxofficemojo.com.
Clean up data and reduce to a set of predictor variables, with "Adjusted Gross" as the target for prediction.
Run Linear Regression model.
Review model performance.
End of explanation
"""
url = 'http://www.boxofficemojo.com/franchises/?view=Franchise&sort=nummovies&order=ASC&p=.htm'
response = requests.get(url)
page = response.text
soup = BeautifulSoup(page,"lxml")
tables = soup.find_all("table")
rows = [row for row in tables[3].find_all('tr')]
rows = rows[1:]
# Initialize empty dictionary of movies
movies = {}
for row in rows:
items = row.find_all('td')
franchise = items[0].find('a')['href']
franchiseurl = 'http://www.boxofficemojo.com/franchises/' + franchise[2:]
response = requests.get(franchiseurl)
franchise_page = response.text
franchise_soup = BeautifulSoup(franchise_page,"lxml")
franchise_tables = franchise_soup.find_all("table")
franchise_gross = [row for row in franchise_tables[4].find_all('tr')]
franchise_gross = franchise_gross[1:len(franchise_gross)-2]
franchise_adjgross = [row for row in franchise_tables[5].find_all('tr')]
franchise_adjgross = franchise_adjgross[1:len(franchise_adjgross)-2]
# Assign movieurl as key
# Add title, franchise, inflation-adjusted gross, release date.
for row in franchise_adjgross:
movie_info = row.find_all('td')
movieurl = movie_info[1].find('a')['href']
title = movie_info[1]
adjgross = movie_info[3]
release = movie_info[5]
movies[movieurl] = [title.text]
movies[movieurl].append(franchise)
movies[movieurl].append(adjgross.text)
movies[movieurl].append(release.text)
# Add number of theaters for the above movies
for row in franchise_gross:
movie_info = row.find_all('td')
movieurl = movie_info[1].find('a')['href']
theaters = movie_info[4]
if movieurl in movies.keys():
movies[movieurl].append(theaters.text)
df = pd.DataFrame(movies.values())
df.columns = ['Title','Franchise', 'AdjGross', 'Release', 'Theaters']
df.head()
df.shape
"""
Explanation: Step 1
I started with the "Franchises" list on Boxofficemojo.com. Within each franchise page, I scraped each movie's information and enter it into a Python dictionary. If it's already in the dictionary, the entry will be overwritten, except with a different Franchise name. But note below that the url for "Franchises" list was sorted Ascending, so this conveniently rolls "subfranchises" into their "parent" franchise.
E.g., "Fantastic Beasts" and the "Harry Potter" movies have their own separate Franchises, but they will all be tagged as the "JKRowling" franchise, i.e. "./chart/?id=jkrowling.htm"
Also, because I was comparing sequels to their predecessors, I focused on Domestic Gross, adjusted for ticket price inflation.
End of explanation
"""
# Remove movies that were re-issues, special editions, or separate 3D or IMAX versions.
df['Ignore'] = df['Title'].apply(lambda x: 're-issue' in x.lower() or 're-release' in x.lower() or 'special edition' in x.lower() or '3d)' in x.lower() or 'imax' in x.lower())
df = df[(df.Ignore == False)]
del df['Ignore']
df.shape
# Convert Adjusted Gross to a number
df['AdjGross'] = df['AdjGross'].apply(lambda x: int(x.replace('$','').replace(',','')))
# Convert Date string to dateobject. Need to prepend '19' for dates > 17 because Python treats '/60' as year '2060'
df['Release'] = df['Release'].apply(lambda x: (x[:-2] + '19' + x[-2:]) if int(x[-2:]) > 17 else x)
df['Release'] = df['Release'].apply(lambda x: dateutil.parser.parse(x))
"""
Explanation: Step 2
Clean up data.
End of explanation
"""
df = df.sort_values(['Franchise','Release'])
df['CumGross'] = df.groupby(['Franchise'])['AdjGross'].apply(lambda x: x.cumsum())
df['SeriesNum'] = df.groupby(['Franchise'])['Release'].apply(lambda x: x.rank())
df['PrevAvgGross'] = (df['CumGross'] - df['AdjGross'])/(df['SeriesNum'] - 1)
"""
Explanation: The films need to be grouped by franchise so that franchise-related data can be included as featured for each observation.
- The Average Adjusted Gross of all previous films in the franchise
- The Adjusted Gross of the very first film in the franchise
- The Release Date of the previous film in the franchise
- The Release Date of the very first film in the franchise
- The Series Number of the film in that franchise
-- I considered using the film's number in the franchise as a rank value that could be split into indicator variables, but it's useful as a linear value because the total accrued sum of $ earned by the franchise is a linear combination of "SeriesNum" and "PrevAvgGross"
End of explanation
"""
df.Theaters = df.Theaters.replace('-','0')
df['Theaters'] = df['Theaters'].apply(lambda x: int(x.replace(',','')))
df['PrevRelease'] = df['Release'].shift()
# Create a second dataframe with franchise group-related information.
df_group = pd.DataFrame(df.groupby(['Franchise'])['Title'].apply(lambda x: x.count()))
df_group['FirstGross'] = df.groupby(['Franchise'])['AdjGross'].first()
df_group['FirstRelease'] = df.groupby(['Franchise'])['Release'].first()
df_group['SumTheaters'] = df.groupby(['Franchise'])['Theaters'].apply(lambda x: x.sum())
df_group.columns = ['NumOfFilms','FirstGross','FirstRelease','SumTheaters']
df_group['AvgTheaters'] = df_group['SumTheaters']/df_group['NumOfFilms']
df_group['Franchise'] = df.groupby(['Franchise'])['Franchise'].first()
df = df.merge(df_group, on='Franchise')
df.head()
df['Theaters'] = df.Theaters.replace(0,df.AvgTheaters)
# Drop rows with NaN. Drops all first films, but I've already stored first film information within other features.
df = df.dropna()
df.shape
df['DaysSinceFirstFilm'] = df.Release - df.FirstRelease
df['DaysSinceFirstFilm'] = df['DaysSinceFirstFilm'].apply(lambda x: x.days)
df['DaysSincePrevFilm'] = df.Release - df.PrevRelease
df['DaysSincePrevFilm'] = df['DaysSincePrevFilm'].apply(lambda x: x.days)
df.sort_values('Release',ascending=False).head()
"""
Explanation: Number of Theaters in which the film showed
-- Where this number was unavailable, replaced '-' with 0; the 0 will later be replaced with the mean number of theaters for the other films in the same franchise. I chose the average as a reasonable estimate.
End of explanation
"""
films17 = df.loc[[530,712,676]]
# Grabbing columns for regression model and dropping 2017 films
dfreg = df[['AdjGross','Theaters','SeriesNum','PrevAvgGross','FirstGross','DaysSinceFirstFilm','DaysSincePrevFilm']]
dfreg = dfreg.drop([530,712,676])
dfreg.shape
"""
Explanation: For the regression model, I decided to keep data for films released through 2016, but drop the 3 films released this year; because of their recent release date, their gross earnings will not yet be representative.
End of explanation
"""
dfreg.corr()
sns.pairplot(dfreg);
sns.regplot((dfreg.PrevAvgGross), (dfreg.AdjGross));
sns.regplot(np.log(dfreg.Theaters), np.log(dfreg.AdjGross));
"""
Explanation: Step 3
Apply Linear Regression.
End of explanation
"""
y, X = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=dfreg, return_type="dataframe")
"""
Explanation: In the pairplot we can see that 'AdjGross' may have some correlation with the variables, particularly 'Theaters' and 'PrevAvgGross'. However, it looks like a polynomial model, or natural log / some other transformation will be required before fitting a linear model.
End of explanation
"""
model = sm.OLS(y, X)
fit = model.fit()
fit.summary()
fit.resid.plot(style='o');
"""
Explanation: First try: Initial linear regression model with statsmodels
End of explanation
"""
polyX=PolynomialFeatures(2).fit_transform(X)
polymodel = sm.OLS(y, polyX)
polyfit = polymodel.fit()
polyfit.rsquared
polyfit.resid.plot(style='o');
polyfit.rsquared_adj
"""
Explanation: Try Polynomial Regression
End of explanation
"""
hetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']
hettest = sm.stats.diagnostic.het_breushpagan(fit.resid, fit.model.exog)
zip(hetnames,hettest)
hetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']
hettest = sm.stats.diagnostic.het_breushpagan(polyfit.resid, fit.model.exog)
zip(hetnames,hettest)
"""
Explanation: Heteroskedasticity
The polynomial regression improved the adjusted R-squared and the residual plot, but there are still issues with other statistics, including skew. It's worth running the Breusch-Pagan test:
End of explanation
"""
import scipy.stats   # needed for boxcox below; not imported earlier in this notebook
dfPolyX = pd.DataFrame(polyX)
bcPolyX = pd.DataFrame()
for i in range(dfPolyX.shape[1]):
    bcPolyX[i] = scipy.stats.boxcox(dfPolyX[i])[0]
# Transformed data with Box-Cox:
bcPolyX.head()
# Introduce log(y) for target variable:
y = y.reset_index(drop=True)
logy = np.log(y)
"""
Explanation: Apply Box-Cox Transformation
As seen above, the p-values were very low, suggesting the data does tend towards heteroskedasticity. To improve the data we can apply a Box-Cox transformation.
End of explanation
"""
logPolyModel = sm.OLS(logy, bcPolyX)
logPolyFit = logPolyModel.fit()
logPolyFit.rsquared_adj
"""
Explanation: Try Polynomial Regression again with Log Y and Box-Cox transformed X
End of explanation
"""
X_scaled = preprocessing.scale(bcPolyX)
en_cv = linear_model.ElasticNetCV(cv=10, normalize=False)
en_cv.fit(X_scaled, logy)
en_cv.coef_
logy_en = en_cv.predict(X_scaled)
mse = metrics.mean_squared_error(logy, logy_en)
# The mean square error for this model
mse
plt.scatter([x for x in range(540)],(pd.DataFrame(logy_en)[0] - logy['AdjGross']));
"""
Explanation: Apply Regularization using Elastic Net to optimize this model.
End of explanation
"""
films17
df17 = films17[['AdjGross','Theaters','SeriesNum','PrevAvgGross','FirstGross','DaysSinceFirstFilm','DaysSincePrevFilm']]
y17, X17 = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=df17, return_type="dataframe")
polyX17 = PolynomialFeatures(2).fit_transform(X17)
dfPolyX17 = pd.DataFrame(polyX17)
bcPolyX17 = pd.DataFrame()
for i in range(dfPolyX17.shape[1]):
bcPolyX17[i] = scipy.stats.boxcox(dfPolyX17[i])[0]
X17_scaled = preprocessing.scale(bcPolyX17)
# Run the "en_cv" model from above on the 2017 data:
logy_en_2017 = en_cv.predict(X17_scaled)
# Predicted Adjusted Gross:
pd.DataFrame(np.exp(logy_en_2017))
# Adjusted Gross as of 2/1:
y17
"""
Explanation: Step 4
As seen above, Polynomial Regression with Elastic Net produces a model with several nonzero coefficients for the given features. I decided to try testing this model on the three new sequels for 2017.
End of explanation
"""
|
dstrockis/outlook-autocategories
|
notebooks/1-Exploring ensemble classifier.ipynb
|
apache-2.0
|
# Load data
import pandas as pd
with open('./data_files/8lWZYw-u-yNbGBkC4B--ip77K1oVwwyZTHKLeD7rm7k.csv') as data_file:
df = pd.read_csv(data_file)
df.head()
"""
Explanation: Hypothesis
Training per-folder logistic regression models will be more effective than a single model
End of explanation
"""
# Remove messages without a Subject
print df.shape
df = df.dropna(subset=['Subject'])
print df.shape
# Perform bag of words feature extraction
# TODO: Need to train with fixed vocabulary, otherwise runtime feature construction won't work correctly
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer(stop_words='english', lowercase=True)
train_counts = count_vect.fit_transform(df['Subject'])
print 'Dimensions of vocabulary feature matrix are:'
print train_counts.shape
"""
Explanation: Building an Ensemble Classifier
Do some preprocessing on the text columns (subject, body, maybe to, cc, from)
Clean NaN's or remove rows of data with NaNs
Do stuff the Preprocess Text Azure module does for us (stopwords, etc)
Use scikit learn where possible
Do some feature construction using pandas & scikit learn
On subject, body, to, cc, from, etc
Feature Hashing
TF/IDF
Custom TF/IDF (per-folder)
One-Hot Encoding (get_dummies)
One-Hot Encode FolderId labels into their own boolean columns (1s & 0s)
Split data into training & test sets to be used for all ensemble members
For each folder, train a model on the training data
Probably use logistic regression to start out
Consider decision trees, SVMs, or other classifier models
Use subject, body, to, cc, from, etc as features
Use FolderId boolean column as label (yes/no)
Save each model for making predictions
Construct ensemble classifier from N folder models
For a query message, make N predictions (one per model)
Output probabilities/confidences
Ensemble prediction is the most confident per-folder prediction
Construct a composite probability & output with prediction result
Evaluate performance of model on test data, compare to Azure ML models
Compare to out of the box logistic regression sci kit learn model
Figure out how to persist models in NDB or Cloud Storage for making runtime predictions
Figure out how to perform preprocessing & feature construction at runtime
Construct REST API for serving predictions
Figure out how to deploy this all into production
Constructing Subject Feature Matrix
End of explanation
"""
# Add TF/IDF weighting to account for length of documents
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
train_tfidf = tfidf_transformer.fit_transform(train_counts)
print 'Dimensions of vocabulary feature matrix are:'
print train_tfidf.shape
print 'But, its a sparse matrix: ' + str(type(train_tfidf))
"""
Explanation: Strategies for reducing # of columns in feature matrix (a rough cleanup sketch follows this list)
Add more stop words
Remove email addresses
Remove URLs
Lemmatization
Remove numbers, special characters, sequences of characters like 'aaaaa'
Perform manual tokenization to get column names, and inspect types of cols created
...
End of explanation
"""
# Merge CC, To, From into one People column
df['CcRecipients'].fillna('', inplace=True)
df['ToRecipients'].fillna('', inplace=True)
df['Sender'].fillna('', inplace=True)
df['People'] = df['Sender'] + ';' + df['CcRecipients'] + ';' + df['ToRecipients']
df.head(10)
# Convert People to matrix representation
people_features = df['People'].str.get_dummies(sep=';')
print people_features.shape
people_features.head()
# Will need to store people vocabulary for feature construction during predictions
people_vocabulary = people_features.columns
print people_vocabulary[:2]
print len(people_vocabulary)
# Convert to csr_matrix and hstack with Subject feature matrix
import scipy
sparse_people_features = scipy.sparse.csr_matrix(people_features)
print people_features.shape
print sparse_people_features.shape
print sparse_people_features.shape
print train_tfidf.shape
feature_matrix = scipy.sparse.hstack([sparse_people_features, train_tfidf])
print feature_matrix.shape
# Now lets one-hot encode labels to perform binary classification
label_matrix = pd.get_dummies(df['FolderId'])
print label_matrix.shape
label_matrix.head()
"""
Explanation: Constructing CC, To, and From
End of explanation
"""
# Split into test and training data sets
from sklearn.model_selection import train_test_split
labels_train, labels_test, features_train, features_test, binary_labels_train, binary_labels_test = train_test_split(df['FolderId'], feature_matrix, label_matrix, test_size=0.20, random_state=42)
print labels_train.shape
print labels_test.shape
print features_train.shape
print features_test.shape
print binary_labels_train.shape
print binary_labels_test.shape
# Train a default Logistic Regression model, with no tuning
from sklearn.linear_model import LogisticRegression
default_lgr_model = LogisticRegression().fit(features_train, labels_train)
# Evaluate default Logistic Regression model on test data
default_lgr_predictions = default_lgr_model.predict(features_test)
from sklearn import metrics
print metrics.accuracy_score(labels_test, default_lgr_predictions)
# print np.mean(default_lgr_predictions == labels_test)
print metrics.confusion_matrix(labels_test, default_lgr_predictions)
# metrics.classification_report(labels_test, default_lgr_predictions)
import numpy as np
from sklearn.linear_model import LogisticRegression
class Folder_Ensemble_Classifier:
_folder_models = []
_model_class_labels = []
def fit(self, training_feature_matrix, label_matrix):
self._folder_models = []
self._model_class_labels = []
for folder in label_matrix.columns:
self._folder_models.append(LogisticRegression().fit(training_feature_matrix, label_matrix[folder]))
self._model_class_labels.append(folder)
return self
# TODO: This needs to work on arrays
def predict(self, input_feature_matrix):
model_predictions = []
for model in self._folder_models:
model_predictions.append(model.predict_proba(input_feature_matrix)[:,1])
        # ^That's a list of per-model prediction arrays, i.e. models x samples
# Array of highest probabilites & their labels, samples long
# best_predictions = np.array([np.zeros(input_feature_matrix.shape[0]), np.empty(input_feature_matrix.shape[0], dtype=str)])
best_predictions = pd.Series(np.zeros(input_feature_matrix.shape[0]))
best_predictions_labels = pd.Series(np.empty(input_feature_matrix.shape[0], dtype=str))
# For each sample, find the best model
for i in range(len(model_predictions)):
# # Fails, bool expression reverted to single bool, which can't be passed to np.place
# prediction_bools = model_predictions[i] > best_predictions[0]
# np.place(best_predictions[0], prediction_bools, model_predictions[i])
# np.place(best_predictions[1], prediction_bools, self._model_class_labels[i])
# # Fails, bool expression reverted to single bool which is not a valid index
# best_predictions[0][model_predictions[i] > best_predictions[0]] = model_predictions[i]
# best_predictions[1][model_predictions[i] > best_predictions[0]] = self._model_class_labels[i]
# Using pandas instead of numpy seems to work better
model_vals = pd.Series(model_predictions[i])
best_predictions_labels[model_vals > best_predictions] = self._model_class_labels[i]
best_predictions[model_vals > best_predictions] = model_vals
# TODO: Should I generate a composite/average/relative probability?
d = {'predictions' : best_predictions_labels,
'probabilities' : best_predictions}
return pd.DataFrame(d)
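# A more compact way to pick the winning model per sample (minimal sketch with toy
# numbers, not the notebook's data): stack the per-model probabilities into a
# (samples x models) matrix and take the argmax along axis 1.
import numpy as np
import pandas as pd
toy_probs = np.array([[0.2, 0.7, 0.1],    # rows = samples, columns = folder models
                      [0.5, 0.3, 0.9]])
toy_labels = ['inbox', 'travel', 'receipts']   # hypothetical folder names
best_idx = toy_probs.argmax(axis=1)
pd.DataFrame({'predictions': [toy_labels[i] for i in best_idx],
              'probabilities': toy_probs.max(axis=1)})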
# Train ensemble model
ensemble_clf = Folder_Ensemble_Classifier().fit(features_train, binary_labels_train)
# Make predictions & evaluate
ensemble_lgr_predictions = ensemble_clf.predict(features_test)
ensemble_lgr_predictions.head()
# Evaluate model against test data
from sklearn import metrics
print metrics.accuracy_score(labels_test, ensemble_lgr_predictions['predictions'])
print metrics.confusion_matrix(labels_test, ensemble_lgr_predictions['predictions'])
"""
Explanation: Train two models & compare accuracies
End of explanation
"""
|
jegibbs/phys202-2015-work
|
assignments/assignment05/InteractEx02.ipynb
|
mit
|
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
from IPython.html import widgets
"""
Explanation: Interact Exercise 2
Imports
End of explanation
"""
def plot_sine1(a,b):
x = np.arange(0.0, 12.56, 0.05)
plt.plot(x,np.sin(a*x+b))
plot_sine1(5, 3.4)
plt.box(False)
plt.xlim(0,6.28);
plt.ylim(-1.0,1.0)
plt.xlabel('$X$')
plt.ylabel('$Sin(ax+b)$');
plt.xticks([0,3.14,2*3.14], ['0','$\pi$','$2\pi$']);
"""
Explanation: Plotting with parameters
Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$.
End of explanation
"""
interact(plot_sine1, a=widgets.FloatSlider(min=0.0,max=5.0,step=0.1,value=2.5), b=widgets.FloatSlider(min=-5.0,max=5.0,step=0.1,value=0.0));
assert True # leave this for grading the plot_sine1 exercise
"""
Explanation: Then use interact to create a user interface for exploring your function:
a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.
b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.
End of explanation
"""
def plot_sine2(a, b, style=None):
x = np.arange(0.0, 12.56, 0.05)
    if style is None:
        style = 'b-'
plt.plot(x,np.sin(a*x+b),style)
plot_sine2(4.0, -1.0, 'r--')
"""
Explanation: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:
dashed red: r--
blue circles: bo
dotted black: k.
Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line.
End of explanation
"""
interact(plot_sine2, a=widgets.FloatSlider(min=0.0,max=5.0,step=0.1,value=2.5), b=widgets.FloatSlider(min=-5.0,max=5.0,step=0.1,value=0.0), style=widgets.Dropdown(options={'dotted blue line': 'b.', 'black circles': 'ko', 'red triangles': 'r^'}, value='b.'));
# Dropdown expects its choices through the options argument (a list or a label->value dict), not as positional arguments.
assert True # leave this for grading the plot_sine2 exercise
"""
Explanation: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line, black circles and red triangles.
End of explanation
"""
|
jokedurnez/neuropower_new_ideas
|
peakdistribution/FDRcontrol_with_RFT.ipynb
|
mit
|
% matplotlib inline
import os
import numpy as np
import nibabel as nib
from nipy.labs.utils.simul_multisubject_fmri_dataset import surrogate_3d_dataset
import nipy.algorithms.statistics.rft as rft
from __future__ import print_function, division
import math
import matplotlib.pyplot as plt
import palettable.colorbrewer as cb
from nipype.interfaces import fsl
import pandas as pd
import nipy.algorithms.statistics.intvol as intvol
from matplotlib import colors
import scipy.stats as stats
import statsmodels.sandbox.stats.multicomp as multicomp
"""
Explanation: Does RFT FDR control the nominal FDR value?
In this notebook, I run a small simulation to verify that the RFT peak FDR procedure does not control the overall FDR, but only the conditional FDR (over peaks above the threshold u).
End of explanation
"""
def nulprobdensRFT(exc,peaks):
f0 = exc*np.exp(-exc*(peaks-exc))
return f0
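# Quick sanity check (sketch): the peak density u*exp(-u*(t-u)) defined above
# integrates, from a peak height t upward, to the exceedance probability
# exp(-u*(t-u)), i.e. P(T > t | T > u) for peaks above the threshold u.
from scipy import integrate
u, t = 3.0, 3.5
tail, _ = integrate.quad(lambda s: u*np.exp(-u*(s-u)), t, np.inf)
print("numerical tail: %.6f   analytic exp(-u*(t-u)): %.6f" % (tail, np.exp(-u*(t-u))))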
"""
Explanation: Define the p-value for peaks using RFT
End of explanation
"""
thres = [0.01,0.02,0.03,0.04,0.05]
res = {}
means = []
exc = 3
for alphval in thres:
print(alphval)
hatFDR = []
for k in range(5000):
smooth_FWHM = 3
smooth_sd = smooth_FWHM/(2*math.sqrt(2*math.log(2)))
data = surrogate_3d_dataset(n_subj=1,sk=smooth_sd,shape=(50,50,50),noise_level=1)
data[0:25,0:25,0:25] = data[0:25,0:25,0:25]+2.5
img=nib.Nifti1Image(data,np.eye(4))
img.to_filename("files/RF.nii.gz")
cl=fsl.model.Cluster()
cl.inputs.threshold = exc
cl.inputs.in_file="files/RF.nii.gz"
cl.inputs.out_localmax_txt_file="files/locmax.txt"
cl.inputs.num_maxima=1000000
cl.inputs.connectivity=26
cl.inputs.terminal_output='none'
cl.run()
peaks = pd.read_csv("files/locmax.txt",sep="\t").drop('Unnamed: 5',1)
peaks.Value = peaks.Value
peaks['Pvals'] = nulprobdensRFT(exc,peaks.Value)
psorted = np.sort(peaks.Pvals)
qsorted = (np.array(range(len(psorted)))/len(psorted))*alphval
pr = [1 if a<b else 0 for a,b in zip(psorted,qsorted)]
if np.sum(pr)==0:
peaks['Significant'] = 0
FDRh = 0
else:
pr = [x for x,val in enumerate(pr) if val == True]
pthres = max(qsorted[pr])
peaks['Significant'] = peaks.Pvals<pthres
truth = []
for i in range(len(peaks)):
peak_act = peaks.x[i] in range(25) and peaks.y[i] in range(25) and peaks.z[i] in range(25)
truth.append(peak_act)
peaks['Truth'] = truth
FP = np.sum([a and not b for a,b in zip(peaks.Significant == 1,peaks.Truth)])
TP = np.sum([a and b for a,b in zip(peaks.Significant == 1,peaks.Truth)])
FDRh = FP/(TP+FP)
hatFDR.append(FDRh)
res[alphval] = hatFDR
mn = np.nanmean(res[alphval])
print(mn)
means.append(mn)
"""
Explanation: Loop over simulations for different significance thresholds to see overall FDR
End of explanation
"""
plt.figure(figsize=(6,4))
plt.imshow(data[0:50,0:50,1])
plt.colorbar()
plt.show()
"""
Explanation: Plot a slice of the random field: 1/8 of the field contains signal
End of explanation
"""
means = []
for alpha in thres:
means.append(np.nanmean(res[alpha]))
cols = cb.qualitative.Set2_8.mpl_colors
plt.figure(figsize=(6,4))
plt.plot(np.arange(0.01,0.05,0.001),np.arange(0.01,0.05,0.001),color='grey')
plt.plot(thres,means,linewidth=3,color=cols[0])
plt.xlabel("nominal FDR")
plt.ylabel("observed FDR")
plt.xlim(0.01,0.05)
plt.ylim(0.01,0.05)
"""
Explanation: Plot results
End of explanation
"""
|
ML4DS/ML4all
|
TM3.Topic_Models_with_MLlib/ExB3_TopicModels/TM_Exam_Solution.ipynb
|
mit
|
%matplotlib inline
import nltk
import time
import matplotlib.pyplot as plt
import pylab
# import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
#from test_helper import Test
import collections
from pyspark.mllib.clustering import LDA, LDAModel
from pyspark.mllib.linalg import Vectors
# import gensim
# import numpy as np
"""
Explanation: Master Telefónica Big Data & Analytics
Evaluation Test for Topic 4:
Topic Modelling.
Date: 2016/04/10
To take this test you need the virtual machine updated with the most recent version of MLlib.
To update it, follow the steps indicated below:
Steps to update MLlib:
Enter the vm as root:
vagrant ssh
sudo bash
Go to /usr/local/bin
Download the latest version of Spark from inside the vm with
wget http://www-eu.apache.org/dist/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.6.tgz
Unpack it:
tar xvf spark-1.6.1-bin-hadoop2.6.tgz (and delete the tgz)
The following is a patch, but enough to make it work:
Keep a copy of spark-1.3: mv spark-1.3.1-bin-hadoop2.6/ spark-1.3.1-bin-hadoop2.6_old
Create a link to spark-1.6: ln -s spark-1.6.1-bin-hadoop2.6/ spark-1.3.1-bin-hadoop2.6
Libraries
You can use this space to import all the libraries you need for the exam.
End of explanation
"""
#nltk.download()
mycorpus = nltk.corpus.reuters
"""
Explanation: 0. Acquiring a corpus.
Download the contents of the nltk reuters corpus:
import nltk
nltk.download()
Select the reuters identifier.
End of explanation
"""
n_docs = 500000
filenames = mycorpus.fileids()
fn_train = [f for f in filenames if f[0:5]=='train']
corpus_text = [mycorpus.raw(f) for f in fn_train]
# Reduced dataset:
n_docs = min(n_docs, len(corpus_text))
corpus_text = [corpus_text[n] for n in range(n_docs)]
print 'Loaded {0} files'.format(len(corpus_text))
"""
Explanation: To avoid memory-overload or processing-time problems, you can reduce the size of the corpus by modifying the value of the variable n_docs below.
End of explanation
"""
corpusRDD = sc.parallelize(corpus_text, 4)
print "\nRDD created with {0} elements".format(corpusRDD.count())
"""
Explanation: Next we will load the data into an RDD
End of explanation
"""
def getTokenList(doc, stopwords_en):
# scode: tokens = <FILL IN> # Tokenize docs
tokens = word_tokenize(doc.decode('utf-8'))
# scode: tokens = <FILL IN> # Remove non-alphanumeric tokens and normalize to lowercase
tokens = [t.lower() for t in tokens if t.isalnum()]
# scode: tokens = <FILL IN> # Remove stopwords
tokens = [t for t in tokens if t not in stopwords_en]
return tokens
stopwords_en = stopwords.words('english')
corpus_tokensRDD = (corpusRDD
.map(lambda x: getTokenList(x, stopwords_en))
.cache())
# print "\n Let's check tokens after cleaning:"
print corpus_tokensRDD.take(1)[0][0:30]
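# Tiny illustration (sketch) of the same preprocessing chain on one toy sentence,
# reusing getTokenList and stopwords_en defined above: tokens are lowercased, and
# non-alphanumeric tokens and stopwords are dropped.
print getTokenList("The U.S. economy grew 3 pct in 1987, analysts said.", stopwords_en)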
"""
Explanation: 1. Exercises
Exercise 1: Data preprocessing.
Prepare the data for a topic modelling algorithm in pyspark. To do so, apply the following steps:
Tokenization: convert each text to utf-8 and transform the string into a list of tokens.
Homogenization: convert all words to lowercase and remove all non-alphanumeric tokens.
Cleaning: remove all stopwords, using the English stopword list available in NLTK.
Save the result in the variable corpus_tokensRDD
End of explanation
"""
# Select stemmer.
stemmer = nltk.stem.SnowballStemmer('english')
# scode: corpus_stemRDD = <FILL IN>
corpus_stemRDD = corpus_tokensRDD.map(lambda x: [stemmer.stem(token) for token in x])
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_stemRDD.take(1)[0][0:30]
"""
Explanation: Exercise 2: Stemming
Apply a stemming procedure to the corpus using NLTK's SnowballStemmer. Save the result in corpus_stemRDD.
End of explanation
"""
# corpus_wcRDD = <FILL IN>
corpus_wcRDD = (corpus_stemRDD
.map(collections.Counter)
.map(lambda x: [(t, x[t]) for t in x]))
print corpus_wcRDD.take(1)[0][0:20]
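# What collections.Counter does to a single toy document (sketch):
print collections.Counter(['oil', 'price', 'oil', 'opec']).items()
# -> [('price', 1), ('oil', 2), ('opec', 1)]   (tuple order may vary)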
"""
Explanation: Exercise 3: Vectorization
At this point each document in the corpus is a list of tokens.
Compute a new RDD that contains, for each document, a list of tuples. The key of each tuple will be a token and its value the number of occurrences of that token in the document.
Print a sample of 20 tuples from one of the documents in the corpus.
End of explanation
"""
# scode: wcRDD = < FILL IN >
wcRDD = (corpus_wcRDD
.flatMap(lambda x: x)
.reduceByKey(lambda x, y: x + y))
wcDict = dict(wcRDD.collect())
print wcDict['interpret']
"""
Explanation: Exercise 4: Computing the token dictionary
Using corpus_wcRDD, build a new dictionary with all the tokens in the corpus. The result will be a Python dictionary named wcDict, whose keys are the tokens and whose values are the number of occurrences of each token in the whole corpus.
wcDict = {token1: value1, token2: value2, ...}
Print the number of occurrences of the token interpret
End of explanation
"""
print wcRDD.count()
"""
Explanation: Exercise 5: Number of tokens.
Determine the total number of tokens in the dictionary. Print the result.
End of explanation
"""
print wcRDD.takeOrdered(5, lambda x: -x[1])
"""
Explanation: Exercise 6: Overly frequent terms:
Determine the 5 most frequent tokens in the corpus. Print the result.
End of explanation
"""
tokenmasf = 'said'
ndocs = corpus_stemRDD.filter(lambda x: tokenmasf in x).count()
print 'The number of documents is {0}, i.e. {1} % of all documents'.format(
ndocs, float(ndocs)/corpus_stemRDD.count()*100)
"""
Explanation: Exercise 7: Number of documents containing the most frequent token.
Determine the percentage of documents in which the most frequent token appears.
End of explanation
"""
corpus_wcRDD2 = corpus_wcRDD.map(lambda x: [tupla for tupla in x if tupla[0]
not in ['said', 'mln']])
print corpus_wcRDD2.take(1)
"""
Explanation: Exercise 8: Term filtering.
Remove the two most frequent terms from the corpus. Save the result in a new RDD named corpus_wcRDD2, with the same structure as corpus_wcRDD (that is, each document is a list of tuples).
End of explanation
"""
# scode: wcRDD = < FILL IN >
wcRDD2 = (corpus_wcRDD2
.flatMap(lambda x: x)
.reduceByKey(lambda x, y: x + y)
.sortBy(lambda x: -x[1]))
# Token Dictionary:
n_tokens = wcRDD2.count()
TD = wcRDD2.takeOrdered(n_tokens, lambda x: -x[1])
D = map(lambda x: x[0], TD)
token_count = map(lambda x: x[1], TD)
# Compute inverse dictionary
invD = dict(zip(D, xrange(n_tokens)))
print invD
"""
Explanation: Exercise 9: Token list and inverse dictionary.
Determine the list of tokens of the whole corpus and build an inverse dictionary whose keys are the consecutive integers from 0 up to the total number of tokens and whose values are the corresponding tokens, i.e.
invD = {0: token0, 1: token1, 2: token2, ...}
End of explanation
"""
# Compute RDD replacing tokens by token_ids
corpus_sparseRDD = corpus_wcRDD2.map(lambda x: [(invD[t[0]], t[1]) for t in x])
# Convert list of tuplas into Vectors.sparse object.
corpus_sparseRDD = corpus_sparseRDD.map(lambda x: Vectors.sparse(n_tokens, x))
corpus4lda = corpus_sparseRDD.zipWithIndex().map(lambda x: [x[1], x[0]]).cache()
print corpus4lda.take(1)
"""
Explanation: Exercise 10: LDA algorithm.
To apply the LDA algorithm, the (token, value) tuples in wcRDD must be replaced by tuples of the form (token_id, value), substituting each token with an integer identifier.
The following code takes care of this process:
End of explanation
"""
print "Training LDA: this might take a while..."
start = time.time()
n_topics = 4
ldaModel = LDA.train(corpus4lda, k=n_topics, topicConcentration=2.0, docConcentration=3.0)
print "Modelo LDA entrenado en: {0} segundos".format(time.time()-start)
"""
Explanation: Apply the LDA algorithm with 4 topics to the corpus obtained in corpus4lda, with topicConcentration = 2.0 and docConcentration = 3.0. (Note that these input parameters must be of type float.)
End of explanation
"""
n_topics = 4
ldatopics = ldaModel.describeTopics(maxTermsPerTopic=2)
ldatopicnames = map(lambda x: x[0], ldatopics)
print ldatopicnames
for i in range(n_topics):
print "Topic {0}: {1}, {2}".format(i, D[ldatopicnames[i][0]], D[ldatopicnames[i][1]])
print ldatopics
"""
Explanation: Exercise 11: Top tokens.
Print the two highest-weight tokens of each topic. (You must print the text of the token, not its index.)
End of explanation
"""
# Output topics. Each is a distribution over words (matching word count vectors)
iBank = invD['bank']
topicMatrix = ldaModel.topicsMatrix()
print topicMatrix[iBank]
"""
Explanation: Exercise 12: Weights of a token.
Print the weight of the token bank in each topic.
End of explanation
"""
VVVF
"""
Explanation: Test 13: Indicate which of the following statements can be asserted to be true:
In LSI, each document is assigned to a single topic.
According to the LDA model, all the tokens of a document have been generated by the same topic.
LSI decomposes the input data matrix into the product of 3 square matrices.
If the rank of the input matrix of an LSI model equals the number of topics, the SVD decomposition of the LSI model is exact (it is not an approximation).
FFFV (F = false, V = true)
Test 14: Indicate which of the following statements can be asserted to be true:
In an LDA model, the Dirichlet distribution is used to generate probability distributions over tokens.
If a word appears in few documents of the corpus, its IDF is higher.
The result of lemmatizing a word is a word.
The result of stemming a word is a word.
End of explanation
"""
|
rvperry/phys202-2015-work
|
assignments/assignment04/TheoryAndPracticeEx01.ipynb
|
mit
|
from IPython.display import Image
"""
Explanation: Theory and Practice of Visualization Exercise 1
Imports
End of explanation
"""
# Add your filename and uncomment the following line:
Image(filename='TaP1.png')
"""
Explanation: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook.
End of explanation
"""
|
kaslusimoes/MurphyProbabilisticML
|
chapters/Chapter 1.ipynb
|
mit
|
%run ../src/LinearRegression.py
%run ../src/PolynomialFeatures.py
# LINEAR REGRESSION
# Generate random data
X = np.linspace(0,20,10)[:,np.newaxis]
y = 0.1*(X**2) + np.random.normal(0,2,10)[:,np.newaxis] + 20
# Fit model to data
lr = LinearRegression()
lr.fit(X,y)
# Predict new data
x_test = np.array([0,20])[:,np.newaxis]
y_predict = lr.predict(x_test)
# POLYNOMIAL REGRESSION
# Fit model to data
poly = PolynomialFeatures(2)
lr = LinearRegression()
lr.fit(poly.fit_transform(X),y)
# Predict new data
x_pol = np.linspace(0, 20, 100)[:, np.newaxis]
y_pol = lr.predict(poly.fit_transform(x_pol))
# Plot data
fig = plt.figure(figsize=(14, 6))
# Plot linear regression
ax1 = fig.add_subplot(1, 2, 1)
plt.scatter(X,y)
plt.plot(x_test, y_predict, "r")
plt.xlim(0, 20)
plt.ylim(0, 50)
# Plot polynomial regression
ax2 = fig.add_subplot(1, 2, 2)
plt.scatter(X,y)
plt.plot(x_pol, y_pol, "r")
plt.xlim(0, 20)
plt.ylim(0, 50);
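# Tiny numeric illustration (sketch) of the MAP decision rule discussed in this
# chapter: given hypothetical class posteriors p(y=c | x, D) for one test point,
# predict the class with the largest posterior probability.
posteriors = {'cat': 0.2, 'dog': 0.7, 'bird': 0.1}
print(max(posteriors, key=posteriors.get))   # -> 'dog'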
"""
Explanation: Chapter 1 - Introduction to Machine Learning
This chapter introduces some common concepts about learning (such as supervised and unsupervised learning) and some simple applications.
Supervised Learning
Classification (labels)
Regression (real)
We learn when we have a dataset of points together with their true response variables. If we use a probabilistic approach to this kind of inference, we want to find the probability distribution of the response $y$ given the training dataset $\mathcal{D}$ and a new point $x$ outside of it.
$$p(y\ |\ x, \mathcal{D})$$
A good guess $\hat{y}$ for $y$ is the Maximum a Posteriori estimator:
$$\hat{y} = \underset{c}{\mathrm{argmax}}\ p(y = c\ |\ x, \mathcal{D})$$
Unsupervised Learning
Clustering
Dimensionality Reduction / Latent variables
Discovering graph structure
Matrix completions
Parametric models
These models have a finite (and fixed) number of parameters.
Examples:
* Linear regression:
$$y(\mathbf{x}) = \mathbf{w}^\intercal\mathbf{x} + \epsilon$$
Which can be written as
$$p(y\ |\ x, \theta) = \mathcal{N}(y\ |\ \mu(x), \sigma^2) = \mathcal{N}(y\ |\ w^\intercal x, \sigma^2)$$
End of explanation
"""
%run ../src/LogisticRegression.py
X = np.hstack((np.random.normal(90, 2, 100), np.random.normal(110, 2, 100)))[:, np.newaxis]
y = np.array([0]*100 + [1]*100)[:, np.newaxis]
logr = LogisticRegression(learnrate=0.002, eps = 0.001)
logr.fit(X, y)
x_test = np.array([-logr.w[0]/logr.w[1]]).reshape(1,1) #np.linspace(-10, 10, 30)[:, np.newaxis]
y_probs = logr.predict_proba(x_test)[:, 0:1]
print("Probability:" + str(y_probs))
# Plot data
fig = plt.figure(figsize=(14, 6))
# Plot sigmoid function
ax1 = fig.add_subplot(1, 2, 1)
t = np.linspace(-15,15,100)
plt.plot(t, logr._sigmoid(t))
# Plot logistic regression
ax2 = fig.add_subplot(1, 2, 2)
plt.scatter(X, y)
plt.scatter(x_test, y_probs, c='r')
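# Quick check (sketch) of the sigmoid used above: sigm(0) = 0.5 and the function
# saturates towards 0 and 1 for large negative / positive inputs.
sigm = lambda x: np.exp(x) / (1 + np.exp(x))
print(sigm(np.array([-5., 0., 5.])))   # approximately [0.007, 0.5, 0.993]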
"""
Explanation: Logistic regression:
Despite the name, this is a classification model
$$p(y\ |\ x, w) = \mathrm{Ber}(y\ |\ \mu(x)) = \mathrm{Ber}(y\ |\ \mathrm{sigm}(w^\intercal x))$$
where
$$\displaystyle \mathrm{sigm}(x) = \frac{e^x}{1+e^x}$$
End of explanation
"""
%run ../src/KNearestNeighbors.py
# Generate data from 3 gaussians
gaussian_1 = np.random.multivariate_normal(np.array([1, 0.0]), np.eye(2)*0.01, size=100)
gaussian_2 = np.random.multivariate_normal(np.array([0.0, 1.0]), np.eye(2)*0.01, size=100)
gaussian_3 = np.random.multivariate_normal(np.array([0.1, 0.1]), np.eye(2)*0.001, size=100)
X = np.vstack((gaussian_1, gaussian_2, gaussian_3))
y = np.array([1]*100 + [2]*100 + [3]*100)
# Fit the model
knn = KNearestNeighbors(5)
knn.fit(X, y)
# Predict various points in space
XX, YY = np.mgrid[-5:5:.2, -5:5:.2]
X_test = np.hstack((XX.ravel()[:, np.newaxis], YY.ravel()[:, np.newaxis]))
y_test = knn.predict(X_test)
fig = plt.figure(figsize=(14, 6))
# Plot original data
ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(X[y == 1,0], X[y == 1,1], 'bo')
ax1.plot(X[y == 2,0], X[y == 2,1], 'go')
ax1.plot(X[y == 3,0], X[y == 3,1], 'ro')
# Plot predicted data
ax2 = fig.add_subplot(1, 2, 2)
ax2.contourf(XX, YY, y_test.reshape(50,50));
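# The KNN voting rule, worked by hand for one query point (sketch): with K=5
# neighbours whose labels are [1, 1, 2, 1, 3], the estimate of p(y=1|x) is 3/5.
neighbour_labels = np.array([1, 1, 2, 1, 3])
for c in (1, 2, 3):
    print("p(y=%d | x) = %.1f" % (c, np.mean(neighbour_labels == c)))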
"""
Explanation: Non-parametric models
These models don't have a fixed, finite number of parameters. For example, the number of parameters increases with the amount of training data, as in KNN:
$$p(y=c\ |\ x, \mathcal{D}, K) = \frac{1}{K} \sum_{i \in N_K(x, \mathcal{D})} \mathbb{I}(y_i = c)$$
End of explanation
"""
|
phuongxuanpham/SelfDrivingCar
|
CarND-Term1-Starter-Kit-Test/test.ipynb
|
gpl-3.0
|
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
%matplotlib inline
img = mpimg.imread('test.jpg')
plt.imshow(img)
"""
Explanation: My note:
1. Install Anaconda
2. Setup the carnd-term1 environment as instructions in Starter Kit.
3. Run the test.ipynb in the carnd-term1 kernel
4. Troubleshoot with ffmpeg
Run all the cells below to make sure everything is working and ready to go. All cells should run without error.
Test Matplotlib and Plotting
End of explanation
"""
import cv2
# convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
plt.imshow(gray, cmap='Greys_r')
"""
Explanation: Test OpenCV
End of explanation
"""
import tensorflow as tf
with tf.Session() as sess:
a = tf.constant(1)
b = tf.constant(2)
c = a + b
# Should be 3
print("1 + 2 = {}".format(sess.run(c)))
"""
Explanation: Test TensorFlow
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
"""
Explanation: Test Moviepy
End of explanation
"""
import imageio
imageio.plugins.ffmpeg.download()
"""
Explanation: Troubleshooting ffmpeg
NOTE: If you don't have ffmpeg installed on your computer you'll have to install it for moviepy to work. If this is the case you'll be prompted by an error in the notebook. You can easily install ffmpeg by running the following in a code cell in the notebook.
import imageio
imageio.plugins.ffmpeg.download()
End of explanation
"""
new_clip_output = 'test_output.mp4'
test_clip = VideoFileClip("test.mp4")
new_clip = test_clip.fl_image(lambda x: cv2.cvtColor(x, cv2.COLOR_RGB2YUV)) #NOTE: this function expects color images!!
%time new_clip.write_videofile(new_clip_output, audio=False)
HTML("""
<video width="640" height="300" controls>
<source src="{0}" type="video/mp4">
</video>
""".format(new_clip_output))
"""
Explanation: Create a new video with moviepy by processing each frame to YUV color space.
End of explanation
"""
|
tylere/docker-tmpnb-ee
|
notebooks/1 - IPython Notebook Examples/IPython Project Examples/IPython Kernel/Plotting in the Notebook.ipynb
|
apache-2.0
|
%matplotlib inline
"""
Explanation: Plotting with Matplotlib
IPython works with the Matplotlib plotting library, which integrates Matplotlib with IPython's display system and event loop handling.
matplotlib mode
To make plots using Matplotlib, you must first enable IPython's matplotlib mode.
To do this, run the %matplotlib magic command to enable plotting in the current Notebook.
This magic takes an optional argument that specifies which Matplotlib backend should be used. Most of the time, in the Notebook, you will want to use the inline backend, which will embed plots inside the Notebook:
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 3*np.pi, 500)
plt.plot(x, np.sin(x**2))
plt.title('A simple chirp');
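# Minimal sketch of the display() call mentioned in the text: explicitly paste a
# figure into the notebook output (handy when a GUI backend is in use).
from IPython.display import display
fig2, ax2 = plt.subplots()
ax2.plot(x, np.cos(x))
display(fig2)       # render the figure explicitly
plt.close(fig2)     # avoid a duplicate automatic display at the end of the cell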
"""
Explanation: You can also use Matplotlib GUI backends in the Notebook, such as the Qt backend (%matplotlib qt). This will use Matplotlib's interactive Qt UI in a floating window to the side of your browser. Of course, this only works if your browser is running on the same system as the Notebook Server. You can always call the display function to paste figures into the Notebook document.
Making a simple plot
With matplotlib enabled, plotting should just work.
End of explanation
"""
# %load http://matplotlib.org/mpl_examples/showcase/integral_demo.py
"""
Plot demonstrating the integral as the area under a curve.
Although this is a simple example, it demonstrates some important tweaks:
* A simple line plot with custom color and line width.
* A shaded region created using a Polygon patch.
* A text label with mathtext rendering.
* figtext calls to label the x- and y-axes.
* Use of axis spines to hide the top and right spines.
* Custom tick placement and labels.
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
def func(x):
return (x - 3) * (x - 5) * (x - 7) + 85
a, b = 2, 9 # integral limits
x = np.linspace(0, 10)
y = func(x)
fig, ax = plt.subplots()
plt.plot(x, y, 'r', linewidth=2)
plt.ylim(ymin=0)
# Make the shaded region
ix = np.linspace(a, b)
iy = func(ix)
verts = [(a, 0)] + list(zip(ix, iy)) + [(b, 0)]
poly = Polygon(verts, facecolor='0.9', edgecolor='0.5')
ax.add_patch(poly)
plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)\mathrm{d}x$",
horizontalalignment='center', fontsize=20)
plt.figtext(0.9, 0.05, '$x$')
plt.figtext(0.1, 0.9, '$y$')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.set_xticks((a, b))
ax.set_xticklabels(('$a$', '$b$'))
ax.set_yticks([])
plt.show()
"""
Explanation: These images can be resized by dragging the handle in the lower right corner. Double clicking will return them to their original size.
One thing to be aware of is that by default, the Figure object is cleared at the end of each cell, so you will need to issue all plotting commands for a single figure in a single cell.
Loading Matplotlib demos with %load
IPython's %load magic can be used to load any Matplotlib demo by its URL:
End of explanation
"""
%matplotlib notebook
plt.figure()
x = np.linspace(0, 5 * np.pi, 1000)
for n in range(1, 4):
plt.plot(np.sin(n * x))
plt.show()
"""
Explanation: Matplotlib 1.4 introduces an interactive backend for use in the notebook,
called 'nbagg'. You can enable this with %matplotlib notebook.
With this backend, you will get interactive panning and zooming of matplotlib figures in your browser.
End of explanation
"""
|
georgetown-analytics/machine-learning
|
examples/bbengfort/home sales/home_sales.ipynb
|
mit
|
%matplotlib inline
import os
import numpy as np
import pandas as pd
import seaborn as sns
"""
Explanation: Home Sales
This data set is from the Kaggle advanced home-sales regression challenge. Unfortunately, the data is not available unless you sign up for Kaggle and agree to the data set's terms of use. This notebook is mainly meant to demonstrate the use of IPython widgets and sliders to build a simple application for exploring home sale prices.
End of explanation
"""
data = pd.read_csv(os.path.join('data', 'train.csv'))
print("Data set of {} instances and {} attributes".format(*data.shape))
data.head()
g = sns.distplot(data.SalePrice, rug=True, kde=True)
t = g.set_title("Distribution of Sale Prices")
g = sns.boxplot(y='SalePrice', x='YrSold', data=data)
t = g.set_title("Distribution of Sale Price by Year")
data["TotalSF"] = data.TotalBsmtSF + data.GrLivArea
g = sns.jointplot(y="SalePrice", x="TotalSF", data=data, kind="hex")
data["TotalSF"] = data.TotalBsmtSF + data.GrLivArea
g = sns.lmplot(y="SalePrice", x="TotalSF", data=data, col="BldgType")
"""
Explanation: Data Exploration
End of explanation
"""
from sklearn.datasets.base import Bunch
def load_data(path="data", train="train.csv", test="test.csv", descr="data_description.txt", target="SalePrice"):
# Load the training data frame and split into X, y data frames
train = pd.read_csv(os.path.join(path, train))
feats = [col for col in train.columns if col != target]
data = train[feats]
target = train[target]
# Load the test data frame (no answers provided)
test = pd.read_csv(os.path.join(path, test))
# Read the description
with open(os.path.join(path, descr)) as f:
descr = f.read()
return Bunch(
data=data,
target=target,
test=test,
DESCR=descr,
)
data = load_data()
"""
Explanation: Data Loading
Kaggle provides train.csv and test.csv files with headers. However, while train.csv includes the target column SalePrice, test.csv does not. For simplicity, the data loader just returns a Bunch with the data and target split apart, the test data, and a description.
End of explanation
"""
print(data.DESCR)
CATEGORICAL = [
"MSZoning", "Street", "Alley", "LotShape", "LandContour", "Utilities",
"LotConfig", "LandSlope", "Neighborhood", "Condition1", "Condition2", "BldgType",
"HouseStyle", "RoofStyle", "RoofMatl", "Exterior1st", "Exterior2nd", "MasVnrType",
"ExterQual", "ExterCond", "Foundation", "BsmtQual", "BsmtCond", "BsmtExposure",
"BsmtFinType1", "BsmtFinType2", "Heating", "HeatingQC", "CentralAir", "Electrical",
"KitchenQual", "Functional", "FireplaceQu", "GarageType", "GarageFinish",
"GarageQual", "GarageCond", "PavedDrive", "PoolQC", "Fence", "MiscFeature",
"SaleType", "SaleCondition",
]
for col in [
"Alley", "MasVnrType", "BsmtQual", "BsmtCond", "BsmtExposure", "BsmtFinType1",
"BsmtFinType2", "Electrical", "FireplaceQu", "GarageType", "GarageFinish", "GarageQual",
"GarageCond", "PoolQC", "Fence", "MiscFeature",
]:
data.data[col] = data.data[col].apply(str)
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import LabelEncoder
class EncodeCategorical(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = list(columns) if columns is not None else None
self.encoders = None
def fit(self, data, target=None):
if self.columns is None:
self.columns = list(data.columns)
self.encoders = {
column: LabelEncoder().fit(data[column])
for column in self.columns
}
return self
def transform(self, data):
data = data.copy()
for column, encoder in self.encoders.items():
data[column] = encoder.transform(data[column])
return data
EncodeCategorical(CATEGORICAL).fit_transform(data.data).head()
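# What the transformer does, on a toy frame with made-up values (sketch): each listed
# categorical column is mapped to integer codes by its own LabelEncoder.
toy = pd.DataFrame({'BldgType': ['1Fam', 'Duplex', '1Fam'], 'LotArea': [8450, 9600, 11250]})
EncodeCategorical(['BldgType']).fit_transform(toy)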
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer, OneHotEncoder, Imputer
pipeline = Pipeline([
('encoder', EncodeCategorical(CATEGORICAL)),
('imputer', Imputer('NaN', 'mean')),
('normalize', Normalizer()),
])
X = pipeline.fit_transform(data.data)
y = data.target
from sklearn.linear_model import RidgeCV, LassoCV
from sklearn.cross_validation import train_test_split as tts
X, Xt, y, yt = tts(X, y, test_size=0.2)
print(X.shape, y.shape)
print(Xt.shape, yt.shape)
alphas = np.logspace(-10, -2, 200)
ridge = RidgeCV(alphas=alphas)
lasso = LassoCV(alphas=alphas)
ridge.fit(X, y)
lasso.fit(X, y)
scores = [
ridge.score(Xt, yt),
lasso.score(Xt, yt),
]
print("Ridge: {:0.3f}, Lasso: {:0.3f}".format(*scores))
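# Peek at which columns the regularised model leans on most (sketch; assumes the
# Imputer kept every original column, so coefficients line up with data.data.columns).
coef = pd.Series(ridge.coef_, index=data.data.columns)
print(coef.abs().sort_values(ascending=False).head(10))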
"""
Explanation: Feature Extraction
End of explanation
"""
|
tensorflow/docs-l10n
|
site/ja/probability/examples/JointDistributionAutoBatched_A_Gentle_Tutorial.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Auto-Batched Joint Distributions: A Gentle Tutorial
Copyright 2020 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
#@title Import and set ups{ display-mode: "form" }
import functools
import numpy as np
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
"""
Explanation: <table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/probability/examples/Modeling_with_JointDistribution"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/JointDistributionAutoBatched_A_Gentle_Tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/JointDistributionAutoBatched_A_Gentle_Tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/JointDistributionAutoBatched_A_Gentle_Tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Introduction
TensorFlow Probability (TFP) offers a number of JointDistribution abstractions that make probabilistic inference easier by letting a user express a probabilistic graphical model in near-mathematical form; the abstraction generates methods for sampling from the model and evaluating the log probability of samples from the model. In this tutorial, we review the "autobatched" variants, which were developed after the original JointDistribution abstractions. Relative to the original, non-autobatched abstractions, the autobatched versions are simpler to use and more ergonomic, allowing many models to be expressed with less boilerplate. In this colab, we explore a simple model in detail, making clear the problems autobatching solves and, along the way, learning more about TFP's shape concepts.
Before autobatching was introduced, there were several variants of JointDistribution, corresponding to different syntactic styles for expressing probabilistic models (JointDistributionSequential, JointDistributionNamed, JointDistributionCoroutine, and so on). With autobatching, AutoBatched variants of all of these are available. In this tutorial, we explore the differences between JointDistributionSequential and JointDistributionSequentialAutoBatched; everything we do here applies to the other variants with essentially no changes.
Dependencies & Prerequisites
End of explanation
"""
X = np.arange(7)
X
"""
Explanation: Prerequisite: A Bayesian regression problem
We'll consider a very simple Bayesian regression scenario:
$$ \begin{align} m & \sim \text{Normal}(0, 1) \\ b & \sim \text{Normal}(0, 1) \\ Y & \sim \text{Normal}(mX + b, 1) \end{align} $$
In this model, m and b are drawn from standard normal distributions, and the observations Y are drawn from a normal distribution whose mean depends on the random variables m and b and on some (nonrandom, known) covariates X. (For simplicity, in this example we assume the scale of all random variables is known.)
To perform inference in this model we'd need to know both the covariates X and the observations Y, but for this tutorial we only need X, so we define a simple dummy X.
End of explanation
"""
jds = tfd.JointDistributionSequential([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Normal(loc=m*X + b, scale=1.) # Y
])
"""
Explanation: Desiderata
In probabilistic inference, we often want to perform two basic operations:
sample: drawing samples from the model.
log_prob: computing the log probability of a sample from the model.
The key benefit of TFP's JointDistribution abstractions (as well as of many other approaches to probabilistic programming) is that users can write a model once and have access to both sample and log_prob computations.
Noting that we have 7 points in our data set (X.shape = (7,)), we can state the desiderata for a good JointDistribution:
sample() should produce a list of Tensors having shapes [(), (), (7,)], corresponding to the scalar slope, scalar bias, and vector observations, respectively.
log_prob(sample()) should produce a scalar: the log probability of a particular slope, bias, and observations.
sample([5, 3]) should produce a list of Tensors having shapes [(5, 3), (5, 3), (5, 3, 7)], representing a (5, 3)-batch of samples from the model.
log_prob(sample([5, 3])) should produce a Tensor with shape (5, 3).
We'll now look at a succession of JointDistribution models, see how to achieve the above desiderata, and hopefully learn a bit more about TFP shapes along the way.
Spoiler alert: the approach that satisfies the above desiderata without added boilerplate is autobatching.
First attempt: JointDistributionSequential
End of explanation
"""
dists, sample = jds.sample_distributions()
sample
"""
Explanation: This is more or less a direct translation of the model into code. The slope m and bias b are straightforward. Y is defined via a lambda function: the general pattern is that a lambda function of $k$ arguments in a JointDistributionSequential (JDS) uses the previous $k$ distributions in the model. Note the "reverse" order.
We call sample_distributions, which returns both a sample and the underlying "sub-distributions" that were used to generate the sample. (We could have produced just the sample by calling sample; the distributions will be convenient to have later in the tutorial.) The sample we produce is fine:
End of explanation
"""
jds.log_prob(sample)
"""
Explanation: But log_prob produces a result with an undesirable shape:
End of explanation
"""
try:
jds.sample([5, 3])
except tf.errors.InvalidArgumentError as e:
print(e)
"""
Explanation: And taking multiple samples doesn't work:
End of explanation
"""
sample
"""
Explanation: Let's try to understand what's going wrong.
A quick review: batch and event shapes
In TFP, an ordinary (not a JointDistribution) probability distribution has an <em>event shape</em> and a <em>batch shape</em>, and understanding the difference is crucial to using TFP effectively.
The event shape describes the shape of a single draw from the distribution; the draw may be dependent across dimensions. For scalar distributions, the event shape is []. For a 5-dimensional MultivariateNormal, the event shape is [5].
The batch shape describes independent, not identically distributed draws, i.e., a "batch" of distributions. Representing a batch of distributions in a single Python object is one of the key ways TFP achieves efficiency at scale.
For our purposes, a critical fact to keep in mind is that if we call log_prob on a single sample from a distribution, the result will always have a shape that matches (i.e., has as rightmost dimensions) the batch shape.
For a more in-depth discussion of shapes, see the "Understanding TensorFlow Distributions Shapes" tutorial.
Why doesn't log_prob(sample()) produce a scalar?
Let's use our knowledge of batch and event shapes to explore what's happening with log_prob(sample()). Here's our sample again:
End of explanation
"""
dists
"""
Explanation: And here are our distributions:
End of explanation
"""
log_prob_parts = [dist.log_prob(s) for (dist, s) in zip(dists, sample)]
log_prob_parts
np.sum(log_prob_parts) - jds.log_prob(sample)
"""
Explanation: The log probability is computed by summing the log probabilities of the sub-distributions at the (matched) elements of the parts:
End of explanation
"""
dists[2]
"""
Explanation: So, one level of explanation is that the log probability computation returns a 7-Tensor because the third subcomponent of log_prob_parts is a 7-Tensor. But why?
We see that the last element of dists, which corresponds to our distribution over Y in the mathematical formulation, has a batch_shape of [7]. In other words, our distribution over Y is a batch of 7 independent normals (with different means and, in this case, the same scale).
We now understand what's wrong: in the JDS, the distribution over Y has batch_shape=[7], so a sample from the JDS represents scalars for m and b and a "batch" of 7 independent normals, and log_prob computes 7 separate log probabilities, each of which represents the log probability of drawing m and b and a single observation Y[i] at some X[i].
Fixing log_prob(sample()) with Independent
Recall that dists[2] has event_shape=[] and batch_shape=[7]:
End of explanation
"""
y_dist_i = tfd.Independent(dists[2], reinterpreted_batch_ndims=1)
y_dist_i
"""
Explanation: We can convert it to a distribution with event_shape=[7] and batch_shape=[] using TFP's Independent metadistribution, which converts batch dimensions to event dimensions. (We rename it y_dist_i because it's the distribution over Y, with the _i standing in for the Independent wrapping.)
End of explanation
"""
y_dist_i.log_prob(sample[2])
"""
Explanation: Now the log_prob of a 7-vector is a scalar:
End of explanation
"""
y_dist_i.log_prob(sample[2]) - tf.reduce_sum(dists[2].log_prob(sample[2]))
"""
Explanation: Under the covers, Independent sums over the batch:
End of explanation
"""
jds_i = tfd.JointDistributionSequential([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Independent( # Y
tfd.Normal(loc=m*X + b, scale=1.),
reinterpreted_batch_ndims=1)
])
jds_i.log_prob(sample)
"""
Explanation: And indeed, we can use this to construct a new jds_i (the i again stands for Independent), where log_prob returns a scalar:
End of explanation
"""
try:
jds_i.sample([5, 3])
except tf.errors.InvalidArgumentError as e:
print(e)
"""
Explanation: A couple of notes:
jds_i.log_prob(s) is not the same as tf.reduce_sum(jds.log_prob(s)). The former produces the "correct" log probability of the joint distribution. The latter sums over a 7-Tensor, each element of which is the sum of the log probability of m, b, and a single element of the log probability of Y, so it overcounts m and b. (log_prob(m) + log_prob(b) + log_prob(Y) returns a result rather than throwing an exception because TFP follows TF and NumPy's broadcasting rules: adding a scalar to a vector produces a vector-sized result.)
In this particular case, we could have solved the problem and achieved the same result using MultivariateNormalDiag instead of Independent(Normal(...)). MultivariateNormalDiag is a vector-valued distribution (i.e., it already has vector event shape), and indeed it could be implemented as a composition of Independent and Normal. Given a vector V, samples from n1 = Normal(loc=V) and n2 = MultivariateNormalDiag(loc=V) are indistinguishable; the difference between these distributions is that n1.log_prob(n1.sample()) is a vector while n2.log_prob(n2.sample()) is a scalar.
Multiple samples
Drawing multiple samples still doesn't work:
End of explanation
"""
m = tfd.Normal(0., 1.).sample([5, 3])
try:
m * X
except tf.errors.InvalidArgumentError as e:
print(e)
"""
Explanation: Let's reason about why. When we call jds_i.sample([5, 3]), we first draw samples for m and b, each with shape (5, 3). Next, we try to construct a Normal distribution via
tfd.Normal(loc=m*X + b, scale=1.)
But if m has shape (5, 3) and X has shape 7, we can't multiply them together, and indeed this is the error we hit:
End of explanation
"""
m[..., tf.newaxis].shape
(m[..., tf.newaxis] * X).shape
"""
Explanation: To resolve this, let's think about what properties the distribution over Y has to have. If we've called jds_i.sample([5, 3]), then m and b will both have shape (5, 3). What shape should a call to sample on the Y distribution produce? The obvious answer is (5, 3, 7): for each batch point, we want a sample of the same size as X. We can achieve this using TensorFlow's broadcasting capabilities, adding extra dimensions:
End of explanation
"""
jds_ia = tfd.JointDistributionSequential([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Independent( # Y
tfd.Normal(loc=m[..., tf.newaxis]*X + b[..., tf.newaxis], scale=1.),
reinterpreted_batch_ndims=1)
])
shaped_sample = jds_ia.sample([5, 3])
shaped_sample
jds_ia.log_prob(shaped_sample)
"""
Explanation: Adding an axis to both m and b, we can define a new JDS that supports multiple samples:
End of explanation
"""
(jds_ia.log_prob(shaped_sample)[3, 1] -
jds_i.log_prob([shaped_sample[0][3, 1],
shaped_sample[1][3, 1],
shaped_sample[2][3, 1, :]]))
"""
Explanation: As an extra check, we verify that the log probability for a single batch point matches what we had before:
End of explanation
"""
jds_ab = tfd.JointDistributionSequentialAutoBatched([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Normal(loc=m*X + b, scale=1.) # Y
])
jds_ab.log_prob(jds.sample())
shaped_sample = jds_ab.sample([5, 3])
jds_ab.log_prob(shaped_sample)
jds_ab.log_prob(shaped_sample) - jds_ia.log_prob(shaped_sample)
"""
Explanation: <a id="AutoBatching-For-The-Win"></a>
AutoBatching for the win
We now have a version of JointDistribution that handles all our desiderata: log_prob returns a scalar thanks to the use of tfd.Independent, and multiple samples work now that broadcasting is fixed by adding extra axes.
But there's an easier, better way: JointDistributionSequentialAutoBatched (JDSAB).
End of explanation
"""
jds.batch_shape
jds_i.batch_shape
jds_ia.batch_shape
jds_ab.batch_shape
"""
Explanation: How does this work? While you could try to read the code for a deep understanding, here is a brief overview that is sufficient for most use cases:
The first problem was that our distribution for Y had batch_shape=[7] and event_shape=[], and we used Independent to convert the batch dimension to an event dimension. JDSAB ignores the batch shapes of the component distributions; instead it treats batch shape as an overall property of the model, which is assumed to be [] (unless specified otherwise by setting batch_ndims > 0). The effect is equivalent to using tfd.Independent to convert all batch dimensions of the component distributions into event dimensions, as we did manually above.
The second problem was the need to massage the shapes of m and b so that they broadcast correctly with X when creating multiple samples. With JDSAB, we write a model to generate a single sample, and we "lift" the entire model to generate multiple samples using TensorFlow's vectorized_map. (This feature is analogous to JAX's vmap.)
Exploring the batch shape issue in more detail, we can compare the batch shapes of the original "bad" joint distribution jds, the batch-fixed distributions jds_i and jds_ia, and the autobatched jds_ab:
End of explanation
"""
X = np.arange(14).reshape((2, 7))
X
"""
Explanation: We see that the original jds has sub-distributions with different batch shapes. jds_i and jds_ia fix this by creating sub-distributions with the same (empty) batch shape. jds_ab has only a single (empty) batch shape.
It's worth noting that JointDistributionSequentialAutoBatched offers some additional generality for free. Suppose we make the covariates X (and, implicitly, the observations Y) two-dimensional:
End of explanation
"""
jds_ab = tfd.JointDistributionSequentialAutoBatched([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Normal(loc=m*X + b, scale=1.) # Y
])
shaped_sample = jds_ab.sample([5, 3])
shaped_sample
jds_ab.log_prob(shaped_sample)
"""
Explanation: Our JointDistributionSequentialAutoBatched works with no changes (we need to redefine the model because the shape of X gets cached by jds_ab.log_prob):
End of explanation
"""
jds_ia = tfd.JointDistributionSequential([
tfd.Normal(loc=0., scale=1.), # m
tfd.Normal(loc=0., scale=1.), # b
lambda b, m: tfd.Independent( # Y
tfd.Normal(loc=m[..., tf.newaxis]*X + b[..., tf.newaxis], scale=1.),
reinterpreted_batch_ndims=1)
])
try:
jds_ia.sample([5, 3])
except tf.errors.InvalidArgumentError as e:
print(e)
"""
Explanation: On the other hand, our carefully crafted JointDistributionSequential no longer works:
End of explanation
"""
|
mjabri/holoviews
|
doc/Tutorials/Options.ipynb
|
bsd-3-clause
|
import numpy as np
import holoviews as hv
%reload_ext holoviews.ipython
x,y = np.mgrid[-50:51, -50:51] * 0.1
image = hv.Image(np.sin(x**2+y**2), group="Function", label="Sine")
coords = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
curve = hv.Curve(coords)
curves = {phase: hv.Curve([(0.1*i, np.sin(phase+0.1*i)) for i in range(100)])
for phase in [0, np.pi/2, np.pi, np.pi*3/2]}
waves = hv.HoloMap(curves)
layout = image + curve
"""
Explanation: HoloViews is designed to be both highly customizable, allowing you to control how your visualizations appear, but also to enforce a strong separation between your data (with any semantically associated metadata, like type and label information) and all options related purely to visualization. This separation allows HoloViews objects to be generated easily by external programs, without giving them a dependency on any plotting or windowing libraries. It also helps make it completely clear which parts of your code deal with the actual data, and which are just about displaying it nicely, which becomes very important for complex visualizations that become more complicated than your data itself.
To achieve this separation, HoloViews stores visualization options independently from your data, and applies the options only when rendering the data to a file on disk or when displaying it in an IPython notebook cell.
This tutorial gives an overview of the different types of options available, how to find out more about them, and how to set them in both regular Python and using the IPython magic interface that is shown elsewhere in the tutorials.
Example objects
First, we'll create some HoloViews data objects ready to visualize:
End of explanation
"""
renderer = hv.Store.renderers['matplotlib'].instance(fig='svg', holomap='gif')
"""
Explanation: Rendering and saving objects from Python <a id='python-saving'></a>
To illustrate how to do plotting independently of IPython, we'll generate and save a plot directly to disk. First, let's create a renderer object that will render our files to SVG (for static figures) or GIF (for animations):
End of explanation
"""
renderer.save(layout, 'example_I')
"""
Explanation: We could instead have used the default Store.renderer, but that would have been PNG format. Using this renderer, we can save any HoloViews object as SVG or GIF:
End of explanation
"""
from IPython.display import SVG
SVG(filename='example_I.svg')
"""
Explanation: That's it! The renderer builds the figure in matplotlib, renders it to SVG, and saves that to "example_I.svg" on disk. Everything up to this point would have worked the same in IPython or in regular Python, even with no display available. But since we're in IPython Notebook at the moment, we can check whether the exporting worked:
End of explanation
"""
hv.help(image, visualization=False)
"""
Explanation: You can use this workflow for generating HoloViews visualizations directly from Python, perhaps as a part of a set of scripts that you run automatically, e.g. to put your results up on a web server as soon as data is generated. But so far, this plot just uses all the default options, with no customization. How can we change how the plot will appear when we render it?
HoloViews visualization options
HoloViews provides three categories of visualization options that can be set by the user. In this section we will first describe the different kinds of options, then later sections show you how to list the supported options of each type for a given HoloViews object or class, and how to change them in Python or IPython.
style options:
style options are passed directly to the underlying rendering backend that actually draws the plots, allowing you to control the details of how it behaves. The default backend is matplotlib, and the only other backend currently available is mpld3, both of which use matplotlib options. HoloViews can tell you which of these options are supported, but you will need to see the matplotlib documentation for the details of their use.
HoloViews has been designed to be easily extensible to additional backends in the future, such as Cairo, VTK, Bokeh, or D3.js, and if one of those backends were selected then the supported style options would differ.
plot options:
Each of the various HoloViews plotting classes declares various Parameters that control how HoloViews builds the visualization for that type of object, such as plot sizes and labels. HoloViews uses these options internally; they are not simply passed to the matplotlib backend. HoloViews documents these options fully in its online help and in the Reference Manual. These options may vary for different backends in some cases, but we try to keep any options that are meaningful for a variety of backends the same for all of them.
norm options:
norm options are a special type of plot option that are applied orthogonally to the above two types, to control normalization. Normalization refers to adjusting the properties of one plot relative to those of another. For instance, two images normalized together would appear with relative brightness levels, with the brightest image using the full range black to white, while the other image is scaled proportionally. Two images normalized independently would both cover the full range from black to white. Similarly, two axis ranges normalized together will expand to fit the largest range of either axis, while those normalized separately would cover different ranges.
There are currently only two norm options supported, axiswise and framewise, but they can be applied to any of the various object types in HoloViews to specify a huge range of different normalization options.
For a given category or group of HoloViews objects, if axiswise is True, normalization will be computed independently for all items in that category that have their own axes, such as different Image plots or Curve plots. If axiswise is False, all such objects are normalized together.
For a given category or group of HoloViews objects, if framewise is True, normalization of any HoloMap objects included is done independently per frame rendered -- each frame will appear as it would if it were extracted from the HoloMap and plotted separately. If framewise is False (the default), all frames in a given HoloMap are normalized together, so that you can see strength differences over the course of the animation.
As described below, these options can be controlled precisely and in any combination to make sure that HoloViews displays the data of most interest, ignoring irrelevant differences and highlighting important ones.
Finding out which options are available for an object
For the norm options, no further online documentation is provided, because all of the various visualization classes support only the two options described above. But there are a variety of ways to get the list of supported style options and detailed documentation for the plot options for a given component.
First, for any Python class or object in HoloViews, you can use holoviews.help(object-or-class, visualization=False) to find out about its parameters. For instance, these parameters are available for our Image object, shown with their current value (or default value, for a class), data type, whether it can be changed by the user (if it is constant, read-only, etc.), and bounds if any:
End of explanation
"""
hv.help(image)
"""
Explanation: This information can be useful, but we have explicitly suppressed information regarding the visualization parameters -- these all report metadata about your data, not about anything to do with plotting directly. That's because the normal HoloViews components have nothing to do with plotting; they are just simple containers for your data and a small amount of metadata.
Instead, the plotting implementation and its associated parameters are kept in completely separate Python classes and objects. To find out about visualizing a HoloViews component like an Image, you can simply use the help command holoviews.help(object-or-class) that looks up the code that plots that particular type of component, and then reports the style and plot options available for it.
For our image example, holoviews.help first finds that image is of type Image, then looks in its database to find that Image visualization is handled by the RasterPlot class (which users otherwise rarely need to access directly). holoviews.help then shows information about what objects are available to customize (either the object itself, or the items inside a container), followed by a brief list of style options supported by a RasterPlot, and a very long list of plot options (which are all the parameters of a RasterPlot):
End of explanation
"""
hv.Store.add_style_opts(hv.Image, ['filternorm'])
# To check that it worked:
RasterPlot = renderer.plotting_class(hv.Image)
print(RasterPlot.style_opts)
"""
Explanation: Supported style options
As you can see, HoloViews lists the currently allowed style options, but provides no further documentation because these settings are implemented by matplotlib and described at the matplotlib site. Note that matplotlib actually accepts a huge range of additional options, but they are not listed as being allowed because those options are not normally meaningful for this plot type. But if you know of a specific matplotlib option not on the list and really want to use it, you can add it manually to the list of supported options using Store.add_style_opts(holoviews-component-class, ['matplotlib-option ...']). For instance, if you want to use the filternorm parameter with this image object, you would run Store.add_style_opts(Image, ['filternorm']). This will add the new option to the corresponding plotting class RasterPlot:
End of explanation
"""
RasterPlot.colorbar=True
RasterPlot.set_param(show_title=False,show_frame=True)
"""
Explanation: Changing plot options at the class level
Any parameter in HoloViews can be set on an object or on the class of the object, so any of the above plot options can be set like:
End of explanation
"""
renderer.save(layout, 'example_II', style=dict(Image={'cmap':'Blues'}),
plot= dict(Image={'yaxis':None}))
SVG(filename='example_II.svg')
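# Sketch of setting the norm options described earlier (assumes the options
# dictionary accepts a 'norm' group alongside 'plot' and 'style'): render the
# HoloMap with each frame normalized independently.
norm_options = {'Curve': {'norm': dict(framewise=True)}}
renderer.save(waves, 'example_norm', options=norm_options)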
"""
Explanation: Here .set_param() allows you to set multiple parameters conveniently, but it works the same as the single-parameter .colorbar example above it. Setting these values at the class level affects all previously created and to-be-created plotting objects of this type, unless specifically overridden via Store as described below.
Note that if you look at the source code for a particular plotting class, you will only see some of the parameters it supports. The rest, such as show_frame above, are defined in a superclass of the given object. The Reference Manual shows the complete list of parameters available for any given class (those labeled param in the manual), but it can be an overwhelming list since it includes all superclasses, all the metadata about each parameter, etc. The holoviews.help command with visualization=True provides a much more concise listing, and also shows the style options that are not listed in the Reference Manual.
Because setting these parameters at the class level does not provide much control over individual plots, HoloViews provides a much more flexible system using the OptionTree mechanisms described below, which can override these class defaults according to the HoloViews object type, group, and label.
The rest of the sections show how to change any of the above options, once you have found the right one using the suitable call to holoviews.help.
Controlling options from Python
Once you know the name of the option you want to change, and the value you want to change it to, there are a number of ways to customize your plot.
For the Python output to SVG example above, you can specify the options for a given type using keywords supplying a dictionary for any of the above option categories. You can see that the colormap changes when we supply that style option and render a new SVG:
End of explanation
"""
options={'Image.Function.Sine': {'plot':dict(fig_size=50), 'style':dict(cmap='jet')}}
renderer.save(layout, 'example_III',options=options)
SVG(filename='example_III.svg')
"""
Explanation: As before, the SVG call is simply to display it here in the notebook; the actual image is saved on disk and then loaded back in here for display.
You can see that the image now has a colorbar, because we set colorbar=True on the RasterPlot class, that it has become blue, because we set the matplotlib cmap style option in the renderer.save call, and that the y axis has been disabled, because we set the plot option yaxis to None (which is normally 'left' by default, as you can see in the default value for RasterPlot's parameter yaxis above). Hopefully you can see that once you know the option value you want to use, it can be provided easily.
You can also create a whole set of options separately, perhaps holding a large collection of preferred values, and apply it whenever you wish to save:
End of explanation
"""
green_sine = image(style={'cmap':'Greens'})
"""
Explanation: Here you can see that the y axis has returned, because our previous setting to turn it off was just for the call to renderer.save. But we still have a colorbar, because that parameter was set at the class level, for all future plots of this type. Note that this form of option setting, while more verbose, accepts the full {type}[.{group}[.{label}]] syntax, like 'Image.Function.Sine' or 'Image.Function', while the shorter keyword approach above only supports the class, like 'Image'.
Note that for the options dictionary, the option nesting is inverted compared to the keyword approach: the outermost dictionary is keyed by the target (Image, or Image.Function.Sine), with the option categories underneath. You can see that with this mechanism, we can specify the options even for subobjects of a container, as long as we can specify them with an appropriate key.
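To make the inversion concrete, here are the two forms side by side (the output filenames are hypothetical; the option values are the ones used above):
# keyword form: option category first, keyed by the element class name only
renderer.save(layout, 'example_keyword', style=dict(Image={'cmap':'Blues'}))
# options= form: the {type}[.{group}[.{label}]] key first, option categories underneath
renderer.save(layout, 'example_options', options={'Image.Function.Sine': {'style': dict(cmap='Blues')}})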
There's also another way to customize options in Python that lets you build up customizations incrementally. To do this, you can associate a particular set of options persistently with a particular HoloViews object, even if that object is later combined with other objects into a container. Here a new copy of the object is created, with the given set of options (using either the keyword or options= format above) bound to it:
End of explanation
"""
green_sine
"""
Explanation: Here we could save the object to SVG just as before, but in this case we can skip a step and simply view it directly in the notebook:
End of explanation
"""
with hv.StoreOptions.options(green_sine, options={'Image':{'style':{'cmap':'Reds'}}}):
data, info = renderer(green_sine)
print(info)
SVG(data)
"""
Explanation: Both IPython notebook and renderer.save() use the same mechanisms for keeping track of the options, so they will give the same results. Specifically, what happens when you "bind" a set of options to an object is that there is an integer ID stored in the object (green_sine in this case), and a corresponding entry with that ID is stored in a database of options called an OptionTree (kept in holoviews.core.options.Store). The object itself is otherwise unchanged, but then if that object is later used in another container, etc. it will retain its ID and therefore its customization. Any customization stored in an OptionTree will override any class attribute defaults set like RasterGridPlot.border=5 above. This approach lets HoloViews keep track of any customizations you want to make, without ever affecting your actual data objects.
If the same object is later customized again to create a new customized object, the old customizations will be copied, and then the new customizations applied. The new customizations will thus override the old, while retaining any previous customizations not specified in the new step.
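For instance, a minimal sketch of this incremental pattern, reusing the green_sine object from above (blue_sine is just a hypothetical name):
blue_sine = green_sine(style={'cmap':'Blues'})   # the colormap is overridden; any other bound options are retained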
In this way, it is possible to build complex objects with arbitrary customization, step by step. As mentioned above, it is also possible to customize objects already combined into a complex container, just by specifying an option for a suitable key (e.g. 'Image.Function.Sine' above). This flexible system should allow for any level of customization that is needed.
Finally, there is one more way to apply options that is a mix of the above approaches -- temporarily assign a new ID to the object and apply a set of customizations during a specific portion of the code:
End of explanation
"""
%%opts Curve style(linewidth=8) Image style(interpolation='bilinear') plot[yaxis=None] norm{+framewise}
layout
"""
Explanation: Here the result is red, because it was rendered within the options context above, but were we to render the green_sine again it would still be green; the options are applied only within the scope of the with statement.
Controlling options in IPython using %%opts and %opts
The above sections describe how to set all of the options using regular Python. Similar functionality is provided in IPython, but with a more convenient syntax based on an IPython magic command:
End of explanation
"""
from holoviews.ipython.parser import OptsSpec
renderer.save(image + waves, 'example_V',
options=OptsSpec.parse("Image (cmap='gray')"))
"""
Explanation: The %%opts magic works like the pure-Python option for associating options with an object, except that it works on the item in the IPython cell, and it affects the item directly rather than making a copy or applying only in scope. Specifically, it assigns a new ID number to the object returned from this cell, and makes a new OptionTree containing the options for that ID number.
If the same layout object is used later in the notebook, even within a complicated container object, it will retain the options set on it.
The options accepted are just the same as for the Python version, but specified more succinctly:
%%opts target-specification style(styleoption=val ...) plot[plotoption=val ...] norm{+normoption -normoption...}
Here the target-specification lets you specify the object type (e.g. Image), and optionally its group (e.g. Image.Function) or even both group and label (e.g. Image.Function.Sine), if you want to control options very precisely. There is also an even further abbreviated syntax, because the special bracket types alone are enough to indicate which category of option is specified:
%%opts target-specification (styleoption=val ...) [plotoption=val ...] {+normoption -normoption ...}
Here parentheses indicate style options, square brackets indicate plot options, and curly brackets indicate norm options (with +axiswise and +framewise indicating True for those values, and -axiswise and -framewise indicating False). Additional target-specifications and associated options of each type for that target-specification can be supplied at the end of this line. This ultra-concise syntax is used throughout the other tutorials, because it helps minimize the code needed to specify the plotting options, and helps make it very clear that these options are handled separately from the actual data.
The %opts "line" magic (with one %) works just the same as the %%opts "cell" magic, but it changes the global default options for all future cells, allowing you to choose a new default colormap, line width, etc.
Apart from its brevity, a big benefit of using the IPython magic syntax %%opts or %opts is that it is fully tab-completable. Each of the options that is currently available will be listed if you press <TAB> when you are ready to write it, which makes it much easier to find the right parameter. Of course, you will still need to consult the full holoviews.help documentation (described above) to see the type, allowable values, and documentation for each option, but the tab completion should at least get you started and is great for helping you remember the list of options and see which options are available.
You can even use the succinct IPython-style specification directly in your Python code if you wish, but it requires the external pyparsing library (which is already available if you are using matplotlib):
End of explanation
"""
%%output info=True
curve
"""
Explanation: There is also a special IPython syntax for listing the visualization options for a plotting object in a pop-up window that is equivalent to calling holoviews.help(object):
End of explanation
"""
|
d-k-b/udacity-deep-learning
|
transfer-learning/Transfer_Learning_Solution.ipynb
|
mit
|
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
"""
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Training a modern ConvNet on a huge dataset like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg.
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link.
End of explanation
"""
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
"""
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
"""
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
"""
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
"""
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
images = np.concatenate(batch)
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'wb') as f:  # binary mode, since tofile writes raw bytes
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
"""
Explanation: Below I'm running images through the VGG network in batches.
End of explanation
"""
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes', 'rb') as f:  # binary mode to match how the codes were written
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
"""
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
"""
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
"""
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels_vecs))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
"""
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
"""
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
"""
def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
"""
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
"""
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
"""
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help.
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
"""
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
"""
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
"""
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation
"""
|
samirma/deep-learning
|
tensorboard/Anna_KaRNNa_Name_Scoped.ipynb
|
mit
|
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
"""
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Number of sequences in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
"""
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size (the number of sequences we feed through the network in parallel). Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_layers"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window over by num_steps characters. In this way we can feed batches to the network and the cell states will carry over from one batch to the next.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
"""
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
"""
Explanation: Write out the graph for TensorBoard
End of explanation
"""
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
"""
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
|
matthewljones/computingincontext
|
CiC_lecture_03_text_mining_redux.ipynb
|
gpl-2.0
|
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import textmining_blackboxes as tm
"""
Explanation: Computing In Context
Social Sciences Track
Lecture 3--text mining for real
Matthew L. Jones
like, with code and stuff
End of explanation
"""
#see if package imported correctly
tm.icantbelieve("butter")
"""
Explanation: IMPORTANT: tm is our temporary helper, not a standard python package!!
download it from my github:
https://github.com/matthewljones/computingincontext
End of explanation
"""
title_info=pd.read_csv('data/na-slave-narratives/data/toc.csv')
#this is the "metadata" of these files--we didn't use today
#why does data appear twice?
#Let's use a brittle thing for reading in a directory of pure txt files.
our_texts=tm.readtextfiles('data/na-slave-narratives/data/texts')
#again, this is not a std python package
#returns a simple list of the document as very long strings
#note if you want the following notebook will work on any directory of text files.
len(our_texts)
our_texts[100][:300] # first 300 characters of the 100th text
"""
Explanation: Let's get some text
Let's use the remarkable narratives available from Documenting the American South (http://docsouth.unc.edu/docsouthdata/)
Assuming that you are storing your data in a directory in the same place as your iPython notebook.
Put the slave narratives texts within a data directory in the same place as this notebook
End of explanation
"""
lengths=[len(text) for text in our_texts]
"""
Explanation: list comprehensions!
most python thing evah!
how many characters are in each text within our_texts? can you make a list?
Sure, you could do this as a for loop
for text in our_texts:
blah.blah.blah(text) #not real code
or
for i in range(len(our_texts)):
blah.blah.blah(our_texts[i]) #not real code
But super easy in python
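and if you want rough word counts rather than character counts, a one-line variant (rough, because .split() is a very crude tokenizer):
word_counts=[len(text.split()) for text in our_texts]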
End of explanation
"""
our_texts=tm.data_cleanse(our_texts)
#more necessary when have messy text
#eliminate escaped characters
"""
Explanation: How to process text
Python Libraries
Python has an embarrassment of riches when it comes to working with texts. Some libraries are higher level with simpler, well thought out defaults, namely pattern and TextBlob. Most general, of long development, and foundational is the Natural Language Tool Kit--NLTK. The ideas we'll learn today are key--they have slightly different instantiations in the different tools. Not everything is yet in Python 3, alas!!
nltk : grandparent of text analysis packages, cross-platform, complex
crucial for moving beyond bag of words: tagging & other grammatical analysis
pattern : higher level and easier to use the nltk but Python 2.7 only. (wah!)
textblob : even higher level range of natural language processing (3.4 but not yet in conda?)
scikit learn (sklearn): toolkit for scientists, faster, better (use for processing/memory intensive stuff) (Our choice!)
Things we might do to clean up text
tokenization
making .split much better
Examples??
stemming:
converting inflected forms into some normalized forms
e.g. "chefs" --> "chef"
"goes" --> "go"
"children" --> "child"
stopwords
they are the words you don't want to be included:
"from" "to" "a" "they" "she" "he"
If you need to do lots of such things, you'll want to use nltk, pattern or TextBlob.
For now, we'll play with the cool scientists and use the powerful and fast scikit learn package.
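before we do, a toy sketch of tokenization and stopword filtering in plain python, just to fix the ideas (stemming needs a real stemmer, e.g. nltk's PorterStemmer, so it isn't shown):
tokens = "she goes from the kitchen to the chefs".split() #crude tokenization
stopwords = {"from", "to", "a", "the", "she", "he", "they"}
content_words = [w for w in tokens if w not in stopwords] #drop the little function words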
Our Zero-ith tool: cleaning up the text
I've included a little utility function in tm that takes a list of strings and cleans it up a bit
check out the code on your own time later
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer=TfidfVectorizer(min_df=0.5, stop_words='english', use_idf=True)
document_term_matrix=vectorizer.fit_transform(our_texts)
"""
Explanation: Our first tool: vectorizer from scikit learn
End of explanation
"""
# now let's get our vocabulary--the names corresponding to the rows
# "feature" is the general term in machine learning and data mining
# we seek to characterize data by picking out features that will enable discovery
vocab=vectorizer.get_feature_names()
len(vocab)
document_term_matrix.shape
"""
Explanation: for the documentation of sklearn's text data functionality, see http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html
while this works, mini-lecture on crashes
see kernel above. Therein is the secret to eliminating the dreaded *.
End of explanation
"""
vocab[1000:1100]
"""
Explanation: so document_term_matrix is a matrix with 294 rows--the documents--and 1658 columns--the vocabulary or terms or features
End of explanation
"""
document_term_matrix_dense=document_term_matrix.toarray()
dtmdf=pd.DataFrame(document_term_matrix_dense, columns=vocab)
dtmdf
"""
Explanation: right now stored super efficiently as a sparse matrix
almost all zeros--good for our computers' limited memory
easier for us to see as a dense matrix
End of explanation
"""
#easy to program, but let's use a robust version from sklearn!
from sklearn.metrics.pairwise import cosine_similarity
similarity=cosine_similarity(document_term_matrix)
#Note here that the `cosine_similiary` can take
#an entire matrix as its argument
#what'd we get?
similarity
similarity.shape
"""
Explanation: While this data frame is lovely to look at and useful to think with, it's tough on your computer's memory
Now we can throw wide variety of mining algorithms at our data!
Similarity and dissimilarity
We reduced our text to a vector of term-weights.
What can we do once we've committed this real violence on the text?
We can measure distance and similarity
I know. Crazy talk.
Right now our text is just a series of numbers, indexed to words. We can treat it like any collection of vectors more or less.
And the key way to distinguish two vectors is by measuring their distance or computing their similiarity (1-distance).
You already know how, though you may have buried it along with memories of high school.
Many distance metrics to choose from
key one in textual analysis:
cosine similarity
If $\mathbf{a}$ and $\mathbf{b}$ are vectors, then
$\mathbf{a}\cdot\mathbf{b}=\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|\cos\theta$
Or
$\text{similarity} = \cos(\theta) = {A \cdot B \over \|A\| \|B\|} = \frac{ \sum\limits_{i=1}^{n}{A_i \times B_i} }{ \sqrt{\sum\limits_{i=1}^{n}{(A_i)^2}} \times \sqrt{\sum\limits_{i=1}^{n}{(B_i)^2}} }$
(h/t wikipedia)
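a quick sanity check of the formula on two toy vectors (numpy only; sklearn's cosine_similarity returns the same number):
import numpy as np
a = np.array([1.0, 0.0, 2.0])
b = np.array([2.0, 1.0, 0.0])
a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b)) #2 / (sqrt(5)*sqrt(5)) = 0.4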
End of explanation
"""
similarity[100]
#this gives the similarity of row 100 to each of the other rows
"""
Explanation: that is a symmetrical matrix relating each of the texts (rows) to another text (row)
End of explanation
"""
term_document_matrix=document_term_matrix.T
# .T is the easy transposition method for a
# matrix in python's matrix packages.
# import a bunch of packages we need
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import ward, dendrogram
#distance is 1-similarity, so:
dist=1-cosine_similarity(term_document_matrix)
# ward is an algorithm for hierarchical clustering
linkage_matrix=ward(dist)
#plot dendrogram
f=plt.figure(figsize=(9,9))
R=dendrogram(linkage_matrix, orientation="right", labels=vocab)
plt.tight_layout()
"""
Explanation: HOMEWORK EXERCISE:
for a given document, find the most similar documents and give their titles from the csv file
you'll see!
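one possible sketch to get you started (not the full solution, and the exact title column name depends on toc.csv):
import numpy as np
doc = 100
nearest = np.argsort(similarity[doc])[::-1][1:6] #the five most similar narratives, skipping the document itself
then look those indices up as rows of title_info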
supervised vs. unsupervised learning
slides from class omitted
first example of unsupervised learning
hierarchical clustering
This time we're interested in relations among the words not the texts.
In other words, we're interested in the similarities between one column and another--one term and another term
So we'll work with the transposed matrix--the term-document matrix, rather than the document-term matrix.
For a description of hierarchical clustering, look at the example at https://en.wikipedia.org/wiki/Hierarchical_clustering
End of explanation
"""
vectorizer=TfidfVectorizer(min_df=.96, stop_words='english', use_idf=True)
#try a very high min_df
#rerun the model
document_term_matrix=vectorizer.fit_transform(our_texts)
vocab=vectorizer.get_feature_names()
#check the length of the vocab
len(vocab)
#switch again to the term_document_matrix
term_document_matrix=document_term_matrix.T
dist=1-cosine_similarity(term_document_matrix)
linkage_matrix=ward(dist)
#plot dendrogram
f=plt.figure(figsize=(9,9))
R=dendrogram(linkage_matrix, orientation="right", labels=vocab)
plt.tight_layout()
"""
Explanation: OMG U...G...L...Y!
WHAT THE? This is nonsense
what's the problem?
we just tried to plot a bunch o' features!
we need only the most significant words!
way to do this: change the min_df parameter in vectorizer
vectorizer=TfidfVectorizer(min_df=0.5, stop_words='english', use_idf=True)
more an art than a science
End of explanation
"""
|
osemer01/us-domestic-flight-performance
|
flights.ipynb
|
cc0-1.0
|
from mpl_toolkits.basemap import Basemap
import numpy as np
import matplotlib.pyplot as plt
import csv
import xlrd
%matplotlib inline
"""
Explanation: On Time Flight Performance of Domestic Flights in December 2014
Author Information:
Oguz Semerci<br>
oguz.semerci@gmail.com<br>
Introduction
In this report we analyze on-time performance data of domestic flights
in the USA for the month of December, 2014. Delays in airline traffic
can be attributed to many factors such as weather, security,
scheduling inefficiencies, imbalance between demand and capacity at
the airports as well as propagation of late arrivals and departures
between connecting flights. Our goal is to reveal patterns, or lack
thereof, of flight delays due to airport characteristics, carrier and
date and time of travel. More involved modelling of different possible
effects mentioned above is out of the scope of this report.
There are four sections in the report. Since iPython notebook is
chosen as the format, source codes implementing the described
computations are also presented in each section. Section I describes
the steps for loading, merging and cleaning the data sets in hand. An
exploratory analysis of selected attributes and their relation to
on-time performance is given in Section II. Section III describes a logistic regression model for the estimation of delay probability. Finally, Section IV summarizes the report and provides some future directions.
I. Data Preperation
Let us first import the modules that will be used:
End of explanation
"""
book = xlrd.open_workbook('airports_new.xlt')
sheet = book.sheet_by_index(0)
airport_data = [[sheet.cell_value(i,j) for j in range(sheet.ncols)] for i in range(sheet.nrows)]
#convert to dictionary for easy loop-up
airport_dict = {}
for j in range(len(airport_data[0])):
key = airport_data[0][j]
airport_dict[key] = [airport_data[i][j] for i in range(1,len(airport_data))]
book = xlrd.open_workbook('carriers.xls')
sheet = book.sheet_by_index(0)
#every other row in the 'carriers.xls' sheet is empty
carrier_data = [[sheet.cell_value(i,j) for j in range(sheet.ncols)]
for i in range(0,sheet.nrows,2)]
#convert to dictionary for easy look-up
carrier_dict = {}
for j in range(len(carrier_data[0])):
key = carrier_data[0][j]
carrier_dict[key] = [carrier_data[i][j] for i in range(1,len(carrier_data))]
print('Fields in the additional carrier data set:')
print('-----------------------------------------')
for key in carrier_dict.keys():
print(key)
print('')
print('Fields in the additional airport data set:')
print('-----------------------------------------')
for key in airport_dict.keys():
print(key)
"""
Explanation: Load Additional Data Sets
End of explanation
"""
delay_data = []
f = open('532747144_T_ONTIME.csv', 'r')
reader = csv.reader(f)
delay_data_header = next(reader,None)
for row in reader:
delay_data.append(row)
f.close()
"""
Explanation: Load On-Time Performance Data
We downloaded the on time performance data from the Bureau of Transportation Statistics for December, 2014.
End of explanation
"""
for i,s in enumerate(delay_data_header):
print(str(i) + ': ' + s)
"""
Explanation: List of the fields in the delay_data array for reference:
End of explanation
"""
delay_data = [d[:-1] for d in delay_data]
delay_data_header = delay_data_header[:-1]
"""
Explanation: Last column is empty. Let's remove it from our data.
End of explanation
"""
#remove cancelled flights
delay_data = [d for d in delay_data if d[16] != '1.00']
"""
Explanation: Remove Canceled Flights and Flights with Missing Information
We are only concerned with flights that were actually conducted. Therefore, let us remove the canceled flights from the data.
End of explanation
"""
#determine the rows with missing data:
rows_with_missing_data = []
for i in range(len(delay_data)):
for j in range(20):
if len(delay_data[i][j]) == 0:
rows_with_missing_data.append(i)
break
"""
Explanation: Now a quick glance and printing some of the rows reveal that some flights have missing information. We remove them from the data for the sake of completeness. Note that fields 20:24 are empty when arrival delay <= 0.
End of explanation
"""
i = rows_with_missing_data[0]
print('Example row in the data with missing entries:\n')
for j in range(len(delay_data[i])):
print(delay_data_header[j] + ': ' + str(delay_data[i][j]))
#remove rows with missing entries:
delay_data = [delay_data[i] for i in range(len(delay_data)) if i not in rows_with_missing_data]
"""
Explanation: For example observe that the flight below is missing arrival delay and air time information. This is possibly because that particular flight was diverted (hopefully).
End of explanation
"""
float_index = set([11,12,13,15,17,18,19,20,21,22,23,24])
for i in range(len(delay_data)):
for j in float_index:
if len(delay_data[i][j]) > 0:
delay_data[i][j] = float(delay_data[i][j])
else:
#delay type fields
delay_data[i][j] = 0.0
int_index = set([1,2])
for i in range(len(delay_data)):
for j in int_index:
delay_data[i][j] = int(delay_data[i][j])
"""
Explanation: Now let's convert the fields with numerical values to float. Also note that the delay-type fields (columns 20:24) are empty if arrival delay <= 0. We will fill those empty cells with zeros.
End of explanation
"""
#get the list of unique carrires:
carrier_ID = set()
airport_ID = set()
for d in delay_data:
carrier_ID.add(d[3])
airport_ID.add(d[4])
airport_ID.add(d[7])
#count total arrivals and departures from each airport
flight_count_dict = {iata: 0 for iata in airport_ID}
for d in delay_data:
flight_count_dict[d[4]] += 1
flight_count_dict[d[7]] += 1
pairs = []
for key, value in flight_count_dict.items():
pairs.append((key,value))
#sort airports by total number of flights, busiest first
pairs.sort(key = lambda x: x[1], reverse = True)
"""
Explanation: Keep data only from the busiest airports
Now, we assume that the dynamics of busy airports might be significantly different from those of less busy ones. We would like to discard flights to and from smaller airports so that the delay time dynamics are somewhat similar for each data point. To this end we sorted all the airports by the total number of incoming and outgoing flights in December 2014. We decide to investigate the 50 busiest airports based on a visual inspection of their flight counts:
End of explanation
"""
c = [c for a,c in pairs]
a = [a for a,c in pairs]
plt.figure(figsize = (20,4))
N = 60
plt.plot(c[:N])
plt.xticks(range(N), a[:N], fontsize = 8)
plt.ylabel('Total Number of Flights')
plt.xlabel('Airport IATA')
plt.grid()
plt.axvline(49, color = 'r')
plt.show()
print('\n'+'Use data from 50 most busy airports according to number of total incoming and outgoing domestic flights')
"""
Explanation: Decide the cut-off point via visual inspection
End of explanation
"""
airports_to_keep = [a for a,c in pairs[:52]]
delay_data2 = [d for d in delay_data if (d[4] in airports_to_keep and d[7] in airports_to_keep)]
print('Size of the dataset is reduced from ' + str(len(delay_data)) + ' to ' + str(len(delay_data2)))
#let's delete the large dataset
delay_data = delay_data2
"""
Explanation: Remove data from non-busy airports
End of explanation
"""
#find out carrier names from carrier_data
carrier_info = {}
for code in carrier_ID:
k = carrier_dict['Code'].index(code)
carrier_info[code] = carrier_dict['Description'][k]
"""
Explanation: Now let's merge information of carriers and airports into two dictionaries 'carrier_info' and 'airport_info' for easy access during analysis.
Get Carrier Information
End of explanation
"""
airport_info = {}
for iata in airports_to_keep:
k = airport_dict['iata'].index(iata)
airport_info[iata] = {key: airport_dict[key][k] for key in airport_dict.keys()}
"""
Explanation: Get Airport Information
End of explanation
"""
dep_delay_time_vector = [d[11] for d in delay_data]
arr_delay_time_vector = [d[15] for d in delay_data]
print('Departure Delay Stats in minutes:')
print('--------------------------------')
print('95th percentile: ' + str(np.percentile(dep_delay_time_vector, 95)))
print('75th percentile: ' + str(np.percentile(dep_delay_time_vector, 75)))
print('5th percentile : ' + str(np.percentile(dep_delay_time_vector, 5)))
print('median : ' + str(np.median(dep_delay_time_vector)))
print('mean : ' + str(np.mean(dep_delay_time_vector)))
print('std : ' + str(np.std(dep_delay_time_vector)))
print('')
print('Arrival Delay Stats in minutes:')
print('--------------------------------')
print('95th percentile: ' + str(np.percentile(arr_delay_time_vector, 95)))
print('75th percentile: ' + str(np.percentile(arr_delay_time_vector, 75)))
print('5th percentile : ' + str(np.percentile(arr_delay_time_vector, 5)))
print('median : ' + str(np.median(arr_delay_time_vector)))
print('mean : ' + str(np.mean(arr_delay_time_vector)))
print('std : ' + str(np.std(arr_delay_time_vector)))
"""
Explanation: Removal of outliers with very large delay times
The example above also reveals that some departure delays are ridiculously high. We can consider them outliers, as they are most probably caused by some irrelevant incident beyond the scope of this investigation. Let's plot the histogram of departure delays and determine a cut-off point for outliers. Note that early arrivals and departures are given as negative values. Alternatively, we could take the 95th percentile. Let's investigate:
End of explanation
"""
arr_5th = np.percentile(arr_delay_time_vector, 5)
arr_95th = np.percentile(arr_delay_time_vector, 95)
dep_5th = np.percentile(dep_delay_time_vector, 5)
dep_95th = np.percentile(dep_delay_time_vector, 95)
fig = plt.figure(figsize = (16,3))
ax1 = plt.subplot(141)
ax2 = plt.subplot(142)
ax3 = plt.subplot(143)
ax4 = plt.subplot(144)
_,_,_ = ax1.hist(dep_delay_time_vector, bins = 30, range = [dep_5th, dep_95th])
ax1.set_xlabel('delay [min]')
ax1.set_ylabel('number of flights')
ax1.set_title('Departure Delay Histogram')
_,_,_ = ax2.hist(arr_delay_time_vector, bins = 30, range = [arr_5th, arr_95th])
ax2.set_xlabel('delay [min]')
ax2.set_title('Arrival Delay Histogram')
ax2.set_ylabel('number of flights')
_,_,_ = ax3.hist([a-b for a,b in zip(arr_delay_time_vector,dep_delay_time_vector)], bins = 30)
ax3.set_xlabel('delay [min]')
ax3.set_title('Arrival-Departure Delay Histogram')
ax3.set_ylabel('number of flights')
corr_coef = np.corrcoef(dep_delay_time_vector,arr_delay_time_vector)[0,1]
ax4.scatter(dep_delay_time_vector,arr_delay_time_vector)
ax4.set_xlim([-50,1500])
ax4.set_ylim([-50,1500])
ax4.set_title('correlation coefficient: %2.2f' %(corr_coef) )
ax4.set_xlabel('departure delay [min]')
ax4.set_ylabel('arrival delay [min]')
plt.tight_layout()
plt.show()
"""
Explanation: Let's plot histograms for departure and arrival delays in December 2014, as well as a scatter plot of departure versus arrival delays. Note that we restrict the range of data points to the [5th, 95th] percentile for the arrival and departure delay histograms.
End of explanation
"""
N = len(dep_delay_time_vector)
delay_data = [delay_data[i] for i in range(N) if dep_delay_time_vector[i] < 69]
"""
Explanation: As expected, departure and arrival delays are highly correlated. Let us first remove outliers in terms of departure delay. The 95th percentile gives a departure delay of 69 minutes, which is not too drastic. Therefore, we remove flights with a departure delay larger than 69 minutes. Note the very large departure delay times in the scatter plot. We reason that those extreme values are governed by unusual events such as storms or erupting volcanoes, and so they need to be removed from our data.
End of explanation
"""
dep_delay_time_vector = [d[11] for d in delay_data]
arr_delay_time_vector = [d[15] for d in delay_data]
print('Departure Delay Stats in minutes:')
print('--------------------------------')
print('95th percentile: ' + str(np.percentile(dep_delay_time_vector, 95)))
print('75th percentile: ' + str(np.percentile(dep_delay_time_vector, 75)))
print('5th percentile : ' + str(np.percentile(dep_delay_time_vector, 5)))
print('median : ' + str(np.median(dep_delay_time_vector)))
print('mean : ' + str(np.mean(dep_delay_time_vector)))
print('std : ' + str(np.std(dep_delay_time_vector)))
print('')
print('Arrival Delay Stats in minutes:')
print('--------------------------------')
print('95th percentile: ' + str(np.percentile(arr_delay_time_vector, 95)))
print('75th percentile: ' + str(np.percentile(arr_delay_time_vector, 75)))
print('5th percentile : ' + str(np.percentile(arr_delay_time_vector, 5)))
print('median : ' + str(np.median(arr_delay_time_vector)))
print('mean : ' + str(np.mean(arr_delay_time_vector)))
print('std : ' + str(np.std(arr_delay_time_vector)))
arr_5th = np.percentile(arr_delay_time_vector, 5)
arr_95th = np.percentile(arr_delay_time_vector, 95)
dep_5th = np.percentile(dep_delay_time_vector, 5)
dep_95th = np.percentile(dep_delay_time_vector, 95)
fig = plt.figure(figsize = (16,3))
ax1 = plt.subplot(141)
ax2 = plt.subplot(142)
ax3 = plt.subplot(143)
ax4 = plt.subplot(144)
ax1.boxplot(arr_delay_time_vector)
ax1.set_ylabel('arrival delay [min]')
ax1.set_title('Arrival Delay Box Plot')
_,_,_ = ax2.hist(arr_delay_time_vector, bins = 30, range = [arr_5th, arr_95th])
ax2.set_xlabel('delay [min]')
ax2.set_title('Arrival Delay Histogram')
ax2.set_ylabel('number of flights')
_,_,_ = ax3.hist([a-b for a,b in zip(arr_delay_time_vector,dep_delay_time_vector)], bins = 30)
ax3.set_xlabel('delay [min]')
ax3.set_title('Arrival-Departure Delay Histogram')
ax3.set_ylabel('number of flights')
corr_coef = np.corrcoef(dep_delay_time_vector,arr_delay_time_vector)[0,1]
ax4.scatter(dep_delay_time_vector,arr_delay_time_vector)
ax4.set_xlim([-20,100])
ax4.set_ylim([-50,300])
ax4.set_title('correlation coefficient: %2.2f' %(corr_coef) )
ax4.set_xlabel('departure delay [min]')
ax4.set_ylabel('arrival delay [min]')
plt.tight_layout()
plt.show()
"""
Explanation: Next, let us see if we have outliers in the arrival delays after the removal of departure delay outliers.
End of explanation
"""
N = len(arr_delay_time_vector)
delay_data = [delay_data[i] for i in range(N) if arr_delay_time_vector[i] < 125]
"""
Explanation: Notice the correlation between departure delay and arrival delay is reduced to 0.75. The distribution of the difference of arrival and departure delays has a peaked shape and most of the points are in the [-50,50] minutes range. Scatter plot also reveals that points with arrival time greater than ~125 minutes are somewhat outside of the big cluster of points. With these observations we assume arrival delays greater than 125 minutes are outliers. It would have been interesting to investigate the the causes of these big delay times. However we are concerned with common patterns in the on-time performance of airline traffic.
End of explanation
"""
delay_data_dict = {}
for j in range(len(delay_data_header)):
key = delay_data_header[j]
delay_data_dict[key] = [delay_data[i][j] for i in range(len(delay_data))]
#let's approximate arrival and departure times by only their hour
delay_data_dict['ARR_TIME'] = [round( float(v)*1e-2 ) for v in delay_data_dict['ARR_TIME']]
delay_data_dict['DEP_TIME'] = [round( float(v)*1e-2 ) for v in delay_data_dict['DEP_TIME']]
"""
Explanation: Finally, let's convert delay_data to a set of dictionaries for easy access.
End of explanation
"""
print("Example: Info on Logan Airport: \n")
for key,value in airport_info['BOS'].items():
print(key + ': ' + str(value))
"""
Explanation: Let's summarize the available data
The dictionary 'airport_info' is indexed by the 'iata' code. We remind the reader that only the busiest 52 US airports were kept in the data set. Each airport has further information on its location. Let's look at Boston's Logan Airport as an example
End of explanation
"""
for key,value in carrier_info.items():
print(key + ': ' + value)
#we will not delve into data before 07. Let's make US: US Airways
carrier_info['US'] = 'US Airways Inc.'
"""
Explanation: The dictionary 'carrier_info' pairs carrier codes with airline names:
End of explanation
"""
for key in delay_data_dict.keys():
print(key)
"""
Explanation: The main data, 'delay_data_dict', is also in a dictionary format where the keys are the fields and each field holds all the samples for that field (feature) in the data set. Here are the fields one more time for reference. Note that 'UNIQUE_CARRIER' corresponds to the carrier codes in the carrier_info dictionary, whereas the DEST and ORIGIN fields are the 'iata' ids in the airport_info dictionary.
End of explanation
"""
s1 = set(delay_data_dict['UNIQUE_CARRIER'])
s2 = set(carrier_info.keys())
print(list(s1-s2))
print(list(s2-s1))
"""
Explanation: By the way, let's make sure that delay_data_dict does not have flight information on carriers that are not known to us:
End of explanation
"""
delays = [sum(delay_data_dict['CARRIER_DELAY']),
sum(delay_data_dict['WEATHER_DELAY']),
sum(delay_data_dict['NAS_DELAY']),
sum(delay_data_dict['SECURITY_DELAY']),
sum(delay_data_dict['LATE_AIRCRAFT_DELAY'])]
total = sum(delays)
delays = [100*d/total for d in delays]
print('Delay Cause Percentages:')
print('-----------------------')
print('Carrier delay : ' + str(delays[0]))
print('Weather delay : ' + str(delays[1]))
print('NAS delay : ' + str(delays[2]))
print('Security delay : ' + str(delays[3]))
print('Late Aircraft : ' + str(delays[4]))
"""
Explanation: II. Exploratory Analysis to Reveal Features That Affect On-time Performance
Let's look at the distribution of delay causes among all delays in 12/2015:
End of explanation
"""
N = len(delay_data_dict['ORIGIN']) # N: sample size
carrier_performance = {}
airport_performance = {}
#airport on time performance
for airport in airport_info.keys():
#departures:
    ind = [i for i in range(N) if delay_data_dict['ORIGIN'][i] == airport]
    total_flights = len(ind)
    on_time_flights = sum( [delay_data_dict['DEP_DELAY'][i] <= 15 for i in ind] )
    #arrivals:
    ind = [i for i in range(N) if delay_data_dict['DEST'][i] == airport]
total_flights += len(ind)
on_time_flights += sum( [delay_data_dict['ARR_DELAY'][i] - delay_data_dict['DEP_DELAY'][i] <= 15 for i in ind] )
if total_flights > 0:
airport_performance[airport] = {'total_flights': total_flights,
'on_time_flights': on_time_flights,
'on_time_ratio': on_time_flights/total_flights}
#carreir on time performance
for carrier in carrier_info.keys():
#departures:
ind = [i for i in range(N) if delay_data_dict['UNIQUE_CARRIER'][i] == carrier]
total_flights = len(ind)
on_time_flights = sum( [delay_data_dict['DEP_DELAY'][i] <= 15 for i in ind] )
if total_flights > 0:
carrier_performance[carrier] = {'total_flights': total_flights,
'on_time_flights': on_time_flights,
'on_time_ratio': on_time_flights/total_flights}
"""
Explanation: One can say that most of the delays are caused by 'relative congestion' at the airports, as more than 98% of the delays are attributed to carrier, NAS and late-aircraft related reasons. Weather also seems to affect the on-time performance. Please follow this link for definitions of the types of delays.
On Time Performance Analysis of Airports and Carriers
The airline traffic network is extremely complex, with interactions of many variables and propagation of delays during the day. Therefore we need to be careful in our definition of late flights. We already established the fact that departure delays are highly correlated with arrival delays.
We will use the following definitions for delay at airports:
At the origin, a departure delay larger than 15 minutes is counted as a late flight
At the destination, if the difference between arrival delay and departure delay is larger than 15 minutes, that flight is considered late. Note that this definition regarding the destination assumes that there are no causes of delay while the plane is en route in the air.
For carriers, we consider only late departures. (A tiny worked example of these definitions is given below.)
End of explanation
"""
name = []
code = []
on_time = []
flights = []
for key in carrier_performance.keys():
code.append(key)
name.append(carrier_info[key])
on_time.append(carrier_performance[key]['on_time_ratio'])
flights.append(carrier_performance[key]['total_flights'])
name, code, on_time, flights = zip( *sorted( zip(name, code, on_time, flights), key = lambda x: x[3], reverse = True ) )
fig = plt.figure(figsize = (15,3))
width = .6
ax1 = plt.subplot(121)
ax1.bar(range(len(on_time)), [1- v for v in on_time], width = width)
ax1.set_xticks(np.arange(len(on_time)) + width/2)
ax1.set_xticklabels(name, rotation = 90)
ax1.set_title('On-time Performance of Carriers in 12/2015')
ax1.set_ylabel('delay ratio')
ax2 = plt.subplot(122)
ax2.bar(range(len(on_time)), flights, width = width)
ax2.set_xticks(np.arange(len(on_time)) + width/2)
ax2.set_xticklabels(name, rotation = 90)
ax2.set_ylabel('total #of flights')
ax2.set_title('#of Flights in 12/2015')
plt.show()
fig = plt.figure(figsize=(3,3))
plt.scatter([1- v for v in on_time], flights)
#plt.xticks([0.14, 0.16, 0.20, 0.26])
plt.xlabel('delay ratio')
plt.ylabel('total #of flights')
plt.grid()
plt.show()
"""
Explanation: Overall on-time performance of carriers.
End of explanation
"""
#find the airlines within each category:
no_unless_its_really_cheap = []
not_bad = []
way_to_go = []
for c,v in zip(code, on_time):
r = 1-v
if r > 0.20:
no_unless_its_really_cheap.append(c)
elif r <= 0.15:
way_to_go.append(c)
else:
not_bad.append(c)
print('way_to_go carriers:')
print('------------------')
for c in way_to_go:
print(carrier_info[c])
"""
Explanation: The upper left plot shows that overall on-time performance varies quite a bit from carrier to carrier, whereas no correlation between a carrier's flight volume and its on-time performance is observed. We decide to divide the carriers into performance categories according to their overall delay ratios as follows:
no_unless_its_really_cheap: {delay ratio greater than 0.20}.
not_bad: {delay ratio greater than 0.15 and smaller than or equal to 0.20}.
way_to_go: {delay ratio smaller than or equal to 0.15}.
End of explanation
"""
lat = []
lon = []
name = []
on_time = []
flights = []
for key in airport_performance.keys():
name.append(airport_info[key]['airport'])
lat.append(airport_info[key]['lat'])
lon.append(airport_info[key]['long'])
on_time.append(airport_performance[key]['on_time_ratio'])
flights.append(airport_performance[key]['total_flights'])
fig = plt.figure(figsize=[12,10])
m = Basemap(llcrnrlon=-119,llcrnrlat=22,urcrnrlon=-64,urcrnrlat=49,
projection='lcc',lat_1=33,lat_2=45,lon_0=-95)
m.drawcoastlines(linewidth=1)
m.fillcontinents(color = 'green', lake_color = 'blue', alpha = 0.2)
m.drawcountries(linewidth=1)
x,y = m(lon, lat)
im = m.scatter(x,y, marker = 'o', s = np.array(flights)/10, c = on_time,
cmap = 'autumn')
cb = m.colorbar(im,'bottom')
cb.set_label('on time percentage', fontsize = '14')
plt.show()
"""
Explanation: Overall on-time performance of airports.
Let's visualize airport traffic and on time performance of all airports on the map of USA.
End of explanation
"""
middle_of_map = (min(lon)+max(lon))/2.0
distance_from_coasts = abs(np.array(lon)-np.array(middle_of_map))
fig = plt.figure(figsize = (14,5))
ax1 = plt.subplot(121)
im = ax1.scatter(flights, [1-v for v in on_time], marker = 'o', s = np.array(flights)/100, c = distance_from_coasts)
#cbar3 = plt.colorbar(im3, cax=cax3, ticks=MultipleLocator(0.2), format="%.2f")
cb = plt.colorbar(im)
cb.set_label('distance from coast [longitude]', fontsize = '14')
ax1.set_xlabel('number of flights', fontsize = '14')
ax1.set_ylabel('delay ratio', fontsize = '14')
x,y = zip(*sorted(zip(distance_from_coasts, [1- v for v in on_time]), key = lambda x: x[0]))
fit = np.polyfit(x,y,1)
fit_fn = np.poly1d(fit)
ax2 = plt.subplot(122)
ax2.plot(x, fit_fn(x), '--k', label = 'linear fit')
ax2.plot(x,y,'o-', label = 'data')
ax2.legend()
ax2.set_xlabel('distance from coast [longitude]', fontsize = '14')
ax2.set_ylabel('delay ratio', fontsize = '14')
plt.show()
"""
Explanation: In the map above, airport locations are shown with circles color coded according to on-time performance. The area of each circle is proportional to the total number of flights at that airport. As with the carriers, we observe no immediate relationship between flight volume and on-time percentage. One interesting question is whether there is a relationship between on-time performance and the closeness of the airport to either of the coasts (east or west). Since longitude maps nearly directly onto the east-west direction of the US map, we can measure the closeness of an airport to a coast by its longitudinal distance from the middle of the map. The scatter plot below and the plot of delay ratio as a function of distance from the coast investigate this possibility.
End of explanation
"""
name, on_time, flights = zip( *sorted( zip(name, on_time, flights), key = lambda x: x[2], reverse = True ) )
fig = plt.figure(figsize = (10,3))
width = .6
ax1 = plt.subplot(121)
ax1.bar(range(10), [1-v for v in on_time[:10]], width = width)
ax1.set_xticks(np.arange(10) + width/2)
ax1.set_xticklabels(name[:10], rotation = 90)
ax1.set_title('On-time Performance of Airports in 12/2015')
ax1.set_ylabel('delay ratio')
ax2 = plt.subplot(122)
ax2.bar(range(10), flights[:10], width = width)
ax2.set_xticks(np.arange(10) + width/2)
ax2.set_xticklabels(name[:10], rotation = 90)
ax2.set_title('#of Flights in 12/2015')
plt.show()
"""
Explanation: Observing the plots above, we can say that coastal distance and delay ratio are negatively correlated. Although longitude is a bit crude and a more precise computation of coastal distance is possible, we chose to use it as a continuous variable (predictor) in our model.
Finally in this section we list the ten busiest airports in the US and their on-time performances.
End of explanation
"""
total_flights_month = [0]*32
on_time_flights_month = [0]*32
avg_delay_month = [0]*32
total_flights_day = [0]*8
on_time_flights_day = [0]*8
avg_delay_day = [0]*8
total_flights_time = [0]*25
on_time_flights_time = [0]*25
avg_delay_time = [0]*25
N = len(delay_data_dict['ARR_DELAY']) #sample size
day_dict = {1:'mon',2:'tue',3:'wed',4:'thu',5:'fri',6:'sat',7:'sun'}
days = ['']*32
for i in range(N):
j = delay_data_dict['DAY_OF_MONTH'][i]
day = delay_data_dict['DAY_OF_WEEK'][i]
t = delay_data_dict['ARR_TIME'][i]
days[j] = day_dict[day] # keep list of days for indexing purposes
delay = delay_data_dict['ARR_DELAY'][i]
total_flights_month[j] += 1
total_flights_day[day] += 1
total_flights_time[t] += 1
if delay <= 15:
on_time_flights_month[j] += 1
on_time_flights_day[day] += 1
on_time_flights_time[t] += 1
avg_delay_month[j] += delay
avg_delay_day[day] += delay
avg_delay_time[t] += delay
avg_delay_time[24] += avg_delay_time[0]
avg_delay_month = np.array(avg_delay_month[1:]) / np.array(total_flights_month[1:])
avg_delay_day = np.array(avg_delay_day[1:]) / np.array(total_flights_day[1:])
avg_delay_time = np.array(avg_delay_time[1:]) / np.array(total_flights_time[1:])
delay_ratio_month = 1.0 - np.array(on_time_flights_month[1:]) / np.array(total_flights_month[1:])
delay_ratio_day = 1.0 - np.array(on_time_flights_day[1:]) / np.array(total_flights_day[1:])
delay_ratio_time = 1.0 - np.array(on_time_flights_time[1:]) / np.array(total_flights_time[1:])
day = days[1:]
fig = plt.figure(figsize=(12,3))
plt.subplot(121)
plt.plot(avg_delay_day, 'o-')
plt.xticks(range(0,8), [day_dict[i] for i in range(1,8)])
plt.title('Average Delay (Early Arrivals Are Accounted)')
plt.ylabel('delay [min]')
plt.grid()
plt.subplot(122)
plt.plot(delay_ratio_day, 'o-')
plt.xticks(range(0,8), [day_dict[i] for i in range(1,8)])
plt.title('Delay Ratio')
plt.grid()
plt.show()
fig = plt.figure(figsize=(3,3))
plt.scatter(delay_ratio_day,avg_delay_day)
plt.xticks([0.14, 0.16, 0.20, 0.26])
plt.xlabel('delay ratio')
plt.ylabel('average delay [min]')
plt.grid()
plt.show()
"""
Explanation: Analysis of on-time performance in terms of flight date and time
When considering delays for each trip what passengers are really concerned acout is the arrival delay. Also considering the correlation between departure and arrival delays as well possible accumulation of delays we only count arrival delays when analyzing daily trends.
End of explanation
"""
fig = plt.figure(figsize=(20,6))
plt.subplot(221)
plt.plot(avg_delay_month, 'o-')
plt.xticks(range(5,32,7), days[6::7])
plt.title('Average Delay (Early Arrivals Are Accounted)')
plt.ylabel('delay [min]')
plt.grid()
plt.subplot(222)
plt.plot(delay_ratio_month, 'o-')
plt.xticks(range(0,31),range(1,32))
plt.ylabel('delay ratio')
plt.title('Delay Ratio for the day of month')
plt.axhline(y = 0.20, color = 'r')
plt.axhline(y = 0.15, color = 'r')
plt.grid()
plt.subplot(224)
plt.plot(total_flights_month[1:], 'o-')
plt.xticks(range(5,32,7), days[6::7])
plt.title('Total Number of Flights')
plt.grid()
fig.tight_layout()
plt.show()
"""
Explanation: It is interesting to observe that Tuesday is the day with the highest probability of delay. Note that in 2014 Christmas day was a Thursday. The behaviour above can be due to congestion two days before Christmas. We investigate this possibility below when we analyze the daily patterns within the month. Here we also show the correlation between average delay in minutes and delay ratio with the scatter plot above. Similar behaviours are observed in weekly and hourly patterns.
End of explanation
"""
#find the airlines within each category:
very_bad_days = []
bad_days = []
good_days = []
for k in range(31):
r = delay_ratio_month[k]
if r > 0.20:
very_bad_days.append(k+1)
elif r <= 0.15:
good_days.append(k+1)
else:
bad_days.append(k+1)
print('very_bad_days:')
print('-------------')
print(very_bad_days)
"""
Explanation: We notice the weekly periodicity of delay times and ratios, where Tuesday-Friday has a higher delay ratio than Saturday-Monday. More interestingly, we also notice how the cycle breaks exactly one week before Christmas, Thursday December 18th. The first Tuesday and Wednesday also tend to differ from the general pattern, perhaps due to their closeness to Thanksgiving. Also notice the two peaks on December 23rd and 30th, which were both Tuesdays: even though the number of flights is similar to the Tuesdays before, the delay ratios are approximately doubled. Finally, we note that the day-of-month analysis is more informative than the day-of-week analysis, as the effects of the holiday season (Thanksgiving and Christmas) tend to perturb the daily patterns.
Due to the several interactions of holidays and weekly patterns, we decide to simply categorize the days of the month as {good_day, bad_day, very_bad_day} according to the following definitions:
very_bad_day: {days of the month with an average delay ratio greater than 0.20}. For example December 2, 11, 19 are very bad days.
bad_day: {days of the month with an average delay ratio greater than 0.15 and smaller than or equal to 0.20}
good_day: {days of the month with an average delay ratio smaller than or equal to 0.15}
End of explanation
"""
fig = plt.figure(figsize=(20/31*24,6))
plt.subplot(221)
plt.plot(avg_delay_time, 'o-')
plt.xticks(range(0,24),range(0,24))
plt.title('Average Delay (Early Arrivals Are Accounted)')
plt.ylabel('delay [min]')
plt.xlabel('hour')
plt.grid()
plt.subplot(222)
plt.plot(delay_ratio_time, 'o-')
plt.axvline(x = 3, color = 'r')
plt.axvline(x = 16, color = 'r')
plt.axvline(x = 23, color = 'r')
plt.xticks(range(0,24),range(0,24))
plt.title('Delay Ratio for time of the day')
plt.xlabel('hour')
plt.ylabel('delay ratio')
plt.grid()
plt.subplot(224)
plt.plot(total_flights_time[1:], 'o-')
plt.xticks(range(0,24),range(0,24))
plt.title('Total number of Flights')
plt.xlabel('hour')
fig.tight_layout()
plt.grid()
plt.show()
"""
Explanation: Finally, let's investigate hourly patterns.
End of explanation
"""
#find the airlines within each category:
morning = range(3,13)
afternoon = range(13,17)
evening = range(17,23)
night = [23,24,0,1,2]
"""
Explanation: Inspired by the plots above we define the following categories for the hour of the day:
morning: {03:00-12:00}
afternoon: {13:00-16:00}
evening: {17:00-22:00}
night: {23:00-02:00}
End of explanation
"""
#create the data set dictionary and target vector Y
from sklearn.feature_extraction import DictVectorizer
training_set = []
Y = []
N = len(delay_data_dict['ARR_DELAY'])
for i in range(N):
lon = airport_info[delay_data_dict['DEST'][i]]['long']
coastal_dist = abs(np.array(lon)-np.array(middle_of_map))
arr_time = delay_data_dict['ARR_TIME'][i]
if arr_time in morning:
arr_time = 'morning'
elif arr_time in afternoon:
arr_time = 'afternoon'
elif arr_time in evening:
arr_time= 'evening'
else:
arr_time = 'night'
arr_day = delay_data_dict['DAY_OF_MONTH'][i]
if arr_day in good_days:
arr_day = 'good_days'
elif arr_day in bad_days:
arr_day= 'bad_days'
else:
arr_day = 'very_bad_days'
carrier = delay_data_dict['UNIQUE_CARRIER'][i]
if carrier in no_unless_its_really_cheap:
carrier = 'no_unless_its_really_cheap'
elif carrier in not_bad:
carrier = 'not_bad'
else:
carrier = 'way_to_go'
training_set.append({'bias': 1.0,'coastal_dist': coastal_dist, 'arr_time': arr_time, 'arr_day': arr_day, 'carrier': carrier})
Y.append(int(delay_data_dict['ARR_DELAY'][i]>15))
vec = DictVectorizer()
X = vec.fit_transform(training_set).toarray()
#Train our Logistic Regression Model
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.cross_validation import StratifiedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler
Y = np.array(Y)
ratio0 = len(Y[Y==0])/len(Y)
model = LogisticRegression(fit_intercept = False)
model = model.fit(X, Y)
train_accuracy = model.score(X, Y)
Y = np.array(Y)
print('Ratio of the on-time flights in the data-set: {}'.format(ratio0))
print('\nTraining score: {}'.format(train_accuracy))
print('\nClassification report on training data:\n')
Y_pred = model.predict(X)
print(classification_report(Y, Y_pred))
cv = StratifiedKFold(Y, n_folds = 5)
cv_score = cross_val_score(model, X, Y, cv = cv)
print('\n5-fold cross validation score: {}'.format(np.mean(cv_score)))
"""
Explanation: III. A Logistic Regression Model for Estimating the Delay Probabilities
In this section a logistic regression model for the estimation of delay probability is described and implemented. Let us start with a summary of variables identified in Section II:
carrier: {no_unless_its_really_cheap, not_bad, way_to_go}
arrival airport: coastal distance given in longitude
day of the month: {good_day, bad_day, very_bad_day}
time of the day: {morning, afternoon, evening, night}
We stick to the definition of delay (using only the arrival delay) that we used when computing the date and time related patterns in Section II. Therefore we define the target vector $Y$ with components equal to zero for on-time flights and one for delayed flights. (A small illustrative example of the feature encoding is shown below.)
End of explanation
"""
from sklearn.metrics import roc_curve, auc
probas_ = model.predict_proba(X)
fpr, tpr, thresholds = roc_curve(Y, probas_[:,1])
roc_score = auc(fpr, tpr)
plt.plot(fpr, tpr)
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.plot([0, 1], [0, 1], '--k')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.show()
"""
Explanation: The performance of our logistic classifier is basically the same as predicting that a flight will always be on time.
Also, only about 60% of the flights that were predicted to be delayed were actually delayed (precision for label 1 is 0.59).
Only 2% of the delayed flights were correctly classified.
It may help to take a look at the ROC curve to get more insight on the choice of the threshold (a small illustrative threshold experiment is included below):
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
model_rf = RandomForestClassifier()
model_rf.fit(X,Y)
train_accuracy = model_rf.score(X, Y)
print('\nTraining score - Random Forest Classifier: {}'.format(train_accuracy))
print('\nClassification report on training data:\n')
Y_pred_rf = model_rf.predict(X)
print(classification_report(Y, Y_pred_rf))
cv = StratifiedKFold(Y, n_folds = 5)
cv_score = cross_val_score(model_rf, X, Y, cv = cv)
print('\n5-fold cross validation score: {}'.format(np.mean(cv_score)))
"""
Explanation: We have been very crude in our design of features. The logistic regression did not work well with the current model.
Let's also try a random forest classifier, but I believe the issue is in the feature engineering.
End of explanation
"""
features = vec.get_feature_names()
coeffs = model.coef_[0]
print('%34s %20s' %('Feature:', 'Coefficient:'))
print('%34s %20s' %('-'*34, '-'*20))
for f,c in zip(features, coeffs):
print(('%34s %20.4f' %(f, c)))
"""
Explanation: The fairly good test error reported above increases our confidence in our model. Finally, let us report the coefficients of the logistic regression model:
End of explanation
"""
|
wtbarnes/aia_response
|
notebooks/import_genx_files.ipynb
|
mit
|
import sys
import os
import numpy as np
import scipy.io
from astropy.table import Table
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')
%matplotlib inline
"""
Explanation: Parse .genx Files
Parse .genx files from SSW into Python in order to calculate AIA wavelength response functions.
End of explanation
"""
v6_all_fullinst_genx = scipy.io.readsav('/Users/willbarnes/Documents/Projects/gsoc/aia_response/ssw_aia_response_data/aia_V6_all_fullinst')
"""
Explanation: Next, open up the instrument file. We'll use the most recent version of the files, V6. The _fullinst file should contain most of the information that we need. These .genx files have been read into IDL and then resaved as normal IDL .sav files.
End of explanation
"""
v6_all_fullinst_genx = v6_all_fullinst_genx['data']
"""
Explanation: Because of the way the files were saved, all of the information is inside the data keyword.
End of explanation
"""
type(v6_all_fullinst_genx)
"""
Explanation: The data structure returned here is a numpy.recarray. Here is a brief tutorial on how to use these data structures.
End of explanation
"""
v6_all_fullinst_genx.dtype.names
"""
Explanation: What are the members and how do we access them?
End of explanation
"""
v6_all_fullinst_genx['a94_full'][0].dtype.names
v6_all_fullinst_genx['a94'][0]['wave']
"""
Explanation: Let's look at the 94 Å data. It turns out that each member is a numpy array of length one and that the first (and only) entry is again a numpy.recarray. So let's see what this data structure looks like.
NOTE: A<CHANNEL_NUM> just contains wavelength, effective area, and the platescale while A<CHANNEL_NUM>_FULL contains many more pieces of data, such as information about the primary and secondary mirrors and the CCD. Furthermore, A<CHANNEL_NUM>_FILE is just a single filename, maybe where this data lived originally though it is not clear where this file actually lives or if it is accessible at all.
End of explanation
"""
fig = plt.figure(figsize=(8,8))
ax = fig.gca()
cp = sns.color_palette('hls',int(len(v6_all_fullinst_genx['channels'][0])))
for channel,i in zip(v6_all_fullinst_genx['channels'][0],range(len(v6_all_fullinst_genx['channels'][0]))):
#skip the thick channels
if b'thick' in channel or b'THICK' in channel:
continue
print('Plotting channel %s'%(channel.decode('utf8')))
wavelength,a_eff = v6_all_fullinst_genx[channel.decode('utf8')][0]['wave'][0],v6_all_fullinst_genx[str(channel.decode('utf8'))][0]['ea'][0]
ax.plot(wavelength,a_eff/np.max(a_eff),label=channel.decode('utf8')[1::]+' angstrom',color=cp[i])
ax.set_xlabel(r'Wavelength (angstroms)')
ax.set_ylabel(r'Effective Area (normalized)')
ax.set_xlim([80,350])
ax.legend(loc='best')
"""
Explanation: Try plotting the effective area as a function of $\lambda$ for each channel.
End of explanation
"""
chan = np.array([93.9, 131.2, 171.1, 195.1, 211.3, 303.8, 335.4])
G = np.array([2.128, 1.523, 1.168, 1.024, 0.946, 0.658, 0.596])
g = G/12398.0*3.65*chan
boerner_table_2 = Table([chan,G,g],names=('Channel', '$G$', '$g$'))
boerner_table_2['Channel'].unit = 'angstrom'
boerner_table_2['$G$'].unit = 'DN/photon'
boerner_table_2['$g$'].unit = 'DN/electron'
boerner_table_2
"""
Explanation: Similarly, we should be able to calculate $A_{eff}(\lambda)$ according to the equation in section 2 of Boerner et al. (2012),
$$
A_{eff} = A_{geo}R_pR_sT_eT_fD(t)Q
$$
where
$A_{geo}$: geometrical collecting area of the mirror, geoarea
$R_p,R_s$: reflectance of the primary and secondary mirrors, primary, secondary
$T_e,T_f$: transmission efficiency of the entry and focal-plane filters, ent_filter, fp_filter
$D(t)$: time-varying degradation due to contamination or deterioration, contam
$Q$: quantum efficiency of the CCD, ccd
Then, we are able to calculate the wavelength response function,
$$
R_i(\lambda) = A_{eff,i}(\lambda,t)G(\lambda)
$$
where $G(\lambda)=(12398/\lambda/3.65)g$ is the gain of the CCD-camera system in DN per photon. $g$ is the camera gain in DN per electron.
$g$ does not seem to be included in any of the data files. From Table 2 of Boerner et al. (2012),
End of explanation
"""
1.0/v6_all_fullinst_genx['a94_full'][0]['elecperdn'][0]
"""
Explanation: So $g$ is roughly constant and is approximately $g\approx0.0588$
End of explanation
"""
def ccd_gain(wavelength):
g = 0.0588
return 12398.0/wavelength/3.65*g
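# Quick sanity check (illustrative only): evaluate the gain at the Table 2 channel
# wavelengths and compare with the quoted G values in DN/photon.
for wvl, G_table in zip(chan, G):
    print('%6.1f angstrom: G = %.3f DN/photon (Table 2: %.3f)' % (wvl, ccd_gain(wvl), G_table))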
"""
Explanation: So it seems that this parameter is included in the data file as $1/g$? However, it is not clear why this number is off by about $4\times10^{-3}$. For now let's just make a gain function that sets $g=0.0588$.
End of explanation
"""
def wvl_response(data_struct):
response = {}
for channel in data_struct['channels'][0]:
if b'thick' in channel or b'THICK' in channel:
continue
full_channel = data_struct[channel.decode('utf8')+'_FULL'][0]
wave = full_channel['wave'][0]
effective_area_file = full_channel['effarea'][0]
a_geo,rp,rs,te,tf,d,q = full_channel['geoarea'][0],full_channel['primary'][0],full_channel['secondary'][0],full_channel['ent_filter'][0],full_channel['fp_filter'][0],full_channel['contam'][0],full_channel['ccd'][0]
G = ccd_gain(wave)
response_calc = a_geo*rp*rs*te*tf*d*q*G
response_file = effective_area_file*G
response[channel.decode('utf8')] = {'wave':wave,'file':response_file,'calc':response_calc}
return response
"""
Explanation: Now let's make a function that will calculate the wavelength response and return both the calculated version and the one derived from the effective area read from the file.
End of explanation
"""
response = wvl_response(v6_all_fullinst_genx)
fig = plt.figure(figsize=(8,8))
ax = fig.gca()
cp = sns.color_palette('hls',len(response))
for key,i in zip(response,range(len(response))):
ax.plot(response[key]['wave'],response[key]['calc']/np.max(response[key]['calc']),color=cp[i],label=key)
ax.plot(response[key]['wave'],response[key]['file']/np.max(response[key]['file']),color=cp[i],linestyle='dashed')
ax.set_xlim([80,350])
ax.legend(loc='best')
ax.set_ylabel(r'$R_i(\lambda)$')
ax.set_xlabel(r'$\lambda$')
"""
Explanation: Now do the calculations and plot the functions on top of each other.
End of explanation
"""
|
google/iree
|
samples/colab/edge_detection.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
"""
Explanation: Copyright 2020 The IREE Authors
End of explanation
"""
!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://github.com/google/iree/releases
#@title Imports
import os
import tempfile
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
from iree import runtime as ireert
from iree.tf.support import module_utils
from iree.compiler import compile_str
from iree.compiler import tf as tfc
#@title Setup Artifacts Directory
# Used in the low-level compilation section.
ARTIFACTS_DIR = os.path.join(tempfile.gettempdir(), "iree", "colab_artifacts")
os.makedirs(ARTIFACTS_DIR, exist_ok=True)
#@title Define the EdgeDetectionModule
class EdgeDetectionModule(tf.Module):
@tf.function(input_signature=[tf.TensorSpec([1, 128, 128, 1], tf.float32)])
def edge_detect_sobel_operator(self, image):
# https://en.wikipedia.org/wiki/Sobel_operator
sobel_x = tf.constant([[-1.0, 0.0, 1.0],
[-2.0, 0.0, 2.0],
[-1.0, 0.0, 1.0]],
dtype=tf.float32, shape=[3, 3, 1, 1])
sobel_y = tf.constant([[ 1.0, 2.0, 1.0],
[ 0.0, 0.0, 0.0],
[-1.0, -2.0, -1.0]],
dtype=tf.float32, shape=[3, 3, 1, 1])
gx = tf.nn.conv2d(image, sobel_x, 1, "SAME")
gy = tf.nn.conv2d(image, sobel_y, 1, "SAME")
return tf.math.sqrt(gx * gx + gy * gy)
tf_module = EdgeDetectionModule()
#@title Load a test image of a [labrador](https://commons.wikimedia.org/wiki/File:YellowLabradorLooking_new.jpg) and run the module with TF
def load_image(path_to_image):
image = tf.io.read_file(path_to_image)
image = tf.image.decode_image(image, channels=1)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, (128, 128))
image = image[tf.newaxis, :]
return image
content_path = tf.keras.utils.get_file(
'YellowLabradorLooking_new.jpg',
'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg')
image = load_image(content_path).numpy()
def show_images(image, edges):
fig, axs = plt.subplots(1, 2)
axs[0].imshow(image.reshape(128, 128), cmap="gray")
axs[0].set_title("Input image")
axs[1].imshow(edges.reshape(128, 128), cmap="gray")
axs[1].set_title("Output image")
axs[0].axis("off")
axs[1].axis("off")
fig.tight_layout()
fig.show()
# Invoke the function with the image as an argument
tf_edges = tf_module.edge_detect_sobel_operator(image).numpy()
# Plot the input and output images
show_images(image, tf_edges)
"""
Explanation: Image edge detection module
Setup
End of explanation
"""
#@markdown ### Backend Configuration
backend_choice = "iree_vmvx (CPU)" #@param [ "iree_vmvx (CPU)", "iree_llvmaot (CPU)", "iree_vulkan (GPU/SwiftShader)" ]
backend_choice = backend_choice.split(" ")[0]
backend = module_utils.BackendInfo(backend_choice)
#@title Compile and Run the EdgeDetectionModule with IREE.
module = backend.compile_from_class(EdgeDetectionModule)
# Compute the edges using the compiled module and display the result.
iree_edges = module.edge_detect_sobel_operator(image)
show_images(image, iree_edges)
"""
Explanation: High Level Compilation With IREE
End of explanation
"""
#@title Construct a module containing the edge detection function
# Do *not* further compile to a bytecode module for a particular backend.
#
# By stopping at mhlo in text format, we can more easily take advantage of
# future compiler improvements within IREE and can use iree_bytecode_module to
# compile and bundle the module into a sample application. For a production
# application, we would probably want to freeze the version of IREE used and
# compile as completely as possible ahead of time, then use some other scheme
# to load the module into the application at runtime.
compiler_module = tfc.compile_module(EdgeDetectionModule(), import_only=True)
print("Edge Detection MLIR: ", compiler_module.decode('utf-8'))
edge_detection_mlir_path = os.path.join(ARTIFACTS_DIR, "edge_detection.mlir")
with open(edge_detection_mlir_path, "wt") as output_file:
output_file.write(compiler_module.decode('utf-8'))
print(f"Wrote MLIR to path '{edge_detection_mlir_path}'")
#@title Compile and prepare to test the edge detection module
flatbuffer_blob = compile_str(compiler_module, target_backends=["vmvx"], input_type="mhlo")
vm_module = ireert.VmModule.from_flatbuffer(flatbuffer_blob)
# Register the module with a runtime context.
config = ireert.Config(backend.driver)
ctx = ireert.SystemContext(config=config)
ctx.add_vm_module(vm_module)
edge_detect_sobel_operator_f = ctx.modules.module["edge_detect_sobel_operator"]
low_level_iree_edges = edge_detect_sobel_operator_f(image)
show_images(image, low_level_iree_edges)
"""
Explanation: Low-Level Compilation
Overview:
Convert the tf.Module into an IREE compiler module (using mhlo)
Save the MLIR assembly from the module into a file (can stop here to use it from another application)
Compile the mhlo MLIR into a VM module for IREE to execute
Run the VM module through IREE's runtime to test the edge detection function
End of explanation
"""
|
PyLCARS/PythonUberHDL
|
myHDL_ComputerFundamentals/Memorys/.ipynb_checkpoints/Memory-checkpoint.ipynb
|
bsd-3-clause
|
from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import random
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
        print(f'***Verilog module from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
        print(f'***VHDL module from {loc}.vhd***\n\n', VerilogText)
return VerilogText
"""
Explanation: \title{Memories in myHDL}
\author{Steven K Armour}
\maketitle
End of explanation
"""
#cast a list comprehension to a tuple to store 0-9 in 8-bit binary (myHDL's bin() takes a width)
TupleROM=tuple([bin(i, 8) for i in range(10)])
TupleROM
f'access location 6: {TupleROM[6]}, read contents of location 6 as dec: {int(TupleROM[6], 2)}'
"""
Explanation: RTL and Implementation schematics are from Xilinx Vivado 2016.1
Read Only Memory (ROM)
ROM is a memory structure that holds static information that can only be read from. In other words, it is hard-coded memory that should never change. Furthermore, this data is held in a sort of array; for example, we can think of a Python tuple as a kind of read-only memory, since the content of a tuple is static and we use array indexing to access certain portions of the memory.
End of explanation
"""
#TupleROM[6]=bin(16,2)
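# Demonstrating the failure safely (sketch): item assignment on a tuple raises a
# TypeError, which is exactly the "read only" behaviour we want from a ROM.
try:
    TupleROM[6] = bin(16, 2)
except TypeError as err:
    print('write rejected:', err)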
"""
Explanation: And if we try writing to the tuple we will get an error
End of explanation
"""
@block
def ROMLoaded(addr, dout):
"""
    A ROM loaded with data already encoded in the structure
    instead of using myHDL's enhanced parameter loading
    I/O:
        addr(Signal>4): address; range is from 0-3
dout(Signal>4): data at each address
"""
@always_comb
def readAction():
if addr==0:
dout.next=3
elif addr==1:
dout.next=2
elif addr==2:
dout.next=1
elif addr==3:
dout.next=0
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
DUT=ROMLoaded(addr, dout)
def ROMLoaded_TB():
"""Python Only Testbench for `ROMLoaded`"""
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield delay(1)
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMLoaded_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Peeker.to_dataframe()
DUT.convert()
VerilogTextReader('ROMLoaded');
"""
Explanation: Random and Sequential Access Memory
To start off, the "Random" in RAM does not mean random in a probabilistic sense. It refers to the fact that we can randomly access any part of the data array, as opposed to the now-specialty sequential-only memories, which are typically built with a counter or state machine to sequence the access.
HDL Memories
In HDL, ROM data is stored in a form of D flip-flops that are structured in a sort of two-dimensional array, where one axis is the address and the other is the content, and we use a mux to control which address "row" we are trying to read. Therefore we have two signals: address and content, where the address controls the mux.
ROM Preloaded
End of explanation
"""
@block
def ROMLoaded_TBV():
"""Verilog Only Testbench for `ROMLoaded`"""
clk = Signal(bool(0))
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
DUT=ROMLoaded(addr, dout)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(10)
@instance
def stimules():
for i in range(3+1):
addr.next=i
#yield delay(1)
yield clk.posedge
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout)
return instances()
#create instaince of TB
TB=ROMLoaded_TBV()
#convert to verilog with reintilzed values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('ROMLoaded_TBV');
"""
Explanation: ROMLoaded RTL
<img src='ROMLoadedRTL.png'>
ROMLoaded Synthesis
<img src='ROMLoadedSynth.png'>
End of explanation
"""
@block
def ROMParmLoad(addr, dout, CONTENT):
"""
    A ROM loaded with data from the CONTENT input tuple
    I/O:
        addr(Signal>4): address; range is from 0-3
        dout(Signal>4): data at each address
    Parm:
        CONTENT: tuple of size 4 whose contents must be no larger than 4 bits
"""
@always_comb
def readAction():
dout.next=CONTENT[int(addr)]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoad(addr, dout, CONTENT)
def ROMParmLoad_TB():
"""Python Only Testbench for `ROMParmLoad`"""
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield delay(1)
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMParmLoad_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Peeker.to_dataframe()
DUT.convert()
VerilogTextReader('ROMParmLoad');
"""
Explanation: With myHDL we can dynamically load the contents that will be hard coded in the conversion to Verilog/VHDL, which is an amazing benefit for development, as is seen here
End of explanation
"""
@block
def ROMParmLoad_TBV():
"""Verilog Only Testbench for `ROMParmLoad`"""
clk=Signal(bool(0))
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoad(addr, dout, CONTENT)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield clk.posedge
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout)
return instances()
#create instaince of TB
TB=ROMParmLoad_TBV()
#convert to verilog with reintilzed values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('ROMParmLoad_TBV');
"""
Explanation: ROMParmLoad RTL
<img src="ROMParmLoadRTL.png">
ROMParmLoad Synthesis
<img src="ROMParmLoadSynth.png">
End of explanation
"""
@block
def ROMParmLoadSync(addr, dout, clk, rst, CONTENT):
"""
    A ROM loaded with data from the CONTENT input tuple
    I/O:
        addr(Signal>4): address; range is from 0-3
        dout(Signal>4): data at each address
        clk (bool): clock feed
        rst (bool): reset
    Parm:
        CONTENT: tuple of size 4 whose contents must be no larger than 4 bits
"""
@always(clk.posedge)
def readAction():
if rst:
dout.next=0
else:
dout.next=CONTENT[int(addr)]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoadSync(addr, dout, clk, rst, CONTENT)
def ROMParmLoadSync_TB():
"""Python Only Testbench for `ROMParmLoadSync`"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(3+1):
yield clk.posedge
addr.next=i
for i in range(4):
yield clk.posedge
rst.next=1
addr.next=i
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMParmLoadSync_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
ROMData=Peeker.to_dataframe()
#keep only clock high
ROMData=ROMData[ROMData['clk']==1]
ROMData.drop(columns='clk', inplace=True)
ROMData.reset_index(drop=True, inplace=True)
ROMData
DUT.convert()
VerilogTextReader('ROMParmLoadSync');
"""
Explanation: We can also create a ROM that is synchronous instead of asynchronous
End of explanation
"""
@block
def ROMParmLoadSync_TBV():
"""Python Only Testbench for `ROMParmLoadSync`"""
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
rst=Signal(bool(0))
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoadSync(addr, dout, clk, rst, CONTENT)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(3+1):
yield clk.posedge
addr.next=i
for i in range(4):
yield clk.posedge
rst.next=1
addr.next=i
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout, rst)
return instances()
#create instaince of TB
TB=ROMParmLoadSync_TBV()
#convert to verilog with reintilzed values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('ROMParmLoadSync_TBV');
@block
def SeqROMEx(clk, rst, dout):
"""
Seq Read Only Memory Ex
I/O:
clk (bool): clock
rst (bool): rst on counter
dout (signal >4): data out
"""
Count=Signal(intbv(0)[3:])
@always(clk.posedge)
def counter():
if rst:
Count.next=0
elif Count==3:
Count.next=0
else:
Count.next=Count+1
@always(clk.posedge)
def Memory():
if Count==0:
dout.next=3
elif Count==1:
dout.next=2
elif Count==2:
dout.next=1
elif Count==3:
dout.next=0
return instances()
Peeker.clear()
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=SeqROMEx(clk, rst, dout)
def SeqROMEx_TB():
"""Python Only Testbench for `SeqROMEx`"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(5+1):
yield clk.posedge
for i in range(4):
yield clk.posedge
rst.next=1
raise StopSimulation()
return instances()
sim = Simulation(DUT, SeqROMEx_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
SROMData=Peeker.to_dataframe()
#keep only clock high
SROMData=SROMData[SROMData['clk']==1]
SROMData.drop(columns='clk', inplace=True)
SROMData.reset_index(drop=True, inplace=True)
SROMData
DUT.convert()
VerilogTextReader('SeqROMEx');
"""
Explanation: ROMParmLoadSync RTL
<img src="ROMParmLoadSyncRTL.png">
ROMParmLoadSync Synthesis
<img src="ROMParmLoadSyncSynth.png">
End of explanation
"""
@block
def SeqROMEx_TBV():
"""Verilog Only Testbench for `SeqROMEx`"""
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
rst=Signal(bool(0))
DUT=SeqROMEx(clk, rst, dout)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(5+1):
yield clk.posedge
for i in range(4):
yield clk.posedge
rst.next=1
raise StopSimulation()
@always(clk.posedge)
def print_data():
print(clk, rst, dout)
return instances()
#create instaince of TB
TB=SeqROMEx_TBV()
#convert to verilog with reintilzed values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('SeqROMEx_TBV');
"""
Explanation: SeqROMEx RTL
<img src="SeqROMExRTL.png">
SeqROMEx Synthesis
<img src="SeqROMExSynth.png">
End of explanation
"""
@block
def RAMConcur(addr, din, writeE, dout, clk):
"""
    Random access read/write memory
    I/O:
        addr(signal>4): the memory cell address
        din (signal>4): data to write into memory
        writeE (bool): write enable control; false is read only
        dout (signal>4): the data out
        clk (bool): clock
    Note:
        this is only a 4-entry, 4-bit-wide memory
"""
    #create the memory list (1D array)
memory=[Signal(intbv(0)[4:]) for i in range(4)]
@always(clk.posedge)
def writeAction():
if writeE:
memory[addr].next=din
@always_comb
def readAction():
dout.next=memory[addr]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
din=Signal(intbv(0)[4:]); Peeker(din, 'din')
writeE=Signal(bool(0)); Peeker(writeE, 'writeE')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=RAMConcur(addr, din, writeE, dout, clk)
def RAMConcur_TB():
"""Python Only Testbench for `RAMConcur`"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
# do nothing
for i in range(1):
yield clk.posedge
#write memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
# rewrite memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[-i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
raise StopSimulation()
return instances()
sim = Simulation(DUT, RAMConcur_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
RAMData=Peeker.to_dataframe()
RAMData=RAMData[RAMData['clk']==1]
RAMData.drop(columns='clk', inplace=True)
RAMData.reset_index(drop=True, inplace=True)
RAMData
RAMData[RAMData['writeE']==1]
RAMData[RAMData['writeE']==0]
DUT.convert()
VerilogTextReader('RAMConcur');
"""
Explanation: Read and write memory (RAM)
End of explanation
"""
@block
def RAMConcur_TBV():
"""Verilog Only Testbench for `RAMConcur`"""
addr=Signal(intbv(0)[4:])
din=Signal(intbv(0)[4:])
writeE=Signal(bool(0))
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
CONTENT=tuple([i for i in range(4)][::-1])
DUT=RAMConcur(addr, din, writeE, dout, clk)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
# do nothing
for i in range(1):
yield clk.posedge
#write memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
# rewrite memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[-i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
raise StopSimulation()
@always(clk.posedge)
def print_data():
print(addr, din, writeE, dout, clk)
return instances()
#create instaince of TB
TB=RAMConcur_TBV()
#convert to verilog with reintilzed values
TB.convert(hdl="Verilog", initial_values=True)
#readback the testbench results
VerilogTextReader('RAMConcur_TBV');
"""
Explanation: RAMConcur RTL
<img src="RAMConcurRTL.png">
RAMConcur Synthesis
<img src="RAMConcurSynth.png">
End of explanation
"""
|
aldian/tensorflow
|
tensorflow/python/ops/numpy_ops/g3doc/TensorFlow_Numpy_Distributed_Image_Classification.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install --quiet --upgrade tf-nightly
!pip install --quiet --upgrade tensorflow-datasets
import collections
import functools
import matplotlib.pyplot as plt
import os
import tensorflow as tf
import tensorflow.experimental.numpy as tnp
import tensorflow_datasets as tfds
gpus = tf.config.list_physical_devices('GPU')
if gpus:
tf.config.set_logical_device_configuration(gpus[0], [
tf.config.LogicalDeviceConfiguration(memory_limit=128),
tf.config.LogicalDeviceConfiguration(memory_limit=128)])
devices = tf.config.list_logical_devices('GPU')
else:
cpus = tf.config.list_physical_devices('CPU')
tf.config.set_logical_device_configuration(cpus[0], [
tf.config.LogicalDeviceConfiguration(),
tf.config.LogicalDeviceConfiguration()])
devices = tf.config.list_logical_devices('CPU')
print("Using following virtual devices", devices)
"""
Explanation: TensorFlow NumPy: Distributed Image Classification Tutorial
Overview
TensorFlow implements a subset of the NumPy API, available as tf.experimental.numpy. This allows running NumPy code, accelerated by TensorFlow together with access to all of TensorFlow's APIs. Please see TensorFlow NumPy Guide to get started.
Here you will learn how to build a deep model for an image classification task by using TensorFlow Numpy APIs. For using higher level tf.keras APIs, see the following tutorial.
Setup
tf.experimental.numpy will be available in the stable branch starting from TensorFlow 2.4. For now, it is available in nightly.
End of explanation
"""
NUM_CLASSES = 10
BATCH_SIZE = 64
INPUT_SIZE = 28 * 28
def process_data(data_dict):
images = tnp.asarray(data_dict['image']) / 255.0
images = images.reshape(-1, INPUT_SIZE).astype(tnp.float32)
labels = tnp.asarray(data_dict['label'])
labels = tnp.eye(NUM_CLASSES, dtype=tnp.float32)[labels]
return images, labels
with tf.device("CPU:0"):
train_dataset = tfds.load('mnist', split='train', shuffle_files=True,
batch_size=BATCH_SIZE).map(process_data)
test_dataset = tfds.load('mnist', split='test', shuffle_files=True,
batch_size=-1)
x_test, y_test = process_data(test_dataset)
# Plots some examples.
images, labels = next(iter(train_dataset.take(1)))
_, axes = plt.subplots(1, 8, figsize=(12, 96))
for i, ax in enumerate(axes):
ax.imshow(images[i].reshape(28, 28), cmap='gray')
ax.axis("off")
ax.set_title("Label: %d" % int(tnp.argmax(labels[i])))
"""
Explanation: MNIST dataset
MNIST contains 28 x 28 images of digits from 0 to 9. The task is to classify the images into these 10 classes.
Below, load the dataset and examine a few samples.
End of explanation
"""
class Dense(object):
def __init__(self, units, use_relu=True):
self.wt = None
self.bias = None
self._use_relu = use_relu
self._built = False
self._units = units
def __call__(self, inputs):
if not self._built:
self._build(inputs.shape)
x = tnp.add(tnp.matmul(inputs, self.wt), self.bias)
if self._use_relu:
return tnp.maximum(x, 0.)
else:
return x
@property
def params(self):
assert self._built
return [self.wt, self.bias]
def _build(self, input_shape):
size = input_shape[1]
stddev = 1 / tnp.sqrt(size)
# Note that model parameters are `tf.Variable` since they requires
# mutation, which is currently unsupported by TensorFlow NumPy.
# Also note interoperation with TensorFlow APIs below.
self.wt = tf.Variable(
tf.random.truncated_normal(
[size, self._units], stddev=stddev, dtype=tf.float32))
self.bias = tf.Variable(tf.zeros([self._units], dtype=tf.float32))
self._built = True
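# Quick shape check of the Dense layer (illustrative only): a batch of 2 vectors of
# size 8 mapped to 4 units should produce an output of shape (2, 4).
example_layer = Dense(4)
example_out = example_layer(tnp.ones((2, 8), dtype=tnp.float32))
print(example_out.shape)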
"""
Explanation: Define layers and model
Here, you will implement a multi-layer perceptron model that trains on the MNIST data. First, define a Dense class which applies a linear transform followed by a "relu" non-linearity.
End of explanation
"""
class Model(object):
"""A three layer neural network."""
def __init__(self):
self.layer1 = Dense(128)
self.layer2 = Dense(32)
self.layer3 = Dense(NUM_CLASSES, use_relu=False)
def __call__(self, inputs):
x = self.layer1(inputs)
x = self.layer2(x)
return self.layer3(x)
@property
def params(self):
return self.layer1.params + self.layer2.params + self.layer3.params
"""
Explanation: Next, create a Model object that applies two non-linear Dense transforms,
followed by a linear transform.
End of explanation
"""
def forward(model, inputs, labels):
"""Computes prediction and loss."""
logits = model(inputs)
# TensorFlow's loss function has numerically stable implementation of forward
# pass and gradients. So we prefer that here.
loss = tf.nn.softmax_cross_entropy_with_logits(labels, logits)
mean_loss = tnp.mean(loss)
return logits, mean_loss
def compute_gradients(model, inputs, labels):
"""Computes gradients of loss based on `labels` and prediction on `inputs`."""
with tf.GradientTape() as tape:
tape.watch(inputs)
_, loss = forward(model, inputs, labels)
gradients = tape.gradient(loss, model.params)
return gradients
def compute_sgd_updates(gradients, learning_rate):
"""Computes parameter updates based on SGD update rule."""
return [-learning_rate * grad for grad in gradients]
def apply_updates(model, updates):
"""Applies `update` to `model.params`."""
for param, update in zip(model.params, updates):
param.assign_add(update)
def evaluate(model, images, labels):
"""Evaluates accuracy for `model`'s predictions."""
prediction = model(images)
predicted_class = tnp.argmax(prediction, axis=-1)
actual_class = tnp.argmax(labels, axis=-1)
return float(tnp.mean(predicted_class == actual_class))
"""
Explanation: Training and evaluation
Check out the following methods for performing training and evaluation.
End of explanation
"""
NUM_EPOCHS = 10
@tf.function
def train_step(model, input, labels, learning_rate):
gradients = compute_gradients(model, input, labels)
updates = compute_sgd_updates(gradients, learning_rate)
apply_updates(model, updates)
# Creates and build a model.
model = Model()
accuracies = []
for _ in range(NUM_EPOCHS):
for inputs, labels in train_dataset:
train_step(model, inputs, labels, learning_rate=0.1)
accuracies.append(evaluate(model, x_test, y_test))
def plot_accuracies(accuracies):
plt.plot(accuracies)
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.title("Eval accuracy vs epoch")
plot_accuracies(accuracies)
"""
Explanation: Single GPU training
End of explanation
"""
import threading
import queue
# Note that this code currently relies on dispatching operations from python
# threads.
class ReplicatedFunction(object):
"""Creates a callable that will run `fn` on each device in `devices`."""
def __init__(self, fn, devices, **kw_args):
self._shutdown = False
def _replica_fn(device, input_queue, output_queue):
while not self._shutdown:
inputs = input_queue.get()
with tf.device(device):
output_queue.put(fn(*inputs, **kw_args))
self.threads = []
self.input_queues = [queue.Queue() for _ in devices]
self.output_queues = [queue.Queue() for _ in devices]
for i, device in enumerate(devices):
thread = threading.Thread(
target=_replica_fn,
args=(device, self.input_queues[i], self.output_queues[i]))
thread.start()
self.threads.append(thread)
def __call__(self, *inputs):
all_inputs = zip(*inputs)
for input_queue, replica_input, in zip(self.input_queues, all_inputs):
input_queue.put(replica_input)
return [q.get() for q in self.output_queues]
def __del__(self):
self._shutdown = True
for t in self.threads:
t.join(3)
self.threads = None
def collective_mean(inputs, num_devices):
"""Performs collective mean reduction on inputs."""
outputs = []
for instance_key, inp in enumerate(inputs):
outputs.append(tnp.asarray(
tf.raw_ops.CollectiveReduce(
input=inp, group_size=num_devices, group_key=0,
instance_key=instance_key, merge_op='Add', final_op='Div',
subdiv_offsets=[])))
return outputs
"""
Explanation: Multi GPU runs
Next, run mirrored training on multiple GPUs. Note that the GPUs used here are virtual and map to the same physical GPU.
First, define a few utilities to run replicated computation and reductions.
Distribution primitives
Check out the primitives below for function replication and distributed reduction.
End of explanation
"""
# This is similar to `train_step` except for an extra collective reduction of
# gradients
@tf.function
def replica_step(model, inputs, labels,
learning_rate=None, num_devices=None):
gradients = compute_gradients(model, inputs, labels)
# Note that each replica performs a reduction to compute mean of gradients.
reduced_gradients = collective_mean(gradients, num_devices)
updates = compute_sgd_updates(reduced_gradients, learning_rate)
apply_updates(model, updates)
models = [Model() for _ in devices]
# The code below builds all the model objects and copies model parameters from
# the first model to all the replicas.
def init_model(model):
model(tnp.zeros((1, INPUT_SIZE), dtype=tnp.float32))
if model != models[0]:
# Copy the first models weights into the other models.
for p1, p2 in zip(model.params, models[0].params):
p1.assign(p2)
with tf.device(devices[0]):
init_model(models[0])
# Replicate and run the parameter initialization.
ReplicatedFunction(init_model, devices[1:])(models[1:])
# Replicate the training step
replicated_step = ReplicatedFunction(
replica_step, devices, learning_rate=0.1, num_devices=len(devices))
accuracies = []
print("Running distributed training on devices: %s" % devices)
for _ in range(NUM_EPOCHS):
for inputs, labels in train_dataset:
replicated_step(models,
tnp.split(inputs, len(devices)),
tnp.split(labels, len(devices)))
accuracies.append(evaluate(models[0], x_test, y_test))
plot_accuracies(accuracies)
"""
Explanation: Distributed training
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/ec-earth-consortium/cmip6/models/ec-earth3-hr/toplevel.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-hr', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-HR
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process-oriented metrics/diagnostics, and the possible conflicts with parameterization-level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
yassineAlouini/ml-experiments
|
deep-learning/activation_layers.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
sns.set(font_scale=1.5)
"""
Explanation: A notebook showing what some very common activation functions (used in deep learning, for example) look like. Enjoy!
End of explanation
"""
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def tanh(x):
return np.tanh(x)
def relu(x):
return np.maximum(x, 0) # element-wise maximum
"""
Explanation: Define the activation metrics
End of explanation
"""
class ActivationPlots(object):
def __init__(self, metrics):
self.x = np.arange(-10, 10, 0.1)
self.metrics = metrics
self.n_plots = len(self.metrics)
def build(self, axes):
for ax, metric in zip(axes, self.metrics):
y = metric(self.x)
ax.plot(self.x, y)
ax.set_title(str(metric.__name__))
return axes
def plot(self):
# Lay all activation plots out on a single row so any number of metrics works.
n_rows = 1
n_cols = self.n_plots
fig, axes = plt.subplots(n_rows, n_cols, figsize=(12, 4))
self.build(axes)
ActivationPlots([sigmoid, tanh, relu]).plot()
"""
Explanation: Plot them
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nims-kma/cmip6/models/sandbox-1/landice.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-1', 'landice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-1
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:28
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the land ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land ice model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaptation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Ice --> Dynamics
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
"""
|
maxalbert/tohu
|
notebooks/High_level_tests_for_tohu_generators.ipynb
|
mit
|
g = Integer(low=100, high=200)
g.reset(seed=12345); print_generated_sequence(g, num=15)
g.reset(seed=9999); print_generated_sequence(g, num=15)
some_integers = g.generate(5, seed=99999)
for x in some_integers:
print(x)
"""
Explanation: This notebook contains high-level tests for tohu's "standard" generators.
Class Integer
Generates random integers in the range [lo, hi].
End of explanation
"""
#g = Integer(low=100, high=200, distribution=None)
"""
Explanation: The default distribution is "uniform", but we can use any(?) of the distributions supported by numpy.
End of explanation
"""
g = Float(low=2.71828, high=3.14159)
g.reset(seed=12345); print_generated_sequence(g, num=4)
g.reset(seed=9999); print_generated_sequence(g, num=4)
"""
Explanation: Class Float
Generates random floating point numbers in the range [lo, hi].
End of explanation
"""
g1 = NumpyRandomGenerator(method="normal", loc=3.0, scale=5.0)
g2 = NumpyRandomGenerator(method="poisson", lam=30)
g3 = NumpyRandomGenerator(method="exponential", scale=0.3)
g1.reset(seed=12345); print_generated_sequence(g1, num=4)
g2.reset(seed=12345); print_generated_sequence(g2, num=15)
g3.reset(seed=12345); print_generated_sequence(g3, num=4)
"""
Explanation: Class NumpyRandomGenerator
Generates random numbers using one of the random number generators supported by numpy.
End of explanation
"""
g1 = FakerGenerator(method="name")
g2 = FakerGenerator(method="name", locale='hi_IN')
g3 = FakerGenerator(method="phone_number")
g4 = FakerGenerator(method="job")
g1.reset(seed=12345); print_generated_sequence(g1, num=4)
g2.reset(seed=12345); print_generated_sequence(g2, num=4)
g3.reset(seed=12345); print_generated_sequence(g3, num=4)
g4.reset(seed=12345); print_generated_sequence(g4, num=4)
"""
Explanation: Class FakerGenerator
It is also possible to use any generator provided by the faker library.
End of explanation
"""
g = Constant("Foobar"); print_generated_sequence(g, num=10)
g = Constant(42); print_generated_sequence(g, num=20)
"""
Explanation: Class Constant
Generates a sequence repeating the same element indefinitely.
End of explanation
"""
g = Sequential(prefix='Foo_', digits=3)
"""
Explanation: Class Sequential
Generates a sequence of sequentially numbered strings with a given prefix.
End of explanation
"""
g.reset()
print_generated_sequence(g, num=5)
print_generated_sequence(g, num=5)
print("-----------------------------")
g.reset()
print_generated_sequence(g, num=5)
"""
Explanation: Calling reset() on the generator makes the numbering start from 1 again.
End of explanation
"""
g.reset(seed=12345); print_generated_sequence(g, num=5)
g.reset(seed=9999); print_generated_sequence(g, num=5)
"""
Explanation: Note: the method Sequential.reset() supports the seed argument for consistency with other generators, but its value is ignored - the generator is simply reset to its initial value. This is illustrated here:
End of explanation
"""
g1 = Sequential(prefix="Quux_", digits=2)
g1.reset(seed=12345)
print_generated_sequence(g1, num=5)
g2 = g1._spawn()
print_generated_sequence(g1, num=5)
print_generated_sequence(g2, num=5)
"""
Explanation: If a new Sequential generator is created from an existing one via the _spawn() method then its count will start again from 1.
End of explanation
"""
g = SelectOne(values=['foobar', 42, 'quux', True, 1.2345])
g.reset(seed=12345); print_generated_sequence(g, num=15)
g.reset(seed=9999); print_generated_sequence(g, num=15)
"""
Explanation: Class SelectOne
End of explanation
"""
g = SelectOne(values=['aa', 'bb', 'cc'], p=[0.8, 0.15, 0.05])
g.reset(seed=12345); print_generated_sequence(g, num=20)
"""
Explanation: It is possible to specify different probabilities for each element to be chosen.
End of explanation
"""
g = SelectMultiple(values=['foobar', 42, 'quux', True, 1.2345], size=3)
g.reset(seed=12345); print_generated_sequence(g, num=4)
g.reset(seed=99999); print_generated_sequence(g, num=4)
"""
Explanation: Class SelectMultiple
End of explanation
"""
g = SelectMultiple(values=['aa', 'bb', 'cc', 'dd', 'ee'], size=3, p=[0.6, 0.1, 0.2, 0.05, 0.05])
g.reset(seed=12345); print_generated_sequence(g, num=4)
"""
Explanation: Similarly to SelectOne, one can pass a list of probabilities for the values to be chosen.
End of explanation
"""
rand_nums = Integer(low=2, high=5)
g = SelectMultiple(values=['a', 'b', 'c', 'd', 'e'], size=rand_nums)
g.reset(seed=11111); print_generated_sequence(g, num=10, sep='\n')
"""
Explanation: It is also possible to pass a random generator for the size argument. This produces tuples of varying length, where the length of each tuple is determined by the values produced by this generator.
End of explanation
"""
values = list(range(50))
g = Subsample(values, p=0.3)
g.reset(seed=12345); print_generated_sequence(g, num=10, sep='\n')
"""
Explanation: Class Subsample
The Subsample generator can extract a subsample from a given set of values, where each individual element is chosen with a given probability p.
End of explanation
"""
g = CharString(length=15)
g.reset(seed=12345); print_generated_sequence(g, num=5)
g.reset(seed=9999); print_generated_sequence(g, num=5)
"""
Explanation: Class CharString
End of explanation
"""
g = CharString(min_length=4, max_length=12, charset="ABCDEFGHIJKLMNOPQRSTUVWXYZ")
g.reset(seed=12345); print_generated_sequence(g, num=5, sep='\n')
"""
Explanation: It is possible to vary the length of generated character strings, and to specify the character set.
End of explanation
"""
g = DigitString(length=15)
g.reset(seed=12345); print_generated_sequence(g, num=5)
g.reset(seed=9999); print_generated_sequence(g, num=5)
g = DigitString(min_length=5, max_length=20)
g.reset(seed=9999); print_generated_sequence(g, num=10, sep='\n')
"""
Explanation: Class DigitString
End of explanation
"""
g = HashDigest(length=8)
g.reset(seed=12345); print_generated_sequence(g, num=9)
g.reset(seed=9999); print_generated_sequence(g, num=9)
g = HashDigest(length=20)
g.reset(seed=12345); print_generated_sequence(g, num=4)
g.reset(seed=9999); print_generated_sequence(g, num=4)
g = HashDigest(min_length=6, max_length=20)
g.reset(seed=12345); print_generated_sequence(g, num=5, sep='\n')
g = HashDigest(length=16, as_bytes=True)
g.reset(seed=12345); print_generated_sequence(g, num=3, sep='\n')
"""
Explanation: Class HashDigest
End of explanation
"""
g = GeolocationPair()
g.reset(seed=12345); print_generated_sequence(g, num=5, sep='\n')
"""
Explanation: Class Geolocation
End of explanation
"""
from tohu.generators import TimestampNEW
g = TimestampNEW(start='2016-02-14', end='2016-02-18')
g.reset(seed=12345); print_generated_sequence(g, num=5, sep='\n')
g = TimestampNEW(start='1998-03-01 00:02:00', end='1998-03-01 00:02:15')
g.reset(seed=99999); print_generated_sequence(g, num=10, sep='\n')
"""
Explanation: Class TimestampNEW
End of explanation
"""
type(next(g))
"""
Explanation: Note that the generated items are datetime objects (even though they appear as strings when printed above).
End of explanation
"""
import json
from shapely.geometry import MultiPoint
with open('./data/ne_110m_admin_1_states_provinces_shp.geojson', 'r') as f:
geojson = json.load(f)
g = GeoJSONGeolocationPair(geojson)
pts = g.generate(N=200, seed=12345)
list(pts)[:10]
MultiPoint(pts)
"""
Explanation: Class GeoJSONGeolocationPair
The GeoJSONGeolocationPair allows generating points within a geographical area given by a GeoJSON object.
End of explanation
"""
class QuuxGenerator(CustomGenerator):
aaa = Integer(0, 100)
bbb = HashDigest(length=6)
g = QuuxGenerator()
"""
Explanation: Class ExtractAttribute
End of explanation
"""
h1 = ExtractAttribute(g, 'aaa')
h2 = ExtractAttribute(g, 'bbb')
g.reset(seed=99999); print_generated_sequence(g, num=5, sep='\n')
print_generated_sequence(h1, num=5)
print_generated_sequence(h2, num=5)
"""
Explanation: Using ExtractAttribute we can produce "derived" generators which extract the attributes aaa, bbb from the elements produced by g.
End of explanation
"""
seq = ['aa', 'bb', 'cc', 'dd', 'ee']
g = IterateOver(seq)
g.reset(); print(list(g.generate(N=3)))
g.reset(); print(list(g.generate(N=10)))
g.reset(); print(list(g))
"""
Explanation: Class IterateOver
End of explanation
"""
int_generator = Integer(low=100, high=500).reset(seed=99999)
for i, x in enumerate(int_generator):
if i > 20:
break
print(x, end=" ")
"""
Explanation: Using tohu generators as iterators
Each tohu generator can also be used as a Python iterator producing an (infinite) series of elements.
End of explanation
"""
g = HashDigest(length=6)
item_list = g.generate(N=10, seed=12345)
print(item_list)
"""
Explanation: ItemList
The .generate() method produces an ItemList instance.
End of explanation
"""
print(list(item_list))
item_list.reset(seed=999999)
print(list(item_list.subsample(num=6)))
print(list(item_list.subsample(num=6)))
print(list(item_list.subsample(num=6)))
item_list.reset(seed=99999)
print(list(item_list.subsample(p=0.4)))
print(list(item_list.subsample(p=0.4)))
print(list(item_list.subsample(p=0.4)))
"""
Explanation: Fundamentally an ItemList behaves like a regular list.
End of explanation
"""
|
mathemage/h2o-3
|
examples/deeplearning/notebooks/deeplearning_tensorflow_cat_dog_mouse_lenet.ipynb
|
apache-2.0
|
import sys, os
import h2o
from h2o.estimators.deepwater import H2ODeepWaterEstimator
import os.path
from IPython.display import Image, display, HTML
import pandas as pd
import numpy as np
import random
PATH=os.path.expanduser("~/h2o-3")
h2o.init(port=54321, nthreads=-1)
if not H2ODeepWaterEstimator.available(): exit
!nvidia-smi
%matplotlib inline
from IPython.display import Image, display, HTML
import matplotlib.pyplot as plt
"""
Explanation: Using Tensorflow with H2O
This notebook shows how to use the tensorflow backend to tackle a simple image classification problem.
We start by connecting to our h2o cluster:
End of explanation
"""
frame = h2o.import_file(PATH + "/bigdata/laptop/deepwater/imagenet/cat_dog_mouse.csv")
print(frame.dim)
print(frame.head(5))
"""
Explanation: Image Classification Task
H2O DeepWater allows you to specify a list of URIs (file paths) or URLs (links) to images, together with a response column (either a class membership (enum) or regression target (numeric)).
For this example, we use a small dataset that has a few hundred images, and three classes: cat, dog and mouse.
End of explanation
"""
model = H2ODeepWaterEstimator(epochs=500, network = "lenet", backend="tensorflow")
model.train(x=[0],y=1, training_frame=frame)
model.show()
model = H2ODeepWaterEstimator(epochs=100, backend="tensorflow",
image_shape=[28,28],
network="user",
network_definition_file=PATH + "/examples/deeplearning/notebooks/pretrained/lenet_28x28x3_3.meta",
network_parameters_file=PATH + "/examples/deeplearning/notebooks/pretrained/lenet-100epochs")
model.train(x=[0],y=1, training_frame=frame)
model.show()
"""
Explanation: To build a LeNet image classification model in H2O, simply specify network = "lenet" and backend="tensorflow" to use our pre-built TensorFlow LeNet implementation:
End of explanation
"""
model.deepfeatures(frame, "fc1/Relu")
"""
Explanation: DeepFeatures
We can also compute the output of any hidden layer, if we know its name.
End of explanation
"""
def simple_model(w, h, channels, classes):
import json
import tensorflow as tf
from tensorflow.python.framework import ops
# always create a new graph inside ipython or
# the default one will be used and can lead to
# unexpected behavior
graph = tf.Graph()
with graph.as_default():
size = w * h * channels
x = tf.placeholder(tf.float32, [None, size])
W = tf.Variable(tf.zeros([size, classes]))
b = tf.Variable(tf.zeros([classes]))
y = tf.matmul(x, W) + b
predictions = tf.nn.softmax(y)
# labels
y_ = tf.placeholder(tf.float32, [None, classes])
# train
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=y_))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
tf.add_to_collection(ops.GraphKeys.TRAIN_OP, train_step)
tf.add_to_collection("predictions", predictions)
# this is required by the h2o tensorflow backend
global_step = tf.Variable(0, name="global_step", trainable=False)
init = tf.global_variables_initializer()
tf.add_to_collection(ops.GraphKeys.INIT_OP, init.name)
tf.add_to_collection("logits", y)
saver = tf.train.Saver()
meta = json.dumps({
"inputs": {"batch_image_input": x.name, "categorical_labels": y_.name},
"outputs": {"categorical_logits": y.name},
"parameters": {"global_step": global_step.name},
})
print(meta)
tf.add_to_collection("meta", meta)
filename = "/tmp/lenet_tensorflow.meta"
tf.train.export_meta_graph(filename, saver_def=saver.as_saver_def())
return filename
filename = simple_model(28, 28, 3, classes=3)
model = H2ODeepWaterEstimator(epochs=500,
network_definition_file=filename, ## specify the model
image_shape=[28,28], ## provide expected (or matching) image size
channels=3,
backend="tensorflow",
)
model.train(x=[0], y=1, training_frame=frame)
model.show()
"""
Explanation: Custom models
If you'd like to build your own Tensorflow network architecture, then this is easy as well.
In this example script, we are using the Tensorflow backend.
Models can easily be imported/exported between H2O and Tensorflow since H2O uses Tensorflow's format for model definition.
End of explanation
"""
import tensorflow as tf
import json
from keras.layers.core import Dense, Flatten, Reshape
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
from keras import backend as K
from keras.objectives import categorical_crossentropy
from tensorflow.python.framework import ops
def keras_model(w, h, channels, classes):
# always create a new graph inside ipython or
# the default one will be used and can lead to
# unexpected behavior
graph = tf.Graph()
with graph.as_default():
size = w * h * channels
# Input images fed via H2O
inp = tf.placeholder(tf.float32, [None, size])
# Actual labels used for training fed via H2O
labels = tf.placeholder(tf.float32, [None, classes])
# Keras network
x = Reshape((w, h, channels))(inp)
x = Conv2D(20, (5, 5), padding='same', activation='relu')(x)
x = MaxPooling2D((2,2))(x)
x = Conv2D(50, (5, 5), padding='same', activation='relu')(x)
x = MaxPooling2D((2,2))(x)
x = Flatten()(x)
x = Dense(500, activation='relu')(x)
out = Dense(classes)(x)
predictions = tf.nn.softmax(out)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels,logits=out))
train_step = tf.train.AdamOptimizer(1e-3).minimize(loss)
init_op = tf.global_variables_initializer()
# Metadata required by H2O
tf.add_to_collection(ops.GraphKeys.INIT_OP, init_op.name)
tf.add_to_collection(ops.GraphKeys.TRAIN_OP, train_step)
tf.add_to_collection("logits", out)
tf.add_to_collection("predictions", predictions)
meta = json.dumps({
"inputs": {"batch_image_input": inp.name,
"categorical_labels": labels.name},
"outputs": {"categorical_logits": out.name,
"layers": ','.join([m.name for m in tf.get_default_graph().get_operations()])},
"parameters": {}
})
tf.add_to_collection("meta", meta)
# Save the meta file with the graph
saver = tf.train.Saver()
filename = "/tmp/keras_tensorflow.meta"
tf.train.export_meta_graph(filename, saver_def=saver.as_saver_def())
return filename
filename = keras_model(28, 28, 3, classes=3)
model = H2ODeepWaterEstimator(epochs=50,
network_definition_file=filename, ## specify the model
image_shape=[28,28], ## provide expected (or matching) image size
channels=3,
backend="tensorflow",
)
model.train(x=[0], y=1, training_frame=frame)
model.show()
"""
Explanation: Custom models with Keras
It is also possible to use libraries/APIs such as Keras to define the network architecture.
End of explanation
"""
|
chrishah/genomisc
|
popogeno/QTlight/QTLight_demo.ipynb
|
mit
|
import QTLight_functions as QTL
"""
Explanation: Import the functions (assumes that QTLight_functions.py is in your current working directory or in your python path)
End of explanation
"""
%%bash
ln -s test-data/batch_1.vcf.gz .
ln -s test-data/populationmap .
mkdir matrix
"""
Explanation: Fetch relevant files from the Stacks populations run
End of explanation
"""
%%bash
#pip install pyvcf
for a in {1..10}
do
echo -e "\nrepetition $a:\n"
python /home/chrishah/Dropbox/Github/genomisc/popogeno/vcf_2_bayenv.py batch_1.vcf.gz --min_number 6 -r 5000 -o matrix/random_5000_rep_$a -m populationmap
done
"""
Explanation: create 10 Bayenv input files with 5000 randomly selected loci in each
End of explanation
"""
%%bash
cd matrix/
for a in {1..10}
do
rand=$RANDOM
echo -e "repetition $a (random seed: -$rand)\n"
/home/chrishah/src/Bayenv/bayenv2 0 -p 4 -r -$rand -k 100000 -i random_5000_rep_$a.bayenv.SNPfile > random_5000_rep_$a.log
done
cd ../
"""
Explanation: create 10 covariance matrices with 100000 iterations each
End of explanation
"""
%%bash
dimensions=4
dimensions=$((dimensions+1))
for a in {1..10}
do
tail -n $dimensions matrix/random_5000_rep_$a.log | grep "^$" -v > matrix/random_5000_rep_$a\_it-10e5.matrix
done
"""
Explanation: extract covariance matrices from the final iteration into a txt file
End of explanation
"""
import numpy as np
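# Read the 10 extracted covariance matrices, collect each matrix entry across
# the repetitions, average entry-wise, and write the averaged matrix back out
# as tab-separated rows (one row per population).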
main_list = []
for a in range(10):
current = "matrix/random_5000_rep_"+str(a+1)+"_it-10e5.matrix"
# print current
IN = open(current,"r")
temp_list = []
for line in IN:
temp_list.extend(line.rstrip().split("\t"))
for i in range(len(temp_list)):
if a == 0:
main_list.append([float(temp_list[i])])
else:
main_list[i].append(float(temp_list[i]))
#print main_list
av_out_list = []
std_out_list = []
for j in range(len(main_list)):
av_out_list.append(np.mean(main_list[j]))
#print av_out_list
outstring = ""
for z in range(len(av_out_list)):
av_out_list[z] = "%s\t" %av_out_list[z]
if not outstring:
outstring = av_out_list[z]
else:
outstring = outstring+av_out_list[z]
if ((z+1) % 4 == 0):
outstring = "%s\n" %(outstring)
OUT = open("matrix/av_matrix.matrix","w")
OUT.write(outstring)
OUT.close()
"""
Explanation: construct average covariance matrix from 10 random sets
End of explanation
"""
populations, IDs = QTL.normalize(csv='../Diplotaxodon_Morphometric_Data_raw.csv', normalize=True, norm_prefix='Diplotaxodon_Morphometric_Data_normalized', boxplot=False)
print populations
print IDs
"""
Explanation: Prepare environmental data - average and normalize
raw data is provided in a csv file with the first column containing the population id. See example in test-data.
End of explanation
"""
%%bash
mkdir SNPfiles
python /home/chrishah/Dropbox/Github/genomisc/popogeno/vcf_2_div.py ../batch_1.vcf.gz --min_number 6 -o SNPfiles/full_set -m ../populationmap
"""
Explanation: convert vcf to bayenv - generate full SNP files
End of explanation
"""
QTL.split_for_Bayenv(infile='SNPfiles/full_set.bayenv.SNPfile', out_prefix='SNPfiles/Diplo_SNP')
"""
Explanation: split up SNPfiles into single files
End of explanation
"""
#find the number of SNP files to add to specify in loop below
!ls -1 SNPfiles/SNP-* |wc -l
!mkdir running_Bayenv
%%bash
#adjust bayenv command to your requirements
iterations=1000000
cd running_Bayenv/
for rep in {1..10}; do ran=$RANDOM; for a in {0000001..0021968}; do /home/chrishah/src/Bayenv/bayenv2 -i ../SNPfiles/SNP-$a.txt -e ../Nyassochromis_normalized.bayenv -m ../matrix/av_matrix.matrix -k $iterations -r -$ran -p 3 -n 14 -t -X -o bayenv_out_k100000_env_rep_$rep-rand_$ran; done > log_rep_$rep; done
"""
Explanation: Run Bayenv2 for 10 replications serially
for this run I used bayenv2 version: tguenther-bayenv2_public-48f0b51ced16
End of explanation
"""
mkdir RANK_STATISTIC/
#create the list of Bayenv results files to be processed
import os
bayenv_res_dir = './running_Bayenv/'
bayenv_files = []
for fil in os.listdir(bayenv_res_dir):
if fil.endswith(".bf"):
print(bayenv_res_dir+"/"+fil)
bayenv_files.append(bayenv_res_dir+"/"+fil)
print bayenv_files
print "\n%i" %len(bayenv_files)
print IDs
rank_results = QTL.calculate_rank_stats(SNP_map="SNPfiles/full_set.bayenv.SNPmap", infiles = bayenv_files, ids = IDs, prefix = 'RANK_STATISTIC/Diplo_k_1M')
"""
Explanation: ALTERNATIVE
Bayenv can be run on an HPC cluster in parallel. I provide a script submit_Bayenv_array_multi.sh that I used to run 10 replicates as an array job on a cluster running a PBS scheduling system. Total runtime for 10 replicates with 1M Bayenv iterations/SNP was ~ 24h. The results from the individual runs were then concatenated with the script concat_sorted.sh and moved to the directory running_Bayenv on the local machine.
ANALYSE RANK STATISTICS
please make sure you load all functions below first
Calculating RANK STATISTICS
End of explanation
"""
print IDs
full_rank_files = []
file_dir = 'RANK_STATISTIC/'
for id in IDs:
# print id
for file in os.listdir(file_dir):
if file.endswith('_'+id+'.txt'):
# print [id,file_dir+'/'+file]
full_rank_files.append([id,file_dir+'/'+file])
break
print full_rank_files
QTL.plot_pope(files_list=full_rank_files, cutoff=0.95, num_replicates=10)
"""
Explanation: CREATE POPE PLOTS and extract the SNP ids in the top 5 percent (assumes that the script pope_plot.sh is in your working directory)
End of explanation
"""
QTL.plot_pope(files_list=full_rank_files, cutoff=0.99, num_replicates=10)
"""
Explanation: CREATE POPE PLOTS and extract the SNP ids in the top 1 percent
End of explanation
"""
#make list desired rank statistic tsv files
import os
file_dir = 'RANK_STATISTIC/'
rank_stats_files = []
for file in os.listdir(file_dir):
if file.endswith('.tsv'):
print file_dir+'/'+file
rank_stats_files.append(file_dir+'/'+file)
"""
Explanation: find genes up and downstream of correlated SNPs
End of explanation
"""
gff_per_scaffold = QTL.parse_gff(gff='Metriaclima_zebra.BROADMZ2.gtf')
"""
Explanation: parse a gff file
End of explanation
"""
genes_per_analysis = QTL.find_genes(rank_stats = rank_stats_files, gff = gff_per_scaffold, distance = 15)
"""
Explanation: identify genes within a defined distance (in kb) up and down-stream of the SNPs
End of explanation
"""
QTL.annotate_genes(SNPs_to_genes=genes_per_analysis, annotations='blast2go_table_20150630_0957.txt')
mkdir find_genes
"""
Explanation: annotate relevant genes based on the blast2go annotation table
End of explanation
"""
QTL.write_candidates(SNPs_to_genes=genes_per_analysis, whitelist=genes_per_analysis.keys(), out_dir='./find_genes/')
"""
Explanation: write summary table for SNPs and relevant genes in the vicinity
End of explanation
"""
mkdir RANK_STATISTIC_reduced
QTL.exclude_extreme_rep(dictionary = rank_results, ids = IDs, prefix = 'RANK_STATISTIC_reduced/Diplotaxodon_reduced')
reduced_rank_files = []
file_dir = 'RANK_STATISTIC_reduced/'
for id in IDs:
# print id
for file in os.listdir(file_dir):
if '_'+id+'_ex_rep' in file and file.endswith('.txt'):
# print [id,file_dir+'/'+file]
reduced_rank_files.append([id,file_dir+'/'+file])
break
print reduced_rank_files
QTL.plot_pope(files_list=reduced_rank_files, cutoff=0.95, num_replicates=9)
"""
Explanation: A strategy for removing noise could be to remove the most extreme Bayenv results and recalculate rank stats
End of explanation
"""
#make list desired rank statistic tsv files
import os
file_dir = 'RANK_STATISTIC_reduced/'
rank_stats_files = []
for file in os.listdir(file_dir):
if file.endswith('.tsv'):
print file_dir+'/'+file
rank_stats_files.append(file_dir+'/'+file)
mkdir find_genes_reduced/
genes_per_analysis = QTL.find_genes(rank_stats = rank_stats_files, gff = gff_per_scaffold, distance = 15)
QTL.annotate_genes(SNPs_to_genes=genes_per_analysis, annotations='blast2go_table_20150630_0957.txt')
mkdir find_genes_reduced/
QTL.write_candidates(SNPs_to_genes=genes_per_analysis, whitelist=genes_per_analysis.keys(), out_dir='./find_genes_reduced/')
"""
Explanation: find genes up and downstream of correlated SNPs
End of explanation
"""
|
EBIvariation/eva-cttv-pipeline
|
data-exploration/complex-events/notebooks/complex-events-explore.ipynb
|
apache-2.0
|
complex_xml = os.path.join(PROJECT_ROOT, 'complex-events.xml.gz')
# get just "complex events"
# Q: what's complex? -- complex == no full coordinates
def complex_measures(x):
if x.measure:
return (
# smattering of all non SNV variants
(x.measure.variant_type.lower() not in {'single nucleotide variant'} and np.random.random() < 0.01)
# be sure to get the rare ones
or (x.measure.variant_type.lower() in {'tandem duplication', 'fusion', 'complex', 'translocation', 'inversion'})
)
return False
filter_xml(
input_xml=clinvar_path,
output_xml=complex_xml,
filter_fct=complex_measures,
)
dataset = ClinVarDataset(complex_xml)
"""
Explanation: ClinVar documentation
Submission guidelines can be found here, in particular note the requirement:
a valid description of the variant, one of:
* an HGVS expression
* chromosome coordinates and change
* cytogenetic description
Also found the xsd for ClinVar XML submissions, which includes all possible measure types:
xml
<xs:simpleType name="Measuretype">
<xs:restriction base="xs:string">
<xs:enumeration value="Gene"/>
<xs:enumeration value="Variation"/>
<xs:enumeration value="Insertion"/>
<xs:enumeration value="Mobile element insertion"/>
<xs:enumeration value="Novel sequence insertion"/>
<xs:enumeration value="Microsatellite"/>
<xs:enumeration value="Deletion"/>
<xs:enumeration value="single nucleotide variant"/>
<xs:enumeration value="Multiple nucleotide variation"/>
<xs:enumeration value="Indel"/>
<xs:enumeration value="Duplication"/>
<xs:enumeration value="Tandem duplication"/>
<xs:enumeration value="copy number loss"/>
<xs:enumeration value="copy number gain"/>
<xs:enumeration value="protein only"/>
<xs:enumeration value="Inversion"/>
<xs:enumeration value="Translocation"/>
<xs:enumeration value="Interchromosomal breakpoint"/>
<xs:enumeration value="Intrachromosomal breakpoint"/>
<xs:enumeration value="Complex"/>
</xs:restriction>
</xs:simpleType>
Compare measure types we've actually found in the data, here.
Filter
Filter full dataset to get just a manageable (hopefully representative) sample of records with measures representing complex events
End of explanation
"""
def get_measures(dataset):
for r in dataset:
if r.measure:
yield r.measure
for m in get_measures(dataset):
break
dir(m)
# just all the properties
props = [
'all_names',
'clinvar_record',
# 'explicit_insertion_length',
# 'has_complete_coordinates',
'hgnc_ids',
'hgvs',
'is_repeat_expansion_variant',
# 'measure_xml',
'microsatellite_category',
'nsv_id',
'preferred_gene_symbols',
# 'preferred_name',
'preferred_or_other_name',
# 'pubmed_refs',
'rs_id',
# 'sequence_location_helper',
# 'toplevel_refseq_hgvs',
'variant_type',
'chr',
'vcf_alt',
# 'vcf_full_coords',
'vcf_pos',
'vcf_ref'
]
# replaces empty list with None
measures = [[getattr(v, p) if getattr(v, p) != [] else None for p in props] for v in get_measures(dataset)]
df = pd.DataFrame(measures, columns=props)
df.count()
set(df['variant_type'])
df[df.variant_type == 'Translocation']
"""
Explanation: Dataframe
Load measures into a dataframe where columns are the properties.
End of explanation
"""
def get_measure_xml_for_rcv(dataset, rcv):
for r in dataset:
if r.accession == rcv:
return r.measure.measure_xml
# pretty print xml
def pprint(x):
print(ElementTree.tostring(x, encoding='unicode'))
def print_measure_xml_for_rcv(dataset, rcv):
x = get_measure_xml_for_rcv(dataset, rcv)
pprint(x)
xml = get_measure_xml_for_rcv(dataset, 'RCV001372309')
pprint(xml)
xml2 = get_measure_xml_for_rcv(dataset, 'RCV001255994')
pprint(xml2)
"""
Explanation: XML
Probably not all elements from the raw xml are captured in the object; can we find anything there?
End of explanation
"""
|
pfschus/fission_bicorrelation
|
methods/calculate_Asym_energy_space.ipynb
|
mit
|
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='ticks')
import sys
import os
import os.path
import scipy.io as sio
import time
import numpy as np
np.set_printoptions(threshold=np.nan) # print entire matrices
import pandas as pd
from tqdm import *
sys.path.append('../scripts/')
import bicorr as bicorr
import bicorr_math as bicorr_math
import bicorr_plot as bicorr_plot
import bicorr_e as bicorr_e
import bicorr_sums as bicorr_sums
%load_ext autoreload
%autoreload 2
"""
Explanation: Calculate Asym vs. Emin from bhm_e
Rewriting calc_Asym_vs_emin_energies for bhm_e.
Generate Asym_df for a specific dataset.
P. Schuster
July 18, 2018
End of explanation
"""
det_df = bicorr.load_det_df()
chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists()
dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)
singles_hist_e_n, e_bin_edges, dict_det_to_index, dict_index_to_det = bicorr_e.load_singles_hist_both(filepath = '../analysis/Cf072115_to_Cf072215b/datap/',plot_flag=True, save_flag=True)
bhm_e, e_bin_edges, note = bicorr_e.load_bhm_e('../analysis/Cf072115_to_Cf072215b/datap')
bhp_e = np.zeros((len(det_df),len(e_bin_edges)-1,len(e_bin_edges)-1))
for index in det_df.index.values: # index is same as in `bhm`
bhp_e[index,:,:] = bicorr_e.build_bhp_e(bhm_e,e_bin_edges,pair_is=[index])[0]
emins = np.arange(0.5,5,.2)
emax = 12
print(emins)
angle_bin_edges = np.arange(8,190,10)
print(angle_bin_edges)
"""
Explanation: Load data
End of explanation
"""
Asym_df = bicorr_sums.calc_Asym_vs_emin_energies(det_df, dict_index_to_det, singles_hist_e_n, e_bin_edges, bhp_e, e_bin_edges, emins, emax, angle_bin_edges, plot_flag=True, show_flag = True, save_flag=False)
Asym_df.head()
"""
Explanation: Functionalize
End of explanation
"""
|
antoniomezzacapo/qiskit-tutorial
|
community/teach_me_qiskit_2018/state_distribution_in_qubit_chains/qubit_chain_mod.ipynb
|
apache-2.0
|
from pprint import pprint
import math
import numpy as np
# importing the Qiskit
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import Aer, execute
# import state tomography functions
from qiskit.tools.visualization import plot_histogram, plot_state
# Definition of matchgate
def gate_mu3(qcirc,theta,phi,lam,a,b):
qcirc.cx(a,b)
qcirc.cu3(theta,phi,lam,b,a)
qcirc.cx(a,b)
# Number of qubits (should be odd)
n_nodes = 5
# Number of steps
n_step = 2
# Histogram
hist = True
# Quantum Sphere
# hist = False
# Creating Registers
qr = QuantumRegister(n_nodes)
cr = ClassicalRegister(n_nodes)
# Creating Circuits
qc = QuantumCircuit(qr,cr)
# Initial state
qc.x(qr[0])
# Creating of two partitions with M1' and M2
# Repeating that n_step times
for k in range(0,n_step):
for i in range(0,n_nodes-1,2):
gate_mu3(qc,math.pi, math.pi, 0, qr[i], qr[i+1])
for i in range(1,n_nodes,2):
gate_mu3(qc,math.pi/2, 0, 0, qr[i], qr[i+1])
if hist:
for i in range(0,n_nodes):
qc.measure(qr[i], cr[i])
# To print the circuit
# QASM_source = qc.qasm()
# print(QASM_source)
if hist:
backend = 'qasm_simulator'
shots = 4096
else:
backend = 'statevector_simulator'
shots = 1 # amplitudes instead of probabilities
job = execute(qc, Aer.get_backend(backend), shots = shots ) # Execute quantum walk
result = job.result()
print(result)
"""
Explanation: Modeling of Qubit Chain
<img src="images/line_qubits.png" alt="Qubit Chain">
Contributor
Alexander Yu. Vlasov
The model may be illustrated using images from composer.
First image is for one step of quantum walk.
Each step uses two partitions described earlier.
For five qubits each partition includes two two-qubit gates denoted here as m1 and m2
<img src="images/qx_quchain.png" alt="Q-Walk Firts Step">
Two (or more) steps of quantum walk should repeat the sequences of gates described above
<img src="images/qx_quchain_t2.png" alt="Q-Walk Two Steps">
The program below uses QISKit for the same purpose.
The parameter n_nodes sets the number of nodes and should be odd because of the way the partitions are implemented.
The parameter n_step is the number of steps.
The boolean parameter hist selects between two methods of simulation.
The example below uses hist = True.
In that case the simulator produces probabilities of the different outcomes with the qasm_simulator backend.
With hist = False the statevector_simulator backend is used to calculate amplitudes.
It may be useful sometimes, but unitary_simulator may be more convenient
(see link and comments below).
End of explanation
"""
if hist:
plot_histogram(result.get_counts(qc))
else:
data_ampl = result.get_data(qc)
state_walk = data_ampl['statevector']
rho_walk = np.outer(state_walk,state_walk.conj())
plot_state(rho_walk,'qsphere')
"""
Explanation: The result of the simulation is shown below as a histogram if hist=True
(or as a Quantum Sphere if hist=False)
End of explanation
"""
|
drcgw/bass
|
Kitchen Sink-Bass.ipynb
|
gpl-3.0
|
from bass import *
"""
Explanation: Welcome to BASS
Development Version
This notebook is intended for very advanced users, as it has almost no interactivity features. However, this notebook is all about speed. If you know exactly what you are doing, then this is the notebook for you.
BASS: Biomedical Analysis Software Suite for event detection and signal processing.
Copyright (C) 2015 Abigail Dobyns
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
End of explanation
"""
#initialize new file
Data = {}
Settings = {}
Results ={}
############################################################################################
#manual Setting block
Settings['folder']= r"/Users/abigaildobyns/Desktop"
Settings['Label'] = r'rat34_ECG.txt'
Settings['Output Folder'] = r"/Users/abigaildobyns/Desktop/demo"
#transformation Settings
Settings['Absolute Value'] = True #Must be True if Savitzky-Golay is being used
Settings['Bandpass Highcut'] = 30 #in Hz
Settings['Bandpass Lowcut'] = 100 #in Hz
Settings['Bandpass Polynomial'] = 4 #integer
Settings['Linear Fit'] = False #between 0 and 1 on the whole time series
Settings['Linear Fit-Rolling R'] = 0.75 #between 0 and 1
Settings['Linear Fit-Rolling Window'] = 1000 #window for rolling mean for fit, unit is index not time
Settings['Relative Baseline'] = 0 #default 0, unless data is normalized, then 1.0. Can be any float
Settings['Savitzky-Golay Polynomial'] = 4 #integer
Settings['Savitzky-Golay Window Size'] = 251 #must be odd. units are index not time
#Baseline Settings
Settings['Baseline Type'] = r'static' #'linear', 'rolling', or 'static'
#For Linear
Settings['Baseline Start'] = 50.04 #start time in seconds
Settings['Baseline Stop'] = 50.18 #end time in seconds
#For Rolling
Settings['Rolling Baseline Window'] = 1 # in seconds. leave as 'none' if linear or static
#Peaks
Settings['Delta'] = 0.25
Settings['Peak Minimum'] = -1 #amplitude value
Settings['Peak Maximum'] = 1 #amplitude value
#Bursts
Settings['Burst Area'] = False #calculate burst area
Settings['Exclude Edges'] = False #False to keep edges, True to discard them
Settings['Inter-event interval minimum (seconds)'] = 0.0100 #only for bursts, not for peaks
Settings['Maximum Burst Duration (s)'] = 10
Settings['Minimum Burst Duration (s)'] = 0
Settings['Minimum Peak Number'] = 1 #minimum number of peaks/burst, integer
Settings['Threshold']= 1.0 #linear: proportion of baseline.
#static: literal value.
#rolling, linear ammount grater than rolling baseline at each time point.
#Outputs
Settings['Generate Graphs'] = False #create and save the fancy graph outputs
#Settings that you should not change unless you are a super advanced user:
#These are settings that are still in development
Settings['Graph LCpro events'] = False
Settings['File Type'] = r'Plain' #'LCPro', 'ImageJ', 'SIMA', 'Plain', 'Morgan'
Settings['Milliseconds'] = False
############################################################################################
#Load in a Settings File
#initialize new file
Data = {}
Settings = {}
Results ={}
############################################################################################
#manual Setting block
Settings['folder']= "/Users/abigaildobyns/Desktop/Neuron Modeling/morgan voltage data"
Settings['Label'] = 'voltage-TBModel-sec1320-eL-IP0_9.txt'
Settings['Output Folder'] = "/Users/abigaildobyns/Desktop/Neuron Modeling/morgan voltage data/IP0_9"
Settings['File Type'] = r'Plain' #'LCPro', 'ImageJ', 'SIMA', 'Plain', 'Morgan'
Settings['Milliseconds'] = True
#Load a Settings file
Settings['Settings File'] = '/Users/abigaildobyns/Desktop/Neuron Modeling/morgan voltage data/IP0_9/voltage-TBModel-sec1320-eL-IP0_9.txt_Settings.csv'
Settings = load_settings(Settings)
Data, Settings, Results = analyze(Data, Settings, Results)
#plot raw data
plot_rawdata(Data)
#grouped summary for peaks
Results['Peaks-Master'].groupby(level=0).describe()
#grouped summary for bursts
Results['Bursts-Master'].groupby(level=0).describe()
#Call one time series by Key
key = 'Mean1'
graph_ts(Data, Settings, Results, key)
#raw and transformed event plot
key = 'Mean1'
start =100 #start time in seconds
end= 101#end time in seconds
results_timeseries_plot(key, start, end, Data, Settings, Results)
#Frequency plot
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1' #'Mean1' default for single wave
frequency_plot(event_type, meas, key, Data, Settings, Results)
#Get average plots, display only
event_type = 'peaks'
meas = 'Peaks Amplitude'
average_measurement_plot(event_type, meas,Results)
#raster
raster(Results)
#Batch
event_type = 'Peaks'
meas = 'all'
Results = poincare_batch(event_type, meas, Data, Settings, Results)
pd.concat({'SD1':Results['Poincare SD1'],'SD2':Results['Poincare SD2']})
#quick poincare
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
poincare_plot(Results[event_type][key][meas])
#PSD of DES
Settings['PSD-Event'] = Series(index = ['Hz','ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Event']['hz'] = 4.0 #freqency that the interpolation and PSD are performed with.
Settings['PSD-Event']['ULF'] = 0.03 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Event']['VLF'] = 0.05 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Event']['LF'] = 0.15 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Event']['HF'] = 0.4 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2)
Settings['PSD-Event']['dx'] = 10 #segmentation for the area under the curve.
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
scale = 'raw'
Results = psd_event(event_type, meas, key, scale, Data, Settings, Results)
Results['PSD-Event'][key]
#PSD of raw signal
#optional
Settings['PSD-Signal'] = Series(index = ['ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Signal']['ULF'] = 100 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Signal']['VLF'] = 200 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Signal']['LF'] = 300 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Signal']['HF'] = 400 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2)
Settings['PSD-Signal']['dx'] = 10 #segmentation for the area under the curve.
scale = 'raw' #raw or db
Results = psd_signal(version = 'original', key = 'Mean1', scale = scale,
Data = Data, Settings = Settings, Results = Results)
Results['PSD-Signal']
#spectrogram
version = 'original'
key = 'Mean1'
spectogram(version, key, Data, Settings, Results)
#Moving Stats
event_type = 'Peaks'
meas = 'all'
window = 60 #seconds
Results = moving_statistics(event_type, meas, window, Data, Settings, Results)
#Histogram Entropy-events
event_type = 'Bursts'
meas = 'all'
Results = histent_wrapper(event_type, meas, Data, Settings, Results)
Results['Histogram Entropy']
"""
Explanation: WARNING: All strings should be raw, especially on Windows.
r'String!'
End of explanation
"""
#Approximate Entropy-events
event_type = 'Peaks'
meas = 'all'
Results = ap_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Approximate Entropy']
#Sample Entropy-events
event_type = 'Peaks'
meas = 'all'
Results = samp_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Sample Entropy']
#Approximate Entropy on raw signal
#takes a VERY long time
from pyeeg import ap_entropy
version = 'original' #original, trans, shift, or rolling
key = 'Mean1' #Mean1 default key for one time series
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end. The absolute end is -1
ap_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))
#Sample Entropy on raw signal
#takes a VERY long time
from pyeeg import samp_entropy
version = 'original' #original, trans, shift, or rolling
key = 'Mean1' #Mean1 default key for one time series
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end. The absolute end is -1
samp_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))
"""
Explanation: pyEEG required for approximate and sample entropy
End of explanation
"""
moving_statistics?
import pyeeg
pyeeg.samp_entropy?
"""
Explanation: Need help?
Try checking the docstring of a function you are struggling with.
moving_statistics?
help(moving_statistics)
End of explanation
"""
|
ddtm/dl-course
|
Seminar5/Seminar5.ipynb
|
mit
|
import numpy as np
import theano
import theano.tensor as T
import lasagne
import cPickle as pickle
import os
import matplotlib.pyplot as plt
%matplotlib inline
import scipy
from scipy.misc import imread, imsave, imresize
from lasagne.utils import floatX
from lasagne.layers import InputLayer
from lasagne.layers import DenseLayer
from lasagne.layers import NonlinearityLayer
from lasagne.layers import DropoutLayer
from lasagne.layers import Pool2DLayer as PoolLayer
from lasagne.layers import Conv2DLayer as ConvLayer
from lasagne.nonlinearities import rectify, softmax
IMAGE_W = 224
#vgg19 model
#http://www.robots.ox.ac.uk/~vgg/research/very_deep/
def build_model():
net = {}
net['input'] = InputLayer((None, 3, 224, 224))
net['conv1_1'] = ConvLayer(net['input'], 64, 3, pad=1, flip_filters=False)
net['conv1_2'] = ConvLayer(net['conv1_1'], 64, 3, pad=1, flip_filters=False)
net['pool1'] = PoolLayer(net['conv1_2'], 2)
net['conv2_1'] = ConvLayer(net['pool1'], 128, 3, pad=1, flip_filters=False)
net['conv2_2'] = ConvLayer(net['conv2_1'], 128, 3, pad=1, flip_filters=False)
net['pool2'] = PoolLayer(net['conv2_2'], 2)
net['conv3_1'] = ConvLayer(net['pool2'], 256, 3, pad=1, flip_filters=False)
net['conv3_2'] = ConvLayer(net['conv3_1'], 256, 3, pad=1, flip_filters=False)
net['conv3_3'] = ConvLayer(net['conv3_2'], 256, 3, pad=1, flip_filters=False)
net['conv3_4'] = ConvLayer(net['conv3_3'], 256, 3, pad=1, flip_filters=False)
net['pool3'] = PoolLayer(net['conv3_4'], 2)
net['conv4_1'] = ConvLayer(net['pool3'], 512, 3, pad=1, flip_filters=False)
net['conv4_2'] = ConvLayer(net['conv4_1'], 512, 3, pad=1, flip_filters=False)
net['conv4_3'] = ConvLayer(net['conv4_2'], 512, 3, pad=1, flip_filters=False)
net['conv4_4'] = ConvLayer(net['conv4_3'], 512, 3, pad=1, flip_filters=False)
net['pool4'] = PoolLayer(net['conv4_4'], 2)
net['conv5_1'] = ConvLayer(net['pool4'], 512, 3, pad=1, flip_filters=False)
net['conv5_2'] = ConvLayer(net['conv5_1'], 512, 3, pad=1, flip_filters=False)
net['conv5_3'] = ConvLayer(net['conv5_2'], 512, 3, pad=1, flip_filters=False)
net['conv5_4'] = ConvLayer(net['conv5_3'], 512, 3, pad=1, flip_filters=False)
net['pool5'] = PoolLayer(net['conv5_4'], 2)
net['fc6'] = DenseLayer(net['pool5'], num_units=4096)
net['fc6_dropout'] = DropoutLayer(net['fc6'], p=0.5)
net['fc7'] = DenseLayer(net['fc6_dropout'], num_units=4096)
net['fc7_dropout'] = DropoutLayer(net['fc7'], p=0.5)
net['fc8'] = DenseLayer(net['fc7_dropout'], num_units=1000, nonlinearity=None)
net['prob'] = NonlinearityLayer(net['fc8'], softmax)
return net
#classes' names are stored here
classes = pickle.load(open('classes.pkl'))
#for example, 10th class is ostrich:
print classes[9]
"""
Explanation: Seminar 5: Deep Networks
Run the code and read the text boxes carefully!
End of explanation
"""
MEAN_VALUES = np.array([104, 117, 123])
IMAGE_W = 224
def preprocess(img):
pass
def deprocess(img):
pass
img = np.random.rand(IMAGE_W, IMAGE_W, 3)
print np.linalg.norm(deprocess(preprocess(img)) - img)
"""
Explanation: You have to implement two functions in the cell below.
The preprocess function should take an image with shape (w, h, 3) and transform it into a tensor with shape (1, 3, 224, 224). Without this transformation, vgg19 won't be able to digest the input image.
Additionally, your preprocessing function has to rearrange the channels RGB -> BGR and subtract the mean values from every channel.
End of explanation
"""
#load model weights
#vgg19.npz is available for download at
#https://yadi.sk/d/UQPXeM_GqEmGg
net = build_model()
params = np.load('vgg19.npz')['params']
for i in range(32,len(params)):
params[i] = params[i].T
lasagne.layers.set_all_param_values(net.values(), params)
input_image = T.tensor4('input')
output = lasagne.layers.get_output(net['prob'], input_image)
prob = theano.function([input_image], output)
"""
Explanation: If your implementation is correct, the number above will be small, because the deprocess function is the inverse of the preprocess function
End of explanation
"""
img = imread('sample_images/albatross.jpg')
plt.imshow(img)
plt.show()
p = prob(preprocess(img))
labels = p.ravel().argsort()[-1:-6:-1]
print 'top-5 classes are:'
for l in labels:
print '%3f\t%s' % (p.ravel()[l], classes[l].split(',')[0])
"""
Explanation: In the cell below, you can test your preprocessing function on some sample images. If it is implemented correctly, albatross.jpg will be classified as albatross with 99.9% certainty, and with other pictures the network will produce mostly meaningful results.
You may notice that the network output varies from run to run. This behaviour can be suppressed with the help of the "deterministic" keyword in the get_output function in the cell above (a small sketch follows below).
End of explanation
"""
def classify(img):
if np.random.rand() > 0.5:
return 'cat'
else:
return 'dog'
path = 'catsvsdogs/test/'
files = sorted(os.listdir(path))
labels = []
for f in files:
img = imread(path + f)
label = classify(img)
labels.append(label)
pickle.dump(labels, open('test_labels.pickle', 'wb'))
"""
Explanation: Now, use the vgg19 network and your knowledge of machine learning to classify cats and dogs!
data: https://yadi.sk/d/m6ZO4BvWqEmR9
catsvsdogs/val/ validation images
catsvsdogs/val_labels.pickle labels for validation images, sorted by filename
catsvsdogs/test/ test images
You have to implement a classification algorithm, tune it on the validation images, and save the output of your algorithm on the test images in the form of a pickled file, as shown below. Your results, as well as this notebook, have to be attached to your letter to rdlclass@yandex.ru
I expect classification accuracy >95%, or >90% at least
Cheating is not allowed
End of explanation
"""
w = net['conv1_1'].W.eval().copy()
w -= w.min()
w /= w.max()
plt.figure(figsize=(10, 10))
for i in range(8):
for j in range(8):
n = 8*j + i
if n < 64:
plt.subplot(8,8,n)
plt.axis('off')
plt.imshow(w[n,:,:,:].transpose((1,2,0)), interpolation='none')
plt.show()
"""
Explanation: Visualizations
It is easy to visualize the weights of the first convolutional layer:
End of explanation
"""
generated_image = theano.shared(floatX(np.zeros((1, 3, IMAGE_W, IMAGE_W))))
gen_features = lasagne.layers.get_output(net.values(), generated_image)
gen_features = {k: v for k, v in zip(net.keys(), gen_features)}
layer_name = 'pool1'
c = 0
blob_width = gen_features[layer_name].shape[2]
x = blob_width/2
y = blob_width/2
activation_loss = 1e10*(1e1 - gen_features[layer_name][0, c, x, y])**2
tv_loss = T.mean(T.abs_(generated_image[:,:,1:,1:] - generated_image[:,:,:-1,1:]) +
T.abs_(generated_image[:,:,1:,1:] - generated_image[:,:,1:,:-1]))
loss = activation_loss + 1.0 * tv_loss
grad = T.grad(loss, generated_image)
f_loss = theano.function([], loss)
f_grad = theano.function([], grad)
# Helper functions to interface with scipy.optimize
def eval_loss(x0):
x_ = floatX(x0.reshape((1, 3, IMAGE_W, IMAGE_W)))
generated_image.set_value(x_)
return f_loss().astype('float64')
def eval_grad(x0):
x0 = floatX(x0.reshape((1, 3, IMAGE_W, IMAGE_W)))
generated_image.set_value(x0)
return np.array(f_grad()).flatten().astype('float64')
#run input image optimization via scipy.optimize.fmin_l_bfgs_b
generated_image.set_value(floatX(np.zeros((1, 3, IMAGE_W, IMAGE_W))))
x0 = generated_image.get_value().astype('float64')
status = scipy.optimize.fmin_l_bfgs_b(eval_loss, x0.flatten(), fprime=eval_grad, maxfun=20)
x0 = generated_image.get_value().astype('float64')
"""
Explanation: On higher layers, filters have more than 3 channels, so it is impossible to visualize them directly. However, if we want to understand something about features on higher layers, it is possible to visualize them via optimization of the input image.
Namely, we can solve the following problem
$$J=\underset{I}{\mathrm{argmax}} \left( n^i_{xyc}(I) \right)$$
where $n^i_{xyc}$ is the activation of the neuron on the $i$'th layer at position $x$,$y$,$c$ given input image $I$.
Basically, $J$ is the answer to the question "what is our neuron looking for?"
End of explanation
"""
#show the results
w = IMAGE_W
for d in [112, 64, 32, 16, 8]:
pic = deprocess(x0)[w/2-d:w/2+d,w/2-d:w/2+d,:]
pic -= pic.min()
pic /= pic.max()
plt.imshow(pic, interpolation='None')
plt.show()
"""
Explanation: If your deprocess function is implemented correctly, you'll see what the neuron on the first pooling layer is looking for. The result should look like a Gabor filter, similar to the ones found in the first layer of networks with large filters, such as AlexNet.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nasa-giss/cmip6/models/giss-e2-1g/land.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'giss-e2-1g', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: GISS-E2-1G
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:20
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (horizontal, vertical, etc.)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
turbomanage/training-data-analyst
|
CPB100/lab4a/demandforecast.ipynb
|
apache-2.0
|
import google.datalab.bigquery as bq
import pandas as pd
import numpy as np
import shutil
%bq tables describe --name bigquery-public-data.new_york.tlc_yellow_trips_2015
"""
Explanation: <h1>Demand forecasting with BigQuery and TensorFlow</h1>
In this notebook, we will develop a machine learning model to predict the demand for taxi cabs in New York.
To develop the model, we will need to get historical data of taxicab usage. This data exists in BigQuery. Let's start by looking at the schema.
End of explanation
"""
%bq query
SELECT
EXTRACT (DAYOFYEAR from pickup_datetime) AS daynumber
FROM `bigquery-public-data.new_york.tlc_yellow_trips_2015`
LIMIT 5
"""
Explanation: <h2> Analyzing taxicab demand </h2>
Let's pull the number of trips for each day in the 2015 dataset using Standard SQL.
End of explanation
"""
%bq query -n taxiquery
WITH trips AS (
SELECT EXTRACT (DAYOFYEAR from pickup_datetime) AS daynumber
FROM `bigquery-public-data.new_york.tlc_yellow_trips_*`
where _TABLE_SUFFIX = @YEAR
)
SELECT daynumber, COUNT(1) AS numtrips FROM trips
GROUP BY daynumber ORDER BY daynumber
query_parameters = [
{
'name': 'YEAR',
'parameterType': {'type': 'STRING'},
'parameterValue': {'value': 2015}
}
]
trips = taxiquery.execute(query_params=query_parameters).result().to_dataframe()
trips[:5]
"""
Explanation: <h3> Modular queries and Pandas dataframe </h3>
Let's use the total number of trips as our proxy for taxicab demand (other reasonable alternatives are total trip_distance or total fare_amount). It is possible to predict multiple variables using TensorFlow, but for simplicity, we will stick to just predicting the number of trips.
We will give our query a name 'taxiquery' and have it use an input variable '$YEAR'. We can then invoke the 'taxiquery' by giving it a YEAR. The to_dataframe() converts the BigQuery result into a <a href='http://pandas.pydata.org/'>Pandas</a> dataframe.
End of explanation
"""
avg = np.mean(trips['numtrips'])
print('Just using average={0} has RMSE of {1}'.format(avg, np.sqrt(np.mean((trips['numtrips'] - avg)**2))))
"""
Explanation: <h3> Benchmark </h3>
Often, a reasonable estimate of something is its historical average. We can therefore benchmark our machine learning model against the historical average.
End of explanation
"""
%bq query
SELECT * FROM `bigquery-public-data.noaa_gsod.stations`
WHERE state = 'NY' AND wban != '99999' AND name LIKE '%LA GUARDIA%'
"""
Explanation: The mean here is about 400,000 and the root-mean-square error (RMSE) in this case is about 52,000. In other words, if we were to estimate that there are 400,000 taxi trips on any given day, that estimate will be off on average by about 52,000 in either direction.
Let's see if we can do better than this -- our goal is to make predictions of taxicab demand whose RMSE is lower than 52,000.
What kinds of things affect people's use of taxicabs?
<h2> Weather data </h2>
We suspect that weather influences how often people use a taxi. Perhaps someone who'd normally walk to work would take a taxi if it is very cold or rainy.
One of the advantages of using a global data warehouse like BigQuery is that you get to mash up unrelated datasets quite easily.
End of explanation
"""
%bq query -n wxquery
SELECT EXTRACT (DAYOFYEAR FROM CAST(CONCAT(@YEAR,'-',mo,'-',da) AS TIMESTAMP)) AS daynumber,
MIN(EXTRACT (DAYOFWEEK FROM CAST(CONCAT(@YEAR,'-',mo,'-',da) AS TIMESTAMP))) dayofweek,
MIN(min) mintemp, MAX(max) maxtemp, MAX(IF(prcp=99.99,0,prcp)) rain
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE stn='725030' AND _TABLE_SUFFIX = @YEAR
GROUP BY 1 ORDER BY daynumber DESC
query_parameters = [
{
'name': 'YEAR',
'parameterType': {'type': 'STRING'},
'parameterValue': {'value': 2015}
}
]
weather = wxquery.execute(query_params=query_parameters).result().to_dataframe()
weather[:5]
"""
Explanation: <h3> Variables </h3>
Let's pull out the minimum and maximum daily temperature (in Fahrenheit) as well as the amount of rain (in inches) for La Guardia airport.
End of explanation
"""
data = pd.merge(weather, trips, on='daynumber')
data[:5]
"""
Explanation: <h3> Merge datasets </h3>
Let's use Pandas to merge (combine) the taxi cab and weather datasets day-by-day.
End of explanation
"""
j = data.plot(kind='scatter', x='maxtemp', y='numtrips')
"""
Explanation: <h3> Exploratory analysis </h3>
Is there a relationship between maximum temperature and the number of trips?
End of explanation
"""
j = data.plot(kind='scatter', x='dayofweek', y='numtrips')
"""
Explanation: The scatterplot above doesn't look very promising. There appears to be a weak downward trend, but it's also quite noisy.
Is there a relationship between the day of the week and the number of trips?
End of explanation
"""
j = data[data['dayofweek'] == 7].plot(kind='scatter', x='maxtemp', y='numtrips')
"""
Explanation: Hurrah, we seem to have found a predictor. It appears that people use taxis more later in the week. Perhaps New Yorkers make weekly resolutions to walk more and then lose their determination later in the week, or maybe it reflects tourism dynamics in New York City.
Perhaps if we took out the <em>confounding</em> effect of the day of the week, maximum temperature will start to have an effect. Let's see if that's the case:
End of explanation
"""
data2 = data # 2015 data
for year in [2014, 2016]:
query_parameters = [
{
'name': 'YEAR',
'parameterType': {'type': 'STRING'},
'parameterValue': {'value': year}
}
]
weather = wxquery.execute(query_params=query_parameters).result().to_dataframe()
trips = taxiquery.execute(query_params=query_parameters).result().to_dataframe()
data_for_year = pd.merge(weather, trips, on='daynumber')
data2 = pd.concat([data2, data_for_year])
data2.describe()
j = data2[data2['dayofweek'] == 7].plot(kind='scatter', x='maxtemp', y='numtrips')
"""
Explanation: Removing the confounding factor does seem to reflect an underlying trend around temperature. But ... the data are a little sparse, don't you think? This is something that you have to keep in mind -- the more predictors you start to consider (here we are using two: day of week and maximum temperature), the more rows you will need so as to avoid <em> overfitting </em> the model.
<h3> Adding 2014 and 2016 data </h3>
Let's add in 2014 and 2016 data to the Pandas dataframe. Note how useful it was for us to modularize our queries around the YEAR.
End of explanation
"""
import tensorflow as tf
shuffled = data2.sample(frac=1, random_state=13)
# It would be a good idea, if we had more data, to treat the days as categorical variables
# with the small amount of data, we have though, the model tends to overfit
#predictors = shuffled.iloc[:,2:5]
#for day in range(1,8):
# matching = shuffled['dayofweek'] == day
# key = 'day_' + str(day)
# predictors[key] = pd.Series(matching, index=predictors.index, dtype=float)
predictors = shuffled.iloc[:,1:5]
predictors[:5]
shuffled[:5]
targets = shuffled.iloc[:,5]
targets[:5]
"""
Explanation: The data do seem a bit more robust. If we had even more data, it would be better of course. But in this case, we only have 2014-2016 data for taxi trips, so that's what we will go with.
<h2> Machine Learning with Tensorflow </h2>
We'll use 80% of our dataset for training and 20% of the data for testing the model we have trained. Let's shuffle the rows of the Pandas dataframe so that this division is random. The predictor (or input) columns will be every column in the database other than the number-of-trips (which is our target, or what we want to predict).
The machine learning models that we will use -- linear regression and neural networks -- both require that the input variables are numeric in nature.
The day of the week, however, is a categorical variable (i.e. Tuesday is not really greater than Monday). So, we should create separate columns for whether it is a Monday (with values 0 or 1), Tuesday, etc.
Against that, we do have limited data (remember: the more columns you use as input features, the more rows you need to have in your training dataset), and it appears that there is a clear linear trend by day of the week. So, we will opt for simplicity here and use the data as-is. Try uncommenting the code that creates separate columns for the days of the week and re-run the notebook if you are curious about the impact of this simplification.
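If you do want to experiment with that encoding, here is a minimal sketch (mirroring the commented-out code in the adjacent cell, and assuming the data2 dataframe built above) of how the separate day columns could be created with pandas:
```python
# Hypothetical sketch: one 0/1 indicator column per day of the week,
# instead of a single numeric dayofweek predictor
day_dummies = pd.get_dummies(data2['dayofweek'], prefix='day')
predictors_with_days = pd.concat(
    [data2[['mintemp', 'maxtemp', 'rain']], day_dummies], axis=1)
predictors_with_days[:5]
```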
End of explanation
"""
trainsize = int(len(shuffled['numtrips']) * 0.8)
avg = np.mean(shuffled['numtrips'][:trainsize])
rmse = np.sqrt(np.mean((targets[trainsize:] - avg)**2))
print('Just using average={0} has RMSE of {1}'.format(avg, rmse))
"""
Explanation: Let's update our benchmark based on the 80-20 split and the larger dataset.
End of explanation
"""
SCALE_NUM_TRIPS = 600000.0
trainsize = int(len(shuffled['numtrips']) * 0.8)
testsize = len(shuffled['numtrips']) - trainsize
npredictors = len(predictors.columns)
noutputs = 1
tf.logging.set_verbosity(tf.logging.WARN) # change to INFO to get output every 100 steps ...
shutil.rmtree('./trained_model_linear', ignore_errors=True) # so that we don't load weights from previous runs
estimator = tf.contrib.learn.LinearRegressor(model_dir='./trained_model_linear',
feature_columns=tf.contrib.learn.infer_real_valued_columns_from_input(predictors.values))
print("starting to train ... this will take a while ... use verbosity=INFO to get more verbose output")
def input_fn(features, targets):
return tf.constant(features.values), tf.constant(targets.values.reshape(len(targets), noutputs)/SCALE_NUM_TRIPS)
estimator.fit(input_fn=lambda: input_fn(predictors[:trainsize], targets[:trainsize]), steps=10000)
pred = np.multiply(list(estimator.predict(predictors[trainsize:].values)), SCALE_NUM_TRIPS )
rmse = np.sqrt(np.mean(np.power((targets[trainsize:].values - pred), 2)))
print('LinearRegression has RMSE of {0}'.format(rmse))
"""
Explanation: <h2> Linear regression with tf.contrib.learn </h2>
We scale the number of taxicab rides by 600,000 (the SCALE_NUM_TRIPS constant) so that the model can keep its predicted values in the [0-1] range. The optimization goes a lot faster when the weights are small numbers. We save the weights into ./trained_model_linear and display the root mean square error on the test dataset.
End of explanation
"""
SCALE_NUM_TRIPS = 600000.0
trainsize = int(len(shuffled['numtrips']) * 0.8)
testsize = len(shuffled['numtrips']) - trainsize
npredictors = len(predictors.columns)
noutputs = 1
tf.logging.set_verbosity(tf.logging.WARN) # change to INFO to get output every 100 steps ...
shutil.rmtree('./trained_model', ignore_errors=True) # so that we don't load weights from previous runs
estimator = tf.contrib.learn.DNNRegressor(model_dir='./trained_model',
hidden_units=[5, 5],
feature_columns=tf.contrib.learn.infer_real_valued_columns_from_input(predictors.values))
print("starting to train ... this will take a while ... use verbosity=INFO to get more verbose output")
def input_fn(features, targets):
return tf.constant(features.values), tf.constant(targets.values.reshape(len(targets), noutputs)/SCALE_NUM_TRIPS)
estimator.fit(input_fn=lambda: input_fn(predictors[:trainsize], targets[:trainsize]), steps=10000)
pred = np.multiply(list(estimator.predict(predictors[trainsize:].values)), SCALE_NUM_TRIPS )
rmse = np.sqrt(np.mean((targets[trainsize:].values - pred)**2))
print('Neural Network Regression has RMSE of {0}'.format(rmse))
"""
Explanation: The RMSE here (57K) being lower than the benchmark (62K) indicates that we are doing about 10% better with the machine learning model than we would if we just used the historical average (our benchmark).
<h2> Neural network with tf.contrib.learn </h2>
Let's make a more complex model with a few hidden nodes.
End of explanation
"""
input = pd.DataFrame.from_dict(data =
{'dayofweek' : [4, 5, 6],
'mintemp' : [60, 40, 50],
'maxtemp' : [70, 90, 60],
'rain' : [0, 0.5, 0]})
# read trained model from ./trained_model
estimator = tf.contrib.learn.LinearRegressor(model_dir='./trained_model_linear',
feature_columns=tf.contrib.learn.infer_real_valued_columns_from_input(input.values))
pred = np.multiply(list(estimator.predict(input.values)), SCALE_NUM_TRIPS )
print(pred)
"""
Explanation: Using a neural network results in similar performance to the linear model when I ran it -- it might be because there isn't enough data for the NN to do much better. (NN training is a non-convex optimization, and you will get different results each time you run the above code).
<h2> Running a trained model </h2>
So, we have trained a model, and saved it to a file. Let's use this model to predict taxicab demand given the expected weather for three days.
Here we make a Dataframe out of those inputs, load up the saved model (note that we have to know the model equation -- it's not saved in the model file) and use it to predict the taxicab demand.
End of explanation
"""
|
oseledets/nla2016
|
lectures/lecture-1.ipynb
|
mit
|
import numpy as np
import random
#c = random.random()
#print(c)
c = np.float32(0.925924589693)
a = np.float32(8.9)
b = np.float32(c / a)
print('{0:10.16f}'.format(b))
print a * b - c
#a = np.array(1.585858585887575775757575e-5, dtype=np.float)
a = np.array(5.0, dtype=np.float32)
b = np.sqrt(a)
print('{0:10.16f}'.format(b ** 2 - a))
a = np.array(2.28827272710, dtype=np.float32)
b = np.exp(a)
print np.log(b) - a
"""
Explanation: Lecture 1: Floating point arithmetic, vector norms
Syllabus
Week 1: floating point, vector norms, matrix multiplication
Today
Fixed/floating point arithmetic; concept of backward and forward stability of algorithms
How to measure accuracy: vector norms
Representation of numbers
Real numbers represent quantities: probabilities, velocities, masses, ...
It is important to know how they are represented in the computer (which only knows about bits).
Fixed point
The most straightforward format for the representation of real numbers is fixed point representation,
also known as Qm.n format.
A Qm.n number is in the range $[-(2^m), 2^m - 2^{-n}]$, with resolution $2^{-n}$.
Total storage is $m + n + 1$ bits.
The range of numbers represented is fixed.
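As an illustration (not part of the original lecture), here is a minimal sketch of a Q4.3 format: the stored value is an integer, interpreted as a multiple of $2^{-n}$.
```python
# Minimal Q4.3 fixed-point sketch: 4 integer bits, 3 fractional bits
n_frac = 3
def to_fixed(x):
    return int(round(x * 2**n_frac))    # stored integer
def from_fixed(i):
    return i * 2.0**(-n_frac)           # represented value
print(from_fixed(to_fixed(3.1415926)))   # 3.125 -- the resolution is 2**-3 = 0.125
```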
Representation of numbers
The numbers in computer memory are typically represented as floating point numbers
A floating point number is represented as
$$\textrm{number} = \textrm{significand} \times \textrm{base}^{\textrm{exponent}},$$
where $\textrm{significand}$ is an integer, $\textrm{base}$ is a positive integer and $\textrm{exponent}$ is an integer (possibly negative), i.e.
$$ 1.2 = 12 \cdot 10^{-1}.$$
Fixed vs Floating
Q: What are the advantages/disadvantages of the fixed and floating points?
A: In most cases, they work just fine.
However, fixed point represents numbers within a specified range and controls the absolute accuracy.
Floating point represents numbers with relative accuracy, and is suitable when the numbers in a computation have varying scales
(e.g., $10^{-1}$ and $10^{5}$).
In practice, if speed is of no concern, use float32 or float64.
IEEE 754
In modern computers, the floating point representation is governed by the IEEE 754 standard, published in 1985; before that, different computers handled floating point numbers differently.
IEEE 754 has:
- Floating point representation (as described above), $(-1)^s \times c \times b^q$.
- Two infinities, $+\infty$ and $-\infty$
- Two kinds of NaN: a quiet NaN (qNaN) and signalling NaN (sNaN)
- Rules for rounding
- Rules for $\frac{0}{0}, \frac{1}{-0}, \ldots$
$ 0 \leq c \leq b^p - 1, \quad 1 - emax \leq q + p - 1 \leq emax$
The two most common formats: single & double
The two most common formats are called binary32 and binary64 (also known as single and double precision).
| Name | Common Name | Base | Digits | Emin | Emax |
|------|----------|----------|-------|------|------|
|binary32| single precision | 2 | 24 | -126 | + 127 |
|binary64| double precision | 2 | 53 | -1022 | + 1023 |
Accuracy and memory
The relative accuracy of single precision is $10^{-7}-10^{-8}$,
while for double precision it is $10^{-14}-10^{-16}$.
<font color='red'> Crucial note 1: </font> A float32 takes 4 bytes; a float64 (double precision) takes 8 bytes.
<font color='red'> Crucial note 2: </font> These are the only two floating point types supported in hardware.
<font color='red'> Crucial note 3: </font> You should typically use double precision in computational science and engineering (CSE), and single precision on GPUs and in data science.
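These relative accuracies are simply the machine epsilons of the two formats, which numpy can report directly:
```python
import numpy as np
# Machine epsilon (relative accuracy) of the two standard IEEE formats
print(np.finfo(np.float32).eps)   # ~1.19e-07
print(np.finfo(np.float64).eps)   # ~2.22e-16
```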
Some demo (for division accuracy)
End of explanation
"""
n = 10 ** 8
#x = #np.random.randn(n)
#x = (-1) ** np.arange(n) + 1e-3 * np.random.randn(n)
sm = 1e-10
x = np.ones(n, dtype=np.float32) * sm
x[0] = 1.0
#x16 = np.array(x, dtype=np.float32)
#x = np.array(x16, dtype=np.float64)
true_sum = 1.0 + (n - 1)*sm
approx_sum = np.sum(x)
from numba import jit
@jit
def dumb_sum2(x):
s = np.float32(0.0)
for i in range(len(x)):
s = s + x[i]
return s
@jit
def kahan_sum(x):
s = np.float32(0.0)
c = np.float32(0.0)
for i in range(len(x)):
y = x[i] - c
t = s + y
c = (t - s) - y
s = t
return s
k_sum = kahan_sum(x)
d_sum = dumb_sum2(x)
print('Error in sum: {0:3.1e}, kahan: {1:3.1e}, dumb_sum: {2:3.1e} '.format(approx_sum - true_sum, k_sum - true_sum, d_sum - true_sum))
"""
Explanation: Summary
For some values the inverse functions give exact answers.
The relative accuracy is guaranteed by the IEEE standard.
This does not hold for many modern GPUs.
Loss of significance
Many operations lead to the loss of digits: [loss of significance](https://en.wikipedia.org/wiki/Loss_of_significance).
For example, it is a bad idea to subtract two big numbers that are close; the difference will have fewer correct digits.
This is related to algorithms and their properties (forward/backward stability), which we will discuss later.
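A tiny illustration of such cancellation in single precision (the true difference is $10^{-7}$, but only a couple of digits survive):
```python
import numpy as np
x = np.float32(1.0 + 1e-7)
y = np.float32(1.0)
print('{0:10.8e}'.format(x - y))   # ~1.19e-07 instead of 1e-07, i.e. about 20% relative error
```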
Summation algorithm
However, the rounding errors can depend on the algorithm.
Consider the simplest problem: given $n$ numbers floating point numbers $x_1, \ldots, x_n$
compute their sum
$$S = \sum_{i=1}^n x_i = x_1 + \ldots + x_n.$$
The simplest algorithm is to add one-by-one.
What is the actual error for such algorithm?
Naive algorithm
Naive algorithm adds numbers one-by-one,
$$y_1 = x_1, \quad y_2 = y_1 + x_2, \quad y_3 = y_2 + x_3, \ldots.$$
The worst-case error is then proportional to $\mathcal{O}(n)$, while mean-squared error is $\mathcal{O}(\sqrt{n})$.
The Kahan algorithm gives the worst-case error bound $\mathcal{O}(1)$ (i.e., independent of $n$).
<font color='red'> Can you find the $\mathcal{O}(\log n)$ algorithm? </font>
Kahan summation
The following algorithm gives $2 \varepsilon + \mathcal{O}(n \varepsilon^2)$ error, where $\varepsilon$ is the machine precision.
```python
s = 0
c = 0
for i in range(len(x)):
    y = x[i] - c
    t = s + y
    c = (t - s) - y
    s = t
```
End of explanation
"""
import math
print math.fsum([1, 1e20, 1, -1e20] * 10000), np.sum([1, 1e20, 1, -1e20] * 10000)
"""
Explanation: More complicated example
End of explanation
"""
import numpy as np
n = 100
a = np.ones(n)
b = a + 1e-3 * np.random.randn(n)
print 'Relative error:', np.linalg.norm(a - b, np.inf) / np.linalg.norm(b, np.inf)
"""
Explanation: Summary
You should be really careful with floating point, since it may give you incorrect answers due to round-off errors.
For many standard algorithms, the stability is well-understood and problems can be easily detected.
Vectors
In NLA we typically work not with numbers but with vectors.
Recall that a vector is a 1D array with $n$ numbers. Typically, it is considered as an $n \times 1$ matrix (column vector).
Vector norm
Vectors typically provide an (approximate) description of a physical (or some other) object.
One of the main questions is how accurate the approximation is (1%, 10%).
What counts as an acceptable representation depends, of course, on the particular application. For example:
- In partial differential equations accuracies of $10^{-5} - 10^{-10}$ are typical
- In data mining an error of $80\%$ is sometimes ok, since the interesting signal is corrupted by huge noise.
Distances and norms
Norm is a qualitative measure of smallness of a vector and is typically denoted as $\Vert x \Vert$.
The norm should satisfy certain properties:
$\Vert \alpha x \Vert = |\alpha| \Vert x \Vert$,
$\Vert x + y \Vert \leq \Vert x \Vert + \Vert y \Vert$ (triangle inequality),
If $\Vert x \Vert = 0$ then $x = 0$.
The distance between two vectors is then defined as
$$
d(x, y) = \Vert x - y \Vert.
$$
Standard norms
The most well-known and widely used norm is euclidean norm:
$$\Vert x \Vert_2 = \sqrt{\sum_{i=1}^n |x_i|^2},$$
which corresponds to the distance in everyday life (the vectors might have complex elements, hence the modulus here).
$p$-norm
Euclidean norm, or $2$-norm, is a subclass of an important class of $p$-norms:
$$
\Vert x \Vert_p = \Big(\sum_{i=1}^n |x_i|^p\Big)^{1/p}.
$$
There are two very important special cases:
- Infinity norm, or Chebyshev norm which is defined as the maximal element: $\Vert x \Vert_{\infty} = \max_i | x_i|$
- $L_1$ norm (or Manhattan distance) which is defined as the sum of modules of the elements of $x$: $\Vert x \Vert_1 = \sum_i |x_i|$
<img src="chebyshev.jpeg" style="float: left; height: 1%"> <img src="manhattan.jpeg">
We will give examples where Manhattan is very important: it all relates to the compressed sensing methods
that emerged in the mid-00s as one of the most popular research topics.
Equivalence of the norms
All norms are equivalent in the sense that
$$
C_1 \Vert x \Vert_\ast \leq \Vert x \Vert_{\ast\ast} \leq C_2 \Vert x \Vert_\ast
$$
for some constants $C_1(n), C_2(n)$ and all $x \in \mathbb{R}^n$, for any pair of norms $\Vert \cdot \Vert_\ast$ and $\Vert \cdot \Vert_{\ast\ast}$. The equivalence of the norms basically means that if the vector is small in one norm, it is small in another norm. However, the constants can be large.
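For instance, for the $2$- and $\infty$-norms one has $\Vert x \Vert_\infty \leq \Vert x \Vert_2 \leq \sqrt{n}\, \Vert x \Vert_\infty$, which is easy to check numerically:
```python
import numpy as np
n = 100
x = np.random.randn(n)
print('{0:.3f} <= {1:.3f} <= {2:.3f}'.format(
    np.linalg.norm(x, np.inf), np.linalg.norm(x, 2),
    np.sqrt(n) * np.linalg.norm(x, np.inf)))
```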
Computing norms in Python
The numpy package has all you need for computing norms (np.linalg.norm function)
End of explanation
"""
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
p = 0.5 #Which norm do we use
M = 40000 #Number of sampling points
a = np.random.randn(M, 2)
b = []
for i in xrange(M):
if np.linalg.norm(a[i, :], p) <= 1:
b.append(a[i, :])
b = np.array(b)
plt.fill(b[:, 0], b[:, 1])
plt.axis('equal')
"""
Explanation: Unit disks in different norms
A unit disk is the set of points such that $\Vert x \Vert \leq 1$. For the Euclidean norm it is an ordinary disk; for other norms the "disks" look different.
End of explanation
"""
import numpy as np
n = 500
a = [[1.0/(i + j + 1) for i in range(n)] for j in range(n)]  # Hilbert matrix
a = np.array(a)
rhs = np.random.randn(n)
sol = np.linalg.solve(a, rhs)
print np.linalg.norm(a.dot(sol) - rhs)/np.linalg.norm(rhs) #Ax - y
#print sol
plt.plot(sol)
"""
Explanation: Why $L_1$-norm can be important
$L_1$ norm, as it was discovered quite recently, plays an important role in compressed sensing.
The simplest formulation is as follows:
- You have some observations $f$
- You have a linear model $Ax = f$, where $A$ is an $n \times m$ matrix, $A$ is known
- The number of equations, $n$ is less than the number of unknowns, $m$
The question: can we find the solution?
The solution is obviously non-unique, so a natural approach is to find the solution that is minimal in the certain sense:
$$ \Vert x \Vert \rightarrow \min, \quad \mbox{subject to } Ax = f$$
Typical choice of $\Vert x \Vert = \Vert x \Vert_2$ leads to the linear least squares problem (and has been used for ages).
The choice $\Vert x \Vert = \Vert x \Vert_1$ leads to the [compressed sensing]
(https://en.wikipedia.org/wiki/Compressed_sensing) and what happens, it typically yields the sparsest solution.
A short demo
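Below is a minimal, self-contained sketch (not from the original lecture) of $\ell_1$ recovery posed as a linear program and solved with scipy.optimize.linprog; the problem sizes and the 3-sparse signal are arbitrary choices for illustration.
```python
import numpy as np
from scipy.optimize import linprog

np.random.seed(0)
n, m = 20, 60                                   # fewer equations than unknowns
A = np.random.randn(n, m)
x_true = np.zeros(m)
x_true[np.random.choice(m, 3, replace=False)] = np.random.randn(3)  # sparse signal
f = A.dot(x_true)

# minimize sum(t) subject to -t <= x <= t and A x = f, with variables z = [x, t]
c = np.concatenate([np.zeros(m), np.ones(m)])
I = np.eye(m)
A_ub = np.vstack([np.hstack([ I, -I]),
                  np.hstack([-I, -I])])
b_ub = np.zeros(2 * m)
A_eq = np.hstack([A, np.zeros((n, m))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=f,
              bounds=[(None, None)] * (2 * m))
x_l1 = res.x[:m]
print('Recovery error: {0:3.1e}'.format(np.linalg.norm(x_l1 - x_true)))
```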
What is a stable algorithm?
We finish the lecture with the concept of stability.
Let $x$ be an object (for example, a vector). Let $f(x)$ be the function (functional) you want to evaluate.
You also have a numerical algorithm alg(x) that actually computes an approximation to $f(x)$.
The algorithm is called forward stable, if $$\Vert alg(x) - f(x) \Vert \leq \varepsilon $$
The algorithm is called backward stable, if for any $x$ there is a close vector $x + \delta x$ such that
$$alg(x) = f(x + \delta x)$$
and $\Vert \delta x \Vert$ is small.
Classical example
A classical example is the solution of linear systems of equations using LU-factorizations
We consider the Hilbert matrix with the elements
$$a_{ij} = 1/(i + j + 1), \quad i = 0, \ldots, n-1, \quad j = 0, \ldots n-1.$$
And consider a linear system
$$Ax = f.$$
(We will look into matrices in more detail in the next lecture, and into linear systems in the upcoming weeks, but for now you already see a linear system in action.)
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("./styles/custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: Take home message
Floating point (double, single, number of bytes), rounding error
Norms are measures of smallness, used to compute the accuracy
$1$, $p$ and Euclidean norms
$L_1$ is used in compressed sensing as a surrogate for sparsity (later lectures)
Forward/backward error (and stability of algorithms) (later lectures)
Next lecture
Forward/backward stability: more details
Matrices and operators
Matrix multiplication
Complexity of matrix multiplication
Questions?
End of explanation
"""
|
bspalding/research_public
|
lectures/drafts/Fundamental factor models.ipynb
|
apache-2.0
|
import numpy as np
import statsmodels.api as sm
from statsmodels import regression
import matplotlib.pyplot as plt
import pandas as pd
# Get market cap and book-to-price for all assets in universe
fundamentals = init_fundamentals()
data = get_fundamentals(query(fundamentals.valuation.market_cap,
fundamentals.valuation_ratios.book_value_yield), '2015-07-31').T
# Drop missing data
data.dropna(inplace=True)
# Following the Fama-French model, ignore assets with negative book-to-price
data = data.loc[data['book_value_yield'] > 0]
# As per Fama-French, get the top 30% and bottom 30% of stocks by market cap
market_cap_top = data.sort('market_cap')[7*len(data)/10:]
market_cap_bottom = data.sort('market_cap')[:3*len(data)/10]
# Factor 1 is returns on portfolio that is long the top stocks and short the bottom stocks
f1 = (np.mean(get_pricing(market_cap_top.index, fields='price',
start_date='2014-07-31', end_date='2015-07-31').pct_change()[1:].T.dropna()) -
np.mean(get_pricing(market_cap_bottom.index, fields='price',
start_date='2014-07-31', end_date='2015-07-31').pct_change()[1:].T.dropna()))
# Repeat above procedure for book-to-price
bp_top = data.sort('book_value_yield')[7*len(data)/10:]
bp_bottom = data.sort('book_value_yield')[:3*len(data)/10]
f2 = (np.mean(get_pricing(bp_top.index, fields='price',
start_date='2014-07-31', end_date='2015-07-31').pct_change()[1:].T.dropna()) -
np.mean(get_pricing(bp_bottom.index, fields='price',
start_date='2014-07-31', end_date='2015-07-31').pct_change()[1:].T.dropna()))
"""
Explanation: Fundamental factor models
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
Fundamentals are data having to do with the asset issuer, like the sector, size, and expenses of the company. We can use this data to build a linear factor model, expressing the returns as
$$R_i = a_i + b_{i1} F_1 + b_{i2} F_2 + \ldots + b_{iK} F_K + \epsilon_i$$
There are two different approaches to computing the factors $F_j$, which represent the returns associated with some fundamental characteristics, and the factor sensitivities $b_{ij}$.
In the first, we start by representing each characteristic of interest by a portfolio: we sort all assets by that characteristic, then build the portfolio by going long the top quantile of assets and short the bottom quantile. The factor corresponding to this characteristic is the return on this portfolio. Then, the $b_{ij}$ are estimated for each asset $i$ by regressing over the historical values of $R_i$ and of the factors.
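As a self-contained illustration of this construction (using synthetic data, not the Quantopian API used in the accompanying cell), a one-period size factor return could be computed like this:
```python
# Hypothetical sketch with synthetic data: long the top 30% by market cap,
# short the bottom 30%, and take the spread of mean returns as the factor return.
np.random.seed(1)
assets = ['A%d' % i for i in range(100)]
market_cap = pd.Series(np.random.lognormal(10, 1, 100), index=assets)
returns = pd.Series(np.random.normal(0, 0.02, 100), index=assets)  # one period of returns
ranked = market_cap.sort_values()
bottom, top = ranked.index[:30], ranked.index[-30:]
factor_return = returns[top].mean() - returns[bottom].mean()
print('Size factor return for this period: %f' % factor_return)
```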
We start by getting the fundamentals data for all assets and constructing the portfolios for each characteristic:
End of explanation
"""
# Get returns data for our asset
asset = get_pricing('AA', fields='price', start_date='2014-07-31', end_date='2015-07-31').pct_change()[1:]
# Perform linear regression to get the coefficients in the model
mlr = regression.linear_model.OLS(asset, sm.add_constant(np.column_stack((f1, f2)))).fit()
# Print the coefficients from the linear regression
print 'Historical sensitivities of AA returns to factors:\nMarket cap: %f\nB/P: %f' % (mlr.params[1],
mlr.params[2])
# Print the latest values for each of the factors
print '\nValues of factors on 2015-07-31:\nMarket cap: %f\nB/P: %f' % (f1[-1], f2[-1])
"""
Explanation: Now that we have returns series representing our factors, we can compute the factor model for any return stream using a linear regression. Below, we compute the factor sensitivities for returns on Alcoa stock:
End of explanation
"""
# Get one day's worth of cross-sectional returns
cs_returns = get_pricing(data.index, fields='price',
start_date='2015-07-30', end_date='2015-07-31').pct_change()[1:].T.dropna()
# Only look at fundamentals data of assets that we have pricing data for
data = data.loc[cs_returns.index]
# Compute coefficients according to formula above
coeffs = (data - data.mean())/data.std()
"""
Explanation: With the other method, we calculate the coefficients $b_{ij}$ from the formula
$$ b_{ij} = \frac{\text{Value of factor for asset }i - \text{Average value of factor}}{\sigma(\text{Factor values})} $$
By scaling the value of the factor in this way, we make the coefficients comparable across factors. The exceptions to this formula are indicator variables, which are set to 1 for true and 0 for false. One example is industry membership: the coefficient tells us whether the asset belongs to the industry or not. After we calculate all of the coefficients, we estimate $F_j$ and $a_i$ using a cross-sectional regression (i.e. at each time step, we perform a regression using the equations for all of the assets).
Following this procedure, we get the cross-sectional returns on 2015-07-31, and compute the coefficients for all assets:
End of explanation
"""
mlr = regression.linear_model.OLS(cs_returns,
sm.add_constant(coeffs)).fit()
# Print the coefficients we computed for AA
print 'Sensitivities of AA returns:\n', coeffs.iloc[0]
# Print factor values from linear regression
print '\nFactors on 2015-07-31:\n', mlr.params[1:]
"""
Explanation: Now that we have the factor sensitivities, we use a linear regression to compute the factors on 2015-07-31:
End of explanation
"""
|
jrg365/gpytorch
|
examples/06_PyTorch_NN_Integration_DKL/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb
|
mit
|
from torch.optim import SGD, Adam
from torch.optim.lr_scheduler import MultiStepLR
import torch.nn.functional as F
from torch import nn
import torch
import os
import torchvision.datasets as dset
import torchvision.transforms as transforms
import gpytorch
import math
import tqdm
"""
Explanation: SVDKL (Stochastic Variational Deep Kernel Learning) on CIFAR10/100
In this notebook, we'll demonstrate the steps necessary to train a medium sized DenseNet (https://arxiv.org/abs/1608.06993) on either of two popular benchmark datasets in computer vision (CIFAR10 and CIFAR100). We'll be training the DKL model entirely end to end using the standard 300 Epoch training schedule and SGD.
This notebook is largely for tutorial purposes. If your goal is just to get (for example) a trained DKL + CIFAR100 model, we recommend that you move this code to a simple python script and run that, rather than training directly out of a python notebook; we find that training is a bit faster from a script than from a notebook. We also of course recommend that you increase the size of the DenseNet used to a full sized model if you would like to achieve state of the art performance.
Furthermore, because this notebook involves training an actually reasonably large neural network, it is strongly recommended that you have a decent GPU available for this, as with all large deep learning models.
End of explanation
"""
normalize = transforms.Normalize(mean=[0.5071, 0.4867, 0.4408], std=[0.2675, 0.2565, 0.2761])
aug_trans = [transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip()]
common_trans = [transforms.ToTensor(), normalize]
train_compose = transforms.Compose(aug_trans + common_trans)
test_compose = transforms.Compose(common_trans)
"""
Explanation: Set up data augmentation
The first thing we'll do is set up some data augmentation transformations to use during training, as well as some basic normalization to use during both training and testing. We'll use random crops and flips to train the model, and do basic normalization at both training time and test time. To accomplish these transformations, we use standard torchvision transforms.
End of explanation
"""
dataset = "cifar10"
if ('CI' in os.environ): # this is for running the notebook in our testing framework
train_set = torch.utils.data.TensorDataset(torch.randn(8, 3, 32, 32), torch.rand(8).round().long())
test_set = torch.utils.data.TensorDataset(torch.randn(4, 3, 32, 32), torch.rand(4).round().long())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=4, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=2, shuffle=False)
num_classes = 2
elif dataset == 'cifar10':
train_set = dset.CIFAR10('data', train=True, transform=train_compose, download=True)
test_set = dset.CIFAR10('data', train=False, transform=test_compose)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=256, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=256, shuffle=False)
num_classes = 10
elif dataset == 'cifar100':
train_set = dset.CIFAR100('data', train=True, transform=train_compose, download=True)
test_set = dset.CIFAR100('data', train=False, transform=test_compose)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=256, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=256, shuffle=False)
num_classes = 100
else:
raise RuntimeError('dataset must be one of "cifar100" or "cifar10"')
"""
Explanation: Create DataLoaders
Next, we create dataloaders for the selected dataset using the built in torchvision datasets. The cell below will download either the cifar10 or cifar100 dataset, depending on which choice is made. The default here is cifar10, however training is just as fast on either dataset.
After downloading the datasets, we create standard torch.utils.data.DataLoaders for each dataset that we will be using to get minibatches of augmented data.
End of explanation
"""
from densenet import DenseNet
class DenseNetFeatureExtractor(DenseNet):
def forward(self, x):
features = self.features(x)
out = F.relu(features, inplace=True)
out = F.avg_pool2d(out, kernel_size=self.avgpool_size).view(features.size(0), -1)
return out
feature_extractor = DenseNetFeatureExtractor(block_config=(6, 6, 6), num_classes=num_classes)
num_features = feature_extractor.classifier.in_features
"""
Explanation: Creating the DenseNet Model
With the data loaded, we can move on to defining our DKL model. A DKL model consists of three components: the neural network, the Gaussian process layer used after the neural network, and the Softmax likelihood.
The first step is defining the neural network architecture. To do this, we use a slightly modified version of the DenseNet available in the standard PyTorch package. Specifically, we modify it to remove the softmax layer, since we'll only be needing the final features extracted from the neural network.
End of explanation
"""
class GaussianProcessLayer(gpytorch.models.ApproximateGP):
def __init__(self, num_dim, grid_bounds=(-10., 10.), grid_size=64):
variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
num_inducing_points=grid_size, batch_shape=torch.Size([num_dim])
)
# Our base variational strategy is a GridInterpolationVariationalStrategy,
# which places variational inducing points on a Grid
# We wrap it with a IndependentMultitaskVariationalStrategy so that our output is a vector-valued GP
variational_strategy = gpytorch.variational.IndependentMultitaskVariationalStrategy(
gpytorch.variational.GridInterpolationVariationalStrategy(
self, grid_size=grid_size, grid_bounds=[grid_bounds],
variational_distribution=variational_distribution,
), num_tasks=num_dim,
)
super().__init__(variational_strategy)
self.covar_module = gpytorch.kernels.ScaleKernel(
gpytorch.kernels.RBFKernel(
lengthscale_prior=gpytorch.priors.SmoothedBoxPrior(
math.exp(-1), math.exp(1), sigma=0.1, transform=torch.exp
)
)
)
self.mean_module = gpytorch.means.ConstantMean()
self.grid_bounds = grid_bounds
def forward(self, x):
mean = self.mean_module(x)
covar = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean, covar)
"""
Explanation: Creating the GP Layer
In the next cell, we create the layer of Gaussian process models that are called after the neural network. In this case, we'll be using one GP per feature, as in the SV-DKL paper. The outputs of these Gaussian processes will then be mixed in the softmax likelihood.
End of explanation
"""
class DKLModel(gpytorch.Module):
def __init__(self, feature_extractor, num_dim, grid_bounds=(-10., 10.)):
super(DKLModel, self).__init__()
self.feature_extractor = feature_extractor
self.gp_layer = GaussianProcessLayer(num_dim=num_dim, grid_bounds=grid_bounds)
self.grid_bounds = grid_bounds
self.num_dim = num_dim
def forward(self, x):
features = self.feature_extractor(x)
features = gpytorch.utils.grid.scale_to_bounds(features, self.grid_bounds[0], self.grid_bounds[1])
# This next line makes it so that we learn a GP for each feature
features = features.transpose(-1, -2).unsqueeze(-1)
res = self.gp_layer(features)
return res
model = DKLModel(feature_extractor, num_dim=num_features)
likelihood = gpytorch.likelihoods.SoftmaxLikelihood(num_features=model.num_dim, num_classes=num_classes)
# If you run this example without CUDA, I hope you like waiting!
if torch.cuda.is_available():
model = model.cuda()
likelihood = likelihood.cuda()
"""
Explanation: Creating the full SVDKL Model
With both the DenseNet feature extractor and GP layer defined, we can put them together in a single module that simply calls one and then the other, much like building any Sequential neural network in PyTorch. This completes defining our DKL model.
End of explanation
"""
n_epochs = 1
lr = 0.1
optimizer = SGD([
{'params': model.feature_extractor.parameters(), 'weight_decay': 1e-4},
{'params': model.gp_layer.hyperparameters(), 'lr': lr * 0.01},
{'params': model.gp_layer.variational_parameters()},
{'params': likelihood.parameters()},
], lr=lr, momentum=0.9, nesterov=True, weight_decay=0)
scheduler = MultiStepLR(optimizer, milestones=[0.5 * n_epochs, 0.75 * n_epochs], gamma=0.1)
mll = gpytorch.mlls.VariationalELBO(likelihood, model.gp_layer, num_data=len(train_loader.dataset))
def train(epoch):
model.train()
likelihood.train()
minibatch_iter = tqdm.notebook.tqdm(train_loader, desc=f"(Epoch {epoch}) Minibatch")
with gpytorch.settings.num_likelihood_samples(8):
for data, target in minibatch_iter:
if torch.cuda.is_available():
data, target = data.cuda(), target.cuda()
optimizer.zero_grad()
output = model(data)
loss = -mll(output, target)
loss.backward()
optimizer.step()
minibatch_iter.set_postfix(loss=loss.item())
def test():
model.eval()
likelihood.eval()
correct = 0
with torch.no_grad(), gpytorch.settings.num_likelihood_samples(16):
for data, target in test_loader:
if torch.cuda.is_available():
data, target = data.cuda(), target.cuda()
output = likelihood(model(data)) # This gives us 16 samples from the predictive distribution
pred = output.probs.mean(0).argmax(-1) # Taking the mean over all of the sample we've drawn
correct += pred.eq(target.view_as(pred)).cpu().sum()
print('Test set: Accuracy: {}/{} ({}%)'.format(
correct, len(test_loader.dataset), 100. * correct / float(len(test_loader.dataset))
))
"""
Explanation: Defining Training and Testing Code
Next, we define the basic optimization loop and testing code. This code is entirely analogous to the standard PyTorch training loop. We create a torch.optim.SGD optimizer with the parameters of the neural network (on which we apply the standard amount of weight decay suggested in the paper), the parameters of the Gaussian process (for which we omit weight decay, as L2 regularization on top of variational inference is not necessary), and the mixing parameters of the Softmax likelihood.
We use the standard learning rate schedule from the paper, where we decrease the learning rate by a factor of ten 50% of the way through training, and again at 75% of the way through training.
End of explanation
"""
for epoch in range(1, n_epochs + 1):
with gpytorch.settings.use_toeplitz(False):
train(epoch)
test()
scheduler.step()
state_dict = model.state_dict()
likelihood_state_dict = likelihood.state_dict()
torch.save({'model': state_dict, 'likelihood': likelihood_state_dict}, 'dkl_cifar_checkpoint.dat')
"""
Explanation: We are now ready to train the model. At the end of each epoch we report the current test accuracy, and we save a checkpoint of the model to a file.
End of explanation
"""
|
zerothi/ts-tbt-sisl-tutorial
|
A_06/run.ipynb
|
gpl-3.0
|
graphene = sisl.geom.graphene(1.44)
elec = graphene.tile(2, axis=0)
elec.write('ELEC_GRAPHENE.fdf')
elec.write('ELEC_GRAPHENE.xyz')
C1d = sisl.Geometry([[0,0,0]], graphene.atom[0], [10, 10, 1.4])
elec_chain = C1d.tile(4, axis=2)
elec_chain.write('ELEC_CHAIN.fdf')
elec_chain.write('ELEC_CHAIN.xyz')
chain = elec_chain.tile(3, axis=2)
device = elec.tile(5, axis=1).tile(4, axis=0)
# Attach the chain on-top of an atom
# First find an atom in the middle of the device
idx = device.close(device.center(what='xyz'), R=1.45)[1]
# Attach the chain at a distance of 2.25 along the third lattice vector
device = device.attach(idx, chain, 0, dist=2.25, axis=2)
# Add vacuum along the chain; we do not really care how much vacuum, but it
# is costly on memory, not so much on performance.
device = device.add_vacuum(15, axis=2)
device.write('DEVICE.fdf')
device.write('DEVICE.xyz')
"""
Explanation: In this example you will familiarize yourself with the concept of buffer atoms. A buffer atom is an atom that is completely neglected in the TranSiesta self-consistent calculation, but is used as an initialization for the bulk electrode regions.
Here a pristine graphene flake will be constructed and subsequently a Carbon chain will act as an STM-like tip to simulate STM experiments.
As the carbon chain terminates in vacuum, the dangling bonds will create spurious effects very different from those of a pristine, bulk chain. To understand why it is necessary to add buffer atoms it is useful to understand the TranSiesta method. Any TranSiesta calculation starts with calculating an initial guess for the Hamiltonian as input for the Green function method:
\begin{equation}
\mathbf G^{-1}(E) = \mathbf S (E+i\eta) - \mathbf H - \sum_i\boldsymbol\Sigma_i
\end{equation}
If the initial $\mathbf H'$ represents a Hamiltonian close to the open-boundary problem $\mathbf H$, it will converge with a higher probability, and in much less time. Improving the initial guess Hamiltonian is time well spent, as TranSiesta is typically more difficult to converge. The initial guess Hamiltonian comes from a Siesta calculation with full periodicity.
As an example consider the Hamiltonian for the chain:
<vacuum> C -- C -- C -- C -- C -- C ...
It is clear that the atom closest to the vacuum region resides in a very different chemical and potential landscape than an atom in the middle of the chain. If TranSiesta took its initial electrode Hamiltonian from the atoms closest to the vacuum region, it would be very far from the potential landscape of a bulk electrode. To mitigate this one can specify:
%block TBT.Atoms.Buffer
atom [ 1 -- 2 ]
%endblock
to remove the first 2 atoms from the TranSiesta calculation (note that negative indices count from the end). Then the electrode will begin from the 3rd atom, which is farther from the dangling bond. This will be a much better initial guess for the Hamiltonian. Other strategies to improve the potential landscape are to terminate the dangling bonds with Hydrogen or other atomic species.
End of explanation
"""
# Adapt to read in the siesta.TBT.nc from different directories and plot them.
tbt = sisl.get_sile('siesta.TBT.nc')
"""
Explanation: Exercises
Add missing electrode information in RUN.fdf.
Perform all required TranSiesta calculations, first the electrodes, then the device region.
Create a new directory for a different range of buffer atoms, from 0 to 4, start by using 4 buffer atoms.
How does convergence behave for different numbers of buffer atoms?
REMARK there are 2 places in the fdf file you should change when changing the number of atoms (the electrode atom specification and the buffer atoms).
- TIME: one can combine electrode options bulk and DM-init to improve the initial $\mathbf H$ for TranSiesta. Take a system with 1 buffer atom and play with the effect of these options.
- Calculate transport properties for all (converged) TranSiesta calculations
- Plot the transmission and DOS for all TBtrans calculations; do they differ? (A plotting sketch is given below.)
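A minimal plotting sketch, assuming the calculations live in hypothetical directories named buffer_0 ... buffer_4 and using sisl's TBT.nc sile (its .E energies and .transmission(); adapt the paths and electrode indices to your setup):
```python
import sisl
import matplotlib.pyplot as plt
for nb in range(5):
    tbt = sisl.get_sile('buffer_{}/siesta.TBT.nc'.format(nb))
    plt.plot(tbt.E, tbt.transmission(), label='{} buffer atoms'.format(nb))
plt.xlabel('E - E_F [eV]')
plt.ylabel('Transmission')
plt.legend()
```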
End of explanation
"""
|
Riptawr/deep-learning
|
image-classification/dlnd_image_classification.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x, range_min=0, range_max=255):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# Linearly scale pixel values from [range_min, range_max] = [0, 255] to [a, b] = [0, 1]
a = 0
b = 1.0
range_min = 0
range_max = 255
return a + ( ( (x - range_min)*(b - a) )/( range_max - range_min ) )
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
def one_hot_encode(x, n_labels=10):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# ohe via identity matrix for labels times examples
# should not change between uses unless labels change and there is
# no need for outer scope mutation of variables
return np.eye(n_labels)[x]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
return tf.placeholder(tf.float32, shape=[None, *image_shape], name="x")
def neural_net_label_input(n_classes, channels=3):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
return tf.placeholder(tf.float32, shape=[None, n_classes,], name="y")
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
return tf.placeholder(tf.float32, name="keep_prob")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
W = tf.Variable(tf.random_normal(
shape=[conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[3], conv_num_outputs],
mean=0.0,
stddev=0.01,
dtype=tf.float32))
b = tf.Variable(tf.zeros([conv_num_outputs]))
#print(conv_strides)
conv = tf.nn.conv2d(x_tensor, W, strides=[1, *conv_strides, 1], padding="SAME")
conv = tf.nn.bias_add(conv, b)
conv = tf.nn.relu(conv)
conv = tf.nn.max_pool(conv,
[1, *pool_ksize, 1],
[1, *pool_strides, 1],
padding="SAME")
return conv
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# Highlevel is nice
return tf.contrib.layers.flatten(x_tensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
return tf.contrib.layers.fully_connected(x_tensor,
num_outputs,
weights_initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1),
#biased in favor of activating, with biases > 0, since we use relu
biases_initializer=tf.random_normal_initializer(mean=0.1, stddev=0.01),
activation_fn=tf.nn.relu)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
return tf.contrib.layers.fully_connected(x_tensor,
num_outputs,
weights_initializer=tf.random_normal_initializer(mean=0.0, stddev=0.01),
biases_initializer=tf.zeros_initializer(), activation_fn=None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
#x_ = tf.cast(x, tf.float32)
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv1 = conv2d_maxpool(x, 32, (2,2), (2,2), (3,3), (2,2))
conv2 = conv2d_maxpool(conv1, 64, (2,2), (2,2), (1,1), (1,1))
conv3 = conv2d_maxpool(conv2, 128, (2,2), (2,2), (1,1), (1,1))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
f1 = flatten(conv3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
net = fully_conn(f1,400)
drop1 = tf.nn.dropout(net, keep_prob)
net2 = fully_conn(drop1,200)
drop2 = tf.nn.dropout(net2, keep_prob)
net3 = fully_conn(drop2,100)
drop3 = tf.nn.dropout(net3, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
return output(drop3,10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# We only need the side effect of running the optimizer; its return value is discarded
session.run(optimizer, feed_dict={x:feature_batch, y:label_batch, keep_prob:keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y:label_batch, keep_prob:1.0})
valid_acc = session.run(accuracy, feed_dict={x:valid_features, y:valid_labels, keep_prob:1.0})
print("Current loss: {0}, validation accuracy: {1}".format(loss, valid_acc))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 100
batch_size = 1024 # 1080 TI
keep_probability = 0.5
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
alvaroing12/CADL
|
session-5/session-5-part-1.ipynb
|
apache-2.0
|
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n',
'You should consider updating to Python 3.4.0 or',
'higher as the libraries built for this course',
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n',
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
from scipy.ndimage.filters import gaussian_filter
import IPython.display as ipyd
import tensorflow as tf
from libs import utils, gif, datasets, dataset_utils, nb_utils
except ImportError as e:
print("Make sure you have started notebook in the same directory",
"as the provided zip file which includes the 'libs' folder",
"and the file 'utils.py' inside of it. You will NOT be able",
"to complete this assignment unless you restart jupyter",
"notebook inside the directory created by extracting",
"the zip file or cloning the github repo.")
print(e)
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
padding: 2px 4px;
color: #c7254e;
background-color: #f9f2f4;
border-radius: 4px;
} </style>""")
"""
Explanation: Session 5: Generative Networks
Assignment: Generative Adversarial Networks and Recurrent Neural Networks
<p class="lead">
<a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning with Google's Tensorflow</a><br />
<a href="http://pkmital.com">Parag K. Mital</a><br />
<a href="https://www.kadenze.com">Kadenze, Inc.</a>
</p>
Table of Contents
<!-- MarkdownTOC autolink="true" autoanchor="true" bracket="round" -->
Overview
Learning Goals
Part 1 - Generative Adversarial Networks (GAN) / Deep Convolutional GAN (DCGAN)
Introduction
Building the Encoder
Building the Discriminator for the Training Samples
Building the Decoder
Building the Generator
Building the Discriminator for the Generated Samples
GAN Loss Functions
Building the Optimizers w/ Regularization
Loading a Dataset
Training
Equilibrium
Part 2 - Variational Auto-Encoding Generative Adversarial Network (VAEGAN)
Batch Normalization
Building the Encoder
Building the Variational Layer
Building the Decoder
Building VAE/GAN Loss Functions
Creating the Optimizers
Loading the Dataset
Training
Part 3 - Latent-Space Arithmetic
Loading the Pre-Trained Model
Exploring the Celeb Net Attributes
Find the Latent Encoding for an Attribute
Latent Feature Arithmetic
Extensions
Part 4 - Character-Level Language Model
Part 5 - Pretrained Char-RNN of Donald Trump
Getting the Trump Data
Basic Text Analysis
Loading the Pre-trained Trump Model
Inference: Keeping Track of the State
Probabilistic Sampling
Inference: Temperature
Inference: Priming
Assignment Submission
<!-- /MarkdownTOC -->
<a name="overview"></a>
Overview
This is certainly the hardest session and will require a lot of time and patience to complete. Also, many elements of this session may require further investigation, including reading the original papers and additional resources, in order to fully grasp them. The models we cover are state of the art and I've aimed to give you something between a practical and a mathematical understanding of the material, though it is a tricky balance. I hope that those of you who are interested will delve deeper into the papers for more understanding, and that for those of you seeking just a practical understanding, these notebooks will suffice.
This session covered two of the most advanced generative networks: generative adversarial networks and recurrent neural networks. During the homework, we'll see how these work in more detail and try building our own. I am not asking you to train anything in this session as both GANs and RNNs take many days to train. However, I have provided pre-trained networks which we'll be exploring. We'll also see how a Variational Autoencoder can be combined with a Generative Adversarial Network to allow you to also encode input data, and I've provided a pre-trained model of this type trained on the Celeb Faces dataset. We'll see what this means in more detail below.
After this session, you are also required to submit your final project, which can combine any of the materials you have learned so far to produce a short 1 minute clip demonstrating any aspect of the course you want to investigate further or combine with anything else you feel like doing. This is completely open-ended and meant to encourage you and your peers to share something that demonstrates creative thinking. Be sure to keep the final project in mind while browsing through this notebook!
<a name="learning-goals"></a>
Learning Goals
Learn to build the components of a Generative Adversarial Network and how it is trained
Learn to combine the Variational Autoencoder with a Generative Adversarial Network
Learn to use latent space arithmetic with a pre-trained VAE/GAN network
Learn to build the components of a Character Recurrent Neural Network and how it is trained
Learn to sample from a pre-trained CharRNN model
End of explanation
"""
# We'll keep a variable for the size of our image.
n_pixels = 32
n_channels = 3
input_shape = [None, n_pixels, n_pixels, n_channels]
# And then create the input image placeholder
X = tf.placeholder(name='X'...
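# A hedged sketch of one way to complete the placeholder above (an assumption
# for illustration, not the course's official answer): a float32 placeholder
# shaped like input_shape.
X = tf.placeholder(name='X', shape=input_shape, dtype=tf.float32)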
"""
Explanation: <a name="part-1---generative-adversarial-networks-gan--deep-convolutional-gan-dcgan"></a>
Part 1 - Generative Adversarial Networks (GAN) / Deep Convolutional GAN (DCGAN)
<a name="introduction"></a>
Introduction
Recall from the lecture that a Generative Adversarial Network is two networks, a generator and a discriminator. The "generator" takes a feature vector and decodes this feature vector to become an image, exactly like the decoder we built in Session 3's Autoencoder. The discriminator is exactly like the encoder of the Autoencoder, except it can only have 1 value in the final layer. We use a sigmoid to squash this value between 0 and 1, and then interpret the meaning of it as: 1, the image you gave me was real, or 0, the image you gave me was generated by the generator, it's a FAKE! So the discriminator is like an encoder which takes an image and then performs lie detection. Are you feeding me lies? Or is the image real?
Consider the AE and VAE we trained in Session 3. The loss function operated partly on the input space. It said, per pixel, what is the difference between my reconstruction and the input image? The l2-loss per pixel. Recall at that time we suggested that this wasn't the best idea because per-pixel differences aren't representative of our own perception of the image. One way to consider this is if we had the same image, and translated it by a few pixels. We would not be able to tell the difference, but the per-pixel difference between the two images could be enormously high.
The GAN does not use per-pixel difference. Instead, it trains a distance function: the discriminator. The discriminator takes in two images, the real image and the generated one, and learns what a similar image should look like! That is really the amazing part of this network and has opened up some very exciting potential future directions for unsupervised learning. Another network that also learns a distance function is known as the siamese network. We didn't get into this network in this course, but it is commonly used in facial verification, or asserting whether two faces are the same or not.
The GAN network is notoriously a huge pain to train! For that reason, we won't actually be training it. Instead, we'll discuss an extension to this basic network called the VAEGAN which uses the VAE we created in Session 3 along with the GAN. We'll then train that network in Part 2. For now, let's stick with creating the GAN.
Let's first create the two networks: the discriminator and the generator. We'll first begin by building a general purpose encoder which we'll use for our discriminator. Recall that we've already done this in Session 3. What we want is for the input placeholder to be encoded using a list of dimensions for each of our encoder's layers. In the case of a convolutional network, our list of dimensions should correspond to the number of output filters. We also need to specify the kernel heights and widths for each layer's convolutional network.
We'll first need a placeholder. This will be the "real" image input to the discriminator and the discrimintator will encode this image into a single value, 0 or 1, saying, yes this is real, or no, this is not real.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
def encoder(x, channels, filter_sizes, activation=tf.nn.tanh, reuse=None):
# Set the input to a common variable name, h, for hidden layer
h = x
# Now we'll loop over the list of dimensions defining the number
# of output filters in each layer, and collect each hidden layer
hs = []
for layer_i in range(len(channels)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
# Convolve using the utility convolution function
# This requires the number of output filters,
# and the size of the kernel in `k_h` and `k_w`.
# By default, this will use a stride of 2, meaning
# each new layer will be downsampled by 2.
h, W = utils.conv2d(...
# Now apply the activation function
h = activation(h)
# Store each hidden layer
hs.append(h)
# Finally, return the encoding.
return h, hs
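# A hedged guess at the elided utils.conv2d call inside the loop above,
# mirroring the completed VAEGAN encoder later in this notebook (which passes
# the number of output channels and the kernel height/width):
# h, W = utils.conv2d(h, channels[layer_i],
#                     k_h=filter_sizes[layer_i],
#                     k_w=filter_sizes[layer_i],
#                     reuse=reuse)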
"""
Explanation: <a name="building-the-encoder"></a>
Building the Encoder
Let's build our encoder just like in Session 3. We'll create a function which accepts the input placeholder, a list of dimensions describing the number of convolutional filters in each layer, and a list of filter sizes to use for the kernel sizes in each convolutional layer. We'll also pass in a parameter for which activation function to apply.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
def discriminator(X,
channels=[50, 50, 50, 50],
filter_sizes=[4, 4, 4, 4],
activation=utils.lrelu,
reuse=None):
# We'll scope these variables to "discriminator_real"
with tf.variable_scope('discriminator', reuse=reuse):
# Encode X:
H, Hs = encoder(X, channels, filter_sizes, activation, reuse)
# Now make one last layer with just 1 output. We'll
# have to reshape to 2-d so that we can create a fully
# connected layer:
shape = H.get_shape().as_list()
H = tf.reshape(H, [-1, shape[1] * shape[2] * shape[3]])
# Now we can connect our 2D layer to a single neuron output w/
# a sigmoid activation:
D, W = utils.linear(...
return D
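# A hedged guess at the elided utils.linear call above, based on the completed
# VAEGAN discriminator later in this notebook (a single sigmoid output unit):
# D, W = utils.linear(x=H, n_output=1, activation=tf.nn.sigmoid,
#                     name='fc', reuse=reuse)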
"""
Explanation: <a name="building-the-discriminator-for-the-training-samples"></a>
Building the Discriminator for the Training Samples
Finally, let's take the output of our encoder, and make sure it has just 1 value by using a fully connected layer. We can use the libs/utils module's, linear layer to do this, which will also reshape our 4-dimensional tensor to a 2-dimensional one prior to using the fully connected layer.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
D_real = discriminator(X)
"""
Explanation: Now let's create the discriminator for the real training data coming from X:
End of explanation
"""
graph = tf.get_default_graph()
nb_utils.show_graph(graph.as_graph_def())
"""
Explanation: And we can see what the network looks like now:
End of explanation
"""
# We'll need some variables first. This will be how many
# channels our generator's feature vector has. Experiment w/
# this if you are training your own network.
n_code = 16
# And in total how many feature it has, including the spatial dimensions.
n_latent = (n_pixels // 16) * (n_pixels // 16) * n_code
# Let's build the 2-D placeholder, which is the 1-d feature vector for every
# element in our batch. We'll then reshape this to 4-D for the decoder.
Z = tf.placeholder(name='Z', shape=[None, n_latent], dtype=tf.float32)
# Now we can reshape it to input to the decoder. Here we have to
# be mindful of the height and width as described before. We need
# to make the height and width a factor of the final height and width
# that we want. Since we are using strided convolutions of 2, then
# we can say with 4 layers, that first decoder's layer should be:
# n_pixels / 2 / 2 / 2 / 2, or n_pixels / 16:
Z_tensor = tf.reshape(Z, [-1, n_pixels // 16, n_pixels // 16, n_code])
"""
Explanation: <a name="building-the-decoder"></a>
Building the Decoder
Now we're ready to build the Generator, or decoding network. This network takes as input a vector of features and will try to produce an image that looks like our training data. We'll send this synthesized image to our discriminator which we've just built above.
Let's start by building the input to this network. We'll need a placeholder for the input features to this network. We have to be mindful of how many features we have. The feature vector for the Generator will eventually need to form an image. What we can do is create a 1-dimensional vector of values for each element in our batch, giving us [None, n_features]. We can then reshape this to a 4-dimensional Tensor so that we can build a decoder network just like in Session 3.
But how do we assign the values from our 1-d feature vector (or 2-d tensor with Batch number of them) to the 3-d shape of an image (or 4-d tensor with Batch number of them)? We have to go from the number of features in our 1-d feature vector, let's say n_latent to height x width x channels through a series of convolutional transpose layers. One way to approach this is think of the reverse process. Starting from the final decoding of height x width x channels, I will use convolution with a stride of 2, so downsample by 2 with each new layer. So the second to last decoder layer would be, height // 2 x width // 2 x ?. If I look at it like this, I can use the variable n_pixels denoting the height and width to build my decoder, and set the channels to whatever I want.
Let's start with just our 2-d placeholder which will have None x n_features, then convert it to a 4-d tensor ready for the decoder part of the network (a.k.a. the generator).
End of explanation
"""
def decoder(z, dimensions, channels, filter_sizes,
activation=tf.nn.relu, reuse=None):
h = z
hs = []
for layer_i in range(len(dimensions)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
h, W = utils.deconv2d(x=h,
n_output_h=dimensions[layer_i],
n_output_w=dimensions[layer_i],
n_output_ch=channels[layer_i],
k_h=filter_sizes[layer_i],
k_w=filter_sizes[layer_i],
reuse=reuse)
h = activation(h)
hs.append(h)
return h, hs
"""
Explanation: Now we'll build the decoder in much the same way as we built our encoder, and exactly as we've done in Session 3! This requires one additional parameter, "channels", which is how many output filters we want for each net layer. We'll interpret the dimensions as the height and width of the tensor in each new layer, the channels as the number of output filters in each layer, and the filter_sizes as the size of the kernels used for convolution. We'll default to using a stride of two which will downsample each layer. We're also going to collect each hidden layer h in a list. We'll end up needing this for Part 2 when we combine the variational autoencoder w/ the generative adversarial network.
End of explanation
"""
# Explore these parameters.
def generator(Z,
dimensions=[n_pixels//8, n_pixels//4, n_pixels//2, n_pixels],
channels=[50, 50, 50, n_channels],
filter_sizes=[4, 4, 4, 4],
activation=utils.lrelu):
with tf.variable_scope('generator'):
G, Hs = decoder(Z_tensor, dimensions, channels, filter_sizes, activation)
return G
"""
Explanation: <a name="building-the-generator"></a>
Building the Generator
Now we're ready to use our decoder to take in a vector of features and generate something that looks like our training images. We have to ensure that the last layer produces the same output shape as the discriminator's input. E.g. with n_pixels = 32 we used a [None, 32, 32, 3] input to the discriminator, so our generator needs to also output [None, 32, 32, 3] tensors. In other words, we have to ensure the last element in our dimensions list is n_pixels (32 here), and the last element in our channels list is 3.
End of explanation
"""
G = generator(Z)
graph = tf.get_default_graph()
nb_utils.show_graph(graph.as_graph_def())
"""
Explanation: Now let's call the generator function with our input placeholder Z. This will take our feature vector and generate something in the shape of an image.
End of explanation
"""
D_fake = discriminator(G, reuse=True)
"""
Explanation: <a name="building-the-discriminator-for-the-generated-samples"></a>
Building the Discriminator for the Generated Samples
Lastly, we need another discriminator which takes as input our generated images. Recall the discriminator that we have made only takes as input our placeholder X which is for our actual training samples. We'll use the same function for creating our discriminator and reuse the variables we already have. This is the crucial part! We aren't making new trainable variables, but reusing the ones we have. We just create a new set of operations that takes as input our generated image. So we'll have a whole new set of operations exactly like the ones we have created for our first discriminator. But we are going to use the exact same variables as our first discriminator, so that we optimize the same values.
End of explanation
"""
nb_utils.show_graph(graph.as_graph_def())
"""
Explanation: Now we can look at the graph and see the new discriminator inside the node for the discriminator. You should see the original discriminator and a new graph of a discriminator within it, but all the weights are shared with the original discriminator.
End of explanation
"""
with tf.variable_scope('loss/generator'):
loss_G = tf.reduce_mean(utils.binary_cross_entropy(D_fake, tf.ones_like(D_fake)))
"""
Explanation: <a name="gan-loss-functions"></a>
GAN Loss Functions
We now have all the components to our network. We just have to train it. This is the notoriously tricky bit. We will have 3 different loss measures instead of our typical network with just a single loss. We'll later connect each of these loss measures to two optimizers, one for the generator and another for the discriminator, and then pin them against each other and see which one wins! Exciting times!
Recall from Session 3's Supervised Network, we created a binary classification task: music or speech. We again have a binary classification task: real or fake. So our loss metric will again use the binary cross entropy to measure the loss of our three different modules: the generator, the discriminator for our real images, and the discriminator for our generated images.
To find out the loss function for our generator network, answer the question, what makes the generator successful? Successfully fooling the discriminator. When does that happen? When the discriminator for the fake samples produces all ones. So our binary cross entropy measure will measure the cross entropy with our predicted distribution and the true distribution which has all ones.
End of explanation
"""
with tf.variable_scope('loss/discriminator/real'):
loss_D_real = utils.binary_cross_entropy(D_real, ...
with tf.variable_scope('loss/discriminator/fake'):
loss_D_fake = utils.binary_cross_entropy(D_fake, ...
with tf.variable_scope('loss/discriminator'):
loss_D = tf.reduce_mean((loss_D_real + loss_D_fake) / 2)
nb_utils.show_graph(graph.as_graph_def())
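# A hedged sketch of the two elided targets above (an assumption based on the
# explanation that follows: the discriminator should predict all ones for real
# samples and all zeros for generated samples):
loss_D_real = utils.binary_cross_entropy(D_real, tf.ones_like(D_real))
loss_D_fake = utils.binary_cross_entropy(D_fake, tf.zeros_like(D_fake))
loss_D = tf.reduce_mean((loss_D_real + loss_D_fake) / 2)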
"""
Explanation: What we've just written is a loss function for our generator. The generator is optimized when the discriminator for the generated samples produces all ones. In contrast to the generator, the discriminator will have 2 measures to optimize. One which is the opposite of what we have just written above, as well as 1 more measure for the real samples. Try writing these two losses and we'll combine them using their average. We want to optimize the Discriminator for the real samples producing all 1s, and the Discriminator for the fake samples producing all 0s:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
# Grab just the variables corresponding to the discriminator
# and just the generator:
vars_d = [v for v in tf.trainable_variables()
if ...]
print('Training discriminator variables:')
[print(v.name) for v in tf.trainable_variables()
if v.name.startswith('discriminator')]
vars_g = [v for v in tf.trainable_variables()
if ...]
print('Training generator variables:')
[print(v.name) for v in tf.trainable_variables()
if v.name.startswith('generator')]
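# A hedged sketch of the elided filters above, matching the name prefixes used
# in the print statements (an assumption, not the official answer):
vars_d = [v for v in tf.trainable_variables()
          if v.name.startswith('discriminator')]
vars_g = [v for v in tf.trainable_variables()
          if v.name.startswith('generator')]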
"""
Explanation: With our loss functions, we can create an optimizer for the discriminator and generator:
<a name="building-the-optimizers-w-regularization"></a>
Building the Optimizers w/ Regularization
We're almost ready to create our optimizers. We just need to do one extra thing. Recall that our loss for our generator has a flow from the generator through the discriminator. If we are training both the generator and the discriminator, we have two measures which both try to optimize the discriminator, but in opposite ways: the generator's loss would try to optimize the discriminator to be bad at its job, and the discriminator's loss would try to optimize it to be good at its job. This would be counter-productive, trying to optimize opposing losses. What we want is for the generator to get better, and the discriminator to get better. Not for the discriminator to get better, then get worse, then get better, etc... The way we do this is when we optimize our generator, we let the gradient flow through the discriminator, but we do not update the variables in the discriminator. Let's try and grab just the discriminator variables and just the generator variables below:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
d_reg = tf.contrib.layers.apply_regularization(
tf.contrib.layers.l2_regularizer(1e-6), vars_d)
g_reg = tf.contrib.layers.apply_regularization(
tf.contrib.layers.l2_regularizer(1e-6), vars_g)
"""
Explanation: We can also apply regularization to our network. This will penalize weights in the network for growing too large.
End of explanation
"""
learning_rate = 0.0001
lr_g = tf.placeholder(tf.float32, shape=[], name='learning_rate_g')
lr_d = tf.placeholder(tf.float32, shape=[], name='learning_rate_d')
"""
Explanation: The last thing you may want to try is creating a separate learning rate for each of your generator and discriminator optimizers like so:
End of explanation
"""
opt_g = tf.train.AdamOptimizer(learning_rate=lr_g).minimize(...)
opt_d = tf.train.AdamOptimizer(learning_rate=lr_d).minimize(loss_D + d_reg, var_list=vars_d)
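# A hedged sketch of the elided generator optimizer, symmetric with opt_d above
# (an assumption): minimize the generator loss plus its regularizer, updating
# only the generator's variables.
opt_g = tf.train.AdamOptimizer(learning_rate=lr_g).minimize(
    loss_G + g_reg, var_list=vars_g)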
"""
Explanation: Now you can feed the placeholders to your optimizers. If you run into errors creating these, then you likely have a problem with your graph's definition! Be sure to go back and reset the default graph and check the sizes of your different operations/placeholders.
With your optimizers, you can now train the network by "running" the optimizer variables with your session. You'll need to set the var_list parameter of the minimize function to only train the variables for the discriminator and same for the generator's optimizer:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
# You'll want to change this to your own data if you end up training your own GAN.
batch_size = 64
n_epochs = 1
crop_shape = [n_pixels, n_pixels, 3]
crop_factor = 0.8
input_shape = [218, 178, 3]
files = datasets.CELEB()
batch = dataset_utils.create_input_pipeline(
files=files,
batch_size=batch_size,
n_epochs=n_epochs,
crop_shape=crop_shape,
crop_factor=crop_factor,
shape=input_shape)
"""
Explanation: <a name="loading-a-dataset"></a>
Loading a Dataset
Let's use the Celeb Dataset just for demonstration purposes. In Part 2, you can explore using your own dataset. This code is exactly the same as we did in Session 3's homework with the VAE.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
ckpt_name = './gan.ckpt'
sess = tf.Session()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
coord = tf.train.Coordinator()
tf.get_default_graph().finalize()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
if os.path.exists(ckpt_name + '.index') or os.path.exists(ckpt_name):
saver.restore(sess, ckpt_name)
print("VAE model restored.")
n_examples = 10
zs = np.random.uniform(0.0, 1.0, [4, n_latent]).astype(np.float32)
zs = utils.make_latent_manifold(zs, n_examples)
"""
Explanation: <a name="training"></a>
Training
We'll now go through the setup of training the network. We won't actually spend the time to train the network but just see how it would be done. This is because in Part 2, we'll see an extension to this network which makes it much easier to train.
End of explanation
"""
equilibrium = 0.693
margin = 0.2
"""
Explanation: <a name="equilibrium"></a>
Equilibrium
Equilibrium is at 0.693. Why? Consider what the cost is measuring, the binary cross entropy. If we have random guesses, then we have as many 0s as we have 1s. And on average, we'll be 50% correct. The binary cross entropy is:
\begin{align}
-\sum_i \left[ \text{X}_i * \text{log}(\tilde{\text{X}}_i) + (1 - \text{X}_i) * \text{log}(1 - \tilde{\text{X}}_i) \right]
\end{align}
Which is written out in tensorflow as:
python
(-(x * tf.log(z) + (1. - x) * tf.log(1. - z)))
Where x is the discriminator's prediction of the true distribution, in the case of GANs, the input images, and z is the discriminator's prediction of the generated images corresponding to the mathematical notation of $\tilde{\text{X}}$. We sum over all features, but in the case of the discriminator, we have just 1 feature, the guess of whether it is a true image or not. If our discriminator guesses at chance, i.e. 0.5, then we'd have something like:
\begin{align}
-\left( 0.5 * \text{log}(0.5) + (1 - 0.5) * \text{log}(1 - 0.5) \right) = 0.693
\end{align}
So this is what we'd expect at the start of learning and from a game theoretic point of view, where we want things to remain. So unlike our previous networks, where our loss continues to drop closer and closer to 0, we want our loss to waver around this value as much as possible, and hope for the best.
End of explanation
"""
t_i = 0
batch_i = 0
epoch_i = 0
n_files = len(files)
if not os.path.exists('imgs'):
os.makedirs('imgs')
while epoch_i < n_epochs:
batch_i += 1
batch_xs = sess.run(batch) / 255.0
batch_zs = np.random.uniform(
0.0, 1.0, [batch_size, n_latent]).astype(np.float32)
real_cost, fake_cost = sess.run([
loss_D_real, loss_D_fake],
feed_dict={
X: batch_xs,
Z: batch_zs})
real_cost = np.mean(real_cost)
fake_cost = np.mean(fake_cost)
if (batch_i % 20) == 0:
print(batch_i, 'real:', real_cost, '/ fake:', fake_cost)
gen_update = True
dis_update = True
if real_cost > (equilibrium + margin) or \
fake_cost > (equilibrium + margin):
gen_update = False
if real_cost < (equilibrium - margin) or \
fake_cost < (equilibrium - margin):
dis_update = False
if not (gen_update or dis_update):
gen_update = True
dis_update = True
if gen_update:
sess.run(opt_g,
feed_dict={
Z: batch_zs,
lr_g: learning_rate})
if dis_update:
sess.run(opt_d,
feed_dict={
X: batch_xs,
Z: batch_zs,
lr_d: learning_rate})
if batch_i % (n_files // batch_size) == 0:
batch_i = 0
epoch_i += 1
print('---------- EPOCH:', epoch_i)
# Plot example reconstructions from latent layer
recon = sess.run(G, feed_dict={Z: zs})
recon = np.clip(recon, 0, 1)
m1 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/manifold_%08d.png' % t_i)
recon = sess.run(G, feed_dict={Z: batch_zs})
recon = np.clip(recon, 0, 1)
m2 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/reconstructions_%08d.png' % t_i)
fig, axs = plt.subplots(1, 2, figsize=(15, 10))
axs[0].imshow(m1)
axs[1].imshow(m2)
plt.show()
t_i += 1
# Save the variables to disk.
save_path = saver.save(sess, "./" + ckpt_name,
global_step=batch_i,
write_meta_graph=False)
print("Model saved in file: %s" % save_path)
# Tell all the threads to shutdown.
coord.request_stop()
# Wait until all threads have finished.
coord.join(threads)
# Clean up the session.
sess.close()
"""
Explanation: When we go to train the network, we switch back and forth between each optimizer, feeding in the appropriate values for each optimizer. The opt_g optimizer only requires the Z and lr_g placeholders, while the opt_d optimizer requires the X, Z, and lr_d placeholders.
Don't train this network for very long because GANs are a huge pain to train and require a lot of fiddling. They very easily get stuck in their adversarial process, or one network gets overtaken by the other, resulting in a useless model. What you need to develop is a steady equilibrium that optimizes both. That alone would likely take two weeks of just trying to get the GAN to train, leaving no time for the rest of the assignment. GANs require a lot of memory/CPU and can take many days to train once you have settled on an architecture, training process, and dataset. Just let it run for a short time and then interrupt the kernel (don't restart!), then continue to the next cell.
From there, we'll go over an extension to the GAN which uses a VAE like we used in Session 3. By using this extra network, we can actually train a better model in a fraction of the time and with much more ease! But the network's definition is a bit more complicated. Let's see how the GAN is trained first and then we'll train the VAE/GAN network instead. While training, the "real" and "fake" cost will be printed out. See how this cost wavers around the equilibrium and how we enforce it to try and stay around there by including a margin and some simple logic for updates. This is highly experimental and the research does not have a good answer for the best practice on how to train a GAN. I.e., some people will set the learning rate to some ratio of the performance between fake/real networks, others will have a fixed update schedule but train the generator twice and the discriminator only once.
End of explanation
"""
tf.reset_default_graph()
"""
Explanation: <a name="part-2---variational-auto-encoding-generative-adversarial-network-vaegan"></a>
Part 2 - Variational Auto-Encoding Generative Adversarial Network (VAEGAN)
In our definition of the generator, we started with a feature vector, Z. This feature vector was not connected to anything before it. Instead, we had to randomly create its values using a random number generator of its n_latent values from -1 to 1, and this range was chosen arbitrarily. It could have been 0 to 1, or -3 to 3, or 0 to 100. In any case, the network would have had to learn to transform those values into something that looked like an image. There was no way for us to take an image, and find the feature vector that created it. In other words, it was not possible for us to encode an image.
The closest thing to an encoding we had was taking an image and feeding it to the discriminator, which would output a 0 or 1. But what if we had another network that allowed us to encode an image, and then we used this network for both the discriminator and generative parts of the network? That's the basic idea behind the VAEGAN: https://arxiv.org/abs/1512.09300. It is just like the regular GAN, except we also use an encoder to create our feature vector Z.
We then get the best of both worlds: a GAN that looks more or less the same, but uses the encoding from an encoder instead of an arbitrary feature vector; and an autoencoder that can model an input distribution using a trained distance function, the discriminator, leading to nicer encodings/decodings.
Let's try to build it! Refer to the paper for the intricacies and a great read. Luckily, by building the encoder and decoder functions, we're almost there. We just need a few more components and will change these slightly.
Let's reset our graph and recompose our network as a VAEGAN:
End of explanation
"""
# placeholder for batch normalization
is_training = tf.placeholder(tf.bool, name='istraining')
"""
Explanation: <a name="batch-normalization"></a>
Batch Normalization
You may have noticed from the VAE code that I've used something called "batch normalization". This is a pretty effective technique for regularizing the training of networks by "reducing internal covariate shift". The basic idea is that given a minibatch, we optimize the gradient for this small sample of the greater population. But this small sample may have different characteristics than the entire population's gradient. Consider the most extreme case, a minibatch of 1. In this case, we overfit our gradient to optimize the gradient of the single observation. If our minibatch is too large, say the size of the entire population, we aren't able to maneuver the loss manifold at all and the entire loss is averaged in a way that doesn't let us optimize anything. What we want to do is find a happy medium between a too-smooth loss surface (i.e. every observation), and a very peaky loss surface (i.e. a single observation). Up until now we only used mini-batches to help with this. But we can also approach it by "smoothing" our updates between each mini-batch. That would effectively smooth the manifold of the loss space. Those of you familiar with signal processing will see this as a sort of low-pass filter on the gradient updates.
In order for us to use batch normalization, we need another placeholder which is a simple boolean: True or False, denoting when we are training. We'll use this placeholder to conditionally update batch normalization's statistics required for normalizing our minibatches. Let's create the placeholder and then I'll get into how to use this.
End of explanation
"""
from tensorflow.contrib.layers import batch_norm
help(batch_norm)
"""
Explanation: The original paper that introduced the idea suggests using batch normalization "pre-activation", meaning after the weight multiplication or convolution, and before the nonlinearity. We can use the tensorflow.contrib.layers.batch_norm module to apply batch normalization to any input tensor, given the tensor and the placeholder defining whether or not we are training. Let's use this module, and you can inspect the code inside the module in your own time if it interests you.
End of explanation
"""
def encoder(x, is_training, channels, filter_sizes, activation=tf.nn.tanh, reuse=None):
# Set the input to a common variable name, h, for hidden layer
h = x
print('encoder/input:', h.get_shape().as_list())
# Now we'll loop over the list of dimensions defining the number
# of output filters in each layer, and collect each hidden layer
hs = []
for layer_i in range(len(channels)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
# Convolve using the utility convolution function
# This requires the number of output filters,
# and the size of the kernel in `k_h` and `k_w`.
# By default, this will use a stride of 2, meaning
# each new layer will be downsampled by 2.
h, W = utils.conv2d(h, channels[layer_i],
k_h=filter_sizes[layer_i],
k_w=filter_sizes[layer_i],
d_h=2,
d_w=2,
reuse=reuse)
h = batch_norm(h, is_training=is_training)
# Now apply the activation function
h = activation(h)
print('layer:', layer_i, ', shape:', h.get_shape().as_list())
# Store each hidden layer
hs.append(h)
# Finally, return the encoding.
return h, hs
"""
Explanation: <a name="building-the-encoder-1"></a>
Building the Encoder
We can now change our encoder to accept the is_training placeholder and apply batch_norm just before the activation function is applied:
End of explanation
"""
n_pixels = 64
n_channels = 3
input_shape = [None, n_pixels, n_pixels, n_channels]
# placeholder for the input to the network
X = tf.placeholder(...)
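# A hedged sketch of one way to fill in the placeholder above (an assumption,
# not the official solution):
X = tf.placeholder(name='X', shape=input_shape, dtype=tf.float32)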
"""
Explanation: Let's now create the input to the network using a placeholder. We can try a slightly larger image this time. But be careful experimenting with much larger images as this is a big network.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
channels = [64, 64, 64]
filter_sizes = [5, 5, 5]
activation = tf.nn.elu
n_hidden = 128
with tf.variable_scope('encoder'):
H, Hs = encoder(...
Z = utils.linear(H, n_hidden)[0]
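# A hedged guess at the elided encoder call above (an assumption): pass the
# input placeholder, the is_training flag, and the layer settings defined above.
# H, Hs = encoder(X, is_training, channels, filter_sizes, activation)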
"""
Explanation: And now we'll connect the input to an encoder network. We'll also use the tf.nn.elu activation instead. Explore other activations but I've found this to make the training much faster (e.g. 10x faster at least!). See the paper for more details: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
def variational_bayes(h, n_code):
# Model mu and log(\sigma)
z_mu = tf.nn.tanh(utils.linear(h, n_code, name='mu')[0])
z_log_sigma = 0.5 * tf.nn.tanh(utils.linear(h, n_code, name='log_sigma')[0])
# Sample from noise distribution p(eps) ~ N(0, 1)
epsilon = tf.random_normal(tf.stack([tf.shape(h)[0], n_code]))
# Sample from posterior
z = z_mu + tf.multiply(epsilon, tf.exp(z_log_sigma))
# Measure loss
loss_z = -0.5 * tf.reduce_sum(
1.0 + 2.0 * z_log_sigma - tf.square(z_mu) - tf.exp(2.0 * z_log_sigma),
1)
return z, z_mu, z_log_sigma, loss_z
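# For reference (a standard result, not taken from this notebook): loss_z above
# is the closed-form KL divergence between the approximate posterior
# N(mu, sigma^2) and the unit Gaussian prior,
#   KL = -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2),
# where log(sigma^2) = 2 * z_log_sigma, which matches the expression in the code.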
"""
Explanation: <a name="building-the-variational-layer"></a>
Building the Variational Layer
In Session 3, we introduced the idea of Variational Bayes when we used the Variational Auto Encoder. The variational bayesian approach requires a richer understanding of probabilistic graphical models and bayesian methods which we weren't able to go over in this course (it requires a few courses all by itself!). For that reason, please treat this as a "black box" in this course.
For those of you that are more familiar with graphical models, Variational Bayesian methods attempt to model an approximate joint distribution of $Q(Z)$ using some distance function to the true distribution $P(X)$. Kingma and Welling show how this approach can be used in a graphical model resembling an autoencoder and can be trained using KL-Divergence, or $KL(Q(Z) || P(X))$. The distribution Q(Z) is the variational distribution, and attempts to model the lower-bound of the true distribution $P(X)$ through the minimization of the KL-divergence. Another way to look at this is the encoder of the network is trying to model the parameters of a known distribution, the Gaussian Distribution, through a minimization of this lower bound. We assume that this distribution resembles the true distribution, but it is merely a simplification of the true distribution. To learn more about this, I highly recommend picking up the book by Christopher Bishop called "Pattern Recognition and Machine Learning" and reading the original Kingma and Welling paper on Variational Bayes.
Now back to coding, we'll create a general variational layer that does exactly the same thing as our VAE in session 3. Treat this as a black box if you are unfamiliar with the math. It takes an input encoding, h, and an integer, n_code defining how many latent Gaussians to use to model the latent distribution. In return, we get the latent encoding from sampling the Gaussian layer, z, the mean and log standard deviation, as well as the prior loss, loss_z.
End of explanation
"""
# Experiment w/ values between 2 - 100
# depending on how difficult the dataset is
n_code = 32
with tf.variable_scope('encoder/variational'):
Z, Z_mu, Z_log_sigma, loss_Z = variational_bayes(h=Z, n_code=n_code)
"""
Explanation: Let's connect this layer to our encoding, and keep all the variables it returns. Treat this as a black box if you are unfamiliar with variational bayes!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
def decoder(z, is_training, dimensions, channels, filter_sizes,
activation=tf.nn.elu, reuse=None):
h = z
for layer_i in range(len(dimensions)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
h, W = utils.deconv2d(x=h,
n_output_h=dimensions[layer_i],
n_output_w=dimensions[layer_i],
n_output_ch=channels[layer_i],
k_h=filter_sizes[layer_i],
k_w=filter_sizes[layer_i],
reuse=reuse)
h = batch_norm(h, is_training=is_training)
h = activation(h)
return h
"""
Explanation: <a name="building-the-decoder-1"></a>
Building the Decoder
In the GAN network, we built a decoder and called it the generator network. Same idea here. We can use these terms interchangeably. Before we connect our latent encoding, Z to the decoder, we'll implement batch norm in our decoder just like we did with the encoder. This is a simple fix: add a second argument for is_training and then apply batch normalization just after the deconv2d operation and just before the nonlinear activation.
End of explanation
"""
dimensions = [n_pixels // 8, n_pixels // 4, n_pixels // 2, n_pixels]
channels = [30, 30, 30, n_channels]
filter_sizes = [4, 4, 4, 4]
activation = tf.nn.elu
n_latent = n_code * (n_pixels // 16)**2
with tf.variable_scope('generator'):
Z_decode = utils.linear(
Z, n_output=n_latent, name='fc', activation=activation)[0]
Z_decode_tensor = tf.reshape(
Z_decode, [-1, n_pixels//16, n_pixels//16, n_code], name='reshape')
G = decoder(
Z_decode_tensor, is_training, dimensions,
channels, filter_sizes, activation)
"""
Explanation: Now we'll build a decoder just like in Session 3, and just like our Generator network in Part 1. In Part 1, we created Z as a placeholder which we would have had to feed in as random values. However, now we have an explicit coding of an input image in X stored in Z by having created the encoder network.
End of explanation
"""
def discriminator(X,
is_training,
channels=[50, 50, 50, 50],
filter_sizes=[4, 4, 4, 4],
activation=tf.nn.elu,
reuse=None):
# We'll scope these variables to "discriminator_real"
with tf.variable_scope('discriminator', reuse=reuse):
H, Hs = encoder(
X, is_training, channels, filter_sizes, activation, reuse)
shape = H.get_shape().as_list()
H = tf.reshape(
H, [-1, shape[1] * shape[2] * shape[3]])
D, W = utils.linear(
x=H, n_output=1, activation=tf.nn.sigmoid, name='fc', reuse=reuse)
return D, Hs
"""
Explanation: Now we need to build our discriminators. We'll need to add a parameter for the is_training placeholder. We're also going to keep track of every hidden layer in the discriminator. Our encoder already returns the Hs of each layer. Alternatively, we could poll the graph for each layer in the discriminator and ask for the corresponding layer names. We're going to need these layers when building our costs.
End of explanation
"""
D_real, Hs_real = discriminator(X, is_training)
D_fake, Hs_fake = discriminator(G, is_training, reuse=True)
"""
Explanation: Recall the regular GAN and DCGAN required 2 discriminators: one for the generated samples in Z, and one for the input samples in X. We'll do the same thing here. One discriminator for the real input data, X, which the discriminator will try to predict as 1s, and another discriminator for the generated samples that go from X through the encoder to Z, and finally through the decoder to G. The discriminator will be trained to try and predict these as 0s, whereas the generator will be trained to try and predict these as 1s.
End of explanation
"""
with tf.variable_scope('loss'):
# Loss functions
loss_D_llike = 0
for h_real, h_fake in zip(Hs_real, Hs_fake):
loss_D_llike += tf.reduce_sum(tf.squared_difference(
utils.flatten(h_fake), utils.flatten(h_real)), 1)
eps = 1e-12
loss_real = tf.log(D_real + eps)
loss_fake = tf.log(1 - D_fake + eps)
loss_GAN = tf.reduce_sum(loss_real + loss_fake, 1)
gamma = 0.75
loss_enc = tf.reduce_mean(loss_Z + loss_D_llike)
loss_dec = tf.reduce_mean(gamma * loss_D_llike - loss_GAN)
loss_dis = -tf.reduce_mean(loss_GAN)
nb_utils.show_graph(tf.get_default_graph().as_graph_def())
"""
Explanation: <a name="building-vaegan-loss-functions"></a>
Building VAE/GAN Loss Functions
Let's now see how we can compose our loss. We have 3 losses for our discriminator. Along with measuring the binary cross entropy between each of them, we're going to also measure each layer's loss from our two discriminators using an l2-loss, and this will form our loss for the log likelihood measure. The details of how these are constructed are explained in more detail in the paper: https://arxiv.org/abs/1512.09300 - please refer to this paper for more details that are way beyond the scope of this course! One parameter within this to pay attention to is gamma, which the authors of the paper suggest controls the weighting between content and style, just like in Session 4's Style Net implementation.
End of explanation
"""
learning_rate = 0.0001
opt_enc = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(
loss_enc,
var_list=[var_i for var_i in tf.trainable_variables()
if ...])
opt_gen = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(
loss_dec,
var_list=[var_i for var_i in tf.trainable_variables()
if ...])
opt_dis = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(
loss_dis,
var_list=[var_i for var_i in tf.trainable_variables()
if var_i.name.startswith('discriminator')])
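# A hedged guess at the two elided variable filters above, following the same
# startswith pattern used for the discriminator optimizer (an assumption):
# opt_enc: var_i.name.startswith('encoder')
# opt_gen: var_i.name.startswith('generator')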
"""
Explanation: <a name="creating-the-optimizers"></a>
Creating the Optimizers
We now have losses for our encoder, decoder, and discriminator networks. We can connect each of these to their own optimizer and start training! Just like with Part 1's GAN, we'll ensure each network's optimizer only trains its part of the network: the encoder's optimizer will only update the encoder variables, the generator's optimizer will only update the generator variables, and the discriminator's optimizer will only update the discriminator variables.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
from libs import datasets, dataset_utils
batch_size = 64
n_epochs = 100
crop_shape = [n_pixels, n_pixels, n_channels]
crop_factor = 0.8
input_shape = [218, 178, 3]
# Try w/ CELEB first to make sure it works, then explore w/ your own dataset.
files = datasets.CELEB()
batch = dataset_utils.create_input_pipeline(
files=files,
batch_size=batch_size,
n_epochs=n_epochs,
crop_shape=crop_shape,
crop_factor=crop_factor,
shape=input_shape)
"""
Explanation: <a name="loading-the-dataset"></a>
Loading the Dataset
We'll now load our dataset just like in Part 1. Here is where you should explore with your own data!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
n_samples = 10
zs = np.random.uniform(
-1.0, 1.0, [4, n_code]).astype(np.float32)
zs = utils.make_latent_manifold(zs, n_samples)
"""
Explanation: We'll also create a latent manifold just like we've done in Session 3 and Part 1. This is a random sampling of 4 points in the latent space of Z. We then interpolate between them to create a "hyper-plane" and show the decoding of 10 x 10 points on that hyperplane.
End of explanation
"""
# We create a session to use the graph
sess = tf.Session()
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
sess.run(init_op)
"""
Explanation: Now create a session and create a coordinator to manage our queues for fetching data from the input pipeline and start our queue runners:
End of explanation
"""
if os.path.exists("vaegan.ckpt"):
saver.restore(sess, "vaegan.ckpt")
print("GAN model restored.")
"""
Explanation: Load an existing checkpoint if it exists to continue training.
End of explanation
"""
n_files = len(files)
test_xs = sess.run(batch) / 255.0
if not os.path.exists('imgs'):
os.mkdir('imgs')
m = utils.montage(test_xs, 'imgs/test_xs.png')
plt.imshow(m)
"""
Explanation: We'll also try resynthesizing a test set of images. This will help us understand how well the encoder/decoder network is doing:
End of explanation
"""
t_i = 0
batch_i = 0
epoch_i = 0
ckpt_name = './vaegan.ckpt'
"""
Explanation: <a name="training-1"></a>
Training
Almost ready for training. Let's get some variables which we'll need. These are the same as Part 1's training process. We'll keep track of t_i which we'll use to create images of the current manifold and reconstruction every so many iterations. And we'll keep track of the current batch number within the epoch and the current epoch number.
End of explanation
"""
equilibrium = 0.693
margin = 0.4
"""
Explanation: Just like in Part 1, we'll train trying to maintain an equilibrium between our Generator and Discriminator networks. You should experiment with the margin depending on how the training proceeds.
End of explanation
"""
while epoch_i < n_epochs:
if batch_i % (n_files // batch_size) == 0:
batch_i = 0
epoch_i += 1
print('---------- EPOCH:', epoch_i)
batch_i += 1
batch_xs = sess.run(batch) / 255.0
real_cost, fake_cost, _ = sess.run([
loss_real, loss_fake, opt_enc],
feed_dict={
X: batch_xs,
is_training: True})
real_cost = -np.mean(real_cost)
fake_cost = -np.mean(fake_cost)
gen_update = True
dis_update = True
if real_cost > (equilibrium + margin) or \
fake_cost > (equilibrium + margin):
gen_update = False
if real_cost < (equilibrium - margin) or \
fake_cost < (equilibrium - margin):
dis_update = False
if not (gen_update or dis_update):
gen_update = True
dis_update = True
if gen_update:
sess.run(opt_gen, feed_dict={
X: batch_xs,
is_training: True})
if dis_update:
sess.run(opt_dis, feed_dict={
X: batch_xs,
is_training: True})
if batch_i % 50 == 0:
print('real:', real_cost, '/ fake:', fake_cost)
# Plot example reconstructions from latent layer
recon = sess.run(G, feed_dict={
Z: zs,
is_training: False})
recon = np.clip(recon, 0, 1)
m1 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/manifold_%08d.png' % t_i)
# Plot example reconstructions
recon = sess.run(G, feed_dict={
X: test_xs,
is_training: False})
recon = np.clip(recon, 0, 1)
m2 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/reconstruction_%08d.png' % t_i)
fig, axs = plt.subplots(1, 2, figsize=(15, 10))
axs[0].imshow(m1)
axs[1].imshow(m2)
plt.show()
t_i += 1
if batch_i % 200 == 0:
# Save the variables to disk.
save_path = saver.save(sess, "./" + ckpt_name,
global_step=batch_i,
write_meta_graph=False)
print("Model saved in file: %s" % save_path)
# One of the threads has issued an exception. So let's tell all the
# threads to shutdown.
coord.request_stop()
# Wait until all threads have finished.
coord.join(threads)
# Clean up the session.
sess.close()
"""
Explanation: Now we'll train! Just like Part 1, we measure the real_cost and fake_cost. But this time, we'll always update the encoder. Based on the real/fake costs, we then decide whether to update the generator and discriminator networks. This will take a long time to produce something nice, but not nearly as long as the regular GAN network, despite the additional parameters of the encoder and variational networks. Be sure to monitor the reconstructions to understand when your network has reached the capacity of its learning! For reference, on Celeb Net, I would use about 5 layers in each of the Encoder, Generator, and Discriminator networks, using as input a 100 x 100 image and a minimum of 200 channels per layer. This network would take about 1-2 days to train on an Nvidia TITAN X GPU.
End of explanation
"""
tf.reset_default_graph()
from libs import celeb_vaegan as CV
net = CV.get_celeb_vaegan_model()
"""
Explanation: <a name="part-3---latent-space-arithmetic"></a>
Part 3 - Latent-Space Arithmetic
<a name="loading-the-pre-trained-model"></a>
Loading the Pre-Trained Model
We're now going to work with a pre-trained VAEGAN model on the Celeb Net dataset. Let's load this model:
End of explanation
"""
sess = tf.Session()
g = tf.get_default_graph()
tf.import_graph_def(net['graph_def'], name='net', input_map={
'encoder/variational/random_normal:0': np.zeros(512, dtype=np.float32)})
names = [op.name for op in g.get_operations()]
print(names)
"""
Explanation: We'll load the graph_def contained inside this dictionary. It follows the same idea as the inception, vgg16, and i2v pretrained networks. It is a dictionary with the key graph_def holding the pretrained network's graph definition. It also includes labels and a preprocess key. We'll have to do one additional thing, which is to turn off the random sampling from the variational layer. This isn't really necessary but will ensure we get the same results each time we use the network. We'll use the input_map argument to do this. Don't worry if this doesn't make any sense, as we didn't cover the variational layer in any depth. Just know that this is removing a random process from the network so that it is completely deterministic. If we hadn't done this, we'd get slightly different results each time we used the network (which may even be desirable for your purposes).
End of explanation
"""
X = g.get_tensor_by_name('net/x:0')
Z = g.get_tensor_by_name('net/encoder/variational/z:0')
G = g.get_tensor_by_name('net/generator/x_tilde:0')
"""
Explanation: Now let's get the relevant parts of the network: X, the input image to the network, Z, the input image's encoding, and G, the decoded image. In many ways, this is just like the Autoencoders we learned about in Session 3, except instead of Y being the output, we have G from our generator! And the way we train it is very different: we use an adversarial process between the generator and discriminator, and use the discriminator's own distance measure to help train the network, rather than pixel-to-pixel differences.
End of explanation
"""
files = datasets.CELEB()
img_i = 50
img = plt.imread(files[img_i])
plt.imshow(img)
"""
Explanation: Let's get some data to play with:
End of explanation
"""
p = CV.preprocess(img)
synth = sess.run(G, feed_dict={X: p[np.newaxis]})
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
axs[0].imshow(p)
axs[1].imshow(synth[0] / synth.max())
"""
Explanation: Now preprocess the image, and see what the generated image looks like (i.e. the lossy version of the image through the network's encoding and decoding).
End of explanation
"""
net.keys()
len(net['labels'])
net['labels']
"""
Explanation: So we lost a lot of detail, but the network still seems able to express quite a bit about the image. Our innermost layer, Z, is only 512 values, yet our dataset was 200k images of 64 x 64 x 3 pixels (about 2.3 GB of information). That means we're able to express each image from our nearly 2.3 GB dataset with only 512 values! Having some loss of detail is certainly expected!
<a name="exploring-the-celeb-net-attributes"></a>
Exploring the Celeb Net Attributes
Let's now try and explore the attributes of our dataset. We didn't train the network with any supervised labels, but the Celeb Net dataset has 40 attributes for each of its 200k images. These are already parsed and stored for you in the net dictionary:
End of explanation
"""
plt.imshow(img)
[net['labels'][i] for i, attr_i in enumerate(net['attributes'][img_i]) if attr_i]
"""
Explanation: Let's see what attributes exist for one of the celeb images:
End of explanation
"""
Z.get_shape()
"""
Explanation: <a name="find-the-latent-encoding-for-an-attribute"></a>
Find the Latent Encoding for an Attribute
The Celeb Dataset includes attributes for each of its 200k+ images. This allows us to feed into the encoder some images that we know have a specific attribute, e.g. "smiling". We store their encodings and retain this distribution of encoded values. We can then look at any other image, see how it is encoded, and slightly change the encoding by adding the encoding of our smiling images to it! The result should be our image but with more smiling. That is just insane and we're going to see how to do it. First let's inspect our latent space:
End of explanation
"""
bald_label = net['labels'].index('Bald')
bald_label
"""
Explanation: We have 512 features that we can encode any image with. Assuming our network is doing an okay job, let's try to find the Z of the first 100 images with the 'Bald' attribute:
End of explanation
"""
bald_img_idxs = np.where(net['attributes'][:, bald_label])[0]
bald_img_idxs
"""
Explanation: Let's get all the bald image indexes:
End of explanation
"""
bald_imgs = [plt.imread(files[bald_img_i])[..., :3]
for bald_img_i in bald_img_idxs[:100]]
"""
Explanation: Now let's just load 100 of their images:
End of explanation
"""
plt.imshow(np.mean(bald_imgs, 0).astype(np.uint8))
"""
Explanation: Let's see if the mean image looks like a good bald person or not:
End of explanation
"""
bald_p = np.array([CV.preprocess(bald_img_i) for bald_img_i in bald_imgs])
"""
Explanation: Yes that is definitely a bald person. Now we're going to try to find the encoding of a bald person. One method is to try and find every other possible image and subtract the "bald" person's latent encoding. Then we could add this encoding back to any new image and hopefully it makes the image look more bald. Or we can find a bunch of bald people's encodings and then average their encodings together. This should reduce the noise from having many different attributes, but keep the signal pertaining to the baldness.
Let's first preprocess the images:
End of explanation
"""
bald_zs = sess.run(Z, feed_dict=...
"""
Explanation: Now we can find the latent encoding of the images by calculating Z and feeding X with our bald_p images:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
bald_feature = np.mean(bald_zs, 0, keepdims=True)
bald_feature.shape
"""
Explanation: Now let's calculate the mean encoding:
End of explanation
"""
bald_generated = sess.run(G, feed_dict=...
plt.imshow(bald_generated[0] / bald_generated.max())
"""
Explanation: Let's try and synthesize from the mean bald feature now and see how it looks:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
def get_features_for(label='Bald', has_label=True, n_imgs=50):
label_i = net['labels'].index(label)
label_idxs = np.where(net['attributes'][:, label_i] == has_label)[0]
label_idxs = np.random.permutation(label_idxs)[:n_imgs]
imgs = [plt.imread(files[img_i])[..., :3]
for img_i in label_idxs]
preprocessed = np.array([CV.preprocess(img_i) for img_i in imgs])
zs = sess.run(Z, feed_dict={X: preprocessed})
return np.mean(zs, 0)
"""
Explanation: <a name="latent-feature-arithmetic"></a>
Latent Feature Arithmetic
Let's now try to write a general function for performing everything we've just done so that we can do this with many different features. We'll then try to combine them and synthesize people with the features we want them to have...
End of explanation
"""
# Explore different attributes
z1 = get_features_for('Male', True, n_imgs=10)
z2 = get_features_for('Male', False, n_imgs=10)
z3 = get_features_for('Smiling', True, n_imgs=10)
z4 = get_features_for('Smiling', False, n_imgs=10)
b1 = sess.run(G, feed_dict={Z: z1[np.newaxis]})
b2 = sess.run(G, feed_dict={Z: z2[np.newaxis]})
b3 = sess.run(G, feed_dict={Z: z3[np.newaxis]})
b4 = sess.run(G, feed_dict={Z: z4[np.newaxis]})
fig, axs = plt.subplots(1, 4, figsize=(15, 6))
axs[0].imshow(b1[0] / b1.max()), axs[0].set_title('Male'), axs[0].grid('off'), axs[0].axis('off')
axs[1].imshow(b2[0] / b2.max()), axs[1].set_title('Not Male'), axs[1].grid('off'), axs[1].axis('off')
axs[2].imshow(b3[0] / b3.max()), axs[2].set_title('Smiling'), axs[2].grid('off'), axs[2].axis('off')
axs[3].imshow(b4[0] / b4.max()), axs[3].set_title('Not Smiling'), axs[3].grid('off'), axs[3].axis('off')
"""
Explanation: Let's try getting positive and negative features for some attributes. Be sure to explore different attributes! Also try different values of n_imgs, e.g. 2, 3, 5, 10, 50, 100. What happens with different values?
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
notmale_vector = z2 - z1
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z1 + notmale_vector*amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
"""
Explanation: Now let's interpolate between the "Male" and "Not Male" categories:
End of explanation
"""
smiling_vector = z3 - z4
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z4 + smiling_vector*amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i] / g[i].max(), 0, 1))
ax_i.grid('off')
"""
Explanation: And the same for smiling:
End of explanation
"""
n_imgs = 5
amt = np.linspace(-1.5, 2.5, n_imgs)
zs = np.array([z4 + smiling_vector*amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
"""
Explanation: There's also no reason why we have to be within the boundaries of 0-1. We can extrapolate beyond, in, and around the space.
End of explanation
"""
def slerp(val, low, high):
"""Spherical interpolation. val has a range of 0 to 1."""
if val <= 0:
return low
elif val >= 1:
return high
omega = np.arccos(np.dot(low/np.linalg.norm(low), high/np.linalg.norm(high)))
so = np.sin(omega)
return np.sin((1.0-val)*omega) / so * low + np.sin(val*omega)/so * high
amt = np.linspace(0, 1, n_imgs)
zs = np.array([slerp(amt_i, z1, z2) for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
"""
Explanation: <a name="extensions"></a>
Extensions
Tom White, Lecturer at Victoria University School of Design, also recently demonstrated an alternative way of interpolating using a sinusoidal interpolation. He's created some of the most impressive generative images out there and luckily for us he has detailed his process in the arxiv preprint: https://arxiv.org/abs/1609.04468 - as well, be sure to check out his twitter bot, https://twitter.com/smilevector - which adds smiles to people :) - Note that the network we're using is only trained on aligned faces that are frontally facing, though this twitter bot is capable of adding smiles to any face. I suspect that he is running a face detection algorithm such as AAM, CLM, or ASM, cropping the face, aligning it, and then running a similar algorithm to what we've done above. Or else, perhaps he has trained a new model on faces that are not aligned. In any case, it is well worth checking out!
Let's now try and use sinusoidal interpolation using his implementation in plat which I've copied below:
End of explanation
"""
img = plt.imread('parag.png')[..., :3]
img = CV.preprocess(img, crop_factor=1.0)[np.newaxis]
"""
Explanation: It's certainly worth trying especially if you are looking to explore your own model's latent space in new and interesting ways.
Let's try and load an image that we want to play with. We need an image as similar to the Celeb Dataset as possible. Unfortunately, we don't have access to the algorithm they used to "align" the faces, so we'll need to try and get as close as possible to an aligned face image. One way you can do this is to load up one of the celeb images and try and align an image to it using e.g. Photoshop or another photo editing software that lets you blend and move the images around. That's what I did for my own face...
End of explanation
"""
img_ = sess.run(G, feed_dict={X: img})
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
axs[0].imshow(img[0]), axs[0].grid('off')
axs[1].imshow(np.clip(img_[0] / np.max(img_), 0, 1)), axs[1].grid('off')
"""
Explanation: Let's see how the network encodes it:
End of explanation
"""
z1 = get_features_for('Blurry', True, n_imgs=25)
z2 = get_features_for('Blurry', False, n_imgs=25)
unblur_vector = z2 - z1
z = sess.run(Z, feed_dict={X: img})
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z[0] + unblur_vector * amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i] / g[i].max(), 0, 1))
ax_i.grid('off')
ax_i.axis('off')
"""
Explanation: Notice how blurry the image is. Tom White's preprint suggests one way to sharpen the image is to find the "Blurry" attribute vector:
End of explanation
"""
from scipy.ndimage import gaussian_filter
idxs = np.random.permutation(range(len(files)))
imgs = [plt.imread(files[idx_i]) for idx_i in idxs[:100]]
blurred = []
for img_i in imgs:
img_copy = np.zeros_like(img_i)
for ch_i in range(3):
img_copy[..., ch_i] = gaussian_filter(img_i[..., ch_i], sigma=3.0)
blurred.append(img_copy)
# Now let's preprocess the original images and the blurred ones
imgs_p = np.array([CV.preprocess(img_i) for img_i in imgs])
blur_p = np.array([CV.preprocess(img_i) for img_i in blurred])
# And then compute each of their latent features
noblur = sess.run(Z, feed_dict={X: imgs_p})
blur = sess.run(Z, feed_dict={X: blur_p})
synthetic_unblur_vector = np.mean(noblur - blur, 0)
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z[0] + synthetic_unblur_vector * amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
"""
Explanation: Notice that the image also gets brighter, and perhaps other features besides the blurriness of the image change. Tom's preprint suggests that this is due to the correlation that blurred images have with other things such as the brightness of the image, possibly due to biases in labeling or in how the photographs are taken. He suggests that another way to unblur would be to synthetically blur a set of images and find the difference in the encoding between the real and blurred images. We can try it like so:
End of explanation
"""
z1 = get_features_for('Eyeglasses', True)
z2 = get_features_for('Eyeglasses', False)
glass_vector = z1 - z2
z = sess.run(Z, feed_dict={X: img})
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z[0] + glass_vector * amt_i + unblur_vector * amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
"""
Explanation: For some reason, it also doesn't like my glasses very much. Let's try and add them back.
End of explanation
"""
n_imgs = 5
amt = np.linspace(0, 1.0, n_imgs)
zs = np.array([z[0] + glass_vector * amt_i + unblur_vector * amt_i + amt_i * smiling_vector for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
"""
Explanation: Well, more like sunglasses then. Let's try adding everything in there now!
End of explanation
"""
n_imgs = 5
amt = np.linspace(0, 1.5, n_imgs)
z = sess.run(Z, feed_dict={X: imgs_p})
imgs = []
for amt_i in amt:
zs = z + synthetic_unblur_vector * amt_i + amt_i * smiling_vector
g = sess.run(G, feed_dict={Z: zs})
m = utils.montage(np.clip(g, 0, 1))
imgs.append(m)
gif.build_gif(imgs, saveto='celeb.gif')
ipyd.Image(url='celeb.gif?i={}'.format(
np.random.rand()), height=1000, width=1000)
"""
Explanation: Well it was worth a try anyway. We can also try with a lot of images and create a gif montage of the result:
End of explanation
"""
imgs = []
... DO SOMETHING AWESOME ! ...
gif.build_gif(imgs=imgs, saveto='vaegan.gif')
"""
Explanation: Explore multiple feature vectors and apply them to images from the celeb dataset to produce animations of a face, saving the result as a GIF. Recall you can store each image frame in a list and then use the gif.build_gif function to create a gif. Explore your own syntheses and then include a gif of the different images you create as "celeb.gif" in the final submission. Perhaps try finding unexpected synthetic latent attributes in the same way that we created a blur attribute. You can check the documentation in scipy.ndimage for some other image processing techniques, for instance: http://www.scipy-lectures.org/advanced/image_processing/ - and see if you can find the encoding of another attribute that you then apply to your own images. You can even try it with many images and use the utils.montage function to create a large grid of images that evolves over your attributes. Or create a set of expressions perhaps. Up to you, just explore!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
|
jcmgray/quijy
|
docs/basics.ipynb
|
mit
|
qu(data, qtype='ket')
"""
Explanation: Kets are column vectors, i.e. with shape (d, 1):
End of explanation
"""
qu(data, qtype='bra') # also conjugates the data
"""
Explanation: The normalized=True option can be used to ensure a normalized output.
Bras are row vectors, i.e. with shape (1, d):
End of explanation
"""
qu(data, qtype='dop')
"""
Explanation: And operators are square matrices, i.e. have shape (d, d):
End of explanation
"""
qu(data, qtype='dop', sparse=True)
psi = 1.0j * bell_state('psi-')
psi
psi.H
psi = up()
psi
psi.H @ psi # inner product
X = pauli('X')
X @ psi # act as gate
psi.H @ X @ psi # operator expectation
expec(psi, psi)
expec(psi, X)
"""
Explanation: Which can also be sparse:
End of explanation
"""
psi = rand_ket(2**20)
A = rand_herm(2**20, sparse=True) + speye(2**20)
A
expec(A, psi) # should be ~ 1
%%timeit
expec(A, psi)
dims = [2] * 10 # overall space of 10 qubits
X = pauli('X')
IIIXXIIIII = ikron(X, dims, inds=[3, 4]) # act on 4th and 5th spin only
IIIXXIIIII.shape
dims = [2] * 3
XZ = pauli('X') & pauli('Z')
ZIX = pkron(XZ, dims, inds=[2, 0])
ZIX.real.astype(int)
dims = [2] * 10
D = prod(dims)
psi = rand_ket(D)
rho_ab = ptr(psi, dims, [0, 9])
rho_ab.round(3) # probably pretty close to identity
"""
Explanation: Here's an example for a much larger (20 qubit), sparse operator expectation,
which will be automatically parallelized:
End of explanation
"""
|
benozol/codemapper
|
evaluation/lib/CoMap evaluation.ipynb
|
agpl-3.0
|
ev = pd.read_csv('../{}.evaluations.csv'.format(PROJECT))
for key in ['generated', 'reference', 'tp', 'fp', 'fn']:
ev[key] = ev[key].map(lambda x: x if x != x else json.loads(x))
ev['variation event database recall precision'.split()].head()
"""
Explanation: Load evaluations ev
End of explanation
"""
df_m = mappings.describe()
df_m.index = df_m.index.map(database_label)
df_m.columns = df_m.columns.map(event_label)
df_m.index.name = 'Inclusion codes'
df_m['Sum'] = df_m.iloc[:4,:7].sum(axis=1)
df_m['Average'] = df_m.iloc[:4,:7].mean(axis=1).round(2)
#df.ix['Sum'] = df.iloc[:4, :7].sum()
#df.ix['Average'] = df.iloc[:4, :7].mean().round(2)
#df.ix['Sum']['Sum'] = df['Sum'].sum()
#df.ix['Average']['Average'] = df['Average'].mean()
df_m.fillna('-').T[['ICD-9', 'ICD-10', 'ICPC-2', 'READ-2']]
df_e = mappings.describe(exclusions=True)
df_e.index = df_e.index.map(database_label)
df_e.columns = df_e.columns.map(event_label)
df_e.index.name = 'Exclusion codes'
df_e['Sum'] = df_e.iloc[:4,:7].sum(axis=1)
df_e['Average'] = df_e.iloc[:4,:7].mean(axis=1).round(2)
df_e.fillna('-').T[['ICD-9', 'ICD-10', 'ICPC-2', 'READ-2']]
def combine_pair(t):
return '{} ({})'.format(t.inc, t.exc)
def combine_row(inc, exc):
return (pd.DataFrame({'inc': inc.fillna('-'), 'exc': exc.fillna('-')})
.apply(combine_pair, axis=1))
df = df_m.astype('float64').combine(df_e.astype('float64'), combine_row)
df = df.T[['ICD-9', 'ICD-10', 'ICPC-2', 'READ-2']]
df.index.name = 'Events'
df
"""
Explanation: Mappings
End of explanation
"""
pd.DataFrame([
(database, databases.coding_system(database), database_label(database))
for database in databases.databases()
], columns=("Database", "Coding system", "Label")).set_index("Database")
"""
Explanation: Notes
Should exclusion codes from the reference be generated?
No. Exclusion codes are often added specifically for each database, and those codes are not represented in the case definition.
Coding systems
End of explanation
"""
types_distr = pd.DataFrame(json.load(open('../{}.types-distrs.json'.format(PROJECT)))).T
df = pd.DataFrame()
df['All'] = types_distr.groupby('group')[['pos', 'neg']].sum().sum()
df['All %'] = df['All'] / df['All'].sum()
df['DISO'] = types_distr.groupby('group')[['pos', 'neg']].sum().ix['DISO']
df['DISO %'] = df['DISO'] / df['DISO'].sum()
df
"""
Explanation: Baseline-0
DISO filtering for concepts
Number / Percentage of concepts with true positive codes overall (All) and in semantic group disorders (DISO).
End of explanation
"""
df = (ev[ev.variation == 'baseline0'].
groupby('event').
first().
cuis.
map(json.loads).
map(len).
to_frame('#CUIs'))
df.index = df.index.map(event_label)
df.ix['SUM'] = df['#CUIs'].sum()
df.T
"""
Explanation: Number of concepts in each mapping
End of explanation
"""
df = ev[ev.variation == 'baseline0'][['event', 'database', 'generated', 'reference', 'tp', 'fp', 'fn']]
for key in ['generated', 'reference', 'tp', 'fp', 'fn']:
df[key] = df[key].map(len_if_notnull)
df['database'] = df['database'].map(database_label)
df.groupby('database').sum()
df = pd.DataFrame([
ev[ev.variation == 'baseline0'].groupby('database').recall.mean(),
ev[ev.variation == 'baseline0'].groupby('database').precision.mean(),
])
df.index = df.index.map(measure_label)
df.columns = df.columns.map(database_label)
df = df[coding_systems]
df['Average'] = df.mean(axis=1)
with mystyle(measures_palette, savefig='baseline-performance-by-db.pdf'):
with sns.plotting_context(font_scale=1):
ax = draw_lines(df['Average'])
df.iloc[:,:-1].T.plot(kind='bar', title='Baseline_0', ax=ax)
baseline0_performance = df
df.round(3)
"""
Explanation: Number of generated and reference codes, and the confusion counts, by coding system.
End of explanation
"""
df = ev[ev.variation == 'baseline'][['event', 'database', 'generated', 'reference', 'tp', 'fp', 'fn']]
for key in 'generated reference tp fp fn'.split():
df[key] = df[key].map(len_if_notnull)
df['database'] = df['database'].map(database_label)
df.groupby('database').sum()
"""
Explanation: Baseline
End of explanation
"""
df = (ev[ev.variation == 'baseline'].
groupby('event').
first().
cuis.
map(json.loads).
map(len).
to_frame('#CUIs'))
df.index = df.index.map(event_label)
df.ix['SUM'] = df['#CUIs'].sum()
df.T
df = pd.DataFrame([
ev[ev.variation == 'baseline'].groupby('database').recall.mean(),
ev[ev.variation == 'baseline'].groupby('database').precision.mean(),
])
df.index = df.index.map(measure_label)
df.columns = df.columns.map(database_label)
df = df[coding_systems]
df['Average'] = df.mean(axis=1)
with mystyle(measures_palette, savefig='baseline-performance-by-db.pdf'):
with sns.plotting_context(font_scale=1):
ax = draw_lines(df['Average'])
df.iloc[:,:-1].T.plot(kind='bar', title='Baseline', ax=ax)
baseline_performance = df
df.round(3)
df = pd.DataFrame([
ev[ev.variation == 'baseline'].groupby('event').recall.mean(),
ev[ev.variation == 'baseline'].groupby('event').precision.mean(),
])
df.index = df.index.map(measure_label)
df.columns = df.columns.map(event_label)
df['Average'] = df.mean(axis=1)
with mystyle(measures_palette, xrot=45, ha='right', savefig='baseline-performance-by-event.pdf'):
ax = draw_lines(df['Average'])
df.iloc[:,:-1].T.plot(kind='bar', title='Baseline', ax=ax)
df.round(3)
"""
Explanation: Number of concepts in the mapping
End of explanation
"""
df = ev[ev.variation == 'max-recall'][['event', 'database', 'generated', 'reference', 'tp', 'fp', 'fn']]
for key in ['generated', 'reference', 'tp', 'fp', 'fn']:
df[key] = df[key].map(len_if_notnull)
df['database'] = df['database'].map(database_label)
df = df.groupby('database').sum()
df.ix['Overall'] = df.sum()
df['fn/reference'] = df['fn'] / df['reference']
#df['tp/generated'] = 1 - (df.tp / df.generated).round(3)
df.round(3)
df = pd.DataFrame([
ev[ev.variation == 'max-recall'].groupby('database').recall.mean(),
ev[ev.variation == 'max-recall'].groupby('database').precision.mean(),
])
df.index = df.index.map(measure_label)
df.columns = df.columns.map(database_label)
df = df[coding_systems]
df['Average'] = df.mean(axis=1)
with mystyle(measures_palette, ylim=(0,1), savefig='max-recall-performance-by-db.pdf'):
ax = draw_lines(df['Average'])
df.iloc[:,:-1].T.plot(kind='bar', title='Maximum recall', ax=ax)
maxrecall_performance = df
df.round(3)
df = pd.DataFrame([
ev[ev.variation == 'max-recall'].groupby('database').recall.mean(),
])
df.index = df.index.map(measure_label)
df.columns = df.columns.map(database_label)
df = df[coding_systems]
df['Average'] = df.mean(axis=1)
with mystyle(measures_palette, ylim=(.9, 1), savefig='max-recall-recall-by-db.pdf'):
ax = draw_lines(df['Average'])
df.iloc[:,:-1].T.plot(kind='bar', legend=False, title='Maximum recall', ax=ax)
plt.ylabel(measure_label('recall'))
df.round(3)
"""
Explanation: Max-recall
End of explanation
"""
with open('../{}.code-stats.csv'.format(PROJECT)) as f:
code_stats = pd.read_csv(f)
stats = pd.DataFrame()
stats['Mapping'] = (code_stats[code_stats.InMapping]
.groupby('Database')
.Code.count())
stats['Not maximum-recall'] = (code_stats[code_stats.InMapping & ~code_stats.InDnf]
.groupby('Database')
.Code.count())
stats = stats.fillna(0)
stats['% of missing'] = (stats['Not maximum-recall'] / stats['Not maximum-recall'].sum()).map("{:.2%}".format)
stats['% of mapping'] = (stats['Not maximum-recall'] / stats['Mapping']).map("{:.2%}".format)
stats.index = stats.index.map(database_label)
stats
max_recall_fn = ev[(ev.variation == 'max-recall') & (ev.recall < 1)][["database", "fn"]]
max_recall_fn.database = max_recall_fn.database.map(database_label)
max_recall_fn = max_recall_fn.groupby('database').fn.sum().to_frame('fn')
max_recall_fn['fn'] = max_recall_fn['fn'].map(lambda x: set() if x != x else set(x)).map(', '.join)
max_recall_fn.index.name = 'Database'
max_recall_fn.columns = ['False negatives of maximum recall']
max_recall_fn
"""
Explanation: Reasons for imperfect sensitivity
End of explanation
"""
averages_compare = pd.DataFrame([
ev[ev.variation == 'max-recall'].groupby('event').recall.mean(),
ev[ev.variation == 'max-recall'].groupby('event').precision.mean(),
])
averages_compare.index = averages_compare.index.map(measure_label)
averages_compare.columns = averages_compare.columns.map(event_names.get)
averages_compare['Average'] = averages_compare.mean(axis=1)
with mystyle(measures_palette, xrot=45, ha='right', savefig='max-recall-by-event.pdf'):
ax = draw_lines(averages_compare['Average'])
averages_compare.iloc[:,:-1].T.plot(kind='bar', title="Maximum recall", ax=ax)
averages_compare.round(3)
"""
Explanation: CPRD: READ2 codes from the reference are mapped to READ CTV3 codes that are not in UMLS, for example 7L1H6 (READ2) -> XaM3E, XaPuP, 7L1H6, 7L1h6.
End of explanation
"""
compare_variations = OrderedDict([
('3-RN-RB.expand', 'Expand 3 RN, RB'),
('3-CHD-PAR.expand', 'Expand 3 PAR, CHD'),
('3-RN-CHD-RB-PAR.expand', 'Expand 3 RN, CHD, RB, PAR'),
])
averages_compare = pd.DataFrame([
ev[ev.variation == variation].groupby('database').recall.mean()
for variation in compare_variations
], index=compare_variations)
averages_compare.columns = averages_compare.columns.map(database_label)
averages_compare.index = compare_variations.values()
averages_compare = averages_compare[coding_systems]
averages_compare['Average'] = averages_compare.mean(axis=1)
with mystyle(graded_recall_palette(len(compare_variations), rev=0), savefig='relations-recall-by-db.pdf'):
ax = draw_lines(averages_compare['Average'])
averages_compare.iloc[:, :-1].T.plot(kind='bar', title="Relations in expansion step 3", ax=ax)
plt.ylabel(measure_label('recall'))
print(averages_compare)
compare_variations = OrderedDict([
('3-RN-RB.expand', 'Expand 3 RN, RB'),
('3-CHD-PAR.expand', 'Expand 3 PAR, CHD'),
('3-RN-CHD-RB-PAR.expand', 'Expand 3 RN, CHD, RB, PAR'),
])
averages_compare = pd.DataFrame([
ev[ev.variation == variation].groupby('database').precision.mean()
for variation in compare_variations
], index=compare_variations)
averages_compare.columns = averages_compare.columns.map(database_label)
averages_compare.index = compare_variations.values()
averages_compare = averages_compare[coding_systems]
averages_compare['Average'] = averages_compare.mean(axis=1)
with mystyle(graded_recall_palette(len(compare_variations), rev=0), savefig='relations-recall-by-db.pdf'):
ax = draw_lines(averages_compare['Average'])
averages_compare.iloc[:, :-1].T.plot(kind='bar', title="Relations in expansion step 3", ax=ax)
plt.ylabel(measure_label('precision'))
print(averages_compare)
compare_variations = OrderedDict([
('4-RN-RB.expand', 'Expand 4 RN, RB'),
('4-CHD-PAR.expand', 'Expand 4 PAR, CHD'),
('4-RN-CHD-RB-PAR.expand', 'Expand 4 RN, CHD, RB, PAR'),
])
averages_compare = pd.DataFrame([
ev[ev.variation == variation].groupby('database').recall.mean()
for variation in compare_variations
], index=compare_variations)
averages_compare.columns = averages_compare.columns.map(database_label)
averages_compare.index = compare_variations.values()
averages_compare = averages_compare[coding_systems]
averages_compare['Average'] = averages_compare.mean(axis=1)
with mystyle(graded_recall_palette(len(compare_variations), rev=0), savefig='relations-recall-by-db.pdf'):
ax = draw_lines(averages_compare['Average'])
averages_compare.iloc[:, :-1].T.plot(kind='bar', title="Relations in expansion step 4", ax=ax)
plt.ylabel(measure_label('recall'))
print(averages_compare)
compare_variations = OrderedDict([
('baseline', 'Baseline'),
('1-RN-RB.expand', 'RN, RB'),
('1-RN-CHD.expand', 'RN, CHD'),
('1-RB-PAR.expand', 'RB, PAR'),
('1-PAR-CHD.expand', 'PAR, CHD'),
('1-RN-CHD-RB-PAR.expand', 'RN, CHD, RB, PAR'),
])
averages_compare = pd.DataFrame([
ev[ev.variation == variation].groupby('database').recall.mean()
for variation in compare_variations
], index=compare_variations)
averages_compare.columns = averages_compare.columns.map(database_label)
averages_compare.index = compare_variations.values()
averages_compare = averages_compare[coding_systems]
averages_compare['Average'] = averages_compare.mean(axis=1)
with mystyle(graded_recall_palette(len(compare_variations), rev=0), savefig='relations-recall-by-db.pdf'):
ax = draw_lines(averages_compare['Average'])
averages_compare.iloc[:, :-1].T.plot(kind='bar', title="Relations for expansion", ax=ax)
plt.ylabel(measure_label('recall'))
compare_variations = OrderedDict([
('baseline', 'Baseline'),
('1-RN-RB.expand', 'RN, RB'),
('1-RN-CHD.expand', 'RN, CHD'),
('1-RB-PAR.expand', 'RB, PAR'),
('1-PAR-CHD.expand', 'PAR, CHD'),
('1-RN-CHD-RB-PAR.expand', 'RN, CHD, RB, PAR'),
])
averages_compare = pd.DataFrame([
ev[ev.variation == variation].groupby('event').recall.mean()
for variation in compare_variations
], index=compare_variations)
averages_compare.columns = averages_compare.columns.map(event_names.get)
averages_compare.index = compare_variations.values()
averages_compare['Average'] = averages_compare.mean(axis=1)
with mystyle(graded_recall_palette(len(compare_variations), rev=0), xrot=45, ha='right', savefig='relations-recall-by-event.pdf'):
ax = draw_lines(averages_compare['Average'])
averages_compare.iloc[:,:-1].T.plot(kind='bar', title="Relations for expansion", ax=ax)
plt.ylabel(measure_label('recall'))
"""
Explanation: Compare relations for expansion
End of explanation
"""
variations_names = OrderedDict([
('baseline', 'baseline'),
('1-RN-CHD-RB-PAR.expand', 'expand$_1$'),
('2-RN-CHD-RB-PAR.expand', 'expand$_2$'),
('3-RN-CHD-RB-PAR.expand', 'expand$_3$'),
('4-RN-CHD-RB-PAR.expand', 'expand$_4$'),
])
df = pd.DataFrame({
name: ev[ev.variation == variation].groupby('database').recall.mean()
for variation, name in variations_names.items()
}).T
df.columns = df.columns.map(database_label)
df = df[coding_systems]
df['Average'] = df.mean(axis=1)
with mystyle(graded_recall_palette(len(variations_names), rev=0), savefig='steps-recall-by-db.pdf'):
ax = draw_lines(df['Average'])
df.iloc[:-1,:-1].T.plot(kind='bar', ax=ax)
plt.ylabel(measure_label('recall'))
df.round(3)
"""
Explanation: Increasing sensitivity with more expansion steps
End of explanation
"""
expands_performance = OrderedDict()
for i in [1,2,3,4]:
v = '{}-RN-CHD-RB-PAR.expand'.format(i)
df = pd.DataFrame([
ev[ev.variation == v].groupby('database').recall.mean(),
ev[ev.variation == v].groupby('database').precision.mean(),
])
df.index = df.index.map(measure_label)
df.columns = df.columns.map(database_label)
df = df[coding_systems]
df['Average'] = df.mean(axis=1)
#with mystyle(measures_palette, ylim=(0,1), savefig='max-recall-performance-by-db.pdf'):
# ax = draw_lines(df['Average'])
# df.iloc[:,:-1].T.plot(kind='bar', title='Maximum recall', ax=ax)
expands_performance['expand_{}'.format(i)] = df
num_concepts = pd.Series(OrderedDict([
(var_name, ev[(ev.variation == var) & (ev.cuis.notnull())]
.groupby('event').first()
.cuis.map(json.loads).map(len)
.sum())
for var_name, var in [('baseline0', 'baseline0'), ('baseline', 'baseline')] + \
[('expand_{}'.format(i), '{}-RN-CHD-RB-PAR.expand'.format(i)) for i in range(1,5)] + \
[(('max-sensitivity', 'max-recall'))]
])).to_frame('Concepts')
num_concepts
performances = OrderedDict()
performances['baseline_0'] = baseline0_performance
performances['baseline'] = baseline_performance
for v in expands_performance:
performances[v] = expands_performance[v]
performances['max_sensitivity'] = maxrecall_performance
performances_df = pd.concat(performances).round(3)
performances_df
s = (performances_df
.set_index(performances_df.index.rename(['Variation', 'Measurement']))
.stack()
.reset_index()
.rename(columns={'level_2': 'Terminology', 0: 'Value'})
)
s.head()
ev1 = (ev[['cuis', 'variation', 'event', 'database', 'recall', 'precision', 'tp', 'fp', 'fn']]
[ev.variation.isin(['baseline']+['{}-RN-CHD-RB-PAR.expand'.format(n) for n in [1,2,3,4]])]
.replace({'variation': {'baseline': '0-baseline'}})
.replace({'variation': {'{}-RN-CHD-RB-PAR.expand'.format(n): "{}-expansion".format(n) for n in [1,2,3,4]}})
.sort_values(by=['variation', 'database', 'event'])
.copy())
for f in ['cuis', 'generated', 'reference', 'tp', 'fp', 'fn']:
ev1[f] = ev[f].fillna('').map(len)#lambda x: len(x) if x == x else '-')
ev1.head()
ev2 = ev1.groupby(['variation', 'database']).aggregate(OrderedDict([
('recall', np.mean),
('precision', np.mean),
('cuis', sum),
('generated', sum), ('reference', sum), ('tp', sum), ('fp', sum), ('fn', sum)
]))
ev2.head()
"""
Explanation: Reasons for low performance in IPCI when including exclusion codes
Exclusion codes are not in the evaluation any more. See note above.
The IPCI mapping contains very broad codes that are refined with additional terms. For example
K24 (Fear of heart attack)
K90 (stroke)
K93 (Pulmonary embolism)
D70 (Dementia) OR "dementia" AND "infarct"
U14 (Kidney symptom/complaint ) OR "nier" AND "infarct"
End of explanation
"""
(ev1[['variation', 'database', 'event', 'generated' ,'reference', 'tp']]
.assign(precision1=lambda df: df.tp / df.generated)
.assign(recall1=lambda df: df.tp / df.reference)
.groupby(('variation', 'database'))
.aggregate(OrderedDict([('recall1', np.mean), ('precision1', np.mean)]))
.reset_index()
.groupby('variation')
.aggregate(OrderedDict([('recall1', np.mean), ('precision1', np.mean)]))
.round(2))
"""
Explanation: Verification of macro-average performance measures
End of explanation
"""
(ev1
.groupby('variation')
.aggregate(OrderedDict([('recall', np.mean), ('precision', np.mean)]))
.round(2))
"""
Explanation: Micro-average performance measures
End of explanation
"""
ev3a = (ev1
.groupby(['variation', 'database'])
.aggregate(dict({key: lambda s: s.fillna(0).sum() for key in 'generated reference tp fp fn'.split()}, cuis=np.mean, **{'precision': np.mean, 'recall': np.mean}))
.reset_index())
ev3b = (ev3a
.groupby('variation')
.aggregate(dict({key: lambda s: s.fillna(0).sum() for key in 'generated reference tp fp fn'.split()}, cuis=np.mean))
['cuis generated reference tp fp fn'.split()]
.assign(recall=lambda df: df.tp / df.reference,
precision=lambda df: df.tp / df.generated)
.assign(database='ZZZ')
.reset_index())
ev3 = (pd.concat([ev3a, ev3b])
.sort_values(['variation', 'database'])
['variation database cuis generated reference tp fp fn recall precision'.split()]
.set_index(['variation', 'database']))
ev3
(ev1
.groupby('variation')
.aggregate({key: lambda s: s.fillna(0).sum() for key in 'generated reference tp fp fn'.split()})
.assign(recall=lambda df: df.tp / df.reference)
.assign(precision=lambda df: df.tp / df.generated)
['recall precision generated reference tp fp fn'.split()]
.round(2))
# Remove [s.Terminology == 'Average'] for all terminologies
variation_names = {
'baseline': 'Baseline',
'baseline_0': None,
'expand_1': '1 expansion step',
'expand_2': '2 expansion steps',
'expand_3': '3 expansion steps',
'expand_4': None,
'max_sensitivity': '(Maximum sensitivity)'
}
s1 = s.copy()
s1['Code sets'] = s1.Variation.map(variation_names)
s1 = s1[s1['Code sets'].notnull()]  # drop variations mapped to None above
g = (sns.factorplot(kind='bar', data=s1[s1.Terminology == 'Average'],
x='Measurement', y='Value', col='Terminology', hue='Code sets',
saturation=1, legend=True, legend_out=True, size=4, aspect=2,
#palette=sns.color_palette("Set2", 7),
palette=sns.color_palette("Set1", n_colors=5, desat=.5),
hue_order=['Baseline', '1 expansion step', '2 expansion steps', '3 expansion steps', '(Maximum sensitivity)'])
.set_titles('') #Performance (average over events and vocabularies)")
.set_xlabels('')
.set_ylabels('')
.set(ylim=(0, 1))
.despine(left=True))
for p in g.axes[0][0].patches:
height = p.get_height()
g.ax.text(p.get_x()+1/12, height-0.025, '%.2f' % height,
fontsize=10, horizontalalignment='center', verticalalignment='top', color='white')
g.savefig('performance.pdf')
compare_variations = OrderedDict([
('baseline', 'Baseline'),
('1-RN-CHD-RB-PAR.expand', 'Expansion 1 step'),
('2-RN-CHD-RB-PAR.expand', 'Expansion 2 steps'),
('3-RN-CHD-RB-PAR.expand', 'Expansion 3 steps'),
('4-RN-CHD-RB-PAR.expand', 'Expansion 4 steps'),
])
averages_compare = pd.DataFrame([
ev[ev.variation == variation].groupby('database').precision.mean()
for variation in compare_variations
], index=compare_variations)
averages_compare.columns = averages_compare.columns.map(database_label)
averages_compare.index = compare_variations.values()
averages_compare = averages_compare[coding_systems]
averages_compare['Average'] = averages_compare.mean(axis=1)
with mystyle(graded_precision_palette(len(compare_variations), rev=0), savefig='steps-precision-by-db.pdf'):
ax = draw_lines(averages_compare['Average'])
averages_compare.T.plot(kind='bar', title="Expansion steps", ax=ax)
plt.ylabel(measure_label('precision'))
averages_compare.round(3)
compare_variations = OrderedDict([
('baseline', 'Baseline'),
('1-RN-CHD-RB-PAR.expand', '1 step'),
('2-RN-CHD-RB-PAR.expand', '2 steps'),
('3-RN-CHD-RB-PAR.expand', '3 steps'),
# ('4-RN-CHD-RB-PAR.expand', '4 steps'),
])
averages_compare = pd.DataFrame([
ev[ev.variation == variation].groupby('event').recall.mean()
for variation in compare_variations
], index=compare_variations)
averages_compare.columns = averages_compare.columns.map(event_names.get)
averages_compare.index = compare_variations.values()
averages_compare['Average'] = averages_compare.mean(axis=1)
with mystyle(graded_recall_palette(len(compare_variations), rev=0), xrot=45, ha='right', savefig='steps-recall-by-event.pdf'):
ax = draw_lines(averages_compare['Average'])
averages_compare.T.plot(kind='bar', title="Expansion steps", ax=ax)
plt.ylabel(measure_label('recall'))
compare_variations = OrderedDict([
('baseline', 'Baseline'),
('1-RN-CHD-RB-PAR.expand', '1 step'),
('2-RN-CHD-RB-PAR.expand', '2 steps'),
('3-RN-CHD-RB-PAR.expand', '3 steps'),
# ('4-RN-CHD-RB-PAR.expand', '4 steps'),
])
averages_compare = pd.DataFrame([
ev[ev.variation == variation].groupby('event').precision.mean()
for variation in compare_variations
], index=compare_variations)
averages_compare.columns = averages_compare.columns.map(event_names.get)
averages_compare.index = compare_variations.values()
averages_compare['Average'] = averages_compare.mean(axis=1)
with mystyle(graded_precision_palette(len(compare_variations), rev=0), xrot=45, ha='right', savefig='steps-precision-by-event.pdf'):
ax = draw_lines(averages_compare['Average'])
averages_compare.T.plot(kind='bar', title="Expansion steps", ax=ax)
plt.ylabel(measure_label('precision'))
measures = ['recall', 'precision']
averages_compare = pd.DataFrame([
ev[ev.variation == '3-RN-CHD-RB-PAR.expand'].groupby('database')[measure].mean()
for measure in measures
], index=map(measure_label, measures))
averages_compare.columns = averages_compare.columns.map(database_label)
averages_compare = averages_compare[coding_systems]
averages_compare['Average'] = averages_compare.mean(axis=1)
name = 'expansion3-performance-by-db'
with mystyle(measures_palette, savefig=name+'.pdf'):
ax = draw_lines(averages_compare['Average'])
averages_compare.iloc[:,:-1].T.plot(kind='bar', title="Performance of 3-step expansion", ax=ax)
averages_compare.to_csv(name+'.csv')
"""
Explanation: Micro-performance measures
End of explanation
"""
variation = '3-RN-CHD-RB-PAR.expand'
with open("../{}.{}.error-analyses.yaml".format(PROJECT, variation)) as f:
error_analyses = yaml.load(f)
def get_category(fn_or_fp, database, event, code):
if database in error_analyses[fn_or_fp] and event in error_analyses[fn_or_fp][database]:
return error_analyses[fn_or_fp][database][event]['code-categories'].get(code) or '?'
else:
return '??'
evs = ev[(ev.variation == variation) & ev.fn.notnull()][['event', 'database', 'fn', 'fp']]
fn = evs.apply(lambda row: pd.Series(row.fn), axis=1).stack().reset_index(level=1, drop=True)
fn.name = 'code'
# fns : | event | database | code |
fns = evs.drop(['fn', 'fp'], axis=1).join(fn, how='inner').drop_duplicates()
fns['category'] = fns.apply(lambda r: get_category('fn', r.database, r.event, r.code), axis=1)
fp = evs.apply(lambda row: pd.Series(row.fp), axis=1).stack().reset_index(level=1, drop=True)
fp.name = 'code'
# fps : | event | database | code |
fps = evs.drop(['fn', 'fp'], axis=1).join(fp, how='inner').drop_duplicates()
fps['category'] = fps.apply(lambda r: get_category('fp', r.database, r.event, r.code), axis=1)
fns.groupby(['category', 'database']).code.aggregate(lambda s: set(s)).map(', '.join).to_frame()
fps.groupby(['category', 'database']).code.aggregate(lambda s: set(s)).map(', '.join).to_frame()
code_counts = pd.Series({
database: len(set(mappings.all_codes(database)))
for database in databases.databases()
})
code_counts.ix['All'] = code_counts.sum()
code_counts.index.name = 'database'
def category_label(category):
return {
# FN
'not-in-dnf': 'Not in UMLS',
'database-specific': 'DB specific',
'next-expansion': 'expansion_{4}',
'isolated': 'Isolated',
# FP
'in-dnf': 'Cosynonym',
'other-fp': 'Indexing FP',
}.get(category, category)
def counts(code_categories, FN_or_FP):
"code_categories : | code | category |"
# (database, category) | int
s1 = code_categories.groupby('database').category.value_counts()
# category | int
s2 = code_categories.category.value_counts()
s2.index = pd.MultiIndex.from_product([['Overall'], s2.index])
res = pd.concat([s1, s2]).to_frame('count')
res['ratio'] = res['count'] / s2.sum()
res['%'] = res['ratio'].map('{:.1%}'.format)
#res['% (mapping)'] = (res['count'] / code_counts).map('{:.1%}'.format)
res = res.rename(columns={'count': '{} category'.format(FN_or_FP)}).reset_index()
res['category'] = res['category'].map(category_label)
res['database'] = res['database'].map(lambda db: db if db == 'Overall' else database_label(db))
res['error-type'] = [FN_or_FP] * len(res)
return res
fp_counts = counts(fps, 'FP')
fp_counts
fn_counts = counts(fns, 'FN')
fn_counts
category_names = {
'DB specific': 'No synonym in reference',
'Indexing FP': 'No TP synonym',
'Cosynonym': 'Sibling of TP code'
}
data = pd.concat([
(fn_counts[fn_counts.database == 'Overall']
.rename(columns={'FN category': 'Count'})
.assign(Category=lambda df: df.category.map(category_names))),
(fp_counts[fp_counts.database == 'Overall']
.rename(columns={'FP category': 'Count'})
.assign(Category=lambda df: df.category.map(category_names)))
])
print(data)
(sns.factorplot(kind='bar', data=data[data['error-type'] == 'FP'], x='category', y='ratio',
legend=True, legend_out=True, size=4, ci=None))
(sns.factorplot(kind='bar', data=data[data['error-type'] == 'FN'], x='category', y='ratio',
legend=True, legend_out=True, size=4, ci=None))
"""
Explanation: FN error-analysis
End of explanation
"""
|
MingChen0919/learning-apache-spark
|
notebooks/02-data-manipulation/.ipynb_checkpoints/2.7.1-column-expression-checkpoint.ipynb
|
mit
|
mtcars = spark.read.csv('../../../data/mtcars.csv', inferSchema=True, header=True)
mtcars = mtcars.withColumnRenamed('_c0', 'model')
mtcars.show(5)
"""
Explanation: Column expression
A Spark column instance is NOT a column of values from the DataFrame: when you create a column instance, it does not give you the actual values of that column in the DataFrame. I find it makes more sense to think of a column instance as a column of expressions. These expressions are evaluated by other methods (e.g., the select(), groupBy(), and orderBy() methods of pyspark.sql.DataFrame).
Example data
End of explanation
"""
mpg_col = mtcars.mpg
mpg_col
"""
Explanation: Use dot (.) to select column from DataFrame
End of explanation
"""
mpg_col + 1
mtcars.select(mpg_col * 100).show(5)
"""
Explanation: Modify a column to generate a new column
End of explanation
"""
mtcars.select(mtcars.gear.isin([2,3])).show(5)
mtcars.mpg.asc()
"""
Explanation: The pyspark.sql.Column class has many methods that act on a column and return a column instance.
End of explanation
"""
|
deculler/TableDemos
|
HealthDemo.ipynb
|
bsd-2-clause
|
# Lets draw two samples of equal size
n_sample = 200
smoker_sample = smokers.sample(n_sample)
nosmoker_sample = nosmokers.sample(n_sample)
weight = Table([nosmoker_sample['weight'],smoker_sample['weight']],['NoSmoke','Smoke'])
weight.hist(overlay=True,bins=30,normed=True)
bins=np.arange(39,139,5)
weight_dist = weight.bin(bins=bins, normed=True)
weight_dist['diff']=weight_dist['NoSmoke density']-weight_dist['Smoke density']
print('TVD: ',sum(np.abs(weight_dist['diff'])))
weight_dist.show()
weight_dist.select(['bin','diff']).bar('bin')
weight.stats(summary_ops)
np.mean(weight['NoSmoke'])-np.mean(weight['Smoke'])
"""
Explanation: It would appear that non-smokers are older, more educated, more 'coupled' and heavier - but of similar height and blood pressure
End of explanation
"""
combined = Table([np.append(nosmoker_sample['weight'],smoker_sample['weight'])],['all'])
# permutation test, split the combined into two random groups, do the comparison of those
def getdiff():
A,B = combined.split(300)
return (np.mean(A['all'])-np.mean(B['all']))
# Do the permutation many times and form the distribution of results
num_samples = 100
diff_samples = Table([[getdiff() for i in range(num_samples)]],['diffs'])
diff_samples.hist(bins=20, normed=True)
"""
Explanation: Is the difference observed between these samples representative of the larger population?
End of explanation
"""
|
chungjjang80/FRETBursts
|
notebooks/Example - Working with timestamps and bursts.ipynb
|
gpl-2.0
|
from fretbursts import *
sns = init_notebook()
filename = "./data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5"
d = loader.photon_hdf5(filename)
loader.alex_apply_period(d)
d.calc_bg(bg.exp_fit, time_s=30, tail_min_us='auto', F_bg=1.7)
d.burst_search()
"""
Explanation: Working with timestamps and bursts
This notebook is part of a tutorial series for the FRETBursts burst analysis software.
In this notebook we show how to access different streams of timestamps,
burst data (i.e. start and stop times and indexes, counts, etc...).
These operations are useful for users wanting to access or process bursts data
and timestamps in custom ways.
For a complete tutorial on burst analysis see
FRETBursts - us-ALEX smFRET burst analysis.
Load data
We start by loading the data, computing background and performing a standard burst search:
End of explanation
"""
ph = d.get_ph_times() # all the recorded photons
ph_dd = d.get_ph_times(ph_sel=Ph_sel(Dex='Dem')) # donor excitation, donor emission
ph_d = d.get_ph_times(ph_sel=Ph_sel(Dex='DAem')) # donor excitation, donor+acceptor emission
ph_aa = d.get_ph_times(ph_sel=Ph_sel(Aex='Aem')) # acceptor excitation, acceptor emission
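# An additional stream in the same style (this selector is used again further below):
ph_ad = d.get_ph_times(ph_sel=Ph_sel(Dex='Aem')) # donor excitation, acceptor emission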
"""
Explanation: Getting the timestamps
To get the timestamps arrays for a given photon stream we use Data.get_ph_times. Here a few example:
End of explanation
"""
mask_dd = d.get_ph_mask(ph_sel=Ph_sel(Dex='Dem')) # donor excitation, donor emission
mask_d = d.get_ph_mask(ph_sel=Ph_sel(Dex='DAem')) # donor excitation, donor+acceptor emission
mask_aa = d.get_ph_mask(ph_sel=Ph_sel(Aex='Aem')) # acceptor excitation, acceptor emission
"""
Explanation: These are streams of all timestamps (both inside and outside the bursts).
Similarly, we can get "masks" of photons for each photon stream using
Data.get_ph_mask:
End of explanation
"""
ph.size, mask_dd.size, mask_d.size, mask_aa.size
"""
Explanation: Masks are arrays of booleans (True or False values) which are True
when the corresponding photon is in the stream. Note that all masks
have the same number of elements as the all-photons timestamps array:
End of explanation
"""
mask_d.sum()
"""
Explanation: Masks can be used to count photons in one stream:
End of explanation
"""
ph[mask_d]
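# A quick sanity check (a sketch): indexing the all-photons array with the mask gives
# the same timestamps returned by get_ph_times for the same Ph_sel.
(ph[mask_d] == ph_d).all()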
"""
Explanation: and to obtain the timestamps for one stream:
End of explanation
"""
bursts = d.mburst[0]
nd = d.nd[0]
na = d.na[0]
naa = d.naa[0]
E = d.E[0]
S = d.S[0]
"""
Explanation: Note that the arrays ph[mask_d] and ph_d are equal. This is an important point to understand.
Burst data
There are several fields containing burst data:
Start-stop:
Data.mburst: start-stop information for each burst
Counts:
- Data.nd: donor detector counts during donor excitation
- Data.na: acceptor detector counts during donor excitation
- Data.naa: acceptor detector counts during acceptor excitation (ALEX only)
- Data.nda: donor detector counts during acceptor excitation
FRET:
- Data.E: Proximity Ratio (when uncorrected) or FRET efficiency (when corrected)
- Data.S: "Stoichiometry" (ALEX only)
All previous fields are lists containing one element per excitation spot.
In single-spot data, these lists have only one element which is accessed
using the [0] notation:
End of explanation
"""
bursts
"""
Explanation: All previous variables are numpy arrays, except for bursts which is
a Bursts object (see next section).
Burst start and stop
The start-stop burst data is in bursts (a variable of type Bursts, plural):
End of explanation
"""
firstburst = bursts[0]
firstburst
"""
Explanation: Indexing bursts we can access a single burst:
End of explanation
"""
bursts.istart
firstburst.istart
"""
Explanation: The first two "columns" (both in bursts and in firstburst) are the indexes of the
first and last timestamps (relative to the all-photons timestamps array).
The last two columns (start and stop) are the actual times of burst
start and stop. To access any of these fields we type:
End of explanation
"""
ph[firstburst.istart], firstburst.start
"""
Explanation: Note that ph[firstburst.istart] is equal to firstburst.start:
End of explanation
"""
ph[firstburst.istop], firstburst.stop
"""
Explanation: The same holds for stop:
End of explanation
"""
d.burst_search(computefret=False)
d.calc_fret(count_ph=True, corrections=False)
"""
Explanation: Note that bursts is a Bursts object (plural, a bursts-set)
and firstburst is a Burst object (singular, only one burst).
You can find more info on these objects in the documentation:
Low-level burst search functions
Burst photon-counts
The variables nd, na, naa contains the number of photon in different photon streams.
By default these values are background corrected and, if the correction coefficients
are specified, are also corrected for leakage, direct excitation and gamma factor.
To get the raw counts before correction we can redo the burst search as follows:
End of explanation
"""
ds = d.select_bursts(select_bursts.size, th1=30, computefret=False)
nd = ds.nd[0] # Donor-detector counts during donor excitation
na = ds.na[0] # Acceptor-detector counts during donor excitation
naa = ds.naa[0] # Acceptor-detector counts during acceptor excitation
E = ds.E[0] # FRET efficiency or Proximity Ratio
S = ds.S[0] # Stoichiometry, as defined in µs-ALEX experiments
nd
"""
Explanation: Note that if you select bursts, you also need to use computefret=False
to avoid recomputing E and S (which by default applies the corrections):
End of explanation
"""
from fretbursts.phtools.burstsearch import Burst, Bursts
times = d.ph_times_m[0] # timestamps array
"""
Explanation: Note that the burst counts are integer values, confirming that the background
correction was not applied.
Slice bursts in time bins
Here we slice each burst in fixed time bins.
End of explanation
"""
ds_fused = ds.fuse_bursts(ms=0)
bursts = ds_fused.mburst[0]
print('\nNumber of bursts:', bursts.num_bursts)
"""
Explanation: We start by fusing bursts with separation <= 0 milliseconds,
to avoid having overlapping bursts:
End of explanation
"""
time_bin = 0.5e-3 # 0.5 ms
time_bin_clk = time_bin / ds.clk_p
sub_bursts_list = []
for burst in bursts:
# Compute binning of current bursts
bins = np.arange(burst.start, burst.stop + time_bin_clk, time_bin_clk)
counts, _ = np.histogram(times[burst.istart:burst.istop+1], bins)
# From `counts` in each bin, find start-stop times and indexes (sub-burst).
# Note that start and stop are the min and max timestamps in the bin,
# therefore they are not on the bin edges. Also the burst width is not
# exactly equal to the bin width.
sub_bursts_l = []
sub_start = burst.start
sub_istart = burst.istart
for count in counts:
# Let's skip bins with 0 photons
if count == 0:
continue
sub_istop = sub_istart + count - 1
sub_bursts_l.append(Burst(istart=sub_istart, istop=sub_istop,
start=sub_start, stop=times[sub_istop]))
sub_istart += count
sub_start = times[sub_istart]
sub_bursts = Bursts.from_list(sub_bursts_l)
assert sub_bursts.num_bursts > 0
assert sub_bursts.width.max() < time_bin_clk
sub_bursts_list.append(sub_bursts)
"""
Explanation: Now we can slice each burst using a constant time bin:
End of explanation
"""
len(sub_bursts_list)
ds_fused.num_bursts
"""
Explanation: The list sub_bursts_list has one set of sub-bursts for each original burst:
End of explanation
"""
print('Sub-bursts from burst 0:')
sub_bursts_list[0]
iburst = 10
print('Sub-bursts from burst %d:' % iburst)
sub_bursts_list[iburst]
"""
Explanation: Each set of sub-bursts is a usual Bursts object:
End of explanation
"""
bursts = sub_bursts_list[0]
bursts
"""
Explanation: Photon counts in custom bursts
When performing a burst search,
FRETBursts automatically computes donor and acceptor counts (in both
excitation periods). These quantities are available as Data attributes:
nd, na, naa and nda (as described in Burst data).
When a custom bursts-set is created, like in the previous section in which we
sliced bursts into sub-bursts, we may want to compute the photon counts
in the various photon streams. Let's consider, as an example, the following Bursts object:
End of explanation
"""
mask_dd = d.get_ph_mask(ph_sel=Ph_sel(Dex='Dem')) # donor excitation, donor emission
mask_ad = d.get_ph_mask(ph_sel=Ph_sel(Dex='Aem')) # donor excitation, acceptor emission
mask_aa = d.get_ph_mask(ph_sel=Ph_sel(Aex='Aem')) # acceptor excitation, acceptor emission
"""
Explanation: <p class="lead">How do we count the <b>donor</b> and <b>acceptor</b> photons in these bursts?</p>
First we need to prepare the masks for the various photon streams
(as explained [before](#Getting-the-timestamps)):
End of explanation
"""
from fretbursts.phtools.burstsearch import count_ph_in_bursts
counts_dd = count_ph_in_bursts(bursts, mask_dd)
counts_dd
"""
Explanation: Next, we use the function count_ph_in_bursts:
End of explanation
"""
counts_ad = count_ph_in_bursts(bursts, mask_ad)
counts_aa = count_ph_in_bursts(bursts, mask_aa)
"""
Explanation: counts_dd contains the raw counts in each burst (in bursts)
in the donor-emission during donor-excitation stream. Similarly,
we can compute counts for the other photon streams:
End of explanation
"""
counts_ad / (counts_dd + counts_ad)
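# A hedged extension (not in the original notebook): from the same raw stream
# counts we can also form an uncorrected Stoichiometry-like ratio, i.e. the
# donor-excitation signal divided by the total signal in each burst.
(counts_dd + counts_ad) / (counts_dd + counts_ad + counts_aa)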
"""
Explanation: With these values, we can compute, for example, the uncorrected
Proximity Ratio (PR):
End of explanation
"""
|
sraejones/phys202-2015-work
|
assignments/midterm/InteractEx06.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import math as m
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
"""
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
"""
Image('fermidist.png')
%%latex
$$F(\epsilon) = \frac{1}{e^{(\epsilon - \mu)/kT} + 1}$$
"""
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
"""
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    # np.exp is vectorized, so this handles scalars and arrays alike
    # without any for/while loops.
    return 1.0 / (np.exp((np.asarray(energy, dtype=float) - mu) / kT) + 1.0)
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
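# A quick, hedged sanity check (not part of the original assignment): fermidist
# should accept arrays without loops, and at energy == mu it equals exactly 0.5.
print(fermidist(np.linspace(0.0, 10.0, 5), 5.0, 1.0))
print(fermidist(5.0, 5.0, 1.0))  # expected 0.5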
"""
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
YOUR ANSWER HERE
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
"""
def plot_fermidist(mu, kT):
    # Plot F(epsilon) over the requested energy range [0, 10.0].
    energy = np.linspace(0.0, 10.0, 200)
    plt.plot(energy, fermidist(energy, mu, kT), color='darkblue', lw=2)
    plt.xlim(0.0, 10.0)
    plt.ylim(0.0, 1.05)
    plt.grid(True, alpha=0.3)
    plt.xlabel(r'Energy $\epsilon$')
    plt.ylabel(r'$F(\epsilon)$')
    plt.title(r'Fermi distribution ($\mu$={:.1f}, kT={:.1f})'.format(mu, kT))
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
"""
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
"""
# YOUR CODE HERE
w = interactive(plot_fermidist, mu =(0.0,5.0,0.1), kT=(0.1,10.0,0.1));
w
"""
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation
"""
When the temperature $kT$ is low, the distribution approaches a step function: states with $\epsilon < \mu$ are essentially filled ($F \approx 1$) and states with $\epsilon > \mu$ are essentially empty ($F \approx 0$).
When $kT$ is high, the step smears out and $F(\epsilon)$ varies slowly with energy.
Raising the chemical potential $\mu$ shifts the step toward higher energies, so more states are occupied; lowering $\mu$ shifts it toward lower energies.
Since the number of particles is proportional to the area under the curve, raising $\mu$ increases the number of particles and lowering $\mu$ decreases it.
"""
Explanation: Provide complete sentence answers to the following questions in the cell below:
What happens when the temperature $kT$ is low?
What happens when the temperature $kT$ is high?
What is the effect of changing the chemical potential $\mu$?
The number of particles in the system are related to the area under this curve. How does the chemical potential affect the number of particles.
Use LaTeX to typeset any mathematical symbols in your answer.
YOUR ANSWER HERE
End of explanation
"""
|
bburan/psiexperiment
|
examples/notebooks/Calibration tutorial.ipynb
|
mit
|
%matplotlib inline
from scipy import signal
from scipy import integrate
import pylab as pl
import numpy as np
"""
Explanation: Acoustic system calibration
Since the calibration measurements may be dealing with very small values, there's potential for running into the limitations of <a href="https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html">floating-point arithmetic</a>. When implementing the computational algorithms, using dB is recommended to avoid floating-point errors.
Throughout this description, we express sensitivity (e.g. of the microphone or speaker) in units of $\frac{V}{Pa}$ (which is commonly used throughout the technical literature) rather than the notation used in the EPL cochlear function test suite which are $\frac{Pa}{V}$. Sensitivity in the context of microphones is the voltage generated by the microphone in response to a given pressure. In the context of speakers, sensitivity is the output, in Pa, produced by a given voltage. We assume that the sensitivity of the calibration microphone is uniform across all frequencies (and it generally is if you spend enough money on the microphone). Sometimes you may wish to use a cheaper microphone to record audio during experiments. Since this microphone is cheap, sensitivity will vary as a function of frequency.
End of explanation
"""
fs = 10e3
t = np.arange(fs)/fs
frequency = 500
tone_waveform = np.sin(2*np.pi*frequency*t)
chirp_waveform = signal.chirp(t, 100, 1, 900)
clipped_waveform = np.clip(tone_waveform, -0.9, 0.9)
ax = pl.subplot(131)
ax.plot(t, tone_waveform)
ax = pl.subplot(132, sharex=ax, sharey=ax)
ax.plot(t, chirp_waveform)
ax = pl.subplot(133, sharex=ax, sharey=ax)
ax.plot(t, clipped_waveform)
ax.axis(xmin=0, xmax=0.01)
pl.tight_layout()
s = tone_waveform
for window in ('flattop', 'boxcar', 'blackman', 'hamming', 'hanning'):
w = signal.get_window(window, len(s))
csd = np.fft.rfft(s*w/w.mean())
psd = np.real(csd*np.conj(csd))/len(s)
p = 20*np.log10(psd)
f = np.fft.rfftfreq(len(s), fs**-1)
pl.plot(f, p, label=window)
pl.axis(xmin=490, xmax=520)
pl.legend()
def plot_fft_windows(s):
for window in ('flattop', 'boxcar', 'blackman', 'hamming', 'hanning'):
w = signal.get_window(window, len(s))
csd = np.fft.rfft(s*w/w.mean())
psd = np.real(csd*np.conj(csd))/len(s)
p = 20*np.log10(psd)
f = np.fft.rfftfreq(len(s), fs**-1)
pl.plot(f, p, label=window)
pl.legend()
pl.figure(); plot_fft_windows(tone_waveform); pl.axis(xmin=490, xmax=510)
pl.figure(); plot_fft_windows(chirp_waveform); pl.axis(xmin=0, xmax=1500, ymin=-100)
pl.figure(); plot_fft_windows(clipped_waveform);
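# Hedged worked example (the sensitivity and target level are illustrative values,
# not measurements) of the relation derived in the calibration notes that follow:
# V_dac = 10**((S_s_dB + O_dB_SPL + 20*log10(20e-6)) / 20), and its inverse.
S_s_dB = -40.0          # assumed speaker sensitivity, dB
target_dB_SPL = 80.0    # assumed target output level
V_dac = 10**((S_s_dB + target_dB_SPL + 20*np.log10(20e-6)) / 20)
print('Required DAC voltage: {:.4f} Vrms'.format(V_dac))
print('Recovered level: {:.1f} dB SPL'.format(
    20*np.log10(V_dac) - S_s_dB - 20*np.log10(20e-6)))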
"""
Explanation: Calculating the frequency response
Using a hamming window for the signal is strongly recommended. The only exception is when measuring the sensitivity of the calibration microphone using a standard (e.g. a pistonphone that generates 114 dB SPL at 1 kHz). When you're using a single-tone calibration, a flattop window is best.
Speaker output
Output of speaker in Pa, $O(\omega)$, can be measured by playing a signal with known RMS voltage, $V_{speaker}(\omega)$ and measuring the voltage of a calibration microphone, $V_{cal}(\omega)$, with a known sensitivity, $S_{cal} = \frac{V_{rms}}{Pa}$.
$O(\omega) = \frac{V_{cal}(\omega)}{S_{cal}}$
Alternatively, the output can be specified in dB
$O_{dB}(\omega) = 20 \times log_{10}(\frac{V_{cal}(\omega)}{S_{cal}})$
$O_{dB}(\omega) = 20 \times log_{10}(V_{cal}(\omega))-20 \times log_{10}(S_{cal})$
Experiment microphone sensitivity
If we wish to calibrate an experiment microphone, we will record the voltage, $V_{exp}(\omega)$, at the same time we measure the speaker's output in the previous exercise. Using the known output of the speaker, we can then determine the experiment microphone sensitivity, $S_{exp}(\omega)$.
$S_{exp}(\omega) = \frac{V_{exp}(\omega)}{O(\omega)}$
$S_{exp}(\omega) = \frac{V_{exp}(\omega)}{\frac{V_{cal}(\omega)}{S_{cal}}}$
$S_{exp}(\omega) = \frac{V_{exp}(\omega) \times S_{cal}}{V_{cal}(\omega)}$
The resulting sensitivity is in $\frac{V}{Pa}$. Alternatively the sensitivity can be expressed in dB, which gives us sensitivity as dB re Pa.
$S_{exp_{dB}}(\omega) = 20 \times log_{10}(V_{exp})+20 \times log_{10}(S_{cal})-20 \times log_{10}(V_{cal})$
In-ear speaker calibration
Since the acoustics of the system will change once the experiment microphone is inserted in the ear (e.g. the ear canal acts as a compliance which alters the harmonics of the system), we need to recalibrate each time we reposition the experiment microphone while it's in the ear of an animal. We need to compute the speaker transfer function, $S_{s}(\omega)$, in units of $\frac{V_{rms}}{Pa}$ which will be used to compute the actual voltage needed to drive the speaker at a given level. To compute the calibration, we generate a stimulus via the digital to analog converter (DAC) with known frequency content, $V_{DAC}(\omega)$, in units of $V_{RMS}$.
The output of the speaker is measured using the experiment microphone and can be determined using the experiment microphone sensitivity
$O(\omega) = \frac{V_{PT}(\omega)}{S_{PT}(\omega)}$
The sensitivity of the speaker can then be calculated as
$S_{s}(\omega) = \frac{V_{DAC}(\omega)}{O(\omega)}$
$S_{s}(\omega) = \frac{V_{DAC}(\omega)}{\frac{V_{PT}(\omega)}{S_{PT}(\omega)}}$
$S_{s}(\omega) = \frac{V_{DAC}(\omega) \times S_{PT}(\omega)}{V_{PT}(\omega)}$
Alternatively, we can express the sensitivity as dB
$S_{s_{dB}}(\omega) = 20 \times log_{10}(V_{DAC}(\omega))+20 \times log_{10}(S_{PT}(\omega))-20 \times log_{10}(V_{PT}(\omega))$
$S_{s_{dB}}(\omega) = 20 \times log_{10}(V_{DAC}(\omega))+S_{PT_{dB}}(\omega)-20 \times log_{10}(V_{PT}(\omega))$
Generating a tone at a specific level
Given the speaker sensitivity, $S_{s}(\omega)$, we can compute the voltage at the DAC required to generate a tone at a specific amplitude in Pa, $O$.
$V_{DAC}(\omega) = S_{s}(\omega) \times O$
Usually, however, we generally prefer to express the amplitude in dB SPL.
$O_{dB SPL} = 20 \times log_{10}(\frac{O}{20 \times 10^{-6}})$
Solving for $O$.
$O = 10^{\frac{O_{dB SPL}}{20}} \times 20 \times 10^{-6}$
Substituting $O$.
$V_{DAC}(\omega) = S_{s}(\omega) \times 10^{\frac{O_{dB SPL}}{20}} \times 20 \times 10^{-6}$
Expressed in dB
$V_{DAC_{dB}}(\omega) = 20 \times log_{10}(S_{s}(\omega)) + 20 \times log_{10}(10^{\frac{O_{dB SPL}}{20}}) + 20 \times log_{10}(20 \times 10^{-6})$
$V_{DAC_{dB}}(\omega) = 20 \times log_{10}(S_{s}(\omega)) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})$
$V_{DAC_{dB}}(\omega) = S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})$
We can use the last equation to compute the voltage since it expresses the speaker calibration in units that we have calculated. However, we need to convert the voltage back to a linear scale.
$V_{DAC}(\omega) = 10^{\frac{S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})}{20}}$
Estimating output at a specific $V_{rms}$
Taking the equation above and solving for $O_{dB SPL}(\omega)$
$O_{dB SPL}(\omega) = 20 \times log_{10}(V_{DAC}) - S_{s_{dB}}(\omega) - 20 \times log_{10}(20 \times 10^{-6})$
Or, if we want to compute in Pa
$O(\omega) = \frac{V_{DAC}}{S_{s}(\omega)}$
Common calculations based on $S_{s_{dB}}(\omega)$ and $S_{PT_{dB}}(\omega)$
To estimate the voltage required at the DAC for a given dB SPL
$V_{DAC}(\omega) = 10^{\frac{S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})}{20}}$
To convert the microphone voltage measurement to dB SPL
$O_{dB SPL} = V_{DAC_{dB}}(\omega) - S_{PT_{dB}}(\omega) - 20 \times log_{10}(20 \times 10^{-6})$
Given the dB SPL, $O_{dB SPL}(\omega)$ at 1 VRMS
$S(\omega) = (10^{\frac{O_{dB SPL}(\omega)}{20}} \times 20 \times 10^{-6})^{-1}$
$S_{dB}(\omega) = - [O_{dB SPL}(\omega) + 20 \times log_{10}(20 \times 10^{-6})]$
Less common calculations
Given sensitivity calculated using a different $V_{rms}$, $x$, (e.g. $10 V_{rms}$), compute the sensitivity at $1 V_{rms}$ (used by the attenuation calculation in the neurogen package).
$S_{dB}(\omega) = S_{dB_{1V}}(\omega) = S_{dB_{x}}(\omega) - 20 \times log_{10}x$
Estimating the PSD
Applying a window to the signal is not always a good idea.
End of explanation
"""
print(0.5**2*8)
R = 8
P = 1
V = 2.83
print('Voltage is', R*np.sqrt(P/R))
print('Power is', V**2/R)
"""
Explanation: Designing an output circuit
Speaker sensitivity is typically reported in $\frac{dB}{W}$ at a distance of 1 meter. For an $8\Omega$ speaker, $2.83V$ produces exactly $1W$. We know this because $P = I^2 \times R$ and $V = I \times R$. Solving for $I$:
$I = \sqrt{\frac{P}{R}}$ and $I = \frac{V}{R}$
$\sqrt{\frac{P}{R}} = \frac{V}{R}$
$P = \frac{V^2}{R}$
$V = R \times \sqrt{\frac{P}{R}}$
End of explanation
"""
P = 0.5
R = 8
print('Voltage is', R*np.sqrt(P/R))
print('Current is', np.sqrt(P/R))
P_test = 0.1
P_max = 1
O_test = 90
dB_incr = 10*np.log10(P_max/P_test)
O_max = O_test+dB_incr
print('{:0.2f} dB increase giving {:0.2f} max output'.format(dB_incr, O_max))
"""
Explanation: Let's say we have an $8\Omega$ speaker whose handling capacity is $0.5W$. If we want to achieve the maximum (i.e. $0.5W$), then we need to determine the voltage that will achieve that wattage given the speaker rating.
$V = R \times \sqrt{\frac{P}{R}}$
$V = 8\Omega \times \sqrt{\frac{0.5W}{8\Omega}}$
$V = 2V$
Even if your system can generate larger values, there is no point in driving the speaker at values greater than $2V$. It will simply distort or get damaged. However, your system needs to be able to provide the appropriate current to drive the speaker.
$I = \sqrt{\frac{P}{R}}$
$I = \sqrt{\frac{0.5W}{8\Omega}}$
$I = 0.25A$
This is based on nominal specs.
So, what is the maximum output in dB SPL? Assume that the spec sheet reports $92dB$ at $0.3W$.
$10 \times log_{10}(0.5W/0.3W) = 2.2 dB$
This means that we will get only $2.2dB$ more for a total of $94.2 dB SPL$.
$10 \times log_{10}(0.1W/0.3W) = -4.7 dB$
End of explanation
"""
P_max = 0.3 # rated long-term capacity of the speaker
R = 8 #
V = R * np.sqrt(P_max/R)
print('{:0.2f} max safe long-term voltage'.format(V))
P_max = 0.5 # rated long-term capacity of the speaker
R = 8 #
V = R * np.sqrt(P_max/R)
print('{:0.2f} max safe short-term voltage'.format(V))
R_speaker = 8
V_speaker = 2
V_out = 10
R = (R_speaker*(V_out-V_speaker))/V_speaker
print('Series divider resistor is {:.2f}'.format(R))
"""
Explanation: Now that you've figured out the specs of your speaker, you need to determine whether you need a voltage divider to bring output voltage down to a safe level (especially if you are trying to use the full range of your DAC).
$V_{speaker} = V_{out} \times \frac{R_{speaker}}{R+R_{speaker}}$
Don't forget to compensate for any gain you may have built into the op-amp and buffer circuit.
$R = \frac{R_{speaker} \times (V_{out}-V_{speaker})}{V_{speaker}}$
End of explanation
"""
def plot_fft_windows(s):
for window in ('flattop', 'boxcar', 'hamming'):
w = signal.get_window(window, len(s))
csd = np.fft.rfft(s*w/w.mean())
psd = np.real(csd*np.conj(csd))/len(s)
p = 20*np.log10(psd)
f = np.fft.rfftfreq(len(s), fs**-1)
pl.plot(f, p, label=window)
pl.legend()
fs = 100e3
duration = 50e-3
t = np.arange(int(duration*fs))/fs
f1 = 500
f2 = f1/1.2
print(duration*f1)
print(duration*f2)
coerced_f2 = np.round(duration*f2)/duration
print(f2, coerced_f2)
t1 = np.sin(2*np.pi*f1*t)
t2 = np.sin(2*np.pi*f2*t)
t2_coerced = np.sin(2*np.pi*coerced_f2*t)
pl.figure(); plot_fft_windows(t1); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t2); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t2_coerced); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t1+t2); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t1+t2_coerced); pl.axis(xmax=f1*2)
"""
Explanation: Good details here http://www.dspguide.com/ch9/1.htm
End of explanation
"""
n = 50e3
npow2 = 2**np.ceil(np.log2(n))
s = np.random.uniform(-1, 1, size=n)
spow2 = np.random.uniform(-1, 1, size=npow2)
%timeit np.fft.fft(s)
%timeit np.fft.fft(spow2)
"""
Explanation: Size of the FFT
End of explanation
"""
rs = np.random.RandomState(seed=1)
a1 = rs.uniform(-1, 1, 5000)
a2 = rs.uniform(-1, 1, 5000)
rs = np.random.RandomState(seed=1)
b1 = rs.uniform(-1, 1, 3330)
b2 = rs.uniform(-1, 1, 3330)
b3 = rs.uniform(-1, 1, 10000-6660)
np.equal(np.concatenate((a1, a2)), np.concatenate((b1, b2, b3))).all()
b, a = signal.iirfilter(7, (1e3/5000, 2e3/5000), rs=85, rp=0.3, ftype='ellip', btype='band')
zi = signal.lfilter_zi(b, a)
a1f, azf1 = signal.lfilter(b, a, a1, zi=zi)
a2f, azf2 = signal.lfilter(b, a, a2, zi=azf1)
b1f, bzf1 = signal.lfilter(b, a, b1, zi=zi)
b2f, bzf2 = signal.lfilter(b, a, b2, zi=bzf1)
b3f, bzf3 = signal.lfilter(b, a, b3, zi=bzf2)
print(np.equal(np.concatenate((a1f, a2f)), np.concatenate((b1f, b2f, b3f))).all())
pl.plot(np.concatenate((b1f, b2f, b3f)))
zi = signal.lfilter_zi(b, a)
a1f = signal.lfilter(b, a, a1)
a2f = signal.lfilter(b, a, a2)
b1f = signal.lfilter(b, a, b1)
b2f = signal.lfilter(b, a, b2)
b3f = signal.lfilter(b, a, b3)
print(np.equal(np.concatenate((a1f, a2f)), np.concatenate((b1f, b2f, b3f))).all())
pl.plot(np.concatenate((b1f, b2f, b3f)))
"""
Explanation: Ensuring reproducible generation of bandpass filtered noise
End of explanation
"""
frequency = np.fft.rfftfreq(int(200e3), 1/200e3)
flb, fub = 4e3, 64e3
mask = (frequency >= flb) & (frequency < fub)
noise_floor = 0
for sl in (56, 58, 60, 62, 64, 66, 96, 98):
power_db = np.ones_like(frequency)*noise_floor
power_db[mask] = sl
power = (10**(power_db/20.0))*20e-6
#power_sum = integrate.trapz(power**2, frequency)**0.5
power_sum = np.sum(power**2)**0.5
total_db = 20*np.log10(power_sum/20e-6)
pl.semilogx(frequency, power_db)
print(f'{total_db:.2f}dB with spectrum level at {sl:.2f}dB, expected {sl+10*np.log10(fub-flb):0.2f}dB')
frequency = np.fft.rfftfreq(int(100e3), 1/100e3)
mask = (frequency >= 4e3) & (frequency < 8e3)
for noise_floor in (-20, -10, 0, 10, 20, 30, 40, 50, 60):
power_db = np.ones_like(frequency)*noise_floor
power_db[mask] = 65
power = (10**(power_db/20.0))*20e-6
#power_sum = integrate.trapz(power**2, frequency)**0.5
power_sum = np.sum(power**2)**0.5
total_db = 20*np.log10(power_sum/20e-6)
print('{}dB SPL with noise floor at {}dB SPL'.format(int(total_db), noise_floor))
# Compute power in dB then convert to power in volts
power_db = np.ones_like(frequency)*30
power_db[mask] = 65
power = (10**(power_db/20.0))*20e-6
psd = power/2*len(power)*np.sqrt(2)
phase = np.random.uniform(0, 2*np.pi, len(psd))
csd = psd*np.exp(-1j*phase)
signal = np.fft.irfft(csd)
pl.plot(signal)
rms = np.mean(signal**2)**0.5
print(rms)
print('RMS power, dB SPL', 20*np.log10(rms/20e-6))
signal = np.random.uniform(-1, 1, len(power))
rms = np.mean(signal**2)**0.5
20*np.log10(rms/20e-6)
csd = np.fft.rfft(signal)
psd = np.real(csd*np.conj(csd))**2
print(psd[:5])
psd = np.abs(csd)**2
print(psd[:5])
"""
Explanation: Computing noise power
End of explanation
"""
flb, fub = 100, 100e3
# resonant frequency of cable
c = 299792458 # speed of light in m/s
l = 3 # length of cable in meters
resonant_frequency = 1/(l*4/c)
flb, fub = 100, 100e3
llb = c/flb/4
lub = c/fub/4
print(llb, lub)
# As shown here, since we're not running cables for 750 meters,
# we don't have an issue.
c/resonant_frequency/4.0
"""
Explanation: Analysis of grounding
Signal cables resonate when physical length is a quarter wavelength.
End of explanation
"""
f = 14000.0 # Hz, cps
w = (1/f)*340.0
w*1e3 # resonance in mm assuming quarter wavelength is what's important
length = 20e-3
period = length/340.0
frequency = 1.0/period
frequency
import numpy as np
def exp_ramp_v1(f0, k, t):
return f0*k**t
def exp_ramp_v2(f0, f1, t):
k = np.exp(np.log(f1/f0)/t[-1])
return exp_ramp_v1(f0, k, t)
t = np.arange(10e3)/10e3
f0 = 0.5e3
f1 = 50e3
e1 = exp_ramp_v2(50e3, 200e3, t)
e2 = exp_ramp_v2(0.5e3, 200e3, t)
pl.plot(t, e1)
pl.plot(t, e2)
"""
Explanation: Resonance of acoustic tube
End of explanation
"""
fs = 1000.0
f = np.linspace(1, 200, fs)
t = np.arange(fs)/fs
pl.plot(t, np.sin(f.cumsum()/fs))
(2*np.pi*f[-1]*t[-1]) % 2*np.pi
(f.cumsum()[-1]/fs) % 2*np.pi
"""
Explanation: chirps
End of explanation
"""
signal.iirfilter?
signal.freqs?
from scipy import signal
fs = 100e3
kwargs = dict(N=1, Wn=1e3/(2*fs), rp=0.4, rs=50, btype='highpass', ftype='ellip')
b, a = signal.iirfilter(analog=False, **kwargs)
ba, aa = signal.iirfilter(analog=True, **kwargs)
t, ir = signal.impulse([ba, aa], 50)
w, h = signal.freqz(b, a)
pl.figure()
pl.plot(t, ir)
pl.figure()
pl.plot(w, h)
rs = np.random.RandomState(seed=1)
noise = rs.uniform(-1, 1, 5000)
f = np.linspace(100, 25000, fs)
t = np.arange(fs)/fs
chirp = np.sin(f.cumsum()/fs)
psd = np.abs(np.fft.rfft(chirp)**2)
freq = np.fft.rfftfreq(len(chirp), fs**-1)
pl.semilogx(freq, 20*np.log10(psd), 'k')
chirp_ir = signal.lfilter(b, a, chirp)
psd_ir = np.abs(np.fft.rfft(chirp_ir)**2)
pl.semilogx(freq, 20*np.log10(psd_ir), 'r')
#pl.axis(ymin=40, xmin=10, xmax=10000)
chirp_eq = signal.lfilter(ir**-1, 1, chirp_ir)
psd_eq = np.abs(np.fft.rfft(chirp_eq)**2)
pl.semilogx(freq, 20*np.log10(psd_eq), 'g')
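# Hedged example of the band-level relation discussed next (numbers are
# illustrative): a 65 dB SPL band level spread over a 4 kHz band corresponds to a
# spectrum level of roughly 29 dB SPL per Hz.
BL = 65.0
bandwidth = 4e3
ISL = BL - 10*np.log10(bandwidth)
print('Spectrum level: {:.1f} dB SPL/Hz'.format(ISL))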
"""
Explanation: Converting band level to spectrum level
$BL = 10 \times \log\frac{I_{tot}}{I_{ref}}$ where $I_{tot} = I_{SL} \times \Delta f$. Using the multiplication rule for logarithms, $BL = 10 \times \log\frac{I_{SL} \times 1\,Hz}{I_{ref}} + 10 \times \log\frac{\Delta f}{1\,Hz}$, which simplifies to $BL = ISL_{ave} + 10 \times \log(\Delta f)$
Equalizing a signal using the impulse response
End of explanation
"""
|
tensorflow/docs-l10n
|
site/zh-cn/guide/basic_training_loops.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
"""
Explanation: Basic training loops
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/guide/basic_training_loops"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/basic_training_loops.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/basic_training_loops.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/basic_training_loops.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a>
</td>
</table>
In the previous guides, you have learned about tensors, variables, gradient tape, and modules. In this guide, you will fit these all together to train models.
TensorFlow also includes the higher-level neural network API, tf.Keras, which provides useful abstractions to reduce boilerplate. However, in this guide you will use basic classes.
Setup
End of explanation
"""
# The actual line
TRUE_W = 3.0
TRUE_B = 2.0
NUM_EXAMPLES = 1000
# A vector of random x values
x = tf.random.normal(shape=[NUM_EXAMPLES])
# Generate some noise
noise = tf.random.normal(shape=[NUM_EXAMPLES])
# Calculate y
y = x * TRUE_W + TRUE_B + noise
# Plot all the data
import matplotlib.pyplot as plt
plt.scatter(x, y, c="b")
plt.show()
"""
Explanation: Solving machine learning problems
Solving a machine learning problem usually consists of the following steps:
Obtain training data.
Define the model.
Define a loss function.
Run through the training data, calculating the loss from the target value.
Calculate gradients for that loss and use an optimizer to adjust the variables to fit the data.
Evaluate your results.
For illustration purposes, in this guide you will develop a simple linear model, $f(x) = x * W + b$, which has two variables: $W$ (weights) and $b$ (bias).
This is the most basic problem of machine learning: given $x$ and $y$, try to find the slope and offset of a line via [simple linear regression](https://en.wikipedia.org/wiki/Linear_regression#Simple_and_multiple_linear_regression).
Data
Supervised learning uses inputs (usually denoted as x) and outputs (denoted y, often called labels). The goal is to learn from paired inputs and outputs so that you can predict the value of an output from an input.
In TensorFlow, almost every input is represented by a tensor, and is often a vector. In supervised learning, the output (that is, the value you would like to predict) is also a tensor.
Here is some data synthesized by adding Gaussian (normally distributed) noise to points along a line.
End of explanation
"""
class MyModel(tf.Module):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Initialize the weight to `5.0` and the bias to `0.0`
        # In practice, these should be randomly initialized
        self.w = tf.Variable(5.0)
        self.b = tf.Variable(0.0)
    def __call__(self, x):
        return self.w * x + self.b
model = MyModel()
# List the variables using tf.Module's built-in variable aggregation
print("Variables:", model.variables)
# Verify the model works
assert model(3.0).numpy() == 15.0
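# A hedged aside (not in the original guide): the note below points out that this
# whole dataset can be treated as a single batch; if you did want smaller batches,
# tf.data can slice and shuffle the same tensors. The batch size of 64 is arbitrary.
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(NUM_EXAMPLES).batch(64)
for batch_x, batch_y in dataset.take(1):
    print(batch_x.shape, batch_y.shape)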
"""
Explanation: Tensors are usually gathered together in batches, or groups of inputs and outputs stacked together. Batching can confer some benefits to training and works well with accelerators and vectorized computation. Given how small this dataset is, you can treat the entire dataset as a single batch.
Define the model
Use tf.Variable to represent all the weights in a model. A tf.Variable stores a value and provides it in tensor form as needed. See the variable guide for more details.
Use tf.Module to encapsulate the variables and the computation. You could use any Python object, but this way it can be easily saved.
Here, you define both w and b as variables.
End of explanation
"""
# This computes a single loss value for an entire batch
def loss(target_y, predicted_y):
return tf.reduce_mean(tf.square(target_y - predicted_y))
"""
Explanation: The initial variables are set here in a fixed way, but Keras comes with any number of initializers you could use, with or without the rest of Keras.
Define a loss function
A loss function measures how well the output of a model for a given input matches the target output. The goal is to minimize this difference during training. Define the standard L2 loss, also known as the "mean squared error":
End of explanation
"""
plt.scatter(x, y, c="b")
plt.scatter(x, model(x), c="r")
plt.show()
print("Current loss: %1.6f" % loss(model(x), y).numpy())
"""
Explanation: Before training the model, you can visualize the loss value. Plot the model's predictions in red and the training data in blue.
End of explanation
"""
# Given a callable model, inputs, outputs, and a learning rate...
def train(model, x, y, learning_rate):
    with tf.GradientTape() as t:
        # Trainable variables are automatically tracked by GradientTape
        current_loss = loss(y, model(x))
    # Use GradientTape to calculate the gradients with respect to W and b
    dw, db = t.gradient(current_loss, [model.w, model.b])
    # Subtract the gradient scaled by the learning rate
    model.w.assign_sub(learning_rate * dw)
    model.b.assign_sub(learning_rate * db)
"""
Explanation: 定义训练循环
训练循环按顺序重复执行以下任务:
发送一批输入值,通过模型生成输出值
通过比较输出值与输出(标签),来计算损失值
使用梯度带(GradientTape)找到梯度值
使用这些梯度优化变量
这个例子中,您可以使用 gradient descent训练数据。
tf.keras.optimizers中有许多梯度下降的变量。但是本着搭建的第一原则,您将在这里 借助tf.GradientTape的自动微分和tf.assign_sub的递减值(结合了tf.assign和tf.sub)自己实现基本数学:
End of explanation
"""
model = MyModel()
# Collect the history of W-values and b-values to plot later
Ws, bs = [], []
epochs = range(10)
# Define a training loop
def training_loop(model, x, y):
for epoch in epochs:
        # Update the model with the single giant batch
        train(model, x, y, learning_rate=0.1)
        # Track this before the update
Ws.append(model.w.numpy())
bs.append(model.b.numpy())
current_loss = loss(y, model(x))
print("Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f" %
(epoch, Ws[-1], bs[-1], current_loss))
print("Starting: W=%1.2f b=%1.2f, loss=%2.5f" %
(model.w, model.b, loss(y, model(x))))
# Do the training
training_loop(model, x, y)
# Plot it
plt.plot(epochs, Ws, "r",
epochs, bs, "b")
plt.plot([TRUE_W] * len(epochs), "r--",
[TRUE_B] * len(epochs), "b--")
plt.legend(["W", "b", "True W", "True b"])
plt.show()
# Visualize how the trained model performs
plt.scatter(x, y, c="b")
plt.scatter(x, model(x), c="r")
plt.show()
print("Current loss: %1.6f" % loss(model(x), y).numpy())
"""
Explanation: To get a feel for training, you can send the same batch of x and y through the training loop and watch how W and b evolve.
End of explanation
"""
class MyModelKeras(tf.keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Initialize the weight to `5.0` and the bias to `0.0`
        # In practice, these should be randomly initialized
        self.w = tf.Variable(5.0)
        self.b = tf.Variable(0.0)
    def __call__(self, x, **kwargs):
        return self.w * x + self.b
keras_model = MyModelKeras()
# Reuse the training loop with a Keras model
training_loop(keras_model, x, y)
# You can also save a checkpoint using Keras's built-in support
keras_model.save_weights("my_checkpoint")
"""
Explanation: The same solution, but with Keras
It's useful to contrast the code above with the equivalent code in Keras.
Defining the model looks exactly the same if you subclass tf.keras.Model. Remember that Keras models ultimately inherit from module.
End of explanation
"""
keras_model = MyModelKeras()
# compile sets the training parameters
keras_model.compile(
    # By default, fit() uses tf.function().
    # You can turn that off for debugging, but it is on for now.
    run_eagerly=False,
    # Using a built-in optimizer, configured as an object
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
    # Keras comes with built-in MSE error
    # However, you could use the loss function defined above
    loss=tf.keras.losses.mean_squared_error,
)
"""
Explanation: Rather than writing a new training loop each time you create a model, you can use Keras's built-in features as a shortcut. This can be useful when you do not want to write or debug Python training loops.
If you use Keras, you will need to use model.compile() to set the parameters and model.fit() to train. It takes less code to use Keras implementations of L2 loss and gradient descent, again as a shortcut. Keras losses and optimizers can be used outside of these convenience functions too, and the previous example could have used them.
End of explanation
"""
print(x.shape[0])
keras_model.fit(x, y, epochs=10, batch_size=1000)
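# A hedged follow-up (not part of the original guide): the checkpoint written by
# save_weights() earlier can be restored into a fresh instance of the same class.
restored_model = MyModelKeras()
restored_model.load_weights("my_checkpoint")
print("Restored loss: %1.6f" % loss(y, restored_model(x)).numpy())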
"""
Explanation: Keras fit expects batched data or a complete dataset as a NumPy array. NumPy arrays are chopped into batches, with a default batch size of 32.
In this case, to match the behavior of the hand-written loop, you should pass x in as a single batch of size 1000.
End of explanation
"""
|
syednasar/datascience
|
deeplearning/sentiment-analysis/sentiment_network/Sentiment Classification - Project 3 Solution.ipynb
|
mit
|
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory
End of explanation
"""
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
"""
Explanation: Project 1: Quick Theory Validation
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
"""
Explanation: Transforming Text into Numbers
End of explanation
"""
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
list(vocab)
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
from IPython.display import Image
Image(filename='sentiment_network.png')
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
"""
Explanation: Project 2: Creating the Input/Output Data
End of explanation
"""
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
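# A hedged usage sketch (not part of the original project): once trained, the
# network classifies a single review string with run(). The exact prediction
# depends on how well the last training run above converged.
print(mlp.run(reviews[0]))
print(labels[0])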
"""
Explanation: Project 3: Building a Neural Network
Start with your neural network from the last chapter
3 layer neural network
no non-linearity in hidden layer
use our functions to create the training data
create a "pre_process_data" function to create vocabulary for our training data generating functions
modify "train" to train over the entire corpus
Where to Get Help if You Need it
Re-watch previous week's Udacity Lectures
Chapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)
End of explanation
"""
|
pysg/pyther
|
Modelo de impregnacion/modelo1/Activité 4 (1).ipynb
|
mit
|
import numpy as np
import pandas as pd
import math
import cmath
from scipy.optimize import root
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Introduction
This program lets us model the concentration (c2) for different food simulants. It also lets us draw several plots.
End of explanation
"""
a = ("Table1.txt")
a
"""
Explanation: Polymer
End of explanation
"""
class InterfazPolimero:
def __init__ (self,a):
self.a=a
def Lire(self):
self.tab = pd.read_csv(self.a,sep=" ")
coef =self.tab.values
self.Experiment = coef[:,0]
self.Thickness = coef[:,1]
self.FoodSimulant = coef[:,2]
self.Cpo = coef[:,3]
self.K = coef [:,4]
self.Dp = coef[:,5]
self.RMSE = coef[:,6]
self.k = coef[:,7]
self.c4 = coef[:,8]
# self.c1 =coef[:,9]
self.c2 = np.zeros(10)
return self.tab
def inicializarC2(self):
self.c2 = np.zeros(10)
self.dimension = np.shape(self.c2)
print(self.dimension)
return self.c2
def calcul(self):
self.tab["j1"] = (self.tab["Dp"] / (self.tab["Thickness"] / 2)) * (self.tab["Cpo"] - self.c2)
print(self.tab["j1"])
self.c3 = self.c2 / self.K
self.j2 = self.k * (self.c3 - self.tab["c4"])
return (self.tab["j1"] - self.j2) / self.tab["j1"]
    def calcul2(self):
        # Solve J1(c2) = J2(c2) for every experiment with a root finder.
        def residu(c2, Dp, k, K, c4, Cpo, L):
            j1 = (Dp / (L / 2)) * (Cpo - c2)
            j2 = k * (c2 / K - c4)
            return j1 - j2
        for i in range(len(self.tab)):
            row = self.tab.iloc[i]
            sol = root(residu, 15, args=(float(row["Dp"]), float(row["k"]),
                                         float(row["K"]), float(row["c4"]),
                                         float(row["Cpo"]), float(row["Thickness"])))
            self.c2[i] = sol.x[0]
        print(self.c2)
        return self.c2
def Garder(self):
raw_data ={"résultat" : [1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793]}
df = pd.DataFrame(raw_data,index=["1","2","3","4","5","6","7","8","9","10"])
df.to_csv("c2rep")
return df
def Graphique(self):
plt.plot(self.tab["Dp"],self.Cpo,"^")
plt.title("f(Dp)=Cpo")
plt.xlabel("Dp")
plt.ylabel("Cpo")
def Graphique2(self):
plt.plot(self.tab["Dp"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
plt.title("f(Dp)=c2")
plt.xlabel("Dp")
plt.ylabel("c2")
def Graphique3(self):
plt.plot(self.tab["Cpo"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
plt.title("f(Cpo)=c2")
plt.xlabel("Cpo")
plt.ylabel("c2")
def Graphique4(self):
plt.plot(self.tab["Thickness"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
plt.title("f(Epaisseur)=c2")
plt.xlabel("Epaisseur")
plt.ylabel("c2")
def Graphique5(self):
fig,axes=plt.subplots(2,2)
axes[0,0].plot(self.tab["Dp"],self.Cpo,"^")
axes[1,1].plot(self.tab["Dp"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
axes[0,1].plot(self.tab["Cpo"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
axes[1,0].plot(self.tab["Thickness"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
p = InterfazPolimero("Table1.txt")
p
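# Hedged illustration (the numbers below are illustrative, not the experimental
# values from Table1.txt): setting J1 = J2 in the flux expressions used by
# calcul() gives a closed-form value of c2,
#   c2 = (Dp/(L/2)*Cpo + k*c4) / (Dp/(L/2) + k/K),
# which is the value the iterative search converges to.
Dp, L, Cpo, K, k, c4 = 1e-12, 2e-4, 500.0, 100.0, 1e-6, 0.0
A = Dp / (L / 2)
print((A * Cpo + k * c4) / (A + k / K))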
"""
Explanation: Calculating the final concentration
We need several concentration values, which are the following:
To calculate the final concentration, we need the following equations:
To compute this, we follow the method above: we need to know the main properties, the structure used, the initial conditions, and the partition coefficient K.
Next, we make an assumption about the migrant concentration inside the polymer and the food simulant, and then compute the mass transfer of migrant inside the polymer (Jp) and inside the food simulant (Jfs). These are mass-transfer phenomena: an irreversible process in which a physical quantity is transported by molecules, which leads to Fick's law. In our case Fick's law is simplified to a single dimension, and the flux can be written as J = (Dp/(L/2)) x (C1 - C2).
Using the partition coefficient (K) we can determine C2, but this requires C3. To find C3 we must determine the transfer across the boundary layer of the food simulant, given by J2 = k*(C3 - C4). The initial conditions are: first, Cpx = Cp0, and second, at time t = 0, C1 = C2 = Cp0, so at the start of migration Cfs = 0. The last condition is that $\frac{\partial C_{px}}{\partial x}$ equals 0. The "Regula Falsi" method is used to reduce the number of interfacial concentrations, and the iteration stops when J1 = J2.
End of explanation
"""
p.Lire()
"""
Explanation: Table of values
Here we can see the values obtained for each experiment: the thickness of the film used, the food simulant used, the initial antioxidant concentration in the plastic, the value of K (the partition coefficient of the migrant between the polymer and the food simulant), Dp (the diffusion coefficient of the antioxidant in the polymer), RMSE (the estimated error on the value), and finally k (the mass-transfer coefficient).
With these values we can determine the final concentration in the plastic.
End of explanation
"""
p.calcul()
"""
Explanation: Computing c2
This computation gives the values of the final concentration in the plastic and therefore lets us assess the efficiency of the process.
End of explanation
"""
p.Graphique()
"""
Explanation: Plot: f(Dp) = Cpo
End of explanation
"""
p.Graphique2()
"""
Explanation: Plot: f(Dp) = c2
End of explanation
"""
p.Graphique3()
"""
Explanation: Plot: f(Cpo) = c2
End of explanation
"""
p.Graphique4()
p.Graphique5()
"""
Explanation: Plot: f(Thickness) = c2
End of explanation
"""
|
GoogleCloudPlatform/healthcare
|
datathon/nusdatathon18/tutorials/ddsm_ml_tutorial.ipynb
|
apache-2.0
|
import numpy as np
import os
import pandas as pd
import random
import tensorflow as tf
from google.colab import auth
from google.cloud import storage
from io import BytesIO
# The next import is used to print out pretty pandas dataframes
from IPython.display import display, HTML
from PIL import Image
"""
Explanation: 2018 NUS-MIT Datathon Tutorial: Machine Learning on CBIS-DDSM
Goal
In this colab, we are going to train a simple convolutional neural network (CNN) with Tensorflow, which can be used to classify the mammographic images based on breast density.
The network we are going to build is adapted from the official tensorflow tutorial.
CBIS-DDSM
The dataset we are going to work with is CBIS-DDSM. Quote from their website:
"This CBIS-DDSM (Curated Breast Imaging Subset of DDSM) is an updated and standardized version of the Digital Database for Screening Mammography (DDSM)."
CBIS-DDSM differs from the original DDSM dataset in that it converted images to DICOM format, which is easier to work with.
Note that although this tutorial focuses on the CBIS-DDSM dataset, most of it can be easily applied to The International Skin Imaging Collaboration (ISIC) dataset as well. More details will be provided in the Datasets section below.
Setup
To be able to run the code cells in this tutorial, you need to create a copy of this Colab notebook by clicking "File" > "Save a copy in Drive..." menu.
You can share your copy with your teammates by clicking on the "SHARE" button on the top-right corner of your Colab notebook copy. Everyone with "Edit" permission is able to modify the notebook at the same time, so it is a great way for team collaboration.
First, let's import the modules needed to complete the tutorial. You can run the following cell by clicking on the triangle button that appears when you hover over the [ ] space on the top-left corner of the code cell below.
End of explanation
"""
auth.authenticate_user()
"""
Explanation: Next, we need to authenticate ourselves to Google Cloud Platform. If you are running the code cell below for the first time, a link will show up, which leads to a web page for authentication and authorization. Log in with your credentials and make sure the permissions it requests are appropriate. After clicking the Allow button, you will be redirected to another web page which has a verification code displayed. Copy the code and paste it in the input field below.
End of explanation
"""
project_id = 'nus-datathon-2018-team-00'
os.environ["GOOGLE_CLOUD_PROJECT"] = project_id
"""
Explanation: At the same time, let's set the project we are going to use throughout the tutorial.
End of explanation
"""
# Should output something like '/device:GPU:0'.
tf.test.gpu_device_name()
"""
Explanation: Optional: In this Colab we can opt to use GPU to train our model by clicking "Runtime" on the top menus, then clicking "Change runtime type", select "GPU" for hardware accelerator. You can verify that GPU is working with the following code cell.
End of explanation
"""
client = storage.Client()
bucket_name = 'datathon-cbis-ddsm-colab'
bucket = client.get_bucket(bucket_name)
def load_images(folder):
images = []
labels = []
# The image name is in format: <LABEL>_Calc_{Train,Test}_P_<Patient_ID>_{Left,Right}_CC.
for label in [1, 2, 3, 4]:
blobs = bucket.list_blobs(prefix=("%s/%s_" % (folder, label)))
for blob in blobs:
byte_stream = BytesIO()
blob.download_to_file(byte_stream)
byte_stream.seek(0)
img = Image.open(byte_stream)
images.append(np.array(img, dtype=np.float32))
labels.append(label-1) # Minus 1 to fit in [0, 4).
return np.array(images), np.array(labels, dtype=np.int32)
def load_train_images():
return load_images('small_train_demo')
def load_test_images():
return load_images('small_test_demo')
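# A hedged sanity check (assumes the demo GCS folders are readable with your
# credentials): load the small demo training set once with the helper above and
# inspect one image and its label.
import matplotlib.pyplot as plt
sample_images, sample_labels = load_train_images()
print(sample_images.shape, sample_labels.shape)
plt.imshow(sample_images[0], cmap='gray')
plt.title('Breast density category %d' % (sample_labels[0] + 1))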
"""
Explanation: Dataset
We have already extracted the images from the DICOM files to separate folders on GCS, and some preprocessing were also done with the raw images (If you need custom preprocessing, please consult our tutorial on image preprocessing).
The folders ending with _demo contain subsets of training and test images. Specifically, the demo training dataset has 100 images, with 25 images for each breast density category (1 - 4). There are 20 images in the test dataset which were selected randomly. All the images were first padded to 5251x7111 (largest width and height among the selected images) and then resized to 95x128 to fit in memory and save training time. Both training and test images are "Cranial-Caudal" only.
ISIC dataset is organized in a slightly different way, the images are in JPEG format and each image comes with a JSON file containing metadata information. In order to make this tutorial work for ISIC, you will need to first pad and resize the images (we provide a script to do that here), and extract the labels from the JSON files based on your interests.
Training
Before coding on our neurual network, let's create a few helper methods to make loading data from Google Cloud Storage (GCS) easier.
End of explanation
"""
KERNEL_SIZE = 5 #@param
DROPOUT_RATE = 0.25 #@param
def cnn_model_fn(features, labels, mode):
"""Model function for CNN."""
# Input Layer.
# Reshape to 4-D tensor: [batch_size, height, width, channels]
# DDSM images are grayscale, which have 1 channel.
input_layer = tf.reshape(features["x"], [-1, 95, 128, 1])
# Convolutional Layer #1.
# Input Tensor Shape: [batch_size, 95, 128, 1]
# Output Tensor Shape: [batch_size, 95, 128, 32]
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=32,
kernel_size=KERNEL_SIZE,
padding="same",
activation=tf.nn.relu)
# Pooling Layer #1.
    # Input Tensor Shape: [batch_size, 95, 128, 32]
# Output Tensor Shape: [batch_size, 47, 64, 32]
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
# Convolutional Layer #2.
# Input Tensor Shape: [batch_size, 47, 64, 32]
# Output Tensor Shape: [batch_size, 47, 64, 64]
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=KERNEL_SIZE,
padding="same",
activation=tf.nn.relu)
# Pooling Layer #2.
    # Input Tensor Shape: [batch_size, 47, 64, 64]
# Output Tensor Shape: [batch_size, 23, 32, 64]
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
# Flatten tensor into a batch of vectors
# Input Tensor Shape: [batch_size, 23, 32, 64]
# Output Tensor Shape: [batch_size, 23 * 32 * 64]
pool2_flat = tf.reshape(pool2, [-1, 23 * 32 * 64])
# Dense Layer.
    # Input Tensor Shape: [batch_size, 23 * 32 * 64]
# Output Tensor Shape: [batch_size, 1024]
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
# Dropout operation.
# 0.75 probability that element will be kept.
dropout = tf.layers.dropout(inputs=dense, rate=DROPOUT_RATE,
training=(mode == tf.estimator.ModeKeys.TRAIN))
# Logits Layer.
# Input Tensor Shape: [batch_size, 1024]
# Output Tensor Shape: [batch_size, 4]
logits = tf.layers.dense(inputs=dropout, units=4)
predictions = {
# Generate predictions (for PREDICT and EVAL mode)
"classes": tf.argmax(input=logits, axis=1),
# Add `softmax_tensor` to the graph. It is used for PREDICT and by the
# `logging_hook`.
"probabilities": tf.nn.softmax(logits, name="softmax_tensor")
}
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
# Loss Calculation.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=loss,
global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
# Add evaluation metrics (for EVAL mode).
eval_metric_ops = {
"accuracy": tf.metrics.accuracy(
labels=labels, predictions=predictions["classes"])}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
"""
Explanation: Let's create a model function, which will be passed to an estimator that we will create later. The model has an architecture of 6 layers:
Convolutional Layer: Applies 32 5x5 filters, with ReLU activation function
Pooling Layer: Performs max pooling with a 2x2 filter and stride of 2
Convolutional Layer: Applies 64 5x5 filters, with ReLU activation function
Pooling Layer: Same setup as #2
Dense Layer: 1,024 neurons, with dropout regulartization rate of 0.25
Logits Layer: 4 neurons, one for each breast density category, i.e. [0, 4)
Note that you can change the parameters on the right (or inline) to tune the neurual network. It is highly recommended to check out the original tensorflow tutorial to get a deeper understanding of the network we are building here.
End of explanation
"""
BATCH_SIZE = 20 #@param
STEPS = 1000 #@param
artifacts_bucket_name = 'nus-datathon-2018-team-00-shared-files'
# Append a random number to avoid collision.
artifacts_path = "ddsm_model_%s" % random.randint(0, 1000)
model_dir = "gs://%s/%s" % (artifacts_bucket_name, artifacts_path)
def main(_):
# Load training and test data.
train_data, train_labels = load_train_images()
eval_data, eval_labels = load_test_images()
# Create the Estimator.
ddsm_classifier = tf.estimator.Estimator(
model_fn=cnn_model_fn,
model_dir=model_dir)
# Set up logging for predictions.
# Log the values in the "Softmax" tensor with label "probabilities".
tensors_to_log = {"probabilities": "softmax_tensor"}
logging_hook = tf.train.LoggingTensorHook(
tensors=tensors_to_log, every_n_iter=50)
# Train the model.
train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
x={"x": train_data},
y=train_labels,
batch_size=BATCH_SIZE,
num_epochs=None,
shuffle=True)
ddsm_classifier.train(
input_fn=train_input_fn,
steps=STEPS,
hooks=[logging_hook])
# Evaluate the model and print results.
eval_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
x={"x": eval_data},
y=eval_labels,
num_epochs=1,
shuffle=False)
eval_results = ddsm_classifier.evaluate(input_fn=eval_input_fn)
print(eval_results)
"""
Explanation: Now that we have a model function, the next step is feeding it to an estimator for training. Here we are creating a main function as required by tensorflow.
End of explanation
"""
# Remove temporary files.
artifacts_bucket = client.get_bucket(artifacts_bucket_name)
artifacts_bucket.delete_blobs(artifacts_bucket.list_blobs(prefix=artifacts_path))
# Set logging level.
tf.logging.set_verbosity(tf.logging.INFO)
# Start training, this will call the main method defined above behind the scene.
# The whole training process will take ~5 mins.
tf.app.run()
"""
Explanation: Finally, here comes the exciting moment. We are going to train and evaluate the model we just built! Run the following code cell and pay attention to the accuracy printed at the end of logs.
Note: if this is not the first time you are running the following cell, run the command that removes the temporary files first, to avoid odd errors such as "NaN loss during training".
End of explanation
"""
|
cranmer/look-elsewhere-2d
|
create_gaussian_process_examples.ipynb
|
mit
|
%pylab inline --no-import-all
"""
Explanation: Testing look-elsewhere effect by creating 2d chi-square random fields with a Gaussian Process
by Kyle Cranmer, Dec 7, 2015
The correction for 2d look-elsewhere effect presented in
Estimating the significance of a signal in a multi-dimensional search by Ofer Vitells and Eilam Gross http://arxiv.org/pdf/1105.4355v1.pdf
is based on the fact that the test statistic
\begin{equation}
q(\nu_1, \nu_2) = -2 \log \frac{ \max_{\theta} L(\mu=0, \nu_1, \nu_2, \theta)}{ \max_{\mu, \theta} L(\mu, \nu_1, \nu_2, \theta)}
\end{equation}
is a chi-square random field (with 1 degree of freedom). That means that, for any point in $\nu_1, \nu_2$, the quantity $q(\nu_1, \nu_2)$ would have a chi-square distribution if you repeated the experiment many times.
That is what you expect if you have a background model $p_b(x|\theta)$ and you look for a signal on top of it with signal strength $\mu$. Creating that scan is somewhat time consuming, so here we make realizations of a chi-square random field by using a Gaussian Process.
The main trick we will use is that a chi-square distribution for one degree of freedom is the same as the distribution of $x^2$ if $x$ is normally distributed. As you might have guessed, a Gaussian Process (GP) is like a chi-square random field, but it is Gaussian-distributed at each point.
Note, the distributions are not independent at each point; there is some covariance. So if the $q(\nu_1, \nu_2)$ is high at one point, you can expect it to be high nearby. We can control this behavior via the GP's kernel.
For more on the theory of Gaussian Processes, the best resource is available for free online: Rasmussen & Williams (2006). We will use george -- a nice python package for Gaussian Processes (GP).
End of explanation
"""
from scipy.stats import chi2, norm
chi2_array = chi2.rvs(1, size=10000)
norm_array = norm.rvs(size=10000)
_ = plt.hist(chi2_array, bins=100, alpha=.5, label='chi-square')
_ = plt.hist(norm_array**2, bins=100, alpha=.5, color='r', label='x^2')
plt.yscale('log', nonposy='clip')
plt.legend(('chi-square', 'x^2'))
#plt.semilogy()
"""
Explanation: The main trick we will use is that a chi-square distribution for one degree of freedom is the same as the distribution of $x^2$ if $x$ is normally distributed. Here's a quick demonstration of that:
End of explanation
"""
import george
from george.kernels import ExpSquaredKernel
length_scale_of_correlation = 0.1
kernel = ExpSquaredKernel(length_scale_of_correlation, ndim=2)
# Create the Gaussian process
# gp = george.GP(kernel)
gp = george.GP(kernel, solver=george.HODLRSolver) #faster
n_scan_points=50
aspect_ratio = 10. # make excesses look like stripes
x_scan = np.arange(0,aspect_ratio,aspect_ratio/n_scan_points)
y_scan = np.arange(0,1,1./n_scan_points)
xx, yy = np.meshgrid(x_scan, y_scan)
# reformat the independent coordinates where we evaluate the GP
indep = np.vstack((np.hstack(xx),np.hstack(yy))).T
# illustration of what is being done here
np.vstack([[1,2],[3,4]]).T
# slow part: pre-compute internal stuff for the GP
gp.compute(indep)
# evaluate one realization of the GP
z = gp.sample(indep)
# reformat output for plotting
zz = z.reshape((n_scan_points,n_scan_points))
# plot the chi-square random field
plt.imshow(zz**2, cmap='gray')
plt.colorbar()
"""
Explanation: Ok, now to the Gaussian processes.
End of explanation
"""
# plot the gaussian distributed x and chi-square distributed x**2
plt.subplot(1,2,1)
count, edges, patches = plt.hist(np.hstack(zz), bins=100)
plt.xlabel('z')
plt.subplot(1,2,2)
count, edges, patches = plt.hist(np.hstack(zz)**2, bins=100)
plt.xlabel('q=z**2')
plt.yscale('log', nonposy='clip')
"""
Explanation: Now let's histogram the values of the random field.
Don't get confused here... if you pick a single point and histogram the value of the field over many instances, you expect a Gaussian. However, for a single instance, you don't expect the histogram of the values of the field to be Gaussian (because of the correlations). Thought experiments: if you make length_scale_of_correlation very small, then each point is essentially independent and you do expect to see a Gaussian; however, if length_scale_of_correlation is very large then you expect the field to be nearly constant and the histogram below would be a delta function.
End of explanation
"""
from lee2d import *
from scipy.ndimage import grey_closing, binary_closing
def fill_holes(array):
zero_array = array==0.
temp = grey_closing(array, size=2)*zero_array
return temp+array
"""
Explanation: Ok, now let's repeat that several times and test lee2d
End of explanation
"""
n_samples = 100
z_array = gp.sample(indep,n_samples)
q_max = np.zeros(n_samples)
phis = np.zeros((n_samples,2))
u1,u2 = 0.5, 1.
n_plots = 3
plt.figure(figsize=(9,n_plots*3))
for scan_no, z in enumerate(z_array):
scan = z.reshape((n_scan_points,n_scan_points))**2
q_max[scan_no] = np.max(scan)
# fill holes from failures in original likelihood
scan = fill_holes(scan)
#get excursion sets above those two levels
exc1 = (scan>u1) + 0. #add 0. to convert from bool to double
exc2 = (scan>u2) + 0.
#print '\nu1,u2 = ', u1, u2
#print 'diff = ', np.sum(exc1), np.sum(exc2)
if scan_no < n_plots:
aspect = 1.
plt.subplot(n_plots,3,3*scan_no+1)
aspect = 1.*scan.shape[0]/scan.shape[1]
plt.imshow(scan.T, cmap='gray', aspect=aspect)
plt.subplot(n_plots,3,3*scan_no+2)
plt.imshow(exc1.T, cmap='gray', aspect=aspect, interpolation='none')
plt.subplot(n_plots,3,3*scan_no+3)
plt.imshow(exc2.T, cmap='gray', aspect=aspect, interpolation='none')
phi1 = calculate_euler_characteristic(exc1)
phi2 = calculate_euler_characteristic(exc2)
#print 'phi1, phi2 = ', phi1, phi2
#print 'q_max = ', np.max(scan)
phis[scan_no] = [phi1, phi2]
plt.savefig('chi-square-random-fields.png')
exp_phi_1, exp_phi_2 = np.mean(phis[:,0]), np.mean(phis[:,1])
exp_phi_1, exp_phi_2
n1, n2 = get_coefficients(u1=u1, u2=u2, exp_phi_1=exp_phi_1, exp_phi_2=exp_phi_2)
print n1, n2
"""
Explanation: Generate 100 realizations of the GP, calculate the Euler characteristic for two thresholds, and use the mean of those Euler characteristics to estimate $N_1$ and $N_2$
End of explanation
"""
u = np.linspace(5,25,100)
global_p = global_pvalue(u,n1,n2)
"""
Explanation: With estimates of $N_1$ and $N_2$ predict the global p-value vs. u
End of explanation
"""
n_samples = 5000
z_array = gp.sample(indep,n_samples)
q_max = np.zeros(n_samples)
for scan_no, z in enumerate(z_array):
scan = z.reshape((n_scan_points,n_scan_points))**2
q_max[scan_no] = np.max(scan)
bins, edges, patches = plt.hist(q_max, bins=30)
icdf = 1.-np.cumsum(bins/n_samples)
icdf = np.hstack((1.,icdf))
icdf_error = np.sqrt(np.cumsum(bins))/n_samples
icdf_error = np.hstack((0.,icdf_error))
plt.xlabel('q_max')
plt.ylabel('counts / bin')
# plot the p-value
plt.subplot(121)
plt.plot(edges,icdf, c='r')
plt.errorbar(edges,icdf,yerr=icdf_error)
plt.plot(u, global_p)
plt.xlabel('u')
plt.ylabel('P(q_max >u)')
plt.xlim(0,25)
plt.subplot(122)
plt.plot(edges,icdf, c='r', label='toys')
plt.errorbar(edges,icdf,yerr=icdf_error)
plt.plot(u, global_p, label='prediction')
plt.xlabel('u')
plt.legend(('toys', 'prediction'))
#plt.ylabel('P(q>u)')
plt.ylim(1E-3,10)
plt.xlim(0,25)
plt.semilogy()
"""
Explanation: Generate 5000 instances of the Gaussian Process, find maximum local significance for each, and check the prediction for the LEE-corrected global p-value
End of explanation
"""
from scipy.stats import poisson
n_samples = 1000
z_array = gp.sample(indep,n_samples)
phis = np.zeros((n_samples,2))
for scan_no, z in enumerate(z_array):
scan = z.reshape((n_scan_points,n_scan_points))**2
#get excursion sets above those two levels
exc1 = (scan>u1) + 0. #add 0. to convert from bool to double
exc2 = (scan>u2) + 0.
phi1 = calculate_euler_characteristic(exc1)
phi2 = calculate_euler_characteristic(exc2)
phis[scan_no] = [phi1, phi2]
bins = np.arange(0,25)
counts, bins, patches = plt.hist(phis[:,0], bins=bins, normed=True, alpha=.3, color='b')
_ = plt.hist(phis[:,1], bins=bins, normed=True,alpha=.3, color='r')
plt.plot(bins,poisson.pmf(bins,np.mean(phis[:,0])), c='b')
plt.plot(bins,poisson.pmf(bins,np.mean(phis[:,1])), c='r')
plt.xlabel('phi_i')
plt.legend(('obs phi1', 'obs phi2', 'poisson(mean(phi1)', 'poisson(mean(phi2))'), loc='upper left')
print 'Check Poisson phi1', np.mean(phis[:,0]), np.std(phis[:,0]), np.sqrt(np.mean(phis[:,0]))
print 'Check Poisson phi2', np.mean(phis[:,1]), np.std(phis[:,1]), np.sqrt(np.mean(phis[:,1]))
print 'correlation coefficients:'
print np.corrcoef(phis[:,0], phis[:,1])
print 'covariance:'
print np.cov(phis[:,0], phis[:,1])
x, y = np.random.multivariate_normal([np.mean(phis[:,0]),np.mean(phis[:,0])], np.cov(phis[:,0], phis[:,1]), 5000).T
_ = plt.scatter(phis[:,0], phis[:,1], alpha=0.1)
plt.plot(x, y, 'x', alpha=0.1)
plt.axis('equal')
plt.xlabel('phi_0')
plt.ylabel('phi_1')
toy_n1, toy_n2 = np.zeros(x.size),np.zeros(x.size)
for i, (toy_exp_phi_1, toy_exp_phi_2) in enumerate(zip(x,y)):
n1, n2 = get_coefficients(u1=u1, u2=u2, exp_phi_1=toy_exp_phi_1, exp_phi_2=toy_exp_phi_2)
toy_n1[i] = n1
toy_n2[i] = n2
plt.scatter(toy_n1, toy_n2, alpha=.1)
plt.xlabel('n1')
plt.ylabel('n2')
# now propagate error exp_phi_1 and exp_phi_2 (by dividing cov matrix by n_samples) including correlations
x, y = np.random.multivariate_normal([np.mean(phis[:,0]),np.mean(phis[:,1])],
np.cov(phis[:,0], phis[:,1])/n_samples,
5000).T
'''
# check consistency with next cell by using diagonal covariance
dummy_cov = np.cov(phis[:,0], phis[:,1])/n_samples
dummy_cov[0,1]=0
dummy_cov[1,0]=0
print dummy_cov
x, y = np.random.multivariate_normal([np.mean(phis[:,0]),np.mean(phis[:,1])],
dummy_cov,
5000).T
'''
toy_global_p = np.zeros(x.size)
for i, (toy_exp_phi_1, toy_exp_phi_2) in enumerate(zip(x,y)):
n1, n2 = get_coefficients(u1=u1, u2=u2, exp_phi_1=toy_exp_phi_1, exp_phi_2=toy_exp_phi_2)
u = 16
#global_p = global_pvalue(u,n1,n2)
toy_global_p[i] = global_pvalue(u,n1,n2)
# now propagate error assuming uncorrelated but observed std. on phi_1 and phi_2 / sqrt(n_samples)
x = np.random.normal(np.mean(phis[:,0]), np.std(phis[:,0])/np.sqrt(n_samples), 5000)
y = np.random.normal(np.mean(phis[:,1]), np.std(phis[:,1])/np.sqrt(n_samples), 5000)
toy_global_p_uncor = np.zeros(x.size)
for i, (toy_exp_phi_1, toy_exp_phi_2) in enumerate(zip(x,y)):
n1, n2 = get_coefficients(u1=u1, u2=u2, exp_phi_1=toy_exp_phi_1, exp_phi_2=toy_exp_phi_2)
u = 16
#global_p = global_pvalue(u,n1,n2)
toy_global_p_uncor[i] = global_pvalue(u,n1,n2)
# now propagate error assuming uncorrelated Poisson stats on phi_1 and phi_2
x = np.random.normal(np.mean(phis[:,0]), np.sqrt(np.mean(phis[:,0]))/np.sqrt(n_samples), 5000)
y = np.random.normal(np.mean(phis[:,1]), np.sqrt(np.mean(phis[:,1]))/np.sqrt(n_samples), 5000)
toy_global_p_uncor_pois = np.zeros(x.size)
for i, (toy_exp_phi_1, toy_exp_phi_2) in enumerate(zip(x,y)):
n1, n2 = get_coefficients(u1=u1, u2=u2, exp_phi_1=toy_exp_phi_1, exp_phi_2=toy_exp_phi_2)
u = 16
#global_p = global_pvalue(u,n1,n2)
toy_global_p_uncor_pois[i] = global_pvalue(u,n1,n2)
counts, bins, patches = plt.hist(toy_global_p_uncor_pois, bins=50, normed=True, color='g', alpha=.3)
counts, bins, patches = plt.hist(toy_global_p_uncor, bins=bins, normed=True, color='r', alpha=.3)
counts, bins, patches = plt.hist(toy_global_p, bins=bins, normed=True, color='b', alpha=.3)
plt.xlabel('global p-value')
#plt.ylim(0,1.4*np.max(counts))
plt.legend(('uncorrelated Poisson approx from mean',
'uncorrelated Gaus. approx of observed dist',
'correlated Gaus. approx of observed dist'),
bbox_to_anchor=(1., 1.3))
"""
Explanation: Study statistical uncertainty
Outline:
1. generate n_samples likelihood scans using the GP
1. make exclusion sets, calculate phi1, phi2 for levels u1, u2
1. look at histogram of phi1, phi2 (notice that they are narrower than Poisson)
1. look at 2-d scatter of phi1, phi2 (notice that they are positively correlated)
1. look at 2-d scatter of coefficients n1, n2 (notice that they are negatively correlated)
1. Compare three ways of propagating error to global p-value
1. Poisson, no correlations: estimate uncertainty on Exp[phi1] as sqrt(exp_phi_1)/sqrt(n_samples)
1. Gaus approx of observed, no correlations: estimate uncertainty on Exp[phi1] as std(exp_phi_1)/sqrt(n_samples)
1. Gaus approx of observed, with correlations: estimate covariance of (Exp[phi1], Exp[phi2]) with cov(phi1, phi2)/n_samples -- note since it's covariance we divide by n_samples not sqrt(n_samples)
Conclusions:
The number of islands (as quantified by the Euler characteristic) is not Poisson distributed.
Deviation from the Poisson distribution will depend on the properties of the underlying 2-d fit (equivalently, the Gaussian Process kernel). In this example, the deviation isn't that big. It is probably generic that the uncertainty in phi is smaller than Poisson, because one can only fit so many islands into the scan... so it's probably more like a Binomial.
Unsurprisingly, there is also a positive correlation between the number of islands at levels u1 and u2.
This turns into an anti-correlation on the coefficients n1 and n2.
The two effects lead to the Poisson approximation overestimating the uncertainty on the global p-value.
End of explanation
"""
|
Python4AstronomersAndParticlePhysicists/PythonWorkshop-ICE
|
notebooks/13_01_Big_Data.ipynb
|
mit
|
import pyspark
sc = pyspark.SparkContext('local[*]')
# We define our input
l = range(10)
l
# We "upload" it as an RDD
rdd = sc.parallelize(l)
rdd
"""
Explanation: Contents
Introduction
Big Data & Hadoop
HDFS
MapReduce
Apache Spark
Map & Reduce
RDD
Key-Value RDD
DataFrame
Pandas-like interface
SQL interface
Introduction
<a id='Big_Data_Hadoop'></a>
Big data & Hadoop
There was a time when a researcher could gather all available data in their field of knowledge in a small library at home and produce results using a pen and a sheet of paper. With personal computers and laptops we have been able to extend our storage and processing power farther than we ever expected, but they cannot cope with it anymore.
Nowadays, scientific experiments generate such amounts of data that they don't fit in a personal computer, not even in a data center such as PIC. This huge need of computing and storage resources is one of the factors that drive the scientific collaborations worldwide. Also, this dramatic increase in capacity and performance that is needed for current experiments requires specific architectures to store and process all this data.
Big Data platforms are a combination of hardware and software designed to handle massive amounts of data. The most popular one is Hadoop. Hadoop is based on the design originally published by Google in several papers comprising, among others, of a:
- distributed file system (HDFS)
- MapReduce programming model
HDFS
The Hadoop Distributed File System (HDFS) is the basis of the Hadoop platform, and it is built to work on top of commodity computer clusters. In this architecture, dozens up to thousands of cheap computers work in a coordinate manner to store and process the data. Due to the large number of elements involved (computer components, network, power, etc.) the platform was designed from the ground up to be failure tolerant. Should any element fail at any time, the system would detect the condition and recover from it transparently, and the user will not ever notice.
HDFS works by splitting the files in 128 MiB blocks and replicating them on the cluster nodes in such a way that if a node fails, data is still accessible from any other replica.
MapReduce
MapReduce is programming model used for generating and processing big data sets with parallel and distributed algorithms. Inspired by the map and reduce functions commonly used in functional programming, its key contribution is the scalability and fault-tolerance achieved by optimizing the execution engine.
In MapReduce, data operations are defined with respect to data structured in (key, value) pairs:
- Map takes one pair of data in one data domain and returns a list of pairs in a different domain:
Map(k1,v1) → list(k2,v2)
The Map function is applied in parallel to every pair (keyed by k1) in the input dataset. This produces a list of pairs (keyed by k2) for each call. After that, the MapReduce framework collects all pairs with the same key (k2) from all lists and groups them together, creating one group for each key.
Reduce is then applied in parallel to each group, which in turn produces a collection of values in the same domain:
Reduce(k2, list (v2)) → list(v3)
Each Reduce call typically produces either one value v3 or an empty return, though one call is allowed to return more than one value. The returns of all calls are collected as the desired result list.
<a id='Apache_Spark'></a>
Apache Spark
Spark is an open-source cluster-computing framework that can run on top of Apache Hadoop. Built on top of MapReduce, it offers an improved interface for non-linear algorithms and operations. Apache Spark is based on a specialized data structure called the resilient distributed dataset (RDD). The use of RDDs facilitates the implementation of iterative algorithms and interactive/exploratory analysis. The latency of Spark applications, compared to a pure MapReduce implementation, may be reduced by several orders of magnitude.
Apache Spark comprises several modules which implement additional processing abilities to the RDDs such as:
- Spark SQL: structured data like database result sets
- Spark Streaming: real-time data
- Spark MLlib: machine learning
- Spark Graphx: graph processing
For this course, we will introduce the mechanics of working with large datasets using Spark. Ideally, each one of you would have an entire Hadoop cluster to work with but, we are not CERN... so we make use of the ability of Spark to run locally, without a cluster. Later, you could run the same code on top of a Hadoop cluster without changing anything.
<a id='Map_Reduce'></a>
Map & Reduce
Note:
Spark operations can be classified as either:
- ACTIONS: Trigger a computation and return a result
- reduce, collect, aggregate, groupBy, take, ...
- TRANSFORMATIONS: return a new RDD with the transformation applied (think of composing functions)
- map, reduce, filter, join, ...
End of explanation
"""
# We define a map function
def power_of_2(k):
return 2**k
# And we apply it to our RDD
rdd.map(power_of_2)
# So we use collect() to retrieve all results.
rdd.map(power_of_2).collect()
### WARNING ###
# Never do that in real cases, or you will transfer ALL data to your browser, effectively killing it.
"""
Explanation: map()
End of explanation
"""
# What about summing everything?
# We define a reduce function
def sum_everything(k1, k2):
return k1 + k2
# And we apply the reduce operation
rdd.reduce(sum_everything)
# Or we can use the built in operation `sum`
rdd.sum()
"""
Explanation: reduce()
End of explanation
"""
# What if I wanted to compute the sum of the powers of 2?
rdd.map(power_of_2).reduce(sum_everything)
# or
rdd.map(power_of_2).sum()
# How can we count the number of elements in the array?
rdd.count()
"""
Explanation: pipelining
End of explanation
"""
def set_to_1(k):
return 1
rdd.map(set_to_1).reduce(sum_everything)
"""
Explanation: Ok, too easy, this is supposed to be a map & reduce tutorial...
How can we do it WITHOUT the count() action, just using map & reduce.
SPOILER, you could add 1 for each element in the RDD:
- Build a map function that, given an element, transforms it into a 1.
- Then apply our sum_everything reduce function
End of explanation
"""
# Load all Shakespeare works
import os
shakespeare = sc.textFile(os.path.normpath('file:///../../resources/shakespeare.txt'))
# Show the first lines
shakespeare.take(10)
# Get the longest line
def keep_longest(k1, k2):
if len(k1) > len(k2):
return k1
else:
return k2
shakespeare.reduce(keep_longest)
# Compute the average line length
def line_length(k):
return len(k)
shakespeare.map(line_length).sum() / shakespeare.count()
"""
Explanation: RDD
End of explanation
"""
# Split the text in words
def split_in_words(k):
return k.split()
shakespeare.map(split_in_words).take(2)
shakespeare.flatMap(split_in_words).take(15)
"""
Explanation: flatMap() vs map()
End of explanation
"""
shakespeare.flatMap(
lambda k: k.split() # Split in words
).take(15)
"""
Explanation: lambda functions
End of explanation
"""
# Retrieve 10 words longer than 15 characters
shakespeare.flatMap(
lambda k: k.split() # Split in words
).filter(
lambda k: len(k)>15 # Keep words longer than 15 characters
).take(10)
"""
Explanation: filter()
End of explanation
"""
%load -r 1-9 solutions/13_01_Big_Data.py
"""
Explanation: Exercise
How many times did Shakespeare use the word 'murder'? (case insensitive)
End of explanation
"""
%load -r 10-19 solutions/13_01_Big_Data.py
"""
Explanation: Exercise
Show 10 words longer than 15 characters
End of explanation
"""
%load -r 20-29 solutions/13_01_Big_Data.py
"""
Explanation: Exercise
Show all words longer than 15 characters, but dropping those with any of the following characters (. , -)
End of explanation
"""
%load -r 30-39 solutions/13_01_Big_Data.py
"""
Explanation: Exercise
Retrieve the longest word (without . , -), reusing the keep_longest reduce function.
End of explanation
"""
words = shakespeare.flatMap(
lambda k: k.split() # Split in words
).filter(
lambda k: not (set('.,-') & set(k)) # Drop words with special characters
)
"""
Explanation: Which, as you all know, means "the state of being able to achieve honours".
<a id='Key_Value_RDD'></a>
Key-Value RDD
We want to count the number of appearances of every word
End of explanation
"""
words.groupBy(lambda k: k).take(10)
# That method returns an iterable for each different word. This iterable contains a list of all the appearances of the word.
# Lets print its contents
tuples = words.groupBy(lambda k: k).take(5)
for t in tuples:
print(t[0], list(t[1]))
# Now, to compute the number of appearances, we just have to count the elements in the iterator
words.groupBy(
lambda k: k
).map(
lambda t: (t[0], len(list(t[1])))
).take(5)
# But this is VERY EXPENSIVE in terms of memory,
# as all the word instances must be stored in a list before they can be counted.
# We can do it much better!
"""
Explanation: groupBy()
End of explanation
"""
words.map(
lambda w: (w, 1)
).take(10)
words.map(
lambda w: (w, 1)
).reduceByKey(
lambda k1, k2: k1 + k2
).take(10)
"""
Explanation: reduceByKey
End of explanation
"""
%load -r 40-49 solutions/13_01_Big_Data.py
"""
Explanation: Exercise
Get the 10 most-used words and its number of appearances
End of explanation
"""
%load -r 50-69 solutions/13_01_Big_Data.py
%load -r 70-79 solutions/13_01_Big_Data.py
"""
Explanation: Exercise
Print the 10 most used words longer than 5 characters (case-insensitive)
How many words, longer than 5 characters, are used more than 500 times? (case-insensitive)
End of explanation
"""
from pyspark.sql import SQLContext
sqlc = SQLContext(sc)
gaia = sqlc.read.csv('../resources/gaia.csv.bz2', comment='#', header=True, inferSchema=True)
gaia
gaia.count()
gaia.head(5)
"""
Explanation: DataFrame
End of explanation
"""
%matplotlib inline
import pyspark.sql.functions as func
g_hist = gaia.groupBy(
(
func.floor(gaia.mag_g * 10) / 10
).alias('mag_g'),
).count().orderBy(
'mag_g'
)
g_hist.take(10)
g_hist.toPandas().set_index('mag_g').plot(loglog=True)
"""
Explanation: <a id='Pandas_interface'></a>
Pandas-like interface
End of explanation
"""
%load -r 90-99 solutions/13_01_Big_Data.py
"""
Explanation: Exercise
Plot an 'ra' histogram in 1-degree bins (count how many stars are in each bin).
Can you spot the galaxy center? ;)
End of explanation
"""
sqlc.registerDataFrameAsTable(gaia, "gaia")
g_hist = sqlc.sql("""
SELECT CAST(FLOOR(mag_g*10)/10. AS FLOAT) AS mag_g, COUNT(*) AS `count`
FROM gaia
GROUP BY 1
ORDER BY 1
""")
g_hist.take(10)
g_hist.toPandas().set_index('mag_g').plot(loglog=True)
"""
Explanation: <a id='SQL_interface'></a>
SQL interface
End of explanation
"""
%load -r 100-109 solutions/13_01_Big_Data.py
"""
Explanation: Exercise
Plot an 'ra' histogram in 1-degree bins (count how many stars are in each bin).
Can you spot the galaxy center? ;)
End of explanation
"""
|
ijstokes/bokeh-blaze-tutorial
|
solutions/.ipynb_checkpoints/1.1 Charts - Timeseries (solution)-checkpoint.ipynb
|
mit
|
import pandas as pd
from bokeh.charts import TimeSeries, output_notebook, show
# Get data
df = pd.read_csv('data/Land_Ocean_Monthly_Anomaly_Average.csv')
# Process data
df['datetime'] = pd.to_datetime(df['datetime'])
df = df[['anomaly','datetime']]
# Output option
output_notebook()
# Create timeseries chart
t = TimeSeries(df, x='datetime')
# Show chart
show(t)
"""
Explanation: <img src=images/continuum_analytics_bw.png align="left" width="15%" style="margin-right:15%">
<h1 align='center'>Berkeley Earth</h1>
1.1 Charts - Timeseries
Exercise: Visualize the evolution of the temperature anomaly monthly average over time with a timeseries chart
Data: 'data/Land_Ocean_Monthly_Anomaly_Average.csv'
Tips:
import pandas as pd
pd.read_csv()
pd.to_datetime()
End of explanation
"""
# Style your timeseries chart
t = TimeSeries(df, x='datetime', xlabel='time', ylabel='Anomaly(ºC)',
xgrid = False, ygrid=True, tools=False, width=950, height=300,
title="Temperature Anomaly(ºC) Monthly Average", palette=["grey"])
# Show new chart
show(t)
"""
Explanation: Exercise: Style your plot
Ideas:
Add a title
Add axis labels
Change width and height
Deactivate toolbox or customize available tools
Change line color
Charts arguments can be found: http://bokeh.pydata.org/en/latest/docs/user_guide/charts.html#generic-arguments
End of explanation
"""
# Compute moving average
df['moving_average'] = pd.rolling_mean(df['anomaly'], 12)
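# Note: pd.rolling_mean() only exists in older pandas versions; in current pandas the
# equivalent is df['anomaly'].rolling(12).mean()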
# Create chart with moving average
t = TimeSeries(df, x='datetime', xlabel='time', ylabel='Anomaly(ºC)',
xgrid = False, ygrid=True, tools=False, width=950, height=300, legend="bottom_right",
title="Temperature Anomaly(ºC) Monthly Average", palette=["grey", "red"])
# Show chart with moving average
show(t)
"""
Explanation: Exercise: Add the moving annual average to your chart
Tips:
pd.rolling_mean()
End of explanation
"""
|
bekbote/project_repository
|
0207_Vectors-1549598493596.ipynb
|
apache-2.0
|
plt.quiver(0,0,3,4)
plt.show()
plt.quiver(0,0,3,4, scale_units='xy', angles='xy', scale=1)
plt.show()
plt.quiver(0,0,3,4, scale_units='xy', angles='xy', scale=1)
plt.xlim(-10,10)
plt.ylim(-10,10)
plt.show()
plt.quiver(0,0,3,4, scale_units='xy', angles='xy', scale=1, color='r')
plt.quiver(0,0,-3,4, scale_units='xy', angles='xy', scale=1, color='g')
plt.xlim(-10,10)
plt.ylim(-10,10)
plt.show()
def plot_vectors(vecs):
colors = ['r', 'b', 'g', 'y']
i = 0
for vec in vecs:
plt.quiver(vec[0], vec[1], vec[2], vec[3], scale_units='xy', angles='xy', scale=1, color=colors[i%len(colors)])
i += 1
plt.xlim(-10,10)
plt.ylim(-10,10)
plt.show()
plot_vectors([(0,0,3,4), (0,0,-3,4), (0,0,-3,-2), (0,0,4,-1)])
"""
Explanation: Vector plotting
End of explanation
"""
vecs = [np.asarray([0,0,3,4]), np.asarray([0,0,-3,4]), np.asarray([0,0,-3,-2]), np.asarray([0,0,4,-1])]
plot_vectors([vecs[0], vecs[3]])
vecs[0] + vecs[3]
plot_vectors([vecs[0], vecs[3], vecs[0] + vecs[3]])
plot_vectors([vecs[0], vecs[0], vecs[0] + vecs[0]])
plot_vectors([vecs[0], vecs[3], vecs[0] - vecs[3]])
plot_vectors([-vecs[0], vecs[3], - vecs[0] + (vecs[3])])
"""
Explanation: Vector addition and subtraction
End of explanation
"""
vecs = [np.asarray([0,0,5,4]), np.asarray([0,0,-3,4]), np.asarray([0,0,-3,-2]), np.asarray([0,0,4,-1])]
plot_vectors(vecs)
a = np.asarray([5, 4])
b = np.asarray([-3, -2])
"""
Explanation: Vector dot product
End of explanation
"""
a_dot_b = np.dot(a, b)
print(a_dot_b)
"""
Explanation: $\vec{a}\cdot\vec{b} = |\vec{a}| |\vec{b}| \cos(\theta) = a_x b_x + a_y b_y$
End of explanation
"""
a_b = np.dot(a, b)/np.linalg.norm(b)
print(a_b)
"""
Explanation: $a_b = |\vec{a}| \cos(\theta) = |\vec{a}|\frac{\vec{a}\cdot\vec{b}}{|\vec{a}||\vec{b}|} = \frac{\vec{a}\cdot\vec{b}}{|\vec{b}|}$
End of explanation
"""
vec_a_b = (a_b/np.linalg.norm(b))*b
print(vec_a_b)
plot_vectors([np.asarray([0,0,3,4]), np.asarray([0,0,4,-1]), np.asarray([0, 0, 1.88235294, -0.47058824])])
"""
Explanation: $\vec{a_b} = a_b \hat{b} = a_b \frac{\vec{b}}{|\vec{b}|}$
End of explanation
"""
def plot_linear_combination(a, b, w1, w2):
plt.quiver(0,0,a[0],a[1], scale_units='xy', angles='xy', scale=1, color='r')
plt.quiver(0,0,b[0],b[1], scale_units='xy', angles='xy', scale=1, color='b')
c = w1 * a + w2 * b
plt.quiver(0,0,c[0],c[1], scale_units='xy', angles='xy', scale=1, color='g')
plt.xlim(-10,10)
plt.ylim(-10,10)
plt.show()
a = np.asarray([3, 4])
b = np.asarray([1.5, 2])
plot_linear_combination(a, b, -1, 1)
def plot_span(a, b):
for i in range(1000):
w1 = (np.random.random(1) - 0.5) * 3
w2 = (np.random.random(1) - 0.5) * 3
c = w1 * a + w2 * b
plt.quiver(0,0,c[0],c[1], scale_units='xy', angles='xy', scale=1, color='g')
plt.quiver(0,0,a[0],a[1], scale_units='xy', angles='xy', scale=1, color='r')
plt.quiver(0,0,b[0],b[1], scale_units='xy', angles='xy', scale=1, color='b')
plt.xlim(-10,10)
plt.ylim(-10,10)
plt.show()
plot_span(a, b)
"""
Explanation: Linear combination
$\vec{c} = w_1 \vec{a} + w_2 \vec{b}$
End of explanation
"""
|
ljchang/psyc63
|
Notebooks/2_Introduction_to_Dataframes_&_Plotting.ipynb
|
mit
|
# matplotlib inline is an example of 'cell magic' and
# enables plotting IN the notebook and not opening another window.
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
"""
Explanation: Dataframes ( Pandas ) and Plotting ( Matplotlib/Seaborn )
Written by Jin Cheong & Luke Chang
In this lab we are going to learn how to load and manipulate datasets in a dataframe format using Pandas
and create beautiful plots using Matplotlib and Seaborn. Pandas is akin to a data frame in R and provides an intuitive way to interact with data in a 2D data frame. Matplotlib is a standard plotting library that is similar in functionality to Matlab's object oriented plotting. Seaborn is also a plotting library built on the Matplotlib framework which carries useful pre-configured plotting schemes.
After the tutorial you will have the chance to apply the methods to a new set of data.
Also, here is a great set of notebooks that also covers the topic
First we load the basic packages we will be using in this tutorial. Notice how we import the modules using an abbreviated name. This is to reduce the amount of text we type when we use the functions.
End of explanation
"""
# Import data
df = pd.read_csv('../Data/salary.csv',sep = ',', header='infer')
# recap on how to look for Docstrings.
pd.read_csv?
"""
Explanation: Pandas
Loading Data
We use the pd.read_csv() to load a .csv file into a dataframe.
Note that read_csv() has many options that can be used to make sure you load the data correctly.
End of explanation
"""
df
"""
Explanation: Ways to check the dataframe
There are many ways to examine your dataframe.
One easy way is to execute the dataframe itself.
End of explanation
"""
print('There are %i rows and %i columns in this data set' % df.shape)
df.head()
"""
Explanation: However, often the dataframes can be large and we may be only interested in seeing the first few rows. df.head() is useful for this purpose. shape is another useful method for getting the dimensions of the matrix. We will print the number of rows and columns in this data set by using output formatting. Use the % sign to indicate the type of data (e.g., %i or %d = integer, %f = float, %s = string), then use the % operator followed by a tuple of the values you would like to insert into the text. See here for more info about formatting text.
End of explanation
"""
print("Indexes")
print(df.index)
print("Columns")
print(df.columns)
print("Columns are like keys of a dictionary")
print(df.keys())
"""
Explanation: On the top row, you have column names, that can be called like a dictionary (a dataframe can be essentially thought of as a dictionary with column names as the keys). The left most column (0,1,2,3,4...) is called the index of the dataframe. The default index is sequential integers, but it can be set to anything as long as each row is unique (e.g., subject IDs)
End of explanation
"""
df[['salary']]
"""
Explanation: You can access the values of a column by calling it directly. Double bracket returns a dataframe
End of explanation
"""
df['salary']
"""
Explanation: Single bracket returns a Series
End of explanation
"""
df.salary
"""
Explanation: You can also call a column like an attribute if the column name is a string
End of explanation
"""
df['pubperyear'] = 0
"""
Explanation: You can create new columns to fit your needs.
For instance, you can initialize a new column with zeros.
End of explanation
"""
df['pubperyear'] = df['publications']/df['years']
df.head()
"""
Explanation: Here we can create a new column pubperyear, which is the ratio of the number of papers published per year
End of explanation
"""
df.loc[0,['salary']]
"""
Explanation: Indexing and slicing
Indexing in Pandas can be tricky. There are four ways to index: loc, iloc, ix, and explicit indexing(useful for booleans).
First, we will try using .loc. This method references the explicit index. it works for both index names and also column names.
End of explanation
"""
df.iloc[0:3,0:3]
"""
Explanation: Next we wil try .iloc. This method references the implicit python index (starting from 0, exclusive of last number). You can think of this like row by column indexing using integers.
End of explanation
"""
df.ix[0:3,0:3]
"""
Explanation: There is also an older method called .ix, which will likely eventually be phased out of pandas. It can be useful to combine explicit and implicit indexing.
End of explanation
"""
maledf = df[df.gender==0].reset_index(drop=True)
femaledf = df[df.gender==1].reset_index(drop=True)
"""
Explanation: Let's make a new data frame with just Males and another for just Females. Notice how we added the .reset_index(drop=True) method? This is because assigning a new dataframe based on indexing another dataframe will retain the original index. We need to explicitly tell pandas to reset the index if we want it to start from zero.
End of explanation
"""
df[ (df.salary > 90000) & (df.salary < 100000)]
"""
Explanation: Boolean or logical indexing is useful if you need to sort the data based on some True or False value.
For instance, who are the people with salaries greater than 90K but lower than 100K ?
End of explanation
"""
df.isnull()
"""
Explanation: Dealing with missing values
It is easy to quickly count the number of missing values for each column in the dataset using the isnull() method. One thing that is nice about Python is that you can chain commands, which means that the output of one method can be the input into the next method. This allows us to write intuitive and concise code. Notice how we take the sum() of all of the null cases.
The isnull() method will return a dataframe with True/False values on whether a datapoint is null or not a number (nan).
End of explanation
"""
df.isnull().sum()
"""
Explanation: We can chain the .null() and .sum() methods to see how many null values are added up.
End of explanation
"""
df[df.isnull().any(axis=1)]
# you may look at where the values are not null
# Note that indexes 18 and 24 are missing.
df[~df.isnull().any(axis=1)]
"""
Explanation: You can use boolean indexing once again to see the datapoints that have missing values. We chained the method .any() which will check if there are any True values for a given axis. Axis=0 indicates rows, while Axis=1 indicates columns. So here we are creating a boolean index for rows where any column has a missing value.
End of explanation
"""
df = df.dropna()
"""
Explanation: There are different techniques for dealing with missing data. An easy one is to simply remove rows that have any missing values using the dropna() method.
End of explanation
"""
print('There are %i rows and %i columns in this data set' % df.shape)
df.isnull().sum()
"""
Explanation: Now we can check to make sure the missing rows are removed. Let's also check the new dimensions of the dataframe.
End of explanation
"""
df.describe().transpose()
"""
Explanation: Describing the data
We can use the .describe() method to get a quick summary of the continuous values of the data frame. We will .transpose() the output to make it slightly easier to read.
End of explanation
"""
df.departm.describe()
"""
Explanation: We can also get a quick summary of a pandas series, or a specific column of a pandas dataframe.
End of explanation
"""
df.groupby('gender').mean()
"""
Explanation: Manipulating data in Groups
One manipulation we often do is look at variables in groups.
One way to do this is to use the .groupby(key) method.
The key is a column that is used to group the variables together.
For instance, if we want to group the data by gender and get group means, we perform the following.
End of explanation
"""
df[df['gender']==2]
"""
Explanation: Other default aggregation methods include .count(), .mean(), .median(), .min(), .max(), .std(), .var(), and .sum()
Before we move on, it looks like there were more than 2 genders specified in our data.
This is likely an error in the data collection process, so let's recap how we might remove this datapoint.
End of explanation
"""
df = df[df['gender']!=2]
"""
Explanation: replace original dataframe without the miscoded data
End of explanation
"""
df.groupby('gender').mean()
"""
Explanation: Now we have a corrected dataframe!
End of explanation
"""
# key: We use the departm as the grouping factor.
key = df['departm']
# Let's create an anonmyous function for calculating zscores using lambda:
# We want to standardize salary for each department.
zscore = lambda x: (x - x.mean()) / x.std()
# Now let's calculate zscores separately within each department
transformed = df.groupby(key).transform(zscore)
df['salary_in_departm'] = transformed['salary']
"""
Explanation: Another powerful tool in Pandas is the split-apply-combine method.
For instance, let's say we also want to look at how much each professor is earning with respect to their department.
Let's say we want to subtract the departmental mean from each professor's salary and divide it by the departmental standard deviation.
We can do this by using the groupby(key) method chained with the .transform(function) method.
It will group the dataframe by the key column, perform the "function" transformation of the data and return data in same format.
To learn more, see link here
End of explanation
"""
df.head()
"""
Explanation: Now we have salary_in_departm column showing standardized salary per department.
End of explanation
"""
pd.concat([femaledf,maledf],axis = 0)
"""
Explanation: Combining datasets : pd.concat
Recall that we sliced the dataframe into male and female dataframes in 2.3 Indexing and Slicing. Now we will learn how to put dataframes together, which is done with the pd.concat method. Note how the index of this output retains the old index.
End of explanation
"""
pd.concat([maledf,femaledf],axis = 0).reset_index(drop=True)
"""
Explanation: We can reset the index to start at zero using the .reset_index() method
End of explanation
"""
df[['salary','gender']].boxplot(by='gender')
"""
Explanation: Plotting in pandas
Before we move into Matplotlib, here are a few plotting methods already implemented in Pandas.
Boxplot
End of explanation
"""
df[['salary','years']].plot(kind='scatter', x='years', y='salary')
"""
Explanation: Scatterplot
End of explanation
"""
# create a new numerical Series called dept_num for visualization.
df['dept_num'] = 0
df.loc[:,['dept_num']] = df.departm.map({'bio':0, 'chem':1,'geol':2,'neuro':3,'stat':4,'physics':5,'math':6})
df.tail()
## Now plot all four categories
f, axs = plt.subplots(1, 4, sharey=True)
f.suptitle('Salary in relation to other variables')
df.plot(kind='scatter', x='gender', y='salary', ax=axs[0], figsize=(15, 4))
df.plot(kind='scatter', x='dept_num', y='salary', ax=axs[1])
df.plot(kind='scatter', x='years', y='salary', ax=axs[2])
df.plot(kind='scatter', x='age', y='salary', ax=axs[3])
# The problem is that it treats department as a continuous variable.
"""
Explanation: Plotting Categorical Variables. Replacing variables with .map
If we want to plot department on the x-axis, Pandas plotting functions won't know what to do
because they don't know where to put bio or chem on a numerical x-axis.
Therefore one needs to change them to a numerical variable to plot them with the basic functionalities (we will later see how Seaborn solves this).
End of explanation
"""
means = df.groupby('gender').mean()['salary']
errors = df.groupby('gender').std()['salary'] / np.sqrt(df.groupby('gender').count()['salary'])
ax = means.plot.bar(yerr=errors,figsize=(5,3))
"""
Explanation: Generating bar - errorbar plots in Pandas
End of explanation
"""
plt.figure(figsize=(2,2))
plt.plot(range(0,10),np.sqrt(range(0,10)))
plt.show()
"""
Explanation: Matplotlib
Learn other matplotlib tutorials here
create a basic lineplot
End of explanation
"""
plt.figure(figsize=(2,2))
plt.scatter(df.salary,df.age,color='b',marker='*')
plt.show()
"""
Explanation: create a basic scatterplot
End of explanation
"""
# plt.subplots allows you to control different aspects of multiple plots
f,ax = plt.subplots(1,1,figsize=(4,2))
ax.scatter(df.salary,df.age,color='k',marker='o')
# Setting limits on axes
ax.set_xlim([40000,120000])
ax.set_ylim([20,70])
# Changing tick labels
ax.set_xticklabels([str(int(tick)/1000)+'K' for tick in ax.get_xticks()])
# changing label names
ax.set_xlabel('salary')
ax.set_ylabel('age')
# changing the title
ax.set_title('Scatterplot of age and salary')
plt.show()
# save figure
f.savefig('MyFirstPlot.png')
"""
Explanation: Modify different aspects of the plot
End of explanation
"""
f,axs = plt.subplots(1,2,figsize=(15,5)) # create a plot figure, specify the size and number of figures.
axs[0].scatter(df.age,df.salary,color='k',marker='o')
axs[0].set_ylim([40000,120000])
axs[0].set_xlim([20,70])
axs[0].set_yticklabels([str(int(tick)/1000)+'K' for tick in axs[0].get_yticks()])
axs[0].set_ylabel('salary')
axs[0].set_xlabel('age')
axs[0].set_title('Scatterplot of age and salary')
axs[1].scatter(df.publications,df.salary,color='k',marker='o')
axs[1].set_ylim([40000,120000])
axs[1].set_xlim([20,70])
axs[1].set_yticklabels([str(int(tick)/1000)+'K' for tick in axs[1].get_yticks()])
axs[1].set_ylabel('salary')
axs[1].set_xlabel('publications')
axs[1].set_title('Scatterplot of publication and salary')
f.suptitle('Scatterplots of salary and other factors')
plt.show()
"""
Explanation: Create multiple plots
End of explanation
"""
ax = sns.regplot(df.age,df.salary)
ax.set_title('Salary and age')
plt.show()
sns.jointplot("age", "salary", data=df, kind='reg');
"""
Explanation: Seaborn
Seaborn is a plotting library built on Matplotlib that has many pre-configured plots that are often used for visualization.
Other great tutorials about seaborn are here
End of explanation
"""
sns.catplot(x='departm',y='salary',hue='gender',data=df,ci=68,kind='bar')
plt.show()
"""
Explanation: Factor plots
Factor plots allow you to visualize the distribution of parameters in different forms such as point, bar, or violin graphs.
Here are some possible values for kind : {point, bar, count, box, violin, strip}
End of explanation
"""
sns.heatmap(df[['salary','years','age','publications']].corr(),annot=True,linewidths=.5)
"""
Explanation: Heatmap plots
Heatmap plots allow you to visualize matrices such as correlation matrices that show relationships across multiple variables
End of explanation
"""
|