Dataset columns: repo_name (string, 6-77 chars), path (string, 8-215 chars), license (string, 15 classes), content (string, 335-154k chars)
joommf/tutorial
workshops/Durham/reference/standard_problem3.ipynb
bsd-3-clause
import discretisedfield as df import oommfc as oc """ Explanation: Micromagnetic standard problem 3 Authors: Marijan Beg, Ryan A. Pepper, and Hans Fangohr Date: 12 December 2016 Problem specification This problem is to calculate the single domain limit of a cubic magnetic particle. This is the size $L$ of equal energy for the so-called flower state (which one may also call a splayed state or a modified single-domain state) on the one hand, and the vortex or curling state on the other hand. Geometry: A cube with edge length, $L$, expressed in units of the intrinsic length scale, $l_\text{ex} = \sqrt{A/K_\text{m}}$, where $K_\text{m}$ is a magnetostatic energy density, $K_\text{m} = \frac{1}{2}\mu_{0}M_\text{s}^{2}$. Material parameters: uniaxial anisotropy $K_\text{u}$ with $K_\text{u} = 0.1 K_\text{m}$ and the easy axis directed parallel to a principal axis of the cube (0, 0, 1), and exchange energy constant $A = \frac{1}{2}\mu_{0}M_\text{s}^{2}l_\text{ex}^{2}$. More details about the standard problem 3 can be found in Ref. 1. Simulation Firstly, we import all necessary modules. End of explanation """ import numpy as np # Function for initialising the flower state. def m_init_flower(pos): x, y, z = pos[0]/1e-9, pos[1]/1e-9, pos[2]/1e-9 mx = 0 my = 2*z - 1 mz = -2*y + 1 norm_squared = mx**2 + my**2 + mz**2 if norm_squared <= 0.05: return (1, 0, 0) else: return (mx, my, mz) # Function for initialising the vortex state. def m_init_vortex(pos): x, y, z = pos[0]/1e-9, pos[1]/1e-9, pos[2]/1e-9 mx = 0 my = np.sin(np.pi/2 * (x-0.5)) mz = np.cos(np.pi/2 * (x-0.5)) return (mx, my, mz) """ Explanation: The following two functions are used for initialising the system's magnetisation [1]. End of explanation """ def minimise_system_energy(L, m_init): print("L={:9}, {} ".format(L, m_init.__name__), end="") N = 16 # discretisation in one dimension cubesize = 100e-9 # cube edge length (m) cellsize = cubesize/N # discretisation in all three dimensions. lex = cubesize/L # exchange length. Km = 1e6 # magnetostatic energy density (J/m**3) Ms = np.sqrt(2*Km/oc.mu0) # saturation magnetisation (A/m) A = 0.5 * oc.mu0 * Ms**2 * lex**2 # exchange energy constant K = 0.1*Km # Uniaxial anisotropy constant u = (0, 0, 1) # Uniaxial anisotropy easy-axis p1 = (0, 0, 0) # Minimum sample coordinate. p2 = (cubesize, cubesize, cubesize) # Maximum sample coordinate. cell = (cellsize, cellsize, cellsize) # Discretisation. mesh = oc.Mesh(p1=p1, p2=p2, cell=cell) # Create a mesh object. system = oc.System(name="stdprob3") system.hamiltonian = oc.Exchange(A) + oc.UniaxialAnisotropy(K, u) + oc.Demag() system.m = df.Field(mesh, value=m_init, norm=Ms) md = oc.MinDriver() md.drive(system) return system """ Explanation: The following function is used for convenience. It takes two arguments: $L$ - the cube edge length in units of $l_\text{ex}$, and the function for initialising the system's magnetisation. It returns the relaxed system object. Please refer to other tutorials for more details on how to create system objects and drive them using specific drivers. End of explanation """ %matplotlib inline system = minimise_system_energy(8, m_init_vortex) fig = system.m.plot_slice('y', 50e-9, xsize=4) """ Explanation: Relaxed magnetisation states Now, we show the magnetisation configurations of two relaxed states.
Vortex state: End of explanation """ system = minimise_system_energy(8, m_init_flower) fig = system.m.plot_slice('x', 50e-9, xsize=4) """ Explanation: Flower state: End of explanation """ L_array = np.linspace(8, 9, 5) # values of L for which the system is relaxed. vortex_energies = [] flower_energies = [] for L in L_array: vortex = minimise_system_energy(L, m_init_vortex) flower = minimise_system_energy(L, m_init_flower) vortex_energies.append(vortex.total_energy()) flower_energies.append(flower.total_energy()) # Plot the energy dependences. import matplotlib.pyplot as plt plt.plot(L_array, vortex_energies, 'o-', label='vortex') plt.plot(L_array, flower_energies, 'o-', label='flower') plt.xlabel('L (lex)') plt.ylabel('E') plt.xlim([8.0, 9.0]) plt.grid() plt.legend() """ Explanation: Energy crossing Now, we can plot the energies of both vortex and flower states as a function of cube edge length. This will give us an idea where the state transition occurs. End of explanation """ from scipy.optimize import bisect def energy_difference(L): vortex = minimise_system_energy(L, m_init_vortex) flower = minimise_system_energy(L, m_init_flower) return vortex.total_energy() - flower.total_energy() cross_section = bisect(energy_difference, 8, 9, xtol=0.1) print("The transition between vortex and flower states occurs at {}*lex".format(cross_section)) """ Explanation: We now know that the energy crossing occurs between $8l_\text{ex}$ and $9l_\text{ex}$, so a bisection algorithm can be used to locate the crossing more precisely (here to within xtol=0.1). End of explanation """
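The material parameters in minimise_system_energy follow directly from the problem specification ($K_\text{m} = \frac{1}{2}\mu_{0}M_\text{s}^{2}$, $l_\text{ex} = \sqrt{A/K_\text{m}}$, $K_\text{u} = 0.1 K_\text{m}$). The short sketch below is not part of the original notebook; it is a plain-NumPy check of those relations for one value of $L$, assuming only $\mu_{0} = 4\pi \times 10^{-7}$ H/m in place of oc.mu0.

import numpy as np

# Standalone check of the parameter relations used above (illustrative only).
mu0 = 4 * np.pi * 1e-7          # vacuum permeability (H/m), assumed here
L = 8                           # cube edge length in units of l_ex
cubesize = 100e-9               # cube edge length (m), as in the notebook
lex = cubesize / L              # exchange length (m)
Km = 1e6                        # magnetostatic energy density (J/m**3)
Ms = np.sqrt(2 * Km / mu0)      # saturation magnetisation, from Km = 0.5*mu0*Ms**2
A = 0.5 * mu0 * Ms**2 * lex**2  # exchange energy constant, from lex = sqrt(A/Km)
K = 0.1 * Km                    # uniaxial anisotropy constant

# Recovering the exchange length from A and Km reproduces cubesize/L.
assert np.isclose(np.sqrt(A / Km), lex)
print("Ms = {:.3e} A/m, A = {:.3e} J/m, K = {:.3e} J/m**3".format(Ms, A, K))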
piyueh/SEM-Toolbox
Huynh2007/check_fL_not_eq_f(uL).ipynb
mit
xi = quad.GaussJacobi(4).nodes """ Explanation: The coordinates of the solution points using Gauss-Legendre quadrature points. End of explanation """ Lk = poly.LagrangeBasis(xi) """ Explanation: The Lagrange basis using the Gauss-Legendre quadrature points. End of explanation """ def u_exact(x): '''exact solution of u''' return numpy.sin(x) """ Explanation: The exact solution of $u(x)$. End of explanation """ def f(ui): '''flux at a specific location using the known velocity at the same location''' return 0.5 * ui * ui """ Explanation: Define a function to calculate the flux at a location given a known $u$. End of explanation """ ui = numpy.dot(Lk(xi), u_exact(xi)) """ Explanation: Calculate $u$ at the solution points. End of explanation """ fi = f(ui) """ Explanation: Calculate the flux at the solution points using $u$ at those locations. End of explanation """ uL = numpy.dot(Lk(-1), u_exact(xi)) """ Explanation: Use $u$ at the solution points and Lagrange interpolation to calculate $u$ at the left boundary. End of explanation """ f(uL) """ Explanation: Use $u$ at the left boundary to calculate the flux at that boundary (that is, the $f(u_L)$ in the paper). End of explanation """ fL = numpy.dot(Lk(-1), fi) print(fL) """ Explanation: Use $f$ at the solution points and Lagrange interpolation to calculate the flux at that boundary (i.e., the $f_L$ in the paper). End of explanation """
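The same check can be reproduced with generic NumPy/SciPy tools instead of the toolbox's quad and poly modules. The sketch below is illustrative only (it assumes the Gauss-Jacobi nodes used above coincide with the Gauss-Legendre nodes and uses scipy.interpolate.lagrange for the interpolation); it shows why $f_L \neq f(u_L)$ for the nonlinear flux $f(u) = \frac{1}{2}u^{2}$: interpolating the flux values is not the same as taking the flux of the interpolated $u$.

import numpy as np
from scipy.interpolate import lagrange

# Four Gauss-Legendre solution points on [-1, 1] (stand-in for quad.GaussJacobi(4).nodes).
xi, _ = np.polynomial.legendre.leggauss(4)
u = np.sin(xi)        # exact u at the solution points
fi = 0.5 * u * u      # flux at the solution points

uL = lagrange(xi, u)(-1.0)   # Lagrange-interpolated u at the left boundary
fL = lagrange(xi, fi)(-1.0)  # Lagrange-interpolated flux at the left boundary

print(0.5 * uL * uL)  # f(u_L): flux of the interpolated u
print(fL)             # f_L: interpolated flux; differs because f is nonlinear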
tensorflow/docs-l10n
site/en-snapshot/lattice/tutorials/keras_layers.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ #@test {"skip": true} !pip install tensorflow-lattice pydot """ Explanation: Creating Keras Models with TFL Layers <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/keras_layers"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/keras_layers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/lattice/blob/master/docs/tutorials/keras_layers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/keras_layers.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview You can use TFL Keras layers to construct Keras models with monotonicity and other shape constraints. This example builds and trains a calibrated lattice model for the UCI heart dataset using TFL layers. In a calibrated lattice model, each feature is transformed by a tfl.layers.PWLCalibration or a tfl.layers.CategoricalCalibration layer and the results are nonlinearly fused using a tfl.layers.Lattice. Setup Installing TF Lattice package: End of explanation """ import tensorflow as tf import logging import numpy as np import pandas as pd import sys import tensorflow_lattice as tfl from tensorflow import feature_column as fc logging.disable(sys.maxsize) """ Explanation: Importing required packages: End of explanation """ # UCI Statlog (Heart) dataset. csv_file = tf.keras.utils.get_file( 'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv') training_data_df = pd.read_csv(csv_file).sample( frac=1.0, random_state=41).reset_index(drop=True) training_data_df.head() """ Explanation: Downloading the UCI Statlog (Heart) dataset: End of explanation """ LEARNING_RATE = 0.1 BATCH_SIZE = 128 NUM_EPOCHS = 100 """ Explanation: Setting the default values used for training in this guide: End of explanation """ # Lattice layer expects input[i] to be within [0, lattice_sizes[i] - 1.0], so lattice_sizes = [3, 2, 2, 2, 2, 2, 2] """ Explanation: Sequential Keras Model This example creates a Sequential Keras model and only uses TFL layers. Lattice layers expect input[i] to be within [0, lattice_sizes[i] - 1.0], so we need to define the lattice sizes ahead of the calibration layers so we can properly specify output range of the calibration layers. 
End of explanation """ combined_calibrators = tfl.layers.ParallelCombination() """ Explanation: We use a tfl.layers.ParallelCombination layer to group together calibration layers which have to be executed in parallel in order to be able to create a Sequential model. End of explanation """ # ############### age ############### calibrator = tfl.layers.PWLCalibration( # Every PWLCalibration layer must have keypoints of piecewise linear # function specified. Easiest way to specify them is to uniformly cover # entire input range by using numpy.linspace(). input_keypoints=np.linspace( training_data_df['age'].min(), training_data_df['age'].max(), num=5), # You need to ensure that input keypoints have same dtype as layer input. # You can do it by setting dtype here or by providing keypoints in such # format which will be converted to desired tf.dtype by default. dtype=tf.float32, # Output range must correspond to expected lattice input range. output_min=0.0, output_max=lattice_sizes[0] - 1.0, ) combined_calibrators.append(calibrator) # ############### sex ############### # For boolean features simply specify CategoricalCalibration layer with 2 # buckets. calibrator = tfl.layers.CategoricalCalibration( num_buckets=2, output_min=0.0, output_max=lattice_sizes[1] - 1.0, # Initializes all outputs to (output_min + output_max) / 2.0. kernel_initializer='constant') combined_calibrators.append(calibrator) # ############### cp ############### calibrator = tfl.layers.PWLCalibration( # Here instead of specifying dtype of layer we convert keypoints into # np.float32. input_keypoints=np.linspace(1, 4, num=4, dtype=np.float32), output_min=0.0, output_max=lattice_sizes[2] - 1.0, monotonicity='increasing', # You can specify TFL regularizers as a tuple ('regularizer name', l1, l2). kernel_regularizer=('hessian', 0.0, 1e-4)) combined_calibrators.append(calibrator) # ############### trestbps ############### calibrator = tfl.layers.PWLCalibration( # Alternatively, you might want to use quantiles as keypoints instead of # uniform keypoints input_keypoints=np.quantile(training_data_df['trestbps'], np.linspace(0.0, 1.0, num=5)), dtype=tf.float32, # Together with quantile keypoints you might want to initialize piecewise # linear function to have 'equal_slopes' in order for output of layer # after initialization to preserve original distribution. kernel_initializer='equal_slopes', output_min=0.0, output_max=lattice_sizes[3] - 1.0, # You might consider clamping extreme inputs of the calibrator to output # bounds. clamp_min=True, clamp_max=True, monotonicity='increasing') combined_calibrators.append(calibrator) # ############### chol ############### calibrator = tfl.layers.PWLCalibration( # Explicit input keypoint initialization. input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0], dtype=tf.float32, output_min=0.0, output_max=lattice_sizes[4] - 1.0, # Monotonicity of calibrator can be decreasing. Note that corresponding # lattice dimension must have INCREASING monotonicity regardless of # monotonicity direction of calibrator. monotonicity='decreasing', # Convexity together with decreasing monotonicity result in diminishing # return constraint. convexity='convex', # You can specify list of regularizers. You are not limited to TFL # regularizrs. 
Feel free to use any :) kernel_regularizer=[('laplacian', 0.0, 1e-4), tf.keras.regularizers.l1_l2(l1=0.001)]) combined_calibrators.append(calibrator) # ############### fbs ############### calibrator = tfl.layers.CategoricalCalibration( num_buckets=2, output_min=0.0, output_max=lattice_sizes[5] - 1.0, # For categorical calibration layer monotonicity is specified for pairs # of indices of categories. Output for first category in pair will be # smaller than output for second category. # # Don't forget to set monotonicity of corresponding dimension of Lattice # layer to '1'. monotonicities=[(0, 1)], # This initializer is identical to default one('uniform'), but has fixed # seed in order to simplify experimentation. kernel_initializer=tf.keras.initializers.RandomUniform( minval=0.0, maxval=lattice_sizes[5] - 1.0, seed=1)) combined_calibrators.append(calibrator) # ############### restecg ############### calibrator = tfl.layers.CategoricalCalibration( num_buckets=3, output_min=0.0, output_max=lattice_sizes[6] - 1.0, # Categorical monotonicity can be partial order. monotonicities=[(0, 1), (0, 2)], # Categorical calibration layer supports standard Keras regularizers. kernel_regularizer=tf.keras.regularizers.l1_l2(l1=0.001), kernel_initializer='constant') combined_calibrators.append(calibrator) """ Explanation: We create a calibration layer for each feature and add it to the parallel combination layer. For numeric features we use tfl.layers.PWLCalibration, and for categorical features we use tfl.layers.CategoricalCalibration. End of explanation """ lattice = tfl.layers.Lattice( lattice_sizes=lattice_sizes, monotonicities=[ 'increasing', 'none', 'increasing', 'increasing', 'increasing', 'increasing', 'increasing' ], output_min=0.0, output_max=1.0) """ Explanation: We then create a lattice layer to nonlinearly fuse the outputs of the calibrators. Note that we need to specify the monotonicity of the lattice to be increasing for required dimensions. The composition with the direction of the monotonicity in the calibration will result in the correct end-to-end direction of monotonicity. This includes partial monotonicity of CategoricalCalibration layer. End of explanation """ model = tf.keras.models.Sequential() model.add(combined_calibrators) model.add(lattice) """ Explanation: We can then create a sequential model using the combined calibrators and lattice layers. End of explanation """ features = training_data_df[[ 'age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg' ]].values.astype(np.float32) target = training_data_df[['target']].values.astype(np.float32) model.compile( loss=tf.keras.losses.mean_squared_error, optimizer=tf.keras.optimizers.Adagrad(learning_rate=LEARNING_RATE)) model.fit( features, target, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS, validation_split=0.2, shuffle=False, verbose=0) model.evaluate(features, target) """ Explanation: Training works the same as any other keras model. End of explanation """ # We are going to have 2-d embedding as one of lattice inputs. lattice_sizes = [3, 2, 2, 3, 3, 2, 2] """ Explanation: Functional Keras Model This example uses a functional API for Keras model construction. As mentioned in the previous section, lattice layers expect input[i] to be within [0, lattice_sizes[i] - 1.0], so we need to define the lattice sizes ahead of the calibration layers so we can properly specify output range of the calibration layers. 
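To make the output-range convention easier to see in isolation, here is a condensed, hypothetical two-feature version of the same pattern (feature ranges, keypoints, and lattice sizes are made up and do not correspond to the heart dataset): each calibrator's output_max is tied to the matching entry of lattice_sizes before the calibrated values are fused by a lattice.

import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

# Hypothetical two-feature calibrated lattice: one numeric, one categorical feature.
toy_lattice_sizes = [2, 3]

toy_calibrators = tfl.layers.ParallelCombination()
toy_calibrators.append(
    tfl.layers.PWLCalibration(
        input_keypoints=np.linspace(0.0, 1.0, num=5),  # assumed numeric range [0, 1]
        output_min=0.0,
        output_max=toy_lattice_sizes[0] - 1.0,  # must match the lattice input range
        monotonicity='increasing'))
toy_calibrators.append(
    tfl.layers.CategoricalCalibration(
        num_buckets=3,  # assumed 3 categories
        output_min=0.0,
        output_max=toy_lattice_sizes[1] - 1.0))

toy_lattice = tfl.layers.Lattice(
    lattice_sizes=toy_lattice_sizes,
    monotonicities=['increasing', 'none'],
    output_min=0.0,
    output_max=1.0)

toy_model = tf.keras.models.Sequential([toy_calibrators, toy_lattice])
toy_model.compile(
    loss=tf.keras.losses.mean_squared_error,
    optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))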
End of explanation """ model_inputs = [] lattice_inputs = [] # ############### age ############### age_input = tf.keras.layers.Input(shape=[1], name='age') model_inputs.append(age_input) age_calibrator = tfl.layers.PWLCalibration( # Every PWLCalibration layer must have keypoints of piecewise linear # function specified. Easiest way to specify them is to uniformly cover # entire input range by using numpy.linspace(). input_keypoints=np.linspace( training_data_df['age'].min(), training_data_df['age'].max(), num=5), # You need to ensure that input keypoints have same dtype as layer input. # You can do it by setting dtype here or by providing keypoints in such # format which will be converted to desired tf.dtype by default. dtype=tf.float32, # Output range must correspond to expected lattice input range. output_min=0.0, output_max=lattice_sizes[0] - 1.0, monotonicity='increasing', name='age_calib', )( age_input) lattice_inputs.append(age_calibrator) # ############### sex ############### # For boolean features simply specify CategoricalCalibration layer with 2 # buckets. sex_input = tf.keras.layers.Input(shape=[1], name='sex') model_inputs.append(sex_input) sex_calibrator = tfl.layers.CategoricalCalibration( num_buckets=2, output_min=0.0, output_max=lattice_sizes[1] - 1.0, # Initializes all outputs to (output_min + output_max) / 2.0. kernel_initializer='constant', name='sex_calib', )( sex_input) lattice_inputs.append(sex_calibrator) # ############### cp ############### cp_input = tf.keras.layers.Input(shape=[1], name='cp') model_inputs.append(cp_input) cp_calibrator = tfl.layers.PWLCalibration( # Here instead of specifying dtype of layer we convert keypoints into # np.float32. input_keypoints=np.linspace(1, 4, num=4, dtype=np.float32), output_min=0.0, output_max=lattice_sizes[2] - 1.0, monotonicity='increasing', # You can specify TFL regularizers as tuple ('regularizer name', l1, l2). kernel_regularizer=('hessian', 0.0, 1e-4), name='cp_calib', )( cp_input) lattice_inputs.append(cp_calibrator) # ############### trestbps ############### trestbps_input = tf.keras.layers.Input(shape=[1], name='trestbps') model_inputs.append(trestbps_input) trestbps_calibrator = tfl.layers.PWLCalibration( # Alternatively, you might want to use quantiles as keypoints instead of # uniform keypoints input_keypoints=np.quantile(training_data_df['trestbps'], np.linspace(0.0, 1.0, num=5)), dtype=tf.float32, # Together with quantile keypoints you might want to initialize piecewise # linear function to have 'equal_slopes' in order for output of layer # after initialization to preserve original distribution. kernel_initializer='equal_slopes', output_min=0.0, output_max=lattice_sizes[3] - 1.0, # You might consider clamping extreme inputs of the calibrator to output # bounds. clamp_min=True, clamp_max=True, monotonicity='increasing', name='trestbps_calib', )( trestbps_input) lattice_inputs.append(trestbps_calibrator) # ############### chol ############### chol_input = tf.keras.layers.Input(shape=[1], name='chol') model_inputs.append(chol_input) chol_calibrator = tfl.layers.PWLCalibration( # Explicit input keypoint initialization. input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0], output_min=0.0, output_max=lattice_sizes[4] - 1.0, # Monotonicity of calibrator can be decreasing. Note that corresponding # lattice dimension must have INCREASING monotonicity regardless of # monotonicity direction of calibrator. monotonicity='decreasing', # Convexity together with decreasing monotonicity result in diminishing # return constraint. 
convexity='convex', # You can specify list of regularizers. You are not limited to TFL # regularizrs. Feel free to use any :) kernel_regularizer=[('laplacian', 0.0, 1e-4), tf.keras.regularizers.l1_l2(l1=0.001)], name='chol_calib', )( chol_input) lattice_inputs.append(chol_calibrator) # ############### fbs ############### fbs_input = tf.keras.layers.Input(shape=[1], name='fbs') model_inputs.append(fbs_input) fbs_calibrator = tfl.layers.CategoricalCalibration( num_buckets=2, output_min=0.0, output_max=lattice_sizes[5] - 1.0, # For categorical calibration layer monotonicity is specified for pairs # of indices of categories. Output for first category in pair will be # smaller than output for second category. # # Don't forget to set monotonicity of corresponding dimension of Lattice # layer to '1'. monotonicities=[(0, 1)], # This initializer is identical to default one ('uniform'), but has fixed # seed in order to simplify experimentation. kernel_initializer=tf.keras.initializers.RandomUniform( minval=0.0, maxval=lattice_sizes[5] - 1.0, seed=1), name='fbs_calib', )( fbs_input) lattice_inputs.append(fbs_calibrator) # ############### restecg ############### restecg_input = tf.keras.layers.Input(shape=[1], name='restecg') model_inputs.append(restecg_input) restecg_calibrator = tfl.layers.CategoricalCalibration( num_buckets=3, output_min=0.0, output_max=lattice_sizes[6] - 1.0, # Categorical monotonicity can be partial order. monotonicities=[(0, 1), (0, 2)], # Categorical calibration layer supports standard Keras regularizers. kernel_regularizer=tf.keras.regularizers.l1_l2(l1=0.001), kernel_initializer='constant', name='restecg_calib', )( restecg_input) lattice_inputs.append(restecg_calibrator) """ Explanation: For each feature, we need to create an input layer followed by a calibration layer. For numeric features we use tfl.layers.PWLCalibration and for categorical features we use tfl.layers.CategoricalCalibration. End of explanation """ lattice = tfl.layers.Lattice( lattice_sizes=lattice_sizes, monotonicities=[ 'increasing', 'none', 'increasing', 'increasing', 'increasing', 'increasing', 'increasing' ], output_min=0.0, output_max=1.0, name='lattice', )( lattice_inputs) """ Explanation: We then create a lattice layer to nonlinearly fuse the outputs of the calibrators. Note that we need to specify the monotonicity of the lattice to be increasing for required dimensions. The composition with the direction of the monotonicity in the calibration will result in the correct end-to-end direction of monotonicity. This includes partial monotonicity of tfl.layers.CategoricalCalibration layer. End of explanation """ model_output = tfl.layers.PWLCalibration( input_keypoints=np.linspace(0.0, 1.0, 5), name='output_calib', )( lattice) """ Explanation: To add more flexibility to the model, we add an output calibration layer. End of explanation """ model = tf.keras.models.Model( inputs=model_inputs, outputs=model_output) tf.keras.utils.plot_model(model, rankdir='LR') """ Explanation: We can now create a model using the inputs and outputs. 
End of explanation """ feature_names = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg'] features = np.split( training_data_df[feature_names].values.astype(np.float32), indices_or_sections=len(feature_names), axis=1) target = training_data_df[['target']].values.astype(np.float32) model.compile( loss=tf.keras.losses.mean_squared_error, optimizer=tf.keras.optimizers.Adagrad(LEARNING_RATE)) model.fit( features, target, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS, validation_split=0.2, shuffle=False, verbose=0) model.evaluate(features, target) """ Explanation: Training works the same as any other keras model. Note that, with our setup, input features are passed as separate tensors. End of explanation """
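As a small usage note (not in the original tutorial), inference with the trained functional model uses the same list-of-arrays layout as training, one array per named input:

# Predict with the same per-feature list of arrays used for training.
predictions = model.predict(features)
print(predictions[:5])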
tpin3694/tpin3694.github.io
machine-learning/preprocessing_iris_data.ipynb
mit
from sklearn import datasets import numpy as np from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler """ Explanation: Title: Preprocessing Iris Data Slug: preprocessing_iris_data Summary: Preprocessing iris data using scikit-learn. Date: 2016-09-21 12:00 Category: Machine Learning Tags: Preprocessing Structured Data Authors: Chris Albon Preliminaries End of explanation """ # Load the iris data iris = datasets.load_iris() # Create a variable for the feature data X = iris.data # Create a variable for the target data y = iris.target """ Explanation: Load Data End of explanation """ # Randomly split the data into four new datasets: training features, training outcome, test features, # and test outcome. Set the size of the test data to be 30% of the full dataset. X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) """ Explanation: Split Data For Cross Validation End of explanation """ # Load the standard scaler sc = StandardScaler() # Compute the mean and standard deviation based on the training data sc.fit(X_train) # Scale the training data to be of mean 0 and of unit variance X_train_std = sc.transform(X_train) # Scale the test data using the mean and standard deviation computed from the training data X_test_std = sc.transform(X_test) # Feature Test Data, non-standardized X_test[0:5] # Feature Test Data, standardized. X_test_std[0:5] """ Explanation: Standardize Feature Data End of explanation """
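A quick sanity check, not part of the original notebook and assuming the cells above have been run: the scaler was fit on the training split only, so the standardized training features have per-column mean of roughly 0 and standard deviation of roughly 1, while the test features are only approximately standardized.

import numpy as np

# Per-column statistics of the standardized feature arrays.
print(np.round(X_train_std.mean(axis=0), 6))  # approximately [0, 0, 0, 0]
print(np.round(X_train_std.std(axis=0), 6))   # approximately [1, 1, 1, 1]
print(np.round(X_test_std.mean(axis=0), 2))   # close to, but not exactly, zero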
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-2/cmip6/models/sandbox-1/ocean.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-1', 'ocean') """ Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: TEST-INSTITUTE-2 Source ID: SANDBOX-1 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:44 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) """ Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) """ Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) """ Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. 
Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.2. 
Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Advection Ocean advection 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) """ Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3. 
Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.5. Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) """ Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. 
Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27.2. 
Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. 
Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.4. Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation """
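To make the fill-in pattern above concrete, here is an illustrative sketch of two completed property cells. The property identifiers and the valid choices are taken from the stubs above; the chosen values are hypothetical examples only and must be replaced with whatever actually describes the ocean model being documented.
# Illustrative sketch only: hypothetical example values, not a real model description.
# ENUM property (cardinality 1.1): pass one of the valid choices listed in its stub above.
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
DOC.set_value("Freshwater flux")

# BOOLEAN property: pass a bare True or False rather than a quoted string.
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
DOC.set_value(True)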
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml.ipynb
apache-2.0
import os import tensorflow as tf PROJECT = "your-project-here" # REPLACE WITH YOUR PROJECT ID # Do not change these os.environ["PROJECT"] = PROJECT os.environ["TFVERSION"] = '2.6' %%bash mkdir bqml_data cd bqml_data curl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip' unzip ml-20m.zip yes | bq rm -r $PROJECT:movielens bq --location=US mk --dataset \ --description 'Movie Recommendations' \ $PROJECT:movielens bq --location=US load --source_format=CSV \ --autodetect movielens.ratings gs://cloud-training/recommender-systems/movielens/ratings.csv bq --location=US load --source_format=CSV \ --autodetect movielens.movies_raw gs://cloud-training/recommender-systems/movielens/movies.csv """ Explanation: Collaborative filtering on the MovieLense Dataset Learning objectives 1. Explore the data using BigQuery. 2. Use the model to make recommendations for a user. 3. Use the model to recommend an item to a group of users. Introduction This notebook is based on part of Chapter 9 of BigQuery: The Definitive Guide by Lakshmanan and Tigani. MovieLens dataset To illustrate recommender systems in action, let’s use the MovieLens dataset. This is a dataset of movie reviews released by GroupLens, a research lab in the Department of Computer Science and Engineering at the University of Minnesota, through funding by the US National Science Foundation. Download the data and load it as a BigQuery table using: End of explanation """ %%bigquery --project $PROJECT SELECT * FROM movielens.ratings LIMIT 10 """ Explanation: Exploring the data Two tables should now be available in <a href="https://console.cloud.google.com/bigquery">BigQuery</a>. Collaborative filtering provides a way to generate product recommendations for users, or user targeting for products. The starting point is a table, <b>movielens.ratings</b>, with three columns: a user id, an item id, and the rating that the user gave the product. This table can be sparse -- users don’t have to rate all products. Then, based on just the ratings, the technique finds similar users and similar products and determines the rating that a user would give an unseen product. Then, we can recommend the products with the highest predicted ratings to users, or target products at users with the highest predicted ratings. End of explanation """ %%bigquery --project $PROJECT SELECT COUNT(DISTINCT userId) numUsers, COUNT(DISTINCT movieId) numMovies, COUNT(*) totalRatings FROM movielens.ratings """ Explanation: A quick exploratory query yields that the dataset consists of over 138 thousand users, nearly 27 thousand movies, and a little more than 20 million ratings, confirming that the data has been loaded successfully. 
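It is also worth noting how sparse this ratings matrix is: roughly 20 million observed ratings spread over about 138,000 users times nearly 27,000 movies, i.e. roughly 3.7 billion possible user-movie pairs, means that only around 0.5% of the entries are filled in. This extreme sparsity is exactly the setting in which the matrix factorization approach introduced below is effective.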
End of explanation """ %%bigquery --project $PROJECT SELECT * FROM movielens.movies_raw WHERE movieId < 5 """ Explanation: On examining the first few movies using the query following query, we can see that the genres column is a formatted string: End of explanation """ %%bigquery --project $PROJECT CREATE OR REPLACE TABLE movielens.movies AS SELECT * REPLACE(SPLIT(genres, "|") AS genres) FROM movielens.movies_raw %%bigquery --project $PROJECT SELECT * FROM movielens.movies WHERE movieId < 5 """ Explanation: We can parse the genres into an array and rewrite the table as follows: End of explanation """ %%bash bq --location=US cp \ cloud-training-demos:movielens.recommender_16 \ movielens.recommender %%bigquery --project $PROJECT SELECT * -- Note: remove cloud-training-demos if you are using your own model: FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender`) %%bigquery --project $PROJECT SELECT * -- Note: remove cloud-training-demos if you are using your own model: FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender_16`) """ Explanation: Matrix factorization Matrix factorization is a collaborative filtering technique that relies on factorizing the ratings matrix into two vectors called the user factors and the item factors. The user factors is a low-dimensional representation of a user_id and the item factors similarly represents an item_id. End of explanation """ %%bigquery --project $PROJECT SELECT * FROM ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, ( SELECT movieId, title, 903 AS userId FROM movielens.movies, UNNEST(genres) g WHERE g = 'Comedy' )) ORDER BY predicted_rating DESC LIMIT 5 """ Explanation: When we did that, we discovered that the evaluation loss was lower (0.97) with num_factors=16 than with num_factors=36 (1.67) or num_factors=24 (1.45). We could continue experimenting, but we are likely to see diminishing returns with further experimentation. Making recommendations With the trained model, we can now provide recommendations. For example, let’s find the best comedy movies to recommend to the user whose userId is 903. In the query below, we are calling ML.PREDICT passing in the trained recommendation model and providing a set of movieId and userId to carry out the predictions on. In this case, it’s just one userId (903), but all movies whose genre includes Comedy. End of explanation """ %%bigquery --project $PROJECT SELECT * FROM ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, ( WITH seen AS ( SELECT ARRAY_AGG(movieId) AS movies FROM movielens.ratings WHERE userId = 903 ) SELECT movieId, title, 903 AS userId FROM movielens.movies, UNNEST(genres) g, seen WHERE g = 'Comedy' AND movieId NOT IN UNNEST(seen.movies) )) ORDER BY predicted_rating DESC LIMIT 5 """ Explanation: Filtering out already rated movies Of course, this includes movies the user has already seen and rated in the past. Let’s remove them. TODO 1: Make a prediction for user 903 that does not include already seen movies. End of explanation """ %%bigquery --project $PROJECT SELECT * FROM ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, ( WITH allUsers AS ( SELECT DISTINCT userId FROM movielens.ratings ) SELECT 96481 AS movieId, (SELECT title FROM movielens.movies WHERE movieId=96481) title, userId FROM allUsers )) ORDER BY predicted_rating DESC LIMIT 5 """ Explanation: For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen. 
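As an optional sanity check (not part of the original exercise), one way to confirm that the filter really removed candidates is to count how many comedies user 903 has already rated; every one of those movieIds is excluded from the prediction query above.
%%bigquery --project $PROJECT
SELECT COUNT(DISTINCT r.movieId) AS comedies_already_rated
FROM movielens.ratings r
JOIN movielens.movies m USING (movieId),
UNNEST(m.genres) g
WHERE r.userId = 903 AND g = 'Comedy'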
Customer targeting In the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and have to find the customers who are likely to appreciate it. Suppose, for example, we wish to get more reviews for movieId=96481 which has only one rating and we wish to send coupons to the 5 users who are likely to rate it the highest. TODO 2: Find the top five users who will likely enjoy American Mullet (2001) End of explanation """ %%bigquery --project $PROJECT SELECT * FROM ML.RECOMMEND(MODEL `cloud-training-demos.movielens.recommender_16`) LIMIT 10 """ Explanation: Batch predictions for all users and movies What if we wish to carry out predictions for every user and movie combination? Instead of having to pull distinct users and movies as in the previous query, a convenience function is provided to carry out batch predictions for all movieId and userId encountered during training. A limit is applied here, otherwise, all user-movie predictions will be returned and will crash the notebook. End of explanation """
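If the complete set of user-movie predictions is really needed, a more practical pattern than pulling them into the notebook is to write them to a table. The sketch below assumes a destination table name of our own choosing, movielens.batch_recommendations; the model reference is the same one used throughout this notebook.
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.batch_recommendations AS
SELECT *
FROM ML.RECOMMEND(MODEL `cloud-training-demos.movielens.recommender_16`)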
trangel/Data-Science
deep_learning_ai/Tensorflow+Tutorial.ipynb
gpl-3.0
import math import numpy as np import h5py import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.python.framework import ops from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict %matplotlib inline np.random.seed(1) """ Explanation: TensorFlow Tutorial Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: Initialize variables Start your own session Train algorithms Implement a Neural Network Programing frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. 1 - Exploring the Tensorflow Library To start, you will import the library: End of explanation """ y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36. y = tf.constant(39, name='y') # Define y. Set to 39 loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss init = tf.global_variables_initializer() # When init is run later (session.run(init)), # the loss variable will be initialized and ready to be computed with tf.Session() as session: # Create a session and print the output session.run(init) # Initializes the variables print(session.run(loss)) # Prints the loss """ Explanation: Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$ End of explanation """ a = tf.constant(2) b = tf.constant(10) c = tf.multiply(a,b) print(c) """ Explanation: Writing and running programs in TensorFlow has the following steps: Create Tensors (variables) that are not yet executed/evaluated. Write operations between those Tensors. Initialize your Tensors. Create a Session. Run the Session. This will run the operations you'd written above. Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run init=tf.global_variables_initializer(). That initialized the loss variable, and in the last line we were finally able to evaluate the value of loss and print its value. Now let us look at an easy example. Run the cell below: End of explanation """ sess = tf.Session() print(sess.run(c)) """ Explanation: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it. End of explanation """ # Change the value of x in the feed_dict x = tf.placeholder(tf.int64, name = 'x') print(sess.run(2 * x, feed_dict = {x: 3})) sess.close() """ Explanation: Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session. Next, you'll also have to know about placeholders. 
A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed dictionary" (feed_dict variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session. End of explanation """ # GRADED FUNCTION: linear_function def linear_function(): """ Implements a linear function: Initializes W to be a random tensor of shape (4,3) Initializes X to be a random tensor of shape (3,1) Initializes b to be a random tensor of shape (4,1) Returns: result -- runs the session for Y = WX + b """ np.random.seed(1) ### START CODE HERE ### (4 lines of code) X = tf.constant(np.random.randn(3,1), name = 'X') W = tf.constant(np.random.randn(4,3), name = 'W') b = tf.constant(np.random.randn(4,1), name = 'b') Y = tf.constant(np.random.randn(4,1), name = 'Y') ### END CODE HERE ### # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate ### START CODE HERE ### sess = tf.Session() result = sess.run(tf.add(tf.matmul(W,X),b)) ### END CODE HERE ### # close the session sess.close() return result print( "result = " + str(linear_function())) """ Explanation: When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph. 1.1 - Linear function Lets start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. Exercise: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1): ```python X = tf.constant(np.random.randn(3,1), name = "X") ``` You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication - tf.add(..., ...) to do an addition - np.random.randn(...) to initialize randomly End of explanation """ # GRADED FUNCTION: sigmoid def sigmoid(z): """ Computes the sigmoid of z Arguments: z -- input value, scalar or vector Returns: results -- the sigmoid of z """ ### START CODE HERE ### ( approx. 4 lines of code) # Create a placeholder for x. Name it 'x'. x = tf.placeholder(tf.float32, name='x') # compute sigmoid(x) sigmoid = tf.sigmoid(x) # Create a session, and run it. Please use the method 2 explained above. # You should use a feed_dict to pass z's value to x. with tf.Session() as sess: # Run session and call the output "result" result = sess.run(sigmoid, feed_dict = {x: z}) ### END CODE HERE ### return result print ("sigmoid(0) = " + str(sigmoid(0))) print ("sigmoid(12) = " + str(sigmoid(12))) """ Explanation: Expected Output : <table> <tr> <td> **result** </td> <td> [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] </td> </tr> </table> 1.2 - Computing the sigmoid Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax. 
For this exercise lets compute the sigmoid function of an input. You will do this exercise using a placeholder variable x. When running the session, you should use the feed dictionary to pass in the input z. In this exercise, you will have to (i) create a placeholder x, (ii) define the operations needed to compute the sigmoid using tf.sigmoid, and then (iii) run the session. Exercise : Implement the sigmoid function below. You should use the following: tf.placeholder(tf.float32, name = "...") tf.sigmoid(...) sess.run(..., feed_dict = {x: z}) Note that there are two typical ways to create and use sessions in tensorflow: Method 1: ```python sess = tf.Session() Run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) sess.close() # Close the session **Method 2:**python with tf.Session() as sess: # run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) # This takes care of closing the session for you :) ``` End of explanation """ # GRADED FUNCTION: cost def cost(logits, labels): """     Computes the cost using the sigmoid cross entropy          Arguments:     logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)     labels -- vector of labels y (1 or 0) Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels" in the TensorFlow documentation. So logits will feed into z, and labels into y.          Returns:     cost -- runs the session of the cost (formula (2)) """ ### START CODE HERE ### # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines) z = tf.placeholder(tf.float32, name='logits') y = tf.placeholder(tf.float32, name='labels') # Use the loss function (approx. 1 line) cost = tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y) # Create a session (approx. 1 line). See method 1 above. sess = tf.Session() # Run the session (approx. 1 line). cost = sess.run(cost, feed_dict = {z:logits, y:labels}) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return cost logits = sigmoid(np.array([0.2,0.4,0.7,0.9])) cost = cost(logits, np.array([0,0,1,1])) print ("cost = " + str(cost)) """ Explanation: Expected Output : <table> <tr> <td> **sigmoid(0)** </td> <td> 0.5 </td> </tr> <tr> <td> **sigmoid(12)** </td> <td> 0.999994 </td> </tr> </table> <font color='blue'> To summarize, you how know how to: 1. Create placeholders 2. Specify the computation graph corresponding to operations you want to compute 3. Create the session 4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. 1.3 - Computing the Cost You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{2}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$ you can do it in one line of code in tensorflow! Exercise: Implement the cross entropy loss. The function you will use is: tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...) Your code should input z, compute the sigmoid (to get a) and then compute the cross entropy cost $J$. 
All this can be done using one call to tf.nn.sigmoid_cross_entropy_with_logits, which computes $$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{2}) + (1-y^{(i)})\log (1-\sigma(z^{2})\large )\small\tag{2}$$ End of explanation """ # GRADED FUNCTION: one_hot_matrix def one_hot_matrix(labels, C): """ Creates a matrix where the i-th row corresponds to the ith class number and the jth column corresponds to the jth training example. So if example j had a label i. Then entry (i,j) will be 1. Arguments: labels -- vector containing the labels C -- number of classes, the depth of the one hot dimension Returns: one_hot -- one hot matrix """ ### START CODE HERE ### # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line) C = tf.constant(C, name='C') # Use tf.one_hot, be careful with the axis (approx. 1 line) one_hot_matrix = tf.one_hot(labels, C, axis=0) # Create the session (approx. 1 line) sess = tf.Session() # Run the session (approx. 1 line) one_hot = sess.run(one_hot_matrix) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return one_hot labels = np.array([1,2,3,0,2,1]) one_hot = one_hot_matrix(labels, C = 4) print ("one_hot = " + str(one_hot)) """ Explanation: Expected Output : <table> <tr> <td> **cost** </td> <td> [ 1.00538719 1.03664088 0.41385433 0.39956614] </td> </tr> </table> 1.4 - Using One Hot encodings Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows: <img src="images/onehot.png" style="width:600px;height:150px;"> This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: tf.one_hot(labels, depth, axis) Exercise: Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use tf.one_hot() to do this. End of explanation """ # GRADED FUNCTION: ones def ones(shape): """ Creates an array of ones of dimension shape Arguments: shape -- shape of the array you want to create Returns: ones -- array containing only ones """ ### START CODE HERE ### # Create "ones" tensor using tf.ones(...). (approx. 1 line) ones = tf.ones(shape) # Create the session (approx. 1 line) sess = tf.Session() # Run the session to compute 'ones' (approx. 1 line) ones = sess.run(ones) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return ones print ("ones = " + str(ones([3]))) """ Explanation: Expected Output: <table> <tr> <td> **one_hot** </td> <td> [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] </td> </tr> </table> 1.5 - Initialize with zeros and ones Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is tf.ones(). To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. Exercise: Implement the function below to take in a shape and to return an array (of the shape's dimension of ones). 
tf.ones(shape) End of explanation """ # Loading the dataset X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() """ Explanation: Expected Output: <table> <tr> <td> **ones** </td> <td> [ 1. 1. 1.] </td> </tr> </table> 2 - Building your first neural network in tensorflow In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model: Create the computation graph Run the graph Let's delve into the problem you'd like to solve! 2.0 - Problem statement: SIGNS Dataset One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language. Training set: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number). Test set: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number). Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs. Here are examples for each number, and how an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolutoion to 64 by 64 pixels. <img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> Figure 1</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center> Run the following code to load the dataset. End of explanation """ # Example of a picture index = 0 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) """ Explanation: Change the index below and run the cell to visualize some examples in the dataset. End of explanation """ # Flatten the training and test images X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T # Normalize image vectors X_train = X_train_flatten/255. X_test = X_test_flatten/255. # Convert training and test labels to one hot matrices Y_train = convert_to_one_hot(Y_train_orig, 6) Y_test = convert_to_one_hot(Y_test_orig, 6) print ("number of training examples = " + str(X_train.shape[1])) print ("number of test examples = " + str(X_test.shape[1])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) """ Explanation: As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so. End of explanation """ # GRADED FUNCTION: create_placeholders def create_placeholders(n_x, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288) n_y -- scalar, number of classes (from 0 to 5, so -> 6) Returns: X -- placeholder for the data input, of shape [n_x, None] and dtype "float" Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float" Tips: - You will use None because it let's us be flexible on the number of examples you will for the placeholders. In fact, the number of examples during test/train is different. """ ### START CODE HERE ### (approx. 
2 lines) X = tf.placeholder(dtype=tf.float32, shape=[n_x, None], name='X') Y = tf.placeholder(dtype=tf.float32, shape=[n_y, None], name='Y') ### END CODE HERE ### return X, Y X, Y = create_placeholders(12288, 6) print ("X = " + str(X)) print ("Y = " + str(Y)) """ Explanation: Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. 2.1 - Create placeholders Your first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session. Exercise: Implement the function below to create the placeholders in tensorflow. End of explanation """ # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow. The shapes are: W1 : [25, 12288] b1 : [25, 1] W2 : [12, 25] b2 : [12, 1] W3 : [6, 12] b3 : [6, 1] Returns: parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx. 6 lines of code) W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed=1)) b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer()) W2 = tf.get_variable('W2', [12,25], initializer = tf.contrib.layers.xavier_initializer(seed=1)) b2 = tf.get_variable('b2', [12,1], initializer = tf.zeros_initializer()) W3 = tf.get_variable('W3', [6,12], initializer = tf.contrib.layers.xavier_initializer(seed=1)) b3 = tf.get_variable('b3', [6,1], initializer = tf.zeros_initializer()) ### END CODE HERE ### parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3} return parameters tf.reset_default_graph() with tf.Session() as sess: parameters = initialize_parameters() print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) """ Explanation: Expected Output: <table> <tr> <td> **X** </td> <td> Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) </td> </tr> <tr> <td> **Y** </td> <td> Tensor("Placeholder_2:0", shape=(10, ?), dtype=float32) (not necessarily Placeholder_2) </td> </tr> </table> 2.2 - Initializing the parameters Your second task is to initialize the parameters in tensorflow. Exercise: Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: python W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer()) Please use seed = 1 to make sure your results match ours. 
End of explanation """ # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] W3 = parameters['W3'] b3 = parameters['b3'] ### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents: Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1 A1 = tf.nn.relu(Z1) # A1 = relu(Z1) Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2 A2 = tf.nn.relu(Z2) # A2 = relu(Z2) Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3,Z2) + b3 ### END CODE HERE ### return Z3 tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) print("Z3 = " + str(Z3)) """ Explanation: Expected Output: <table> <tr> <td> **W1** </td> <td> < tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref > </td> </tr> <tr> <td> **b1** </td> <td> < tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref > </td> </tr> <tr> <td> **W2** </td> <td> < tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref > </td> </tr> <tr> <td> **b2** </td> <td> < tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref > </td> </tr> </table> As expected, the parameters haven't been evaluated yet. 2.3 - Forward propagation in tensorflow You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: tf.add(...,...) to do an addition tf.matmul(...,...) to do a matrix multiplication tf.nn.relu(...) to apply the ReLU activation Question: Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at z3. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need a3! End of explanation """ # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...) logits = tf.transpose(Z3) labels = tf.transpose(Y) ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) print("cost = " + str(cost)) """ Explanation: Expected Output: <table> <tr> <td> **Z3** </td> <td> Tensor("Add_2:0", shape=(6, ?), dtype=float32) </td> </tr> </table> You may have noticed that the forward propagation doesn't output any cache. 
You will understand why below, when we get to brackpropagation. 2.4 Compute cost As seen before, it is very easy to compute the cost using: python tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...)) Question: Implement the cost function below. - It is important to know that the "logits" and "labels" inputs of tf.nn.softmax_cross_entropy_with_logits are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you. - Besides, tf.reduce_mean basically does the summation over the examples. End of explanation """ def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001, num_epochs = 1500, minibatch_size = 32, print_cost = True): """ Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX. Arguments: X_train -- training set, of shape (input size = 12288, number of training examples = 1080) Y_train -- test set, of shape (output size = 6, number of training examples = 1080) X_test -- training set, of shape (input size = 12288, number of training examples = 120) Y_test -- test set, of shape (output size = 6, number of test examples = 120) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: parameters -- parameters learnt by the model. They can then be used to predict. """ ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep consistent results seed = 3 # to keep consistent results (n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set) n_y = Y_train.shape[0] # n_y : output size costs = [] # To keep track of the cost # Create Placeholders of shape (n_x, n_y) ### START CODE HERE ### (1 line) X, Y = create_placeholders(n_x, n_y) ### END CODE HERE ### # Initialize parameters ### START CODE HERE ### (1 line) parameters = initialize_parameters() ### END CODE HERE ### # Forward propagation: Build the forward propagation in the tensorflow graph ### START CODE HERE ### (1 line) Z3 = forward_propagation(X, parameters) ### END CODE HERE ### # Cost function: Add cost function to tensorflow graph ### START CODE HERE ### (1 line) cost = compute_cost(Z3, Y) ### END CODE HERE ### # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer. ### START CODE HERE ### (1 line) optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost) ### END CODE HERE ### # Initialize all the variables init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): epoch_cost = 0. # Defines a cost related to an epoch num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y). 
### START CODE HERE ### (1 line) _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) ### END CODE HERE ### epoch_cost += minibatch_cost / num_minibatches # Print the cost every epoch if print_cost == True and epoch % 100 == 0: print ("Cost after epoch %i: %f" % (epoch, epoch_cost)) if print_cost == True and epoch % 5 == 0: costs.append(epoch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # lets save the parameters in a variable parameters = sess.run(parameters) print ("Parameters have been trained!") # Calculate the correct predictions correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train})) print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test})) return parameters """ Explanation: Expected Output: <table> <tr> <td> **cost** </td> <td> Tensor("Mean:0", shape=(), dtype=float32) </td> </tr> </table> 2.5 - Backward propagation & parameter updates This is where you become grateful to programming frameworks. All the backpropagation and the parameters update is taken care of in 1 line of code. It is very easy to incorporate this line in the model. After you compute the cost function. You will create an "optimizer" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate. For instance, for gradient descent the optimizer would be: python optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost) To make the optimization you would do: python _ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) This computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs. Note When coding, we often use _ as a "throwaway" variable to store values that we won't need to use later. Here, _ takes on the evaluated value of optimizer, which we don't need (and c takes the value of the cost variable). 2.6 - Building the model Now, you will bring it all together! Exercise: Implement the model. You will be calling the functions you had previously implemented. End of explanation """ parameters = model(X_train, Y_train, X_test, Y_test) """ Explanation: Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes! End of explanation """ import scipy from PIL import Image from scipy import ndimage ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "thumbs_up.jpg" ## END CODE HERE ## # We preprocess your image to fit your algorithm. 
fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T my_image_prediction = predict(my_image, parameters) plt.imshow(image) print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction))) """ Explanation: Expected Output: <table> <tr> <td> **Train Accuracy** </td> <td> 0.999074 </td> </tr> <tr> <td> **Test Accuracy** </td> <td> 0.716667 </td> </tr> </table> Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy. Insights: - Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. - Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters. 2.7 - Test with your own image (optional / ungraded exercise) Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right! End of explanation """
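A side note on the preprocessing cell above: `scipy.ndimage.imread` and `scipy.misc.imresize` have been removed from recent SciPy releases, so the same flattening can be done with PIL and NumPy. This is a hedged sketch, not part of the original assignment; `predict` and `parameters` are the objects already defined above, and the file name is just the example used earlier.

```python
import numpy as np
from PIL import Image

# Load the image and resize it to 64x64, as the original cell does with
# scipy.misc.imresize (removed in newer SciPy versions).
fname = "images/thumbs_up.jpg"
image = Image.open(fname).convert("RGB").resize((64, 64))

# Flatten into a (12288, 1) column vector, matching the model's input shape.
flat = np.asarray(image, dtype=np.float64).reshape((1, 64 * 64 * 3)).T

# The training images were scaled by 255, so dividing here may be needed for
# consistency before calling the course-provided helper:
# my_image_prediction = predict(flat / 255., parameters)
```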
zerothi/ts-tbt-sisl-tutorial
A_03/run.ipynb
gpl-3.0
# Create a Hall bar """ Explanation: Quantum Hall Effect In this exercise, we will build on TB_07 to simulate the quantum hall effect. Here, we will extract the Hall resistance from the transmissions calculated with TBtrans using the Landauer-Büttiker formalism. Exercise Overview: Create a Hall bar Construct Hamiltonians and add magnetic fields (see TB_07) Calculate the transmission with tbtrans Extract the Hall resistance $R_H$. Exercise 1. Create a Hall bar In order to be able to observe the quantum hall effect, the size of the hall bar needs to be big enough. For a 4(6) lead device reasonable dimensions are: 1. 4-lead device: - Width of electrodes (perpendicular to the semi-infinite axis): 30 atoms - Offset of the electrodes 2(3) from the corner of the central part: >= 5 atoms 6-lead device: Width of electrodes (perpendicular to the semi-infinite axis): 30 atoms Spacing between electrodes on the same side: 50 atoms Offset of the electrodes 2(3,5,6) from the corner of the central part: > 15 atoms |4-lead device |6-lead device | |:-------------|:-------------| |<img src="img/set-up-4.png" alt="4 lead hall bar" style="width: 310px;"/>|<img src="img/set-up-6.png" alt="6 lead hall bar" style="width: 490px;"/>| End of explanation """ def peierls(B=0): def peierls(self, ia, atoms, atoms_xyz=None): idx = self.geometry.close(ia, R=[0.1, 1.01], atoms=atoms, atoms_xyz=atoms_xyz) # Onsite self[ia, idx[0]] = 4 # Hopping if B == 0: self[ia, idx[1]] = -1 else: xyz = self.geometry.xyz[ia] dxyz = self.geometry[idx[1]] self[ia, idx[1]] = - np.exp(-0.5j * B * (dxyz[:, 0] - xyz[0])*(dxyz[:,1] + xyz[1])) return peierls # H0 = sisl.Hamiltonian(geom, dtype=np.float64) # H0.construct(peierls())) # # HB = sisl.Hamiltonian(geom, dtype=np.complex128) # HB.construct(peierls(B)) # # dH = ... """ Explanation: 2. Construct Hamiltonian and add magnetic fields The required field strengths may vary depending on the size of the hall bar. A good starting point might by B = 1 / np.arange(1,31) End of explanation """ # Create short-hand function to open files gs = sisl.get_sile # No magnetic field tbt0 = gs('M_0/siesta.TBT.nc') # All magnetic fields in increasing order tbts = [gs('M_{}/siesta.TBT.nc'.format(rec_phi)) for rec_phi in rec_phis] def G_matrix(tbtsile): # Construct G G0 = G_matrix(tbt0) # Remove one row and column from the matrix G0 = np.delete(np.delete(G0, ..., axis=...), ..., axis=...) R0 = np.linalg.inv(G) G = ... R = ... RH = ... # Plot the hall resitance # - as a function of energy for a fixed magnetic field strength # - as a funciton of the magnetic field strength for a fixed energy """ Explanation: 3. Calculate the transmission with TBtrans The folder of this exercise contains the skeleton of an input file for a 4-lead (RUN-4.fdf) and 6-lead device (RUN-6.fdf), as well as a script to run TBtrans for all values of the magnetic field (run.sh). Depending on the size of the Hall bar this step might require some a considerable amount of time. 4. 
Extract the Hall resistance $(R_H)$ The Hall resistance ($R_H$) in a 4-lead Hall bar like the one shown above is given by $$R_H = \left.\frac{V_2-V_3}{I_1}\right|_{I_2 = I_3 = 0}.$$ In order to find a relationship between the transmissions and the Hall resistance, we express the lead currents $I_i$ in terms of the applied biases $V_i$ and the transmissions $T_{ij}$ between leads $i$ and $j$: $$I_i = \sum_j G_{ij} (V_i - V_j)\quad\text{where}\quad G_{ij} = \frac{2e^2}{h} T_{ij}.$$ Since the currents only depend on bias differences, one of the potentials can be set to zero (grounded) without loss of generality. This allows us to rewrite the relation as $$\mathbf{I} = \mathcal{G} \mathbf{V} \quad\text{where}\quad \mathcal{G}_{ii} = \sum_{j\neq i} G_{ij} \quad\text{and}\quad \mathcal{G}_{ij} = -G_{ij}.$$ Using the inverse $\mathbf{R}$ of $\mathcal{G}$, we can express $V_2$ and $V_3$ in terms of the lead currents $I_i$, $$ V_i = R_{i1} I_1 + R_{i2} I_2 + R_{i3} I_3, $$ and, since $I_2 = I_3 = 0$, we find the Hall resistance $$ R_H = R_{21}-R_{31}. $$ The derivation for the 6-lead device is analogous and yields $$ R_H = R_{21}-R_{61}. $$ End of explanation """
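To make the algebra above concrete, here is a minimal NumPy sketch (not the tutorial's reference solution) that assumes the pairwise transmissions at a single energy have already been collected into a 4x4 array `T`, builds $\mathcal{G}$ in units of $2e^2/h$, grounds one lead, and forms $R_H = R_{21} - R_{31}$.

```python
import numpy as np

def hall_resistance(T, ground=3):
    """4-lead Hall resistance in units of h/(2e^2) from a transmission matrix T."""
    G = np.asarray(T, dtype=float).copy()   # G_ij in units of 2e^2/h
    np.fill_diagonal(G, 0.0)
    calG = np.diag(G.sum(axis=1)) - G       # diagonal: sum_{j != i} G_ij, off-diagonal: -G_ij
    # Ground one lead: drop its row and column, as in the G_matrix skeleton above
    calG = np.delete(np.delete(calG, ground, axis=0), ground, axis=1)
    R = np.linalg.inv(calG)
    # With lead `ground` removed, the remaining rows/columns are leads 1, 2, 3 in order
    return R[1, 0] - R[2, 0]                # R_21 - R_31

# Toy transmission matrix, only for illustration
T = np.array([[0.0, 0.8, 0.1, 0.1],
              [0.8, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 0.8],
              [0.1, 0.1, 0.8, 0.0]])
print(hall_resistance(T))
```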
nick5435/Pokemon-Data-Analytics
Analytics2.ipynb
lgpl-3.0
mons["AVERAGE_STAT"] = mons["STAT_TOTAL"]/6 gens = pd.Series([0 for i in range(len(mons.index))], index=mons.index) for ID, mon in mons.iterrows(): if 0<mon.DEXID<=151: gens[ID] = 1 elif 151<mon.DEXID<=251: gens[ID] = 2 elif 251<mon.DEXID<=386: gens[ID] = 3 elif 386<mon.DEXID<=493: gens[ID] = 4 elif 493<mon.DEXID<=649: gens[ID] = 5 elif 649<mon.DEXID<=721: gens[ID] = 6 elif 721<mon.DEXID<=805: gens[ID] = 7 else: gens[ID] = 0 mons["GEN"] = gens mons.to_csv("./data/pokemon_preUSUM_data.csv") gen = {} for i in range(1,8): gen[i] = mons[mons.GEN == i] plt.figure(100) colors = sns.color_palette("colorblind", 7) for i in range(1,8): sns.distplot( mons[mons["GEN"] == i]["STAT_TOTAL"], hist=False,kde=True, color=colors[i-1], label=f"Gen {i}") plt.legend() plt.show() """ Explanation: Data Processing We need to make a column for average stats for each 'mon We need to label each 'mon by its generation (We should figure out a way to ignore non-stat changed formes i.e. Arceus, as he may be upsetting the Gen IV data) End of explanation """ stat_averages_by_gen = {i:gen[i].AVERAGE_STAT for i in range(1,8)} testable_data = list(stat_averages_by_gen.values()) data = [list(gen) for gen in testable_data] data = np.array(data) averages = {i: stat_averages_by_gen[i].mean() for i in range(1,8)} averages stats.kruskal(*data) recarray = mons.to_records() test = comp.pairwise_tukeyhsd(recarray["AVERAGE_STAT"], recarray["GEN"]) test.summary() """ Explanation: Some Stats End of explanation """ np.random.seed(525_600) stats_gens = mons[['HP', 'ATTACK', 'DEFENSE', 'SPECIAL_ATTACK', 'SPECIAL_DEFENSE', 'SPEED', 'GEN']] X = np.c_[stats_gens] """ Explanation: Machine Learning and Clustering End of explanation """ pca = decomposition.PCA() pca.fit(X) pca.explained_variance_ pca.n_components = 3 X_reduced = pca.fit_transform(X) X_reduced.shape pca.get_params() """ Explanation: PCA End of explanation """ from sklearn import cluster k_means = cluster.KMeans(n_clusters = 6) k_means.fit(X) mons["KMEANS_LABEL"] = pd.Series(k_means.labels_) plotData = mons[["GEN", "STAT_TOTAL", "KMEANS_LABEL"]] colors = sns.color_palette("colorblind", 7) for i in range(1,8): sns.distplot( plotData[plotData["GEN"] == i]["STAT_TOTAL"], color=colors[i-1]) plt.figure(925) sns.boxplot(x="KMEANS_LABEL", y="STAT_TOTAL", data=plotData) plt.show() plt.figure(9050624) sns.pairplot(plotData, kind="scatter", hue="GEN", palette=colors) plt.show() plotData.to_csv("./data/kmeans.csv") """ Explanation: K-Means Clustering End of explanation """
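The PCA-reduced matrix `X_reduced` computed above is not reused afterwards. As a hedged follow-up sketch (not part of the original analysis), one can run the same k-means on the reduced representation and check how closely its labels agree with the clustering on the raw stats:

```python
from sklearn import cluster
from sklearn.metrics import adjusted_rand_score

# Cluster on the raw stats matrix X and on the 3-component PCA projection,
# then compare label agreement (1.0 means identical partitions).
km_raw = cluster.KMeans(n_clusters=6, random_state=0).fit(X)
km_pca = cluster.KMeans(n_clusters=6, random_state=0).fit(X_reduced)
print(adjusted_rand_score(km_raw.labels_, km_pca.labels_))
```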
phoebe-project/phoebe2-docs
2.3/examples/sun_earth.ipynb
gpl-3.0
#!pip install -I "phoebe>=2.3,<2.4" """ Explanation: Sun-Earth System NOTE: planets are currently under testing and not yet supported Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation """ import phoebe from phoebe import u # units from phoebe import c # constants import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary(starA='sun', starB='earth', orbit='earthorbit') """ Explanation: As always, let's do imports and initialize a logger and a new bundle. End of explanation """ b.set_value('teff@sun', 1.0*u.solTeff) b.set_value('requiv@sun', 1.0*u.solRad) b.flip_constraint('period@sun', solve_for='syncpar') b.set_value('period@sun', 24.47*u.d) #b.set_value('incl', 23.5*u.deg) b.set_value('teff@earth', 252*u.K) b.set_value('requiv@earth', 1.0*c.R_earth) b.flip_constraint('period@earth', solve_for='syncpar') b.set_value('period@earth', 1*u.d) b.set_value('sma@earthorbit', 1*u.AU) b.set_value('period@earthorbit', 1*u.yr) b.set_value('q@earthorbit', c.M_earth/c.M_sun) #b.set_value('ecc@earthorbit') print("Msun: {}".format(b.get_quantity('mass@sun@component', unit=u.solMass))) print("Mearth: {}".format(b.get_quantity('mass@earth@component', unit=u.solMass))) """ Explanation: Setting Parameters End of explanation """ b.add_dataset('mesh', times=[0.5], dataset='mesh01') b.add_dataset('lc', times=np.linspace(-0.5,0.5,51), dataset='lc01') b.set_value('ld_func@earth', 'logarithmic') b.set_value('ld_coeffs@earth', [0.0, 0.0]) """ Explanation: Running Compute End of explanation """ b['distortion_method@earth'] = 'rotstar' """ Explanation: We'll have the sun follow a roche potential and the earth follow a rotating sphere (rotstar). NOTE: this doesn't work yet because the rpole<->potential is still being defined by roche, giving the earth a polar radius way too small. End of explanation """ b['atm@earth'] = 'blackbody' b.set_value_all('ld_func@earth', 'logarithmic') b.set_value_all('ld_coeffs@earth', [0, 0]) b.run_compute() axs, artists = b.plot(dataset='mesh01', show=True) axs, artists = b.plot(dataset='mesh01', component='sun', show=True) axs, artists = b.plot(dataset='mesh01', component='earth', show=True) b['requiv@earth@component'] axs, artists = b.plot(dataset='lc01', show=True) """ Explanation: The temperatures of earth will fall far out of bounds for any atmosphere model, so let's set the earth to be a blackbody and use a supported limb-darkening model (the default 'interp' is not valid for blackbody atmospheres). End of explanation """
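Before running the solver, a quick order-of-magnitude check is useful: for an Earth-Sun system the eclipse depth in the light curve should be roughly $(R_\oplus/R_\odot)^2$. This is a hedged sketch using astropy's constants directly, not part of the original example.

```python
from astropy.constants import R_earth, R_sun

# Expected fractional transit depth for an Earth-sized planet crossing the Sun
depth = float(((R_earth / R_sun) ** 2).decompose())
print(depth)   # ~8.4e-5, i.e. the transit is only ~84 ppm deep
```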
fabiencampillo/systemes_dynamiques_agronomie
6.1_kalman_general.ipynb
gpl-3.0
%matplotlib inline from ipywidgets import interact, fixed import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats barZ = np.array([[1],[3]]) QZ = np.array([[3,1],[1,1]]) a = barZ[0] b = QZ[0,0] xx = np.linspace(-6, 10, 100) R = QZ[0,0]-QZ[0,1]*QZ[0,1]/QZ[1,1] def pltbayesgauss(obs): hatX = barZ[0]+QZ[0,1]*(obs-barZ[1])/QZ[1,1] plt.plot([obs,obs],[0,1],':') plt.plot(xx, stats.norm.pdf(xx, a, b),label='loi a priori') plt.plot(xx, stats.norm.pdf(xx, hatX, R),label='loi a posteriori') plt.ylim([0,0.25]) plt.legend() plt.show() interact(pltbayesgauss, obs=(-6,10,0.1)) plt.show() """ Explanation: Filtrage de Kalman Loi conditionnelle gaussienne Soit $Z=\left(\begin{matrix}X \ Y\end{matrix}\right)$ un vecteur aléatoire gaussien à valeurs dans $\mathbb R^{n+d}$ de moyenne $\bar Z$ et de covariance $Q_Z$ avec: \begin{align} \bar Z &= \begin{pmatrix}\bar X \ \bar Y \end{pmatrix} & Q_Z &=\begin{pmatrix} Q_{X X} & Q_{XY} \ Q_{XY}^ & Q_{YY}\end{pmatrix} \end{align*} où $Q_{YY}>0$ alors $X|Y$ est gaussien $N(\widehat{X},R)$ avec: \begin{align} \widehat{X} &= \bar X + Q_{XY}\,Q_{YY}^{-1}\,(Y-\bar Y) \ R &= Q_{XX}-Q_{XY}\,Q_{YY}^{-1}\,Q_{XY}^ \hskip 2em\textrm{(ne dépend pas de l'observation)} \end{align*} C'est un problème bayésien gaussien: - avant observation: notre connaissance de $X$ est $N(\bar X,Q_{X X})$ - après observation: notre connaissance de $X$ est $N(\widehat{X},R)$ et on espère $Q_{X X}>R$ End of explanation """ %matplotlib inline import numpy as np import matplotlib.pyplot as plt kmax = 300 EX0, VX0 = 10, 5 A, B, QW = 0.9, 1, 0.1 sQW = np.sqrt(QW) sVX0 = np.sqrt(VX0) def sys_lin(EX0, sVX0, A, B, sQW): W = sQW*np.random.randn(kmax) X = np.ones(kmax+1) X[0] = EX0+sVX0*np.random.randn() for k in range(kmax): X[k+1] = A*X[k]+B*W[k] return X def sys_lin_loi(EX0, sVX0, A, B, sQW): espX = np.ones(kmax+1) varX = np.ones(kmax+1) espX[0] = EX0 for k in range(kmax): espX[k+1] = A*espX[k] varX[k+1] = A*A*varX[k]+B*B*QW return espX, varX X = sys_lin(EX0, sVX0, A, B, sQW) espX, varX = sys_lin_loi(EX0, sVX0, A, B, sQW) plt.plot([0, kmax], [0, 0], color="g", linestyle=':') plt.plot(espX,color='k') plt.fill_between(range(kmax+1),espX+2*np.sqrt(varX), espX-2*np.sqrt(varX), color = '0.75', alpha=0.4) plt.plot(X) plt.show() from ipywidgets import interact, fixed def plt_sys_lin(A, B, iseed): np.random.seed(iseed) X = sys_lin(10, 0, A, B, 1) plt.plot([0, kmax], [0, 0], color="g", linestyle=':') plt.plot(X) plt.ylim([-4,15]) plt.show() interact(plt_sys_lin, A=(0,1,0.01), B=(0.,6,0.1), iseed=(1,100,1)) plt.show() """ Explanation: Système linéaire gaussien en tems discret $$ X_{k+1} = A\,X_{k} + B\,W_k\,,\ 0\leq k < k_{max} $$ $X_k\to\mathbb{R}^n$ bruit: $W_k\to\mathbb{R}^m$, $W_k\sim N(0,Q_W)$ $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{m\times n}$ Il s'agit d'un système gaussien: $X_{0:k_{max}}$ est un vecteur aléatoire gaussien. (Notation: $Z_{k':k}=(Z_{k'},Z_{k'+1},\dots ,Z_{k})$, $k'\leq k$). 
La moyenne $\bar X_k = \mathbb{E}(X_k)$ et la covariance $Q^X_k = \mathrm{Var}(X_k)$ sont donnés par: \begin{align} \bar X_{k+1} &= A\,\bar X_{k} \ Q^X_{k+1} &= A\,Q^X_{k}\,A^+B\,Q_W\,B^ \end{align} End of explanation """ kmax = 300 mcmax = 300 EX0, VX0 = 10, 5 A, B, QW = 0.9, 1, 0.1 sQW = np.sqrt(QW) sVX0 = np.sqrt(VX0) def sys_lin_vec(mcmax,EX0, sVX0, A, B, sQW): W = sQW*np.random.randn(kmax,mcmax) X = np.ones((kmax+1,mcmax)) X[0,] = EX0+sVX0*np.random.randn() for k in range(kmax): X[k+1,] = A*X[k,]+B*W[k,] return X X = sys_lin_vec(mcmax, EX0, sVX0, A, B, sQW) plt.plot(X,alpha=.04,color='b') plt.plot(espX,color='w') plt.plot(espX+2*np.sqrt(varX),color='k') plt.plot(espX-2*np.sqrt(varX),color='k') plt.show() mcmax = 10000 X = sys_lin_vec(mcmax, EX0, sVX0, A, B, sQW) num_bins = 30 n, bins, patches = plt.hist(X[-1,], num_bins, normed=1, facecolor='green', alpha=0.5) plt.show() """ Explanation: Un peu de vectorisation End of explanation """ import numpy as np import matplotlib.pyplot as plt kmax = 300 EX0, VX0 = 10, 5 A, B, QW = 0.9, 1, 0.1 H, QV = 1, 0.2 sQW = np.sqrt(QW) sQV = np.sqrt(QV) sVX0 = np.sqrt(VX0) def sys_lin_esp_etat(EX0, sVX0, A, B, H, sQW, sQV): W = sQW*np.random.randn(kmax) V = sQV*np.random.randn(kmax) X = np.ones(kmax+1) Y = np.ones(kmax+1) X[0] = EX0+sVX0*np.random.randn() Y[0] = 0 # on s en moque for k in range(kmax): X[k+1] = A*X[k]+B*W[k] Y[k+1] = H*X[k+1]+V[k] return X,Y def kalman(EX0, sVX0, A, B, H, sQW, sQV, Y): hatX = np.ones(kmax+1) R = np.ones(kmax+1) hatX[0] = EX0 R[0] = sVX0*sVX0 for k in range(kmax): # prediction predX = A*hatX[k] predR = A*A*R[k]+B*B*sQW*sQW # correction gain = predR * H / (H*predR*H+sQV*sQV) hatX[k+1] = predX + gain * (Y[k+1]-H*predX) R[k+1] = (1-gain*H)*predR return hatX, R X,Y = sys_lin_esp_etat(EX0, sVX0, A, B, H, sQW, sQV) espX, varX = sys_lin_loi(EX0, sVX0, A, B, sQW) hatX, R = kalman(EX0, sVX0, A, B, H, sQW, sQV, Y) plt.fill_between(range(kmax+1),espX+2*np.sqrt(varX), espX-2*np.sqrt(varX), color = 'g', alpha=0.12, label=r'$\bar X_k\pm 2\,\sqrt{Q^X_k}$ (a priori)') plt.fill_between(range(kmax+1),hatX+2*np.sqrt(R), hatX-2*np.sqrt(R), color = 'r', alpha=0.12, label=r'$\hat X_k\pm 2\,\sqrt{R_k}$ (a posteriori)') plt.plot(X,label=r'$X_k$') plt.plot(espX,color='g',label=r'$\bar X_k$') plt.plot(hatX,color='r',alpha=0.5,label=r'$\hat X_k$') plt.legend() plt.show() """ Explanation: Filtrage linéaire gaussien \begin{align} \tag{équation d'état} X_{k+1} &= A\,X_{k} + B\,W_k\,\ 0\leq k<k_{max} \ \tag{équation d'observation} Y_{k} &= H\,X_{k} + V_k \,\ 0< k\leq k_{max} \end{align} $X_k\to\mathbb{R}^n$, $Y_k\to\mathbb{R}^d$ bruit d'état: $W_k\to\mathbb{R}^m$, $W_k\sim N(0,Q_W)$ bruit de mesure: $V_k\to\mathbb{R}^d$, $V_k\sim N(0,Q_V)$, $Q_V>0$ $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{m\times n}$, $H\in\mathbb{R}^{n\times d}$ Il s'agit d'un système gaussien: $(X_0,\dots,X_{k_{max}},Y_1,\dots,Y_{k_{max}})$ est un vecteur aléatoire gaussien. (Notation: $Z_{k':k}=(Z_{k'},Z_{k'+1},\dots ,Z_{k})$, $k'\leq k$). Filtrage: On veut estimer l'état caché à l'aide des observations. À l'instant $k$, on dispose des observations $Y_{1:k}$ et on veut estimer $X_k$. 
Kalman filter The law of $X_k$ given the observations $Y_{1:k}$ is Gaussian, with mean $\widehat{X}_k$ and covariance $R_k$ given by: initialization - $\widehat{X}_0 \leftarrow \bar{X}_0$ - $R_0 \leftarrow Q_0$ iterations $k=1,2,3,\dots$ - prediction (law of $X_k|Y_{0:k-1}$) * $\widehat{X}_{k^-} \leftarrow A\,\widehat{X}_{k-1}$ * $R_{k^-} \leftarrow A\,R_{k-1}\,A^* + B\,Q^W\,B^*$ - correction (law of $X_k|Y_{0:k}$) * $K_k \leftarrow R_{k^-}\,H^*\,[\,H\,R_{k^-}\,H^* + Q^V\,]^{-1}$ (gain) * $\widehat{X}_k \leftarrow \widehat{X}_{k^-} + K_k\,[\,Y_k - H\,\widehat{X}_{k^-}\,]$ * $R_k \leftarrow [\,I - K_k\,H\,]\,R_{k^-}$ End of explanation """
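As a quick consistency check of the filter implemented above (not part of the original notebook), roughly 95% of the true states $X_k$ should fall inside the plotted band $\widehat{X}_k \pm 2\sqrt{R_k}$:

```python
import numpy as np

# X, hatX and R are the arrays produced by sys_lin_esp_etat and kalman above
coverage = np.mean(np.abs(X - hatX) <= 2 * np.sqrt(R))
print(coverage)   # should be close to 0.95
```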
timothydmorton/isochrones
notebooks/triceratops_ebs.ipynb
mit
from isochrones import get_ichrone mist = get_ichrone('mist', bands=['TESS', 'V', 'K']) mass, age, feh = (0.8, 9.7, 0.0) distance = 10 # pc AV = 0.0 simulated_props = mist.generate(mass, age, feh, distance=distance, AV=AV) simulated_props[['mass', 'radius', 'TESS_mag', 'V_mag', 'K_mag']] """ Explanation: Testing TRICERATOPS EB modeling vs. isochrones We want to test how the luminosity-scaling method compares to physical modeling based on colors & parallax. This is the layout of the test: Choose arbitrary properties for our test target star. Sample space of possible stellar binaries physically consistent with observed apparent V, K, and parallax. Compute f_EB for each sample--this gives a distribution of the true allowed distribution of f_EB. For each of these f_EB samples, compute the primary and secondary radius via the "luminosity scaling" method. Compare these comupted R1, R2 with the actual true primary and secondary radii from the samples. Set properties of primary star End of explanation """ from isochrones import BinaryStarModel # set "observed" properties to the true simulated V, K, and parallax. props = {'V': (float(simulated_props['V_mag']), 0.02), 'K': (float(simulated_props['K_mag']), 0.02), 'parallax': (100, 0.05)} mod = BinaryStarModel(mist, **props, maxAV=0.0001, eep_bounds=(0, 450), name='triceratops_eb_1') mod.fit() """ Explanation: Sample space of allowed binaries End of explanation """ from corner import corner corner(mod.derived_samples[['radius_0', 'radius_1']]); """ Explanation: Let's check out the joint distribution of R1, R2 allowed by this sampling. End of explanation """ dmag = mod.derived_samples['TESS_mag_1'] - mod.derived_samples['TESS_mag_0'] f_ratio = 10**(-0.4 * dmag) f_EB = 1 / (1./f_ratio + 1) f_EB.describe() """ Explanation: Compute $f_{EB}$ for each sample The above sampling provides derived samples of the primary and secondary TESS mags. This allows us to compute $f_{EB}$. First note that $$ f_{EB} = \frac{f_2}{f_1 + f_2} $$ From the magnitude difference TESS_mag_1 - TESS_mag_0 we can compute the secondary/primary flux ratio $f_2/f_1$, and we can rewrite $f_{EB}$ as follows: $$ f_{EB} = \frac{1}{\frac{f_1}{f_2} + 1} $$ End of explanation """ from triceratops.funcs import stellar_relations def get_radii(L, f_EB): L1 = L * (1 - f_EB) L2 = L * f_EB _, R1, _, _, _ = stellar_relations(lum=L1) _, R2, _, _, _ = stellar_relations(lum=L2) return R1, R2 R1, R2 = zip(*[get_radii(float(10**simulated_props['logL']), f) for f in f_EB]) """ Explanation: Compute primary & secondary radii using TRICERATOPS method End of explanation """ import numpy as np def compare_radii(mod, R1, R2): samples1 = np.array([R1, R2]).T samples2 = mod.derived_samples[['radius_0', 'radius_1']] param_range = [(min(min(R1), samples2.radius_0.min()), max(max(R1), samples2.radius_0.max())), (min(min(R2), samples2.radius_1.min()), max(max(R2), samples2.radius_0.max()))] fig = corner(samples1, range=param_range, color='red') return corner(samples2, fig=fig, range=param_range) compare_radii(mod, R1, R2); """ Explanation: Compare derived R1, R2 with true R1, R2 End of explanation """ import matplotlib.pyplot as plt def bias_hist(mod, R1, R2): R2R1_iso = mod.derived_samples.radius_1 / mod.derived_samples.radius_0 R2R1_tri = np.array(R2)/np.array(R1) bias = R2R1_tri / R2R1_iso plt.hist(bias); plt.axvline(bias.mean(), color='k', ls='--') plt.xlabel('radius ratio bias: derived/true') bias_hist(mod, R1, R2) """ Explanation: OK, let's ask a more specific question. 
For each sample here (representing a fixed value of $f_{EB}$), how much is the estimate of the radius ratio $R_2/R_1$ biased with respect to the "true" value? End of explanation """ plt.hist(f_EB); """ Explanation: OK, so for this particular star, the estimated radius ratio tends to be off by about 15% from the truth. So the question remains: does this matter? Well, the TRICERATOPS algorithm computes the EB light curve as a function of $f_{EB}$ and looks for the value of $f_{EB}$ that gives the best fit to the data. I think this bias should then not affect the actual maximum likelihood value (the most important number for FPP analysis), but rather that the actual computed value of the radius ratio $at$ the max-likelihood value of $f_{EB}$ will be biased by about this much. However, one thing that perhaps should be different if we wanted to properly take into account the fact that we know the color of the target star, would be the prior on $f_{EB}$. Here's the distribution of $f_{EB}$ allowed by the color constraint: End of explanation """ mass1, mass2, age, feh = (1.0, 0.6, 9.7, 0.0) distance = 10 # pc AV = 0.0 simulated_props_2 = mist.generate_binary(mass1, mass2, age, feh) props = {'V': (float(simulated_props_2['V_mag']), 0.02), 'K': (float(simulated_props_2['K_mag']), 0.02), 'parallax': (100, 0.05)} mod_2 = BinaryStarModel(mist, **props, maxAV=0.0001, eep_bounds=(0, 450), name='triceratops_eb_2') mod_2.fit() corner(mod_2.derived_samples[['radius_0', 'radius_1']]); dmag_2 = mod_2.derived_samples['TESS_mag_1'] - mod_2.derived_samples['TESS_mag_0'] f_ratio_2 = 10**(-0.4 * dmag_2) f_EB_2 = 1 / (1./f_ratio_2 + 1) L_tot = float(10**simulated_props_2['logL_0'] + 10**simulated_props_2['logL_1']) R1_2, R2_2 = zip(*[get_radii(L_tot, f) for f in f_EB_2]) compare_radii(mod_2, R1_2, R2_2); bias_hist(mod_2, R1_2, R2_2) """ Explanation: This is fairly different from assuming a flat prior on $f_{EB}$, though this will probably only matter in borderline cases, i.e., where the max-likelihood of EB model is close to that of the TP model. Now remember, this was all done in the context of the true simulated star actually being a single star, meaning all the purported binary companions are forced toward low masses. Is this any different if the true scenario is actually a more luminous binary? Pt 2: binary simulation Here, we do the same as above, but generate properties of a binary star instead of a single star End of explanation """
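As a quick numeric check of the magnitude-difference-to-$f_{EB}$ mapping used throughout this notebook (a worked example, not new analysis): a secondary that is one TESS magnitude fainter than the primary contributes roughly 28% of the system flux.

```python
import numpy as np

dmag = 1.0                          # TESS_mag_1 - TESS_mag_0
f_ratio = 10 ** (-0.4 * dmag)       # f2 / f1 ~ 0.398
f_EB = 1.0 / (1.0 / f_ratio + 1.0)  # f2 / (f1 + f2) ~ 0.285
print(f_ratio, f_EB)
```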
trangel/Data-Science
tmp/times-series.ipynb
gpl-3.0
%matplotlib inline import pandas as pd import numpy as np import matplotlib as plt import seaborn as sns users = pd.read_csv('timeseries_users.csv') users.head() events = pd.read_csv('timeseries_events.csv') events.index = pd.to_datetime(events['event_date'], format='%Y-%m-%d %H:%M:%S') del events['event_date'] events.tail() """ Explanation: Time series test Data exploration Events per user Inter-events intervals End of explanation """ users.describe() """ Explanation: <a id='data_exploration'></a> Data exploration End of explanation """ # 2. Check for NaNs: print(users.isnull().values.any()) print(events.isnull().values.any()) # 3. Check for duplicated entries users_duplicated = users[users.duplicated() == True ] print('Users: duplicated entries {}'.format(len(users_duplicated))) events_duplicated = events[events.duplicated() == True ] print('Events: duplicated entries {}'.format(len(events_duplicated))) """ Explanation: User's age mean is from 24 to 63 years old, with a mean of 41 years old. End of explanation """ # 1. count all events for each user: events_per_user = events.groupby('user_id').size() events_per_user.head() # Select only 30+ male users: for user_id in events_per_user.index: if user_id in users['user_id'].values: user = users[ users['user_id'] ==user_id] age = user['age'].values[0] gender = user['gender'].values[0] if ( age < 30 ) or (gender == 'f'): del events_per_user[user_id] else: del events_per_user[user_id] print(type(events_per_user)) events_per_user.values sns.set(style="ticks") # Show the results of a linear regression within each dataset ax = sns.distplot(events_per_user.values) ax.set_title('Event per male users of age 30+ old') ax.set_ylabel('Normalized distribution') ax.set_xlabel('Counts') """ Explanation: Many duplicated entries are found in the events dataset. We could decide to drop them if needed. Here I keep them because I don't know if duplicates are valid entries of this particular dataset. <a id='events_per_user'></a> Events per user Plot a histogram of total number of events per user for all male users who are 30+ years old. 
End of explanation """ def get_inter_events(events_per_user): """From a list of events for a given user, gets a list of inter time events in dates.""" from datetime import datetime nanosecond_to_days=float(1.15741e-14) inter_times = [] for event_index in range(1,len(events_per_user)): time1 = events_per_user[event_index-1] time2 = events_per_user[event_index] time_diff = time2 - time1 # Convert from nanoseconds to days: time_diff = int(float(time_diff)*nanosecond_to_days) inter_times.append(time_diff) return inter_times # Cycle by user inter_event_intervals=[] for user_id in users['user_id'].values: # Get events for this user: events_per_user = events[events['user_id']==user_id].sort_index() events_per_user = events_per_user.index.values if len(events_per_user) > 1: inter_event_intervals_this = get_inter_events(events_per_user) inter_event_intervals = list(inter_event_intervals)+ list(inter_event_intervals_this) inter_event_intervals=np.array(inter_event_intervals) type(inter_event_intervals) print(len(inter_event_intervals)) print(inter_event_intervals.shape) sns.set(style="ticks") # Show the results of a linear regression within each dataset ax = sns.distplot(inter_event_intervals) ax.set_ylim(0,0.005) ax.set_title('Inter-event intervals') ax.set_ylabel('Normalized distribution') ax.set_xlabel('Inter-event interval (days)') """ Explanation: <a id = 'inter_events'></a> Inter events For each user, compute the list of inter-event intervals in days. An inter-event interval is the period of time between an event and the one directly before it in time for the same user. Once you have a list of all the inter-event intervals across all users, plot a histogram of them below: End of explanation """
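As a hedged alternative to `get_inter_events` above, pandas can produce the same per-user gaps with a `groupby` + `diff` on the event timestamps (this assumes the datetime index kept its original `event_date` name, as set when the file was loaded):

```python
# Sort by time, recover the timestamp column, and difference within each user
gaps = (events.sort_index()
              .reset_index()
              .groupby('user_id')['event_date']
              .diff()
              .dropna())
inter_days = gaps.dt.days.values   # integer day gaps, comparable to inter_event_intervals
print(len(inter_days))
```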
ethen8181/machine-learning
trees/random_forest.ipynb
mit
# code for loading the format for the notebook import os # path : store the current path to convert back to it later path = os.getcwd() os.chdir(os.path.join('..', 'notebook_format')) from formats import load_style load_style(css_style = 'custom2.css', plot_style = False) os.chdir(path) # 1. magic for inline plot # 2. magic to print version # 3. magic so that the notebook will reload external python modules # 4. magic to enable retina (high resolution) plots # https://gist.github.com/minrk/3301035 %matplotlib inline %load_ext watermark %load_ext autoreload %autoreload 2 %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.datasets import load_iris from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.tree import DecisionTreeRegressor, DecisionTreeClassifier from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor %watermark -a 'Ethen' -d -t -v -p numpy,pandas,matplotlib,sklearn """ Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Ensemble-Trees" data-toc-modified-id="Ensemble-Trees-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Ensemble Trees</a></span><ul class="toc-item"><li><span><a href="#Bagging" data-toc-modified-id="Bagging-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Bagging</a></span></li><li><span><a href="#Random-Forest" data-toc-modified-id="Random-Forest-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Random Forest</a></span></li><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.3"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Implementation</a></span></li><li><span><a href="#Feature-Importance" data-toc-modified-id="Feature-Importance-1.4"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Feature Importance</a></span></li><li><span><a href="#Extra-Trees" data-toc-modified-id="Extra-Trees-1.5"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Extra Trees</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Reference</a></span></li></ul></div> End of explanation """ # generate 1000 random numbers (between 0 and 1) for each model, # representing 1000 observations np.random.seed(1234) mod1 = np.random.rand(1000) mod2 = np.random.rand(1000) mod3 = np.random.rand(1000) mod4 = np.random.rand(1000) mod5 = np.random.rand(1000) # each model independently predicts 1 (the "correct response") # if random number was at least 0.3 preds1 = np.where(mod1 > 0.3, 1, 0) preds2 = np.where(mod2 > 0.3, 1, 0) preds3 = np.where(mod3 > 0.3, 1, 0) preds4 = np.where(mod4 > 0.3, 1, 0) preds5 = np.where(mod5 > 0.3, 1, 0) # how accurate was each individual model? print(preds1.mean()) print(preds2.mean()) print(preds3.mean()) print(preds4.mean()) print(preds5.mean()) # average the predictions, and then round to 0 or 1 # you can also do a weighted average, as long as the weight adds up to 1 ensemble_preds = np.round((preds1 + preds2 + preds3 + preds4 + preds5) / 5).astype(int) # how accurate was the ensemble? print(ensemble_preds.mean()) """ Explanation: Ensemble Trees Some of the materials builds on top of the previous documentation/implementation on decision trees, thus it might be best to walk through that one first. Ensembling is a very popular method for improving the predictive performance of machine learning models. 
Let's pretend that instead of building a single model to solve a binary classification problem, you created five independent models, and each model was correct about 70% of the time. If you combined these models into an "ensemble" and used their majority vote as a prediction, how often would the ensemble be correct? End of explanation """ wine = pd.read_csv('winequality-white.csv', sep = ';') # train/test split the features and response column y = wine['quality'].values X = wine.drop('quality', axis = 1).values X_train, X_test, y_train, y_test = train_test_split( X, y, test_size = 0.2, random_state = 1234) print('dimension of the dataset: ', wine.shape) wine.head() # this cell simply demonstrates how to create boostrap samples # we create an array of numbers from 1 to 20 # create the boostrap sample on top of that np.random.seed(1) nums = np.arange(1, 21) print('original:', nums) print('bootstrap: ', np.random.choice(nums, size = 20, replace = True)) class RandomForest: """ Regression random forest using scikit learn's decision tree as the base tree Parameters ---------- n_estimators: int the number of trees that you're going built on the bagged sample (you can even shutoff the bagging procedure for some packages) max_features: int the number of features that you allow when deciding which feature to split on all the other parameters for a decision tree like max_depth or min_sample_split also applies to Random Forest, it is just not used here as that is more related to a single decision tree """ def __init__(self, n_estimators, max_features): self.n_estimators = n_estimators self.max_features = max_features def fit(self, X, y): # for each base-tree models: # 1. draw bootstrap samples from the original data # 2. train the tree model on that bootstrap sample, and # during training, randomly select a number of features to # split on each node self.estimators = [] for i in range(self.n_estimators): boot = np.random.choice(y.shape[0], size = y.shape[0], replace = True) X_boot, y_boot = X[boot], y[boot] tree = DecisionTreeRegressor(max_features = self.max_features) tree.fit(X_boot, y_boot) self.estimators.append(tree) return self def predict(self, X): # for the prediction, we average the # predictions made by each of the bagged tree pred = np.empty((X.shape[0], self.n_estimators)) for i, tree in enumerate(self.estimators): pred[:, i] = tree.predict(X) pred = np.mean(pred, axis = 1) return pred # compare the results between a single decision tree, # bagging and random forest, the lower the mean square # error, the better tree = DecisionTreeRegressor() tree.fit(X_train, y_train) tree_y_pred = tree.predict(X_test) print('tree: ', mean_squared_error(y_test, tree_y_pred)) # bagged decision tree # max_feature = None simply uses all features bag = RandomForest(n_estimators = 50, max_features = None) bag.fit(X_train, y_train) bag_y_pred = bag.predict(X_test) print('bagged tree: ', mean_squared_error(y_test, bag_y_pred)) # random forest using a random one third of the features at every split rf = RandomForest(n_estimators = 50, max_features = 1 / 3) rf.fit(X_train, y_train) rf_y_pred = rf.predict(X_test) print('random forest: ', mean_squared_error(y_test, rf_y_pred)) # use library to confirm results are comparable rf_reg = RandomForestRegressor(n_estimators = 50, max_features = 1 / 3) rf_reg.fit(X_train, y_train) rf_reg_y_pred = rf_reg.predict(X_test) print('random forest library: ', mean_squared_error(y_test, rf_reg_y_pred)) """ Explanation: Bagging The primary weakness of decision trees is that they 
don't tend to have the best predictive accuracy and the result can be very unstable. This is partially due to the fact that we were using greedy algorithm to choose the rule/feature to split the tree. Hence a small variations in the data might result in a completely different tree being generated. Fortunately, this problem can be mitigated by training an ensemble of decision trees and use these trees to form a "forest". This first idea we'll introduce is Bagging. Bagging, short for bootstrap aggregation is a general procedure for reducing the variance of a machine learning algorithm, although it can used with any type of method, it is most commonly applied to tree-based models. The way it works is: Given a training set $X = x_1, ..., x_n$ with responses $Y = y_1, ..., y_n$, bagging repeatedly ($B$ times) selects a random sample with replacement (a.k.a bootstrap sample) of the training set and fits trees to these newly generated samples: For $b = 1, ..., B$: Sample, with replacement, $n$ training examples from $X$, $Y$; call these $X_b$, $Y_b$. Note that the bootstrap sample should be the same size as the original training set Train a tree, $f_b$, on $X_b$, $Y_b$. For these individual tree, we should allow them to grow deeper (increase the max_depth parameter) so that they have low bias/high variance After training, predictions for unseen samples $x'$ can be made by averaging the predictions from all the individual regression trees on $x'$: $$ \begin{align} f' = \frac {1}{B}\sum {b=1}^{B}{f}{b}(x') \end{align} $$ Or by taking the majority vote in the case of classification trees. If you are wondering why bootstrapping is a good idea, the rationale is: We wish to ask a question of a population but we can't. Instead, we take a sample and ask the question to it instead. Now, how confident we should be that the sample answer is close to the population answer obviously depends on the structure of population. One way we might learn about this is to take samples from the population again and again, ask them the question, and see how variable the sample answers tended to be. But often times this isn't possible (we wouldn't relaunch the Titanic and crash it into another iceberg), thus we can use the information in the sample we actually have to learn about it. This is a reasonable thing to do because not only is the sample you have the best and the only information you have about what the population actually looks like, but also because most samples will, if they're randomly chosen, look quite like the population they came from. In the end, sampling with replacement is just a convenient way to treat the sample like it's a population and to sample from it in a way that reflects its shape. Random Forest Random Forest is very similar to bagged trees. Exactly like bagging, we create an ensemble of decision trees using bootstrapped samples of the training set. When building each tree, however, each time a split is considered, a random sample of $m$ features is chosen as split candidates from the full set of $p$ features. The split is only allowed to use one of those $m$ features to generate the best rule/feature to split on. For classification, $m$ is typically chosen to be, $\sqrt{p}$, the square root of $p$. For regression, $m$ is typically chosen to be somewhere between $p/3$ and $p$. The whole point of choosing a new random sample of features for every single tree at every single split is to correct for decision trees' habit of overfitting to their training set. 
Suppose there is one very strong feature in the data set, when using bagged trees, most of the trees will use that feature as the top split, resulting in an ensemble of similar trees that are highly correlated. By randomly leaving out candidate features from each split, Random Forest "decorrelates" the trees, such that the averaging process can further reduce the variance of the resulting model. Implementation Here, we will use the Wine Quality Data Set to test our implementation. This link should download the .csv file. The task is to predict the quality of the wine (a scale of 1 ~ 10) given some of its features. We'll build three types of regression model, decision tree, bagged decision tree and random forest on the training set and compare the result on the test set. End of explanation """ from tree import Tree # load a sample dataset iris = load_iris() iris_X = iris.data iris_y = iris.target # train model and print the feature importance tree = Tree() tree.fit(iris_X, iris_y) print(tree.feature_importance) # use library to confirm result # note that the result might not always be the same # because of decision tree's high variability clf = DecisionTreeClassifier(criterion = 'entropy', min_samples_split = 10, max_depth = 3) clf.fit(iris_X, iris_y) print(clf.feature_importances_) """ Explanation: Feature Importance When using Bagging with decision tree or using Random Forest, we can increase the predictive accuracy of individual tree. These methods, however, do decrease model interpretability, because it is no longer possible to visualize all the trees that are built to form the "forest". Fortunately, we can still obtain an overall summary of feature importance from these models. The way feature importance works is as follows (there are many ways to do it, this is the implementation that scikit-learn uses): We first compute the feature importance values of a single tree: We can initialize an array feature_importances of all zeros with size n_features We start building the tree and for each internal node that splits on feature $i$ we compute the information gain (error reduction) of that node multiplied by the proportion of samples that were routed to the node and add this quantity to feature_importances[i] The information gain (error reduction) depends on the impurity criterion that you use (e.g. Gini, Entropy for classification, MSE for regression). Its the impurity of the set of examples that gets routed to the internal node minus the sum of the impurities of the two partitions created by the split. Now, recall that these Ensemble Tree models simply consists of a bunch of individual trees, hence after computing the feature_importance values across all individual trees, we sum them up and take the average across all of them (normalize the values to sum up to 1 if necessary). Building on top of the previous documentation/implementation on decision trees, we add the code to compute the feature importance. The code is not shown here, but can be obtained here for those that are interested in the implementation. End of explanation """ def vis_importance(estimator, feature_names, threshold = 0.05): """ Visualize the relative importance of predictors. Parameters ---------- estimator : sklearn-like ensemble tree model A tree estimator that contains the attribute ``feature_importances_``. feature_names : str 1d array or list[str] Feature names that corresponds to the feature importance. 
threshold : float, default 0.05 Features that have importance scores lower than this threshold will not be presented in the plot, this assumes the feature importance sum up to 1. """ if not hasattr(estimator, 'feature_importances_'): msg = '{} does not have the feature_importances_ attribute' raise ValueError(msg.format(estimator.__class__.__name__)) imp = estimator.feature_importances_ feature_names = np.asarray(feature_names) mask = imp > threshold importances = imp[mask] idx = np.argsort(importances) scores = importances[idx] names = feature_names[mask] names = names[idx] y_pos = np.arange(1, len(scores) + 1) if hasattr(estimator, 'estimators_'): # apart from the mean feature importance, for scikit-learn we can access # each individual tree's feature importance and compute the standard deviation tree_importances = np.asarray([tree.feature_importances_ for tree in estimator.estimators_]) importances_std = np.std(tree_importances[:, mask], axis = 0) scores_std = importances_std[idx] plt.barh(y_pos, scores, align = 'center', xerr = scores_std) else: plt.barh(y_pos, scores, align = 'center') plt.yticks(y_pos, names) plt.xlabel('Importance') plt.title('Feature Importance Plot') # change default figure and font size plt.rcParams['figure.figsize'] = 8, 6 plt.rcParams['font.size'] = 12 # visualize the feature importance of every variable vis_importance(rf_reg, wine.columns[:-1]) """ Explanation: For ensemble tree, we simply sum all the feauture importance up and take the average (normalize it to sum up to 1 if necessary). Thus, we will not go through the process of building that from scratch, we'll simply visualize the feature importance of the regression Random Forest that we've previously trained on the wine dataset. End of explanation """ size = 10000 np.random.seed(10) X_seed = np.random.normal(0, 1, size) X0 = X_seed + np.random.normal(0, 0.1, size) X1 = X_seed + np.random.normal(0, 0.1, size) X2 = X_seed + np.random.normal(0, 0.1, size) X_012 = np.array([ X0, X1, X2 ]).T Y = X0 + X1 + X2 rf = RandomForestRegressor(n_estimators = 20, max_features = 2) rf.fit(X_012, Y) print('Scores for X0, X1, X2:', np.round(rf.feature_importances_, 3)) """ Explanation: Caveat: One thing to keep in mind when using the impurity based feature importance ranking is that when the dataset has two (or more) correlated features, then from the model's point of view, any of these correlated features can be used as the predictor, with no preference of one over the others. But once one of them is used, the importance of others is significantly reduced since the impurity they can effectively remove has already been removed by the first feature. As a consequence, they will have a lower reported importance. This is not an issue when we want to use feature selection to reduce overfitting, since it makes sense to remove features that are mostly duplicated by other features. But when we're interpreting the data, it can lead to incorrect conclusions that one of the variables is a strong predictor while the others in the same group are unimportant, while actually they are very close in terms of their relationship with the response variable. The effect of this phenomenon for Random Forest is somewhat reduced thanks to random selection of features at each node creation, but in general the effect is not removed completely. 
In the following example, we have three correlated variables $X_0$, $X_1$, $X_2$, and no noise in the data, with the output variable being the sum of the three features: End of explanation """ # grid search on a range of max features and compare # the performance between Extra Trees and Random Forest param_name = 'max_features' max_features_options = np.arange(4, 10) fit_param = {param_name: max_features_options} rf_reg = RandomForestRegressor(n_estimators = 30) et_reg = ExtraTreesRegressor(n_estimators = 30) gs_rf = GridSearchCV(rf_reg, fit_param, n_jobs = -1) gs_et = GridSearchCV(et_reg, fit_param, n_jobs = -1) gs_rf.fit(X_train, y_train) gs_et.fit(X_train, y_train) # visualize the performance on the cross validation test score gs_rf_mean_score = gs_rf.cv_results_['mean_test_score'] gs_et_mean_score = gs_et.cv_results_['mean_test_score'] mean_scores = [gs_rf_mean_score, gs_et_mean_score] labels = ['RF', 'ET'] for score, label in zip(mean_scores, labels): plt.plot(max_features_options, score, label = label) plt.legend() plt.ylabel('MSE') plt.xlabel(param_name) plt.xlim( np.min(max_features_options), np.max(max_features_options) ) plt.show() """ Explanation: When we compute the feature importances, we see that some of the features have higher importance than the others, while their “true” importance should be very similar. One thing to point out though is that the difficulty of interpreting the importance/ranking of correlated variables is not Random Forest specific, but applies to most model based feature selection methods. This is why it often best practice to remove correlated features prior to training the model. Advantages of Random Forests: Require very little feature engineering (e.g. standardization) Easy to use, as it rarely requires parameter tuning to achieve compelling and robust performance Provides a more reliable estimate of feature importance compare to other black-box methods (e.g. deep learning, support vector machine) Performance and computation wise, it is very competitive. Although you can typically find a model that beats Random Forest for any given dataset (typically a deep learning or gradient boosting algorithm), it’s never by much, and it usually takes much longer to train and tune those model Extra Trees What distinguishes Extra Trees from Random Forest is: We use the entire training set instead of a bootstrap sample of the training set (but can also be trained on a bootstrapped sample as well if we wish) Just like Random Forest, when choosing rules/features at a split, a random subset of candidate features is used, but now, instead of looking at all the thresholds to find the best the best split, thresholds (for the split) are chosen completely at random for each candidate feature and the best of these randomly generated thresholds is picked as the splitting rule. We all know that tree-based methods employ a greedy algorithm when choosing the feature to split on. Thus, we can think of this as taking an extra step in trying to migitate this drawback Based on Stackoverflow: RandomForestClassifier vs ExtraTreesClassifier in scikit learn In practice, RFs are often more compact than ETs. ETs are generally cheaper to train from a computational point of view but can grow much bigger. ETs can sometime generalize better than RFs but it's hard to guess when it's the case without trying both first (and tuning n_estimators, max_features and min_samples_split by cross-validated grid search). End of explanation """
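"""
Explanation: To make the "completely random threshold" idea described above concrete, the sketch below is a minimal, self-contained illustration (not the scikit-learn implementation) of how an Extra-Trees-style split could be chosen for a regression target: for each candidate feature we draw a single random threshold between that feature's minimum and maximum, score the resulting split by its reduction in squared error, and keep the best of these random candidates. All names here are illustrative.
End of explanation
"""

import numpy as np

def extra_trees_style_split(X, y, candidate_features, random_state=None):
    """Pick a (feature, threshold) split using one random threshold per candidate feature."""
    rng = np.random.default_rng(random_state)
    parent_sse = np.var(y) * len(y)  # squared error before splitting
    best = None
    for j in candidate_features:
        lo, hi = X[:, j].min(), X[:, j].max()
        if lo == hi:
            continue  # constant feature, nothing to split on
        threshold = rng.uniform(lo, hi)  # completely random threshold for this feature
        left = X[:, j] <= threshold
        if left.all() or not left.any():
            continue  # degenerate split, skip it
        child_sse = np.var(y[left]) * left.sum() + np.var(y[~left]) * (~left).sum()
        gain = parent_sse - child_sse  # error reduction achieved by this random split
        if best is None or gain > best[0]:
            best = (gain, j, threshold)
    return best  # (gain, feature index, threshold) or None if no valid split was found

# Tiny usage example on synthetic data where only the first feature matters
rng = np.random.default_rng(0)
X_toy = rng.normal(size=(200, 4))
y_toy = 3 * X_toy[:, 0] + rng.normal(scale=0.1, size=200)
print(extra_trees_style_split(X_toy, y_toy, candidate_features=[0, 1, 2, 3], random_state=0))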
liangjg/openmc
examples/jupyter/candu.ipynb
mit
%matplotlib inline from math import pi, sin, cos import numpy as np import openmc """ Explanation: In this example, we will create a typical CANDU bundle with rings of fuel pins. At present, OpenMC does not have a specialized lattice for this type of fuel arrangement, so we must resort to manual creation of the array of fuel pins. End of explanation """ fuel = openmc.Material(name='fuel') fuel.add_element('U', 1.0) fuel.add_element('O', 2.0) fuel.set_density('g/cm3', 10.0) clad = openmc.Material(name='zircaloy') clad.add_element('Zr', 1.0) clad.set_density('g/cm3', 6.0) heavy_water = openmc.Material(name='heavy water') heavy_water.add_nuclide('H2', 2.0) heavy_water.add_nuclide('O16', 1.0) heavy_water.add_s_alpha_beta('c_D_in_D2O') heavy_water.set_density('g/cm3', 1.1) """ Explanation: Let's begin by creating the materials that will be used in our model. End of explanation """ # Outer radius of fuel and clad r_fuel = 0.6122 r_clad = 0.6540 # Pressure tube and calendria radii pressure_tube_ir = 5.16890 pressure_tube_or = 5.60320 calendria_ir = 6.44780 calendria_or = 6.58750 # Radius to center of each ring of fuel pins ring_radii = np.array([0.0, 1.4885, 2.8755, 4.3305]) """ Explanation: With our materials created, we'll now define key dimensions in our model. These dimensions are taken from the example in section 11.1.3 of the Serpent manual. End of explanation """ # These are the surfaces that will divide each of the rings radial_surf = [openmc.ZCylinder(r=r) for r in (ring_radii[:-1] + ring_radii[1:])/2] water_cells = [] for i in range(ring_radii.size): # Create annular region if i == 0: water_region = -radial_surf[i] elif i == ring_radii.size - 1: water_region = +radial_surf[i-1] else: water_region = +radial_surf[i-1] & -radial_surf[i] water_cells.append(openmc.Cell(fill=heavy_water, region=water_region)) """ Explanation: To begin creating the bundle, we'll first create annular regions completely filled with heavy water and add in the fuel pins later. The radii that we've specified above correspond to the center of each ring. We actually need to create cylindrical surfaces at radii that are half-way between the centers. End of explanation """ plot_args = {'width': (2*calendria_or, 2*calendria_or)} bundle_universe = openmc.Universe(cells=water_cells) bundle_universe.plot(**plot_args) """ Explanation: Let's see what our geometry looks like so far. In order to plot the geometry, we create a universe that contains the annular water cells and then use the Universe.plot() method. While we're at it, we'll set some keyword arguments that can be reused for later plots. End of explanation """ surf_fuel = openmc.ZCylinder(r=r_fuel) fuel_cell = openmc.Cell(fill=fuel, region=-surf_fuel) clad_cell = openmc.Cell(fill=clad, region=+surf_fuel) pin_universe = openmc.Universe(cells=(fuel_cell, clad_cell)) pin_universe.plot(**plot_args) """ Explanation: Now we need to create a universe that contains a fuel pin. Note that we don't actually need to put water outside of the cladding in this universe because it will be truncated by a higher universe. End of explanation """ num_pins = [1, 6, 12, 18] angles = [0, 0, 15, 0] for i, (r, n, a) in enumerate(zip(ring_radii, num_pins, angles)): for j in range(n): # Determine location of center of pin theta = (a + j/n*360.) * pi/180. 
x = r*cos(theta) y = r*sin(theta) pin_boundary = openmc.ZCylinder(x0=x, y0=y, r=r_clad) water_cells[i].region &= +pin_boundary # Create each fuel pin -- note that we explicitly assign an ID so # that we can identify the pin later when looking at tallies pin = openmc.Cell(fill=pin_universe, region=-pin_boundary) pin.translation = (x, y, 0) pin.id = (i + 1)*100 + j bundle_universe.add_cell(pin) bundle_universe.plot(**plot_args) """ Explanation: The code below works through each ring to create a cell containing the fuel pin universe. As each fuel pin is created, we modify the region of the water cell to include everything outside the fuel pin. End of explanation """ pt_inner = openmc.ZCylinder(r=pressure_tube_ir) pt_outer = openmc.ZCylinder(r=pressure_tube_or) calendria_inner = openmc.ZCylinder(r=calendria_ir) calendria_outer = openmc.ZCylinder(r=calendria_or, boundary_type='vacuum') bundle = openmc.Cell(fill=bundle_universe, region=-pt_inner) pressure_tube = openmc.Cell(fill=clad, region=+pt_inner & -pt_outer) v1 = openmc.Cell(region=+pt_outer & -calendria_inner) calendria = openmc.Cell(fill=clad, region=+calendria_inner & -calendria_outer) root_universe = openmc.Universe(cells=[bundle, pressure_tube, v1, calendria]) """ Explanation: Looking pretty good! Finally, we create cells for the pressure tube and calendria and then put our bundle in the middle of the pressure tube. End of explanation """ geometry = openmc.Geometry(root_universe) geometry.export_to_xml() materials = openmc.Materials(geometry.get_all_materials().values()) materials.export_to_xml() plot = openmc.Plot.from_geometry(geometry) plot.color_by = 'material' plot.colors = { fuel: 'black', clad: 'silver', heavy_water: 'blue' } plot.to_ipython_image() """ Explanation: Let's look at the final product. We'll export our geometry and materials and then use plot_inline() to get a nice-looking plot. End of explanation """ settings = openmc.Settings() settings.particles = 1000 settings.batches = 20 settings.inactive = 10 settings.source = openmc.Source(space=openmc.stats.Point()) settings.export_to_xml() fuel_tally = openmc.Tally() fuel_tally.filters = [openmc.DistribcellFilter(fuel_cell)] fuel_tally.scores = ['flux'] tallies = openmc.Tallies([fuel_tally]) tallies.export_to_xml() openmc.run(output=False) """ Explanation: Interpreting Results One of the difficulties of a geometry like this is identifying tally results when there was no lattice involved. To address this, we specifically gave an ID to each fuel pin of the form 100*ring + azimuthal position. Consequently, we can use a distribcell tally and then look at our DataFrame which will show these cell IDs. End of explanation """ sp = openmc.StatePoint('statepoint.{}.h5'.format(settings.batches)) output_tally = sp.get_tally() output_tally.get_pandas_dataframe() """ Explanation: The return code of 0 indicates that OpenMC ran successfully. Now let's load the statepoint into a openmc.StatePoint object and use the Tally.get_pandas_dataframe(...) method to see our results. End of explanation """
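"""
Explanation: Because each fuel pin cell above was given the ID (i + 1)*100 + j, the ring index and azimuthal position can be recovered from any cell ID reported in the tally results with simple integer arithmetic. The helper below is a small bookkeeping sketch; which column of the tally DataFrame holds these IDs depends on the OpenMC version, so that lookup is only indicated as a comment.
End of explanation
"""

def decode_pin_id(cell_id):
    """Invert pin.id = (i + 1)*100 + j used when the bundle was built."""
    ring_plus_one, azimuthal_index = divmod(cell_id, 100)
    return ring_plus_one - 1, azimuthal_index  # 0-based ring index i, azimuthal index j

# Example: ID 203 was assigned to the pin with ring index 1 (second ring) and azimuthal index 3
print(decode_pin_id(203))

# To annotate the tally DataFrame, map this over whichever column holds the cell IDs, e.g.:
# df['ring'], df['position'] = zip(*df[id_column].map(decode_pin_id))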
metpy/MetPy
dev/_downloads/0c4829bf9f81fa07605c78ac7049bb69/spc_convective_outlook.ipynb
bsd-3-clause
import geopandas from metpy.cbook import get_test_data from metpy.plots import MapPanel, PanelContainer, PlotGeometry """ Explanation: NOAA SPC Convective Outlook Demonstrate the use of geoJSON and shapefile data with PlotGeometry in MetPy's simplified plotting interface. This example walks through plotting the Day 1 Convective Outlook from NOAA Storm Prediction Center. The geoJSON file was retrieved from the Storm Prediction Center's archives &lt;https://www.spc.noaa.gov/archive/&gt;_. End of explanation """ day1_outlook = geopandas.read_file(get_test_data('spc_day1otlk_20210317_1200_lyr.geojson')) """ Explanation: Read in the geoJSON file containing the convective outlook. End of explanation """ day1_outlook """ Explanation: Preview the data. End of explanation """ geo = PlotGeometry() geo.geometry = day1_outlook['geometry'] geo.fill = day1_outlook['fill'] geo.stroke = day1_outlook['stroke'] geo.labels = day1_outlook['LABEL'] geo.label_fontsize = 'large' """ Explanation: Plot the shapes from the 'geometry' column. Give the shapes their fill and stroke color by providing the 'fill' and 'stroke' columns. Use text from the 'LABEL' column as labels for the shapes. End of explanation """ panel = MapPanel() panel.title = 'SPC Day 1 Convective Outlook (Valid 12z Mar 17 2021)' panel.plots = [geo] panel.area = [-120, -75, 25, 50] panel.projection = 'lcc' panel.layers = ['lakes', 'land', 'ocean', 'states', 'coastline', 'borders'] pc = PanelContainer() pc.size = (12, 8) pc.panels = [panel] pc.show() """ Explanation: Add the geometry plot to a panel and container. End of explanation """
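"""
Explanation: Since day1_outlook is an ordinary GeoDataFrame, standard pandas boolean indexing can be used to plot only part of the outlook, for example just the higher-end risk categories. The label strings below ('SLGT', 'ENH', 'MDT') are assumptions about the values stored in the LABEL column; inspect day1_outlook['LABEL'].unique() to see which categories the file actually contains.
End of explanation
"""

# Filter the outlook to an assumed subset of categories and rebuild the geometry plot from it
wanted_categories = ['SLGT', 'ENH', 'MDT']  # hypothetical label values
subset = day1_outlook[day1_outlook['LABEL'].isin(wanted_categories)]

geo_subset = PlotGeometry()
geo_subset.geometry = subset['geometry']
geo_subset.fill = subset['fill']
geo_subset.stroke = subset['stroke']
geo_subset.labels = subset['LABEL']
geo_subset.label_fontsize = 'large'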
ctk3b/msibi
msibi/tutorials/propane/propane.ipynb
mit
import itertools import string import os import numpy as np from msibi import MSIBI, State, Pair, mie """ Explanation: Propane Tutorial Created by Davy Yue 2017-06-14 Imports End of explanation """ os.system('rm rdfs/pair_C3*_state*-step*.txt f_fits.log') os.system('rm state_*/*.txt state*/run.py state*/*query.dcd') """ Explanation: Remove files generated during CG simulation End of explanation """ rdf_cutoff = 5.0 opt = MSIBI(rdf_cutoff=rdf_cutoff, n_rdf_points=201, pot_cutoff=3.0, smooth_rdfs=True) """ Explanation: Set up global parameters Cutoff radius set to 5.0 units. Parameters including number of data points and potential cutoff are passed to MSIBI. End of explanation """ stateA = State(kT=0.5, state_dir='./state_A', top_file='start.hoomdxml', name='stateA', backup_trajectory=True) stateB = State(kT=1.5, state_dir='./state_B', top_file='start.hoomdxml', name='stateB', backup_trajectory=True) stateC = State(kT=2.0, state_dir='./state_C', top_file='start.hoomdxml', name='stateC', backup_trajectory=True) states = [stateA, stateB, stateC] """ Explanation: Specify states States each are initialized with different temperatures, directories, and start.hoomdxml files. A list states contains all the individual states: stateA, stateB, stateC. End of explanation """ indices = list(itertools.combinations(range(1024), 2)) # all-all for 1024 atoms initial_guess = mie(opt.pot_r, 1.0, 1.0) # 1-D array of potential values. alphabet = ['A', 'B', 'C'] rdf_targets = [np.loadtxt('rdfs/C3-C3-state_{0}.txt'.format(i)) for i in alphabet] pair0 = Pair('C3', 'C3', initial_guess) alphas = [1.0, 1.0, 1.0] """ Explanation: Specify pairs Creates a list of all the possible indices for the 1024 atoms. Passes the type of interaction to be optimized, a C3 to itself, to Pair. Sets the alpha values to 1.0 End of explanation """ for state, target, alpha in zip(states, rdf_targets, alphas): pair0.add_state(state, target, alpha, indices) pairs = [pair0] """ Explanation: Add targets to pair Loops through each state, target, and alpha in zip. Adds the appropriate states, and converts pair0 into a list for the optimize() function. End of explanation """ opt.optimize(states, pairs, n_iterations=5, engine='hoomd') """ Explanation: Do magic Sprinkle fairy dust over the code. Calls the optimize function with the parameters given. Performs five iterations, with each successive iteration usually producing finer, better output. Uses the hoomd engine to run the simulations. End of explanation """
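"""
Explanation: A quick way to sanity-check the setup above is to plot the quantities that were already constructed: the initial Mie guess is defined on the opt.pot_r grid, and each entry of rdf_targets was loaded with np.loadtxt. This sketch assumes the target RDF files contain two columns (r and g(r)); adjust the indexing if they hold additional columns.
End of explanation
"""

import matplotlib.pyplot as plt

fig, (ax_pot, ax_rdf) = plt.subplots(1, 2, figsize=(10, 4))

# Initial guess for the C3-C3 pair potential on the optimization grid
ax_pot.plot(opt.pot_r, initial_guess)
ax_pot.set_xlabel('r')
ax_pot.set_ylabel('V(r)')
ax_pot.set_ylim(-2, 2)  # the Mie potential diverges at small r, so clip the view
ax_pot.set_title('Initial Mie guess')

# Target RDFs for the three states (assumed two-column layout: r, g(r))
for letter, target in zip(alphabet, rdf_targets):
    ax_rdf.plot(target[:, 0], target[:, 1], label='state ' + letter)
ax_rdf.set_xlabel('r')
ax_rdf.set_ylabel('g(r)')
ax_rdf.set_title('Target RDFs')
ax_rdf.legend()
plt.tight_layout()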
GoogleCloudPlatform/gcp-getting-started-lab-jp
machine_learning/cloud_ai_building_blocks/conversation_ja.ipynb
apache-2.0
import getpass APIKEY = getpass.getpass() """ Explanation: <a href="https://colab.research.google.com/github/GoogleCloudPlatform/gcp-getting-started-lab-jp/blob/master/machine_learning/cloud_ai_building_blocks/conversation_ja.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` Copyright 2019 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` 事前準備 GCP プロジェクト を作成します。 課金設定 を有効にします。 API Key を作成します。 Cloud Speech-to-Text API と Cloud Text-to-Speech API を有効にします。 Google Cloud API の認証情報を入力 Google Cloud API を REST インタフェースから利用するために、 API Key を利用します。 Google Cloud Console から API Key をコピーしましょう。 End of explanation """ from googleapiclient.discovery import build speech_service = build('speech', 'v1p1beta1', developerKey=APIKEY) """ Explanation: Cloud Speech-to-Text API を使ってみよう ! Google Cloud Speech-to-Text では、使いやすい API で高度なニューラル ネットワーク モデルを適用し、音声をテキストに変換できます。 API Discovery Service を利用して Cloud Speech-to-Text API を発見します。 Cloud Speech-to-Text の REST API 仕様は こちら に解説されています。 End of explanation """ #@title このセルを実行して record_audio を定義 # Install required libraries and packages !pip install -qq pydub !apt-get -qq update !apt-get -qq install -y ffmpeg # Define record_audio import base64 import google.colab import pydub from io import BytesIO def record_audio(file_id, framerate=16000, channels=1, file_format='flac'): # Record webm file from Colaboratory. audio = google.colab._message.blocking_request( 'user_media', { 'audio': True, 'video': False, 'duration': -1 }, timeout_sec=600) # Convert web file into in_memory file. mfile = BytesIO(base64.b64decode(audio[audio.index(',')+1:])) # Store webm file locally. with open('{0}.webm'.format(file_id), 'wb') as f: mfile.seek(0) f.write(mfile.read()) # Open stored web file and save it as wav with sample_rate=16000 output_file = '{0}.{1}'.format(file_id, file_format) _ = pydub.AudioSegment.from_file('{0}.webm'.format(file_id), codec='opus') _ = _.set_channels(channels) _.set_frame_rate(framerate).export(output_file, format=file_format) return output_file """ Explanation: 音声データの準備 音声録音のための関数 record_audio を定義しましょう。 End of explanation """ audio_filename = record_audio('ja-sample', framerate=16000, channels=1) """ Explanation: record_audio を実行して音声を録音しましょう。 End of explanation """ from IPython.display import Audio Audio(audio_filename, rate=16000) """ Explanation: 録音結果を確認しましょう。 End of explanation """ from base64 import b64encode from json import dumps languageCode = 'en-US' #@param ["en-US", "ja-JP", "en-IN"] model = 'default' #@param ["command_and_search", "phone_call", "video", "default"] """ Explanation: 音声認識の実行 Cloud Speech-to-Text API に入力する情報を定義します. 
End of explanation """ with open(audio_filename, 'rb') as audio_file: content = b64encode(audio_file.read()).decode('utf-8') my_audio = { 'content': content } """ Explanation: 入力する音声データを定義します。 End of explanation """ my_recognition_config = { 'encoding': 'FLAC', 'sampleRateHertz': 16000, 'languageCode': languageCode, 'model': model } """ Explanation: RecognitionConfig を定義します。 End of explanation """ my_request_body={ 'audio': my_audio, 'config': my_recognition_config, } """ Explanation: recognize method のリクエストメッセージの body を定義します。 End of explanation """ response = speech_service.speech().recognize(body=my_request_body).execute() """ Explanation: recognize method を実行します。 End of explanation """ response for r in response["results"]: print('認識結果: ', r['alternatives'][0]['transcript']) print('信頼度: ', r['alternatives'][0]['confidence']) """ Explanation: recognize method のレスポンスを確認します。 End of explanation """ my_recognition_config = { 'encoding': 'FLAC', 'sampleRateHertz': 16000, 'languageCode': languageCode, 'model': model, 'enableWordTimeOffsets': True } my_request_body={ 'audio': my_audio, 'config': my_recognition_config, } """ Explanation: 単語のタイムスタンプの取得 RecognitionConfig に enableWordTimeOffsets の設定を追加します。 End of explanation """ response = speech_service.speech().recognize(body=my_request_body).execute() """ Explanation: recognize method を実行します。 End of explanation """ response for r in response["results"]: print('認識結果: ', r['alternatives'][0]['transcript']) print('信頼度: ', r['alternatives'][0]['confidence'], "\n") for r in response["results"][0]['alternatives'][0]["words"]: print("word: ", r["word"]) print("startTime: ", r["startTime"]) print("endTime: ", r["endTime"], "\n") """ Explanation: recognize method のレスポンスを確認します。 End of explanation """ import textwrap from googleapiclient.discovery import build service = build('texttospeech', 'v1beta1', developerKey=APIKEY) """ Explanation: 演習問題 1. こちらを参考にして、単語レベルの信頼度を見てみましょう Cloud Text-to-Speech を使ってみよう ! Cloud Text-to-Speech を使うと、自然な会話音声を合成できます。用意されている声は 30 種類。数多くの言語と方言に対応します。 API Discovery Service を利用して Cloud Text-to-Speech API を発見します。 Cloud Text-to-Speech の REST API 仕様は こちら に解説されています。 End of explanation """ response = service.voices().list( languageCode="ja_JP", ).execute() for voice in response['voices']: print(voice) """ Explanation: サポートされているすべての音声の一覧表示 テキスト読み上げ合成のために Cloud Text-to-Speech API で使用できる音声を一覧表示します。なお、 languageCode はこちらを参考にしてください。 End of explanation """ source_language = "ja_JP" #@param {type: "string"} source_sentence = "Google Cloud Text-to-Speech \u3092\u4F7F\u3046\u3068\u3001\u81EA\u7136\u306A\u4F1A\u8A71\u97F3\u58F0\u3092\u5408\u6210\u3067\u304D\u307E\u3059\u3002" #@param {type:"string"} audio_encoding = 'OGG_OPUS' #@param ['OGG_OPUS', 'LINEAR16', 'MP3'] voice_gender = 'FEMALE' #@param ['FEMALE', 'MALE', 'NEUTRAL', 'SSML_VOICE_GENDER_UNSPECIFIED'] textwrap.wrap(source_sentence) voice_name = 'ja-JP-Wavenet-A' #@param {type: "string"} response = service.text().synthesize( body={ 'input': { 'text': source_sentence, }, 'voice': { 'languageCode': source_language, 'ssmlGender': voice_gender, 'name': voice_name, }, 'audioConfig': { 'audioEncoding': audio_encoding, }, } ).execute() """ Explanation: テキストから音声を合成する text.synthesize メソッドを使用すると、単語や文を自然な人間の音声の base64 でエンコードされた音声データに変換できます。このメソッドは、入力を生のテキストまたは音声合成マークアップ言語(SSML)として受け入れます。 End of explanation """ import base64 from IPython.display import Audio Audio(base64.b64decode(response['audioContent'])) """ Explanation: 合成した音声を確認しましょう End of explanation """
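"""
Explanation: To keep the synthesized speech for later use, the base64-encoded audio returned by the API can simply be decoded and written to disk. This is a minimal sketch; the .ogg extension matches the OGG_OPUS encoding chosen above, so use .mp3 or .wav instead if audio_encoding is changed.
End of explanation
"""

import base64

output_path = 'synthesized_speech.ogg'  # assumed output file name
with open(output_path, 'wb') as out_file:
    # response['audioContent'] holds the base64-encoded audio from text().synthesize()
    out_file.write(base64.b64decode(response['audioContent']))
print('Audio content written to', output_path)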
dewitt-li/deep-learning
image-classification/dlnd_image_classification.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) """ Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) """ Explanation: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions. End of explanation """ def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ # TODO: Implement Function return x/256 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize) """ Explanation: Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. 
End of explanation """ from sklearn import preprocessing labels=[0,1,2,3,4,5,6,7,8,9] lb=preprocessing.LabelBinarizer() lb.fit(labels) def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ # TODO: Implement Function return lb.transform(x) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode) """ Explanation: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode) """ Explanation: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb')) """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. """ input_shape=[None,image_shape[0],image_shape[1],image_shape[2]] # TODO: Implement Function return tf.placeholder(tf.float32,shape=input_shape,name='x') def neural_net_label_input(n_classes): """ Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. """ # TODO: Implement Function return tf.placeholder(tf.float32,shape=[None,n_classes],name='y') def neural_net_keep_prob_input(): """ Return a Tensor for keep probability : return: Tensor for keep probability. """ # TODO: Implement Function return tf.placeholder(tf.float32,name='keep_prob') """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input) """ Explanation: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. 
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size. End of explanation """ def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ # TODO: Implement Function input_depth = x_tensor.shape[3].value weights_shape = [conv_ksize[0], conv_ksize[1],input_depth, conv_num_outputs] weights = tf.Variable(tf.truncated_normal(weights_shape,mean=0.0,stddev=0.1)) bias = tf.Variable(tf.zeros(conv_num_outputs)) conv = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME') conv = tf.nn.bias_add(conv,bias) conv = tf.nn.max_pool(conv,ksize=[1,pool_ksize[0],pool_ksize[1],1],strides=[1,pool_strides[0],pool_strides[1],1],padding='SAME') conv = tf.nn.elu(conv) return conv """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool) """ Explanation: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. 
* Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers. End of explanation """ def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ # TODO: Implement Function x_tensor = tf.reshape(x_tensor, [-1, x_tensor.shape[1].value*x_tensor.shape[2].value*x_tensor.shape[3].value]) return x_tensor """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_flatten(flatten) """ Explanation: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation """ def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function return tf.contrib.layers.fully_connected(x_tensor,num_outputs,activation_fn=tf.nn.relu) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn) """ Explanation: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation """ def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function return tf.contrib.layers.fully_connected(x_tensor,num_outputs,activation_fn=None) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output) """ Explanation: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this. End of explanation """ def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. 
: return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) x_tensor=conv2d_maxpool(x, 32, (3,3), (1,1), (2,2), (2,2)) x_tensor=conv2d_maxpool(x_tensor, 64, (3,3), (1,1), (2,2), (2,2)) x_tensor=conv2d_maxpool(x_tensor, 128, (3,3), (1,1), (2,2), (2,2)) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) x_tensor=flatten(x_tensor) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) x_tensor=fully_conn(x_tensor, 32) x_tensor=fully_conn(x_tensor, 64) x_tensor=fully_conn(x_tensor, 128) x_tensor = tf.nn.dropout(x_tensor, keep_prob) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) num_outputs=10 x_tensor=output(x_tensor, num_outputs) # TODO: return output return x_tensor """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net) """ Explanation: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob. End of explanation """ def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data """ # TODO: Implement Function session.run(optimizer, feed_dict={ x: feature_batch, y: label_batch, keep_prob: keep_probability}) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_train_nn(train_neural_network) """ Explanation: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. 
This function is only optimizing the neural network. End of explanation """ def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function """ # TODO: Implement Function loss = session.run(cost, feed_dict={x: feature_batch,y: label_batch,keep_prob: 1.}) print('Loss: {}'.format(loss)) valid_acc = session.run(accuracy, feed_dict={x: valid_features,y: valid_labels,keep_prob: 1.}) print('Validation Accuracy: {}'.format(valid_acc)) """ Explanation: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. End of explanation """ # TODO: Tune Parameters epochs = 10 batch_size = 128 keep_probability = 0.5 """ Explanation: Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or start overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... * Set keep_probability to the probability of keeping a node using dropout End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) """ Explanation: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) """ Explanation: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): """ Test the saved model against the test dataset """ test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() """ Explanation: Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation """
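"""
Explanation: A single overall accuracy can hide the fact that some CIFAR-10 classes are much harder than others. The helper below is a small NumPy-only sketch of how per-class accuracy could be computed from softmax outputs and one-hot labels collected during testing; it is not wired into the TensorFlow graph above, so you would need to gather those arrays yourself inside test_model.
End of explanation
"""

import numpy as np

def per_class_accuracy(probabilities, one_hot_labels, class_names=None):
    """Return a dict mapping each class (name or index) to its accuracy."""
    predicted = np.argmax(probabilities, axis=1)   # predicted class index per example
    true = np.argmax(one_hot_labels, axis=1)       # true class index per example
    accuracies = {}
    for c in np.unique(true):
        mask = true == c
        key = class_names[c] if class_names is not None else int(c)
        accuracies[key] = float(np.mean(predicted[mask] == c))
    return accuracies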
VVard0g/ThreatHunter-Playbook
docs/tutorials/jupyter/notebooks/03_intro_to_pandas.ipynb
mit
import pandas as pd """ Explanation: Introduction to Pandas Goals: Learn how to use pandas dataframes Plot basic charts using dataframes and matplotlib Reference: * https://pandas.pydata.org/pandas-docs/stable/getting_started/overview.html * https://pandas.pydata.org/pandas-docs/stable/reference/frame.html * https://pandas.pydata.org/pandas-docs/stable/reference/series.html It is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Import pandas library End of explanation """ import pandas as pd import numpy as np ndarray = np.array(['a','b','c','d']) serie = pd.Series(ndarray) print(serie) """ Explanation: Pandas is well suited for many different kinds of data: * Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet * Ordered and unordered (not necessarily fixed-frequency) time series data. * Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels * Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure Data structures in pandas are: Series objects: 1D array, similar to a column in a spreadsheet DataFrame objects: 2D table, similar to a spreadsheet Panel objects: Dictionary of DataFrames, similar to sheet in MS Excel Create a Serie A 1D array similar to a column in spreadsheet End of explanation """ dog_data=[ ['Pedro','Doberman',3],\ ['Clementine','Golden Retriever',8],\ ['Norah','Great Dane',6],\ ['Mabel','Austrailian Shepherd',1],\ ['Bear','Maltese',4],\ ['Bill','Great Dane',10] ] dog_df=pd.DataFrame(dog_data,columns=['name','breed','age']) dog_df print(type(dog_df['age'].iloc[0])) """ Explanation: Create a data frame A dataframe is the tabular representation of data. Think of a dataframe as a spreadsheet with column headers and rows. End of explanation """ dog_df.head() """ Explanation: Previewing the data frame DataFrame.head(n=5) * This function returns the first n rows for the object based on position. It is useful for quickly testing if your object has the right type of data in it End of explanation """ dog_df.tail(3) """ Explanation: DataFrame.tail(n=5) * This function returns last n rows from the object based on position. It is useful for quickly verifying data, for example, after sorting or appending rows End of explanation """ dog_df.shape len(dog_df) """ Explanation: DataFrame.shape * Return a tuple representing the dimensionality of the DataFrame. End of explanation """ dog_df.columns """ Explanation: DataFrame.columns * The column labels of the DataFrame End of explanation """ dog_df.dtypes """ Explanation: DataFrame.dtypes * Return the dtypes in the DataFrame. * This returns a Series with the data type of each column. * The result’s index is the original DataFrame’s columns. * Columns with mixed types are stored with the object dtype. End of explanation """ dog_df.values """ Explanation: DataFrame.values * Return a Numpy representation of the DataFrame. * Python documentation recommends using DataFrame.to_numpy() instead. * Only the values in the DataFrame will be returned, the axes labels will be removed. 
End of explanation """ dog_df.describe() """ Explanation: DataFrame.describe(percentiles=None, include=None, exclude=None) * Generate descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values. * Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. The output will vary depending on what is provided. End of explanation """ dog_df['breed'].value_counts() """ Explanation: Series.value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=True) * Return a Series containing counts of unique values. * The resulting object will be in descending order so that the first element is the most frequently-occurring element. Excludes NA values by default. End of explanation """ dog_df[['name','age']] """ Explanation: Sorting Selecting/Querying End of explanation """ dog_df.iloc[2:4] dog_df.iloc[1:4, 0:2] dog_df[dog_df['breed'].isin(['Great Dane', 'Maltese'])] dog_df[dog_df['name']=='Norah'] dog_df[(dog_df['name']=='Bill') & (dog_df['breed']=='Great Dane')] dog_df[dog_df['age']<5] dog_df[dog_df['breed'].str.contains('G')] """ Explanation: DataFrame.iloc * Purely integer-location based indexing for selection by position. * .iloc[] is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a boolean array. Allowed inputs are: An integer, e.g. 5. A list or array of integers, e.g. [4, 3, 0]. A slice object with ints, e.g. 1:7. A boolean array. A callable function with one argument (the calling Series, DataFrame or Panel) and that returns valid output for indexing (one of the above). This is useful in method chains, when you don’t have a reference to the calling object, but would like to base your selection on some value. End of explanation """ owner_data=[['Bilbo','Pedro'],['Gandalf','Bear'],['Sam','Bill']] owner_df=pd.DataFrame(owner_data,columns=['owner_name','dog_name']) """ Explanation: Combining data frames End of explanation """ df=pd.merge(owner_df,dog_df,left_on='dog_name',right_on='name',how='inner') df """ Explanation: DataFrame.merge(right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None) * Merge DataFrame or named Series objects with a database-style join. * The join is done on columns or indexes. If joining columns on columns, the DataFrame indexes will be ignored. Otherwise if joining indexes on indexes or indexes on a column or columns, the index will be passed on End of explanation """ inner_df = owner_df.merge(dog_df, left_on='dog_name', right_on='name', how='inner') inner_df inner_df=inner_df.drop(['name'],axis=1) inner_df """ Explanation: More details on merge parameters: * right : DataFrame * how : {‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘inner’ * left: use only keys from left frame, similar to a SQL left outer join; preserve key order * right: use only keys from right frame, similar to a SQL right outer join; preserve key order * outer: use union of keys from both frames, similar to a SQL full outer join; sort keys lexicographically * inner: use intersection of keys from both frames, similar to a SQL inner join; preserve the order of the left keys * on : label or list. Column or index level names to join on. These must be found in both DataFrames. If on is None and not merging on indexes then this defaults to the intersection of the columns in both DataFrames. * left_on : label or list, or array-like. 
Column or index level names to join on in the left DataFrame. Can also be an array or list of arrays of the length of the left DataFrame. These arrays are treated as if they are columns. * right_on : label or list, or array-like Column or index level names to join on in the right DataFrame. Can also be an array or list of arrays of the length of the right DataFrame. These arrays are treated as if they are columns. Reference: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html | Merge method | SQL Join Name | Description | | -------------|---------------|-------------| | left | LEFT OUTER JOIN | Use keys from left frame only | | right | RIGHT OUTER JOIN | Use keys from right frame only | | outer | FULL OUTER JOIN | Use union of keys from both frames | | inner | INNER JOIN | Use intersection of keys from both frames | Inner Merge End of explanation """ left_df = owner_df.merge(dog_df, left_on='dog_name', right_on='name', how='left') left_df """ Explanation: Left Merge End of explanation """ right_df = owner_df.merge(dog_df, left_on='dog_name', right_on='name', how='right') right_df """ Explanation: Right Merge End of explanation """ outer_df = owner_df.merge(dog_df, left_on='dog_name', right_on='name', how='outer') outer_df """ Explanation: Outer Merge End of explanation """ df=df.drop(['name'],axis=1) df """ Explanation: Dropping Columns DataFrame.drop(labels=None, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise') * Drop specified labels from rows or columns. * Remove rows or columns by specifying label names and corresponding axis, or by specifying directly index or column names. * When using a multi-index, labels on different levels can be removed by specifying the level. End of explanation """ import matplotlib """ Explanation: Basic plotting End of explanation """ # Will allow us to embed images in the notebook %matplotlib inline plot_df = pd.DataFrame({ 'col1': [1, 3, 2, 4], 'col2': [3, 6, 5, 1], 'col3': [4, 7, 6, 2], }) """ Explanation: Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. End of explanation """ plot_df.plot() plot_df.plot(kind='box') plot_df.plot(kind='bar') """ Explanation: matplotlib.pyplot.plot(*args, scalex=True, scaley=True, data=None, kwargs)** * Plot y versus x as lines and/or markers. End of explanation """
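"""
Explanation: The plotting methods shown above combine naturally with the DataFrame operations from earlier in the notebook. As a small example, the dog_df created at the start can be grouped by breed and the mean age per breed drawn as a horizontal bar chart (groupby is standard pandas, although it was not covered above):
End of explanation
"""

# Mean age per breed from dog_df, plotted as a horizontal bar chart
mean_age = dog_df.groupby('breed')['age'].mean().sort_values()
mean_age.plot(kind='barh')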
AMICI-developer/AMICI
python/examples/example_constant_species/ExampleEquilibrationLogic.ipynb
bsd-2-clause
from IPython.display import Image
fig = Image(filename=('../../../documentation/gfx/steadystate_solver_workflow.png'))
fig
"""
Explanation: AMICI documentation example of the steady state solver logic
This is an example to document the internal logic of the steady state solver, which is used in preequilibration and postequilibration.
Steady states of dynamical systems
Not every dynamical system needs to run into a steady state. Instead, it may exhibit continuous growth, e.g., $$\dot{x} = x, \quad x_0 = 1$$ a finite-time blow up, e.g., $$\dot{x} = x^2, \quad x_0 = 1$$ oscillations, e.g., $$\ddot{x} = -x, \quad x_0 = 1$$ chaotic behaviour, e.g., the Lorenz attractor
If the considered dynamical system has a steady state for positive times, then integrating the ODE long enough will equilibrate the system to this steady state. However, this may be computationally more demanding than other approaches and may fail if the maximum number of integration steps is exceeded before reaching the steady state. In general, Newton's method will find the steady state faster than forward simulation. However, it only converges if started close enough to the steady state. Moreover, it will not work if the dynamical system has conserved quantities which were not removed prior to steady state computation: conserved quantities will cause singularities in the Jacobian of the right hand side of the system, such that the linear problem within each step of Newton's method cannot be solved.
Logic of the steady state solver
If AMICI has to equilibrate a dynamical system, it can do this either via simulating until the right hand side of the system becomes small, or it can try to find the steady state directly by Newton's method. AMICI decides automatically which approach is chosen and how forward or adjoint sensitivities are computed, if requested. However, the user can influence this behavior if prior knowledge about the dynamical system is available. 
The logic which AMICI will follow to equilibrate the system works as follows: End of explanation """ import libsbml import importlib import amici import os import sys import numpy as np import matplotlib.pyplot as plt # SBML model we want to import sbml_file = 'model_constant_species.xml' # Name of the models that will also be the name of the python module model_name = 'model_constant_species' model_reduced_name = model_name + '_reduced' # Directories to which the generated model code is written model_output_dir = model_name model_reduced_output_dir = model_reduced_name # Read the model and give some output sbml_reader = libsbml.SBMLReader() sbml_doc = sbml_reader.readSBML(sbml_file) sbml_model = sbml_doc.getModel() dir(sbml_doc) print('Species: ', [s.getId() for s in sbml_model.getListOfSpecies()]) print('\nReactions:') for reaction in sbml_model.getListOfReactions(): reactants = ' + '.join(['%s %s'%(int(r.getStoichiometry()) if r.getStoichiometry() > 1 else '', r.getSpecies()) for r in reaction.getListOfReactants()]) products = ' + '.join(['%s %s'%(int(r.getStoichiometry()) if r.getStoichiometry() > 1 else '', r.getSpecies()) for r in reaction.getListOfProducts()]) reversible = '<' if reaction.getReversible() else '' print('%3s: %10s %1s->%10s\t\t[%s]' % (reaction.getId(), reactants, reversible, products, libsbml.formulaToL3String(reaction.getKineticLaw().getMath()))) # Create an SbmlImporter instance for our SBML model sbml_importer = amici.SbmlImporter(sbml_file) # specify observables and constant parameters constantParameters = ['synthesis_substrate', 'init_enzyme'] observables = { 'observable_product': {'name': '', 'formula': 'product'}, 'observable_substrate': {'name': '', 'formula': 'substrate'}, } sigmas = {'observable_product': 1.0, 'observable_substrate': 1.0} # import the model sbml_importer.sbml2amici(model_reduced_name, model_reduced_output_dir, observables=observables, constantParameters=constantParameters, sigmas=sigmas) sbml_importer.sbml2amici(model_name, model_output_dir, observables=observables, constantParameters=constantParameters, sigmas=sigmas, compute_conservation_laws=False) # import the models and run some test simulations model_reduced_module = amici.import_model_module(model_reduced_name, os.path.abspath(model_reduced_output_dir)) model_reduced = model_reduced_module.getModel() model_module = amici.import_model_module(model_name, os.path.abspath(model_output_dir)) model = model_module.getModel() # simulate model with conservation laws model_reduced.setTimepoints(np.linspace(0, 2, 100)) solver_reduced = model_reduced.getSolver() rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced) # simulate model without conservation laws model.setTimepoints(np.linspace(0, 2, 100)) solver = model.getSolver() rdata = amici.runAmiciSimulation(model, solver) # plot trajectories import amici.plotting amici.plotting.plotStateTrajectories(rdata_reduced, model=model_reduced) amici.plotting.plotObservableTrajectories(rdata_reduced, model=model_reduced) amici.plotting.plotStateTrajectories(rdata, model=model) amici.plotting.plotObservableTrajectories(rdata, model=model) """ Explanation: The example model We will use the example model model_constant_species.xml, which has conserved species. Those are automatically removed in the SBML import of AMICI, but they can also be kept in the model to demonstrate the failure of Newton's method due to a singular right hand side Jacobian. 
End of explanation """ # Call postequilibration by setting an infinity timepoint model.setTimepoints(np.full(1, np.inf)) # set the solver solver = model.getSolver() solver.setNewtonMaxSteps(10) solver.setMaxSteps(1000) rdata = amici.runAmiciSimulation(model, solver) #np.set_printoptions(threshold=8, edgeitems=2) for key, value in rdata.items(): print('%12s: ' % key, value) """ Explanation: The fields posteq_status and posteq_numsteps in rdata tell us how postequilibration worked: the first entry informs us about the status/number of steps in Newton's method (here 0, as Newton's method did not work) the second entry tells us the status/how many integration steps were taken until steady state was reached the third entry informs us about the status/number of Newton steps in the second launch, after simulation The status is encoded as an integer flag with the following meanings: 1: Successful run 0: Did not run -1: Error: No further specification is given, the error message should give more information. -2: Error: The method did not converge to a steady state within the maximum number of steps (Newton's method or simulation). -3: Error: The Jacobian of the right hand side is singular (only Newton's method) -4: Error: The damping factor in Newton's method was reduced until it met the lower bound without success (Newton's method only) -5: Error: The model was simulated past the timepoint t=1e100 without finding a steady state. Therefore, it is likely that the model has no steady state for the given parameter vector. Here, only the second entry of posteq_status contains a positive integer: The first run of Newton's method failed due to a Jacobian which could not be factorized, but the second run (simulation) contains the entry 1 (success). The third entry is 0, thus Newton's method was not launched for a second time. More information can be found in posteq_numsteps: Also here, only the second entry contains a positive integer, which is smaller than the maximum number of steps taken (<1000). Hence steady state was reached via simulation, which corresponds to the simulated time written to posteq_time. We want to demonstrate a complete failure of inferring the steady state by reducing the number of integration steps to a lower value: End of explanation """ # reduce maxsteps for integration solver.setMaxSteps(100) rdata = amici.runAmiciSimulation(model, solver) print('Status of postequilibration:', rdata['posteq_status']) print('Number of steps employed in postequilibration:', rdata['posteq_numsteps']) """ Explanation: However, the same logic works if we use the reduced model. For sufficiently many Newton steps, postequilibration is achieved by Newton's method in the first run. In this specific example, the steady state is found within one step.
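As an aside, for convenience when reading posteq_status, the integer flags listed above could be mapped to short messages with a small helper (a hypothetical snippet, not part of the AMICI API):

# Hypothetical helper translating the documented status flags into text.
STEADY_STATE_STATUS = {
    1: "success",
    0: "not run",
    -1: "error (unspecified, see the error message)",
    -2: "no convergence within the maximum number of steps",
    -3: "singular Jacobian of the right hand side (Newton's method only)",
    -4: "Newton damping factor hit its lower bound",
    -5: "no steady state found up to t=1e100",
}

def describe_posteq_status(rdata):
    # rdata['posteq_status'] holds the three flags discussed above.
    return [STEADY_STATE_STATUS.get(int(flag), "unknown flag")
            for flag in np.asarray(rdata['posteq_status']).ravel()]

print(describe_posteq_status(rdata))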
End of explanation """ # Call simulation with singular Jacobian and simulationFSA mode model.setTimepoints(np.full(1, np.inf)) model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.simulationFSA) solver = model.getSolver() solver.setNewtonMaxSteps(10) solver.setSensitivityMethod(amici.SensitivityMethod.forward) solver.setSensitivityOrder(amici.SensitivityOrder.first) solver.setMaxSteps(10000) rdata = amici.runAmiciSimulation(model, solver) print('Status of postequilibration:', rdata['posteq_status']) print('Number of steps employed in postequilibration:', rdata['posteq_numsteps']) print('Computed state sensitivities:') print(rdata['sx'][0,:,:]) # Call simulation with singular Jacobian and newtonOnly mode (will fail) model.setTimepoints(np.full(1, np.inf)) model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.newtonOnly) solver = model.getSolver() solver.setSensitivityMethod(amici.SensitivityMethod.forward) solver.setSensitivityOrder(amici.SensitivityOrder.first) solver.setMaxSteps(10000) rdata = amici.runAmiciSimulation(model, solver) print('Status of postequilibration:', rdata['posteq_status']) print('Number of steps employed in postequilibration:', rdata['posteq_numsteps']) print('Computed state sensitivities:') print(rdata['sx'][0,:,:]) # Call postequilibration by setting an infinity timepoint model_reduced.setTimepoints(np.full(1, np.inf)) model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.newtonOnly) solver_reduced = model_reduced.getSolver() solver_reduced.setNewtonMaxSteps(10) solver_reduced.setSensitivityMethod(amici.SensitivityMethod.forward) solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first) solver_reduced.setMaxSteps(1000) rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced) print('Status of postequilibration:', rdata_reduced['posteq_status']) print('Number of steps employed in postequilibration:', rdata_reduced['posteq_numsteps']) print('Computed state sensitivities:') print(rdata_reduced['sx'][0,:,:]) """ Explanation: Postequilibration with sensitivities Equilibration is possible with forward and adjoint sensitivity analysis. As for the main simulation part, adjoint sensitivity analysis yields less information than forward sensitivity analysis, since no state sensitivities are computed. However, it has a better scaling behavior towards large model sizes. Postequilibration with forward sensitivities If forward sensitivity analysis is used, then state sensitivities at the timepoint np.inf will be computed. This can be done in (currently) two different ways: If the Jacobian $\nabla_x f$ of the right hand side $f$ is not (close to) singular, the most efficient approach will be solving the linear system of equations, which defines the steady state sensitivities: $$0 = \dot{s}^x = (\nabla_x f) s^x + \frac{\partial f}{\partial \theta}\qquad \Rightarrow \qquad(\nabla_x f) s^x = - \frac{\partial f}{\partial \theta}$$ This approach will always be chosen by AMICI, if the option model.SteadyStateSensitivityMode is set to SteadyStateSensitivityMode.newtonOnly. Furthermore, it will also be chosen, if the steady state was found by Newton's method, as in this case, the Jacobian is at least not singular (but may still be poorly conditioned). A check for the condition number of the Jacobian is currently missing, but will soon be implemented. 
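In toy form, the linear-solve variant is just one call to a linear solver (a standalone NumPy sketch with invented numbers; here J stands for the steady-state Jacobian and dfdp for the parameter derivatives of f):

import numpy as np

# Invented 2-state, 2-parameter example: solve J sx = -df/dp for the sensitivities.
J = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
dfdp = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
sx = np.linalg.solve(J, -dfdp)
print(sx)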
If the Jacobian is poorly conditioned or singular, then the only way to obtain a reliable result will be integrating the state variables with state sensitivities until the norm of the right hand side becomes small. This approach will be chosen by AMICI, if the steady state was found by simulation and the option model.SteadyStateSensitivityMode is set to SteadyStateSensitivityMode.simulationFSA. This approach is numerically more stable, but the computation time for large models may be substantial. Side remark: A possible third way may consist in a (relaxed) Richardson iteration type approach, which interprets the entries of the right hand side $f$ as residuals and minimizes the squared residuals $\Vert f \Vert^2$ by a Levenberg-Marquart-type algorithm. This approach would also work for poorly conditioned (and even for singular Jacobians if additional constraints are implemented as Lagrange multipliers) while being faster than a long forward simulation. We want to demonstrate both possibilities to find the steady state sensitivities, as well as the failure of their computation if the Jacobian is singular and the newtonOnly setting was used. End of explanation """ # Call adjoint postequilibration by setting an infinity timepoint # and create an edata object, which is needed for adjoint computation edata = amici.ExpData(2, 0, 0, np.array([float('inf')])) edata.setObservedData([1.8] * 2) edata.fixedParameters = np.array([3., 5.]) model_reduced.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.newtonOnly) solver_reduced = model_reduced.getSolver() solver_reduced.setNewtonMaxSteps(10) solver_reduced.setSensitivityMethod(amici.SensitivityMethod.adjoint) solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first) solver_reduced.setMaxSteps(1000) rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata) print('Status of postequilibration:', rdata_reduced['posteq_status']) print('Number of steps employed in postequilibration:', rdata_reduced['posteq_numsteps']) print('Number of backward steps employed in postequilibration:', rdata_reduced['posteq_numstepsB']) print('Computed gradient:', rdata_reduced['sllh']) """ Explanation: Postequilibration with adjoint sensitivities Postequilibration also works with adjoint sensitivities. In this case, it is exploited that the ODE of the adjoint state $p$ will always have the steady state 0, since it's a linear ODE: $$\frac{d}{dt} p(t) = J(x^*, \theta)^T p(t),$$ where $x^*$ denotes the steady state of the system state. Since the Eigenvalues of the Jacobian are negative and since the Jacobian at steady state is a fixed matrix, this system has a simple algebraic solution: $$p(t) = e^{t J(x^*, \theta)^T} p_{\text{end}}.$$ As a consequence, the quadratures in adjoint computation also reduce to a matrix-vector product: $$Q(x, \theta) = Q(x^*, \theta) = p_{\text{integral}} * \frac{\partial f}{\partial \theta}$$ with $$p_{\text{integral}} = \int_0^\infty p(s) ds = (J(x^*, \theta)^T)^{-1} p_{\text{end}}.$$ However, this solution is given in terms of a linear system of equations defined by the transposed Jacobian of the right hand side. Hence, if the (transposed) Jacobian is singular, it is not applicable. In this case, standard integration must be carried out. 
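In the same toy notation (invented numbers; p_end denotes the adjoint state at the end of the backward integration), the algebraic shortcut for the non-singular case amounts to a single transposed solve:

import numpy as np

# Invented numbers: adjoint quadrature via one transposed linear solve.
J = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
dfdp = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
p_end = np.array([1.0, -0.5])
p_integral = np.linalg.solve(J.T, p_end)
print(p_integral @ dfdp)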
End of explanation """ # Call adjoint postequilibration with model with singular Jacobian model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.newtonOnly) solver = model.getSolver() solver.setNewtonMaxSteps(10) solver.setSensitivityMethod(amici.SensitivityMethod.adjoint) solver.setSensitivityOrder(amici.SensitivityOrder.first) rdata = amici.runAmiciSimulation(model, solver, edata) print('Status of postequilibration:', rdata['posteq_status']) print('Number of steps employed in postequilibration:', rdata['posteq_numsteps']) print('Number of backward steps employed in postequilibration:', rdata['posteq_numstepsB']) print('Computed gradient:', rdata['sllh']) """ Explanation: If we carry out the same computation with a system that has a singular Jacobian, then posteq_numstepsB will not be 0 any more (which indicates that the linear system solve was used to compute backward postequilibration). Now, integration is carried out and hence posteq_numstepsB &gt; 0 End of explanation """ # create edata, with 3 timepoints and 2 observables: edata = amici.ExpData(2, 0, 0, np.array([0., 0.1, 1.])) edata.setObservedData([1.8] * 6) edata.fixedParameters = np.array([3., 5.]) edata.fixedParametersPreequilibration = np.array([0., 2.]) edata.reinitializeFixedParameterInitialStates = True # create the solver object and run the simulation solver_reduced = model_reduced.getSolver() solver_reduced.setNewtonMaxSteps(10) rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata) amici.plotting.plotStateTrajectories(rdata_reduced, model = model_reduced) amici.plotting.plotObservableTrajectories(rdata_reduced, model = model_reduced) """ Explanation: Preequilibrating the model Sometimes, we want to launch a solver run from a steady state which was inferred numerically, i.e., the system was preequilibrated. In order to do this with AMICI, we need to pass an ExpData object, which contains fixed parameter for the actual simulation and for preequilibration of the model. End of explanation """ # Change the last timepoint to an infinity timepoint. edata.setTimepoints(np.array([0., 0.1, float('inf')])) # run the simulation rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata) """ Explanation: We can also combine pre- and postequilibration. End of explanation """ # No postquilibration this time. 
edata.setTimepoints(np.array([0., 0.1, 1.])) # create the solver object and run the simulation, singular Jacobian, enforce Newton solver for sensitivities model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.newtonOnly) solver = model.getSolver() solver.setNewtonMaxSteps(10) solver.setSensitivityMethod(amici.SensitivityMethod.forward) solver.setSensitivityOrder(amici.SensitivityOrder.first) rdata = amici.runAmiciSimulation(model, solver, edata) for key, value in rdata.items(): if key[0:6] == 'preeq_': print('%20s: ' % key, value) # Singluar Jacobian, use simulation model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.simulationFSA) solver = model.getSolver() solver.setNewtonMaxSteps(10) solver.setSensitivityMethod(amici.SensitivityMethod.forward) solver.setSensitivityOrder(amici.SensitivityOrder.first) rdata = amici.runAmiciSimulation(model, solver, edata) for key, value in rdata.items(): if key[0:6] == 'preeq_': print('%20s: ' % key, value) # Non-singular Jacobian, use Newton solver solver_reduced = model_reduced.getSolver() solver_reduced.setNewtonMaxSteps(10) solver_reduced.setSensitivityMethod(amici.SensitivityMethod.forward) solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first) rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata) for key, value in rdata_reduced.items(): if key[0:6] == 'preeq_': print('%20s: ' % key, value) """ Explanation: Preequilibration with sensitivities Beyond the need for an ExpData object, the steady state solver logic in preequilibration is the same as in postequilibration, also if sensitivities are requested. The computation will fail for singular Jacobians, if SteadyStateSensitivityMode is set to newtonOnly, or if not enough steps can be taken. However, if forward simulation with steady state sensitivities is allowed, or if the Jacobian is not singular, it will work. 
Prequilibration with forward sensitivities End of explanation """ # Non-singular Jacobian, use Newton solver and adjoints with initial state sensitivities solver_reduced = model_reduced.getSolver() solver_reduced.setNewtonMaxSteps(10) solver_reduced.setSensitivityMethod(amici.SensitivityMethod.adjoint) solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first) rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata) for key, value in rdata_reduced.items(): if key[0:6] == 'preeq_': print('%20s: ' % key, value) print('Gradient:', rdata_reduced['sllh']) # Non-singular Jacobian, use simulation solver and adjoints with initial state sensitivities solver_reduced = model_reduced.getSolver() solver_reduced.setNewtonMaxSteps(0) solver_reduced.setSensitivityMethod(amici.SensitivityMethod.adjoint) solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first) rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata) for key, value in rdata_reduced.items(): if key[0:6] == 'preeq_': print('%20s: ' % key, value) print('Gradient:', rdata_reduced['sllh']) # Non-singular Jacobian, use Newton solver and adjoints with fully adjoint preequilibration solver_reduced = model_reduced.getSolver() solver_reduced.setNewtonMaxSteps(10) solver_reduced.setSensitivityMethod(amici.SensitivityMethod.adjoint) solver_reduced.setSensitivityMethodPreequilibration(amici.SensitivityMethod.adjoint) solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first) rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata) for key, value in rdata_reduced.items(): if key[0:6] == 'preeq_': print('%20s: ' % key, value) print('Gradient:', rdata_reduced['sllh']) """ Explanation: Prequilibration with adjoint sensitivities When using preequilibration, adjoint sensitivity analysis can be used for simulation. This is a particularly interesting case: Standard adjoint sensitivity analysis requires the initial state sensitivities sx0 to work, at least if data is given for finite (i.e., not exclusively postequilibration) timepoints: For each parameter, a contribution to the gradient is given by the scalar product of the corresponding state sensitivity vector at timepoint $t=0$, (column in sx0), with the adjoint state ($p(t=0)$). Hence, the matrix sx0 is needed. This scalar product "closes the loop" from forward to adjoint simulation. By default, if adjoint sensitivity analysis is called with preequilibration, the initial state sensitivities are computed in just the same way as if this way done for forward sensitivity analysis. The only difference in the internal logic is that, if the steady state gets inferred via simulation, a separate solver object is used in order to ensure that the steady state simulation does not interfere with the snapshotting of the forward trajectory from the actual time course. However, also an adjoint version of preequilibration is possible: In this case, the "loop" from forward to adjoint simulation needs no closure: The simulation time is extended by preequilibration: forward from $t = -\infty$ to $t=0$, and after adjoint simulation also backward from $t=0$ to $t = -\infty$. Similar to adjoint postequilibration, the steady state of the adjoint state (at $t=-\infty$) is $p=0$, hence the scalar product (at $t=-\infty$) for the initial state sensitivities of preequilibration with the adjoint state vanishes. Instead, this gradient contribution is covered by additional quadratures $\int_{-\infty}^0 p(s) ds \cdot \frac{\partial f}{\partial \theta}$. 
In order to compute these quadratures correctly, the adjoint state from the main adjoint simulation must be passed on to the initial adjoint state of backward preequilibration. However, as the adjoint state must be passed on from backward computation to preequilibration, it is currently not allowed to alter (reinitialize) states of the model at $t=0$, unless these states are constant, as otherwise this alteration would lead to a discontinuity in the adjoints state as well and hence to an incorrect gradient. End of explanation """ # Non-singular Jacobian, use Newton solver and adjoints with fully adjoint preequilibration solver = model.getSolver() solver.setNewtonMaxSteps(10) solver.setSensitivityMethod(amici.SensitivityMethod.adjoint) solver.setSensitivityMethodPreequilibration(amici.SensitivityMethod.adjoint) solver.setSensitivityOrder(amici.SensitivityOrder.first) rdata = amici.runAmiciSimulation(model, solver, edata) for key, value in rdata.items(): if key[0:6] == 'preeq_': print('%20s: ' % key, value) print('Gradient:', rdata['sllh']) """ Explanation: As for postquilibration, adjoint preequilibration has an analytic solution (via the linear system), which will be preferred. If used for models with singular Jacobian, numerical integration will be carried out, which is indicated by preeq_numstepsB. End of explanation """ # Non-singular Jacobian, use simulaiton model_reduced.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.simulationFSA) solver_reduced = model_reduced.getSolver() solver_reduced.setNewtonMaxSteps(0) solver_reduced.setSensitivityMethod(amici.SensitivityMethod.forward) solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first) # run with lax tolerances solver_reduced.setRelativeToleranceSteadyState(1e-2) solver_reduced.setAbsoluteToleranceSteadyState(1e-3) solver_reduced.setRelativeToleranceSteadyStateSensi(1e-2) solver_reduced.setAbsoluteToleranceSteadyStateSensi(1e-3) rdata_reduced_lax = amici.runAmiciSimulation(model_reduced, solver_reduced, edata) # run with strict tolerances solver_reduced.setRelativeToleranceSteadyState(1e-12) solver_reduced.setAbsoluteToleranceSteadyState(1e-16) solver_reduced.setRelativeToleranceSteadyStateSensi(1e-12) solver_reduced.setAbsoluteToleranceSteadyStateSensi(1e-16) rdata_reduced_strict = amici.runAmiciSimulation(model_reduced, solver_reduced, edata) # compare ODE outputs print('\nODE solver steps, which were necessary to reach steady state:') print('lax tolerances: ', rdata_reduced_lax['preeq_numsteps']) print('strict tolerances: ', rdata_reduced_strict['preeq_numsteps']) print('\nsimulation time corresponding to steady state:') print(rdata_reduced_lax['preeq_t']) print(rdata_reduced_strict['preeq_t']) print('\ncomputation time to reach steady state:') print(rdata_reduced_lax['preeq_cpu_time']) print(rdata_reduced_strict['preeq_cpu_time']) """ Explanation: Controlling the error tolerances in pre- and postequilibration When solving ODEs or DAEs, AMICI uses the default logic of CVODES and IDAS to control error tolerances. This means that error weights are computed based on the absolute error tolerances and the product of current state variables of the system and their respective relative error tolerances. If this error combination is then controlled. The respective tolerances for equilibrating a system with AMICI can be controlled by the user via the getter/setter functions [get|set][Absolute|Relative]ToleranceSteadyState[Sensi]: End of explanation """
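For intuition, such an error weighting can be sketched as a weighted root-mean-square norm (an assumed, simplified form for illustration only, not AMICI's exact implementation):

import numpy as np

# Assumed simplified form: weight each component by 1 / (rtol * |x_i| + atol) and
# accept the steady state once the weighted RMS of the right hand side drops below 1.
def weighted_rms(f, x, rtol, atol):
    weights = 1.0 / (rtol * np.abs(x) + atol)
    return np.sqrt(np.mean((f * weights) ** 2))

x = np.array([2.0, 0.5])
f = np.array([1e-8, 2e-8])
print(weighted_rms(f, x, rtol=1e-2, atol=1e-3) < 1.0)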
yingchi/fastai-notes
deeplearning1/nbs/lesson4.ipynb
apache-2.0
ratings = pd.read_csv(path+'ratings.csv') ratings.head() len(ratings) """ Explanation: Set up data We're working with the movielens data, which contains one rating per row, like this: End of explanation """ movie_names = pd.read_csv(path+'movies.csv').set_index('movieId')['title'].to_dict() users = ratings.userId.unique() movies = ratings.movieId.unique() userid2idx = {o:i for i,o in enumerate(users)} movieid2idx = {o:i for i,o in enumerate(movies)} """ Explanation: Just for display purposes, let's read in the movie names too. End of explanation """ ratings.movieId = ratings.movieId.apply(lambda x: movieid2idx[x]) ratings.userId = ratings.userId.apply(lambda x: userid2idx[x]) user_min, user_max, movie_min, movie_max = (ratings.userId.min(), ratings.userId.max(), ratings.movieId.min(), ratings.movieId.max()) user_min, user_max, movie_min, movie_max n_users = ratings.userId.nunique() n_movies = ratings.movieId.nunique() n_users, n_movies """ Explanation: We update the movie and user ids so that they are contiguous integers, which we want when using embeddings. End of explanation """ n_factors = 50 np.random.seed = 42 """ Explanation: This is the number of latent factors in each embedding. End of explanation """ msk = np.random.rand(len(ratings)) < 0.8 trn = ratings[msk] val = ratings[~msk] """ Explanation: Randomly split into training and validation. End of explanation """ g=ratings.groupby('userId')['rating'].count() topUsers=g.sort_values(ascending=False)[:15] g=ratings.groupby('movieId')['rating'].count() topMovies=g.sort_values(ascending=False)[:15] top_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId') top_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId') pd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum) """ Explanation: Create subset for Excel We create a crosstab of the most popular movies and most movie-addicted users which we'll copy into Excel for creating a simple example. This isn't necessary for any of the modeling below however. End of explanation """ # It's a keras class (we import keras and a lot of other libraries in utils.py) ?Embedding user_in = Input(shape=(1,), dtype='int64', name='user_in') u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-4))(user_in) movie_in = Input(shape=(1,), dtype='int64', name='movie_in') m = Embedding(n_movies, n_factors, input_length=1, W_regularizer=l2(1e-4))(movie_in) x = merge([u, m], mode='dot') x = Flatten()(x) model = Model([user_in, movie_in], x) model.compile(Adam(0.001), loss='mse') model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1, validation_data=([val.userId, val.movieId], val.rating)) model.optimizer.lr=0.01 model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=3, validation_data=([val.userId, val.movieId], val.rating)) model.optimizer.lr=0.001 model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6, validation_data=([val.userId, val.movieId], val.rating)) """ Explanation: Dot product The most basic model is a dot product of a movie embedding and a user embedding. 
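As a rough standalone illustration with made-up numbers (plain NumPy, not the Keras model defined next), the predicted rating for one (user, movie) pair is simply the dot product of their factor vectors:

import numpy as np

# Five made-up latent factors per user and per movie; the prediction is their dot product.
user_factors = np.array([0.2, -1.0, 0.5, 0.1, 0.8])
movie_factors = np.array([0.4, -0.5, 1.0, 0.0, 0.3])
print(user_factors @ movie_factors)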
Let's see how well that works: End of explanation """ def embedding_input(name, n_in, n_out, reg): inp = Input(shape=(1,), dtype='int64', name=name) return inp, Embedding(n_in, n_out, input_length=1, W_regularizer=l2(reg))(inp) user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4) movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4) def create_bias(inp, n_in): x = Embedding(n_in, 1, input_length=1)(inp) return Flatten()(x) ub = create_bias(user_in, n_users) mb = create_bias(movie_in, n_movies) x = merge([u, m], mode='dot') x = Flatten()(x) x = merge([x, ub], mode='sum') x = merge([x, mb], mode='sum') model = Model([user_in, movie_in], x) model.compile(Adam(0.001), loss='mse') model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1, validation_data=([val.userId, val.movieId], val.rating)) model.optimizer.lr=0.01 model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6, validation_data=([val.userId, val.movieId], val.rating)) model.optimizer.lr=0.001 model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=10, validation_data=([val.userId, val.movieId], val.rating)) model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=5, validation_data=([val.userId, val.movieId], val.rating)) """ Explanation: The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well... Bias The problem is likely to be that we don't have bias terms - that is, a single bias for each user and each movie representing how positive or negative each user is, and how good each movie is. We can add that easily by simply creating an embedding with one output for each movie and each user, and adding it to our output. End of explanation """ model.save_weights(model_path+'bias.h5') model.load_weights(model_path+'bias.h5') """ Explanation: This result is quite a bit better than the best benchmarks that we could find with a quick google search - so looks like a great approach! End of explanation """ model.predict([np.array([3]), np.array([6])]) """ Explanation: We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6. End of explanation """ g=ratings.groupby('movieId')['rating'].count() topMovies=g.sort_values(ascending=False)[:2000] topMovies = np.array(topMovies.index) """ Explanation: Analyze results To make the analysis of the factors more interesting, we'll restrict it to the top 2000 most popular movies. End of explanation """ get_movie_bias = Model(movie_in, mb) # Model is anthter functional API(e.g. Sequential) in keras # Specify the input and output of the model movie_bias = get_movie_bias.predict(topMovies) movie_ratings = [(b[0], movie_names[movies[i]]) for i,b in zip(topMovies,movie_bias)] """ Explanation: First, we'll look at the movie bias term. We create a 'model' - which in keras is simply a way of associating one or more inputs with one more more outputs, using the functional API. Here, our input is the movie id (a single id), and the output is the movie bias (a single float). End of explanation """ sorted(movie_ratings, key=itemgetter(0))[:15] sorted(movie_ratings, key=itemgetter(0), reverse=True)[:15] """ Explanation: Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch. 
End of explanation """ get_movie_emb = Model(movie_in, m) movie_emb = np.squeeze(get_movie_emb.predict([topMovies])) movie_emb.shape """ Explanation: We can now do the same thing for the embeddings. End of explanation """ from sklearn.decomposition import PCA pca = PCA(n_components=3) movie_pca = pca.fit(movie_emb.T).components_ fac0 = movie_pca[0] movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac0, topMovies)] """ Explanation: Because it's hard to interpret 50 embeddings, we use PCA to simplify them down to just 3 vectors. End of explanation """ sorted(movie_comp, key=itemgetter(0), reverse=True)[:10] sorted(movie_comp, key=itemgetter(0))[:10] fac1 = movie_pca[1] movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac1, topMovies)] """ Explanation: Here's the 1st component. It seems to be 'critically acclaimed' or 'classic'. End of explanation """ sorted(movie_comp, key=itemgetter(0), reverse=True)[:10] sorted(movie_comp, key=itemgetter(0))[:10] fac2 = movie_pca[2] movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac2, topMovies)] """ Explanation: The 2nd is 'hollywood blockbuster'. End of explanation """ sorted(movie_comp, key=itemgetter(0), reverse=True)[:10] sorted(movie_comp, key=itemgetter(0))[:10] """ Explanation: The 3rd is 'violent vs happy'. End of explanation """ import sys stdout, stderr = sys.stdout, sys.stderr # save notebook stdout and stderr reload(sys) sys.setdefaultencoding('utf-8') sys.stdout, sys.stderr = stdout, stderr # restore notebook stdout and stderr start=50; end=100 X = fac0[start:end] Y = fac2[start:end] plt.figure(figsize=(15,15)) plt.scatter(X, Y) for i, x, y in zip(topMovies[start:end], X, Y): plt.text(x,y,movie_names[movies[i]], color=np.random.rand(3)*0.7, fontsize=14) plt.show() """ Explanation: We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components. End of explanation """ user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4) movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4) x = merge([u, m], mode='concat') x = Flatten()(x) x = Dropout(0.3)(x) x = Dense(70, activation='relu')(x) x = Dropout(0.75)(x) x = Dense(1)(x) nn = Model([user_in, movie_in], x) nn.compile(Adam(0.001), loss='mse') nn.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=8, validation_data=([val.userId, val.movieId], val.rating)) """ Explanation: Neural net Rather than creating a special purpose architecture (like our dot-product with bias earlier), it's often both easier and more accurate to use a standard neural network. Let's try it! Here, we simply concatenate the user and movie embeddings into a single vector, which we feed into the neural net. End of explanation """
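As with the earlier models, one could sanity-check the trained network on a single (user, movie) pair (a hypothetical usage sketch mirroring the model.predict call above):

# Hypothetical usage: predicted rating of user #3 for movie #6 from the neural net.
nn.predict([np.array([3]), np.array([6])])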
ES-DOC/esdoc-jupyterhub
notebooks/fio-ronm/cmip6/models/sandbox-3/landice.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-3', 'landice') """ Explanation: ES-DOC CMIP6 Model Properties - Landice MIP Era: CMIP6 Institute: FIO-RONM Source ID: SANDBOX-3 Topic: Landice Sub-Topics: Glaciers, Ice. Properties: 30 (21 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:01 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --&gt; Mass Balance 7. Ice --&gt; Mass Balance --&gt; Basal 8. Ice --&gt; Mass Balance --&gt; Frontal 9. Ice --&gt; Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.ice_albedo') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "function of ice age" # "function of ice density" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Ice Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify how ice albedo is modelled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.4. Atmospheric Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the atmosphere and ice (e.g. orography, ice mass) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Oceanic Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the ocean and ice End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ice velocity" # "ice thickness" # "ice temperature" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which variables are prognostically calculated in the ice model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Software Properties Software properties of land ice code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Grid Land ice grid 3.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land ice scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.2. Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is an adative grid being used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.base_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.3. Base Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The base resolution (in metres), before any adaption End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.resolution_limit') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.4. Resolution Limit Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If an adaptive grid is being used, what is the limit of the resolution (in metres) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.projection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.5. 
Projection Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The projection of the land ice grid (e.g. albers_equal_area) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Glaciers Land ice glaciers 4.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of glaciers in the land ice scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of glaciers, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 4.3. Dynamic Areal Extent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does the model include a dynamic glacial extent? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the ice sheet and ice shelf in the land ice scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.grounding_line_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "grounding line prescribed" # "flux prescribed (Schoof)" # "fixed grid size" # "moving grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.2. Grounding Line Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_sheet') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.3. Ice Sheet Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice sheets simulated? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_shelf') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Ice Shelf Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice shelves simulated? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Ice --&gt; Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how and where the surface mass balance (SMB) is calulated. 
Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Ice --&gt; Mass Balance --&gt; Basal Description of basal melting 7.1. Bedrock Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over bedrock End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Ice --&gt; Mass Balance --&gt; Frontal Description of claving/melting from the ice shelf front 8.1. Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of calving from the front of the ice shelf End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Melting Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of melting from the front of the ice shelf End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Ice --&gt; Dynamics ** 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description if ice sheet and ice shelf dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.approximation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SIA" # "SAA" # "full stokes" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.2. Approximation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Approximation type used in modelling ice dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 9.3. Adaptive Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there an adaptive time scheme for the ice scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.4. Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep (in seconds) of the ice scheme. 
If the timestep is adaptive, then state a representative timestep. End of explanation """
schaber/deep-learning
tv-script-generation/dlnd_tv_script_generation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper data_dir = './data/simpsons/moes_tavern_lines.txt' text = helper.load_data(data_dir) # Ignore notice, since we don't use it for analysing the data text = text[81:] """ Explanation: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation """ import numpy as np import problem_unittests as tests def create_lookup_tables(text): """ Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) """ # TODO: Implement Function vocab = set(text) vocab_to_int = {w: i for i,w in enumerate(vocab)} int_to_vocab = {vocab_to_int[w]: w for w in vocab_to_int} return vocab_to_int, int_to_vocab """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_create_lookup_tables(create_lookup_tables) """ Explanation: Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab) End of explanation """ def token_lookup(): """ Generate a dict to turn punctuation into a token. :return: Tokenize dictionary where the key is the punctuation and the value is the token """ # TODO: Implement Function tokens = {'.' : '||Period||', ',' : '||Comma||' , '"' : '||QuotationMark||', ';' : '||Semicolon||', '!' : '||ExclamationMark||', '?' 
: '||QuestionMark||', '(' : '||LeftParenthesis||', ')' : '||RightParenthesis||', '--': '||Dash||', '\n': '||Return||'} return tokens """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_tokenize(token_lookup) """ Explanation: Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||". End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import numpy as np import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Build the Neural Network You'll build the components necessary to build a RNN by implementing the following functions below: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches Check the Version of TensorFlow and Access to GPU End of explanation """ def get_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate) """ # TODO: Implement Function input = tf.placeholder(tf.int32, shape=[None,None], name='input') targets = tf.placeholder(tf.int32, shape=[None,None], name='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') return input, targets, learning_rate """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_inputs(get_inputs) """ Explanation: Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate) End of explanation """ lstm_layers = 1 def get_init_cell(batch_size, rnn_size): """ Create an RNN Cell and initialize it. :param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) """ # TODO: Implement Function cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) cell = tf.contrib.rnn.MultiRNNCell([cell]*lstm_layers) #TODO: add dropout? initial_state = cell.zero_state(batch_size, tf.int32) initial_state = tf.identity(initial_state, name='initial_state') return cell, initial_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_init_cell(get_init_cell) """ Explanation: Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState) End of explanation """ def get_embed(input_data, vocab_size, embed_dim): """ Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. """ # TODO: Implement Function #with graph.as_default(): embeddings = tf.Variable(tf.truncated_normal((vocab_size, embed_dim), stddev=0.1)) embed = tf.nn.embedding_lookup(embeddings, input_data) return embed """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_embed(get_embed) """ Explanation: Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence. End of explanation """ def build_rnn(cell, inputs): """ Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ # TODO: Implement Function print('inputs.shape={}'.format(inputs.shape)) outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) final_state = tf.identity(final_state, name='final_state') print(outputs.shape) #print(final_state.shape) return outputs, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_rnn(build_rnn) """ Explanation: Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. 
- Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState) End of explanation """ def build_nn(cell, rnn_size, input_data, vocab_size): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :return: Tuple (Logits, FinalState) """ # TODO: Implement Function embed_dim = 200 print('input_data.shape={}'.format(input_data.shape)) print('vocab_size={}'.format(vocab_size)) embedded = get_embed(input_data, vocab_size, embed_dim) print('embedded.shape={}'.format(embedded.shape)) outputs, final_state = build_rnn(cell, embedded) print('outputs.shape={}'.format(outputs.shape)) logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None) return logits, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_nn(build_nn) """ Explanation: Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState) End of explanation """ def get_batches(int_text, batch_size, seq_length): """ Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array """ # TODO: Implement Function n_batches = len(int_text)//(batch_size*seq_length) batches = np.zeros([n_batches, 2, batch_size, seq_length]) for i1 in range(n_batches): for i2 in range(2): for i3 in range(batch_size): pos = i1*seq_length+i2+2*seq_length*i3 batches[i1,i2,i3,:] = int_text[pos:pos+seq_length] return batches """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_batches(get_batches) """ Explanation: Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2], [ 7 8], [13 14]] # Batch of targets [[ 2 3], [ 8 9], [14 15]] ] # Second Batch [ # Batch of Input [[ 3 4], [ 9 10], [15 16]] # Batch of targets [[ 4 5], [10 11], [16 17]] ] # Third Batch [ # Batch of Input [[ 5 6], [11 12], [17 18]] # Batch of targets [[ 6 7], [12 13], [18 1]] ] ] ``` Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive. 
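A quick way to eyeball the result is a usage sketch of the function defined above on the documented example input (the shape is fixed by the specification; the printed values depend on the implementation):

# Usage sketch: 20 word ids, batch size 3, sequence length 2.
test_batches = get_batches(list(range(1, 21)), 3, 2)
print(test_batches.shape)  # (number of batches, 2, batch size, sequence length) = (3, 2, 3, 2)
print(test_batches[0])     # first batch: one block of inputs and one block of targets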
End of explanation """ # Number of Epochs num_epochs = 200 # Batch Size batch_size = 128 # RNN Size rnn_size = 512 # Sequence Length seq_length = 200 # Learning Rate learning_rate = 0.01 # Show stats for every n number of batches show_every_n_batches = 10 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ save_dir = './save' """ Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the neural network should print progress. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved') """ Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) """ Explanation: Save Parameters Save seq_length and save_dir for generating a new TV script. 
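helper.save_params is this project's own utility, so its internals are not shown here. Purely as a hedged illustration of what such a helper might do (an assumption, not the actual helper code), a pickle-based equivalent could look like this:
```
import pickle

def save_params_sketch(params, path='params.p'):
    # Hypothetical stand-in for helper.save_params: persist (seq_length, save_dir).
    with open(path, 'wb') as f:
        pickle.dump(params, f)

def load_params_sketch(path='params.p'):
    # Hypothetical stand-in for helper.load_params.
    with open(path, 'rb') as f:
        return pickle.load(f)
```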
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() """ Explanation: Checkpoint End of explanation """ def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ # TODO: Implement Function input = loaded_graph.get_tensor_by_name('input:0') init_state = loaded_graph.get_tensor_by_name('initial_state:0') final_state = loaded_graph.get_tensor_by_name('final_state:0') probs = loaded_graph.get_tensor_by_name('probs:0') return input, init_state, final_state, probs """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_tensors(get_tensors) """ Explanation: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) End of explanation """ def pick_word(probabilities, int_to_vocab): """ Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word """ # TODO: Implement Function #return int_to_vocab[np.argmax(probabilities)] idx = np.random.choice(range(len(probabilities)), p=probabilities) return int_to_vocab[idx] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_pick_word(pick_word) """ Explanation: Choose Word Implement the pick_word() function to select the next word using probabilities. End of explanation """ gen_length = 200 # homer_simpson, moe_szyslak, or Barney_Gumble prime_word = 'moe_szyslak' """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = [prime_word + ':'] prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script) """ Explanation: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. End of explanation """
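A common refinement of the pick_word function defined above, shown here only as a hedged sketch and not required by the project, is to rescale the probabilities with a temperature before sampling, which trades off diversity against repetitiveness in the generated script:
```
import numpy as np

def pick_word_with_temperature(probabilities, int_to_vocab, temperature=0.8):
    # Temperature < 1 sharpens the distribution, > 1 flattens it.
    logits = np.log(np.asarray(probabilities) + 1e-10) / temperature
    probs = np.exp(logits) / np.sum(np.exp(logits))
    idx = np.random.choice(len(probs), p=probs)
    return int_to_vocab[idx]
```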
spulido99/NetworksAnalysis
DiderGonzalez/Ejercicios 1.1/Ejercicios 1.1 - Graphs, Paths & Components.ipynb
mit
edges = set([(1, 2), (3, 1), (3, 2), (2, 4)])

import networkx as nx
G = nx.Graph()  # create an empty, undirected graph
G.add_edges_from(edges)
G2 = nx.DiGraph()  # create an empty, directed graph
G2.add_edges_from(edges)

numNodes = G.number_of_nodes()
numEdges = G.number_of_edges()

# the graph was created undirected, so the answer will be 4, but those 4 edges can be traversed in both directions
print("Undirected graph")
print("The pair (1,2) has an edge: ", G.has_edge(1,2))  # says whether the pair has a directed (False) or undirected (True) edge
print("Is directed: ", G.is_directed())  # says whether the graph is directed or undirected
print("Number of nodes: "+str(numNodes))
print("Number of edges (undirected): "+str(numEdges))  # prints 4 undirected edges
print("")
print("Directed graph")
print("Number of nodes: "+str(G2.number_of_nodes()))
print("Number of edges (directed): "+str(G2.number_of_edges()))  # prints 4 directed edges
print(G[3])
"""
Explanation: Exercises: Graphs, Paths & Components
Basic graph exercises.
Exercise - Number of Nodes and Edges
_(solve with your own code and using the NetworkX (Python) or iGraph (R) library)_
Count the number of nodes and edges given the following links (assuming the graph can be directed AND undirected):
End of explanation
"""
A = nx.adjacency_matrix(G)
print("Undirected graph")
print(A.todense())
print("")
A2 = nx.adjacency_matrix(G2)
print("Directed graph")
print(A2.todense())
"""
Explanation: Exercise - Adjacency Matrix
_(solve with your own code and using the NetworkX (Python) or iGraph (R) library)_
Build the adjacency matrix of the graph from the previous exercise (for both the directed and the undirected case).
End of explanation
"""
N = 5
"""
Explanation: Exercise - Sparseness
Compute the ratio between the number of existing links in 3 real networks (http://snap.stanford.edu/data/index.html) and the number of possible links. In the adjacency matrix of each of the chosen networks, how many zeros are there?
Exercise - Bipartite Networks
Define a bipartite network and generate both projections; explain what the nodes and links are, both in the original network and in the projections.
Exercise - Paths
Create a graph with 5 nodes and 5 edges. Pick any two nodes and print:
+ 5 different paths between the nodes
+ The shortest path between the nodes
+ The diameter of the network
+ A self-avoiding path
Exercise - Components
Download a real network (http://snap.stanford.edu/data/index.html) and read the file.
Using NetworkX or iGraph find the number of components.
Implement the Breadth First algorithm to find the number of components (check that the result is the same as the one obtained with the library; a sketch appears at the end of this notebook).
Exercise - Degree distribution
_(solve with your own code and using the NetworkX (Python) or iGraph (R) library)_
Plot the degree distribution of the real network.
Compute the average degree.
Exercise - Diameter
End of explanation
"""
routemap = [('St. Louis', 'Miami'), ('St. Louis', 'San Diego'), ('St. Louis', 'Chicago'),
            ('San Diego', 'Chicago'), ('San Diego', 'San Francisco'), ('San Diego', 'Minneapolis'),
            ('San Diego', 'Boston'), ('San Diego', 'Portland'), ('San Diego', 'Seattle'),
            ('Tulsa', 'New York'), ('Tulsa', 'Dallas'), ('Phoenix', 'Cleveland'),
            ('Phoenix', 'Denver'), ('Phoenix', 'Dallas'), ('Chicago', 'New York'),
            ('Chicago', 'Los Angeles'), ('Miami', 'New York'), ('Miami', 'Philadelphia'),
            ('Miami', 'Denver'), ('Boston', 'Atlanta'), ('Dallas', 'Cleveland'),
            ('Dallas', 'Albuquerque'), ('Philadelphia', 'Atlanta'), ('Denver', 'Minneapolis'),
            ('Denver', 'Cleveland'), ('Albuquerque', 'Atlanta'), ('Minneapolis', 'Portland'),
            ('Los Angeles', 'Seattle'), ('San Francisco', 'Portland'), ('San Francisco', 'Seattle'),
            ('San Francisco', 'Cleveland'), ('Seattle', 'Portland')]
"""
Explanation: Create a graph with N nodes with the maximum possible diameter
Create a graph with N nodes with the minimum possible diameter
Create a graph with N nodes that is a simple cycle
Exercise - A "real-world" question
An airline has the following routes between the cities it serves (each pair is served in both directions).
End of explanation
"""
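The Components exercise above asks for a breadth-first count of connected components. Purely as a hedged sketch (it assumes an undirected NetworkX graph G, like the ones built at the top of this notebook), one possible implementation is:
```
from collections import deque

def count_components_bfs(G):
    # Count connected components with an explicit breadth-first search.
    seen = set()
    n_components = 0
    for start in G.nodes():
        if start in seen:
            continue
        n_components += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            for neighbour in G.neighbors(node):
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(neighbour)
    return n_components

# Cross-check against the library, as the exercise suggests:
# assert count_components_bfs(G) == nx.number_connected_components(G)
```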
d-li14/CS231n-Assignments
assignment3-winter1516/ImageGradients.ipynb
gpl-3.0
# As usual, a bit of setup import time, os, json import numpy as np import skimage.io import matplotlib.pyplot as plt from cs231n.classifiers.pretrained_cnn import PretrainedCNN from cs231n.data_utils import load_tiny_imagenet from cs231n.image_utils import blur_image, deprocess_image %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 """ Explanation: Image Gradients In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images. End of explanation """ data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A', subtract_mean=True) """ Explanation: Introducing TinyImageNet The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images. We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A. To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory. NOTE: The full TinyImageNet-100-A dataset will take up about 250MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory. End of explanation """ for i, names in enumerate(data['class_names']): print i, ' '.join('"%s"' % name for name in names) """ Explanation: TinyImageNet-100-A classes Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A: End of explanation """ # Visualize some examples of the training data classes_to_show = 7 examples_per_class = 5 class_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False) for i, class_idx in enumerate(class_idxs): train_idxs, = np.nonzero(data['y_train'] == class_idx) train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False) for j, train_idx in enumerate(train_idxs): img = deprocess_image(data['X_train'][train_idx], data['mean_image']) plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j) if j == 0: plt.title(data['class_names'][class_idx][0]) plt.imshow(img) plt.gca().axis('off') plt.show() """ Explanation: Visualize Examples Run the following to visualize some example images from random classses in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images. 
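Before visualizing anything, a quick optional sanity check of the loaded arrays can be useful. This short sketch only relies on the data dictionary keys already used in this notebook (the expected sizes follow from the description above: 100 classes with 500 training and 50 validation images each, at 64x64 pixels):
```
for split in ['X_train', 'y_train', 'X_val', 'y_val']:
    print split, data[split].shape
print 'number of classes:', len(data['class_names'])
print 'mean image shape:', data['mean_image'].shape
```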
End of explanation """ model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5') """ Explanation: Pretrained model We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization). To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk. End of explanation """ batch_size = 100 # Test the model on training data mask = np.random.randint(data['X_train'].shape[0], size=batch_size) X, y = data['X_train'][mask], data['y_train'][mask] y_pred = model.loss(X).argmax(axis=1) print 'Training accuracy: ', (y_pred == y).mean() # Test the model on validation data mask = np.random.randint(data['X_val'].shape[0], size=batch_size) X, y = data['X_val'][mask], data['y_val'][mask] y_pred = model.loss(X).argmax(axis=1) print 'Validation accuracy: ', (y_pred == y).mean() """ Explanation: Pretrained model performance Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments. End of explanation """ def compute_saliency_maps(X, y, model): """ Compute a class saliency map using the model for images X and labels y. Input: - X: Input images, of shape (N, 3, H, W) - y: Labels for X, of shape (N,) - model: A PretrainedCNN that will be used to compute the saliency map. Returns: - saliency: An array of shape (N, H, W) giving the saliency maps for the input images. """ saliency = None ############################################################################## # TODO: Implement this function. You should use the forward and backward # # methods of the PretrainedCNN class, and compute gradients with respect to # # the unnormalized class score of the ground-truth classes in y. # ############################################################################## N, _, H, W = X.shape saliency = np.zeros([N, H, W]) scores, cache = model.forward(X) dscores = np.zeros(scores.shape) dscores[:, y] = 1 dX, grads = model.backward(dscores, cache) saliency = np.max(np.abs(dX), axis=1) ############################################################################## # END OF YOUR CODE # ############################################################################## return saliency """ Explanation: Saliency Maps Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1]. As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability. You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps. [1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", ICLR Workshop 2014. 
End of explanation """ def show_saliency_maps(mask): mask = np.asarray(mask) X = data['X_val'][mask] y = data['y_val'][mask] saliency = compute_saliency_maps(X, y, model) for i in xrange(mask.size): plt.subplot(2, mask.size, i + 1) plt.imshow(deprocess_image(X[i], data['mean_image'])) plt.axis('off') plt.title(data['class_names'][y[i]][0]) plt.subplot(2, mask.size, mask.size + i + 1) plt.title(mask[i]) plt.imshow(saliency[i]) plt.axis('off') plt.gcf().set_size_inches(10, 4) plt.show() # Show some random images mask = np.random.randint(data['X_val'].shape[0], size=5) show_saliency_maps(mask) # These are some cherry-picked images that should give good results show_saliency_maps([128, 3225, 2417, 1640, 4619]) """ Explanation: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A. End of explanation """ def make_fooling_image(X, target_y, model): """ Generate a fooling image that is close to X, but that the model classifies as target_y. Inputs: - X: Input image, of shape (1, 3, 64, 64) - target_y: An integer in the range [0, 100) - model: A PretrainedCNN Returns: - X_fooling: An image that is close to X, but that is classifed as target_y by the model. """ X_fooling = X.copy() ############################################################################## # TODO: Generate a fooling image X_fooling that the model will classify as # # the class target_y. Use gradient ascent on the target class score, using # # the model.forward method to compute scores and the model.backward method # # to compute image gradients. # # # # HINT: For most examples, you should be able to generate a fooling image # # in fewer than 100 iterations of gradient ascent. # ############################################################################## it = 1 y_pred = -1 lr = 200 while it < 100 and y_pred != target_y: score, cache = model.forward(X_fooling) y_pred = np.argmax(score[0]) if it % 10 == 0: print 'Iter:', it, ', Predicted class:', ' '.join('"%s"' % name for name in data['class_names'][y_pred]) dscore = np.zeros(score.shape) dscore[:, target_y] = 1 dX, grads = model.backward(dscore, cache) X_fooling += lr * dX it += 1 ############################################################################## # END OF YOUR CODE # ############################################################################## return X_fooling """ Explanation: Fooling Images We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images. [2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014 End of explanation """ # Find a correctly classified validation image while True: i = np.random.randint(data['X_val'].shape[0]) X = data['X_val'][i:i+1] y = data['y_val'][i:i+1] y_pred = model.loss(X)[0].argmax() if y_pred == y: break target_y = 67 X_fooling = make_fooling_image(X, target_y, model) # Make sure that X_fooling is classified as y_target scores = model.loss(X_fooling) assert scores[0].argmax() == target_y, 'The network is not fooled!' 
# Show original image, fooling image, and difference plt.subplot(1, 3, 1) plt.imshow(deprocess_image(X, data['mean_image'])) plt.axis('off') plt.title(data['class_names'][y[0]][0]) plt.subplot(1, 3, 2) plt.imshow(deprocess_image(X_fooling, data['mean_image'], renorm=True)) plt.title(data['class_names'][target_y][0]) plt.axis('off') plt.subplot(1, 3, 3) plt.title('Difference') plt.imshow(deprocess_image(X - X_fooling, data['mean_image'])) plt.axis('off') plt.show() """ Explanation: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image. End of explanation """
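As an optional follow-up, and only as a hedged sketch that relies on model.loss as it is used above, the network's softmax confidences on the original and fooling images can be compared directly:
```
def class_probs(model, X):
    # Softmax over the unnormalized scores returned by model.loss.
    scores = model.loss(X)
    scores = scores - scores.max(axis=1, keepdims=True)
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum(axis=1, keepdims=True)

p_orig = class_probs(model, X)
p_fool = class_probs(model, X_fooling)
print 'P(original class | X):       %.3f' % p_orig[0, y[0]]
print 'P(target class | X_fooling): %.3f' % p_fool[0, target_y]
```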
ToqueWillot/M2DAC
FDMS/TME3/Model_V5-Flo.ipynb
gpl-2.0
# from __future__ import exam_success from __future__ import absolute_import from __future__ import print_function # Standard imports %matplotlib inline import os import sklearn import matplotlib.pyplot as plt import seaborn as sns import numpy as np import random import pandas as pd import scipy.stats as stats # Sk cheats from sklearn.cross_validation import cross_val_score from sklearn import grid_search from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import ExtraTreesRegressor from sklearn.ensemble import GradientBoostingRegressor from sklearn.neighbors import KNeighborsRegressor from sklearn.svm import SVR #from sklearn.preprocessing import Imputer # get rid of nan from sklearn.decomposition import NMF # to add features based on the latent representation from sklearn.decomposition import ProjectedGradientNMF # Faster gradient boosting import xgboost as xgb # For neural networks models from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation from keras.optimizers import SGD, RMSprop """ Explanation: FDMS TME3 Kaggle How Much Did It Rain? II Florian Toque & Paul Willot Notes We tried different regressor model, like GBR, SVM, MLP, Random Forest and KNN as recommanded by the winning team of the Kaggle on taxi trajectories. So far GBR seems to be the best, slightly better than the RF. The new features we exctracted only made a small impact on predictions but still improved them consistently. We tried to use a LSTM to take advantage of the sequential structure of the data but it didn't work too well, probably because there is not enought data (13M lines divided per the average length of sequences (15), less the 30% of fully empty data) End of explanation """ %%time #filename = "data/train.csv" filename = "data/reduced_train_100000.csv" #filename = "data/reduced_train_1000000.csv" raw = pd.read_csv(filename) raw = raw.set_index('Id') raw.columns raw['Expected'].describe() """ Explanation: 13.765.202 lines in train.csv 8.022.757 lines in test.csv Few words about the dataset Predictions is made in the USA corn growing states (mainly Iowa, Illinois, Indiana) during the season with the highest rainfall (as illustrated by Iowa for the april to august months) The Kaggle page indicate that the dataset have been shuffled, so working on a subset seems acceptable The test set is not a extracted from the same data as the training set however, which make the evaluation trickier Load the dataset End of explanation """ # Considering that the gauge may concentrate the rainfall, we set the cap to 1000 # Comment this line to analyse the complete dataset l = len(raw) raw = raw[raw['Expected'] < 300] #1000 print("Dropped %d (%0.2f%%)"%(l-len(raw),(l-len(raw))/float(l)*100)) raw.head(5) raw.describe() """ Explanation: Per wikipedia, a value of more than 421 mm/h is considered "Extreme/large hail" If we encounter the value 327.40 meter per hour, we should probably start building Noah's ark Therefor, it seems reasonable to drop values too large, considered as outliers End of explanation """ # We select all features except for the minutes past, # because we ignore the time repartition of the sequence for now features_columns = list([u'Ref', u'Ref_5x5_10th', u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite', u'RefComposite_5x5_10th', u'RefComposite_5x5_50th', u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th', u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th', u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th', u'Kdp_5x5_50th', 
u'Kdp_5x5_90th']) def getXy(raw): selected_columns = list([ u'minutes_past',u'radardist_km', u'Ref', u'Ref_5x5_10th', u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite', u'RefComposite_5x5_10th', u'RefComposite_5x5_50th', u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th', u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th', u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th', u'Kdp_5x5_50th', u'Kdp_5x5_90th']) data = raw[selected_columns] docX, docY = [], [] for i in data.index.unique(): if isinstance(data.loc[i],pd.core.series.Series): m = [data.loc[i].as_matrix()] docX.append(m) docY.append(float(raw.loc[i]["Expected"])) else: m = data.loc[i].as_matrix() docX.append(m) docY.append(float(raw.loc[i][:1]["Expected"])) X , y = np.array(docX) , np.array(docY) return X,y """ Explanation: We regroup the data by ID End of explanation """ #noAnyNan = raw.loc[raw[features_columns].dropna(how='any').index.unique()] noAnyNan = raw.dropna() noFullNan = raw.loc[raw[features_columns].dropna(how='all').index.unique()] fullNan = raw.drop(raw[features_columns].dropna(how='all').index) print(len(raw)) print(len(noAnyNan)) print(len(noFullNan)) print(len(fullNan)) """ Explanation: On fully filled dataset End of explanation """ %%time #X,y=getXy(noAnyNan) X,y=getXy(noFullNan) #%%time #XX = [np.array(t).mean(0) for t in X] #XX = np.array([np.append(np.nanmean(np.array(t),0),(np.array(t)[1:] - np.array(t)[:-1]).sum(0) ) for t in X]) XX=[] for t in X: nm = np.nanmean(t,0) for idx,j in enumerate(nm): if np.isnan(j): nm[idx]=global_means[idx] XX.append(nm) XX=np.array(XX) # rescale to clip min at 0 (for non negative matrix factorization) XX_rescaled=XX[:,:]-np.min(XX,0) %%time nn = ProjectedGradientNMF() W = nn.fit_transform(XX_rescaled) #H = nn.components_ global_means = np.nanmean(noFullNan,0) XX=[] for t in X: nm = np.nanmean(t,0) for idx,j in enumerate(nm): if np.isnan(j): nm[idx]=global_means[idx] XX.append(nm) XX=np.array(XX) # rescale to clip min at 0 (for non negative matrix factorization) XX_rescaled=XX[:,:]-np.min(XX,0) nmf = NMF(max_iter=1000) W = nmf.fit_transform(XX_rescaled) #H = nn.components_ # used to fill fully empty datas global_means = np.nanmean(noFullNan,0) # reduce the sequence structure of the data and produce # new hopefully informatives features def addFeatures(X,mf=0): # used to fill fully empty datas #global_means = np.nanmean(X,0) XX=[] nbFeatures=float(len(X[0][0])) for idxt,t in enumerate(X): # compute means, ignoring nan when possible, marking it when fully filled with nan nm = np.nanmean(t,0) tt=[] for idx,j in enumerate(nm): if np.isnan(j): nm[idx]=global_means[idx] tt.append(1) else: tt.append(0) tmp = np.append(nm,np.append(tt,tt.count(0)/nbFeatures)) # faster if working on fully filled data: #tmp = np.append(np.nanmean(np.array(t),0),(np.array(t)[1:] - np.array(t)[:-1]).sum(0) ) # add the percentiles tmp = np.append(tmp,np.nanpercentile(t,10,axis=0)) tmp = np.append(tmp,np.nanpercentile(t,50,axis=0)) tmp = np.append(tmp,np.nanpercentile(t,90,axis=0)) for idx,i in enumerate(tmp): if np.isnan(i): tmp[idx]=0 # adding the dbz as a feature test = t try: taa=test[:,0] except TypeError: taa=[test[0][0]] valid_time = np.zeros_like(taa) valid_time[0] = taa[0] for n in xrange(1,len(taa)): valid_time[n] = taa[n] - taa[n-1] valid_time[-1] = valid_time[-1] + 60 - np.sum(valid_time) valid_time = valid_time / 60.0 sum=0 try: column_ref=test[:,2] except TypeError: column_ref=[test[0][2]] for dbz, hours in zip(column_ref, valid_time): # See: 
https://en.wikipedia.org/wiki/DBZ_(meteorology) if np.isfinite(dbz): mmperhr = pow(pow(10, dbz/10)/200, 0.625) sum = sum + mmperhr * hours if not(mf is 0): tmp = np.append(tmp,mf[idxt]) XX.append(np.append(np.array(sum),tmp)) #XX.append(np.array([sum])) #XX.append(tmp) return XX %time XX=addFeatures(X,mf=W) #XX=addFeatures(X) def splitTrainTest(X, y, split=0.2): tmp1, tmp2 = [], [] ps = int(len(X) * (1-split)) index_shuf = range(len(X)) random.shuffle(index_shuf) for i in index_shuf: tmp1.append(X[i]) tmp2.append(y[i]) return tmp1[:ps], tmp2[:ps], tmp1[ps:], tmp2[ps:] X_train,y_train, X_test, y_test = splitTrainTest(XX,y) """ Explanation: Predicitons As a first try, we make predictions on the complete data, and return the 50th percentile and uncomplete and fully empty data End of explanation """ def manualScorer(estimator, X, y): err = (estimator.predict(X_test)-y_test)**2 return -err.sum()/len(err) """ Explanation: End of explanation """ svr = SVR(kernel='rbf', C=800.0) %%time srv = svr.fit(X_train,y_train) print(svr.score(X_train,y_train)) print(svr.score(X_test,y_test)) err = (svr.predict(X_train)-y_train)**2 err.sum()/len(err) err = (svr.predict(X_test)-y_test)**2 err.sum()/len(err) %%time svr_score = cross_val_score(svr, XX, y, cv=5) print("Score: %s\nMean: %.03f"%(svr_score,svr_score.mean())) """ Explanation: max prof 24 nb trees 84 min sample per leaf 17 min sample to split 51 End of explanation """ knn = KNeighborsRegressor(n_neighbors=6,weights='distance',algorithm='ball_tree') #parameters = {'weights':('distance','uniform'),'algorithm':('auto', 'ball_tree', 'kd_tree', 'brute')} parameters = {'n_neighbors':range(1,10,1)} grid_knn = grid_search.GridSearchCV(knn, parameters,scoring=manualScorer) %%time grid_knn.fit(X_train,y_train) print(grid_knn.grid_scores_) print("Best: ",grid_knn.best_params_) knn = grid_knn.best_estimator_ knn= knn.fit(X_train,y_train) print(knn.score(X_train,y_train)) print(knn.score(X_test,y_test)) err = (knn.predict(X_train)-y_train)**2 err.sum()/len(err) err = (knn.predict(X_test)-y_test)**2 err.sum()/len(err) """ Explanation: End of explanation """ etreg = ExtraTreesRegressor(n_estimators=200, max_depth=None, min_samples_split=1, random_state=0,n_jobs=4) parameters = {'n_estimators':range(100,200,10)} grid_rf = grid_search.GridSearchCV(etreg, parameters,n_jobs=4,scoring=manualScorer) %%time grid_rf.fit(X_train,y_train) print(grid_rf.grid_scores_) print("Best: ",grid_rf.best_params_) grid_rf.best_params_ #etreg = grid_rf.best_estimator_ %%time etreg = etreg.fit(X_train,y_train) print(etreg.score(X_train,y_train)) print(etreg.score(X_test,y_test)) err = (etreg.predict(X_train)-y_train)**2 err.sum()/len(err) err = (etreg.predict(X_test)-y_test)**2 err.sum()/len(err) """ Explanation: End of explanation """ rfr = RandomForestRegressor(n_estimators=200, criterion='mse', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, bootstrap=True, oob_score=False, n_jobs=-1, random_state=None, verbose=0, warm_start=False) %%time rfr = rfr.fit(X_train,y_train) print(rfr.score(X_train,y_train)) print(rfr.score(X_test,y_test)) """ Explanation: End of explanation """ # the dbz feature does not influence xgbr so much xgbr = xgb.XGBRegressor(max_depth=6, learning_rate=0.1, n_estimators=700, silent=True, objective='reg:linear', nthread=-1, gamma=0, min_child_weight=1, max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5, 
seed=0, missing=None) %%time xgbr = xgbr.fit(X_train,y_train) # without the nmf features # print(xgbr.score(X_train,y_train)) ## 0.993948231144 # print(xgbr.score(X_test,y_test)) ## 0.613931733332 # with nmf features print(xgbr.score(X_train,y_train)) print(xgbr.score(X_test,y_test)) """ Explanation: End of explanation """ gbr = GradientBoostingRegressor(loss='ls', learning_rate=0.1, n_estimators=900, subsample=1.0, min_samples_split=2, min_samples_leaf=1, max_depth=4, init=None, random_state=None, max_features=None, alpha=0.5, verbose=0, max_leaf_nodes=None, warm_start=False) %%time gbr = gbr.fit(X_train,y_train) #os.system('say "終わりだ"') #its over! #parameters = {'max_depth':range(2,5,1),'alpha':[0.5,0.6,0.7,0.8,0.9]} #parameters = {'subsample':[0.2,0.4,0.5,0.6,0.8,1]} #parameters = {'subsample':[0.2,0.5,0.6,0.8,1],'n_estimators':[800,1000,1200]} #parameters = {'max_depth':range(2,4,1)} parameters = {'n_estimators':[400,800,1100]} #parameters = {'loss':['ls', 'lad', 'huber', 'quantile'],'alpha':[0.3,0.5,0.8,0.9]} #parameters = {'learning_rate':[0.1,0.5,0.9]} grid_gbr = grid_search.GridSearchCV(gbr, parameters,n_jobs=2,scoring=manualScorer) %%time grid_gbr = grid_gbr.fit(X_train,y_train) print(grid_gbr.grid_scores_) print("Best: ",grid_gbr.best_params_) print(gbr.score(X_train,y_train)) print(gbr.score(X_test,y_test)) err = (gbr.predict(X_train)-y_train)**2 print(err.sum()/len(err)) err = (gbr.predict(X_test)-y_test)**2 print(err.sum()/len(err)) err = (gbr.predict(X_train)-y_train)**2 print(err.sum()/len(err)) err = (gbr.predict(X_test)-y_test)**2 print(err.sum()/len(err)) """ Explanation: End of explanation """ t = [] for i in XX: t.append(np.count_nonzero(~np.isnan(i)) / float(i.size)) pd.DataFrame(np.array(t)).describe() """ Explanation: End of explanation """ svr.predict(X_test) s = modelList[0] t.mean(1) modelList = [svr,knn,etreg,rfr,xgbr,gbr] score_train = [[str(f).split("(")[0],f.score(X_train,y_train)] for f in modelList] score_test = [[str(f).split("(")[0],f.score(X_test,y_test)] for f in modelList] for idx,i in enumerate(score_train): print(i[0]) print(" train: %.03f"%i[1]) print(" test: %.03f"%score_test[idx][1]) globalPred = np.array([f.predict(XX) for f in modelList]).T globalPred[0] y[0] err = (globalPred.mean(1)-y)**2 print(err.sum()/len(err)) for f in modelList: print(str(f).split("(")[0]) err = (f.predict(XX)-y)**2 print(err.sum()/len(err)) for f in modelList: print(str(f).split("(")[0]) print(f.score(XX,y)) svrMeta = SVR() %%time svrMeta = svrMeta.fit(globalPred,y) err = (svrMeta.predict(globalPred)-y)**2 print(err.sum()/len(err)) """ Explanation: End of explanation """ in_dim = len(XX[0]) out_dim = 1 model = Sequential() # Dense(64) is a fully-connected layer with 64 hidden units. # in the first layer, you must specify the expected input data shape: # here, 20-dimensional vectors. 
model.add(Dense(128, input_shape=(in_dim,))) model.add(Activation('tanh')) model.add(Dropout(0.5)) model.add(Dense(1, init='uniform')) model.add(Activation('linear')) #sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True) #model.compile(loss='mean_squared_error', optimizer=sgd) rms = RMSprop() model.compile(loss='mean_squared_error', optimizer=rms) #model.fit(X_train, y_train, nb_epoch=20, batch_size=16) #score = model.evaluate(X_test, y_test, batch_size=16) prep = [] for i in y_train: prep.append(min(i,20)) prep=np.array(prep) mi,ma = prep.min(),prep.max() fy = (prep-mi) / (ma-mi) #my = fy.max() #fy = fy/fy.max() model.fit(np.array(X_train), fy, batch_size=10, nb_epoch=10, validation_split=0.1) pred = model.predict(np.array(X_test))*ma+mi err = (pred-y_test)**2 err.sum()/len(err) r = random.randrange(len(X_train)) print("(Train) Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_train[r]]))[0][0]*ma+mi,y_train[r])) r = random.randrange(len(X_test)) print("(Test) Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_test[r]]))[0][0]*ma+mi,y_test[r])) """ Explanation: Here for legacy End of explanation """ %%time #filename = "data/reduced_test_5000.csv" filename = "data/test.csv" test = pd.read_csv(filename) test = test.set_index('Id') features_columns = list([u'Ref', u'Ref_5x5_10th', u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite', u'RefComposite_5x5_10th', u'RefComposite_5x5_50th', u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th', u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th', u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th', u'Kdp_5x5_50th', u'Kdp_5x5_90th']) def getX(raw): selected_columns = list([ u'minutes_past',u'radardist_km', u'Ref', u'Ref_5x5_10th', u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite', u'RefComposite_5x5_10th', u'RefComposite_5x5_50th', u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th', u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th', u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th', u'Kdp_5x5_50th', u'Kdp_5x5_90th']) data = raw[selected_columns] docX= [] for i in data.index.unique(): if isinstance(data.loc[i],pd.core.series.Series): m = [data.loc[i].as_matrix()] docX.append(m) else: m = data.loc[i].as_matrix() docX.append(m) X = np.array(docX) return X #%%time #X=getX(test) #tmp = [] #for i in X: # tmp.append(len(i)) #tmp = np.array(tmp) #sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1)) #plt.title("Number of ID per number of observations\n(On test dataset)") #plt.plot() testFull = test.dropna() %%time X=getX(testFull) # 1min #XX = [np.array(t).mean(0) for t in X] # 10s XX=addFeatures(X) pd.DataFrame(gbr.predict(XX)).describe() predFull = zip(testFull.index.unique(),gbr.predict(XX)) testNan = test.drop(test[features_columns].dropna(how='all').index) tmp = np.empty(len(testNan)) tmp.fill(0.445000) # 50th percentile of full Nan dataset predNan = zip(testNan.index.unique(),tmp) testLeft = test.drop(testNan.index.unique()).drop(testFull.index.unique()) tmp = np.empty(len(testLeft)) tmp.fill(1.27) # 50th percentile of full Nan dataset predLeft = zip(testLeft.index.unique(),tmp) len(testFull.index.unique()) len(testNan.index.unique()) len(testLeft.index.unique()) pred = predFull + predNan + predLeft pred.sort(key=lambda x: x[0], reverse=False) submission = pd.DataFrame(pred) submission.columns = ["Id","Expected"] submission.head() submission.loc[submission['Expected']<0,'Expected'] = 0.445 submission.to_csv("submit4.csv",index=False) filename = "data/sample_solution.csv" sol = 
pd.read_csv(filename) sol ss = np.array(sol) %%time for a,b in predFull: ss[a-1][1]=b ss sub = pd.DataFrame(pred) sub.columns = ["Id","Expected"] sub.Id = sub.Id.astype(int) sub.head() sub.to_csv("submit3.csv",index=False) """ Explanation: Predict on testset End of explanation """
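The dBZ-to-rain-rate conversion buried inside addFeatures can also be written as a small standalone helper, which makes it easier to test in isolation. This is only a sketch of the same relation already used above (the helper name is illustrative, not part of the original notebook):
```
import numpy as np

def dbz_to_mmh(dbz):
    # Reflectivity (dBZ) to rain rate (mm/h), same relation as in addFeatures:
    # Z = 10**(dbz/10) and R = (Z/200)**0.625 (see the linked Wikipedia page).
    return np.power(np.power(10.0, np.asarray(dbz) / 10.0) / 200.0, 0.625)

# Example: a 30 dBZ echo corresponds to roughly 2.7 mm/h.
print(dbz_to_mmh(30))
```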
sdpython/ensae_teaching_cs
_doc/notebooks/1a/structures_donnees_conversion.ipynb
mit
from jyquickhelper import add_notebook_menu add_notebook_menu() """ Explanation: 1A.1 - D'une structure de données à l'autre Ce notebook s'amuse à passer d'une structure de données à une autre, d'une liste à un dictionnaire, d'une liste de liste à un dictionnaire, avec toujours les mêmes données : list, dict, tuple. End of explanation """ ens = ["a", "b", "gh", "er", "b", "gh"] hist = {} for e in ens: hist[e] = hist.get(e, 0) + 1 hist """ Explanation: histogramme et dictionnaire liste à dictionnaire Un histogramme est le moyen le plus simple de calculer la distribution d'une variable, de compter la fréquence des éléments d'une liste. End of explanation """ ens = ["a", "b", "gh", "er", "b", "gh"] hist = {} for e in ens: if e in hist: hist[e] += 1 else: hist[e] = 1 hist """ Explanation: La méthode get comme beaucoup de fonctions implémente un besoin fréquent. Elle regarde si une clé appartient au dictionnaire, retourne la valeur associée ou une valeur par défault dans le cas contraire. Sans utiliser cette méthode, le code précédent devient : End of explanation """ from collections import Counter ens = ["a", "b", "gh", "er", "b", "gh"] hist = Counter(ens) hist """ Explanation: Il existe également la fonction Counter qui fait cela. End of explanation """ hist = {'a': 1, 'b': 2, 'er': 1, 'gh': 2} ens = [] for k, v in hist.items(): for i in range(v): ens.append(k) ens """ Explanation: dictionnaire à liste A priori l'histogramme représente la même information que la liste initiale ens. Il doit exister un moyen de recontruire la liste initiale. End of explanation """ hist.items() """ Explanation: La liste initiale est retrouvée excepté l'ordre qui est différent. Les éléments identiques sont côte à côte. La méthode items retourne des couples (clé, valeur) ou plutôt une vue, c'est-à-dire une façon de parcourir un ensemble. End of explanation """ import sys vue = hist.items() sys.getsizeof(ens), sys.getsizeof(hist), sys.getsizeof(vue) """ Explanation: Pour vérifier que la méthode items ne retourne pas un ensemble mais une façon de parcourir un ensemble, on regarde sa taille avec la fonction getsizeof : End of explanation """ d = {i:i for i in range(1000)} sys.getsizeof(d), sys.getsizeof(d.items()) """ Explanation: Et pour un dictionnaire plus grand, la taille du dictionnaire. End of explanation """ hist = {'a': 1, 'b': 2, 'er': 1, 'gh': 2} ens = [] for k in hist: v = hist[k] for i in range(v): ens.append(k) ens """ Explanation: On peut ne pas utiliser la méthode items : End of explanation """ hist = {'a': 1, 'b': 2, 'er': 1, 'gh': 2} cles = [k for k in hist] vals = [hist[k] for k in hist] cles, vals """ Explanation: dictionnaire et deux listes Cette fois-ci, on met les clés d'un côté et les valeurs de l'autre. End of explanation """ hist = {'a': 1, 'b': 2, 'er': 1, 'gh': 2} cles = list(hist.keys()) vals = list(hist.values()) cles, vals """ Explanation: On peut écrire aussi ce programme End of explanation """ hist = {'a': 1, 'b': 2, 'er': 1, 'gh': 2} cles = [] vals = [] for k, v in hist.items(): cles.append(k) vals.append(v) cles, vals """ Explanation: Toutefois, cette écriture n'est pas recommandée car il est possible que l'expression for k in hist ou list(hist.keys()) parcourent les clés d'un dictionnaire de deux façons différentes si le dictionnaire est modifié entre temps. Mais on ne s'en pas toujours compte car cela dépend de l'implémentation des méthodes associées à la classe dict (voir cpython). 
C'est pourquoi on préfère ne parcourir qu'une seule fois le dictionnaire tout en créant les deux listes. End of explanation """ cles, vals = ['a', 'gh', 'er', 'b'], [1, 2, 1, 2] hist = {a:b for a, b in zip(cles, vals)} hist """ Explanation: deux listes et dictionnaires On effectue l'opération inverse. End of explanation """ cles, vals = ['a', 'gh', 'er', 'b'], [1, 2, 1, 2] hist = {} for i in range(len(cles)): hist[cles[i]] = vals[i] hist """ Explanation: Et si on ne veut pas utiliser la fonction zip : End of explanation """ hist = {'a': 1, 'b': 2, 'er': 1, 'gh': 2} cles = [] vals = [] for k, v in hist.items(): cles.append(k) vals.append(v) cles, vals """ Explanation: zip reverse La fonction zip permet de parcourir deux listes en parallèles. Cela permet de raccourcir le code pour créer un dictionnaire à partir de clés et de valeurs séparés. Ca paraît bien plus long que de créer les listes des clés et des valeurs. Et pourtant le code suivant peut être considérablement raccourci : End of explanation """ hist = {'a': 1, 'b': 2, 'er': 1, 'gh': 2} cles, vals = zip(*hist.items()) cles, vals """ Explanation: Cela devient : End of explanation """ mat = [[1, 2], [3, 4]] dv = {} for i, row in enumerate(mat): for j, x in enumerate(row): dv[i,j] = x dv """ Explanation: Petite différence, cles, vals sont sous forme de tuple mais cela reste très élégant. matrices et dictionnaires liste de listes et dictionnaires Une liste de listes est la représentation la plus naturelle. Essayons de la transformer sous forme de dictionnaire. On utilise la fonction enumerate. End of explanation """ dx = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4} max_i = max(k[0] for k in dx) + 1 max_j = max(k[1] for k in dx) + 1 mat = [[0] * max_j for i in range(max_i)] for k, v in dv.items(): mat[k[0]][k[1]] = v mat """ Explanation: dictionnaires et liste de listes On effectue l'opération inverse. Nous n'avons pas perdu d'information, nous devrions retrouver la liste de listes originale. End of explanation """ mat = [[1, 0, 0], [0, 4, 0]] dv = {} for i, row in enumerate(mat): for j, x in enumerate(row): if x != 0: dv[i,j] = x dv """ Explanation: La différence principale entre un dictionnaire d et une liste l est que l'instruction d[k] ajoute un élément d'indice k (quel que soit k) alors que l'instruction l[k]) suppose que l'élément d'indice k existe dans la liste. C'est pour cela qu'on commence à calculer les indices maximaux largeur, longueur. matrice sparse On utilise cette répresentation surtout lorsque pour des matrices sparses : la majorité des coefficients sont nuls. Dans ce cas, le dictionnaire final ne contient que les coefficients non nuls. End of explanation """ dx = {(0, 0): 1, (1, 1): 4} max_i = max(k[0] for k in dx) + 1 max_j = max(k[1] for k in dx) + 1 mat = [[0] * max_j for i in range(max_i)] for k, v in dv.items(): mat[k[0]][k[1]] = v mat """ Explanation: Si on ne conserve pas les dimensions de la matrice originale, on perd un peu d'information dans un cas précis : si la matrice se termine par une colonne ou une ligne de zéros. End of explanation """ mat = [[1, 0, 0], [0, 4, 0], [1, 2, 3]] arr = [] for i, row in enumerate(mat): for j, x in enumerate(row): arr.append(x) arr """ Explanation: matrices et tableaux 2 dimensions logiques, 1 dimension en mémoire On préfère représenter une matrice par un seul vecteur même si logiquement elle en contient car cela prend moins de place en mémoire. Dans ce cas, on met les lignes bout à bout. 
End of explanation """ import sys sys.getsizeof(mat), sys.getsizeof(arr) """ Explanation: D'un côté, nous avons 4 listes avec mat et une seule avec arr. Vérifions les tailles : End of explanation """ from ensae_teaching_cs.helpers.size_helper import total_size total_size(mat), total_size(arr) """ Explanation: Etrange ! Mais pour comprendre, il faut lire la documentation de la fonction getsizeof qui ne compte pas la somme des objets référencés par celui dont on mesure la taille. Autrement dit, dans le cas d'une liste de listes, la fonction ne mesure que la taille de la première liste. Pour corriger le tir, on utilise la fonction suggérée par la documentation de Python. End of explanation """ from pympler.asizeof import asizeof asizeof(mat), asizeof(arr) """ Explanation: On peut aussi utiliser le module pympler et la fonction asizeof. End of explanation """ from numpy import array amat = array(mat) aarr = array(arr) asizeof(amat), asizeof(aarr) """ Explanation: Cela prend énormément de place pour 9 float (soit 9x8 octets) mais Python stocke beaucoup plus d'informations qu'un langage compilé type C++. Cela explique pourquoi le module numpy fait la même chose avec moins d'espace mémoire car il est codé en C++. End of explanation """ n = 100000 li = list(float(x) for x in range(n)) ar = array(li) asizeof(li) / n, asizeof(ar) / n """ Explanation: Et si on augmente le nombre de réels pour faire disparaître les coûts fixes : End of explanation """ arr = [1, 0, 0, 0, 4, 0, 1, 2, 3] nb_lin = 3 nb_col = len(arr) // nb_lin mat = [] pos = 0 for i in range(nb_lin): row = [] for j in range(nb_col): row.append(arr[pos]) pos += 1 mat.append(row) mat """ Explanation: Python prend 4 fois plus de place que numpy. du tableau à la liste de listes A moins que la matrice soit carrée, il faut conserver une des dimensions du tableau original, le nombre de lignes par exemple. End of explanation """ arr = [1, 0, 0, 0, 4, 0, 1, 2, 3] nb_lin = 3 nb_col = len(arr) // nb_lin mat = [[0] * nb_col for i in range(nb_lin)] for pos, x in enumerate(arr): i = pos // nb_lin j = pos % nb_lin mat[i][j] = x mat """ Explanation: On peut aussi faire comme ceci : End of explanation """
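As a complement to the loop-based conversions above, and only as a sketch using the numpy import already present in this notebook, the same array/matrix round trip can be done with reshape and ravel:
```
from numpy import array

arr = [1, 0, 0, 0, 4, 0, 1, 2, 3]
nb_lin = 3
mat = array(arr).reshape(nb_lin, -1)   # 1-D buffer -> 2-D matrix (row-major)
flat = mat.ravel()                     # back to a single 1-D vector
print(mat)
print(flat)
```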
drpjm/udacity-mle-project2
student_intervention/student_intervention.ipynb
mit
# Import libraries %matplotlib inline import numpy as np import pandas as pd import sklearn as skl import matplotlib.pyplot as plt # Read student data student_data = pd.read_csv("student-data.csv") print "Student data read successfully!" # Note: The last column 'passed' is the target/label, all other are feature columns """ Explanation: Project 2: Supervised Learning Building a Student Intervention System 1. Classification vs Regression Your goal is to identify students who might need early intervention - which type of supervised machine learning problem is this, classification or regression? Why? 2. Exploring the Data Let's go ahead and read in the student dataset first. To execute a code cell, click inside it and press Shift+Enter. End of explanation """ student_data[student_data.passed=='yes'].shape[0] student_data.dtypes # TODO: Compute desired values - replace each '?' with an appropriate expression/function call n_students = student_data.shape[0] n_features = student_data.shape[1]-1 n_passed = student_data[student_data.passed=='yes'].shape[0] n_failed = student_data[student_data.passed=='no'].shape[0] grad_rate = 100 * n_passed / (n_passed + n_failed) print "Total number of students: {}".format(n_students) print "Number of students who passed: {}".format(n_passed) print "Number of students who failed: {}".format(n_failed) print "Number of features: {}".format(n_features) print "Graduation rate of the class: {:.2f}%".format(grad_rate) """ Explanation: Now, can you find out the following facts about the dataset? - Total number of students - Number of students who passed - Number of students who failed - Graduation rate of the class (%) - Number of features Use the code block below to compute these values. Instructions/steps are marked using TODOs. End of explanation """ # Extract feature (X) and target (y) columns feature_cols = list(student_data.columns[:-1]) # all columns but last are features target_col = student_data.columns[-1] # last column is the target/label print "Feature column(s):-\n{}".format(feature_cols) print "Target column: {}".format(target_col) X_all = student_data[feature_cols] # feature values for all students y_all = student_data[target_col] # corresponding targets/labels print "\nFeature values:-" print X_all.head() # print the first 5 rows """ Explanation: 3. Preparing the Data In this section, we will prepare the data for modeling, training and testing. Identify feature and target columns It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with. Let's first separate our data into feature and target columns, and see if any features are non-numeric.<br/> Note: For this dataset, the last column ('passed') is the target or label we are trying to predict. End of explanation """ # Preprocess feature columns def preprocess_features(X): outX = pd.DataFrame(index=X.index) # output dataframe, initially empty # Check each column for col, col_data in X.iteritems(): # If data type is non-numeric, try to replace all yes/no values with 1/0 if col_data.dtype == object: col_data = col_data.replace(['yes', 'no'], [1, 0]) # Note: This should change the data type for yes/no columns to int # If still non-numeric, convert to one or more dummy variables if col_data.dtype == object: col_data = pd.get_dummies(col_data, prefix=col) # e.g. 
'school' => 'school_GP', 'school_MS' outX = outX.join(col_data) # collect column(s) in output dataframe return outX preproc_sd = preprocess_features(X_all) print "Processed feature columns ({}):-\n{}".format(len(preproc_sd.columns), list(preproc_sd.columns)) """ Explanation: Preprocess feature columns As you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values. Other columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others. These generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation. End of explanation """ # First, decide how many training vs test samples you want num_all = preproc_sd.shape[0] # same as len(student_data) num_train = 300 # about 75% of the data num_test = num_all - num_train shuffled_preproc_sd = preproc_sd.reindex(np.random.permutation(preproc_sd.index)) # Change indices on the labels to match the shuffling. shuffled_indices = shuffled_preproc_sd.index.values shuffled_labels = y_all.reindex(shuffled_indices) # TODO: Then, select features (X) and corresponding labels (y) for the training and test sets # Note: Shuffle the data or randomly select samples to avoid any bias due to ordering in the dataset X_train = shuffled_preproc_sd.head(num_train).values y_train = shuffled_labels.head(num_train).values X_test = shuffled_preproc_sd.tail(num_test).values y_test = shuffled_labels.tail(num_test).values print "Training set: {} samples".format(X_train.shape[0]) print "Test set: {} samples".format(X_test.shape[0]) # Note: If you need a validation set, extract it from within training data """ Explanation: Split data into training and test sets So far, we have converted all categorical features into numeric values. In this next step, we split the data (both features and corresponding labels) into training and test sets. End of explanation """ from sklearn.metrics import f1_score import time # Function for training a model def train_classifier(clf, X_train, y_train): print "Training {}...".format(clf.__class__.__name__) start = time.time() clf.fit(X_train, y_train) end = time.time() print "Done!\nTraining time (secs): {:.3f}".format(end - start) # Predict on training set and compute F1 score def predict_labels(clf, features, target): print "Predicting labels using {}...".format(clf.__class__.__name__) start = time.time() y_pred = clf.predict(features) end = time.time() print "Done!\nPrediction time (secs): {:.3f}".format(end - start) return f1_score(target, y_pred, pos_label='yes') # Taining data partitioning X_train_100 = X_train[:100] y_train_100 = y_train[:100] X_train_200 = X_train[:200] y_train_200 = y_train[:200] """ Explanation: I think there are a couple features that might be the most important based on my experience with teaching. First, attendance is key! Another good feature to examine would be the school and family support they receive. 4. Training and Evaluating Models Choose 3 supervised learning models that are available in scikit-learn, and appropriate for this problem. For each model: What are the general applications of this model? What are its strengths and weaknesses? 
Given what you know about the data so far, why did you choose this model to apply? Fit this model to the training data, try to predict labels (for both training and test sets), and measure the F<sub>1</sub> score. Repeat this process with different training set sizes (100, 200, 300), keeping test set constant. Produce a table showing training time, prediction time, F<sub>1</sub> score on training set and F<sub>1</sub> score on test set, for each training set size. Note: You need to produce 3 such tables - one for each model. End of explanation """ from sklearn import tree dtc = tree.DecisionTreeClassifier(criterion="entropy") """ Explanation: Decision Tree The DT classifier can handle multiple inputs and if used with an entropy-based criterion, will split according to the highest information gained from the attribute. A couple of its key strengths are its simplicity and ability to handle multiple types of data. However, DTs are prone to over fitting and are sensitive to data. For example, the structure of the tree may change greatly between training runs. I chose this model first because it intuitively aligned with the problem: we have a lot of features, so we could ask multiple questions to determine whether a student should get assistance. End of explanation """ # Load up the GridSearch from sklearn.grid_search import GridSearchCV """ Explanation: Aside: Grid Search Testing End of explanation """ dtc_params = {'criterion':("gini","entropy"), 'min_samples_split':(2,4,8,16), 'max_features':("auto","sqrt","log2"), 'max_depth':np.arange(1,31,1)} f1scorer = skl.metrics.make_scorer( lambda yt, yp : skl.metrics.f1_score(yt, yp, pos_label='yes') ) tuned_dtc = GridSearchCV(dtc, dtc_params, f1scorer) tuned_dtc.fit(X_train, y_train) tuned_dtc.best_estimator_ # print "GridSearch DT Classifier" # train_predict(tuned_dtc, X_train_100, y_train_100, X_test, y_test) # train_predict(tuned_dtc, X_train_200, y_train_200, X_test, y_test) # train_predict(tuned_dtc, X_train, y_train, X_test, y_test) """ Explanation: To understand grid search a little better, I tried it out on the single DT classifier with the following parameter selections: criterion = "gini", "entropy" min_samples_split = 2, 4, 8, 16 max_features = None, sqrt, log2 max_depth = array of integers, [1 ... 30] Note: I made a lambda function to set that the positive label for the f1 scoring metric should be the string 'yes'. By passing this parameter, I do not need to convert the label data into 1s and 0s. End of explanation """ # Train and predict using different training set sizes def train_predict(clf, X_train, y_train, X_test, y_test): print "------------------------------------------" print "Training set size: {}".format(len(X_train)) train_classifier(clf, X_train, y_train) print "F1 score for training set: {}".format(predict_labels(clf, X_train, y_train)) print "F1 score for test set: {}".format(predict_labels(clf, X_test, y_test)) print "Non-tuned DT Classifier" train_predict(dtc, X_train_100, y_train_100, X_test, y_test) train_predict(dtc, X_train_200, y_train_200, X_test, y_test) train_predict(dtc, X_train, y_train, X_test, y_test) """ Explanation: ...now back to evaluating the DT classifier. Evaluation DT Classifier with Varying Sized Training Data From the data below, when I use entropy for splits, I get a lower F1 score with lower data (makes sense). The score increases with more data, which should happen as more data is added. 
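To keep the size-versus-score comparison in one place, the per-size runs can also be collected into a small table. This is only a convenience sketch built on the train_classifier and predict_labels functions defined earlier (it assumes pandas as pd, as imported at the top of this notebook, and the helper name is illustrative):
```
def summarize_model(clf, sizes=(100, 200, 300)):
    # Re-train on the first `size` rows and record F1 on train and test.
    rows = []
    for size in sizes:
        train_classifier(clf, X_train[:size], y_train[:size])
        rows.append({'size': size,
                     'f1_train': predict_labels(clf, X_train[:size], y_train[:size]),
                     'f1_test': predict_labels(clf, X_test, y_test)})
    return pd.DataFrame(rows)

summarize_model(dtc)
```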
End of explanation """ from sklearn.naive_bayes import GaussianNB # GaussianNB can accept sigma and theta as parameters, but I will try it empty. gnb = GaussianNB() train_predict(gnb, X_train_100, y_train_100, X_test, y_test) train_predict(gnb, X_train_200, y_train_200, X_test, y_test) train_predict(gnb, X_train, y_train, X_test, y_test) """ Explanation: Summary Results - DTC | Data Size | Training Time (s) | Prediction Time (s) | F1 Train | F1 Test | | --- | --- | --- | --- | --- | | 100 | 0.001 | 0.000 | 1.0 | 0.6935 | | 200 | 0.002 | 0.000 | 1.0 | 0.6865 | | 300 | 0.002 | 0.000 | 1.0 | 0.7218 | Bayesian Model We say the application of Naive Bayes models in email spam filters. In that case, we were trying to compute the likelihood that a particular email was spam based on the input data. Bayes learning, in essence, will let us switch cause and effect so that we can determine what sets of data make an outcome (pass/fail) liekly. Naive Bayes algorithms are relatively fast compared to other supervised learning techniques, since it makes the conditional independence assumption. The main disadvantage is caused by this assumption: we cannot leverage the interactions among the features. In practice, Naive Bayes works well with minimal tuning. One can think of this problem like the spam filter example: given a passing (or, failing) student classification, what was the effect of different features on the likelihood of that result. The feature data provides a chain of evidence to help derive the likelihood of correctly classifying the student. End of explanation """ # Bagger! from sklearn.ensemble import BaggingClassifier # I selected to have smaller sample and feature sets, but more estimators. The DT will use the entropy criterion. baggingClf_DT = BaggingClassifier(tree.DecisionTreeClassifier(criterion="entropy"), max_samples=0.3, max_features=0.3) train_predict(baggingClf_DT, X_train_100, y_train_100, X_test, y_test) train_predict(baggingClf_DT, X_train_200, y_train_200, X_test, y_test) train_predict(baggingClf_DT, X_train, y_train, X_test, y_test) """ Explanation: It works pretty well with no tuning. The F1 scores get better with more data. Summary Results - Gaussian Naive Bayes | Data Size | Training Time (s) | Prediction Time (s) | F1 Train | F1 Test | | --- | --- | --- | --- | --- | | 100 | 0.001 | 0.000 | 0.6732 | 0.3720 | | 200 | 0.001 | 0.000 | 0.8218 | 0.7727 | | 300 | 0.001 | 0.000 | 0.7922 | 0.7969 | Bagged Ensemble Model The BaggingClassifier in sklearn uses one type of classification algorithm and generates a set of learners (default value is 10) that train on a subset of the data and features. Each of the trained learning algorithms is built to classify based on a subset of the data and their results are averaged to come up with a classification over all running classifiers. One advantage of this method is the ability to construct a complex learner from a set of relatively simple learning algorithms. However, bagging increases the computational complexity, especially for tree based classifiers. My last evaluated model will use a bagging ensemble of single classifiers. The data set has lots of features that take different values. Much like the email examples in the lectures, this project could benefit from an ensemble of simpler classifiers. The implementation of the bagged classifier will use my DT classifier as its simpler model, as the sklearn documentation mentioned that BaggingClassifiers work better with DT algorithms. 
End of explanation """ bagged_params = {'max_samples':np.arange(0.1,1,0.1), 'max_features':np.arange(0.1,0.7,0.1),'n_estimators':np.arange(1,16,1)} basicBaggedClf = BaggingClassifier(tree.DecisionTreeClassifier(criterion="entropy")) tunedBaggedClf = GridSearchCV(basicBaggedClf, bagged_params, f1scorer) tunedBaggedClf.fit(X_train, y_train) tunedBaggedClf.best_estimator_ """ Explanation: Summary Results - Bagged DT Classifier | Data Size | Training Time (s) | Prediction Time (s) | F1 Train | F1 Test | | --- | --- | --- | --- | --- | | 100 | 0.037 | 0.002 | 0.9022 | 0.7022 | | 200 | 0.032 | 0.002 | 0.9096 | 0.7919 | | 300 | 0.034 | 0.002 | 0.8888 | 0.7702 | The performance looks similar to the single DT classifiers. 5. Choosing the Best Model Based on the experiments you performed earlier, in 1-2 paragraphs explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance? In 1-2 paragraphs explain to the board of supervisors in layman's terms how the final model chosen is supposed to work (for example if you chose a Decision Tree or Support Vector Machine, how does it make a prediction). Fine-tune the model. Use Gridsearch with at least one important parameter tuned and with at least 3 settings. Use the entire training set for this. What is the model's final F<sub>1</sub> score? Model Recommendations Based on the coded tests above, I recommend using a Bagged Decision Tree Classifier (DTC) for identifying students in need of assistance. The basic entropy based DTC has decent performance with minimal tuning: F1 scores over 0.7 for larger data sets. A simple Gaussian Naive Bayes (GNB) classifier trained quickly, but its performance was outmatched by the alternative DTC. The tables and code snippets above demonstrate how these single classifiers do not generalize as well as the Bagged DTC. This particulare classifier used an entropy DTC to start, but generated 20 different models over smaller sets of samples and features. Although the Bagged DTC takes longer to train and predicts 2x slower, its overall performance across data sets was better than the single classifiers. In this application, the added training time and execution time increase are worth the added accuracy. If we have too many false positives reported, it would potentially drain human resources more than computing resources. Bagged Decision Tree Classification The proposed model is generated from the concept of an ensemble learner: it is a single, complex learner composed of many simple learners. Bagging ensemble models average the results from their constituent classifiers. Each simple classifier is built up using a subset of the features and a subset of the data. To improve performance, it is recommended that this classifier be run through a grid search over the max_samples and max_features parameters to determine a higher performing combination of simple learners. At the core of my bagging classifier is the use of a DTC that uses entropy for its splitting criterion. Since I am more familiar with the leveraging of information gain, I chose to use the entropy based DTC. In general, DTC ask a series of questions over the data set, splitting the data into categories as each question is asked. Entropy is a measure of how random a collection of data points are. An entropy based chooses to split on attributes that reduce this randomness. 
For example, an ideal attribute would be one that splits the entire data set into two perfect subsets. Tuning the Bagged DT Classifier The main attributes we can tune for a bagging classifier are the number of estimators as well as the maximum samples and features used in each estimator. I chose to explore across a broad range of sample sizes, but I limited the maximum number of features to 60%. I wanted to see if the resulting classifier would prefer to use more data to learn more features. End of explanation """ predict_labels(tunedBaggedClf, X_test, y_test) """ Explanation: Running this tuned algorithm on the test data gave the following results: End of explanation """
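# A minimal, self-contained sketch on synthetic stand-in data (NOT the student dataset
# or this notebook's actual pipeline) showing how the timing/F1 tables above could be
# generated programmatically for the three model families discussed.
import time
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import f1_score

rng = np.random.RandomState(0)
X_toy = rng.rand(395, 48)                                         # placeholder feature matrix
y_toy = np.where(X_toy[:, 0] + rng.rand(395) > 0.9, 'no', 'yes')  # placeholder labels
X_tr, X_te = X_toy[:300], X_toy[300:]
y_tr, y_te = y_toy[:300], y_toy[300:]

for clf in [DecisionTreeClassifier(criterion='entropy'),
            GaussianNB(),
            BaggingClassifier(DecisionTreeClassifier(criterion='entropy'))]:
    for n in (100, 200, 300):
        start = time.time()
        clf.fit(X_tr[:n], y_tr[:n])
        train_time = time.time() - start
        f1_test = f1_score(y_te, clf.predict(X_te), pos_label='yes')
        print("{:<22} size={:<4} train={:.3f}s test F1={:.3f}".format(
            clf.__class__.__name__, n, train_time, f1_test))
"""
Explanation: As a hedged aside, the summary tables above were assembled by hand; this sketch shows how the same train/predict/F1 loop could be scripted instead. The toy feature matrix, labels, and their shapes are arbitrary placeholders rather than the processed student data, and the estimators are the same sklearn classes already used in this section.
End of explanation
"""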
AnthonyD973/swarmlist-list-based
src/statistics/analysis.ipynb
mit
%%bash if [ ! -e "$BUILD_DIR/experiment" ] then ARCHIVE="$SRC_DIR/statistics/results.tbz" mkdir -p "$BUILD_DIR" mkdir -p "$GRAPH_DIR" tar -xjf "$ARCHIVE" -C "$BUILD_DIR" fi """ Explanation: Data fetching Extract bzipped result. One may put their own results under &lt;git's root&gt;/build/experiment instead, in which case the extracting will be ignored. End of explanation """ INDEX_NAMES = ["Protocol", "Topology", "Packet drop rate", "Num. robots"] COLUMN_NAMES=["Consensus time", "Num. tx entries", "Num. rx entries", "Mean tx bandwidth", "Mean rx bandwidth"] PROTOCOLS=["consensus"] # Our experiments only used the 'consensus' protocol, i.e., # placing all the robots and waiting for consensus to be reached. TOPOLOGIES=["line", "cluster", "scalefree"] DROP_RATES=[0, 0.25, 0.5, 0.75] """ Explanation: Data analyzing Setup variables End of explanation """ # Gets results' data and renames the axes def readResults(filename): df = pd.read_csv(filename, index_col=[0,1,2,3]) df.index.names = INDEX_NAMES df.columns = COLUMN_NAMES return df # Gives statistical data for each {protocol, topology, drop rate, num robots} # configuration about the specified column. def crunchColumnByConfig(df, columnName): # Get column's data ret = df.xs(columnName, axis=1) # Then group experiments by configuration ret = ret.groupby(level=[0,1,2,3]) # Then do some pandas magic stuff ret = ret.apply(pd.Series.reset_index, drop=True).unstack().transpose().describe().transpose() return ret data = readResults(RES_IN) consensusData = crunchColumnByConfig(data, "Consensus time") consensusData """ Explanation: Crunch data End of explanation """ def plotGraph(df, topology, formats, deltas, yscale="linear", xlabel="Number of robots", ylabel="", savefileBaseName=None): fig = plt.figure(figsize = (10,5)) axis = fig.add_subplot(111) topologyDf = df.xs(topology, level=1) plotNumber=0 for protocol in PROTOCOLS: for dropRate in DROP_RATES: currDf = topologyDf.xs((protocol, dropRate)) numsRobots = currDf.index.tolist() numsRobots = [numsRobots[i] + deltas[plotNumber] for i in range(len(numsRobots))] yPlot = currDf.xs("50%", axis=1) yError = [(yPlot - currDf.xs("min", axis=1)), (currDf.xs("max", axis=1) - yPlot)] axis.errorbar(numsRobots, yPlot, yerr = yError, fmt=formats[plotNumber] + "-") plotNumber += 1 axis.set_xlabel(xlabel) axis.set_ylabel(ylabel) axis.set_yscale(yscale) axis.yaxis.grid() axis.legend([str(drop*100)+"% drop" for drop in DROP_RATES], loc=0, ncol=1, title=(topology + " topology")) if savefileBaseName != None: plt.savefig(GRAPH_DIR+"/"+savefileBaseName+".png", dpi=600, format="png", transparent=False) %matplotlib inline # Set variables CONSENSUS_YLABEL="Consensus Time (timesteps)" DELTAS=[0, 0, 0, 0] FORMATS=["ro", "go", "bo", "mo"] # Plot graphs plotGraph(consensusData, "line", FORMATS, DELTAS, ylabel=CONSENSUS_YLABEL, savefileBaseName="lineConsensus") plotGraph(consensusData, "line", FORMATS, DELTAS, yscale="log", ylabel=CONSENSUS_YLABEL + " [log]", savefileBaseName="lineConsensus_log") plotGraph(consensusData, "cluster", FORMATS, DELTAS, ylabel=CONSENSUS_YLABEL, savefileBaseName="clusterConsensus") plotGraph(consensusData, "cluster", FORMATS, DELTAS, yscale="log", ylabel=CONSENSUS_YLABEL + " [log]", savefileBaseName="clusterConsensus_log") plotGraph(consensusData, "scalefree", FORMATS, DELTAS, ylabel=CONSENSUS_YLABEL, savefileBaseName="scalefreeConsensus") plotGraph(consensusData, "scalefree", FORMATS, DELTAS, yscale="log", ylabel=CONSENSUS_YLABEL + " [log]", savefileBaseName="scalefreeConsensus_log") """ Explanation: 
Data displaying End of explanation """
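# Sketch: the six plotGraph calls above differ only in the topology argument and the
# y-scale, so the same figures could be produced with a loop. This assumes plotGraph,
# consensusData, FORMATS, DELTAS, CONSENSUS_YLABEL and TOPOLOGIES defined earlier in
# this notebook are still in scope.
for topology in TOPOLOGIES:
    plotGraph(consensusData, topology, FORMATS, DELTAS,
              ylabel=CONSENSUS_YLABEL,
              savefileBaseName=topology + "Consensus")
    plotGraph(consensusData, topology, FORMATS, DELTAS, yscale="log",
              ylabel=CONSENSUS_YLABEL + " [log]",
              savefileBaseName=topology + "Consensus_log")
"""
Explanation: An optional, equivalent way to generate the same consensus-time plots: since only the topology and the y-scale change between calls, a loop over TOPOLOGIES avoids repeating the plotGraph invocation six times. No new data or settings are introduced; the loop relies entirely on the objects defined above.
End of explanation
"""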
OCDX/article-quality
src/generate_monthly_datasets.ipynb
mit
from ipynb.fs.full.article_quality.db_monthly_stats import DBMonthlyStats, dump_aggregation """ Explanation: Database-based monthly stats In this notebook, we'll use a database table to aggregate monthly article quality scores. We'll be using an SQL query to do the aggregation, writing the aggregated data out to a file that can then be imported in another script for analysis. One important note is regarding how we'll select the articles within Wikipedia that correspond to a specific WikiProject. To do this, we'll be using a WikiProject template -- a bit of structured wikitext that WikiProjects use to tag and add metadata to articles. This worklog shows some minor complications with using the templatelinks table to gather this list of articles. https://meta.wikimedia.org/wiki/Research_talk:Quality_dynamics_of_English_Wikipedia/Work_log/2017-02-17 In this notebook, we'll be using the methodology described there to find the "main" template and the wikiproject_aggregation query (defined in db_monthly_stats.ipynb) to also include all redirecting templates. End of explanation """ import configparser config = configparser.ConfigParser() config.read('../settings.cfg') """ Explanation: Read the configuration End of explanation """ import os def write_once(path, write_to): if not os.path.exists(path): print("Writing out " + path) with open(path, "w") as f: write_to(f) """ Explanation: Utility to make sure we only generate files once End of explanation """ dbms = DBMonthlyStats(config) write_once( "../data/processed/enwiki.full_wiki_aggregation.tsv", lambda f: dump_aggregation(dbms.all_wiki_aggregation(), f)) write_once( "../data/processed/enwiki.wikiproject_women_scientists_aggregation.tsv", lambda f: dump_aggregation(dbms.wikiproject_aggregation("WikiProject_Women_scientists"), f)) write_once( "../data/processed/enwiki.wikiproject_oregon_aggregation.tsv", lambda f: dump_aggregation(dbms.wikiproject_aggregation("WikiProject_Oregon"), f)) """ Explanation: Dump the monthly aggregations End of explanation """
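# Sketch with hypothetical template names: additional WikiProjects could be exported by
# looping over their "main" template names with the same write_once/dump_aggregation
# pattern used above. Assumes dbms, write_once and dump_aggregation are in scope.
more_templates = ["WikiProject_Mathematics", "WikiProject_Medicine"]  # placeholders only
for template in more_templates:
    out_path = "../data/processed/enwiki.{}_aggregation.tsv".format(template.lower())
    # Bind `template` through a default argument so each lambda keeps its own value.
    write_once(out_path,
               lambda f, t=template: dump_aggregation(dbms.wikiproject_aggregation(t), f))
"""
Explanation: A small, optional generalisation of the dumps above: because every per-WikiProject export follows the same write_once/dump_aggregation pattern, further projects can be added by looping over their template names. The two template names shown are placeholders for illustration, not part of the original analysis.
End of explanation
"""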
gtfierro/cs262-project
evaluation/single_node/Single Node Benchmark.ipynb
bsd-3-clause
FILENAME="data/10_pub_sub_pairs.csv" df = parse_and_plot(FILENAME) df.describe() """ Explanation: Forwarding Latency It is being run from a desktop computer on the UC Berkeley network w/ avg ping latency of 5.03ms to a single broker running on EC2 running in standalone mode. Pairs 10 pairs of pub/sub that share a query. Publishers at a rate of 10 msg/sec, runtime of 20 minutes. Each query has a unique key and value End of explanation """ filename="data/forwarding_latency_100pub_to_1sub.csv" df = pd.read_csv(filename) df /= float(1e6) df.plot(kind='line', figsize=(15,8)) filename="data/1_client_10_pub.csv" df1 = pd.read_csv(filename, header=None) df1 /= float(1e6) df1.plot(style='-',figsize=(15,8)) print df1.quantile(q=.99) df1.describe() """ Explanation: 100 pairs of pub/sub that share a query. Publishers at a rate of 10 msg/sec, runtime of 20 minutes. Each query has a unique key and value End of explanation """ filename="data/10_client_100_pub.csv" df2 = pd.read_csv(filename, header=None) df2 /= float(1e6) df2[0] = df2.mean(axis=1) print df2[0].quantile(q=.99) df2[0].plot(style='-',figsize=(15,8)) #df2.describe() df1[1] = df2[0] filename="data/100_client_1000_pub.csv" df3 = pd.read_csv(filename, header=None) df3 /= float(1e6) df3[0] = df3.mean(axis=1) df3.plot(style='-',figsize=(15,8), legend=False) print df3[0].quantile(q=.99) df1[2] = df3[0] snp = df1[df1 > 3.5].copy() snp.columns = ['1:10 sub:pub','10:100 sub:pub', '100:1000 sub:pub'] axs = snp.plot(figsize=(15,10), subplots=True) for ax in axs: ax.set_ylabel("Latency (ms)") ax.set_xlabel("Sample (5msg/sec)") ax.set_yscale('log') df1 """ Explanation: We wil run forwrdingLatencyNto1 for clients = 1, 10, 100. In the N > 1 case, we average the N clients together to get a single timeseries for that case, then we will plot them all together End of explanation """
vzg100/Post-Translational-Modification-Prediction
old/Tyrosine Phosphorylation Example.ipynb
mit
from pred import Predictor
from pred import sequence_vector
"""
Explanation: Example of using ptm_pred to prototype phosphorylation classifiers
Histidine phosphorylation is a quick place to start: there is not much data, but that means the code runs much faster. Predictor is the class which handles reading the data; sequence_vector is a function which vectorizes a protein sequence into a feature array, representing amino acids as integer values between 0 and 20, where 0 stands for empty space used to pad every vector to the same length. It can also include hydrophobicity as a feature.
End of explanation
"""
y = Predictor()
y.load_data(file="Data/Training/clean_Y.csv")
"""
Explanation: Next we load our data and generate random negative ("gibberish") data. The clean data file has negatives created from the data sets pulled from phosphoELM and dbPTM. In generate_random_data, the amino acid parameter represents the amino acid being modified (the target of the modification), and the float passed in is a multiplier: for example, a value of .5 means that .5 * (number of data points) random negatives are generated.
End of explanation
"""
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function="ADASYN", random_data=1)
"""
Explanation: Next we vectorize the sequences using the sequence vector and apply a data balancing function; here we use ADASYN, which generates synthetic examples of the minority (in this case positive) class. Setting random_data to 1 generates as many random negatives as there are data points.
End of explanation
"""
y.supervised_training("mlp_adam")
"""
Explanation: Next we train a supervised classifier ("mlp_adam"). The array output contains the precision, recall, F-score, and the total number correctly estimated.
End of explanation
"""
y.benchmark("Data/Benchmarks/phos.csv", "Y")
"""
Explanation: Next we can check against the benchmarks pulled from dbPTM.
End of explanation
"""
y.generate_pca()
y.generate_tsne()
"""
Explanation: To explore the data further, we can easily generate PCA and t-SNE diagrams of the training set.
End of explanation
"""
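# A toy illustration of the encoding idea described above -- NOT the package's actual
# sequence_vector implementation. Residues map to integers 1-20 and 0 pads shorter
# windows so every vector has the same length (the window length 13 is an arbitrary
# choice for this sketch).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_TO_INT = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}

def toy_sequence_vector(window, length=13):
    codes = [AA_TO_INT.get(aa, 0) for aa in window.upper()]
    codes += [0] * (length - len(codes))   # pad with 0 ("empty space")
    return codes[:length]

print(toy_sequence_vector("MKTAYIAKQR"))
"""
Explanation: To make the vectorization idea concrete, here is a toy version of the integer encoding: each amino acid letter becomes a number between 1 and 20 and shorter windows are padded with 0. This is only a sketch of the concept described above; the real sequence_vector in the pred module may differ in details such as window length and the exact integer mapping.
End of explanation
"""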
DBWangGroupUNSW/COMP9318
L3 - Preprocessing.ipynb
mit
import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline """ Explanation: Data Preprocessing with Pandas Import Modules End of explanation """ df = pd.read_csv('./asset/Median Price of Established House Transfers.txt', sep='\t') # row 3 has a null value df.head() """ Explanation: Import data Data is generated from Australia Bureau of statistics, some cells are removed (set to NaN) manually in order to serve this notebook. End of explanation """ # find rows that Price is null df[pd.isnull(df['Price'])] index_with_null = df[pd.isnull(df['Price'])].index index_with_null """ Explanation: Finding and Handling Missing Data End of explanation """ df2 = df.fillna(0) # price value of row 3 is set to 0.0 df2.ix[index_with_null] # df2.index in index_with_null """ Explanation: We can specify a value (e.g., 0) to replace those null values, through the fillna() method End of explanation """ df2 = df.fillna(method='pad', axis=0) df2.head() # The price of row 3 is the same as that of row 2 """ Explanation: We can also propagate non-null values forward or backward End of explanation """ df2 = df.dropna(axis=0) # if axis = 1 then the column will be dropped df2.head() # Note that row 3 is dropped """ Explanation: We can even drop the rows (or columns) with null values End of explanation """ df["Price"] = df.groupby("City").transform(lambda x: x.fillna(x.mean())) df.ix[index_with_null] """ Explanation: Obviously, none of the above solutions are appropriate. A better way to deal with the null value is to replace them with the mean value of the prices of the corresponding city over the whole year. End of explanation """ pd.cut(df['Price'],5).head() # equally partition Price into 5 bins # We could label the bins and add new column df['Bin'] = pd.cut(df['Price'],5,labels=["Very Low","Low","Medium","High","Very High"]) df.head() """ Explanation: Binning Equal-width Partitioning We use the table with all null values filled End of explanation """ pd.qcut(df['Price'],5).head() # Note the difference from the Equal-width Partitioning case # Let's check the depth of each bin df['Bin'] = pd.qcut(df['Price'],5,labels=["Very Low","Low","Medium","High","Very High"]) df.groupby('Bin').size() """ Explanation: Equal-depth Partitioining End of explanation """ df.head() df['Price-Smoothing-mean'] = df.groupby('Bin')['Price'].transform('mean') df.head() """ Explanation: Smoothing Smoothing by Bin Means End of explanation """ df['Price-Smoothing-max'] = df.groupby('Bin')['Price'].transform('max') df.head() """ Explanation: Smoothing by Bin Max End of explanation """ df = pd.read_csv('./asset/Median Price of Established House.txt', sep='\t') df.head() """ Explanation: Normalization End of explanation """ from sklearn import preprocessing min_max_scaler = preprocessing.StandardScaler() x_scaled = min_max_scaler.fit_transform(df[df.columns[1:5]]) # we need to remove the first column df_standard = pd.DataFrame(x_scaled) df_standard.insert(0, 'City', df.City) df_standard """ Explanation: Standard Scaler End of explanation """ from sklearn import preprocessing min_max_scaler = preprocessing.RobustScaler() x_scaled = min_max_scaler.fit_transform(df[df.columns[1:5]]) # we need to remove the first column df_robust = pd.DataFrame(x_scaled) df_robust.insert(0, 'City', df.City) df_robust """ Explanation: Robust Scaler End of explanation """ df = pd.read_csv('./asset/Median Price of Established House.txt', sep='\t') df.head() # use bins=x to control the number of bins 
df.hist(column=['Q1','Q3'],bins=6,alpha=0.5,figsize=(16, 6))
"""
Explanation: Histograms
End of explanation
"""
df.plot.scatter(x='Q1', y='Q3');
"""
Explanation: Scatter
End of explanation
"""
from pandas.tools.plotting import scatter_matrix
scatter_matrix(df, alpha=0.9, figsize=(12, 12), diagonal='hist') # set the diagonal figures to be histograms
"""
Explanation: A scatter matrix provides a better way to discover the relationships in the data
End of explanation
"""
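# Sketch: true min-max normalisation to [0, 1] on a toy column (not the house-price
# table), shown both with the formula (x - min) / (max - min) and with MinMaxScaler.
import numpy as np
from sklearn import preprocessing

x_toy = np.array([[400.0], [520.0], [610.0], [735.0]])   # toy values only
manual = (x_toy - x_toy.min()) / (x_toy.max() - x_toy.min())
min_max = preprocessing.MinMaxScaler()
print(manual.ravel())
print(min_max.fit_transform(x_toy).ravel())
"""
Explanation: A short optional aside on normalization: the scaler cells above demonstrate StandardScaler and RobustScaler (although the variable holding them is named min_max_scaler), so for completeness this sketch shows genuine min-max scaling to the [0, 1] range. The numbers are toy values, not the median house prices used elsewhere in this lecture.
End of explanation
"""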
GPflow/GPflowOpt
doc/source/notebooks/firststeps.ipynb
apache-2.0
import numpy as np from gpflowopt.domain import ContinuousParameter def branin(x): x = np.atleast_2d(x) x1 = x[:, 0] x2 = x[:, 1] a = 1. b = 5.1 / (4. * np.pi ** 2) c = 5. / np.pi r = 6. s = 10. t = 1. / (8. * np.pi) ret = a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1 - t) * np.cos(x1) + s return ret[:, None] domain = ContinuousParameter('x1', -5, 10) + \ ContinuousParameter('x2', 0, 15) domain """ Explanation: First steps into Bayesian optimization Ivo Couckuyt, Joachim van der Herten Introduction Bayesian optimization is particularly useful for expensive optimization problems. This includes optimization problems where the objective (and constraints) are time-consuming to evaluate: measurements, engineering simulations, hyperparameter optimization of deep learning models, etc. Another area where Bayesian optimization may provide a benefit is in the presence of (a lot of) noise. If your problem does not satisfy these requirements other optimization algorithms might be better suited. To setup a Bayesian optimization scheme with GPflowOpt you have to: define your objective and specify the optimization domain setup a GPflow model and choose an acquisition function create a BayesianOptimizer Objective function End of explanation """ import gpflow from gpflowopt.bo import BayesianOptimizer from gpflowopt.design import LatinHyperCube from gpflowopt.acquisition import ExpectedImprovement from gpflowopt.optim import SciPyOptimizer, StagedOptimizer, MCOptimizer # Use standard Gaussian process Regression lhd = LatinHyperCube(21, domain) X = lhd.generate() Y = branin(X) model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True)) model.kern.lengthscales.transform = gpflow.transforms.Log1pe(1e-3) # Now create the Bayesian Optimizer alpha = ExpectedImprovement(model) acquisition_opt = StagedOptimizer([MCOptimizer(domain, 200), SciPyOptimizer(domain)]) optimizer = BayesianOptimizer(domain, alpha, optimizer=acquisition_opt, verbose=True) # Run the Bayesian optimization r = optimizer.optimize(branin, n_iter=10) print(r) """ Explanation: Bayesian optimizer End of explanation """
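# Sketch: a quick sanity check using the Branin function's known global minimum
# (approximately 0.397887, attained near (-pi, 12.275), (pi, 2.275) and (9.42478, 2.475)).
# The minimum reported by the optimizer above should approach this value.
import numpy as np

known_minimizers = np.array([[-np.pi, 12.275],
                             [np.pi, 2.275],
                             [9.42478, 2.475]])
print(branin(known_minimizers))   # each entry should be close to 0.397887
"""
Explanation: Because the Branin test function has an analytically known global minimum, the result printed above can be sanity-checked directly: evaluating branin at the three known minimizers should give values close to 0.397887, and the best objective value found by the Bayesian optimizer should approach the same number as more iterations are allowed. This check is an optional sketch, not part of the GPflowOpt tutorial itself.
End of explanation
"""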
jldbc/pybaseball
EXAMPLES/imputed_derivation.ipynb
mit
from pybaseball import statcast, utils import matplotlib.pyplot as plt import numpy as np import pandas as pd from pybaseball.plotting import plot_bb_profile """ Explanation: Isolate Imputations An inital approach to isolate imputations was to copy and paste from the related article on Fangraphs. This notebook serves as an analysis approach to derive the imputed values based on flagging if a certain exit velo/launch angle comprises more than a given percentage of the total dataset for a bb_type. End of explanation """ # Grab 1 month per year dfs = [] for year in range(2015, 2021): print(f"Starting year {year}") dfs.append(statcast(start_dt=f'{year}-08-01', end_dt=f'{year}-09-01',verbose=False)) """ Explanation: Since there's a bit of variance year-to-year and especially difference in 2020 with Hawkeye, grab a month from each year End of explanation """ threshold = 0.002 summary = None for year,df in zip(range(2015, 2021),dfs): for bb_type in dfs[0].bb_type.dropna().unique(): # Isolate each bb_type i = df[df["bb_type"] == bb_type] # Sort by pairs of launch angle and speed i = i.groupby(["launch_angle", "launch_speed"]).size().reset_index(name="count").sort_values("count", ascending=False) # Derive fraction of total i["fraction"] = i["count"] / i["count"].sum() # Flagging as possibly imputed criterion i["flag"] = (i["fraction"] > threshold) i["bb_type"] = bb_type i["year"] = year flagged = i[i["flag"] == True] # Add to dataframe (or create on first iteration) if summary is not None: summary = summary.append(flagged[["launch_angle","launch_speed","count","bb_type","year"]]) else: summary = flagged[["launch_angle","launch_speed","count","bb_type","year"]] """ Explanation: Calculate the fraction per year a given ev/launch angle makes up, then put those together into one DF. The topline threshold defines what fraction of total annual bb_type we use to raise a possible imputation. End of explanation """ # Print out the results summary.groupby(["launch_angle", "launch_speed","bb_type"]).size().reset_index(name="Years above threshold") """ Explanation: Group over years to see the cases where the threshold is passed End of explanation """ for i,year in enumerate(range(2015, 2021)): plot_bb_profile(dfs[i]) plt.title(f'{year}') plt.xlabel("Launch Angle") plt.show() for i,year in enumerate(range(2015, 2021)): skimmed_df = dfs[i].merge(summary.drop_duplicates(), how="left", on=["launch_angle","launch_speed","bb_type"],indicator=True) plot_bb_profile(skimmed_df[skimmed_df['_merge'] == 'left_only']) plt.title(f'{year}') plt.xlabel("Launch Angle") plt.show() """ Explanation: Validate Results: End of explanation """ for year,df in zip(range(2015, 2021),dfs): for bb_type in dfs[0].bb_type.dropna().unique(): # Isolate each bb_type i = df[df["bb_type"] == bb_type] # Sort by pairs of launch angle and speed i = i.groupby(["launch_angle", "launch_speed"]).size().reset_index(name="count").sort_values("count", ascending=False) # Derive fraction of total i["fraction"] = i["count"] / i["count"].sum() print(f"bb_type: {bb_type}, year: {year}") print(i.head(5)) """ Explanation: Output distributions look clean, so this set looks like a good start. Also want to output top handful for each bb_type for each year. This repeats some code above - not the cleanest, but I don't want the key parts of the notebook buried under a huge number of tables. End of explanation """
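# Sketch: persist the flagged (launch_angle, launch_speed, bb_type) combinations and
# reuse them to mark possible imputations in any other Statcast pull. Assumes the
# `summary` DataFrame built above is in scope; the output filename is arbitrary.
flagged = summary[["launch_angle", "launch_speed", "bb_type"]].drop_duplicates()
flagged.to_csv("possible_imputations.csv", index=False)

def mark_possible_imputations(statcast_df, flagged=flagged):
    merged = statcast_df.merge(flagged, how="left",
                               on=["launch_angle", "launch_speed", "bb_type"],
                               indicator="possibly_imputed")
    merged["possibly_imputed"] = merged["possibly_imputed"] == "both"
    return merged
"""
Explanation: As a possible next step, the flagged launch angle / exit velocity pairs can be saved and reused as a lookup for other Statcast data: a left merge with the indicator option marks which rows match a flagged combination. The helper and the output filename are illustrative sketches rather than part of the derivation above.
End of explanation
"""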
NuGrid/NuPyCEE
ChETEC_school/GCE Lab 1 - Solar Composition - Elemental Abundance Pattern.ipynb
bsd-3-clause
# Import the OMEGA+ code and standard packages import matplotlib import matplotlib.pyplot as plt import numpy as np # Two-zone galactic chemical evolution code import JINAPyCEE.omega_plus as omega_plus # Run scripts for this notebook %run script_solar_ab.py # Matplotlib option %matplotlib inline """ Explanation: GCE Lab 1 - Solar Composition - Elemental Abundance Pattern In this notebook, you will tune the number of Type Ia supernovae and the number of r-process events to match portions of the elemental abundance pattern of the Sun. End of explanation """ # \\\\\\\\\\ Modify below \\\\\\\\\\\\ # ==================================== # Number of SNe Ia per units of stellar mass formed. # For every solar mass of stars formed, there will be statistically nb_1a_per_m SNe Ia. # Original value --> 1.0e-1 nb_1a_per_m = 1.0e-1 # ==================================== # ////////// Modify above //////////// # Run the GCE code OMEGA+ op = omega_plus.omega_plus(nb_1a_per_m=nb_1a_per_m, **kwargs) # Get source contributions m_el_all, m_el_agb, m_el_massive, m_el_sn1a, m_el_nsm = \ get_individual_sources(op.inner, i_step_sol=i_t_Sun) # Set figure fig = plt.figure(figsize=(8,4)) matplotlib.rcParams.update({'font.size': 16.0}) # Plot solar abundance data plt.plot(solar_Z, solar_ab, color='k', marker='o', linewidth=6, alpha=0.5, label='Solar') # Plot contribution from Type Ia supernovae plt.plot(Z_charge, m_el_sn1a, color='g', label='SNe Ia', alpha=0.8, linestyle='-', marker='s') # Add element annotations (iron-peak) Z_low, Z_upp = 20, 30 for i in range(Z_low, Z_upp+1): plt.annotate(elements[i], xy=(solar_Z[i],yy[i]), color='k',\ fontsize=15, ha='center', va='center') # Label, legend, and axis plt.legend(fontsize=16, loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel('Z (charge number)', fontsize=16) plt.ylabel('X (mass fraction)', fontsize=16) plt.xlim(Z_low-0.5, Z_upp+0.5) plt.ylim(1e-9,3e-1) plt.yscale('log') # Frame tuning plt.subplots_adjust(top=0.95) plt.subplots_adjust(right=0.75) plt.subplots_adjust(left=0.15) plt.subplots_adjust(bottom=0.14) """ Explanation: Choice of Stellar Yields The stellar yields for the simulations are taken from: * Low-mass asymptotic-giant-branch (AGB) stars: Cristallo et al. (2015) * Massive stars: Limongi & Chieffi (2018) * Type Ia supernovae (SNe Ia): Iwamoto et al. (1999) * Rapid neutron-capture process (r-process): Solar residuals of Arnould et al. (2007) 1. Iron-Peak Elements. Contribution of SNe Ia End of explanation """ # \\\\\\\\\\ Modify below \\\\\\\\\\\\ # ==================================== # Number of r-process events per units of stellar mass formed. 
# Original value --> 1.0e-6 nb_nsm_per_m = 1.0e-6 # ==================================== # ////////// Modify above //////////// # Run the GCE code OMEGA+ op = omega_plus.omega_plus(nb_nsm_per_m=nb_nsm_per_m, **kwargs) # Get source contributions m_el_all, m_el_agb, m_el_massive, m_el_sn1a, m_el_nsm = \ get_individual_sources(op.inner, i_step_sol=i_t_Sun) # Set figure fig = plt.figure(figsize=(12,4)) matplotlib.rcParams.update({'font.size': 16.0}) # Plot solar abundance data plt.plot(solar_Z, solar_ab, color='k', marker='o', linewidth=6, alpha=0.5, label='Solar') # Contribution of the s-process (AGB stars) plt.plot(Z_charge, m_el_agb, color='r', label='s-process (AGB)', alpha=0.8, linestyle='-', marker='x') # Contribution of the r-process plt.plot(Z_charge, m_el_nsm, color='c', label='r-process', alpha=0.8, linestyle='-', marker='^') # Add element annotations (lanthanides) Z_low, Z_upp = 50, 80 for i in range(Z_low, Z_upp+1): plt.annotate(elements[i], xy=(solar_Z[i],yy[i]), color='k',\ fontsize=15, ha='center', va='center') # Label, legend, and axis plt.legend(fontsize=16, loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel('Z (charge number)', fontsize=16) plt.ylabel('X (mass fraction)', fontsize=16) plt.xlim(Z_low-0.5, Z_upp+0.5) plt.ylim(1e-11,1e-6) plt.yscale('log') # Frame tuning plt.subplots_adjust(top=0.95) plt.subplots_adjust(right=0.75) plt.subplots_adjust(left=0.15) plt.subplots_adjust(bottom=0.14) """ Explanation: 2. Neutron-Capture Elements. s- and r-Process Contributions End of explanation """ # \\\\\\\\\\ Modify below \\\\\\\\\\\\ # ==================================== # Number of SNe Ia per units of stellar mass formed. # For every solar mass of stars formed, there will be statistically nb_1a_per_m SNe Ia. nb_1a_per_m = 1.0e-4 # Number of r-process events per units of stellar mass formed. nb_nsm_per_m = 1.0e-6 # ==================================== # ////////// Modify above //////////// # Run the GCE code OMEGA+ op = omega_plus.omega_plus(nb_1a_per_m=nb_1a_per_m, nb_nsm_per_m=nb_nsm_per_m, **kwargs) # Get source contributions m_el_all, m_el_agb, m_el_massive, m_el_sn1a, m_el_nsm = \ get_individual_sources(op.inner, i_step_sol=i_t_Sun) # \\\\\\\\\\ Modify below \\\\\\\\\\\\ # ==================================== # Select the range of elements (atomic numbers) you want to plot Z_low, Z_upp = 5, 30 # NOTE: You might want to modify plt.ylim(..) below. 
# ==================================== # ////////// Modify above //////////// # Set figure fig = plt.figure(figsize=(12,4.0)) matplotlib.rcParams.update({'font.size': 16.0}) # Plot solar abundance data plt.plot(solar_Z, solar_ab, color='k', marker='o', linewidth=6, alpha=0.5, label='Solar') # All sources combined plt.plot(Z_charge, m_el_all, color='orange', label='All sources', alpha=1.0, linestyle='-', linewidth=2) # Contribution of Type Ia supernovae plt.plot(Z_charge, m_el_sn1a, color='g', label='SNe Ia', alpha=0.8, linestyle='-', marker='s') # Contribution of massive stars (core-collapse supernovae) plt.plot(Z_charge, m_el_massive, color='b', label='Massive stars', alpha=0.8, linestyle='-', marker='^') # Contribution of AGB stars plt.plot(Z_charge, m_el_agb, color='r', label='AGB stars', alpha=0.8, linestyle='-', marker='x') # Contribution of the r-process plt.plot(Z_charge, m_el_nsm, color='c', label='r-process', alpha=0.8, linestyle='-', marker='s') # Add element annotations (lanthanides) for i in range(Z_low, Z_upp+1): plt.annotate(elements[i], xy=(solar_Z[i],yy[i]), color='k',\ fontsize=15, ha='center', va='center') # Label, legend, and axis plt.legend(fontsize=16, loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel('Z (charge number)', fontsize=16) plt.ylabel('X (mass fraction)', fontsize=16) plt.xlim(Z_low-0.5, Z_upp+0.5) plt.ylim(1e-10,1) plt.yscale('log') # Frame tuning plt.subplots_adjust(top=0.95) plt.subplots_adjust(right=0.75) plt.subplots_adjust(left=0.15) plt.subplots_adjust(bottom=0.14) """ Explanation: Exercises 1) SNe Ia have synthesized most of the Fe we observed today in the Milky Way. How many SNe Ia, per units of stellar mass formed [M$_\odot^{-1}$], are needed in the simulation in order to reproduce the Fe solar abundance? You will need to modify the nb_1a_per_m parameter. 2) Neutron-capture elements have mostly been synthesized by the slow neutron-capture process (s-process) in AGB stars, and by the rapid neutron-capture process (r-process) in rare events such as compact binary mergers and exotic classes of supernovae. How many r-process events, per units of stellar mass formed [M$_\odot^{-1}$], are needed in the simulation in order to reproduce the solar abundance of lanthanides (e.g., Eu)? You will need to modify the nb_nsm_per_m parameter. 3) There are about $5\times10^{10}$ M$_\odot$ of stars in the Milky Way. Using the number you found in Exercises 1) and 2), approximately how many SNe Ia and r-process events have occured within the Milky Way since its formation? 3. Extra Material - All sources End of explanation """
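# Sketch for Exercise 3: once nb_1a_per_m and nb_nsm_per_m have been tuned in
# Exercises 1) and 2), the totals follow from (events per unit of stellar mass formed)
# multiplied by the total stellar mass. The two rates below are placeholders, not the
# exercise answers.
M_star_MW = 5.0e10             # solar masses of stars in the Milky Way (from Exercise 3)
nb_1a_per_m_found = 1.0e-3     # placeholder -- substitute your value from Exercise 1
nb_nsm_per_m_found = 1.0e-5    # placeholder -- substitute your value from Exercise 2

print("Approximate number of SNe Ia:           {:.1e}".format(nb_1a_per_m_found * M_star_MW))
print("Approximate number of r-process events: {:.1e}".format(nb_nsm_per_m_found * M_star_MW))
"""
Explanation: A worked sketch for Exercise 3: since nb_1a_per_m and nb_nsm_per_m are defined per unit of stellar mass formed, multiplying the values found in Exercises 1) and 2) by the roughly 5e10 solar masses of stars in the Milky Way gives the approximate number of SNe Ia and r-process events. The rates entered here are placeholders to show the arithmetic, not the answers to the exercises.
End of explanation
"""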
philmui/datascience2016fall
lecture05.viz.data.shaping/lecture05.data.shaping.ipynb
mit
import numpy as np from pandas import Series, DataFrame import pandas as pd df1 = DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'], 'data1': range(7)}) df2 = df2 = DataFrame({'key': ['a', 'b', 'd'], 'data2': range(3)}) df1 df2 """ Explanation: Shaping Data Much of the programming work in data analysis and modeling is spent on data preparation: loading, cleaning, transforming, and rearranging. Sometimes the way that data is stored in files or databases is not the way you need it for a data processing application. pandas along with the Python standard library provide you with a high-level, flexible, and high-performance set of core manipulations and algorithms to enable you to wrangle data into the right form without much trouble. End of explanation """ pd.merge(df1, df2) """ Explanation: Merging / Joining This is an example of a many-to-one merge situation; the data in df1 has multiple rows labeled a and b, whereas df2 has only one row for each value in the key column. Calling merge with these objects we obtain: End of explanation """ pd.merge(df1, df2, on='key') """ Explanation: If not specified, merge uses the overlapping column names as the keys. It’s a good practice to specify explicitly, though: End of explanation """ df3 = DataFrame({'lkey': ['b', 'b', 'a', 'c', 'a', 'a', 'b'], 'data1': range(7)}) df4 = DataFrame({'rkey': ['a', 'b', 'd'], 'data2': range(3)}) pd.merge(df3, df4, left_on='lkey', right_on='rkey') """ Explanation: If the column names are different in each object, you can specify them separately: End of explanation """ pd.merge(df1, df2, how='outer') pd.merge(df1, df2, how='left') pd.merge(df1, df2, how='right') """ Explanation: You probably noticed that the 'c' and 'd' values and associated data are missing from the result. By default merge does an 'inner' join; the keys in the result are the intersec- tion. Other possible options are 'left', 'right', and 'outer'. The outer join takes the union of the keys, combining the effect of applying both left and right joins: End of explanation """ df1 = DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'], 'data1': range(6)}) df2 = DataFrame({'key': ['a', 'b', 'a', 'b', 'd'], 'data2': range(5)}) print(df1) print(df2) pd.merge(df1, df2, on='key', how='left') """ Explanation: Many-to-Many Merges Many-to-many merges have well-defined though not necessarily intuitive behavior. Here’s an example: End of explanation """ pd.merge(df1, df2, how='inner') """ Explanation: Many-to-many joins form the Cartesian product of the rows. Since there were 3 'b' rows in the left DataFrame and 2 in the right one, there are 6 'b' rows in the result. The join method only affects the distinct key values appearing in the result: End of explanation """ left = DataFrame({'key1': ['foo', 'foo', 'bar'], 'key2': ['one', 'two', 'one'], 'lval': [1, 2, 3]}) right = DataFrame({'key1': ['foo', 'foo', 'bar', 'bar'], 'key2': ['one', 'one', 'one', 'two'], 'rval': [4, 5, 6, 7]}) pd.merge(left, right, on=['key1', 'key2'], how='outer') """ Explanation: Compound Key To merge with multiple keys ("compound key"), pass a list of column names: End of explanation """ pd.merge(left, right, on='key1') pd.merge(left, right, on='key1', suffixes=('_left', '_right')) """ Explanation: To determine which key combinations will appear in the result depending on the choice of merge method, think of the multiple keys as forming an array of tuples to be used as a single join key (even though it’s not actually implemented that way). 
Overlapping Column Names A last issue to consider in merge operations is the treatment of overlapping column names. While you can address the overlap manually (see the later section on renaming axis labels), merge has a suffixes option for specifying strings to append to overlapping names in the left and right DataFrame objects: End of explanation """ left1 = DataFrame({'key': ['a', 'b', 'a', 'a', 'b', 'c'], 'value': range(6)}) right1 = DataFrame({'group_val': [3.5, 7]}, index=['a', 'b']) print(left1) print(right1) pd.merge(left1, right1, left_on='key', right_index=True) """ Explanation: Merge on Index In some cases, the merge key or keys in a DataFrame will be found in its index. In this case, you can pass left_index=True or right_index=True (or both) to indicate that the index should be used as the merge key: End of explanation """ pd.merge(left1, right1, left_on='key', right_index=True, how='outer') """ Explanation: Since the default merge method is to intersect the join keys, you can instead form the union of them with an outer join: End of explanation """ arr = np.arange(12).reshape((3, 4)) np.concatenate([arr, arr], axis=1) np.concatenate([arr, arr], axis=0) """ Explanation: Concatenation / Stacking Another kind of data combination operation is alternatively referred to as concatena- tion, binding, or stacking. NumPy has a concatenate function for doing this with raw NumPy arrays: End of explanation """ s1 = Series([0, 1], index=['a', 'b']) s2 = Series([2, 3, 4], index=['c', 'd', 'e']) s3 = Series([5, 6], index=['f', 'g']) pd.concat([s1, s2, s3]) """ Explanation: The concat function in pandas provides a consistent way to address each of these concerns. I’ll give a number of examples to illustrate how it works. Suppose we have three Series with no index overlap: End of explanation """ pd.concat([s1, s2, s3], axis=1) """ Explanation: By default concat works along axis=0, producing another Series. If you pass axis=1, the result will instead be a DataFrame (axis=1 is the columns): End of explanation """ s4 = pd.concat([s1 * 5, s3]) s4 pd.concat([s1, s4], axis=1) In [68]: pd.concat([s1, s4], axis=1, join='inner') """ Explanation: In this case there is no overlap on the other axis, which as you can see is the sorted union (the 'outer' join) of the indexes. You can instead intersect them by passing join='inner': End of explanation """ import pandas as pd df = pd.read_csv("data/eu_trade_sums.csv") """ Explanation: Split-Apply-Combine Let's load some real world data to illustration splitting, transformation and grouping End of explanation """ data = DataFrame(np.arange(6).reshape((2, 3)), index=pd.Index(['Ohio', 'Colorado'], name='state'), columns=pd.Index(['one', 'two', 'three'], name='number')) data """ Explanation: Reshaping There are a number of fundamental operations for rearranging tabular data. These are alternatingly referred to as reshape or pivot operations. Hierarchical indexing provides a consistent way to rearrange data in a DataFrame. There are two primary actions: stack: this “rotates” or pivots from the columns in the data to the rows unstack: this pivots from the rows into the columns We will illustrate these operations through a series of examples. 
Consider a small DataFrame with string arrays as row and column indexes: End of explanation """ result = data.stack() result """ Explanation: Using the stack method on this data pivots the columns into the rows, producing a Series: End of explanation """ result.unstack() """ Explanation: From a hierarchically-indexed Series, you can rearrange the data back into a DataFrame with unstack: End of explanation """ result.unstack(0) result.unstack('state') """ Explanation: By default the innermost level is unstacked (same with stack). You can unstack a different level by passing a level number or name: End of explanation """ s1 = Series([0, 1, 2, 3], index=['a', 'b', 'c', 'd']) s2 = Series([4, 5, 6], index=['c', 'd', 'e']) data2 = pd.concat([s1, s2], keys=['one', 'two']) data2.unstack() """ Explanation: Unstacking might introduce missing data if all of the values in the level aren’t found in each of the subgroups: End of explanation """ data2.unstack().stack() data2.unstack().stack(dropna=False) """ Explanation: Stacking filters out missing data by default, so the operation is easily invertible: End of explanation """ df = DataFrame({'left': result, 'right': result + 5}, columns=pd.Index(['left', 'right'], name='side')) df df.unstack('state') df.unstack('state').stack('side') """ Explanation: When unstacking in a DataFrame, the level unstacked becomes the lowest level in the result: End of explanation """ data = DataFrame({'k1': ['one'] * 3 + ['two'] * 4, 'k2': [1, 1, 2, 3, 3, 4, 4]}) data """ Explanation: Removing Duplicates Duplicate rows may be found in a DataFrame for any number of reasons. Here is an example: End of explanation """ data.duplicated() """ Explanation: The DataFrame method duplicated returns a boolean Series indicating whether each row is a duplicate or not: End of explanation """ data.drop_duplicates() """ Explanation: Relatedly, drop_duplicates returns a DataFrame where the duplicated array is False: End of explanation """ data['v1'] = range(7) data.drop_duplicates(['k1']) """ Explanation: Both of these methods by default consider all of the columns; alternatively you can specify any subset of them to detect duplicates. Suppose we had an additional column of values and wanted to filter duplicates only based on the 'k1' column: End of explanation """ data.drop_duplicates(['k1', 'k2'], keep='last') """ Explanation: duplicated and drop_duplicates by default keep the first observed value combination. Passing take_last=True will return the last one: End of explanation """ data = DataFrame({'food': ['bacon', 'pulled pork', 'bacon', 'Pastrami', 'corned beef', 'Bacon', 'pastrami', 'honey ham', 'nova lox'], 'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]}) data """ Explanation: Transforming Data Using a Function or Mapping For many data sets, you may wish to perform some transformation based on the values in an array, Series, or column in a DataFrame. Consider the following hypothetical data collected about some kinds of meat: End of explanation """ meat_to_animal = { 'bacon': 'pig', 'pulled pork': 'pig', 'pastrami': 'cow', 'corned beef': 'cow', 'honey ham': 'pig', 'nova lox': 'salmon' } """ Explanation: Suppose you wanted to add a column indicating the type of animal that each food came from. 
Let’s write down a mapping of each distinct meat type to the kind of animal: End of explanation """ data['animal'] = data['food'].map(str.lower).map(meat_to_animal) data """ Explanation: The map method on a Series accepts a function or dict-like object containing a mapping, but here we have a small problem in that some of the meats above are capitalized and others are not. Thus, we also need to convert each value to lower case: End of explanation """ data['food'].map(lambda x: meat_to_animal[x.lower()]) """ Explanation: We could also have passed a function that does all the work: End of explanation """ data = Series([1., -999., 2., -999., -1000., 3.]) data """ Explanation: Replacing Values Filling in missing data with the fillna method can be thought of as a special case of more general value replacement. While map, as you’ve seen above, can be used to modify a subset of values in an object, replace provides a simpler and more flexible way to do so. Let’s consider this Series: End of explanation """ data.replace(-999, np.nan) """ Explanation: The -999 values might be sentinel values for missing data. To replace these with NA values that pandas understands, we can use replace, producing a new Series: End of explanation """ data.replace([-999, -1000], np.nan) """ Explanation: If you want to replace multiple values at once, you instead pass a list then the substitute value: End of explanation """ data.replace([-999, -1000], [np.nan, 0]) """ Explanation: To use a different replacement for each value, pass a list of substitutes: End of explanation """ data.replace({-999: np.nan, -1000: 0}) """ Explanation: The argument passed can also be a dict: End of explanation """
jwlockhart/data_workshops
ICOS_data_camp/ICOS Big Data Camp Data Analysis.ipynb
mit
import pandas as pd import numpy as np import matplotlib.pyplot as plt import statsmodels.api as sm import statsmodels.formula.api as smf # This makes it so that plots show up here in the notebook. # You do not need it if you are not using a notebook. %matplotlib inline from IPython.display import Image """ Explanation: Module 6 - Introduction to Python for Data Analysis : Why you will NOT use Excel anymore! Instructor: Ronnie (Saerom) Lee and Jeff Lockhart Date: June 8th (Thursday), 2017 Packages: pandas, numpy, matplotlib, statsmodels pandas: an open source library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. Matplotlib: a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. Statsmodels: a Python module that provides classes and functions for the estimation of many different statistical models 0. Import relevant packages import bring in packages of useful tools and functions for you to use import (package_name) as (abbreviation) lets you refer to the package by the name abbreviation, so you can type less End of explanation """ data = {'Course' : 'Intro to Big Data', 'Section' : '6', 'Names' : ['Ronnie', 'Jeff', 'Teddy', 'Jerry'], 'Group' : ['1', '2', '1', '2'], 'Year' : ['Junior'] * 2 + ['Senior'] * 2, 'Date' : pd.Timestamp('20160607'), 'Quiz' : np.array([20, 90, 60, 100], dtype='float64')} df = pd.DataFrame(data) df """ Explanation: 1. pandas 1.1. How to create, save, and read a dataframe (1) Create a dataframe End of explanation """ df = df.rename(index=str, columns={"Names": "Name"}) df """ Explanation: Rename a column End of explanation """ df.to_csv('data.csv', sep = ',', index = False) # if comma separated (csv) df.to_csv('data.tsv', sep = '\t', index = False) # if tab separated (tsv) df.to_csv('data.txt', sep = '\t', index = False) # you can also use sep = ',' as in csv files """ Explanation: (2) Save the dataframe into a file: We will learn how to save first, since we don't have a file to read yet. csv/tsv/txt file (Note: Don't forget to specify the separator!) End of explanation """ df.to_excel('data.xlsx', index_label='label') """ Explanation: Excel file End of explanation """ df_csv = pd.read_csv('data.csv', sep = ',') df_csv df_tsv = pd.read_csv('data.tsv', sep = '\t') df_tsv """ Explanation: (3) Read a file into a dataframe csv/tsv/txt file (Note: Don't forget to specify the separator!) End of explanation """ with pd.ExcelFile('data.xlsx') as xlsx: df_excel = pd.read_excel(xlsx, sheetname = 'Sheet1') df_excel ### If there are multiple sheets to read from # with pd.ExcelFile('data.xlsx') as xlsx: # df_sheet1 = pd.read_excel(xlsx, sheetname = 'Sheet1') # df_sheet2 = pd.read_excel(xlsx, sheetname = 'Sheet2') """ Explanation: Excel file End of explanation """ # First, create a new dataframe new_data = {'Course' : 'Intro to Big Data', 'Section' : '6', 'Name' : ['Donald', 'Melania'], 'Group' : '5', 'Year' : ['Freshman', 'Sophomore'], 'Date' : pd.Timestamp('20160607'), 'Quiz' : np.array([5, 85], dtype='float64')} df2 = pd.DataFrame(new_data) df2 # Append the new dataframe to the existing dataframe df = df.append(df2, ignore_index=True) df """ Explanation: Other formats you can read JSON strings: pd.read_json() HTML tables: pd.read_html() SQL databases: pd.read_sql_table() SAS files: pd.read_sas() Stata files: pd.read_stata() and many more... 1.2. 
How to add and remove row/column(s) in the dateframe (1) Add row/column(s) Rows using .append() End of explanation """ df = pd.concat([df, df2], axis = 0, ignore_index = True) # If axis = 1, then add column df """ Explanation: Or an alternative way to add row(s) is to use pd.concat() End of explanation """ df['Assignment'] = np.array([45, 85, 50, 90, 10, 70, 10, 70], dtype='float64') df """ Explanation: Columns End of explanation """ df.drop(0) """ Explanation: (3) Remove rows, columns, and duplicates Rows (by index) End of explanation """ df = df.drop('Date', axis = 1) # Note: axis = 1 denotes that we are referring to a column, not a row df """ Explanation: Columns End of explanation """ # First, in order to check whether there are any duplicates df.duplicated() # If there are duplicates, then run the following code df = df.drop_duplicates() df """ Explanation: Duplicates End of explanation """ term_project = {'Group' : ['1', '2', '3', '4'], 'Presentation': [80.0, 90., 100., 50.], 'Report' : np.array([60, 80, 70, 30], dtype='float64')} df3 = pd.DataFrame(term_project) df3 df """ Explanation: 1.3. Merge two dataframes Q. Assume that the students were assigned to groups. For the term project, each group is required to do a presentation and submit a report. Suppose that you graded the presentations and the reports as the following. Create a dataframe with the following information: Group 1: Presentation: 80 Report: 60 Group 2: Presentation: 90 Report: 80 Group 3: Presentation: 100 Report: 70 Group 4: Presentation: 50 Report: 30 End of explanation """ pd.merge(df, df3, on = 'Group') """ Explanation: Rather than putting in the scores one by one, we can simply merge the two tables. End of explanation """ pd.merge(df, df3, how = 'left', on = 'Group') """ Explanation: Q. OOPS! We lost The Donald and Melania! What went wrong? Q. How should we merge the data in order to keep The Donald and Melania? Important parameter: how = {'left', 'right', 'outer', 'inner'} inner (default): use intersection of keys from both frames, similar to a SQL inner join; preserve the order of the left keys outer: use union of keys from both frames, similar to a SQL full outer join; sort keys lexicographically left: use only keys from left frame, similar to a SQL left outer join; preserve key order right: use only keys from right frame, similar to a SQL right outer join; preserve key order Q. Which one of these should we set how as? End of explanation """ pd.merge(df, df3, how = 'right', on = 'Group') pd.merge(df, df3, how = 'outer', on = 'Group') """ Explanation: Q. How would the dataframe look like if we set how = right or how = outer? End of explanation """ df = pd.merge(df, df3, how = 'left', on = 'Group') df """ Explanation: Thus, the right way to merge the two dataframes is End of explanation """ nRows = 3 # The number of rows to show df.head(nRows) df.tail(nRows) """ Explanation: 1.4. Check what's in the dataframe (1) See the top and bottom rows of the dataframe End of explanation """ df.index df.columns df.values """ Explanation: (2) Display the index, columns, and the underlying data End of explanation """ df.sort_values(by='Quiz') # Ascending order df.sort_values(by='Quiz', ascending=False) # Descending order """ Explanation: (3) Sort by values End of explanation """ df.sort_values(by='Report', ascending=False) # Descending order """ Explanation: Q. What would happen if we sort a column which has a missing value (i.e., NaN)? 
End of explanation """ df.where(df['Assignment'] > 50) """ Explanation: (4) Search for a value End of explanation """ df['Name'].where(df['Assignment'] > 50) """ Explanation: For specific column End of explanation """ df['Name'].where(df['Assignment'] > 50).count() """ Explanation: Q. How can we count the number of students who got 'Assignment' higher than 50? End of explanation """ df['Assignment'] """ Explanation: (5) Select Column(s) End of explanation """ df[0:3] """ Explanation: Row(s) End of explanation """ df.loc[0,'Name'] df.loc[0,['Assignment','Quiz']] """ Explanation: By location End of explanation """ df[df['Assignment'] > 50] df[df['Year'].isin(['Junior'])] """ Explanation: Using a condition End of explanation """ df """ Explanation: 1.5. Missing data Let's first take a look at what we have as our dataframe End of explanation """ # pd.isnull(df) df.isnull() """ Explanation: (1) Check whether there are any missing data End of explanation """ pd.isnull(df).sum() """ Explanation: If the dataframe is large in dimension, it would be NOT be easy to see whether there are any 'True's $\rightarrow$ An easier way to check is to use End of explanation """ df.dropna(how='any', axis = 0) """ Explanation: (2) [Option 1] Drop row/column(s) with missing data Drop row(s) End of explanation """ df.dropna(how='any', axis = 1) """ Explanation: Drop column(s) End of explanation """ df.fillna(value = 0) """ Explanation: (3) [Option 2] Fill in missing values Fill in ALL missing data with a single value End of explanation """ df.loc[4,'Presentation'] = 30 df.loc[5,'Presentation'] = 20 df.loc[4,'Report'] = 60 df.loc[5,'Report'] = 70 df """ Explanation: Fill in a single value by location End of explanation """ df """ Explanation: 1.6. Basic statistics End of explanation """ df.describe() """ Explanation: (1) Describe shows a quick statistic summary of your data End of explanation """ df.describe().round(2) """ Explanation: Q. EWW, IT'S UGLY WITH TOO MANY ZEROS! How can we make this more prettier? End of explanation """ df.mean().round(2) """ Explanation: (2) Caculate Mean End of explanation """ df.median().round(2) """ Explanation: Median End of explanation """ df['Report'].min().round(2) df['Report'].max().round(2) """ Explanation: Min/Max End of explanation """ df.var().round(2) """ Explanation: Variance End of explanation """ df.corr().round(2) """ Explanation: Correlation End of explanation """ df.groupby('Group').mean() """ Explanation: (3) Grouping: a process involving one or more of the following steps Splitting the data into groups based on some criteria Applying a function to each group independently Combining the results into a data structure End of explanation """ df.groupby(['Group', 'Year']).mean() """ Explanation: Q. How can we group by 'Group' and 'Year'? End of explanation """ pd.pivot_table(df, values='Report', index=['Year'], columns=['Group']) """ Explanation: (4) Pivot tables End of explanation """ df['log_Report'] = np.log(df['Report']) df """ Explanation: 1.4. Basic column operations Logarithm Natural logarithm: np.log() The base 10 logarithm: np.log10() The base 2 logarithm: np.log2() End of explanation """ df['sqrt_Report'] = np.sqrt(df['Report']) df """ Explanation: Square root End of explanation """ df['Total'] = 0.15 * df['Quiz'] + 0.2 * df['Assignment'] + 0.25 * df['Presentation'] + 0.4 * df['Report'] df.sort_values(by='Total', ascending=False)['Name'] """ Explanation: Q. 
Suppose that the evaluation is based on the following weights Quiz: 15% Assignment: 20% Presentation: 25% Report: 40% How can we make a new column 'Total' which calculate the weighted sum and rank the students by 'Total' in descending order? End of explanation """ salaries = pd.read_csv('./MLB/Salaries.csv', sep = ',') salaries.head(10) """ Explanation: 1.7. Application: Let's apply these tools to a set of real data Q. Read the datafile 'Salaries.csv' (separator = comma) as the variable 'salaries' and show its FIRST 10 rows End of explanation """ with pd.ExcelFile('Batting.xlsx') as f: batting = pd.read_excel(f, sheetname = 'Batting') batting.tail(5) """ Explanation: Q. Read the datafile 'Batting.xlsx' (sheetname = 'Batting') as the variable 'batting' and show its LAST 5 rows End of explanation """ data = pd.merge(salaries, batting, how='left', on = ['player', 'year', 'team']) data.head(7) """ Explanation: Q. Create a variable 'data' by LEFT MERGING 'salaries' and 'batting' based on 'player', 'year', and 'team' and show the FIRST 7 rows End of explanation """ pitching = pd.read_stata('./MLB/Pitching.dta') pitching.tail(4) """ Explanation: Q. Read the STATA datafile 'Pitching.dta' as the variable 'pitching' and show its LAST 4 rows End of explanation """ pitching = pitching.dropna(how='any', axis = 1) pitching.head() """ Explanation: Q. FOR SIMPLICITY, drop all columns with missing values End of explanation """ data = pd.merge(data, pitching, how='left', on = ['player', 'year', 'team']) data.head(5) """ Explanation: Q. LEFT MERGE 'data' and 'pitching' based on 'player', 'year', and 'team' and show the top 5 rows End of explanation """ # pd.isnull(data).sum() data.isnull().sum() """ Explanation: Q. Check whether there are any missing values in 'data' End of explanation """ data = data.fillna(value = 0.) pd.isnull(data).sum() """ Explanation: Q. FOR SIMPLICITY, fill in the missing values with zeros and re-check whether there are any missing values End of explanation """ basic = pd.read_csv('./MLB/Basic.csv', sep = ',') data = pd.merge(data, basic, how='inner', on = ['player']) data.head() """ Explanation: Q. Read the csv files 'Basic.csv' (separator = comma) and INNER MERGE with 'data' based on 'player' End of explanation """ teams = pd.read_csv('./MLB/Teams.csv', sep = ',') data = pd.merge(data, teams, how='left', on = ['team', 'year']) data.head() """ Explanation: Q. Read the csv files 'Teams.csv' (separator = comma) and INNER MERGE with 'data' based on 'team' and 'year End of explanation """ data.to_csv('baseball.tsv', sep = '\t', index = False) """ Explanation: Q. Save the dataframe 'data' as a tsv file, 'baseball.tsv' End of explanation """ data['log_salary'] = np.log(data['salary'] + 1) data.describe().round(2) """ Explanation: Q. Create a new column 'log_salary' by putting a natural log on ('salary' + 1) End of explanation """ data.describe().round(2) """ Explanation: Q. Examine summary statistics End of explanation """ data.groupby('team').mean().round(2).head() """ Explanation: Q. Examine statistics grouping by 'team' and show the FIRST 5 rows End of explanation """ data.groupby(['team', 'year']).mean().round(2).head(20) """ Explanation: Q. Examine statistics grouping by 'team' AND 'year' and show the FIRST 20 rows End of explanation """ year_dummies = pd.get_dummies(data['year'], drop_first = True) year_dummies.head() """ Explanation: Q. 
[TRY GOOGLING] Create dummy variables for 'year' End of explanation """ data = pd.concat([data, year_dummies], axis = 1) data.head() """ Explanation: Concatenate the two dataframes End of explanation """ import pandas as pd import numpy as np import matplotlib.pyplot as plt import statsmodels.api as sm import statsmodels.formula.api as smf # This makes it so that plots show up here in the notebook. # You do not need it if you are not using a notebook. %matplotlib inline data = pd.read_csv('baseball.tsv', sep='\t') data.columns.values """ Explanation: 2. matplotlib 2.1. Let's plot this data End of explanation """ data['salary'].hist(bins=20) data['log_salary'] = np.log(data['salary'] + 1) def log2(value): result = np.log(value + 1) return result data['log_salary'] = data['salary'].apply(log2) data['log_salary'].hist(bins=20) """ Explanation: 2.2. Histogram: Histograms and other simple plots are easy End of explanation """ data.plot.scatter(x = 'batting_RBI', y = 'salary', title = 'Scatter plot', figsize = (5, 4)) """ Explanation: 2.3. Scatter plot: Basic two variable plots like this scatter are easy, too. End of explanation """ #let's get average salary by year born last = data[(data['year'] == 2014) & (data['birthYear'] > 1900)] tmp = last.groupby(by='birthYear').mean() #And make a line plot of it... tmp.plot.line(y='log_salary') """ Explanation: 2.4. Line graph End of explanation """ #ax stands for "axis", we'll use this object to change more settings #we can also specify the image size, in inches, right here with the figsize argument ax = tmp.plot.line(y='log_salary', figsize=(10,8)) #add a title to the chart ax.set_title('Average salary by year born') #label the axes ax.set_ylabel('log(salary)') ax.set_xlabel('Year born') #set the ticks so they're not half years #The range command makes a list of years starting in 1972 and counting by 2 up to (not including) 1993. ax.set_xticks(range(1972, 1993, 2)) #show our plot plt.show() """ Explanation: But what if we want to change the formatting? End of explanation """ #group our data by rank tmp = data.groupby(by='Rank').mean() #plot the mean log salary by rank ax = tmp['log_salary'].plot.bar(figsize=(10,8)) ax.set_title('Average pay by rank') ax.set_ylabel('log(salary)') ax.set_xlabel('Rank') #set the upper and lower limits of the y axis ax.set_ylim(ymin=0, ymax=16) #get the location of the bars (rectangles) rects = ax.patches #loop throgh each bar and label it with its value for rect, label in zip(rects, tmp['log_salary']): #find out the height of the bar on the image height = rect.get_height() l = '{:5.2f}'.format(label) ax.text(rect.get_x() + rect.get_width()/2, height + 0.5, l, ha='center', va='bottom') plt.show() """ Explanation: 2.5. 
Bar chart End of explanation """ #find the standard deviation instead of the mean when we group by rank tmp2 = data.groupby(by='Rank').std() #add the errors to out plotting call ax = tmp['log_salary'].plot.bar(yerr=tmp2['log_salary'], figsize=(10,8)) ax.set_title('Average pay by rank') ax.set_ylabel('log(salery)') ax.set_xlabel('Rank') #set the upper and lower limits of the y axis ax.set_ylim(ymin=0, ymax=18) plt.show() #make a boxplot of salaries, by rank ax = data.boxplot(column='log_salary', by='Rank', figsize=(10,8)) ax.set_title('Pay by rank') ax.set_ylabel('log(salary)') ax.set_xlabel('Rank') plt.show() """ Explanation: Bar chart with error bars End of explanation """ with plt.xkcd(): data.log_salary.hist(bins=20) with plt.xkcd(): tmp = last.groupby(by='birthYear').mean() ax = tmp.plot.line(y='log_salary', figsize=(10,8)) ax.set_title('Average salary, by year born') ax.set_ylabel('log(salary)') ax.set_xlabel('Year born') ax.set_xticks(range(1972, 1993, 2)) plt.show() """ Explanation: 2.6. FOR FUN: Now let's make it look like xkcd.com End of explanation """ plt.style.available plt.style.use('fivethirtyeight') ax = tmp.plot.line(y='log_salary', figsize=(10,8)) ax.set_title('Average salary, by year born') ax.set_ylabel('log(salary)') ax.set_xlabel('Year born') ax.set_xticks(range(1972, 1993, 2)) plt.show() plt.style.use('ggplot') ax = tmp.plot.line(y='log_salary', figsize=(10,8)) ax.set_title('Average salary, by year born') ax.set_ylabel('log(salary)') ax.set_xlabel('Year born') ax.set_xticks(range(1972, 1993, 2)) plt.show() plt.style.use('classic') ax = tmp.plot.line(y='log_salary', figsize=(10,8)) ax.set_title('Average salary, by year born') ax.set_ylabel('log(salary)') ax.set_xlabel('Year born') ax.set_xticks(range(1972, 1993, 2)) plt.show() plt.style.use('seaborn') ax = tmp.plot.line(y='log_salary', figsize=(10,8)) ax.set_title('Average salary, by year born') ax.set_ylabel('log(salary)') ax.set_xlabel('Year born') ax.set_xticks(range(1972, 1993, 2)) plt.show() """ Explanation: In fact, we can use many styles! End of explanation """ formula = 'log_salary ~ batting_RBI + ERA' result_ols = smf.ols(formula = formula, data = data).fit() print(result_ols.summary()) """ Explanation: 3. statsmodels: Let's run regressions! Q. (Linear regression) Let's see whether RBI and ERA explains log_salary End of explanation """ data['above_average'] = (data['log_salary'] > data['log_salary'].mean()).astype(float) data['above_average'].head() formula = 'above_average ~ batting_RBI + ERA' result_logit = smf.logit(formula = formula, data = data).fit() print(result_logit.summary()) """ Explanation: Q. (Logistic regression) Let's see whether RBI and ERA explains whether the player gets above-average log_salary End of explanation """ # Import the pacakage from fuzzywuzzy import fuzz from fuzzywuzzy import process """ Explanation: In statsmodels, there are many other methods and tools that you can use. For more information, click here. [Bonus] Other useful packages to know for data analysis Matching strings: FuzzyWuzzy Machine learning: sklearn Neural network (Deep learning): TensorFlow Social network analysis: networkx 1. 
FuzzyWuzzy uses Levenshtein Distance to calculate the differences between sequences End of explanation """ fuzz.ratio("This is a test", "This is a test!") """ Explanation: Simple Ratio End of explanation """ fuzz.partial_ratio("this is a test", "this is a test!") """ Explanation: Partial Ratio End of explanation """ fuzz.ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear") fuzz.token_sort_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear") """ Explanation: Token Sort Ratio End of explanation """ fuzz.token_sort_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear") fuzz.token_set_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear") """ Explanation: Token Set Ratio End of explanation """ choices = ["Atlanta Falcons", "New York Jets", "New York Giants", "Dallas Cowboys"] process.extract("new york jets", choices, limit=2) process.extractOne("cowboys", choices) """ Explanation: Process End of explanation """
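# A short sketch tying FuzzyWuzzy back to the baseball data used above: match a
# hypothetical, slightly misspelled player id against the unique values of the
# 'player' column. This assumes the `data` dataframe from section 2 is still in
# memory; the query string below is made up purely for illustration.
player_ids = data['player'].unique().tolist()
query = 'jeterde01x'
best_match, score = process.extractOne(query, player_ids)
print(best_match, score)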
sylvchev/coursIntroPython
cours/3-ApprendrePython-Structures.ipynb
gpl-3.0
ma_liste = [66.6, 333, 333, 1, 1234.5] print (ma_liste.count(333), ma_liste.count(66.6), ma_liste.count('x')) ma_liste2 = list(ma_liste) ma_liste2.sort() print (ma_liste2) ma_liste.insert(2, -1) ma_liste.append(333) ma_liste ma_liste.index(333) ma_liste.remove(333) print(ma_liste) ma_liste.reverse() ma_liste ma_liste.sort() ma_liste """ Explanation: Structures de données Plus de détails sur les listes Le type de données liste possède d’autres méthodes. Voici toutes les méthodes des objets listes : append(x) équivalent à a.insert(len(a), x). extend(L) rallonge la liste en ajoutant à la fin tous les éléments de la liste donnée ; équivaut à a[len(a):] = L. insert(i, x) insère un élément à une position donnée. Le premier argument est l’indice de l’élément avant lequel il faut insérer, donc a.insert(0, x) insère au début de la liste, et a.insert(len(a), x) est équivalent à a.append(x). remove(x) enlève le premier élément de la liste dont la valeur est x. Il y a erreur si cet élément n’existe pas. pop([i ]) enlève l’élément présent à la position donnée dans la liste, et le renvoie. Si aucun indice n’est spécifié, a.pop() renvoie le dernier élément de la liste. L’élément est aussi supprimé de la liste. index(x) retourne l’indice dans la liste du premier élément dont la valeur est x. Il y a erreur si cet élément n’existe pas. count(x) renvoie le nombre de fois que x apparaît dans la liste. sort() trie les éléments à l’intérieur de la liste. reverse() renverse l’ordre des éléments à l’intérieur de la liste. Un exemple qui utilise toutes les méthodes des listes : End of explanation """ pile = [3, 4, 5] pile.append(6) pile.append(7) pile pile.pop() pile pile.pop() pile.pop() pile type(pile) """ Explanation: Utiliser les listes comme des piles (*) Les méthodes des listes rendent très facile l’utilisation d’une liste comme une pile, où le dernier élément ajouté est le premier élément récupéré (LIFO, "last-in, first-out"). Pour ajouter un élément au sommet de la pile, utilisez la méthode append(). Pour récupérer un élément du sommet de la pile, utilisez pop() sans indice explicite. Par exemple : End of explanation """ file = ["Eric", "John", "Michael"] file.append("Terry") # Terry arrive file.append("Graham") # Graham arrive file.pop(0) file.pop(0) file """ Explanation: Utiliser les listes comme des files (*) Vous pouvez aussi utiliser facilement une liste comme une file, où le premier élément ajouté est le premier élément retiré (FIFO, "first-in, first-out"). Pour ajouter un élément à la fin de la file, utiliser append(). Pour récupérer un élément du devant de la file, utilisez pop() avec 0 pour indice. Par exemple : End of explanation """ def f(x): return x % 2 != 0 and x % 3 != 0 filter(f, range(2, 25)) """ Explanation: Outils de programmation fonctionnelle (*) Il y a trois fonctions intégrées qui sont très pratiques avec les listes : filter(), map(), et reduce(). 'filter(fonction, sequence)' renvoit une liste (du même type, si possible) contenant les seul éléments de la séquence pour lesquels fonction(element) est vraie. Par exemple, pour calculer quelques nombres premiers : End of explanation """ def cube(x): return x*x*x map(cube, range(1, 11)) """ Explanation: 'map(fonction, sequence)' appelle fonction(element) pour chacun des éléments de la séquence et renvoie la liste des valeurs de retour. 
Par exemple, pour calculer les cubes : End of explanation """ def ajoute(x,y): return x+y reduce(ajoute, range(1, 11)) """ Explanation: Plusieurs séquences peuvent être passées en paramètre ; la fonction doit alors avoir autant d’arguments qu’il y a de séquences et est appelée avec les éléments correspondants de chacune des séquences (ou None si l’une des séquences est plus courte que l’autre). Si None est passé en tant que fonction, une fonction retournant ses arguments lui est substituée. reduce(fonction, sequence) renvoie une valeur unique construite par l’appel de la fonction binaire fonction sur les deux premiers éléments de la séquence, puis sur le résultat et l’élément suivant, et ainsi de suite. Par exemple, pour calculer la somme des nombres de 1 à 10 : End of explanation """ def somme(seq): def ajoute(x,y): return x+y return reduce(ajoute, seq, 0) somme(range(1, 11)) somme([]) """ Explanation: S’il y a seulement un élément dans la séquence, sa valeur est renvoyée ; si la séquence est vide, une exception est déclenchée. Un troisième argument peut être transmis pour indiquer la valeur de départ. Dans ce cas, la valeur de départ est renvoyée pour une séquence vide, et la fonction est d’abord appliquée à la valeur de départ et au premier élément de la séquence, puis au résultat et à l’élément suivant, et ainsi de suite. Par exemple, End of explanation """ liste_de_fruits = [' banane', ' myrtille ', 'fruit de la passion '] nouvelle_liste_de_fruits = [fruit.strip() for fruit in liste_de_fruits] print (nouvelle_liste_de_fruits) vec = [2, 4, 6] [3*x for x in vec] [3*x for x in vec if x > 3] [3*x for x in vec if x <= 2] [{x: x**2} for x in vec] [[x,x**2] for x in vec] [x, x**2 for x in vec] # erreur : parenthèses obligatoires pour les tuples [(x, x**2) for x in vec] vec1 = [2, 4, 6] vec2 = [4, 3, -9] [x*y for x in vec1 for y in vec2] [x+y for x in vec1 for y in vec2] [vec1[i]*vec2[i] for i in range(len(vec1))] """ Explanation: List Comprehensions (*) Les list comprehensions fournissent une façon concise de créer des listes sans avoir recours à map(), filter() et/ou lambda. La définition de liste qui en résulte a souvent tendance à être plus claire que des listes construites avec ces outils. Chaque list comprehension consiste en une expression suivie d’une clause for, puis zéro ou plus clauses for ou if. Le résultat sera une liste résultant de l’évaluation de l’expression dans le contexte des clauses for et if qui la suivent. Si l’expression s’évalue en un tuple, elle doit être mise entre parenthèses. End of explanation """ a = [-1, 1, 66.6, 333, 333, 1234.5] del a[0] a del a[2:4] a """ Explanation: L'instruction del Il y a un moyen d’enlever un élément d’une liste en ayant son indice au lieu de sa valeur : l’instruction del. Cela peut aussi être utilisé pour enlever des tranches dans une liste (ce que l’on a fait précédemment par remplacement de la tranche par une liste vide). Par exemple : End of explanation """ del a """ Explanation: del peut aussi être utilisé pour supprimer des variables complètes : End of explanation """ t = 12345, 54321, 'salut!' t[0] t # Les tuples peuvent être imbriqués: u = t, (1, 2, 3, 4, 5) u """ Explanation: Faire par la suite référence au nom a est une erreur (au moins jusqu’à ce qu’une autre valeur ne lui soit affectée). Nous trouverons d’autres utilisations de del plus tard. N-uplets (tuples) et séquences Nous avons vu que les listes et les chaînes ont plusieurs propriétés communes, telles que l’indexation et les opérations de découpage. 
Elles sont deux exemples de types de données de type séquence. Puisque Python est un langage qui évolue, d’autres types de données de type séquence pourraient être ajoutés. Il y a aussi un autre type de données de type séquence standard : le tuple (ou n-uplet). Un n-uplet consiste en un ensemble de valeurs séparées par des virgules, par exemple : End of explanation """ empty = () singleton = 'salut', # <-- notez la virgule en fin de ligne len(empty) len(singleton) singleton """ Explanation: Comme vous pouvez le voir, à l’affichage, les tuples sont toujours entre parenthèses, de façon à ce que des tuples de tuples puissent être interprétés correctement ; ils peuvent être saisis avec ou sans parenthèses, bien que des parenthèses soient souvent nécessaires (si le tuple fait partie d’une expression plus complexe). Les tuples ont plein d’utilisations. Par exemple, les couples de coordonnées (x, y), les enregistrements des employés d’une base de données, etc. Les tuples, comme les chaînes, sont non-modifiables : il est impossible d’affecter individuellement une valeur aux éléments d’un tuple (bien que vous puissiez simuler quasiment cela avec le découpage et la concaténation). spécificités des tuples (*) Un problème particulier consiste à créer des tuples contenant 0 ou 1 élément : la syntaxe reconnaît quelques subtilités pour y arriver. Les tuples vides sont construits grâce à des parenthèses vides ; un tuple avec un élément est construit en faisant suivre une valeur d’une virgule (il ne suffit pas de mettre une valeur seule entre parenthèses). Peu lisible, mais efficace. Par exemple : End of explanation """ x, y, z = t """ Explanation: L’instruction t = 12345, 54321, 'salut !' est un exemple d’ emballage en tuple (tuple packing) : les valeurs 12345, 54321 et 'salut !' sont emballées ensemble dans un tuple. L’opération inverse est aussi possible : End of explanation """ panier = ['pomme', 'orange', 'pomme', 'poire', 'orange', 'banane'] fruits = set(panier) # creation d'un set sans éléments dupliqués fruits 'orange' in fruits # test d'appartenance rapide 'ananas' in fruits """ Explanation: Cela est appelé, fort judicieusement, déballage de tuple (tuple unpacking). Le déballage d’un tuple nécessite que la liste des variables à gauche ait un nombre d’éléments égal à la longueur du tuple. Notez que des affectations multiples ne sont en réalité qu’une combinaison d’emballage et déballage de tuples ! Ensembles (*) Python comporte également un type de données pour représenter des ensembles. Un set est une collection (non rangée) sans éléments dupliqués. Les emplois basiques sont le test d’appartenance et l’élimination des entrée dupliquées. Les objets ensembles supportent les opérations mathématiques comme l’union, l’intersection, la différence et la différence symétrique. Voici une démonstration succincte : End of explanation """ a = set('abracadabra') b = set('alacazam') a # lettres uniques dans abracadabra b # lettres uniques dans alacazam a - b # lettres dans a mais pas dans b a | b # lettres soit dans a ou b a & b # lettres dans a et b a ^ b # lettres dans a ou dans b mais pas dans les deux """ Explanation: Une autre démonstration rapide des ensembles sur les lettres uniques de deux mots End of explanation """ tel = {'jack': 4098, 'sape': 4139} tel['guido'] = 4127 tel tel['jack'] del (tel['sape']) tel['irv'] = 4127 tel tel.keys() tel.has_key('guido') """ Explanation: Dictionnaires Un autre type de données intégré à Python est le dictionnaire. 
Les dictionnaires sont parfois trouvés dans d’autres langages sous le nom de "mémoires associatives" ou "tableaux associatifs". A la différence des séquences, qui sont indexées par un intervalle numérique, les dictionnaires sont indexés par des clés, qui peuvent être de n’importe quel type non-modifiable ; les chaînes et les nombres peuvent toujours être des clés. Les tuples peuvent être utilisés comme clés s’ils ne contiennent que des chaînes, des nombres ou des tuples. Vous ne pouvez pas utiliser des listes comme clés, puisque les listes peuvent être modifiées en utilisant leur méthode append(). Il est préférable de considérer les dictionnaires comme des ensembles non ordonnés de couples clé:valeur, avec la contrainte que les clés soient uniques (à l’intérieur d’un même dictionnaire). Un couple d’accolades crée un dictionnaire vide : {}. Placer une liste de couples clé:valeur séparés par des virgules à l’intérieur des accolades ajoute les couples initiaux clé :valeur au dictionnaire ; c’est aussi de cette façon que les dictionnaires sont affichés. Les opérations principales sur un dictionnaire sont le stockage d’une valeur à l’aide d’une certaine clé et l’extraction de la valeur en donnant la clé. Il est aussi possible de détruire des couples clé:valeur avec del. Si vous stockez avec une clé déjà utilisée, l’ancienne valeur associée à cette clé est oubliée. C’est une erreur d’extraire une valeur en utilisant une clé qui n’existe pas. La méthode keys() d’un objet de type dictionnaire retourne une liste de toutes les clés utilisées dans le dictionnaire, dans un ordre quelconque (si vous voulez qu’elle soit triée, appliquez juste la méthode sort() à la liste des clés). Pour savoir si une clé particulière est dans le dictionnaire, utilisez la méthode has_key() du dictionnaire. Voici un petit exemple utilisant un dictionnaire : End of explanation """ dict([('sape', 4139), ('guido', 4127), ('jack', 4098)]) dict([(x, x**2) for x in (2, 4, 6)]) # utilisation de la list comprehension """ Explanation: Le constructeur dict() construit des dictionnaires directement à partir de listes de paires clé:valeur rangées comme des n-uplets. Lorsque les paires forment un motif, les list comprehensions peuvent spécifier de manière compacte la liste de clés-valeurs. End of explanation """ dict(sape=4139, guido=4127, jack=4098) """ Explanation: Lorsque les clés sont de simples chaînes il est parfois plus simple de spécifier les paires en utilisant des arguments à mot-clé : End of explanation """ chevaliers = {'gallahad': 'le pur', 'robin': 'le brave'} for c, v in chevaliers.items(): print (c, v) """ Explanation: Techniques de boucles Lorsqu’on boucle sur un dictionnaire, les clés et les valeurs correspondantes peuvent être obtenues en même temps en utilisant la méthode iteritems() End of explanation """ for i, v in enumerate(['tic', 'tac', 'toe']): print (i, v) """ Explanation: Lorsqu’on boucle sur une séquence, l’indice donnant la position et la valeur correspondante peuvent être obtenus en même temps en utilisant la fonction enumerate(). End of explanation """ questions = ['nom', 'but', 'drapeau'] reponses = ['lancelot', 'le sacre graal', 'le bleu'] for q, r in zip(questions, reponses): print ("Quel est ton %s? C'est %s." % (q, r)) """ Explanation: Pour boucler sur deux séquences, ou plus, en même temps, les éléments peuvent être appariés avec la fonction zip(). 
End of explanation """ for i in reversed(xrange(1,10,2)): print (i) """ Explanation: Pour boucler à l’envers sur une séquence, spécifiez d’abord la séquence à l’endroit, ensuite appelez la fonction reversed(). End of explanation """ panier = ['pomme', 'orange', 'pomme', 'poire', 'orange', 'banane'] for f in sorted(set(panier)): print (f) """ Explanation: Pour boucler sur une séquence comme si elle était triée, utilisez la fonction sorted() qui retourne une liste nouvelle triée tout en laissant la source inchangée. End of explanation """ chaine1, chaine2, chaine3 = '', 'Trondheim', 'Hammer Dance' non_null = chaine1 or chaine2 or chaine3 non_null """ Explanation: Plus de détails sur les conditions (*) Les conditions utilisées dans les instructions while et if peuvent contenir d’autres opérateurs en dehors des comparaisons. Les opérateurs de comparaison in et not in vérifient si une valeur apparaît (ou non) dans une séquence. Les opérateurs is et is not vérifient si deux objets sont réellement le même objet ; cela se justifie seulement pour les objets modifiables comme les listes. Tous les opérateurs de comparaison ont la même priorité, qui est plus faible que celle de tous les opérateurs numériques. Les comparaisons peuvent être enchaînées. Par exemple, a &lt; b == c teste si a est strictement inférieur à b et de plus si b est égal à c. Les comparaisons peuvent être combinées avec les opérateurs booléens and (et) et or (ou), et le résultat d’une comparaison (ou de n’importe quel autre expression Booléenne) peut être inversé avec not (pas). Ces opérateurs ont encore une fois une priorité inférieure à celle des opérateurs de comparaison ; et entre eux, not a la plus haute priorité, et or la plus faible, de sorte que A and not B or C est équivalent à (A and (not B)) or C. Bien sûr, les parenthèses peuvent être utilisées pour exprimer les compositions désirées. Les opérateurs booléens and et or sont des opérateurs dits court-circuit : leurs arguments sont évalués de gauche à droite, et l’évaluation s’arrête dès que le résultat est trouvé. Par exemple, si A et C sont vrais mais que B est faux, A and B and C n'évalue pas l'expression C. En général, la valeur de retour d'un opérateur court-circuit, quand elle est utilisée comme une valeur générale et non comme un booléen, est celle du dernier argument évalué. Il est possible d’affecter le résultat d’une comparaison ou une autre expression booléenne à une variable. Par exemple End of explanation """ (1, 2, 3) < (1, 2, 4) [1, 2, 3] < [1, 2, 4] 'ABC' < 'C' < 'Pascal' < 'Python' (1, 2, 3, 4) < (1, 2, 4) (1, 2) < (1, 2, -1) (1, 2, 3) == (1.0, 2.0, 3.0) (1, 2, ('aa', 'ab')) < (1, 2, ('abc', 'a'), 4) """ Explanation: Notez qu’en Python, au contraire du C, les affectations ne peuvent pas être effectuées à l’intérieur des expressions. Les programmeurs C ronchonneront peut-être, mais cela évite une classe de problèmes qu’on rencontre dans les programmes C : écrire = dans une expression alors qu’il fallait ==. Comparer les séquences et d’autres types (*) Les objets de type séquence peuvent être comparés à d’autres objets appartenant au même type de séquence. La comparaison utilise l’ordre lexicographique : les deux premiers éléments sont d’abord comparés, et s’ils diffèrent cela détermine le résultat de la comparaison ; s’ils sont égaux, les deux éléments suivants sont comparés, et ainsi de suite, jusqu’à ce que l’une des deux séquences soit épuisée. 
Si deux éléments à comparer sont eux-mêmes des séquences du même type, la comparaison lexicographique est reconsidérée récursivement. Si la comparaison de tous les éléments de deux séquences les donne égaux, les séquences sont considérées comme égales. Si une séquence est une sous-séquence initiale de l’autre, la séquence la plus courte est la plus petite (inférieure). L’ordonnancement lexicographique pour les chaînes utilise l’ordonnancement ASCII pour les caractères. Quelques exemples de comparaisons de séquences du même type : End of explanation """
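# Esquisse complémentaire : l'ordre lexicographique décrit ci-dessus est aussi
# celui qu'utilise sorted() pour trier une séquence de tuples.
paires = [(2, 'b'), (1, 'z'), (1, 'a'), (2, 'a')]
print (sorted(paires))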
mne-tools/mne-tools.github.io
0.19/_downloads/804ea48504b27f5f04fd03d517675af5/plot_point_spread.ipynb
bsd-3-clause
import os.path as op import numpy as np import mne from mne.datasets import sample from mne.minimum_norm import read_inverse_operator, apply_inverse from mne.simulation import simulate_stc, simulate_evoked """ Explanation: Corrupt known signal with point spread The aim of this tutorial is to demonstrate how to put a known signal at a desired location(s) in a :class:mne.SourceEstimate and then corrupt the signal with point-spread by applying a forward and inverse solution. End of explanation """ seed = 42 # parameters for inverse method method = 'sLORETA' snr = 3. lambda2 = 1.0 / snr ** 2 # signal simulation parameters # do not add extra noise to the known signals nave = np.inf T = 100 times = np.linspace(0, 1, T) dt = times[1] - times[0] # Paths to MEG data data_path = sample.data_path() subjects_dir = op.join(data_path, 'subjects') fname_fwd = op.join(data_path, 'MEG', 'sample', 'sample_audvis-meg-oct-6-fwd.fif') fname_inv = op.join(data_path, 'MEG', 'sample', 'sample_audvis-meg-oct-6-meg-fixed-inv.fif') fname_evoked = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif') """ Explanation: First, we set some parameters. End of explanation """ fwd = mne.read_forward_solution(fname_fwd) fwd = mne.convert_forward_solution(fwd, force_fixed=True, surf_ori=True, use_cps=False) fwd['info']['bads'] = [] inv_op = read_inverse_operator(fname_inv) raw = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')) raw.set_eeg_reference(projection=True) events = mne.find_events(raw) event_id = {'Auditory/Left': 1, 'Auditory/Right': 2} epochs = mne.Epochs(raw, events, event_id, baseline=(None, 0), preload=True) epochs.info['bads'] = [] evoked = epochs.average() labels = mne.read_labels_from_annot('sample', subjects_dir=subjects_dir) label_names = [l.name for l in labels] n_labels = len(labels) """ Explanation: Load the MEG data End of explanation """ cov = mne.compute_covariance(epochs, tmin=None, tmax=0.) """ Explanation: Estimate the background noise covariance from the baseline period End of explanation """ # The known signal is all zero-s off of the two labels of interest signal = np.zeros((n_labels, T)) idx = label_names.index('inferiorparietal-lh') signal[idx, :] = 1e-7 * np.sin(5 * 2 * np.pi * times) idx = label_names.index('rostralmiddlefrontal-rh') signal[idx, :] = 1e-7 * np.sin(7 * 2 * np.pi * times) """ Explanation: Generate sinusoids in two spatially distant labels End of explanation """ hemi_to_ind = {'lh': 0, 'rh': 1} for i, label in enumerate(labels): # The `center_of_mass` function needs labels to have values. labels[i].values.fill(1.) # Restrict the eligible vertices to be those on the surface under # consideration and within the label. surf_vertices = fwd['src'][hemi_to_ind[label.hemi]]['vertno'] restrict_verts = np.intersect1d(surf_vertices, label.vertices) com = labels[i].center_of_mass(subject='sample', subjects_dir=subjects_dir, restrict_vertices=restrict_verts, surf='white') # Convert the center of vertex index from surface vertex list to Label's # vertex list. cent_idx = np.where(label.vertices == com)[0][0] # Create a mask with 1 at center vertex and zeros elsewhere. labels[i].values.fill(0.) labels[i].values[cent_idx] = 1. """ Explanation: Find the center vertices in source space of each label We want the known signal in each label to only be active at the center. We create a mask for each label that is 1 at the center vertex and 0 at all other vertices in the label. This mask is then used when simulating source-space data. 
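Restricting each label to a single active vertex keeps every simulated source a true point source, which makes the point spread introduced by the forward and inverse operators easier to see later on.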
End of explanation """ stc_gen = simulate_stc(fwd['src'], labels, signal, times[0], dt, value_fun=lambda x: x) """ Explanation: Create source-space data with known signals Put known signals onto surface vertices using the array of signals and the label masks (stored in labels[i].values). End of explanation """ kwargs = dict(subjects_dir=subjects_dir, hemi='split', smoothing_steps=4, time_unit='s', initial_time=0.05, size=1200, views=['lat', 'med']) clim = dict(kind='value', pos_lims=[1e-9, 1e-8, 1e-7]) brain_gen = stc_gen.plot(clim=clim, **kwargs) """ Explanation: Plot original signals Note that the original signals are highly concentrated (point) sources. End of explanation """ evoked_gen = simulate_evoked(fwd, stc_gen, evoked.info, cov, nave, random_state=seed) # Map the simulated sensor-space data to source-space using the inverse # operator. stc_inv = apply_inverse(evoked_gen, inv_op, lambda2, method=method) """ Explanation: Simulate sensor-space signals Use the forward solution and add Gaussian noise to simulate sensor-space (evoked) data from the known source-space signals. The amount of noise is controlled by nave (higher values imply less noise). End of explanation """ brain_inv = stc_inv.plot(**kwargs) """ Explanation: Plot the point-spread of corrupted signal Notice that after applying the forward- and inverse-operators to the known point sources that the point sources have spread across the source-space. This spread is due to the minimum norm solution so that the signal leaks to nearby vertices with similar orientations so that signal ends up crossing the sulci and gyri. End of explanation """
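# A rough follow-up check (sketch): quantify the spread by counting how many
# vertices exceed half of the maximum amplitude in the simulated sources versus
# the inverse estimate at the plotted time point. The 0.5 threshold is an
# arbitrary choice made only for illustration.
idx_gen = np.argmin(np.abs(stc_gen.times - kwargs['initial_time']))
idx_inv = np.argmin(np.abs(stc_inv.times - kwargs['initial_time']))
gen_amp = np.abs(stc_gen.data[:, idx_gen])
inv_amp = np.abs(stc_inv.data[:, idx_inv])
print('Vertices above half-max (simulated): %d' % (gen_amp > 0.5 * gen_amp.max()).sum())
print('Vertices above half-max (estimated): %d' % (inv_amp > 0.5 * inv_amp.max()).sum())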
SolitonScientific/AtomicString
AFIntegrals.ipynb
mit
import numpy as np import pylab as pl pl.rcParams["figure.figsize"] = 9,6 ################################################################### ##This script calculates the values of Atomic Function up(x) (1971) ################################################################### ################### One Pulse of atomic function def up1(x: float) -> float: #Atomic function table up_y = [0.5, 0.48, 0.460000017,0.440000421,0.420003478,0.400016184, 0.380053256, 0.360139056, 0.340308139, 0.320605107,0.301083436, 0.281802850, 0.262826445, 0.244218000, 0.226041554, 0.208361009, 0.191239338, 0.174736305, 0.158905389, 0.143991189, 0.129427260, 0.115840866, 0.103044024, 0.9110444278e-01, 0.798444445e-01, 0.694444445e-01, 0.598444445e-01, 0.510444877e-01, 0.430440239e-01, 0.358409663e-01, 0.294282603e-01, 0.237911889e-01, 0.189053889e-01, 0.147363055e-01, 0.112393379e-01, 0.836100883e-02, 0.604155412e-02, 0.421800000e-02, 0.282644445e-02, 0.180999032e-02, 0.108343562e-02, 0.605106267e-03, 0.308138660e-03, 0.139055523e-03, 0.532555251e-04, 0.161841328e-04, 0.347816874e-05, 0.420576116e-05, 0.167693347e-07, 0.354008603e-10, 0] up_x = np.arange(0.5, 1.01, 0.01) res = 0. if ((x>=0.5) and (x<=1)): for i in range(len(up_x) - 1): if (up_x[i] >= x) and (x < up_x[i+1]): N1 = 1 - (x - up_x[i])/0.01 res = N1 * up_y[i] + (1 - N1) * up_y[i+1] return res return res ############### Atomic Function Pulse with width, shift and scale ############# def upulse(t: float, a = 1., b = 0., c = 1., d = 0.) -> float: x = (t - b)/a res = 0. if (x >= 0.5) and (x <= 1): res = up1(x) elif (x >= 0.0) and (x < 0.5): res = 1 - up1(1 - x) elif (x >= -1 and x <= -0.5): res = up1(-x) elif (x > -0.5) and (x < 0): res = 1 - up1(1 + x) res = d + res * c return res ############### Atomic Function Applied to list with width, shift and scale ############# def up(x: list, a = 1., b = 0., c = 1., d = 0.) -> list: res = [] for i in range(len(x)): res.append(upulse(x[i], a, b, c, d)) return res x = np.arange(-2.0, 2.0, 0.01) pl.title('Atomic Function up(x)') pl.plot(x, up(x), label='Atomic Function') pl.grid(True) pl.show() """ Explanation: <font color=Teal>ATOMIC AND ASTRING FUNCTIONS INTEGRALS (Python Code)</font> By Sergei Yu. Eremenko, PhD, Dr.Eng., Professor, Honorary Professor https://www.researchgate.net/profile/Sergei_Eremenko https://www.amazon.com/Sergei-Eremenko/e/B082F3MQ4L https://www.linkedin.com/in/sergei-eremenko-3862079 https://www.facebook.com/SergeiEremenko.Author Atomic functions (AF) described in many books and hundreds of papers have been discovered in 1970s by Academician NAS of Ukraine Rvachev V.L. (https://ru.wikipedia.org/w/index.php?oldid=83948367) (author's teacher) and professor Rvachev V.A. and advanced by many followers, notably professor Kravchenko V.F. (https://ru.wikipedia.org/w/index.php?oldid=84521570), H. Gotovac (https://www.researchgate.net/profile/Hrvoje_Gotovac), V.M. Kolodyazhni (https://www.researchgate.net/profile/Volodymyr_Kolodyazhny), O.V. Kravchenko (https://www.researchgate.net/profile/Oleg_Kravchenko) as well as the author S.Yu. Eremenko (https://www.researchgate.net/profile/Sergei_Eremenko) [1-4] for a wide range of applications in mathematical physics, boundary value problems, statistics, radio-electronics, telecommunications, signal processing, and others. 
As per historical survey (https://www.researchgate.net/publication/308749839), some elements, analogs, subsets or Fourier transformations of AFs sometimes named differently (Fabius function, hat function, compactly supported smooth function) have been probably known since 1930s and rediscovered many times by scientists from different countries, including Fabius, W.Hilberg and others. However, the most comprehensive 50+ years’ theory development supported by many books, dissertations, hundreds of papers, lecture courses and multiple online resources have been performed by the schools of V.L. Rvachev, V.A. Rvachev and V.F. Kravchenko. In 2017-2020, Sergei Yu. Eremenko, in papers "Atomic Strings and Fabric of Spacetime", "Atomic Solitons as a New Class of Solitons", "Atomic Machine Learning" and book "Soliton Nature" [1-8], has introduced <b>AString</b> atomic function as an integral and 'composing branch' of Atomic Function up(x): <font color=maroon>AString'(x) = AString(2x+1) - AString(2x-1) = up(x)</font> AString function, is a smooth solitonic kink function by joining of which on a periodic lattice it is possible to compose a straight-line resembling flat spacetime as well as to build 'solitonic atoms' composing different fields. It may lead to novel models of spacetime and quantized gravity where AString may describe Spacetime Quantum, or Spacetime Metriant. Also, representing of different fields via shift and stretches of AStrings and Atomic Functions may lead to unified theory where AString may describe some fundamental building block of quantum fields, like a string, elementary spacetime distortion or metriant. So, apart from traditional areas of AF applications in mathematical physics, radio-electronics and signal processing, AStrings and Atomic Functions may be expanded to Spacetime Physics, String theory, General and Special Relativity, Theory of Solitons, Lattice Physics, Quantized Gravity, Cosmology, Dark matter and Multiverse theories as well as Finite Element Methods, Nonarchimedean Computers, Atomic regression analysis, Atomic Kernels, Machine Learning and Artificial Intelligence. <font color=teal>1. Atomic Function up(x) (introduced in 1971 by V.L.Rvachev and V.A.Rvachev)</font> End of explanation """ ############### Atomic String ############# def AString1(x: float) -> float: res = 1 * (upulse(x/2.0 - 0.5) - 0.5) return res ############### Atomic String Pulse with width, shift and scale ############# def AStringPulse(t: float, a = 1., b = 0., c = 1., d = 0.) -> float: x = (t - b)/a if (x < -1): res = -0.5 elif (x > 1): res = 0.5 else: res = AString1(x) res = d + res * c return res ###### Atomic String Applied to list with width, shift and scale ############# def AString(x: list, a = 1., b = 0., c = 1., d = 0.) -> list: res = [] for i in range(len(x)): res.append(AStringPulse(x[i], a, b, c, d)) #res[i] = AStringPulse(x[i], a, b, c) return res ###### Summation of two lists ############# def Sum(x1: list, x2: list) -> list: res = [] for i in range(len(x1)): res.append(x1[i] + x2[i]) return res x = np.arange(-2.0, 2.0, 0.01) pl.title('Atomic String Function') pl.plot(x, AString(x, 1.0, 0, 1, 0), label='Atomic String') pl.grid(True) pl.show() """ Explanation: <font color=teal>2. Atomic String Function (AString) is an Integral and Composing Branch of Atomic Function up(x) (introduced in 2017 by S. Yu. 
Eremenko)</font> AString function is solitary kink function which simultaneously is integral and composing branch of atomic function up(x) <font color=maroon>AString'(x) = AString(2x+1) - AString(2x-1) = up(x)</font> End of explanation """ x = np.arange(-2.0, 2.0, 0.01) #This Calculates Derivative dx = x[1] - x[0] dydx = np.gradient(up(x), dx) pl.plot(x, up(x), label='Atomic Function') pl.plot(x, AString(x, 1.0, 0, 1, 0), linewidth=2, label='Atomic String Function') pl.plot(x, dydx, '--', label='A-Function Derivative') pl.title('Atomic and AString Functions') pl.legend(loc='best', numpoints=1) pl.grid(True) pl.show() """ Explanation: Atomic String, Atomic Function (AF) and AF Derivative plotted together End of explanation """ from scipy.integrate import simps x = np.arange(-1.0, 1.0, 0.01) I1 = simps(up(x), x) print(I1) """ Explanation: <font color=teal>3. Atomic Function Integrals</font> 3.1. Integral of basic Atomic Function up(x) End of explanation """ a = 0.5; c = 1 I2 = simps(up(x, a, 0., c), x) print(I2) a = 0.1; c = 2 I3 = simps(up(x, a, 0., c), x) print(I3) """ Explanation: Summary - Integral(up(x),-1,1) = 1, as expected 3.2. Integrals of stretched Atomic Function c*up(x/a) End of explanation """ x = np.arange(-2.0, 2.0, 0.01) pl.plot(x, up(x, 1, -1), '--', linewidth=1, label='Atomic Function at x=-1') pl.plot(x, up(x, 1, +0), '--', linewidth=1, label='Atomic Function at x=0') pl.plot(x, up(x, 1, -1), '--', linewidth=1, label='Atomic Function at x=-1') pl.plot(x, Sum(up(x, 1, -1), Sum(up(x), up(x, 1, 1))), linewidth=2, label='Atomic Function Compounding') pl.title('Partition of Unity - AF Compounding represent 1') pl.legend(loc='best', numpoints=1) pl.grid(True) pl.show() Continuum = Sum(up(x, 1, -1), Sum(up(x), up(x, 1, 1))) # Summation of three atomic functions I4 = simps(Continuum, x) print(I4) """ Explanation: Summary - Integral(cup(x/a),-1,1) = ca - Width times Height 3.3. Integral of the chain of Atomic Function pulses The Atomic Function pulses superposition set at points -2, -1, 0, +1, +2... can exactly represent a Unity (number 1), so-called 'Partition of Unity' <font color=maroon>1 = ... up(x-3) + up(x-2) + up(x-1) + up(x-0) + up(x+1) + up(x+2) + up(x+3) + ...</font> End of explanation """ x = np.arange(-1.0, 1.0, 0.001) I1 = simps(AString(x), x) print(I1) x = np.arange(0, 1.0, 0.001) I1 = simps(AString(x), x) print(I1) x = np.arange(-1, 0.0, 0.001) I1 = simps(AString(x), x) print(I1) """ Explanation: Integral(Sum(up(x),n),-1,1) = n - Integral of Sum of n AF pulses equal to n. Important for quantisation <font color=teal>4. Integrals of Atomic String Function</font> 4.1. Integral of basic Atomic String Function AString(x) End of explanation """ a = 0.5; c = 1 x = np.arange(0, a, 0.001) I2 = simps(AString(x, a, 0., c), x) print(I2) a = 2; c = 5 x = np.arange(0, a, 0.001) I3 = simps(AString(x, a, 0., c), x) print(I3) """ Explanation: Summary - Integral(AString(x),-1,1) = 0; Integral(AString(x),0,1) ~ 0.36; 4.2. 
Integrals of stretched Atomic String function c*AString(x/a) End of explanation """ x = np.arange(-3, 3, 0.01) pl.plot(x, AString(x, 1, -1.0, 1, 0), '--', linewidth=1, label='AString 1') pl.plot(x, AString(x, 1, +0.0, 1, 0), '--', linewidth=1, label='AString 2') pl.plot(x, AString(x, 1, +1.0, 1, 0), '--', linewidth=1, label='AString 3') AS2 = Sum(AString(x, 1, -1.0, 1, 0), AString(x, 1, +0.0, 1, 0)) AS3 = Sum(AS2, AString(x, 1, +1.0, 1, 0)) pl.plot(x, AS3, label='AStrings Sum', linewidth=2) pl.title('Atomic Strings compose Line') pl.legend(loc='best', numpoints=1) pl.grid(True) pl.show() """ Explanation: Summary: Integral(cAString(x/a),0,a) = caIntegral(AString(x),0,1) meaning the length/energy of extended continuum can be composed from smaller parts <font color=teal>5. Some properties of Atomic and AString functions</font> 5.1. AString Kinks and Solitonic Atoms Solitonic mathematical properties of AString and Atomic Functions have been explored in author's paper [3] (Eremenko, S.Yu. Atomic solitons as a new class of solitons; 2018; https://www.researchgate.net/publication/329465767). They both satisfy differential equations with shifted arguments which introduce special kind of <b>nonlinearity</b> typical for all mathematical solitons. AString belong to the class of <b>Solitonic Kinks</b> similar to sine-Gordon, Frenkel-Kontorova, tanh and others. Unlike other kinks, AStrings are truly solitary (compactly-supported) and also have a unique property of composing of both straight-line and solitonic atoms on lattice resembling particle-like properties of solitons. Atomic Function up(x) is not actually a mathematical soliton, but a complex object composed from summation of two opposite AString kinks, and in solitonic terminology, is called 'solitonic atoms' (like bions). 5.2. Partition of Line from AStrings - resembles quantisation of space Combination/summation of Atomic Strings can exactly represent a straight line: x = ...Astring(x-2) + Astring(x-1) + AString(x) + Astring(x+1) + Astring(x+2)... 
<font color=maroon>x = ...Astring(x-2) + Astring(x-1) + AString(x) + Astring(x+1) + Astring(x+2)...</font> Partition based on AString function with width 1 and height 1 End of explanation """ x = np.arange(-40.0, 40.0, 0.01) width = 10.0 height = 10.0 #pl.plot(x, ABline (x, 1, 0), label='ABLine 1*x') pl.plot(x, AString(x, width, -3*width/2, height, -3*width/2), '--', linewidth=1, label='AString 1') pl.plot(x, AString(x, width, -1*width/2, height, -1*width/2), '--', linewidth=1, label='AString 2') pl.plot(x, AString(x, width, +1*width/2, height, +1*width/2), '--', linewidth=1, label='AString 3') pl.plot(x, AString(x, width, +3*width/2, height, +3*width/2), '--', linewidth=1, label='AString 4') AS2 = Sum(AString(x, width, -3*width/2, height, -3*width/2), AString(x, width, -1*width/2, height, -1*width/2)) AS3 = Sum(AS2, AString(x, width,+1*width/2, height, +1*width/2)) AS4 = Sum(AS3, AString(x, width,+3*width/2, height, +3*width/2)) pl.plot(x, AS4, label='AStrings Joins', linewidth=2) pl.title('Atomic Strings Combinations') pl.legend(loc='best', numpoints=1) pl.grid(True) pl.show() """ Explanation: Partition based on AString with certain width and height depending on a size of 'quanta' End of explanation """ x = np.arange(-50.0, 50.0, 0.1) dx = x[1] - x[0] CS6 = Sum(up(x, 5, -30, 5, 5), up(x, 15, 0, 15, 5)) CS6 = Sum(CS6, up(x, 10, +30, 10, 5)) pl.plot(x, CS6, label='Spacetime Density distribution') IntC6 = np.cumsum(CS6)*dx/50 pl.plot(x, IntC6, label='Spacetime Shape (Geodesics)') DerC6 = np.gradient(CS6, dx) pl.plot(x, DerC6, label='Spacetime Curvature') LightTrajectory = -10 -IntC6/5 pl.plot(x, LightTrajectory, label='Light Trajectory') pl.title('Shape of Curved Spacetime model') pl.legend(loc='best', numpoints=1) pl.grid(True) pl.show() """ Explanation: 5.3. Representing curved continua via AStrings and Atomic Functions Shifts and stretches of Atomic adn AString functions allows reproducing curved surfaces (eq curved spacetime). Details are in author's papers "Atomic Strings and Fabric of Spacetime", "Atomic Solitons as a New Class of Solitons". End of explanation """ #pl.rcParams["figure.figsize"] = 16,12 book = pl.imread('BookSpread_small.png') pl.imshow(book) """ Explanation: <font color=teal>6. 'Soliton Nature' book by S.Eremenko</font> 6.1. AStrings and Atomic functions are described in the book 'Soliton Nature' Soliton Nature book is easy-to-read, pictorial, interactive book which uses beautiful photography, video channel, and computer scripts in R and Python to demonstrate existing and explore new solitons – the magnificent and versatile energy concentration phenomenon of nature. New class of atomic solitons can be used to describe Higgs boson (‘the god particle’) fields, spacetime quanta and other fundamental building blocks of nature. End of explanation """
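# A small numerical check (sketch) of the relation stated at the beginning of
# this notebook, up(x) = AString(2x+1) - AString(2x-1), using the tabulated
# functions defined above; agreement up to rounding/interpolation error is
# expected.
x = np.arange(-1.0, 1.0, 0.01)
lhs = np.array(up(x))
rhs = np.array(AString(2*x + 1)) - np.array(AString(2*x - 1))
print('max |up(x) - (AString(2x+1) - AString(2x-1))| =', np.abs(lhs - rhs).max())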
turbomanage/training-data-analyst
courses/machine_learning/deepdive2/structured/labs/3a_bqml_baseline_babyweight.ipynb
apache-2.0
%%bash sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \ sudo pip install google-cloud-bigquery==1.6.1 """ Explanation: LAB 3a: BigQuery ML Model Baseline. Learning Objectives Create baseline model with BQML Evaluate baseline model Calculate RMSE of baseline model Introduction In this notebook, we will create a baseline model to predict the weight of a baby before it is born. We will use BigQuery ML to build a linear babyweight prediction model with the base features and no feature engineering, yet. We will create a baseline model with BQML, evaluate our baseline model, and calculate the its RMSE. Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. Load necessary libraries Check that the Google BigQuery library is installed and if not, install it. End of explanation """ %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_train LIMIT 0 %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_eval LIMIT 0 """ Explanation: Verify tables exist Run the following cells to verify that we previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them. End of explanation """ %%bigquery CREATE OR REPLACE MODEL babyweight.baseline_model OPTIONS ( MODEL_TYPE=# TODO: Add model type, INPUT_LABEL_COLS=[# TODO: label column name], DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT # TODO: Add features and label FROM # TODO: Add train table """ Explanation: Create the baseline model Next, we'll create a linear regression baseline model with no feature engineering. We'll use this to compare our later, more complex models against. Lab Task #1: Train the "Baseline Model". When creating a BQML model, you must specify the model type (in our case linear regression) and the input label (weight_pounds). Note also that we are using the training data table as the data source and we don't need BQML to split the data because we have already split it ourselves. End of explanation """ %%bigquery -- Information from model training SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.baseline_model) """ Explanation: REMINDER: The query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results. You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes. Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Evaluate the baseline model Even though BigQuery can automatically split the data it is given, and training on only a part of the data and using the rest for evaluation, to compare with our custom models later we wanted to decide the split ourselves so that it is completely reproducible. NOTE: The results are also displayed in the BigQuery Cloud Console under the Evaluation tab. 
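For a linear regression model, ML.EVALUATE reports mean_absolute_error, mean_squared_error, mean_squared_log_error, median_absolute_error, r2_score, and explained_variance.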
End of explanation """ %%bigquery SELECT * FROM ML.EVALUATE(MODEL # TODO: Add model name, ( SELECT # TODO: Add features and label FROM # TODO: Add eval table )) """ Explanation: Lab Task #2: Get evaluation statistics for the baseline model. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data. End of explanation """ %%bigquery SELECT # TODO: Select just the calculated RMSE FROM ML.EVALUATE(MODEL # TODO: Add model name, ( SELECT # TODO: Add features and label FROM # TODO: Add eval table )) """ Explanation: Resource for an explanation of the Regression Metrics. Lab Task #3: Write a SQL query to find the RMSE of the evaluation data Since this is regression, we typically use the RMSE, but natively this is not in the output of our evaluation metrics above. However, we can simply take the SQRT() of the mean squared error of our loss metric from evaluation of the baseline_model to get RMSE. End of explanation """
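%%bigquery
-- A quick sanity check (optional) that can be run at any point: row counts of
-- the train and eval tables created earlier and used throughout this lab.
SELECT 'train' AS dataset_split, COUNT(*) AS num_rows
FROM babyweight.babyweight_data_train
UNION ALL
SELECT 'eval' AS dataset_split, COUNT(*) AS num_rows
FROM babyweight.babyweight_data_eval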
blankon123/skripsi-news-classification
Skripsi- Parsing Engine.ipynb
mit
import feedparser import sys import time from pymongo import MongoClient """ Explanation: Skripri - Feed Parsing Engine Proses Pertama, inisialisasi link Ceritanya list sudah ada di collection MongoLab, tetapi untuk testing digunakan inisiasi link manual End of explanation """ # server = 'localhost' # port = 27017 # db_name = 'thosangs-news' # username = 'userSkripsi' import pymongo print 'Python version', sys.version print 'Pymongo version', pymongo.version # connect to server print '\nConnecting ...' conn = MongoClient(server, port) # Get the database print '\nGetting database ...' db = conn[db_name] # Have to authenticate to get access print '\nAuthenticating ...' db.authenticate(username, password) """ Explanation: Testing feeds = [['detik','http://detik.feedsportal.com/c/33613/f/656089/index.rss'], ['viva','http://rss.viva.co.id/get/bisnis'], ['merdeka','http://www.merdeka.com/feed/'], ['liputan6','http://www.liputan6.com/feed/rss2'], ['tribun','http://www.tribunnews.com/rss/bisnis'], ['okezone','http://sindikasi.okezone.com/index.php/rss/11/RSS2.0'], ['jpnn','http://www.jpnn.com/index.php?mib=rss&id=216'], ['suara','http://www.suara.com/rss/bisnis'], ['bisniscom','http://www.bisnis.com/rss/index?c=382']] End of explanation """ link = db.newsLink.find() feeds = [] for l in link: dummy = {} dummy['name'] = l['name'] dummy['link'] = l['link'] feeds.append(dummy) doc = dict() for i in range(len(feeds)): start_time = time.time() doc[feeds[i]['name']] = feedparser.parse(feeds[i]['link'])['entries'] for j in range(len(doc[feeds[i]['name']])) : doc[feeds[i]['name']][j].pop('published_parsed') print '{0} {1}-News {2}-Seconds'.format(feeds[i]['name'],len(doc[feeds[i]['name']]),(time.time()-start_time)) pos = db.news for linkBerita in doc: for berita in doc[linkBerita]: if (pos.find({'link' : berita['link']})): doc[linkBerita].remove(berita) else: print 'TIDAK '+berita['link'] dd = pos.find({'link' : doc['viva'][1]['link']}) print doc['detik'][1]['link'] pos.insert_many([doc[feeds[i]['name']][j] for i in range(len(feeds)) for j in range(len(doc[feeds[i]['name']]))]) d = pos.find #testing ekstrak deskripsi singkat detik.com def FindShortDesc(desc,cekawal,cekakhir): cek = desc awal = cek.find(cekawal)+len(cekawal) akhir = cek.find(cekakhir) return cek[awal:akhir] for j in range(0,len(ds)): judul = ds.title[j] shortdesc = FindShortDesc(ds.summary[j],'width="100" />','<br c') img = FindShortDesc(ds.summary[j],'src="','" width') link = ds.id[j] print judul print shortdesc print img print link+'\n' print(FindShortDesc(ds.summary[0],'src="','" width')) """ Explanation: for i in range(len(feeds)): print(feeds[i][1]) pos = db.newsLink pos.insert_many([{ 'name' : feeds[i][0],'link' : feeds[i][1]} for i in range(len(feeds))]) End of explanation """
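# Self-contained check (sketch) of FindShortDesc on a hard-coded summary string
# shaped like the detik.com RSS markup assumed above. The HTML snippet below is
# made up purely for illustration.
contoh_summary = '<img src="http://example.com/foto.jpg" width="100" />Ringkasan singkat berita<br clear="all"/>'
print(FindShortDesc(contoh_summary, 'src="', '" width'))         # -> image URL
print(FindShortDesc(contoh_summary, 'width="100" />', '<br c'))  # -> short description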
Autoplectic/dit
examples/hypothesis.ipynb
bsd-3-clause
from hypothesis import find
import dit
from dit.abc import *
from dit.pid import *
from dit.utils.testing import distribution_structures
dit.ditParams['repr.print'] = dit.ditParams['print.exact'] = True
"""
Explanation: Using hypothesis to find interesting examples
Hypothesis is a powerful and unique library for testing code. It also includes a find function for finding examples that satisfy an arbitrary predicate. Here, we will explore some of the neat things that can be found using this function.
End of explanation
"""
a = distribution_structures(size=3, alphabet=2)
a.example()
"""
Explanation: To illustrate what the distribution source looks like, here we instantiate it with a size of 3 and an alphabet of 2:
End of explanation
"""
def pred(value):
    return lambda d: dit.multivariate.coinformation(d) < value
ce = find(distribution_structures(3, 2), pred(-1e-5))
print(ce)
print("The coinformation is: {}".format(dit.multivariate.coinformation(ce)))
ce = find(distribution_structures(3, 2), pred(-0.5))
print(ce)
print("The coinformation is: {}".format(dit.multivariate.coinformation(ce)))
"""
Explanation: Negativity of co-information
End of explanation
"""
def b_lt_k(d):
    k = dit.multivariate.gk_common_information(d)
    b = dit.multivariate.dual_total_correlation(d)
    return k > b
find(distribution_structures(size=3, alphabet=3, uniform=True), b_lt_k)
"""
Explanation: The Gács-Körner common information is bound from above by the dual total correlation
As we will see, hypothesis cannot find an example of K > B, because one does not exist: the search eventually gives up and raises a NoSuchExample error.
End of explanation
"""
ce = find(distribution_structures(3, 2, True), lambda d: PID_BROJA(d) != PID_Proj(d))
ce
print(PID_BROJA(ce))
print(PID_Proj(ce))
"""
Explanation: BROJA is not Proj
We know that the BROJA and Proj PID measures are not the same, but the BROJA paper did not provide any simple examples of this. Here, we find one.
End of explanation
"""
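# A further illustration in the same spirit (sketch): ask hypothesis for a
# distribution whose Gács-Körner common information is clearly positive, which
# requires the variables to share an exactly extractable common part.
ce = find(distribution_structures(3, 2, True), lambda d: dit.multivariate.gk_common_information(d) > 0.5)
print(ce)
print("The Gács-Körner common information is: {}".format(dit.multivariate.gk_common_information(ce)))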
robblack007/clase-dinamica-robot
Clases/.ipynb_checkpoints/Dinamica SCARA-checkpoint.ipynb
mit
from sympy import var, sin, cos, Matrix, Integer, eye, Function, Rational, exp, Symbol, I, solve, pi, trigsimp, dsolve, sinh, cosh, simplify from sympy.physics.mechanics import mechanics_printing mechanics_printing() """ Explanation: Dinámica del Robot Manipulador SCARA Se tiene un robot manipulador tipo SCARA, como el de la siguiente figura, del cual se quiere obtener sus ecuaciones de movimiento, asi como una simulación bajo cierta ley de control. Empezaremos obteniendo una representación simple de las posiciones de los centros de masa para los eslabones del robot. Primero tenemos que importar las librerias de computo simbolico: End of explanation """ var("m1 m2 m3 J1 J2 J3 l1 l2 L1 L2 L0 t g") """ Explanation: Y declaramos todas las constantes involucradas en este calculo simbolico: End of explanation """ q1 = Function("q1")(t) q2 = Function("q2")(t) q3 = Function("q3")(t) """ Explanation: Asi como algunas de las variables de nuestro problema: End of explanation """ x1 = l1*cos(q1) y1 = l1*sin(q1) z1 = L0 v1 = x1.diff("t")**2 + y1.diff("t")**2 + z1.diff("t")**2 v1.trigsimp() x2 = L1*cos(q1) + l2*cos(q1 + q2) y2 = L1*sin(q1) + l2*sin(q1 + q2) z2 = L0 v2 = x2.diff("t")**2 + y2.diff("t")**2 + z2.diff("t")**2 v2.trigsimp() x3 = L1*cos(q1) + L2*cos(q1 + q2) y3 = L1*sin(q1) + L2*sin(q1 + q2) z3 = L0 - q3 v3 = x3.diff("t")**2 + y3.diff("t")**2 + z3.diff("t")**2 v3.trigsimp() """ Explanation: Y empezamos con el calculo de la posicion de los centros de masa de los eslabones, asi como su derivada y el cuadrado de la velocidad lineal de cada eslabon: End of explanation """ ω1 = q1.diff("t") ω2 = q2.diff("t") ω3 = 0 """ Explanation: Declaramos $\omega_i$ como la velocidad angular de cada eslabon: End of explanation """ K1 = Rational(1, 2)*m1*v1 + Rational(1, 2)*J1*ω1**2 K1 K2 = Rational(1, 2)*m1*v2 + Rational(1, 2)*J2*ω2**2 K2 K3 = Rational(1, 2)*m1*v3 + Rational(1, 2)*J3*ω3**2 K3 """ Explanation: Y procedemos al calculo de la energía cinética de cada eslabon: End of explanation """ U1 = m1*g*z1 U1 U2 = m2*g*z2 U2 U3 = m3*g*z3 U3 """ Explanation: Calculamos tambien la energía potencial de cada eslabon: End of explanation """ K = K1 + K2 + K3 K U = U1 + U2 + U3 U """ Explanation: Por lo que ya podemos calcular tanto la energía cinética de nuestro sistema como la potencial: End of explanation """ L = (K - U).expand().simplify() L """ Explanation: y el Lagrangiano queda: End of explanation """ τ1 = (L.diff(q1.diff(t)).diff(t) - L.diff(q1)).simplify().expand().collect(q1.diff(t).diff(t)).collect(q2.diff(t).diff(t)) τ2 = (L.diff(q2.diff(t)).diff(t) - L.diff(q2)).simplify().expand().collect(q1.diff(t).diff(t)).collect(q2.diff(t).diff(t)) τ3 = (L.diff(q3.diff(t)).diff(t) - L.diff(q3)).simplify().expand().collect(q1.diff(t).diff(t)).collect(q2.diff(t).diff(t)) τ1 τ2 τ3 """ Explanation: Por lo que ahora solo tenemos que aplicar la ecuación de Euler-Lagrange para obtener las ecuaciones de movimiento del sistema: End of explanation """ from scipy.integrate import odeint from numpy import linspace """ Explanation: Una vez que tenemos las ecuaciones de movimiento, debemos simular el comportamiento del sistema por medio de la función odeint, y obtener una gráfica de la trayectoria del sistema: End of explanation """ def scara(estado, tiempo): # Se importan funciones necesarias from numpy import sin, cos, matrix # Se desenvuelven variables del estado y tiempo q1, q2, q3, q̇1, q̇2, q̇3 = estado t = tiempo # Se declaran constantes del sistema m1, m2, m3 = 1, 1, 1 J1, J2, J3 = 1, 1, 1 l1, l2 = 0.5, 0.5 L1, L2 = 1, 1 
L = 1 g = 9.81 # Se declaran constantes del control kp1, kp2, kp3 = -30, -60, -60 kv1, kv2, kv3 = -20, -20, -18 # Señales de control nulas #tau1, tau2, tau3 = 0, 0, 0 # Posiciones a alcanzar qd1, qd2, qd3 = 1, 1, 1 # Se declaran señales de control del sistema tau1 = kp1*(q1 - qd1) + kv1*q̇1 tau2 = kp2*(q2 - qd2) + kv2*q̇2 tau3 = kp3*(q3 - qd3) + kv3*q̇3- m3*g # Se calculan algunos terminos comunes λ1 = m1*L1*(l2 + L2) λ2 = m1*(l2**2 + L2**2) λ3 = m1*(l1**2 + L1**2) # Se calculan las matrices de masas, Coriolis, # y vectores de gravedad, control, posicion y velocidad M = matrix([[J1 + 2*λ1*cos(q2) + m1*L1**2 + λ2 + λ3, λ1*cos(q2) + λ2, 0], [λ1*cos(q2) + λ2, J2 + λ2, 0], [0, 0, m1]]) C = matrix([[-2*q̇1, -q̇2, 0], [q̇1, 0, 0], [0, 0, 0]]) G = matrix([[0], [0], [-m3*g]]) Tau = matrix([[tau1], [tau2], [tau3]]) q = matrix([[q1], [q2], [q3]]) q̇ = matrix([[q̇1], [q̇2], [q̇3]]) # Se calcula la derivada del estado del sistema qp1 = q̇1 qp2 = q̇2 qp3 = q̇3 qpp = M.I*(Tau - C*q̇ - G) qpp1, qpp2, qpp3 = qpp.tolist() return [qp1, qp2, qp3, qpp1[0], qpp2[0], qpp3[0]] """ Explanation: Para utilizar la función odeint, debemos crear una función, que describa la dinámica del sistema: End of explanation """ t = linspace(0, 10, 1000) estados_simulados = odeint(func = scara, y0 = [0, 0, 0, 0, 0, 0], t = t) """ Explanation: Y declarar un arreglo con todos los tiempos a simular, mandar a llamar a la función odeint, y listo! End of explanation """ q1, q2, q3, q̇1, q̇2, q̇3 = list(zip(*estados_simulados.tolist())) """ Explanation: Desempacamos los elementos que nos entrega odeint: End of explanation """ %matplotlib notebook from matplotlib.pyplot import plot, style, figure from mpl_toolkits.mplot3d import Axes3D style.use("ggplot") """ Explanation: Importamos la libreria para graficar: End of explanation """ fig1 = figure(figsize=(12, 8)) ax1 = fig1.gca() p1, = ax1.plot(t, q1) p2, = ax1.plot(t, q2) p3, = ax1.plot(t, q3) ax1.legend([p1, p2, p3],[r"$q_1$", r"$q_2$", r"$q_3$"]) ax1.set_ylim(-0.1, 1.2) ax1.set_xlim(-0.1, 10); """ Explanation: Hacemos la grafica de las trayectorias del sistema, $q_1$, $q_2$ y $q_3$: End of explanation """ def tras_x(x): from numpy import matrix A = matrix([[1, 0, 0, x], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]) return A def tras_z(z): from numpy import matrix A = matrix([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, z], [0, 0, 0, 1]]) return A def rot_z(θ): from numpy import matrix, sin, cos A = matrix([[cos(θ), -sin(θ), 0, 0], [sin(θ), cos(θ), 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]) return A def cinematica_scara(q1, q2, q3, l1, l2, L): from numpy import matrix p0 = matrix([[0], [0], [0], [1]]) p1 = tras_z(L)*p0 p2 = rot_z(q1)*tras_x(l1)*p1 p3 = rot_z(q2)*tras_x(l2)*p2 p4 = tras_z(-q3)*p3 return [[p0.tolist()[0][0], p1.tolist()[0][0], p2.tolist()[0][0], p3.tolist()[0][0], p4.tolist()[0][0]], [p0.tolist()[1][0], p1.tolist()[1][0], p2.tolist()[1][0], p3.tolist()[1][0], p4.tolist()[1][0]], [p0.tolist()[2][0], p1.tolist()[2][0], p2.tolist()[2][0], p3.tolist()[2][0], p4.tolist()[2][0]]] from numpy import pi τ = 2*pi xs, ys, zs = cinematica_scara(τ/12, τ/9, 0.5, 1, 1, 1) fig2 = figure(figsize=(8, 8)) ax2 = fig2.gca(projection='3d') ax2.plot(xs, ys, zs, "-o") ax2.set_xlim(-2, 2) ax2.set_ylim(-2, 2) ax2.set_zlim(0, 1.5); """ Explanation: Realizamos la cinemática del manipulador para poder graficar en 3D: End of explanation """ %matplotlib inline def grafica_scara(q1, q2, q3, l1, l2, L): xs, ys , zs = cinematica_scara(q1, q2, q3, l1, l2, L) fig = figure(figsize=(8, 8)) ax = fig.gca(projection='3d') 
ax.plot(xs, ys, zs, "-o") ax.set_xlim(-2, 2) ax.set_ylim(-2, 2) ax.set_zlim(0, 1.5); grafica_scara(τ/12, τ/9, 0.2, 1, 1, 1) """ Explanation: Cambiamos el ambiente de matplotlib, para poder graficar interactivamente, y declaramos una función que tome la cinematica del robot y grafique los puntos: End of explanation """ # Se importan widgets de IPython para interactuar con la funcion from IPython.html.widgets import interact, fixed # Se llama a la funcion interactiva interact(grafica_scara, q1=(0, τ), q2=(0, τ), q3=(0, 1.0), l1=fixed(1), l2=fixed(1), L=fixed(1)) """ Explanation: Importamos la libreria para interactuar con los datos y le pasamos la función que grafica: End of explanation """ from matplotlib import animation from numpy import arange # Se define el tamaño de la figura fig = figure(figsize=(8, 8)) # Se define una sola grafica en la figura y se dan los limites de los ejes x y y axi = fig.add_subplot(111, autoscale_on=False, xlim=(-2, 2), ylim=(-2, 2), projection='3d') # Se utilizan graficas de linea para el resorte y amortiguador robot, = axi.plot([], [], [], "-o", lw=2) def init(): # Esta funcion se ejecuta una sola vez y sirve para inicializar el sistema robot.set_data([], []) return robot def animate(i): # Esta funcion se ejecuta para cada cuadro del GIF # Se obtienen las coordenadas del robot y se meten los datos en su grafica de linea xs, ys, zs = cinematica_scara(q1[i], q2[i], q3[i], 1, 1, 1) robot.set_data(xs, ys) robot .set_3d_properties(zs) return robot # Se hace la animacion dandole el nombre de la figura definida al principio, la funcion que # se debe ejecutar para cada cuadro, el numero de cuadros que se debe de hacer, el periodo # de cada cuadro y la funcion inicial ani = animation.FuncAnimation(fig, animate, arange(1, len(q1)), interval=25, blit=True, init_func=init) # Se guarda el GIF en el archivo indicado ani.save('./imagenes/simulacion-scara.gif', writer='imagemagick'); """ Explanation: Para realizar una animación con los datos de la simulación primero importamos la libreria necesaria y creamos la misma grafica dentro del ambiente de animación: End of explanation """
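A minimal stand-alone sketch of the same PD set-point law used inside the scara function, applied to a single joint with unit inertia; the gains and the name pendulo_pd are chosen only for illustration.
from numpy import linspace
from scipy.integrate import odeint

def pendulo_pd(estado, t, kp=-16.0, kv=-8.0, qd=1.0):
    # Same control structure as above: tau = kp*(q - qd) + kv*q_dot
    q, q_dot = estado
    tau = kp*(q - qd) + kv*q_dot
    # unit mass/inertia and no gravity, so q_ddot = tau
    return [q_dot, tau]

t = linspace(0, 10, 1000)
trayectoria = odeint(pendulo_pd, [0.0, 0.0], t)
print(trayectoria[-1])  # should settle close to [qd, 0]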
clintpgeorge/tutorials
exploratory-data-analysis/Exploratory-Data-Analysis-Fall-2016-student.ipynb
gpl-3.0
# We will first read the wine data headers f = open("wine.data") header = f.readlines()[0] """ Explanation: Exploratory Data Analysis In this tutorial we focus on two popular methods for exploring high dimensional datasets. Principal Component Analysis Latent Semantic Analysis The first method is a general scheme for dimensionality reduction, but the second one is specifically used in the text domain. Principal Component Analysis (PCA) PCA is a popular method for summarizing datasets. Suppose, we have a dataset of different wine types. We describe each wine sample by its Alcohol content, color, and so on (see this very nice visualization of wine properties taken from here). Some of these features will measure related properties and so will be redundant. So, we can summarize each wine sample with fewer features! PCA is one such way to do this. It's also called as a method for dimensionality reduction. Here we have a scatter plot of different wine samples (synthetic). It's based on two wine characteristics, color intensity and alcohol content. <img src="http://i.stack.imgur.com/jPw90.png"> We notice a correlation between these two features. We can construct a new property or feature (that summarizes the two features) by drawing a line through the center of the scatter plot and projecting all points onto this line. We construct these lines via linear combinations of $x$ and $y$ coordinates, i.e., $w_1 x + w_2 y$. Each configuration of $(w_1, w_2)$ will give us a new line. Now we will look at the projections -- The below animation shows how the projections of data points look like for different lines (red dots are projections of the blue dots): <img src="http://i.stack.imgur.com/Q7HIP.gif"> PCA aims to find the best line according to the following two criteria. The variation of (projected) values along the line should be maximal. Have look at how the "variance" of the red dots changes while the line rotates... The line should give the lowest reconstruction error. By reconstruction, we mean that constructing the original two characteristics (the position ($x$, $y$) of a blue dot) from the new one (the position of a red dot). This reconstruction error is proportional to the length of the connecting red line. <img src="http://i.stack.imgur.com/XFngC.png"> We will notice that the maximum variance and the minimum error are happened at the same time, when the line points to the magenta ticks. This line corresponds to the first principal component constructed by PCA. PCA objective: Given the data covariance matrix $C$, we look for a vector $u$ having unit length ($\|u\| = 1$) such that $u^TCu$ is maximal. We will see that we can do this with the help of eigenvectors and eigenvalues of the covariance matrix. We will look at the intuition behind this approach using the example above. Let $C$ be an $n \times n$ matrix and $u$ is an $n \times 1$ vector. The operation $C u$ is well-defined. An eigenvector of $C$ is, by definition, any vector $u$ such that $C u = \lambda u$. For the dataset $A$ ($n \times 2$ matrix) above, the covariance matrix C ($2 \times 2$ matrix) is (we assume that the data is centered.) \begin{equation} \begin{vmatrix} 1.07 & 0.63 \ 0.63 & 0.64 \end{vmatrix} \end{equation} It's a square symmetric matrix. 
Thus, one can diagonalize it by choosing a new orthogonal coordinate system, given by its eigenvectors (spectral theorem): \begin{equation} C = U \Lambda U^{T} \end{equation} where $U$ is a matrix of eigenvectors $u_i$'s (each column is an eigenvector) and $\Lambda$ is a diagonal matrix with eigenvalues $\lambda_i$'s on the diagonal. In the new (eigen) space, the covariance matrix is diagonal, as follows: \begin{equation} \begin{vmatrix} 1.52 & 0 \ 0 & 0.18 \end{vmatrix} \end{equation} It means that there is no correlation between points in this new system. The maximum possible variance is $1.52$, which is given by the first eigenvalue. We achieve this variance by taking the projection on the first principal axis. The direction of this axis is given by the first eigen vector of $C$. This example/discussion is adapted from here. PCA on a Real Dataset For illustration, we will use the wine dataset. Each wine sample is described by 14 features as follows: End of explanation """ %matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy import linalg as la # Read the data file (text format): wine.data, delimiter=',', use columns 0, 1, 10, skip the header wine_class, wine_alc, wine_col = np.loadtxt("wine.data", delimiter=',', usecols=(0, 1, 10), unpack=True, skiprows=1) # draw a scatter plot of wine color intensity and alcohol content """ Explanation: Let's first look at two wine characteristics: Alcohol Content and Color Intensity. <!--img src="http://winefolly.com/wp-content/uploads/2013/02/wine-color-chart1.jpg"--> We can draw a scatter plot: End of explanation """ # Perform PCA on two wine characteristics: **Alcohol Content** and **Color Intensity** col_alc = np.matrix([wine_col, wine_alc]).T m, n = col_alc.shape # compute column means # center the data with column means # calculate the covariance matrix # calculate eigenvectors & eigenvalues of the covariance matrix # sort eigenvalues and eigenvectors in decreasing order """ Explanation: PCA on a Subset of the Wine Data End of explanation """ # Create a scatter plot of the normalized data # color intensity of the x-axis and alcohol content on the y-axis # Plot the principal component line """ Explanation: Let's visualize the normalized data and its principal components. End of explanation """ # the PCA tranformation # Plot the data points in the new space """ Explanation: Let's transform the normalized data to the principal component space End of explanation """ import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfTransformer from scipy.spatial.distance import cosine """ Explanation: Homework $1$: Apply PCA on the whole set of features and analyze its principal components. Exploratory Text Analysis First, let's import numpy and a couple other modules we'll need. End of explanation """ corpus = [ "Romeo and Juliet.", # document 1 "Juliet: O happy dagger!", # document 2 "Romeo died by dagger.", # document 3 "'Live free or die', that's the New-Hampshire's motto.", # document 4 "Did you know, New-Hampshire is in New-England." # document 5 ] key_words = [ 'die', 'dagger' ] """ Explanation: We consider a toy document collection (corpus) and a query for this tutorial. 
End of explanation """ # initialize the countvetorizer class vectorizer = CountVectorizer(min_df=0, stop_words=None) # transform the corpus based on the count vectorizer # print the vocabulary """ Explanation: We now build a term frequency (TF) matrix from the corpus using the Python sklearn package. End of explanation """ # A custom stopword list stop_words = ["a", "an", "the", "and", "in", "by", "or", "did", "you", "is", "that"] # Here, we assume that we preprocessed the corpus preprocessed_corpus = [ "Romeo and Juliet", "Juliet O happy dagger", "Romeo die by dagger", "Live free or die that the NewHampshire motto", "Did you know NewHampshire is in NewEngland" ] # Customize the vectorizer class # transform the corpus based on the count vectorizer # print the vocabulary """ Explanation: Let's look at the corpus vocabulary terms. Some of these terms are noninformative or stopwords, e.g., a, an, the, and, etc. One can use a standard or a custom stopword list to remove these terms. The vocabulary also contains different forms for a single word, e.g., die, died. One can use methods such are stemming and lemmatization to get root forms of words in a corpus. There are several open source libraries available to perform all these for you, e.g., Python Natural Language Processing Toolkit (NLTK) End of explanation """ # query keywords key_words = ['die', 'dagger'] # To keep the development simple, we build a composite model for both the corpus and the query corpus = preprocessed_corpus + [' '.join(key_words)] # transform the corpus based on the count vectorizer # TF-IDF transform using TfidfTransformer # transform the TF matrix to TF-IDF matrix # D x V document-term matrix # 1 x V query-term vector """ Explanation: TF-IDF Here, we compute the TF-IDF matrix for the normalized corpus and the sample query die dagger. We consider the query as a document in the corpus. End of explanation """ # Find cosine distance b/w the TF-IDF vectors of every document and the query # Sort them and create the rank list """ Explanation: Information Retrieval via TF-IDF Now, we solve the document ranking problem for the given query: die dagger. We use cosine distance to measure similarity between each document vector and the query vector in the TF-IDF vector space. Once we have the distance scores we can sort them to get a rank list as follows. End of explanation """ K = 2 # number of components """ Explanation: Latent Semantic Analysis (LSA) We perform LSA using the well-known matrix factorization technique Singular Value Decomposition (SVD). We consider the TF matrix for SVD. In practice, one can also perform SVD on the TF-IDF matrix. Note that $A$ is a $V \times D$ data matrix $U$ is the matrix of the eigenvectors of $C = AA'$ (the term-term matrix). It's a $V \times V$ matrix. $V$ is the matrix of the eigenvectors of $B = A'A$ (the document-document matrix). It's a $D \times D$ matrix $s$ is the vector singular values, obtained as square roots of the eigenvalues of $B$. More info can be found in the python SVD documentation: https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.svd.html We now perform data reduction or transform documents in a $V$-dimensional space to a lower dimensional space. Let's take the number dimensions $K = 3$, i.e., the number of semantic components in the corpus. Using LSA, we can represent vocabulary terms in the semantic space. 
End of explanation """ # Find cosine distance b/w the TF-IDF vectors of every document and the query # Sort them and create the rank list """ Explanation: Information Retrieval via LSA Now we would like to represent the query in the LSA space. A natural choice is to compute a vector that is the centroid of the semantic vectors for its terms. In our example, the keyword query is die dagger. We compute the query vector as We now solve the document ranking problem given the query die dagger as follows. End of explanation """
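One possible way to approach the ranking cells left open above (a sketch, not the official solution): score every document vector against the query vector with cosine distance and sort. The same helper works in both the TF-IDF and the LSA spaces; the toy vectors at the end only show the call.
import numpy as np
from scipy.spatial.distance import cosine

def rank_documents(doc_vectors, query_vector):
    # smaller cosine distance = more similar; return (doc index, distance) sorted
    q = np.asarray(query_vector).ravel()
    distances = [cosine(np.asarray(d).ravel(), q) for d in doc_vectors]
    return sorted(enumerate(distances), key=lambda pair: pair[1])

# toy check: three documents in a 2-D space, query along the first axis
docs = np.array([[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]])
print(rank_documents(docs, np.array([1.0, 0.0])))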
davidhamann/python-fmrest
examples/conf_dotfmp_2018.ipynb
mit
import fmrest fmrest.__version__ """ Explanation: An Introduction to python-fmrest (dotfmp demo) python-fmrest is a wrapper around the FileMaker Data API. No need to worry about manually requesting access tokens, setting the right http headers, parsing responses, ... Use cases Some things you may use the python-fmrest library for: Build a backend for a web app that works with FileMaker data Use python-fmrest together with a rest framework to build your own data API as middleware (so that you don't expose the whole FM data API to a third party, but only allowed endpoints/actions) Explore your FileMaker data with data analysis tools from the Python ecosystem Anything else you could do in the past with the CWP/XML API Installation (get you up and running quickly) If you haven't worked with Python and Virtualenvs before: brew install python3 No brew? /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)") pip3 install virtualenv If you have worked with Python and Virtualenvs before, or after executing the steps above: virtualenv venv --python=`which python3` source venv/bin/activate pip install python-fmrest Link to this notebook: https://github.com/davidhamann/python-fmrest/tree/master/examples/conf_dotfmp_2018.ipynb The demo setup FileMaker Server 17, with Data API enabled, in a VM Hosted Database called "Planets" incl. account with fmrest extended privilege Example code running in a Jupyter Notebook with a Python 3.6 kernel Not necessarily needed, but nice for exploration and presentation (mixing code and annotations) Installed python-fmrest library Import module End of explanation """ fms = fmrest.Server( 'https://dotfmp-demo.davidhamann.de', user='admin', password='admin', database='planets', layout='Planets', # if you are testing without cert/domain you may need the parameter verify_ssl=False here. ) """ Explanation: Create server instance End of explanation """ fms """ Explanation: This gives you a server instance which provides all further methods to interact with the Data API. End of explanation """ fms.login() """ Explanation: Login Obtain a token from FMS: End of explanation """ planets = fms.get_records() for planet in planets: print(f'{planet.id}, {planet.record_id}, {planet.name}') type(planets), planets """ Explanation: Get records and access field and portal data List all records from the Planets table End of explanation """ record = fms.get_record(5, portals=[{'name': 'moons', 'limit': 5}]) portal = record['portal_moons'] record, portal """ Explanation: Look at (some of) the moons of Jupiter (list records of a portal) End of explanation """ for row in portal: print(row['Moons::name']) """ Explanation: Fetching a record always gives you a Record instance. The portal rows, however, are returned as a Foundset. End of explanation """ record.keys() record.values() record.to_dict() """ Explanation: You can inspect what fields are available: End of explanation """ record.name, record['atmosphere'] """ Explanation: And access the value by attribute or key: End of explanation """ find_request = [{'name': 'Earth'}, {'name': 'Jupiter'}] foundset = fms.find(query=find_request) earth = foundset[0] earth """ Explanation: So far we have seen Server, Foundset, Record. These are the main classes you need to be aware of when working with the library. 
Find records End of explanation """ earth.name = 'Blue Dot' earth fms.edit(earth) """ Explanation: Edit a record End of explanation """ # change back earth.name = 'Earth' fms.edit(earth, validate_mod_id=False) """ Explanation: Handle outdated record values: End of explanation """ pluto = fms.create_record({'name': 'Pluto', 'id': 9}) pluto """ Explanation: Create a record End of explanation """ fms.delete_record(pluto) """ Explanation: Delete a record End of explanation """ fms.get_record( 1, scripts={ 'after': ['say_hello', 'dotfmp'] } ) fms.last_script_result fms.last_script_result['after'][1] """ Explanation: Performing scripts (new in v17) End of explanation """ with open('../scratch/dotfmp_logo.png', 'rb') as image: result = fms.upload_container(3, 'image', image) # upload dotfmp logo into field with name "image" of record 3 result """ Explanation: Uploading container data (new in v17) End of explanation """ earth = fms.get_record(3) earth.image name, type_, length, response = fms.fetch_file(earth.image) name, type_, length from IPython.display import Image Image(response.content) """ Explanation: Now retrieve the image again: End of explanation """ find_request = [{'name': 'something that doesn\'t exist'}] foundset = fms.find(query=find_request) """ Explanation: Exceptions End of explanation """ foundset = fms.get_records() df = foundset.to_df() df.loc[:, df.columns != 'image'] df[['name', 'atmosphere', 'rings', 'confirmed_moons', 'mass']].set_index('name').T df.describe() """ Explanation: Foundset into DataFrame Turn Foundset into a Pandas DataFrame to do statistical analyses on your dataset, work with missing data, reshape/pivot, perform joins/merges, plot with matplotlib, export, etc. End of explanation """ %matplotlib notebook df.plot(x='name', y='confirmed_moons') """ Explanation: ... or plot some data with matplotlib End of explanation """ path = 'data.csv' df.to_csv(path, sep=";", index=False) from IPython.display import FileLink FileLink(path) """ Explanation: ... or export the data in a different format End of explanation """
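A small convenience wrapper, built only from the calls demonstrated above (find, indexing into the Foundset, and edit), to show how the pieces combine; the function name is just illustrative.
def rename_planet(fms, old_name, new_name):
    # find the record by name, change one field, and write it back
    foundset = fms.find(query=[{'name': old_name}])
    record = foundset[0]
    record.name = new_name
    return fms.edit(record)

# usage, assuming the logged-in `fms` instance from above:
# rename_planet(fms, 'Earth', 'Blue Dot')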
opesci/tutorial-hands-on
02a_fwi.ipynb
mit
import numpy as np %matplotlib inline from devito import configuration configuration['log_level'] = 'WARNING' """ Explanation: Full-Waveform Inversion (FWI) This notebook is the third in a series of tutorial highlighting various aspects of seismic inversion based on Devito operators. In this second example we aim to highlight the core ideas behind seismic inversion, where we create an image of the subsurface from field recorded data. This tutorial follows on the modelling tutorial and will reuse the modelling and velocity model. Inversion requirement Seismic inversion relies on two known parameters: Field data - or also called recorded data. This is a shot record corresponding to the true velocity model. In practice this data is acquired as described in the first tutorial. In order to simplify this tutorial we will fake field data by modelling it with the true velocity model. Initial velocity model. This is a velocity model that has been obtained by processing the field data. This model is a rough and very smooth estimate of the velocity as an initial estimate for the inversion. This is a necessary requirement for any optimization (method). Inversion computational setup In this tutorial, we will introduce the gradient operator. This operator corresponds to the imaging condition introduced in the previous tutorial with some minor modifications that are defined by the objective function (also referred to in the tutorial series as the functional, f) and its gradient, g. We will define this two terms in the tutorial too. Notes on the operators As we already describe the creation of a forward modelling operator, we will only call an wrapped function here. This wrappers already contains all the necessary operator for seismic modeling, imaging and inversion, however any new operator will be fully described and only used from the wrapper in the next tutorials. End of explanation """ nshots = 9 # Number of shots to create gradient from nreceivers = 101 # Number of receiver locations per shot fwi_iterations = 8 # Number of outer FWI iterations """ Explanation: Computational considerations As we will see in this tutorial, FWI is again very computationally demanding, even more so than RTM. To keep this tutorial as light-wight as possible we therefore again use a very small demonstration model. We also define here a few parameters for the final example runs that can be changed to modify the overall runtime of the tutorial. End of explanation """ #NBVAL_IGNORE_OUTPUT from examples.seismic import demo_model, plot_velocity, plot_perturbation # Define true and initial model shape = (101, 101) # Number of grid point (nx, nz) spacing = (10., 10.) # Grid spacing in m. The domain size is now 1km by 1km origin = (0., 0.) # Need origin to define relative source and receiver locations model = demo_model('circle-isotropic', vp=3.0, vp_background=2.5, origin=origin, shape=shape, spacing=spacing, nbpml=40) model0 = demo_model('circle-isotropic', vp=2.5, vp_background=2.5, origin=origin, shape=shape, spacing=spacing, nbpml=40) plot_velocity(model) plot_velocity(model0) plot_perturbation(model0, model) """ Explanation: True and smooth velocity models As before, we will again use a very simple model domain, consisting of a circle within a 2D domain. We will again use the "true" model to generate our synthetic shot data and use a "smooth" model as our initial guess. In this case the smooth model is very smooth indeed - it is simply a constant background velocity without any features. 
End of explanation """ #NBVAL_IGNORE_OUTPUT # Define acquisition geometry: source from examples.seismic import RickerSource, Receiver # Define time discretization according to grid spacing t0 = 0. tn = 1000. # Simulation lasts 1 second (1000 ms) dt = model.critical_dt # Time step from model grid spacing nt = int(1 + (tn-t0) / dt) # Discrete time axis length time = np.linspace(t0, tn, nt) # Discrete modelling time f0 = 0.010 # Source peak frequency is 10Hz (0.010 kHz) src = RickerSource(name='src', grid=model.grid, f0=f0, time=np.linspace(t0, tn, nt)) src.coordinates.data[0, :] = np.array(model.domain_size) * .5 src.coordinates.data[0, 0] = 20. # 20m from the left end # We can plot the time signature to see the wavelet src.show() #NBVAL_IGNORE_OUTPUT # Define acquisition geometry: receivers # Initialize receivers for synthetic data rec = Receiver(name='rec', grid=model.grid, npoint=nreceivers, ntime=nt) rec.coordinates.data[:, 1] = np.linspace(0, model.domain_size[0], num=nreceivers) rec.coordinates.data[:, 0] = 980. # 20m from the right end # Plot acquisition geometry plot_velocity(model, source=src.coordinates.data, receiver=rec.coordinates.data[::4, :]) """ Explanation: Acquisition geometry In this tutorial, we will use the easiest case for inversion, namely a transmission experiment. The sources are located on one side of the model and the receivers on the other side. This allow to record most of the information necessary for inversion, as reflections usually lead to poor inversion results. End of explanation """ # Compute synthetic data with forward operator from examples.seismic.acoustic import AcousticWaveSolver solver = AcousticWaveSolver(model, src, rec, space_order=4) true_d, _, _ = solver.forward(src=src, m=model.m) # Compute initial data with forward operator smooth_d, _, _ = solver.forward(src=src, m=model0.m) #NBVAL_IGNORE_OUTPUT from examples.seismic import plot_shotrecord # Plot shot record for true and smooth velocity model and the difference plot_shotrecord(true_d.data, model, t0, tn) plot_shotrecord(smooth_d.data, model, t0, tn) plot_shotrecord(smooth_d.data - true_d.data, model, t0, tn) """ Explanation: True and smooth data We can generate shot records for the true and smoothed initial velocity models, since the difference between them will again form the basis of our imaging procedure. End of explanation """ #NBVAL_IGNORE_OUTPUT # Prepare the varying source locations sources source_locations = np.empty((nshots, 2), dtype=np.float32) source_locations[:, 0] = 30. source_locations[:, 1] = np.linspace(0., 1000, num=nshots) plot_velocity(model, source=source_locations) # Create FWI gradient kernel from devito import Function, clear_cache def fwi_gradient(m_in): # Important: We force previous wavefields to be destroyed, # so that we may reuse the memory. clear_cache() # Create symbols to hold the gradient and residual grad = Function(name="grad", grid=model.grid) residual = Receiver(name='rec', grid=model.grid, ntime=nt, coordinates=rec.coordinates.data) objective = 0. 
for i in range(nshots): # Update source location src.coordinates.data[0, :] = source_locations[i, :] # Generate synthetic data from true model true_d, _, _ = solver.forward(src=src, m=model.m) # Compute smooth data and full forward wavefield u0 smooth_d, u0, _ = solver.forward(src=src, m=m_in, save=True) # Compute gradient from data residual and update objective function residual.data[:] = smooth_d.data[:] - true_d.data[:] objective += .5*np.linalg.norm(residual.data.reshape(-1))**2 solver.gradient(rec=residual, u=u0, m=m_in, grad=grad) return objective, grad.data """ Explanation: Full-Waveform Inversion Formulation Full-waveform inversion (FWI) aims to invert an accurate model of the discrete wave velocity, $\mathbf{c}$, or equivalently the square slowness of the wave, $\mathbf{m} = \frac{1}{\mathbf{c}^2}$, from a given set of measurements of the pressure wavefield $\mathbf{u}$. This can be expressed as the following optimization problem [1, 2]: \begin{aligned} \mathop{\hbox{minimize}}_{\mathbf{m}} \Phi_s(\mathbf{m})&=\frac{1}{2}\left\lVert\mathbf{P}_r \mathbf{u} - \mathbf{d}\right\rVert_2^2 \ \mathbf{u} &= \mathbf{A}(\mathbf{m})^{-1} \mathbf{P}_s^T \mathbf{q}_s, \end{aligned} where $\mathbf{P}_r$ is the sampling operator at the receiver locations, $\mathbf{P}_s^T$ is the injection operator at the source locations, $\mathbf{A}(\mathbf{m})$ is the operator representing the discretized wave equation matrix, $\mathbf{u}$ is the discrete synthetic pressure wavefield, $\mathbf{q}_s$ is the corresponding pressure source and $\mathbf{d}$ is the measured data. It is worth noting that $\mathbf{m}$ is the unknown in this formulation and that multiple implementations of the wave equation operator $\mathbf{A}(\mathbf{m})$ are possible. We have already defined a concrete solver scheme for $\mathbf{A}(\mathbf{m})$ in the first tutorial, including appropriate implementations of the sampling operator $\mathbf{P}_r$ and source term $\mathbf{q}_s$. To solve this optimization problem using a gradient-based method, we use the adjoint-state method to evaluate the gradient $\nabla\Phi_s(\mathbf{m})$: \begin{align} \nabla\Phi_s(\mathbf{m})=\sum_{\mathbf{t} =1}^{n_t}\mathbf{u}[\mathbf{t}] \mathbf{v}_{tt}[\mathbf{t}] =\mathbf{J}^T\delta\mathbf{d}_s, \end{align} where $n_t$ is the number of computational time steps, $\delta\mathbf{d}s = \left(\mathbf{P}_r \mathbf{u} - \mathbf{d} \right)$ is the data residual (difference between the measured data and the modelled data), $\mathbf{J}$ is the Jacobian operator and $\mathbf{v}{tt}$ is the second-order time derivative of the adjoint wavefield solving: \begin{align} \mathbf{A}^T(\mathbf{m}) \mathbf{v} = \mathbf{P}_r^T \delta\mathbf{d}. \end{align} We see that the gradient of the FWI function is the previously defined imaging condition with an extra second-order time derivative. We will therefore reuse the operators defined previously inside a Devito wrapper. FWI gradient operator To compute a single gradient $\nabla\Phi_s(\mathbf{m})$ in our optimization workflow we again use solver.forward to compute the entire forward wavefield $\mathbf{u}$ and a similar pre-defined gradient operator to compute the adjoint wavefield v. 
The gradient operator provided by our solver utility also computes the correlation between the wavefields, allowing us to encode a similar procedure to the previous imaging tutorial as our gradient calculation: Simulate the forward wavefield with the background velocity model to get the synthetic data and save the full wavefield $\mathbf{u}$ Compute the data residual Back-propagate the data residual and compute on the fly the gradient contribution at each time step. This procedure is applied to multiple source positions and summed to obtain a gradient image of the subsurface. We again prepare the source locations for each shot and visualize them, before defining a single gradient computation over a number of shots as a single function. End of explanation """ #NBVAL_IGNORE_OUTPUT # Compute gradient of initial model ff, update = fwi_gradient(model0.m) print('Objective value is %f ' % ff) #NBVAL_IGNORE_OUTPUT from examples.seismic import plot_image # Plot the FWI gradient plot_image(update, vmin=-1e4, vmax=1e4, cmap="jet") # Plot the difference between the true and initial model. # This is not known in practice as only the initial model is provided. plot_image(model0.m.data - model.m.data, vmin=-1e-1, vmax=1e-1, cmap="jet") # Show what the update does to the model alpha = .05 / np.max(update) plot_image(model0.m.data - alpha*update, vmin=.1, vmax=.2, cmap="jet") """ Explanation: Having defined our FWI gradient procedure we can compute the initial iteration from our starting model. This allows us to visualize the gradient alongside the model perturbation and the effect of the gradient update on the model. End of explanation """ # Define bounding box constraints on the solution. def apply_box_constraint(m): # Maximum possible 'realistic' velocity is 3.5 km/sec # Minimum possible 'realistic' velocity is 2 km/sec return np.clip(m, 1/3.5**2, 1/2**2) #NBVAL_SKIP # Run FWI with gradient descent history = np.zeros((fwi_iterations, 1)) for i in range(0, fwi_iterations): # Compute the functional value and gradient for the current # model estimate phi, direction = fwi_gradient(model0.m) # Store the history of the functional values history[i] = phi # Artificial Step length for gradient descent # In practice this would be replaced by a Linesearch (Wolfe, ...) # that would guarantee functional decrease Phi(m-alpha g) <= epsilon Phi(m) # where epsilon is a minimum decrease constant alpha = .005 / np.max(direction) # Update the model estimate and inforce minimum/maximum values model0.m.data[:] = apply_box_constraint(model0.m.data - alpha * direction) # Log the progress made print('Objective value is %f at iteration %d' % (phi, i+1)) #NBVAL_IGNORE_OUTPUT # First, update velocity from computed square slowness nbpml = model.nbpml model0.vp = np.sqrt(1. / model0.m.data[nbpml:-nbpml, nbpml:-nbpml]) # Plot inverted velocity model plot_velocity(model0) #NBVAL_SKIP import matplotlib.pyplot as plt # Plot objective function decrease plt.figure() plt.loglog(history) plt.xlabel('Iteration number') plt.ylabel('Misift value Phi') plt.title('Convergence') plt.show() """ Explanation: We see that the gradient and the true perturbation have the same sign, therefore, with an appropriate scaling factor, we will update the model in the correct direction. End of explanation """
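The loop above uses an artificial fixed step length and notes that a line search would replace it in practice. Purely as an illustration of that remark, and written against plain NumPy rather than the Devito objects so it runs on its own, a backtracking (Armijo) step selection could look like this:
import numpy as np

def backtracking_step(f, x, fx, grad, direction, alpha0=1.0, rho=0.5, c=1e-4, max_tries=20):
    # Shrink alpha until the sufficient-decrease condition
    # f(x + alpha*d) <= f(x) + c*alpha*<grad, d> holds.
    alpha = alpha0
    slope = np.sum(grad * direction)
    for _ in range(max_tries):
        if f(x + alpha * direction) <= fx + c * alpha * slope:
            break
        alpha *= rho
    return alpha

# toy check on f(x) = ||x||^2, stepping along the negative gradient
f = lambda x: np.sum(x**2)
x = np.array([3.0, 4.0])
g = 2 * x
print(backtracking_step(f, x, f(x), g, -g))  # expect 0.5 for this toy case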
tensorflow/docs-l10n
site/ko/tutorials/images/cnn.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 The TensorFlow Authors. End of explanation """ !pip install tensorflow-gpu==2.0.0-rc1 import tensorflow as tf from tensorflow.keras import datasets, layers, models """ Explanation: 합성곱 신경망 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/images/cnn"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> TensorFlow.org에서 보기</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/images/cnn.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> 구글 코랩(Colab)에서 실행하기</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/images/cnn.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> 깃허브(GitHub) 소스 보기</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/images/cnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도 불구하고 공식 영문 문서의 내용과 일치하지 않을 수 있습니다. 이 번역에 개선할 부분이 있다면 tensorflow/docs-l10n 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다. 문서 번역이나 리뷰에 참여하려면 docs-ko@tensorflow.org로 메일을 보내주시기 바랍니다. 이 튜토리얼은 MNIST 숫자를 분류하기 위해 간단한 합성곱 신경망(Convolutional Neural Network, CNN)을 훈련합니다. 간단한 이 네트워크는 MNIST 테스트 세트에서 99% 정확도를 달성할 것입니다. 이 튜토리얼은 케라스 Sequential API를 사용하기 때문에 몇 줄의 코드만으로 모델을 만들고 훈련할 수 있습니다. 노트: GPU를 사용하여 CNN의 훈련 속도를 높일 수 있습니다. 코랩에서 이 노트북을 실행한다면 * 수정 -> 노트 설정 -> 하드웨어 가속기* 에서 GPU를 선택하세요. 텐서플로 임포트하기 End of explanation """ (train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data() train_images = train_images.reshape((60000, 28, 28, 1)) test_images = test_images.reshape((10000, 28, 28, 1)) # 픽셀 값을 0~1 사이로 정규화합니다. train_images, test_images = train_images / 255.0, test_images / 255.0 """ Explanation: MNIST 데이터셋 다운로드하고 준비하기 End of explanation """ model = models.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) """ Explanation: 합성곱 층 만들기 아래 6줄의 코드에서 Conv2D와 MaxPooling2D 층을 쌓는 일반적인 패턴으로 합성곱 층을 정의합니다. CNN은 배치(batch) 크기를 제외하고 (이미지 높이, 이미지 너비, 컬러 채널) 크기의 텐서(tensor)를 입력으로 받습니다. MNIST 데이터는 (흑백 이미지이기 때문에) 컬러 채널(channel)이 하나지만 컬러 이미지는 (R,G,B) 세 개의 채널을 가집니다. 이 예에서는 MNIST 이미지 포맷인 (28, 28, 1) 크기의 입력을 처리하는 CNN을 정의하겠습니다. 이 값을 첫 번째 층의 input_shape 매개변수로 전달합니다. End of explanation """ model.summary() """ Explanation: 지금까지 모델의 구조를 출력해 보죠. End of explanation """ model.add(layers.Flatten()) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(10, activation='softmax')) """ Explanation: 위에서 Conv2D와 MaxPooling2D 층의 출력은 (높이, 너비, 채널) 크기의 3D 텐서입니다. 
높이와 너비 차원은 네트워크가 깊어질수록 감소하는 경향을 가집니다. Conv2D 층에서 출력 채널의 수는 첫 번째 매개변수에 의해 결정됩니다(예를 들면, 32 또는 64). 일반적으로 높이와 너비가 줄어듦에 따라 (계산 비용 측면에서) Conv2D 층의 출력 채널을 늘릴 수 있습니다. 마지막에 Dense 층 추가하기 모델을 완성하려면 마지막 합성곱 층의 출력 텐서(크기 (4, 4, 64))를 하나 이상의 Dense 층에 주입하여 분류를 수행합니다. Dense 층은 벡터(1D)를 입력으로 받는데 현재 출력은 3D 텐서입니다. 먼저 3D 출력을 1D로 펼치겠습니다. 그다음 하나 이상의 Dense 층을 그 위에 추가하겠습니다. MNIST 데이터는 10개의 클래스가 있으므로 마지막에 Dense 층에 10개의 출력과 소프트맥스 활성화 함수를 사용합니다. End of explanation """ model.summary() """ Explanation: 최종 모델의 구조를 확인해 보죠. End of explanation """ model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(train_images, train_labels, epochs=5) """ Explanation: 여기에서 볼 수 있듯이 두 개의 Dense 층을 통과하기 전에 (4, 4, 64) 출력을 (1024) 크기의 벡터로 펼쳤습니다. 모델 컴파일과 훈련하기 End of explanation """ test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2) print(test_acc) """ Explanation: 모델 평가 End of explanation """
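As a small follow-up, the trained model can also be used for prediction directly; taking the argmax of the softmax output recovers the predicted digit for each test image.
import numpy as np

predictions = model.predict(test_images[:5])
print(np.argmax(predictions, axis=1))  # predicted digits for the first five test images
print(test_labels[:5])                 # ground-truth labels for comparison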
utensil/julia-playground
dl/hello_nn_vis.ipynb
mit
import tensorflow as tf g = tf.Graph() with g.as_default(): a = tf.placeholder(tf.float32, name="a") b = tf.placeholder(tf.float32, name="b") c = a + b [node.name for node in g.as_graph_def().node] g.as_graph_def().node[2].input %%bash export DEBIAN_FRONTEND=noninteractive apt-get update apt-get install -yq --no-install-recommends graphviz %%bash pip install graphviz from graphviz import Digraph dot = Digraph() for n in g.as_graph_def().node: # Each node has a name and a label. The name identifies the node # while the label is what will be displayed in the graph. # We're using the name as a label for simplicity. dot.node(n.name, label=n.name) for i in n.input: # Edges are determined by the names of the nodes dot.edge(i, n.name) # Jupyter can automatically display the DOT graph, # which allows us to just return it as a value. dot def tf_to_dot(graph): dot = Digraph() for n in g.as_graph_def().node: dot.node(n.name, label=n.name) for i in n.input: dot.edge(i, n.name) return dot g = tf.Graph() with g.as_default(): pi = tf.constant(3.14, name="pi") r = tf.placeholder(tf.float32, name="r") y = pi * r * r tf_to_dot(g) %%bash mkdir vis_logs """ Explanation: The following is adapted from Visualizing TensorFlow Graphs in Jupyter Notebooks And excuted in bash docker run -it -p 8888:8888 -p 6006:6006 -v `pwd`:/space/ -w /space/ --rm --name md waleedka/modern-deep-learning jupyter notebook --ip=0.0.0.0 --allow-root End of explanation """ g = tf.Graph() with g.as_default(): pi = tf.constant(3.14, name="pi") r = tf.placeholder(tf.float32, name="r") y = pi * r * r tf.summary.FileWriter("vis_logs", g).close() """ Explanation: Run the follwing: bash docker exec -it md tensorboard --logdir=dl/vis_logs And navigate to http://localhost:6006/#graphs End of explanation """ g = tf.Graph() with g.as_default(): X = tf.placeholder(tf.float32, name="X") W1 = tf.placeholder(tf.float32, name="W1") b1 = tf.placeholder(tf.float32, name="b1") a1 = tf.nn.relu(tf.matmul(X, W1) + b1) W2 = tf.placeholder(tf.float32, name="W2") b2 = tf.placeholder(tf.float32, name="b2") a2 = tf.nn.relu(tf.matmul(a1, W2) + b2) W3 = tf.placeholder(tf.float32, name="W3") b3 = tf.placeholder(tf.float32, name="b3") y_hat = tf.matmul(a2, W3) + b3 tf.summary.FileWriter("vis_logs", g).close() """ Explanation: End of explanation """ g = tf.Graph() with g.as_default(): X = tf.placeholder(tf.float32, name="X") with tf.name_scope("Layer1"): W1 = tf.placeholder(tf.float32, name="W1") b1 = tf.placeholder(tf.float32, name="b1") a1 = tf.nn.relu(tf.matmul(X, W1) + b1) with tf.name_scope("Layer2"): W2 = tf.placeholder(tf.float32, name="W2") b2 = tf.placeholder(tf.float32, name="b2") a2 = tf.nn.relu(tf.matmul(a1, W2) + b2) with tf.name_scope("Layer3"): W3 = tf.placeholder(tf.float32, name="W3") b3 = tf.placeholder(tf.float32, name="b3") y_hat = tf.matmul(a2, W3) + b3 tf.summary.FileWriter("vis_logs", g).close() """ Explanation: End of explanation """ # https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb # TensorFlow Graph visualizer code import numpy as np from IPython.display import clear_output, Image, display, HTML def strip_consts(graph_def, max_const_size=32): """Strip large constant values from graph_def.""" strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size > max_const_size: tensor.tensor_content = "<stripped %d bytes>"%size return strip_def def 
show_graph(graph_def, max_const_size=32): """Visualize TensorFlow graph.""" if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) code = """ <script src="//cdnjs.cloudflare.com/ajax/libs/polymer/0.3.3/platform.js"></script> <script> function load() {{ document.getElementById("{id}").pbtxt = {data}; }} </script> <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()> <div style="height:600px"> <tf-graph-basic id="{id}"></tf-graph-basic> </div> """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand())) iframe = """ <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe> """.format(code.replace('"', '&quot;')) display(HTML(iframe)) # Simply call this to display the result. Unfortunately it doesn't save the output together with # the Jupyter notebook, so we can only show a non-interactive image here. show_graph(g) """ Explanation: End of explanation """ %%bash pip install mxnet %%bash # https://github.com/dmlc/mxnet-model-gallery/blob/master/imagenet-1k-vgg.md wget http://data.dmlc.ml/mxnet/models/imagenet/vgg/vgg19.tar.gz %%bash wget http://data.dmlc.ml/models/imagenet/inception-bn/Inception-BN-symbol.json %%bash cat Inception-BN-symbol.json %%bash wget http://data.dmlc.ml/mxnet/models/imagenet/resnet/50-layers/resnet-50-symbol.json && wget http://data.dmlc.ml/mxnet/models/imagenet/resnet/50-layers/resnet-50-0000.params import mxnet as mx sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0) mx.viz.plot_network(sym, node_attrs={"shape":'rect',"fixedsize":'false'}, save_format='png') import mxnet as mx user = mx.symbol.Variable('user') item = mx.symbol.Variable('item') score = mx.symbol.Variable('score') # Set dummy dimensions k = 64 max_user = 100 max_item = 50 # user feature lookup user = mx.symbol.Embedding(data = user, input_dim = max_user, output_dim = k) # item feature lookup item = mx.symbol.Embedding(data = item, input_dim = max_item, output_dim = k) # predict by the inner product, which is elementwise product and then sum net = user * item net = mx.symbol.sum_axis(data = net, axis = 1) net = mx.symbol.Flatten(data = net) # loss layer net = mx.symbol.LinearRegressionOutput(data = net, label = score) # Visualize your network mx.viz.plot_network(net) """ Explanation: The following is adapted from Visualizing CNN architectures side by side with mxnet End of explanation """
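One more note: the Digraph returned by tf_to_dot is an ordinary graphviz graph, so it can be written to disk instead of only being displayed inline (the object returned by mx.viz.plot_network should behave the same way, since it is also a graphviz graph). The file name below is arbitrary.
g = tf.Graph()
with g.as_default():
    pi = tf.constant(3.14, name="pi")
    r = tf.placeholder(tf.float32, name="r")
    y = pi * r * r

dot = tf_to_dot(g)
dot.format = 'png'
dot.render('circle_area_graph', cleanup=True)  # writes circle_area_graph.png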
sjschmidt44/bike_share
bike_share_data.ipynb
mit
from pandas import Series, DataFrame import pandas as pd import numpy as np weather = pd.read_table('daily_weather.tsv') stations = pd.read_table('stations.tsv') usage = pd.read_table('usage_2012.tsv') newseasons = {'Summer': 'Spring', 'Spring': 'Winter', 'Fall': 'Summer', 'Winter': 'Fall'} weather['season_desc'] = weather['season_desc'].map(newseasons) pd.pivot_table(weather, 'temp', 'season_desc', aggfunc=np.average) """ Explanation: Question 1: Average Temp by Season Compute the average temperature by season ('season_desc'). (The temperatures are numbers between 0 and 1, but don't worry about that. Let's say that's the Shellman temperature scale.) End of explanation """ weather['Month'] = pd.DatetimeIndex(weather.date).month pd.pivot_table(weather, 'total_riders', 'Month', aggfunc=np.sum) """ Explanation: Question 2: Number of Rentals by Month Various of the columns represent dates or datetimes, but out of the box pd.read_table won't treat them correctly. This makes it hard to (for example) compute the number of rentals by month. Fix the dates and compute the number of rentals by month. End of explanation """ pd.concat([weather['temp'], weather['total_riders']], axis=1).corr() weather[['temp', 'total_riders', 'Month']].groupby('Month').corr() """ Explanation: Question 3: Rental Variance by Temperature Investigate how the number of rentals varies with temperature. Is this trend constant across seasons? Across months? End of explanation """ pd.concat([weather['temp'], weather['no_casual_riders'], weather['no_reg_riders']], axis=1, keys=['temp', 'Non-Regulars', 'Regulars']).corr() """ Explanation: Question 4: User Data There are various types of users in the usage data sets. What sorts of things can you say about how they use the bikes differently? End of explanation """ pd.concat([weather['is_work_day'], weather['no_casual_riders'], weather['no_reg_riders']], axis=1, keys=['Is_Workday', 'Non-Regulars', 'Regulars']).corr() """ Explanation: As the temp is higher, Regulars are more likely to ride. End of explanation """ pd.concat([weather['is_holiday'], weather['no_casual_riders'], weather['no_reg_riders']], axis=1, keys=['Is_Holiday', 'Non-Regulars', 'Regulars']).corr() """ Explanation: Regulars have a much higher usage rate on Working Days. End of explanation """
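As an extra cut of the Question 3 analysis, the same correlation can be grouped by season instead of month, which makes the seasonal comparison easier to read:
season_corr = (weather[['season_desc', 'temp', 'total_riders']]
               .groupby('season_desc')
               .corr())
print(season_corr)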
kvr777/deep-learning
batch-norm/Batch_Normalization_Lesson.ipynb
mit
# Import necessary packages import tensorflow as tf import tqdm import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Import MNIST data so we have something for our experiments from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) """ Explanation: Batch Normalization – Lesson What is it? What are it's benefits? How do we add it to a network? Let's see it work! What are you hiding? What is Batch Normalization?<a id='theory'></a> Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called "batch" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch. Why might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network. For example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network. Likewise, the output of layer 2 can be thought of as the input to a single layer network, consistng only of layer 3. When you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network). Beyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models. Benefits of Batch Normalization<a id="benefits"></a> Batch normalization optimizes network training. It has been shown to have several benefits: 1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall. 2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train. 3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights. 4. Makes more activation functions viable – Some activation functions do not work well in some situations. 
Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. Because batch normalization regulates the values going into each activation function, non-linearlities that don't seem to work well in deep networks actually become viable again. 5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great. 6. Provides a bit of regularlization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network. 7. May give better results overall – Some tests seem to show batch normalization actually improves the train.ing results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization. Batch Normalization in TensorFlow<a id="implementation_1"></a> This section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow. The following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization. End of explanation """ class NeuralNet: def __init__(self, initial_weights, activation_fn, use_batch_norm): """ Initializes this object, creating a TensorFlow graph using the given parameters. :param initial_weights: list of NumPy arrays or Tensors Initial values for the weights for every layer in the network. We pass these in so we can create multiple networks with the same starting weights to eliminate training differences caused by random initialization differences. The number of items in the list defines the number of layers in the network, and the shapes of the items in the list define the number of nodes in each layer. e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would create a network with 784 inputs going into a hidden layer with 256 nodes, followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes. :param activation_fn: Callable The function used for the output of each hidden layer. The network will use the same activation function on every hidden layer and no activate function on the output layer. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. :param use_batch_norm: bool Pass True to create a network that uses batch normalization; False otherwise Note: this network will not use batch normalization on layers that do not have an activation function. """ # Keep track of whether or not this network uses batch normalization. 
self.use_batch_norm = use_batch_norm self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm" # Batch normalization needs to do different calculations during training and inference, # so we use this placeholder to tell the graph which behavior to use. self.is_training = tf.placeholder(tf.bool, name="is_training") # This list is just for keeping track of data we want to plot later. # It doesn't actually have anything to do with neural nets or batch normalization. self.training_accuracies = [] # Create the network graph, but it will not actually have any real values until after you # call train or test self.build_network(initial_weights, activation_fn) def build_network(self, initial_weights, activation_fn): """ Build the graph. The graph still needs to be trained via the `train` method. :param initial_weights: list of NumPy arrays or Tensors See __init__ for description. :param activation_fn: Callable See __init__ for description. """ self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]]) layer_in = self.input_layer for weights in initial_weights[:-1]: layer_in = self.fully_connected(layer_in, weights, activation_fn) self.output_layer = self.fully_connected(layer_in, initial_weights[-1]) def fully_connected(self, layer_in, initial_weights, activation_fn=None): """ Creates a standard, fully connected layer. Its number of inputs and outputs will be defined by the shape of `initial_weights`, and its starting weight values will be taken directly from that same parameter. If `self.use_batch_norm` is True, this layer will include batch normalization, otherwise it will not. :param layer_in: Tensor The Tensor that feeds into this layer. It's either the input to the network or the output of a previous layer. :param initial_weights: NumPy array or Tensor Initial values for this layer's weights. The shape defines the number of nodes in the layer. e.g. Passing in 3 matrix of shape (784, 256) would create a layer with 784 inputs and 256 outputs. :param activation_fn: Callable or None (default None) The non-linearity used for the output of the layer. If None, this layer will not include batch normalization, regardless of the value of `self.use_batch_norm`. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. """ # Since this class supports both options, only use batch normalization when # requested. However, do not use it on the final layer, which we identify # by its lack of an activation function. if self.use_batch_norm and activation_fn: # Batch normalization uses weights as usual, but does NOT add a bias term. This is because # its calculations include gamma and beta variables that make the bias term unnecessary. # (See later in the notebook for more details.) weights = tf.Variable(initial_weights) linear_output = tf.matmul(layer_in, weights) # Apply batch normalization to the linear combination of the inputs and weights batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training) # Now apply the activation function, *after* the normalization. return activation_fn(batch_normalized_output) else: # When not using batch normalization, create a standard layer that multiplies # the inputs and weights, adds a bias, and optionally passes the result # through an activation function. 
weights = tf.Variable(initial_weights) biases = tf.Variable(tf.zeros([initial_weights.shape[-1]])) linear_output = tf.add(tf.matmul(layer_in, weights), biases) return linear_output if not activation_fn else activation_fn(linear_output) def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None): """ Trains the model on the MNIST training dataset. :param session: Session Used to run training graph operations. :param learning_rate: float Learning rate used during gradient descent. :param training_batches: int Number of batches to train. :param batches_per_sample: int How many batches to train before sampling the validation accuracy. :param save_model_as: string or None (default None) Name to use if you want to save the trained model. """ # This placeholder will store the target labels for each mini batch labels = tf.placeholder(tf.float32, [None, 10]) # Define loss and optimizer cross_entropy = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer)) # Define operations for testing correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) if self.use_batch_norm: # If we don't include the update ops as dependencies on the train step, the # tf.layers.batch_normalization layers won't update their population statistics, # which will cause the model to fail at inference time with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) else: train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) # Train for the appropriate number of batches. (tqdm is only for a nice timing display) for i in tqdm.tqdm(range(training_batches)): # We use batches of 60 just because the original paper did. You can use any size batch you like. batch_xs, batch_ys = mnist.train.next_batch(60) session.run(train_step, feed_dict={self.input_layer: batch_xs, labels: batch_ys, self.is_training: True}) # Periodically test accuracy against the 5k validation images and store it for plotting later. if i % batches_per_sample == 0: test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images, labels: mnist.validation.labels, self.is_training: False}) self.training_accuracies.append(test_accuracy) # After training, report accuracy against test data test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images, labels: mnist.validation.labels, self.is_training: False}) print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy)) # If you want to use this model later for inference instead of having to retrain it, # just construct it with the same parameters and then pass this file to the 'test' function if save_model_as: tf.train.Saver().save(session, save_model_as) def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None): """ Trains a trained model on the MNIST testing dataset. :param session: Session Used to run the testing graph operations. :param test_training_accuracy: bool (default False) If True, perform inference with batch normalization using batch mean and variance; if False, perform inference with batch normalization using estimated population mean and variance. Note: in real life, *always* perform inference using the population mean and variance. 
This parameter exists just to support demonstrating what happens if you don't. :param include_individual_predictions: bool (default False) This function always performs an accuracy test against the entire test set. But if this parameter is True, it performs an extra test, doing 200 predictions one at a time, and displays the results and accuracy. :param restore_from: string or None (default None) Name of a saved model if you want to test with previously saved weights. """ # This placeholder will store the true labels for each mini batch labels = tf.placeholder(tf.float32, [None, 10]) # Define operations for testing correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # If provided, restore from a previously saved model if restore_from: tf.train.Saver().restore(session, restore_from) # Test against all of the MNIST test data test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images, labels: mnist.test.labels, self.is_training: test_training_accuracy}) print('-'*75) print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy)) # If requested, perform tests predicting individual values rather than batches if include_individual_predictions: predictions = [] correct = 0 # Do 200 predictions, 1 at a time for i in range(200): # This is a normal prediction using an individual test case. However, notice # we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`. # Remember that will tell it whether it should use the batch mean & variance or # the population estimates that were calculated while training the model. pred, corr = session.run([tf.argmax(self.output_layer, 1), accuracy], feed_dict={self.input_layer: [mnist.test.images[i]], labels: [mnist.test.labels[i]], self.is_training: test_training_accuracy}) correct += corr predictions.append(pred[0]) print("200 Predictions:", predictions) print("Accuracy on 200 samples:", correct/200) """ Explanation: Neural network classes for testing The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions. About the code: This class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization. It's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train. End of explanation """ def plot_training_accuracies(*args, **kwargs): """ Displays a plot of the accuracies calculated during training to demonstrate how many iterations it took for the model(s) to converge. :param args: One or more NeuralNet objects You can supply any number of NeuralNet objects as unnamed arguments and this will display their training accuracies. Be sure to call `train` on the NeuralNets before calling this function. 
:param kwargs: You can supply any named parameters here, but `batches_per_sample` is the only one we look for. It should match the `batches_per_sample` value you passed to the `train` function. """ fig, ax = plt.subplots() batches_per_sample = kwargs['batches_per_sample'] for nn in args: ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample), nn.training_accuracies, label=nn.name) ax.set_xlabel('Training steps') ax.set_ylabel('Accuracy') ax.set_title('Validation Accuracy During Training') ax.legend(loc=4) ax.set_ylim([0,1]) plt.yticks(np.arange(0, 1.1, 0.1)) plt.grid(True) plt.show() def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500): """ Creates two networks, one with and one without batch normalization, then trains them with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies. :param use_bad_weights: bool If True, initialize the weights of both networks to wildly inappropriate weights; if False, use reasonable starting weights. :param learning_rate: float Learning rate used during gradient descent. :param activation_fn: Callable The function used for the output of each hidden layer. The network will use the same activation function on every hidden layer and no activate function on the output layer. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. :param training_batches: (default 50000) Number of batches to train. :param batches_per_sample: (default 500) How many batches to train before sampling the validation accuracy. """ # Use identical starting weights for each network to eliminate differences in # weight initialization as a cause for differences seen in training performance # # Note: The networks will use these weights to define the number of and shapes of # its layers. The original batch normalization paper used 3 hidden layers # with 100 nodes in each, followed by a 10 node output layer. These values # build such a network, but feel free to experiment with different choices. # However, the input size should always be 784 and the final output should be 10. if use_bad_weights: # These weights should be horrible because they have such a large standard deviation weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32), np.random.normal(size=(100,100), scale=5.0).astype(np.float32), np.random.normal(size=(100,100), scale=5.0).astype(np.float32), np.random.normal(size=(100,10), scale=5.0).astype(np.float32) ] else: # These weights should be good because they have such a small standard deviation weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,10), scale=0.05).astype(np.float32) ] # Just to make sure the TensorFlow's default graph is empty before we start another # test, because we don't bother using different graphs or scoping and naming # elements carefully in this sample code. 
tf.reset_default_graph() # build two versions of same network, 1 without and 1 with batch normalization nn = NeuralNet(weights, activation_fn, False) bn = NeuralNet(weights, activation_fn, True) # train and test the two models with tf.Session() as sess: tf.global_variables_initializer().run() nn.train(sess, learning_rate, training_batches, batches_per_sample) bn.train(sess, learning_rate, training_batches, batches_per_sample) nn.test(sess) bn.test(sess) # Display a graph of how validation accuracies changed during training # so we can compare how the models trained and when they converged plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample) """ Explanation: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines. We add batch normalization to layers inside the fully_connected function. Here are some important points about that code: 1. Layers with batch normalization do not include a bias term. 2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.) 3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later. 4. We add the normalization before calling the activation function. In addition to that code, the training step is wrapped in the following with statement: python with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): This line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference. Finally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line: python session.run(train_step, feed_dict={self.input_layer: batch_xs, labels: batch_ys, self.is_training: True}) We'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization. Batch Normalization Demos<a id='demos'></a> This section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier. We'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights. Code to support testing The following two functions support the demos we run in the notebook. The first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see it that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots. The second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really imporant thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. 
This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights. End of explanation """ train_and_test(False, 0.01, tf.nn.relu) """ Explanation: Comparisons between identical networks, with and without batch normalization The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook. The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights. End of explanation """ train_and_test(False, 0.01, tf.nn.relu, 2000, 50) """ Explanation: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max acuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations. If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time over all. (We only trained for 50 thousand batches here so we could plot the comparison.) The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations. End of explanation """ train_and_test(False, 0.01, tf.nn.sigmoid) """ Explanation: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.) In the above example, you should also notice that the networks trained fewer batches per second then what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations. The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights. End of explanation """ train_and_test(False, 1, tf.nn.relu) """ Explanation: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches. The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights. 
End of explanation """ train_and_test(False, 1, tf.nn.relu) """ Explanation: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate. The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens. End of explanation """ train_and_test(False, 1, tf.nn.sigmoid) """ Explanation: In both of the previous examples, the network with batch normalization manages to gets over 98% accuracy, and get near that result almost immediately. The higher learning rate allows the network to train extremely fast. The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights. End of explanation """ train_and_test(False, 1, tf.nn.sigmoid, 2000, 50) """ Explanation: In this example, we switched to a sigmoid activation function. It appears to hande the higher learning rate well, with both networks achieving high accuracy. The cell below shows a similar pair of networks trained for only 2000 iterations. End of explanation """ train_and_test(False, 2, tf.nn.relu) """ Explanation: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced. The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights. End of explanation """ train_and_test(False, 2, tf.nn.sigmoid) """ Explanation: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all. The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights. End of explanation """ train_and_test(False, 2, tf.nn.sigmoid, 2000, 50) """ Explanation: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization. However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster. End of explanation """ train_and_test(True, 0.01, tf.nn.relu) """ Explanation: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose randome values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient. The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights. End of explanation """ train_and_test(True, 0.01, tf.nn.sigmoid) """ Explanation: As the plot shows, without batch normalization the network never learns anything at all. 
But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them. The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights. End of explanation """ train_and_test(True, 1, tf.nn.relu) """ Explanation: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all. The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a> End of explanation """ train_and_test(True, 1, tf.nn.sigmoid) """ Explanation: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere. The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights. End of explanation """ train_and_test(True, 2, tf.nn.relu) """ Explanation: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time tro train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy. The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a> End of explanation """ train_and_test(True, 2, tf.nn.sigmoid) """ Explanation: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck. The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights. End of explanation """ train_and_test(True, 1, tf.nn.relu) """ Explanation: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%. Full Disclosure: Batch Normalization Doesn't Fix Everything Batch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run. This section includes two examples that show runs when batch normalization did not help at all. The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights. End of explanation """ train_and_test(True, 2, tf.nn.relu) """ Explanation: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. 
This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.) The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights. End of explanation """ def fully_connected(self, layer_in, initial_weights, activation_fn=None): """ Creates a standard, fully connected layer. Its number of inputs and outputs will be defined by the shape of `initial_weights`, and its starting weight values will be taken directly from that same parameter. If `self.use_batch_norm` is True, this layer will include batch normalization, otherwise it will not. :param layer_in: Tensor The Tensor that feeds into this layer. It's either the input to the network or the output of a previous layer. :param initial_weights: NumPy array or Tensor Initial values for this layer's weights. The shape defines the number of nodes in the layer. e.g. Passing in 3 matrix of shape (784, 256) would create a layer with 784 inputs and 256 outputs. :param activation_fn: Callable or None (default None) The non-linearity used for the output of the layer. If None, this layer will not include batch normalization, regardless of the value of `self.use_batch_norm`. e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers. """ if self.use_batch_norm and activation_fn: # Batch normalization uses weights as usual, but does NOT add a bias term. This is because # its calculations include gamma and beta variables that make the bias term unnecessary. weights = tf.Variable(initial_weights) linear_output = tf.matmul(layer_in, weights) num_out_nodes = initial_weights.shape[-1] # Batch normalization adds additional trainable variables: # gamma (for scaling) and beta (for shifting). gamma = tf.Variable(tf.ones([num_out_nodes])) beta = tf.Variable(tf.zeros([num_out_nodes])) # These variables will store the mean and variance for this layer over the entire training set, # which we assume represents the general population distribution. # By setting `trainable=False`, we tell TensorFlow not to modify these variables during # back propagation. Instead, we will assign values to these variables ourselves. pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False) pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False) # Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero. # This is the default value TensorFlow uses. epsilon = 1e-3 def batch_norm_training(): # Calculate the mean and variance for the data coming out of this layer's linear-combination step. # The [0] defines an array of axes to calculate over. batch_mean, batch_variance = tf.nn.moments(linear_output, [0]) # Calculate a moving average of the training data's mean and variance while training. # These will be used during inference. # Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter # "momentum" to accomplish this and defaults it to 0.99 decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) # The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean' # and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer. 
# This is necessary because those two operations are not actually in the graph # connecting the linear_output and batch_normalization layers, # so TensorFlow would otherwise just skip them. with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): # During inference, use our estimated population mean and variance to normalize the layer return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon) # Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute # the operation returned from `batch_norm_training`; otherwise it will execute the graph # operation returned from `batch_norm_inference`. batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference) # Pass the batch-normalized layer output through the activation function. # The literature states there may be cases where you want to perform the batch normalization *after* # the activation function, but it is difficult to find any uses of that in practice. return activation_fn(batch_normalized_output) else: # When not using batch normalization, create a standard layer that multiplies # the inputs and weights, adds a bias, and optionally passes the result # through an activation function. weights = tf.Variable(initial_weights) biases = tf.Variable(tf.zeros([initial_weights.shape[-1]])) linear_output = tf.add(tf.matmul(layer_in, weights), biases) return linear_output if not activation_fn else activation_fn(linear_output) """ Explanation: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning. Note: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. They should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures. Batch Normalization: A Detailed Look<a id='implementation_2'></a> The layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization. In order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer. 
We represent the average as $\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$ $$ \mu_B \leftarrow \frac{1}{m}\sum_{i=1}^m x_i $$ We then need to calculate the variance, or mean squared deviation, represented as $\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\mu_B$), which gives us what's called the "deviation" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation. $$ \sigma_{B}^{2} \leftarrow \frac{1}{m}\sum_{i=1}^m (x_i - \mu_B)^2 $$ Once we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.) $$ \hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}} $$ Above, we said "(almost) standard deviation". That's because the real standard deviation for the batch is calculated by $\sqrt{\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch. Why increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account. At this point, we have a normalized value, represented as $\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\gamma$, and then add a beta value, $\beta$. Both $\gamma$ and $\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate. $$ y_i \leftarrow \gamma \hat{x_i} + \beta $$ We now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice. In NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations: python batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training) The next section shows you how to implement the math directly. 
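To make the four equations above concrete, here is a small, self-contained NumPy sketch (an addition for illustration only; the batch values are made up, and gamma and beta are left at their initial values of one and zero) that applies them to a toy batch exactly as written:
python
import numpy as np

# Toy "linear_output": a batch of m = 4 examples for a layer with 3 nodes.
x = np.array([[1.0, 2.0, 0.5],
              [2.0, 4.0, 1.5],
              [3.0, 6.0, 2.5],
              [4.0, 8.0, 3.5]])

epsilon = 0.001                                    # the same small constant the notebook's code uses
gamma = np.ones(x.shape[1])                        # learnable scale, initialised to 1
beta = np.zeros(x.shape[1])                        # learnable shift, initialised to 0

mu_B = x.mean(axis=0)                              # batch mean, one value per node
sigma2_B = ((x - mu_B) ** 2).mean(axis=0)          # batch variance (mean squared deviation)
x_hat = (x - mu_B) / np.sqrt(sigma2_B + epsilon)   # normalized values
y = gamma * x_hat + beta                           # scaled and shifted output of the layer

print(y)
Each column (layer node) comes out roughly zero-mean and unit-variance before the learned scale and shift are applied.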
Batch normalization without the tf.layers package Our implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package. However, if you would like to implement batch normalization at a lower level, the following code shows you how. It uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package. 1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before. End of explanation """ def batch_norm_test(test_training_accuracy): """ :param test_training_accuracy: bool If True, perform inference with batch normalization using batch mean and variance; if False, perform inference with batch normalization using estimated population mean and variance. """ weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,100), scale=0.05).astype(np.float32), np.random.normal(size=(100,10), scale=0.05).astype(np.float32) ] tf.reset_default_graph() # Train the model bn = NeuralNet(weights, tf.nn.relu, True) # First train the network with tf.Session() as sess: tf.global_variables_initializer().run() bn.train(sess, 0.01, 2000, 2000) bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True) """ Explanation: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points: It explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function. It initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \leftarrow \gamma \hat{x_i} + \beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights. Unlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly. TensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block. The actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization. tf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation. We use the tf.nn.moments function to calculate the batch mean and variance. 2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. 
However, it uses these lines to ensure population statistics are updated when using batch normalization: python if self.use_batch_norm: with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) else: train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) Our new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line: python train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy) 3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training: python return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon) with these lines: python normalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon) return gamma * normalized_linear_output + beta And replace this line in batch_norm_inference: python return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon) with these lines: python normalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon) return gamma * normalized_linear_output + beta As you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\hat{x_i}$: $$ \hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}} $$ And the second line is a direct translation of the following equation: $$ y_i \leftarrow \gamma \hat{x_i} + \beta $$ We still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you. Why the difference between training and inference? In the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so: python batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training) And that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function: python session.run(train_step, feed_dict={self.input_layer: batch_xs, labels: batch_ys, self.is_training: True}) If you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that? First, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at at time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training). 
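To see concretely why the two modes behave so differently, here is a tiny NumPy illustration (added for clarity; the population statistics below are made-up numbers, not values taken from the model) that normalizes a one-example "batch" both ways:
python
import numpy as np

epsilon = 0.001
x = np.array([[2.5, -1.0, 0.3]])                  # a "batch" holding a single example

# Batch statistics (what you get if is_training is left True at prediction time):
batch_mean = x.mean(axis=0)                       # equals the example itself
batch_variance = x.var(axis=0)                    # all zeros for a one-example batch
print((x - batch_mean) / np.sqrt(batch_variance + epsilon))   # -> [[0. 0. 0.]]

# Population statistics estimated during training (made-up numbers for illustration):
pop_mean = np.array([0.1, -0.2, 0.05])
pop_variance = np.array([1.3, 0.9, 1.1])
print((x - pop_mean) / np.sqrt(pop_variance + epsilon))       # -> meaningful, non-zero values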
End of explanation """ batch_norm_test(True) """ Explanation: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training. End of explanation """ batch_norm_test(False) """ Explanation: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer. Note: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions. To overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it "normalize" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training. So in the following example, we pass False for test_training_accuracy, which tells the network that we want it to perform inference with the population statistics it calculates during training. End of explanation """
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/recommendation_systems/solutions/basic_ranking.ipynb
apache-2.0
!pip install -q tensorflow-recommenders !pip install -q --upgrade tensorflow-datasets """ Explanation: Recommending movies: ranking Learning Objectives Get our data and split it into a training and test set. Implement a ranking model. Fit and evaluate it. Introduction The retrieval stage is responsible for selecting an initial set of hundreds of candidates from all possible candidates. The main objective of this model is to efficiently weed out all candidates that the user is not interested in. Because the retrieval model may be dealing with millions of candidates, it has to be computationally efficient. The ranking stage takes the outputs of the retrieval model and fine-tunes them to select the best possible handful of recommendations. Its task is to narrow down the set of items the user may be interested in to a shortlist of likely candidates. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Imports Let's first get our imports out of the way. End of explanation """ # You can use any Python source file as a module by executing an import statement in some other Python source file. # The import statement combines two operations; it searches for the named module, then it binds the # results of that search to a name in the local scope. import os import pprint import tempfile from typing import Dict, Text import numpy as np import tensorflow as tf import tensorflow_datasets as tfds import tensorflow_recommenders as tfrs """ Explanation: Note: Please ignore the incompatibility errors and re-run the above cell before proceeding for the lab. End of explanation """ # Show the currently installed version of TensorFlow print("TensorFlow version: ",tf.version.VERSION) """ Explanation: This notebook uses TF2.x. Please check your tensorflow version using the cell below. End of explanation """ ratings = tfds.load("movielens/100k-ratings", split="train") ratings = ratings.map(lambda x: { "movie_title": x["movie_title"], "user_id": x["user_id"], "user_rating": x["user_rating"] }) """ Explanation: Lab Task 1: Preparing the dataset We're going to use the same data as the retrieval tutorial. This time, we're also going to keep the ratings: these are the objectives we are trying to predict. End of explanation """ tf.random.set_seed(42) shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False) # Put 80% of the ratings in the train set, and 20% in the test set. # TODO 1a train = shuffled.take(80_000) test = shuffled.skip(80_000).take(20_000) """ Explanation: As before, we'll split the data by putting 80% of the ratings in the train set, and 20% in the test set. End of explanation """ movie_titles = ratings.batch(1_000_000).map(lambda x: x["movie_title"]) user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"]) unique_movie_titles = np.unique(np.concatenate(list(movie_titles))) unique_user_ids = np.unique(np.concatenate(list(user_ids))) """ Explanation: Let's also figure out unique user ids and movie titles present in the data. This is important because we need to be able to map the raw values of our categorical features to embedding vectors in our models. To do that, we need a vocabulary that maps a raw feature value to an integer in a contiguous range: this allows us to look up the corresponding embeddings in our embedding tables. 
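As a quick, purely illustrative sketch of that mapping (the three-item vocabulary here is made up; the real vocabularies are the unique_user_ids and unique_movie_titles computed above), a StringLookup layer turns raw string ids into contiguous integers that index an embedding table:
python
lookup = tf.keras.layers.experimental.preprocessing.StringLookup(
    vocabulary=["42", "43", "44"], mask_token=None)
print(lookup(tf.constant(["43", "42", "999"])))    # unknown ids fall into the out-of-vocabulary bucket (index 0)

embedding = tf.keras.layers.Embedding(input_dim=3 + 1, output_dim=32)  # +1 row for the OOV bucket
print(embedding(lookup(tf.constant(["43"]))).shape)                    # -> (1, 32)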
End of explanation """ class RankingModel(tf.keras.Model): def __init__(self): super().__init__() embedding_dimension = 32 # Compute embeddings for users. # TODO 2a self.user_embeddings = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=unique_user_ids, mask_token=None), tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension) ]) # Compute embeddings for movies. # TODO 2b self.movie_embeddings = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=unique_movie_titles, mask_token=None), tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension) ]) # Compute predictions. self.ratings = tf.keras.Sequential([ # Learn multiple dense layers. tf.keras.layers.Dense(256, activation="relu"), tf.keras.layers.Dense(64, activation="relu"), # Make rating predictions in the final layer. tf.keras.layers.Dense(1) ]) def call(self, inputs): user_id, movie_title = inputs user_embedding = self.user_embeddings(user_id) movie_embedding = self.movie_embeddings(movie_title) return self.ratings(tf.concat([user_embedding, movie_embedding], axis=1)) """ Explanation: Implementing a model Architecture Ranking models do not face the same efficiency constraints as retrieval models do, and so we have a little bit more freedom in our choice of architectures. A model composed of multiple stacked dense layers is a relatively common architecture for ranking tasks. We can implement it as follows: End of explanation """ RankingModel()((["42"], ["One Flew Over the Cuckoo's Nest (1975)"])) """ Explanation: This model takes user ids and movie titles, and outputs a predicted rating: End of explanation """ # `tf.keras.losses.MeanSquaredError()` computes the mean of squares of errors between labels and predictions. task = tfrs.tasks.Ranking( loss = tf.keras.losses.MeanSquaredError(), metrics=[tf.keras.metrics.RootMeanSquaredError()] ) """ Explanation: Loss and metrics The next component is the loss used to train our model. TFRS has several loss layers and tasks to make this easy. In this instance, we'll make use of the Ranking task object: a convenience wrapper that bundles together the loss function and metric computation. We'll use it together with the MeanSquaredError Keras loss in order to predict the ratings. End of explanation """ class MovielensModel(tfrs.models.Model): def __init__(self): super().__init__() self.ranking_model: tf.keras.Model = RankingModel() self.task: tf.keras.layers.Layer = tfrs.tasks.Ranking( loss = tf.keras.losses.MeanSquaredError(), metrics=[tf.keras.metrics.RootMeanSquaredError()] ) def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor: rating_predictions = self.ranking_model( (features["user_id"], features["movie_title"])) # The task computes the loss and the metrics. return self.task(labels=features["user_rating"], predictions=rating_predictions) """ Explanation: The task itself is a Keras layer that takes true and predicted as arguments, and returns the computed loss. We'll use that to implement the model's training loop. The full model We can now put it all together into a model. TFRS exposes a base model class (tfrs.models.Model) which streamlines bulding models: all we need to do is to set up the components in the __init__ method, and implement the compute_loss method, taking in the raw features and returning a loss value. The base model will then take care of creating the appropriate training loop to fit our model. 
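As a small sanity check (not part of the original lab), the Ranking task can also be called directly with made-up ratings to confirm that it behaves like an ordinary Keras layer and simply returns a scalar loss:
python
dummy_task = tfrs.tasks.Ranking(
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[tf.keras.metrics.RootMeanSquaredError()])
print(dummy_task(labels=tf.constant([3.0, 4.0]),
                 predictions=tf.constant([2.5, 4.5])))   # mean squared error of 0.25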
End of explanation """ # `tf.keras.optimizers.Adagrad()` optimizer that implements the Adagrad algorithm. model = MovielensModel() model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1)) """ Explanation: Fitting and evaluating After defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model. Let's first instantiate the model. End of explanation """ cached_train = train.shuffle(100_000).batch(8192).cache() cached_test = test.batch(4096).cache() """ Explanation: Then shuffle, batch, and cache the training and evaluation data. End of explanation """ model.fit(cached_train, epochs=3) """ Explanation: Then train the model: End of explanation """ # Evaluate our model on the test set # TODO 3a model.evaluate(cached_test, return_dict=True) """ Explanation: As the model trains, the loss is falling and the RMSE metric is improving. Finally, we can evaluate our model on the test set: End of explanation """
Housebeer/Natural-Gas-Model
.ipynb_checkpoints/Matching Market v2-checkpoint.ipynb
mit
%matplotlib inline import random as rnd import pandas as pd class Seller(): wta = [] def __init__(self,name): self.name = name # the supplier has n quantities that they can sell # they may be willing to sell this quantity anywhere from a lower price of l # to a higher price of u def set_quantity(self,n,l,u): wta = [] for i in range(n): p = rnd.uniform(l,u) self.wta.append(p) def get_name(self): return self.name def get_asks(self): return self.wta class Buyer(): def __init__(self, name): self.wtp = [] self.name = name # the supplier has n quantities that they can buy # they may be willing to sell this quantity anywhere from a lower price of l # to a higher price of u def set_quantity(self,n,l,u): for i in range(n): p = rnd.uniform(l,u) self.wtp.append(p) def get_name(self): return self.name # return list of willingness to pay def get_bids(self): return self.wtp class Book(): ledger = pd.DataFrame(columns = ("role","name","price","cleared")) def set_asks(self,seller_list): # ask each seller their name # ask each seller their willingness # for each willingness append the data frame for seller in seller_list: seller_name = seller.get_name() seller_price = seller.get_asks() for price in seller_price: self.ledger=self.ledger.append({"role":"seller","name":seller_name,"price":price,"cleared":"in process"}, ignore_index=True) def set_bids(self,buyer_list): # ask each seller their name # ask each seller their willingness # for each willingness append the data frame for buyer in buyer_list: buyer_name = buyer.get_name() buyer_price = buyer.get_bids() for price in buyer_price: self.ledger=self.ledger.append({"role":"buyer","name":buyer_name,"price":price,"cleared":"in process"}, ignore_index=True) def update_ledger(self,ledger): self.ledger = ledger def get_ledger(self): return self.ledger class Market(): count = 0 last_price = '' book = Book() b = [] s = [] ledger = '' #def __init__(self): def add_buyer(self,buyer): self.b.append(buyer) def add_seller(self,seller): self.s.append(seller) def set_book(self): self.book.set_bids(self.b) self.book.set_asks(self.s) def get_ledger(self): self.ledger = self.book.get_ledger() return self.ledger def get_bids(self): # this is a data frame ledger = self.book.get_ledger() rows= ledger.loc[ledger['role'] == 'buyer'] # this is a series prices=rows['price'] # this is a list bids = prices.tolist() return bids def get_asks(self): # this is a data frame ledger = self.book.get_ledger() rows = ledger.loc[ledger['role'] == 'seller'] # this is a series prices=rows['price'] # this is a list asks = prices.tolist() return asks # return the price at which the market clears # this fails because there are more buyers then sellers def get_clearing_price(self): # buyer makes a bid starting with the buyer which wants it most b = self.get_bids() s = self.get_asks() # highest to lowest self.b=sorted(b, reverse=True) # lowest to highest self.s=sorted(s, reverse=False) # find out whether there are more buyers or sellers # then drop the excess buyers or sellers; they won't compete n = len(b) m = len(s) # there are more sellers than buyers # drop off the highest priced sellers if (m > n): s = s[0:n] matcher = n # There are more buyers than sellers # drop off the lowest bidding buyers else: b = b[0:m] matcher = m # It's possible that not all items sold actually clear the market here for i in range(matcher): if (self.b[i] > self.s[i]): self.count +=1 self.last_price = self.b[i] return self.last_price # TODO: Annotate the ledger def annotate_ledger(self,clearing_price): ledger = 
self.book.get_ledger() for index, row in ledger.iterrows(): if (row['role'] == 'seller'): if (row['price'] < clearing_price): ledger.ix[index,'cleared'] = 'True' else: ledger.ix[index,'cleared'] = 'False' else: if (row['price'] > clearing_price): ledger.ix[index,'cleared'] = 'True' else: ledger.ix[index,'cleared'] = 'False' self.book.update_ledger(ledger) def get_units_cleared(self): return self.count """ Explanation: Matching Market This simple model consists of a buyer, a supplier, and a market. The buyer represents a group of customers whose willingness to pay for a single unit of the good is captured by a vector of prices wta. You can initiate the buyer with a set_quantity function which randomly assigns the willingness to pay according to your specifications. You may ask for these willingness to pay quantities with a getbid function. The supplier is similiar, but instead the supplier is willing to be paid to sell a unit of technology. The supplier for instance may have non-zero variable costs that make them unwilling to produce the good unless they receive a specified price. Similarly the supplier has a get_ask function which returns a list of desired prices. The willingness to pay or sell are set randomly using uniform random distributions. The resultant lists of bids are effectively a demand curve. Likewise the list of asks is effectively a supply curve. A more complex determination of bids and asks is possible, for instance using time of year to vary the quantities being demanded. New in version 2 The actioneer now has a book to Microeconomic Foundations The market assumes the presence of an auctioneer which will create a book, which seeks to match the bids and the asks as much as possible. If the auctioneer is neutral, then it is incentive compatible for the buyer and the supplier to truthfully announce their bids and asks. The auctioneer will find a single price which clears as much of the market as possible. Clearing the market means that as many willing swaps happens as possible. You may ask the market object at what price the market clears with the get_clearing_price function. You may also ask the market how many units were exchanged with the get_units_cleared function. Agent-Based Objects The following section presents three objects which can be used to make an agent-based model of an efficient, two-sided market. 
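Before that, to make the clearing mechanism concrete, here is a small standalone sketch (illustrative numbers only, separate from the model classes above) of how a single price can clear as many mutually beneficial swaps as possible:

# Toy clearing example: sorted bids (high to low) against sorted asks (low to high).
bids = sorted([8.0, 6.0, 3.0, 1.0], reverse=True)   # willingness to pay
asks = sorted([2.0, 4.0, 5.0, 9.0])                  # willingness to accept
cleared, clearing_price = 0, None
for bid, ask in zip(bids, asks):
    if bid > ask:               # both sides gain from this swap
        cleared += 1
        clearing_price = bid    # simplified version of Market.get_clearing_price()
    else:
        break                   # remaining pairs cannot trade profitably
print(cleared, clearing_price)  # -> 2 trades clear at a price of 6.0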
End of explanation """ # Test the Book ledger = pd.DataFrame(columns = ("role","name","price","cleared")) ledger=ledger.append({"role":"seller","name":"gas","price":24,"cleared":"in process"},ignore_index=True) ledger=ledger.append({"role":"buyer","name":"gas","price":25,"cleared":"in process"},ignore_index=True) #df.append({'foo':1, 'bar':2}, ignore_index=True) rows=ledger.loc[ledger['role'] == 'seller'] print(rows['price'].tolist()) for index, row in ledger.iterrows(): if (row['role'] == 'seller'): print("yes","index") ledger.ix[index,'cleared']='True' row['cleared']='True' else: print("No change") print() print(ledger) """ Explanation: Test DataFrame Appending End of explanation """ # make a supplier and get the asks supplier = Seller("Natural Gas") supplier.set_quantity(100,0,10) book = Book() book.set_asks([supplier]) # make a buyer and get the bids buyerNames = ('home', 'industry', 'cat') buyerDictionary = {} for name in buyerNames: buyerDictionary[name] = Buyer('%s' %name) for obj in buyerDictionary.values(): obj.set_quantity(100,0,10) ''' # make a buyer and get the bids buyer1 = Buyer("Home") buyer1.set_quantity(100,0,10) # make a buyer and get the bids buyer2 = Buyer("Industry") buyer2.set_quantity(100,0,10) ''' book.set_bids([buy for buy in buyerDictionary.values()]) ledger = book.get_ledger() gas_market = Market() gas_market.add_seller(supplier) gas_market.add_buyer(buyer1) gas_market.add_buyer(buyer2) gas_market.set_book() asks = gas_market.get_asks() #print(asks) clearing = gas_market.get_clearing_price() gas_market.annotate_ledger(clearing) new_ledger = gas_market.get_ledger() pd.DataFrame.head(new_ledger) """ Explanation: Example Market In the following code example we use the buyer and supplier objects to create a market. At the market a single price is announced which causes as many units of goods to be swapped as possible. The buyers and sellers stop trading when it is no longer in their own interest to continue. End of explanation """ # To # Create a dictionary for the properties of the agents objectNames = ("foo", "bar", "cat", "mouse") objectDictionary = {} for name in objectNames: objectDictionary[name] = MyClass(property=foo,property2=bar) for obj in objectDictionary.itervalues(): obj.DoStuff(variable = foobar) """ Explanation: Operations Research Formulation The market can also be formulated as a very simple linear program or linear complementarity problem. It is clearer and easier to implement this market clearing mechanism with agents. One merit of the agent-based approach is that we don't need linear or linearizeable supply and demand function. The auctioneer is effectively following a very simple linear program subject to constraints on units sold. The auctioneer is, in the primal model, maximizing the consumer utility received by customers, with respect to the price being paid, subject to a fixed supply curve. On the dual side the auctioneer is minimizing the cost of production for the supplier, with respect to quantity sold, subject to a fixed demand curve. It is the presumed neutrality of the auctioneer which justifies the honest statement of supply and demand. An alternative formulation is a linear complementarity problem. Here the presence of an optimal space of trades ensures that there is a Pareto optimal front of possible trades. The perfect opposition of interests in dividing the consumer and producer surplus means that this is a zero sum game. 
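As a toy illustration of the primal, welfare-maximizing view (hypothetical numbers; scipy.optimize.linprog is assumed to be available and is not used elsewhere in this notebook), the auctioneer's problem can be sketched as:

import numpy as np
from scipy.optimize import linprog

bids = np.array([9.0, 7.0, 5.0, 2.0])   # demand curve, sorted high to low
asks = np.array([1.0, 3.0, 4.0, 8.0])   # supply curve, sorted low to high
# maximize total surplus sum((bids - asks) * x) subject to 0 <= x_i <= 1 per unit
res = linprog(-(bids - asks), bounds=[(0, 1)] * len(bids))
print(res.x.round())                    # -> [1. 1. 1. 0.]: trade only where bid > ask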
Furthermore, the solution to this zero-sum game maximizes societal welfare and is therefore the Hicks-optimal solution. Next Steps A possible addition to this model would be a weekly varying customer demand, for instance caused by the use of natural gas as a heating fuel. This would require the bids and asks to be time-varying, and the market to be run over successive time periods. A second addition would be to introduce transport costs, or to enable intermediate goods to be produced. This would need a more elaborate market operator. Another possible addition would be a profit-maximizing broker. This may require adding beliefs, fictitious play, or message passing. The object orientation of the models will probably need to be further rationalized; right now the market requires a very particular ordering of calls to function correctly. End of explanation """
mne-tools/mne-tools.github.io
0.13/_downloads/plot_artifacts_correction_maxwell_filtering.ipynb
bsd-3-clause
import mne from mne.preprocessing import maxwell_filter data_path = mne.datasets.sample.data_path() """ Explanation: Artifact correction with Maxwell filter This tutorial shows how to clean MEG data with Maxwell filtering. Maxwell filtering in MNE can be used to suppress sources of external intereference and compensate for subject head movements. See maxwell for more details. End of explanation """ raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' ctc_fname = data_path + '/SSS/ct_sparse_mgh.fif' fine_cal_fname = data_path + '/SSS/sss_cal_mgh.dat' """ Explanation: Set parameters End of explanation """ raw = mne.io.read_raw_fif(raw_fname, add_eeg_ref=False) raw.info['bads'] = ['MEG 2443', 'EEG 053', 'MEG 1032', 'MEG 2313'] # set bads # Here we don't use tSSS (set st_duration) because MGH data is very clean raw_sss = maxwell_filter(raw, cross_talk=ctc_fname, calibration=fine_cal_fname) """ Explanation: Preprocess with Maxwell filtering End of explanation """ tmin, tmax = -0.2, 0.5 event_id = {'Auditory/Left': 1} events = mne.find_events(raw, 'STI 014') picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True, include=[], exclude='bads') for r, kind in zip((raw, raw_sss), ('Raw data', 'Maxwell filtered data')): epochs = mne.Epochs(r, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=dict(eog=150e-6), preload=False) evoked = epochs.average() evoked.plot(window_title=kind, ylim=dict(grad=(-200, 250), mag=(-600, 700))) """ Explanation: Select events to extract epochs from, pick M/EEG channels, and plot evoked End of explanation """
ankitpandey2708/ml
recommender-system/ml-1m/model.ipynb
mit
import pandas as pd import numpy as np r_cols = ['user_id', 'movie_id', 'rating'] m_cols = ['movie_id', 'title', 'genres'] ratings_df = pd.read_csv('ratings.dat',sep='::', names=r_cols, engine='python', usecols=range(3), dtype = int) movies_df = pd.read_csv('movies.dat', sep='::', names=m_cols, engine='python') movies_df['movie_id'] = movies_df['movie_id'].apply(pd.to_numeric) movies_df.head(3) ratings_df.head(3) """ Explanation: Matrix Factorization via Singular Value Decomposition Matrix factorization is the breaking down of one matrix in a product of multiple matrices. It's extremely well studied in mathematics, and it's highly useful. There are many different ways to factor matrices, but singular value decomposition is particularly useful for making recommendations. So what is singular value decomposition (SVD)? At a high level, SVD is an algorithm that decomposes a matrix $R$ into the best lower rank (i.e. smaller/simpler) approximation of the original matrix $R$. Mathematically, it decomposes R into a two unitary matrices and a diagonal matrix: $$\begin{equation} R = U\Sigma V^{T} \end{equation}$$ where R is users's ratings matrix, $U$ is the user "features" matrix, $\Sigma$ is the diagonal matrix of singular values (essentially weights), and $V^{T}$ is the movie "features" matrix. $U$ and $V^{T}$ are orthogonal, and represent different things. $U$ represents how much users "like" each feature and $V^{T}$ represents how relevant each feature is to each movie. To get the lower rank approximation, we take these matrices and keep only the top $k$ features, which we think of as the underlying tastes and preferences vectors. End of explanation """ R_df = ratings_df.pivot(index = 'user_id', columns ='movie_id', values = 'rating').fillna(0) R_df.head() """ Explanation: These look good, but I want the format of my ratings matrix to be one row per user and one column per movie. I'll pivot ratings_df to get that and call the new variable R. End of explanation """ R = R_df.as_matrix() user_ratings_mean = np.mean(R, axis = 1) R_demeaned = R - user_ratings_mean.reshape(-1, 1) """ Explanation: The last thing I need to do is de-mean the data (normalize by each users mean) and convert it from a dataframe to a numpy array. End of explanation """ from scipy.sparse.linalg import svds U, sigma, Vt = svds(R_demeaned, k = 50) """ Explanation: Singular Value Decomposition Scipy and Numpy both have functions to do the singular value decomposition. I'm going to use the Scipy function svds because it let's me choose how many latent factors I want to use to approximate the original ratings matrix (instead of having to truncate it after). End of explanation """ sigma = np.diag(sigma) """ Explanation: Done. The function returns exactly what I detailed earlier in this post, except that the $\Sigma$ returned is just the values instead of a diagonal matrix. This is useful, but since I'm going to leverage matrix multiplication to get predictions I'll convert it to the diagonal matrix form. 
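As a quick, optional sanity check (not part of the original analysis; it reuses U, sigma, Vt, and R_demeaned from the cells above, with sigma already in diagonal form), the quality of the rank-50 approximation can be measured directly:

# Reconstruction error of the rank-50 approximation of the de-meaned ratings matrix.
R_approx = np.dot(np.dot(U, sigma), Vt)
rmse = np.sqrt(np.mean((R_demeaned - R_approx) ** 2))
print(rmse)   # smaller is better; the choice of k trades accuracy against fitting noise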
End of explanation """ all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1) preds_df = pd.DataFrame(all_user_predicted_ratings, columns = R_df.columns) preds_df.head() def recommend_movies(predictions_df, userID, movies_df, original_ratings_df, num_recommendations): # Get and sort the user's predictions user_row_number = userID - 1 # UserID starts at 1, not 0 sorted_user_predictions = preds_df.iloc[user_row_number].sort_values(ascending=False) # UserID starts at 1 # Get the user's data and merge in the movie information. user_data = original_ratings_df[original_ratings_df.user_id == (userID)] user_full = (user_data.merge(movies_df, how = 'left', left_on = 'movie_id', right_on = 'movie_id'). sort_values(['rating'], ascending=False) ) print('User {0} has already rated {1} movies.'.format(userID, user_full.shape[0])) print('Recommending highest {0} predicted ratings movies not already rated.'.format(num_recommendations)) # Recommend the highest predicted rating movies that the user hasn't seen yet. recommendations = (movies_df[~movies_df['movie_id'].isin(user_full['movie_id'])]. merge(pd.DataFrame(sorted_user_predictions).reset_index(), how = 'left', left_on = 'movie_id', right_on = 'movie_id'). rename(columns = {user_row_number: 'Predictions'}). sort_values('Predictions', ascending = False). iloc[:num_recommendations, :-1] ) return user_full, recommendations already_rated, predictions = recommend_movies(preds_df, 837, movies_df, ratings_df, 10) predictions already_rated.head(10) """ Explanation: Making Predictions from the Decomposed Matrices I now have everything I need to make movie ratings predictions for every user. I can do it all at once by following the math and matrix multiply $U$, $\Sigma$, and $V^{T}$ back to get the rank $k=50$ approximation of $R$. I also need to add the user means back to get the actual star ratings prediction. End of explanation """
mne-tools/mne-tools.github.io
0.14/_downloads/plot_time_frequency_simulated.ipynb
bsd-3-clause
# Authors: Hari Bharadwaj <hari@nmr.mgh.harvard.edu> # Denis Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import numpy as np from mne import create_info, EpochsArray from mne.time_frequency import tfr_multitaper, tfr_stockwell, tfr_morlet print(__doc__) """ Explanation: ======================================================== Time-frequency on simulated data (Multitaper vs. Morlet) ======================================================== This examples demonstrates on simulated data the different time-frequency estimation methods. It shows the time-frequency resolution trade-off and the problem of estimation variance. End of explanation """ sfreq = 1000.0 ch_names = ['SIM0001', 'SIM0002'] ch_types = ['grad', 'grad'] info = create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types) n_times = int(sfreq) # 1 second long epochs n_epochs = 40 seed = 42 rng = np.random.RandomState(seed) noise = rng.randn(n_epochs, len(ch_names), n_times) # Add a 50 Hz sinusoidal burst to the noise and ramp it. t = np.arange(n_times, dtype=np.float) / sfreq signal = np.sin(np.pi * 2. * 50. * t) # 50 Hz sinusoid signal signal[np.logical_or(t < 0.45, t > 0.55)] = 0. # Hard windowing on_time = np.logical_and(t >= 0.45, t <= 0.55) signal[on_time] *= np.hanning(on_time.sum()) # Ramping data = noise + signal reject = dict(grad=4000) events = np.empty((n_epochs, 3), dtype=int) first_event_sample = 100 event_id = dict(sin50hz=1) for k in range(n_epochs): events[k, :] = first_event_sample + k * n_times, 0, event_id['sin50hz'] epochs = EpochsArray(data=data, info=info, events=events, event_id=event_id, reject=reject) """ Explanation: Simulate data End of explanation """ freqs = np.arange(5., 100., 3.) # You can trade time resolution or frequency resolution or both # in order to get a reduction in variance # (1) Least smoothing (most variance/background fluctuations). n_cycles = freqs / 2. time_bandwidth = 2.0 # Least possible frequency-smoothing (1 taper) power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles, time_bandwidth=time_bandwidth, return_itc=False) # Plot results. Baseline correct based on first 100 ms. power.plot([0], baseline=(0., 0.1), mode='mean', vmin=-1., vmax=3., title='Sim: Least smoothing, most variance') # (2) Less frequency smoothing, more time smoothing. n_cycles = freqs # Increase time-window length to 1 second. time_bandwidth = 4.0 # Same frequency-smoothing as (1) 3 tapers. power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles, time_bandwidth=time_bandwidth, return_itc=False) # Plot results. Baseline correct based on first 100 ms. power.plot([0], baseline=(0., 0.1), mode='mean', vmin=-1., vmax=3., title='Sim: Less frequency smoothing, more time smoothing') # (3) Less time smoothing, more frequency smoothing. n_cycles = freqs / 2. time_bandwidth = 8.0 # Same time-smoothing as (1), 7 tapers. power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles, time_bandwidth=time_bandwidth, return_itc=False) # Plot results. Baseline correct based on first 100 ms. 
power.plot([0], baseline=(0., 0.1), mode='mean', vmin=-1., vmax=3., title='Sim: Less time smoothing, more frequency smoothing') # ############################################################################# # Stockwell (S) transform # S uses a Gaussian window to balance temporal and spectral resolution # Importantly, frequency bands are phase-normalized, hence strictly comparable # with regard to timing, and, the input signal can be recoverd from the # transform in a lossless way if we disregard numerical errors. fmin, fmax = freqs[[0, -1]] for width in (0.7, 3.0): power = tfr_stockwell(epochs, fmin=fmin, fmax=fmax, width=width) power.plot([0], baseline=(0., 0.1), mode='mean', title='Sim: Using S transform, width ' '= {:0.1f}'.format(width), show=True) # ############################################################################# # Finally, compare to morlet wavelet n_cycles = freqs / 2. power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, return_itc=False) power.plot([0], baseline=(0., 0.1), mode='mean', vmin=-1., vmax=3., title='Sim: Using Morlet wavelet') """ Explanation: Consider different parameter possibilities for multitaper convolution End of explanation """
austinjalexander/sandbox
python/py/nanodegree/intro_ds/final_project/IntroDS-ProjectOne-Section1.ipynb
mit
import inflect # for string manipulation import numpy as np import pandas as pd import scipy as sp import scipy.stats as st import matplotlib.pyplot as plt %matplotlib inline filename = '/Users/excalibur/py/nanodegree/intro_ds/final_project/improved-dataset/turnstile_weather_v2.csv' # import data data = pd.read_csv(filename) """ Explanation: Analyzing the NYC Subway Dataset Intro to Data Science: Final Project 1, Part 2 (Short Questions) Section 1. Statistical Test Austin J. Alexander Import Directives and Initial DataFrame Creation End of explanation """ entries_hourly_by_row = data['ENTRIESn_hourly'].values def map_column_to_entries_hourly(column): instances = column.values # e.g., longitude_instances = data['longitude'].values # reduce entries_hourly = {} # e.g., longitude_entries_hourly = {} for i in np.arange(len(instances)): if instances[i] in entries_hourly: entries_hourly[instances[i]] += float(entries_hourly_by_row[i]) else: entries_hourly[instances[i]] = float(entries_hourly_by_row[i]) return entries_hourly # e.g., longitudes, entries def create_df(entries_hourly_dict, column1name): # e.g, longitude_df = pd.DataFrame(data=longitude_entries_hourly.items(), columns=['longitude','entries']) df = pd.DataFrame(data=entries_hourly_dict.items(), columns=[column1name,'entries']) return df # e.g, longitude_df rain_entries_hourly = map_column_to_entries_hourly(data['rain']) rain_df = create_df(rain_entries_hourly, 'rain') rain_days = data[data['rain'] == 1] no_rain_days = data[data['rain'] == 0] def plot_box(sample1, sample2): plt.boxplot([sample2, sample1], vert=False) plt.title('NUMBER OF ENTRIES PER SAMPLE') plt.xlabel('ENTRIESn_hourly') plt.yticks([1, 2], ['Sample 2', 'Sample 1']) plt.show() """ Explanation: Functions for Getting, Mapping, and Plotting Data End of explanation """ def describe_samples(sample1, sample2): size1, min_max1, mean1, var1, skew1, kurt1 = st.describe(sample1) size2, min_max2, mean2, var2, skew2, kurt2 = st.describe(sample2) med1 = np.median(sample1) med2 = np.median(sample2) std1 = np.std(sample1) std2 = np.std(sample2) print "Sample 1 (rainy days):\n min = {0}, max = {1},\n mean = {2:.2f}, median = {3}, var = {4:.2f}, std = {5:.2f}".format(min_max1[0], min_max1[1], mean1, med1, var1, std1) print "Sample 2 (non-rainy days):\n min = {0}, max = {1},\n mean = {2:.2f}, median = {3}, var = {4:.2f}, std = {5:.2f}".format(min_max2[0], min_max2[1], mean2, med2, var2, std2) """ Explanation: Function for Basic Statistics End of explanation """ class MannWhitneyU: def __init__(self,n): self.n = n self.num_of_tests = 1000 self.sample1 = 0 self.sample2 = 0 def sample_and_test(self, plot, describe): self.sample1 = np.random.choice(rain_days['ENTRIESn_hourly'], size=self.n, replace=False) self.sample2 = np.random.choice(no_rain_days['ENTRIESn_hourly'], size=self.n, replace=False) ### the following two self.sample2 assignments are for testing purposes ### #self.sample2 = self.sample1 # test when samples are same #self.sample2 = np.random.choice(np.random.randn(self.n),self.n) # test for when samples are very different if plot == True: plot_box(self.sample1,self.sample2) if describe == True: describe_samples(self.sample1,self.sample2) return st.mannwhitneyu(self.sample1, self.sample2) def effect_sizes(self, U): # Wendt's rank-biserial correlation r = (1 - np.true_divide((2*U),(self.n*self.n))) # Cohen's d s = np.sqrt(np.true_divide((((self.n-1)*np.std(self.sample1)**2) + ((self.n-1)*np.std(self.sample2)**2)), (self.n+self.n-2))) d = np.true_divide((np.mean(self.sample1) - 
np.mean(self.sample2)), s) return r,d def trial_series(self): success = 0 U_values = [] p_values = [] d_values = [] r_values = [] for i in np.arange(self.num_of_tests): U, p = self.sample_and_test(False, False) r, d = self.effect_sizes(U) U_values.append(U) # scipy.stats.mannwhitneyu returns p for a one-sided hypothesis, # so multiply by 2 for two-sided p_values.append(p*2) d_values.append(d) r_values.append(r) if p <= 0.05: success += 1 print "n = {0}".format(self.n) print "average U value: {0:.2f}".format(np.mean(U_values)) print "number of times p <= 0.05: {0}/{1} ({2}%)".format(success, self.num_of_tests, (np.true_divide(success,self.num_of_tests)*100)) print "average p value: {0:.2f}".format(np.mean(p_values)) print "average rank-biserial r value: {0:.2f}".format(np.mean(r_values)) print "average Cohen's d value: {0:.2f}".format(np.mean(d_values)) plt.hist(p_values, color='green', alpha=0.3) plt.show() """ Explanation: Formulas Implemented (i.e., not included in modules/packages) Wendt's rank-biserial correlation $r$ $$r = 1 - \frac{2U}{n_{1}n_{2}}$$ Cohen's $d$ (and pooled standard deviation $s$) $$d = \frac{\bar{x}{1} - \bar{x}{2}}{s}$$ $$s = \sqrt{\frac{(n_{1} - 1)s_{1}^{2} + (n_{2} - 1)s_{2}^{2}}{n_{1} + n_{2} - 2}}$$ Class for Mann-Whitney U Test End of explanation """ sample_sizes = [30, 100, 500, 1500, 3000, 5000, 9585] for n in sample_sizes: MannWhitneyU(n).trial_series() """ Explanation: Section 1. Statistical Test <h3 id='1_1_a'>1.1.a Which statistical test did you use to analyse the NYC subway data?</h3> The Mann-Whitney $U$ test was used to determine if there was a statistically significant difference between the number of reported entries on rainy and non-rainy occasions. This nonparametric test of the equality of two population medians from independent samples was used since the distribution of entries is non-normal (right-skewed) and their shape is the same, as seen visually via histograms, probability plots, and box plots, and as the result of the Shapiro-Wilk normality test (see <a href='IntroDS-ProjectOne-DataExploration-Supplement.ipynb#prep-for-stats' target='_blank'>Preparation for Statistical Tests</a>). However, since the sample sizes are so large, the parametric Welch's $t$-test likely could have been used (and, it was implemented for confirmation purposes, along with the nonparametric Wilcoxon signed-rank test; both agreed with the Mann-Whitney $U$ test results). Testing Average Values of $p$, $r$, and $d$ for Various Sample Sizes End of explanation """ print "Shape of rainy-days data:" +str(rain_days.shape) N = rain_days.shape[0] print "N = " + str(N) print "0.05 * N = " + str(0.05 * N) """ Explanation: As witnessed above, when rainy and non-rainy days from the data set are considered populations (as opposed to samples themselves), it takes significantly large sample sizes from each population (e.g., $n = 3000$, which is more than $30\%$ of the total number of rainy days in the data set) to attain low $p$-values<sup>1</sup> frequently enough to reject the null hypothesis of the Mann-Whitney $U$ test<sup>2</sup> with the critical values proposed below. 
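For a sense of scale before interpreting the averages (a worked example using the same figures cited in the notes that follow): with $n_{1} = n_{2} = 450$ and two nearly identical samples, $U \approx n^{2}/2 = 101250$, so
$$r = 1 - \frac{2U}{n_{1}n_{2}} = 1 - \frac{2 \cdot 101250}{450 \cdot 450} = 0,$$
the 'no effect' end of the scale, which is exactly the direction these large-$n$ trials tend toward.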
Moreover, using Wendt's rank-biserial correlation $r$ and Cohen's $d$ to measure effect size, the relatively low average value of $r$<sup>3</sup> and the low average value of $d$<sup>4</sup> both suggest that the difference between the two samples (and, thus, the two populations) is trivial, even though, according to the Mann-Whitney U test, the difference appears to be statistically signficant (and only then with extremely large samples)<sup>5</sup>. In other words, statistical significance $\neq$ practical significance. Notes <sup>1</sup> Identical samples would produce a large $p$ (e.g., $p \approx 0.49$); extremely different samples would produce a very small number (e.g., $p \approx 0$). <sup>2</sup> Identical samples would produce $U = \frac{n^{2}}{2}$ (e.g., when $n = 450$, $U = 101250$); extremely different samples can produce a $U$ that is orders of magnitude smaller (e.g., when $n = 450$, possibly $U = 1293$). <sup>3</sup> For very different samples, $r \rightarrow 1$; in the above tests, as $n$ increases, $r \rightarrow 0$. <sup>4</sup> For very different samples, $d \rightarrow 1$; in the above tests, as $n$ increases, $d$ tends to remain constant, $d \approx 0.06$, even when the sample size is extremely large. $d$ is interpreted as the difference in the number of standard deviations. <sup>5</sup> On the issue of $p$-values and large data sets, see Lin, M., Lucas, H.C., and Shmueli, G. Research Commentary—Too big to fail: Large samples and the P-value problem. Inf. Syst. Res. 2013; 24: 906–917. PDF <a href='http://www.galitshmueli.com/system/files/Print%20Version.pdf' target='_blank'>here</a>. <h3 id='1_1_b'>1.1.b Did you use a one-tail or a two-tail P value?</h3> A two-tail $p$-value was selected since an appropriate initial question, given the results of the <a href='IntroDS-ProjectOne-DataExploration-Supplement.ipynb#weather-related' target='_blank'>Weather-Related Data</a> section of the DataExploration supplement, is simply whether or not there is a statistically significant difference between the populations (i.e., not whether one population is statistically-significantly greater than another). <h3 id='1_1_c'>1.1.c What is the null hypothesis?</h3> End of explanation """ n = 450 mwu = MannWhitneyU(n) U, p = mwu.sample_and_test(True,True) r, d = mwu.effect_sizes(U) print "\nMann-Whitney U test results:" print "n = {0}".format(n) print "U = {0}".format(U) print "p = {0:.2f}".format(np.mean(p)) print "rank-biserial r value: {0:.2f}".format(np.mean(r)) print "Cohen's d value: {0:.2f}".format(np.mean(d)) """ Explanation: The Mann-Whitney $U$ test is a nonparametric test of the null hypothesis that the distributions of two populations are the same. To verify the assumption that the simple, randomly sampled values are independent, the sample sizes should be less than $5\%$ of the population sizes ($n \lt 0.05N$). Since the maximum number of rainy days is $9585$ ($N = 9585$), a reasonable sample size for each group would be $450$ ($n = 450$). Null Hyptohesis $H_{0}$: $M_{1} = M_{2}$ or $H_{0}$: $M_{1} - M_{2} = 0$ Alternate Hypothesis $H_{1}$: $M_{1} \neq M_{2}$ or $H_{1}$: $M_{1} - M_{2} \neq 0$ A $95\%$ level of confidence would suggest that $95\%$ of samples would produce similar statistical results. For a $95\%$ level of confidence, the level of significance (i.e., the probability of making a Type I error) $\alpha = (1 - 0.05) \cdot 100\% = 0.05$. 
<h3 id='1_1_d'>1.1.d What is your p-critical value?</h3> $p \leq 0.05$ Gather New Samples and Perform Statistical Test End of explanation """
dawenl/content_wmf
code/processTasteProfile.ipynb
mit
# tid2sid.json contains a mapping between track id and song id, which can obtained from track_metadata.db with open('tid2sid.json', 'r') as f: tid2sid = json.load(f) bad_audio = [] with open('tracks_bad_audio.txt', 'r') as f: for line in f: bad_audio.append(line.strip()) bad_sid = [tid2sid[k] for k in bad_audio] def filter_usable_tracks(tp, bad_sid): return tp[~tp['sid'].isin(bad_sid)] tp_good = filter_usable_tracks(tp, bad_sid) print '%d playcount triplets are kept out of %d'% (len(tp_good), len(tp)) """ Explanation: Keep play counts that involve only usable tracks End of explanation """ def get_count(tp, id): playcount_groupbyid = tp[[id, 'count']].groupby(id, as_index=False) count = playcount_groupbyid.size() return count def remove_inactive(tp, min_uc=MIN_USER_COUNT, min_sc=MIN_SONG_COUNT): # Only keep the triplets for songs which were listened to by at least min_sc users. songcount = get_count(tp, 'sid') tp = tp[tp['sid'].isin(songcount.index[songcount >= min_sc])] # Only keep the triplets for users who listened to at least min_uc songs # After doing this, some of the songs will have less than min_uc users, but should only be a small proportion usercount = get_count(tp, 'uid') tp = tp[tp['uid'].isin(usercount.index[usercount >= min_uc])] # Update both usercount and songcount after filtering usercount, songcount = get_count(tp, 'uid'), get_count(tp, 'sid') return tp, usercount, songcount tp, usercount, songcount = remove_inactive(tp_good) sparsity_level = float(tp.shape[0]) / (usercount.shape[0] * songcount.shape[0]) print "After filtering, there are %d triplets from %d users and %d songs (sparsity level %.3f%%)" % (tp.shape[0], usercount.shape[0], songcount.shape[0], sparsity_level * 100) usercount.hist(bins=100) songcount.hist(bins=100) songcount.sort(ascending=False) def get_song_info_from_sid(conn, sid): cur = conn.cursor() cur.execute("SELECT title, artist_name FROM songs WHERE song_id = '%s'" % (sid)) title, artist = cur.fetchone() return title, artist # take a look at the top 50 most listened songs with sqlite3.connect(os.path.join(MSD_ADD, md_dbfile)) as conn: for i in xrange(50): sid = songcount.index[i] title, artist = get_song_info_from_sid(conn, sid) print "%s BY %s -- count: %d" % (title, artist, songcount[i]) """ Explanation: Further filter out counts invoving inactive users & songs End of explanation """ playcount = tp[['sid', 'count']] playcount_groupbysid = playcount.groupby('sid', as_index=False) songcount = playcount_groupbysid.sum().sort('count', ascending=False) print songcount unique_sid = pd.unique(tp['sid']) n_songs = len(unique_sid) # Shuffle songs np.random.seed(98765) idx = np.random.permutation(np.arange(n_songs)) unique_sid = unique_sid[idx] print n_songs unique_uid = pd.unique(tp['uid']) # Map song/user ID to indices song2id = dict((sid, i) for (i, sid) in enumerate(unique_sid)) user2id = dict((uid, i) for (i, uid) in enumerate(unique_uid)) with open('unique_uid.txt', 'w') as f: for uid in unique_uid: f.write('%s\n' % uid) with open('unique_sid.txt', 'w') as f: for sid in unique_sid: f.write('%s\n' % sid) with open('song2id.json', 'w') as f: json.dump(song2id, f) with open('user2id.json', 'w') as f: json.dump(user2id, f) """ Explanation: Generate in- and out-of-matrix split Get all users & songs in filtered taste profile, shuffle them, and map to integer indices End of explanation """ in_sid = unique_sid[:int(0.95 * n_songs)] out_sid = unique_sid[int(0.95 * n_songs):] print out_sid.shape out_tp = tp[tp['sid'].isin(out_sid)] out_tp in_tp = 
tp[~tp['sid'].isin(out_sid)] in_tp """ Explanation: Select 5% songs for out-of-matrix prediction End of explanation """ np.random.seed(12345) n_ratings = in_tp.shape[0] test = np.random.choice(n_ratings, size=int(0.20 * n_ratings), replace=False) test_idx = np.zeros(n_ratings, dtype=bool) test_idx[test] = True test_tp = in_tp[test_idx] train_tp = in_tp[~test_idx] """ Explanation: Generate train/test/vad sets Pick out 20% of the rating for in-matrix prediction End of explanation """ print len(pd.unique(train_tp['uid'])) print len(pd.unique(in_tp['uid'])) print len(pd.unique(train_tp['sid'])) print len(pd.unique(in_tp['sid'])) """ Explanation: Make sure there is no empty row or column in the training data End of explanation """ np.random.seed(13579) n_ratings = train_tp.shape[0] vad = np.random.choice(n_ratings, size=int(0.10 * n_ratings), replace=False) vad_idx = np.zeros(n_ratings, dtype=bool) vad_idx[vad] = True vad_tp = train_tp[vad_idx] train_tp = train_tp[~vad_idx] print len(pd.unique(train_tp['uid'])) print len(pd.unique(in_tp['uid'])) print len(pd.unique(train_tp['sid'])) print len(pd.unique(in_tp['sid'])) test_tp.to_csv('in.test.csv', index=False) train_tp.to_csv('in.train.csv', index=False) vad_tp.to_csv('in.vad.csv', index=False) out_tp.to_csv('out.test.csv', index=False) """ Explanation: Pick out 10% of the training rating as validation set End of explanation """
jasdumas/jasdumas.github.io
post_data/KMEANS-POKER-ANALYSIS.ipynb
mit
# read training and test data from the url link and save the file to your working directory url = "http://archive.ics.uci.edu/ml/machine-learning-databases/poker/poker-hand-training-true.data" urllib.request.urlretrieve(url, "poker_train.csv") url2 = "http://archive.ics.uci.edu/ml/machine-learning-databases/poker/poker-hand-testing.data" urllib.request.urlretrieve(url2, "poker_test.csv") # read the data in and add column names data_train = pd.read_csv("poker_train.csv", header=None, names=['S1', 'C1', 'S2', 'C2', 'S3', 'C3','S4', 'C4', 'S5', 'C5', 'CLASS']) data_test = pd.read_csv("poker_test.csv", header=None, names=['S1', 'C1', 'S2', 'C2', 'S3', 'C3','S4', 'C4', 'S5', 'C5', 'CLASS']) """ Explanation: GET THE DATA End of explanation """ # summary statistics including counts, mean, stdev, quartiles for the training dataset data_train.head(n=5) data_train.dtypes # data types of each variable data_train.describe() """ Explanation: EXPLORE THE DATA End of explanation """ # subset clustering variables cluster=data_train[['S1', 'C1', 'S2', 'C2', 'S3', 'C3','S4', 'C4', 'S5', 'C5']] """ Explanation: SUBSET THE DATA End of explanation """ # standardize clustering variables to have mean=0 and sd=1 so that card suit and # rank are on the same scale as to have the variables equally contribute to the analysis clustervar=cluster.copy() # create a copy clustervar['S1']=preprocessing.scale(clustervar['S1'].astype('float64')) clustervar['C1']=preprocessing.scale(clustervar['C1'].astype('float64')) clustervar['S2']=preprocessing.scale(clustervar['S2'].astype('float64')) clustervar['C2']=preprocessing.scale(clustervar['C2'].astype('float64')) clustervar['S3']=preprocessing.scale(clustervar['S3'].astype('float64')) clustervar['C3']=preprocessing.scale(clustervar['C3'].astype('float64')) clustervar['S4']=preprocessing.scale(clustervar['S4'].astype('float64')) clustervar['C4']=preprocessing.scale(clustervar['C4'].astype('float64')) clustervar['S5']=preprocessing.scale(clustervar['S5'].astype('float64')) clustervar['C5']=preprocessing.scale(clustervar['C5'].astype('float64')) # The data has been already split data into train and test sets clus_train = clustervar """ Explanation: STANDARDIZE THE DATA End of explanation """ # k-means cluster analysis for 1-10 clusters due to the 10 possible class outcomes for poker hands from scipy.spatial.distance import cdist clusters=range(1,11) meandist=[] # loop through each cluster and fit the model to the train set # generate the predicted cluster assingment and append the mean distance my taking the sum divided by the shape for k in clusters: model=KMeans(n_clusters=k) model.fit(clus_train) clusassign=model.predict(clus_train) meandist.append(sum(np.min(cdist(clus_train, model.cluster_centers_, 'euclidean'), axis=1)) / clus_train.shape[0]) """ Plot average distance from observations from the cluster centroid to use the Elbow Method to identify number of clusters to choose """ plt.plot(clusters, meandist) plt.xlabel('Number of clusters') plt.ylabel('Average distance') plt.title('Selecting k with the Elbow Method') # pick the fewest number of clusters that reduces the average distance """ Explanation: K-MEANS ANALYSIS - INITIAL CLUSTER SET End of explanation """ model3=KMeans(n_clusters=2) model3.fit(clus_train) # has cluster assingments based on using 2 clusters clusassign=model3.predict(clus_train) # plot clusters ''' Canonical Discriminant Analysis for variable reduction: 1. creates a smaller number of variables 2. linear combination of clustering variables 3. 
Canonical variables are ordered by proportion of variance accounted for 4. most of the variance will be accounted for in the first few canonical variables ''' from sklearn.decomposition import PCA # CA from PCA function pca_2 = PCA(2) # return 2 first canonical variables plot_columns = pca_2.fit_transform(clus_train) # fit CA to the train dataset plt.scatter(x=plot_columns[:,0], y=plot_columns[:,1], c=model3.labels_,) # plot 1st canonical variable on x axis, 2nd on y-axis plt.xlabel('Canonical variable 1') plt.ylabel('Canonical variable 2') plt.title('Scatterplot of Canonical Variables for 2 Clusters') plt.show() # close or overlapping clusters idicate correlated variables with low in-class variance but not good separation. 2 cluster might be better. """ Explanation: Interpret 2 cluster solution End of explanation """ # create a unique identifier variable from the index for the # cluster training data to merge with the cluster assignment variable clus_train.reset_index(level=0, inplace=True) # create a list that has the new index variable cluslist=list(clus_train['index']) # create a list of cluster assignments labels=list(model3.labels_) # combine index variable list with cluster assignment list into a dictionary newlist=dict(zip(cluslist, labels)) newlist # convert newlist dictionary to a dataframe newclus=DataFrame.from_dict(newlist, orient='index') newclus # rename the cluster assignment column newclus.columns = ['cluster'] # now do the same for the cluster assignment variable create a unique identifier variable from the index for the # cluster assignment dataframe to merge with cluster training data newclus.reset_index(level=0, inplace=True) # merge the cluster assignment dataframe with the cluster training variable dataframe # by the index variable merged_train=pd.merge(clus_train, newclus, on='index') merged_train.head(n=100) # cluster frequencies merged_train.cluster.value_counts() """ Explanation: BEGIN multiple steps to merge cluster assignment with clustering variables to examine cluster variable means by cluster End of explanation """ clustergrp = merged_train.groupby('cluster').mean() print ("Clustering variable means by cluster") print(clustergrp) """ Explanation: calculate clustering variable means by cluster End of explanation """ # split into test / train for class pokerhand_train=data_train['CLASS'] pokerhand_test=data_test['CLASS'] # put into a pandas dataFrame pokerhand_train=pd.DataFrame(pokerhand_train) pokerhand_test=pd.DataFrame(pokerhand_test) pokerhand_train.reset_index(level=0, inplace=True) # reset index merged_train_all=pd.merge(pokerhand_train, merged_train, on='index') # merge the pokerhand train with merged clusters sub1 = merged_train_all[['CLASS', 'cluster']].dropna() import statsmodels.formula.api as smf import statsmodels.stats.multicomp as multi # respone formula pokermod = smf.ols(formula='CLASS ~ cluster', data=sub1).fit() print (pokermod.summary()) print ('means for Poker hands by cluster') m1= sub1.groupby('cluster').mean() print (m1) print ('standard deviations for Poker hands by cluster') m2= sub1.groupby('cluster').std() print (m2) mc1 = multi.MultiComparison(sub1['CLASS'], sub1['cluster']) res1 = mc1.tukeyhsd() print(res1.summary()) """ Explanation: validate clusters in training data by examining cluster differences in CLASS using ANOVA first have to merge CLASS of poker hand with clustering variables and cluster assignment data End of explanation """
Pittsburgh-NEH-Institute/Institute-Materials-2017
schedule/week_2/Integrating_XML_with_Python.ipynb
gpl-3.0
import nltk # nltk.download() """ Explanation: Integrating XML with Python NLTK, the Python Natural Languge ToolKit package, is designed to work with plain text input, but sometimes your input is in XML. There are two principal paths to reconciliation: either use an XML environment that supports NLP (natural language processing) or let Python (which supports NLP through NLTK) manage the XML. The first approach, sticking to an XML environment, is illustrated in Week 3 of the Institute in the context of the eXist XML database, which integrates the Stanford Core NLP tools. Here we illustrate the second approach, letting Python manage the XML. Before you make a mistake It’s natural to think of parsing (reading, interpreting, and processing) XML with regular expressions, but it’s also Wrong for at least two sets of reasons: Regular expressions operate over strings, and there are string differences in XML that are not informationally different. For example, the order of attributes on an element, whether the attributes are single- or double-quoted, whether a Unicode character is represented by a raw character or a numerical character reference, and many other details represent string differences that are not informational differences. The same is true of the extent and type of white space in some environments but not others. And the same is true when you have to recognize whether a right angle bracket or a single or double quotation mark is part of content or part of markup. XML-aware processing knows what’s informational and what isn’t, as well as what’s content and what’s markup. You don’t want to reinvent those wheels. Parsing XML is a recursive operation. For example, if you have two elements of the same type nested inside each other, as in xml &lt;emphasis&gt;&lt;emphasis&gt;a very emphatic thought&lt;/emphasis&gt;&lt;/emphasis&gt; parsing has to match up the correctly paired start and end tags. XML-aware processing knows where it is in the tree. That’s another wheel you don’t want to reinvent. It’s also natural to think of writing XML by constructing a string, such as concatenating angle brackets and text and other bits and pieces. This is a Bad Idea because some decisions are context sensitive, and keeping track of the context is challenging. For example, attribute values can be quoted with single or double quotation marks, but if the value contains a single or double quotation mark, that can influence the choice, and there are situations where you may need to represent the quotation marks in attribute values with &amp;quot; or &amp;apos; character entities instead of as raw characters. A library that knows how to write XML will keep track of that for you. Wrangling XML in Python The Python Standard Library provides several tools for parsing and creating XML, and there are also third-party packages. In this tutorial we use two parts of the Standard Library: pulldom for parsing XML input and minidom for constructing XML output. You can read more about these modules by clicking on the preceding links to the Standard Library documentation, and also in the Structured text: XML chapter of the eTutorials.org Python tutorial. To illustrate how to read and write XML with Python we’ll read in a small input XML document, tag each word as a &lt;word&gt; element, and add part of speech (POS) and lemma (dictionary form) information as @pos and @lemma attributes of the &lt;word&gt; elements. 
We’ll use pulldom to read, parse, and process the input document, NLTK to determine the part of speech and the lemma, and minidom to create the output. Input XML Create the following small XML document in a work directory and save with a filename like test.xml: xml &lt;root&gt; &lt;p speaker="hamlet"&gt;Hamlet is a prince of Denmark.&lt;/p&gt; &lt;p speaker='ophelia'&gt;Things end badly for Ophelia.&lt;/p&gt; &lt;p speaker="nobody"&gt;Julius Caesar does not appear in this play.&lt;/p&gt; &lt;/root&gt; Desired output XML The desired output is: ```xml <?xml version="1.0" ?> <root> <p speaker="hamlet"> <word lemma="hamlet" pos="NNP">Hamlet</word> <word lemma="be" pos="VBZ">is</word> <word lemma="a" pos="DT">a</word> <word lemma="prince" pos="NN">prince</word> <word lemma="of" pos="IN">of</word> <word lemma="denmark" pos="NNP">Denmark</word> <word lemma="." pos=".">.</word> </p> <p speaker="ophelia"> <word lemma="thing" pos="NNS">Things</word> <word lemma="end" pos="VBP">end</word> <word lemma="badly" pos="RB">badly</word> <word lemma="for" pos="IN">for</word> <word lemma="ophelia" pos="NNP">Ophelia</word> <word lemma="." pos=".">.</word> </p> <p speaker="nobody"> <word lemma="julius" pos="NNP">Julius</word> <word lemma="caesar" pos="NNP">Caesar</word> <word lemma="do" pos="VBZ">does</word> <word lemma="not" pos="RB">not</word> <word lemma="appear" pos="VB">appear</word> <word lemma="in" pos="IN">in</word> <word lemma="this" pos="DT">this</word> <word lemma="play" pos="NN">play</word> <word lemma="." pos=".">.</word> </p> </root> ``` The python code Before you run the code NLTK is installed by default with Anaconda python, but the word tokenizer isn’t. To install the tokenizer, uncomment the second line below and run (if you’ve already installed the tokenizer, run the cell without uncommenting the second line): End of explanation """ #!/usr/bin/env python """Tag words and add POS and lemma information in XML document.""" from xml.dom.minidom import Document, Element from xml.dom import pulldom import nltk def create_word_element(d: Document, text: str, pos: str) -> Element: """Create <word> element with POS and lemma attributes.""" word = d.createElement("word") word.setAttribute("pos", pos) word.setAttribute("lemma", lemmatize(text, pos)) t = d.createTextNode(text) word.appendChild(t) return word def get_wordnet_pos(treebank_tag: str) -> str: """Replace treebank POS tags with wordnet ones; default POS is noun.""" pos_tags = {'J': nltk.corpus.reader.wordnet.ADJ, 'V': nltk.corpus.reader.wordnet.VERB, 'R': nltk.corpus.reader.wordnet.ADV} return pos_tags.get(treebank_tag[0], nltk.corpus.reader.wordnet.NOUN) def lemmatize(text: str, pos: str) -> str: """Identify lemma for current word.""" return nltk.stem.WordNetLemmatizer().lemmatize(text.lower(), get_wordnet_pos(pos)) def extract(input_xml) -> Document: """Process entire input XML document, firing on events.""" # Initialize output as XML document, point to most recent open node d = Document() current = d # Start pulling; it continues automatically doc = pulldom.parse(input_xml) for event, node in doc: if event == pulldom.START_ELEMENT: current.appendChild(node) current = node elif event == pulldom.END_ELEMENT: current = node.parentNode elif event == pulldom.CHARACTERS: # tokenize, pos-tag, create <word> as child of parent words = nltk.word_tokenize(node.toxml()) tagged_words = nltk.pos_tag(words) for (text, pos) in tagged_words: word = create_word_element(d, text, pos) current.appendChild(word) return d with open('test-1.xml', 'r') as test_in: 
results = extract(test_in) print(results.toprettyxml()) """ Explanation: A separate window will open. Select the Models tab on the top, then averaged_perceptron_tagger and punkt, and then press the Download button. Then select the Corpora tab on the top, then wordnet, and then Download. You only have to download these once on each machine you use; the download process will install them for future use. The annotation code Here is the entire Python script that creates the output (we describe how the pieces work below). If you have saved the sample input as test.xml in the same directory as the location of this notebook, you can run the transformation in the notebook now, and the output should be displayed below: End of explanation """ #!/usr/bin/env python """Tag words and add POS and lemma information in XML document.""" """ Explanation: How it works We’ve divided the program into sections below, with explanations after each section. Shebang and docstring End of explanation """ from xml.dom.minidom import Document from xml.dom import pulldom import nltk """ Explanation: A Python program begins with a shebang and a docstring. The shebang makes it easier to run the program from the command line, and the docstring documents what the program does, The shebang must be the very first line in a program. For now, think of the shebang as a magic incantation that should be copied and pasted verbatim; we explain below what it means. The docstring should be a single line framed by triple quotation marks, and it should describe concisely what the program does. When you execute the docstring by itself, as we do above, it echoes itself to the screen; when you run the program, though, it remains silent. Imports End of explanation """ def create_word_element(d: Document, text: str, pos: str) -> Element: """Create <word> element with POS and lemma attributes.""" word = d.createElement("word") word.setAttribute("pos", pos) word.setAttribute("lemma", lemmatize(text, pos)) t = d.createTextNode(text) word.appendChild(t) return word """ Explanation: We import the ability to create a new XML document, which we’ll use to create our output, from minidom, and we import pulldom to parse the input document. We import nltk because we’ll use it to determine the part of speech and the lemma for each word. Adding a &lt;word&gt; element to the output tree End of explanation """ def get_wordnet_pos(treebank_tag: str) -> str: """Replace treebank POS tags with wordnet ones; default POS is noun.""" pos_tags = {'J': nltk.corpus.reader.wordnet.ADJ, 'V': nltk.corpus.reader.wordnet.VERB, 'R': nltk.corpus.reader.wordnet.ADV} return pos_tags.get(treebank_tag[0], nltk.corpus.reader.wordnet.NOUN) """ Explanation: When we tokenize the text into words below, we pass each word and its part of speech into the create_word_element() function. The function creates a new &lt;word&gt; element, adds the part of speech tag as an attribute, and then uses our lemmatize() function to determine the lemma and add that as an attribute, as well. It then creates a text() node, sets its value as the text of the word, and makes the text() node a child of the new &lt;word&gt; element. Finally, we return the &lt;word&gt; element to the calling routine, which inserts it into the output XML tree in the right place. 
Converting treebank part of speech identifiers to Wordnet ones End of explanation """ def lemmatize(text: str, pos: str) -> str: return nltk.stem.WordNetLemmatizer().lemmatize(text.lower(), get_wordnet_pos(pos)) """ Explanation: We create a function called get_wordnet_pos(), which we’ll use later. This function is defined as taking one argument, called treebank_tag, which is a string, and it returns a value that is also a string. The reason we need to do this is that the NLTK part of speech tagger uses one set of part of speech identifiers, but Wordnet, the NLTK component that performs lemmatization, uses a different one. Since we do the part of speech tagging first, we use this function to convert that value to one that Wordnet will understand before we perform lemmatization. There are many treebank part of speech tags but only four Wordnet ones, for nouns, verbs, adjectives, and adverbs, and everything else is treated as a noun. Our function returns the correct value for the four defined parts of speech and defaults to the value for nouns otherwise. Lemmatizing End of explanation """ def extract(input_xml) -> Document: """Process entire input XML document, firing on events.""" # Initialize output as XML document, point to most recent open node d = Document() current = d # Start pulling; it continues automatically doc = pulldom.parse(input_xml) for event, node in doc: if event == pulldom.START_ELEMENT: current.appendChild(node) current = node elif event == pulldom.END_ELEMENT: current = node.parentNode elif event == pulldom.CHARACTERS: # tokenize, pos-tag, create <word> as child of parent words = nltk.word_tokenize(node.toxml()) tagged_words = nltk.pos_tag(words) for (text, pos) in tagged_words: word = create_word_element(d, text, pos) current.appendChild(word) return d """ Explanation: We define a function called lemmatize() that takes two pieces of input, both of which are strings, and returns a string. The parameter text is the word to be lemmatized and the parameter pos is the part of speech in treebank form. We call the NLTK function to identify the lemma with nltk.stem.WordNetLemmatizer().lemmatize() with two arguments. The lemmatizer expects words to be lower case, so we convert the text to lower case with the lower() string method. And it requires a Wordnet part of speech, and not a treebank one, so we use our get_wordnet_pos() function to perform the conversion. Doing the work End of explanation """ sample = "We didn't realize that we could split contractions!" nltk.word_tokenize(sample) """ Explanation: We refer below to line numbers, and if you’re reading this on line, you won’t those numbers. You can make them appear by running this notebook in a Jupyter session, clicking in the cell above, hitting the Esc key, to switch into command mode, and then typing l (the lowercase letter L), to toggle line numbering. Our extract() function does all the work, calling on the functions we defined earlier as needed. Here’s how it works (with line numbers): 1: extract() is a function that gets called with one argument, which we assign to a parameter we’ve called input_xml. 4: Near the top of the full program we’ve already used from xml.dom.minidom import Document, Element to make the Document class (and the Element class) available to our program. Here we use it to create a new XML document, which we assign as the value of a new variable d. We’ll use this to build our output document. 5: The variable current points to the node that will be the parent of any new elements. 
The document node is the root of the entire document, so it’s the initial value of the current variable. y: pulldom is a streaming parser, which means that once we start processing elements in the XML input tree, the parser keeps going until it has visited every node of the tree. We start that process with pulldom.parse(), telling it to parse the document we passed to it as the value of the input_xml parameter. 8: Parsing generates events like the start or end of an element or the presence of character data. There are other possible events, but these are the only ones we need to handle for our transformation. Each event provides a tuple that consists of two values, the name of the event (e.g., START_ELEMENT) and the value (e.g., an object of type node). We test the event type and process different types of events differently. 9–11: When we start a new element, we make it a child of the node identified by our current variable. This ensures that the output tree that we’re building will reproduce the structure of the input tree, and it also ensures that we create new &lt;word&gt; elements in the correct place. When we start an element, it’s the parent of any nodes we encounter until we find the corresponding END_ELEMENT event, so we make it a child of whatever node is current at the moment and then set the current variable to point to the node we just created. This means that, for example, when we encounter the first child of the root element of the input XML, we’ll make that a child of the root element of the output XML that we’re constructing. 12–13: When we encounter an END_ELEMENT event, that element can’t have any more children, so we set the current variable to point to its parent. 14–20: We’ll illustrate how the individual lines work below, but here’s a summary with everything in one place. When we encounter CHARACTERS while parsing, the value of the node is an XML text() node, and not a string. We convert it to a plain text string with the toxml() method, let NLTK break it into words with nltk.word_tokenize(), and assign the pieces to an array called words (line 16). Next, the nltk.pos_tag() function takes an array of words as its input (our words variable) and returns an array of tuples, that is, pairs of strings where the first is the original input word and the second is the part of speech according to treebank notation (17). It assigns this new array as the value of the tagged_words variable. We want to create a new &lt;word&gt; element in the output for each word, so we loop over that list of tuples (18). For each word, we call our create_word_element() function, which we defined earlier, and set the value of the variable word equal to the new &lt;word&gt; element (19). Finally, we make the new word a child of the current element, the one that was its parent in the input (20). There are other types of parse events, but we don’t need to do anything with them in this example, so we don’t write any code to process them. Remind me about those NLTK functions again nltk.word_tokenize() nltk.word_tokenize() splits a text into words. It’s smarter than just splitting on white space; it treats punctuation as a word, and it knows about common English contractions: End of explanation """ sample = "We didn't realize that we could split contractions!" words = nltk.word_tokenize(sample) nltk.pos_tag(words) """ Explanation: nltk.pos_tag() nltk.pos_tag() takes a list of words (not a sentence) as its input. 
That means that we need to tokenize the sentence before tagging: End of explanation """ words = ['thing', 'things', 'Things'] [(word + ": " + nltk.stem.WordNetLemmatizer().lemmatize(word)) for word in words] """ Explanation: You can look up the part of speech tags at https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html. nltk.stem.WordNetLemmatizer().lemmatize() The Wordnet lemmatizer tries to lemmatize (find the dictionary form) of a word with or without part of speech information, but without the part of speech, it guesses that everything is a noun. Remember that Wordnet knows about only nouns, verbs, adjectives, and adverbs, and that the part of speech tags are different in Wordnet than in the treebank system. Oh, and it assumes lower-case input, so if you give it a capitalized word, it won’t recognize it as an inflected form of something else, and will therefore return it unchanged. End of explanation """ words = [('building','n'), ('building','v')] [(word + ": " + nltk.stem.WordNetLemmatizer().lemmatize(word, pos)) for (word, pos) in words] """ Explanation: In the example above, the lemmatizer recognizes that “thing” is the lemma for “things”, but it fails to lemmatize “Things” correctly because of the upper case letter. End of explanation """ words = ['building'] [(word + ": " + nltk.stem.WordNetLemmatizer().lemmatize(word)) for word in words] """ Explanation: In the example above, we supplied part of speech information, and the lemmatizer correctly treats “building” differently as a noun than as a verb. If we don’t specify a part of speech, it assumes everything is a noun: End of explanation """ with open('test.xml', 'r') as test_in: results = extract(test_in) print(results.toprettyxml()) """ Explanation: Input and output End of explanation """ contents = open('test.xml','r').read() """ Explanation: We could, alternatively, have opened a file handle to read the file from disk, read it, and saved the results with: End of explanation """
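# A hedged sketch, not part of the original tutorial: the same idea wrapped in
# with-blocks so that both file handles are closed automatically. The output
# filename 'test_out.xml' is arbitrary and only for illustration.
with open('test.xml', 'r') as test_in, open('test_out.xml', 'w') as test_out:
    test_out.write(extract(test_in).toprettyxml())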
jkibele/OpticalRS
docs/notebooks/depth/LyzengaDepth.ipynb
bsd-3-clause
%pylab inline import geopandas as gpd import pandas as pd from OpticalRS import * from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error from sklearn.cross_validation import train_test_split import itertools import statsmodels.formula.api as smf from collections import OrderedDict style.use('ggplot') cd ../data """ Explanation: Lyzenga Method I want to apply the Lyzenga 2006 method for comparison. End of explanation """ imrds = RasterDS('glint_corrected.tif') imarr = imrds.band_array deprds = RasterDS('Leigh_Depth_atAcq_Resampled.tif') darr = -1 * deprds.band_array.squeeze() """ Explanation: Preprocessing That happened here. End of explanation """ darr = np.ma.masked_greater( darr, 20.0 ) """ Explanation: Depth Limit Lyzenga et al methods for determining shallow water don't work for me based on the high reflectance of the water column and extremely low reflectance of Ecklonia for the blue bands. So I'm just going to limit the depths under consideration using the multibeam data. End of explanation """ imarr = ArrayUtils.mask3D_with_2D( imarr, darr.mask ) darr = np.ma.masked_where( imarr[...,0].mask, darr ) """ Explanation: Equalize Masks I need to make sure I'm dealing with the same pixels in depth and image data. End of explanation """ dwmeans = np.load('darkmeans.pkl') dwstds = np.load('darkstds.pkl') """ Explanation: Dark Pixel Subtraction I need to calculate a modified version of $X_i = ln(L_i - L_{si})$. In order to do that I'll first load the deep water means and standard deviations I calculated here. End of explanation """ dpsub = ArrayUtils.equalize_band_masks( \ np.ma.masked_less( imarr - (dwmeans - 2 * dwstds), 0.0 ) ) print "After that I still retain %.1f%% of my pixels." % ( 100 * dpsub.count() / float( imarr.count() ) ) X = np.log( dpsub ) # imrds.new_image_from_array(X.astype('float32'),'LyzengaX.tif') """ Explanation: I applied the same modification as Armstrong (1993), 2 standard deviations from $L_{si}$, to avoid getting too many negative values because those can't be log transformed. End of explanation """ h = np.ma.masked_where( X[...,0].mask, darr ) imshow( X[...,1] ) """ Explanation: I'll need to equalize the masks again. I'll call the depths h in reference to Lyzenga et al. 2006 (e.g. equation 14). End of explanation """ df = ArrayUtils.band_df( X ) df['depth'] = h.compressed() """ Explanation: Dataframe Put my $X_i$ and my $h$ values into a dataframe so I can regress them easily. End of explanation """ x_train, x_test, y_train, y_test = train_test_split( \ df[imrds.band_names],df.depth,train_size=300000,random_state=5) traindf = ArrayUtils.band_df( x_train ) traindf['depth'] = y_train.ravel() testdf = ArrayUtils.band_df( x_test ) testdf['depth'] = y_test.ravel() """ Explanation: Data Split I need to split my data into training and test sets. End of explanation """ def get_fit( ind, x_train, y_train ): skols = LinearRegression() skolsfit = skols.fit(x_train[...,ind],y_train) return skolsfit def get_selfscore( ind, x_train, y_train ): fit = get_fit( ind, x_train, y_train ) return fit.score( x_train[...,ind], y_train ) od = OrderedDict() for comb in itertools.combinations( range(8), 2 ): od[ get_selfscore(comb,x_train,y_train) ] = [ c+1 for c in comb ] od_sort = sorted( od.items(), key=lambda t: t[0], reverse=True ) od_sort best_ind = np.array( od_sort[0][1] ) - 1 best_ind """ Explanation: Find the Best Band Combo That's the one that returns the largest $R^2$ value. 
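With 8 input bands there are only $\binom{8}{2} = 28$ two-band combinations, so the exhaustive search in the loop above is cheap; we simply keep the pair with the highest $R^2$ against the training depths.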
End of explanation """ skols = LinearRegression() skolsfit = skols.fit(x_train[...,best_ind],y_train) print "h0 = %.2f, h2 = %.2f, h3 = %.2f" % \ (skolsfit.intercept_,skolsfit.coef_[0],skolsfit.coef_[1]) """ Explanation: Build the model End of explanation """ print "R^2 = %.6f" % skolsfit.score(x_test[...,best_ind],y_test) pred = skolsfit.predict(x_test[...,best_ind]) fig,ax = plt.subplots(1,1,figsize=(8,6)) mapa = ax.hexbin(pred,y_test,mincnt=1,bins='log',gridsize=500,cmap=plt.cm.hot) # ax.scatter(pred,y_test,alpha=0.008,edgecolor='none') ax.set_ylabel('MB Depth') ax.set_xlabel('Predicted Depth') rmse = np.sqrt( mean_squared_error( y_test, pred ) ) n = x_train.shape[0] tit = "RMSE: %.4f, n=%i" % (rmse,n) ax.set_title(tit) ax.set_aspect('equal') ax.axis([-5,25,-5,25]) ax.plot([-5,25],[-5,25],c='white') cb = plt.colorbar(mapa) cb.set_label("Log10(N)") LyzPredVsMB = pd.DataFrame({'prediction':pred,'mb_depth':y_test}) LyzPredVsMB.to_pickle('LyzPredVsMB.pkl') """ Explanation: Check the Results End of explanation """ fullim = imrds.band_array fulldep = -1 * deprds.band_array.squeeze() fullim = ArrayUtils.mask3D_with_2D( fullim, fulldep.mask ) fulldep = np.ma.masked_where( fullim[...,0].mask, fulldep ) dlims = arange(5,31,2.5) drmses,meanerrs,stderrs = [],[],[] for dl in dlims: dlarr = np.ma.masked_greater( fulldep, dl ) iml = ArrayUtils.mask3D_with_2D( fullim, dlarr.mask ) imldsub = ArrayUtils.equalize_band_masks( \ np.ma.masked_less( iml - (dwmeans - 2 * dwstds), 0.0 ) ) imlX = np.log( imldsub ) dlarr = np.ma.masked_where( imlX[...,0].mask, dlarr ) xl_train, xl_test, yl_train, yl_test = train_test_split( \ imlX.compressed().reshape(-1,8),dlarr.compressed(),train_size=1500,random_state=5) linr = LinearRegression() predl = linr.fit(xl_train[...,best_ind],yl_train).predict( xl_test[...,best_ind] ) drmses.append( sqrt( mean_squared_error(yl_test,predl) ) ) meanerrs.append( (yl_test - predl).mean() ) stderrs.append( (yl_test - predl).std() ) fig,(ax1,ax2) = subplots(1,2,figsize=(12,6)) ax1.plot(dlims,np.array(drmses),marker='o',c='b') ax1.set_xlabel("Data Depth Limit (m)") ax1.set_ylabel("Model RMSE (m)") em,es = np.array(meanerrs), np.array(stderrs) ax2.plot(dlims,em,marker='o',c='b') ax2.plot(dlims,em+es,linestyle='--',c='k') ax2.plot(dlims,em-es,linestyle='--',c='k') ax2.set_xlabel("Data Depth Limit (m)") ax2.set_ylabel("Model Mean Error (m)") deplimdf = pd.DataFrame({'depth_lim':dlims,'rmse':drmses,\ 'mean_error':meanerrs,'standard_error':stderrs}) deplimdf.to_pickle('LyzengaDepthLimitDF.pkl') """ Explanation: Effect of Depth Limit on Model Accuracy Given a fixed number of training points (n=1500), what is the effect of limiting the depth of the model. 
End of explanation """ # ns = np.logspace(log10(0.00003*df.depth.count()),log10(0.80*df.depth.count()),15) int(ns.min()),int(ns.max()) ns = np.logspace(1,log10(0.80*df.depth.count()),15) ltdf = pd.DataFrame({'train_size':ns}) for rs in range(10): nrmses = [] for n in ns: xn_train,xn_test,yn_train,yn_test = train_test_split( \ df[imrds.band_names],df.depth,train_size=int(n),random_state=rs+100) thisols = LinearRegression() npred = thisols.fit(xn_train[...,best_ind],yn_train).predict(xn_test[...,best_ind]) nrmses.append( sqrt( mean_squared_error(yn_test,npred ) ) ) dflabel = 'rand_state_%i' % rs ltdf[dflabel] = nrmses print "min points: %i, max points: %i" % (int(ns.min()),int(ns.max())) fig,ax = subplots(1,1,figsize=(10,6)) for rs in range(10): dflabel = 'rand_state_%i' % rs ax.plot(ltdf['train_size'],ltdf[dflabel]) ax.set_xlabel("Number of Training Points") ax.set_ylabel("Model RMSE (m)") # ax.set_xlim(0,5000) ax.set_xscale('log') ax.set_title("Rapidly Increasing Accuracy With More Training Data") ltdf.to_pickle('LyzengaAccuracyDF.pkl') """ Explanation: Limited Training Data I want to see how the accuracy of this method is affected by the reduction of training data. End of explanation """ full_pred = skolsfit.predict(X[...,best_ind]) full_pred = np.ma.masked_where( h.mask, full_pred ) full_errs = full_pred - h blah = hist( full_errs.compressed(), 100 ) figure(figsize=(12,11)) vmin,vmax = np.percentile(full_errs.compressed(),0.1),np.percentile(full_errs.compressed(),99.9) imshow( full_errs, vmin=vmin, vmax=vmax ) ax = gca() ax.set_axis_off() ax.set_title("Depth Errors (m)") colorbar() full_pred.dump('LyzDepthPred.pkl') full_errs.dump('LyzDepthPredErrs.pkl') """ Explanation: Full Prediction Perform a prediction on all the data and find the errors. Save the outputs for comparison with KNN. End of explanation """
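# A hedged sketch, not part of the original notebook: a quick numerical summary
# of the full-image errors (bias, spread and RMSE) from the masked error array
# computed above, before handing the saved arrays off to the KNN comparison.
print("Mean error (bias): %.3f m" % full_errs.mean())
print("Error std dev: %.3f m" % full_errs.std())
print("RMSE: %.3f m" % np.sqrt(np.ma.mean(full_errs**2)))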
econandrew/povcalnetjson
notebooks/3-empirical-splines.ipynb
mit
# Choose our lorenz curve. India is: # with open("../jsoncache/IND_5_2011.5_0.json", "r") as f: # Good ones # IND_2_1977.5.json # MYS_3_1997.json # CHN_1_1999.json # JAM_3_1988.json with open("../jsoncache/CHL_3_2003.json", "r") as f: d = json.loads(f.read()) L = [0.0] + d['lorenz']['L'] p = [0.0] + d['lorenz']['p'] if 'sample' in d: ymean = d['sample']['mean_month_ppp'] # NOT annual income is more relatable ymin = d['sample']['month_min'] ymax = d['sample']['month_max'] if not ymean or ymean == float("NaN"): ymean = ymin + 0.25*(ymax - ymin) # guess with some skew else: ymean = d['inputs']['mean_month_ppp'] ymin = 0 ymax = ymean * 4 print("Mean", ymean, "(",ymin,ymax,")") # Down sample the Lorenz curve, for smoothness #pc = p #Lc = L if False and len(p) > 40: print ("Down sampling Lorenz curve...") pc = p[0:100:4] Lc = L[0:100:4] else: pc = p Lc = L if Lc[len(Lc)-1] < 1: pc = pc + [1.0] Lc = Lc + [1.0] #pc = p[0:100:5] #Lc = L[0:100:5] #pc = [0.0] + [1.0/N] + pc + [] + [1.0] #Lc = [0.0] + [(ymin-0.0) * (1.0/N - 0.0) / ymean] + Lc + [] + [1.0] # Check that this is a valid convex function - if we're working from bad data we'll go astray #Lc = Lc[1:] #pc = pc[1:] if not np.all(np.diff(Lc) > 0): print ("Lorenz curve is not increasing") print (np.diff(Lc)) if not np.all(np.diff(np.diff(Lc)/np.diff(pc)) > 0): print ("Lorenz curve is not convex") print (np.diff(np.diff(Lc)/np.diff(pc))) plt.plot(pc, Lc, ".") """ Explanation: Spline interpolations Piecewise linear functions (aka linear splines) had the advantage of being easy to fit and simple to work with. There disadvantage was that having a discontinuous first derivative (ie. they are $C^0$ functions), so that the the CDF derived from the Lorenz curve would be discontinuous, implying a discrete distribution (not a good match for income). Ideally, we would like to fit the Lorenz curve with a $C^2$ function, a function that is twice continuously differentiable, so that the CDF (effectively a first derivative) would be smooth, and the PDF continuous (or perhaps even $C^3$, resulting in a smooth PDF). One very common $C^2$ interpolant is cubic splines. Sample Lorenz curve data: India, 2011 End of explanation """ ########################################## plt.rcParams["figure.figsize"] = (12,2.5) fig, ax = plt.subplots(1, 4) ########################################## import scipy.interpolate cubic_lorenz = scipy.interpolate.CubicSpline(pc, Lc, extrapolate=0) x = np.linspace(0.0, 1.0, 1000) y = np.linspace(0.0, ymax, 1000) ax[0].plot(pc, Lc, ".") ax[0].plot(x, cubic_lorenz(x)) cubic_quantile = lambda p: ymean * cubic_lorenz.derivative()(p) ax[1].plot(x, cubic_quantile(x)) cubic_cdf = inverse(cubic_quantile) ax[2].plot(y, cubic_cdf(y)) cubic_pdf = derivative(cubic_cdf) ax[3].plot(y, cubic_pdf(y)) plt.plot(y, cubic_pdf(y)) """ Explanation: Cubic spline interpolation of Lorenz curve First we attempt a simple cubic spline interpolation of the Lorenz curve. The sharp turning points result because the cubic spline is not $C^3$. Another issue is that the space of polynmonial splines is not closed under inversion, so that the CDF is not a polynomial spline, and nor is the PDF. He we use numerical methods, but the result is that for fast visualization we will either rely on precomputation (resulting in a memory intensive grid of data points) or on-the-fly numerical methods (computationally intensive). One way to avoid this is to interpolate not points on the Lorenz curve, but points on the CDF itself. We'll come back to this. 
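Note that despite the heading, the code above actually loads the Chilean 2003 survey (CHL_3_2003.json); the Indian file is kept only as a commented-out alternative. Throughout what follows it helps to keep in mind that, with mean income $\mu$, the quantile function is $Q(p) = \mu L'(p)$, the CDF is its inverse $F = Q^{-1}$, and the density is $f(y) = F'(y) = 1/(\mu L''(F(y)))$; this is why a $C^2$ Lorenz curve yields a continuous density and a $C^3$ one a smooth density.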
End of explanation """ ########################################## plt.rcParams["figure.figsize"] = (12,2.5) fig, ax = plt.subplots(1, 4) ########################################## import scipy.interpolate kspline_lorenz = scipy.interpolate.UnivariateSpline(pc, Lc, k=5, s=len(pc)/8.5e9) x = np.linspace(0.0, 1.0, 1000) y = np.linspace(0.0, ymax, 1000) ax[0].plot(pc, Lc, ".") ax[0].plot(x, kspline_lorenz(x)) kspline_quantile = lambda p: ymean * kspline_lorenz.derivative()(p) ax[1].plot(x, kspline_quantile(x)) kspline_cdf = inverse(kspline_quantile) ax[2].plot(y, kspline_cdf(y)) kspline_pdf = derivative(kspline_cdf) ax[3].plot(y, kspline_pdf(y)) y = np.linspace(0, ymax, 1000) pdf = kspline_pdf(y) plt.plot(y, pdf) plt.vlines(x=d['inputs']['line_month_ppp'],ymin=0,ymax=np.nanmax(pdf)) import scipy.integrate a = d['quadratic']['reg']['params']['A']['coef'] b = d['quadratic']['reg']['params']['B']['coef'] c = d['quadratic']['reg']['params']['C']['coef'] mu = ymean nu = -b * mu / 2 tau = mu * (4 * a - b**2) ** (1/2) / 2 eta1 = 2 * (c / (a + b + c + 1) + b/2) * (4 *a - b**2)**(-1/2) eta2 = 2 * ((2*a + b + c)/(a + c - 1) + b / a)*(4*a - b**2)**(-1/2) lower = tau*eta1+nu upper = tau*eta2+nu # Hacky way to normalise gq_pdf_integral = 1 gq_pdf = lambda y: (1 + ((y - nu)/tau)**2)**(-3/2) / gq_pdf_integral gq_pdf_integral = scipy.integrate.quad(gq_pdf,lower,upper)[0] plt.plot(y, gq_pdf(y),color='r') plt.plot(y, kspline_pdf(y)) plt.vlines(x=lower,ymin=0,ymax=np.nanmax(pdf),color="green") #plt.vlines(x=upper,ymin=0,ymax=np.nanmax(pdf),color="green") plt.vlines(x=d['inputs']['line_month_ppp'],ymin=0,ymax=np.nanmax(pdf)) print("GQ bounds",lower,upper) povline = d['inputs']['line_month_ppp'] print("Poverty line", povline) print("HC", d['dist']['HC']) btl_gq = scipy.integrate.quad(gq_pdf,lower,povline)[0] print("HC est GQ", btl_gq) btl_kspline = kspline_cdf(povline) print("HC est Kspline", btl_kspline) """ Explanation: k-splines (Quartic & Quintic) End of explanation """ ########################################## plt.rcParams["figure.figsize"] = (12,2.5) fig, ax = plt.subplots(1, 4) ########################################## pstar = pc Lstar = Lc # Left tail, minimum addon = [-0.002,-0.001] pstar = addon+pstar Lstar = [a*ymin/ymean for a in addon]+Lstar # Right tail, maximum addon = [1.001,1.002] pstar = pstar+addon Lstar = Lstar+[1+(a-1)*ymax/ymean for a in addon] spline = scipy.interpolate.UnivariateSpline(pstar,Lstar,k=5,s=1e-8) print("Derivative at 0:", derivative(spline)(0)) print("Derivative at 1:", derivative(spline)(1)) x = np.linspace(0, 1.0, 1000) y = np.linspace(0, 200, 1000) ax[0].scatter(p, L) ax[0].plot(x, spline(x)) spline_quantile = lambda p: ymean * spline.derivative()(p) ax[1].plot(x, spline_quantile(x)) spline_cdf = inverse(spline_quantile) ax[2].plot(y, spline_cdf(y)) spline_pdf = derivative(spline_cdf) ax[3].plot(y, spline_pdf(y)) print(ymin/ymean) print(ymax/ymean) """ Explanation: k-spline boundary conditions End of explanation """ ########################################## plt.rcParams["figure.figsize"] = (12,2.5) fig, ax = plt.subplots(1, 4) ########################################## import math G = 0.33 mean = 500 sigma = scipy.stats.norm.ppf((G + 1)/2, 0, 1) * math.sqrt(2); mu = math.log(mean) - 0.5 * sigma ** 2; lnpdf = lambda y: scipy.stats.lognorm.pdf(y, sigma, scale=math.exp(mu)) lncdf = lambda y: scipy.stats.lognorm.cdf(y, sigma, scale=math.exp(mu)) lnquantile = lambda p: scipy.stats.lognorm.ppf(p, sigma, scale=math.exp(mu)) lnlorenz = lambda p: integral(lnquantile)(p) / mean p = 
np.linspace(0, 1, 200) L = lnlorenz(p) spline = scipy.interpolate.UnivariateSpline(p,L,k=5,s=1e-8) x = np.linspace(0, 1.0, 1000) y = np.linspace(0, 1000, 1000) ax[0].scatter(p, L) ax[0].plot(x, spline(x)) spline_quantile = lambda p: ymean * spline.derivative()(p) ax[1].plot(x, spline_quantile(x)) spline_cdf = inverse(spline_quantile) ax[2].plot(y, spline_cdf(y)) spline_pdf = derivative(spline_cdf) ax[3].plot(y, spline_pdf(y)) ax[3].plot(y, lnpdf(y), color="r") """ Explanation: Can k-splines fit lognormal? End of explanation """ ########################################## plt.rcParams["figure.figsize"] = (12,5) fig, ax = plt.subplots(1, 2) ########################################## # The y values are simply the mean-scaled derivatives of the Lorenz curve dL = np.diff(Lc) dp = np.diff(pc) y = ymean * dL/dp y = np.hstack((0.0, y)) # And we arbitrarily assign these y values to the mid-points of the p values pmid = np.add(pc[1:],pc[:-1])/2 pmid = np.hstack((0.0, pmid)) # Then calculate the direct interpolation of the CDF points ccubic_cdf = scipy.interpolate.CubicSpline(y, pmid, bc_type=((2, 0.0), (2, 0.0)), extrapolate=0) #ccubic_min = inverse(ccubic_cdf, (0, 500))(0.0) ygrid = np.linspace(0, 500.0, 1000) ax[0].plot(y, pmid, ".") ax[0].plot(ygrid, ccubic_cdf(ygrid)) ccubic_pdf = derivative(ccubic_cdf) ax[1].plot(ygrid, ccubic_pdf(ygrid)) """ Explanation: Direct cubic spline interpolation of CDF Since our main objects in visualization are the CDF and PDF, we might try instead intepolating CDF points directly. Then, since the PDF is a simple first derivative, it will be easy to get from a $k$th degree polynomial spline representation of the CDF to a $(k-1)$th degree polynomial for the PDF. As noted earlier, many Lorenz cuves (and hence CDFs) are consistent with a given sample of Lorenz curve points. Moreover, a given Lorenz curve point does not map uniquely to a CDF point. A Lorenz curve effectively defines a mean income for a sequence of quantile bands in the distribution. The mean value theorem suggests that some quantile within that band must have the mean income (assuming continuity), but we have no information of which quantile that will be. Thus, we have to make an assumption. We choose to assign the mean income for each quantile band to the mid-point of that band. For a sequence of $N$ $(p, L)$ pairs sampled from the Lorenz curve, our procedure will give $N-1$ $(p, y)$ pairs sampled from a hypothetical CDF. We also assume that the CDF starts at (0, 0) - that is no person has an income $<= 0$. (This may not be reasonable for a given data set.) End of explanation """ ########################################## plt.rcParams["figure.figsize"] = (12,5) fig, ax = plt.subplots(1, 2) ########################################## # Then calculate the direct interpolation of the CDF points cpchip_cdf = scipy.interpolate.PchipInterpolator(y, pmid, extrapolate=0) ax[0].plot(y, pmid, ".") ax[0].plot(ygrid, cpchip_cdf(ygrid)) cpchip_pdf = derivative(cpchip_cdf) ax[1].plot(ygrid, cpchip_pdf(ygrid)) """ Explanation: Direct PCHIP interpolation of CDF An obvious issue with cubic splines - based on cubics - is that they are not constrained to be monotonic. We can see that above in the "CDF", which turns downward at both tails and is thus ill-formed. It's even more obvious in the "PDF", which is negative in the tails. The PCHIP interpolator is a variant on cubic splines that does guarantee monotonicity. However, to do so, it sacrifices a continuous second derivative. This means that the PDF will not be smooth. 
It works, and is at least well-formed, but it's not very attractive. End of explanation """ ########################################## plt.rcParams["figure.figsize"] = (12,4) fig, ax = plt.subplots(1, 3) ########################################## ### F(x) = 1 - (x/xm)^a => solve for xm and a #pmid = pmid[1:] #y = y[1:] p1 = pmid[-2] y1 = y[-2] p2 = pmid[-1] y2 = y[-1] #alpha = (np.log(1-p2) - np.log(1-p1)) / (np.log(y1) - np.log(y2)) #ym = np.exp(np.log(y2) + (1/alpha)*np.log(1-p2)) #yshift = 0 def pareto_cdf(y, ym, alpha, yshift): return 1 - (ym / (y - yshift))**alpha def pareto_pdf(y, ym, alpha, yshift): return alpha * ym**alpha / (y - yshift)**(alpha+1) def pareto_pdfderiv(y, ym, alpha, yshift): return -alpha*(alpha+1) * ym**alpha / (y - yshift)**(alpha+2) def target(parameters): dd1, ym, alpha, yshift = parameters dd1 = dd1/100 yshift = yshift*100 ccubpar_cdf = scipy.interpolate.CubicSpline(y[:-1], pmid[:-1], bc_type=((1,0),(2,dd1)),extrapolate=0) ccubpar_pdf = ccubpar_cdf.derivative() err1 = pareto_cdf(y1, ym, alpha, yshift) - p1 err1d = pareto_pdf(y1, ym, alpha, yshift) - ccubpar_pdf(y1) err1dd = pareto_pdfderiv(y1, ym, alpha, yshift) - ccubpar_pdf.derivative()(y1) #err1dd = 0 err2 = pareto_cdf(y2, ym, alpha, yshift) - p2 return err1**2 + (err1d*20)**2 + (err1dd*100)**2 + err2**2 from scipy.optimize import basinhopping #guess = (-0.000015, 2.85446391657, 0.783499652638, 135.703869627) guess = (-1, 1, 1, 1) dd1, ym, alpha, yshift = basinhopping(target, guess).x dd1 = dd1/100 yshift = yshift*100 print(dd1, ym, alpha, yshift) ccubpar_cdf = scipy.interpolate.CubicSpline(y[:-1], pmid[:-1], bc_type=((1,0),(2,dd1)),extrapolate=0) ccubpar_pdf = ccubpar_cdf.derivative() d1 = ccubpar_pdf(y1) dd1 = ccubpar_pdf.derivative()(y1) xgrid = np.linspace(0.0, 1.0, 1000) ygrid = np.linspace(0.0, 1000.0, 1000) ax[0].plot(y, pmid, ".") ax[0].plot(ygrid, ccubpar_cdf(ygrid), 'b-') ax[1].plot(ygrid, ccubpar_pdf(ygrid), 'b-') ax[0].plot(ygrid, pareto_cdf(ygrid, ym, alpha, yshift), 'r-') ax[0].set_xlim((0,1000)) ax[0].set_ylim((0, 1.0)) ygrid2 = np.linspace(y1, 1000, 1000) ax[1].plot(ygrid2, pareto_pdf(ygrid2, ym, alpha, yshift), 'r-') final_pdf = np.vectorize(lambda z: 0.0 if z < y[0] else (ccubpar_pdf(z) if z < y1 else pareto_pdf(z, ym, alpha, yshift))) ax[2].plot(ygrid, final_pdf(ygrid)) import scipy.integrate print(scipy.integrate.quad(final_pdf, 0, 1e3))[0] ygrid1 = np.linspace(y1-10, y1, 500) ygrid2 = np.linspace(y1, y1+10, 500) plt.plot(ygrid1, ccubpar_pdf(ygrid1), 'b-') plt.plot(ygrid2, pareto_pdf(ygrid2, ym, alpha, yshift), 'r-') ########################################## plt.rcParams["figure.figsize"] = (12,6) ########################################## ygrid = np.linspace(0, 400, 2000) plt.plot(ygrid, final_pdf(ygrid), 'b-') from scipy.integrate import quad final_cdf = lambda y: quad(final_pdf, 0, y)[0] final_quantile = inverse(final_cdf, domain=(0, 1e6)) final_mean = quad(lambda y: y * final_pdf(y), 0, 1e6)[0] final_lorenz = np.vectorize(lambda p: quad(lambda y: y * final_pdf(y), 0, final_quantile(p))[0] / ymean) final_lorenz(0.5) # should be 0.27 final_L = final_lorenz(np.asarray(pc[1:-1])) final_L = np.hstack((0.0, final_L, 1.0)) plt.plot(pc, final_L,'r-') plt.plot(pc, Lc,'g.') """ Explanation: Cublic spline with pareto tails Since the relatively flat tails seem to be the cause of most non-monotonicity with cubic spline interpolation, we can try instead to replace the tails with a Pareto tails, which naturally supports this asymptotic flattening. 
The advantage of retaining the cubic spline in the left and central parts of the distribution is that we can easily preserve some of the detail of the distribution, rather than forcing it to fit a smooth parametric form. The pareto distribution has two parameters, and we will impose two conditions: (1) that it pass through the final "observed" point of the CDF, and (2) that it match the derivative of the cubic spline at the second-last point of the CDF. In addition, we will vertically compress the pareto CDF so that it continues from the cubic spline.
End of explanation
"""
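# A hedged sketch, not part of the original notebook: sanity-check the hybrid
# spline + Pareto fit by comparing the headcount it implies at the recorded
# poverty line with the reference value stored in the source JSON loaded above.
povline = d['inputs']['line_month_ppp']
print("HC (reference)      ", d['dist']['HC'])
print("HC (spline + Pareto)", final_cdf(povline))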
mne-tools/mne-tools.github.io
0.19/_downloads/7cf7296709bf473b6e7fed6bc98287be/plot_ems_filtering.ipynb
bsd-3-clause
# Author: Denis Engemann <denis.engemann@gmail.com> # Jean-Remi King <jeanremi.king@gmail.com> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne import io, EvokedArray from mne.datasets import sample from mne.decoding import EMS, compute_ems from sklearn.model_selection import StratifiedKFold print(__doc__) data_path = sample.data_path() # Preprocess the data raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' event_ids = {'AudL': 1, 'VisL': 3} # Read data and create epochs raw = io.read_raw_fif(raw_fname, preload=True) raw.filter(0.5, 45, fir_design='firwin') events = mne.read_events(event_fname) picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True, exclude='bads') epochs = mne.Epochs(raw, events, event_ids, tmin=-0.2, tmax=0.5, picks=picks, baseline=None, reject=dict(grad=4000e-13, eog=150e-6), preload=True) epochs.drop_bad() epochs.pick_types(meg='grad') # Setup the data to use it a scikit-learn way: X = epochs.get_data() # The MEG data y = epochs.events[:, 2] # The conditions indices n_epochs, n_channels, n_times = X.shape # Initialize EMS transformer ems = EMS() # Initialize the variables of interest X_transform = np.zeros((n_epochs, n_times)) # Data after EMS transformation filters = list() # Spatial filters at each time point # In the original paper, the cross-validation is a leave-one-out. However, # we recommend using a Stratified KFold, because leave-one-out tends # to overfit and cannot be used to estimate the variance of the # prediction within a given fold. for train, test in StratifiedKFold(n_splits=5).split(X, y): # In the original paper, the z-scoring is applied outside the CV. # However, we recommend to apply this preprocessing inside the CV. # Note that such scaling should be done separately for each channels if the # data contains multiple channel types. X_scaled = X / np.std(X[train]) # Fit and store the spatial filters ems.fit(X_scaled[train], y[train]) # Store filters for future plotting filters.append(ems.filters_) # Generate the transformed data X_transform[test] = ems.transform(X_scaled[test]) # Average the spatial filters across folds filters = np.mean(filters, axis=0) # Plot individual trials plt.figure() plt.title('single trial surrogates') plt.imshow(X_transform[y.argsort()], origin='lower', aspect='auto', extent=[epochs.times[0], epochs.times[-1], 1, len(X_transform)], cmap='RdBu_r') plt.xlabel('Time (ms)') plt.ylabel('Trials (reordered by condition)') # Plot average response plt.figure() plt.title('Average EMS signal') mappings = [(key, value) for key, value in event_ids.items()] for key, value in mappings: ems_ave = X_transform[y == value] plt.plot(epochs.times, ems_ave.mean(0), label=key) plt.xlabel('Time (ms)') plt.ylabel('a.u.') plt.legend(loc='best') plt.show() # Visualize spatial filters across time evoked = EvokedArray(filters, epochs.info, tmin=epochs.tmin) evoked.plot_topomap(time_unit='s', scalings=1) """ Explanation: ============================================== Compute effect-matched-spatial filtering (EMS) ============================================== This example computes the EMS to reconstruct the time course of the experimental effect as described in [1]_. This technique is used to create spatial filters based on the difference between two conditions. 
By projecting the trial onto the corresponding spatial filters, surrogate single trials are created in which multi-sensor activity is reduced to one time series which exposes experimental effects, if present. We will first plot a trials x times image of the single trials and order the trials by condition. A second plot shows the average time series for each condition. Finally a topographic plot is created which exhibits the temporal evolution of the spatial filters. References .. [1] Aaron Schurger, Sebastien Marti, and Stanislas Dehaene, "Reducing multi-sensor data to a single time course that reveals experimental effects", BMC Neuroscience 2013, 14:122. End of explanation """ epochs.equalize_event_counts(event_ids) X_transform, filters, classes = compute_ems(epochs) """ Explanation: Note that a similar transformation can be applied with compute_ems However, this function replicates Schurger et al's original paper, and thus applies the normalization outside a leave-one-out cross-validation, which we recommend not to do. End of explanation """
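# A hedged sketch, not part of the original example: visualize the surrogates
# returned by compute_ems in the same way as the cross-validated ones above,
# reordering trials by condition. We assume X_transform has shape
# (n_epochs, n_times), as in the manual loop earlier.
order = epochs.events[:, 2].argsort()
plt.figure()
plt.title('single trial surrogates (compute_ems)')
plt.imshow(X_transform[order], origin='lower', aspect='auto',
           extent=[epochs.times[0], epochs.times[-1], 1, len(X_transform)],
           cmap='RdBu_r')
plt.xlabel('Time (ms)')
plt.ylabel('Trials (reordered by condition)')
plt.show()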
probml/pyprobml
notebooks/misc/bnn_mnist_sgld_jaxbayes.ipynb
mit
%%capture !pip install git+https://github.com/deepmind/dm-haiku !pip install git+https://github.com/jamesvuc/jax-bayes import haiku as hk import jax.numpy as jnp from jax.experimental import optimizers import jax import jax_bayes import sys, os, math, time import numpy as onp import numpy as np from functools import partial from matplotlib import pyplot as plt os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2" import tensorflow_datasets as tfds """ Explanation: Bayesian MLP for MNIST using preconditioned SGLD We use the Jax Bayes library by James Vuckovic to fit an MLP to MNIST using SGD, and SGLD (with RMS preconditioning). Code is based on: https://github.com/jamesvuc/jax-bayes/blob/master/examples/deep/mnist/mnist.ipynb https://github.com/jamesvuc/jax-bayes/blob/master/examples/deep/mnist/mnist_mcmc.ipynb Setup End of explanation """ def load_dataset(split, is_training, batch_size): ds = tfds.load("mnist:3.*.*", split=split).cache().repeat() if is_training: ds = ds.shuffle(10 * batch_size, seed=0) ds = ds.batch(batch_size) # return tfds.as_numpy(ds) return iter(tfds.as_numpy(ds)) # load the data into memory and create batch iterators train_batches = load_dataset("train", is_training=True, batch_size=1_000) val_batches = load_dataset("train", is_training=False, batch_size=10_000) test_batches = load_dataset("test", is_training=False, batch_size=10_000) """ Explanation: Data End of explanation """ nclasses = 10 def net_fn(batch, sig): """Standard LeNet-300-100 MLP""" x = batch["image"].astype(jnp.float32) / 255.0 # x has size (1000, 28, 28, 1) D = np.prod(x.shape[1:]) # 784 # To match initialization of linear layer # sigma = 1/sqrt(fan-in) # https://dm-haiku.readthedocs.io/en/latest/api.html#id1 # w_init = hk.initializers.TruncatedNormal(stddev=stddev) sizes = [D, 300, 100, nclasses] sigmas = [sig / jnp.sqrt(fanin) for fanin in sizes] mlp = hk.Sequential( [ hk.Flatten(), hk.Linear(sizes[1], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[0]), b_init=jnp.zeros), jax.nn.relu, hk.Linear(sizes[2], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[1]), b_init=jnp.zeros), jax.nn.relu, hk.Linear(sizes[3], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[2]), b_init=jnp.zeros), ] ) return mlp(x) # L2 regularizer will be added to loss reg = 1e-4 """ Explanation: Model End of explanation """ net = hk.transform(partial(net_fn, sig=1)) lr = 1e-3 opt_init, opt_update, opt_get_params = optimizers.rmsprop(lr) # instantiate the model parameters --- requires a sample batch to get size params_init = net.init(jax.random.PRNGKey(42), next(train_batches)) # intialize the optimzier state opt_state = opt_init(params_init) def loss(params, batch): logits = net.apply(params, None, batch) labels = jax.nn.one_hot(batch["label"], 10) l2_loss = 0.5 * sum(jnp.sum(jnp.square(p)) for p in jax.tree_leaves(params)) softmax_crossent = -jnp.mean(labels * jax.nn.log_softmax(logits)) return softmax_crossent + reg * l2_loss @jax.jit def accuracy(params, batch): preds = net.apply(params, None, batch) return jnp.mean(jnp.argmax(preds, axis=-1) == batch["label"]) @jax.jit def train_step(i, opt_state, batch): params = opt_get_params(opt_state) dx = jax.grad(loss)(params, batch) opt_state = opt_update(i, dx, opt_state) return opt_state print(params_init["linear"]["w"].shape) def callback(step, params, train_eval, test_eval, print_every=500): if step % print_every == 0: # Periodically evaluate classification accuracy on train & test sets. 
train_accuracy = accuracy(params, next(train_eval)) test_accuracy = accuracy(params, next(test_eval)) train_accuracy, test_accuracy = jax.device_get((train_accuracy, test_accuracy)) print(f"[Step {step}] Train / Test accuracy: " f"{train_accuracy:.3f} / {test_accuracy:.3f}.") %%time nsteps = 5000 for step in range(nsteps + 1): opt_state = train_step(step, opt_state, next(train_batches)) params_sgd = opt_get_params(opt_state) callback(step, params_sgd, val_batches, test_batches) """ Explanation: SGD End of explanation """ lr = 5e-3 num_samples = 10 # number of samples to approximate the posterior init_stddev = 0.01 # 0.1 # params sampled around params_init # we initialize all weights to 0 since we will be sampling them anyway # net_bayes = hk.transform(partial(net_fn, sig=0)) sampler_fns = jax_bayes.mcmc.rms_langevin_fns seed = 0 key = jax.random.PRNGKey(seed) sampler_init, sampler_propose, sampler_update, sampler_get_params = sampler_fns( key, num_samples=num_samples, step_size=lr, init_stddev=init_stddev ) @jax.jit def accuracy_bayes(params_samples, batch): # average the logits over the parameter samples pred_fn = jax.vmap(net.apply, in_axes=(0, None, None)) preds = jnp.mean(pred_fn(params_samples, None, batch), axis=0) return jnp.mean(jnp.argmax(preds, axis=-1) == batch["label"]) # the log-probability is the negative of the loss logprob = lambda p, b: -loss(p, b) # build the mcmc step. This is like the opimization step, but for sampling @jax.jit def mcmc_step(i, sampler_state, sampler_keys, batch): # extract parameters params = sampler_get_params(sampler_state) # form a partial eval of logprob on the data logp = lambda p: logprob(p, batch) # evaluate *per-sample* gradients fx, dx = jax.vmap(jax.value_and_grad(logp))(params) # generat proposal states for the Markov chains sampler_prop_state, new_keys = sampler_propose(i, dx, sampler_state, sampler_keys) # we don't need to re-compute gradients for the accept stage (unadjusted Langevin) fx_prop, dx_prop = fx, dx # accept the proposal states for the markov chain sampler_state, new_keys = sampler_update(i, fx, fx_prop, dx, sampler_state, dx_prop, sampler_prop_state, new_keys) return jnp.mean(fx), sampler_state, new_keys def callback_bayes(step, params, val_batches, test_batches, print_every=500): if step % print_every == 0: val_acc = accuracy_bayes(params, next(val_batches)) test_acc = accuracy_bayes(params, next(test_batches)) print(f"step = {step}" f" | val acc = {val_acc:.3f}" f" | test acc = {test_acc:.3f}") %%time #get a single sample of the params using the normal hk.init(...) 
params_init = net.init(jax.random.PRNGKey(42), next(train_batches)) # get a SamplerState object with `num_samples` params along dimension 0 # generated by adding Gaussian noise (see sampler_fns(..., init_dist='normal')) sampler_state, sampler_keys = sampler_init(params_init) # iterate the the Markov chain nsteps = 5000 for step in range(nsteps+1): train_logprob, sampler_state, sampler_keys = \ mcmc_step(step, sampler_state, sampler_keys, next(train_batches)) params_samples = sampler_get_params(sampler_state) callback_bayes(step, params_samples, val_batches, test_batches) print(params_samples["linear"]["w"].shape) # 10 samples of the weights for first layer """ Explanation: SGLD End of explanation """ test_batch = next(test_batches) from jax_bayes.utils import entropy, certainty_acc def plot_acc_vs_confidence(predict_fn, test_batch): # plot how accuracy changes as we increase the required level of certainty preds = predict_fn(test_batch) # (batch_size, n_classes) array of probabilities acc, mask = certainty_acc(preds, test_batch["label"], cert_threshold=0) thresholds = [0.1 * i for i in range(11)] cert_accs, pct_certs = [], [] for t in thresholds: cert_acc, cert_mask = certainty_acc(preds, test_batch["label"], cert_threshold=t) cert_accs.append(cert_acc) pct_certs.append(cert_mask.mean()) fig, ax = plt.subplots(1) line1 = ax.plot(thresholds, cert_accs, label="accuracy at certainty", marker="x") line2 = ax.axhline(y=acc, label="regular accuracy", color="black") ax.set_ylabel("accuracy") ax.set_xlabel("certainty threshold") axb = ax.twinx() line3 = axb.plot(thresholds, pct_certs, label="pct of certain preds", color="green", marker="x") axb.set_ylabel("pct certain") lines = line1 + [line2] + line3 labels = [l.get_label() for l in lines] ax.legend(lines, labels, loc=6) return fig, ax """ Explanation: Uncertainty analysis We select the predictions above a confidence threshold, and compute the predictive accuracy on that subset. As we increase the threshold, the accuracy should increase, but fewer examples will be selected. End of explanation """ # plugin approximation to posterior predictive @jax.jit def posterior_predictive_plugin(params, batch): logit_pp = net.apply(params, None, batch) return jax.nn.softmax(logit_pp, axis=-1) def pred_fn(batch): return posterior_predictive_plugin(params_sgd, batch) fig, ax = plot_acc_vs_confidence(pred_fn, test_batch) plt.savefig("acc-vs-conf-sgd.pdf") plt.show() """ Explanation: SGD For the plugin estimate, the model is very confident on nearly all of the points. 
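Each of the num_samples Markov chains follows a stochastic gradient Langevin dynamics update, which for step size $\epsilon$ is roughly $\theta_{t+1} = \theta_t + \tfrac{\epsilon}{2} \nabla_\theta \log p(\theta \mid \mathcal{D}) + \eta_t$, $\eta_t \sim \mathcal{N}(0, \epsilon I)$: a gradient ascent step on the log-posterior (here logprob = -loss, so the L2 penalty plays the role of a Gaussian prior) plus injected Gaussian noise. The rms_langevin variant used here additionally rescales the update with a running RMS estimate of the gradients, in the spirit of RMSprop; we have not checked the exact jax_bayes implementation, so treat this description as a sketch.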
End of explanation """ def posterior_predictive_bayes(params_sampled, batch): """computes the posterior_predictive P(class = c | inputs, params) using a histogram""" pred_fn = lambda p: net.apply(p, jax.random.PRNGKey(0), batch) pred_fn = jax.vmap(pred_fn) logit_samples = pred_fn(params_sampled) # n_samples x batch_size x n_classes pred_samples = jnp.argmax(logit_samples, axis=-1) # n_samples x batch_size n_classes = logit_samples.shape[-1] batch_size = logit_samples.shape[1] probs = np.zeros((batch_size, n_classes)) for c in range(n_classes): idxs = pred_samples == c probs[:, c] = idxs.sum(axis=0) return probs / probs.sum(axis=1, keepdims=True) def pred_fn(batch): return posterior_predictive_bayes(params_samples, batch) fig, ax = plot_acc_vs_confidence(pred_fn, test_batch) plt.savefig("acc-vs-conf-sgld.pdf") plt.show() """ Explanation: SGLD End of explanation """ fashion_ds = tfds.load("fashion_mnist:3.*.*", split="test").cache().repeat() fashion_test_batches = tfds.as_numpy(fashion_ds.batch(10_000)) fashion_test_batches = iter(fashion_test_batches) fashion_batch = next(fashion_test_batches) """ Explanation: Distribution shift We now examine the behavior of the models on the Fashion MNIST dataset. We expect the predictions to be much less confident, since the inputs are now 'out of distribution'. We will see that this is true for the Bayesian approach, but not for the plugin approximation. End of explanation """ def pred_fn(batch): return posterior_predictive_plugin(params_sgd, batch) fig, ax = plot_acc_vs_confidence(pred_fn, fashion_batch) plt.savefig("acc-vs-conf-sgd-fashion.pdf") plt.show() """ Explanation: SGD We see that the plugin estimate is confident (but wrong!) on many of the predictions, which is undesirable. If consider a confidence threshold of 0.6, the plugin approach predicts on about 80% of the examples, even though the accuracy is only about 6% on these. End of explanation """ def pred_fn(batch): return posterior_predictive_bayes(params_samples, batch) fig, ax = plot_acc_vs_confidence(pred_fn, fashion_batch) plt.savefig("acc-vs-conf-sgld-fashion.pdf") plt.show() """ Explanation: SGLD If consider a confidence threshold of 0.6, the Bayesian approach predicts on less than 20% of the examples, on which the accuracy is ~4%. End of explanation """
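# A hedged sketch, not part of the original notebook: compare the average
# predictive entropy of the SGLD ensemble on in-distribution (MNIST) inputs
# versus the shifted Fashion-MNIST inputs. Higher entropy on the shifted data
# is the behaviour we hope for from the Bayesian model.
def mean_entropy(probs, eps=1e-12):
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=-1)))

probs_mnist = posterior_predictive_bayes(params_samples, test_batch)
probs_fashion = posterior_predictive_bayes(params_samples, fashion_batch)
print("mean predictive entropy, MNIST  :", mean_entropy(probs_mnist))
print("mean predictive entropy, Fashion:", mean_entropy(probs_fashion))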
zakandrewking/cobrapy
documentation_builder/simulating.ipynb
lgpl-2.1
import cobra.test model = cobra.test.create_test_model("textbook") """ Explanation: Simulating with FBA Simulations using flux balance analysis can be solved using Model.optimize(). This will maximize or minimize (maximizing is the default) flux through the objective reactions. End of explanation """ solution = model.optimize() print(solution) """ Explanation: Running FBA End of explanation """ solution.objective_value """ Explanation: The Model.optimize() function will return a Solution object. A solution object has several attributes: objective_value: the objective value status: the status from the linear programming solver fluxes: a pandas series with flux indexed by reaction identifier. The flux for a reaction variable is the difference of the primal values for the forward and reverse reaction variables. shadow_prices: a pandas series with shadow price indexed by the metabolite identifier. For example, after the last call to model.optimize(), if the optimization succeeds it's status will be optimal. In case the model is infeasible an error is raised. End of explanation """ %%time model.optimize().objective_value %%time model.slim_optimize() """ Explanation: The solvers that can be used with cobrapy are so fast that for many small to mid-size models computing the solution can be even faster than it takes to collect the values from the solver and convert to them python objects. With model.optimize, we gather values for all reactions and metabolites and that can take a significant amount of time if done repeatedly. If we are only interested in the flux value of a single reaction or the objective, it is faster to instead use model.slim_optimize which only does the optimization and returns the objective value leaving it up to you to fetch other values that you may need. End of explanation """ model.summary() """ Explanation: Analyzing FBA solutions Models solved using FBA can be further analyzed by using summary methods, which output printed text to give a quick representation of model behavior. Calling the summary method on the entire model displays information on the input and output behavior of the model, along with the optimized objective. End of explanation """ model.metabolites.nadh_c.summary() """ Explanation: In addition, the input-output behavior of individual metabolites can also be inspected using summary methods. For instance, the following commands can be used to examine the overall redox balance of the model End of explanation """ model.metabolites.atp_c.summary() """ Explanation: Or to get a sense of the main energy production and consumption reactions End of explanation """ biomass_rxn = model.reactions.get_by_id("Biomass_Ecoli_core") """ Explanation: Changing the Objectives The objective function is determined from the objective_coefficient attribute of the objective reaction(s). Generally, a "biomass" function which describes the composition of metabolites which make up a cell is used. End of explanation """ from cobra.util.solver import linear_reaction_coefficients linear_reaction_coefficients(model) """ Explanation: Currently in the model, there is only one reaction in the objective (the biomass reaction), with an linear coefficient of 1. End of explanation """ # change the objective to ATPM model.objective = "ATPM" # The upper bound should be 1000, so that we get # the actual optimal value model.reactions.get_by_id("ATPM").upper_bound = 1000. 
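# (Equivalently, Model.objective accepts a dict of reactions to coefficients,
# e.g. model.objective = {model.reactions.ATPM: 1.0}; a sketch of the dict form
# mentioned in the explanation below.)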
linear_reaction_coefficients(model) model.optimize().objective_value """ Explanation: The objective function can be changed by assigning Model.objective, which can be a reaction object (or just it's name), or a dict of {Reaction: objective_coefficient}. End of explanation """ from cobra.flux_analysis import flux_variability_analysis flux_variability_analysis(model, model.reactions[:10]) """ Explanation: We can also have more complicated objectives including quadratic terms. Running FVA FBA will not give always give unique solution, because multiple flux states can achieve the same optimum. FVA (or flux variability analysis) finds the ranges of each metabolic flux at the optimum. End of explanation """ cobra.flux_analysis.flux_variability_analysis( model, model.reactions[:10], fraction_of_optimum=0.9) """ Explanation: Setting parameter fraction_of_optimium=0.90 would give the flux ranges for reactions at 90% optimality. End of explanation """ loop_reactions = [model.reactions.FRD7, model.reactions.SUCDi] flux_variability_analysis(model, reaction_list=loop_reactions, loopless=False) flux_variability_analysis(model, reaction_list=loop_reactions, loopless=True) """ Explanation: The standard FVA may contain loops, i.e. high absolute flux values that only can be high if they are allowed to participate in loops (a mathematical artifact that cannot happen in vivo). Use the loopless argument to avoid such loops. Below, we can see that FRD7 and SUCDi reactions can participate in loops but that this is avoided when using the looplesss FVA. End of explanation """ model.optimize() model.summary(fva=0.95) """ Explanation: Running FVA in summary methods Flux variability analysis can also be embedded in calls to summary methods. For instance, the expected variability in substrate consumption and product formation can be quickly found by End of explanation """ model.metabolites.pyr_c.summary(fva=0.95) """ Explanation: Similarly, variability in metabolite mass balances can also be checked with flux variability analysis. End of explanation """ model.objective = 'Biomass_Ecoli_core' fba_solution = model.optimize() pfba_solution = cobra.flux_analysis.pfba(model) """ Explanation: In these summary methods, the values are reported as a the center point +/- the range of the FVA solution, calculated from the maximum and minimum values. Running pFBA Parsimonious FBA (often written pFBA) finds a flux distribution which gives the optimal growth rate, but minimizes the total sum of flux. This involves solving two sequential linear programs, but is handled transparently by cobrapy. For more details on pFBA, please see Lewis et al. (2010). End of explanation """ abs(fba_solution.fluxes["Biomass_Ecoli_core"] - pfba_solution.fluxes[ "Biomass_Ecoli_core"]) """ Explanation: These functions should give approximately the same objective value. End of explanation """
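# A hedged sketch, not part of the original documentation: since pFBA minimizes
# the total sum of absolute fluxes at the optimal growth rate, its flux vector
# should have a smaller (or equal) total absolute flux than the plain FBA one.
# `fluxes` is a pandas Series, so pandas methods can be used directly.
print('total |flux|, FBA : %g' % fba_solution.fluxes.abs().sum())
print('total |flux|, pFBA: %g' % pfba_solution.fluxes.abs().sum())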
mne-tools/mne-tools.github.io
0.12/_downloads/plot_brainstorm_auditory.ipynb
bsd-3-clause
# Authors: Mainak Jas <mainak.jas@telecom-paristech.fr> # Eric Larson <larson.eric.d@gmail.com> # Jaakko Leppakangas <jaeilepp@student.jyu.fi> # # License: BSD (3-clause) import os.path as op import pandas as pd import numpy as np import mne from mne import combine_evoked from mne.minimum_norm import apply_inverse from mne.datasets.brainstorm import bst_auditory from mne.io import read_raw_ctf from mne.filter import notch_filter, low_pass_filter print(__doc__) """ Explanation: Brainstorm auditory tutorial dataset Here we compute the evoked from raw for the auditory Brainstorm tutorial dataset. For comparison, see [1]_ and http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory Experiment: - One subject 2 acquisition runs 6 minutes each. - Each run contains 200 regular beeps and 40 easy deviant beeps. - Random ISI: between 0.7s and 1.7s seconds, uniformly distributed. - Button pressed when detecting a deviant with the right index finger. The specifications of this dataset were discussed initially on the FieldTrip bug tracker: http://bugzilla.fcdonders.nl/show_bug.cgi?id=2300 References .. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM. Brainstorm: A User-Friendly Application for MEG/EEG Analysis. Computational Intelligence and Neuroscience, vol. 2011, Article ID 879716, 13 pages, 2011. doi:10.1155/2011/879716 End of explanation """ use_precomputed = True """ Explanation: To reduce memory consumption and running time, some of the steps are precomputed. To run everything from scratch change this to False. With use_precomputed = False running time of this script can be several minutes even on a fast computer. End of explanation """ data_path = bst_auditory.data_path() subject = 'bst_auditory' subjects_dir = op.join(data_path, 'subjects') raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory', 'S01_AEF_20131218_01.ds') raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory', 'S01_AEF_20131218_02.ds') erm_fname = op.join(data_path, 'MEG', 'bst_auditory', 'S01_Noise_20131218_01.ds') """ Explanation: The data was collected with a CTF 275 system at 2400 Hz and low-pass filtered at 600 Hz. Here the data and empty room data files are read to construct instances of :class:mne.io.Raw. End of explanation """ preload = not use_precomputed raw = read_raw_ctf(raw_fname1, preload=preload) n_times_run1 = raw.n_times mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=preload)]) raw_erm = read_raw_ctf(erm_fname, preload=preload) """ Explanation: In the memory saving mode we use preload=False and use the memory efficient IO which loads the data on demand. However, filtering and some other functions require the data to be preloaded in the memory. End of explanation """ raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'}) if not use_precomputed: # Leave out the two EEG channels for easier computation of forward. raw.pick_types(meg=True, eeg=False, stim=True, misc=True, eog=True, ecg=True) """ Explanation: Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference sensors and 2 EEG electrodes (Cz and Pz). In addition: - 1 stim channel for marking presentation times for the stimuli - 1 audio channel for the sent signal - 1 response channel for recording the button presses - 1 ECG bipolar - 2 EOG bipolar (vertical and horizontal) - 12 head tracking channels - 20 unused channels The head tracking channels and the unused channels are marked as misc channels. Here we define the EOG and ECG channels. 
End of explanation """ annotations_df = pd.DataFrame() offset = n_times_run1 for idx in [1, 2]: csv_fname = op.join(data_path, 'MEG', 'bst_auditory', 'events_bad_0%s.csv' % idx) df = pd.read_csv(csv_fname, header=None, names=['onset', 'duration', 'id', 'label']) print('Events from run {0}:'.format(idx)) print(df) df['onset'] += offset * (idx - 1) annotations_df = pd.concat([annotations_df, df], axis=0) saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int) # Conversion from samples to times: onsets = annotations_df['onset'].values / raw.info['sfreq'] durations = annotations_df['duration'].values / raw.info['sfreq'] descriptions = map(str, annotations_df['label'].values) annotations = mne.Annotations(onsets, durations, descriptions) raw.annotations = annotations del onsets, durations, descriptions """ Explanation: For noise reduction, a set of bad segments have been identified and stored in csv files. The bad segments are later used to reject epochs that overlap with them. The file for the second run also contains some saccades. The saccades are removed by using SSP. We use pandas to read the data from the csv files. You can also view the files with your favorite text editor. End of explanation """ saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True, reject_by_annotation=False) projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0, desc_prefix='saccade') if use_precomputed: proj_fname = op.join(data_path, 'MEG', 'bst_auditory', 'bst_auditory-eog-proj.fif') projs_eog = mne.read_proj(proj_fname)[0] else: projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(), n_mag=1, n_eeg=0) raw.add_proj(projs_saccade) raw.add_proj(projs_eog) del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory """ Explanation: Here we compute the saccade and EOG projectors for magnetometers and add them to the raw data. The projectors are added to both runs. End of explanation """ raw.plot(block=True) """ Explanation: Visually inspect the effects of projections. Click on 'proj' button at the bottom right corner to toggle the projectors on/off. EOG events can be plotted by adding the event list as a keyword argument. As the bad segments and saccades were added as annotations to the raw data, they are plotted as well. End of explanation """ if not use_precomputed: meg_picks = mne.pick_types(raw.info, meg=True, eeg=False) raw.plot_psd(picks=meg_picks) notches = np.arange(60, 181, 60) raw.notch_filter(notches) raw.plot_psd(picks=meg_picks) """ Explanation: Typical preprocessing step is the removal of power line artifact (50 Hz or 60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the original 60 Hz artifact and the harmonics. The power spectra are plotted before and after the filtering to show the effect. The drop after 600 Hz appears because the data was filtered during the acquisition. In memory saving mode we do the filtering at evoked stage, which is not something you usually would do. End of explanation """ if not use_precomputed: raw.filter(None, 100.) """ Explanation: We also lowpass filter the data at 100 Hz to remove the hf components. End of explanation """ tmin, tmax = -0.1, 0.5 event_id = dict(standard=1, deviant=2) reject = dict(mag=4e-12, eog=250e-6) # find events events = mne.find_events(raw, stim_channel='UPPT001') """ Explanation: Epoching and averaging. First some parameters are defined and events extracted from the stimulus channel (UPPT001). 
The rejection thresholds are defined as peak-to-peak values and are in T / m for gradiometers, T for magnetometers and V for EOG and EEG channels. End of explanation """ sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0] onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0] min_diff = int(0.5 * raw.info['sfreq']) diffs = np.concatenate([[min_diff + 1], np.diff(onsets)]) onsets = onsets[diffs > min_diff] assert len(onsets) == len(events) diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq'] print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms' % (np.mean(diffs), np.std(diffs))) events[:, 0] = onsets del sound_data, diffs """ Explanation: The event timing is adjusted by comparing the trigger times on detected sound onsets on channel UADC001-4408. End of explanation """ raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408'] """ Explanation: We mark a set of bad channels that seem noisier than others. This can also be done interactively with raw.plot by clicking the channel name (or the line). The marked channels are added as bad when the browser window is closed. End of explanation """ picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True, exclude='bads') epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=reject, preload=False, proj=True) """ Explanation: The epochs (trials) are created for MEG channels. First we find the picks for MEG and EOG channels. Then the epochs are constructed using these picks. The epochs overlapping with annotated bad segments are also rejected by default. To turn off rejection by bad segments (as was done earlier with saccades) you can use keyword reject_by_annotation=False. End of explanation """ epochs.drop_bad() epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)], epochs['standard'][182:222]]) epochs_standard.load_data() # Resampling to save memory. epochs_standard.resample(600, npad='auto') epochs_deviant = epochs['deviant'].load_data() epochs_deviant.resample(600, npad='auto') del epochs, picks """ Explanation: We only use first 40 good epochs from each run. Since we first drop the bad epochs, the indices of the epochs are no longer same as in the original epochs collection. Investigation of the event timings reveals that first epoch from the second run corresponds to index 182. End of explanation """ evoked_std = epochs_standard.average() evoked_dev = epochs_deviant.average() del epochs_standard, epochs_deviant """ Explanation: The averages for each conditions are computed. End of explanation """ if use_precomputed: sfreq = evoked_std.info['sfreq'] nchan = evoked_std.info['nchan'] notches = [60, 120, 180] for ch_idx in range(nchan): evoked_std.data[ch_idx] = notch_filter(evoked_std.data[ch_idx], sfreq, notches, verbose='ERROR') evoked_dev.data[ch_idx] = notch_filter(evoked_dev.data[ch_idx], sfreq, notches, verbose='ERROR') evoked_std.data[ch_idx] = low_pass_filter(evoked_std.data[ch_idx], sfreq, 100, verbose='ERROR') evoked_dev.data[ch_idx] = low_pass_filter(evoked_dev.data[ch_idx], sfreq, 100, verbose='ERROR') """ Explanation: Typical preprocessing step is the removal of power line artifact (50 Hz or 60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the original 60 Hz artifact and the harmonics. Normally this would be done to raw data (with :func:mne.io.Raw.filter), but to reduce memory consumption of this tutorial, we do it at evoked stage. 
End of explanation """ evoked_std.plot(window_title='Standard', gfp=True) evoked_dev.plot(window_title='Deviant', gfp=True) """ Explanation: Here we plot the ERF of standard and deviant conditions. In both conditions we can see the P50 and N100 responses. The mismatch negativity is visible only in the deviant condition around 100-200 ms. P200 is also visible around 170 ms in both conditions but much stronger in the standard condition. P300 is visible in deviant condition only (decision making in preparation of the button press). You can view the topographies from a certain time span by painting an area with clicking and holding the left mouse button. End of explanation """ times = np.arange(0.05, 0.301, 0.025) evoked_std.plot_topomap(times=times, title='Standard') evoked_dev.plot_topomap(times=times, title='Deviant') """ Explanation: Show activations as topography figures. End of explanation """ evoked_difference = combine_evoked([evoked_dev, evoked_std], weights=[1, -1]) evoked_difference.plot(window_title='Difference', gfp=True) """ Explanation: We can see the MMN effect more clearly by looking at the difference between the two conditions. P50 and N100 are no longer visible, but MMN/P200 and P300 are emphasised. End of explanation """ reject = dict(mag=4e-12) cov = mne.compute_raw_covariance(raw_erm, reject=reject) cov.plot(raw_erm.info) del raw_erm """ Explanation: Source estimation. We compute the noise covariance matrix from the empty room measurement and use it for the other runs. End of explanation """ trans_fname = op.join(data_path, 'MEG', 'bst_auditory', 'bst_auditory-trans.fif') trans = mne.read_trans(trans_fname) """ Explanation: The transformation is read from a file. More information about coregistering the data, see :ref:ch_interactive_analysis or :func:mne.gui.coregistration. End of explanation """ if use_precomputed: fwd_fname = op.join(data_path, 'MEG', 'bst_auditory', 'bst_auditory-meg-oct-6-fwd.fif') fwd = mne.read_forward_solution(fwd_fname) else: src = mne.setup_source_space(subject, spacing='ico4', subjects_dir=subjects_dir, overwrite=True) model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3], subjects_dir=subjects_dir) bem = mne.make_bem_solution(model) fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src, bem=bem) inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov) snr = 3.0 lambda2 = 1.0 / snr ** 2 del fwd """ Explanation: To save time and memory, the forward solution is read from a file. Set use_precomputed=False in the beginning of this script to build the forward solution from scratch. The head surfaces for constructing a BEM solution are read from a file. Since the data only contains MEG channels, we only need the inner skull surface for making the forward solution. For more information: :ref:CHDBBCEJ, :class:mne.setup_source_space, :ref:create_bem_model, :func:mne.bem.make_watershed_bem. End of explanation """ stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM') brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject, surface='inflated', time_viewer=False, hemi='lh') brain.set_data_time_index(120) del stc_standard, evoked_std, brain """ Explanation: The sources are computed using dSPM method and plotted on an inflated brain surface. For interactive controls over the image, use keyword time_viewer=True. Standard condition. 
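For example, an interactive variant of the plotting call above would only change the flag (sketch)::

    brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
                              surface='inflated', time_viewer=True, hemi='lh')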
End of explanation """ stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM') brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject, surface='inflated', time_viewer=False, hemi='lh') brain.set_data_time_index(120) del stc_deviant, evoked_dev, brain """ Explanation: Deviant condition. End of explanation """ stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM') brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject, surface='inflated', time_viewer=False, hemi='lh') brain.set_data_time_index(150) """ Explanation: Difference. End of explanation """
AllenDowney/ModSimPy
notebooks/chap11.ipynb
mit
# Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim.py module from modsim import * """ Explanation: Modeling and Simulation in Python Chapter 11 Copyright 2017 Allen Downey License: Creative Commons Attribution 4.0 International End of explanation """ init = State(S=89, I=1, R=0) """ Explanation: SIR implementation We'll use a State object to represent the number (or fraction) of people in each compartment. End of explanation """ init /= sum(init) """ Explanation: To convert from number of people to fractions, we divide through by the total. End of explanation """ def make_system(beta, gamma): """Make a system object for the SIR model. beta: contact rate in days gamma: recovery rate in days returns: System object """ init = State(S=89, I=1, R=0) init /= sum(init) t0 = 0 t_end = 7 * 14 return System(init=init, t0=t0, t_end=t_end, beta=beta, gamma=gamma) """ Explanation: make_system creates a System object with the given parameters. End of explanation """ tc = 3 # time between contacts in days tr = 4 # recovery time in days beta = 1 / tc # contact rate in per day gamma = 1 / tr # recovery rate in per day system = make_system(beta, gamma) """ Explanation: Here's an example with hypothetical values for beta and gamma. End of explanation """ def update_func(state, t, system): """Update the SIR model. state: State with variables S, I, R t: time step system: System with beta and gamma returns: State object """ s, i, r = state infected = system.beta * i * s recovered = system.gamma * i s -= infected i += infected - recovered r += recovered return State(S=s, I=i, R=r) """ Explanation: The update function takes the state during the current time step and returns the state during the next time step. End of explanation """ state = update_func(init, 0, system) """ Explanation: To run a single time step, we call it like this: End of explanation """ def run_simulation(system, update_func): """Runs a simulation of the system. system: System object update_func: function that updates state returns: State object for final state """ state = system.init for t in linrange(system.t0, system.t_end): state = update_func(state, t, system) return state """ Explanation: Now we can run a simulation by calling the update function for each time step. End of explanation """ run_simulation(system, update_func) """ Explanation: The result is the state of the system at t_end End of explanation """ # Solution goes here """ Explanation: Exercise Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected? Hint: what is the change in S between the beginning and the end of the simulation? End of explanation """ def run_simulation(system, update_func): """Runs a simulation of the system. Add three Series objects to the System: S, I, R system: System object update_func: function that updates state """ S = TimeSeries() I = TimeSeries() R = TimeSeries() state = system.init t0 = system.t0 S[t0], I[t0], R[t0] = state for t in linrange(system.t0, system.t_end): state = update_func(state, t, system) S[t+1], I[t+1], R[t+1] = state return S, I, R """ Explanation: Using TimeSeries objects If we want to store the state of the system at each time step, we can use one TimeSeries object for each state variable. 
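As an aside, the earlier exercise (time between contacts of 4 days, recovery time of 5 days) can also be
answered with this version; a minimal sketch, assuming that TimeSeries behaves like a pandas Series
(which modsim's does), where the drop in S gives the total fraction of the population that was infected:

tc = 4      # time between contacts in days
tr = 5      # recovery time in days
system = make_system(1/tc, 1/tr)
S, I, R = run_simulation(system, update_func)
total_infected = S.iloc[0] - S.iloc[-1]   # drop in the susceptible fraction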
End of explanation """ tc = 3 # time between contacts in days tr = 4 # recovery time in days beta = 1 / tc # contact rate in per day gamma = 1 / tr # recovery rate in per day system = make_system(beta, gamma) S, I, R = run_simulation(system, update_func) """ Explanation: Here's how we call it. End of explanation """ def plot_results(S, I, R): """Plot the results of a SIR model. S: TimeSeries I: TimeSeries R: TimeSeries """ plot(S, '--', label='Susceptible') plot(I, '-', label='Infected') plot(R, ':', label='Recovered') decorate(xlabel='Time (days)', ylabel='Fraction of population') """ Explanation: And then we can plot the results. End of explanation """ plot_results(S, I, R) savefig('figs/chap11-fig01.pdf') """ Explanation: Here's what they look like. End of explanation """ def run_simulation(system, update_func): """Runs a simulation of the system. system: System object update_func: function that updates state returns: TimeFrame """ frame = TimeFrame(columns=system.init.index) frame.row[system.t0] = system.init for t in linrange(system.t0, system.t_end): frame.row[t+1] = update_func(frame.row[t], t, system) return frame """ Explanation: Using a DataFrame Instead of making three TimeSeries objects, we can use one DataFrame. We have to use row to selects rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the DataFrame. End of explanation """ tc = 3 # time between contacts in days tr = 4 # recovery time in days beta = 1 / tc # contact rate in per day gamma = 1 / tr # recovery rate in per day system = make_system(beta, gamma) results = run_simulation(system, update_func) results.head() """ Explanation: Here's how we run it, and what the result looks like. End of explanation """ plot_results(results.S, results.I, results.R) """ Explanation: We can extract the results and plot them. End of explanation """ # Solution goes here """ Explanation: Exercises Exercise Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results. End of explanation """
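A possible solution sketch for this last exercise, using only the DataFrame-based functions defined above
and the values stated in the exercise:

tc = 4      # time between contacts in days
tr = 5      # recovery time in days
system = make_system(1/tc, 1/tr)   # t_end is 7 * 14 days, i.e. 14 weeks
results = run_simulation(system, update_func)
plot_results(results.S, results.I, results.R)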
xmnlab/notebooks
probability/LeaPkgStudies.ipynb
mit
from IPython.display import display, HTML from nxpd import draw import networkx as nx def draw_graph( graph, labels=None ): # create networkx graph G = nx.DiGraph() G.graph['dpi'] = 120 G.add_nodes_from(set([ graph[k1][k2] for k1 in range(len(graph)) for k2 in range(len(graph[k1])) ])) G.add_edges_from(graph) return draw(G, show='ipynb') """ Explanation: Table of Contents <p><div class="lev1"><a href="#Lea-package-first-studies"><span class="toc-item-num">1&nbsp;&nbsp;</span>Lea package first studies</a></div><div class="lev2"><a href="#Flip-coin"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Flip coin</a></div><div class="lev2"><a href="#&quot;Rain-Sprinkler-Grass&quot;-bayesian-network"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>"Rain-Sprinkler-Grass" bayesian network</a></div><div class="lev2"><a href="#Happiness-Test"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Happiness Test</a></div><div class="lev2"><a href="#Alarm-test"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Alarm test</a></div><div class="lev2"><a href="#Test-#1"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Test #1</a></div><div class="lev2"><a href="#References"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>References</a></div> # Lea package first studies End of explanation """ from lea import * flip1 = Lea.fromValFreqs(('head',67),('tail',33)) flip1 P(flip1=='head') Pf(flip1=='head') flip1.random(10) flip2 = flip1.clone() flips = flip1 + '-' + flip2 flips P(flip1==flip2) P(flip1!=flip2) flip1.upper() flip1.upper()[0] def toInt(flip): return 1 if flip=='head' else 0 headCount1 = flip1.map(toInt) headCount1 headCount2 = flip2.map(toInt) headCounts = headCount1 + headCount2 headCounts headCounts.given(flip1==flip2) headCounts.given(flip1!=flip2) flip1.given(headCounts==0) flip1.given(headCounts==1) flip1.given(headCounts==2) """ Explanation: Flip coin End of explanation """ rain = B(20,100) sprinkler = Lea.if_(rain, B(1,100), B(40,100)) grassWet = Lea.buildCPT( ( ~sprinkler & ~rain, False ), ( ~sprinkler & rain, B(80,100)), ( sprinkler & ~rain, B(90,100)), ( sprinkler & rain, B(99,100)) ) grassWet """ Explanation: "Rain-Sprinkler-Grass" bayesian network Suppose that there are two events which could cause grass to be wet: either the sprinkler is on or it's raining. Also, suppose that the rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler is usually not turned on). Then the situation can be modeled with a Bayesian network (shown to the right). All three variables have two possible values, T (for true) and F (for false). The joint probability function is: $P (G,S,R)= P (G| S,R) P(S| R) P(R)$ where the names of the variables have been abbreviated to G = Grass wet (yes/no), S = Sprinkler turned on (yes/no), and R = Raining (yes/no). The model can answer questions like "What is the probability that it is raining, given the grass is wet?" 
by using the conditional probability formula and summing over all nuisance variables: End of explanation """ Sunny = B(70,100) Raise = B(1,100) print(Sunny) print(Raise) # independence check assert Sunny.given(Raise).p(True) == Sunny.p(True) """ Explanation: Happiness Test $P('S', 0.7)$ $P('R', 0.01)$ End of explanation """ Happy = Lea.buildCPT( (Sunny & Raise, B(100,100)), (~Sunny & Raise, B(90,100)), (Sunny & ~Raise, B(70,100)), (~Sunny & ~Raise, B(10,100)) ) Happy # Evidences # P(H|S)=0.703 # P(H|R)=0.97 # P(H)=P(H|S)P(S)+P(H|¬S)P(¬S)=…≈0.5245 assert Happy.given(Sunny).pmf(True) == 0.703 assert Happy.given(Raise).pmf(True) == 0.97 assert Happy.pmf(True) == 0.5245 Raise.given(Happy & Sunny).pmf(True) Raise.given(Happy).pmf(True) Raise.given(Happy & ~Sunny).pmf(True) """ Explanation: $P(of='H', given=['S', 'R'], value=1)$ $P(of='H', given=['!S', 'R'], value=0.9)$ $P(of='H', given=['S', '!R'], value=0.7)$ $P(of='H', given=['!S', '!R'], value=0.1)$ End of explanation """ graph = [ ('Burglary', 'Alarm'), ('Earthquake', 'Alarm'), ('Alarm', 'John'), ('Alarm', 'Mary') ] draw_graph(graph) Burglary = B(1,1000) Earthquake = B(2,1000) Alarm = Lea.buildCPT( (Burglary & Earthquake, B(95,100)), (Burglary & ~Earthquake, B(94,100)), (~Burglary & Earthquake, B(29,100)), (~Burglary & ~Earthquake, B(1,1000)), ) John = Lea.if_(Alarm, B(90,100), B(5,100)) Mary = Lea.if_(Alarm, B(70,100), B(1,100)) # P(+b) P(e) P(a|+b,e) P(+j|a) P(+m|a) print('P(+b):', Burglary.pmf(True)) print('P(e):', Earthquake.pmf(True)) print('P(a|+b,e):', Alarm.given(Burglary & Earthquake).pmf(True)) print('P(+j|a):', John.given(Alarm).pmf(True)) print('P(+m|a):', Mary.given(Alarm).pmf(True)) print('=', 0.001 * 0.002 * 0.95 * 0.9 * 0.7) """ Explanation: Alarm test End of explanation """ graph = [ ('A', 'X1'), ('A', 'X2'), ('A', 'X3'), ] draw_graph(graph) A = Lea.boolProb(1,2) # using variable elimination # X1_A = Lea.if_(A, (2,10), (6,10)) # X2_A = Lea.if_(A, (2,10), (6,10)) # X3_A = Lea.if_(A, (2,10), (6,10)) X1 = B(4, 10) X2 = B(4, 10) X3 = B(4, 10) A.given(X1 & X2 & ~X3).pmf(True) """ Explanation: Test #1 End of explanation """
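Returning to the "Rain-Sprinkler-Grass" network above: the query mentioned there (the probability that it
is raining given that the grass is wet) can be asked directly with the same objects; a sketch, mirroring
the calls used elsewhere in this notebook:

rain.given(grassWet).pmf(True)        # P(rain | grass wet)
sprinkler.given(grassWet).pmf(True)   # P(sprinkler | grass wet)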
GoogleCloudPlatform/practical-ml-vision-book
07_training/07b_gpumax.ipynb
apache-2.0
import tensorflow as tf print('TensorFlow version' + tf.version.VERSION) print('Built with GPU support? ' + ('Yes!' if tf.test.is_built_with_cuda() else 'Noooo!')) print('There are {} GPUs'.format(len(tf.config.experimental.list_physical_devices("GPU")))) device_name = tf.test.gpu_device_name() if device_name != '/device:GPU:0': raise SystemError('GPU device not found') print('Found GPU at: {}'.format(device_name)) """ Explanation: GPU Utilization In this notebook, we show how to take advantage of TensorFlow optimizations for the GPU. Enable GPU and set up helper functions This notebook and pretty much every other notebook in this repository will run faster if you are using a GPU. On Colab: - Navigate to Edit→Notebook Settings - Select GPU from the Hardware Accelerator drop-down On Cloud AI Platform Notebooks: - Navigate to https://console.cloud.google.com/ai-platform/notebooks - Create an instance with a GPU or select your instance and add a GPU Next, we'll confirm that we can connect to the GPU with tensorflow: End of explanation """ %%writefile input.txt gs://practical-ml-vision-book/images/california_fire1.jpg gs://practical-ml-vision-book/images/california_fire2.jpg import matplotlib.pylab as plt import numpy as np import tensorflow as tf def read_jpeg(filename): img = tf.io.read_file(filename) img = tf.image.decode_jpeg(img, channels=3) img = tf.image.convert_image_dtype(img, tf.float32) img = tf.reshape(img, [338, 600, 3]) return img ds = tf.data.TextLineDataset('input.txt').map(read_jpeg) f, ax = plt.subplots(1, 2, figsize=(15,10)) for idx, img in enumerate(ds): ax[idx].imshow( img.numpy() ); """ Explanation: Ingest code End of explanation """ def to_grayscale(img): red = img[:, :, 0] green = img[:, :, 1] blue = img[:, :, 2] c_linear = 0.2126 * red + 0.7152 * green + 0.0722 * blue gray = tf.where(c_linear > 0.0031308, 1.055 * tf.pow(c_linear, 1/2.4) - 0.055, 12.92*c_linear) print(gray.shape) return gray ds = tf.data.TextLineDataset('input.txt').map(read_jpeg).map(to_grayscale) f, ax = plt.subplots(1, 2, figsize=(15,10)) for idx, img in enumerate(ds): im = ax[idx].imshow( img.numpy() , interpolation='none'); if idx == 1: f.colorbar(im, fraction=0.028, pad=0.04) """ Explanation: Adding a map function Let's say that we want to apply a custom formula to convert the images. End of explanation """ # This function is not accelerated. At all. def to_grayscale(img): rows, cols, _ = img.shape result = np.zeros([rows, cols], dtype=np.float32) for row in range(rows): for col in range(cols): red = img[row][col][0] green = img[row][col][1] blue = img[row][col][2] c_linear = 0.2126 * red + 0.7152 * green + 0.0722 * blue if c_linear > 0.0031308: result[row][col] = 1.055 * pow(c_linear, 1/2.4) - 0.055 else: result[row][col] = 12.92*c_linear return result %%time ds = tf.data.TextLineDataset('input.txt').repeat(10).map(read_jpeg) overall = tf.constant([0.], dtype=tf.float32) count = 0 for img in ds: # Notice that we have to call .numpy() to move the data outside TF Graph gray = to_grayscale(img.numpy()) # This moves the data back into the graph m = tf.reduce_mean(gray, axis=[0, 1]) overall += m count += 1 print(overall/count) """ Explanation: 1. 
Iterating through image (don't do this) End of explanation """ def to_grayscale_numpy(img): # the numpy happens here img = img.numpy() rows, cols, _ = img.shape result = np.zeros([rows, cols], dtype=np.float32) for row in range(rows): for col in range(cols): red = img[row][col][0] green = img[row][col][1] blue = img[row][col][2] c_linear = 0.2126 * red + 0.7152 * green + 0.0722 * blue if c_linear > 0.0031308: result[row][col] = 1.055 * pow(c_linear, 1/2.4) - 0.055 else: result[row][col] = 12.92*c_linear # the convert back happens here return tf.convert_to_tensor(result) def to_grayscale(img): return tf.py_function(to_grayscale_numpy, [img], tf.float32) %%time ds = tf.data.TextLineDataset('input.txt').repeat(10).map(read_jpeg).map(to_grayscale) overall = tf.constant([0.], dtype=tf.float32) count = 0 for gray in ds: m = tf.reduce_mean(gray, axis=[0, 1]) overall += m count += 1 print(overall/count) """ Explanation: 2. Pyfunc If you absolutely need to iterate or call Python-only functionality (like time/json/etc.) and still need to use map(), you can use a py_func Data is still moved out of the graph, the job done, and data moved back into the graph. So, efficiency-wise it's not a gain. End of explanation """ # All in GPU def to_grayscale(img): # TensorFlow slicing functionality red = img[:, :, 0] green = img[:, :, 1] blue = img[:, :, 2] # All these are actually tf.mul(), tf.add(), etc. c_linear = 0.2126 * red + 0.7152 * green + 0.0722 * blue # Use tf.cond and tf.where for if-then statements gray = tf.where(c_linear > 0.0031308, 1.055 * tf.pow(c_linear, 1/2.4) - 0.055, 12.92*c_linear) return gray %%time ds = tf.data.TextLineDataset('input.txt').repeat(10).map(read_jpeg).map(to_grayscale) overall = tf.constant([0.]) count = 0 for gray in ds: m = tf.reduce_mean(gray, axis=[0, 1]) overall += m count += 1 print(overall/count) """ Explanation: 3. Use TensorFlow slicing and tf.where This is 10x faster than iterating. End of explanation """ def to_grayscale(img): wt = tf.constant([[0.2126], [0.7152], [0.0722]]) # 3x1 matrix c_linear = tf.matmul(img, wt) # (ht,wd,3) x (3x1) -> (ht, wd) gray = tf.where(c_linear > 0.0031308, 1.055 * tf.pow(c_linear, 1/2.4) - 0.055, 12.92*c_linear) return gray %%time ds = tf.data.TextLineDataset('input.txt').repeat(10).map(read_jpeg).map(to_grayscale) overall = tf.constant([0.]) count = 0 for gray in ds: m = tf.reduce_mean(gray, axis=[0, 1]) overall += m count += 1 print(overall/count) """ Explanation: 4. Use Matrix math and tf.where This is 3x as fast as the slicing. End of explanation """ class Grayscale(tf.keras.layers.Layer): def __init__(self, **kwargs): super(Grayscale, self).__init__(kwargs) def call(self, img): wt = tf.constant([[0.2126], [0.7152], [0.0722]]) # 3x1 matrix c_linear = tf.matmul(img, wt) # (N, ht,wd,3) x (3x1) -> (N, ht, wd) gray = tf.where(c_linear > 0.0031308, 1.055 * tf.pow(c_linear, 1/2.4) - 0.055, 12.92*c_linear) return gray # (N, ht, wd) model = tf.keras.Sequential([ Grayscale(input_shape=(336, 600, 3)), tf.keras.layers.Lambda(lambda gray: tf.reduce_mean(gray, axis=[1, 2])) # note axis change ]) %%time ds = tf.data.TextLineDataset('input.txt').repeat(10).map(read_jpeg).batch(5) overall = tf.constant([0.]) count = 0 for batch in ds: bm = model(batch) overall += tf.reduce_sum(bm) count += len(bm) print(overall/count) """ Explanation: 5. 
Batching Fully vectorize the operations so that they work on a batch of images End of explanation """ from inspect import signature def myfunc(a, b): return (a + b) print(myfunc(3,5)) print(myfunc('foo', 'bar')) print(signature(myfunc).parameters) print(signature(myfunc).return_annotation) from inspect import signature def myfunc(a: int, b: float) -> float: return (a + b) print(myfunc(3,5)) print(myfunc('foo', 'bar')) # runtime doesn't check print(signature(myfunc).parameters) print(signature(myfunc).return_annotation) from inspect import signature import tensorflow as tf @tf.function(input_signature=[ tf.TensorSpec([3,5], name='a'), tf.TensorSpec([5,8], name='b') ]) def myfunc(a, b): return (tf.matmul(a,b)) print(myfunc.get_concrete_function(tf.ones((3,5)), tf.ones((5,8)))) """ Explanation: Results When we did it, these were the timings we got: | Method | CPU time | Wall time | | ---------------------- | ----------- | ------------ | | Iterate | 39.6s | 41.1s | | Pyfunc | 39.7s | 41.1s | | Slicing | 4.44s | 3.07s | | Matmul | 1.22s | 2.29s | | Batch | 1.11s | 2.13s | Signature Playing around with signature End of explanation """
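One way the batching and signature ideas combine (an illustration only, not part of the original
notebook; the wrapper name is made up): wrap the batched grayscale model in a tf.function with a fixed
input signature so the whole pipeline stays in-graph.

@tf.function(input_signature=[
    tf.TensorSpec([None, 336, 600, 3], tf.float32, name='images')])
def batched_gray_means(images):
    # Reuses the Sequential model (Grayscale + reduce_mean) defined earlier.
    return model(images)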
halimacc/CS231n-assignments
assignment2/ConvolutionalNetworks.ipynb
unlicense
# As usual, a bit of setup from __future__ import print_function import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.cnn import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient from cs231n.layers import * from cs231n.fast_layers import * from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): """ returns relative error """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.items(): print('%s: ' % k, v.shape) """ Explanation: Convolutional Networks So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead. First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset. End of explanation """ x_shape = (2, 3, 4, 4) w_shape = (3, 3, 4, 4) x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape) w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape) b = np.linspace(-0.1, 0.2, num=3) conv_param = {'stride': 2, 'pad': 1} out, _ = conv_forward_naive(x, w, b, conv_param) correct_out = np.array([[[[-0.08759809, -0.10987781], [-0.18387192, -0.2109216 ]], [[ 0.21027089, 0.21661097], [ 0.22847626, 0.23004637]], [[ 0.50813986, 0.54309974], [ 0.64082444, 0.67101435]]], [[[-0.98053589, -1.03143541], [-1.19128892, -1.24695841]], [[ 0.69108355, 0.66880383], [ 0.59480972, 0.56776003]], [[ 2.36270298, 2.36904306], [ 2.38090835, 2.38247847]]]]) # Compare your output to ours; difference should be around 2e-8 print('Testing conv_forward_naive') print('difference: ', rel_error(out, correct_out)) """ Explanation: Convolution: Naive forward pass The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive. You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear. You can test your implementation by running the following: End of explanation """ from scipy.misc import imread, imresize kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg') # kitten is wide, and puppy is already square d = kitten.shape[1] - kitten.shape[0] kitten_cropped = kitten[:, d//2:-d//2, :] img_size = 200 # Make this smaller if it runs too slow x = np.zeros((2, 3, img_size, img_size)) x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1)) x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1)) # Set up a convolutional weights holding 2 filters, each 3x3 w = np.zeros((2, 3, 3, 3)) # The first filter converts the image to grayscale. # Set up the red, green, and blue channels of the filter. 
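# (The 0.3 / 0.6 / 0.1 values below are approximately the standard luma weights for the
# red, green, and blue channels; each 3x3 filter is zero everywhere except its centre.)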
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]] w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]] w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]] # Second filter detects horizontal edges in the blue channel. w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]] # Vector of biases. We don't need any bias for the grayscale # filter, but for the edge detection filter we want to add 128 # to each output so that nothing is negative. b = np.array([0, 128]) # Compute the result of convolving each input in x with each filter in w, # offsetting by b, and storing the results in out. out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1}) def imshow_noax(img, normalize=True): """ Tiny helper to show images as uint8 and remove axis labels """ if normalize: img_max, img_min = np.max(img), np.min(img) img = 255.0 * (img - img_min) / (img_max - img_min) plt.imshow(img.astype('uint8')) plt.gca().axis('off') # Show the original images and the results of the conv operation plt.subplot(2, 3, 1) imshow_noax(puppy, normalize=False) plt.title('Original image') plt.subplot(2, 3, 2) imshow_noax(out[0, 0]) plt.title('Grayscale') plt.subplot(2, 3, 3) imshow_noax(out[0, 1]) plt.title('Edges') plt.subplot(2, 3, 4) imshow_noax(kitten_cropped, normalize=False) plt.subplot(2, 3, 5) imshow_noax(out[1, 0]) plt.subplot(2, 3, 6) imshow_noax(out[1, 1]) plt.show() """ Explanation: Aside: Image processing via convolutions As fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check. End of explanation """ np.random.seed(231) x = np.random.randn(4, 3, 5, 5) w = np.random.randn(2, 3, 3, 3) b = np.random.randn(2,) dout = np.random.randn(4, 2, 5, 5) conv_param = {'stride': 1, 'pad': 1} dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout) out, cache = conv_forward_naive(x, w, b, conv_param) dx, dw, db = conv_backward_naive(dout, cache) # Your errors should be around 1e-8' print('Testing conv_backward_naive function') print('dx error: ', rel_error(dx, dx_num)) print('dw error: ', rel_error(dw, dw_num)) print('db error: ', rel_error(db, db_num)) """ Explanation: Convolution: Naive backward pass Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency. When you are done, run the following to check your backward pass with a numeric gradient check. 
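If you get stuck, one possible shape of such an implementation is sketched below. It is a deliberately
slow reference, it assumes the cache holds (x, w, b, conv_param) as stored by the forward pass, and it is
not necessarily the version used in this repository:

def conv_backward_naive_sketch(dout, cache):
    x, w, b, conv_param = cache
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    _, _, H_out, W_out = dout.shape
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dx_pad, dw = np.zeros_like(x_pad), np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3))
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    hs, ws = i * stride, j * stride
                    window = x_pad[n, :, hs:hs + HH, ws:ws + WW]
                    dw[f] += window * dout[n, f, i, j]
                    dx_pad[n, :, hs:hs + HH, ws:ws + WW] += w[f] * dout[n, f, i, j]
    dx = dx_pad[:, :, pad:pad + H, pad:pad + W]
    return dx, dw, db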
End of explanation """ x_shape = (2, 3, 4, 4) x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape) pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2} out, _ = max_pool_forward_naive(x, pool_param) correct_out = np.array([[[[-0.26315789, -0.24842105], [-0.20421053, -0.18947368]], [[-0.14526316, -0.13052632], [-0.08631579, -0.07157895]], [[-0.02736842, -0.01263158], [ 0.03157895, 0.04631579]]], [[[ 0.09052632, 0.10526316], [ 0.14947368, 0.16421053]], [[ 0.20842105, 0.22315789], [ 0.26736842, 0.28210526]], [[ 0.32631579, 0.34105263], [ 0.38526316, 0.4 ]]]]) # Compare your output with ours. Difference should be around 1e-8. print('Testing max_pool_forward_naive function:') print('difference: ', rel_error(out, correct_out)) """ Explanation: Max pooling: Naive forward Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency. Check your implementation by running the following: End of explanation """ np.random.seed(231) x = np.random.randn(3, 2, 8, 8) dout = np.random.randn(3, 2, 4, 4) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout) out, cache = max_pool_forward_naive(x, pool_param) dx = max_pool_backward_naive(dout, cache) # Your error should be around 1e-12 print('Testing max_pool_backward_naive function:') print('dx error: ', rel_error(dx, dx_num)) """ Explanation: Max pooling: Naive backward Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency. Check your implementation with numeric gradient checking by running the following: End of explanation """ from cs231n.fast_layers import conv_forward_fast, conv_backward_fast from time import time np.random.seed(231) x = np.random.randn(100, 3, 31, 31) w = np.random.randn(25, 3, 3, 3) b = np.random.randn(25,) dout = np.random.randn(100, 25, 16, 16) conv_param = {'stride': 2, 'pad': 1} t0 = time() out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param) t1 = time() out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param) t2 = time() print('Testing conv_forward_fast:') print('Naive: %fs' % (t1 - t0)) print('Fast: %fs' % (t2 - t1)) print('Speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('Difference: ', rel_error(out_naive, out_fast)) t0 = time() dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive) t1 = time() dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast) t2 = time() print('\nTesting conv_backward_fast:') print('Naive: %fs' % (t1 - t0)) print('Fast: %fs' % (t2 - t1)) print('Speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('dx difference: ', rel_error(dx_naive, dx_fast)) print('dw difference: ', rel_error(dw_naive, dw_fast)) print('db difference: ', rel_error(db_naive, db_fast)) from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast np.random.seed(231) x = np.random.randn(100, 3, 32, 32) dout = np.random.randn(100, 3, 16, 16) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} t0 = time() out_naive, cache_naive = max_pool_forward_naive(x, pool_param) t1 = time() out_fast, cache_fast = max_pool_forward_fast(x, pool_param) t2 = time() print('Testing pool_forward_fast:') print('Naive: %fs' % (t1 - t0)) print('fast: %fs' % (t2 - t1)) print('speedup: %fx' % ((t1 - t0) / (t2 - t1))) 
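# The fast and naive pooling implementations should agree to within floating-point error: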
print('difference: ', rel_error(out_naive, out_fast)) t0 = time() dx_naive = max_pool_backward_naive(dout, cache_naive) t1 = time() dx_fast = max_pool_backward_fast(dout, cache_fast) t2 = time() print('\nTesting pool_backward_fast:') print('Naive: %fs' % (t1 - t0)) print('speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('dx difference: ', rel_error(dx_naive, dx_fast)) """ Explanation: Fast layers Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py. The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory: bash python setup.py build_ext --inplace The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass recieves upstream derivatives and the cache object and produces gradients with respect to the data and weights. NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation. You can compare the performance of the naive and fast versions of these layers by running the following: End of explanation """ from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward np.random.seed(231) x = np.random.randn(2, 3, 16, 16) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param) dx, dw, db = conv_relu_pool_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout) print('Testing conv_relu_pool') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db)) from cs231n.layer_utils import conv_relu_forward, conv_relu_backward np.random.seed(231) x = np.random.randn(2, 3, 8, 8) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} out, cache = conv_relu_forward(x, w, b, conv_param) dx, dw, db = conv_relu_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout) print('Testing conv_relu:') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db)) """ Explanation: Convolutional "sandwich" layers Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. 
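As a rough illustration (assuming the relu_forward/relu_backward helpers from earlier in the assignment;
the actual helpers may differ in detail), a sandwich layer just chains the individual forward functions,
stores both caches, and unwinds them in reverse for the backward pass:

def conv_relu_forward_sketch(x, w, b, conv_param):
    a, conv_cache = conv_forward_fast(x, w, b, conv_param)
    out, relu_cache = relu_forward(a)
    return out, (conv_cache, relu_cache)

def conv_relu_backward_sketch(dout, cache):
    conv_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)
    return conv_backward_fast(da, conv_cache)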
In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks. End of explanation """ model = ThreeLayerConvNet() N = 50 X = np.random.randn(N, 3, 32, 32) y = np.random.randint(10, size=N) loss, grads = model.loss(X, y) print('Initial loss (no regularization): ', loss) model.reg = 0.5 loss, grads = model.loss(X, y) print('Initial loss (with regularization): ', loss) """ Explanation: Three-layer ConvNet Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network. Open the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug: Sanity check loss After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up. End of explanation """ num_inputs = 2 input_dim = (3, 16, 16) reg = 0.0 num_classes = 10 np.random.seed(231) X = np.random.randn(num_inputs, *input_dim) y = np.random.randint(num_classes, size=num_inputs) model = ThreeLayerConvNet(num_filters=3, filter_size=3, input_dim=input_dim, hidden_dim=7, dtype=np.float64) loss, grads = model.loss(X, y) for param_name in sorted(grads): f = lambda _: model.loss(X, y)[0] param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6) e = rel_error(param_grad_num, grads[param_name]) print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))) """ Explanation: Gradient check After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to 1e-2. End of explanation """ np.random.seed(231) num_train = 100 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } model = ThreeLayerConvNet(weight_scale=1e-2) solver = Solver(model, small_data, num_epochs=15, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=1) solver.train() """ Explanation: Overfit small data A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy. 
End of explanation """ plt.subplot(2, 1, 1) plt.plot(solver.loss_history, 'o') plt.xlabel('iteration') plt.ylabel('loss') plt.subplot(2, 1, 2) plt.plot(solver.train_acc_history, '-o') plt.plot(solver.val_acc_history, '-o') plt.legend(['train', 'val'], loc='upper left') plt.xlabel('epoch') plt.ylabel('accuracy') plt.show() """ Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting: End of explanation """ model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001) solver = Solver(model, data, num_epochs=1, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=20) solver.train() """ Explanation: Train the net By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set: End of explanation """ from cs231n.vis_utils import visualize_grid grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1)) plt.imshow(grid.astype('uint8')) plt.axis('off') plt.gcf().set_size_inches(5, 5) plt.show() """ Explanation: Visualize Filters You can visualize the first-layer convolutional filters from the trained network by running the following: End of explanation """ np.random.seed(231) # Check the training-time forward pass by checking means and variances # of features both before and after spatial batch normalization N, C, H, W = 2, 3, 4, 5 x = 4 * np.random.randn(N, C, H, W) + 10 print('Before spatial batch normalization:') print(' Shape: ', x.shape) print(' Means: ', x.mean(axis=(0, 2, 3))) print(' Stds: ', x.std(axis=(0, 2, 3))) # Means should be close to zero and stds close to one gamma, beta = np.ones(C), np.zeros(C) bn_param = {'mode': 'train'} out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param) print('After spatial batch normalization:') print(' Shape: ', out.shape) print(' Means: ', out.mean(axis=(0, 2, 3))) print(' Stds: ', out.std(axis=(0, 2, 3))) # Means should be close to beta and stds close to gamma gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8]) out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param) print('After spatial batch normalization (nontrivial gamma, beta):') print(' Shape: ', out.shape) print(' Means: ', out.mean(axis=(0, 2, 3))) print(' Stds: ', out.std(axis=(0, 2, 3))) np.random.seed(231) # Check the test-time forward pass by running the training-time # forward pass many times to warm up the running averages, and then # checking the means and variances of activations after a test-time # forward pass. N, C, H, W = 10, 4, 11, 12 bn_param = {'mode': 'train'} gamma = np.ones(C) beta = np.zeros(C) for t in range(50): x = 2.3 * np.random.randn(N, C, H, W) + 13 spatial_batchnorm_forward(x, gamma, beta, bn_param) bn_param['mode'] = 'test' x = 2.3 * np.random.randn(N, C, H, W) + 13 a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param) # Means should be close to zero and stds close to one, but will be # noisier than training-time forward passes. print('After spatial batch normalization (test-time):') print(' means: ', a_norm.mean(axis=(0, 2, 3))) print(' stds: ', a_norm.std(axis=(0, 2, 3))) """ Explanation: Spatial Batch Normalization We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization." 
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map. If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different imagesand different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W. Spatial batch normalization: forward In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following: End of explanation """ np.random.seed(231) N, C, H, W = 2, 3, 4, 5 x = 5 * np.random.randn(N, C, H, W) + 12 gamma = np.random.randn(C) beta = np.random.randn(C) dout = np.random.randn(N, C, H, W) bn_param = {'mode': 'train'} fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] dx_num = eval_numerical_gradient_array(fx, x, dout) da_num = eval_numerical_gradient_array(fg, gamma, dout) db_num = eval_numerical_gradient_array(fb, beta, dout) _, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param) dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache) print('dx error: ', rel_error(dx_num, dx)) print('dgamma error: ', rel_error(da_num, dgamma)) print('dbeta error: ', rel_error(db_num, dbeta)) """ Explanation: Spatial batch normalization: backward In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check: End of explanation """
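As noted in the spatial batch normalization discussion above, a common implementation folds the spatial
dimensions into the batch dimension and reuses the vanilla batch norm routine; a sketch, assuming the
batchnorm_forward function from earlier in the assignment (not necessarily the implementation used here):

def spatial_batchnorm_forward_sketch(x, gamma, beta, bn_param):
    N, C, H, W = x.shape
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)       # (N*H*W, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return out, cache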
MelroLeandro/Matematica-Discreta-para-Hackers
jpynb_source/Chapter1_Introducao/.ipynb_checkpoints/Chapter1_Introducao-checkpoint.ipynb
gpl-2.0
2+2 """ Explanation: Matemática Discreta para Hackers Version 0.1 Bem vindo a Matemática Discreta para Hackers. O repositório Github está desponivel em github/Matematica-Discreta-e-Programacao-Usando-Python. Esperamos que goste deste livro, e encorajamos que contribuira na sua melhoria! Capítulo 1 Python A linguagem de programação Python é constituida por diferenres construções sintácticas, uma grande variedade de funções em bibliotecas e estructuras de dados standard à linguagem. Formalmente podemos ignorar grande parte destes atributos, para o tipo de aplicações que temos em mente. Pretendemos implementar simples funções ou pequenos programas com o proposito de resolver problemas de matemática discreta e ensentivar o seu estudo mais aprefundado. A complexidade dos problemas seram incrementadas progrecivamente ao longo dos Capítulos. O que inicialmente são pequenos scripts, com meia duzia de linhas de programação, nos últimos Capítulos do livro vai exigir a utilização de vários módulos descitos em ficeiros separados. Nesse sentido, a parte inicial, ou nos capítulos iniciais, a execução das linhas de comando faz-se no Jupyter ou directamente no interpretagor. Na parte final deste Notebook, passa a ser esegida a utilização dum editor de texto ou de um ambiente de programação. Aqui usamos o IDLE que apesar das suas limitações é usado aqui como ambiente de desenvolvimento standard. Temos no entanto de aprender alguns conceitos de programação básicos antes de podermos resolver qualquer problema. Os exemplos de utilização seram apresentados após apresentação de um enquadramento teórico, nos Capítulos consecuentes. Em todos estes Capítulos encorajamos a que os exemplos apresentados sejam alterados e executados no Jupyter e se possivel reescritos directamente no interpretador ou ambiente de desenvolvimento. Neste ponto assumimos que tem o ambiente Jupyter a funcionar no seu computador. Fundamentos O espaço abaixo é usual designar no Jupyter célula de execução: Neste caso a célula está vazia. Com isto quer dizer que não tem código ou programa para ser executado. Para executar uma célula basta que esta esteja selecionada e seja teclado simultaniamente Shift+Return. Podemos começar por usar uma célula para fazer um pouco de aritmética. End of explanation """ (5 - 1) * ((7 + 1) / (3 - 1)) """ Explanation: Resumino: o resultado de avaliar 2+2 é 4. Fantástico. Exprimente alteras o operador usando para isso a sintaxe descrita abaixo. Operador | Operação | Examplemplo | Valor ... ---------|-----------|-------------|----------- | potência | 2 3 | 8 % | resto da divisão inteira | 22 % 8 |6 // | quociente da divisão inteira | 22 // 8 | 2 / | divisão | 22 / 8 | 2.75 * | multiplicação | 3 * 5 |15 - | Subtraction | 5 - 2 | 3 + | Addition | 2 + 2 | 4 A ordem das operações (também designada de precedência) dos operadores aritméticos do Pythone é similar à que é usada na matemática. O operador * é avaliado primeiro; os operadores , /, //, e % são avaliados a seguir, da esquerda para a direita; por último é avaliado + e - (também da esquerda para a direita). Naturalmente que podemos usar parenteses para superimpor a ordem de avaliação que desejar. End of explanation """ 5+ 42 + 5 + * 2 """ Explanation: Na tentativa de avaliar uma expressão, como a anterir, o Python, guiado pela ordem de precedencia imposta pelos operadores ou pela parentização, avalia progrecivamente subfórmulas até opter um valor único. 
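Por exemplo, a expressão (5 - 1) * ((7 + 1) / (3 - 1)) usada acima é reduzida passo a passo:

(5 - 1) * ((7 + 1) / (3 - 1))
      4 * ((7 + 1) / (3 - 1))
      4 * (8 / (3 - 1))
      4 * (8 / 2)
      4 * 4.0
         16.0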
<img src="000056.png" width = 200/> As regras usadas para formar expressões que podem ser avaliadas ( ou que o interpretador de Python perceba) são parte fundamental do Python como linguagem de programação. Estas regras podem ser entendidas como identicas às regras gramaticais que nos ajudam a comunicar. Sempre que pedimos ao Python para executar uma instrução desconhecida, ou cuja descição é incorrecta ou que o Python não consegue preceber, é emitida uma mensagem identificando um erro de sintaxe: End of explanation """ H= "\"Reciprocamente\", continuou Tweedledee, \ \"Se é assim, ele pode ser, \n \ e se não é, será; mas como não é não se preocupa. \ Isto é lógica.\" " print(H) """ Explanation: Acima são apresentados dois exemplos de expressões que não respeitam as regras gramaticais da linguagem de programação. Constantes Literais Um exemplo de uma constante literal é um número como 5, 1.23, 9.25e-3 ou uma string (sequência de caracteres) como 'Isto é uma string' ou "É uma string!". Estas constantes são designadas literais porque devem ser interpretadas à letra (literalmente). O número 2 é entendido como dois, é uma constante porque o seu valor (significado) não pode ser alterado. Números Os números em Python são de três tipos: inteiros, ponto flutuante e complexos: 2 é um exemplo de inteiro, os inteiros são elementos do conjunto dos números inteiros. Existem no entanto limitações à magnitude dos inteiros que se podem usar. 3.23 e 52.3E-4 são exemplos de números no sistema de virgula flutuante (ou floats, para abreviar). A notação E indica as potências de 10. Neste caso, 52.3E-4 significa 52.3$\times$ 10$^{-4}$. (-5+4j) e (2.3 - 4.6j) são exemplos de números complexos, que em Matemáica é normal representar por -5+4i e 2.3 - 4.6i. Strings [caracteres] Uma string é uma sequência de caracteres. As strings são basicamente uma sequência de símbolos. As stings são usualmente usadas para representar palavras que podem ser da língua Inglesa ou de qualquer língua que seja suportada pelo padrão Unicode (permitindo codificar quase todas as línguas do mundo). Aspas Unitárias Uma strings pode ser definida por uma sequência de caracteres delimitada por aspas unitárias (ou apóstrofes) tais como 'O estudo da lógica remonta à civilização helénica'. Todos os espaços em branco, isto é, espaços e tabulações são preservados no estado em que se encontram. Aspas Duplas As strings podem ser também definidas usando aspas duplas por exemplo : "A arte da argumentação levou à morte de Sócrates." Aspas Triplas Outra forma a definir strings que ocupam várias linhas é usar aspas triplas (""" ou '''). Um exemplo: '''A palavra "trivial" tem uma etimologia interessante. É a conjugação de "tri" (significando '3') e "via" (significando caminho). Originalmente refere-se ao "trivium", as três áreas fundamentais do 'curriculae': gramática, retórica e lógica. Assuntos que se tem de dominar para aceder ao "quadrivium", que consiste na aritmética, geometria, música e astronomia. ''' Sequências de Escape Para definir uma string que contenha um apóstrofe ('), como em: 'Why was logic considered to be fundamental to one's education?' sem que a apóstrofe interna entre em conflito com os delimitadores, usa-se uma sequência de escape. Para evitar o conflito, o caracter apóstrofe é representado na string por $\setminus$'. A string deve assim ser definida como: 'Why was logic considered to be fundamental to one$\setminus$'s education?' 
Outra forma de definir a string anterior seria \emph{"Why was logic considered to be fundamental to one$\setminus$'s education?"}, através das aspas duplas. De forma idêntica, é usada uma sequência de escape para inserir aspas duplas numa string limitada por aspas duplas. A própria barra invertida pode ser inserida na string pela sequência de escape $\setminus\setminus$. Como apresentado para definir uma string com mais de duas linhas usa-se por limitador aspas triplas. Outro processo é usar uma sequência de escape $\setminus$n, para indicar o fim de uma linha e o início de outra linha na string. Por exemplo, 'A lógica centra-se na razão e na noção de verdade.$\setminus$n A retórica fundamenta-se em ideias feitas e populistas.' Existem outras sequências de escape, apresenta-se aqui apenas as mais usadas. Para um descrição sistemática use a documentação do interpretador. Num editor, durante a programação, é frequente ter a necessidade de continuar uma string na linha imediatamente abaixo. Para isso é usada uma única barra invertida no fim da linha. Por exemplo em Looking Glass de Lewis Carroll: End of explanation """ H= r"\"Reciprocamente\", continuou Tweedledee, \ \"Se é assim, ele pode ser, \n \ e se não é, será; mas como não é não se preocupa. \ Isto é lógica.\" " print(H) """ Explanation: Se por algum motivo tem necessidade que o interpretador não trate as sequências de escape, na definição da string deve usar como prefixo um r ou um R. Por exemplo, na string anterior: End of explanation """ H='Originalmente' ' a lógica lidava' ' com linguagem natural' print(H) """ Explanation: As strings são Imutáveis [unicode: descrição e exemplos] [seria mais lógico iniciar o estudo com listas, tuplos diferença entre estruturas mutáveis e imotáveis] Isso significa que, como constantes literais, uma vez definida uma string, esta não pode sofrer alterações. Concatenação de Literais do Tipo String Quando numa linha de código duas strings são postas lado a lado, o interpretador faz a sua concatenação. Por exemplo: End of explanation """ H='Seria útil'+' demonstrar a correcção'+' dum argumento.' print(H) """ Explanation: De forma mais descritiva pode-se recorrer ao operador +: End of explanation """ H1 = 'Seria útil ' H2 = 'demonstrar a correcção ' H3 = 'dum argumento.' print(H1+H2+H3) """ Explanation: Variáveis As variáveis podem ser usadas para referenciar literais, por forma a permitir o seu tratamento e manipulação. De forma genérica, uma variável pode ser entendida como uma referência a uma parte da memória do computador onde está armazenada informação. Diferindo das constantes literais, uma vez que o seu significado pode variar no decurso do programa. End of explanation """ i = 5 print(i) i = i + 1 print(i) s = '''Esta é uma string de múltiplas linhas. Esta é a segunda linha.''' print(s) """ Explanation: No exemplo H1, H2 e H3 são usadas para identificar três strings, cuja concatenação é imprimida na shell través do comando print. As variáveis são entendidas como identificadores. Entendendo-se por identificador um nome dado para identificar um objecto. Existem regras para descrever os identificadores: O primeiro caracter do identificador tem de ser uma letra do alfabeto (maiúsculo ASCII ou minúsculo ASCII) ou um '_'. O resto do nome do identificador pode consistir de letras (maiúsculo ASCII ou minúsculo ASCII), '_' ou dígitos (0-9). Nomes de identificadores são \textit{case-sensitive}. Por exemplo, myname e myName são identificadores diferentes. 
Exemplos de nomes de identificadores válidos são i, __my_name, name_23 e a1b2_c3. Exemplos de nomes de identificadores inválidos são 2things e my-name. End of explanation """ i = 5 print(i) """ Explanation: No programa começamos por atribuir o valor constante literal 5 à variável i através do operador de atribuição (=). Uma linha deste tipo é designada de instrução, e indica neste caso, que passámos a referenciar através do nome da variável i o objecto 5. Em seguida, imprime-se o valor de i através do comando print, que imprime o valor da variável na shell. Na instrução seguinte somamos 1 ao valor referenciado por i. A partir deste momento i passa a referenciar o objecto 6. Em seguida, imprime-se o valor de i, agora 6. Como já se tinha feito na secção anterior, de forma análoga referencia-se um objecto string pela variável s, que depois se imprime. Linhas Lógicas e Físicas As linhas físicas são aquelas que escrevemos num editor a quando da definição dum programa. Uma linha lógica é a que o interpretador de Python entende por uma única instrução. O Python implicitamente assume que cada linha física corresponde a uma linha lógica. Um exemplo de uma linha lógica é uma instrução como print('Sócrates é mortal') caso esteja escrita no editor numa única linha, deve também ser entendida como uma linha física. Implicitamente, Python incentiva o uso de uma única instrução por linha, com o propósito de tornar o código mais legível. Se pretende definir mais do que uma linha lógica numa única linha física, então deve separar as linhas lógicas através de um ponto-e-virgula (';') para indicar o fim de cada linha lógica ou instrução. Por exemplo, End of explanation """ i = 5; print(i) """ Explanation: é o mesmo que End of explanation """ H= "\"Reciprocamente\", continuou Tweedledee, \ \"Se é assim, ele pode ser, \n \ e se não é, será; mas como não é não se preocupa. \ Isto é lógica.\" " print(H) """ Explanation: Voltando a um exemplo anterior: End of explanation """ i = 5 print('São ', i) # Erro! Existe um espaço no início da linha print('São,',i,' os macacos.') """ Explanation: O objecto string que passa a ser referenciado por H deve ser entendido como definido numa única linha lógica, apesar de ocupar diferentes linhas físicas. Neste sentido usa-se ; (ponto-e-virgula) para separar linhas lógicas na mesma linha física, enquanto $\setminus$ para separar uma linha lógica em diferentes linhas físicas. Identação Os espaços em branco são muito importantes no Python. Na verdade, os espaços brancos no início de uma linha definem a estrutura do programa. Nesta situação são designados de identação. Os espaços ou tabulações no início de uma linha determinam o nível de identação de uma linha lógica, que por sua vez agrupam instruções. Significando isto que linhas com o mesmo nível de identação são executadas em sequência. Estes conjuntos de instruções são designados blocos. Tenta-se ao longo do texto descrever a importância dos blocos. Note que, má identação pode originar erros a quando da execução. Por exemplo: End of explanation """ 2+3 3*5 """ Explanation: Note-se que, existe um espaço simples no início da segunda linha. O erro indica que a sintaxe do programa está incorrecta, ou seja, o programa não está escrito com a estrutura correcta. Não podemos iniciar novos blocos de instruções arbitrariamente (com excepção do inicial). Os casos onde se podem usar novos blocos são apresentados ao longo do capítulo. 
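A versão corrigida do exemplo anterior, com as duas instruções print no mesmo nível de identação, já
executa sem erros:

i = 5
print('São ', i)
print('São,', i, ' os macacos.')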
How to indent
Do not mix tabs and spaces for indentation, since not all platforms support that. It is recommended to use one tab, or two spaces, or four spaces for each indentation level. Choose one of these indentation styles. More important than the choice itself is staying consistent with it; that is, keep a single kind of indentation throughout all of your code.
Operators and Expressions
Introduction
Most statements (logical lines) that you write contain expressions. A simple example of an expression is 2+3. An expression can be broken into operators and operands.
Operators are functions that can be identified by symbols, such as +, or by special keywords. Operators require arguments, which we call operands. In the previous example, 2 and 3 are operands.
Operands
An expression can be evaluated interactively in the interpreter. For example, to test the expression 2+3, we use the Python interpreter prompt:
End of explanation
"""

#!/usr/bin/python
# File name: if.py

number = 23
guess = int(input('Qual é o número inteiro? '))

if guess == number:
    print('Parabéns, você acertou.')
    # New block starts here
    print('Tenha um bom dia...')
    # New block ends here
elif guess < number:
    print('Não, é maior que isso.')
    # Another block
    # The block can contain one or more lines ...
else:
    print('Não, é menor que isso.')

print('Adeus.')
# This last statement is always executed, after the if statement
# has been evaluated
"""
Explanation: Operator | Name | Explanation
----------|-------------|---------------------
+ | Addition | Adds two objects
- | Subtraction | Gives a negative number, or the subtraction of one number from another
* | Multiplication | Returns the product of two numbers, or a string repeated a given number of times
** | Power | Returns x raised to the power y
/ | Division | Divides x by y
// | Integer division | Returns the integer part of the quotient
% | Modulo | Returns the remainder of the integer division
< | Less than | Compares x with y, returning True if x is less than y, and False otherwise
> | Greater than | Returns True if x is greater than y, and False otherwise
<= | Less than or equal to | Returns True if x is less than or equal to y, and False otherwise
>= | Greater than or equal to | Returns True if x is greater than or equal to y, and False otherwise
== | Equal to | Tests whether the objects are equal
!= | Not equal to | Tests whether the objects are different
not | Boolean NOT | If x is True, returns False. If x is False, returns True.
and | Boolean AND | x and y returns False if x is False, otherwise it returns the evaluation of y.
or | Boolean OR | If x is True, it returns True, otherwise it returns the evaluation of y.

Controlling program flow
The programs we have seen so far are described by a series of statements, and the Python interpreter executes them in a fixed order. How can the flow of execution be changed? For example, you may want the program to make decisions and do different things depending on the situation, such as printing 'Bom Dia' or 'Boa Tarde' depending on the time of day.
This is accomplished with Python's control-flow statements if, for and while, which allow one or more blocks of statements to be executed only if — or repeatedly while — a condition is true.
Blocks controlled by an if
The if statement is used to evaluate a condition: if the condition is true, one block of statements is executed (which we call the if-block); otherwise another block of statements is executed (which we call the else-block). The else clause is optional.
End of explanation
"""
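# Illustrative addition (not in the original text): a few quick evaluations of the
# operators listed in the table above, typed at the interpreter prompt. The values are arbitrary.
print(7 // 2)          # integer division
print(7 % 2)           # remainder of the integer division
print(2 ** 4)          # power
print(3 <= 5)          # comparison, evaluates to True
print(not True)        # boolean NOT
print(True and False)  # boolean AND
"""
Explanation: Before moving on to control flow, the following cell is an illustrative addition to the original text: it simply evaluates a handful of the operators from the table above so that their results can be seen directly in the shell.
End of explanation
"""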
if True:
    print('Sim, é verdade')
"""
Explanation: In this program, the user is asked for an integer and the program checks whether it is equal to a hidden number. A variable number is used to refer to the integer to be guessed, in this case number = 23. The user has only one attempt to guess the number. The user's guess is read with the input() function. A function is understood here as a reusable module or block of code. The input() function displays its argument as a prompt in the shell and waits for the user to type a string of characters. The string provided by the user is returned as the function's output value. The int() function converts that string to an integer, which the variable guess then refers to. In general, the int() function converts objects to an integer whenever possible; here it is used to convert a string of characters to an integer. Next, the integer referenced by the variable guess is compared with the number referenced by number. If they are equal, a message congratulating the user is printed. Note that indentation levels are used to tell the Python interpreter that the sequence of statements belongs to a block. Note also that the line with the if statement ends with a colon, indicating that a block of statements follows. If the user's guess is smaller than the number referenced by the variable number, the user is told to try, on the next run, a number larger than the current guess. An elif clause is used here, which reduces the amount of indentation required. Note that the elif and else lines also end with a colon and are followed by the block of statements they control. It should be stressed that the elif and else parts are optional. A minimal valid if statement takes the form:
End of explanation
"""

#!/usr/bin/python
# File name: while.py

number = 23
running = True

while running:
    guess = int(input('Qual é o número inteiro?'))

    if guess == number:
        print('Parabéns, você acertou.')
        running = False # This makes the while loop stop
    elif guess < number:
        print('Não, é maior que isso.')
    else:
        print('Não, é menor que isso.')
else:
    print('O loop while terminou.')

print('Adeus.')
"""
Explanation: After the if statement and its associated elif and else have been executed, execution moves on to the next block of statements. In this case it returns to the main block, where it finds the statement print('Adeus.'). After running this line of code, the interpreter finishes executing the program. In practice, it is not very convenient to have to run the program again every time you want to make another attempt. Let us try to solve that problem.
While loops
The while statement allows a block of statements to be executed repeatedly, for as long as a condition is true. A while statement may have an optional else clause.
End of explanation
"""

#!/usr/bin/python
# File name: for.py

for i in range(1, 5):
    print(i)
else:
    print('O ciclo terminou.')
"""
Explanation: In this program we improve on the previous game, with the advantage that several attempts can be made in the same run, the program ending only when the number is guessed. This is meant to illustrate the use of the while loop. In this code, the input() call and the if block of the previous program form a block controlled by the while. A new variable, running, is added before the while block and is initialised to refer to the truth value True. Since the condition controlling the while is true, the block controlled by the while is executed. After this block of statements has been executed, the condition is evaluated again. If running still refers to true, the while block is executed again; otherwise, if running now refers to false, execution moves to the optional else block, after which the statements in the main block are executed. In this sense, the else block is executed when the condition controlling the while becomes False, which may happen the first time the condition is evaluated. If there is an else clause in a while loop, it is always executed unless the while loop never terminates. The values True and False are called Boolean objects, or truth values. Note that an else block in a while loop is redundant: if its statements are simply placed after the while loop, in the enclosing block, the program behaves in the same way.
For loops
The statement for <var> in <object> imposes the cyclic execution of a set of statements; for example, it executes the block once for each item in a sequence.
End of explanation
"""
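# Illustrative addition (not in the original text): the for statement iterates over any
# iterable object, not just range(); here it walks through a list and then a string.
for palavra in ['lógica', 'retórica', 'verdade']:
    print(palavra)

for letra in 'abc':
    print(letra)
"""
Explanation: The following cell is an illustrative addition to the original text. It shows that for <var> in <object> works with any iterable object — here a list and then a string — complementing the range-based example used in the text. The example values are arbitrary.
End of explanation
"""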
#!/usr/bin/python
# File name: break.py

while True:
    s = input('Diga alguma coisa: ')
    if s == 'vai dar uma volta':
        break
    print('Hoje não está muito comunicativo.')
    print('Escreveu', len(s),' caracteres.')

print('Então adeus ;-(')
"""
Explanation: In this program, a sequence of numbers is printed. The sequence of objects to print is generated with the built-in function range. The range function can take two numeric arguments, returning a sequence of numbers that starts at the first argument, is successively incremented by one, and stops before reaching the second. In the example, range(1,5) is used, producing the sequence [1, 2, 3, 4]. The step between consecutive items in this sequence can be controlled with a third argument. For example, range(1,5,2) returns a reference to the object [1,3]. Note that in the range function the second argument acts as the limit of the sequence and is never reached. In the loop, for i in range(1,5) is equivalent to for i in [1, 2, 3, 4]: on each execution of the block it controls, i takes a different value, one for each value in the list [1, 2, 3, 4]. On the first execution i refers to the integer 1, on the second to the integer 2, on the third to the integer 3, and on the fourth to the integer 4. On each of these executions we print the value of i, and the block is executed exactly four times — as many times as there are objects in the list. After that, 'O ciclo terminou' is printed. Remember that the else part is optional. When included, it is always executed once after the for loop has finished, unless a break statement is encountered. Note that the loop for <var> in <object> can iterate over any iterable object. Here we have a list of numbers; in general we can use, for example, objects of type tuple, set, string or generic list.
The break statement
The break statement is used to stop a loop from continuing. It allows, for example, a while loop to be terminated without the condition that controls it becoming false. Note, however, that when break is used in for or while loops, the corresponding else blocks are not executed.
End of explanation
"""
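# Illustrative addition (not in the original text): when break ends the loop early,
# the else block of the for loop is skipped.
for i in range(1, 5):
    if i == 3:
        break
    print(i)
else:
    print('O ciclo terminou.')
"""
Explanation: The following cell is an illustrative addition to the original text. It repeats the for.py example but interrupts the loop with break when i reaches 3; because of the break, the else block is not executed, which is exactly the behaviour described above.
End of explanation
"""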
def mod3(n):
    return n % 3

mod3(3)
"""
Explanation: In this program, each time the user types something the program prints "Hoje não está muito comunicativo." and then reports the number of characters typed. The behaviour changes whenever the user types "vai dar uma volta": in that case the loop terminates by executing break, control returns to the main block, "Então adeus ;-(" is printed and the program ends. The length of a string is obtained with the built-in function len. Remember that the break statement can also be used with for loops. However, although this statement makes certain algorithms easier to implement, we will try to avoid using it here.
Python exercises
Exercise: Write the function below:
def mod3(n):
    return n % 3
and run it for different integer arguments. How should the % operator be interpreted in Python? How does the function behave for negative numbers?
End of explanation
"""

2+-2
2++2
"""
Explanation: Exercise: Define the function incrementaUmaUnidade, which takes an argument x and returns x+1. Test your function for x=3, x=5 and x=1.5.
Exercise: Define the function somaDe1AteN(n), which returns 1+2+...+n using the formula 1+2+...+n = n(n+1)/2. Test the function for several values of n.
Exercise: Define the function inverso(x), which returns 1/x. Apply the function to x=0. How does Python handle the natural domain of a function?
Exercise: Assuming the function duplica(x), which returns 2x, and the function incrementaUmaUnidade(x) from an earlier exercise are defined, what is the result of duplica(incrementaUmaUnidade(6))? What is the result of incrementaUmaUnidade(duplica(6))? Explain these results.
Exercise: Try running incrementaUmaUnidade, from an earlier exercise, with the argument '123'. Can we add a number to a string in Python?
Exercise: Try running duplica, from an earlier exercise, with the argument '123'. How does the * operator behave when its operands are an integer and a string?
Exercise: In Python, if s is a string, s[0] identifies its first character. Define and test a function that, applied to a string, returns its first character.
Exercise: In Python, [a, b, c, ..., x] represents a list of objects. For example, [1, 5, 2] represents the list with the three numbers 1, 5 and 2. How does the function from the previous exercise behave when its argument is a list?
Exercise: What is the result of applying the built-in functions sum, min and max to a list of numbers?
Exercise: What is the result of executing min(range(n)) and max(range(n)) when n is a positive integer?
Exercise: Rewrite the function somaDe1AteN from the earlier exercise so that it uses the functions presented in the sum and range exercises.
Exercise: Explain the result of executing 2+-2 and 2++2.
End of explanation
"""

def inverso(x):
    return 1/x

1 + inverso(2*5)
"""
Explanation: Exercise: Try 2+++2. Explain the result.
Exercise: Try 2**3 and 2**4. Describe the behaviour of the ** operator.
Exercise: Try "abc" + "def" and 'abc' + 'def'. Describe the behaviour of the + operator when applied to strings.
Exercise: Can the * operator be applied to strings? Try 3*'12' and explain the result.
Exercise: Execute 9-8*2+6 and (5-1)*(1+2)**3. What is the precedence of the operators used in these expressions?
Exercise: Defining the function:
def inverso(x):
    return 1/x
what is the result of executing
End of explanation
"""

def primeiro(s):
    return s[0]
"""
Explanation: How is the evaluation carried out when one of the operands is a function call? How is the evaluation carried out when a function is applied to an expression?
Exercise: The function below returns the first character of a string:
End of explanation
"""

primeiro('Bom dia')
"""
Explanation: Add a documentation string to the function.
Execute:
End of explanation
"""

primeiro.__doc__
"""
Explanation: and then execute
End of explanation
"""

def codigoErrado(x):
    Return X**2 - 1
"""
Explanation: Exercise: Identify the syntax errors in the definition of the function below:
End of explanation
"""
ngoldschlag/HighTechIndustries
CalculateHT.ipynb
gpl-3.0
# import libraries import pandas as pd import numpy as np # data paths xwalkPath = '' blsPath = '' """ Explanation: High Tech Industries (STEM Concentration) This notebook uses BLS Industry-Occupation employment data to identify a set of High Tech industries according to the methodology in Hecker (2005). The resulting list, based on the relative concentration of STEM or "technology intensive" occupations. Moreover, these High Tech industries are classified into levels according to the intensity with which they utilize STEM workers. Heckler, D. (2005). High-technology employment: a NAICS-based update. Monthly Lab. Rev., 128, 57. End of explanation """ # import list of 'technology intensive' occupations from Hecker (2005), Table 3 stemOcc = pd.read_csv(xwalkPath+'hecker2005_table3.txt') stemOcc = stemOcc[['occupationcode']] stemOcc.columns = ['occ00'] # import BLS soc crosswalk, 2000 to 2010 soc0010 = pd.read_csv(xwalkPath+'soc_2000_to_2010_crosswalk.csv') soc0010 = soc0010[['2000 SOC code','2010 SOC code']] soc0010.columns = ['occ00','occ10'] # concord Hecker (2005) high tech occupations stemOcc = pd.merge(stemOcc, soc0010, on='occ00', how='left') stemOcc = stemOcc[['occ10']] stemOcc.columns = ['occ'] stemOcc = stemOcc.drop_duplicates() print 'Count of STEM occupations (2010 SOC): ', len(stemOcc) """ Explanation: Import list of 2000 SOC occupations identified in Hecker (2005) as technology intensive. These occupations are then concorded to 2010 SOC codes so that they can be used to identify STEM employment in more recent OES Industry-Occupation data. The BLS 2000 SOC to 2010 SOC crosswalk can be found here: http://www.bls.gov/soc/ End of explanation """ # import 2012 OES data oes2012 = pd.read_csv(blsPath+'nat4d_M2012_dl_1_113300_517100.csv') oes2012 = oes2012.append(pd.read_csv(blsPath+'nat4d_M2012_dl_2_517200_999300.csv')) # keep only detail level records, dropping summary and aggregate records oes2012 = oes2012[(oes2012.OCC_GROUP=='detailed') & (oes2012.OCC_CODE!='00-0000')] oes2012 = oes2012.reset_index(drop=True) oes2012 = oes2012[['NAICS','OCC_CODE','TOT_EMP']] # subset to first 4 digits of naics, dropping zero padding oes2012['NAICS'] = oes2012['NAICS'].astype(str) oes2012['NAICS'] = oes2012['NAICS'].str[0:4] # clean and destring total employment oes2012['TOT_EMP'] = oes2012['TOT_EMP'].str.replace(' ','') oes2012['TOT_EMP'] = oes2012['TOT_EMP'].str.replace(' ','') oes2012['TOT_EMP'] = oes2012['TOT_EMP'].str.replace(',','') oes2012['TOT_EMP'] = oes2012['TOT_EMP'].str.replace('\*\*','') oes2012['TOT_EMP'] = pd.to_numeric(oes2012['TOT_EMP']) oes2012.columns = ['naics', 'occ', 'tot_emp'] # import 2014 OES data oes2014 = pd.read_csv(blsPath+'nat4d_M2014_dl.csv') # keep only detail level records, dropping summary and aggregate records oes2014 = oes2014[(oes2014.OCC_GROUP=='detailed') & (oes2014.OCC_CODE!='00-0000')] oes2014 = oes2014.reset_index(drop=True) oes2014 = oes2014[['NAICS','OCC_CODE','TOT_EMP']] # subset to first 4 digits of naics, dropping zero padding oes2014['NAICS'] = oes2014['NAICS'].astype(str) oes2014['NAICS'] = oes2014['NAICS'].str[0:4] # clean and destring total employment oes2014['TOT_EMP'] = oes2014['TOT_EMP'].str.replace(' ','') oes2014['TOT_EMP'] = oes2014['TOT_EMP'].str.replace(' ','') oes2014['TOT_EMP'] = oes2014['TOT_EMP'].str.replace(',','') oes2014['TOT_EMP'] = oes2014['TOT_EMP'].str.replace('\*\*','') oes2014['TOT_EMP'] = pd.to_numeric(oes2014['TOT_EMP']) oes2014.columns = ['naics', 'occ', 'tot_emp'] """ Explanation: Import OES BLS Industry-Occupation data for 
2012 and 2014. These tables are at the detailed occupation and 4-digit 2012 NAICS industry level. The 2012 data comes split into two files. BLS OES data can be found here: http://www.bls.gov/oes/tables.htm End of explanation """ # flag STEM occupations 2012 OES oes2012ht = pd.merge(oes2012, stemOcc, on='occ', how='left', indicator=True) oes2012ht['htocc'] = 0 oes2012ht.loc[oes2012ht._merge=='both','htocc'] = 1 # calculate STEM employment oes2012ht['htemp'] = oes2012ht.tot_emp * oes2012ht.htocc # sum emp and STEM emp by industry, calc ratio and average oes2012ht_gb = oes2012ht[['tot_emp','htemp','naics']].groupby('naics').agg(sum) oes2012ht_gb['naics']=oes2012ht_gb.index oes2012ht_gb = oes2012ht_gb.reset_index(drop=True) oes2012ht_gb['htratio'] = oes2012ht_gb.htemp/oes2012ht_gb.tot_emp oes2012ht_gb['htratio_mean'] = oes2012ht_gb.htratio.mean() # flag industry by high tech level oes2012ht_gb['oes12htlvl'] = '' oes2012ht_gb.loc[oes2012ht_gb.htratio>=2*oes2012ht_gb.htratio_mean,'oes12htlvl'] = 'Level III' oes2012ht_gb.loc[oes2012ht_gb.htratio>=3*oes2012ht_gb.htratio_mean,'oes12htlvl'] = 'Level II' oes2012ht_gb.loc[oes2012ht_gb.htratio>=5*oes2012ht_gb.htratio_mean,'oes12htlvl'] = 'Level I' # show count of 4-digit industries by level print '#'*20 +' High Tech Industries - 2012 OES ' + '#'*20 print '\nCount of 4-digit 2012 NAICS by HT Level:' print oes2012ht_gb.groupby('oes12htlvl').agg([len])['naics'] # list level I industries print '\nList of Level I HT Industries:' print list(oes2012ht_gb[oes2012ht_gb.oes12htlvl=='Level I']['naics']) # flag STEM occupations 2014 OES oes2014ht = pd.merge(oes2014, stemOcc, on='occ', how='left', indicator=True) oes2014ht['htocc'] = 0 oes2014ht.loc[oes2014ht._merge=='both','htocc'] = 1 # calculate STEM employment oes2014ht['htemp'] = oes2014ht.tot_emp * oes2014ht.htocc # sum emp and STEM emp by industry, calc ratio and average oes2014ht_gb = oes2014ht[['tot_emp','htemp','naics']].groupby('naics').agg(sum) oes2014ht_gb['naics']=oes2014ht_gb.index oes2014ht_gb = oes2014ht_gb.reset_index(drop=True) oes2014ht_gb['htratio'] = oes2014ht_gb.htemp/oes2014ht_gb.tot_emp oes2014ht_gb['htratio_mean'] = oes2014ht_gb.htratio.mean() # flag industry by high tech level oes2014ht_gb['oes14htlvl'] = '' oes2014ht_gb.loc[oes2014ht_gb.htratio>=2*oes2014ht_gb.htratio_mean,'oes14htlvl'] = 'Level III' oes2014ht_gb.loc[oes2014ht_gb.htratio>=3*oes2014ht_gb.htratio_mean,'oes14htlvl'] = 'Level II' oes2014ht_gb.loc[oes2014ht_gb.htratio>=5*oes2014ht_gb.htratio_mean,'oes14htlvl'] = 'Level I' # show count of 4-digit industries by level print '#'*20 +' High Tech Industries - 2014 OES ' + '#'*20 print '\nCount of 4-digit 2012 NAICS by HT Level:' print oes2014ht_gb.groupby('oes14htlvl').agg([len])['naics'] # list level I industries print '\nList of Level I HT Industries:' print list(oes2014ht_gb[oes2014ht_gb.oes14htlvl=='Level I']['naics']) """ Explanation: Flag STEM or 'technology oriented' occupations on the OES data. Then calculate the total employment and STEM employment for each industry. Take the ratio of STEM employment to total employment in each industry and find the mean STEM emplyoment ratio across all industries. 
Finally, implement the cutoff rules for High Tech industries as defined in Hecker (2005): Level I industries have STEM ratio greater than (or equal to) 5 times the average STEM concentration, Level II includes industries with a STEM employment ratio between 3 and 5 times the average, and Level III includes industries with a STEM employment ratio between 2 and 3 times the average. End of explanation """ hecker05 = pd.read_csv(xwalkPath+'hecker2005_table4.txt')[['naics','level']] hecker05['naics'] = hecker05['naics'].astype(str) hecker05.columns = ['naics','hkr05htlvl'] outDF = pd.merge(hecker05,oes2012ht_gb[['naics','oes12htlvl']], on='naics', how='outer') outDF = pd.merge(outDF, oes2014ht_gb[['naics','oes14htlvl']], on='naics', how='outer') outDF['oes12htlvl'] = outDF.oes12htlvl.fillna(np.nan) outDF.loc[outDF.oes12htlvl=='','oes12htlvl'] = np.nan outDF['oes14htlvl'] = outDF.oes14htlvl.fillna(np.nan) outDF.loc[outDF.oes14htlvl=='','oes14htlvl'] = np.nan outDF = outDF.dropna(how='all',subset=['hkr05htlvl','oes12htlvl','oes14htlvl']) outDF = outDF.sort_values(by='naics') outDF.to_csv(xwalkPath+'ht_stem_industries.csv', index=False) print outDF """ Explanation: Combine with list of 4-digit 2002 NAICS industries from Hecker (2005) and export to csv. Note that NAICS industries from Hecker (2005) are 2002 NAICS while the 2012 and 2014 OES data use 2012 NAICS. For comparability, these industries would need to be concorded to the same vintage of industry classification. End of explanation """
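# Hedged sketch (an addition, not part of the original notebook): one way to put the
# Hecker (2005) 2002-vintage NAICS codes on a 2012-NAICS basis would be a crosswalk merge.
# The crosswalk file name and its column names below are hypothetical placeholders.
xwalk0212 = pd.read_csv(xwalkPath + 'naics_2002_to_2012_crosswalk.csv', dtype=str)  # hypothetical file
xwalk0212.columns = ['naics02', 'naics12']
hecker05_12 = pd.merge(hecker05.rename(columns={'naics': 'naics02'}),
                       xwalk0212, on='naics02', how='left')
hecker05_12 = hecker05_12[['naics12', 'hkr05htlvl']].drop_duplicates()
"""
Explanation: As a hedged, illustrative addition to the notebook, the cell above sketches the concordance step suggested in the note: merging the Hecker (2005) 2002 NAICS list onto a 2002-to-2012 NAICS crosswalk so that all three vintages can be compared on a common basis. The crosswalk file name and column names are assumptions for illustration only.
End of explanation
"""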
zambzamb/zpic
zdf/legacy/zdf_view.ipynb
agpl-3.0
from zdf import zdf_read_grid, zdf_read_particles """ Explanation: Plotting ZDF data files To Plot ZDF data files you must first import the ZDF module End of explanation """ (data, info) = zdf_read_grid( "J3-000500.zdf" ) """ Explanation: Next you need to read the data. You should also read the metadata while you are at it. End of explanation """ print(type(data)) print(info) """ Explanation: data is a NumPy ndarray, info is a dictionary with all the Metadata. End of explanation """ import numpy as np from bokeh.io import push_notebook, show, output_notebook output_notebook() from bokeh.plotting import figure from bokeh.models import LinearColorMapper, BasicTicker, ColorBar from bokeh.core.enums import TextBaseline p = figure(x_range=(info['grid']['axis'][0]['min'], info['grid']['axis'][0]['max']), y_range=(info['grid']['axis'][1]['min'], info['grid']['axis'][1]['max']), toolbar_sticky=False) p.title.text = info['grid']['label'] p.xaxis.axis_label = info['grid']['axis'][0]['label'] p.yaxis.axis_label = info['grid']['axis'][1]['label'] color_map = LinearColorMapper(palette="Viridis256", low = np.amin(data), high = np.amax(data)) p.image(image=[data], x = 0, y = 0, dw = info['grid']['axis'][0]['max'], dh = info['grid']['axis'][1]['max'], color_mapper = color_map ) color_bar = ColorBar(color_mapper = color_map, ticker = BasicTicker(), location = (0,0)) p.add_layout( color_bar, 'right') t = show(p, notebook_handle = True) """ Explanation: You can plot the data with any of your favorite tools. Plotting with Bokeh End of explanation """ import matplotlib.pyplot as plt %matplotlib notebook fig = plt.figure( figsize = (8,6), dpi = 80) fig.subplots_adjust( top = 0.85 ) fig.set_facecolor("#FFFFFF") timeLabel = r'$\sf{t = ' + str( info['iteration']['t'] ) + \ ' ['+info['iteration']['tunits']+r']}$' plotTitle = r'$\sf{' + info['grid']['label'] + r'}$' + '\n' + timeLabel plotArea = fig.add_subplot(1,1,1) plotArea.set_title(plotTitle, fontsize = 16) colorMap = plotArea.imshow(data, cmap = plt.cm.jet, interpolation = 'nearest', origin = 'lower') colorBar = fig.colorbar(colorMap) colorBar.set_label(r'$\sf{'+info['grid']['label'] + ' [' + info['grid']['units'] + r']}$', fontsize = 14) xlabel = info['grid']['axis'][0]['label'] + '[' + info['grid']['axis'][0]['units'] + ']' ylabel = info['grid']['axis'][1]['label'] + '[' + info['grid']['axis'][1]['units'] + ']' plt.xlabel(r'$\sf{'+xlabel+r'}$', fontsize = 14) plt.ylabel(r'$\sf{'+ylabel+r'}$', fontsize = 14) """ Explanation: Plotting with MatplotLib End of explanation """ (particles, info) = zdf_read_particles("particles-electrons-001200.zdf") """ Explanation: Working with particle data The routines also work with particle data, that can be read using the zdf_read_particles command: End of explanation """ print(type(particles)) print(type(particles['x1'])) """ Explanation: Particles is a dictionary of NumPy arrays containing all the particles quantities End of explanation """ import matplotlib.pyplot as plt %matplotlib notebook x = particles['x1'] y = particles['u1'] plt.plot(x, y, 'r.', ms=1,alpha=0.5) t = str(info["iteration"]["t"]) tunits = str(info["iteration"]["tunits"]) title = info['particles']['name'] + ' u_1 x_1' timeLabel = r'$\sf{t = ' + t + ' [' + tunits + r']}$' plt.title(r'$\sf{' + title + r'}$' + '\n' + timeLabel) xlabel = 'x_1' + '[' + info['particles']['units']['x1'] + ']' ylabel = 'u_1' + '[' + info['particles']['units']['u1'] + ']' plt.xlabel(r'$\sf{' + xlabel + r'}$', fontsize=14) plt.ylabel(r'$\sf{' + ylabel + r'}$', fontsize=14) """ 
Explanation: Again, you can plot the data with any of your favorite tools. Here's an example using Matplotlib End of explanation """
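# Illustrative addition (not part of the original notebook): a quick diagnostic using the
# same particle dictionary, assuming the 'u1' quantity shown in the phase-space plot above.
plt.hist(particles['u1'], bins=64, histtype='step')
plt.xlabel('u_1 [' + info['particles']['units']['u1'] + ']')
plt.ylabel('number of particles')
plt.title(info['particles']['name'] + ' u_1 distribution')
plt.show()
"""
Explanation: As a further illustrative addition (not part of the original notebook), the particle data read with zdf_read_particles can also be used for quick diagnostics such as a momentum histogram; the cell above assumes the 'u1' key and metadata fields already used in the plot above.
End of explanation
"""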
rmcgibbo/mdtraj
examples/solvent-accessible-surface-area.ipynb
lgpl-2.1
from __future__ import print_function
%matplotlib inline
import numpy as np
import mdtraj as md
"""
Explanation: In this example, we'll compute the solvent accessible surface area of one of the residues in our protein across each frame in an MD trajectory. We're going to use our trusty alanine dipeptide trajectory for this calculation, but in a real-world situation you'll probably want to use a more interesting peptide or protein, especially one with a Trp residue.
End of explanation
"""

help(md.shrake_rupley)

trajectory = md.load('ala2.h5')
sasa = md.shrake_rupley(trajectory)

print(trajectory)
print('sasa data shape', sasa.shape)
"""
Explanation: We'll use the algorithm from Shrake and Rupley for computing the SASA. Here's the function in MDTraj:
End of explanation
"""

total_sasa = sasa.sum(axis=1)
print(total_sasa.shape)

from matplotlib.pylab import *
plot(trajectory.time, total_sasa)
xlabel('Time [ps]', size=16)
ylabel('Total SASA (nm)^2', size=16)
show()
"""
Explanation: The computed sasa array contains the solvent accessible surface area for each atom in each frame of the trajectory. Let's sum over all of the atoms to get the total SASA from all of the atoms in each frame.
End of explanation
"""

def autocorr(x):
    "Compute an autocorrelation with numpy"
    x = x - np.mean(x)
    result = np.correlate(x, x, mode='full')
    result = result[result.size//2:]
    return result / result[0]

semilogx(trajectory.time, autocorr(total_sasa))
xlabel('Time [ps]', size=16)
ylabel('SASA autocorrelation', size=16)
show()
"""
Explanation: We probably don't really have enough data to compute a meaningful autocorrelation, but for more realistic datasets, this might be something that you want to do.
End of explanation
"""
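# Illustrative addition (not part of the original notebook): the intro mentions looking at a
# single residue, so here we sum the per-atom SASA over the atoms of one (arbitrarily chosen)
# residue of the alanine dipeptide.
residue = trajectory.topology.residue(1)   # assumption: residue index 1 exists in ala2.h5
atom_indices = [atom.index for atom in residue.atoms]
residue_sasa = sasa[:, atom_indices].sum(axis=1)

plot(trajectory.time, residue_sasa)
xlabel('Time [ps]', size=16)
ylabel('Residue SASA (nm)^2', size=16)
show()
"""
Explanation: As a hedged, illustrative addition to the notebook, the cell above shows one way to get the per-residue quantity mentioned in the introduction: select the atoms of a single residue from the topology and sum the per-atom SASA values over those atoms in each frame. The choice of residue index is arbitrary and assumed to exist in the loaded trajectory.
End of explanation
"""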
willingc/jupyter-data-seeker
JupyterDataSeeker.ipynb
gpl-2.0
import github3 """ Explanation: Jupyter Data Seeker This notebook uses the github3py project maintained by Ian Cordasco. This notebook is a starter notebook for finding information about repositories that are managed by the Jupyter team. Repos are from the Jupyter and IPython GitHub organizations. End of explanation """ gh = github3.login('willingc', '') type(gh) """ Explanation: GitHub Authorization Note: Be careful. Don't check in to version control your username and password. End of explanation """ willingc = gh.user('willingc') type(willingc) print(willingc.name) # print followers for f in willingc.followers(): print(f) # print a dictionary of user's content print(willingc.as_dict()) """ Explanation: User End of explanation """ gists = [g for g in gh.gists()] gists[0].as_dict() """ Explanation: Gists End of explanation """ # Get all Jupyter repos repos = [r.refresh() for r in gh.repositories_by('jupyter')] for repo in repos: print(repo.name, repo.url) print(' ', repo.description) print(' ', repo.homepage) print(' Open ', repo.open_issues_count) # Get all IPython repos repos = [r.refresh() for r in gh.repositories_by('ipython')] for repo in repos: print(repo.name, repo.url) print(' ', repo.description) print(' ', repo.homepage) for org in willingc.organizations(): print(org.url) willingc.organizations().as_dict """ Explanation: Repos in Organization End of explanation """ sub_jup_ui = ['jupyter/notebook', 'jupyter/jupyter_console', 'jupyter/qtconsole'] sub_kernels = ['ipython/ipython', 'ipython/ipythonwidgets', 'ipython/ipyparallel'] sub_formatting = ['jupyter/nbconvert', 'jupyter/nbformat'] sub_education = ['jupyter/nbgrader'] sub_deployment= ['jupyter/jupyterhub', 'jupyter/jupyter-drive', 'jupyter/nbviewer','jupyter/tmpnb', 'jupyter/tmpnb-deploy','jupyter/dockerspawner','jupyter/docker-stacks'] sub_architecture = ['jupyter/jupyter_client', 'jupyter/jupyter_core'] for subproject in sub_jup_ui: print(subproject) """ Explanation: Jupyter Subprojects User Interface, Kernels, Formatting and conversion, Education, Deployment, Architecture End of explanation """ for issue in willingc.iter_repo_issues('jupyter','notebook'): if issue.state == 'open': print(issue.number, issue.title) myissues = willingc.iter_repo_issues('jupyter','notebook') for issue in gh.iter_repo_issues(owner='jupyter', repository='jupyter'): print(issue.number, issue.title) print(issue.labels) print(issue.created_at) print(issue.updated_at) """ Explanation: Issues in a subproject repo End of explanation """ def list_issues(token, owner='jupyter', repo=repo): for issue in token.iter_repo_issues(owner=owner, repository=repo): if issue.state == 'open': print(issue.number, issue.title) """ Explanation: Issues in a Jupyter Repo End of explanation """
Pretendi/Team_Jimmy_Paul
pcube.ipynb
mit
%matplotlib inline import os import datetime as dt import numpy as np import pandas as pd import statsmodels as sm from IPython.display import display, HTML import matplotlib.pyplot as plt """ Explanation: <span style="color: #f2cf4a ; font-family: Babas; font-size: 3em;">$$P^3$$</span> <center> <span style="color: #f2cf4a ; font-family: Babas; font-size: 3em;">P</span><span style="color: #00000; font-family: Babas; font-size: 3em;">robabilistic </span> <span style="color: #f2cf4a ; font-family: Babas; font-size: 3em;">P</span><span style="color: #00000; font-family: Babas; font-size: 3em;">recipitation </span><span style="color: #f2cf4a ; font-family: Babas; font-size: 3em;">P</span><span style="color: #00000; font-family: Babas; font-size: 3em;">rediction </span> </center> Imports End of explanation """ # Setup directories CWD = os.getcwd() DATA_DIR = CWD + "/data/" CLEAN_DATA_DIR = DATA_DIR + "clean/" OUT_DIR = CWD + "/output/" """ Explanation: Directories End of explanation """ f = open(DATA_DIR+"eng-daily-01012016-12312016.csv", "r") lines = f.readlines() lines = lines[25:] f.close() f = open(CLEAN_DATA_DIR+"eng-daily-01012016-12312016.csv", "w+") f.writelines(lines) f.truncate() f.close() prev = pd.read_csv(CLEAN_DATA_DIR+"eng-daily-01012016-12312016.csv") # display(prev) """ Explanation: Preview Data End of explanation """ def clean_data(in_dir, out_dir, filename): """ """ f = open(in_dir+filename, "r") lines = f.readlines() lines = lines[25:] f.close() f = open(out_dir+filename, "w+") f.writelines(lines) f.truncate() f.close() frames = [] for filename in os.listdir(DATA_DIR): if filename != "clean": clean_data(DATA_DIR, CLEAN_DATA_DIR, filename) frames.append(pd.read_csv(CLEAN_DATA_DIR+filename)) train_data = pd.concat(frames) train_data["Date/Time"] = pd.to_datetime(train_data["Date/Time"], format="%Y-%m-%d") train_data.sort_values("Date/Time", ascending=True, inplace=True) display(train_data) plt.figure(figsize=(17, 9)) plt.plot_date(train_data["Date/Time"], train_data["Total Precip (mm)"], fmt='b-') plt.ylabel("Precipitation [mm]") plt.xlabel("Day") plt.title("Toronto Pearson Daily Precipitation from 1953 to 2017") """ Explanation: Build Training Set End of explanation """
tensorflow/neural-structured-learning
workshops/kdd_2020/graph_regularization_pheme_natural_graph.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2021 Google LLC End of explanation """ !pip install --quiet neural-structured-learning !pip install --quiet transformers !pip install --quiet tokenizers !pip install --quiet sentencepiece import os import json import random import pprint import numpy as np import neural_structured_learning as nsl import tensorflow as tf import tensorflow_datasets as tfds import sentencepiece from tokenizers import BertWordPieceTokenizer from transformers import BertConfig from transformers import BertTokenizer, TFBertModel from transformers import XLNetTokenizer, TFXLNetModel from transformers import RobertaTokenizer, TFRobertaModel from transformers import AlbertTokenizer, TFAlbertModel from transformers import T5Tokenizer, TFT5Model from transformers import ElectraTokenizer, TFElectraModel # Resets notebook state tf.keras.backend.clear_session() print("Version: ", tf.__version__) print("Eager mode: ", tf.executing_eagerly()) print( "GPU is", "available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE") """ Explanation: Graph regularization for Twitter rumour veracity classification using natural graphs Overview This tutorial uses the PHEME dataset for veracity classification of Twitter rumours, a binary classification task. Tweet texts are used as input for computing embeddings, e.g. with ALBERT. Those representations are then used as features for the baseline MLP classification model, as well as for the graph regularized version, which uses the structure defined by the tweet replies as a natural graph. Install and import dependencies End of explanation """ !mkdir /tmp/PHEME !wget --quiet -P /tmp/PHEME https://ndownloader.figshare.com/files/11767817 !ls -l /tmp/PHEME !tar -C /tmp/PHEME -xvzf /tmp/PHEME/11767817 !ls /tmp/PHEME/all-rnr-annotated-threads/ """ Explanation: Dataset description The PHEME dataset includes the following, which are used in this tutorial. Twitter rumours associated with diverse news events Source tweets, each of which is accompanied by reaction tweets The reply structure for each source and corresponding reaction tweets Veracity label annotations (by professional journalists) for each source tweet. Download and extract the dataset Attribution We use the PHEME dataset for Rumour Detection and Veracity Classification created by Elena Kochkina, Maria Liakata, and Arkaitz Zubiaga. This data set is licensed under CC BY 4.0. The underlying data is used without modifications as labels or for computing model features. 
End of explanation """ """ Python 3 function to convert rumour annotations into True, False, Unverified """ def convert_annotations(annotation, string = True): if 'misinformation' in annotation.keys() and 'true'in annotation.keys(): if int(annotation['misinformation'])==0 and int(annotation['true'])==0: if string: label = "unverified" else: label = 2 elif int(annotation['misinformation'])==0 and int(annotation['true'])==1 : if string: label = "true" else: label = 1 elif int(annotation['misinformation'])==1 and int(annotation['true'])==0 : if string: label = "false" else: label = 0 elif int(annotation['misinformation'])==1 and int(annotation['true'])==1: print ("OMG! They both are 1!") print(annotation['misinformation']) print(annotation['true']) label = None elif 'misinformation' in annotation.keys() and 'true' not in annotation.keys(): # all instances have misinfo label but don't have true label if int(annotation['misinformation'])==0: if string: label = "unverified" else: label = 2 elif int(annotation['misinformation'])==1: if string: label = "false" else: label = 0 elif 'true' in annotation.keys() and 'misinformation' not in annotation.keys(): print ('Has true not misinformation') label = None else: print('No annotations') label = None return label """ Explanation: Convert rumor annotations to labels Code available in convert_veracity_annotations.py together with the PHEME dataset End of explanation """ def load_pheme_data(parent_dir): """ Loads and returns tweets and their replies from input `parent_dir`. Args: parent_dir: A string with full path to directory where the data lies in. Returns: A tuple T (annotation, structure, source_tweets, reactions) such that: Each of them is a dictionary directly read from the underlying JSON structure provided in the PHEME dataset. """ with open(parent_dir + '/annotation.json') as f: annotation = json.load(f) with open(parent_dir + '/structure.json') as f: structure = json.load(f) source_tweets = {} for f in os.scandir(parent_dir + '/source-tweets'): if f.name[0] != '.': with open(f.path) as json_file: source_tweets[f.name.split('.json')[0]] = json.load(json_file) reactions = {} for f in os.scandir(parent_dir + '/reactions'): if f.name[0] != '.': with open(f.path) as json_file: reactions[f.name.split('.json')[0]] = json.load(json_file) return annotation, structure, source_tweets, reactions def load_labels_and_texts(topics): """Reads verified rumour tweets, replies and labels for input `topics`. Non-rumours or unverified tweet threads aren't included in returned dataset. Args: topics: A List of strings, each containing the full path to a topic to be read. Returns: A List of dictionaries such that each entry E contains: E['label']: (integer) the rumour veracity annotation. E['source_text']: (string) the source tweet text. E['reactions']: (List of strings) the texts from the tweet replies. 
""" labels_and_texts = [] for t in topics: rumours = [ f.path for f in os.scandir(t + '/rumours') if f.is_dir() ] for r in rumours: annotation, structure, source_tweets, reactions = load_pheme_data(r) for source_tweet in source_tweets.values(): labels_and_texts.append({ 'label' : convert_annotations(annotation, string = False), 'source_text' : source_tweet['text'], 'reactions' : [reaction['text'] for reaction in reactions.values()] }) print('Read', len(labels_and_texts), 'annotated rumour tweet threads') verified_labels_and_texts = list( filter(lambda d : d['label'] != 2, labels_and_texts)) print('Returning', len(verified_labels_and_texts), 'verified rumour tweet threads') return verified_labels_and_texts topics = [ f.path for f in os.scandir('/tmp/PHEME/all-rnr-annotated-threads/') if f.is_dir() ] topics """ Explanation: Load annotated rumours from dataset End of explanation """ configuration = BertConfig() """ Uncomment any tokenizer-model pair for experimentation with other models. """ TOKENIZERS = { # 'bert_base' : BertTokenizer.from_pretrained('bert-base-uncased'), # 'bert_large' : BertTokenizer.from_pretrained('bert-large-uncased'), # 'xlnet_base' : XLNetTokenizer.from_pretrained('xlnet-base-cased'), # 'xlnet_large' : XLNetTokenizer.from_pretrained('xlnet-large-cased'), # 'roberta_base' : RobertaTokenizer.from_pretrained('roberta-base'), # 'roberta_large' : RobertaTokenizer.from_pretrained('roberta-large'), 'albert_base' : AlbertTokenizer.from_pretrained('albert-base-v2'), # 'albert_large' : AlbertTokenizer.from_pretrained('albert-large-v2'), # 'albert_xlarge' : AlbertTokenizer.from_pretrained('albert-xlarge-v2'), # 'albert_xxlarge' : AlbertTokenizer.from_pretrained('albert-xxlarge-v2'), # 't5_small' : T5Tokenizer.from_pretrained('t5-small'), # 't5_base' : T5Tokenizer.from_pretrained('t5-base'), # 't5_large' : T5Tokenizer.from_pretrained('t5-large'), # 'electra_small' : ElectraTokenizer.from_pretrained('google/electra-small-discriminator'), # 'electra_large' : ElectraTokenizer.from_pretrained('google/electra-large-discriminator'), } PRETRAINED_MODELS = { # 'bert_base' : TFBertModel.from_pretrained('bert-base-uncased'), # 'bert_large' : TFBertModel.from_pretrained('bert-large-uncased'), # 'xlnet_base' : TFXLNetModel.from_pretrained('xlnet-base-cased'), # 'xlnet_large' : TFXLNetModel.from_pretrained('xlnet-large-cased'), # 'roberta_base' : TFRobertaModel.from_pretrained('roberta-base'), # 'roberta_large' : TFRobertaModel.from_pretrained('roberta-large'), 'albert_base' : TFAlbertModel.from_pretrained('albert-base-v2'), # 'albert_large' : TFAlbertModel.from_pretrained('albert-large-v2'), # 'albert_xlarge' : TFAlbertModel.from_pretrained('albert-xlarge-v2'), # 'albert_xxlarge' : TFAlbertModel.from_pretrained('albert-xxlarge-v2'), # 't5_small' : TFT5Model.from_pretrained('t5-small'), # 't5_base' : TFT5Model.from_pretrained('t5-base'), # 't5_large' : TFT5Model.from_pretrained('t5-large'), # 'electra_small' : TFElectraForPreTraining.from_pretrained('google/electra-small-discriminator'), # 'electra_large' : TFElectraForPreTraining.from_pretrained('google/electra-large-discriminator') } """ Explanation: Load tokenizers, model architectures and pre-trained weights End of explanation """ # Given that tweets are short, use a small value for reduced inference time TOKENIZER_MAX_SEQ_LENGTH = 64 def bert_embedding_model_inference(model, tokenizer, text): """ Tokenizes and computes BERT `model` inference for input `text`. 
""" input_ids = tf.constant( tokenizer.encode(text, add_special_tokens=True))[None, :] # Batch size 1 outputs = model(input_ids) # The last hidden-state is the first element of the output tuple last_hidden_states = outputs[0] cls_embedding = last_hidden_states[0][0] return cls_embedding def albert_embedding_model_inference(model, tokenizer, text): """ Tokenizes and computes ALBERT `model` inference for input `text`. """ encoded_text = tf.constant(tokenizer.encode(text))[None, :] outputs = model(encoded_text) # The last hidden-state is the first element of the output tuple last_hidden_states = outputs[0] cls_embedding = last_hidden_states[0][0] return cls_embedding def albert_embedding_batch_model_inference(model, tokenizer, text_batch): """ Tokenizes and computes ALBERT `model` inference for input `text_batch`. """ encoded_text = [ tf.constant(tokenizer.encode( t, max_length=TOKENIZER_MAX_SEQ_LENGTH, pad_to_max_length=True)) for t in text_batch ] encoded_batch = tf.stack(encoded_text) outputs = model(encoded_text) cls_embeddings = [] for last_hidden_state in outputs[0]: cls_embeddings.append(last_hidden_state[0]) return cls_embeddings def t5_embedding_model_inference(model, tokenizer, text): """ Tokenizes and computes T5 `model` inference for input `text`. """ inputs = tokenizer.encode(text, return_tensors="tf") # Batch size 1 outputs = model(inputs, decoder_input_ids=inputs) # The last hidden-state is the first element of the output tuple last_hidden_states = outputs[0] cls_embedding = last_hidden_states[0][0] return cls_embedding ''' More model inference functions can be added, following documentation on https://huggingface.co/transformers/ ''' INFERENCE_MODELS = { 'bert' : bert_embedding_model_inference, 'albert' : albert_embedding_model_inference, 'albert_batch' : albert_embedding_batch_model_inference, 't5' : t5_embedding_model_inference } """ Explanation: Tokenization and model inference functions End of explanation """ PRETRAINED_MODEL = 'albert_base' INFERENCE = 'albert' test_inference = INFERENCE_MODELS[INFERENCE]( PRETRAINED_MODELS[PRETRAINED_MODEL], TOKENIZERS[PRETRAINED_MODEL], "Why is the sky blue" ) print('Example embedding', test_inference) embedding_dim = test_inference.shape[0] """ Explanation: Testing tokenization and model inference with an example End of explanation """ class HParams(object): """Hyperparameters used for training.""" def __init__(self): ### dataset parameters self.num_classes = 2 self.embedding_dim = embedding_dim ### neural graph learning parameters self.distance_type = nsl.configs.DistanceType.L2 self.graph_regularization_multiplier = 0.1 self.num_neighbors = 5 ### model architecture self.num_fc_units = [64, 64] ### training parameters self.train_epochs = 50 self.batch_size = 128 self.encoder_inference_batch_size = 32 self.dropout_rate = 0.2 ### eval parameters self.eval_steps = None # All instances in the test set are evaluated. HPARAMS = HParams() """ Explanation: Hyperparameters for the classifier to be trained End of explanation """ def add_batch_embeddings(labels_and_texts): """Splits `labels_and_texts` into batches and performs tokenization and converts tweet text into its corresponding embedding. Args: labels_and_texts: A List of dictionaries such that each entry E contains: E['label']: (integer) the rumour veracity annotation. E['source_text']: (string) the source tweet text. E['reactions']: (List of strings) the texts from the tweet replies. 
Each entry E from labels_and_texts is updated, adding the following key, value pairs: E['source_embedding']: (Tensor of floats) embeddings for E['source_text'] E['reaction_embedding']: (List of float Tensors) embeddings for E['reactions'], up to HPARAMS.num_neighbors and in the corresponding E['reactions'] order. """ inputs = [] print('Accumulating inputs') for e in labels_and_texts: inputs.append(e['source_text']) for r in e['reactions'][:HPARAMS.num_neighbors]: # Alternative ways to select neighbors within a tweet thread can be used. inputs.append(r) def generate_batches(inputs, batch_size): """Splits `inputs` list into chunks of (up to) `batch_size` length.""" for i in range(0, len(inputs), batch_size): yield inputs[i: i + batch_size] inferences = [] for i, batch in enumerate(generate_batches( inputs, HPARAMS.encoder_inference_batch_size)): print('Running model inference for batch', i) batch_inference = INFERENCE_MODELS[INFERENCE]( PRETRAINED_MODELS[PRETRAINED_MODEL], TOKENIZERS[PRETRAINED_MODEL], batch) for inference in batch_inference: inferences.append(inference) i = 0 for e in labels_and_texts: e['source_embedding'] = inferences[i] i += 1 e['reaction_embedding'] = [] for r in e['reactions'][:HPARAMS.num_neighbors]: e['reaction_embedding'].append(inferences[i]) i += 1 def add_embeddings(labels_and_texts): """Performs tokenization and model inference for each element of labels_and_texts, updating it with computed embeddings. Args: labels_and_texts: A List of dictionaries such that each entry E contains: E['label']: (integer) the rumour veracity annotation. E['source_text']: (string) the source tweet text. E['reactions']: (List of strings) the texts from the tweet replies. Each entry E from labels_and_texts is updated, adding the following key, value pairs: E['source_embedding']: (Tensor of floats) embeddings for E['source_text'] E['reaction_embedding']: (List of float Tensors) embeddings for E['reactions'], up to HPARAMS.num_neighbors and in the corresponding E['reactions'] order. """ for index, label_and_texts in enumerate(labels_and_texts): if index % 100 == 0: print('Computing embeddings for item', index) label_and_texts['source_embedding'] = INFERENCE_MODELS[INFERENCE]( PRETRAINED_MODELS[PRETRAINED_MODEL], TOKENIZERS[PRETRAINED_MODEL], label_and_texts['source_text']) label_and_texts['reaction_embedding'] = [] for r in label_and_texts['reactions'][:HPARAMS.num_neighbors]: # Alternative ways to select neighbors within a tweet thread can be used. label_and_texts['reaction_embedding'].append(INFERENCE_MODELS[INFERENCE]( PRETRAINED_MODELS[PRETRAINED_MODEL], TOKENIZERS[PRETRAINED_MODEL], r)) """ Explanation: Process dataset in batch or single inference mode End of explanation """ labels_and_texts = load_labels_and_texts(topics) for e in random.sample(labels_and_texts, 3): pprint.pprint(e) """ Explanation: Load and inspect dataset End of explanation """ if 'batch' in INFERENCE: add_batch_embeddings(labels_and_texts) else: add_embeddings(labels_and_texts) """ Explanation: Compute and store textual embeddings End of explanation """ # Alternative ways to split include a split by time or by news event. random.shuffle(labels_and_texts) train_size = int(0.8 * len(labels_and_texts)) TRAIN_DATA = labels_and_texts[:train_size] TEST_DATA = labels_and_texts[train_size:] # Constants used to identify neighbor features in the input. 
NBR_FEATURE_PREFIX = 'NL_nbr_' NBR_WEIGHT_SUFFIX = '_weight' def create_np_tensors_from_datum(datum, propagate_label=True): """Creates a node and neighbor numpy tensors given input datum. Args: datum: A dictionary with node and neighbor features and annotation label. propagate_label: Boolean indicating if we labels should be propagated to neighbors. Returns: A tuple T (node_tensor, neighbor_tensors) such that: T[0]: a dictionary containing node embeddings and label T[1]: a List of dictionaries, each containing embeddings for a neighbor and, if propagate_label is true, the corresponding label. """ label_tensor = datum['label'] def get_float32_tensor(d): np_array = np.array(d, dtype='f') return np_array node_tensor = { 'embedding' : get_float32_tensor(datum['source_embedding']), 'label' : label_tensor } neighbor_tensors = [] for re in datum['reaction_embedding']: tensor = { 'embedding' : get_float32_tensor(re), } if propagate_label: tensor['label'] = label_tensor neighbor_tensors.append(dict(tensor)) return node_tensor, neighbor_tensors TRAIN_TENSORS = [ create_np_tensors_from_datum(d) for d in TRAIN_DATA ] TEST_TENSORS = [ create_np_tensors_from_datum(d) for d in TEST_DATA ] TRAIN_TENSORS[0][0], TEST_TENSORS[0][0] """ Explanation: Create train and test datasets Using a 80:20 train:test split End of explanation """ def make_dataset(tf_features, training=False): """Creates a `tf.data.TFRecordDataset`. Args: tf_features: List of (node_tensor, neighbor_tensors) tuples. training: Boolean indicating if we are in training mode. Returns: An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example` objects. """ def get_tf_examples_with_nsl_signals(node_feature, neighbor_features): """Merges `neighbor_features` and `node_feature`. Args: node_feature: A dictionary of `tf.train.Feature`. neighbor_features: A list of `tf.train.Feature` dictionaries. Returns: A pair whose first value is a dictionary containing relevant features and whose second value contains the ground truth label. """ feature_dict = dict(node_feature) # We also extract corresponding neighbor features in a similar manner to # the features above during training. 
if training: for i in range(HPARAMS.num_neighbors): nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'embedding') nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX) if i < len(neighbor_features): nf = neighbor_features[i] feature_dict[nbr_feature_key] = nf['embedding'] feature_dict[nbr_weight_key] = 1.0 else: feature_dict[nbr_feature_key] = np.zeros( HPARAMS.embedding_dim, dtype='f') feature_dict[nbr_weight_key] = 0.0 label = feature_dict.pop('label') return feature_dict, label print('Adding NSL features for entries') tensors_with_nsl = {} labels = [] for (node, neighbors) in tf_features: feature_dict, label = get_tf_examples_with_nsl_signals(node, neighbors) for k,v in feature_dict.items(): if k not in tensors_with_nsl: tensors_with_nsl[k] = [] tensors_with_nsl[k].append(v) labels.append(label) print('Creating tf.data.Dataset from tensors') dataset = tf.data.Dataset.from_tensor_slices((tensors_with_nsl, labels)) dataset = dataset.batch(HPARAMS.batch_size) return dataset train_dataset = make_dataset(TRAIN_TENSORS, training=True) test_dataset = make_dataset(TEST_TENSORS) """ Explanation: Create train and test tf.data.TFRecordDataset End of explanation """ for feature_batch, label_batch in train_dataset.take(1): print('Feature list:', list(feature_batch.keys())) print('Batch of inputs:', feature_batch['embedding']) nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'embedding') nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX) print('Batch of neighbor inputs:', feature_batch[nbr_feature_key]) print('Batch of neighbor weights:', tf.reshape(feature_batch[nbr_weight_key], [-1])) print('Batch of labels:', label_batch) """ Explanation: Inspecting train dataset End of explanation """ for feature_batch, label_batch in test_dataset.take(1): print('Feature list:', list(feature_batch.keys())) print('Batch of inputs:', feature_batch['embedding']) print('Batch of labels:', label_batch) """ Explanation: Inspecting test dataset End of explanation """ def make_mlp_sequential_model(hparams): """Creates a sequential multi-layer perceptron model.""" model = tf.keras.Sequential() model.add( tf.keras.layers.InputLayer( input_shape=(hparams.embedding_dim,), name='embedding')) for num_units in hparams.num_fc_units: model.add(tf.keras.layers.Dense(num_units, activation='relu')) # For sequential models, by default, Keras ensures that the 'dropout' layer # is invoked only during training. model.add(tf.keras.layers.Dropout(hparams.dropout_rate)) model.add(tf.keras.layers.Dense(hparams.num_classes, activation='softmax')) return model """ Explanation: Multi-layer perceptron classification model Sequential MLP model End of explanation """ def make_mlp_functional_model(hparams): """Creates a functional API-based multi-layer perceptron model.""" inputs = tf.keras.Input( shape=(hparams.embedding_dim,), dtype='int64', name='embedding') cur_layer = inputs for num_units in hparams.num_fc_units: cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer) # For functional models, by default, Keras ensures that the 'dropout' layer # is invoked only during training. 
cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer) outputs = tf.keras.layers.Dense( hparams.num_classes, activation='softmax')( cur_layer) model = tf.keras.Model(inputs, outputs=outputs) return model """ Explanation: Functional MLP model End of explanation """ def make_mlp_subclass_model(hparams): """Creates a multi-layer perceptron subclass model in Keras.""" class MLP(tf.keras.Model): """Subclass model defining a multi-layer perceptron.""" def __init__(self): super(MLP, self).__init__() self.dense_layers = [ tf.keras.layers.Dense(num_units, activation='relu') for num_units in hparams.num_fc_units ] self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate) self.output_layer = tf.keras.layers.Dense( hparams.num_classes, activation='softmax') def call(self, inputs, training=False): cur_layer = inputs['embedding'] for dense_layer in self.dense_layers: cur_layer = dense_layer(cur_layer) cur_layer = self.dropout_layer(cur_layer, training=training) outputs = self.output_layer(cur_layer) return outputs return MLP() """ Explanation: tf.Keras.Model subclass MLP End of explanation """ # Create a base MLP model using the functional API. # Alternatively, you can also create a sequential or subclass base model using # the make_mlp_sequential_model() or make_mlp_subclass_model() functions # respectively, defined above. Note that if a subclass model is used, its # summary cannot be generated until it is built. base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS) base_model.summary() """ Explanation: Base MLP model End of explanation """ base_model.compile( optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1) """ Explanation: Compile and train the base MLP model End of explanation """ # Helper function to print evaluation metrics. def print_metrics(model_desc, eval_metrics): """Prints evaluation metrics. Args: model_desc: A description of the model. eval_metrics: A dictionary mapping metric names to corresponding values. It must contain the loss and accuracy metrics. """ print('\n') print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy']) print('Eval loss for ', model_desc, ': ', eval_metrics['loss']) if 'graph_loss' in eval_metrics: print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss']) eval_results = dict( zip(base_model.metrics_names, base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps))) print_metrics('Base MLP model', eval_results) """ Explanation: Evaluate base model on test dataset End of explanation """ # Build a new base MLP model. base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model( HPARAMS) # Wrap the base MLP model with graph regularization. 
graph_reg_config = nsl.configs.make_graph_reg_config( max_neighbors=HPARAMS.num_neighbors, multiplier=HPARAMS.graph_regularization_multiplier, distance_type=HPARAMS.distance_type, sum_over_axis=-1) graph_reg_model = nsl.keras.GraphRegularization(base_reg_model, graph_reg_config) graph_reg_model.compile( optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1) """ Explanation: Create, compile and train MLP model with graph regularization End of explanation """ eval_results = dict( zip(graph_reg_model.metrics_names, graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps))) print_metrics('MLP + graph regularization', eval_results) """ Explanation: Evaluate MLP model with graph regularization on test dataset End of explanation """
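# Optional sketch (not part of the original lab): re-evaluate both models and print
# their test accuracy side by side. It only reuses objects defined above
# (base_model, graph_reg_model, test_dataset, HPARAMS, print_metrics).
base_eval = dict(
    zip(base_model.metrics_names,
        base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
graph_eval = dict(
    zip(graph_reg_model.metrics_names,
        graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', base_eval)
print_metrics('MLP + graph regularization', graph_eval)
print('Accuracy delta (graph reg - base):',
      graph_eval['accuracy'] - base_eval['accuracy'])
"""
Explanation: The two evaluation cells above overwrite eval_results, so it can be handy to keep both result dictionaries around and print the accuracy difference directly. The cell above is a small optional sketch of that comparison; it introduces no new APIs and simply repeats the dict(zip(...)) pattern already used in this notebook.
End of explanation
"""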
lpfann/fri
docs/Guide.ipynb
mit
import numpy as np
# fixed seed for demonstration
STATE = np.random.RandomState(123)

from fri import genClassificationData
"""
Explanation: Quick start guide
Installation
Stable
Fri can be installed via the Python Package Index (PyPI).
If you have pip installed just execute the command
pip install fri
to get the newest stable version.
The dependencies should be installed and checked automatically.
If you have problems installing please open an issue at our tracker.
Development
To install a bleeding edge dev version of FRI you can clone the GitHub repository using
git clone git@github.com:lpfann/fri.git
and then check out the dev branch: git checkout dev.
We use poetry for dependency management.
Run poetry install in the cloned repository to install fri in a virtualenv.
To check if everything works as intended you can use pytest to run the unit tests.
Just run the command poetry run pytest in the main project folder.
Using FRI
Now we showcase the workflow of using FRI on a simple classification problem.
Data
To have something to work with, we need some data first.
fri includes a generation method for binary classification and regression data.
In our case we need some classification data.
End of explanation
"""
n = 300
features = 6
strongly_relevant = 2
weakly_relevant = 2

X, y = genClassificationData(n_samples=n, n_features=features,
                             n_strel=strongly_relevant, n_redundant=weakly_relevant,
                             random_state=STATE)
"""
Explanation: We want to create a small set with a few features.
Because we want to showcase the all-relevant feature selection, we generate multiple strongly and weakly relevant features.
End of explanation
"""
X.shape
"""
Explanation: The method also prints out the parameters again.
End of explanation
"""
from sklearn.preprocessing import StandardScaler
X_scaled = StandardScaler().fit_transform(X)
"""
Explanation: We created a binary classification set with 6 features of which 2 are strongly relevant and 2 weakly relevant.
Preprocess
Because our method expects mean centered data we need to standardize it first.
This centers the values around 0 and scales them to unit standard deviation.
End of explanation
"""
import fri
"""
Explanation: Model
Now we need to create a model.
We use the FRI module.
End of explanation
"""
list(fri.ProblemName)
"""
Explanation: fri provides a convenience class fri.FRI to create a model.
fri.FRI needs the type of problem as a first argument of type ProblemName.
Depending on the problem you want to analyze, pick one of the available models in ProblemName.
End of explanation
"""
fri_model = fri.FRI(fri.ProblemName.CLASSIFICATION,
                    loss_slack=0.2,
                    w_l1_slack=0.2,
                    random_state=STATE)
fri_model
"""
Explanation: Because we have classification data, we use ProblemName.CLASSIFICATION to instantiate our model.
End of explanation
"""
fri_model.fit(X_scaled,y)
"""
Explanation: Apart from the slack parameters and the random state set above, all other settings keep their default values.
Fitting to data
Now we can just fit the model to the data using scikit-learn like commands.
End of explanation
"""
fri_model.interval_
"""
Explanation: The resulting feature relevance bounds are saved in the interval_ variable.
End of explanation
"""
print(fri_model.print_interval_with_class())
"""
Explanation: If you want to print out the relevance class use the print_interval_with_class() function.
End of explanation
"""
fri_model.interval_[2]
"""
Explanation: The bounds are grouped in 2d sublists for each feature.
To access the relevance bounds for feature 2 we would use
End of explanation
"""
fri_model.relevance_classes_
"""
Explanation: The relevance classes are saved in the corresponding variable relevance_classes_:
End of explanation
"""
# Import plot function
from fri.plot import plot_relevance_bars
import matplotlib.pyplot as plt
%matplotlib inline

# Create new figure, where we can put an axis on
fig, ax = plt.subplots(1, 1,figsize=(6,3))
# plot the bars on the axis, colored according to fri
out = plot_relevance_bars(ax,fri_model.interval_,classes=fri_model.relevance_classes_)
"""
Explanation: 2 denotes strongly relevant features, 1 weakly relevant and 0 irrelevant.
Plot results
The bounds in numerical form are useful for postprocessing.
If we want a human to look at it, we recommend the plot function plot_relevance_bars.
We can also color the bars according to relevance_classes_
End of explanation
"""
preset = {}
"""
Explanation: Setting constraints manually
Our model also allows computing relevance bounds when the user sets a given range for the features.
We use a dictionary to encode our constraints.
End of explanation
"""
preset[2] = fri_model.interval_[2, 0]
"""
Explanation: Example
As an example, let us constrain the third feature from our example to its minimum relevance bound.
End of explanation
"""
const_ints = fri_model.constrained_intervals(preset=preset)
const_ints
"""
Explanation: We use the function constrained_intervals.
Note: we need to fit the model before we can use this function.
We already did that, so we are fine.
End of explanation
"""
fig, ax = plt.subplots(1, 1,figsize=(6,3))
out = plot_relevance_bars(ax, const_ints)
"""
Explanation: Feature 3 is set to its minimum (at 0).
How does it look visually?
End of explanation
"""
fri_model = fri.FRI(fri.ProblemName.CLASSIFICATION, verbose=True, random_state=STATE)
fri_model.fit(X_scaled,y)
"""
Explanation: Feature 3 is reduced to its minimum (no contribution).
In turn, its correlated partner feature 4 had to take its maximum contribution.
Print internal Parameters
If we want to take a look at the internal parameters, we can use the verbose flag in the model creation.
End of explanation
"""
fri_model = fri.FRI(fri.ProblemName.CLASSIFICATION, n_jobs=-1, verbose=1, random_state=STATE)
fri_model.fit(X_scaled,y)
"""
Explanation: This prints out the parameters of the baseline model.
One can also see the best hyperparameter selected by grid search and the training score of the model in score.
Multiprocessing
To enable multiprocessing simply use the n_jobs parameter when initialising the model.
It expects an integer which defines the number of processes used.
n_jobs=-1 uses all cores available on the CPU.
End of explanation
"""
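# Optional sketch: rough timing comparison of a single-process and a multiprocess fit.
# It reuses fri.FRI, X_scaled, y and STATE from above; n_jobs=1 is assumed to be a
# valid value, and the measured speed-up depends entirely on your machine.
import time

for jobs in (1, -1):
    model = fri.FRI(fri.ProblemName.CLASSIFICATION, n_jobs=jobs, random_state=STATE)
    start = time.perf_counter()
    model.fit(X_scaled, y)
    print('n_jobs={0}: fit took {1:.2f} seconds'.format(jobs, time.perf_counter() - start))
"""
Explanation: To get a feeling for what the n_jobs parameter buys you, one can time the fit with and without multiprocessing. The cell above is a small optional sketch of such a comparison, not part of the original guide; the exact timings vary with hardware and model settings.
End of explanation
"""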
MicrosoftGenomics/PySnpTools
doc/ipynb/tutorial.ipynb
apache-2.0
# set some ipython notebook properties %matplotlib inline # set degree of verbosity (adapt to INFO for more verbose output) import logging logging.basicConfig(level=logging.WARNING) # set figure sizes import pylab pylab.rcParams['figure.figsize'] = (10.0, 8.0) """ Explanation: PySnpTools Tutorial Step up notebook End of explanation """ import os import numpy as np from pysnptools.snpreader import Bed snpreader = Bed("all.bed", count_A1=False) # What is snpreader? print snpreader """ Explanation: Reading Bed files Use "Bed" to access file "all.bed" End of explanation """ print snpreader.iid_count print snpreader.sid_count print snpreader.iid[:3] print snpreader.sid """ Explanation: Find out about iids and sids End of explanation """ snpdata = snpreader.read() #What is snpdata? print snpdata #What do the iids and sid of snprdata look like? print snpdata.iid_count, snpdata.sid_count print snpdata.iid[:3] print snpdata.sid """ Explanation: Read all the SNP data in to memory End of explanation """ print snpdata.val[:7,:7] print np.mean(snpdata.val) """ Explanation: Print the SNP data snpdata.val is a NumPy array. We can apply any NumPy functions. End of explanation """ print np.mean(Bed("all.bed",count_A1=False).read().val) """ Explanation: If all you want is to read data in a Numpy array, here it is one line: End of explanation """ from pysnptools.snpreader import SnpData snpdata1 = SnpData(iid=[['f1','c1'],['f1','c2'],['f2','c1']], sid=['snp1','snp2'], val=[[0,1],[2,.5],[.5,np.nan]]) print np.nanmean(snpdata1.val) """ Explanation: You can also create a SnpData object from scratch (without reading from a SnpReader) End of explanation """ snpreader = Bed("all.bed",count_A1=False) snp0reader = snpreader[:,0] print snp0reader print snp0reader.iid_count, snp0reader.sid_count print snp0reader.sid print snpreader # Is not changed snp0data = snp0reader.read() print snp0data print snp0data.iid_count, snp0data.sid_count print snp0data.sid print snp0data.val[:10,:] """ Explanation: <font color='red'>see PowerPoint summary</font> Reading subsets of data, reading with re-ordering iids & sids (rows & cols), stacking Reading SNP data for just one SNP End of explanation """ print Bed("all.bed",count_A1=False)[9,:].read().val """ Explanation: Print the data for iid #9 (in one line) End of explanation """ snp55data = Bed("all.bed",count_A1=False)[:5,:5].read() print snp55data print snp55data.iid_count, snp55data.sid_count print snp55data.iid print snp55data.sid print snp55data.val """ Explanation: Read the data for the first 5 iids AND the first 5 sids: End of explanation """ snpreaderA = Bed("all.bed",count_A1=False) # read all snpreaderB = snpreaderA[:,:250] #read first 250 sids snpreaderC = snpreaderB[:10,:] # reader first 10 iids snpreaderD = snpreaderC[::2,::2] print snpreaderD print snpreaderD.iid_count, snpreaderD.sid_count print snpreaderD.iid print snpreaderD.read().val[:10,:10] #only reads the final values desired (if practical) """ Explanation: Stacking indexing is OK and efficient Recall NumPy slice notation: start:stop:step, so ::2 is every other End of explanation """ # List of indexes (can use to reorder) snpdata43210 = Bed("all.bed",count_A1=False)[[4,3,2,1,0],:].read() print snpdata43210.iid # List of booleans to select snp43210B = snpdata43210[[False,True,True,False,False],:] print snp43210B print snp43210B.iid """ Explanation: Fancy indexing - list of indexes, slices, list of booleans, negatives(?) 
on iid or sid or both End of explanation """ print hasattr(snp43210B,'val') """ Explanation: Question: Does snp43210B have a val property? End of explanation """ snpdata4321B = snp43210B.read(view_ok=True) #view_ok means ok to share memory print snpdata4321B.val """ Explanation: Answer: No. It's a subset of a SnpData, so it will read from a SnpData, but it is not a SnpData. Use .read() to get the values. End of explanation """ print Bed("all.bed",count_A1=False)[::-10,:].iid[:10] """ Explanation: Negatives NumPy slices: start:stop:step 'start','stop': negative means counting from the end 'step': negative means count backwards Indexes: -1 means last, -2 means second from the list [Not Yet Implemented] Lists of indexes can have negatives [Not Yet Implemented] End of explanation """ print Bed("all.bed",count_A1=False).read().val.flags snpdata32c = Bed("all.bed",count_A1=False).read(order='C',dtype=np.float32) print snpdata32c.val.dtype print snpdata32c.val.flags """ Explanation: <font color='red'>see PowerPoint summary</font> More properties and attributes of SnpReaders read() supports both NumPy memory layouts and 8-byte or 4-byte floats End of explanation """ print Bed("all.bed",count_A1=False).pos """ Explanation: Every reader includes an array of SNP properties called ".pos" End of explanation """ snpreader = Bed("all.bed",count_A1=False) print snpreader.pos[:,0] chr5_bools = (snpreader.pos[:,0] == 5) print chr5_bools chr5reader = snpreader[:,chr5_bools] print chr5reader chr5data = chr5reader.read() print chr5data.pos """ Explanation: [chromosome, genetic distance, basepair distance] Accessable without a SNP data read. So, using Python-style fancy indexing, how to we read all SNPs at Chrom 5? End of explanation """ chr5data = Bed("all.bed",count_A1=False)[:,snpreader.pos[:,0] == 5].read() print chr5data.val """ Explanation: In one line End of explanation """ snpreader = Bed("all.bed",count_A1=False) iid0 =[['cid499P1','cid499P1'], ['cid489P1','cid489P1'], ['cid479P1','cid479P1']] indexes0 = snpreader.iid_to_index(iid0) print indexes0 snpreader0 = snpreader[indexes0,:] print snpreader0.iid # more condensed snpreader0 = snpreader[snpreader.iid_to_index(iid0),:] print snpreader0.iid """ Explanation: You can turn iid or sid names into indexes End of explanation """ snpdata0chr5 = snpreader[snpreader.iid_to_index(iid0),snpreader.pos[:,0] == 5].read() print np.mean(snpdata0chr5.val) """ Explanation: Can use both .pos and iid_to_index (sid_to_index) at once End of explanation """ from pysnptools.snpreader import Pheno phenoreader = Pheno("pheno_10_causals.txt") print phenoreader print phenoreader.iid_count, phenoreader.sid_count print phenoreader.sid print phenoreader.pos phenodata = phenoreader.read() print phenodata.val[:10,:] """ Explanation: <font color='red'>see PowerPoint summary</font> Other SnpReaders and how to write Read from the PLINK phenotype file (text) instead of a Bed file Looks like: cid0P0 cid0P0 0.4853395139922632 cid1P0 cid1P0 -0.2076984565752155 cid2P0 cid2P0 1.4909084058931985 cid3P0 cid3P0 -1.2128996652683697 cid4P0 cid4P0 0.4293203431508744 ... 
End of explanation """ snpdata1010 = Bed("all.bed",count_A1=False)[:10,:10].read() Pheno.write("deleteme1010.txt",snpdata1010) print os.path.exists("deleteme1010.txt") """ Explanation: Write 1st 10 iids and sids of Bed data into Pheno format End of explanation """ Bed.write("deleteme1010.bed",snpdata1010) print os.path.exists("deleteme1010.bim") """ Explanation: Write the snpdata to Bed format End of explanation """ snpdata1 = SnpData(iid=[['f1','c1'],['f1','c2'],['f2','c1']], sid=['snp1','snp2'], val=[[0,1],[2,1],[1,np.nan]]) Bed.write("deleteme1.bed",snpdata1) print os.path.exists("deleteme1.fam") """ Explanation: Create a snpdata on the fly and write to Bed End of explanation """ from pysnptools.snpreader import SnpNpz SnpNpz.write("deleteme1010.snp.npz", snpdata1010) print os.path.exists("deleteme1010.snp.npz") """ Explanation: The SnpNpz and SnpHdf5 SnpReaders Pheno is slow because its txt. Bed format can only hold 0,1,2,missing. Use SnpNpz for fastest read/write times, smallest file size. End of explanation """ from pysnptools.snpreader import SnpHdf5 SnpHdf5.write("deleteme1010.snp.hdf5", snpdata1010) print os.path.exists("deleteme1010.snp.hdf5") """ Explanation: Use SnpHdf5 for low-memory random-access reads, good speed and size, and compatiblity outside Python End of explanation """ snpreader = Bed("all.bed",count_A1=False) phenoreader = Pheno("pheno_10_causals.txt")[::-2,:] #half the iids, and in reverse order print snpreader.iid_count, phenoreader.iid_count print snpreader.iid[:5] print phenoreader.iid[:5] """ Explanation: <font color='red'>see PowerPoint summary</font> Intersecting iids What if we have two data sources with slightly different iids in different orders? End of explanation """ import pysnptools.util as pstutil snpreader_i,phenoreader_i = pstutil.intersect_apply([snpreader,phenoreader]) print np.array_equal(snpreader_i.iid,phenoreader_i.iid) snpdata_i = snpreader_i.read() phenodata_i = phenoreader_i.read() print np.array_equal(snpdata_i.iid,phenodata_i.iid) print snpdata_i.val[:10,:] print phenodata_i.val[:10,:] """ Explanation: Create an intersecting and reordering reader End of explanation """ weights = np.linalg.lstsq(snpdata_i.val, phenodata_i.val)[0] #usually would add a 1's column predicted = snpdata_i.val.dot(weights) import matplotlib.pyplot as plt plt.plot(phenodata_i.val, predicted, '.', markersize=10) plt.show() #Easy to 'predict' seen 250 cases with 5000 variables. # How does it predict unseen cases? phenoreader_unseen = Pheno("pheno_10_causals.txt")[-2::-2,:] snpreader_u,phenoreader_u = pstutil.intersect_apply([snpreader,phenoreader_unseen]) snpdata_u = snpreader_u.read() phenodata_u = phenoreader_u.read() predicted_u = snpdata_u.val.dot(weights) plt.plot(phenodata_u.val, predicted_u, '.', markersize=10) plt.show() #Hard to predict unseen 250 cases with 5000 variables. """ Explanation: Example of use with NumPy's built-in linear regression End of explanation """ snpreader = Bed("all.bed",count_A1=False) snpdata = snpreader.read() snpdata = snpdata.standardize() #In place AND returns self print snpdata.val[:,:5] """ Explanation: <font color='red'>see PowerPoint summary</font> Standardization, Kernels To Unit standardize: read data, ".standardize()" End of explanation """ snpdata = Bed("all.bed",count_A1=False).read().standardize() print snpdata.val[:,:5] """ Explanation: Sets means per sid to 0 and stdev to 1 and fills nan with 0. 
In one line: End of explanation """ from pysnptools.standardizer import Beta snpdataB = Bed("all.bed",count_A1=False).read().standardize(Beta(1,25)) print snpdataB.val[:,:4] """ Explanation: Beta standardization End of explanation """ from pysnptools.standardizer import Unit kerneldata = Bed("all.bed",count_A1=False).read_kernel(standardizer=Unit()) print kerneldata.val[:,:4] kerneldata = Bed("all.bed",count_A1=False).read_kernel(standardizer=Unit(),block_size=500) print kerneldata.val[:,:4] """ Explanation: To create an kernel (the relateness of each iid pair as the dot product of their standardized SNP values) End of explanation """ from pysnptools.snpreader import Bed pstreader = Bed("all.bed",count_A1=False) print pstreader.row_count, pstreader.col_count print pstreader.col_property """ Explanation: <font color='red'>see PowerPoint summary</font> PstReader Every SnpReader is a PstReader End of explanation """ from pysnptools.pstreader import PstData data1 = PstData(row=['a','b','c'], col=['y','z'], val=[[1,2],[3,4],[np.nan,6]], row_property=['A','B','C']) reader2 = data1[data1.row < 'c', data1.col_to_index(['z','y'])] print reader2 print reader2.read().val print reader2.row_property print reader2.col_property.shape, reader2.col_property.dtype """ Explanation: Can also create PstData from scratch, on the fly End of explanation """ from pysnptools.pstreader import PstNpz, PstHdf5 fnnpz = "delme.pst.npz" PstNpz.write(fnnpz,data1) data2 = PstNpz(fnnpz).read() fnhdf5 = "delme.pst.hdf5" PstHdf5.write(fnhdf5,data2) data3 = PstHdf5(fnhdf5).read() print data2, data3 print data2.val print data3.val """ Explanation: Two new PstReaders: PstNpz and PstHdf5 End of explanation """ from pysnptools.util import IntRangeSet a = IntRangeSet("100:500,501:1000") # a is the set of integers from 100 to 500 (exclusive) and 501 to 1000 (exclusive) b = IntRangeSet("-20,400:600") # b is the set of integers -20 and the range 400 to 600 (exclusive) c = a | b # c is the union of a and b, namely -20 and 100 to 1000 (exclusive) print c print 200 in c print -19 in c """ Explanation: <font color='red'>see PowerPoint summary</font> IntRangeSet Union of two sets <img src="example1.png"> End of explanation """ from pysnptools.util import IntRangeSet line = "chr15 29370 37380 29370,32358,36715 30817,32561,37380" chr,trans_start,trans_last,exon_starts,exon_lasts = line.split() # split the line on white space trans_start = int(trans_start) trans_stop = int(trans_last) + 1 # add one to convert the inclusive "last" value into a Pythonesque exclusive "stop" value int_range_set = IntRangeSet((trans_start,trans_stop)) # creates a IntRangeSet from 29370 (inclusive) to 37381 (exclusive) print int_range_set # print at any time to see the current value """ Explanation: Set difference Suppose we want to find the intron regions of a gene but we are given only the transcription region and the exon regions. <img src="example2.png"> End of explanation """ exon_starts = [int(start) for start in exon_starts.strip(",").split(',')] exon_stops = [int(last)+1 for last in exon_lasts.strip(",").split(',')] assert len(exon_starts) == len(exon_stops) """ Explanation: Parse the exon start and last lists from strings to lists of integers (converting ‘last’ to ‘stop’) End of explanation """ from itertools import izip int_range_set -= izip(exon_starts,exon_stops) print int_range_set # See what it looks like """ Explanation: Zip together the two lists to create an iterable of exon_start,exon_stop tuples. 
Then ‘set subtract’ all these ranges from int_range_set. End of explanation """ for start, stop in int_range_set.ranges(): print "{0} {1} {2}".format(chr, start, stop-1) """ Explanation: Create the desired output by iterating through each contiguous range of integers. End of explanation """ # import the algorithm from fastlmm.association import single_snp_leave_out_one_chrom from pysnptools.snpreader import Bed # set up data ############################## snps = Bed("all.bed",count_A1=False) pheno_fn = "pheno_10_causals.txt" cov_fn = "cov.txt" # run gwas ################################################################### results_df = single_snp_leave_out_one_chrom(snps, pheno_fn, covar=cov_fn) # print head of results data frame import pandas as pd pd.set_option('display.width', 1000) results_df.head(n=10) # manhattan plot import pylab import fastlmm.util.util as flutil flutil.manhattan_plot(results_df[["Chr", "ChrPos", "PValue"]],pvalue_line=1e-5,xaxis_unit_bp=False) pylab.show() """ Explanation: <font color='red'>see PowerPoint summary</font> FastLMM End of explanation """
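# Optional sketch: look at the strongest associations from the GWAS results above.
# It assumes only the "Chr", "ChrPos" and "PValue" columns of results_df, which are
# the same columns already used for the Manhattan plot.
significant = results_df[results_df["PValue"] < 1e-5]
print "number of SNPs below the plotted 1e-5 line:", len(significant)
print significant[["Chr", "ChrPos", "PValue"]].head(n=10)
"""
Explanation: As a quick follow-up to the Manhattan plot, one can pull out the SNPs that fall below the plotted significance line. The cell above is an optional sketch using only the results_df data frame returned by single_snp_leave_out_one_chrom; no column names beyond those already used in this tutorial are assumed.
End of explanation
"""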
google/starthinker
colabs/kv_uploader.ipynb
apache-2.0
!pip install git+https://github.com/google/starthinker """ Explanation: Tag Key Value Uploader A tool for bulk editing key value pairs for CM placements. License Copyright 2020 Google LLC, Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Disclaimer This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team. This code generated (see starthinker/scripts for possible source): - Command: "python starthinker_ui/manage.py colab" - Command: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install Dependencies First install the libraries needed to execute recipes, this only needs to be done once, then click play. End of explanation """ from starthinker.util.configuration import Configuration CONFIG = Configuration( project="", client={}, service={}, user="/content/user.json", verbose=True ) """ Explanation: 2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. If the recipe uses a Google Cloud Project: Set the configuration project value to the project identifier from these instructions. If the recipe has auth set to user: If you have user credentials: Set the configuration user value to your user credentials JSON. If you DO NOT have user credentials: Set the configuration client value to downloaded client credentials. If the recipe has auth set to service: Set the configuration service value to downloaded service credentials. End of explanation """ FIELDS = { 'recipe_name':'', # Name of document to deploy to. } print("Parameters Set To: %s" % FIELDS) """ Explanation: 3. Enter Tag Key Value Uploader Recipe Parameters Add this card to a recipe and save it. Then click Run Now to deploy. Follow the instructions in the sheet for setup. Modify the values below for your use case, can be done multiple times, then click play. End of explanation """ from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'drive':{ 'auth':'user', 'hour':[ ], 'copy':{ 'source':'https://docs.google.com/spreadsheets/d/19Sxy4BDtK9ocq_INKTiZ-rZHgqhfpiiokXOTsYzmah0/', 'destination':{'field':{'name':'recipe_name','prefix':'Key Value Uploader For ','kind':'string','order':1,'description':'Name of document to deploy to.','default':''}} } } } ] json_set_fields(TASKS, FIELDS) execute(CONFIG, TASKS, force=True) """ Explanation: 4. Execute Tag Key Value Uploader This does NOT need to be modified unless you are changing the recipe, click play. End of explanation """
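# Optional sketch: preview the resolved recipe after the field substitution.
# This only inspects the TASKS structure defined above with the standard json module.
import json
print(json.dumps(TASKS, indent=2))
"""
Explanation: If you want to double-check what was actually deployed, it can help to dump the task list after json_set_fields has substituted your FIELDS values into it. The cell above is an optional sketch and does not change the recipe in any way.
End of explanation
"""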
maxvogel/NetworKit-mirror2
Doc/Notebooks/NetworKit_Tutorial_Part_4.ipynb
mit
from networkit import * %matplotlib inline cd ~/workspace/NetworKit G = readGraph("input/PGPgiantcompo.graph", Format.METIS) # Code for 7-1) # exact computation # Code for 7-2) # approximate computation """ Explanation: Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 4) Determining Important Nodes (cont'd) Betweenness Centrality If you interpret the Facebook graph as web link graph in the previous Q&A session, the obvious ranking choice is the PageRank. Note that today it is only one of many aspects modern web search engines consider to rank web pages. However, we were looking for the eigenvector centrality as MIT8 is a social network (both and possibly others can be applicable, though). In applications where the flow of goods, vehicles, information, etc. via shortest paths in a network plays a major role, betweenness centrality is an interesting centrality index. It is also widely used for social networks. Its drawback is its rather high running time, which makes its use problematic for really large networks. But in many applications we do not need to consider the exact betweenness values nor an exact ranking. An approximation is often good enough. Q&A Session #7 In the PGP network, compute the 15 nodes with the highest (exact) betweenness values and order them accordingly in a ranking. Answer: Perform the same as in 1) with one difference: Instead of using the algorithm for computing exact betweenness values, use the RK approximation algorithm. Use also values different from the default ones for the parameters $\delta$ and $\epsilon$. What effects do you see in comparison to the ranking based on exact values? What about running time (you can use %time preceding a call to get its CPU time)? And how do the parameter settings affect these effects? Answer: End of explanation """ community.detectCommunities(G) """ Explanation: Community Detection This section demonstrates the community detection capabilities of NetworKit. Community detection is concerned with identifying groups of nodes which are significantly more densely connected to each other than to the rest of the network. Code for community detection is contained in the community module. The module provides a top-level function to quickly perform community detection with a suitable algorithm and print some stats about the result. End of explanation """ community.detectCommunities(G, algo=community.PLP(G)) """ Explanation: The default setting uses the parallel Louvain method (PLM) as underlying algorithm. The function prints some statistics and returns the partition object representing the communities in the network as an assignment of node to community label. PLM yields a high-quality solution at reasonably fast running times. Let us now apply other algorithms. To this end, one specifies the algorithm directly in the call. End of explanation """ communities = _ viztasks.drawCommunityGraph(G, communities) """ Explanation: The visualization module, which is based on external code for graph drawing, provides a function which visualizes the community graph for a community detection solution: Communities are contracted into single nodes whose size corresponds to the community size. For problems with hundreds or thousands of communities, this may take a while. End of explanation """ # Code for 8-1) and 8-2) """ Explanation: Q&A Session 8 Run PLMR as well. What are the main differences between the three algorithms PLM, PLMR, and PLP in terms of the solutions they compute and the time they need for this computation? 
Answer: Visualize the three results. Can you see aspects of your answer to 1) in the figure as well? Do the figures lead to other insights? Answer: End of explanation """
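# Optional sketch towards Q&A Session 8: run PLP and PLM on the same graph and compare
# the modularity of their solutions (timings can be added with the %time magic).
# It assumes that PLM and Modularity are exposed under networkit.community in this
# NetworKit version, analogous to the PLP call shown above; PLMR can be handled the same way.
plp_communities = community.detectCommunities(G, algo=community.PLP(G))
plm_communities = community.detectCommunities(G, algo=community.PLM(G))
print("PLP modularity: {0}".format(community.Modularity().getQuality(plp_communities, G)))
print("PLM modularity: {0}".format(community.Modularity().getQuality(plm_communities, G)))
"""
Explanation: A possible starting point for comparing the algorithms is to run them on the same graph and compare the modularity of the resulting partitions. The cell above is an optional sketch under the stated assumptions about the community module; it is not the official solution to the exercise.
End of explanation
"""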
GoogleCloudPlatform/mlops-on-gcp
workshops/kfp-caip-sklearn/lab-03-kfp-cicd/lab-03.ipynb
apache-2.0
ENDPOINT = '<YOUR_ENDPOINT>'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
"""
Explanation: CI/CD for a KFP pipeline
Learning Objectives:
1. Learn how to create a custom Cloud Build builder to pilot CAIP Pipelines
1. Learn how to write a Cloud Build config file to build and push all the artifacts for a KFP
1. Learn how to set up a Cloud Build GitHub trigger to rebuild the KFP
In this lab you will walk through authoring a Cloud Build CI/CD workflow that automatically builds and deploys a KFP pipeline. You will also integrate your workflow with GitHub by setting up a trigger that starts the workflow when a new tag is applied to the GitHub repo hosting the pipeline's code.
Configuring environment settings
Update the ENDPOINT constant with the setting reflecting your lab environment. The endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.
Open the SETTINGS for your instance
Use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.
End of explanation
"""
!cat kfp-cli/Dockerfile
"""
Explanation: Creating the KFP CLI builder
Review the Dockerfile describing the KFP CLI builder.
End of explanation
"""
IMAGE_NAME='kfp-cli'
TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)

!gcloud builds submit --timeout 15m --tag {IMAGE_URI} kfp-cli
"""
Explanation: Build the image and push it to your project's Container Registry.
End of explanation
"""
SUBSTITUTIONS="""
_ENDPOINT={},\
_TRAINER_IMAGE_NAME=trainer_image,\
_BASE_IMAGE_NAME=base_image,\
TAG_NAME=test,\
_PIPELINE_FOLDER=.,\
_PIPELINE_DSL=covertype_training_pipeline.py,\
_PIPELINE_PACKAGE=covertype_training_pipeline.yaml,\
_PIPELINE_NAME=covertype_continuous_training,\
_RUNTIME_VERSION=1.15,\
_PYTHON_VERSION=3.7,\
_USE_KFP_SA=True,\
_COMPONENT_URL_SEARCH_PREFIX=https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/
""".format(ENDPOINT).strip()

!gcloud builds submit . --config cloudbuild.yaml --substitutions {SUBSTITUTIONS}
"""
Explanation: Understanding the Cloud Build workflow.
Review the cloudbuild.yaml file to understand how the CI/CD workflow is implemented and how environment specific settings are abstracted using Cloud Build variables.
The CI/CD workflow automates the steps you walked through manually during lab-02-kfp-pipeline:
1. Builds the trainer image
1. Builds the base image for custom components
1. Compiles the pipeline
1. Uploads the pipeline to the KFP environment
1. Pushes the trainer and base images to your project's Container Registry
Although the KFP backend supports pipeline versioning, this feature has not yet been enabled through the KFP CLI. As a temporary workaround, in the Cloud Build configuration the value of the TAG_NAME variable is appended to the name of the pipeline.
The Cloud Build workflow configuration uses both standard and custom Cloud Build builders. The custom builder encapsulates the KFP CLI.
Manually triggering CI/CD runs
You can manually trigger Cloud Build runs using the gcloud builds submit command.
End of explanation
"""
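# Optional sketch: sanity-check the lab settings before triggering Cloud Build.
# It only inspects the ENDPOINT and PROJECT_ID values defined at the top of this notebook.
assert ENDPOINT != '<YOUR_ENDPOINT>', 'Set ENDPOINT to your AI Platform Pipelines host before running the builds.'
assert PROJECT_ID, 'PROJECT_ID could not be read from the gcloud config.'
print('Project: {}'.format(PROJECT_ID))
print('KFP endpoint: {}'.format(ENDPOINT))
"""
Explanation: Because the Cloud Build submissions above silently pick up ENDPOINT and PROJECT_ID, a small sanity check can save a failed build. The cell above is an optional sketch that only checks the two variables already defined in this notebook.
End of explanation
"""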
mamrehn/machine-learning-tutorials
ipynb/[scikit-learn] first steps.ipynb
cc0-1.0
import numpy; print('numpy:\t', numpy.__version__, sep='\t') import scipy; print('scipy:\t', scipy.__version__, sep='\t') import matplotlib; print('matplotlib:', matplotlib.__version__, sep='\t') import sklearn; print('scikit-learn:', sklearn.__version__, sep='\t') """ Explanation: This is a quick tutorial to get started with scikit-learn. Parts of the code presented are based on this machineLearning tutorial. First, let's take a look at the versions of the libraries involved. End of explanation """ from sklearn import datasets #datasets.load_ -> [press tab for completion] iris = datasets.load_iris() iris.keys() for k in iris.keys(): print('\n== ', k, '==\n', str(iris[k])[0:390]) for k in iris.keys(): print(k, ':', type(iris[k])) [(k, iris[k].shape) for k in iris.keys() if type(iris[k]) == numpy.ndarray] # note: this also imports numpy as np, imports matplotlib.pyplot as plt, and others %pylab inline """ Explanation: Then load some data. End of explanation """ def dtime_to_seconds(dtime): return dtime.seconds + (dtime.microseconds * 1e-6) def bench(func, data, n=10): assert n > 2 score = np.inf try: time = [] for i in range(n): score, t = func(*data) time.append(dtime_to_seconds(t)) # remove extremal values time.pop(np.argmax(time)) time.pop(np.argmin(time)) except Exception as detail: print('%s error in function %s: ', (repr(detail), func)) time = [] return score, np.array(time) def bench_skl(X, y, T, valid): from sklearn import linear_model, ensemble start = datetime.now() # http://scikit-learn.org/stable/modules/classes.html clf = ensemble.RandomForestClassifier(n_estimators=1000, n_jobs=5, verbose=0) #clf = linear_model.ElasticNet(alpha=0.5, l1_ratio=0.5) #clf = linear_model.LogisticRegression() #clf = neighbors.NeighborsClassifier(n_neighbors=n_neighbors, algorithm='brute_inplace') #clf = skl_cluster.KMeans(k=n_components, n_init=1) #... clf.fit(X, y) ## Regression # pred = clf.predict(T) # delta = datetime.now() - start # mse = np.linalg.norm(pred - valid, 2) ** 2 # return mse, delta # Classification score = np.mean(clf.predict(T) == valid) return score, datetime.now() - start from sklearn import datasets import numpy as np from datetime import datetime iris = datasets.load_iris() sample_range = np.random.random_sample(size=iris.target.shape[0]) TH = 0.7 X = np.array([(iris.data[i,]) for i in range(len(iris.target)) if sample_range[i] >= TH]) Y = np.array([(iris.target[i,]) for i in range(len(iris.target)) if sample_range[i] >= TH]) T = np.array([(iris.data[i,]) for i in range(len(iris.target)) if sample_range[i] < TH]) valid = np.array([(iris.target[i,]) for i in range(len(iris.target)) if sample_range[i] < TH]) num_tries = 25 score, times = bench(bench_skl, (X,Y,T,valid), num_tries) print('Tries:', num_tries, 'Score:', score, 'Time:', np.mean(times), '(mean)', np.median(times), '(median)') """ Explanation: Benchmark classificator by ml-benchmarks: End of explanation """
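# Optional sketch: reuse the bench() harness above for a second classifier,
# e.g. logistic regression, to compare against the random forest run.
def bench_logreg(X, y, T, valid):
    from sklearn import linear_model
    start = datetime.now()
    clf = linear_model.LogisticRegression()
    clf.fit(X, y)
    # Classification accuracy on the held-out samples
    score = np.mean(clf.predict(T) == valid)
    return score, datetime.now() - start

score, times = bench(bench_logreg, (X, Y, T, valid), num_tries)
print('LogisticRegression -> score:', score, 'mean time:', np.mean(times))
"""
Explanation: The bench() harness is classifier-agnostic, so swapping in another estimator only requires a small wrapper in the style of bench_skl. The cell above is an optional sketch of such a wrapper for logistic regression, reusing the X, Y, T, valid split and num_tries defined above; the default LogisticRegression settings are an arbitrary choice for illustration.
End of explanation
"""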
danresende/deep-learning
sentiment_network/Sentiment Classification - Project 3 Solution.ipynb
mit
def pretty_print_review_and_label(i): print(labels[i] + "\t:\t" + reviews[i][:80] + "...") g = open('reviews.txt','r') # What we know! reviews = list(map(lambda x:x[:-1],g.readlines())) g.close() g = open('labels.txt','r') # What we WANT to know! labels = list(map(lambda x:x[:-1].upper(),g.readlines())) g.close() len(reviews) reviews[0] labels[0] """ Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network by Andrew Trask Twitter: @iamtrask Blog: http://iamtrask.github.io What You Should Already Know neural networks, forward and back-propagation stochastic gradient descent mean squared error and train/test splits Where to Get Help if You Need it Re-watch previous Udacity Lectures Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17) Shoot me a tweet @iamtrask Tutorial Outline: Intro: The Importance of "Framing a Problem" Curate a Dataset Developing a "Predictive Theory" PROJECT 1: Quick Theory Validation Transforming Text to Numbers PROJECT 2: Creating the Input/Output Data Putting it all together in a Neural Network PROJECT 3: Building our Neural Network Understanding Neural Noise PROJECT 4: Making Learning Faster by Reducing Noise Analyzing Inefficiencies in our Network PROJECT 5: Making our Network Train and Run Faster Further Noise Reduction PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary Analysis: What's going on in the weights? Lesson: Curate a Dataset End of explanation """ print("labels.txt \t : \t reviews.txt\n") pretty_print_review_and_label(2137) pretty_print_review_and_label(12816) pretty_print_review_and_label(6267) pretty_print_review_and_label(21934) pretty_print_review_and_label(5297) pretty_print_review_and_label(4998) """ Explanation: Lesson: Develop a Predictive Theory End of explanation """ from collections import Counter import numpy as np positive_counts = Counter() negative_counts = Counter() total_counts = Counter() for i in range(len(reviews)): if(labels[i] == 'POSITIVE'): for word in reviews[i].split(" "): positive_counts[word] += 1 total_counts[word] += 1 else: for word in reviews[i].split(" "): negative_counts[word] += 1 total_counts[word] += 1 positive_counts.most_common() pos_neg_ratios = Counter() for term,cnt in list(total_counts.most_common()): if(cnt > 100): pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1) pos_neg_ratios[term] = pos_neg_ratio for word,ratio in pos_neg_ratios.most_common(): if(ratio > 1): pos_neg_ratios[word] = np.log(ratio) else: pos_neg_ratios[word] = -np.log((1 / (ratio+0.01))) # words most frequently seen in a review with a "POSITIVE" label pos_neg_ratios.most_common() # words most frequently seen in a review with a "NEGATIVE" label list(reversed(pos_neg_ratios.most_common()))[0:30] """ Explanation: Project 1: Quick Theory Validation End of explanation """ from IPython.display import Image review = "This was a horrible, terrible movie." 
Image(filename='sentiment_network.png') review = "The movie was excellent" Image(filename='sentiment_network_pos.png') """ Explanation: Transforming Text into Numbers End of explanation """ vocab = set(total_counts.keys()) vocab_size = len(vocab) print(vocab_size) list(vocab) import numpy as np layer_0 = np.zeros((1,vocab_size)) layer_0 from IPython.display import Image Image(filename='sentiment_network.png') word2index = {} for i,word in enumerate(vocab): word2index[word] = i word2index def update_input_layer(review): global layer_0 # clear out previous state, reset the layer to be all 0s layer_0 *= 0 for word in review.split(" "): layer_0[0][word2index[word]] += 1 update_input_layer(reviews[0]) layer_0 def get_target_for_label(label): if(label == 'POSITIVE'): return 1 else: return 0 labels[0] get_target_for_label(labels[0]) labels[1] get_target_for_label(labels[1]) """ Explanation: Project 2: Creating the Input/Output Data End of explanation """ import time import sys import numpy as np # Let's tweak our network from before to model these phenomena class SentimentNetwork: def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1): # set our random number generator np.random.seed(1) self.pre_process_data(reviews, labels) self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate) def pre_process_data(self, reviews, labels): review_vocab = set() for review in reviews: for word in review.split(" "): review_vocab.add(word) self.review_vocab = list(review_vocab) label_vocab = set() for label in labels: label_vocab.add(label) self.label_vocab = list(label_vocab) self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) self.word2index = {} for i, word in enumerate(self.review_vocab): self.word2index[word] = i self.label2index = {} for i, label in enumerate(self.label_vocab): self.label2index[label] = i def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes)) self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) self.learning_rate = learning_rate self.layer_0 = np.zeros((1,input_nodes)) def update_input_layer(self,review): # clear out previous state, reset the layer to be all 0s self.layer_0 *= 0 for word in review.split(" "): if(word in self.word2index.keys()): self.layer_0[0][self.word2index[word]] += 1 def get_target_for_label(self,label): if(label == 'POSITIVE'): return 1 else: return 0 def sigmoid(self,x): return 1 / (1 + np.exp(-x)) def sigmoid_output_2_derivative(self,output): return output * (1 - output) def train(self, training_reviews, training_labels): assert(len(training_reviews) == len(training_labels)) correct_so_far = 0 start = time.time() for i in range(len(training_reviews)): review = training_reviews[i] label = training_labels[i] #### Implement the forward pass here #### ### Forward pass ### # Input Layer self.update_input_layer(review) # Hidden layer layer_1 = self.layer_0.dot(self.weights_0_1) # Output layer layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2)) #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output. 
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2) # TODO: Backpropagated error layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error # TODO: Update the weights self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step if(np.abs(layer_2_error) < 0.5): correct_so_far += 1 reviews_per_second = i / float(time.time() - start) sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): correct = 0 start = time.time() for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 reviews_per_second = i / float(time.time() - start) sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): # Input Layer self.update_input_layer(review.lower()) # Hidden layer layer_1 = self.layer_0.dot(self.weights_0_1) # Output layer layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2)) if(layer_2[0] > 0.5): return "POSITIVE" else: return "NEGATIVE" mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) # evaluate our model before training (just to show how horrible it is) mlp.test(reviews[-1000:],labels[-1000:]) # train the network mlp.train(reviews[:-1000],labels[:-1000]) mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01) # train the network mlp.train(reviews[:-1000],labels[:-1000]) mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001) # train the network mlp.train(reviews[:-1000],labels[:-1000]) """ Explanation: Project 3: Building a Neural Network Start with your neural network from the last chapter 3 layer neural network no non-linearity in hidden layer use our functions to create the training data create a "pre_process_data" function to create vocabulary for our training data generating functions modify "train" to train over the entire corpus Where to Get Help if You Need it Re-watch previous week's Udacity Lectures Chapters 3-5 - Grokking Deep Learning - (40% Off: traskud17) End of explanation """
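# Optional sketch: check the last trained network on the held-out reviews and on
# hand-written examples, using the test() and run() methods defined above. Note that
# the last mlp instance was trained with a very small learning rate, so its accuracy
# may still be low.
mlp.test(reviews[-1000:], labels[-1000:])
print(mlp.run("This movie was a wonderful surprise, great acting and story"))
print(mlp.run("Dull, predictable and far too long"))
"""
Explanation: After training, the SentimentNetwork class already gives us everything needed for a quick evaluation: test() scores the network on unseen reviews and run() classifies a single piece of text. The cell above is an optional sketch of both calls; no functionality beyond the class methods defined in this project is assumed.
End of explanation
"""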
brockk/clintrials
tutorials/matchpoint/Ambivalence.ipynb
gpl-3.0
import numpy as np from scipy.stats import norm from clintrials.dosefinding.efftox import EffTox, LpNormCurve real_doses = [7.5, 15, 30, 45] trial_size = 30 cohort_size = 3 first_dose = 3 prior_tox_probs = (0.025, 0.05, 0.1, 0.25) prior_eff_probs = (0.2, 0.3, 0.5, 0.6) tox_cutoff = 0.40 eff_cutoff = 0.45 tox_certainty = 0.05 eff_certainty = 0.03 mu_t_mean, mu_t_sd = -5.4317, 2.7643 beta_t_mean, beta_t_sd = 3.1761, 2.7703 mu_e_mean, mu_e_sd = -0.8442, 1.9786 beta_e_1_mean, beta_e_1_sd = 1.9857, 1.9820 beta_e_2_mean, beta_e_2_sd = 0, 0.2 psi_mean, psi_sd = 0, 1 efftox_priors = [ norm(loc=mu_t_mean, scale=mu_t_sd), norm(loc=beta_t_mean, scale=beta_t_sd), norm(loc=mu_e_mean, scale=mu_e_sd), norm(loc=beta_e_1_mean, scale=beta_e_1_sd), norm(loc=beta_e_2_mean, scale=beta_e_2_sd), norm(loc=psi_mean, scale=psi_sd), ] """ Explanation: Implementing the EffTox Dose-Finding Design in the Matchpoint Trials This tutorial complements the manuscript Implementing the EffTox Dose-Finding Design in the Matchpoint Trial (Brock et al.,in submission). Please consult the paper for the clinical background, the methodology details, and full explanation of the terminology. Dose Ambivalence In this notebook, we illustrate the phenomenon of dose ambivalence using the EffTox design in the seamless phase I/II dose-finding clinical trial, Matchpoint. End of explanation """ hinge_points = [(0.4, 0), (1, 0.7), (0.5, 0.4)] metric = LpNormCurve(hinge_points[0][0], hinge_points[1][1], hinge_points[2][0], hinge_points[2][1]) et = EffTox(real_doses, efftox_priors, tox_cutoff, eff_cutoff, tox_certainty, eff_certainty, metric, trial_size, first_dose) """ Explanation: The above parameters are explained in the manuscript. End of explanation """ outcomes = [(3, 0, 0), (3, 1, 0), (3, 0, 1)] et.reset() np.random.seed(123) et.update(outcomes) """ Explanation: The EffTox class is an object-oriented implementation of the trial design by Thall & Cook (Thall, P. F., & Cook, J. D. (2004). Dose-Finding Based on Efficacy-Toxicity Trade-Offs. Biometrics, 60(3), 684–693.) Dose ambivalence after 3NTE Outcomes for a patient are represented by a three item tuple, where: first item is 1-based dose-index give (i.e. 3 is dose-level 3); second item is 1 if toxicity happened, else 0; third item is 1 if efficacy happened, else 0. Outcomes for several patients are represented as lists: End of explanation """ et.reset() np.random.seed(321) et.update(outcomes) """ Explanation: So, using seed 123, dose-level 3 is recommended to be given to the next patient after oberving 3NTE in the first cohort of patients. Fair enough. End of explanation """ def get_next_dose(trial, outcomes, **kwargs): trial.reset() next_dose = trial.update(outcomes, **kwargs) return next_dose """ Explanation: Wait...using seed 321, that advice is now dose-level 4. I need a single answer. What should I do? Let's define a simple function to calculate next dose based on some outcomes: End of explanation """ np.random.seed(123) replicates = [get_next_dose(et, outcomes, n=10**5) for i in range(100)] doses, freq = np.unique(replicates, return_counts=True) list(zip(doses, 1.0 * freq / len(replicates))) """ Explanation: And then run that a number of times. For indication, 100 iterations will suffice (it takes a wee while...). In practice, you might use more iterations. End of explanation """
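# Optional sketch: summarise the spread of dose recommendations across the replicates.
# doses and freq come from np.unique(replicates, return_counts=True) in the cell above.
modal_dose = doses[np.argmax(freq)]
share = 1.0 * freq.max() / len(replicates)
print('modal recommendation: dose-level {0} in {1:.0%} of replicates'.format(modal_dose, share))
"""
Explanation: A compact way to report the ambivalence is to state which dose is recommended most often and how large its share is. The cell above is an optional sketch built only on the doses and freq arrays computed above; it does not change the underlying EffTox calculations.
End of explanation
"""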
astarostin/MachineLearningSpecializationCoursera
course1/week4/CentralLimitTheoremTask.ipynb
apache-2.0
gilbrat_rv = sts.gilbrat()
sample = gilbrat_rv.rvs(1000)
"""
Explanation: For this study we take the Gilbrat distribution and generate a sample of size 1000.
End of explanation
"""
plt.hist(sample, bins = 15, normed=True)
plt.ylabel('$f(x)$, number of samples')
plt.xlabel('$x$')

x = np.linspace(0,15,1000)
pdf = gilbrat_rv.pdf(x)
plt.plot(x, pdf, label='theoretical PDF', color='red')

plt.legend(loc='upper right')
"""
Explanation: Let us plot a histogram of the sample together with the theoretical probability density function of the random variable.
End of explanation
"""
n = 5
sample_means = []
for i in xrange(1000):
    smpl = gilbrat_rv.rvs(n)
    sample_means.append(smpl.mean())

plt.hist(sample_means, normed=True)
plt.ylabel('$f(x)$, number of sample means')
plt.xlabel('$x$')

norm_rv = sts.norm(loc=np.sqrt(np.e), scale=np.sqrt(np.e*(np.e-1)/n))
pdf = norm_rv.pdf(x)
plt.plot(x, pdf, color='red', linewidth=2)
"""
Explanation: According to reference sources, the Gilbrat distribution has mean $\sqrt{e}$ and variance $e(e-1)$. Therefore, by the central limit theorem, the distribution of the sample means is described by a normal distribution with parameters $\mu=\sqrt{e}$ and $\sigma^2=e(e-1)/n$, where n is the size of the samples over which the means are computed.
Let us plot the histogram of the sample mean distribution for samples of size 5 together with the corresponding probability density function.
End of explanation
"""
n = 10
sample_means = []
for i in xrange(1000):
    smpl = gilbrat_rv.rvs(n)
    sample_means.append(smpl.mean())

plt.hist(sample_means, normed=True)
plt.ylabel('$f(x)$, number of sample means')
plt.xlabel('$x$')

norm_rv = sts.norm(loc=np.sqrt(np.e), scale=np.sqrt(np.e*(np.e-1)/n))
pdf = norm_rv.pdf(x)
plt.plot(x, pdf, color='red', linewidth=2)
"""
Explanation: Let us plot the histogram of the sample mean distribution for samples of size 10 together with the corresponding probability density function.
End of explanation
"""
n = 50
sample_means = []
for i in xrange(1000):
    smpl = gilbrat_rv.rvs(n)
    sample_means.append(smpl.mean())

plt.hist(sample_means, normed=True)
plt.ylabel('$f(x)$, number of sample means')
plt.xlabel('$x$')

norm_rv = sts.norm(loc=np.sqrt(np.e), scale=np.sqrt(np.e*(np.e-1)/n))
pdf = norm_rv.pdf(x)
plt.plot(x, pdf, color='red', linewidth=2)
"""
Explanation: Let us plot the histogram of the sample mean distribution for samples of size 50 together with the corresponding probability density function.
End of explanation
"""
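# Optional sketch: compare the empirical moments of the sample means (n = 50)
# with the values predicted by the central limit theorem.
theoretical_mean = np.sqrt(np.e)
theoretical_std = np.sqrt(np.e * (np.e - 1) / n)
print('empirical mean {0:.3f} vs theoretical {1:.3f}'.format(np.mean(sample_means), theoretical_mean))
print('empirical std  {0:.3f} vs theoretical {1:.3f}'.format(np.std(sample_means), theoretical_std))
"""
Explanation: Besides the visual comparison, it is instructive to check the empirical mean and standard deviation of the sample means against the theoretical values used above. The cell above is an optional sketch that reuses the sample_means list and n from the last cell.
End of explanation
"""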
podondra/bt-spectraldl
notebooks/00-spectroscopy.ipynb
gpl-3.0
%matplotlib inline
import numpy as np
import astropy.analytic_functions
import astropy.io.fits
import matplotlib.pyplot as plt

wavelens = np.linspace(100, 30000, num=1000)
temperature = np.array([5000, 4000, 3000]).reshape(3, 1)
with np.errstate(all='ignore'):
    flux_lam = astropy.analytic_functions.blackbody_lambda(
        wavelens, temperature
    )

for flux, temp in zip(flux_lam, temperature.ravel()):
    plt.plot(wavelens, flux, label='{} K'.format(temp))
plt.legend()
plt.xlabel('wavelength (Angstrom)')
plt.ylabel('flux')
plt.title('blackbody radiation graph')
plt.grid()
"""
Explanation: Astronomical Spectroscopy
To generate publication images, change the Matplotlib backend to nbagg:
%matplotlib nbagg
Blackbody Radiation
A blackbody is a hypothetical object which is a perfect absorber and emitter of radiation over all wavelengths. The spectral flux distribution of a blackbody's thermal energy depends on its temperature. Stars are often modelled as blackbodies in astronomy. Their spectrum approximates the blackbody spectrum.
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(2, 1)
for ax in (ax1, ax2):
    ax.set_xlabel('wavelength (Angstrom)')
    ax.set_ylabel('flux')
    ax.axvline(x=6562.8, color='black', label='H-alpha', linestyle='dashed')
    ax.grid()

with astropy.io.fits.open('data/alpha-lyr-absorption.fits') as hdulist:
    data = hdulist[1].data
    ax1.plot(
        data.field('spectral'),
        data.field('flux')
    )
    ax1.set_title('absorption in spectrum of {}'.format(hdulist[1].header['OBJECT']))

with astropy.io.fits.open('data/gamma-cas-emission.fits') as hdulist:
    data = hdulist[1].data
    ax2.plot(
        data.field('spectral'),
        data.field('flux')
    )
    ax2.set_title('emission in spectrum of {}'.format(hdulist[1].header['OBJECT']))

fig.tight_layout()
"""
Explanation: Spectral Lines
Spectral lines can be used to identify the chemical composition of stars. If the light from a star is separated with a prism, its spectrum of colours is crossed with discrete lines. This can also be visualized as the flux at particular wavelengths. Flux is the total amount of energy that crosses a unit area per unit time.
There are two types of spectral lines: emission and absorption lines.
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(2, 1)
for ax in (ax1, ax2):
    ax.set_xlabel('wavelength (Angstrom)')
    ax.set_ylabel('flux')
    ax.grid()

with astropy.io.fits.open('data/bt-cmi-lamost.fits') as hdulist:
    header = hdulist[0].header
    start = header['CRVAL1']
    delta = header['CD1_1']
    pix = header['CRPIX1']
    waves = np.array([10 ** (start + (i - pix + 1) * delta) for i in range(header['NAXIS1'])])
    ax1.plot(waves, hdulist[0].data[0])
    ax1.set_title('original spectrum')
    ax2.plot(waves, hdulist[0].data[2])
    ax2.set_title('spectrum with normalized continuum')

fig.tight_layout()
"""
Explanation: Continuum Normalization
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(2, 1)
for ax in (ax1, ax2):
    ax.set_xlabel('wavelength (Angstrom)')
    ax.set_ylabel('flux')
    ax.grid()

with astropy.io.fits.open('data/bt-cmi-ondrejov.fits') as hdulist:
    header = hdulist[1].header
    data = hdulist[1].data
    ax1.plot(data.field('spectral'), data.field('flux'))
    ax1.set_title('spectrum of {} from Ondřejov'.format(header['OBJECT']))

with astropy.io.fits.open('data/bt-cmi-lamost.fits') as hdulist:
    # waves from previous code cell
    ax2.plot(waves, hdulist[0].data[2])
    ax2.set_title('spectrum of BT CMi from LAMOST')

fig.tight_layout()
"""
Explanation: LAMOST versus Ondřejov
Cross-matched spectrum of BT CMi.
End of explanation
"""
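# Optional sketch: a crude do-it-yourself continuum normalisation of the LAMOST
# spectrum, fitting a low-degree polynomial as the pseudo-continuum. The HDU layout
# (data[0] = original flux) and the waves array follow the cells above; the polynomial
# degree of 5 is an arbitrary choice for illustration only.
with astropy.io.fits.open('data/bt-cmi-lamost.fits') as hdulist:
    flux = hdulist[0].data[0]

coeffs = np.polyfit(waves, flux, deg=5)
continuum = np.polyval(coeffs, waves)

plt.plot(waves, flux / continuum)
plt.xlabel('wavelength (Angstrom)')
plt.ylabel('normalized flux')
plt.title('polynomial continuum normalization (sketch)')
plt.grid()
"""
Explanation: The LAMOST file already ships a normalized continuum, but a rough normalization can also be approximated by dividing the flux by a fitted low-order polynomial. The cell above is an optional sketch of that idea; it is not the pipeline's actual normalization procedure.
End of explanation
"""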
a301-teaching/a301_code
notebooks/resample.ipynb
mit
import h5py from a301lib.geolocate import find_corners import numpy as np import pyproj import pyresample from pyresample import kd_tree,geometry from pyresample.plot import area_def2basemap from matplotlib import pyplot as plt from a301utils.modismeta_read import parseMeta from a301utils.a301_readfile import download data_name='MYD021KM.A2016224.2100.006.2016225153002.h5' download(data_name) geom='MYD03.A2016224.2100.006.2016225152335.h5' download(geom) index=0 my_name = 'EV_250_Aggr1km_RefSB' with h5py.File(data_name,'r') as h5_file: chan1=h5_file['MODIS_SWATH_Type_L1B']['Data Fields'][my_name][index,:,:] scale=h5_file['MODIS_SWATH_Type_L1B']['Data Fields'][my_name].attrs['reflectance_scales'][...] offset=h5_file['MODIS_SWATH_Type_L1B']['Data Fields'][my_name].attrs['reflectance_offsets'][...] chan1_calibrated =(chan1 - offset[index])*scale[index] with h5py.File(geom) as geo_file: lon_data=geo_file['MODIS_Swath_Type_GEO']['Geolocation Fields']['Longitude'][...] lat_data=geo_file['MODIS_Swath_Type_GEO']['Geolocation Fields']['Latitude'][...] """ Explanation: working with projections We have been using fast_hist and fast_avg to map our MODIS level 1b pixels to a uniform lat/lon grid for plotting. There is one big problem with this approach: the actual ground spacing of a uniform longitude grid changes drastically from the equator to the poles, because longitude lines converge. A better approach is to do the following: 1) pick a map projection 2) define an x,y in grid in meters based on that projection 3) resample the satellite data onto that grid 4) save it as a geotiff file, which maintains all the information about the projection we used First, get channel 1 and the lat/lon files as usual End of explanation """ corners=parseMeta(data_name) proj_id = 'laea' datum = 'WGS84' lat_0 = '{lat_0:5.2f}'.format_map(corners) lon_0= '{lon_0:5.2f}'.format_map(corners) lon_bbox = [corners['min_lon'],corners['max_lon']] lat_bbox = [corners['min_lat'],corners['max_lat']] """ Explanation: Next use a new function to get the corner points and the center lat and lon from MODIS metadata The function is parseMeta and you can treat it as a black box unless you're interested in how regular expressions work in python. It does the same thing that find_corners does, but skips the calculation, since NASA has already computed everything we need more accurately. End of explanation """ area_dict = dict(datum=datum,lat_0=lat_0,lon_0=lon_0, proj=proj_id,units='m') prj=pyproj.Proj(area_dict) x, y = prj(lon_bbox, lat_bbox) xsize=2200 ysize=2500 area_id = 'granule' area_name = 'modis swath 5min granule' area_extent = (x[0], y[0], x[1], y[1]) area_def = geometry.AreaDefinition(area_id, area_name, proj_id, area_dict, xsize,ysize, area_extent) swath_def = geometry.SwathDefinition(lons=lon_data, lats=lat_data) result = kd_tree.resample_nearest(swath_def, chan1_calibrated.ravel(), area_def, radius_of_influence=5000, nprocs=2) print(area_def) """ Explanation: 1. Pick a map projection The program that resamples modis data onto a particular projected grid is pyresample. I'll resample the swath using a lambert azimuthal equal area projection The output will be a 2500 x 2500 array called result which has the channel 1 data resampled onto a grid centered at the lat_0,lon_0 center of the swath. 
The values will be determined by averaging the nearest neighbors to a particular cell location, using a zone of influence with a radius of 5000 meters, and a kd-tree The next cell puts the projection into a structure that pyresample understands, and does the resampling 2. and 3.: define an x,y grid and resample onto it End of explanation """ plt.close('all') fig,ax = plt.subplots(1,1, figsize=(8,8)) bmap=area_def2basemap(area_def,ax=ax,resolution='c') num_meridians=180 num_parallels = 90 vmin=None; vmax=None col = bmap.imshow(result, origin='upper', vmin=0, vmax=0.4) label='channel 1' bmap.drawmeridians(np.arange(-180, 180, num_meridians),labels=[True,False,False,True]) bmap.drawparallels(np.arange(-90, 90, num_parallels),labels=[False,True,True,False]) bmap.drawcoastlines() fig.colorbar(col, shrink=0.5, pad=0.05).set_label(label) fig.canvas.draw() plt.show() """ Explanation: plot the reprojected image pyresample can take its projection structure and turn it into a basemap instance using area_def2basemap This works because both pyresample and basemap use the same underlying projection code called pyproj. End of explanation """ from osgeo import gdal, osr raster = gdal.GetDriverByName("GTiff") gformat = gdal.GDT_Float32 channel=result.astype(np.float32) opacity=0 fill_value=0 g_opts=[] height,width=result.shape tiffile='test.tif' dst_ds = raster.Create(tiffile,width,height, 1,gformat,g_opts) area=area_def adfgeotransform = [area.area_extent[0], area.pixel_size_x, 0, area.area_extent[3], 0, -area.pixel_size_y] dst_ds.SetGeoTransform(adfgeotransform) srs = osr.SpatialReference() srs.ImportFromProj4(area.proj4_string) srs.SetProjCS(area.proj_id) srs = srs.ExportToWkt() dst_ds.SetProjection(srs) dst_ds.GetRasterBand(1).WriteArray(channel) del dst_ds """ Explanation: 4. Write ths out as a geotiff file Here's how to create a tif file that saves all of the projection infomation along with the gridded raster image of channel 1 End of explanation """ !gdalinfo test.tif !gdalsrsinfo test.tif """ Explanation: The gdal package has some scripts to dump the details about the tif file and the srs ("spatial reference system") End of explanation """ import gdal from gdalconst import GA_ReadOnly data = gdal.Open(tiffile, GA_ReadOnly) geoTransform = data.GetGeoTransform() minx = geoTransform[0] maxy = geoTransform[3] maxx = minx + geoTransform[1] * data.RasterXSize miny = maxy + geoTransform[5] * data.RasterYSize print('\nhere are the projected corners of the x,y raster\n{}'.format([minx, miny, maxx, maxy])) print('\nhere are the attributes of the data instance:\n{}\n'.format(dir(data))) data = None """ Explanation: Read the projection data back in from the tif file using gdal: End of explanation """
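As a small added sanity check (not in the original notebook): the same pyproj projection object can be run in reverse to turn the projected corner coordinates read back from the GeoTIFF into longitude/latitude, which should bracket the corner values reported by parseMeta. This assumes prj, minx, miny, maxx and maxy are still defined from the cells above.

# invert the laea projection: x, y in metres -> lon, lat in degrees
corner_x = [minx, maxx, maxx, minx]
corner_y = [miny, miny, maxy, maxy]
corner_lons, corner_lats = prj(corner_x, corner_y, inverse=True)
for lon, lat in zip(corner_lons, corner_lats):
    print('corner lon: {:9.3f}  lat: {:8.3f}'.format(lon, lat))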
mromanello/SunoikisisDC_NER
Sunoikisis - Named Entity Extraction 2a-FM.ipynb
gpl-3.0
from idai_journals import nlp as dainlp import re from treetagger import TreeTagger from nltk.tag import StanfordNERTagger from nltk.chunk.util import tree2conlltags from nltk.chunk import RegexpParser from nltk.tree import Tree from nltk.tag import StanfordNERTaggelr """ Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Summary-of-the-previous-lecture" data-toc-modified-id="Summary-of-the-previous-lecture-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Summary of the previous lecture</a></div><div class="lev2 toc-item"><a href="#Libraries-and-import-statements" data-toc-modified-id="Libraries-and-import-statements-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Libraries and import statements</a></div><div class="lev2 toc-item"><a href="#Data-types" data-toc-modified-id="Data-types-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Data types</a></div><div class="lev2 toc-item"><a href="#Data-collections-(and-variables)" data-toc-modified-id="Data-collections-(and-variables)-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Data collections (and variables)</a></div><div class="lev2 toc-item"><a href="#For-loops-and-if-statements" data-toc-modified-id="For-loops-and-if-statements-14"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>For loops and if statements</a></div><div class="lev2 toc-item"><a href="#Functions" data-toc-modified-id="Functions-15"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Functions</a></div><div class="lev2 toc-item"><a href="#Handling-exceptions" data-toc-modified-id="Handling-exceptions-16"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Handling exceptions</a></div><div class="lev2 toc-item"><a href="#A-bonus:-objects" data-toc-modified-id="A-bonus:-objects-17"><span class="toc-item-num">1.7&nbsp;&nbsp;</span>A bonus: objects</a></div><div class="lev1 toc-item"><a href="#Regular-expressions" data-toc-modified-id="Regular-expressions-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Regular expressions</a></div><div class="lev1 toc-item"><a href="#Extracting-dates-and-persons-from-texts" data-toc-modified-id="Extracting-dates-and-persons-from-texts-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Extracting dates and persons from texts</a></div><div class="lev2 toc-item"><a href="#A-modern-text-in-English" data-toc-modified-id="A-modern-text-in-English-31"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>A modern text in English</a></div><div class="lev2 toc-item"><a href="#Part-Of-Speech-(POS)-and-Named-Entity-(NE)-Tagging" data-toc-modified-id="Part-Of-Speech-(POS)-and-Named-Entity-(NE)-Tagging-32"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Part-Of-Speech (POS) and Named-Entity (NE) Tagging</a></div><div class="lev2 toc-item"><a href="#Chunking" data-toc-modified-id="Chunking-33"><span class="toc-item-num">3.3&nbsp;&nbsp;</span>Chunking</a></div><div class="lev2 toc-item"><a href="#Export-to-IOB-notation" data-toc-modified-id="Export-to-IOB-notation-34"><span class="toc-item-num">3.4&nbsp;&nbsp;</span>Export to IOB notation</a></div><div class="lev1 toc-item"><a href="#Regex-tagger" data-toc-modified-id="Regex-tagger-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Regex tagger</a></div><div class="lev1 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Exercise</a></div> # Summary of the previous lecture In our previous common session, we have introduced some fundamental notions of the Python language. Let's review some of them! 
## Libraries and import statements End of explanation """ #interges and floats 3 + 0.5 #strings "hello" #Booleans True """ Explanation: Data types End of explanation """ #lists (can also contain multiple different data types) li = ["Leipzig", "London", "Berlin", "Boston", 4, False] #tuples (like lists, but immutable) tu = ("tuple", "list", "dictionary") #dictionaries (key : value pairs) di = {"key" : "value", "other-key" : "second value"} """ Explanation: Data collections (and variables) End of explanation """ #home assignment: try to figure out what the if statement (line 2) does for l in li: if isinstance(l, str): print(l) """ Explanation: For loops and if statements End of explanation """ def printMe(message): print(message) printMe("Hello, world!") printMe("goodbye...") """ Explanation: Functions End of explanation """ l = ["zero", "one", "two", "three"] l[10] try: l[10] except IndexError: print("hey, your index is way too high!") """ Explanation: Handling exceptions End of explanation """ class MagicBroom(): #this is called "constructor"; it's a special method def __init__(self, name, speed=20): self.name = name self.buckets = 2 self.speed = speed def greet(self): print("Hello, my name is %s! What can I do for you?" % self.name) def fetchWater(self): if self.speed >= 20: print("Yes, sir! I'll be back in a sec!") else: print("Allright, but I am taking my time!") mickey = MagicBroom("Mickey") mickey.greet() peter = MagicBroom("Peter", speed=5) mickey.speed mickey.fetchWater() peter.fetchWater() """ Explanation: A bonus: objects Objects might be a bit complicated, but they're very important to understand the code written by other people, while most of the programs that you'll find around is written using classes and objects. Oh, and the good news is... you've already met them! What are "objects" in a programming language like Python? Well, I like to think about them as... magical, animated tools! Say that you want to fetch water from a well (and maybe clean some of the mess...). Well, the object-oriented approach to this tak consists in creating one or more magic brooms that go and fetch the water for you! In order to create them, you have to conceptualize the broom in terms of: the special features it has (e.g. number of buckets carried, speed...) the actions that it can execute (fetch water, clean the floor) That's it! In programming parlance, the features are called properties of the object; the actions are called methods. When you want to build your own magic brooms you first create a sort of prototype for each of them (which is called the class of magic brooms); then you can go on and create as many brooms as you want... Here's how to do it! (very simplified) End of explanation """ wiki = 'The set of integers consists of zero (0), the positive natural numbers (1, 2, 3, …), also called whole numbers or counting numbers,[1][2] and their additive inverses (the negative integers, i.e., −1, −2, −3, …). This is often denoted by a boldface Z ("Z") or blackboard bold Z {\displaystyle \mathbb {Z} } \mathbb {Z} (Unicode U+2124 ℤ) standing for the German word Zahlen ([ˈtsaːlən], "numbers").[3][4] ℤ is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite.' """ Explanation: Regular expressions How would you find all the numbers in this sentence? 
The set of integers consists of zero (0), the positive natural numbers (1, 2, 3, …), also called whole numbers or counting numbers,[1][2] and their additive inverses (the negative integers, i.e., −1, −2, −3, …). This is often denoted by a boldface Z ("Z") or blackboard bold Z {\displaystyle \mathbb {Z} } \mathbb {Z} (Unicode U+2124 ℤ) standing for the German word Zahlen ([ˈtsaːlən], "numbers").[3][4] ℤ is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite. End of explanation """ import re """ Explanation: We'd need a way to tell our machine not to look for specific strings, bur rather for classes of strings, i.e. using some sort of meta-character to catch a whole group of signs (e.g. the numbers); then we'd need to tell to optionally include/exclude some other signs, or to catch the numbers only if they're not preceeded/followed by other signs... That's precisely what Regular Expressions do! They allow you to express a query as a string of metacharacters (or groups of metacharacters). How do we use them in Python? First, we need to import a module from the Standard Library (i.e. you already have them with Python: no need to install external libraries) End of explanation """ #here is one to catch all numbers reg = re.compile(r'[0-9]+') #or: r'\d+' type(reg) """ Explanation: A cool feature of RegExp in Python is that you can create your complicated patterns as objects (and assign them to variables)! That's right, RegExp patterns are your magic brooms... End of explanation """ reg.findall(wiki) """ Explanation: The Pattern object has a number of interesting methods to search and replace the pattern. Generally, you use them with the text that must be searched as an argument. For instance, findall returns all matches as a list End of explanation """ reg = re.compile(r'(?<!\[)−?\d+(?!\])') # the 'r' is there to make sure that we don't have to "escape the escape" sing (\) reg.findall(wiki) """ Explanation: Kind of a sloppy job we did! The negative numbers are not captured as negative; the footnote reference (e.g. [1], [4]) are also captured and we don't want them... We can do better. Let's improve our pattern so that we include the '-' signs (if present) and we get rid of the footnotes End of explanation """ with open("data/txt/article446_10k.txt") as f: txt = f.read() txt[:1000] """ Explanation: Now it's time to go back to our task of (Named) Entity recognition and extraction task. But we're going to use RegExp patterns and syntax quite a few times now... Extracting dates and persons from texts As Matteo said last time, the concept of "named entity" is domain- and task- specific. While a person's or a place's name will more or less always fall under the definition, in some contexts of information extraction people might be interested in other kinds of real-life "entities", such as time references (months, days, dates) or museum objects, which are not relevant to others. In this exercise, we are going to expand on what Matteo did last time with proper names in Latin and look at two specific classes of "entities" mentioned in a modern scientific text about ancient history: dates and persons. A modern text in English First, let's grab a text. We will be working with an English article on Roman history. The article is: Frederik Juliaan Vervaet, The Praetorian Proconsuls of the Roman Republic (211–52 BCE). A Constitutional Survey, Chiron 42(2012): 45-96. 
Let's start by loading the text and inspect the first 10.000 characters (we'll be working with just the first 10k words) End of explanation """ #first we load the library from treetagger import TreeTagger #That's right! we start by creating a Tagger "magic broom" (a Tagger object) tt = TreeTagger(language="english") #then we tag our text tagged = tt.tag(txt) tagged[:20] """ Explanation: Part-Of-Speech (POS) and Named-Entity (NE) Tagging Most of the time POS tagging is the precondition before you can perform any other advanced operation on a text As we did with Matteo last time, by "tagging" we mean the coupling of each word with a tag that describe some property of the word itself. Part-of-speech tags define what word class (e.g. "verb", or "proper noun") a text token belongs to. There are several tagset used for each language, and several software (pos taggers) who can tag your text automatically. One of the most used is TreeTagger, which has pretrained classifiers for many languages. Let's run it from Python, using one of the few "wrappers" available End of explanation """ #first, we define the path to the English classifier for Stanford NER english_classifier = 'english.all.3class.distsim.crf.ser.gz' twords = [w[0] for w in tagged] #then... guess what? Yes, we create a NER-tagger Magic Broom ;-) from nltk.tag import StanfordNERTagger ner_tagger = StanfordNERTagger(english_classifier) ners = ner_tagger.tag(twords) #not very pretty... ners[:20] """ Explanation: Named Entity Recognition (using a tool like the Stanford NER that we saw in our last lecture) is also a way of tagging the text, this time using information not on the word class but on a different level of classification (place, person, organization or none of the above). Let's do this too End of explanation """ from nltk.chunk import RegexpParser english_chunker = RegexpParser(r''' LOC: {<LOCATION><(PERSON|LOCATION|MISC|ORGANIZATION)>*} ''') """ Explanation: Chunking As we saw, when we analyze a text we proceed word by word (more exactly: token by token). However, Named Entities (now including dates) often span over more than one token. The task of sub-dividing a section of text into phrases and/or meaningful constituents (which may include 1 or more text tokens) is called chunking In the image above, the tokens are [We, saw, the, yellow, dog]. Two Noun Phrases (NP) can be chunked: * "we" (1 token) * "the yellow dog" (3 tokens) The IOB notation that Matteo introduced last time is a popular way to store the information about chunks in a word-by-word format. In the case of "the yellow dog", we will have: * saw = not in a chunk --> O * the = beginning of the chunk --> B-NP * yellow = internal part of the chunk --> I-NP * dog = internal part of the chunk --> I-NP The easiest method for chunking a sentence in Python is to use the information in the Tag and a regexp syntax. For example, if we have: in O New LOCATION York LOCATION City LOCATION We easily see that the 3 tokens tagged as LOCATION go together. We may thus write a grammar rule that chunks the LOC together: LOC: {&lt;LOCATION&gt;&lt;LOCATION&gt;*} Which means group in a chunk named LOC every token tagged as LOCATION, including any token tagged as LOCATION that might optionally come after. And the same goes also for PERSONS and ORGANIZATIONS. We may even use RegExp syntax to be more tollerant and make room for annotation errors, in case e.g. the two tokens Geore Washington are wrongly tagged as PERSON and LOCATION. 
Here's how I'd do it (it's not perfect at all but it should work in most cases)... End of explanation """ tree = english_chunker.parse(ners[:20]) print(tree) """ Explanation: Let's see it in action with the first few words End of explanation """ from nltk.chunk.util import tree2conlltags iobs = tree2conlltags(tree) iobs """ Explanation: Well... OK, "Roman Republic" is not a location, but at least the chunking is exactly what we wanted to have, right? Export to IOB notation OK, but now how do we convert this to the IOB notation? Luckily, there's a ready-made function in a module from the NLTK library! Let's load and use it (just in case, there is also a function that does the reverse: from IOB to tree) End of explanation """ from nltk.tag import RegexpTagger #here is our list of patterns patterns = [ (r'\d+$', 'CD'), (r'\d+[-–]\d+$', "Date"), (r'\d{1,2}[-\.\/]\d{1,2}[-\.\/]\d{2,4}', "Date"), (r'January|February|March|April|May|June|July|August|September|October|November|December', "Date"), (r'\d{4}$', "Date"), (r'BCE|BC|AD', "Date"), (r'.*', "O") ] #Our RegexpTagger magic broom! We initialize it with our pattern list tagger = RegexpTagger(patterns) #let's test it with a trivial example tagger.tag("I was born on September 14 , or 14-09".split(" ")) """ Explanation: Regex tagger Now, to go back to our original task, how do we use all this to annotate the dates and export them to IOB? Dates are often just numbers (e.g. "2017"); sometimes they come in more complex formats like: "14 September 2017" or "14-09-2017". One very simple solutions to find them and annotate them with a chunking notation might be to tag the tokens of our text with a very simple custom tagset that we design for dates. We assign "O" to all tokens, save the numbers (that we tag "CD") and some selected time formats or expressions, like the months of the year or the sequence number-number. We use the tag "Date" for them. In order to do this, we need: regular expression syntax a tagger that works with RegExp patterns A module of NLTK provides with exactly that tagger that can work with RegExp syntax End of explanation """ reg_tag = tagger.tag(twords) reg_tag[:50] """ Explanation: Now let's see it in action on the real stuff End of explanation """ date_chunker = RegexpParser(r''' DATE: {<CD>*<Date><Date|CD>*} DATE: {<CD>+} ''') t = date_chunker.parse(reg_tag) #we use that function to make sure that the tree is not too complex to be converted flat = dainlp.flatten_tree(t) iob_list = tree2conlltags(flat) iob_list[:50] #then we can write it on an output file with open("data/iob/article_446_date_aut.iob", "w") as out: for i in iob_list: out.write("\t".join(i)+"\n") """ Explanation: Now we just need to chunk it and export it to IOB. Then we are ready to evaluate the manual annotation... First, we have to define a chunker End of explanation """ #just remember that the path to the English pre-trained classifier for Stanfor NER is english_classifier = 'english.all.3class.distsim.crf.ser.gz' """ Explanation: Exercise In the practical exercise, you are requested to extract the person names from the same article that we used for dates. You will annotate them using the Stanford NER with the pre-trained classifier for English that come with the software; extract the Person chunks; evaluate the results against a golden standard. 
Here is a summary of the steps that you will have to execute in order to solve the exercise:
1. load the file: data/txt/article446_10k.txt and read its content
2. annotate the Named Entities using Stanford NER
3. define an appropriate chunker for Persons
4. chunk the extracted Named Entities
5. convert the chunked Tree into IOB format
6. evaluate the IOB annotation using the appropriate functions
7. use the file: data/iob/article_446_person_GOLD.iob as gold standard
8. report the final evaluation metrics (precision, recall, F-score)
End of explanation """
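A possible skeleton for the exercise is sketched below. It is only one way to approach it: the whitespace tokenisation, the PERS grammar, the variable names and the simple token-level scoring are my own simplifying assumptions rather than part of the course material (in particular, the tokenisation and chunk labels must line up with the gold IOB file for the scores to be meaningful), but the NLTK and Stanford NER calls are the same ones used earlier in the notebook.

from nltk.tag import StanfordNERTagger
from nltk.chunk import RegexpParser
from nltk.chunk.util import tree2conlltags

# 1. load the text (deliberately simple whitespace tokenisation)
with open("data/txt/article446_10k.txt") as f:
    tokens = f.read().split()

# 2. tag named entities with the pre-trained 3-class English model
ner_tagger = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz')
ners = ner_tagger.tag(tokens)

# 3.-4. chunk runs of PERSON tokens into PERS chunks
person_chunker = RegexpParser(r'''
PERS: {<PERSON><PERSON>*}
''')
tree = person_chunker.parse(ners)

# 5. convert the chunk tree to IOB triples (token, NER tag, IOB label)
iob_list = tree2conlltags(tree)

# 6.-8. token-by-token comparison against the gold standard
with open("data/iob/article_446_person_GOLD.iob") as gold_file:
    gold_labels = [line.strip().split("\t")[-1] for line in gold_file if line.strip()]
pred_labels = [iob for _, _, iob in iob_list]

tp = sum(1 for p, g in zip(pred_labels, gold_labels) if p != 'O' and p == g)
fp = sum(1 for p, g in zip(pred_labels, gold_labels) if p != 'O' and p != g)
fn = sum(1 for p, g in zip(pred_labels, gold_labels) if g != 'O' and p != g)
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(precision, recall, f_score)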
neto71/courses-1
lesson1.ipynb
apache-2.0
%matplotlib inline """ Explanation: Using Convolutional Neural Networks Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning. Introduction to this week's task: 'Dogs vs Cats' We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as at 2013! Basic setup There isn't too much to do to get started - just a few simple configuration steps. This shows plots in the web page itself - we always wants to use this when using jupyter notebook: End of explanation """ path = "data/dogscats/" #path = "data/dogscats/sample" """ Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.) End of explanation """ from __future__ import division,print_function import os, json from glob import glob import numpy as np np.set_printoptions(precision=4, linewidth=100) from matplotlib import pyplot as plt """ Explanation: A few basic libraries that we'll need for the initial exercises: End of explanation """ import utils; reload(utils) from utils import plots """ Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them. End of explanation """ # As large as you can, but no larger than 64 is recommended. # If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this. batch_size=64 # Import our class, and instantiate from vgg16 import Vgg16 vgg = Vgg16() # Grab a few images at a time for training and validation. # NB: They must be in subdirectories named based on their category batches = vgg.get_batches(path+'train', batch_size=batch_size) val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2) vgg.finetune(batches) vgg.fit(batches, val_batches, nb_epoch=1) """ Explanation: Use a pretrained VGG model with our Vgg16 class Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy. We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward. The punchline: state of the art custom model in 7 lines of code Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work. End of explanation """ vgg = Vgg16() """ Explanation: The code above will work for any image recognition task, with any number of categories! 
All you have to do is to put your images into one folder per category, and run the code above. Let's take a look at how this works, step by step... Use Vgg16 for basic image recognition Let's start off by using the Vgg16 class to recognise the main imagenet category for each image. We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step. First, create a Vgg16 object: End of explanation """ batches = vgg.get_batches(path+'train', batch_size=4) """ Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder. Let's grab batches of data from our training folder: End of explanation """ imgs,labels = next(batches) """ Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.) Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels. End of explanation """ plots(imgs, titles=labels) """ Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding. The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1. End of explanation """ vgg.predict(imgs, True) """ Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction. End of explanation """ vgg.classes[:4] """ Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four: End of explanation """ batch_size=64 batches = vgg.get_batches(path+'train', batch_size=batch_size) val_batches = vgg.get_batches(path+'valid', batch_size=batch_size) """ Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.) Use our Vgg16 class to finetune a Dogs vs Cats model To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided. However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. 
Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune(). We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory. End of explanation """ vgg.finetune(batches) """ Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'. End of explanation """ vgg.fit(batches, val_batches, nb_epoch=1) """ Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.) End of explanation """ from numpy.random import random, permutation from scipy import misc, ndimage from scipy.ndimage.interpolation import zoom import keras from keras import backend as K from keras.utils.data_utils import get_file from keras.models import Sequential, Model from keras.layers.core import Flatten, Dense, Dropout, Lambda from keras.layers import Input from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D from keras.optimizers import SGD, RMSprop from keras.preprocessing import image """ Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth. Next up, we'll dig one level deeper to see what's going on in the Vgg16 class. Create a VGG model from scratch in Keras For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes. Model setup We need to import all the modules we'll be using from numpy, scipy, and keras: End of explanation """ FILES_PATH = 'http://www.platform.ai/models/'; CLASS_FILE='imagenet_class_index.json' # Keras' get_file() is a handy function that downloads files, and caches them for re-use later fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models') with open(fpath) as f: class_dict = json.load(f) # Convert dictionary with string indexes into an array classes = [class_dict[str(i)][1] for i in range(len(class_dict))] """ Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later. End of explanation """ classes[:5] """ Explanation: Here's a few examples of the categories we just imported: End of explanation """ def ConvBlock(layers, model, filters): for i in range(layers): model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(filters, 3, 3, activation='relu')) model.add(MaxPooling2D((2,2), strides=(2,2))) """ Explanation: Model creation Creating the model involves creating the model architecture, and then loading the model weights into that architecture. 
We will start by defining the basic pieces of the VGG architecture. VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition: End of explanation """ def FCBlock(model): model.add(Dense(4096, activation='relu')) model.add(Dropout(0.5)) """ Explanation: ...and here's the fully-connected definition. End of explanation """ # Mean of each channel as provided by VGG researchers vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1)) def vgg_preprocess(x): x = x - vgg_mean # subtract mean return x[:, ::-1] # reverse axis bgr->rgb """ Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software that expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model: End of explanation """ def VGG_16(): model = Sequential() model.add(Lambda(vgg_preprocess, input_shape=(3,224,224))) ConvBlock(2, model, 64) ConvBlock(2, model, 128) ConvBlock(3, model, 256) ConvBlock(3, model, 512) ConvBlock(3, model, 512) model.add(Flatten()) FCBlock(model) FCBlock(model) model.add(Dense(1000, activation='softmax')) return model """ Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined! End of explanation """ model = VGG_16() """ Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that: Convolution layers are for finding patterns in images Dense (fully connected) layers are for combining patterns across an image Now that we've defined the architecture, we can create the model like any python object: End of explanation """ fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models') model.load_weights(fpath) """ Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem. Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here. End of explanation """ batch_size = 4 """ Explanation: Getting imagenet predictions The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them. End of explanation """ def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=batch_size, class_mode='categorical'): return gen.flow_from_directory(path+dirname, target_size=(224,224), class_mode=class_mode, shuffle=shuffle, batch_size=batch_size) """ Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. 
We use this little wrapper to define some helpful defaults appropriate for imagenet data: End of explanation """ batches = get_batches('train', batch_size=batch_size) val_batches = get_batches('valid', batch_size=batch_size) imgs,labels = next(batches) # This shows the 'ground truth' plots(imgs, titles=labels) """ Explanation: From here we can use exactly the same steps as before to look at predictions from the model. End of explanation """ def pred_batch(imgs): preds = model.predict(imgs) idxs = np.argmax(preds, axis=1) print('Shape: {}'.format(preds.shape)) print('First 5 classes: {}'.format(classes[:5])) print('First 5 probabilities: {}\n'.format(preds[0, :5])) print('Predictions prob/class: ') for i in range(len(idxs)): idx = idxs[i] print (' {:.4f}/{}'.format(preds[i, idx], classes[idx])) pred_batch(imgs) """ Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label. End of explanation """
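As a small extension of the np.argmax() idea (added here, not part of the original lesson), you can also inspect the top few imagenet guesses per image by sorting the probabilities, which is often more informative than the single best class. A minimal sketch, assuming model, imgs and classes are still defined from the cells above:

import numpy as np

preds = model.predict(imgs)              # shape: (n_images, 1000)
top_k = 5
for i, probs in enumerate(preds):
    best = np.argsort(probs)[::-1][:top_k]   # indices of the k largest probabilities
    print('image {}:'.format(i))
    for idx in best:
        print('  {:.4f}  {}'.format(probs[idx], classes[idx]))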
Neuroglycerin/neukrill-net-work
notebooks/model_run_and_result_analyses/Revisiting alexnet based experiment with 64 inputs (small).ipynb
mit
tr = np.array(model.monitor.channels['valid_y_y_1_nll'].time_record) / 3600. fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(111) ax1.plot(model.monitor.channels['valid_y_y_1_nll'].val_record) ax1.plot(model.monitor.channels['train_y_y_1_nll'].val_record) ax1.plot(model_no_mom.monitor.channels['valid_y_y_1_nll'].val_record) ax1.plot(model_no_mom.monitor.channels['train_y_y_1_nll'].val_record) ax1.set_xlabel('Epochs') ax1.legend(['Valid', 'Train', 'Valid (no mom.)', 'Train (no mom.)']) ax1.set_ylabel('NLL') ax1.set_ylim(0., 5.) ax1.grid(True) ax2 = ax1.twiny() ax2.set_xticks(np.arange(0,tr.shape[0],20)) ax2.set_xticklabels(['{0:.2f}'.format(t) for t in tr[::20]]) ax2.set_xlabel('Hours') plt.plot(model.monitor.channels['train_term_1_l1_penalty'].val_record) plt.plot(model.monitor.channels['train_term_2_weight_decay'].val_record) pv = get_weights_report(model=model) img = pv.get_img() img = img.resize((4*img.size[0], 4*img.size[1])) img_data = io.BytesIO() img.save(img_data, format='png') display(Image(data=img_data.getvalue(), format='png')) plt.plot(model.monitor.channels['learning_rate'].val_record) """ Explanation: Plot train and valid set NLL End of explanation """ h1_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h1_W_kernel_norm_mean'].val_record]) h1_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h1_kernel_norms_mean'].val_record]) plt.plot(h1_W_norms / h1_W_up_norms) #plt.ylim(0,1000) plt.show() plt.plot(model.monitor.channels['valid_h1_kernel_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h1_kernel_norms_max'].val_record) h2_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h2_W_kernel_norm_mean'].val_record]) h2_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h2_kernel_norms_mean'].val_record]) plt.plot(h2_W_norms / h2_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_h2_kernel_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h2_kernel_norms_max'].val_record) h3_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h3_W_kernel_norm_mean'].val_record]) h3_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h3_kernel_norms_mean'].val_record]) plt.plot(h3_W_norms / h3_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_h3_kernel_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h3_kernel_norms_max'].val_record) h4_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h4_W_kernel_norm_mean'].val_record]) h4_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h4_kernel_norms_mean'].val_record]) plt.plot(h4_W_norms / h4_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_h4_kernel_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h4_kernel_norms_max'].val_record) h5_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h5_W_kernel_norm_mean'].val_record]) h5_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h5_kernel_norms_mean'].val_record]) plt.plot(h5_W_norms / h5_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_h5_kernel_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h5_kernel_norms_max'].val_record) h6_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h6_W_col_norm_mean'].val_record]) h6_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h6_col_norms_mean'].val_record]) plt.plot(h6_W_norms / 
h6_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_h6_col_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_h6_col_norms_max'].val_record) y_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_softmax_W_col_norm_mean'].val_record]) y_W_norms = np.array([float(v) for v in model.monitor.channels['valid_y_y_1_col_norms_mean'].val_record]) plt.plot(y_W_norms / y_W_up_norms) plt.show() plt.plot(model.monitor.channels['valid_y_y_1_col_norms_mean'].val_record) plt.plot(model.monitor.channels['valid_y_y_1_col_norms_max'].val_record) """ Explanation: Plot ratio of update norms to parameter norms across epochs for different layers End of explanation """
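The per-layer cells above repeat the same three lines for each layer; a small helper can produce the same plots in a loop. This is a sketch rather than part of the original notebook: the channel names are copied from the cells above, but it is worth checking them against model.monitor.channels.keys() for your own run, and it assumes np, plt and model are already in scope.

import numpy as np
from matplotlib import pyplot as plt

def plot_norm_ratio(model, param_channel, update_channel):
    """Plot the ratio of parameter norms to update norms across epochs."""
    norms = np.array([float(v) for v in model.monitor.channels[param_channel].val_record])
    up_norms = np.array([float(v) for v in model.monitor.channels[update_channel].val_record])
    plt.plot(norms / up_norms)
    plt.show()

# convolutional layers h1-h5 use kernel norms; h6 and the softmax use column norms
for i in range(1, 6):
    plot_norm_ratio(model,
                    'valid_h{0}_kernel_norms_mean'.format(i),
                    'mean_update_h{0}_W_kernel_norm_mean'.format(i))
plot_norm_ratio(model, 'valid_h6_col_norms_mean', 'mean_update_h6_W_col_norm_mean')
plot_norm_ratio(model, 'valid_y_y_1_col_norms_mean', 'mean_update_softmax_W_col_norm_mean')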
teresaborcuch/teresaborcuch.github.io
notebooks/second_blog_post.ipynb
mit
from articledata import * data = ArticleData().call() import pickle import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from datetime import datetime, timedelta import numpy as np data = pd.read_pickle('/Users/teresaborcuch/capstone_project/notebooks/pickled_data.pkl') data.shape data.head(1) """ Explanation: EDA: Sentiment Analysis By now, I've amassed a corpus of articles from three sources: The New York Times (1,425 articles), The Washington Post (447 articles), and FoxNews.com (309 articles). I'm aiming to have at least 1,000 articles from each source, but I'll start examining trends in the data I currently have. Sentiment analysis is a natural language processing technique that seeks to identify the polarity of written text - whether the document expresses positive, negative, or neutral feelings. This technique is commonly applied to text that may express strong opinions, such as reviews or social media posts, but I'm going to apply it to news articles and see if there are any differences in the sentiments expressed by different publications or in different sections. The goal for this analysis is to derive scores for each article's body and title based on the positivity or negativity of the words they contain, and see if these score might be useful features in predicting what section the article is from. Sentiment scores range from 1 (very positive) to -1 (very negative), and I'm expecting a relatively small range for articles in this corpus, since news is presumably objective. SentiWordNet is a resource that maps tens of thousands of English words to a sentiment score. Since language is complex, and the meaning of a given word can vary significantly depending on the context in which it's used, these scores are imperfect, but they'll provide a general picture of the sentiment behind each article. ArticleData Class I've been scraping each of my three sources at least twice a day to get fresh news and storing articles in a local postgres database. To facilitate the process of retrieving data and automate some of the cleaning and formatting, I've created a service class ArticleData. Objects of this class have a call method that creates a dataframe of article titles, dates, bodies, sections, and sources. Any articles that have a body of under 200 characters in length are dropped, all dates converted to datetime format, and the various sections from each source are condensed into business, world, politics, science/health, technology, sports, education, entertainment, or opinion. Sentiment Scores Since I'll be looking at sentiment scores, ArticleData objects also calculate scores for article bodies and titles, and store them in columns in the dataframes they return when called. I've borrowed the compute_score function from this Github gist and written it into the call method. This function works by using NLTK's WordNetLemmatizer to lemmatize each word in a text, and the PerceptronTagger to tag the part of speech of each word in the text as a noun, verb, adjective, or adverb. This helps SentiwordNet to assign the word to a synset, or synonym set, for which it can assign a positive or negative value. Since a word can belong to multiple synsets, the function selects the most common one. 
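For readers who want to see the general shape of such a scorer, here is a heavily simplified sketch of the idea. It is not the gist's compute_score itself: the POS mapping, the handling of missing synsets and the plain averaging below are my own simplifications, and it requires the NLTK wordnet and sentiwordnet corpora to be downloaded.

from nltk import pos_tag, word_tokenize
from nltk.corpus import sentiwordnet as swn, wordnet as wn
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
penn_to_wn = {'J': wn.ADJ, 'N': wn.NOUN, 'R': wn.ADV, 'V': wn.VERB}

def simple_score(text):
    scores = []
    for word, tag in pos_tag(word_tokenize(text)):
        wn_pos = penn_to_wn.get(tag[:1])
        if wn_pos is None:
            continue
        lemma = lemmatizer.lemmatize(word.lower(), pos=wn_pos)
        synsets = list(swn.senti_synsets(lemma, wn_pos))
        if not synsets:
            continue
        # take the first (most common) sense, as described above
        s = synsets[0]
        scores.append(s.pos_score() - s.neg_score())
    return sum(scores) / len(scores) if scores else 0.0

print(simple_score("The markets rallied on surprisingly good news."))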
End of explanation """ plt.scatter(data['SA_title'], data['SA_body']) plt.xlabel('Sentiment Scores of Article Titles') plt.ylabel('Sentiment Scores of Article Bodies') plt.xlim(-0.35, 0.35) plt.ylim(-0.35, 0.35) plt.show() """ Explanation: Trends in Sentiment Scores End of explanation """ mask1 = (data['source'] == 'Fox') mask2 = (data['source'] == 'WP') mask3 = (data['source'] == 'NYT') score_dict = {'Fox': data[mask1]['SA_body'], 'WP': data[mask2]['SA_body'], 'NYT' : data[mask3]['SA_body']} sns.set_style("whitegrid", {'axes.grid' : False}) fig, ax = plt.subplots(figsize = (10,6)) for x in score_dict.keys(): ax = sns.distplot(score_dict[x], kde = False, label = x) ax.set_xlabel("Distribution of Sentiment Scores by Publication") ax.legend() ax.set_xlim(-0.1, 0.1) plt.show() """ Explanation: This plot shows the relationship between title scores and body scores for all articles. Interestingly, there seems to be little relationship between the sentiment score of the title and that of the body. The other feature of note is that article titles have a much wider score range than the bodies. This may be because the compute_score function averages the score of each word over the entire length of the document. Since article bodies contain many more words, and therefore more neutral words, their scores will be lower. End of explanation """ pd.DataFrame(data.pivot_table( values = ['SA_title', 'SA_body'], index = 'condensed_section', aggfunc =np.mean)).sort_values('SA_body', ascending = False) """ Explanation: This histogram shows the distribution of article bodies' sentiment scores for all three publications. As I suspected, they are normally distributed tightly around zero. End of explanation """ mask1 = (data['condensed_section'] == 'opinion') mask2 = (data['condensed_section'] == 'entertainment') mask3 = (data['condensed_section'] == 'world') score_dict = {'opinion': data[mask1]['SA_body'], 'entertainment': data[mask2]['SA_body'], 'world' : data[mask3]['SA_body']} sns.set_style("whitegrid", {'axes.grid' : False}) fig, ax = plt.subplots(figsize = (10,6)) for x in score_dict.keys(): ax = sns.distplot(score_dict[x], kde = False, label = x) ax.set_xlabel("Distribution of Sentiment Scores by Section") ax.legend() ax.set_xlim(-0.1, 0.1) plt.show() """ Explanation: Let's see which sections are the most positive and negative. This table contains the average sentiment scores of bodies and titles for each section, sorted by highest-scoring body to lowest. The averages are clustered around zero, but entertainment and education have the most positive content in their article bodies, and articles from the world news setion have the most negative. 
End of explanation """ # create dictionaries for each publication nyt_trump = evaluate_topic(data = data, section = 'opinion', source = 'NYT', topic = 'Trump') fox_trump = evaluate_topic(data = data, section = 'opinion', source = 'Fox', topic = 'Trump') wp_trump = evaluate_topic(data = data, section = 'opinion', source = 'WP', topic = 'Trump') # plot dictionaries sns.set_style("whitegrid", {'axes.grid' : False}) fig = plt.figure(figsize = (12, 5)) count = 1 label_dict = {1: "NYT", 2: "Fox", 3: "Washington Post"} for score_dict in [nyt_trump, fox_trump, wp_trump]: ax = fig.add_subplot(1, 3, count) for key in score_dict.keys(): ax.hist(score_dict[key], label = key) ax.set_xlabel(label_dict[count]) ax.legend() ax.set_xlim(-0.1, 0.1) count +=1 plt.suptitle('Sentiment Scores for Opinion Articles Mentioning "Trump"') plt.show() """ Explanation: This histogram shows the distribution of scores for the article bodies from three sections: opinion, world, and entertainment. The entertainment section has more articles scoring above zero than the other two sections, and the world news section has fewer. This makes sense, since the entertainment section is composed of stores about "lighter" topics such as the arts, dining, or travel, and the world news section might be a bit "heavier". It also seems like the opinion section has a wider range of scores than the other two. Evaluating Sentiment For Particular Topics It might be interesting to investigate whether certain publications differ in the sentiment of their reporting on particular topics, or whether the sentiment for a particular topic varies among sections of the same publication. I've written a function that aggregates the sentiment scores for articles from a particular publication or section into a dictionary with two keys: topic and non-topic. These dictionaries are readily plottable, and I can compare among sections and sources. End of explanation """ # Compare sentiment scores for opinion vs non-opinion pieces that mention education ed_dict = evaluate_topic(data = data, section = 'opinion', topic = 'education') for key in ed_dict.keys(): sns.distplot(ed_dict[key], kde = False, label = key) plt.legend() plt.show() print 'Mean Score for Opinion Pieces Mentioning "education": ', np.mean(ed_dict['topic']) print 'Mean Score for Opinion Pieces Not Mentioning "education": ', np.mean(ed_dict['nontopic']) """ Explanation: The obvious observation from these graphs is that more opinion pieces mention Trump than not for all three publications, and the sentiment of the Trump-related articles spans a wider range than non-Trump-related articles. End of explanation """ data.head(1) class EvaluateTime(): def __init__(self, data = None, section = None, source = None, topic = None, date = None): self.data = data self.section = section self.source = source self.topic = topic self.date = date def call(self): #self.plot_date_dict, self.range_date_dict = self.make_dict() return self def make_dict(self): # define masks section_mask = (self.data['condensed_section'] == self.section) source_mask = (self.data['source'] == self.source) date_mask = (self.data['date'] > self.date) # initialize lists for plot_date_dict topic_scores = [] dates = [] # initialize other dict range_date_dict = {} if not self.date: print "Please select a start date." 
# make plot_date_dict from appropriate subset of data else: if self.section and self.source: masked_data = self.data[section_mask & source_mask & date_mask] elif self.section and (not self.source): masked_data = self.data[section_mask & date_mask] elif self.source and (not self.section): masked_data = self.data[source_mask & date_mask] else: masked_data = self.data[date_mask] for i, row in masked_data.iterrows(): if self.topic in row[2]: topic_scores.append(row[6]) dates.append(row[1]) # add to range_date_dict where keys are the dates and the vales are a list of scores if row[1] not in range_date_dict.keys(): range_date_dict[row[1]] = [row[6]] elif row[1] in range_date_dict.keys(): (range_date_dict[row[1]]).append(row[6]) #plot_date_dict = {'date': dates, 'score': topic_scores} return range_date_dict #plot_date_dict, def plot_time(self): x = self.range_date_dict.keys() x.sort() ordered_x = [] y = [] for val in x: ordered_x.append(val) values = self.range_date_dict[val] mean = np.mean(values) y.append(mean) # define upper and lower boundaries for error bars upper_bounds = [max(self.range_date_dict[x]) for x in ordered_x] lower_bounds = [min(self.range_date_dict[x]) for x in ordered_x] # define distance for upper error bar y_upper = zip(y, upper_bounds) upper_error = [abs(pair[0] - pair[1]) for pair in y_upper] # define distance for lower error bar y_lower = zip(y, lower_bounds) lower_error = [abs(pair[0] - pair[1]) for pair in y_lower] asymmetric_error = [lower_error, upper_error] plt.plot(ordered_x, y, c = 'r', marker = 'o') plt.errorbar(ordered_x, y, yerr = asymmetric_error, ecolor = 'r', capthick = 1) plt.xlim(min(ordered_x) + timedelta(days = -1), max(ordered_x) + timedelta(days = 1)) plt.xticks(rotation = 70) plt.show() et = EvaluateTime(data = data, source = 'Fox', section = 'politics', topic = 'Trump', date = datetime(2017, 1, 24)).call() et.plot_time() et = EvaluateTime(data = data, source = 'NYT', section = 'politics', topic = 'Trump', date = datetime(2017, 1, 24)).call() et.plot_time() # make it so plot_time returns the graph as an object so that it can be manipulatedimport matplotlib.pyplot as plt a = [1,3,5,7] b = [11,-2,4,19] plt.plot(a,b) c = [1,3,2,1] plt.errorbar(a,b,yerr=c, linestyle="None") plt.show() """ Explanation: Here, I'm comparing opinion pieces from all three publications that mention education to those that don't. Opinion articles about education have a higher rating on average than those that don't, across all publications. Evaluating Fluctuations in Sentiment Over Time End of explanation """
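Regarding the note above about having plot_time hand the graph back as an object: one way to sketch that (my own variant, not the post's final code) is to build the plot on an explicit Figure/Axes pair and return them instead of calling plt.show(). Error bars are omitted for brevity, and it assumes range_date_dict has already been populated by call()/make_dict().

def plot_time_return_fig(self):
    # intended to live on EvaluateTime as a drop-in alternative to plot_time
    fig, ax = plt.subplots()
    ordered_x = sorted(self.range_date_dict.keys())
    y = [np.mean(self.range_date_dict[d]) for d in ordered_x]
    ax.plot(ordered_x, y, c='r', marker='o')
    ax.set_xlim(min(ordered_x) + timedelta(days=-1), max(ordered_x) + timedelta(days=1))
    # the caller can keep customising, e.g. fig, ax = et.plot_time_return_fig(); ax.set_title('Trump, Fox politics')
    return fig, ax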
Diyago/Machine-Learning-scripts
statistics/Биномиальный критерий для доли stat.binomial_test.ipynb
apache-2.0
import numpy as np from scipy import stats %pylab inline """ Explanation: Binomial test for a proportion End of explanation """ n = 16 n_samples = 1000 samples = np.random.randint(2, size = (n_samples, n)) t_stat = map(sum, samples) values = list(t_stat) pylab.hist(values, bins = 16, color = 'b', range = (0, 16), label = 't_stat') pylab.legend() """ Explanation: Shaken, not stirred James Bond says that he prefers his martini mixed, not shaken. Let us run a blind test: n times we offer him a pair of drinks and record which of the two he prefers: the sample is a binary vector of length $n$, where 1 means James Bond preferred the mixed (stirred) drink and 0 the shaken one; the hypothesis $H_0$ is that James Bond cannot tell the two kinds of drink apart and chooses at random; the statistic $t$ is the number of ones in the sample. End of explanation """ stats.binom_test(12, 16, 0.5, alternative = 'two-sided') stats.binom_test(13, 16, 0.5, alternative = 'two-sided') """ Explanation: The null distribution of the statistic is binomial, $Bin(n, 0.5)$ Two-sided alternative: hypothesis $H_1$ is that James Bond prefers one particular kind of martini. End of explanation """ stats.binom_test(12, 16, 0.5, alternative = 'greater') stats.binom_test(11, 16, 0.5, alternative = 'greater') """ Explanation: One-sided alternative: hypothesis $H_1$ is that James Bond prefers the mixed (stirred) drink. End of explanation """
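As a quick cross-check of what binom_test reports (a small sketch added here, not part of the original notebook): because the null distribution Bin(16, 0.5) is symmetric, the two-sided p-value for 12 successes is simply twice the upper-tail probability.

from scipy import stats

p_upper = stats.binom.sf(11, 16, 0.5)                    # P(T >= 12) under H0
print('one-sided p-value: {:.6f}'.format(p_upper))
print('two-sided p-value: {:.6f}'.format(2 * p_upper))   # should match binom_test(12, 16, 0.5)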
quantopian/research_public
notebooks/lectures/Means/notebook.ipynb
apache-2.0
# Two useful statistical libraries import scipy.stats as stats import numpy as np # We'll use these two data sets as examples x1 = [1, 2, 2, 3, 4, 5, 5, 7] x2 = x1 + [100] print 'Mean of x1:', sum(x1), '/', len(x1), '=', np.mean(x1) print 'Mean of x2:', sum(x2), '/', len(x2), '=', np.mean(x2) """ Explanation: Measures of Central Tendency By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, and Delaney Mackenzie. Part of the Quantopian Lecture Series: www.quantopian.com/lectures github.com/quantopian/research_public In this notebook we will discuss ways to summarize a set of data using a single number. The goal is to capture information about the distribution of data. Arithmetic mean The arithmetic mean is used very frequently to summarize numerical data, and is usually the one assumed to be meant by the word "average." It is defined as the sum of the observations divided by the number of observations: $$\mu = \frac{\sum_{i=1}^N X_i}{N}$$ where $X_1, X_2, \ldots , X_N$ are our observations. End of explanation """ print 'Median of x1:', np.median(x1) print 'Median of x2:', np.median(x2) """ Explanation: We can also define a <i>weighted</i> arithmetic mean, which is useful for explicitly specifying the number of times each observation should be counted. For instance, in computing the average value of a portfolio, it is more convenient to say that 70% of your stocks are of type X rather than making a list of every share you hold. The weighted arithmetic mean is defined as $$\sum_{i=1}^n w_i X_i $$ where $\sum_{i=1}^n w_i = 1$. In the usual arithmetic mean, we have $w_i = 1/n$ for all $i$. Median The median of a set of data is the number which appears in the middle of the list when it is sorted in increasing or decreasing order. When we have an odd number $n$ of data points, this is simply the value in position $(n+1)/2$. When we have an even number of data points, the list splits in half and there is no item in the middle; so we define the median as the average of the values in positions $n/2$ and $(n+2)/2$. The median is less affected by extreme values in the data than the arithmetic mean. It tells us the value that splits the data set in half, but not how much smaller or larger the other values are. End of explanation """ # Scipy has a built-in mode function, but it will return exactly one value # even if two values occur the same number of times, or if no value appears more than once print 'One mode of x1:', stats.mode(x1)[0][0] # So we will write our own def mode(l): # Count the number of times each element appears in the list counts = {} for e in l: if e in counts: counts[e] += 1 else: counts[e] = 1 # Return the elements that appear the most times maxcount = 0 modes = {} for (key, value) in counts.items(): if value > maxcount: maxcount = value modes = {key} elif value == maxcount: modes.add(key) if maxcount > 1 or len(l) == 1: return list(modes) return 'No mode' print 'All of the modes of x1:', mode(x1) """ Explanation: Mode The mode is the most frequently occuring value in a data set. It can be applied to non-numerical data, unlike the mean and the median. One situation in which it is useful is for data whose possible values are independent. For example, in the outcomes of a weighted die, coming up 6 often does not mean it is likely to come up 5; so knowing that the data set has a mode of 6 is more useful than knowing it has a mean of 4.5. 
End of explanation """ # Get return data for an asset and compute the mode of the data set start = '2014-01-01' end = '2015-01-01' pricing = get_pricing('SPY', fields='price', start_date=start, end_date=end) returns = pricing.pct_change()[1:] print 'Mode of returns:', mode(returns) # Since all of the returns are distinct, we use a frequency distribution to get an alternative mode. # np.histogram returns the frequency distribution over the bins as well as the endpoints of the bins hist, bins = np.histogram(returns, 20) # Break data up into 20 bins maxfreq = max(hist) # Find all of the bins that are hit with frequency maxfreq, then print the intervals corresponding to them print 'Mode of bins:', [(bins[i], bins[i+1]) for i, j in enumerate(hist) if j == maxfreq] """ Explanation: For data that can take on many different values, such as returns data, there may not be any values that appear more than once. In this case we can bin values, like we do when constructing a histogram, and then find the mode of the data set where each value is replaced with the name of its bin. That is, we find which bin elements fall into most often. End of explanation """ # Use scipy's gmean function to compute the geometric mean print 'Geometric mean of x1:', stats.gmean(x1) print 'Geometric mean of x2:', stats.gmean(x2) """ Explanation: Geometric mean While the arithmetic mean averages using addition, the geometric mean uses multiplication: $$ G = \sqrt[n]{X_1X_1\ldots X_n} $$ for observations $X_i \geq 0$. We can also rewrite it as an arithmetic mean using logarithms: $$ \ln G = \frac{\sum_{i=1}^n \ln X_i}{n} $$ The geometric mean is always less than or equal to the arithmetic mean (when working with nonnegative observations), with equality only when all of the observations are the same. End of explanation """ # Add 1 to every value in the returns array and then compute R_G ratios = returns + np.ones(len(returns)) R_G = stats.gmean(ratios) - 1 print 'Geometric mean of returns:', R_G """ Explanation: What if we want to compute the geometric mean when we have negative observations? This problem is easy to solve in the case of asset returns, where our values are always at least $-1$. We can add 1 to a return $R_t$ to get $1 + R_t$, which is the ratio of the price of the asset for two consecutive periods (as opposed to the percent change between the prices, $R_t$). This quantity will always be nonnegative. So we can compute the geometric mean return, $$ R_G = \sqrt[T]{(1 + R_1)\ldots (1 + R_T)} - 1$$ End of explanation """ T = len(returns) init_price = pricing[0] final_price = pricing[T] print 'Initial price:', init_price print 'Final price:', final_price print 'Final price as computed with R_G:', init_price*(1 + R_G)**T """ Explanation: The geometric mean is defined so that if the rate of return over the whole time period were constant and equal to $R_G$, the final price of the security would be the same as in the case of returns $R_1, \ldots, R_T$. End of explanation """ print 'Harmonic mean of x1:', stats.hmean(x1) print 'Harmonic mean of x2:', stats.hmean(x2) """ Explanation: Harmonic mean The harmonic mean is less commonly used than the other types of means. It is defined as $$ H = \frac{n}{\sum_{i=1}^n \frac{1}{X_i}} $$ As with the geometric mean, we can rewrite the harmonic mean to look like an arithmetic mean. 
The reciprocal of the harmonic mean is the arithmetic mean of the reciprocals of the observations:
$$ \frac{1}{H} = \frac{\sum_{i=1}^n \frac{1}{X_i}}{n} $$
For positive numbers $X_i$ (the reciprocals require $X_i > 0$), the harmonic mean is always at most the geometric mean, which in turn is at most the arithmetic mean; all three are equal only when every observation is the same.
End of explanation
"""
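One quantity defined earlier in this notebook that never got its own code cell is the weighted arithmetic mean. A minimal sketch (the weights and values below are made up purely for illustration):
# Hypothetical portfolio: 70% of the value in an asset returning 5%, 30% in an asset returning 10%
weights = np.array([0.7, 0.3])
asset_returns = np.array([0.05, 0.10])
weighted_mean = np.sum(weights * asset_returns)  # equivalently: np.average(asset_returns, weights=weights)
print('Weighted arithmetic mean: {}'.format(weighted_mean))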
ShinjiKatoA16/UCSY-sw-eng
python-1.ipynb
mit
print(1)
print('hello') # please add something here ...
"""
Explanation: Fundamentals of Python
Object and Variable
Everything (including functions) is an object in Python. Each object has a type and, optionally, its own methods.
Variables are not declared in Python. A variable does not have a type; it refers to an object.
Clause (Block)
A clause (block) in Python is grouped by indentation (in C, a clause is enclosed in braces {}). Lines in the same clause need to have the same indent (number of spaces). A tab character can be used as an indent, but this is not recommended. PEP8 (the style guideline of Python) defines 4 spaces as the recommended indent.
Print function
The print() function takes any type of object: Integer, String, Float, List etc. The return value of print() is None.
End of explanation
"""
x = int(input('Please enter your score(0-100): '))
if x >=80:
    print("You gained A. Congratulations!!!")
elif x >=70:
    print("You are B. OK")
elif x >=60:
    print("You are C")
    print('Please try to study harder')
else:
    print("Sorry. You failed.")
"""
Explanation: if statement
if and : (colon) are the keywords used to create an if statement. A condition needs to be specified between if and :. Statements in the same clause need to have the same indent.
if some-condition:
    statement-1
    statement-2
    ...
elif some-condition:
    statement-3
else:
    statement-4
    statement-5
The syntax of the if statement in Python is similar to C, except that {} are not used to define a clause and elif is used instead of else if. The elif and else clauses are optional, and multiple elif clauses can be defined.
End of explanation
"""
n = 5
while n > 0:
    print(n)
    n -= 1

p = int(input('Please input number: '))
x = [2,3,5,7,11, None]
i = 0
while x[i] != None:
    if x[i] == p:
        print(p, 'is found')
        break
    i += 1
else:   # while loop exited other than break
    print(p, 'not found')
"""
Explanation: while statement
The syntax of the while statement in Python is similar to C. Python's while statement can optionally have an else clause, which is executed if the while loop exits other than by break.
End of explanation
"""
list_a = [1,2,3, 'abc']
tuple_a = (4, 'a', 1.23)  # List and Tuple can contain different types of objects as elements
string_a = 'this is a string_a'

for x in list_a:
    print(x)
print()

for x in tuple_a:
    print(x)
print()

for x in string_a:
    print(x)   # each print() prints 1 character (string with length 1)
print()

for x in (1,2,3,4,5):
    print(x)
print()

for x in range(5):  # range(5) returns 0, 1, 2, 3, 4 one by one
    print(x)

p = int(input('Please input number: '))
for x in [2,3,5,7,11]:
    if x == p:
        print(p, 'is found')
        break
else:   # for loop exited without break
    print(p, 'not found')
"""
Explanation: for statement
The syntax of the for statement is different from C. Python's for statement takes an iterable object. List, Tuple and String are iterable objects. Some functions such as range() return iterable objects.
The for statement can also have an else clause, which is executed if the for loop exits without break. This feature is useful for detecting the condition that something is not found in a List etc.
End of explanation
"""
def func1():    # Function without parameter
    print('function-1')

def func2(parm1, parm2):  # 0 or more parameters can be passed
    print(parm1)
    print(parm2)

def func3(a, b):
    return a+b    # Function returns 1 or more values

func1()   # add () to call a function
func2('abc', 100)
print('value of func3', func3(1,2))  # the return value of a function can be an argument to another function
"""
Explanation: def (function definition)
A function is defined using the keyword def. Simple examples of functions are as follows. 
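One more detail worth knowing (an addition to the original examples, not part of them): parameters can be given default values and can be passed by keyword. A minimal sketch:
def greet(name, greeting='Hello'):   # greeting has a default value
    return greeting + ', ' + name

print(greet('student'))                  # uses the default -> Hello, student
print(greet('student', greeting='Hi'))   # keyword argument -> Hi, student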
End of explanation """ help(range) for i in range(??, ??): ## change ?? to proper value print(i) """ Explanation: Hands on Check the usage of range() function using help() print 1 to 10 vertically End of explanation """ help(print) # Change ?? for i in range(??, ??): print(i, ???=??) """ Explanation: Check the usage of print() function using help() print 1 to 10 horizontally End of explanation """ for row in range(??, ??): for col in range(??, ??): print(????) ???? """ Explanation: print following tables 1x1= 1 2x1= 2 3x1= 3 4x1= 4 5x1= 5 6x1= 6 7x1= 7 8x1= 8 9x1= 9 1x2= 2 2x2= 4 3x2= 6 4x2= 8 5x2=10 6x2=12 7x2=14 8x2=16 9x2=18 1x3= 3 2x3= 6 3x3= 9 4x3=12 5x3=15 6x3=18 7x3=21 8x3=24 9x3=27 1x4= 4 2x4= 8 3x4=12 4x4=16 5x4=20 6x4=24 7x4=28 8x4=32 9x4=36 1x5= 5 2x5=10 3x5=15 4x5=20 5x5=25 6x5=30 7x5=35 8x5=40 9x5=45 1x6= 6 2x6=12 3x6=18 4x6=24 5x6=30 6x6=36 7x6=42 8x6=48 9x6=54 1x7= 7 2x7=14 3x7=21 4x7=28 5x7=35 6x7=42 7x7=49 8x7=56 9x7=63 1x8= 8 2x8=16 3x8=24 4x8=32 5x8=40 6x8=48 7x8=56 8x8=64 9x8=72 1x9= 9 2x9=18 3x9=27 4x9=36 5x9=45 6x9=54 7x9=63 8x9=72 9x9=81 End of explanation """
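For reference, here is one possible way to fill in the ?? placeholders in the hands-on cells above (a sketch; other solutions are equally valid):
# Print 1 to 10 vertically
for i in range(1, 11):
    print(i)

# Print 1 to 10 horizontally: the end parameter of print() replaces the default newline
for i in range(1, 11):
    print(i, end=' ')
print()

# Multiplication table in the layout shown above
for row in range(1, 10):
    for col in range(1, 10):
        print('{}x{}={:2d}'.format(col, row, col*row), end=' ')
    print()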
peastman/deepchem
examples/tutorials/Introduction_To_Material_Science.ipynb
mit
!pip install --pre deepchem
"""
Explanation: Introduction To Material Science
Table of Contents:
Introduction
Setup
Featurizers
Crystal Featurizers
Compound Featurizers
Datasets
Predicting structural properties of a crystal
Further Reading
Introduction <a class="anchor" id="introduction"></a>
One of the most exciting recent applications of machine learning is its application to the material science domain. DeepChem helps in the development and application of machine learning to solid-state systems. As a starting point for applying machine learning to the material science domain, DeepChem provides material science datasets as part of the MoleculeNet suite of datasets, data featurizers, and implementations of popular machine learning algorithms specific to the domain. This tutorial serves as an introduction to using DeepChem for machine learning tasks in material science.
Traditionally, experimental research was used to find and characterize new materials, but such methods are strongly limited by the resources and equipment they require. Material science is one of the booming areas where machine learning is making new inroads. The discovery of new material properties holds the key to many problems, such as climate change and the development of new semiconducting materials. DeepChem acts as a toolbox for using machine learning in material science.
This tutorial can also be used in Google Colab. If you'd like to open this notebook in Colab, you can use the following link.
End of explanation
"""
!pip install pymatgen~=2020.12
!pip install matminer==0.6.5
!pip install dgl
import deepchem as dc
dc.__version__
import pymatgen as mg
import os
os.environ['DEEPCHEM_DATA_DIR'] = os.getcwd()
"""
Explanation: DeepChem for material science also requires the additional libraries pymatgen and matminer. These two libraries assist machine learning in material science. For the graph neural network models which will be used in the backend, DeepChem requires the dgl library. All of these can be installed using pip.
End of explanation
"""
# the lattice parameter of a cubic cell
a = 4.2
lattice = mg.core.Lattice.cubic(a)

# Atoms in a crystal
atomic_species = ["Cs", "Cl"]

# Coordinates of atoms in a crystal
cs_coords = [0, 0, 0]
cl_coords = [0.5, 0.5, 0.5]

structure = mg.core.Structure(lattice, atomic_species, [cs_coords, cl_coords])
structure
"""
Explanation: Featurizers <a class="anchor" id="featurizers"></a>
Material Structure Featurizers <a class="anchor" id="crystal-featurizers"></a>
Crystals are geometric structures which have to be featurized before they can be used in machine learning algorithms. DeepChem provides the following featurizers for crystals:
The SineCoulombMatrix featurizer describes a crystal by calculating its sine Coulomb matrix. It can be called using the dc.feat.SineCoulombMatrix function. [1]
The CGCNNFeaturizer calculates structure graph features of crystals. It can be called using the dc.feat.CGCNNFeaturizer function. [2]
The LCNNFeaturizer calculates the 2-D surface graph features in 6 different permutations. It can be called using the dc.feat.LCNNFeaturizer utility. [3]
[1] Faber et al. “Crystal Structure Representations for Machine Learning Models of Formation Energies”, Inter. J. Quantum Chem. 115, 16, 2015. https://arxiv.org/abs/1503.07406
[2] T. Xie and J. C. Grossman, “Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties”, Phys. Rev. Lett. 
120, 2018, https://arxiv.org/abs/1710.10324
[3] Jonathan Lym, Geun Ho Gu, Yousung Jung, and Dionisios G. Vlachos, Lattice Convolutional Neural Network Modeling of Adsorbate Coverage Effects, J. Phys. Chem. C 2019 https://pubs.acs.org/doi/10.1021/acs.jpcc.9b03370
Example: Featurizing a crystal
In this part, we will use pymatgen to represent the crystal structure of Caesium Chloride and calculate structure graph features using CGCNNFeaturizer.
The CsCl crystal is a cubic lattice with the chloride atoms lying upon the lattice points at the edges of the cube, while the caesium atoms lie in the holes in the center of the cubes. The green colored atoms are the caesium atoms in this crystal structure and the chloride atoms are the grey ones.
<img src="assets/CsCl_crystal_structure.png">
Source: Wikipedia
End of explanation
"""
featurizer = dc.feat.CGCNNFeaturizer()
features = featurizer.featurize([structure])
features[0]
"""
Explanation: In the above code sample, we first defined a cubic lattice using the cubic lattice parameter a. Then, we created a structure from the atoms in the crystal and their coordinates. A nice introduction to crystallographic coordinates can be found here.
Once a structure is defined, it can be featurized using the CGCNNFeaturizer. Featurization of a crystal with CGCNNFeaturizer returns a DeepChem GraphData object which can be used for machine learning tasks.
End of explanation
"""
comp = mg.core.Composition("Fe2O3")
featurizer = dc.feat.ElementPropertyFingerprint()
features = featurizer.featurize([comp])
features[0]
"""
Explanation: Material Composition Featurizers <a class="anchor" id="compound-featurizers"></a>
The part above discussed using DeepChem to featurize crystal structures. Here, we look at featurizing material compositions. DeepChem supports the following material composition featurizers:
The ElementPropertyFingerprint can be used to find a fingerprint of the elements in a compound based on elemental stoichiometry. It can be called using dc.feat.ElementPropertyFingerprint. [4]
The ElemNetFeaturizer returns a vector containing the fractional compositions of each element in the compound. It can be called using dc.feat.ElemNetFeaturizer. [5]
[4] Ward, L., Agrawal, A., Choudhary, A. et al. A general-purpose machine learning framework for predicting properties of inorganic materials. npj Comput Mater 2, 16028 (2016). https://doi.org/10.1038/npjcompumats.2016.28
[5] Jha, D., Ward, L., Paul, A. et al. "ElemNet: Deep Learning the Chemistry of Materials From Only Elemental Composition", Sci Rep 8, 17593 (2018). https://doi.org/10.1038/s41598-018-35934-y
Example: Featurizing a compound
In the example below, we featurize Ferric Oxide (Fe2O3) using the ElementPropertyFingerprint featurizer. The featurizer returns the compound's elemental stoichiometry properties as features.
End of explanation
"""
dataset_config = {"reload": True, "featurizer": dc.feat.CGCNNFeaturizer(), "transformers": []}
tasks, datasets, transformers = dc.molnet.load_perovskite(**dataset_config)
train_dataset, valid_dataset, test_dataset = datasets
train_dataset.get_data_shape()
"""
Explanation: Datasets <a class="anchor" id="datasets"></a>
DeepChem provides the following material property datasets as part of the MoleculeNet suite of datasets. These datasets can be used for a variety of tasks in material science, such as predicting the formation energy of a structure, the metallicity of a compound, etc. 
The Band Gap dataset contains 4604 experimentally measured band gaps for inorganic crystal structure compositions. The dataset can be loaded using the dc.molnet.load_bandgap utility.
The Perovskite dataset contains 18928 perovskite structures and their formation energies. It can be loaded using a call to dc.molnet.load_perovskite.
The Formation Energy dataset contains 132752 calculated formation energies and inorganic crystal structures from the Materials Project database. It can be loaded using a call to dc.molnet.load_mp_formation_energy.
The Metallicity dataset contains 106113 inorganic crystal structures from the Materials Project database labeled as metals or nonmetals. It can be loaded using the dc.molnet.load_mp_metallicity utility.
In the example below, we demonstrate loading the perovskite dataset and use it to predict the formation energy of new crystals. Perovskite structures are adopted by many oxides. Ideally the structure is cubic, but non-cubic variants also exist. Each datapoint in the perovskite dataset contains the lattice structure as a pymatgen.core.Structure object and the formation energy of the corresponding structure. The dataset can be used for machine learning tasks by calling the dc.molnet.load_perovskite utility. The utility takes care of loading, featurizing and splitting the dataset for machine learning tasks.
End of explanation
"""
model = dc.models.CGCNNModel(mode='regression', batch_size=32, learning_rate=0.001)
model.fit(train_dataset, nb_epoch=5)
"""
Explanation: Predicting Formation Energy <a class="anchor" id="pred-props"></a>
Along with the datasets and featurizers, DeepChem also provides implementations of various machine learning algorithms which can be used out of the box for material science applications. For predicting formation energy, we use CGCNNModel as described in the paper [1].
End of explanation
"""
metric = dc.metrics.Metric(dc.metrics.mean_squared_error)
print("Training set score:", model.evaluate(train_dataset, [metric], transformers))
print("Test set score:", model.evaluate(test_dataset, [metric], transformers))
"""
Explanation: After fitting the model, we evaluate its performance using the mean squared error metric, since this is a regression task. To select the metric, the dc.metrics.mean_squared_error function can be used, and we evaluate the model by calling model.evaluate.
End of explanation
"""
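As a possible next step (a sketch; it assumes the standard DeepChem Model API, in which a fitted model exposes a predict(dataset) method and datasets expose their labels via .y):
# Predict formation energies for the held-out crystals and compare with the true values
predictions = model.predict(test_dataset)
print('Predicted formation energy for the first 5 test crystals:', predictions[:5].flatten())
print('True formation energy for the first 5 test crystals:     ', test_dataset.y[:5].flatten())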
letsgoexploring/teaching
winter2017/econ129/python/Econ129_Class_18.ipynb
mit
# 1. Input model parameters and print parameters = pd.Series() parameters['rho'] = .75 parameters['sigma'] = 0.006 parameters['alpha'] = 0.35 parameters['delta'] = 0.025 parameters['beta'] = 0.99 print(parameters) # 2. Compute the steady state of the model directly A = 1 K = (parameters.alpha*A/(parameters.beta**-1+parameters.delta-1))**(1/(1-parameters.alpha)) C = A*K**parameters.alpha - parameters.delta Y = I = # 3. Define a function that evaluates the equilibrium conditions def equilibrium_equations(variables_forward,variables_current,parameters): # Parameters p = parameters # Variables fwd = variables_forward cur = variables_current # Resource constraint # Exogenous tfp # Euler equation # Production function # Capital evolution # Stack equilibrium conditions into a numpy array return np.array([ ]) # 4. Initialize the model model = ls.model(equations = equilibrium_equations, nstates=, varNames=[], # Any order as long as the state variables are named first shockNames=[], # Name a shock for each state variable *even if there is no corresponding shock in the model* parameters = parameters) # 5. Set the steady state of the model directly. Input vars in same order as varNames above model.set_ss([]) # 6. Find the log-linear approximation around the non-stochastic steady state and solve model.approximate_and_solve() # 7(a) Compute impulse responses and print the computed impulse responses model.impulse(T=41,t0=5,shock=None,percent=True) print(model.irs['eA'].head(10)) # 8(b) Plot the computed impulse responses to a TFP shock fig = plt.figure(figsize=(12,12)) ax1 = fig.add_subplot(3,2,1) model.irs['eA'][['y','i','k']].plot(lw=5,alpha=0.5,grid=True,ax = ax1).legend(loc='upper right',ncol=4) ax1.set_title('Output, investment, capital') ax1.set_ylabel('% dev') ax1.set_xlabel('quarters') ax2 = fig.add_subplot(3,2,2) model.irs['eA'][['a','eA']].plot(lw=5,alpha=0.5,grid=True,ax = ax2).legend(loc='upper right',ncol=2) ax2.set_title('TFP and TFP shock') ax2.set_ylabel('% dev') ax2.set_xlabel('quarters') ax3 = fig.add_subplot(3,2,3) model.irs['eA'][['y','c']].plot(lw=5,alpha=0.5,grid=True,ax = ax3).legend(loc='upper right',ncol=4) ax3.set_title('Output and consumption') ax3.set_ylabel('% dev') ax3.set_xlabel('quarters') plt.tight_layout() # 9(a) Compute stochastic simulation and print the simulated values model.stoch_sim(seed=192,covMat= [[parameters['sigma']**2,0],[0,0]]) print(model.simulated.head(10)) # 9(b) Plot the computed stochastic simulation fig = plt.figure(figsize=(12,4)) ax1 = fig.add_subplot(1,2,1) model.simulated[['k','c','y','i']].plot(lw=5,alpha=0.5,grid=True,ax = ax1).legend(loc='upper right',ncol=4) ax2 = fig.add_subplot(1,2,2) model.simulated[['eA','a']].plot(lw=5,alpha=0.5,grid=True,ax = ax2).legend(loc='upper right',ncol=2) """ Explanation: Class 18: A Centralized Real Business Cycle Model without Labor (Continued) The Model with Output and Investment Setup A representative household lives for an infinite number of periods. The expected present value of lifetime utility to the household from consuming $C_0, C_1, C_2, \ldots $ is denoted by $U_0$: \begin{align} U_0 & = \log (C_0) + \beta E_0 \log (C_1) + \beta^2 E_0 \log (C_2) + \cdots\ & = E_0\sum_{t = 0}^{\infty} \beta^t \log (C_t), \end{align} where $0<\beta<1$ is the household's subjective discount factor. $E_0$ denotes the expectation with respect to all information available as of date 0. The household enters period 0 with capital $K_0>0$. 
Production in period $t$: \begin{align} F(A_t,K_t) & = A_t K_t^{\alpha} \end{align} where TFP $A_t$ is stochastic: \begin{align} \log A_{t+1} & = \rho \log A_t + \epsilon_{t+1} \end{align} Capital depreciates at the constant rate $\delta$ per period and so the household's resource constraint in each period $t$ is: \begin{align} C_t + K_{t+1} & = A_t K_{t}^{\alpha} + (1-\delta)K_t \end{align} Define output and investment: \begin{align} Y_t & = A_t K_{t}^{\alpha} \ I_t & = K_{t+1} - (1-\delta)K_t \end{align} Optimization problem In period 0, the household solves: \begin{align} & \max_{C_0,K_1} \; E_0\sum_{t=0}^{\infty}\beta^t\log (C_t) \ & \; \; \; \; \; \; \; \; \text{s.t.} \; \; \; \; C_t + K_{t+1} = A_t K_{t}^{\alpha} + (1-\delta)K_t \end{align} which can be written as a choice of $K_1$ only: \begin{align} \max_{K_1} \; E_0\sum_{t=0}^{\infty}\beta^t\log \left( A_t K_{t}^{\alpha} + (1-\delta)K_t - K_{t+1}\right) \end{align} Equilibrium So given $K_0>0$ and $A_0$, the equilibrium paths for consumption, capital, and TFP are described described by: \begin{align} \frac{1}{C_t} & = \beta E_t \left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha - 1} + 1 - \delta}{C_{t+1}}\right]\ C_t + K_{t+1} & = A_{t} K_t^{\alpha} + (1-\delta) K_t\ Y_t & = A_t K_{t}^{\alpha} \ I_t & = K_{t+1} - (1-\delta)K_t\ \log A_{t+1} & = \rho \log A_t + \epsilon_{t+1} \end{align} Calibration For computation purposes, assume the following values for the parameters of the model: \begin{align} \beta & = 0.99\ \rho & = .75\ \sigma & = 0.006\ \alpha & = 0.35\ \delta & = 0.025 \end{align} Steady State The steady state: \begin{align} A & = 1\ K & = \left(\frac{\alpha A}{\beta^{-1} - 1 + \delta} \right)^{\frac{1}{1-\alpha}}\ C & = AK^{\alpha} - \delta K\ Y & = AK^{\alpha} \ I & = \delta K \end{align} End of explanation """ # Compute the standard deviations of Y, C, and I in model.simulated # Compute the coefficients of correlation for Y, C, and I """ Explanation: Evaluation Now we want to examine the statistical properties of the simulated model End of explanation """
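One way to complete the evaluation exercise above (a sketch; it assumes model.simulated is the pandas DataFrame produced by the stochastic simulation, with columns 'y', 'c', and 'i' as used in the plots):
# Standard deviations of output, consumption, and investment in the simulated data
print(model.simulated[['y', 'c', 'i']].std())

# Coefficients of correlation between output, consumption, and investment
print(model.simulated[['y', 'c', 'i']].corr())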
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/06_structured/4_preproc.ipynb
apache-2.0
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst pip install --user apache-beam[gcp]==2.16.0 """ Explanation: <h1> Preprocessing using Dataflow </h1> This notebook illustrates: <ol> <li> Creating datasets for Machine Learning using Dataflow </ol> <p> While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming. End of explanation """ # Ensure the right version of Tensorflow is installed. !pip freeze | grep tensorflow==2.1 import apache_beam as beam print(beam.__version__) """ Explanation: Run the command again if you are getting oauth2client error. Note: You may ignore the following responses in the cell output above: ERROR (in Red text) related to: witwidget-gpu, fairing WARNING (in Yellow text) related to: hdfscli, hdfscli-avro, pbr, fastavro, gen_client <b>Restart</b> the kernel before proceeding further. Make sure the Dataflow API is enabled by going to this link. Ensure that you've installed Beam by importing it and printing the version number. End of explanation """ # change these to try this notebook out BUCKET = 'cloud-training-demos-ml' PROJECT = 'cloud-training-demos' REGION = 'us-central1' import os os.environ['BUCKET'] = BUCKET os.environ['PROJECT'] = PROJECT os.environ['REGION'] = REGION %%bash if ! gsutil ls | grep -q gs://${BUCKET}/; then gsutil mb -l ${REGION} gs://${BUCKET} fi """ Explanation: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this. End of explanation """ # Create SQL query using natality data after the year 2000 query = """ SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth FROM publicdata.samples.natality WHERE year > 2000 """ # Call BigQuery and examine in dataframe from google.cloud import bigquery df = bigquery.Client().query(query + " LIMIT 100").to_dataframe() df.head() """ Explanation: <h2> Save the query from earlier </h2> The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that. End of explanation """ import datetime, os def to_csv(rowdict): # Pull columns from BQ and create a line import hashlib import copy CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',') # Create synthetic data where we assume that no ultrasound has been performed # and so we don't know sex of the baby. Let's assume that we can tell the difference # between single and multiple, but that the errors rates in determining exact number # is difficult in the absence of an ultrasound. 
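Before building the pipeline, here is a quick sanity check of the hash-based split on the 100-row sample loaded above (a sketch only; the real split happens inside the Dataflow pipeline below). ABS(MOD(hashmonth, 4)) < 3 keeps roughly three quarters of the month buckets for training, and because every record from the same year-month shares a hash, a given month never straddles the train/eval boundary.
# Fraction of the sampled rows whose month-hash would land in the training set
in_train = (df['hashmonth'].abs() % 4) < 3
print('Fraction that would fall in training: {:.2f}'.format(in_train.mean()))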
no_ultrasound = copy.deepcopy(rowdict) w_ultrasound = copy.deepcopy(rowdict) no_ultrasound['is_male'] = 'Unknown' if rowdict['plurality'] > 1: no_ultrasound['plurality'] = 'Multiple(2+)' else: no_ultrasound['plurality'] = 'Single(1)' # Change the plurality column to strings w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1] # Write out two rows for each input row, one with ultrasound and one without for result in [no_ultrasound, w_ultrasound]: data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS]) key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key yield str('{},{}'.format(data, key)) def preprocess(in_test_mode): import shutil, os, subprocess job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S') if in_test_mode: print('Launching local job ... hang on') OUTPUT_DIR = './preproc' shutil.rmtree(OUTPUT_DIR, ignore_errors=True) os.makedirs(OUTPUT_DIR) else: print('Launching Dataflow job {} ... hang on'.format(job_name)) OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET) try: subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split()) except: pass options = { 'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'), 'temp_location': os.path.join(OUTPUT_DIR, 'tmp'), 'job_name': job_name, 'project': PROJECT, 'region': REGION, 'teardown_policy': 'TEARDOWN_ALWAYS', 'no_save_main_session': True, 'num_workers': 4, 'max_num_workers': 5 } opts = beam.pipeline.PipelineOptions(flags = [], **options) if in_test_mode: RUNNER = 'DirectRunner' else: RUNNER = 'DataflowRunner' p = beam.Pipeline(RUNNER, options = opts) query = """ SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth FROM publicdata.samples.natality WHERE year > 2000 AND weight_pounds > 0 AND mother_age > 0 AND plurality > 0 AND gestation_weeks > 0 AND month > 0 """ if in_test_mode: query = query + ' LIMIT 100' for step in ['train', 'eval']: if step == 'train': selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query) else: selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query) (p | '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True)) | '{}_csv'.format(step) >> beam.FlatMap(to_csv) | '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step)))) ) job = p.run() if in_test_mode: job.wait_until_finish() print("Done!") preprocess(in_test_mode = False) """ Explanation: <h2> Create ML dataset using Dataflow </h2> Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files. Instead of using Beam/Dataflow, I had three other options: Use Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well! Read from BigQuery directly using TensorFlow. Use the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to "allow large results" and save the result into a CSV file on Google Cloud Storage. 
<p> However, in this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface, so I am using Cloud Dataflow for the preprocessing. Note that after you launch this, the actual processing is happening on the cloud. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 20 minutes for me. <p> If you wish to continue without doing this step, you can copy my preprocessed output: <pre> gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/ </pre> End of explanation """ %%bash gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000* """ Explanation: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step. End of explanation """