| repo_name | path | license | content |
|---|---|---|---|
nerdcommander/scientific_computing_2017
|
lesson16/Lesson16_team_imp.ipynb
|
mit
|
def factorial(n):
    """calculates n factorial"""
    print('n is ', n)
    if n == 0:
        return 1
    else:
        print('need factorial of', n-1)
        answer = factorial(n-1)
        print('factorial of ', n-1, 'was', answer)
        return answer * n

factorial(3)
"""
Explanation: Unit 2: Programming Design
Lesson 16: Recursion
Notebook Authors
(fill in your two names here)
Facilitator: (fill in name)
Spokesperson: (fill in name)
Process Analyst: (fill in name)
Quality Control: (fill in name)
If there are only three people in your group, have one person serve as both spokesperson and process analyst for the rest of this activity.
At the end of this Lesson, you will be asked to record how long each Model required for your team. The Facilitator should keep track of time for your team.
Computational Focus: Recursion
Recursion is an alternative method for writing code that repeats. Any recursive program can be written as a loop, but recursion is especially good when the problem can be solved by considering the solution to a smaller problem (similar to the idea of nested Russian Matryoshka dolls that fit inside each other):
An example of a recursive application would be one where a program tries to find a solution by generating every possible combination (such as trying every number possible on a Sudoku puzzle).
Model 1: Identifying Base Cases
When faced with a large problem to solve, we can seek to use a solution to a smaller, simpler problem. If we repeatedly decompose the original problem into smaller, simpler expressions, we will eventually identify the simplest or most basic component that can't be broken down any further. This is referred to as the base case.
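As an illustration of this idea (using a different problem than the questions below), the length of a list can be defined in terms of a slightly smaller list, with the empty list as the base case:

```python
def length(items):
    """counts the items in a list recursively"""
    if items == []:                    # base case: nothing left to count
        return 0
    else:
        return 1 + length(items[1:])   # smaller problem: the list without its first item
```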
Critical Thinking Questions
1. Consider two different ways to show how to calculate $4!$ (the product of the numbers from 1 to 4).
1a. Type out all numbers that explicitly need to be multiplied.
4! =
1b. Now type the expression using $3!$.
4! =
2. Write an expression similar to question 1b showing how each factorial can be calculated in terms of a "simpler" factorial.
2a. 3! =
2b. 2! =
2c. 1! =
2d. 100! =
3. Generalize your group’s answer to question 2 in terms of $n$ to create an equation for factorial that would be true for all factorials except the base case.
n! =
4. What would your group propose to be the base case of a factorial function? Also include your group’s justification for this answer.
5. Assume someone else has already written a Python function factorial(n) that takes n as a parameter and returns n!
5a. Copy and paste your answer to question 2d, which should include how 100! can be calculated using a “simpler” factorial.
5b. Convert your expression in question 5a to "Python" code, making the appropriate method call to this hypothetical factorial() method and "hard coding" numbers as arguments. (note: still put this "Python" in a markdown cell since it won't run anyway)
5c. Now convert your answer for question 3 to "Python" code that would calculate n! This time, you should not "hard code" numbers as arguments.
6. Assume you incorporated your group’s answer to question 5c as part of the function definition for factorial(n). How would this function call differ from function calls you have used in previous programs/notebooks?
7. Is a loop necessary to calculate 3! based on your group’s answers above? Describe/Explain your group’s reasoning.
8. What type of programming structure (sequential, branching or looping) is required to differentiate the successive function calls from the base case?
Model 2: Example Recursive Code
When a function makes a call to itself, this is referred to as recursion. To define a recursive function in Python, you should write an if-statement that checks for the base case. When the operation is not the base case, you include a call to the function you are writing.
Here is an example of a recursive function (do NOT copy and run this yet):
python
def factorial(n):
    """calculates n factorial"""
    print('n is ', n)
    if n == 0: # base case
        return 1
    else:
        print('need factorial of', n-1)
        answer = factorial(n-1) # recursive call
        print('factorial of ', n-1, 'was', answer)
        return answer * n
Critical Thinking Questions
9. Examine but do not run the code for factorial() above.
9a. Predict how many distinct calls will be made to the factorial function to calculate the factorial of 3.
9b. Identify the value of the parameter n for each of these separate calls.
Now run the two code cells below
End of explanation
"""
# run factorial with a negative number
"""
Explanation: 10. Examine the output from factorial(3) above.
10a. How many lines were printed when calculating the factorial of 3?
10b. For each printed line, identify which distinct factorial function call printed that line. In other words, which lines were printed by factorial(3), which lines were printed by factorial(2), and so on.
11a. What happens if you try to calculate the factorial of a negative number?
End of explanation
"""
# fixed factorial()
# factorial(3) test case
# factorial(-1) test case
"""
Explanation: 11b. Fix the bug in the function and test it below.
End of explanation
"""
def count_down(n):
    """ counts down to 1 """
    if n >= 1:
        print(n)
        count_down(n-1)

count_down(10)
"""
Explanation: Model 3: Writing Recursive Code
Important questions to answer before writing a recursive function:
+ How can you define the problem in terms of a smaller similar problem? In other words, how will having a solution to the smaller problem help you answer the original problem?
+ For a recursive call, how will you make the problem size smaller?
+ What is the base case, where you solve an easy problem in one step?
+ What will you do for the base case?
To avoid an infinite loop, you must be sure that each recursive call brings you closer to the base case!
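Here is a small sketch that answers all four of these questions for a different problem (reversing a string), so none of the exercises below are given away:

```python
def reverse(text):
    """reverses a string recursively"""
    if len(text) <= 1:                      # base case: a 0- or 1-character string is its own reverse
        return text
    else:
        return reverse(text[1:]) + text[0]  # smaller problem: reverse the rest, then append the first character

reverse("recursion")   # returns 'noisrucer'
```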
Critical Thinking Questions
12. Consider the factorial problem from the first two models.
12a. How was factorial defined in terms of a smaller similar problem?
12b. How was the problem size made smaller for a recursive call?
12c. What was the base case, and what did you do for the base case?
13. Now consider two different ways to show how to calculate the following sum:
$$\sum_{i=1}^{4}i$$
Look here if you're not familiar with the Sigma notation for summation.
13a. Type out all numbers that explicitly need to be summed:
$\displaystyle \sum_{i=1}^{4}i = $
13b. Write an expression showing how this sum can be calculated in terms of a “simpler” sum.
$\displaystyle \sum_{i=1}^{4}i = $
14. Now answer the same questions for summation:
14a. How could summation be defined in terms of a smaller similar problem?
14b. How can the problem size be made smaller for a recursive call?
14c. What will be your base case, and what did you do for the base case?
14d. How will you make sure each recursive call will bring you closer to the base case?
Model 4: Order of execution
Not all recursive functions need to have both an if and else statement. Sometimes you only need the if-statement, while other times you might need multiple branches (if-elif-else).
For example, run the code in the cells below
End of explanation
"""
# your count_up function
# testing count_up
"""
Explanation: Critical Thinking Questions
15. What is the base case for the count_down function, which did not require an else branch?
16. Modify the count_down function so it counts up (i.e., printing 1 to 10) instead of counting down. You should NOT modify the boolean expression in the if-statement. Instead, you should move entire lines of code (i.e., cutting-and-pasting); wholesale rewriting of lines of code should not be necessary. Ask for help if you cannot figure this out within a few minutes:
End of explanation
"""
|
mqvist/CarND-Behavioral-Cloning
|
Experiment_2.ipynb
|
mit
|
import os
from PIL import Image
def get_record_and_image(index):
record = df.iloc[index]
path = os.path.join('data', record.center)
return record, Image.open(path)
def layer_info(model):
for n, layer in enumerate(model.layers, 1):
print('Layer {:2} {:16} input shape {} output shape {}'.format(n, layer.name, layer.input_shape, layer.output_shape))
"""
Explanation: Introduction
In this notebook, I want to continue working with the model from experiment 1. The model was able to learn the steering angles for the three hand-picked images, but the question is whether it can learn to actually steer the car in the simulator's autonomous mode. Given the discussion about recovery in the project material, it is unlikely that the provided sample training data is enough to teach the model to drive, but doing a test with that data would give at least a baseline to work from.
Here is the overall plan
1. Recreate the model from experiment 1
1. Create training data using the provided sample data
1. Train the model using the whole training data and see if any learning takes place
1. If needed, tweak the model to get better training performance
1. Test the model with the simulator to see how it performs
Here are some utility functions.
End of explanation
"""
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
model = Sequential()
model.add(Convolution2D(6, 5, 5, border_mode='valid', subsample=(5, 5), input_shape=(80, 160, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(16, 5, 5, border_mode='valid', subsample=(2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(120))
model.add(Activation('relu'))
model.add(Dense(84))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('tanh'))
layer_info(model)
"""
Explanation: Step 1: Recreate the model from experiment 1
This is an exact copy of the model from experiment 1 with one difference: the input image size is halved, because the images will be downscaled this time. The reason for the downscaling is explained in Step 2.
End of explanation
"""
import numpy as np
import pandas as pd
df = pd.read_csv('data/driving_log.csv')
"""
Explanation: Step 2: Create training set
End of explanation
"""
from tqdm import tqdm
X_train = []
y_train = []
for i in tqdm(range(len(df))):
record, image = get_record_and_image(i)
image = image.resize((image.width // 2, image.height // 2))
X_train.append(np.array(image))
image.close()
y_train.append(record['steering'])
"""
Explanation: Now I need to create the actual training data, X_train and y_train. I will just read all the images and store them as NumPy arrays in X_train. Similarly, I read the corresponding steering angles and store them in y_train.
Note: I ended up scaling the images down to half size to conserve memory and speed up training. This was also mentioned in the project cheat sheet (https://carnd-forums.udacity.com/questions/26214464/behavioral-cloning-cheatsheet).
End of explanation
"""
X_min = np.min(X_train)
X_max = np.max(X_train)
X_normalized = (X_train - X_min) / (X_max - X_min) - 0.5
y_train = np.array(y_train)
"""
Explanation: Some preprocessing: normalize the images and convert y_train to a NumPy array, because that is what the Keras fit() seems to want. This step takes some time and also consumes a lot of memory; downscaling the images above helps.
End of explanation
"""
import keras.optimizers
def train(model, nb_epoch=10, learning_rate=0.001):
adam = keras.optimizers.Adam(lr=learning_rate)
model.compile(loss='mse', optimizer=adam)
model.fit(X_normalized, y_train, validation_split=0.2, nb_epoch=nb_epoch, verbose=2)
model.save('model.h5')
train(model)
"""
Explanation: Step 3: Train the model
Here I use all the data from the sample training data, 8036 images and their steering angles. Instead of using the training data generator as in experiment 1, I just give the whole training set to model.fit and let it split it into training and validation sets. After training, I save the model so it can be loaded into the simulator for testing if the training seems to proceed well.
End of explanation
"""
from random import randrange
def sample_predictions(model):
for i in range(10):
index = randrange(len(df))
X = np.expand_dims(X_normalized[index], axis=0)
y = y_train[index]
print('Actual steering angle {} model prediction {}'.format(y, model.predict(X)[0][0]))
sample_predictions(model)
"""
Explanation: The validation error does not get much lower after epoch 4 or so, whereas the training error keeps falling. This indicates overfitting and poor generalization.
Let's do a bit of random sampling of the predicted steering angles to get a feel for how well they match the actual angles.
End of explanation
"""
from keras.layers import Dropout
model_2 = Sequential()
model_2.add(Convolution2D(6, 5, 5, border_mode='valid', subsample=(5, 5), input_shape=(80, 160, 3)))
model_2.add(Dropout(0.5))
model_2.add(Activation('relu'))
model_2.add(MaxPooling2D(pool_size=(2, 2)))
model_2.add(Convolution2D(16, 5, 5, border_mode='valid', subsample=(2, 2)))
model_2.add(Dropout(0.5))
model_2.add(Activation('relu'))
model_2.add(MaxPooling2D(pool_size=(2, 2)))
model_2.add(Flatten())
model_2.add(Dense(120))
model_2.add(Activation('relu'))
model_2.add(Dense(84))
model_2.add(Activation('relu'))
model_2.add(Dense(1))
model_2.add(Activation('tanh'))
layer_info(model_2)
train(model_2)
sample_predictions(model_2)
"""
Explanation: The sample predictions do not look very good. Some tweaks to the model are in order.
Step 4: Tweaking the model
So what could be done to the model to improve it? Basically there are three different approaches for changing the model:
Keep the model as it is, but try to improve its generalization ability
Keep the current architecture, but increase the amount of weights
Do some changes to the model's architecture
Before going for options 2 or 3, let's consider option 1, as it is more conservative than the others. A simple way to try to increase the generalization ability is to add dropout layers, which force the model to learn redundant connections. Let's try that.
End of explanation
"""
model_3 = Sequential()
model_3.add(Convolution2D(6, 5, 5, border_mode='valid', subsample=(5, 5), input_shape=(80, 160, 3)))
model_3.add(Dropout(0.5))
model_3.add(Activation('relu'))
#model_3.add(MaxPooling2D(pool_size=(2, 2)))
model_3.add(Convolution2D(16, 5, 5, border_mode='valid'))
model_3.add(Dropout(0.5))
model_3.add(Activation('relu'))
#model_3.add(MaxPooling2D(pool_size=(2, 2)))
model_3.add(Flatten())
model_3.add(Dense(120))
model_3.add(Activation('relu'))
model_3.add(Dense(84))
model_3.add(Activation('relu'))
model_3.add(Dense(1))
model_3.add(Activation('tanh'))
layer_info(model_3)
train(model_3, 20)
sample_predictions(model_3)
"""
Explanation: The performance is even poorer now, so the model is probably not complex enough to learn the given data set. I could increase the layer dimensions directly, but there is another way: remove the pooling layers. Pooling is analogous to downsampling, and it reduces the number of weights in the model. Let's strip the pooling layers and see what happens.
End of explanation
"""
model_4 = Sequential()
model_4.add(Convolution2D(24, 5, 5, border_mode='valid', subsample=(2, 2), input_shape=(80, 160, 3)))
model_4.add(Activation('relu'))
model_4.add(Convolution2D(36, 5, 5, border_mode='valid', subsample=(2, 2)))
model_4.add(Activation('relu'))
model_4.add(Convolution2D(48, 5, 5, border_mode='valid', subsample=(2, 2)))
model_4.add(Activation('relu'))
model_4.add(Convolution2D(64, 3, 3, border_mode='valid'))
model_4.add(Activation('relu'))
model_4.add(Convolution2D(64, 3, 3, border_mode='valid'))
model_4.add(Activation('relu'))
model_4.add(Flatten())
model_4.add(Dense(100))
model_4.add(Activation('relu'))
model_4.add(Dense(50))
model_4.add(Activation('relu'))
model_4.add(Dense(10))
model_4.add(Activation('relu'))
model_4.add(Dense(1))
model_4.add(Activation('tanh'))
layer_info(model_4)
train(model_4)
sample_predictions(model_4)
model_4 = Sequential()
model_4.add(Convolution2D(24, 5, 5, border_mode='valid', subsample=(2, 2), input_shape=(80, 160, 3)))
model_4.add(Activation('relu'))
model_4.add(Dropout(0.5))
model_4.add(Convolution2D(36, 5, 5, border_mode='valid', subsample=(2, 2)))
model_4.add(Activation('relu'))
model_4.add(Dropout(0.5))
model_4.add(Convolution2D(48, 5, 5, border_mode='valid', subsample=(2, 2)))
model_4.add(Activation('relu'))
model_4.add(Dropout(0.5))
model_4.add(Convolution2D(64, 3, 3, border_mode='valid'))
model_4.add(Activation('relu'))
model_4.add(Dropout(0.5))
model_4.add(Convolution2D(64, 3, 3, border_mode='valid'))
model_4.add(Activation('relu'))
model_4.add(Dropout(0.5))
model_4.add(Flatten())
model_4.add(Dense(100))
model_4.add(Activation('relu'))
model_4.add(Dense(50))
model_4.add(Activation('relu'))
model_4.add(Dense(10))
model_4.add(Activation('relu'))
model_4.add(Dense(1))
model_4.add(Activation('tanh'))
layer_info(model_4)
train(model_4)
sample_predictions(model_4)
model_4 = Sequential()
model_4.add(Convolution2D(24, 5, 5, border_mode='valid', subsample=(2, 2), input_shape=(80, 160, 3)))
model_4.add(Activation('relu'))
model_4.add(Convolution2D(36, 5, 5, border_mode='valid', subsample=(2, 2)))
model_4.add(Activation('relu'))
model_4.add(Convolution2D(48, 5, 5, border_mode='valid', subsample=(2, 2)))
model_4.add(Activation('relu'))
model_4.add(Convolution2D(64, 3, 3, border_mode='valid'))
model_4.add(Activation('relu'))
model_4.add(Convolution2D(64, 3, 3, border_mode='valid'))
model_4.add(Activation('relu'))
model_4.add(Flatten())
model_4.add(Dense(100))
model_4.add(Dropout(0.5))
model_4.add(Activation('relu'))
model_4.add(Dense(50))
model_4.add(Activation('relu'))
model_4.add(Dense(10))
model_4.add(Activation('relu'))
model_4.add(Dense(1))
layer_info(model_4)
train(model_4, 50, learning_rate=0.001)
sample_predictions(model_4)
sample_predictions(model_4)
"""
Explanation: A bit better but even after 20 epochs not that much of an improvement. I begin to suspect that I need to increase the model's complexity quite a bit. At this point I will try to replicate the architecture from the NVidia paper (http://images.nvidia.com/content/tegra/automotive/images/2016/solutions/pdf/end-to-end-dl-using-px.pdf) and see what kind of difference it makes.
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/supplemental/labs/deepconv_gan.ipynb
|
apache-2.0
|
try:
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
tf.__version__
# To generate GIFs
!python3 -m pip install -q imageio
import glob
import os
import time
import imageio
import matplotlib.pyplot as plt
import numpy as np
import PIL
from IPython import display
from tensorflow.keras import layers
"""
Explanation: Deep Convolutional Generative Adversarial Network
Learning Objectives
Build a GAN architecture (consisting of a generator and discriminator) in Keras
Define the loss for the generator and discriminator
Define a training step for the GAN using tf.GradientTape() and @tf.function
Train the GAN on the MNIST dataset
Introduction
This notebook demonstrates how to build and train a Generative Adversarial Network (GAN) to generate images of handwritten digits using a Deep Convolutional Generative Adversarial Network (DCGAN).
GANs consist of two models which are trained simultaneously through an adversarial process. A generator ("the artist") learns to create images that look real, while a discriminator ("the art critic") learns to tell real images apart from fakes.
During training, the generator progressively becomes better at creating images that look real, while the discriminator becomes better at recognizing fake images. The process reaches equilibrium when the discriminator can no longer distinguish real images from fakes.
In this notebook we'll build a GAN to generate MNIST digits. The following animation shows a series of images produced by the generator as it was trained for 50 epochs. The images begin as random noise, and increasingly resemble handwritten digits over time.
Import TensorFlow and other libraries
End of explanation
"""
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype(
"float32"
)
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
"""
Explanation: Load and prepare the dataset
For this notebook, we will use the MNIST dataset to train the generator and the discriminator. The generator will generate handwritten digits resembling the MNIST data.
End of explanation
"""
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images)
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
"""
Explanation: Next, we define our input pipeline using tf.data. The pipeline below reads in train_images as tensor slices and then shuffles and batches the examples for training.
End of explanation
"""
# TODO 1
def make_generator_model():
model = tf.keras.Sequential()
# TODO: Your code goes here.
assert model.output_shape == (None, 28, 28, 1)
return model
"""
Explanation: Create the generator and discriminator models
Both our generator and discriminator models will be defined using the Keras Sequential API.
The Generator
The generator uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from a seed (random noise). We will start with a Dense layer that takes this seed as input, then upsample several times until we reach the desired image size of 28x28x1.
Exercise. Complete the code below to create the generator model. Start with a dense layer that takes as input random noise. We will create random noise using tf.random.normal([1, 100]). Use tf.keras.layers.Conv2DTranspose over multiple layers to upsample the random noise from dimension 100 to ultimately dimension 28x28x1 (the shape of our original MNIST digits).
Hint: Experiment with using BatchNormalization or different activation functions like LeakyReLU.
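If you get stuck, here is one possible way to fill in the make_generator_model stub. It is only a sketch: the specific layer widths (a 7x7x256 Dense/Reshape followed by three Conv2DTranspose layers) are an assumption borrowed from the common DCGAN recipe, not the required answer.

```python
model.add(layers.Dense(7 * 7 * 256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
# upsample 7x7 -> 7x7 -> 14x14 -> 28x28
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding="same", use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding="same", use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding="same", use_bias=False, activation="tanh"))
```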
End of explanation
"""
generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap="gray")
"""
Explanation: Let's use the (as yet untrained) generator to create an image.
End of explanation
"""
# TODO 1.
def make_discriminator_model():
model = tf.keras.Sequential()
# TODO: Your code goes here.
assert model.output_shape == (None, 1)
return model
"""
Explanation: The Discriminator
Next, we will build the discriminator. The discriminator is a CNN-based image classifier. It should take in an image of shape 28x28x1 and return a single classification indicating if that image is real or not.
Exercise. Complete the code below to create the CNN-based discriminator model. Your model should be a binary classifier which takes as input a tensor of shape 28x28x1. Experiment with different stacks of convolutions, activation functions, and/or dropout.
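Again only as a reference if you get stuck, here is one possible completion of the make_discriminator_model stub; the filter counts and dropout rate are assumptions, not the only valid choice.

```python
model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding="same", input_shape=[28, 28, 1]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding="same"))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(1))  # single logit: positive for "real", negative for "fake"
```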
End of explanation
"""
make_generator_model().summary()
make_discriminator_model().summary()
"""
Explanation: Using .summary() we can have a high-level summary of the generator and discriminator models.
End of explanation
"""
discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print(decision)
"""
Explanation: Let's use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images, and negative values for fake images.
End of explanation
"""
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
"""
Explanation: Define the loss and optimizers
Next, we will define the loss functions and optimizers for both the generator and discriminator models. Both the generator and discriminator will use the BinaryCrossentropy loss.
End of explanation
"""
#TODO 2
def discriminator_loss(real_output, fake_output):
real_loss = # TODO: Your code goes here.
fake_loss = # TODO: Your code goes here.
total_loss = # TODO: Your code goes here.
return total_loss
"""
Explanation: Discriminator loss
The method below quantifies how well the discriminator is able to distinguish real images from fakes.
Recall, when training the discriminator (i.e. holding the generator fixed) the loss function has two parts: the loss when sampling from the real data and the loss when sampling from the fake data. The function below compares the discriminator's predictions on real images to an array of 1s, and the discriminator's predictions on fake (generated) images to an array of 0s.
Exercise.
Complete the code in the method below. The real_loss should return the cross-entropy for the discriminator's predictions on real images and the fake_loss should return the cross-entropy for the discriminator's predictions on fake images.
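For reference, one way to write this using the cross_entropy helper defined above (check it against your own attempt first):

```python
def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)   # real images should be scored as 1
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)  # generated images should be scored as 0
    total_loss = real_loss + fake_loss
    return total_loss
```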
End of explanation
"""
# TODO 2
def generator_loss(fake_output):
return # Your code goes here.
"""
Explanation: Generator loss
The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, we will compare the discriminator's decisions on the generated images to an array of 1s.
Exercise.
Complete the code to return the cross-entropy loss of the generator's output.
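One possible solution, again for reference only:

```python
def generator_loss(fake_output):
    # the generator wants the discriminator to label its fake images as real (1)
    return cross_entropy(tf.ones_like(fake_output), fake_output)
```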
End of explanation
"""
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
"""
Explanation: Optimizers for the generator and discriminator
Note that we must define two separate optimizers for the discriminator and the generator, since we will train the two networks separately.
End of explanation
"""
checkpoint_dir = "./gan_training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(
generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator,
)
"""
Explanation: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
End of explanation
"""
EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
# We will reuse this seed over time (so it's easier
# to visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])
"""
Explanation: Define the training loop
Next, we define the training loop for training our GAN. Below we set up global variables for training.
End of explanation
"""
# TODO 3
@tf.function
def train_step(images):
noise = tf.random.normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = # TODO: Your code goes here.
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(
gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(
disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(
zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(
zip(gradients_of_discriminator, discriminator.trainable_variables))
"""
Explanation: The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and discriminator.
Exercise.
Complete the code below to define the training loop for our GAN. Notice the use of tf.function below. This annotation causes the function train_step to be "compiled". The train_step function takes as input a batch of images. In the rest of the function,
- generated_images is created using the generator function with noise as input
- apply the discriminator model to the images and generated_images to create the real_output and fake_output (resp.)
- define the gen_loss and disc_loss using the methods you defined above.
- compute the gradients of the generator and the discriminator using gen_tape and disc_tape (resp.)
Lastly, we use the .apply_gradients method to make a gradient step for the generator_optimizer and discriminator_optimizer
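For the single missing line, one possible completion is to run the generator on the sampled noise inside the tape, so that its operations are recorded for the gradient computation:

```python
generated_images = generator(noise, training=True)
```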
End of explanation
"""
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as we go
display.clear_output(wait=True)
generate_and_save_images(generator, epoch + 1, seed)
# Save the model every 15 epochs
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
print(f"Time for epoch {epoch + 1} is {time.time() - start} sec")
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(generator, epochs, seed)
"""
Explanation: We use the train_step function above to define training of our GAN. Note here, the train function takes as argument the tf.data dataset and the number of epochs for training.
End of explanation
"""
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i + 1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap="gray")
plt.axis("off")
plt.savefig(f"./gan_images/image_at_epoch_{epoch:04d}.png")
plt.show()
"""
Explanation: Generate and save images.
We'll use a small helper function to generate images and save them.
End of explanation
"""
# TODO 4
# TODO: Your code goes here.
"""
Explanation: Train the model
Call the train() method defined above to train the generator and discriminator simultaneously. Note, training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (e.g., that they train at a similar rate).
At the beginning of the training, the generated images look like random noise. As training progresses, the generated digits will look increasingly real. After about 50 epochs, they resemble MNIST digits. This may take about one to two minutes per epoch.
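One possible way to launch training with the objects defined above is sketched below; the os.makedirs line is an extra precaution (an assumption about your environment) so that generate_and_save_images has a gan_images directory to write into.

```python
os.makedirs("gan_images", exist_ok=True)
train(train_dataset, EPOCHS)
```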
End of explanation
"""
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
"""
Explanation: Restore the latest checkpoint.
End of explanation
"""
# Display a single image using the epoch number
def display_image(epoch_no):
return PIL.Image.open(f"./gan_images/image_at_epoch_{epoch_no:04d}.png")
display_image(EPOCHS)
"""
Explanation: Create a GIF
Lastly, we'll create a gif that shows the progression of our produced images through training.
End of explanation
"""
anim_file = "dcgan.gif"
with imageio.get_writer(anim_file, mode="I") as writer:
filenames = glob.glob("./gan_images/image*.png")
filenames = sorted(filenames)
last = -1
for i, filename in enumerate(filenames):
frame = 2 * (i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import IPython
if IPython.version_info > (6, 2, 0, ""):
display.Image(filename=anim_file)
"""
Explanation: Use imageio to create an animated gif using the images saved during training.
End of explanation
"""
|
WNoxchi/Kaukasos
|
FADL1/L3CA2_rossmann_old.ipynb
|
mit
|
%matplotlib inline
%reload_ext autoreload
%autoreload 2
# from fastai.imports import *
# from fastai.torch_imports import *
from fastai.structured import * # non-PyTorch-specific machine-learning tools; independent lib
# from fastai.dataset import * # lets us do fastai PyTorch stuff w/ structured columnar data
from fastai.column_data import *
# np.set_printoptions(threshold=50, edgeitems=20)
# from sklearn_pandas import DataFrameMapper
# from sklearn.preprocessing import LabelEncoder, Imputer, StandardScaler
# import operator
PATH = 'data/rossmann/'
"""
Explanation: FastAI DL1 Lesson 3 CodeAlong II:
Rossmann - Structured Data
Code Along of lesson3-rossman.ipynb
1 Structured and Time Series Data
This notebook contains an implementation of the third place result in the Rossmann Kaggle competition as detailed in Guo/Berkhahn's Entity Embeddings of Categorical Variables.
The motivation behind exploring this architecture is its relevance to real-world applications. Most data used for decision making day-to-day in industry is structured and/or time-series data. Here we explore the end-to-end process of using neural networks with practical structured data problems.
End of explanation
"""
def concat_csvs(dirname):
path = f'{PATH}{dirname}'
filenames = glob.glob(f'{path}/*.csv')
wrote_header = False
with open(f'{path}.csv', 'w') as outputfile:
for filename in filenames:
name = filename.split('.')[0]
with open(filename) as f:
line = f.readline()
if not wrote_header:
wrote_header = True
outputfile.write('file,' + line)
for line in f:
outputfile.write(name + ',' + line)
outputfile.write('\n')
# concat_csvs('googletrend')
# concat_csvs('weather')
"""
Explanation: 1.1 Create Datasets
End of explanation
"""
table_names = ['train', 'store', 'store_states', 'state_names',
'googletrend', 'weather', 'test']
"""
Explanation: Feature Space:
* train: Training set provided by competition
* store: List of stores
* store_states: Mapping of store to the German state they're in
* List of German state names
* googletrend: Trend of certain google keywords over time, found by users to correlate well w/ given data
* weather: Weather
* test: Testing set
End of explanation
"""
tables = [pd.read_csv(f'{PATH}{fname}.csv', low_memory=False) for fname in table_names]
from IPython.display import HTML
"""
Explanation: We'll be using the popular data manipulation framework pandas. Among other things, Pandas allows you to manipulate tables/DataFrames in Python as one would in a database.
We're going to go ahead and load all our csv's as DataFrames into the list tables.
End of explanation
"""
for t in tables: display(t.head())
"""
Explanation: We can use head() to get a quick look at the contents of each table:
* train: Contains store information on a daily basis, tracks things like sales, customers, whether that day was a holiday, etc.
* store: General info about the store including competition, etc.
* store_states: Maps store to state it's in
* state_names: Maps state abbreviations to names
* googletrend: Trend data for particular week/state
* weather: Weather conditions for each state
* test: Same as training table, w/o sales and customers
End of explanation
"""
for t in tables: display(DataFrameSummary(t).summary())
"""
Explanation: This is very representative of a typical industry dataset.
The following returns summarized aggregate information about each table across each field.
End of explanation
"""
train, store, store_states, state_names, googletrend, weather, test = tables
len(train), len(test)
"""
Explanation: 1.2 Data Cleaning / Feature Engineering
As a structured data problem, we necessarily have to go through all the cleaning and feature engineering, even though we're using a neural network.
End of explanation
"""
train.StateHoliday = train.StateHoliday != '0'
test.StateHoliday = test.StateHoliday != '0'
"""
Explanation: We turn state holidays into booleans, to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.
End of explanation
"""
def join_df(left, right, left_on, right_on=None, suffix='_y'):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", suffix))
"""
Explanation: join_df is a function for joining tables on specific fields. By default, we'll be doing a left outer join of right on the left argument using the given fields for each table.
Pandas does joins using the merge method. The suffixes argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "_y" to those on the right.
End of explanation
"""
weather = join_df(weather, state_names, "file", "StateName")
"""
Explanation: Join weather/state names:
End of explanation
"""
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', 'State'] = 'HB,NI'
"""
Explanation: In Pandas you can add new columns to a DataFrame by simply defining them. We'll do this for googletrend by extracting dates and state names from the given data and adding those columns.
We're also going to replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'. This is a good opportunity to highlight Pandas indexing. We can use .loc[rows, cols] to select a list of rows and a list of columns from the DataFrame. In this case, we're selecting rows with state name 'NI' by using the boolean list googletrend.State=='NI' and selecting the "State" column.
End of explanation
"""
add_datepart(weather, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
"""
Explanation: The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
You should always consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add to every table with a date field.
End of explanation
"""
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
"""
Explanation: The Google Trends data has a special category for the whole of Germany ('Rossmann_DE') - we'll pull that out so we can use it explicitly.
End of explanation
"""
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
joined = join_df(train, store, "Store")
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]), len(joined_test[joined_test.StoreType.isnull()])
joined = join_df(joined, googletrend, ["State","Year","Week"])
joined_test = join_df(joined_test, googletrend, ["State","Year","Week"])
len(joined[joined.trend.isnull()]), len(joined_test[joined_test.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]), len(joined_test[joined_test.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]), len(joined_test[joined_test.Mean_TemperatureC.isnull()])
for df in (joined, joined_test):
for c in df.columns:
if c.endswith('_y'):
if c in df.columns: df.drop(c, inplace=True, axis=1)
"""
Explanation: Now we can outer join all of our data into a single DataFrame. Recall that in outer joins, every time a value in the joining field on the left table doesn't have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.
Aside Why not just do an inner join? If you're assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event that you're wrong or a mistake is made, an outer join followed by a Null-Check will catch it (Comparing before/after # of rows for inner join is equivalent, but requires keeping track of before/after row #'s. Outer join is easier.).
End of explanation
"""
for df in (joined, joined_test):
df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
df['CompetitionOpenSinceMonth']= df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
"""
Explanation: Next we'll fill in missing values to avoid complications with NA's. NA (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we're picking an arbitrary signal value that doesn't otherwise appear in the data.
End of explanation
"""
for df in (joined, joined_test):
df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
month=df.CompetitionOpenSinceMonth, day=15))
df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
"""
Explanation: Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of apply() in mapping a function across DataFrame values.
End of explanation
"""
for df in (joined, joined_test):
df.loc[df.CompetitionDaysOpen < 0, "CompetitionDaysOpen"] = 0
df.loc[df.CompetitionOpenSinceYear < 1990, "CompetitionDaysOpen"] = 0
"""
Explanation: We'll replace some erroneous / outlying data
End of explanation
"""
for df in (joined, joined_test):
df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"] // 30
df.loc[df.CompetitionMonthsOpen > 24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
"""
Explanation: We add the "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.
End of explanation
"""
for df in (joined, joined_test):
df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1).astype(pd.datetime))
df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days
for df in (joined, joined_test):
df.loc[df.Promo2Days < 0, "Promo2Days"] = 0
df.loc[df.Promo2SinceYear < 1990, "Promo2Days"] = 0
df["Promo2Weeks"] = df["Promo2Days"] // 7
df.loc[df.Promo2Weeks < 0, "Promo2Weeks"] = 0
df.loc[df.Promo2Weeks > 25, "Promo2Weeks"] = 25
df.Promo2Weeks.unique()
"""
Explanation: Same process for Promo dates.
End of explanation
"""
joined.to_feather(f'{PATH}joined')
joined_test.to_feather(f'{PATH}joined_test')
joined = pd.read_feather(f'{PATH}joined')
joined_test = pd.read_feather(f'{PATH}joined_test')
"""
Explanation: NOTE: make sure joined or joined_test is loaded into memory (either reinitialized from above, or loaded from disk if saved as below) when switching between the train and test datasets.
In practice you'll probably want both read in anyway, if you're going to run some of the cells further below together.
End of explanation
"""
def get_elapsed(fld, pre):
day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()
last_store = 0
res = []
for s,v,d in zip(df.Store.values, df[fld].values, df.Date.values):
if s != last_store:
last_date = np.datetime64()
last_store = s
if v: last_date = d
res.append(((d - last_date).astype('timedelta64[D]') / day1).astype(int))
df[pre + fld] = res
"""
Explanation: 1.3 Durations
It's common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:
* Running averages
* Time until next event
* Time since last event
This is often difficult to do with most table manipulation frameworks, since they're designed to work with relationships across columns. As such, we've created a class to handle this type of data.
We'll define a function get_elapsed for cumulative counting across a sorted DataFrame. Given a particular field fld to monitor, this function will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.
Upon initialization, this will result in datetime NA's until the field is encountered. This is reset every time a new store is seen. We'll see how to use this shortly.
End of explanation
"""
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
"""
Explanation: We'll be applying this to a subset of columns:
NOTE: You must rerun the cell below before running either df = train[columns] or df = test[columns], ie: before switching between train and test datasets -- since columns is redefined for the purpose of converting NaNs to zeros further on down below.
End of explanation
"""
df = train[columns]
df = test[columns]
"""
Explanation: NOTE: when running on the train-set, do train[columns], when running on the test-set, do test[columns] --- idk yet if you have to run all the cells afterwards, but I think you do.
End of explanation
"""
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
"""
Explanation: Let's walk through an example.
Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call get_elapsed('SchoolHoliday', 'After'). This will:
Be applied to every row of the DataFrame in order of store and date
Add to the DataFrame the days since last seeing a School Holiday
If we sort in the other direction, this will count the days until the next holiday
End of explanation
"""
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
"""
Explanation: We'll do this for two more fields:
End of explanation
"""
df = df.set_index("Date")
"""
Explanation: We're going to set the active index to Date
End of explanation
"""
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
for p in columns:
a = o + p
df[a] = df[a].fillna(0)
"""
Explanation: Then set Null values from elapsed field calculations to 0.
End of explanation
"""
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
).groupby("Store").rolling(7, min_periods=1).sum()
"""
Explanation: Next we'll demonstrate window functions in Pandas to calculate rolling quantities.
Here we're sorting by date (sort_index()) and counting the number of events of interest (sum()), defined in columns, over the following week (rolling(7)), grouped by Store (groupby()). We do the same in the opposite direction.
End of explanation
"""
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
"""
Explanation: Next we want to drop the Store indices grouped together in the window function.
Often in Pandas, there's an option to do this in place. This is time and memory efficient when working with large datasets.
End of explanation
"""
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns,1,inplace=True)
df.head()
"""
Explanation: Now we'll merge these values onto the df.
End of explanation
"""
df.to_feather(f'{PATH}df')
df.to_feather(f'{PATH}df_test')
# df = pd.read_feather(f'{PATH}df', index_col=0)
df["Date"] = pd.to_datetime(df.Date)
df.columns
"""
Explanation: It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one, that way you can go back to it easily if you need to make changes to it.
End of explanation
"""
joined = join_df(joined, df, ['Store', 'Date'])
joined_test = join_df(joined_test, df, ['Store', 'Date'])
"""
Explanation: NOTE: you'll get a "Buffer dtype mismatch" ValueError here unless you have joined or joined_test loaded in from above. (Note for when rerunning this notebook and switching between the test and train datasets.)
End of explanation
"""
joined = joined[joined.Sales != 0]
"""
Explanation: The authors also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, there are naturally spikes in sales that one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
End of explanation
"""
joined.reset_index(inplace=True)
joined_test.reset_index(inplace=True)
# so `` ValueError: cannot insert level_0, already exists `` just means I've already done this once, eh?
joined.to_feather(f'{PATH}joined')
joined_test.to_feather(f'{PATH}joined_test')
"""
Explanation: We'll back this up as well
End of explanation
"""
joined = pd.read_feather(f'{PATH}joined')
joined_test = pd.read_feather(f'{PATH}joined_test')
joined.head().T.head(40)
joined_test.columns
joined.columns
"""
Explanation: We now have our final set of engineered features.
While these steps were explicitly outlined in the paper, these are all fairly typical feature engineering steps for dealing with time series data and are practical in any similar setting.
1.4 Create Features
End of explanation
"""
cat_vars = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday', 'CompetitionMonthsOpen',
'Promo2Weeks', 'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear',
'State', 'Week', 'Events', 'Promo_fw', 'Promo_bw', 'StateHoliday_fw', 'StateHoliday_bw',
'SchoolHoliday_fw', 'SchoolHoliday_bw']
contin_vars = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE',
'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday']
n = len(joined); n
dep = 'Sales'
joined_test[dep] = 0
joined = joined[cat_vars + contin_vars + [dep, 'Date']].copy()
joined_test = joined_test[cat_vars + contin_vars + [dep, 'Date', 'Id']].copy()
for v in cat_vars: joined[v] = joined[v].astype('category').cat.as_ordered()
apply_cats(joined_test, joined)
for v in contin_vars:
joined[v] = joined[v].astype('float32')
joined_test[v] = joined_test[v].astype('float32')
"""
Explanation: Now that we've engineered all our features, we need to convert to input compatible with a neural network.
This includes converting categorical variables into contiguous integers or one-hot encodings, normalizing continuous features to standard normal, etc..
End of explanation
"""
idxs = get_cv_idxs(n, val_pct = 150000 / n)
joined_samp = joined.iloc[idxs].set_index("Date")
samp_size = len(joined_samp); samp_size
"""
Explanation: We're going to run on a sample:
End of explanation
"""
samp_size = n
joined_samp = joined.set_index("Date")
"""
Explanation: To run on the full dataset, use this instead:
End of explanation
"""
joined_samp.head(2)
df, y, nas, mapper = proc_df(joined_samp, 'Sales', do_scale=True)
yl = np.log(y)
joined_test = joined_test.set_index("Date")
df_test, _, nas, mapper, = proc_df(joined_test, 'Sales', do_scale=True, skip_flds=['Id'],
mapper=mapper, na_dict=nas)
df.head(2)
"""
Explanation: We can now process our data...
End of explanation
"""
train_ratio = 0.75
# train_ratio = 0.90
train_size = int(samp_size * train_ratio); train_size
val_idx = list(range(train_size, len(df)))
"""
Explanation: In time series data, cross-validation is not random. Instead, our holdout data is generally the most recent data, as it would be in a real application. This issue is discussed in detail in this post on our website.
One approach is to take the last 25% of rows (sorted by date) as our validation set.
End of explanation
"""
val_idx = np.flatnonzero(
(df.index<=datetime.datetime(2014,9,17)) & (df.index>=datetime.datetime(2014,8,1)))
# val_idx=[0]
"""
Explanation: An even better option for picking a validation set is using the exact same length of time period as the test set uses - this is implemented here:
End of explanation
"""
def inv_y(a): return np.exp(a)
def exp_rmspe(y_pred, targ):
targ = inv_y(targ)
pct_var = (targ - inv_y(y_pred))/targ
return math.sqrt((pct_var**2).mean())
max_log_y = np.max(yl)
y_range = (0, max_log_y*1.2)
"""
Explanation: 1.6 DL
We're ready to put together our models.
Root mean square percentage error (RMSPE) is the metric Kaggle used for this competition.
End of explanation
"""
md = ColumnarModelData.from_data_frame(PATH, val_idx, df, yl, cat_flds=cat_vars, bs=128)
# md = ColumnarModelData.from_data_frame(PATH, val_idx, df, yl, cat_flds=cat_vars, bs=128,
# test_df=df_test)
"""
Explanation: We can create a ModelData object directly from our DataFrame.
End of explanation
"""
cat_sz = [(c, len(joined_samp[c].cat.categories)+1) for c in cat_vars]
cat_sz
"""
Explanation: Some categorical variables have a lot more levels than others. Store, in particular, has over a thousand!
End of explanation
"""
emb_szs = [(c, min(50, (c + 1) // 2)) for _, c in cat_sz]
emb_szs
m = md.get_learner(emb_szs, len(df.columns) - len(cat_vars),
0.04, 1, [1000, 500], [0.001, 0.01], y_range=y_range)
lr = 1e-3
m.lr_find()
m.sched.plot(100)
"""
Explanation: We use the cardinality of each variable (the number of unique values) to decide how large to make its embeddings. Each level will be associated with a vector with length defined as below.
End of explanation
"""
m = md.get_learner(emb_szs, len(df.columns) - len(cat_vars),
0.04, 1, [1000,500], [0.001, 0.01], y_range=y_range)
lr = 1e-3
m.fit(lr, 3, metrics=[exp_rmspe])
m.fit(lr, 5, metrics=[exp_rmspe], cycle_len=1)
m.fit(lr, 2, metrics=[exp_rmspe], cycle_len=4)
"""
Explanation: 1.6.1 Sample
End of explanation
"""
m = md.get_learner(emb_szs, len(df.columns) - len(cat_vars),
0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
lr = 1e-3
m.fit(lr, 3, metrics=[exp_rmspe])
m.fit(lr, 3, metrics=[exp_rmspe], cycle_len=1)
"""
Explanation: 1.6.2 All
End of explanation
"""
m = md.get_learner(emb_szs, len(df.columns) - len(cat_vars),
0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
lr = 1e-3
m.fit(lr, 3, metrics=[exp_rmspe])
m.fit(lr, 3, metrics=[exp_rmspe], cycle_len=1)
m.save('val0')
m.load('val0')
x,y=m.predict_with_targs()
exp_rmspe(x,y)
pred_test=m.predict(True)
pred_test = np.exp(pred_test)
joined_test['Sales']=pred_test
csv_fn=f'{PATH}tmp/sub.csv'
FileLink(csv_fn)
"""
Explanation: Test
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
((val,trn), (y_val,y_trn)) = split_by_idx(val_idx, df.values, yl)
m = RandomForestRegressor(n_estimators=40, max_features=0.99, min_samples_leaf=2,
n_jobs=-1, oob_score=True)
m.fit(trn, y_trn)
preds = m.predict(val)
m.score(trn, y_trn), m.score(val, y_val), m.oob_score_, exp_rmspe(preds, y_val)
"""
Explanation: 1.5 RF
End of explanation
"""
|
m2dsupsdlclass/lectures-labs
|
labs/06_deep_nlp/Transformers_Joint_Intent_Classification_Slot_Filling.ipynb
|
mit
|
import tensorflow as tf
tf.__version__
!nvidia-smi
# TODO: update this notebook to work with the latest version of transformers
%pip install -q transformers==2.11.0
"""
Explanation: Joint Intent Classification and Slot Filling with Transformers
The goal of this notebook is to fine-tune a pretrained transformer-based neural network model to convert a user query expressed in English into
a representation that is structured enough to be processed by an automated service.
Here is an example of interpretation computed by such a Natural Language Understanding system:
```python
nlu("Book a table for two at Le Ritz for Friday night",
tokenizer, joint_model, intent_names, slot_names)
{
'intent': 'BookRestaurant',
'slots': {
'party_size_number': 'two',
'restaurant_name': 'Le Ritz',
'timeRange': 'Friday night'
}
}
```
Intent classification is a simple sequence classification problem. The trick is to treat the structured knowledge extraction part ("Slot Filling") as token-level classification problem using BIO-annotations:
```python
show_predictions("Book a table for two at Le Ritz for Friday night!",
... tokenizer, joint_model, intent_names, slot_names)
Intent: BookRestaurant
Slots:
Book : O
a : O
table : O
for : O
two : B-party_size_number
at : O
Le : B-restaurant_name
R : I-restaurant_name
##itz : I-restaurant_name
for : O
Friday : B-timeRange
night : I-timeRange
! : O
```
We will show how to train such a joint "sequence classification" and "token classification" model on a voice command dataset published by snips.ai.
This notebook is a partial reproduction of some of the results presented in this paper:
BERT for Joint Intent Classification and Slot Filling
Qian Chen, Zhu Zhuo, Wen Wang
https://arxiv.org/abs/1902.10909
End of explanation
"""
from urllib.request import urlretrieve
from pathlib import Path
SNIPS_DATA_BASE_URL = (
"https://github.com/ogrisel/slot_filling_and_intent_detection_of_SLU/blob/"
"master/data/snips/"
)
for filename in ["train", "valid", "test", "vocab.intent", "vocab.slot"]:
path = Path(filename)
if not path.exists():
print(f"Downloading {filename}...")
urlretrieve(SNIPS_DATA_BASE_URL + filename + "?raw=true", path)
"""
Explanation: The Data
We will use a speech command dataset collected, annotated and published by the French startup SNIPS.ai (bought in 2019 by the audio device manufacturer Sonos).
The original dataset comes in YAML format with inline markdown annotations.
Instead we will use a preprocessed variant with token-level B-I-O annotations, closer to the representation our model will predict. This variant of the SNIPS
dataset was prepared by Su Zhu.
End of explanation
"""
lines_train = Path("train").read_text("utf-8").strip().splitlines()
lines_train[:5]
"""
Explanation: Let's have a look at the first lines from the training set:
End of explanation
"""
def parse_line(line):
utterance_data, intent_label = line.split(" <=> ")
items = utterance_data.split()
words = [item.rsplit(":", 1)[0]for item in items]
word_labels = [item.rsplit(":", 1)[1]for item in items]
return {
"intent_label": intent_label,
"words": " ".join(words),
"word_labels": " ".join(word_labels),
"length": len(words),
}
parse_line(lines_train[0])
"""
Explanation: Some remarks:
The class label for the voice command appears at the end of each line (after the "<=>" marker).
Each word-level token is annotated with B-I-O labels using the ":" separator.
B/I/O stand for "Beginning" / "Inside" / "Outside"
"Add:O" means that the token "Add" is "Outside" of any annotation span
"Don:B-entity_name" means that "Don" is the "Beginning" of an annotation of type "entity-name".
"and:I-entity_name" means that "and" is "Inside" the previously started annotation of type "entity-name".
Let's write a parsing function and test it on the first line:
End of explanation
"""
print(Path("vocab.intent").read_text("utf-8"))
print(Path("vocab.slot").read_text("utf-8"))
"""
Explanation: This utterance is a voice command of type "AddToPlaylist" with two annotations:
an entity-name: "Don and Sherri",
a playlist: "Medidate to Sounds of Nature".
The goal of this project is to build a baseline Natural Language Understanding model to analyse such voice commands and predict:
the intent of the speaker: the sentence level class label ("AddToPlaylist");
extract the interesting "slots" (typed named entities) from the sentence by performing word level classification using the B-I-O tags as target classes. This second task is often referred to as "NER" (Named Entity Recognition) in the Natural Language Processing literature. Alternatively this is also known as "slot filling" when we expect a fixed set of named entity per sentence of a given class.
The list of possible classes for the sentence level and the word level classification problems are given as:
End of explanation
"""
import pandas as pd
parsed = [parse_line(line) for line in lines_train]
df_train = pd.DataFrame([p for p in parsed if p is not None])
df_train
df_train.groupby("intent_label").count()
df_train.hist("length", bins=30);
lines_valid = Path("valid").read_text("utf-8").strip().splitlines()
lines_test = Path("test").read_text("utf-8").strip().splitlines()
df_valid = pd.DataFrame([parse_line(line) for line in lines_valid])
df_test = pd.DataFrame([parse_line(line) for line in lines_test])
"""
Explanation: "POI" stands for "Point of Interest".
Let's parse all the lines and store the results in pandas DataFrames:
End of explanation
"""
from transformers import BertTokenizer
model_name = "bert-base-cased"
tokenizer = BertTokenizer.from_pretrained(model_name)
first_sentence = df_train.iloc[0]["words"]
first_sentence
tokenizer.tokenize(first_sentence)
"""
Explanation: A First Model: Intent Classification (Sentence Level)
Let's ignore the slot filling task for now and let's try to build a sentence level classifier by fine-tuning a pre-trained Transformer-based model using the huggingface/transformers package that provides both TF2/Keras and Pytorch APIs.
The BERT Tokenizer
First let's load a pre-trained tokenizer and test it on a test sentence from the training set:
End of explanation
"""
tokenizer.encode(first_sentence)
tokenizer.decode(tokenizer.encode(first_sentence))
"""
Explanation: Notice that BERT uses subword tokens so the length of the tokenized sentence is likely to be larger than the number of words in the sentence.
Question:
why is it particularly interesting to use subword tokenization for general purpose language models such as BERT?
Each token string is mapped to a unique integer id that makes it fast to lookup the right column in the input layer token embedding:
End of explanation
"""
import matplotlib.pyplot as plt
train_sequence_lengths = [len(tokenizer.encode(text))
for text in df_train["words"]]
plt.hist(train_sequence_lengths, bins=30)
plt.title(f"max sequence length: {max(train_sequence_lengths)}");
"""
Explanation: Remarks:
The first token [CLS] is used by the pre-training task for sequence classification.
The last token [SEP] is a separator for the pre-training task that classifies whether a pair of sentences is consecutive in a corpus or not (next sentence prediction).
Here we want to use BERT to compute a representation of a single voice command at a time
We could reuse the representation of the [CLS] token for sequence classification.
Alternatively we can pool the representations of all the tokens of the voice command (e.g. global average) and use that as the input of the final sequence classification layer.
End of explanation
"""
tokenizer.vocab_size
bert_vocab_items = list(tokenizer.vocab.items())
bert_vocab_items[:10]
bert_vocab_items[100:110]
bert_vocab_items[900:910]
bert_vocab_items[1100:1110]
bert_vocab_items[20000:20010]
bert_vocab_items[-10:]
"""
Explanation: To perform transfer learning, we will need to work with padded sequences so they all have the same size. The above histogram shows that after tokenization, 43 tokens are enough to represent all the voice commands in the training set.
The mapping can be introspected in the tokenizer.vocab attribute:
End of explanation
"""
import numpy as np
def encode_dataset(tokenizer, text_sequences, max_length):
token_ids = np.zeros(shape=(len(text_sequences), max_length),
dtype=np.int32)
for i, text_sequence in enumerate(text_sequences):
encoded = tokenizer.encode(text_sequence)
token_ids[i, 0:len(encoded)] = encoded
attention_masks = (token_ids != 0).astype(np.int32)
return {"input_ids": token_ids, "attention_masks": attention_masks}
encoded_train = encode_dataset(tokenizer, df_train["words"], 45)
encoded_train["input_ids"]
encoded_train["attention_masks"]
encoded_valid = encode_dataset(tokenizer, df_valid["words"], 45)
encoded_test = encode_dataset(tokenizer, df_test["words"], 45)
"""
Explanation: Couple of remarks:
30K is a reasonable vocabulary size and is small enough to be used in a softmax output layer;
it can represent multi-lingual sentences, including non-Western alphabets;
subword tokenization makes it possible to deal with typos and morphological variations with a small vocabulary size and without any language-specific preprocessing;
subword tokenization makes it unlikely to use the [UNK] special token as rare words can often be represented as a sequence of frequent enough short subwords in a meaningful way.
Encoding the Dataset with the Tokenizer
Let's now encode the full train / valid and test sets with our tokenizer to get padded integer numpy arrays:
End of explanation
"""
intent_names = Path("vocab.intent").read_text("utf-8").split()
intent_map = dict((label, idx) for idx, label in enumerate(intent_names))
intent_map
intent_train = df_train["intent_label"].map(intent_map).values
intent_train
intent_valid = df_valid["intent_label"].map(intent_map).values
intent_test = df_test["intent_label"].map(intent_map).values
"""
Explanation: Encoding the Sequence Classification Targets
To do so we build a simple mapping from the auxiliary files:
End of explanation
"""
from transformers import TFAutoModel
base_bert_model = TFAutoModel.from_pretrained("bert-base-cased")
base_bert_model.summary()
encoded_valid
outputs = base_bert_model(encoded_valid)
len(outputs)
"""
Explanation: Loading and Feeding a Pretrained BERT model
Let's load a pretrained BERT model using the huggingface transformers package:
End of explanation
"""
outputs[0].shape
"""
Explanation: The first output of the BERT model is a tensor with shape (batch_size, seq_len, output_dim), which contains features for each token in the input sequence:
End of explanation
"""
outputs[1].shape
"""
Explanation: The second output of the BERT model is a tensor with shape (batch_size, output_dim) which is the vector representation of the special token [CLS]. This vector is typically used as a pooled representation for the sequence as a whole. This will be used as the features of our intent classifier:
End of explanation
"""
import tensorflow as tf
from transformers import TFAutoModel
from tensorflow.keras.layers import Dropout, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.metrics import SparseCategoricalAccuracy
class IntentClassificationModel(tf.keras.Model):
def __init__(self, intent_num_labels=None, model_name="bert-base-cased",
dropout_prob=0.1):
super().__init__(name="joint_intent_slot")
# Let's preload the pretrained model BERT in the constructor of our
# classifier model
self.bert = TFAutoModel.from_pretrained(model_name)
# TODO: define a (Dense) classification layer to compute the logits
# for each sequence in the batch. The number of
# output classes is given by the intent_num_labels parameter.
# Use the default linear activation (no softmax) to compute logits.
# The softmax normalization will be computed in the loss function
# instead of the model itself.
def call(self, inputs, training=False):
# Use the pretrained model to extract features from our encoded inputs:
sequence_output, pooled_output = self.bert(inputs, training=training)
# The second output of the main BERT layer has shape:
# (batch_size, output_dim)
# and gives a "pooled" representation for the full sequence from the
# hidden state that corresponds to the "[CLS]" token.
# TODO: use the classifier layer to compute the logits from the pooled
# features.
intent_logits = None
return intent_logits
intent_model = IntentClassificationModel(intent_num_labels=len(intent_map))
intent_model.compile(optimizer=Adam(learning_rate=3e-5, epsilon=1e-08),
loss=SparseCategoricalCrossentropy(from_logits=True),
metrics=[SparseCategoricalAccuracy('accuracy')])
# TODO: uncomment to train the model:
# history = intent_model.fit(encoded_train, intent_train, epochs=2, batch_size=32,
# validation_data=(encoded_valid, intent_valid))
"""
Explanation: Exercise
Use the following code template to build and train a sequence classification model using to predict the intent class.
Use the self.bert pre-trained model in the call method and only consider the pooled features (ignore the token-wise features for now).
End of explanation
"""
import tensorflow as tf
from transformers import TFAutoModel
from tensorflow.keras.layers import Dropout, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.metrics import SparseCategoricalAccuracy
class IntentClassificationModel(tf.keras.Model):
def __init__(self, intent_num_labels=None, model_name="bert-base-cased",
dropout_prob=0.1):
super().__init__(name="joint_intent_slot")
self.bert = TFAutoModel.from_pretrained(model_name)
self.dropout = Dropout(dropout_prob)
# Use the default linear activation (no softmax) to compute logits.
# The softmax normalization will be computed in the loss function
# instead of the model itself.
self.intent_classifier = Dense(intent_num_labels)
def call(self, inputs, training=False):
sequence_output, pooled_output = self.bert(inputs, training=training)
pooled_output = self.dropout(pooled_output, training=training)
intent_logits = self.intent_classifier(pooled_output)
return intent_logits
intent_model = IntentClassificationModel(intent_num_labels=len(intent_map))
"""
Explanation: Solution
End of explanation
"""
intent_model.compile(optimizer=Adam(learning_rate=3e-5, epsilon=1e-08),
loss=SparseCategoricalCrossentropy(from_logits=True),
metrics=[SparseCategoricalAccuracy('accuracy')])
history = intent_model.fit(encoded_train, intent_train, epochs=2, batch_size=32,
validation_data=(encoded_valid, intent_valid))
def classify(text, tokenizer, model, intent_names):
inputs = tf.constant(tokenizer.encode(text))[None, :] # batch_size = 1
class_id = model(inputs).numpy().argmax(axis=1)[0]
return intent_names[class_id]
classify("Book a table for two at La Tour d'Argent for Friday night.",
tokenizer, intent_model, intent_names)
classify("I would like to listen to Anima by Thom Yorke.",
tokenizer, intent_model, intent_names)
classify("Will it snow tomorrow in Saclay?",
tokenizer, intent_model, intent_names)
classify("Where can I see to the last Star Wars near Odéon tonight?",
tokenizer, intent_model, intent_names)
"""
Explanation: Our classification model outputs logits instead of probabilities. The final softmax normalization is implicit: it is included in the loss function rather than in the model itself (a short added sketch below shows how to recover probabilities with an explicit softmax).
We need to configure the loss function SparseCategoricalCrossentropy(from_logits=True) accordingly:
End of explanation
"""
slot_names = ["[PAD]"]
slot_names += Path("vocab.slot").read_text("utf-8").strip().splitlines()
slot_map = {}
for label in slot_names:
slot_map[label] = len(slot_map)
slot_map
"""
Explanation: Joint Intent Classification and Slot Filling
Let's now refine our Natural Language Understanding system by trying to retrieve the important structured elements of each voice command.
To do so we will perform word level (or token level) classification of the BIO labels.
Since we have word level tags but BERT uses a wordpiece tokenizer, we need to align the BIO labels with the BERT tokens.
Let's load the list of possible word token labels and augment it with an additional padding label to be able to ignore special tokens:
End of explanation
"""
def encode_token_labels(text_sequences, slot_names, tokenizer, slot_map,
max_length):
encoded = np.zeros(shape=(len(text_sequences), max_length), dtype=np.int32)
for i, (text_sequence, word_labels) in enumerate(
zip(text_sequences, slot_names)):
encoded_labels = []
for word, word_label in zip(text_sequence.split(), word_labels.split()):
tokens = tokenizer.tokenize(word)
encoded_labels.append(slot_map[word_label])
expand_label = word_label.replace("B-", "I-")
if not expand_label in slot_map:
expand_label = word_label
encoded_labels.extend([slot_map[expand_label]] * (len(tokens) - 1))
encoded[i, 1:len(encoded_labels) + 1] = encoded_labels
return encoded
slot_train = encode_token_labels(
df_train["words"], df_train["word_labels"], tokenizer, slot_map, 45)
slot_valid = encode_token_labels(
df_valid["words"], df_valid["word_labels"], tokenizer, slot_map, 45)
slot_test = encode_token_labels(
df_test["words"], df_test["word_labels"], tokenizer, slot_map, 45)
slot_train[0]
slot_valid[0]
"""
Explanation: The following function generates token-aligned integer labels from the BIO word-level annotations. In particular, if a specific word is too long to be represented as a single token, we expand its label for all the tokens of that word while taking care of using "B-" labels only for the first token and then use "I-" for the matching slot type for subsequent tokens of the same word:
End of explanation
"""
from transformers import TFAutoModel
from tensorflow.keras.layers import Dropout, Dense
class JointIntentAndSlotFillingModel(tf.keras.Model):
def __init__(self, intent_num_labels=None, slot_num_labels=None,
model_name="bert-base-cased", dropout_prob=0.1):
super().__init__(name="joint_intent_slot")
self.bert = TFAutoModel.from_pretrained(model_name)
# TODO: define all the needed layers here.
def call(self, inputs, training=False):
# TODO: extract the features from the inputs using the pre-trained
# BERT model here.
# TODO: use the new layers to predict slot class (logits) for each
# token position in the input sequence:
slot_logits = None # (batch_size, seq_len, slot_num_labels)
# TODO: define a second classification head for the sequence-wise
# predictions:
intent_logits = None # (batch_size, intent_num_labels)
return slot_logits, intent_logits
joint_model = JointIntentAndSlotFillingModel(
intent_num_labels=len(intent_map), slot_num_labels=len(slot_map))
# Define one classification loss for each output:
losses = [SparseCategoricalCrossentropy(from_logits=True),
SparseCategoricalCrossentropy(from_logits=True)]
joint_model.compile(optimizer=Adam(learning_rate=3e-5, epsilon=1e-08),
loss=losses)
# TODO: uncomment to train the model:
# history = joint_model.fit(
# encoded_train, (slot_train, intent_train),
# validation_data=(encoded_valid, (slot_valid, intent_valid)),
# epochs=2, batch_size=32)
"""
Explanation: Note that the special tokens such as "[PAD]" and "[SEP]" and all padded positions receive a 0 label.
Exercise
Use the following code template to build a joint sequence and token classification model suitable for training on our encoded dataset with slot labels:
End of explanation
"""
from transformers import TFAutoModel
from tensorflow.keras.layers import Dropout, Dense
class JointIntentAndSlotFillingModel(tf.keras.Model):
def __init__(self, intent_num_labels=None, slot_num_labels=None,
model_name="bert-base-cased", dropout_prob=0.1):
super().__init__(name="joint_intent_slot")
self.bert = TFAutoModel.from_pretrained(model_name)
self.dropout = Dropout(dropout_prob)
self.intent_classifier = Dense(intent_num_labels,
name="intent_classifier")
self.slot_classifier = Dense(slot_num_labels,
name="slot_classifier")
def call(self, inputs, training=False):
sequence_output, pooled_output = self.bert(inputs, training=training)
# The first output of the main BERT layer has shape:
# (batch_size, max_length, output_dim)
sequence_output = self.dropout(sequence_output, training=training)
slot_logits = self.slot_classifier(sequence_output)
# The second output of the main BERT layer has shape:
# (batch_size, output_dim)
# and gives a "pooled" representation for the full sequence from the
# hidden state that corresponds to the "[CLS]" token.
pooled_output = self.dropout(pooled_output, training=training)
intent_logits = self.intent_classifier(pooled_output)
return slot_logits, intent_logits
joint_model = JointIntentAndSlotFillingModel(
intent_num_labels=len(intent_map), slot_num_labels=len(slot_map))
opt = Adam(learning_rate=3e-5, epsilon=1e-08)
losses = [SparseCategoricalCrossentropy(from_logits=True),
SparseCategoricalCrossentropy(from_logits=True)]
metrics = [SparseCategoricalAccuracy('accuracy')]
joint_model.compile(optimizer=opt, loss=losses, metrics=metrics)
history = joint_model.fit(
encoded_train, (slot_train, intent_train),
validation_data=(encoded_valid, (slot_valid, intent_valid)),
epochs=2, batch_size=32)
"""
Explanation: Solution:
End of explanation
"""
def show_predictions(text, tokenizer, model, intent_names, slot_names):
inputs = tf.constant(tokenizer.encode(text))[None, :] # batch_size = 1
outputs = model(inputs)
slot_logits, intent_logits = outputs
slot_ids = slot_logits.numpy().argmax(axis=-1)[0, 1:-1]
intent_id = intent_logits.numpy().argmax(axis=-1)[0]
print("## Intent:", intent_names[intent_id])
print("## Slots:")
for token, slot_id in zip(tokenizer.tokenize(text), slot_ids):
print(f"{token:>10} : {slot_names[slot_id]}")
show_predictions("Book a table for two at Le Ritz for Friday night!",
tokenizer, joint_model, intent_names, slot_names)
show_predictions("Will it snow tomorrow in Saclay?",
tokenizer, joint_model, intent_names, slot_names)
show_predictions("I would like to listen to Anima by Thom Yorke.",
tokenizer, joint_model, intent_names, slot_names)
"""
Explanation: The following function uses our trained model to make a prediction on a single text sequence and display both the sequence-wise and the token-wise class labels:
End of explanation
"""
def decode_predictions(text, tokenizer, intent_names, slot_names,
intent_id, slot_ids):
info = {"intent": intent_names[intent_id]}
collected_slots = {}
active_slot_words = []
active_slot_name = None
for word in text.split():
tokens = tokenizer.tokenize(word)
current_word_slot_ids = slot_ids[:len(tokens)]
slot_ids = slot_ids[len(tokens):]
current_word_slot_name = slot_names[current_word_slot_ids[0]]
if current_word_slot_name == "O":
if active_slot_name:
collected_slots[active_slot_name] = " ".join(active_slot_words)
active_slot_words = []
active_slot_name = None
else:
# Naive BIO handling: treat B- and I- the same...
new_slot_name = current_word_slot_name[2:]
if active_slot_name is None:
active_slot_words.append(word)
active_slot_name = new_slot_name
elif new_slot_name == active_slot_name:
active_slot_words.append(word)
else:
collected_slots[active_slot_name] = " ".join(active_slot_words)
active_slot_words = [word]
active_slot_name = new_slot_name
if active_slot_name:
collected_slots[active_slot_name] = " ".join(active_slot_words)
info["slots"] = collected_slots
return info
def nlu(text, tokenizer, model, intent_names, slot_names):
inputs = tf.constant(tokenizer.encode(text))[None, :] # batch_size = 1
outputs = model(inputs)
slot_logits, intent_logits = outputs
slot_ids = slot_logits.numpy().argmax(axis=-1)[0, 1:-1]
intent_id = intent_logits.numpy().argmax(axis=-1)[0]
return decode_predictions(text, tokenizer, intent_names, slot_names,
intent_id, slot_ids)
nlu("Book a table for two at Le Ritz for Friday night",
tokenizer, joint_model, intent_names, slot_names)
nlu("Will it snow tomorrow in Saclay",
tokenizer, joint_model, intent_names, slot_names)
nlu("I would like to listen to Anima by Thom Yorke",
tokenizer, joint_model, intent_names, slot_names)
"""
Explanation: Decoding Predictions into Structured Knowledge
For completeness, here is a minimal function to naively decode the predicted BIO slot ids and convert them into a structured representation of the detected slots as a Python dictionary:
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.19/_downloads/85e12f42707b248635bc0c477c2ffc2f/plot_mne_solutions.ipynb
|
bsd-3-clause
|
# Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read data
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory',
baseline=(None, 0))
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
"""
Explanation: Computing various MNE solutions
This example shows example fixed- and free-orientation source localizations
produced by MNE, dSPM, sLORETA, and eLORETA.
End of explanation
"""
inv = make_inverse_operator(evoked.info, fwd, cov, loose=0., depth=0.8,
verbose=True)
"""
Explanation: Fixed orientation
First let's create a fixed-orientation inverse, with the default weighting.
End of explanation
"""
snr = 3.0
lambda2 = 1.0 / snr ** 2
kwargs = dict(initial_time=0.08, hemi='both', subjects_dir=subjects_dir,
size=(600, 600))
stc = abs(apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True))
brain = stc.plot(figure=1, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
"""
Explanation: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
End of explanation
"""
stc = abs(apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True))
brain = stc.plot(figure=2, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
"""
Explanation: Next let's use the default noise normalization, dSPM:
End of explanation
"""
stc = abs(apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True))
brain = stc.plot(figure=3, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
"""
Explanation: And sLORETA:
End of explanation
"""
stc = abs(apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True))
brain = stc.plot(figure=4, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
"""
Explanation: And finally eLORETA:
End of explanation
"""
inv = make_inverse_operator(evoked.info, fwd, cov, loose=1., depth=0.8,
verbose=True)
"""
Explanation: Free orientation
Now let's not constrain the orientation of the dipoles at all by creating
a free-orientation inverse.
End of explanation
"""
stc = apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True)
brain = stc.plot(figure=5, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
"""
Explanation: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
End of explanation
"""
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True)
brain = stc.plot(figure=6, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
"""
Explanation: Next let's use the default noise normalization, dSPM:
End of explanation
"""
stc = apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True)
brain = stc.plot(figure=7, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
"""
Explanation: And sLORETA:
End of explanation
"""
stc = apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True)
brain = stc.plot(figure=8, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
"""
Explanation: And finally eLORETA:
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.12/_downloads/plot_dipole_fit.ipynb
|
bsd-3-clause
|
from os import path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.forward import make_forward_dipole
from mne.evoked import combine_evoked
from mne.simulation import simulate_evoked
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_ave = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
fname_cov = op.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_bem = op.join(subjects_dir, 'sample', 'bem', 'sample-5120-bem-sol.fif')
fname_trans = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
fname_surf_lh = op.join(subjects_dir, 'sample', 'surf', 'lh.white')
"""
Explanation: Source localization with single dipole fit
This shows how to fit a dipole using mne-python.
For a comparison of fits between MNE-C and mne-python, see:
https://gist.github.com/Eric89GXL/ca55f791200fe1dc3dd2
Note that for 3D graphics you may need to choose a specific IPython
backend, such as:
%matplotlib qt or %matplotlib wx
End of explanation
"""
evoked = mne.read_evokeds(fname_ave, condition='Right Auditory',
baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False)
evoked_full = evoked.copy()
evoked.crop(0.07, 0.08)
# Fit a dipole
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
# Plot the result in 3D brain
dip.plot_locations(fname_trans, 'sample', subjects_dir)
"""
Explanation: Let's localize the N100m (using MEG only)
End of explanation
"""
fwd, stc = make_forward_dipole(dip, fname_bem, evoked.info, fname_trans)
pred_evoked = simulate_evoked(fwd, stc, evoked.info, None, snr=np.inf)
# find time point with highest GOF to plot
best_idx = np.argmax(dip.gof)
best_time = dip.times[best_idx]
# remember to create a subplot for the colorbar
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=[10., 3.4])
vmin, vmax = -400, 400 # make sure each plot has same colour range
# first plot the topography at the time of the best fitting (single) dipole
plot_params = dict(times=best_time, ch_type='mag', outlines='skirt',
colorbar=False)
evoked.plot_topomap(time_format='Measured field', axes=axes[0], **plot_params)
# compare this to the predicted field
pred_evoked.plot_topomap(time_format='Predicted field', axes=axes[1],
**plot_params)
# Subtract predicted from measured data (apply equal weights)
diff = combine_evoked([evoked, pred_evoked], [1, -1])
plot_params['colorbar'] = True
diff.plot_topomap(time_format='Difference', axes=axes[2], **plot_params)
plt.suptitle('Comparison of measured and predicted fields '
'at {:.0f} ms'.format(best_time * 1000.), fontsize=16)
"""
Explanation: Calculate and visualise magnetic field predicted by dipole with maximum GOF
and compare to the measured data, highlighting the ipsilateral (right) source
End of explanation
"""
dip_fixed = mne.fit_dipole(evoked_full, fname_cov, fname_bem, fname_trans,
pos=dip.pos[best_idx], ori=dip.ori[best_idx])[0]
dip_fixed.plot()
"""
Explanation: Estimate the time course of a single dipole with fixed position and
orientation (the one that maximized GOF) over the entire interval
End of explanation
"""
|
ernestyalumni/servetheloop
|
CurveFit/CurveFit.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.optimize import curve_fit
from scipy.stats import gamma # for drag vs. v fit
SUBDIR = './rawdata/' # subdirectory with all data
"""
Explanation: CurveFit
Various (nonlinear) curve fitting methods needed at various times for the rloop capsule
cf. capsulecorplab's github repository on CurveFit
End of explanation
"""
xdata = np.array([-2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9])
ydata = np.array([0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001])
# define fit function
def func(x, p1,p2):
return p1*np.cos(p2*x) + p2*np.sin(p1*x)
# Calculate and show fit parameters. Use a starting guess of p1=1 and p2=0.2
popt, pcov = curve_fit(func, xdata, ydata,p0=(1.0,0.2))
# Calculate and show sum of squares of residuals since it’s not given by the curve_fit function
p1 = popt[0]
p2 = popt[1]
residuals = ydata - func(xdata,p1,p2)
fres = sum(residuals**2)
print('popt', popt)
print('pcov', pcov)
print('p1', p1)
print('p2', p2)
print('residuals', residuals)
print('fres', fres)
# Plot fitted curve along with data
curvex=np.linspace(-2,3,100)
curvey=func(curvex,p1,p2)
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('xdata')
plt.ylabel('ydata')
plt.show()
"""
Explanation: Starting example from www.walkingrandomly.com
capsulecorplab started with this blog post on Simple nonlinear least squares curve fitting in Python, which used this example (only an example) for the fitting function:
$$
F(p_1,p_2,x)=p_1\cos{(p_2x)} +p_2 \sin{(p_1x)}
$$
cf. CurveFit/example_BestFit.py
End of explanation
"""
# Read csv and save to a pandas data frame
df = pd.read_csv(SUBDIR+'data.csv')
print( df.describe() )
df.head()
h = 0.002 # use for data.csv
ydata = np.array(df.drag[df.h == h])
xdata = np.array(df.v[df.h == h])
# Calculate and show fit parameters. Use a starting guess of p1=1 and p2=0.2
popt, pcov = curve_fit(func, xdata, ydata,p0=(1.0,0.2))
plt.scatter(xdata, ydata)
"""
Explanation: Applying this particular curvefit to ski data
cf. CurveFit/nonlinearFit.py
End of explanation
"""
#def gamma_func_fit(x, k, theta, x_0, A_0 ):
# return A_0 * x**2 * np.exp( -((x-x_0)**2/theta) )
#def gamma_func_fit(x, k, theta, x_0, A_0 ):
# return A_0 * x**k * np.exp( -((x-x_0)**2/theta) )
def gamma_func_fit(x, k, theta, x_0 ):
return x**k * np.exp( -((x-x_0)/theta) )
# Calculate and show fit parameters.
popt, pcov = curve_fit(gamma_func_fit, xdata, ydata , maxfev=10000000)
popt
pcov
# Plot fitted curve along with data
curvex=np.linspace(-1,150,200)
#curvey=gamma_func_fit(curvex,popt[0],popt[1],popt[2] )
curvey=gamma_func_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('xdata')
plt.ylabel('ydata')
plt.show()
"""
Explanation: Taking a look at the plot of drag vs. $v$, the form $p_1 \cos{(p_2 x)} + p_2 \sin{(p_1 x) }$ isn't appropriate since the coefficients in front of the sinusoidal terms $\cos$, $\sin$ are dependent upon the "wavenumbers" $p_2,p_1$.
Instead, consider a gamma function distribution
$$
f(x,k,\theta) = A_0 x^k \exp{ (-\frac{x}{\theta }) }
$$
with $x,k,\theta \in \mathbb{R}$, and $A_0 \in \mathbb{R}$ constant.
End of explanation
"""
def gamma_func_fit(x, k, theta, beta ):
return (x/theta)**k * np.exp( - (x/theta)**beta )
# Calculate and show fit parameters.
popt, pcov = curve_fit(gamma_func_fit, xdata[3:], ydata[3:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(-1,150,200)
curvey=gamma_func_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('xdata')
plt.ylabel('ydata')
plt.show()
"""
Explanation: Function form:
$$
f(x; k, \beta, \theta) = (\frac{x}{ \theta } )^k \exp{ \left( - \left( \frac{x}{\theta} \right)^{\beta} \right) }
$$
End of explanation
"""
h = 0.002 # use for data.csv
ydata = np.array(df.drag[df.h == h])
xdata = np.array(df.v[df.h == h])
def gamma_func_fit(x, k, theta, beta ):
return (x/theta)**k * np.exp( - (x/theta)**beta )
# Calculate and show fit parameters.
popt, pcov = curve_fit(gamma_func_fit, xdata[3:], ydata[3:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(-5,150,200)
curvey=gamma_func_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('xdata')
plt.ylabel('ydata')
plt.show()
"""
Explanation: So for
drag vs. $v$
$h=0.002 m$
End of explanation
"""
h = .016 # data.csv, in meters (m) (EY : ???)
ydata = np.array(df.drag[df.h == h])
xdata = np.array(df.v[df.h == h])
print (len(ydata)); print(len(xdata))
# Calculate and show fit parameters.
popt, pcov = curve_fit(gamma_func_fit, xdata[3:], ydata[3:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(-5,150,200)
curvey=gamma_func_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('xdata')
plt.ylabel('ydata')
plt.show()
"""
Explanation: $h=0.016 m$
End of explanation
"""
h = .032 # data.csv, in meters (m) (EY : ???)
ydata = np.array(df.drag[df.h == h])
xdata = np.array(df.v[df.h == h])
print (len(ydata)); print(len(xdata))
# Calculate and show fit parameters.
popt, pcov = curve_fit(gamma_func_fit, xdata[2:], ydata[2:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(-5,150,200)
curvey=gamma_func_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('xdata')
plt.ylabel('ydata')
plt.show()
"""
Explanation: $h=0.032 m$
End of explanation
"""
h = .014 # data.csv, in meters (m) (EY : ???)
ydata = np.array(df.drag[df.h == h])
xdata = np.array(df.v[df.h == h])
print (len(ydata)); print(len(xdata))
"""
Explanation: $h=0.014 m$
End of explanation
"""
h = .018 # data.csv, in meters (m) (EY : ???)
ydata = np.array(df.drag[df.h == h])
xdata = np.array(df.v[df.h == h])
print (len(ydata)); print(len(xdata))
"""
Explanation: So there's no data for $h= 0.014 m$. Same with $h=0.018 m$
End of explanation
"""
def log_func_fit(x, A, B, C ):
return A+B*np.log( x + C)
"""
Explanation: lift vs. $v$
$$
y = y(x) = A + B\log{ (x + C) }
$$
End of explanation
"""
h = .002 # data.csv, in meters (m) (EY : ???)
ydata = np.array(df.lift[df.h == h])
xdata = np.array(df.v[df.h == h])
print (len(ydata)); print(len(xdata))
# Calculate and show fit parameters.
popt, pcov = curve_fit(log_func_fit, xdata[3:], ydata[3:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(3,150,200)
curvey=log_func_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('$v$')
plt.ylabel('lift')
plt.title('$h=0.002$ m')
plt.show()
"""
Explanation: $h=0.002 m $
End of explanation
"""
h = .008 # data.csv, in meters (m) (EY : ???)
ydata = np.array(df.lift[df.h == h])
xdata = np.array(df.v[df.h == h])
print (len(ydata)); print(len(xdata))
# Calculate and show fit parameters.
popt, pcov = curve_fit(log_func_fit, xdata[3:], ydata[3:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(3,150,200)
curvey=log_func_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('$v$')
plt.ylabel('lift')
plt.title('$h=0.008$ m')
plt.show()
"""
Explanation: $h = 0.008 m $
End of explanation
"""
def exp_fit(x, A,B,C,D ):
return A * np.exp(- C * (x-B)) + D
"""
Explanation: Force2.Force_y vs. $v$
Fitting form, exponential function:
$$
f(x; A, B, C, D) = A \exp{ ( - C (x - B) )} + D
$$
End of explanation
"""
df.columns
h = .002 # data.csv, in meters (m) (EY : ???)
ydata = np.array( df.ix[df.h==h]["Force2.Force_y"] )
xdata = np.array(df.v[df.h == h])
print (len(ydata)); print(len(xdata))
# Calculate and show fit parameters.
popt, pcov = curve_fit( exp_fit, xdata[3:], ydata[3:] , p0=[100,100,1,1],maxfev=1000000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(3,150,200)
curvey= exp_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('$v$')
plt.ylabel('Force2.Force_y')
plt.title('$h=0.002$ m')
plt.show()
"""
Explanation: $h=0.002 \, m$
End of explanation
"""
## [`example_ski_data.csv`](https://github.com/capsulecorplab/CurveFit/blob/master/example_ski_data.csv)
# Read csv and save to a pandas data frame
df = pd.read_csv(SUBDIR+'example_ski_data.csv')
df.describe()
"""
Explanation: Some remarks:
Trying different initial conditions for curve_fit helps to find the parameters; a wrong choice of initial conditions leads to RuntimeErrors, since the number of iterations runs out before the residuals shrink enough
I determined the fitting forms in the following manner:
I use only combinations and factors of transcendental, polynomial, exponential, and logarithmic functions, because physical systems should only exhibit such functions
Take a look manually at what the desired fit shape would be and search for what combination of transcendentals, polynomials, exponentials, and logarithms would give us such a shape
modify accordingly
End of explanation
"""
DF1_A57 = pd.read_csv( SUBDIR + "Data Table 1 A57.csv" )
XY1_A57 = pd.read_csv( SUBDIR + "XY Plot 1 A57.csv" )
print( DF1_A57.head() )
print( DF1_A57.describe() )
print( XY1_A57.head() )
XY1_A57.describe()
DF1_A57.plot("Time [ms]", "Force1.Force_x [newton]", kind="scatter", title="Data Table 1")
DF1_A57.plot("Time [ms]", "Force1.Force_y [newton]", kind="scatter", title="Data Table 1")
DF1_A57.plot("Time [ms]", "Force1.Force_z [newton]", kind="scatter", title="Data Table 1")
XY1_A57.plot("Time [ms]", "Force1.Force_x [kNewton]", kind="scatter", title="XY 1")
XY1_A57.plot("Time [ms]", "Force1.Force_y [kNewton]", kind="scatter", title="XY 1")
XY1_A57.plot("Time [ms]", "Force1.Force_z [kNewton]", kind="scatter" , title="XY 1")
"""
Explanation: There may not be enough data points here to make sense of the relationship; the functional form is not clear.
But the desired outputs are lift and drag.
On Data Table 1 A57.csv and XY Plot 1 A57.csv
End of explanation
"""
xdata = np.array( DF1_A57["Time [ms]"] )
ydata = np.array( DF1_A57["Force1.Force_x [newton]"] )
# Calculate and show fit parameters.
popt, pcov = curve_fit( exp_fit, xdata[0:], ydata[0:] , p0=[100,100,1,-25.],maxfev=1000000)
#popt, pcov = curve_fit( exp_fit, xdata[2:], ydata[2:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(3,510,1000)
curvey= exp_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('$t [ms]$')
plt.ylabel('Force1.Force_x [newton]')
plt.title('$DF1$ ')
plt.show()
xdata = np.array( DF1_A57["Time [ms]"] )
ydata = np.array( DF1_A57["Force1.Force_y [newton]"] )
# Calculate and show fit parameters.
popt, pcov = curve_fit( exp_fit, xdata[0:], ydata[0:] , p0=[100,100,1,-25.],maxfev=1000000)
#popt, pcov = curve_fit( exp_fit, xdata[2:], ydata[2:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(3,510,1000)
curvey= exp_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('$t [ms]$')
plt.ylabel('Force1.Force_y [newton]')
plt.title('$DF1$ ')
plt.show()
xdata = np.array( DF1_A57["Time [ms]"] )
ydata = np.array( DF1_A57["Force1.Force_z [newton]"] )
# Calculate and show fit parameters.
popt, pcov = curve_fit( exp_fit, xdata[0:], ydata[0:] , p0=[1000,100,1,-25.],maxfev=1000000)
#popt, pcov = curve_fit( exp_fit, xdata[2:], ydata[2:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(3,510,1000)
curvey= exp_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('$t [ms]$')
plt.ylabel('Force1.Force_z [newton]')
plt.title('$DF1$ ')
plt.show()
xdata = np.array( XY1_A57["Time [ms]"] )
ydata = np.array( XY1_A57["Force1.Force_x [kNewton]"] )
# Calculate and show fit parameters.
popt, pcov = curve_fit( exp_fit, xdata[0:], ydata[0:] , p0=[100,100,1,-25.],maxfev=1000000)
#popt, pcov = curve_fit( exp_fit, xdata[2:], ydata[2:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(3,510,1000)
curvey= exp_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('$t [ms]$')
plt.ylabel('Force1.Force_x [kNewton]')
plt.title('$DF1$ ')
plt.show()
xdata = np.array( XY1_A57["Time [ms]"] )
ydata = np.array( XY1_A57["Force1.Force_y [kNewton]"] )
# Calculate and show fit parameters.
popt, pcov = curve_fit( exp_fit, xdata[0:], ydata[0:] , p0=[100,100,1,-25.],maxfev=1000000)
#popt, pcov = curve_fit( exp_fit, xdata[2:], ydata[2:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(3,510,1000)
curvey= exp_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('$t [ms]$')
plt.ylabel('Force1.Force_x [kNewton]')
plt.title('$XY1$ ')
plt.show()
xdata = np.array( XY1_A57["Time [ms]"] )
ydata = np.array( XY1_A57["Force1.Force_z [kNewton]"] )
# Calculate and show fit parameters.
popt, pcov = curve_fit( exp_fit, xdata[0:], ydata[0:] , p0=[100,100,1,-25.],maxfev=1000000)
#popt, pcov = curve_fit( exp_fit, xdata[2:], ydata[2:] , maxfev=100000)
print(popt)
print(pcov)
# Plot fitted curve along with data
curvex=np.linspace(3,510,1000)
curvey= exp_fit(curvex, *popt )
plt.plot(xdata,ydata,'*')
plt.plot(curvex,curvey,'r')
plt.xlabel('$t [ms]$')
plt.ylabel('Force1.Force_z [kNewton]')
plt.title('$XY1$ ')
plt.show()
"""
Explanation: Fit with decaying exponentials
End of explanation
"""
|
vkpedia/databuff
|
random-walks/YouTube-Spam/YouTube_Spam_Collection (Part 2).ipynb
|
mit
|
# Import modules
import numpy as np
import pandas as pd
"""
Explanation: YouTube Spam Collection Data Set (Part 2)
Source: UCI Machine Learning Repository
Original Source: YouTube Spam Collection v. 1
Alberto, T.C., Lochter J.V., Almeida, T.A. Filtragem Automática de Spam nos Comentários do YouTube. Anais do XII Encontro Nacional de Inteligência Artificial e Computacional (ENIAC'15), Natal, RN, Brazil, 2015. (preprint)
Alberto, T.C., Lochter J.V., Almeida, T.A. TubeSpam: Comment Spam Filtering on YouTube. Proceedings of the 14th IEEE International Conference on Machine Learning and Applications (ICMLA'15), 1-6, Miami, FL, USA, December, 2015. (preprint)
Contents
1 Data Set Description
2 Approach
3 Solution
3a Import modules
3b Read the data set
3c Data cleanup
3d Split the data
3e Transform the data
3f Build the model
3g Run predictions
3h Score the prediction
4 Summary
<a id='section1'></a>
1. Data Set Description
From the description accompanying the data set, "the samples were extracted from the comments section of five videos that were among the 10 most viewed on YouTube during the collection period."
The data is available in five distinct data sets, and the data is classified as 1 for "spam" and 0 for "ham"
<a id='section2'></a>
2. Approach
Since the data set is split across five data sets, we will take two passes at the data. This is the second pass.
In the (optional) first pass, we considered only the Psy data set, as a way to wrap our heads around the problem. The notebook for this can be accessed here.
Our second pass will involve merging all five data sets and then running the classification on the combined data set. In this round, we will also tune the model and the vectorizer to eke out some improvements.
<a id='section3'></a>
3. Solution
<a id='section3a'></a>
Import initial set of modules
End of explanation
"""
# Read the data set; print the first few rows
files = ['data\\Youtube01-Psy.csv', 'data\\Youtube02-KatyPerry.csv', 'data\\Youtube03-LMFAO.csv',
'data\\Youtube04-Eminem.csv', 'data\\Youtube05-Shakira.csv']
df = pd.DataFrame()
for file in files:
df = df.append(pd.read_csv(file))
df.head()
"""
Explanation: <a id='section3b'></a>
Read in and combine the data from all five CSV files
End of explanation
"""
# Check for missing values
df.info()
# Looks like there are missing values in the DATE column, but it is not a column of interest. Let's proceed.
# Of the five columns, the only relevant columns for spam/ham classification are the CONTENT and CLASS columns.
# We will use just these two columns. But first, let's check the distribution of spam and ham
df.CLASS.value_counts()
# There is an almost equal distribution. Given that this is a small data set, this is probably good,
# because the algorithm has enough items it can learn from
# Now, let us set up our X and y
X = df.CONTENT
y = df.CLASS
"""
Explanation: <a id='section3c'></a>
Data cleanup
End of explanation
"""
# Let us now split the data set into train and test sets
# We will use an 80/20 split
test_size = 0.2
seed = 42
scoring = 'accuracy'
num_folds = 10
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=seed, test_size=test_size)
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
models = []
names = []
results = []
lr = ('LR', LogisticRegression())
knn = ('KNN', KNeighborsClassifier())
svc = ('SVC', SVC())
nb = ('NB', MultinomialNB())
cart = ('CART', DecisionTreeClassifier())
models.extend([lr, knn, svc, nb, cart])
"""
Explanation: <a id='section3d'></a>
Split the data
End of explanation
"""
# Set up a vectorizer, and create a Document-Term matrix
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()
X_train_dtm = vect.fit_transform(X_train)
# Check the layout of the Document-Term matrix
X_train_dtm
"""
Explanation: <a id='section3e'></a>
Transform the data
End of explanation
"""
from sklearn.model_selection import KFold, cross_val_score
for name, model in models:
kfold = KFold(n_splits=num_folds, random_state=seed)
score = cross_val_score(model, X_train_dtm, y_train, scoring=scoring, cv=kfold)
names.append(name)
results.append(score)
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, \
RandomForestClassifier, ExtraTreesClassifier
ensembles = []
ensemble_names = []
ensemble_results = []
ensembles.append(('AB', AdaBoostClassifier()))
ensembles.append(('RF', RandomForestClassifier()))
ensembles.append(('ET', ExtraTreesClassifier()))
for name, model in ensembles:
kfold = KFold(n_splits=num_folds, random_state=seed)
score = cross_val_score(model, X_train_dtm, y_train, cv=kfold, scoring=scoring)
ensemble_names.append(name)
ensemble_results.append(score)
models_list = []
for i, name in enumerate(names):
d = {'model': name, 'mean': results[i].mean(), 'std': results[i].std()}
models_list.append(d)
for i, name in enumerate(ensemble_names):
d = {'model': name, 'mean': ensemble_results[i].mean(), 'std': ensemble_results[i].std()}
models_list.append(d)
models_df = pd.DataFrame(models_list).set_index('model')
models_df.sort_values('mean', ascending=False)
"""
Explanation: <a id='section3f'></a>
Build the model
In this step, we will build 6 models, and pick the one with the best accuracy score
End of explanation
"""
cart
from sklearn.model_selection import GridSearchCV
final_model = DecisionTreeClassifier()
criterion_values = ['gini', 'entropy']
splitter_values = ['best', 'random']
min_samples_split_values = np.arange(2, 11, 1)
param_grid = dict(criterion=criterion_values, splitter=splitter_values,
min_samples_split=min_samples_split_values)
kfold = KFold(n_splits=num_folds, random_state=seed)
grid = GridSearchCV(estimator=final_model, cv=kfold, scoring=scoring, param_grid=param_grid)
grid_result = grid.fit(X_train_dtm, y_train)
print(grid_result.best_params_, grid_result.best_score_)
"""
Explanation: Model selection
Based on accuracy scores, the best algorithm is the Decision Tree Classifier. Logistic Regression and AdaBoost Classifier also performed very well. We will choose Decision Tree as our model, and look to tune it.
End of explanation
"""
final_model = DecisionTreeClassifier(min_samples_split=7, random_state=seed)
final_model.fit(X_train_dtm, y_train)
# Transform the test data to a DTM and predict
X_test_dtm = vect.transform(X_test)
y_pred = final_model.predict(X_test_dtm)
"""
Explanation: It looks like we were able to eke out some improvement in the performance. The Decision Tree Classifier seems to perform best with the min_samples_split set to 7. We will use this for our final model. Note that the default values for 'criterion' and 'splitter' seem to be part of the best performing set of parameters.
<a id='section3g'></a>
Run the prediction
End of explanation
"""
# Let us check the accuracy score
# It needs to better than 50%, which was the baseline
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
accuracy_score(y_test, y_pred)
# The accuracy score was 93.37%, which is lower than we may have anticipated
# Let us check the confusion matrix to get a sense of the prediction distribution
confusion_matrix(y_test, y_pred)
# The model predicted 366 out of 392 instances correctly
# We had 14 false positives and 12 false negatives
# What were the false positive comments? (That is, ham marked as spam)
X_test[y_pred > y_test]
# And what were the false negative comments? (That is, spam comments that went undetected)
X_test[y_pred < y_test]
"""
Explanation: <a id='section3h'></a>
Score the prediction
End of explanation
"""
roc_auc_score(y_test, final_model.predict_proba(X_test_dtm)[:, 1])
"""
Explanation: Some of the false negatives seem like they should have been marked as spam, so it is interesting that the model missed these. We may need to tune our vectorizer and/or attempt some other classifiers.
Let us check the area under the ROC curve.
End of explanation
"""
|
tensorflow/docs-l10n
|
site/zh-cn/guide/autodiff.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
"""
Explanation: Introduction to gradients and automatic differentiation
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/guide/autodiff"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/autodiff.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/autodiff.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/autodiff.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Automatic differentiation and gradients
Automatic differentiation is useful for implementing machine learning algorithms such as backpropagation for training neural networks.
In this guide, you will explore ways to compute gradients with TensorFlow, especially in eager execution.
Setup
End of explanation
"""
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
y = x**2
"""
Explanation: Computing gradients
To differentiate automatically, TensorFlow needs to remember which operations happen, and in what order, during the forward pass. Then, during the backward pass, TensorFlow traverses this list of operations in reverse order to compute the gradients.
Gradient tapes
TensorFlow provides the tf.GradientTape API for automatic differentiation, that is, computing the gradient of a computation with respect to some inputs, usually tf.Variables. TensorFlow "records" relevant operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then uses that tape to compute the gradients of the "recorded" computation using reverse-mode differentiation.
For example:
End of explanation
"""
# dy = 2x * dx
dy_dx = tape.gradient(y, x)
dy_dx.numpy()
"""
Explanation: Once you have recorded some operations, use GradientTape.gradient(target, sources) to compute the gradient of some target (often a loss) with respect to some source (often the model's variables).
End of explanation
"""
w = tf.Variable(tf.random.normal((3, 2)), name='w')
b = tf.Variable(tf.zeros(2, dtype=tf.float32), name='b')
x = [[1., 2., 3.]]
with tf.GradientTape(persistent=True) as tape:
y = x @ w + b
loss = tf.reduce_mean(y**2)
"""
Explanation: The example above uses scalars, but tf.GradientTape works just as easily on any tensor:
End of explanation
"""
[dl_dw, dl_db] = tape.gradient(loss, [w, b])
"""
Explanation: To get the gradient of loss with respect to both variables, you can pass both of them as sources to the gradient method. The tape is flexible about how sources are passed and accepts any nested combination of lists or dictionaries, returning the gradients structured the same way (see tf.nest).
End of explanation
"""
print(w.shape)
print(dl_dw.shape)
"""
Explanation: The gradient with respect to each source has the shape of that source:
End of explanation
"""
my_vars = {
'w': w,
'b': b
}
grad = tape.gradient(loss, my_vars)
grad['b']
"""
Explanation: Here is the gradient calculation again, this time passing a dictionary of variables:
End of explanation
"""
layer = tf.keras.layers.Dense(2, activation='relu')
x = tf.constant([[1., 2., 3.]])
with tf.GradientTape() as tape:
# Forward pass
y = layer(x)
loss = tf.reduce_mean(y**2)
# Calculate gradients with respect to every trainable variable
grad = tape.gradient(loss, layer.trainable_variables)
for var, g in zip(layer.trainable_variables, grad):
print(f'{var.name}, shape: {g.shape}')
"""
Explanation: Gradients with respect to a model
tf.Variables are commonly collected into a tf.Module or one of its subclasses (layers.Layer, keras.Model) for checkpointing and exporting.
In most cases, you will want to compute gradients with respect to a model's trainable variables. Since all subclasses of tf.Module aggregate their variables in the Module.trainable_variables property, you can compute these gradients in a few lines of code:
End of explanation
"""
# A trainable variable
x0 = tf.Variable(3.0, name='x0')
# Not trainable
x1 = tf.Variable(3.0, name='x1', trainable=False)
# Not a Variable: A variable + tensor returns a tensor.
x2 = tf.Variable(2.0, name='x2') + 1.0
# Not a variable
x3 = tf.constant(3.0, name='x3')
with tf.GradientTape() as tape:
y = (x0**2) + (x1**2) + (x2**2)
grad = tape.gradient(y, [x0, x1, x2, x3])
for g in grad:
print(g)
"""
Explanation: <a id="watches"></a>
控制梯度带监视的内容
默认行为是在访问可训练 tf.Variable 后记录所有运算。原因如下:
条带需要知道在前向传递中记录哪些运算,以计算后向传递中的梯度。
梯度带包含对中间输出的引用,因此应避免记录不必要的操作。
最常见用例涉及计算损失相对于模型的所有可训练变量的梯度。
以下示例无法计算梯度,因为默认情况下 tf.Tensor 未被“监视”,并且 tf.Variable 不可训练:
End of explanation
"""
[var.name for var in tape.watched_variables()]
"""
Explanation: You can list the variables being watched by the tape using the GradientTape.watched_variables method:
End of explanation
"""
x = tf.constant(3.0)
with tf.GradientTape() as tape:
tape.watch(x)
y = x**2
# dy = 2x * dx
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())
"""
Explanation: tf.GradientTape provides hooks that give the user control over what is or is not watched.
To record gradients with respect to a tf.Tensor, you need to call GradientTape.watch(x):
End of explanation
"""
x0 = tf.Variable(0.0)
x1 = tf.Variable(10.0)
with tf.GradientTape(watch_accessed_variables=False) as tape:
tape.watch(x1)
y0 = tf.math.sin(x0)
y1 = tf.nn.softplus(x1)
y = y0 + y1
ys = tf.reduce_sum(y)
"""
Explanation: Conversely, to disable the default behavior of watching all tf.Variables, set watch_accessed_variables=False when creating the gradient tape. This calculation uses two variables, but only connects the gradient for one of them:
End of explanation
"""
# dys/dx1 = exp(x1) / (1 + exp(x1)) = sigmoid(x1)
grad = tape.gradient(ys, {'x0': x0, 'x1': x1})
print('dy/dx0:', grad['x0'])
print('dy/dx1:', grad['x1'].numpy())
"""
Explanation: Since GradientTape.watch was not called on x0, no gradient is computed with respect to it:
End of explanation
"""
x = tf.constant(3.0)
with tf.GradientTape() as tape:
tape.watch(x)
y = x * x
z = y * y
# Use the tape to compute the gradient of z with respect to the
# intermediate value y.
# dz_dy = 2 * y and y = x ** 2 = 9
print(tape.gradient(z, y).numpy())
"""
Explanation: Intermediate results
You can also request gradients of the output with respect to intermediate values computed inside the tf.GradientTape context.
End of explanation
"""
x = tf.constant([1, 3.0])
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
y = x * x
z = y * y
print(tape.gradient(z, x).numpy()) # [4.0, 108.0] (4 * x**3 at x = [1.0, 3.0])
print(tape.gradient(y, x).numpy()) # [2.0, 6.0] (2 * x at x = [1.0, 3.0])
del tape # Drop the reference to the tape
"""
Explanation: By default, the resources held by a GradientTape are released as soon as the GradientTape.gradient method is called. To compute multiple gradients over the same computation, create a gradient tape with persistent=True. This allows multiple calls to the gradient method, with the resources released when the tape object is garbage collected. For example:
End of explanation
"""
x = tf.Variable(2.0)
with tf.GradientTape(persistent=True) as tape:
y0 = x**2
y1 = 1 / x
print(tape.gradient(y0, x).numpy())
print(tape.gradient(y1, x).numpy())
"""
Explanation: Notes on performance
There is a small overhead associated with doing operations inside a gradient tape context. For most eager execution this will not be a noticeable cost, but you should still only use tape context around the areas where it is required.
Gradient tapes use memory to store intermediate results, including inputs and outputs, for use during the backward pass.
For efficiency, some ops (like ReLU) don't need to keep their intermediate results and they are pruned during the forward pass. However, if you use persistent=True on your tape, nothing is discarded and your peak memory usage will be higher.
Gradients of non-scalar targets
A gradient is fundamentally an operation on a scalar.
End of explanation
"""
x = tf.Variable(2.0)
with tf.GradientTape() as tape:
y0 = x**2
y1 = 1 / x
print(tape.gradient({'y0': y0, 'y1': y1}, x).numpy())
"""
Explanation: Thus, if you ask for the gradient of multiple targets, the result for each source is:
the gradient of the sum of the targets, or equivalently,
the sum of the gradients of each target.
End of explanation
"""
x = tf.Variable(2.)
with tf.GradientTape() as tape:
y = x * [3., 4.]
print(tape.gradient(y, x).numpy())
"""
Explanation: Similarly, if the target(s) are not scalar, the gradient of the sum is calculated:
End of explanation
"""
x = tf.linspace(-10.0, 10.0, 200+1)
with tf.GradientTape() as tape:
tape.watch(x)
y = tf.nn.sigmoid(x)
dy_dx = tape.gradient(y, x)
plt.plot(x, y, label='y')
plt.plot(x, dy_dx, label='dy/dx')
plt.legend()
_ = plt.xlabel('x')
"""
Explanation: This makes it simple to take the gradient of the sum of a collection of losses, or the gradient of the sum of an element-wise loss calculation.
If you need a separate gradient for each item, refer to Jacobians.
In some cases you can skip the Jacobian. For an element-wise calculation, the gradient of the sum gives the derivative of each element with respect to its input element, since each element is independent:
End of explanation
"""
x = tf.constant(1.0)
v0 = tf.Variable(2.0)
v1 = tf.Variable(2.0)
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
if x > 0.0:
result = v0
else:
result = v1**2
dv0, dv1 = tape.gradient(result, [v0, v1])
print(dv0)
print(dv1)
"""
Explanation: Control flow
Because a gradient tape records operations as they are executed, Python control flow (for example, if and while statements) is naturally handled.
Here a different variable is used on each branch of an if. The gradient only connects to the variable that was used:
End of explanation
"""
dx = tape.gradient(result, x)
print(dx)
"""
Explanation: Just remember that the control statements themselves are not differentiable, so they are invisible to gradient-based optimizers.
Depending on the value of x in the above example, the tape either records result = v0 or result = v1**2. The gradient with respect to x is always None.
End of explanation
"""
x = tf.Variable(2.)
y = tf.Variable(3.)
with tf.GradientTape() as tape:
z = y * y
print(tape.gradient(z, x))
"""
Explanation: Getting a gradient of None
When a target is not connected to a source, the gradient will be None.
End of explanation
"""
x = tf.Variable(2.0)
for epoch in range(2):
with tf.GradientTape() as tape:
y = x+1
print(type(x).__name__, ":", tape.gradient(y, x))
x = x + 1 # This should be `x.assign_add(1)`
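# (Added sketch, not part of the original guide.) The corrected version uses
# Variable.assign_add, so x stays a tf.Variable and the tape can produce a
# gradient on every iteration.
x = tf.Variable(2.0)
for epoch in range(2):
  with tf.GradientTape() as tape:
    y = x + 1
  print(type(x).__name__, ":", tape.gradient(y, x))
  x.assign_add(1.0)  # updates the variable in place instead of rebinding the name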
"""
Explanation: Here z is obviously not connected to x, but there are several less obvious ways that a gradient can be disconnected.
1. Replacing a variable with a tensor
As shown in the section on controlling what the tape watches, the tape automatically watches a tf.Variable but not a tf.Tensor.
One common error is to inadvertently replace a tf.Variable with a tf.Tensor, instead of using Variable.assign to update the tf.Variable. Here is an example:
End of explanation
"""
x = tf.Variable([[1.0, 2.0],
[3.0, 4.0]], dtype=tf.float32)
with tf.GradientTape() as tape:
x2 = x**2
# This step is calculated with NumPy
y = np.mean(x2, axis=0)
# Like most ops, reduce_mean will cast the NumPy array to a constant tensor
# using `tf.convert_to_tensor`.
y = tf.reduce_mean(y, axis=0)
print(tape.gradient(y, x))
"""
Explanation: 2. Doing calculations outside of TensorFlow
The tape can't record the gradient path if the calculation exits TensorFlow. For example:
End of explanation
"""
x = tf.constant(10)
with tf.GradientTape() as g:
g.watch(x)
y = x * x
print(g.gradient(y, x))
"""
Explanation: 3. Taking gradients through an integer or string
Integers and strings are not differentiable. If a calculation path uses these data types, there will be no gradient.
Nobody expects strings to be differentiable, but it's easy to accidentally create an int constant or variable if you don't specify the dtype.
End of explanation
"""
x0 = tf.Variable(3.0)
x1 = tf.Variable(0.0)
with tf.GradientTape() as tape:
# Update x1 = x1 + x0.
x1.assign_add(x0)
# The tape starts recording from x1.
y = x1**2 # y = (x1 + x0)**2
# This doesn't work.
print(tape.gradient(y, x0)) #dy/dx0 = 2*(x1 + x0)
"""
Explanation: TensorFlow doesn't automatically cast between types, so in practice you'll often get a type error instead of a missing gradient.
4. Taking gradients through a stateful object
State stops gradients. When you read from a stateful object, the tape can only observe the current state, not the history that led to it.
A tf.Tensor is immutable. You can't change a tensor once it's created. It has a value, but no state. All the operations discussed so far are also stateless: the output of tf.matmul only depends on its inputs.
A tf.Variable has internal state, namely its value. When you use the variable, the state is read. It is normal to calculate a gradient with respect to a variable, but the variable's state blocks gradient calculations from going farther back. For example:
End of explanation
"""
image = tf.Variable([[[0.5, 0.0, 0.0]]])
delta = tf.Variable(0.1)
with tf.GradientTape() as tape:
new_image = tf.image.adjust_contrast(image, delta)
try:
print(tape.gradient(new_image, [image, delta]))
assert False # This should not happen.
except LookupError as e:
print(f'{type(e).__name__}: {e}')
"""
Explanation: Similarly, tf.data.Dataset iterators and tf.queues are stateful, and will stop all gradients on tensors that pass through them.
No gradient registered
Some tf.Operations are registered as being non-differentiable and will return None. Others have no gradient registered at all.
The tf.raw_ops page shows which low-level ops have gradients registered.
If you attempt to take a gradient through a float op that has no gradient registered, the tape will throw an error instead of silently returning None. This way you know something has gone wrong.
For example, the tf.image.adjust_contrast function wraps raw_ops.AdjustContrastv2, which could have a gradient but the gradient is not implemented:
End of explanation
"""
x = tf.Variable([2., 2.])
y = tf.Variable(3.)
with tf.GradientTape() as tape:
z = y**2
print(tape.gradient(z, x, unconnected_gradients=tf.UnconnectedGradients.ZERO))
"""
Explanation: If you need to differentiate through this op, you'll either need to implement the gradient and register it (using tf.RegisterGradient), or re-implement the function using other ops.
Zeros instead of None
In some cases it would be convenient to get 0 instead of None for unconnected gradients. You can decide what to return when you have unconnected gradients using the unconnected_gradients argument:
End of explanation
"""
|
guyk1971/deep-learning
|
dcgan-svhn/DCGAN.ipynb
|
mit
|
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
"""
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2015 and has seen impressive results in generating new images; you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
#data_dir = 'data/'
data_dir='/home/guy/datasets/svhn/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
"""
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
"""
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
"""
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
"""
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
"""
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
"""
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
            idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
"""
Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
"""
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
# 4x4x512 now
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
# 8x8x256 now
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# 16x16x128 now
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
# 32x32x3 now
out = tf.tanh(logits)
return out
"""
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
End of explanation
"""
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
# 16x16x64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
# 8x8x128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
# 4x4x256
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, or 64 filters in the first layer, then doubling the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.
End of explanation
"""
def model_loss(input_real, input_z, output_dim, alpha=0.2):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
    :param output_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
"""
Explanation: Model Loss
Calculating the loss like before, nothing new here.
End of explanation
"""
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
"""
Explanation: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
End of explanation
"""
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
                                              real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
"""
Explanation: Building the model
Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
End of explanation
"""
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
"""
Explanation: Here is a function for displaying generated images.
End of explanation
"""
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
                    # Every print_every steps, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
"""
Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
End of explanation
"""
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
"""
Explanation: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
End of explanation
"""
|
Merinorus/adaisawesome
|
Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb
|
gpl-3.0
|
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')
"""
Explanation: Table of Contents
<p><div class="lev1"><a href="#Data-Wrangling-with-Pandas"><span class="toc-item-num">1 </span>Data Wrangling with Pandas</a></div><div class="lev2"><a href="#Date/Time-data-handling"><span class="toc-item-num">1.1 </span>Date/Time data handling</a></div><div class="lev2"><a href="#Merging-and-joining-DataFrame-objects"><span class="toc-item-num">1.2 </span>Merging and joining DataFrame objects</a></div><div class="lev2"><a href="#Concatenation"><span class="toc-item-num">1.3 </span>Concatenation</a></div><div class="lev2"><a href="#Exercise-1"><span class="toc-item-num">1.4 </span>Exercise 1</a></div><div class="lev2"><a href="#Reshaping-DataFrame-objects"><span class="toc-item-num">1.5 </span>Reshaping DataFrame objects</a></div><div class="lev2"><a href="#Pivoting"><span class="toc-item-num">1.6 </span>Pivoting</a></div><div class="lev2"><a href="#Data-transformation"><span class="toc-item-num">1.7 </span>Data transformation</a></div><div class="lev3"><a href="#Dealing-with-duplicates"><span class="toc-item-num">1.7.1 </span>Dealing with duplicates</a></div><div class="lev3"><a href="#Value-replacement"><span class="toc-item-num">1.7.2 </span>Value replacement</a></div><div class="lev3"><a href="#Inidcator-variables"><span class="toc-item-num">1.7.3 </span>Inidcator variables</a></div><div class="lev2"><a href="#Categorical-Data"><span class="toc-item-num">1.8 </span>Categorical Data</a></div><div class="lev3"><a href="#Discretization"><span class="toc-item-num">1.8.1 </span>Discretization</a></div><div class="lev3"><a href="#Permutation-and-sampling"><span class="toc-item-num">1.8.2 </span>Permutation and sampling</a></div><div class="lev2"><a href="#Data-aggregation-and-GroupBy-operations"><span class="toc-item-num">1.9 </span>Data aggregation and GroupBy operations</a></div><div class="lev3"><a href="#Apply"><span class="toc-item-num">1.9.1 </span>Apply</a></div><div class="lev2"><a href="#Exercise-2"><span class="toc-item-num">1.10 </span>Exercise 2</a></div><div class="lev2"><a href="#References"><span class="toc-item-num">1.11 </span>References</a></div>
# Data Wrangling with Pandas
Now that we have been exposed to the basic functionality of Pandas, let's explore some more advanced features that will be useful when addressing more complex data management tasks.
As most statisticians/data analysts will admit, often the lion's share of the time spent implementing an analysis is devoted to preparing the data itself, rather than to coding or running a particular model that uses the data. This is where Pandas and Python's standard library are beneficial, providing high-level, flexible, and efficient tools for manipulating your data as needed.
End of explanation
"""
from datetime import datetime
now = datetime.now()
now
now.day
now.weekday()
"""
Explanation: Date/Time data handling
Date and time data are inherently problematic. There are an unequal number of days in every month, an unequal number of days in a year (due to leap years), and time zones that vary over space. Yet information about time is essential in many analyses, particularly in the case of time series analysis.
The datetime built-in library handles temporal information down to the microsecond.
End of explanation
"""
from datetime import date, time
time(3, 24)
date(1970, 9, 3)
"""
Explanation: In addition to datetime there are simpler objects for date and time information only, respectively.
End of explanation
"""
my_age = now - datetime(1970, 1, 1)
my_age
print(type(my_age))
my_age.days/365
"""
Explanation: Having a custom data type for dates and times is convenient because we can perform operations on them easily. For example, we may want to calculate the difference between two times:
End of explanation
"""
segments = pd.read_csv("Data/AIS/transit_segments.csv")
segments.head()
"""
Explanation: In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed.
The International Maritime Organization’s (IMO) International Convention for the Safety of Life at Sea requires functioning AIS capabilities on all vessels 300 gross tons or greater and the US Coast Guard requires AIS on nearly all vessels sailing in U.S. waters. The Coast Guard has established a national network of AIS receivers that provides coverage of nearly all U.S. waters. AIS signals are transmitted several times each minute and the network is capable of handling thousands of reports per minute and updates as often as every two seconds. Therefore, a typical voyage in our study might include the transmission of hundreds or thousands of AIS encoded signals. This provides a rich source of spatial data that includes both spatial and temporal information.
For our purposes, we will use summarized data that describes the transit of a given vessel through a particular administrative area. The data includes the start and end time of the transit segment, as well as information about the speed of the vessel, how far it travelled, etc.
End of explanation
"""
segments.seg_length.hist(bins=500)
"""
Explanation: For example, we might be interested in the distribution of transit lengths, so we can plot them as a histogram:
End of explanation
"""
segments.seg_length.apply(np.log).hist(bins=500)
"""
Explanation: Though most of the transits appear to be short, there are a few longer distances that make the plot difficult to read. This is where a transformation is useful:
End of explanation
"""
segments.st_time.dtype
"""
Explanation: We can see that although there are date/time fields in the dataset, they are not in any specialized format, such as datetime.
End of explanation
"""
datetime.strptime(segments.st_time.ix[0], '%m/%d/%y %H:%M')
"""
Explanation: Our first order of business will be to convert these data to datetime. The strptime method parses a string representation of a date and/or time field, according to the expected format of this information.
End of explanation
"""
from dateutil.parser import parse
parse(segments.st_time.ix[0])
"""
Explanation: The dateutil package includes a parser that attempts to detect the format of the date strings, and convert them automatically.
End of explanation
"""
segments.st_time.apply(lambda d: datetime.strptime(d, '%m/%d/%y %H:%M'))
"""
Explanation: We can convert all the dates in a particular column by using the apply method.
End of explanation
"""
pd.to_datetime(segments.st_time[:10])
"""
Explanation: As a convenience, Pandas has a to_datetime method that will parse and convert an entire Series of formatted strings into datetime objects.
End of explanation
"""
pd.to_datetime([None])
"""
Explanation: Pandas also has a custom NA value for missing datetime objects, NaT.
End of explanation
"""
segments = pd.read_csv("Data/AIS/transit_segments.csv", parse_dates=['st_time', 'end_time'])
segments.dtypes
"""
Explanation: Also, if to_datetime() has problems parsing any particular date/time format, you can pass the spec in using the format= argument.
The read_* functions now have an optional parse_dates argument that try to convert any columns passed to it into datetime format upon import:
End of explanation
"""
segments.st_time.dt.month.head()
segments.st_time.dt.hour.head()
"""
Explanation: Columns of the datetime type have an accessor to easily extract properties of the data type. This will return a Series, with the same row index as the DataFrame. For example:
End of explanation
"""
segments[segments.st_time.dt.month==2].head()
"""
Explanation: This can be used to easily filter rows by particular temporal attributes:
End of explanation
"""
segments.st_time.dt.tz_localize('UTC').head()
segments.st_time.dt.tz_localize('UTC').dt.tz_convert('US/Eastern').head()
"""
Explanation: In addition, time zone information can be applied:
End of explanation
"""
vessels = pd.read_csv("Data/AIS/vessel_information.csv", index_col='mmsi')
vessels.head()
[v for v in vessels.type.unique() if v.find('/')==-1]
vessels.type.value_counts()
"""
Explanation: Merging and joining DataFrame objects
Now that we have the vessel transit information as we need it, we may want a little more information regarding the vessels themselves. In the data/AIS folder there is a second table that contains information about each of the ships that traveled the segments in the segments table.
End of explanation
"""
df1 = pd.DataFrame(dict(id=range(4), age=np.random.randint(18, 31, size=4)))
df2 = pd.DataFrame(dict(id=list(range(3))+list(range(3)),
score=np.random.random(size=6)))
df1
df2
pd.merge(df1, df2)
"""
Explanation: The challenge, however, is that several ships have travelled multiple segments, so there is not a one-to-one relationship between the rows of the two tables. The table of vessel information has a one-to-many relationship with the segments.
In Pandas, we can combine tables according to the value of one or more keys that are used to identify rows, much like an index. Using a trivial example:
End of explanation
"""
pd.merge(df1, df2, how='outer')
"""
Explanation: Notice that without any information about which column to use as a key, Pandas did the right thing and used the id column in both tables. Unless specified otherwise, merge will use any common column names as keys for merging the tables.
Notice also that id=3 from df1 was omitted from the merged table. This is because, by default, merge performs an inner join on the tables, meaning that the merged table represents an intersection of the two tables.
End of explanation
"""
segments.head(1)
vessels.head(1)
"""
Explanation: The outer join above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform right and left joins to include all rows of the right or left table (i.e. first or second argument to merge), but not necessarily the other.
Looking at the two datasets that we wish to merge:
End of explanation
"""
segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi')
segments_merged.head()
"""
Explanation: we see that there is a mmsi value (a vessel identifier) in each table, but it is used as an index for the vessels table. In this case, we have to specify to join on the index for this table, and on the mmsi column for the other.
End of explanation
"""
vessels.merge(segments, left_index=True, right_on='mmsi').head()
"""
Explanation: In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other.
Notice that mmsi field that was an index on the vessels table is no longer an index on the merged table.
Here, we used the merge function to perform the merge; we could also have used the merge method for either of the tables:
End of explanation
"""
segments['type'] = 'foo'
pd.merge(vessels, segments, left_index=True, right_on='mmsi').head()
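# (Added sketch.) Overriding the default _x/_y suffixes for the clashing
# 'type' column with the suffixes argument:
pd.merge(vessels, segments, left_index=True, right_on='mmsi',
         suffixes=('_vessel', '_segment')).head()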
"""
Explanation: Occasionally, there will be fields with the same name in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. In this case, Pandas will by default append suffixes _x and _y to the columns to uniquely identify them.
End of explanation
"""
np.concatenate([np.random.random(5), np.random.random(5)])
np.r_[np.random.random(5), np.random.random(5)]
np.c_[np.random.random(5), np.random.random(5)]
"""
Explanation: This behavior can be overridden by specifying a suffixes argument, containing a list of the suffixes to be used for the columns of the left and right columns, respectively.
Concatenation
A common data manipulation is appending rows or columns to a dataset that already conform to the dimensions of the exsiting rows or colums, respectively. In NumPy, this is done either with concatenate or the convenience "functions" c_ and r_:
End of explanation
"""
mb1 = pd.read_excel('Data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None)
mb2 = pd.read_excel('Data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None)
mb1.shape, mb2.shape
mb1.head()
"""
Explanation: Notice that c_ and r_ are not really functions at all, since they perform an indexing operation rather than being called. They are actually class instances, but they are here behaving mostly like functions. Don't think about this too hard; just know that they are there.
This operation is also called binding or stacking.
With Pandas' indexed data structures, there are additional considerations, as the overlap in index values between two data structures affects how they are concatenated.
Let's import two microbiome datasets, each consisting of counts of microorganisms from a particular patient. We will use the first column of each dataset as the index.
End of explanation
"""
mb1.columns = mb2.columns = ['Count']
mb1.index.name = mb2.index.name = 'Taxon'
mb1.head()
"""
Explanation: Let's give the index and columns meaningful labels:
End of explanation
"""
mb1.index[:3]
mb1.index.is_unique
"""
Explanation: The index of these data is the unique biological classification of each organism, beginning with domain, phylum, class, and for some organisms, going all the way down to the genus level.
End of explanation
"""
pd.concat([mb1, mb2], axis=0).shape
"""
Explanation: If we concatenate along axis=0 (the default), we will obtain another data frame with the rows concatenated:
End of explanation
"""
pd.concat([mb1, mb2], axis=0).index.is_unique
"""
Explanation: However, the index is no longer unique, due to overlap between the two DataFrames.
End of explanation
"""
pd.concat([mb1, mb2], axis=1).shape
pd.concat([mb1, mb2], axis=1).head()
"""
Explanation: Concatenating along axis=1 will concatenate column-wise, but respecting the indices of the two DataFrames.
End of explanation
"""
pd.concat([mb1, mb2], axis=1, join='inner').head()
"""
Explanation: If we are only interested in taxa that are included in both DataFrames, we can specify a join='inner' argument.
End of explanation
"""
mb1.combine_first(mb2).head()
"""
Explanation: If we wanted to use the second table to fill values absent from the first table, we could use combine_first.
End of explanation
"""
pd.concat([mb1, mb2], keys=['patient1', 'patient2']).head()
pd.concat([mb1, mb2], keys=['patient1', 'patient2']).index.is_unique
"""
Explanation: We can also create a hierarchical index based on keys identifying the original tables.
End of explanation
"""
pd.concat(dict(patient1=mb1, patient2=mb2), axis=1).head()
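# (Added sketch.) With ignore_index=True, concat discards the row labels and
# behaves like numpy.concatenate, producing a fresh integer index:
pd.concat([mb1, mb2], ignore_index=True).head()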
"""
Explanation: Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict, resulting in a "wide" format table.
End of explanation
"""
# Write solution here
"""
Explanation: If you want concat to work like numpy.concatenate, you may provide the ignore_index=True argument.
Exercise 1
In the data/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10th file that describes the content of each. Write code that imports each of the data spreadsheets and combines them into a single DataFrame, adding the identifying information from the metadata spreadsheet as columns in the combined DataFrame.
End of explanation
"""
cdystonia = pd.read_csv("Data/cdystonia.csv", index_col=None)
cdystonia.head()
"""
Explanation: Reshaping DataFrame objects
In the context of a single DataFrame, we are often interested in re-arranging the layout of our data.
This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.
Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)
Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)
TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began
End of explanation
"""
stacked = cdystonia.stack()
stacked
"""
Explanation: This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways: showing each repeated measurement in its own row, or in multiple columns representing multiple measurements.
The stack method rotates the data frame so that columns are represented in rows:
End of explanation
"""
stacked.unstack().head()
"""
Explanation: To complement this, unstack pivots from rows back to columns.
End of explanation
"""
cdystonia2 = cdystonia.set_index(['patient','obs'])
cdystonia2.head()
cdystonia2.index.is_unique
"""
Explanation: For this dataset, it makes sense to create a hierarchical index based on the patient and observation:
End of explanation
"""
twstrs_wide = cdystonia2['twstrs'].unstack('obs')
twstrs_wide.head()
cdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']]
.drop_duplicates()
.merge(twstrs_wide, right_index=True, left_on='patient', how='inner')
.head())
cdystonia_wide
"""
Explanation: If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.
End of explanation
"""
(cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs']
.unstack('week').head())
"""
Explanation: A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking:
End of explanation
"""
pd.melt(cdystonia_wide, id_vars=['patient','site','id','treat','age','sex'],
var_name='obs', value_name='twsters').head()
"""
Explanation: To convert our "wide" format back to long, we can use the melt function, appropriately parameterized. This function is useful for DataFrames where one
or more columns are identifier variables (id_vars), with the remaining columns being measured variables (value_vars). The measured variables are "unpivoted" to
the row axis, leaving just two non-identifier columns, a variable and its corresponding value, which can both be renamed using optional arguments.
End of explanation
"""
cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()
"""
Explanation: This illustrates the two formats for longitudinal data: long and wide formats. It's typically better to store data in long format because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.
The preferable format for analysis depends entirely on what is planned for the data, so it is important to be able to move easily between them.
Pivoting
The pivot method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. It takes three arguments: index, columns and values, corresponding to the DataFrame index (the row headers), columns and cell values, respectively.
For example, we may want the twstrs variable (the response variable) in wide format according to patient, as we saw with the unstacking method above:
End of explanation
"""
cdystonia.pivot('patient', 'obs')
"""
Explanation: If we omit the values argument, we get a DataFrame with hierarchical columns, just as when we applied unstack to the hierarchically-indexed table:
End of explanation
"""
cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs',
aggfunc=max).head(20)
"""
Explanation: A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.
End of explanation
"""
pd.crosstab(cdystonia.sex, cdystonia.site)
"""
Explanation: For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.
End of explanation
"""
vessels.duplicated(subset='names')
vessels.drop_duplicates(['names'])
"""
Explanation: Data transformation
There are a slew of additional operations for DataFrames that we would collectively refer to as "transformations" which include tasks such as removing duplicate values, replacing values, and grouping values.
Dealing with duplicates
We can easily identify and remove duplicate values from DataFrame objects. For example, say we want to remove ships from our vessels dataset that have the same name:
End of explanation
"""
cdystonia.treat.value_counts()
"""
Explanation: Value replacement
Frequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including them in a quantitative analysis. For example, consider the treatment variable in the cervical dystonia dataset:
End of explanation
"""
treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}
cdystonia['treatment'] = cdystonia.treat.map(treatment_map)
cdystonia.treatment
"""
Explanation: A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.
End of explanation
"""
vals = pd.Series([float(i)**10 for i in range(10)])
vals
np.log(vals)
"""
Explanation: Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.
An example where replacement is useful is dealing with zeros in certain transformations. For example, if we try to take the log of a set of values:
End of explanation
"""
vals = vals.replace(0, 1e-6)
np.log(vals)
"""
Explanation: In such situations, we can replace the zero with a value so small that it makes no difference to the ensuing analysis. We can do this with replace.
End of explanation
"""
cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})
"""
Explanation: We can also perform the same replacement that we used map for with replace:
End of explanation
"""
top5 = vessels.type.isin(vessels.type.value_counts().index[:5])
top5.head(10)
vessels5 = vessels[top5]
pd.get_dummies(vessels5.type).head(10)
"""
Explanation: Indicator variables
For some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The Pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward.
Let's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The type variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, let's keep only the 5 most common types of ships:
End of explanation
"""
cdystonia.treat.head()
"""
Explanation: Categorical Data
Pandas provides a convenient dtype for representing categorical (factor) data, called category.
For example, the treat column in the cervical dystonia dataset represents three treatment levels in a clinical trial, and is imported by default as an object type, since its values are strings.
End of explanation
"""
pd.Categorical(cdystonia.treat)
cdystonia['treat'] = cdystonia.treat.astype('category')
cdystonia.treat.describe()
"""
Explanation: We can convert this to a category type either by the Categorical constructor, or casting the column using astype:
End of explanation
"""
cdystonia.treat.cat.categories
"""
Explanation: By default the Categorical type represents an unordered categorical.
End of explanation
"""
# Reorder (rather than rename) the categories so 'Placebo' < '5000U' < '10000U'
cdystonia['treat'] = cdystonia.treat.cat.reorder_categories(['Placebo', '5000U', '10000U'])
cdystonia.treat.cat.as_ordered().head()
"""
Explanation: However, an ordering can be imposed. The order is lexical by default, but the order of the listed categories will be taken as the desired order.
End of explanation
"""
cdystonia.treat.cat.codes
"""
Explanation: The important difference between the category type and the object type is that category is represented by an underlying array of integers, which is then mapped to character labels.
End of explanation
"""
%time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head()
segments['name'] = segments.name.astype('category')
%time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head()
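# (Added sketch.) The memory saving from the category dtype is also easy to see:
print(segments.name.memory_usage(deep=True))                   # as category
print(segments.name.astype('object').memory_usage(deep=True))  # same data as object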
"""
Explanation: Notice that these are 8-bit integers, which are essentially single bytes of data, making memory usage lower.
There is also a performance benefit. Consider an operation such as calculating the total segment lengths for each ship in the segments table (this is also a preview of pandas' groupby operation!):
End of explanation
"""
cdystonia.age.describe()
"""
Explanation: Hence, we get a considerable speedup simply by using the appropriate dtype for our data.
Discretization
Pandas' cut function can be used to group continuous or countable data into bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly!
Let's say we want to bin the ages of the cervical dystonia patients into a smaller number of groups:
End of explanation
"""
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]
"""
Explanation: Let's transform these data into decades, beginning with individuals in their 20's and ending with those in their 80's:
End of explanation
"""
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30]
"""
Explanation: The parentheses indicate an open interval, meaning that the interval includes values up to but not including the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the right flag to False:
End of explanation
"""
pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','really old'])[:30]
"""
Explanation: Since the data are now ordinal, rather than numeric, we can give them labels:
End of explanation
"""
pd.qcut(cdystonia.age, 4)[:30]
"""
Explanation: A related function qcut uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default:
End of explanation
"""
quantiles = pd.qcut(segments.seg_length, [0, 0.01, 0.05, 0.95, 0.99, 1])
quantiles[:30]
"""
Explanation: Alternatively, one can specify custom quantiles to act as cut points:
End of explanation
"""
pd.get_dummies(quantiles).head(10)
"""
Explanation: Note that you can easily combine discretization with the generation of indicator variables shown above:
End of explanation
"""
new_order = np.random.permutation(len(segments))
new_order[:30]
"""
Explanation: Permutation and sampling
For some data analysis tasks, such as simulation, we need to be able to randomly reorder our data, or draw random values from it. Calling NumPy's permutation function with the length of the sequence you want to permute generates an array with a permuted sequence of integers, which can be used to re-order the sequence.
End of explanation
"""
segments.take(new_order).head()
"""
Explanation: Using this sequence as an argument to the take method results in a reordered DataFrame:
End of explanation
"""
segments.head()
"""
Explanation: Compare this ordering with the original:
End of explanation
"""
vessels.sample(n=10)
vessels.sample(n=10, replace=True)
"""
Explanation: For random sampling, DataFrame and Series objects have a sample method that can be used to draw samples, with or without replacement:
End of explanation
"""
cdystonia_grouped = cdystonia.groupby(cdystonia.patient)
"""
Explanation: Data aggregation and GroupBy operations
One of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For example:
aggregation, such as computing the sum or mean of each group, which involves applying a function to each group and returning the aggregated results
slicing the DataFrame into groups and then doing something with the resulting slices (e.g. plotting)
group-wise transformation, such as standardization/normalization
End of explanation
"""
cdystonia_grouped
"""
Explanation: This grouped dataset is hard to visualize
End of explanation
"""
for patient, group in cdystonia_grouped:
print('patient', patient)
print('group', group)
"""
Explanation: However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups:
End of explanation
"""
cdystonia_grouped.agg(np.mean).head()
"""
Explanation: A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.
For example, we may want to aggregate our data with some function.
<div align="right">*(figure taken from "Python for Data Analysis", p.251)*</div>
We can aggregate in Pandas using the aggregate (or agg, for short) method:
End of explanation
"""
cdystonia_grouped.mean().head()
"""
Explanation: Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to aggregate non-numeric variables, these columns are simply ignored by the method.
Some aggregation functions are so common that Pandas has a convenience method for them, such as mean:
End of explanation
"""
cdystonia_grouped.mean().add_suffix('_mean').head()
# The median of the `twstrs` variable
cdystonia_grouped['twstrs'].quantile(0.5)
"""
Explanation: The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation:
End of explanation
"""
cdystonia.groupby(['week','site']).mean().head()
"""
Explanation: If we wish, we can easily aggregate according to multiple keys:
End of explanation
"""
normalize = lambda x: (x - x.mean())/x.std()
cdystonia_grouped.transform(normalize).head()
"""
Explanation: Alternately, we can transform the data, using a function of our choice with the transform method:
End of explanation
"""
cdystonia_grouped['twstrs'].mean().head()
# This gives the same result as a DataFrame
cdystonia_grouped[['twstrs']].mean().head()
"""
Explanation: It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns:
End of explanation
"""
chunks = dict(list(cdystonia_grouped))
chunks[4]
"""
Explanation: If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed:
End of explanation
"""
grouped_by_type = cdystonia.groupby(cdystonia.dtypes, axis=1)
{g:grouped_by_type.get_group(g) for g in grouped_by_type.groups}
"""
Explanation: By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way:
End of explanation
"""
cdystonia2.head(10)
cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()
"""
Explanation: It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index:
End of explanation
"""
def top(df, column, n=5):
return df.sort_values(by=column, ascending=False)[:n]
"""
Explanation: Apply
We can generalize the split-apply-combine methodology by using apply function. This allows us to invoke any function we wish on a grouped dataset and recombine them into a DataFrame.
The function below takes a DataFrame and a column name, sorts by the column, and takes the n largest values of that column. We can use this with apply to return the largest values from every group in a DataFrame in a single call.
End of explanation
"""
top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments.head(15)
"""
Explanation: To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship:
End of explanation
"""
mb1.index[:3]
"""
Explanation: Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument.
Recall the microbiome data sets that we used previously for the concatenation example. Suppose that we wish to aggregate the data at a higher biological classification than genus. For example, we can identify samples down to class, which is the 3rd level of organization in each index.
End of explanation
"""
class_index = mb1.index.map(lambda x: ' '.join(x.split(' ')[:3]))
mb_class = mb1.copy()
mb_class.index = class_index
"""
Explanation: Using the string methods split and join we can create an index that just uses the first three classifications: domain, phylum and class.
End of explanation
"""
mb_class.head()
"""
Explanation: However, since there are multiple taxonomic units with the same class, our index is no longer unique:
End of explanation
"""
mb_class.groupby(level=0).sum().head(10)
"""
Explanation: We can re-establish a unique index by summing all rows with the same class, using groupby:
End of explanation
"""
from IPython.core.display import HTML
HTML(filename='Data/titanic.html')
"""
Explanation: Exercise 2
Load the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.
End of explanation
"""
titanic_df = pd.read_excel('Data/titanic.xls', 'titanic', index_col=None, header=0)
titanic_df
titanic_nameduplicate = titanic_df.duplicated(subset='name')
#titanic_nameduplicate
titanic_df.drop_duplicates(['name'])
gender_map = {'male':0, 'female':1}
titanic_df['sex'] = titanic_df.sex.map(gender_map)
titanic_df
titanic_grouped = titanic_df.groupby(titanic_df.sex)
titanic_grouped
for sex, group in titanic_grouped:
    print('sex', sex)
    print('group', group)
# Proportion of passengers that survived, by sex
titanic_grouped['survived'].mean()
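# (Added sketch of the remaining exercise parts; assumes the pclass, survived
# and age columns present in this titanic dataset.)
# Survival proportion by class and sex:
print(titanic_df.groupby(['pclass', 'sex'])['survived'].mean())
# Survival proportion by age category, class and sex:
age_cat = pd.cut(titanic_df.age, [0, 13, 20, 64, 120],
                 labels=['child', 'adolescent', 'adult', 'senior'])
print(titanic_df.groupby([age_cat, 'pclass', 'sex'])['survived'].mean())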
"""
Explanation: Women and children first?
Describe each attribute, both with basic statistics and plots. State clearly your assumptions and discuss your findings.
Use the groupby method to calculate the proportion of passengers that survived by sex.
Calculate the same proportion, but by class and sex.
Create age categories: children (under 14 years), adolescents (14-20), adult (21-64), and senior(65+), and calculate survival proportions by age category, class and sex.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.14/_downloads/plot_stats_cluster_spatio_temporal.ipynb
|
bsd-3-clause
|
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
from scipy import stats as stats
import mne
from mne import (io, spatial_tris_connectivity, compute_morph_matrix,
grade_to_tris)
from mne.epochs import equalize_epoch_counts
from mne.stats import (spatio_temporal_cluster_1samp_test,
summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
"""
Explanation: Permutation t-test on source data with spatio-temporal clustering
Tests if the evoked response is significantly different between
conditions across subjects (simulated here using one subject's data).
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
"""
Explanation: Set parameters
End of explanation
"""
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
event_id = 1 # L auditory
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
event_id = 3 # L visual
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
equalize_epoch_counts([epochs1, epochs2])
"""
Explanation: Read epochs for all channels, removing a bad one
End of explanation
"""
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
inverse_operator = read_inverse_operator(fname_inv)
sample_vertices = [s['vertno'] for s in inverse_operator['src']]
# Let's average and compute inverse, resampling to speed things up
evoked1 = epochs1.average()
evoked1.resample(50, npad='auto')
condition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)
evoked2 = epochs2.average()
evoked2.resample(50, npad='auto')
condition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition1.crop(0, None)
condition2.crop(0, None)
tmin = condition1.tmin
tstep = condition1.tstep
"""
Explanation: Transform to source space
End of explanation
"""
n_vertices_sample, n_times = condition1.data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10
X[:, :, :, 0] += condition1.data[:, :, np.newaxis]
X[:, :, :, 1] += condition2.data[:, :, np.newaxis]
"""
Explanation: Transform to common cortical space
Normally you would read in estimates across several subjects and morph
them to the same cortical space (e.g. fsaverage). For example purposes,
we will simulate this by just having each "subject" have the same
response (just noisy in source space) here.
<div class="alert alert-info"><h4>Note</h4><p>Note that for 7 subjects with a two-sided statistical test, the minimum
significance under a permutation test is only p = 1/(2 ** 6) = 0.015,
which is large.</p></div>
End of explanation
"""
fsave_vertices = [np.arange(10242), np.arange(10242)]
morph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices,
fsave_vertices, 20, subjects_dir)
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 2)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)
"""
Explanation: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 source space
with vertices 0:10242 for each hemisphere. Usually you'd have to morph
each subject's data separately (and you might want to use morph_data
instead), but here since all estimates are on 'sample' we can use one
morph matrix for all the heavy lifting.
End of explanation
"""
X = np.abs(X) # only magnitude
X = X[:, :, :, 0] - X[:, :, :, 1] # make paired contrast
"""
Explanation: Finally, we want to compare the overall activity levels in each condition. The difference is taken along the last axis (condition), so condition1 > condition2 shows up as "red blobs" (instead of blue).
End of explanation
"""
print('Computing connectivity.')
connectivity = spatial_tris_connectivity(grade_to_tris(5))
# Note that X needs to be a multi-dimensional array of shape
# samples (subjects) x time x space, so we permute dimensions
X = np.transpose(X, [2, 1, 0])
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.001
t_threshold = -stats.distributions.t.ppf(p_threshold / 2., n_subjects - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_1samp_test(X, connectivity=connectivity, n_jobs=1,
threshold=t_threshold)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
"""
Explanation: Compute statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal)
End of explanation
"""
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A < condition B, red for A > B
brain = stc_all_cluster_vis.plot(hemi='both', views='lateral',
subjects_dir=subjects_dir,
time_label='Duration significant (ms)')
brain.save_image('clusters.png')
"""
Explanation: Visualize the clusters
End of explanation
"""
|
dsquareindia/gensim
|
docs/notebooks/Corpora_and_Vector_Spaces.ipynb
|
lgpl-2.1
|
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
"""
Explanation: Tutorial 1: Corpora and Vector Spaces
See this gensim tutorial on the web here.
Don’t forget to set:
End of explanation
"""
from gensim import corpora
documents = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
"""
Explanation: if you want to see logging events.
From Strings to Vectors
This time, let’s start from documents represented as strings:
End of explanation
"""
# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in documents]
# remove words that appear only once
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1] for text in texts]
from pprint import pprint # pretty-printer
pprint(texts)
"""
Explanation: This is a tiny corpus of nine documents, each consisting of only a single sentence.
First, let’s tokenize the documents, remove common words (using a toy stoplist) as well as words that only appear once in the corpus:
End of explanation
"""
dictionary = corpora.Dictionary(texts)
dictionary.save('/tmp/deerwester.dict') # store the dictionary, for future reference
print(dictionary)
"""
Explanation: Your way of processing the documents will likely vary; here, I only split on whitespace to tokenize, followed by lowercasing each word. In fact, I use this particular (simplistic and inefficient) setup to mimic the experiment done in Deerwester et al.’s original LSA article (Table 2).
The ways to process documents are so varied and application- and language-dependent that I decided to not constrain them by any interface. Instead, a document is represented by the features extracted from it, not by its “surface” string form: how you get to the features is up to you. Below I describe one common, general-purpose approach (called bag-of-words), but keep in mind that different application domains call for different features, and, as always, it’s garbage in, garbage out...
To convert documents to vectors, we’ll use a document representation called bag-of-words. In this representation, each document is represented by one vector where each vector element represents a question-answer pair, in the style of:
"How many times does the word system appear in the document? Once"
It is advantageous to represent the questions only by their (integer) ids. The mapping between the questions and ids is called a dictionary:
End of explanation
"""
print(dictionary.token2id)
"""
Explanation: Here we assigned a unique integer id to all words appearing in the corpus with the gensim.corpora.dictionary.Dictionary class. This sweeps across the texts, collecting word counts and relevant statistics. In the end, we see there are twelve distinct words in the processed corpus, which means each document will be represented by twelve numbers (i.e., by a 12-D vector). To see the mapping between words and their ids:
End of explanation
"""
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
print(new_vec) # the word "interaction" does not appear in the dictionary and is ignored
"""
Explanation: To actually convert tokenized documents to vectors:
End of explanation
"""
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize('/tmp/deerwester.mm', corpus) # store to disk, for later use
for c in corpus:
print(c)
"""
Explanation: The function doc2bow() simply counts the number of occurrences of each distinct word, converts the word to its integer word id and returns the result as a sparse vector. The sparse vector [(word_id, 1), (word_id, 1)] therefore reads: in the document “Human computer interaction”, the words "computer" and "human", identified by an integer id given by the built dictionary, appear once; the other ten dictionary words appear (implicitly) zero times. Check their id at the dictionary displayed in the previous cell and see that they match.
End of explanation
"""
class MyCorpus(object):
def __iter__(self):
for line in open('datasets/mycorpus.txt'):
# assume there's one document per line, tokens separated by whitespace
yield dictionary.doc2bow(line.lower().split())
"""
Explanation: By now it should be clear that the vector feature with id=10 stands for the question "How many times does the word graph appear in the document?" and that the answer is "zero" for the first six documents and "one" for the remaining three. As a matter of fact, we have arrived at exactly the same corpus of vectors as in the Quick Example. If you're running this notebook on your own, the word ids may differ, but you should be able to check the consistency between documents by comparing their vectors.
Corpus Streaming – One Document at a Time
Note that corpus above resides fully in memory, as a plain Python list. In this simple example, it doesn’t matter much, but just to make things clear, let’s assume there are millions of documents in the corpus. Storing all of them in RAM won’t do. Instead, let’s assume the documents are stored in a file on disk, one document per line. Gensim only requires that a corpus must be able to return one document vector at a time:
End of explanation
"""
corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!
print(corpus_memory_friendly)
"""
Explanation: The assumption that each document occupies one line in a single file is not important; you can mold the __iter__ function to fit your input format, whatever it is. Walking directories, parsing XML, accessing network... Just parse your input to retrieve a clean list of tokens in each document, then convert the tokens via a dictionary to their ids and yield the resulting sparse vector inside __iter__.
End of explanation
"""
for vector in corpus_memory_friendly: # load one vector into memory at a time
print(vector)
"""
Explanation: Corpus is now an object. We didn't define any way to print it, so print just outputs the address of the object in memory, which is not very useful. To see the constituent vectors, let's iterate over the corpus and print each document vector (one at a time):
End of explanation
"""
from six import iteritems
# collect statistics about all tokens
dictionary = corpora.Dictionary(line.lower().split() for line in open('datasets/mycorpus.txt'))
# remove stop words and words that appear only once
stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
if stopword in dictionary.token2id]
once_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]
# remove stop words and words that appear only once
dictionary.filter_tokens(stop_ids + once_ids)
# remove gaps in id sequence after words that were removed
dictionary.compactify()
print(dictionary)
"""
Explanation: Although the output is the same as for the plain Python list, the corpus is now much more memory friendly, because at most one vector resides in RAM at a time. Your corpus can now be as large as you want.
Similarly, to construct the dictionary without loading all texts into memory:
End of explanation
"""
# create a toy corpus of 2 documents, as a plain Python list
corpus = [[(1, 0.5)], []] # make one document empty, for the heck of it
corpora.MmCorpus.serialize('/tmp/corpus.mm', corpus)
"""
Explanation: And that is all there is to it! At least as far as bag-of-words representation is concerned. Of course, what we do with such a corpus is another question; it is not at all clear how counting the frequency of distinct words could be useful. As it turns out, it isn't, and we will need to apply a transformation on this simple representation first, before we can use it to compute any meaningful document vs. document similarities. Transformations are covered in the next tutorial, but before that, let's briefly turn our attention to corpus persistency.
Corpus Formats
There exist several file formats for serializing a Vector Space corpus (~sequence of vectors) to disk. Gensim implements them via the streaming corpus interface mentioned earlier: documents are read from (resp. stored to) disk in a lazy fashion, one document at a time, without the whole corpus being read into main memory at once.
One of the more notable file formats is the Matrix Market format. To save a corpus in the Matrix Market format:
End of explanation
"""
corpora.SvmLightCorpus.serialize('/tmp/corpus.svmlight', corpus)
corpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)
corpora.LowCorpus.serialize('/tmp/corpus.low', corpus)
"""
Explanation: Other formats include Joachim’s SVMlight format, Blei’s LDA-C format and GibbsLDA++ format.
End of explanation
"""
corpus = corpora.MmCorpus('/tmp/corpus.mm')
"""
Explanation: Conversely, to load a corpus iterator from a Matrix Market file:
End of explanation
"""
print(corpus)
"""
Explanation: Corpus objects are streams, so typically you won’t be able to print them directly:
End of explanation
"""
# one way of printing a corpus: load it entirely into memory
print(list(corpus)) # calling list() will convert any sequence to a plain Python list
"""
Explanation: Instead, to view the contents of a corpus:
End of explanation
"""
# another way of doing it: print one document at a time, making use of the streaming interface
for doc in corpus:
print(doc)
"""
Explanation: or
End of explanation
"""
corpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)
"""
Explanation: The second way is obviously more memory-friendly, but for testing and development purposes, nothing beats the simplicity of calling list(corpus).
To save the same Matrix Market document stream in Blei’s LDA-C format,
End of explanation
"""
import gensim
import numpy as np
numpy_matrix = np.random.randint(10, size=[5,2])
corpus = gensim.matutils.Dense2Corpus(numpy_matrix)
numpy_matrix_dense = gensim.matutils.corpus2dense(corpus, num_terms=5)  # num_terms matches the 5 rows (features) of numpy_matrix
"""
Explanation: In this way, gensim can also be used as a memory-efficient I/O format conversion tool: just load a document stream using one format and immediately save it in another format. Adding new formats is dead easy, check out the code for the SVMlight corpus for an example.
Compatibility with NumPy and SciPy
Gensim also contains efficient utility functions to help converting from/to numpy matrices:
End of explanation
"""
import scipy.sparse
scipy_sparse_matrix = scipy.sparse.random(5,2)
corpus = gensim.matutils.Sparse2Corpus(scipy_sparse_matrix)
scipy_csc_matrix = gensim.matutils.corpus2csc(corpus)
"""
Explanation: and from/to scipy.sparse matrices:
End of explanation
"""
|
rishuatgithub/MLPy
|
nlp/UPDATED_NLP_COURSE/01-NLP-Python-Basics/04-Stop-Words.ipynb
|
apache-2.0
|
# Perform standard imports:
import spacy
nlp = spacy.load('en_core_web_sm')
# Print the set of spaCy's default stop words (remember that sets are unordered):
print(nlp.Defaults.stop_words)
len(nlp.Defaults.stop_words)
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Stop Words
Words like "a" and "the" appear so frequently that they don't require tagging as thoroughly as nouns, verbs and modifiers. We call these stop words, and they can be filtered from the text to be processed. spaCy holds a built-in list of some 305 English stop words.
End of explanation
"""
nlp.vocab['myself'].is_stop
nlp.vocab['mystery'].is_stop
"""
Explanation: To see if a word is a stop word
End of explanation
"""
# Add the word to the set of stop words. Use lowercase!
nlp.Defaults.stop_words.add('btw')
# Set the stop_word tag on the lexeme
nlp.vocab['btw'].is_stop = True
len(nlp.Defaults.stop_words)
nlp.vocab['btw'].is_stop
"""
Explanation: To add a stop word
There may be times when you wish to add a stop word to the default set. Perhaps you decide that 'btw' (common shorthand for "by the way") should be considered a stop word.
End of explanation
"""
# Remove the word from the set of stop words
nlp.Defaults.stop_words.remove('beyond')
# Remove the stop_word tag from the lexeme
nlp.vocab['beyond'].is_stop = False
len(nlp.Defaults.stop_words)
nlp.vocab['beyond'].is_stop
"""
Explanation: <font color=green>When adding stop words, always use lowercase. Lexemes are converted to lowercase before being added to vocab.</font>
To remove a stop word
Alternatively, you may decide that 'beyond' should not be considered a stop word.
End of explanation
"""
|
jdhp-docs/python_notebooks
|
nb_dev_python/python_keras_1d_non-linear_regression.ipynb
|
mit
|
import tensorflow as tf
tf.__version__
import keras
keras.__version__
import h5py
h5py.__version__
import pydot
pydot.__version__
"""
Explanation: Basic 1D non-linear regression with Keras
TODO: see https://stackoverflow.com/questions/44998910/keras-model-to-fit-polynomial
Install Keras
https://keras.io/#installation
Install dependencies
Install TensorFlow backend: https://www.tensorflow.org/install/
pip install tensorflow
Install h5py (required if you plan on saving Keras models to disk): http://docs.h5py.org/en/latest/build.html#wheels
pip install h5py
Install pydot (used by visualization utilities to plot model graphs): https://github.com/pydot/pydot#installation
pip install pydot
Install Keras
pip install keras
Import packages and check versions
End of explanation
"""
df_train = gen_1d_polynomial_samples(n_samples=100, noise_std=0.05)
x_train = df_train.x.values
y_train = df_train.y.values
plt.plot(x_train, y_train, ".k");
df_test = gen_1d_polynomial_samples(n_samples=100, noise_std=None)
x_test = df_test.x.values
y_test = df_test.y.values
plt.plot(x_test, y_test, ".k");
"""
Explanation: Make the dataset
End of explanation
"""
model = keras.models.Sequential()
#model.add(keras.layers.Dense(units=1000, activation='relu', input_dim=1))
#model.add(keras.layers.Dense(units=1))
#model.add(keras.layers.Dense(units=1000, activation='relu'))
#model.add(keras.layers.Dense(units=1))
model.add(keras.layers.Dense(units=5, activation='relu', input_dim=1))
model.add(keras.layers.Dense(units=1))
model.add(keras.layers.Dense(units=5, activation='relu'))
model.add(keras.layers.Dense(units=1))
model.add(keras.layers.Dense(units=5, activation='relu'))
model.add(keras.layers.Dense(units=1))
model.compile(loss='mse',
optimizer='adam')
model.summary()
hist = model.fit(x_train, y_train, batch_size=100, epochs=3000, verbose=None)
plt.plot(hist.history['loss']);
model.evaluate(x_test, y_test)
y_predicted = model.predict(x_test)
plt.plot(x_test, y_test, ".r")
plt.plot(x_test, y_predicted, ".k");
"""
Explanation: Make the regressor
End of explanation
"""
|
spencer2211/deep-learning
|
seq2seq/sequence_to_sequence_implementation.ipynb
|
mit
|
import numpy as np
import time
import helper
source_path = 'data/letters_source.txt'
target_path = 'data/letters_target.txt'
source_sentences = helper.load_data(source_path)
target_sentences = helper.load_data(target_path)
"""
Explanation: Character Sequence to Sequence
In this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models. This notebook was updated to work with TensorFlow 1.1 and builds on the work of Dave Currie. Check out Dave's post Text Summarization with Amazon Reviews.
<img src="images/sequence-to-sequence.jpg"/>
Dataset
The dataset lives in the /data/ folder. At the moment, it is made up of the following files:
* letters_source.txt: The list of input letter sequences. Each sequence is its own line.
* letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.
End of explanation
"""
source_sentences[:50].split('\n')
"""
Explanation: Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.
End of explanation
"""
target_sentences[:50].split('\n')
"""
Explanation: target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line with the same number in source_sentences, and contains the sorted characters of that line.
End of explanation
"""
def extract_character_vocab(data):
special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']
set_words = set([character for line in data.split('\n') for character in line])
int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}
vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}
return int_to_vocab, vocab_to_int
# Build int2letter and letter2int dicts
source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)
target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)
# Convert characters to ids
source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_sentences.split('\n')]
target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_sentences.split('\n')]
print("Example source sequence")
print(source_letter_ids[:3])
print("\n")
print("Example target sequence")
print(target_letter_ids[:3])
"""
Explanation: Preprocess
To do anything useful with it, we'll need to turn each string into a list of characters:
<img src="images/source_and_target_arrays.png"/>
Then convert the characters to their int values as declared in our vocabulary:
End of explanation
"""
from distutils.version import LooseVersion
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
"""
Explanation: This is the final shape we need them to be in. We can now proceed to building the model.
Model
Check the Version of TensorFlow
This will check to make sure you have the correct version of TensorFlow
End of explanation
"""
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 15
decoding_embedding_size = 15
# Learning Rate
learning_rate = 0.001
"""
Explanation: Hyperparameters
End of explanation
"""
def get_model_inputs():
input_data = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
lr = tf.placeholder(tf.float32, name='learning_rate')
target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length
"""
Explanation: Input
End of explanation
"""
def encoding_layer(input_data, rnn_size, num_layers,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)
# RNN cell
def make_cell(rnn_size):
enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return enc_cell
enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)
return enc_output, enc_state
"""
Explanation: Sequence to Sequence Model
We can now start defining the functions that will build the seq2seq model. We are building it from the bottom up with the following components:
2.1 Encoder
- Embedding
- Encoder cell
2.2 Decoder
1- Process decoder inputs
2- Set up the decoder
- Embedding
- Decoder cell
- Dense output layer
- Training decoder
- Inference decoder
2.3 Seq2seq model connecting the encoder and decoder
2.4 Build the training graph hooking up the model with the
optimizer
2.1 Encoder
The first bit of the model we'll build is the encoder. Here, we'll embed the input data, construct our encoder, then pass the embedded data to the encoder.
Embed the input data using tf.contrib.layers.embed_sequence
<img src="images/embed_sequence.png" />
Pass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.
<img src="images/encoder.png" />
End of explanation
"""
# Process the input we'll feed to the decoder
def process_decoder_input(target_data, vocab_to_int, batch_size):
'''Remove the last word id from each batch and concat the <GO> to the begining of each batch'''
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)
return dec_input
"""
Explanation: 2.2 Decoder
The decoder is probably the most involved part of this model. The following steps are needed to create it:
1- Process decoder inputs
2- Set up the decoder components
- Embedding
- Decoder cell
- Dense output layer
- Training decoder
- Inference decoder
Process Decoder Input
In the training process, the target sequences will be used in two different places:
Using them to calculate the loss
Feeding them to the decoder during training to make the model more robust.
Now we need to address the second point. Let's assume our targets look like this in their letter/word form (we're doing this for readability; at this point in the code, these sequences would be in int form):
<img src="images/targets_1.png"/>
We need to do a simple transformation on the tensor before feeding it to the decoder:
1- We will feed an item of the sequence to the decoder at each time step. Think about the last timestep -- where the decoder outputs the final word in its output. The input to that step is the item before last from the target sequence. The decoder has no use for the last item in the target sequence in this scenario. So we'll need to remove the last item.
We do that using tensorflow's tf.strided_slice() method. We hand it the tensor, and the index of where to start and where to end the cutting.
<img src="images/strided_slice_1.png"/>
2- The first item in each sequence we feed to the decoder has to be the GO symbol, so we'll add that to the beginning.
<img src="images/targets_add_go.png"/>
Now the tensor is ready to be fed to the decoder. It looks like this (if we convert from ints to letters/symbols):
<img src="images/targets_after_processing_1.png"/>
End of explanation
"""
def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size,
target_sequence_length, max_target_sequence_length, enc_state, dec_input):
# 1. Decoder Embedding
target_vocab_size = len(target_letter_to_int)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# 2. Construct the decoder cell
def make_cell(rnn_size):
dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return dec_cell
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
# 3. Dense layer to translate the decoder's output at each time
# step into a choice from the target vocabulary
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
# 4. Set up a training decoder and an inference decoder
# Training Decoder
with tf.variable_scope("decode"):
# Helper for the training process. Used by BasicDecoder to read inputs.
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
# Basic decoder
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
training_helper,
enc_state,
output_layer)
# Perform dynamic decoding using the decoder
training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
# 5. Inference Decoder
# Reuses the same parameters trained by the training process
with tf.variable_scope("decode", reuse=True):
start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')
# Helper for the inference process.
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
start_tokens,
target_letter_to_int['<EOS>'])
# Basic decoder
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
inference_helper,
enc_state,
output_layer)
# Perform dynamic decoding using the decoder
inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
return training_decoder_output, inference_decoder_output
"""
Explanation: Set up the decoder components
- Embedding
- Decoder cell
- Dense output layer
- Training decoder
- Inference decoder
1- Embedding
Now that we have prepared the inputs to the training decoder, we need to embed them so they can be ready to be passed to the decoder.
We'll create an embedding matrix like the following then have tf.nn.embedding_lookup convert our input to its embedded equivalent:
<img src="images/embeddings.png" />
2- Decoder Cell
Then we declare our decoder cell. Just like the encoder, we'll use an tf.contrib.rnn.LSTMCell here as well.
We need to declare a decoder for the training process, and a decoder for the inference/prediction process. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).
First, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.
3- Dense output layer
Before we move to declaring our decoders, we'll need to create the output layer, which will be a tensorflow.python.layers.core.Dense layer that translates the outputs of the decoder to logits that tell us which element of the decoder vocabulary the decoder is choosing to output at each time step.
4- Training decoder
Essentially, we'll be creating two decoders which share their parameters: one for training and one for inference. The two are similar in that both are created using tf.contrib.seq2seq.BasicDecoder and tf.contrib.seq2seq.dynamic_decode. They differ, however, in that we feed the target sequences as inputs to the training decoder at each time step to make it more robust.
We can think of the training decoder as looking like this (except that it works with sequences in batches):
<img src="images/sequence-to-sequence-training-decoder.png"/>
The training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).
5- Inference decoder
The inference decoder is the one we'll use when we deploy our model to the wild.
<img src="images/sequence-to-sequence-inference-decoder.png"/>
We'll hand our encoder hidden state to both the training and inference decoders and have it process its output. TensorFlow handles most of the logic for us. We just have to use the appropriate methods from tf.contrib.seq2seq and supply them with the appropriate inputs.
End of explanation
"""
def seq2seq_model(input_data, targets, lr, target_sequence_length,
max_target_sequence_length, source_sequence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers):
# Pass the input data through the encoder. We'll ignore the encoder output, but use the state
_, enc_state = encoding_layer(input_data,
rnn_size,
num_layers,
source_sequence_length,
source_vocab_size,
encoding_embedding_size)
# Prepare the target sequences we'll feed to the decoder in training mode
dec_input = process_decoder_input(targets, target_letter_to_int, batch_size)
# Pass encoder state and decoder inputs to the decoders
training_decoder_output, inference_decoder_output = decoding_layer(target_letter_to_int,
decoding_embedding_size,
num_layers,
rnn_size,
target_sequence_length,
max_target_sequence_length,
enc_state,
dec_input)
return training_decoder_output, inference_decoder_output
"""
Explanation: 2.3 Seq2seq model
Let's now go a step above, and hook up the encoder and decoder using the methods we just declared
End of explanation
"""
# Build the graph
train_graph = tf.Graph()
# Set the graph to default to ensure that it is ready for training
with train_graph.as_default():
# Load the model inputs
input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs()
# Create the training and inference logits
training_decoder_output, inference_decoder_output = seq2seq_model(input_data,
targets,
lr,
target_sequence_length,
max_target_sequence_length,
source_sequence_length,
len(source_letter_to_int),
len(target_letter_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers)
# Create tensors for the training logits and inference logits
training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')
inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions')
# Create the weights for sequence_loss
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Model outputs training_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor that looks like this:
<img src="images/logits.png"/>
The logits we get from the training tensor we'll pass to tf.contrib.seq2seq.sequence_loss() to calculate the loss and ultimately the gradient.
End of explanation
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(targets, sources, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths
"""
Explanation: Get Batches
There's little processing involved when we retrieve the batches. This is a simple example assuming batch_size = 2
Source sequences (it's actually in int form, we're showing the characters for clarity):
<img src="images/source_batch.png" />
Target sequences (also in int, but showing letters for clarity):
<img src="images/target_batch.png" />
End of explanation
"""
# Split data to training and validation sets
train_source = source_letter_ids[batch_size:]
train_target = target_letter_ids[batch_size:]
valid_source = source_letter_ids[:batch_size]
valid_target = target_letter_ids[:batch_size]
(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size,
source_letter_to_int['<PAD>'],
target_letter_to_int['<PAD>']))
display_step = 20 # Check training loss after every 20 batches
checkpoint = "best_model.ckpt"
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(1, epochs+1):
for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(
get_batches(train_target, train_source, batch_size,
source_letter_to_int['<PAD>'],
target_letter_to_int['<PAD>'])):
# Training step
_, loss = sess.run(
[train_op, cost],
{input_data: sources_batch,
targets: targets_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths})
# Debug message updating us on the status of the training
if batch_i % display_step == 0 and batch_i > 0:
# Calculate validation cost
validation_loss = sess.run(
[cost],
{input_data: valid_sources_batch,
targets: valid_targets_batch,
lr: learning_rate,
target_sequence_length: valid_targets_lengths,
source_sequence_length: valid_sources_lengths})
print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f} - Validation loss: {:>6.3f}'
.format(epoch_i,
epochs,
batch_i,
len(train_source) // batch_size,
loss,
validation_loss[0]))
# Save Model
saver = tf.train.Saver()
saver.save(sess, checkpoint)
print('Model Trained and Saved')
"""
Explanation: Train
We're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.
End of explanation
"""
def source_to_seq(text):
'''Prepare the text for the model'''
sequence_length = 7
return [source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text]+ [source_letter_to_int['<PAD>']]*(sequence_length-len(text))
input_sentence = 'hello'
text = source_to_seq(input_sentence)
checkpoint = "./best_model.ckpt"
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(checkpoint + '.meta')
loader.restore(sess, checkpoint)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
#Multiply by batch_size to match the model's input parameters
answer_logits = sess.run(logits, {input_data: [text]*batch_size,
target_sequence_length: [len(text)]*batch_size,
source_sequence_length: [len(text)]*batch_size})[0]
pad = source_letter_to_int["<PAD>"]
print('Original Text:', input_sentence)
print('\nSource')
print(' Word Ids: {}'.format([i for i in text]))
print(' Input Words: {}'.format(" ".join([source_int_to_letter[i] for i in text])))
print('\nTarget')
print(' Word Ids: {}'.format([i for i in answer_logits if i != pad]))
print(' Response Words: {}'.format(" ".join([target_int_to_letter[i] for i in answer_logits if i != pad])))
"""
Explanation: Prediction
End of explanation
"""
|
ECP-CANDLE/Supervisor
|
workflows/cp1/scripts/cp1_scripts.ipynb
|
mit
|
# Imports needed by the cells in this notebook (not shown earlier in this excerpt)
import csv
import json
import numpy as np
import pandas as pd

df = pd.read_csv('~/Documents/results/cp1/non_nci_hpo_log/hpos.txt', sep="|", header=None, names=["i", "hpo_id", "params", "run_dir", "ts", "val_loss"])
df.groupby("hpo_id")['val_loss'].agg([np.min, np.max, np.mean, np.std])
n = 10
smallest = df.groupby('hpo_id')['val_loss'].nsmallest(n)
best_n = df.iloc[smallest.index.get_level_values(1), :]
#best_n.to_csv('~/Documents/results/cp1/best_{}_nci.txt'.format(n), sep="|", index=False)
params = '/home/nick/Documents/results/cp1/best_{}_nci_params.txt'.format(n)
# Write out the best n parameters
best_n[['params']].to_csv(params, index=False,
header=False, quoting=csv.QUOTE_NONE, sep='|')
pd.__version__
# stats for the best n
best_n.groupby("hpo_id")['val_loss'].agg([np.min, np.max, np.mean, np.std])
"""
Explanation: Generate Stats From hpos.txt
Each hpo writes an hpo.txt with hpo_id, params, run_dir, ts, and val loss
End of explanation
"""
from os import path
with open(params) as f_in:
lines = f_in.readlines()
upf = '/home/nick/Documents/results/cp1/best_{}_nci_params_upf.txt'.format(n)
with open(upf, 'w') as f_out:
for line in lines:
j = json.loads(line)
train_sources = j['train_sources']
if 'cell_feature_subset_path' in j and train_sources == 'NCI60':
fsp = path.basename(j['cell_feature_subset_path'])
train = '{}_train.h5'.format(fsp[:fsp.find('_features')])
j['use_exported_data'] = '/autofs/nccs-svm1_proj/med106/ncollier/repos/Supervisor/workflows/cp1/cache/{}'.format(train)
del j['save_path']
j['epochs'] = 100
f_out.write('{}\n'.format(json.dumps(j)))
"""
Explanation: Generate upf -- one json param dict per line -- file from the best n runs
End of explanation
"""
js = """{{"batch_size": 6144, "train_sources": "{}", "preprocess_rnaseq": "combat", "gpus": "0 1 2 3 4 5", "cell_feature_subset_path": "/autofs/nccs-svm1_proj/med106/ncollier/repos/Supervisor/workflows/cp1/xcorr_data/{}_{}_2000_1000_features.txt", "export_data": "/autofs/nccs-svm1_proj/med106/ncollier/repos/Supervisor/workflows/cp1/cache/{}_{}_2000_1000_{}.h5", "no_feature_source": true, "no_response_source": true, "cp": true}}"""
studies = ['CCLE', 'CTRP', 'gCSI', 'GDSC']
for s1 in studies:
for s2 in studies:
if s1 != s2:
j1 = js.format(s1, s1, s2, s1, s2, 'train')
j2 = js.format(s2, s1, s2, s1, s2, 'test')
#print(j1)
#print(j2)
for s1 in studies:
s2 = 'NCI60'
j = js.format(s2, s1, s2, s1, s2, 'test')
print(j)
"""
Explanation: Create upf for creating test, train generated data
End of explanation
"""
from os import path
# python uno_infer.py --data CTRP_CCLE_2000_1000_test.h5 --model_file model.h5
inputs = '/home/nick/Documents/results/cp1/inputs.txt'
model_class_ids = {}
next_id = 0
infer_upf = '/home/nick/Documents/repos/Supervisor/workflows/cp1/data/infer_upf_all.txt'
with open(infer_upf, 'w') as f_out:
studies = ['CCLE', 'CTRP', 'gCSI', 'GDSC', 'NCI60']
with open(inputs) as f_in:
reader = csv.reader(f_in, delimiter="|")
for r in reader:
params = json.loads(r[2])
save_path = params['save_path']
if 'cell_feature_subset_path' in params:
fsp = path.basename(params['cell_feature_subset_path'])
fsp_prefix = fsp[:fsp.find('_features')]
test_data = '{}_test.h5'.format(fsp_prefix)
save_path = params['save_path']
train_source = params['train_sources']
#if fsp.find('_NCI60_') != -1:
f_out.write('{},{},{}\n'.format(test_data, save_path, fsp_prefix))
else:
train_source = params['train_sources']
for s in studies:
test_data = '{}.h5'.format(s)
f_out.write('{},{},{}_{}\n'.format(test_data, save_path, train_source, s))
f = "/home/nick/Documents/repos/Supervisor/workflows/cp1/scripts/counts_by_hpo.csv"
hp = {}
class Entry:
def __init__(self, start):
self.start = start
self.end = -1
def __repr__(self):
return "[{}, {}]".format(self.start, self.end)
with open(f) as f_in:
reader = csv.reader(f_in)
next(reader)
for row in reader:
if row[1] == "1":
hpo_id = int(row[3])
h = float(row[2])
if not hpo_id in hp:
hp[hpo_id] = [Entry(h)]
else:
entry = hp[hpo_id][-1]
if entry.end != -1:
hp[hpo_id].append(Entry(h))
elif row[1] == "0":
hpo_id = int(row[3])
h = float(row[2])
hp[hpo_id][-1].end = h
with open('/home/nick/Documents/repos/Supervisor/workflows/cp1/scripts/start_end.csv', 'w') as f_out:
for k in hp:
for e in hp[k]:
f_out.write('{},{},{}\n'.format(e.start, e.end, k))
import csv
import os
f = "/home/nick/Documents/results/cp1/inference_log.txt"
prefix = '/gpfs/alpine/med106/scratch/ncollier/experiments/infer_all_4/run/'
with open(f) as f_in:
reader = csv.reader(f_in, delimiter='|')
for i, row in enumerate(reader):
#train_path = row[2]
run_id = int(os.path.basename(os.path.dirname(row[2])))
if run_id < 200:
print('{}{}'.format(prefix, i))
"""
Explanation: Create upf for inferencing runs, using inputs.txt as produced by training upf
End of explanation
"""
|
IanOlin/github-research
|
Unsupported/Affilliation/csvs/.ipynb_checkpoints/arrayify-checkpoint.ipynb
|
mit
|
importpath = "/home/jwb/repos/github-research/csvs/Companies/Ugly/Stack/"
exportpath = "/home/jwb/repos/github-research/csvs/Companies/Pretty/Stack/"
"""
Explanation: Ugly To Pretty for CSVS
Run on linux. Set an import path and an export path to folders.
It will take every file in the import directory that is a Mathematica-generated CSV and turn it into a nicely formatted CSV in the export directory.
Paths
End of explanation
"""
import csv
import pandas as pd
import os
def arrayer(path):
with open(path, "rt") as f:
reader = csv.reader(f)
names = set()
times = {}
windows = []
rownum = 0
for row in reader:
newrow = [(i[1:-1],j[:-2]) for i,j in zip(row[1::2], row[2::2])] #Drops the timewindow, and groups the rest of the row into [name, tally]
rowdict = dict(newrow)
names.update([x[0] for x in newrow]) #adds each name to a name set
l=row[0].replace("DateObject[{","").strip("{}]}").replace(",","").replace("}]","").split() #Strips DateObject string
timestamp=':'.join(l[:3])+'-'+':'.join(l[3:]) #Formats date string
windows.append(timestamp) #add timestamp to list
times[timestamp] = rowdict #link results as value in timestamp dict
rownum += 1
cols = [[times[k][name] if name in times[k] else ' 0' for name in names ] for k in windows] #put the tally for each name across each timestamp in a nested list of Columns
data = pd.DataFrame(cols,columns=list(names),index=windows) #Put into dataframe with labels
return data.transpose()
"""
Explanation: Function
End of explanation
"""
for filename in os.listdir(importpath):
arrayer(importpath+filename).to_csv(exportpath+filename, encoding='utf-8')
"""
Explanation: Run
End of explanation
"""
|
dsiufl/2015-Fall-Hadoop
|
notes/.ipynb_checkpoints/1-hadoop-streaming-py-wordcount-checkpoint.ipynb
|
mit
|
hadoop_root = '/home/ubuntu/shortcourse/hadoop-2.7.1/'
hadoop_start_hdfs_cmd = hadoop_root + 'sbin/start-dfs.sh'
hadoop_stop_hdfs_cmd = hadoop_root + 'sbin/stop-dfs.sh'
# start the hadoop distributed file system
! {hadoop_start_hdfs_cmd}
# show the jave jvm process summary
# You should see NamenNode, SecondaryNameNode, and DataNode
! jps
"""
Explanation: Hadoop Short Course
1. Hadoop Distributed File System
Hadoop Distributed File System (HDFS)
HDFS is the primary distributed storage used by Hadoop applications. A HDFS cluster primarily consists of a NameNode that manages the file system metadata and DataNodes that store the actual data. The HDFS Architecture Guide describes HDFS in detail. To learn more about the interaction of users and administrators with HDFS, please refer to HDFS User Guide.
All HDFS commands are invoked by the bin/hdfs script. Running the hdfs script without any arguments prints the description for all commands. For all the commands, please refer to HDFS Commands Reference
Start HDFS
End of explanation
"""
# We will use three ebooks from Project Gutenberg for later example
# Pride and Prejudice by Jane Austen: http://www.gutenberg.org/ebooks/1342.txt.utf-8
! wget http://www.gutenberg.org/ebooks/1342.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/pride-and-prejudice.txt
# Alice's Adventures in Wonderland by Lewis Carroll: http://www.gutenberg.org/ebooks/11.txt.utf-8
! wget http://www.gutenberg.org/ebooks/11.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/alice.txt
# The Adventures of Sherlock Holmes by Arthur Conan Doyle: http://www.gutenberg.org/ebooks/1661.txt.utf-8
! wget http://www.gutenberg.org/ebooks/1661.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/sherlock-holmes.txt
"""
Explanation: Normal file operations and data preparation for later example
list recursively everything under the root dir
Download some files for later use. The files should already be there.
End of explanation
"""
# Start YARN, the resource manager for Hadoop
! {hadoop_root + 'sbin/start-yarn.sh'}
"""
Explanation: Delete existing folders under /user/ubuntu/ in hdfs
Create input folder: /user/ubuntu/input
Copy the three books to the input folder in HDFS.
Similar to the normal bash cmd:
cp /home/ubuntu/shortcourse/data/wordcount/* /user/ubuntu/input/
but copying into hdfs.
Show if the files are there (a sketch of these hdfs commands is given after this explanation).
2. WordCount Example
Let's count single-word frequencies in the three uploaded books.
Start Yarn, the resource allocator for Hadoop.
End of explanation
"""
# wordcount 1 the scripts
# Map: /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py
# Test locally the map script
! echo "go gators gators beat everyone go glory gators" | \
/home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py
# Reduce: /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py
# Test locally the reduce script
! echo "go gators gators beat everyone go glory gators" | \
/home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py | \
sort -k1,1 | \
/home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py
# run them with Hadoop against the uploaded three books
cmd = hadoop_root + 'bin/hadoop jar ' + hadoop_root + 'hadoop-streaming-2.7.1.jar ' + \
'-input input ' + \
'-output output ' + \
'-mapper /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py ' + \
'-reducer /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py ' + \
'-file /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py ' + \
'-file /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py'
! {cmd}
"""
Explanation: Test the mapper.py and reducer.py locally
End of explanation
"""
# Let's see what's in the output file
# delete if previous results exist
! tail -n 20 $(THE_DOWNLOADED_FILE)
"""
Explanation: List the output
Download the output file (part-00000) to local fs.
End of explanation
"""
# 1. go to wordcount2 folder, modify the mapper
# 2. test locally if the mapper is working
# 3. run with hadoop streaming. Input is still the three books, output to 'output2'
"""
Explanation: 3. Exercise: WordCount2
Count the single word frequency, where the words are given in a pattern file.
For example, given pattern.txt file, which contains:
"a b c d"
And the input file is:
"d e a c f g h i a b c d".
Then the output should be:
"a 2
b 1
c 2
d 2"
Please copy the mapper.py and reducer.py from the first wordcount example to the folder "/home/ubuntu/shortcourse/notes/scripts/wordcount2/". The pattern file is given in the wordcount2 folder with the name "wc2-pattern.txt".
Hint (a sketch of such a mapper follows this explanation):
1. pass the pattern file using the -file option and use -cmdenv to pass the file name as an environment variable
2. in the mapper, read the pattern file into a set
3. only print out the words that exist in the set
End of explanation
"""
# 1. list the output, download the output to local, and cat the output file
# 2. use bash cmd to find out the most frequently used 20 words from the previous example,
# and compare the results with this output
# stop dfs and yarn
!{hadoop_root + 'sbin/stop-yarn.sh'}
# don't stop hdfs for now, later use
# !{hadoop_stop_hdfs_cmd}
"""
Explanation: Verify Results
Copy the output file to local
run the following command, and compare with the downloaded output
sort -nrk 2,2 part-00000 | head -n 20
The wc1-part-00000 is the output of the previous wordcount (wordcount1)
End of explanation
"""
|
TariqAHassan/BioVida
|
tutorials/1_openi.ipynb
|
bsd-3-clause
|
from biovida.images import OpeniInterface
opi = OpeniInterface()
"""
Explanation: BioVida: Open-i
Open-i is an open access biomedical search engine provided by the US National Institutes of Health. The service grants programmatic access to its over 1.2 million images through a RESTful web API. BioVida provides an easy-to-use python interface for this web API, located in the images subpackage.
End of explanation
"""
opi.options()
"""
Explanation: We start by creating an instance of the class. All BioVida interfaces accept at least two parameters: verbose and cache_path. The first simply determines whether or not the class provides you with additional updates as it works. The second refers to the location where data will be stored (or cached) on your computer. If left to its default, data will be cached in a directory entitled biovida_cache in your home directory. For most use cases, this should suffice.
Searching
To search the Open-i database, we can use the OpeniInterface's search method. To explore valid values that can be passed to search, we can use options().
End of explanation
"""
opi.options('collection')
opi.options('image_type')
"""
Explanation: The code above enumerates all of the parameters, apart from a specific query string, that can be passed to search(). Additionally, options() can be used to investigate the valid values for any one of these parameters.
End of explanation
"""
opi.search(query='lung cancer', image_type=('x_ray', 'ct'), collection='pubmed')
"""
Explanation: Let's go ahead and perform a search for X-ray and CT images of 'lung cancer' from the PubMed collection/database.
End of explanation
"""
pull_df = opi.pull(download_limit=1500)
"""
Explanation: Downloading Data
Now that we've defined a search, we can easily download some, or all, of the results found.
For the sake of expediency, let's limit the number of results we download to the first 1500.
End of explanation
"""
import numpy as np
def simplify_df(df):
"""This function simplifies dataframes
for the purposes of this tutorial."""
data_frame = df.copy()
data_frame['cached_images_path'] = '/path/to/image'
return data_frame[0:5].replace({np.NaN: ''})
simplify_df(opi.records_db_short)
"""
Explanation: The text information associated with images are referred to as 'records', which are downloaded in 'chunks' of no more than 30 at a time. <br>
Images, unlike records, are downloaded 'one by one'. However, pull() will check the cache before downloading an image, in an effort to reduce redundant downloads.
The dataframe generated by pull() can be viewed using either opi.records_db, or the pull_df used above to capture the output of pull(). Both will be identical. We can also view an abbreviated dataframe, opi.records_db_short, which has several (typically unneeded) columns removed.
End of explanation
"""
pull_df['age'].describe()
pull_df['sex'].value_counts(normalize=True)
"""
Explanation: This dataframe provides a lot of rich data, which is valuable independently of the images that have also been downloaded.
For instance, it is possible to quickly generate some descriptive statistics about our newly created 'lung cancer' dataset.
End of explanation
"""
from utils import show_image
%matplotlib inline
"""
Explanation: The age and sex columns are generated by analyzing the raw text provided by Open-i. It is reasonably accurate, but mistakes are certainly possible.
It should also be mentioned that opi.records_db only contains data for the most recent search() and pull(). Conversely, cache_records_db provides a more complete account of all images in the cache, e.g., those obtained several sessions ago. Additionally, unlike opi.records_db, cache_records_db can contain duplicate rows. However, this is only allowed to occur if the queries that generated the rows are different.
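A minimal sketch comparing the scope of the two dataframes, assuming cache_records_db is exposed as an attribute of the interface in the same way as records_db:
```python
# Rough size comparison: the most recent search versus everything in the cache.
print("current search:", len(opi.records_db))
print("entire cache:  ", len(opi.cache_records_db))
```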
Images
Now that we've explored obtaining and reviewing data, we can finally turn our attention to images themselves.
End of explanation
"""
# show_image(opi.records_db['cached_images_path'].iloc[156])
opi.records_db['license_type'].iloc[156]
"""
Explanation: Note: utils is a small script with some helpful functions located in the base of this directory.
Using the show_image function imported above, we can now look at a random image we pulled in the step above.
End of explanation
"""
age_sex = opi.records_db['age'].iloc[156], opi.records_db['sex'].iloc[156]
print("age: {0}, sex: {1}.".format(*age_sex))
"""
Explanation: Let's also look at the age and sex of this subject.
End of explanation
"""
opi.records_db['diagnosis'].iloc[156]
"""
Explanation: We can also easily check their diagnosis
End of explanation
"""
# show_image(opi.records_db['cached_images_path'].iloc[100])
# show_image(opi.records_db['cached_images_path'].iloc[10])
"""
Explanation: Please be advised that for collections other than 'MedPix'*, such as PubMed, diagnosis information is obtained by analyzing the text associated with the image. Errors are possible.
*MedPix explicitly provides diagnosis information, so it can be assumed to be accurate.
Automated Cleaning of Image Data (<font color='red'>Experimental</font>)
While the data may look OK so far, if we look more closely we will likely find several problems with the images we have downloaded.
End of explanation
"""
from biovida.images import OpeniImageProcessing
"""
Explanation: The images above contain several clear problems. They both contain arrows and the latter is actually a 'grid' of images. These are liable to confuse any model we attempt to train to detect disease. We could manually go through and remove these images or, alternatively, we can use the experimental OpeniImageProcessing class to try to eliminate these images from our dataset automatically.
End of explanation
"""
ip = OpeniImageProcessing(opi)
"""
Explanation: We initialize this class using our OpeniInterface instance.
By default, it will extract the records_db DataFrame. Do note, however, that we can force it to extract the cache_records_db DataFrame by setting the db_to_extract equal to 'cache_records_db'.
End of explanation
"""
ip.trained_open_i_modality_types
"""
Explanation: OpeniImageProcessing will automatically download a model for a Convolutional Neural Network (convnet) which has been trained to detect these kinds of problems. If you are unfamiliar with these kinds of models, you can read more about them here.
The OpeniImageProcessing class tries to detect problems in the images by analyzing both the text each image is associated with and by feeding the image through the convnet mentioned above. However, by default the OpeniImageProcessing class will only use predictions gleaned from this model if it has been explicitly trained on images from that kind of imaging modality.
We can easily check the modalities for which the model has been trained:
End of explanation
"""
analysis_df = ip.auto()
simplify_df(analysis_df)
"""
Explanation: Luckily, we're working with X-rays and CTs. <br>
Now we're ready to analyze our images.
End of explanation
"""
ip.clean_image_dataframe()
"""
Explanation: This will generate several new columns:
'grayscale': this is simply an account of whether or not the images is grayscale.
'medpix_logo_bounding_box': images from the MedPix collection typically contain the organization's logo in the top right corner. Had we passed the class images from MedPix, it would have tried to 'draw' a bounding box around its precise location (enabling it to be cropped out of the image).
'hbar': this denotes a 'horizontal bar' that is sometimes found at the bottom of images. If present, this column reports its height in pixels.
'hborder': this column provides an account of 'horizontal borders' on either side of the image.
'vborder': this column provides an account of 'vertical borders' on the top and bottom of the image.
'upper crop': this is the location that has been selected to crop the top of the image. This decision is made by considering the 'medpix_logo_bounding_box' and 'vborder' columns.
'lower crop': this is the location that has been selected to crop the bottom of the image. This decision is made by considering the 'hbar' and 'hborder' columns.
'visual_image_problems': this column contains the output of the convnet model, with the numbers following the words representing the probability that the image belongs to that category.
'invalid_image': this is a decision as to whether or not the image is invalid, e.g., has an arrow. This decision is made using the 'grayscale' and 'visual_image_problems' columns as well as the text associated with the image ('image_problems_from_text')
'invalid_image_reasons': in cases where the 'invalid_image' column is True, this column provides an account of why that decision was made (see the short inspection sketch below).
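Here is a hedged inspection sketch, using only the column names listed above, for reviewing which images were flagged and why before any rows are dropped:
```python
# A minimal sketch based on the columns described above (analysis_df was produced by ip.auto()).
flagged = analysis_df[analysis_df['invalid_image'] == True]
flagged[['invalid_image_reasons', 'visual_image_problems']].head()
```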
We can use this analysis to construct a new dataframe, with 'invalid_images' removed and the remaining images cropped in such a way that problematic features are removed.
End of explanation
"""
# show_image(ip.image_dataframe_cleaned['cleaned_image'].iloc[180])
"""
Explanation: This 'cleaned' set should have fewer instances of problematic images.
Here's a random image from this new set:
End of explanation
"""
opi.search(collection='indiana_u_xray')
"""
Explanation: With time, the machinery used to detect these kinds of problems, particularly the convolutional neural network, will be improved. However, at the current time, this class is still considered to be very experimental.
Train, Validation and Test
Now that we've explored data harvesting, we can turn our attention to the final step before modeling: dividing data into training, validation and/or tests sets.
Let's use images from the Indiana University Chest X-Ray collection* ('indiana_u_xray'). This set of images has been assembled 'by hand', and thus does not require complicated image cleaning procedures.
<br>
*License; images have not been modified.
End of explanation
"""
pull_df2 = opi.pull(download_limit=None)
"""
Explanation: Let's go ahead and download this entire collection. <br>
Please be advised that this will take some time, so feel free to adjust download_limit to suit your needs.
End of explanation
"""
simplify_df(opi.records_db_short)
"""
Explanation: Let's quickly inspect this newly downloaded data.
End of explanation
"""
from biovida.images import image_divvy
"""
Explanation: We can easily select a subset of these ~7000 images and divide them into training and test sets for some machine learning model using the image_divvy() tool.
End of explanation
"""
def my_divvy_rule(row):
if isinstance(row['diagnosis'], str):
if 'normal' in row['diagnosis']:
return 'normal' # though this could be anything, e.g., 'super cool normal images'.
elif 'calcinosis' in row['diagnosis']:
return 'calcinosis'
"""
Explanation: Let's imagine we're interested in building a model capable of distinguishing between 'normal' chest x-rays and those with signs of problematic calcium deposits, a disease formally known as 'calcinosis'.
We can define a rule to construct such a training and test set using a 'divvy_rule'.
This rule will tell image_divvy() how to 'divvy up' the images in the cache. More specifically, our rule tells image_divvy() how to categorize each image in the cache.
End of explanation
"""
train_test = image_divvy(instance=opi,
divvy_rule=my_divvy_rule,
db_to_extract='records_db',
action='ndarray',
train_val_test_dict={'train': 0.8, 'test': 0.2})
"""
Explanation: Now that image_divvy() knows how we would like it to categorize the data we've downloaded, we can also pass it a dictionary specifying how to 'split' the data into training and testing sets. In this example, we'll use a standard 80% train, 20% test split and ask the function to return numpy arrays (ndarrays) as output.
End of explanation
"""
train_ca, test_ca = train_test['train']['calcinosis'], train_test['test']['calcinosis']
train_norm, test_norm = train_test['train']['normal'], train_test['test']['normal']
"""
Explanation: Before signing off, image_divvy() printed the structure of the nested dictionary it returned. <br>
We can use this information to unpack the arrays nested within this data structure:
End of explanation
"""
# Normal
print("Train:", len(train_norm), "|", "Test:", len(test_norm))
# Calcinosis
print("Train:", len(train_ca), "|", "Test:", len(test_ca))
"""
Explanation: Now that our data has been neatly unpacked, we can look at the number of samples the procedure generated.
End of explanation
"""
# Normal
# show_image(train_norm[99])
# Calcinosis
# show_image(train_ca[104])
"""
Explanation: Using the show_image() tool we imported above, we can take a quick look at an image from each category.
End of explanation
"""
|
cathywu/flow
|
tutorials/tutorial12_inflows.ipynb
|
mit
|
from flow.scenarios import MergeScenario
"""
Explanation: Tutorial 12: Inflows
This tutorial walks you through the process of introducing inflows of vehicles into a network. Inflows allow us to simulate open networks where vehicles may enter (and potentially exit) the network constantly, such as a section of a highway or an intersection.
The rest of this tutorial is organized as follows:
In section 1, we introduce inflows and show how to create them into Flow.
In section 2, we simulate the merge network in the presence of inflows.
In section 3, we explain the different options you have to customize inflows.
1. Creating inflows in Flow
For this tutorial, we will simulate inflows through a highway network with an entrance ramp (an on-merge). As we will see, the perturbations caused by the vehicles entering through the ramp lead to the formation of congested waves downstream in the main highway.
We begin by importing the merge scenario class provided by Flow.
End of explanation
"""
from flow.core.params import VehicleParams
from flow.controllers import IDMController
from flow.core.params import SumoCarFollowingParams
# create an empty vehicles object
vehicles = VehicleParams()
# add some vehicles to this object of type "human"
vehicles.add("human",
acceleration_controller=(IDMController, {}),
car_following_params=SumoCarFollowingParams(
speed_mode="obey_safe_speed",
# we use the speed mode "obey_safe_speed" for better dynamics at the merge
),
num_vehicles=20)
"""
Explanation: A schematic of the above network is displayed in the figure below. As we can see, the edges at the start of the main highway and of the on-merge are named inflow_highway and inflow_merge respectively. These names will be important when we begin creating our inflows, as we will need to specify by which edges the vehicles should enter the network.
<img src="img/merge_scheme.png" width="750">
We also need to define the types of the vehicles that are placed in the network through our inflows. These types are string values that allow us to distinguish between vehicles. For instance, we could have two types of vehicles entering through the main highway, one for human-driven vehicles and one for RL-driven vehicles.
For this tutorial, we will only use one type of vehicles, with the vehicle identifier human:
End of explanation
"""
from flow.core.params import InFlows
inflow = InFlows()
"""
Explanation: We have created a new type of vehicle, called human, and we directly inserted 20 vehicles of this type into the network. These vehicles will already be on the network when the simulation starts, unlike the vehicles added by the inflow, which will only start entering the network after the simulation starts.
Note that it is not necessary to add vehicles at the start. If you don't wish that to happen, you can set num_vehicles=0, which is the default value if you don't specify num_vehicles at all.
Next, we are ready to import and create an empty InFlows object.
End of explanation
"""
inflow.add(veh_type="human",
edge="inflow_highway",
vehs_per_hour=2000)
"""
Explanation: In order to add new inflows of vehicles of pre-defined types onto specific edges and lanes in the network, we use the InFlows object's add method. This function requires at least the following parameters (more will be shown in section 3):
veh_type: the type of the vehicles the inflow will create (this must match one of the types set in the VehicleParams object),
edge: the name of the edge (in the network) where the inflow will insert vehicles,
vehs_per_hour: the maximum number of vehicles entering from the edge per hour (this rate may not be achieved due to congestion and safe driving behavior).
More options are shown in section 3.
We begin by creating an inflow of vehicles at a rate of 2000 vehicles per hour on the main highway:
End of explanation
"""
inflow.add(veh_type="human",
edge="inflow_merge",
vehs_per_hour=100)
"""
Explanation: Next, we create a second inflow of vehicles on the on-merge lane at a lower rate of 100 vehicles per hour.
End of explanation
"""
from flow.scenarios.merge import ADDITIONAL_NET_PARAMS
from flow.core.params import NetParams
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
# make the part of the highway after the merge longer
additional_net_params['post_merge_length'] = 350
# make the number of lanes on the highway be just one
additional_net_params['highway_lanes'] = 1
net_params = NetParams(inflows=inflow, # our inflows
additional_params=additional_net_params)
"""
Explanation: In the next section, we will add our inflows to our network and run a simulation to see them in action.
2. Running simulations with inflows
We are now ready to test our inflows in a simulation. Introducing these inflows into the network is handled by the backend scenario generation processes during the instantiation of the scenario object. To make this work, the InFlows object should be given as a parameter to the NetParams object, in addition to all other network-specific parameters.
For the merge network, this is done as follows:
End of explanation
"""
from flow.core.params import SumoParams, EnvParams, InitialConfig
from flow.envs.loop.loop_accel import AccelEnv, ADDITIONAL_ENV_PARAMS
from flow.core.experiment import Experiment
sumo_params = SumoParams(render=True,
sim_step=0.2)
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
initial_config = InitialConfig()
scenario = MergeScenario(name="merge-example",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config)
env = AccelEnv(env_params, sumo_params, scenario)
exp = Experiment(env)
_ = exp.run(1, 10000)
"""
Explanation: Finally, we create and start the simulation, following what is explained in tutorial 1.
If the simulation in SUMO is going too fast, you can slow it down by sliding the "Delay" cursor from left to right.
Don't worry about potential warnings that might come up in the log while running the simulation.
End of explanation
"""
inflow.add(veh_type="human",
edge="inflow_highway",
vehs_per_hour=2000)
"""
Explanation: <img src="img/merge_visual.png" width="100%">
Running this simulation, we can see that a large number of vehicles are entering from the main highway, while only a sparse number of vehicles are entering from the on-merge, as we specified in the inflows. Feel free to try different vehs_per_hour values so as to have different inflow rates.
In the next section, we will see how to exploit the full capabilities of inflows.
3. Customizing inflows
If you run the previous simulation carefully, you will see that the vehicles entering the network start with no speed. Besides, if you replace additional_net_params['highway_lanes'] = 1 by additional_net_params['highway_lanes'] = 2 in section 1, thus making the highway two-lane-wide, you will see that vehicles only enter on the right lane.
In this section, we will see how to solve these issues, and how to customize inflows.
We saw that you can create an inflow by doing the following:
End of explanation
"""
from flow.core.experiment import Experiment
from flow.core.params import NetParams, EnvParams, InitialConfig, InFlows, \
VehicleParams, SumoParams, SumoCarFollowingParams
from flow.controllers import IDMController
from flow.scenarios import MergeScenario
from flow.scenarios.merge import ADDITIONAL_NET_PARAMS
from flow.envs.loop.loop_accel import AccelEnv, ADDITIONAL_ENV_PARAMS
# create a vehicle type
vehicles = VehicleParams()
vehicles.add("human",
acceleration_controller=(IDMController, {}),
car_following_params=SumoCarFollowingParams(
speed_mode="obey_safe_speed"))
# create the inflows
inflows = InFlows()
# inflow for (1)
inflows.add(veh_type="human",
edge="inflow_highway",
vehs_per_hour=10000,
depart_lane="random",
depart_speed="speedLimit",
color="white")
# inflow for (2)
inflows.add(veh_type="human",
edge="inflow_merge",
period=2,
depart_lane=0, # right lane
depart_speed=0,
color="green")
# inflow for (3)
inflows.add(veh_type="human",
edge="inflow_merge",
probability=0.1,
depart_lane=1, # left lane
depart_speed="random",
begin=60, # 1 minute
number=30,
color="red")
# modify the network accordingly to instructions
# (the available parameters can be found in flow/scenarios/merge.py)
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
additional_net_params['post_merge_length'] = 350 # this is just for visuals
additional_net_params['highway_lanes'] = 4
additional_net_params['merge_lanes'] = 2
# setup and run the simulation
net_params = NetParams(inflows=inflows,
additional_params=additional_net_params)
sim_params = SumoParams(render=True,
sim_step=0.2)
sim_params.color_vehicles = False
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
initial_config = InitialConfig()
scenario = MergeScenario(name="merge-example",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config)
env = AccelEnv(env_params, sim_params, scenario)
exp = Experiment(env)
_ = exp.run(1, 10000)
"""
Explanation: However, this add method has a lot more parameters, which we will talk about now.
Let's start with parameters that allow you to specify the inflow rate, i.e. how many vehicles the inflow will add into the network.
There are 3 parameters to do this:
vehs_per_hour: we have seen this one before; it is the number of vehicles that should enter the network, in vehicles per hour, equally spaced. For example, since there are $60 \times 60 = 3600$ seconds in one hour, setting this parameter to $\frac{3600}{5}=720$ will result in vehicles entering the network every $5$ seconds.
probability: this is the probability (between 0 and 1) of a vehicle entering the network every second. For example, if we set this to $0.2$, then at each second of the simulation, a vehicle will enter the network with probability $\frac{1}{5}$.
period: this is the time in seconds between the insertion of two consecutive vehicles. For example, setting this to $5$ would result in vehicles entering the network every $5$ seconds (which is effectively the same as setting vehs_per_hour to $720$).
Note that all these rates are maximum rates, meaning that if adding vehicles at the current rate would result in vehicles being too close to each other or colliding, then the rate will automatically be reduced.
Exactly one of these 3 parameters should be set, no more, no less. You can choose how you would rather have your vehicles enter the network. With vehs_per_hour and period (which are equivalent, since vehs_per_hour is simply 3600 divided by period; use whichever is more convenient), vehicles will enter the network equally spaced, while vehicles will be more randomly separated if you use probability, as illustrated in the short sketch below.
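As a minimal sketch (separate from the tutorial's own inflows, with illustrative values), the three specifications below all target one vehicle every 5 seconds on average; the first two space vehicles equally, while the third spaces them randomly.
```python
# Illustrative only: three separate InFlows objects, each using exactly one rate parameter.
demo_vph = InFlows()
demo_vph.add(veh_type="human", edge="inflow_highway", vehs_per_hour=720)  # equally spaced
demo_period = InFlows()
demo_period.add(veh_type="human", edge="inflow_highway", period=5)        # same average rate
demo_prob = InFlows()
demo_prob.add(veh_type="human", edge="inflow_merge", probability=0.2)     # random spacing
```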
Now let's look into where and how fast vehicles enter the network.
There are 2 parameters taking care of this:
depart_lane: this parameter lets you specify in which lane vehicles are inserted when they enter the network on an edge consisting of several lanes. It should be a non-negative int, 0 being the rightmost lane. However, most of the time you don't want vehicles entering through only one lane (although you could create one inflow for each lane). That's why there are other options for this parameter, which are the following strings:
"random": vehicles will enter on a random lane
"free": vehicles will enter on the least occupied lane
"best": vehicles will enter on the "free" lane among those which allow the vehicle the longest ride without needing to change lane
"first": vehicles will enter on the rightmost lane they can use
By default, depart_lane is set to "free", which is why vehicles were only using the rightmost lane on the highway, if several lanes were available.
depart_speed: this parameter lets you specify the speed at which the vehicles will enter the network. It should be a non-negative float, in meters per second. If this speed is unsafe, the departure of the vehicles is delayed. Just like for depart_lane, there are other options for this parameter, which are the following strings:
"random": vehicles enter the edge with a random speed between 0 and the speed limit on the edge. The entering speed may be adapted to ensure that a safe distance to the leading vehicle is kept
"speedLimit": vehicles enter the edge with the maximum speed that is allowed on this edge. If that speed is unsafe, the departure is delayed.
By default, depart_speed is set to 0.
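To make these options concrete, here is a short sketch (with purely illustrative values) combining the two parameters described above:
```python
# Illustrative only: vehicles enter on the "best" lane at a random speed below the speed limit.
lane_speed_demo = InFlows()
lane_speed_demo.add(veh_type="human",
                    edge="inflow_highway",
                    vehs_per_hour=1000,
                    depart_lane="best",
                    depart_speed="random")
```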
Finally, let's look at the rest of the parameters available:
name (str): a name for the inflow, which will also be used as a prefix for the ids of the vehicles it creates (used in the short sketch after this list). This is set to "flow" by default.
begin (float): the time of the simulation, in seconds, at which the inflow should start producing vehicles. This is set to 1 second by default, which is the minimum value (setting it to 0 could cause collisions with vehicles that are manually added into the network).
end (float): the time of the simulation, in seconds, at which the inflow should stop producing vehicles. This is set to 24 hours (86400 seconds) by default.
number (int): the number of vehicles that should be produced by the inflow. This is set to None by default, which makes the inflow keep producing vehicles indefinitely until end is reached. If this parameter is specified, the end parameter won't be used. Note that if this number is small, it might not be enforced accurately due to rounding up.
kwargs (dict): you can specify additional parameters if you need to. These can include, for instance, a specific route for the vehicles to follow, an arrival speed, an arrival lane, or even a color for the vehicles, etc. For more information on all the available parameters, and more details on the existing parameters, see here.
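Here is the short sketch referred to above: a hypothetical inflow using name, begin and end, with arbitrary values.
```python
# Hypothetical example: an inflow named "evening" that only produces vehicles
# between the 10th and 20th minute of the simulation.
evening = InFlows()
evening.add(veh_type="human",
            edge="inflow_merge",
            period=4,          # one vehicle every 4 seconds
            name="evening",    # vehicle ids will be prefixed with this name
            begin=600,         # start after 10 minutes
            end=1200)          # stop after 20 minutes
```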
Let us finish this section with a more complex example. This is what we want:
We will use the merge scenario, with no vehicles being manually pre-inserted into the network.
There will be 4 lanes on the main highway and 2 on the on-merge.
(1) Every hour, 10000 vehicles will enter the highway at maximum speed on a random lane, from the start of the simulation up until the end. These vehicles should be colored in white
(2) Every two seconds, a vehicle will enter the on-merge with no speed, on the right lane, from the start of the simulation up until the end. These vehicles should be colored in green.
(3) Every second, a vehicle should enter with probability 0.1 on the left lane of the on-merge, with random speed. These vehicles should only start entering the network after the first minute of simulation time, and there should be at most 30 of them throughout the whole simulation. These vehicles should be colored in red.
Note: for the colors, you will need to use the kwargs parameter.
Also, set color_vehicles to False in the simulation parameters so that the vehicles are not colored automatically according to their types.
The result should look something like this:
<img src="img/complex_merge_visual.png" width="100%"/>
You can try to do it yourself as an exercise if you want.
Here is a solution code:
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
python/group_pandas_data_by_hour_of_the_day.ipynb
|
mit
|
# Import libraries
import pandas as pd
import numpy as np
"""
Explanation: Title: Group Pandas Data By Hour Of The Day
Slug: group_pandas_data_by_hour_of_the_day
Summary: Group data by hour of the day using pandas.
Date: 2016-12-21 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create a time series of 2000 elements, one every five minutes, starting on 1/1/2000
time = pd.date_range('1/1/2000', periods=2000, freq='5min')
# Create a pandas series with a random values between 0 and 100, using 'time' as the index
series = pd.Series(np.random.randint(100, size=2000), index=time)
"""
Explanation: Create Data
End of explanation
"""
# View the first few rows of the data
series[0:10]
"""
Explanation: View Data
End of explanation
"""
# Group the data by the index's hour value, then aggregate by the average
series.groupby(series.index.hour).mean()
"""
Explanation: Group Data By Time Of The Day
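As a small variation on this recipe (not part of the original), grouping by both day of week and hour shows how the hourly pattern differs across days:
```python
# Group by (day of week, hour) of the index, then aggregate by the mean.
series.groupby([series.index.dayofweek, series.index.hour]).mean()
```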
End of explanation
"""
|
adamwang0705/cross_media_affect_analysis
|
develop/20171011-daheng-check_topics_basic_statistics.ipynb
|
mit
|
"""
Initialization
"""
'''
Standard modules
'''
import os
import pickle
import sqlite3
import time
from pprint import pprint
'''
Analysis modules
'''
import pandas as pd
'''
Custom modules
'''
import config
import utilities
'''
Misc
'''
nb_name = '20171011-daheng-check_topics_basic_statistics'
"""
Explanation: Check basic statistics of manually selected topics
Objective: make sure manually selected topics have high quality.
- Characteristic keywords: easy to recognize associated news.
- Amount of discussion: reasonable amount of associated news and tweets.
- Consistent in meaning: no drift/dispersion in content.
- Evolution of event: reasonable time-span of associated news.
Last modified: 2017-10-18
Roadmap
Manually compile a list of topics with keywords
Check number of associated news and tweets for each topic
Check news titles and sample tweets of each topic
Check time-span of each topic
Steps
End of explanation
"""
"""
Print out manually selected topics information
"""
for topic_ind, topic in enumerate(config.MANUALLY_SELECTED_TOPICS_LST):
print('({}/{}) {}'.format(topic_ind+1, len(config.MANUALLY_SELECTED_TOPICS_LST), topic))
"""
Explanation: Manually compile a list of topics with keywords
Topics information (category, name, keywords_lst) are manually compiled into config.MANUALLY_SELECTED_TOPICS_LST
End of explanation
"""
%%time
"""
Register
TOPICS_LST_PKL = os.path.join(DATA_DIR, 'topics.lst.pkl')
in config.
"""
if 0 == 1:
supplement_topics_lst = []
'''
Load in pickle for news data over selected period.
'''
news_period_df = pd.read_pickle(config.NEWS_PERIOD_DF_PKL)
for topic_ind, topic in enumerate(config.MANUALLY_SELECTED_TOPICS_LST):
localtime = time.asctime(time.localtime(time.time()))
print('({}/{}) processing topic: {} ... {}'.format(topic_ind+1,
len(config.MANUALLY_SELECTED_TOPICS_LST),
topic['name'],
localtime))
'''
Match out associated news titles.
'''
asso_news_native_ids_lst = []
for ind, row in news_period_df.iterrows():
if utilities.news_title_match(row['news_title'], topic['keywords_lst'], verbose=False):
asso_news_native_ids_lst.append(row['news_native_id'])
topic['news_native_ids_lst'] = asso_news_native_ids_lst
'''
Query associated tweets
'''
asso_tweets_ids_lst = []
query_news_tweets = '''
select tweet_id from tweets
where news_native_id = :news_native_id
order by tweet_id asc;'''
with sqlite3.connect(config.NEWS_TWEETS_DB_FILE) as conn:
cursor = conn.cursor()
for news_native_id in topic['news_native_ids_lst']:
cursor.execute(query_news_tweets, {'news_native_id': news_native_id})
tweets_ids_lst = [item[0] for item in cursor.fetchall()]
asso_tweets_ids_lst.extend(tweets_ids_lst)
topic['tweets_ids_lst'] = asso_tweets_ids_lst
supplement_topics_lst.append(topic)
'''
Make pickle
'''
with open(config.TOPICS_LST_PKL, 'wb') as f:
pickle.dump(supplement_topics_lst, f)
"""
Explanation: Check number of associated news and tweets for each topic
Build pickle for news_id and tweets_id associated with each topic
End of explanation
"""
"""
Test recover topics lst pkl
"""
if 0 == 1:
with open(config.TOPICS_LST_PKL, 'rb') as f:
topics_lst = pickle.load(f)
for topic_ind, topic in enumerate(topics_lst):
print('{} Topic_name: {}; news_num: {}; tweets_num: {}'.format(topic_ind,
topic['name'],
len(topic['news_native_ids_lst']),
len(topic['tweets_ids_lst'])))
"""
Explanation: Recover pickle and print number of news and tweets for each topic
End of explanation
"""
"""
Recover pkl
"""
if 1 == 1:
with open(config.TOPICS_LST_PKL, 'rb') as f:
topics_lst = pickle.load(f)
"""
Select topic
"""
if 1 == 1:
target_topic_ind = 26
topic = topics_lst[target_topic_ind]
'''
Print associated news titles
'''
if 1 == 1:
print('TOPIC: {}; KEYWORDS: {}'.format(topic['name'], topic['keywords_lst']))
# limit to first 100 news
news_native_ids_lst = topic['news_native_ids_lst'][:100]
query_news = '''
select news_title, news_collected_time from news
where news_native_id = :news_native_id
order by news_native_id asc;'''
with sqlite3.connect(config.NEWS_TWEETS_DB_FILE) as conn:
conn.row_factory = sqlite3.Row
cursor = conn.cursor()
for news_native_id in news_native_ids_lst:
cursor.execute(query_news, {'news_native_id': news_native_id})
for row in cursor.fetchall():
print('{}: {}'.format(row['news_collected_time'], row['news_title']))
'''
Print associated tweets
'''
if 1 == 1:
print('TOPIC: {}; KEYWORDS: {}'.format(topic['name'], topic['keywords_lst']))
# limit to first 150 tweets
tweets_ids_lst = topic['tweets_ids_lst'][:150]
query_tweets = '''
select tweet_text, tweet_collected_time from tweets
where tweet_id = :tweet_id
order by tweet_native_id asc;'''
with sqlite3.connect(config.NEWS_TWEETS_DB_FILE) as conn:
conn.row_factory = sqlite3.Row
cursor = conn.cursor()
for tweet_id in tweets_ids_lst:
cursor.execute(query_tweets, {'tweet_id': tweet_id})
for row in cursor.fetchall():
print('{}: {}'.format(row['tweet_collected_time'], row['tweet_text']))
"""
Explanation: Check news titles and sample tweets of each topic
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/awi/cmip6/models/sandbox-1/atmoschem.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'sandbox-1', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: AWI
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:37
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and any possible conflicts with parameterization level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
jshudzina/keras-tutorial
|
notebooks/01-TensorPoisonousMushrooms.ipynb
|
apache-2.0
|
from pandas import read_csv
srooms_df = read_csv('../data/agaricus-lepiota.data.csv')
from sklearn_pandas import DataFrameMapper
import sklearn
import numpy as np
mappings = ([
('edibility', sklearn.preprocessing.LabelEncoder()),
('odor', sklearn.preprocessing.LabelBinarizer()),
('habitat', sklearn.preprocessing.LabelBinarizer()),
('spore-print-color', sklearn.preprocessing.LabelBinarizer())
])
mapper = DataFrameMapper(mappings)
srooms_np = mapper.fit_transform(srooms_df.copy()).astype(np.float32)
from sklearn.model_selection import train_test_split
train, test = train_test_split(srooms_np, test_size = 0.2, random_state=7)
train_labels = train[:,0:1]
train_data = train[:,1:]
test_labels = test[:,0:1]
test_data = test[:,1:]
"""
Explanation: Tensorflow versus Poisonous Mushrooms
After the Keras Example, let's build a TensorFlow-based model as a comparison.
Feature Extraction
This example uses the same feature extraction techniques as the Keras Example.
In summary, the data prep follows these steps...
1. Load a pandas dataframe from a csv file.
2. Transform categorical data to one-hot representation (see the sketch after this list).
3. Split the training and test data sets.
4. Extract edibility as labels.
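As a toy illustration of step 2, scikit-learn's LabelBinarizer turns a categorical column into one column per category with a single 1 per row (the letter codes below are made up for the sketch, not the real mushroom attribute codes):
```
from sklearn.preprocessing import LabelBinarizer
# Four samples, three distinct categories -> a (4, 3) one-hot array
LabelBinarizer().fit_transform(['a', 'n', 'f', 'a'])
```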
End of explanation
"""
import tensorflow as tf
import math
def inference(samples, input_dim, dense1_units, dense2_units):
with tf.name_scope('dense_1'):
weights = tf.Variable(
tf.truncated_normal([input_dim, dense1_units],
stddev=1.0 / math.sqrt(float(input_dim))),
name='weights')
biases = tf.Variable(tf.zeros([dense1_units]),
name='biases')
dense1 = tf.nn.relu(tf.nn.xw_plus_b(samples, weights, biases))
with tf.name_scope('dropout'):
dropout = tf.nn.dropout(dense1, 0.5)
with tf.name_scope('dense_2'):
weights = tf.Variable(
tf.truncated_normal([dense1_units, dense2_units],
stddev=1.0 / math.sqrt(float(dense2_units))),
name='weights')
biases = tf.Variable(tf.zeros([dense2_units]),
name='biases')
output = tf.sigmoid(tf.nn.xw_plus_b(dropout, weights, biases))
return output
"""
Explanation: Model Definition
TensorFlow requires a bit more work than Keras to define the network, because we need to define the model's parameters (i.e., the weights and biases) ourselves. Here is a Keras code snippet for comparison:
```
from keras.models import Sequential
from keras.layers import Dense, Dropout
model = Sequential()
model.add(Dense(20, activation='relu', input_dim=25))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
```
Here are the key differences:
1. TensorFlow uses name scoping to logically separate the layers.
2. Each dense layer defines and initializes its own weights and biases variables (done implicitly in Keras).
3. TensorFlow doesn't use a sequential model; it uses a graph. The model defines tensor references between layers, as in the sketch below.
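For illustration only, wiring a graph from this `inference` function could look like the following sketch; the placeholder `x` is hypothetical and is not part of the training graph built later in this notebook (which feeds data through an input producer instead):
```
x = tf.placeholder(tf.float32, shape=(None, 25))
probs = inference(x, input_dim=25, dense1_units=20, dense2_units=1)
```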
End of explanation
"""
def loss(output, labels, from_logits=False):
if not from_logits:
epsilon = 10e-8
output = tf.clip_by_value(output, epsilon, 1 - epsilon)
output = tf.log(output / (1 - output))
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=output)
return tf.reduce_mean(xentropy)
def training(loss):
tf.summary.scalar('loss', loss)
optimizer = tf.train.AdamOptimizer()
global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = optimizer.minimize(loss, global_step=global_step)
return train_op
def predict(output):
return tf.round(output)
def accuracy(output, labels):
return tf.reduce_mean(tf.to_float(tf.equal(predict(output),labels)))
"""
Explanation: Model Compile
Unlike Keras, TensorFlow doesn't provide pre-canned functions for training. The model needs the following functions defined.
Define a loss function. The function converts probabilities to logits; the clip prevents a log(0).
Define a training function, which uses the loss to compute the gradients.
Define an accuracy function as a training metric.
Again, Keras hides these details by providing pre-canned loss and accuracy functions. The same definition in Keras is a one-liner.
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
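As a quick numeric sanity check, the clip-and-log step in `loss()` is just the inverse of the sigmoid, so the model's sigmoid outputs are mapped back to logits before the cross-entropy call (a sketch with made-up probabilities):
```
import numpy as np
p = np.clip(np.array([0.01, 0.5, 0.99]), 1e-7, 1 - 1e-7)
logits = np.log(p / (1 - p))            # inverse sigmoid
recovered = 1 / (1 + np.exp(-logits))   # sigmoid recovers the clipped probabilities
```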
End of explanation
"""
import time
log_dir = './logs/tensor_srooms'
num_epochs=10
batch_size=64
with tf.Graph().as_default():
with tf.name_scope('input'):
features_initializer = tf.placeholder(dtype=tf.float32, shape=train_data.shape)
labels_initializer = tf.placeholder(dtype=tf.float32, shape=train_labels.shape)
input_features = tf.Variable(features_initializer, trainable=False, collections=[])
input_labels = tf.Variable(labels_initializer, trainable=False, collections=[])
# Shuffle the training data between epochs and train in batchs
feature, label = tf.train.slice_input_producer([input_features, input_labels], num_epochs=num_epochs)
features, labels = tf.train.batch([feature, label], batch_size=batch_size)
# Define layers dimensions
output = inference(features, 25, 20, 1)
loss_op = loss(output, labels)
train_op = training(loss_op)
# Define the metrics op
acc_op = accuracy(predict(output), labels)
# Initialize all variables op
init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
summary_op = tf.summary.merge_all()
# Saver for the weights
saver = tf.train.Saver()
print('create saver')
# Start Session
sess = tf.Session()
sess.run(init_op)
print('session started')
# Load up the data.
sess.run(input_features.initializer, feed_dict={features_initializer: train_data})
sess.run(input_labels.initializer, feed_dict={labels_initializer: train_labels})
print('loaded data')
# Write the summary for tensorboard
summary_writer = tf.summary.FileWriter(log_dir, sess.graph)
# coordinate reading threads
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
step = 0
while not coord.should_stop():
start_time = time.time()
# Run one step of the model.
_, loss_value, acc_value = sess.run([train_op, loss_op, acc_op])
duration = time.time() - start_time
# Write the summaries and print an overview fairly often.
if step % 100 == 0:
# Print status to stdout.
print('Step %d: loss = %.2f, acc = %.3f (%.3f sec)' % (step, loss_value, acc_value, duration))
# Update the events file.
summary_str = sess.run(summary_op)
summary_writer.add_summary(summary_str, step)
step += 1
except tf.errors.OutOfRangeError:
print('Saving')
saver.save(sess, log_dir, global_step=step)
print('Done training for %d epochs, %d steps.' % (num_epochs, step))
finally:
# When done, ask the threads to stop.
coord.request_stop()
# Wait for threads to finish.
coord.join(threads)
sess.close()
"""
Explanation: Training
This entire code block represents a single line in Keras...
model.fit(train_data, train_labels, epochs=10, batch_size=32, callbacks=[tensor_board])
So, what's going on here?
1. Define an input producer to batch samples and shuffle examples between epochs.
2. Create a SummaryWriter to write TensorBoard logs
3. Iterate over each batch
* Print accuracy and loss every 100 steps
* Write accuracy and loss summaries to the TensorBoard log every 100 steps
4. Save parameters when done.
End of explanation
"""
|
dnaneet/ELC
|
DATA/ELC_computable_report_AY1920.ipynb
|
gpl-3.0
|
#@title
#%%capture
import numpy as np #Linear algebra
import pandas as pd #Time series, datetime object manipulation
import matplotlib.pyplot as plt #plotting
#import seaborn as sb
#plt.style.use('fivethirtyeight') #Plot style preferred by author.
import calendar
from tabulate import tabulate #pretty display of tables
import plotly.express as px #Plotly interactive plots
from plotly.subplots import make_subplots
import warnings #suppress warning messages -- declutter
warnings.filterwarnings('ignore')
!pip install calmap #Calendar heat map
#!pip install qgrid #dynamic manipulation of tables.
"""
Explanation: Computable Document prototype: An interactive ELC report
End of explanation
"""
#@title
#url = 'https://raw.githubusercontent.com/dnaneet/ELC/master/DATA/DATAf19.csv' #data is stored at this URL
df = pd.read_csv('DATA_AY1920.csv') #df = dataframe. Read data from the URL
#df.head(5) #First 5 entries. Data exploration. Gives the reader an idea of what the ELC data looks like.
df=df.replace({'December': 12, 'January': 1, 'February': 2, 'March': 3, 'April': 4,
'May': 5, 'June': 6, 'July': 7, 'August': 8, 'September': 9,
'October': 10, 'November': 11}) #replace month names with month numbers
df=df.replace({'Monday': 1, 'Tuesday': 2, 'Wednesday': 3,
'Thursday': 4, 'Friday': 5, 'Saturday': 6,
'Sunday': 7}) #replace day names with day numbers
#df.head(5) #print data frame. Data exploration.
#df.dtypes #Uncomment this line and run if you want to show what the datatypes of each column in the time series is
df['mdy'] = pd.to_datetime((df.year*10000+df.month*100+df.date).apply(str),format='%Y%m%d')
"""
Explanation: Preprocessing: Import ELC sign-in data from a URL
After the data is imported, month and day identifiers, which are strings, are replaced with integers. This operation allows the imported dataset to be cast into a time series type so that further operations can be performed on it. This intermediate step of converting strings to integers is necessary for the Python time-series functions to accept the data as arguments.
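As a toy sketch of the same idea on made-up rows (the values below are hypothetical, not ELC data):
```
toy = pd.DataFrame({'year': [2019, 2020], 'month': ['September', 'February'], 'date': [12, 3]})
toy = toy.replace({'September': 9, 'February': 2})
toy['mdy'] = pd.to_datetime((toy.year*10000 + toy.month*100 + toy.date).apply(str), format='%Y%m%d')
```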
End of explanation
"""
#@title
n=5 #last n records
obj_mdy = df['visits'].groupby(df['mdy']).count() #Grouping number of visits by day
#obj_mdy.tail(n) #What does this grouped data look like?
xdata = np.array(obj_mdy.reset_index())[:,0]
ydata = np.array(obj_mdy.reset_index())[:,1]
df_mdy = pd.DataFrame({'mdy': xdata, 'visits': ydata})
fig = px.bar(df_mdy, x="mdy", y="visits")
fig.show()
#df_mdy.sample(n=5, random_state=1)
#df_day_hour['dayName'] = df_day_hour['day'].replace({1:'Mon', 2:'Tue', 3:'Wed', 4:'Thur', 5:'Fri', 6:'Sat', 7:'Sun'}) #replace day names with day numbers
#ax = obj_mdy.tail(len(df)).plot(kind='bar')
#ax.set_xticklabels(obj_mdy.tail(len(df)).index.strftime('%b-%d-%y'));
#plt.xlabel('Date')
#plt.ylabel('Number of sign-ins')
#plt.show()
#df_mdy.head(5)
df_mdy['DayOfOperation'] = np.arange(len(df_mdy))
df_mdy.plot.bar(x='DayOfOperation', y='visits', rot=90,figsize=(12,8))
plt.show()
"""
Explanation: Visualization of ELC usage data
Now that the ELC visit data has been cast into the appropriate format, exploratory visualization is performed. The exploratory steps are documented alongside the code as comments.
Sign-ins by date
The ELC has peak usage in the 2nd or 3rd week of September. This can be correlated with Mechanics of Materials students requiring help with Statics.
End of explanation
"""
#@title
import calmap #The Calendar (heat) map package is imported to provide higher quality visualization than bargraphs
#https://pythonhosted.org/calmap/#'
#plt.figure(figsize=(12,18))
#calmap.yearplot(df_mdy, year=2019, daylabels='MTWTRFSS')
#plt.show()
print('Calendar Map of ELC Sign-ins for Fall 2019')
fig = plt.figure(figsize=(12,5))
ax = fig.add_subplot(111)
cax = calmap.yearplot(obj_mdy, year=2019, ax=ax, cmap='jet')
fig.colorbar(cax.get_children()[1], ax=cax, orientation='horizontal')
print('\n\n\n')
print('Calendar Map of ELC Sign-ins for Spring 2020')
fig = plt.figure(figsize=(12,5))
ax = fig.add_subplot(111)
cax = calmap.yearplot(obj_mdy, year=20, ax=ax, cmap='jet')
fig.colorbar(cax.get_children()[1], ax=cax, orientation='horizontal')
"""
Explanation: Calendar heat map of sign-ins
The ELC was less utilized by students in the Spring semester
End of explanation
"""
#@title
obj_course = df['visits'].groupby(df['visits']).count()
course_data = np.array(obj_course)
#df_course = pd.DataFrame({'course': ['Statics', 'MechanicsOfMatls', 'Thermodynamics','Dynamics', 'MATLAB', 'StudySpace'],
# 'NumberOfSignins' : course_data})
#print(course_data)
df_course = pd.DataFrame({'course': ['Statics', 'MechanicsOfMatls', 'Thermodynamics','Dynamics', 'MATLAB', 'StudySpace'],
'NumberOfSignIns': course_data[0:6]})
df_course.plot.bar(x='course', y='NumberOfSignIns', rot=90,figsize=(12,8))
plt.show()
fig = px.bar(df_course, x="course", y="NumberOfSignIns")
fig.show()
"""
Explanation: Sign-ins by course:
Statics (MEEM2110) help was the most used service. Since 2018, it has been an ELC policy to ensure that all coaches hired are proficient in Statics.
End of explanation
"""
#@title
obj_hr = df['visits'].groupby(df['hour']).count() #Grouping number of visits by hour
xdata = np.array(obj_hr.reset_index())[:,0]
ydata = np.array(obj_hr.reset_index())[:,1]
df_hr = pd.DataFrame({'hour': xdata, 'visits': ydata})
fig = px.bar(df_hr, x="hour", y="visits")
fig.show()
df_hr.plot.bar(x='hour', y='visits', rot=90,figsize=(12,8))
plt.show()
#df_hr.head(10) #What does this grouped data look like?
#plt.xlabel('Hour of day')
#plt.ylabel('Total number of sign-ins')
#ax2 = df_hr.plot(kind='bar')
#plt.plot()
"""
Explanation: Sign-ins by hour of day
Afternoon hours were the most popular time (pre-COVID-19) for students to visit the ELC. Since 2018, the ELC has been staffed with ~50% more coaches during the 12-3pm period than during other periods.
End of explanation
"""
#@title
obj_day = df['visits'].groupby(df['day']).count() #Grouping number of visits by day
#df_day.head(10) #What does this grouped data look like?
xdata = np.array(obj_day.reset_index())[:,0]
ydata = np.array(obj_day.reset_index())[:,1]
df_day = pd.DataFrame({'day': xdata, 'visits': ydata})
df_day["day"] = df_day["day"].replace({1:'Mon', 2:'Tue', 3:'Wed',
4:'Thur', 5:'Fri', 6:'Sat',
7:'Sun'})
fig = px.bar(df_day, x="day", y="visits")
fig.show()
df_day.plot.bar(x='day', y='visits', rot=90,figsize=(12,8))
plt.show()
#obj_day.plot(kind='bar')
#locs, labels = plt.xticks()
#plt.xticks(np.arange(7), ('Mon', 'Tue', 'Wed', 'Thur', 'Fri', 'Sat', 'Sun'), rotation=45)
#plt.xlabel('Day of week')
#plt.ylabel('Total sign-ins')
#plt.plot()
"""
Explanation: Sign-ins by day of week
Monday through Thursday are when most of the sign-ins are recorded. The leanest day of the week is Saturday; this lean usage has been recorded since Spring 2018. To ensure that the ELC has more coaches Monday through Thursday, Saturdays have the lowest operating hours and coaching staff.
End of explanation
"""
#@title
obj_month = df['visits'].groupby(df['month']).count() #Grouping number of visits by day
#print(df_month.head(10)) #What does this grouped data look like?
xdata = np.array(obj_month.reset_index())[:,0]
ydata = np.array(obj_month.reset_index())[:,1]
df_month = pd.DataFrame({'month': xdata, 'visits': ydata})
df_month["monthName"]=['Jan', 'Feb', 'Mar','Sep', 'Oct', 'Nov', 'Dec']
fig = px.bar(df_month, x="monthName", y="visits")
fig.show()
df_month.plot.bar(x='monthName', y='visits', rot=90,figsize=(12,8))
plt.show()
#ax = dobj_month.plot(kind='bar')
#ax.set_xticklabels(df_mdy.tail(n).index.strftime('%b-%d-%y'));
#ax.set_xticks(np.arange(df_month.shape[0]), ('Jan', 'Feb', 'Mar', 'Apr', 'Jun', 'Aug', 'Sep', 'Nov'));
#plt.xticks(np.arange(obj_month.shape[0]), ('Sep', 'Oct', 'Nov', 'Dec'), rotation=90)
#plt.xlabel('Month')
#plt.ylabel('Number of sign-ins')
#plt.show()
"""
Explanation: Sign-ins by month
Since 2018, the month of September has had the most sign-ins. This is due to a week-2 influx of Mechanics of Materials student visits for Statics help.
End of explanation
"""
#@title
#url4 = 'https://raw.githubusercontent.com/dnaneet/ELC/master/DATA/expenses_f19.csv'
#df4 = pd.read_csv(url4, error_bad_lines=False)
df4 = pd.read_csv('expenses_ay1920.csv', error_bad_lines=False)
print('ELC Expenses ($) on wages for AY 19-20')
print(tabulate(df4, headers='keys', tablefmt='psql'))
print('\n\n\n')
url2='https://raw.githubusercontent.com/dnaneet/ELC/master/DATA/ay1920_costperhour.csv'
df2 = pd.read_csv(url2)
print('Average cost ($) per hour by hour of day, for the semester')
print(tabulate(df2, headers='keys', tablefmt='psql'))
#print('Mean cost per day ($):\n')
#print(df2.mean())
print('\n\n\n')
print('Average (rounded) number of coaches per hour')
url3='https://raw.githubusercontent.com/dnaneet/ELC/master/DATA/ay1920_num_coaches_per_hr.csv'
df3 = pd.read_csv(url3)
print(tabulate(df3, headers='keys', tablefmt='psql'))
"""
Explanation: Grouping of data by (visits, day, hour): 3D scatter plot
Financial data
Cost per hour and number of coaches per hour, in any given regular week (week 2 - week before final exams). A zero or NaN entry in a table suggests that the ELC is not open during those hours.
The actual ELC expenditure on coach salaries was recorded as $26,041.38 for AY 2019-2020. The initial estimate was $27618.13. Since it took 1 week to restart the ELC after COVID19 closure, this reflected as approximately ~$1577. The ELC coach salaries amount to approximately ~$1000 a week.
End of explanation
"""
|
darkomen/TFG
|
modelado/temperatura/modelado.ipynb
|
cc0-1.0
|
# Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns
# Show the versions of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Show all plots inline in the notebook
%pylab inline
# Open the csv file with the sample data
datos = pd.read_csv('datos.csv')
# Store in a list the file columns we will work with
#columns = ['temperatura', 'entrada']
columns = ['temperatura', 'entrada']
"""
Explanation: Modelling a system with IPython
For the filament extruder to work correctly, the temperature of the barrel must be properly regulated. For this purpose we will use a system consisting of a resistor that dissipates heat and a PT100 temperature sensor, so that the loop can be closed and the system controlled. The process followed is described below.
End of explanation
"""
# Show the information obtained from the test in several plots
ax = datos[columns].plot(secondary_y=['entrada'],figsize=(10,5), ylim=(20,60),title='Mathematical model of the system')
ax.set_xlabel('Time')
ax.set_ylabel('Temperature [ºC]')
#datos_filtrados['RPM TRAC'].plot(secondary_y=True,style='g',figsize=(20,20)).set_ylabel=('RPM')
"""
Explanation: System response
The first step is to apply an open-loop step input to the system in order to observe its time response. As the system heats up, we record the data so that it can be plotted afterwards.
End of explanation
"""
# Find the second-order polynomial that best fits the data distribution
reg = np.polyfit(datos['time'],datos['temperatura'],2)
# Compute the y values from the regression
ry = np.polyval(reg,datos['time'])
print (reg)
plt.plot(datos['time'],datos['temperatura'],'b^', label=('Experimental data'))
plt.plot(datos['time'],ry,'ro', label=('Polynomial regression'))
plt.legend(loc=0)
plt.grid(True)
plt.xlabel('Time')
plt.ylabel('Temperature [ºC]')
"""
Explanation: Computing the polynomial
We fit a second-order polynomial regression to find the equation that best follows the trend of our data.
End of explanation
"""
# Store in a list the file columns we will work with
datos_it1 = pd.read_csv('Regulador1.csv')
columns = ['temperatura']
# Show the information obtained from the test in several plots
ax = datos_it1[columns].plot(figsize=(10,5), ylim=(20,100),title='Mathematical model of the system with controller',)
ax.set_xlabel('Time')
ax.set_ylabel('Temperature [ºC]')
ax.hlines([80],0,3500,colors='r')
# Compute the overshoot Mp
Tmax = datos_it1.describe().loc['max','temperatura'] # Maximum temperature reached in the test
Sp=80.0 # Setpoint value
Mp= ((Tmax-Sp)/(Sp))*100
print("The overshoot is: {:.2f}%".format(Mp))
# Compute the steady-state error
Errp = datos_it1.describe().loc['75%','temperatura'] # Temperature value in steady state
Eregimen = abs(Sp-Errp)
print("The steady-state error is: {:.2f}".format(Eregimen))
"""
Explanation: The characteristic polynomial of our system is:
$$P_x= 25.9459 -1.5733·10^{-4}·X - 8.18174·10^{-9}·X^2$$
Laplace transform
If we compute the Laplace transform of the system, we obtain the following result:
$$G_s = \frac{25.95·S^2 - 0.00015733·S + 1.63635·10^{-8}}{S^3}$$
PID design with OCTAVE
Applying the Ziegler-Nichols tuning method, we will compute a PID that regulates the system correctly. This method quickly provides approximate values of $K_p$, $K_i$ and $K_d$ that serve as a starting point for tuning the controller. It consists of computing three characteristic parameters, from which the controller is obtained:
$$G_s=K_p\left(1+\frac{1}{T_i·S}+T_d·S\right)=K_p+\frac{K_i}{S}+K_d·S$$
The characteristic parameters of the method are computed with Octave using the following code:
~~~
pkg load control
% the data passed to the tf() function must be the numerator and denominator of our system.
H=tf([25.95 0.000157333 1.63635E-8],[1 0 0 0]);
step(H);
dt=0.150;
t=0:dt:65;
y=step(H,t);
dy=diff(y)/dt;
[m,p]=max(dy);
yi=y(p);
ti=t(p);
L=ti-yi/m
Tao=(y(end)-yi)/m+ti-L
Kp=1.2*Tao/L
Ti=2*L;
Td=0.5*L;
Ki=Kp/ti;
Kd=Kp*Td;
~~~
In this first iteration, the values obtained are the following:
$K_p = 6082.6$, $K_i=93.868$, $K_d=38.9262$
Our controller therefore has the following characteristic equation:
$$G_s = \frac{38.9262·S^2 + 6082.6·S + 93.868}{S}$$
Iteration 1 of the controller
End of explanation
"""
# Store in a list the file columns we will work with
datos_it2 = pd.read_csv('Regulador2.csv')
columns = ['temperatura']
# Show the information obtained from the test in several plots
ax2 = datos_it2[columns].plot(figsize=(10,5), ylim=(20,100),title='Mathematical model of the system with controller',)
ax2.set_xlabel('Time')
ax2.set_ylabel('Temperature [ºC]')
ax2.hlines([80],0,3500,colors='r')
# Compute the overshoot Mp
Tmax = datos_it2.describe().loc['max','temperatura'] # Maximum temperature reached in the test
Sp=80.0 # Setpoint value
Mp= ((Tmax-Sp)/(Sp))*100
print("The overshoot is: {:.2f}%".format(Mp))
# Compute the steady-state error
Errp = datos_it2.describe().loc['75%','temperatura'] # Temperature value in steady state
Eregimen = abs(Sp-Errp)
print("The steady-state error is: {:.2f}".format(Eregimen))
"""
Explanation: In this case we set a setpoint of 80 ºC. As we can see, once the controller is introduced the temperature tends to stabilize, but there is a lot of overshoot. We therefore increase the values of $K_i$ and $K_d$, the values for this second iteration being:
$K_p = 6082.6$, $K_i=103.25$, $K_d=51.425$
Iteration 2 of the controller
End of explanation
"""
# Store in a list the file columns we will work with
datos_it3 = pd.read_csv('Regulador3.csv')
columns = ['temperatura']
# Show the information obtained from the test in several plots
ax3 = datos_it3[columns].plot(figsize=(10,5), ylim=(20,180),title='Mathematical model of the system with controller',)
ax3.set_xlabel('Time')
ax3.set_ylabel('Temperature [ºC]')
ax3.hlines([160],0,6000,colors='r')
# Compute the overshoot Mp
Tmax = datos_it3.describe().loc['max','temperatura'] # Maximum temperature reached in the test
Sp=160.0 # Setpoint value
Mp= ((Tmax-Sp)/(Sp))*100
print("The overshoot is: {:.2f}%".format(Mp))
# Compute the steady-state error
Errp = datos_it3.describe().loc['75%','temperatura'] # Temperature value in steady state
Eregimen = abs(Sp-Errp)
print("The steady-state error is: {:.2f}".format(Eregimen))
"""
Explanation: In this second iteration we managed to reduce the initial overshoot, but the steady-state error is larger. We therefore increase the values of $K_i$ and $K_d$ again, the values for this third iteration being:
$K_p = 6082.6$, $K_i=121.64$, $K_d=60$
Iteration 3 of the controller
End of explanation
"""
# Store in a list the file columns we will work with
datos_it4 = pd.read_csv('Regulador4.csv')
columns = ['temperatura']
# Show the information obtained from the test in several plots
ax4 = datos_it4[columns].plot(figsize=(10,5), ylim=(20,180),title='Mathematical model of the system with controller',)
ax4.set_xlabel('Time')
ax4.set_ylabel('Temperature [ºC]')
ax4.hlines([160],0,7000,colors='r')
# Compute the overshoot Mp
Tmax = datos_it4.describe().loc['max','temperatura'] # Maximum temperature reached in the test
print (" {:.2f}".format(Tmax))
Sp=160.0 # Setpoint value
Mp= ((Tmax-Sp)/(Sp))*100
print("The overshoot is: {:.2f}%".format(Mp))
# Compute the steady-state error
Errp = datos_it4.describe().loc['75%','temperatura'] # Temperature value in steady state
Eregimen = abs(Sp-Errp)
print("The steady-state error is: {:.2f}".format(Eregimen))
"""
Explanation: In this case, a setpoint of 160 ºC was used. As we can see, the initial overshoot has decreased compared with the previous iteration and the steady-state error is smaller. To try to minimize the error further, we increase only the value of $K_d$, the values for this fourth iteration of the controller being:
$K_p = 6082.6$, $K_i=121.64$, $K_d=150$
Iteration 4
End of explanation
"""
|
keras-team/keras-io
|
examples/vision/ipynb/edsr.ipynb
|
apache-2.0
|
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
AUTOTUNE = tf.data.AUTOTUNE
"""
Explanation: Enhanced Deep Residual Networks for single-image super-resolution
Author: Gitesh Chawda<br>
Date created: 2022/04/07<br>
Last modified: 2022/04/07<br>
Description: Training an EDSR model on the DIV2K Dataset.
Introduction
In this example, we implement
Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR)
by Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee.
The EDSR architecture is based on the SRResNet architecture and consists of multiple
residual blocks. It uses constant scaling layers instead of batch normalization layers to
produce consistent results (input and output have similar distributions, thus
normalizing intermediate features may not be desirable). Instead of using a L2 loss (mean squared error),
the authors employed an L1 loss (mean absolute error), which performs better empirically.
Our implementation only includes 16 residual blocks with 64 channels.
Alternatively, as shown in the Keras example
Image Super-Resolution using an Efficient Sub-Pixel CNN,
you can do super-resolution using an ESPCN Model. According to the survey paper, EDSR is one of the top-five
best-performing super-resolution methods based on PSNR scores. However, it has more
parameters and requires more computational power than other approaches.
It has a PSNR value (≈34db) that is slightly higher than ESPCN (≈32db).
As per the survey paper, EDSR performs better than ESPCN.
Paper:
A comprehensive review of deep learning based single image super-resolution
Comparison Graph:
<img src="https://dfzljdn9uc3pi.cloudfront.net/2021/cs-621/1/fig-11-2x.jpg" width="500" />
Imports
End of explanation
"""
# Download DIV2K from TF Datasets
# Using bicubic 4x degradation type
div2k_data = tfds.image.Div2k(config="bicubic_x4")
div2k_data.download_and_prepare()
# Taking train data from div2k_data object
train = div2k_data.as_dataset(split="train", as_supervised=True)
train_cache = train.cache()
# Validation data
val = div2k_data.as_dataset(split="validation", as_supervised=True)
val_cache = val.cache()
"""
Explanation: Download the training dataset
We use the DIV2K Dataset, a prominent single-image super-resolution dataset with 1,000
images of scenes with various sorts of degradations,
divided into 800 images for training, 100 images for validation, and 100
images for testing. We use 4x bicubic downsampled images as our "low quality" reference.
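Once downloaded, a quick peek at one raw pair can confirm the supervised (lowres, highres) structure (a sketch; per-image shapes vary across DIV2K):
```
lr, hr = next(iter(train.take(1)))
print(lr.shape, hr.shape)
```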
End of explanation
"""
def flip_left_right(lowres_img, highres_img):
"""Flips Images to left and right."""
# Outputs random values from a uniform distribution in between 0 to 1
rn = tf.random.uniform(shape=(), maxval=1)
# If rn is less than 0.5 it returns original lowres_img and highres_img
# If rn is greater than 0.5 it returns flipped image
return tf.cond(
rn < 0.5,
lambda: (lowres_img, highres_img),
lambda: (
tf.image.flip_left_right(lowres_img),
tf.image.flip_left_right(highres_img),
),
)
def random_rotate(lowres_img, highres_img):
"""Rotates Images by 90 degrees."""
# Outputs random values from uniform distribution in between 0 to 4
rn = tf.random.uniform(shape=(), maxval=4, dtype=tf.int32)
# Here rn signifies number of times the image(s) are rotated by 90 degrees
return tf.image.rot90(lowres_img, rn), tf.image.rot90(highres_img, rn)
def random_crop(lowres_img, highres_img, hr_crop_size=96, scale=4):
"""Crop images.
low resolution images: 24x24
    high resolution images: 96x96
"""
lowres_crop_size = hr_crop_size // scale # 96//4=24
lowres_img_shape = tf.shape(lowres_img)[:2] # (height,width)
lowres_width = tf.random.uniform(
shape=(), maxval=lowres_img_shape[1] - lowres_crop_size + 1, dtype=tf.int32
)
lowres_height = tf.random.uniform(
shape=(), maxval=lowres_img_shape[0] - lowres_crop_size + 1, dtype=tf.int32
)
highres_width = lowres_width * scale
highres_height = lowres_height * scale
lowres_img_cropped = lowres_img[
lowres_height : lowres_height + lowres_crop_size,
lowres_width : lowres_width + lowres_crop_size,
] # 24x24
highres_img_cropped = highres_img[
highres_height : highres_height + hr_crop_size,
highres_width : highres_width + hr_crop_size,
] # 96x96
return lowres_img_cropped, highres_img_cropped
"""
Explanation: Flip, crop and resize images
End of explanation
"""
def dataset_object(dataset_cache, training=True):
ds = dataset_cache
ds = ds.map(
lambda lowres, highres: random_crop(lowres, highres, scale=4),
num_parallel_calls=AUTOTUNE,
)
if training:
ds = ds.map(random_rotate, num_parallel_calls=AUTOTUNE)
ds = ds.map(flip_left_right, num_parallel_calls=AUTOTUNE)
# Batching Data
ds = ds.batch(16)
if training:
        # Repeating the data so that the cardinality of the dataset becomes infinite
ds = ds.repeat()
# prefetching allows later images to be prepared while the current image is being processed
ds = ds.prefetch(buffer_size=AUTOTUNE)
return ds
train_ds = dataset_object(train_cache, training=True)
val_ds = dataset_object(val_cache, training=False)
"""
Explanation: Prepare a tf.data.Dataset object
We augment the training data with random horizontal flips and 90° rotations.
As low resolution images, we use 24x24 RGB input patches.
End of explanation
"""
lowres, highres = next(iter(train_ds))
# High Resolution Images
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(highres[i].numpy().astype("uint8"))
plt.title(highres[i].shape)
plt.axis("off")
# Low Resolution Images
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(lowres[i].numpy().astype("uint8"))
plt.title(lowres[i].shape)
plt.axis("off")
def PSNR(super_resolution, high_resolution):
"""Compute the peak signal-to-noise ratio, measures quality of image."""
# Max value of pixel is 255
psnr_value = tf.image.psnr(high_resolution, super_resolution, max_val=255)[0]
return psnr_value
"""
Explanation: Visualize the data
Let's visualize a few sample images:
End of explanation
"""
class EDSRModel(tf.keras.Model):
def train_step(self, data):
# Unpack the data. Its structure depends on your model and
# on what you pass to `fit()`.
x, y = data
with tf.GradientTape() as tape:
y_pred = self(x, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
# Compute gradients
trainable_vars = self.trainable_variables
gradients = tape.gradient(loss, trainable_vars)
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
self.compiled_metrics.update_state(y, y_pred)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
def predict_step(self, x):
# Adding dummy dimension using tf.expand_dims and converting to float32 using tf.cast
x = tf.cast(tf.expand_dims(x, axis=0), tf.float32)
# Passing low resolution image to model
super_resolution_img = self(x, training=False)
# Clips the tensor from min(0) to max(255)
super_resolution_img = tf.clip_by_value(super_resolution_img, 0, 255)
# Rounds the values of a tensor to the nearest integer
super_resolution_img = tf.round(super_resolution_img)
# Removes dimensions of size 1 from the shape of a tensor and converting to uint8
super_resolution_img = tf.squeeze(
tf.cast(super_resolution_img, tf.uint8), axis=0
)
return super_resolution_img
# Residual Block
def ResBlock(inputs):
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, padding="same")(x)
x = layers.Add()([inputs, x])
return x
# Upsampling Block
def Upsampling(inputs, factor=2, **kwargs):
x = layers.Conv2D(64 * (factor ** 2), 3, padding="same", **kwargs)(inputs)
x = tf.nn.depth_to_space(x, block_size=factor)
x = layers.Conv2D(64 * (factor ** 2), 3, padding="same", **kwargs)(x)
x = tf.nn.depth_to_space(x, block_size=factor)
return x
def make_model(num_filters, num_of_residual_blocks):
# Flexible Inputs to input_layer
input_layer = layers.Input(shape=(None, None, 3))
# Scaling Pixel Values
x = layers.Rescaling(scale=1.0 / 255)(input_layer)
x = x_new = layers.Conv2D(num_filters, 3, padding="same")(x)
# 16 residual blocks
for _ in range(num_of_residual_blocks):
x_new = ResBlock(x_new)
x_new = layers.Conv2D(num_filters, 3, padding="same")(x_new)
x = layers.Add()([x, x_new])
x = Upsampling(x)
x = layers.Conv2D(3, 3, padding="same")(x)
output_layer = layers.Rescaling(scale=255)(x)
return EDSRModel(input_layer, output_layer)
model = make_model(num_filters=64, num_of_residual_blocks=16)
"""
Explanation: Build the model
In the paper, the authors train three models: EDSR, MDSR, and a baseline model. In this code example,
we only train the baseline model.
Comparison with model with three residual blocks
The residual block design of EDSR differs from that of ResNet. Batch normalization
layers have been removed (together with the final ReLU activation): since batch normalization
layers normalize the features, they hurt output value range flexibility.
It is thus better to remove them. Further, it also helps reduce the
amount of GPU RAM required by the model, since the batch normalization layers consume the same amount of
memory as the preceding convolutional layers.
<img src="https://miro.medium.com/max/1050/1*EPviXGqlGWotVtV2gqVvNg.png" width="500" />
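For contrast only, a conventional ResNet-style block that keeps batch normalization and a final activation might look like the sketch below; it is not used by the EDSR model defined above:
```
def res_block_with_bn(inputs, filters=64):
    # Standard residual block with BN and a trailing activation (illustrative only)
    x = layers.Conv2D(filters, 3, padding="same")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Add()([inputs, x])
    return layers.Activation("relu")(x)
```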
End of explanation
"""
# Using adam optimizer with initial learning rate as 1e-4, changing learning rate after 5000 steps to 5e-5
optim_edsr = keras.optimizers.Adam(
learning_rate=keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5000], values=[1e-4, 5e-5]
)
)
# Compiling model with loss as mean absolute error(L1 Loss) and metric as psnr
model.compile(optimizer=optim_edsr, loss="mae", metrics=[PSNR])
# Training for more epochs will improve results
model.fit(train_ds, epochs=100, steps_per_epoch=200, validation_data=val_ds)
"""
Explanation: Train the model
End of explanation
"""
def plot_results(lowres, preds):
"""
Displays low resolution image and super resolution image
"""
plt.figure(figsize=(24, 14))
plt.subplot(132), plt.imshow(lowres), plt.title("Low resolution")
plt.subplot(133), plt.imshow(preds), plt.title("Prediction")
plt.show()
for lowres, highres in val.take(10):
lowres = tf.image.random_crop(lowres, (150, 150, 3))
preds = model.predict_step(lowres)
plot_results(lowres, preds)
"""
Explanation: Run inference on new images and plot the results
End of explanation
"""
|
google-research/google-research
|
dnn_predict_accuracy/colab/dnn_predict_accuracy.ipynb
|
apache-2.0
|
from __future__ import division
import time
import os
import json
import sys
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import colors
import pandas as pd
import seaborn as sns
from scipy import stats
from tensorflow import keras
from tensorflow.io import gfile
import lightgbm as lgb
DATAFRAME_CONFIG_COLS = [
'config.w_init',
'config.activation',
'config.learning_rate',
'config.init_std',
'config.l2reg',
'config.train_fraction',
'config.dropout']
CATEGORICAL_CONFIG_PARAMS = ['config.w_init', 'config.activation']
CATEGORICAL_CONFIG_PARAMS_PREFIX = ['winit', 'act']
DATAFRAME_METRIC_COLS = [
'test_accuracy',
'test_loss',
'train_accuracy',
'train_loss']
TRAIN_SIZE = 15000
# TODO: modify the following lines
CONFIGS_PATH_BASE = 'path_to_the_file_with_best_configs'
MNIST_OUTDIR = "path_to_files_with_mnist_collection"
FMNIST_OUTDIR = 'path_to_files_with_fmnist_collection'
CIFAR_OUTDIR = 'path_to_files_with_cifar10gs_collection'
SVHN_OUTDIR = 'path_to_files_with_svhngs_collection'
def filter_checkpoints(weights, dataframe,
target='test_accuracy',
stage='final', binarize=True):
"""Take one checkpoint per run and do some pre-processing.
Args:
weights: numpy array of shape (num_runs, num_weights)
dataframe: pandas DataFrame which has num_runs rows. First 4 columns should
contain test_accuracy, test_loss, train_accuracy, train_loss respectively.
target: string, what to use as an output
stage: flag defining which checkpoint out of potentially many we will take
for the run.
binarize: Do we want to binarize the categorical hyperparams?
Returns:
tuple (weights_new, metrics, hyperparams, ckpts), where
weights_new is a numpy array of shape (num_remaining_ckpts, num_weights),
metrics is a numpy array of shape (num_remaining_ckpts, num_metrics) with
num_metric being the length of DATAFRAME_METRIC_COLS,
hyperparams is a pandas DataFrame of num_remaining_ckpts rows and columns
listed in DATAFRAME_CONFIG_COLS.
ckpts is an instance of pandas Index, keeping filenames of the checkpoints
All the num_remaining_ckpts rows correspond to one checkpoint out of each
run we had.
"""
assert target in DATAFRAME_METRIC_COLS, 'unknown target'
ids_to_take = []
# Keep in mind that the rows of the DataFrame were sorted according to ckpt
# Fetch the unit id corresponding to the ckpt of the first row
current_uid = dataframe.axes[0][0].split('/')[-2] # get the unit id
steps = []
for i in range(len(dataframe.axes[0])):
# Fetch the new unit id
ckpt = dataframe.axes[0][i]
parts = ckpt.split('/')
if parts[-2] == current_uid:
steps.append(int(parts[-1].split('-')[-1]))
else:
# We need to process the previous unit
# and choose which ckpt to take
steps_sort = sorted(steps)
target_step = -1
if stage == 'final':
target_step = steps_sort[-1]
elif stage == 'early':
target_step = steps_sort[0]
else: # middle
target_step = steps_sort[int(len(steps) / 2)]
offset = [j for (j, el) in enumerate(steps) if el == target_step][0]
# Take the DataFrame row with the corresponding row id
ids_to_take.append(i - len(steps) + offset)
current_uid = parts[-2]
steps = [int(parts[-1].split('-')[-1])]
# Fetch the hyperparameters of the corresponding checkpoints
hyperparams = dataframe[DATAFRAME_CONFIG_COLS]
hyperparams = hyperparams.iloc[ids_to_take]
if binarize:
# Binarize categorical features
hyperparams = pd.get_dummies(
hyperparams,
columns=CATEGORICAL_CONFIG_PARAMS,
prefix=CATEGORICAL_CONFIG_PARAMS_PREFIX)
else:
# Make the categorical features have pandas type "category"
# Then LGBM can use those as categorical
hyperparams.is_copy = False
for col in CATEGORICAL_CONFIG_PARAMS:
hyperparams[col] = hyperparams[col].astype('category')
# Fetch the file paths of the corresponding checkpoints
ckpts = dataframe.axes[0][ids_to_take]
return (weights[ids_to_take, :],
dataframe[DATAFRAME_METRIC_COLS].values[ids_to_take, :].astype(
np.float32),
hyperparams,
ckpts)
def build_fcn(n_layers, n_hidden, n_outputs, dropout_rate, activation,
w_regularizer, w_init, b_init, last_activation='softmax'):
"""Fully connected deep neural network."""
model = keras.Sequential()
model.add(keras.layers.Flatten())
for _ in range(n_layers):
model.add(
keras.layers.Dense(
n_hidden,
activation=activation,
kernel_regularizer=w_regularizer,
kernel_initializer=w_init,
bias_initializer=b_init))
if dropout_rate > 0.0:
model.add(keras.layers.Dropout(dropout_rate))
if n_layers > 0:
model.add(keras.layers.Dense(n_outputs, activation=last_activation))
else:
model.add(keras.layers.Dense(
n_outputs,
activation='sigmoid',
kernel_regularizer=w_regularizer,
kernel_initializer=w_init,
bias_initializer=b_init))
return model
def extract_summary_features(w, qts=(0, 25, 50, 75, 100)):
"""Extract various statistics from the flat vector w."""
features = np.percentile(w, qts)
features = np.append(features, [np.std(w), np.mean(w)])
return features
def extract_per_layer_features(w, qts=None, layers=(0, 1, 2, 3)):
"""Extract per-layer statistics from the weight vector and concatenate."""
# Indices of the location of biases/kernels in the flattened vector
all_boundaries = {
0: [(0, 16), (16, 160)],
1: [(160, 176), (176, 2480)],
2: [(2480, 2496), (2496, 4800)],
3: [(4800, 4810), (4810, 4970)]}
boundaries = []
for layer in layers:
boundaries += all_boundaries[layer]
if not qts:
features = [extract_summary_features(w[a:b]) for (a, b) in boundaries]
else:
features = [extract_summary_features(w[a:b], qts) for (a, b) in boundaries]
all_features = np.concatenate(features)
return all_features
"""
Explanation: Copyright 2020 The dnn-predict-accuracy Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
README
This notebook contains code for training predictors of DNN accuracy.
Contents:
(1) Loading the Small CNN Zoo dataset
(2) Figure 2 of the paper
(3) Examples of training Logit-Linear / GBM / DNN predictors
(4) Transfer of predictors across CNN collections
(5) Various visualizations of CNN collections
Code dependencies:
Light-GBM package
End of explanation
"""
all_dirs = [MNIST_OUTDIR, FMNIST_OUTDIR, CIFAR_OUTDIR, SVHN_OUTDIR]
weights = {'mnist': None,
'fashion_mnist': None,
'cifar10': None,
'svhn_cropped': None}
metrics = {'mnist': None,
'fashion_mnist': None,
'cifar10': None,
'svhn_cropped': None}
for (dirname, dataname) in zip(
all_dirs, ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']):
print('Loading %s' % dataname)
with gfile.GFile(os.path.join(dirname, "all_weights.npy"), "rb") as f:
# Weights of the trained models
weights[dataname] = np.load(f)
with gfile.GFile(os.path.join(dirname, "all_metrics.csv")) as f:
# pandas DataFrame with metrics
metrics[dataname] = pd.read_csv(f, index_col=0)
"""
Explanation: 1. Loading the Small CNN Zoo dataset
The following code loads the dataset (trained weights from .npy files and all the relevant metrics, including accuracy, from .csv files).
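An optional sanity check after loading is to print the array shapes (a sketch; the exact shapes depend on the released files):
```
for name in weights:
    if weights[name] is not None:
        print(name, weights[name].shape, metrics[name].shape)
```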
End of explanation
"""
weights_train = {}
weights_test = {}
configs_train = {}
configs_test = {}
outputs_train = {}
outputs_test = {}
for dataset in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
# Take one checkpoint per each run
# If using GBM as predictor, set binarize=False
weights_flt, metrics_flt, configs_flt, ckpts = filter_checkpoints(
weights[dataset], metrics[dataset], binarize=True)
# Filter out DNNs with NaNs and Inf in the weights
idx_valid = (np.isfinite(weights_flt).mean(1) == 1.0)
inputs = np.asarray(weights_flt[idx_valid], dtype=np.float32)
outputs = np.asarray(metrics_flt[idx_valid], dtype=np.float32)
configs = configs_flt.iloc[idx_valid]
ckpts = ckpts[idx_valid]
# Shuffle and split the data
random_idx = list(range(inputs.shape[0]))
np.random.shuffle(random_idx)
weights_train[dataset], weights_test[dataset] = (
inputs[random_idx[:TRAIN_SIZE]], inputs[random_idx[TRAIN_SIZE:]])
outputs_train[dataset], outputs_test[dataset] = (
1. * outputs[random_idx[:TRAIN_SIZE]],
1. * outputs[random_idx[TRAIN_SIZE:]])
configs_train[dataset], configs_test[dataset] = (
configs.iloc[random_idx[:TRAIN_SIZE]],
configs.iloc[random_idx[TRAIN_SIZE:]])
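# Illustrative sanity check: report how many checkpoints remain per collection
# after filtering and the train / test split.
for name in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
    print(name, weights_train[name].shape, weights_test[name].shape)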
"""
Explanation: Next, the code filters the dataset, keeping only the checkpoints corresponding to 18 epochs and discarding runs whose weights contain NaN or Inf values (numerical instabilities). Finally, it shuffles the data and performs the train / test splits.
End of explanation
"""
plt.figure(figsize = (16, 8))
pic_id = 0
for dataset in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
pic_id += 1
sp = plt.subplot(2, 4, pic_id)
outputs = outputs_train[dataset]
if dataset == 'mnist':
plt.title('MNIST', fontsize=24)
if dataset == 'fashion_mnist':
plt.title('Fashion MNIST', fontsize=24)
if dataset == 'cifar10':
plt.title('CIFAR10-GS', fontsize=24)
if dataset == 'svhn_cropped':
plt.title('SVHN-GS', fontsize=24)
# 1. test accuracy hist plots
sns.distplot(np.array(outputs[:, 0]), bins=15, kde=False, color='green')
plt.xlim((0.0, 1.0))
sp.axes.get_xaxis().set_ticklabels([])
sp.axes.get_yaxis().set_ticklabels([])
pic_id += 4
sp = plt.subplot(2, 4, pic_id)
# 2. test / train accuracy scatter plots
NUM_POINTS = 1000
random_idx = list(range(len(outputs)))
np.random.shuffle(random_idx)
plt.plot([0.0, 1.0], [0.0, 1.0], 'r--')
sns.scatterplot(np.array(outputs[random_idx[:NUM_POINTS], 0]), # test acc
np.array(outputs[random_idx[:NUM_POINTS], 2]), # train acc
s=30
)
if pic_id == 5:
plt.ylabel('Train accuracy', fontsize=22)
sp.axes.get_yaxis().set_ticklabels([0.0, 0.2, .4, .6, .8, 1.])
else:
sp.axes.get_yaxis().set_ticklabels([])
plt.xlim((0.0, 1.0))
plt.ylim((0.0, 1.0))
sp.axes.get_xaxis().set_ticks([0.0, 0.2, .4, .6, .8, 1.])
sp.axes.tick_params(axis='both', labelsize=18)
plt.xlabel('Test accuracy', fontsize=22)
pic_id -= 4
plt.tight_layout()
"""
Explanation: 2. Figure 2 of the paper
Next we plot the distribution of CNNs from the 4 collections in the Small CNN Zoo according to their train and test accuracies.
End of explanation
"""
with gfile.GFile(os.path.join(CONFIGS_PATH_BASE, 'best_configs.json'), 'r') as file:
best_configs = json.load(file)
"""
Explanation: 3. Examples of training Logit-Linear / GBM / DNN predictors
Next we train 3 models on all 4 CNN collections with the best hyperparameter configurations we found during our studies (documented in Table 2 and Section 4 of the paper).
First, we load the best hyperparameter configurations we found.
The file best_configs.json contains a list.
Each entry of that list corresponds to a single hyperparameter configuration.
It consists of:
(1) name of the CNN collection (mnist/fashion mnist/cifar10/svhn)
(2) predictor type (linear/dnn/lgbm)
(3) type of inputs, (refer to Table 2)
(4) the MSE value you should obtain when training with these settings,
(5) dictionary of "parameter name"-> "parameter value" for the given type of predictor.
End of explanation
"""
# Take the best config we found
config = [el[-1] for el in best_configs if
el[0] == 'cifar10' and
el[1] == 'lgbm' and
el[2] == 'wstats-perlayer'][0]
# Pre-process the weights
train_x = np.apply_along_axis(
extract_per_layer_features, 1,
weights_train['cifar10'],
qts=None,
layers=(0, 1, 2, 3))
test_x = np.apply_along_axis(
extract_per_layer_features, 1,
weights_test['cifar10'],
qts=None,
layers=(0, 1, 2, 3))
# Get the target values
train_y, test_y = outputs_train['cifar10'][:, 0], outputs_test['cifar10'][:, 0]
# Define the GBM model
lgbm_model = lgb.LGBMRegressor(
num_leaves=config['num_leaves'],
max_depth=config['max_depth'],
learning_rate=config['learning_rate'],
max_bin=int(config['max_bin']),
min_child_weight=config['min_child_weight'],
reg_lambda=config['reg_lambda'],
reg_alpha=config['reg_alpha'],
subsample=config['subsample'],
subsample_freq=1, # it means always subsample
colsample_bytree=config['colsample_bytree'],
n_estimators=2000,
first_metric_only=True
)
# Train the GBM model;
# Early stopping will be based on rmse of test set
eval_metric = ['rmse', 'l1']
eval_set = [(test_x, test_y)]
lgbm_model.fit(train_x, train_y, verbose=100,
early_stopping_rounds=500,
eval_metric=eval_metric,
eval_set=eval_set,
eval_names=['test'])
# Evaluate the GBM model
assert hasattr(lgbm_model, 'best_iteration_')
# Choose the step which had the best rmse on the test set
best_iter = lgbm_model.best_iteration_ - 1
lgbm_history = lgbm_model.evals_result_
mse = lgbm_history['test']['rmse'][best_iter] ** 2.
mad = lgbm_history['test']['l1'][best_iter]
var = np.mean((test_y - np.mean(test_y)) ** 2.)
r2 = 1. - mse / var
print('Test MSE = ', mse)
print('Test MAD = ', mad)
print('Test R2 = ', r2)
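# Optional extra diagnostic (illustrative): the rank correlation between the
# GBM predictions and the true test accuracies, i.e. the same Kendall tau
# statistic used for the transfer plots in Section 4.
from scipy import stats
tau = stats.kendalltau(test_y, lgbm_model.predict(test_x))[0]
print('Test Kendall tau = ', tau)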
"""
Explanation: 3.1 Training GBM predictors
The GBM code below requires the lightgbm package.
This is an example of training a GBM on the CIFAR10-GS CNN collection using per-layer weight statistics as inputs.
End of explanation
"""
# Take the best config we found
config = [el[-1] for el in best_configs if
el[0] == 'mnist' and
el[1] == 'dnn' and
el[2] == 'weights'][0]
train_x, test_x = weights_train['mnist'], weights_test['mnist']
train_y, test_y = outputs_train['mnist'][:, 0], outputs_test['mnist'][:, 0]
# Get the optimizer, initializers, and regularizers
optimizer = keras.optimizers.get(config['optimizer_name'])
optimizer.learning_rate = config['learning_rate']
w_init = keras.initializers.get(config['w_init_name'])
if config['w_init_name'].lower() in ['truncatednormal', 'randomnormal']:
w_init.stddev = config['init_stddev']
b_init = keras.initializers.get('zeros')
w_reg = (keras.regularizers.l2(config['l2_penalty'])
if config['l2_penalty'] > 0 else None)
# Get the fully connected DNN architecture
dnn_model = build_fcn(int(config['n_layers']),
int(config['n_hiddens']),
1, # number of outputs
config['dropout_rate'],
'relu',
w_reg, w_init, b_init,
'sigmoid') # Last activation
dnn_model.compile(
optimizer=optimizer,
loss='mean_squared_error',
metrics=['mse', 'mae'])
# Train the model
dnn_model.fit(
train_x, train_y,
batch_size=int(config['batch_size']),
epochs=300,
validation_data=(test_x, test_y),
verbose=1,
callbacks=[keras.callbacks.EarlyStopping(
monitor='val_loss',
min_delta=0,
patience=10,
verbose=0,
mode='auto',
baseline=None,
restore_best_weights=False)]
)
# Evaluate the model
eval_train = dnn_model.evaluate(train_x, train_y, batch_size=128, verbose=0)
eval_test = dnn_model.evaluate(test_x, test_y, batch_size=128, verbose=0)
assert dnn_model.metrics_names[1] == 'mean_squared_error'
assert dnn_model.metrics_names[2] == 'mean_absolute_error'
mse = eval_test[1]
var = np.mean((test_y - np.mean(test_y)) ** 2.)
r2 = 1. - mse / var
print('Test MSE = ', mse)
print('Test MAD = ', eval_test[2])
print('Test R2 = ', r2)
"""
Explanation: 3.2 Training DNN predictors
This is an example of training a DNN on the MNIST CNN collection using all of the flattened weights as inputs.
End of explanation
"""
# Take the best config we found
config = [el[-1] for el in best_configs if
el[0] == 'cifar10' and
el[1] == 'linear' and
el[2] == 'hyper'][0]
# Turn DataFrames to numpy arrays.
# Since we used "binarize=True" when calling filter_checkpoints all the
# categorical columns were binarized.
train_x = configs_train['cifar10'].values.astype(np.float32)
test_x = configs_test['cifar10'].values.astype(np.float32)
train_y, test_y = outputs_train['cifar10'][:, 0], outputs_test['cifar10'][:, 0]
# Get the optimizer, initializers, and regularizers
optimizer = keras.optimizers.get(config['optimizer_name'])
optimizer.learning_rate = config['learning_rate']
w_init = keras.initializers.get(config['w_init_name'])
if config['w_init_name'].lower() in ['truncatednormal', 'randomnormal']:
w_init.stddev = config['init_stddev']
b_init = keras.initializers.get('zeros')
w_reg = (keras.regularizers.l2(config['l2_penalty'])
if config['l2_penalty'] > 0 else None)
# Get the linear architecture (DNN with 0 layers)
dnn_model = build_fcn(int(config['n_layers']),
int(config['n_hiddens']),
1, # number of outputs
0.0, # Dropout is not used (n_layers is 0 for the linear model)
'relu',
w_reg, w_init, b_init,
'sigmoid') # Last activation
dnn_model.compile(
optimizer=optimizer,
loss='mean_squared_error',
metrics=['mse', 'mae'])
# Train the model
dnn_model.fit(
train_x, train_y,
batch_size=int(config['batch_size']),
epochs=300,
validation_data=(test_x, test_y),
verbose=1,
callbacks=[keras.callbacks.EarlyStopping(
monitor='val_loss',
min_delta=0,
patience=10,
verbose=0,
mode='auto',
baseline=None,
restore_best_weights=False)]
)
# Evaluate the model
eval_train = dnn_model.evaluate(train_x, train_y, batch_size=128, verbose=0)
eval_test = dnn_model.evaluate(test_x, test_y, batch_size=128, verbose=0)
assert dnn_model.metrics_names[1] == 'mean_squared_error'
assert dnn_model.metrics_names[2] == 'mean_absolute_error'
mse = eval_test[1]
var = np.mean((test_y - np.mean(test_y)) ** 2.)
r2 = 1. - mse / var
print('Test MSE = ', mse)
print('Test MAD = ', eval_test[2])
print('Test R2 = ', r2)
"""
Explanation: 3.3 Training Logit-Linear predictors
This is an example of training a Logit-Linear model on the CIFAR10 CNN collection using the hyperparameters as inputs.
End of explanation
"""
transfer_results = {}
for dataset in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
print('Training on %s' % dataset)
transfer_results[dataset] = {}
train_x = weights_train[dataset]
test_x = weights_test[dataset]
train_y = outputs_train[dataset][:, 0]
test_y = outputs_test[dataset][:, 0]
# Pre-process the weights by taking the statistics across layers
train_x = np.apply_along_axis(
extract_per_layer_features, 1,
train_x, qts=None, layers=(0, 1, 2, 3))
test_x = np.apply_along_axis(
extract_per_layer_features, 1,
test_x, qts=None, layers=(0, 1, 2, 3))
# Take the best config we found
config = [el[-1] for el in best_configs if
el[0] == dataset and
el[1] == 'lgbm' and
el[2] == 'wstats-perlayer'][0]
lgbm_model = lgb.LGBMRegressor(
num_leaves=config['num_leaves'],
max_depth=config['max_depth'],
learning_rate=config['learning_rate'],
max_bin=int(config['max_bin']),
min_child_weight=config['min_child_weight'],
reg_lambda=config['reg_lambda'],
reg_alpha=config['reg_alpha'],
subsample=config['subsample'],
subsample_freq=1, # Always subsample
colsample_bytree=config['colsample_bytree'],
n_estimators=4000,
first_metric_only=True,
)
# Train the GBM model
lgbm_model.fit(
train_x,
train_y,
verbose=100,
# verbose=False,
early_stopping_rounds=500,
eval_metric=['rmse', 'l1'],
eval_set=[(test_x, test_y)],
eval_names=['test'])
# Evaluate on all 4 CNN collections
for transfer_to in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
print('Evaluating on %s' % transfer_to)
# Take the test split of the dataset
transfer_x = weights_test[transfer_to]
transfer_x = np.apply_along_axis(
extract_per_layer_features, 1,
transfer_x, qts=None, layers=(0, 1, 2, 3))
y_hat = lgbm_model.predict(transfer_x)
transfer_results[dataset][transfer_to] = y_hat
"""
Explanation: 4. Figure 4: Transfer across datasets
Train a GBM predictor on each of the 4 CNN collections, using the per-layer weight statistics as inputs. Then evaluate each trained predictor on the test split of every collection (without fine-tuning) and store all predictions.
End of explanation
"""
plt.figure(figsize = (15, 15))
pic_id = 0
for dataset in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
for transfer_to in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
pic_id += 1
sp = plt.subplot(4, 4, pic_id)
# Take true labels
y_true = outputs_test[transfer_to][:, 0]
# Take the predictions of the model
y_hat = transfer_results[dataset][transfer_to]
plt.plot([0.01, .99], [0.01, .99], 'r--', linewidth=2)
sns.scatterplot(y_true, y_hat)
# Compute the Kendall's tau coefficient
tau = stats.kendalltau(y_true, y_hat)[0]
plt.text(0.05, 0.9, r"$\tau=%.3f$" % tau, fontsize=25)
plt.xlim((0.0, 1.0))
plt.ylim((0.0, 1.0))
if pic_id % 4 != 1:
sp.axes.get_yaxis().set_ticklabels([])
else:
plt.ylabel('Predictions', fontsize=22)
sp.axes.tick_params(axis='both', labelsize=15)
if pic_id < 13:
sp.axes.get_xaxis().set_ticklabels([])
else:
plt.xlabel('Test accuracy', fontsize=22)
sp.axes.tick_params(axis='both', labelsize=15)
if pic_id == 1:
plt.title('MNIST', fontsize=22)
if pic_id == 2:
plt.title('Fashion-MNIST', fontsize=22)
if pic_id == 3:
plt.title('CIFAR10-GS', fontsize=22)
if pic_id == 4:
plt.title('SVHN-GS', fontsize=22)
plt.tight_layout()
"""
Explanation: And plot everything
End of explanation
"""
# Take the per-layer weights stats for the train split of CIFAR10-GS collection
per_layer_stats = np.apply_along_axis(
extract_per_layer_features, 1,
weights_train['cifar10'])
train_test_accuracy = outputs_train['cifar10'][:, 0]
# Positions of various stats
b0min = 0 # min of the first layer
b0max = 4 # max of the first layer
bnmin = 6*7 + 0 # min of the last layer
bnmax = 6*7 + 4 # max of the last layer
x = per_layer_stats[:,b0max] - per_layer_stats[:,b0min]
y = per_layer_stats[:,bnmax] - per_layer_stats[:,bnmin]
plt.figure(figsize=(10,8))
plt.scatter(x, y, s=15,
c=train_test_accuracy,
cmap="jet",
vmin=0.1,
vmax=0.54,
linewidths=0)
plt.yscale("log")
plt.xscale("log")
plt.ylim(0.1, 10)
plt.xlim(0.1, 10)
plt.xlabel("Bias range, first layer", fontsize=22)
plt.ylabel("Bias range, final layer", fontsize=22)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=18)
plt.tight_layout()
"""
Explanation: 5. Figure 3: various 2d plots based on subsets of weights statistics
Take the per-layer weight statistics for the CIFAR10 CNN collection and plot the bias range of the first layer against the bias range of the final layer, colored by test accuracy.
End of explanation
"""
|
catalyst-cooperative/pudl
|
test/validate/notebooks/validate_gf_eia923.ipynb
|
mit
|
%load_ext autoreload
%autoreload 2
import sys
import pandas as pd
import sqlalchemy as sa
import pudl
import warnings
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream=sys.stdout)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
logger.handlers = [handler]
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
plt.style.use('ggplot')
mpl.rcParams['figure.figsize'] = (10,4)
mpl.rcParams['figure.dpi'] = 150
pd.options.display.max_columns = 56
pudl_settings = pudl.workspace.setup.get_defaults()
ferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])
pudl_engine = sa.create_engine(pudl_settings['pudl_db'])
pudl_settings
"""
Explanation: Validation of gf_eia923
This notebook runs sanity checks on the Generation Fuel data that are reported in EIA Form 923. These are the same tests which are run by the gf_eia923 validation tests by PyTest. The notebook and visualizations are meant to be used as a diagnostic tool, to help understand what's wrong when the PyTest based data validations fail for some reason.
End of explanation
"""
pudl_out_orig = pudl.output.pudltabl.PudlTabl(pudl_engine, freq=None)
gf_eia923_orig = pudl_out_orig.gf_eia923()
"""
Explanation: Get the original EIA 923 data
First we pull the original (post-ETL) EIA 923 data out of the database. We will use the values in this dataset as a baseline for checking that latter aggregated data and derived values remain valid. We will also eyeball these values here to make sure they are within the expected range. This may take a minute or two depending on the speed of your machine.
End of explanation
"""
gf_eia923_orig.sample(10)
"""
Explanation: Validation Against Fixed Bounds
Some of the variables reported in this table have a fixed range of reasonable values, like the heat content per unit of a given fuel type. These variables can be tested for validity against external standards directly. In general, we have two kinds of tests in this section:
* Tails: are the extreme values too extreme? Typically, this is at the 5% and 95% level, but depending on the distribution, sometimes other thresholds are used.
* Middle: Is the central value of the distribution where it should be?
Fields that need checking:
These are all contained in the frc_eia923 table data validations, and those should just be re-used if possible, though unfortunately the field names are not all the same between the two tables.
* fuel_mmbtu_per_unit (BIT, SUB, LIG, coal, DFO, oil, gas)
End of explanation
"""
pudl.validate.plot_vs_bounds(gf_eia923_orig, pudl.validate.gf_eia923_coal_heat_content)
"""
Explanation: Coal Heat Content
End of explanation
"""
pudl.validate.plot_vs_bounds(gf_eia923_orig, pudl.validate.gf_eia923_oil_heat_content)
"""
Explanation: Oil Heat Content
End of explanation
"""
pudl.validate.plot_vs_bounds(gf_eia923_orig, pudl.validate.gf_eia923_gas_heat_content)
"""
Explanation: Gas Heat Content
End of explanation
"""
pudl_out_month = pudl.output.pudltabl.PudlTabl(pudl_engine, freq="MS")
gf_eia923_month = pudl_out_month.gf_eia923()
pudl.validate.plot_vs_agg(gf_eia923_orig, gf_eia923_month, pudl.validate.gf_eia923_agg)
"""
Explanation: Validate Monthly Aggregation
It's possible that the distribution will change as a function of aggregation, or we might make an error in the aggregation process. These tests check that a collection of quantiles for the original and the data aggregated by month have internally consistent values.
End of explanation
"""
pudl_out_year = pudl.output.pudltabl.PudlTabl(pudl_engine, freq="AS")
gf_eia923_year = pudl_out_year.gf_eia923()
pudl.validate.plot_vs_agg(gf_eia923_orig, gf_eia923_year, pudl.validate.gf_eia923_agg)
"""
Explanation: Validate Annual Aggregation
It's possible that the distribution will change as a function of aggregation, or we might make an error in the aggregation process. These tests check that a collection of quantiles for the original and the data aggregated by year have internally consistent values.
End of explanation
"""
|
neeasthana/ML-SQL
|
ML-SQL/ML-SQL-initialDemo.ipynb
|
gpl-3.0
|
#Libraries
#from pyparsing import Word, Literal, alphas, Optional, OneOrMore, Group, Or, Combine, oneOf
from pyparsing import *
import string
import sys
import pandas as pd
from sklearn import svm
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
"""
Explanation: ML-SQL language (1st-take)
Authors
Written by: Neeraj Asthana (under Professor Robert Brunner)
University of Illinois at Urbana-Champaign
Summer 2016
Acknowledgements
Followed Tutorial at: http://www.onlamp.com/lpt/a/6435
Description
This notebook is meant to experiment with constructs for the ML-SQL language. The goal is to be able to understand ML-SQL syntax and port commands to actionable directives in Python.
End of explanation
"""
letters = string.ascii_letters
punctuation = string.punctuation
numbers = string.digits
whitespace = string.whitespace
#combinations
everything = letters + punctuation + numbers
everythingWOQuotes = everything.replace("\"", "").replace("'", "")
#Booleans
bools = oneOf(["True", "False"])  # match either boolean literal on its own
#Parenthesis and Quotes
openParen = Literal("(").suppress()
closeParen = Literal(")").suppress()
Quote = Literal('"').suppress()
#includes every combination except whitespace
everything
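# Quick illustration: a Word built from `everything` matches one contiguous
# token with no whitespace, which is why READ below cannot handle file names
# containing spaces.
print(Word(everything).parseString("/home/ubuntu/iris.data"))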
"""
Explanation: Grammar Definition
Literals and valid symbols that are allowed in the ML-SQL language
End of explanation
"""
filename = Word(everything).setResultsName("filename")
#define so that there can be multiple versions of READ
readKeyword = oneOf(["Read", "READ"]).suppress()
#Define Read Optionals
#header
headerLiteral = (Literal("header") + Literal("=")).suppress()
header = Optional(headerLiteral + Or(bools).setResultsName("header"), default = "False" )
#separator
separatorLiteral = (Or([Literal("sep"), Literal("separator")]) + Literal("=")).suppress()
definesep = Quote + Word(everythingWOQuotes + whitespace).setResultsName("sep") + Quote
separator = Optional(separatorLiteral + definesep, default = ",")
#Compose Read Optionals
readOptions = Optional(openParen + separator + header + closeParen)
read = readKeyword + filename + readOptions
readTest = 'READ /home/ubuntu/notebooks/ML-SQL/Classification/iris.data (sep="," header=False)'
readTestResult = read.parseString(readTest)
filename = readTestResult.filename
header = readTestResult.header
sep = readTestResult.sep
#Function to convert the parsed "True"/"False" string into the value expected by pandas'
#read_csv header argument ("False" maps to None, meaning the file has no header row)
def str_to_bool(s):
if s == 'True':
return True
elif s == 'False':
return None
else:
raise ValueError ("Cannot lower value " + s + " to a boolean value")
#read parameters from parsed statement and read the file
f = pd.read_csv(filename, sep = sep, header = str_to_bool(header))
f.head()
"""
Explanation: READ
Read files into memory by specifying the file name, header presence, and the separator.
Issues:
- Read cannot handle spaces in file names
- Handling names of columns
End of explanation
"""
#define so that there can be multiple versions of Split
splitKeyword = oneOf(["Split", "SPLIT"]).suppress()
#Phrases used to organize splits
trainPhrase = (Literal("train") + Literal("=")).suppress()
testPhrase = (Literal("test") + Literal("=")).suppress()
valPhrase = (Literal("validation") + Literal("=")).suppress()
#train, test, validation split values
trainS = Combine(Literal(".") + Word(numbers)).setResultsName("train_split")
testS = Combine(Literal(".") + Word(numbers)).setResultsName("test_split")
valS = Combine(Literal(".") + Word(numbers)).setResultsName("validation_split")
#Compose phrases and values together
training = trainPhrase + trainS
testing = testPhrase + testS
val = valPhrase + valS
#Creating Optional Split phrase
ocomma = Optional(",").suppress()
split = Optional(splitKeyword + openParen + training + ocomma + testing + ocomma + val + closeParen)
#Combining READ and SPLIT keywords into one clause for combined use
read_split = read + split
#Split test
splitTest = "SPLIT (train = .8, test = .2, validation = .0)"
print(split.parseString(splitTest))
#Read with Split test
read_split_test = readTest + " "+ splitTest
print(read_split.parseString(read_split_test))
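# Illustrative: the named results defined above can also be pulled out of the
# combined parse as attributes.
parsed = read_split.parseString(read_split_test)
print(parsed.filename, parsed.train_split, parsed.test_split, parsed.validation_split)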
"""
Explanation: SPLIT
Splits the dataset into training, testing, and validation sets. Give 3 non-negative decimals that sum to 1 to specify the relative sizes of these sets.
End of explanation
"""
#Algorithm Definitions
algoPhrase = (Literal ("algorithm") + Literal("=")).suppress()
svmPhrase = oneOf(["svm", "SVM"])
logPhrase = oneOf(["logistic", "Logistic", "LOGISTIC"])
#Options for classifiers
#Compositions
svm = svmPhrase + Optional(openParen + closeParen)
log = logPhrase + Optional(openParen + closeParen)
algo = algoPhrase + MatchFirst([svm, log]).setResultsName("algorithm")
#define so that there can be multiple versions of Classify
classifyKeyword = oneOf(["Classify", "CLASSIFY"]).suppress()
#Phrases to organize predictor and label column numbers
predPhrase = (Literal("predictors") + Literal("=")).suppress()
labelPhrase = (Literal("label") + Literal("=")).suppress()
#define predictor and label column numbers
predictorsDef = OneOrMore(Word(numbers) + ocomma).setResultsName("predictors")
labelDef = Word(numbers).setResultsName("label")
#combine phrases with found column numbers
preds = predPhrase + openParen + predictorsDef + closeParen
labels = labelPhrase + labelDef
classify = Optional(classifyKeyword + openParen + preds + ocomma + labels + ocomma + algo + closeParen)
classifyTest = "CLASSIFY (predictors = (1,2,3,4), label = 5, algorithm = SVM)"
print(classify.parseString(classifyTest))
"""
Explanation: Classify
Define an algorithm to perform a classification task on the data.
Supported classifiers: SVM, Logistic Regression
End of explanation
"""
#Algorithm Definitions
simplePhrase = oneOf(["simple", "SIMPLE", "Simple"])
lassoPhrase = oneOf(["lasso", "Lasso", "LASSO"])
ridgePhrase = oneOf(["ridge", "Ridge", "RIDGE"])
#Options for classifiers
#Compositions
simple = simplePhrase + Optional(openParen + closeParen)
lasso = lassoPhrase + Optional(openParen + closeParen)
ridge = ridgePhrase + Optional(openParen + closeParen)
algo = algoPhrase + MatchFirst([simple, lasso, ridge]).setResultsName("algorithm")
#define so that there can be multiple versions of Regression
regressionKeyword = oneOf(["Regression", "REGRESSION"]).suppress()
#Phrases to organize predictor and label column numbers
predPhrase = (Literal("predictors") + Literal("=")).suppress()
labelPhrase = (Literal("label") + Literal("=")).suppress()
#define predictor and label column numbers
predictorsDef = OneOrMore(Word(numbers) + ocomma).setResultsName("predictors")
labelDef = Word(numbers).setResultsName("label")
#combine phrases with found column numbers
preds = predPhrase + openParen + predictorsDef + closeParen
labels = labelPhrase + labelDef
regression = Optional(regressionKeyword + openParen + preds + ocomma + labels + ocomma + algo + closeParen)
regressionTest = "REGRESSION (predictors = (1,2,3,4), label = 5, algorithm = simple)"
print(regression.parseString(regressionTest))
"""
Explanation: Regression
Define an algorithm to perform a regression task on the data.
Supported regressors: Simple Linear Regression, Lasso, Ridge
End of explanation
"""
read_split_classify = read + split + classify
read_split_classify_regression = read + split + classify + regression
query1 = readTest + " " + splitTest + " " + classifyTest
print(query1)
#define a pipeline to accomplish all of the data tasks we envision
result1 = read_split_classify.parseString(query1)
#Extract relevant features from the query
filename1 = result1.filename
header1 = result1.header
sep1 = result1.sep
train1 = result1.train_split
test1 = result1.test_split
predictors1 = result1.predictors
label1 = result1.label
algo1 = str(result1.algorithm)
#Preform classification dataflow
#read file
file1 = pd.read_csv(filename1, header = str_to_bool(header1), sep = sep1)
#predictors and labels
pred_cols = [int(x) - 1 for x in predictors1]
label_col = int(label1) - 1
X = file1.ix[:,pred_cols]
y = file1.ix[:,label_col]
#Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=float(train1), test_size=float(test1))
#choose classification algorithm
if algo1.lower() == "svm":
clf = svm.SVC()
elif algo1.lower() == "logistic":
clf = LogisticRegression()
#Train model
clf.fit(X_train, y_train)
#Performance on test data
clf.score(X_test, y_test)
"""
Explanation: Examples
I combine the 3 keywords READ, SPLIT, and CLASSIFY and show a basic example of a classification task on the Iris data set
End of explanation
"""
|
ianhamilton117/deep-learning
|
sentiment-rnn/Sentiment_RNN.ipynb
|
mit
|
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
"""
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
from collections import Counter
# Create your dictionary that maps vocab words to integers here
word_counts = Counter(words)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(sorted_vocab, 1)} #Start at 1 because we'll be using 0 for padding
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = [[vocab_to_int[word] for word in review.split()] for review in reviews]
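# Quick sanity check (illustrative): vocabulary size, plus a round trip of the
# first few encoded words back to text.
print("Vocabulary size:", len(vocab_to_int))
int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}
print(' '.join(int_to_vocab[ii] for ii in reviews_ints[0][:10]))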
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = [1 if word == "positive" else 0 for word in labels.split('\n')]
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: If you built labels correctly, you should see the next output.
End of explanation
"""
# Filter out that review with 0 length
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
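# Quick check (illustrative): no zero-length reviews remain, and the reviews
# and labels stay aligned after filtering.
assert min(len(review) for review in reviews_ints) > 0
assert len(reviews_ints) == len(labels)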
"""
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
"""
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, review in enumerate(reviews_ints):
features[i, -len(review):] = np.array(review)[:seq_len]
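# Sanity checks (illustrative): every row is exactly seq_len long, and reviews
# shorter than seq_len end up left-padded with zeros.
assert features.shape == (len(reviews_ints), seq_len)
short = [ii for ii, review in enumerate(reviews_ints) if len(review) < seq_len]
assert all(features[ii, 0] == 0 for ii in short)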
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
features[:10,:100]
"""
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
"""
split_frac = 0.8
split_idx = int(len(features) * split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_split_idx = int(len(val_x) * 0.5)
val_x, test_x = val_x[:test_split_idx], val_x[test_split_idx:]
val_y, test_y = val_y[:test_split_idx], val_y[test_split_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 256
learning_rate = 0.001
"""
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
with tf.name_scope("inputs"):
inputs_ = tf.placeholder(tf.int32, [None, None], name="inputs")
with tf.name_scope("labels"):
labels_ = tf.placeholder(tf.int32, [None, None], name="labels")
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
with tf.device('/cpu:0'):
with tf.name_scope("embedding"):
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
tf.summary.histogram("word_embedding", embedding)
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
"""
with graph.as_default():
with tf.name_scope("RNN_layers"):
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
"""
with graph.as_default():
with tf.name_scope("output"):
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
"""
with graph.as_default():
with tf.name_scope("predictions"):
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
tf.summary.histogram("predictions", predictions)
with tf.name_scope("cost"):
cost = tf.losses.mean_squared_error(labels_, predictions)
tf.summary.scalar("cost", cost)
with tf.name_scope("train"):
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
"""
with graph.as_default():
with tf.name_scope("train"):
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.summary.scalar("accuracy", accuracy)
merged = tf.summary.merge_all()
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
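# Illustrative check of the batching helper: each training batch should have
# batch_size rows of seq_len word ids, with one label per row.
x_demo, y_demo = next(get_batches(train_x, train_y, batch_size))
print(x_demo.shape, y_demo.shape)  # expected: (256, 200) (256,)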
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
"""
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter("./logs/2/train", sess.graph)
test_writer = tf.summary.FileWriter("./logs/2/test")
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
summary, loss, state, _ = sess.run([merged, cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
train_writer.add_summary(summary, iteration)
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
summary, batch_acc, val_state = sess.run([merged, accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
test_writer.add_summary(summary, iteration)
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
"""
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
"""
Explanation: Testing
End of explanation
"""
|
google/empirical_calibration
|
notebooks/kang_schafer_population_mean.ipynb
|
apache-2.0
|
#@title Copyright 2019 The Empirical Calibration Authors.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""
Explanation: We illustrate empirical calibration on the Kang-Schafer simulation under both correctly specified and misspecified models, and benchmark the execution time.
End of explanation
"""
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import patsy
import seaborn as sns
sns.set_style('whitegrid')
%config InlineBackend.figure_format='retina'
from google.colab import widgets
# install and import ec
!pip install -q git+https://github.com/google/empirical_calibration
import empirical_calibration as ec
"""
Explanation: <table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/google/empirical_calibration/blob/master/notebooks/kang_schafer_population_mean.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/google/empirical_calibration/blob/master/notebooks/kang_schafer_population_mean.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Simulation
Imports
Selection Bias
Correctly Specified Model
Misspecified Model
Adding transformations of covariates
Adding extra covariates
Benchmark Execution Time
Simulation
The true set of covariates is generated independently and identically
distributed from the standard normal distribution
$$
(Z_1, Z_2, Z_3, Z_4) \sim N(0, \mathbf{I}_4).
$$
The outcome is generated as
$$
Y = 210 + 27.4 Z_1 + 13.7 Z_2 + 13.7 Z_3 + 13.7 Z_4 + \epsilon,
$$
where $\epsilon \sim N(0, 1)$.
The propensity score is defined as
$$
Pr(T = 1 | Z) = \text{expit}(-Z_1 + 0.5 Z_2 - 0.25 Z_3 - 0.1 Z_4),
$$
where $\text{expit}(x) = 1/(1+\text{exp}(-x)).$
This mechanism produces an equal-sized treated and control group
on average. Given the covariates, the outcome is independent of the treatment
assignment, thus the true ATT is zero. The overall outcome mean is 210. Due to
the treatment selection bias, the outcome mean for the treated group (200) is
lower than that of the control group (220).
The typical exercise is to examine the performance of an observational method
under both correctly specified and misspecified propensity score and/or outcome
regression models. Misspecification occurs when the following nonlinear
transformations $X_i$ are observed in place of the true covariates
\begin{align}
X_{i1} & = \exp(Z_{i1}/2), \\
X_{i2} & = Z_{i2} / (1 + \exp(Z_{i1})) + 10, \\
X_{i3} & = (Z_{i1} Z_{i3} / 25 + 0.6)^3, \\
X_{i4} & = (Z_{i2} + Z_{i4} + 20)^2.
\end{align}
For more context, see the paper.
Imports
End of explanation
"""
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=2000)
df = pd.DataFrame(
np.column_stack([
simulation.treatment, simulation.covariates,
simulation.transformed_covariates, simulation.outcome
]))
df.columns = [
"treatment", "z1", "z2", "z3", "z4", "x1", "x2", "x3", "x4", "outcome"
]
"""
Explanation: Selection Bias
We first simulate one dataset of size $2000$ to examine the selection bias.
End of explanation
"""
print(df.groupby("treatment").mean().T)
"""
Explanation: The treated group has a lower outcome mean than that of the control group, but the difference is not necessarily attributable to the causal effect of the treatment.
End of explanation
"""
def show_hist(name):
plt.figure(figsize=(6, 2))
plt.hist(
df.loc[df['treatment'] == 1, name],
bins=20,
alpha=0.4,
color='#00BFC4',
label='treated',
edgecolor='none')
plt.hist(
df.loc[df['treatment'] == 0, name],
bins=20,
alpha=0.4,
color='#F8766D',
label='control',
edgecolor='none')
plt.xlabel(name)
plt.legend(loc='upper left', prop={'size': 12})
plt.show()
tb = widgets.TabBar(['covariates', 'transformed_covariates', 'outcome'])
with tb.output_to('covariates'):
for name in ["z1", "z2", "z3", "z4"]:
show_hist(name)
with tb.output_to('transformed_covariates'):
for name in ["x1", "x2", "x3", "x4"]:
show_hist(name)
with tb.output_to('outcome'):
show_hist("outcome")
"""
Explanation: The distributions of covariates or transformed covariates don't completely overlap between the treated and control groups.
End of explanation
"""
def estimate_mean(formula):
simulation = ec.data.kang_schafer.Simulation(size=1000)
t = simulation.treatment
y = simulation.outcome
df = pd.DataFrame(
np.column_stack(
[simulation.covariates, simulation.transformed_covariates]))
df.columns = ["z1", "z2", "z3", "z4", "x1", "x2", "x3", "x4"]
x = patsy.dmatrix(formula, df, return_type="dataframe").values
weights = ec.from_formula(formula=formula,
df=df.loc[t==1],
target_df=df)[0]
return np.mean(np.sum(y[t == 1] * weights))
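# Illustrative balance check (a sketch, not part of the original study): the
# calibration weights for the treated units should reproduce the target
# covariate means almost exactly. Uses the same positional calling convention
# as the benchmarking section below.
sim_check = ec.data.kang_schafer.Simulation(size=1000)
treated = sim_check.covariates[sim_check.treatment == 1]
w_check = ec.maybe_exact_calibrate(treated, sim_check.covariates)[0]
print(w_check.dot(treated))               # weighted covariate means
print(sim_check.covariates.mean(axis=0))  # target covariate means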
def show_estimates(estimates):
estimates = pd.Series(estimates)
ax = estimates.hist(bins=20, alpha=0.8, edgecolor='none')
plt.axvline(estimates.mean(), linestyle='dashed', color='red')
# True population mean is 210.
print('bias is {}'.format(estimates.mean() - 210.))
print('rmse is {}'.format(np.sqrt(np.mean((estimates - 210.) ** 2))))
estimates_correct = [estimate_mean("-1 + z1 + z2 + z3 + z4") for i in range(1000)]
"""
Explanation: Correctly Specified Model
We run the simulation $1000$ times under a correctly specified logistic propensity score model. For each simulation, the treatment group is weighted so that it matches the population in terms of its covariate distributions.
The estimator is the weighted mean of $y$ in the treatment group.
End of explanation
"""
show_estimates(estimates_correct)
"""
Explanation: With correctly specified covariates to match ($Z_1, \dots, Z_4$),
the bias is smaller and the RMSE is better than any of the methods in the Kang & Schafer paper, where the best RMSE was 1.17.
End of explanation
"""
estimates_miss = [estimate_mean("-1 + x1 + x2 + x3 + x4") for i in range(1000)]
show_estimates(estimates_miss)
"""
Explanation: Misspecified Model
If the transformed covariates are observed in place of the true covariates, i.e., the propensity score model is misspecified, the estimate is no longer unbiased.
End of explanation
"""
formula = ("-1 + (x1 + x2 + x3 + x4)**2 + I(np.log(x1)) + I(np.log(x2)) + "
"I(np.log(x3)) + I(np.log(x4))")
estimates_expanded = [estimate_mean(formula) for i in range(1000)]
show_estimates(estimates_expanded)
"""
Explanation: Adding transformations of covariates
One reasonable strategy is to expand the set of balancing covariates and hope this makes the model less "misspecified". If we additionally balance the two-way interactions and the log transformations, the bias is indeed reduced.
End of explanation
"""
formula = "-1 + z1 + z2 + z3 + z4 + x1 + x2 + x3 + x4"
estimates_redundant = [estimate_mean(formula) for i in range(1000)]
show_estimates(estimates_redundant)
"""
Explanation: Adding extra covariates
If the model is misspecified in the sense that more covariates are included than necessary, the causal estimate remains unbiased.
End of explanation
"""
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=2000)
x1 = simulation.covariates[simulation.treatment == 1]
x0 = simulation.covariates[simulation.treatment == 0]
%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=20000)
x1 = simulation.covariates[simulation.treatment == 1]
x0 = simulation.covariates[simulation.treatment == 0]
%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=200000)
x1 = simulation.covariates[simulation.treatment == 1]
x0 = simulation.covariates[simulation.treatment == 0]
%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]
np.random.seed(123)
simulation = ec.data.kang_schafer.Simulation(size=2000000)
x1 = simulation.covariates[simulation.treatment == 1]
x0 = simulation.covariates[simulation.treatment == 0]
%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]
"""
Explanation: Benchmark Execution Time
The execution time is generally linear with respect to the sample size.
End of explanation
"""
|
therealAJ/python-sandbox
|
data-science/learning/ud1/DataScience/SimilarMovies.ipynb
|
gpl-3.0
|
import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3))
m_cols = ['movie_id', 'title']
movies = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.item', sep='|', names=m_cols, usecols=range(2))
ratings = pd.merge(movies, ratings)
ratings.head()
"""
Explanation: Finding Similar Movies
We'll start by loading up the MovieLens dataset. Using Pandas, we can very quickly load the rows of the u.data and u.item files that we care about, and merge them together so we can work with movie names instead of ID's. (In a real production job, you'd stick with ID's and worry about the names at the display layer to make things more efficient. But this lets us understand what's going on better for now.)
End of explanation
"""
movieRatings = ratings.pivot_table(index=['user_id'],columns=['title'],values='rating')
movieRatings.head()
"""
Explanation: Now the amazing pivot_table function on a DataFrame will construct a user / movie rating matrix. Note how NaN indicates missing data - movies that specific users didn't rate.
End of explanation
"""
starWarsRatings = movieRatings['Star Wars (1977)']
starWarsRatings.head()
"""
Explanation: Let's extract a Series of users who rated Star Wars:
End of explanation
"""
similarMovies = movieRatings.corrwith(starWarsRatings)
similarMovies = similarMovies.dropna()
df = pd.DataFrame(similarMovies)
df.head(10)
"""
Explanation: Pandas' corrwith function makes it really easy to compute the pairwise correlation of Star Wars' vector of user ratings with every other movie! After that, we'll drop any results that have no data, and construct a new DataFrame of movies and their correlation score (similarity) to Star Wars:
End of explanation
"""
similarMovies.sort_values(ascending=False)
"""
Explanation: (That warning is safe to ignore.) Let's sort the results by similarity score, and we should have the movies most similar to Star Wars! Except... we don't. These results make no sense at all! This is why it's important to know your data - clearly we missed something important.
End of explanation
"""
import numpy as np
movieStats = ratings.groupby('title').agg({'rating': [np.size, np.mean]})
movieStats.head()
"""
Explanation: Our results are probably getting messed up by movies that have only been viewed by a handful of people who also happened to like Star Wars. So we need to get rid of movies that were only watched by a few people, since they are producing spurious results. Let's construct a new DataFrame that counts up how many ratings exist for each movie, and also the average rating while we're at it - that could also come in handy later.
End of explanation
"""
popularMovies = movieStats['rating']['size'] >= 100
movieStats[popularMovies].sort_values([('rating', 'mean')], ascending=False)[:15]
"""
Explanation: Let's get rid of any movies rated by fewer than 100 people, and check the top-rated ones that are left:
End of explanation
"""
df = movieStats[popularMovies].join(pd.DataFrame(similarMovies, columns=['similarity']))
df.head()
"""
Explanation: 100 might still be too low, but these results look pretty good as far as "well rated movies that people have heard of." Let's join this data with our original set of similar movies to Star Wars:
End of explanation
"""
df.sort_values(['similarity'], ascending=False)[:15]
"""
Explanation: And, sort these new results by similarity score. That's more like it!
End of explanation
"""
|
liganega/Gongsu-DataSci
|
previous/notes2017/W10/GongSu22_Statistics_Population_Variance.ipynb
|
gpl-3.0
|
from GongSu21_Statistics_Averages import *
"""
Explanation: Source note: the material covered here was created with reference to the following site.
https://github.com/rouseguy/intro2stats
Point Estimation of the Population Variance
Notes
We want to reuse the Chapter 21 material covered last time.
Therefore, the Python file containing the Chapter 21 material as a module must be imported, as shown below.
Note: the file GongSu21_Statistics_Averages.py must be in the same directory.
End of explanation
"""
prices_pd.head()
"""
Explanation: Main Topics
Population and sample
Point estimation of the population variance
Main Example
We analyze in more detail the wholesale price data for cannabis (the plant) traded in the 51 US states, introduced in Chapter 21.
In particular, using the state of California as an example, we cover how to point-estimate the mean and variance of the transaction prices for all cannabis wholesale trades in a given state.
Main Modules
pandas: a module dedicated to statistical analysis
Built on top of the numpy module and specialized for statistical analysis.
Provides functionality that works much like Microsoft Excel
datetime: a module that helps display dates and times appropriately
scipy: a module supporting numerical computation, engineering mathematics, etc.
Note: the modules mentioned above have already been imported by the GongSu21_Statistics_Averages.py module.
Data Used Today
Cannabis (plant) wholesale prices and sale dates by state: Weed_Price.csv
The figure below shows part of the Weed_Price.csv file, which contains the cannabis sales data by US state, opened in Excel.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="img/weed_price.png" style="width:600">
</td>
</tr>
</table>
</p>
Note: the file mentioned above has been stored in a variable named prices_pd by the GongSu21_Statistics_Averages module.
It is also already sorted by state (State) and transaction date (date).
Therefore, as shown below, the first five rows of prices_pd contain the five earliest transactions recorded for Alabama, the state whose name comes first alphabetically.
End of explanation
"""
california_pd['HighQ_dev'] = (california_pd['HighQ'] - ca_mean) ** 2
california_pd.head()
"""
Explanation: Population and Sample
The cannabis (plant) wholesale prices contained in the Weed_Price.csv file do not cover every wholesale transaction made in the US; they contain only a small number of transactions.
Data like this, which collects only a small portion of the objects under study, is called a sample.
In contrast, the set of all cannabis wholesale prices traded in the US is the population of the objects we want to study.
Here, we use the sample contained in Weed_Price.csv to estimate the population variance and to examine the correlation between the transactions made in each state.
Reference: for a more detailed explanation of populations, samples, and point estimation, see the two files below.
* GongSu22_Statistics_Sampling_a.pdf
* GongSu22_Statistics_Sampling_b.pdf
Point Estimation of the Population Mean and Variance
Point estimate of the population mean: use the sample mean as-is.
$$\hat{x} = \bar x = \frac{\Sigma_{i=1}^{n} x_i}{n}$$
$\hat x$ denotes the point estimate of the population mean
$\bar x$ denotes the mean of the sample data
Point estimate of the population variance: the population variance can be estimated from the sample data.
$$\hat\sigma^2 = s^2 = \frac{\Sigma_{i = 1}^{n}(x_i - \bar x)^2}{n-1}$$
$\hat\sigma^2$ denotes the point estimate of the population variance
Note:
* When computing $s^2$, we divide by $n-1$ instead of $n$.
* This is because the population variance is generally somewhat larger than the variance of the sample.
Point estimate of the variance of all high-quality (HighQ) cannabis wholesale prices traded in California
First, we need to operate on the prices of high-quality (HighQ) cannabis traded in California, taken from the data contained in prices_pd.
In other words, this is the preparation needed to compute the numerator of the formula below.
$$s^2 = \frac{\Sigma_{i = 1}^{n}(x_i - \bar x)^2}{n-1}$$
Note: the mean wholesale price of high-quality (HighQ) cannabis traded in California has already been computed as ca_mean.
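As a quick cross-check of the $n-1$ convention (a sketch assuming california_pd and its HighQ column are available, as in the surrounding code): pandas' Series.var() already uses ddof=1, so it should agree with the manual formula.
```python
manual = ((california_pd['HighQ'] - california_pd['HighQ'].mean()) ** 2).sum() / (len(california_pd) - 1)
print(manual, california_pd['HighQ'].var())  # the two values should agree
```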
End of explanation
"""
ca_HighQ_variance = california_pd.HighQ_dev.sum() / (ca_count - 1)
ca_HighQ_variance
"""
Explanation: We can now compute the point estimate of the variance of the whole population of high-quality (HighQ) cannabis transaction prices in California.
Note: the sample size is ca_count.
End of explanation
"""
# Standard deviation of the wholesale prices of high-quality (HighQ) cannabis traded in California
ca_HighQ_SD = np.sqrt(ca_HighQ_variance)
ca_HighQ_SD
"""
Explanation: Note:
* Operations on DataFrame objects are applied element-wise, just like operations on numpy arrays.
* Remember how the sum method is used here.
Point Estimation of the Standard Deviation
Simply take the square root of the point estimate of the population variance.
End of explanation
"""
|
etpinard/delightfulsoup
|
examples/ipython-notebook/notebook.ipynb
|
mit
|
import plotly
plotly.__version__
"""
Explanation: Plotly maps
with Plotly's Python API library and Basemap
This notebook comes in response to <a href="https://twitter.com/rjallain/status/496767038782570496" target="_blank">this</a> Rhett Allain tweet.
Although Plotly does not feature built-in maps functionality (yet), this notebook demonstrates how to plotly-fy maps generated by Basemap.
<hr>
First, check which version of the Python API library is installed on your machine:
End of explanation
"""
import plotly.plotly as py
"""
Explanation:
Next, if you have a plotly account as well as a credentials file set up on your machine, signing in to Plotly's servers is done automatically while importing plotly.plotly.
End of explanation
"""
from plotly.graph_objs import *
"""
Explanation: Import the plotly graph objects (in particular Contour) to help build our figure:
End of explanation
"""
import numpy as np
from scipy.io import netcdf
"""
Explanation: Data with this notebook will be taken from a NetCDF file, so import netcdf class from the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.io.netcdf.netcdf_file.html" target="_blank">scipy.io</a> module, along with numpy:
End of explanation
"""
from mpl_toolkits.basemap import Basemap
"""
Explanation: Finally, import the Matplotlib <a href="http://matplotlib.org/basemap/" target="_blank">Basemap</a> Toolkit; its installation instructions can be found <a href="http://matplotlib.org/basemap/users/installing.html" target="_blank">here</a>.
End of explanation
"""
# Path the downloaded NetCDF file (different for each download)
f_path = '/home/etienne/Downloads/compday.Bo3cypJYyE.nc'
# Retrieve data from NetCDF file
with netcdf.netcdf_file(f_path, 'r') as f:
lon = f.variables['lon'][::] # copy as list
lat = f.variables['lat'][::-1] # invert the latitude vector -> South to North
air = f.variables['air'][0,::-1,:] # squeeze out the time dimension,
# invert latitude index
"""
Explanation: 1. Get the data!
The data is taken from <a href="http://www.esrl.noaa.gov/psd/data/composites/day/" target="_blank">NOAA Earth System Research Laboratory</a>.
Unfortunately, this website does not allow you to script your data request and/or use wget to download the data. <br>
That said, the data used for this notebook can be downloaded in only a few clicks:
Select Air Temperature in Variables
Select Surface in Analysis level?
Select Jul | 1 and Jul | 31
Enter 2014 in the Enter Year of last day of range field
Select Anomaly in Plot type?
Select All in Region of globe
Click on Create Plot
Then on the following page, click on Get a copy of the netcdf data file used for the plot to download the NetCDF on your machine.
Note that the data represents the average daily surface air temperature anomaly (in deg. C) for July 2014 with respect to 1981-2010 climatology.
Now, import the NetCDF file into this IPython session. The following was inspired by this earthpy blog <a href="http://earthpy.org/interpolation_between_grids_with_basemap.html" target="_blank">post</a>.
End of explanation
"""
# Shift 'lon' from [0,360] to [-180,180], make numpy array
tmp_lon = np.array([lon[n]-360 if l>=180 else lon[n]
for n,l in enumerate(lon)]) # => [0,180]U[-180,2.5]
i_east, = np.where(tmp_lon>=0) # indices of east lon
i_west, = np.where(tmp_lon<0) # indices of west lon
lon = np.hstack((tmp_lon[i_west], tmp_lon[i_east])) # stack the 2 halves
# Correspondingly, shift the 'air' array
tmp_air = np.array(air)
air = np.hstack((tmp_air[:,i_west], tmp_air[:,i_east]))
"""
Explanation: The values in lon start at 0 degrees and increase eastward to 360 degrees. So, the air array is centered about the Pacific Ocean. For a better-looking plot, shift the data so that it is centered about the 0 meridian:
End of explanation
"""
trace1 = Contour(
z=air,
x=lon,
y=lat,
colorscale="RdBu",
zauto=False, # custom contour levels
zmin=-5, # first contour level
zmax=5 # last contour level => colorscale is centered about 0
)
"""
Explanation: 2. Make Contour graph object
Very simply,
End of explanation
"""
# Make shortcut to Basemap object,
# not specifying projection type for this example
m = Basemap()
# Make trace-generating function (return a Scatter object)
def make_scatter(x,y):
return Scatter(
x=x,
y=y,
mode='lines',
line=Line(color="black"),
name=' ' # no name on hover
)
# Functions converting coastline/country polygons to lon/lat traces
def polygons_to_traces(poly_paths, N_poly):
'''
pos arg 1. (poly_paths): paths to polygons
pos arg 2. (N_poly): number of polygon to convert
'''
traces = [] # init. plotting list
for i_poly in range(N_poly):
poly_path = poly_paths[i_poly]
# get the Basemap coordinates of each segment
coords_cc = np.array(
[(vertex[0],vertex[1])
for (vertex,code) in poly_path.iter_segments(simplify=False)]
)
# convert coordinates to lon/lat by 'inverting' the Basemap projection
lon_cc, lat_cc = m(coords_cc[:,0],coords_cc[:,1], inverse=True)
# add plot.ly plotting options
traces.append(make_scatter(lon_cc,lat_cc))
return traces
# Function generating coastline lon/lat traces
def get_coastline_traces():
poly_paths = m.drawcoastlines().get_paths() # coastline polygon paths
    N_poly = 91  # use only the 91 biggest coastlines (i.e. no rivers)
return polygons_to_traces(poly_paths, N_poly)
# Function generating country lon/lat traces
def get_country_traces():
poly_paths = m.drawcountries().get_paths() # country polygon paths
N_poly = len(poly_paths) # use all countries
return polygons_to_traces(poly_paths, N_poly)
"""
Explanation: 3. Get the coastlines and country boundaries with Basemap
The Basemap module includes data for drawing coastlines and country boundaries onto world maps. Adding coastlines and/or country boundaries on a matplotlib figure is done with the .drawcoaslines() or .drawcountries() Basemap methods.
Next, we will retrieve the Basemap plotting data (or polygons) and convert them to longitude/latitude arrays (inspired by this stackoverflow <a href="http://stackoverflow.com/questions/14280312/world-map-without-rivers-with-matplotlib-basemap" target="_blank">post</a>) and then package them into Plotly Scatter graph objects .
In other words, the goal is to plot each continuous coastline and country boundary lines as 1 Plolty scatter line trace.
End of explanation
"""
# Get list of of coastline and country lon/lat traces
traces_cc = get_coastline_traces()+get_country_traces()
"""
Explanation: Then,
End of explanation
"""
data = Data([trace1]+traces_cc)
"""
Explanation: 4. Make a figue object and plot!
Package the Contour trace with the coastline and country traces. Note that the Contour trace must be placed before the coastline and country traces in order to make all traces visible.
End of explanation
"""
title = u"Average daily surface air temperature anomalies [\u2103]<br> \
in July 2014 with respect to 1981-2010 climatology"
anno_text = "Data courtesy of \
<a href='http://www.esrl.noaa.gov/psd/data/composites/day/'>\
NOAA Earth System Research Laboratory</a>"
axis_style = dict(
zeroline=False,
showline=False,
showgrid=False,
ticks='',
showticklabels=False,
)
layout = Layout(
title=title,
showlegend=False,
hovermode="closest", # highlight closest point on hover
xaxis=XAxis(
axis_style,
        range=[lon[0],lon[-1]]  # restrict x-axis to range of lon
),
yaxis=YAxis(
axis_style,
),
annotations=Annotations([
Annotation(
text=anno_text,
xref='paper',
yref='paper',
x=0,
y=1,
yanchor='bottom',
showarrow=False
)
]),
autosize=False,
width=1000,
height=500,
)
"""
Explanation: Layout options are set in a Layout object:
End of explanation
"""
fig = Figure(data=data, layout=layout)
py.iplot(fig, filename="maps", width=1000)
"""
Explanation: Package data and layout in a Figure object and send it to plotly:
End of explanation
"""
from IPython.display import display, HTML
import urllib2
url = 'https://raw.githubusercontent.com/plotly/python-user-guide/master/custom.css'
display(HTML(urllib2.urlopen(url).read()))
"""
Explanation: See this graph in full screen <a href="https://plot.ly/~etpinard/453" target="_blank">here</a>.
To learn more about Plotly's Python API
Refer to
our online documentation <a href="https://plot.ly/python/" target="_blank">page</a> or
our <a href="https://plot.ly/python/user-guide/" target="_blank">User Guide</a>.
<br>
<hr>
<br>
<div style="float:right; \">
<img alt="plotly logo" src="http://i.imgur.com/4vwuxdJ.png"
align=right style="float:right; margin-left: 5px; margin-top: -10px" />
</div>
<h4 style="margin-top:60px;"> Got Questions or Feedback? </h4>
About <a href="https://plot.ly" target="_blank">Plotly</a>
email: feedback@plot.ly
tweet:
<a href="https://twitter.com/plotlygraphs" target="_blank">@plotlygraphs</a>
<h4 style="margin-top:30px;">Notebook styling ideas</h4>
Big thanks to
<a href="http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Prologue/Prologue.ipynb" target="_blank">Cam Davidson-Pilon</a>
<a href="http://lorenabarba.com/blog/announcing-aeropython/#.U1ULXdX1LJ4.google_plusone_share" target="_blank">Lorena A. Barba</a>
<br>
End of explanation
"""
|
gaufung/ISL
|
training-materials/Stasmodels-training/Regrssion Diagnostics.ipynb
|
mit
|
from statsmodels.compat import lzip
import statsmodels
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.stats.api as sms
import matplotlib.pyplot as plt
# Load data
url = 'http://vincentarelbundock.github.io/Rdatasets/csv/HistData/Guerry.csv'
dat = pd.read_csv(url)
# Fit regression model (using the natural log of one of the regressors)
results = smf.ols('Lottery ~ Literacy + np.log(Pop1831)', data=dat).fit()
# Inspect the results
print(results.summary())
%matplotlib inline
"""
Explanation: Regression Diagnostics
In many cases of statistical analysis, we are not sure whether our statistical model is correctly specified. For example when using ols, then linearity and homoscedasticity are assumed, some test statistics additionally assume that the errors are normally distributed or that we have a large sample. Since our results depend on these statistical assumptions, the results are only correct of our assumptions hold (at least approximately).
One solution to the problem of uncertainty about the correct specification is to use robust methods, for example robust regression or robust covariance (sandwich) estimators. The second approach is to test whether our sample is consistent with these assumptions.
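As a minimal illustration of the robust-covariance option mentioned above (a sketch; 'HC3' is one of several heteroskedasticity-consistent estimators that statsmodels accepts in fit):
```python
robust_results = smf.ols('Lottery ~ Literacy + np.log(Pop1831)', data=dat).fit(cov_type='HC3')
print(robust_results.summary())
```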
Estimate a regression model
End of explanation
"""
name = ['Jarque-Bera', 'Chi^2 two-tail prob.', 'Skew', 'Kurtosis']
test = sms.jarque_bera(results.resid)
lzip(name, test)
name = ['Chi^2', 'Two-tail probability']
test = sms.omni_normtest(results.resid)
lzip(name, test)
"""
Explanation: Normality of the residuals
End of explanation
"""
from statsmodels.stats.outliers_influence import OLSInfluence
test_class = OLSInfluence(results)
test_class.dfbetas[:5,:]
"""
Explanation: Influence Tests
Once created, an object of class OLSInfluence holds attributes and methods that allow users to assess the influence of each observation. For example, we can compute and extract the first few rows of DFbetas by:
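Other per-observation measures are exposed on the same object, for example Cook's distance (a sketch; the attribute returns the distances together with their p-values):
```python
cooks_d, cooks_p = test_class.cooks_distance
print(cooks_d[:5])
```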
End of explanation
"""
from statsmodels.graphics.regressionplots import plot_leverage_resid2
fig, ax = plt.subplots(figsize=(8,6))
fig = plot_leverage_resid2(results, ax = ax)
"""
Explanation: Useful information on leverage can also be plotted:
End of explanation
"""
# condition number
np.linalg.cond(results.model.exog)
"""
Explanation: Multicollinearity
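Besides the condition number computed above, variance inflation factors are another common multicollinearity diagnostic (a sketch using the fitted model's design matrix; column 0 is the intercept):
```python
from statsmodels.stats.outliers_influence import variance_inflation_factor
exog = results.model.exog
vifs = [variance_inflation_factor(exog, i) for i in range(exog.shape[1])]
lzip(results.model.exog_names, vifs)
```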
End of explanation
"""
name = ['Lagrange multiplier statistic', 'p-value',
'f-value', 'f p-value']
test = sms.het_breushpagan(results.resid, results.model.exog)
lzip(name, test)
"""
Explanation: Heteroskedasticity Tests
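Another heteroskedasticity check available in statsmodels is the Goldfeld-Quandt test, which compares the residual variances of two sub-samples (a sketch complementing the Breusch-Pagan test above):
```python
name = ['F statistic', 'p-value']
test = sms.het_goldfeldquandt(results.resid, results.model.exog)
lzip(name, test)
```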
End of explanation
"""
name = ['t value', 'p value']
test = sms.linear_harvey_collier(results)
lzip(name, test)
"""
Explanation: Linearity
End of explanation
"""
|
peastman/deepchem
|
examples/tutorials/The_Basic_Tools_of_the_Deep_Life_Sciences.ipynb
|
mit
|
!pip install --pre deepchem
"""
Explanation: The Basic Tools of the Deep Life Sciences
Welcome to DeepChem's introductory tutorial for the deep life sciences. This series of notebooks is a step-by-step guide for you to get to know the new tools and techniques needed to do deep learning for the life sciences. We'll start from the basics, assuming that you're new to machine learning and the life sciences, and build up a repertoire of tools and techniques that you can use to do meaningful work in the life sciences.
Scope: This tutorial will encompass both the machine learning and data handling needed to build systems for the deep life sciences.
Colab
This tutorial and the rest in the sequences are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Why do the DeepChem Tutorial?
1) Career Advancement: Applying AI in the life sciences is a booming
industry at present. There are a host of newly funded startups and initiatives
at large pharmaceutical and biotech companies centered around AI. Learning and
mastering DeepChem will bring you to the forefront of this field and will
prepare you to enter a career in this field.
2) Humanitarian Considerations: Disease is the oldest cause of human
suffering. From the dawn of human civilization, humans have suffered from pathogens,
cancers, and neurological conditions. One of the greatest achievements of
the last few centuries has been the development of effective treatments for
many diseases. By mastering the skills in this tutorial, you will be able to
stand on the shoulders of the giants of the past to help develop new
medicine.
3) Lowering the Cost of Medicine: The art of developing new medicine is
currently an elite skill that can only be practiced by a small core of expert
practitioners. By enabling the growth of open source tools for drug discovery,
you can help democratize these skills and open up drug discovery to more
competition. Increased competition can help drive down the cost of medicine.
Getting Extra Credit
If you're excited about DeepChem and want to get more involved, there are some things that you can do right now:
Star DeepChem on GitHub! - https://github.com/deepchem/deepchem
Join the DeepChem forums and introduce yourself! - https://forum.deepchem.io
Say hi on the DeepChem gitter - https://gitter.im/deepchem/Lobby
Make a YouTube video teaching the contents of this notebook.
Prerequisites
This tutorial sequence will assume some basic familiarity with the Python data science ecosystem. We will assume that you have familiarity with libraries such as Numpy, Pandas, and TensorFlow. We'll provide some brief refreshers on basics through the tutorial so don't worry if you're not an expert.
Setup
The first step is to get DeepChem up and running. We recommend using Google Colab to work through this tutorial series. You'll also need to run the following commands to get DeepChem installed on your colab notebook.
End of explanation
"""
import deepchem as dc
dc.__version__
"""
Explanation: You can of course run this tutorial locally if you prefer. In this case, there is no need to run the cell above if DeepChem is already installed on your machine. In either case, we can now import the deepchem package to play with.
End of explanation
"""
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = datasets
"""
Explanation: Training a Model with DeepChem: A First Example
Deep learning can be used to solve many sorts of problems, but the basic workflow is usually the same. Here are the typical steps you follow.
Select the data set you will train your model on (or create a new data set if there isn't an existing suitable one).
Create the model.
Train the model on the data.
Evaluate the model on an independent test set to see how well it works.
Use the model to make predictions about new data.
With DeepChem, each of these steps can be as little as one or two lines of Python code. In this tutorial we will walk through a basic example showing the complete workflow to solve a real world scientific problem.
The problem we will solve is predicting the solubility of small molecules given their chemical formulas. This is a very important property in drug development: if a proposed drug isn't soluble enough, you probably won't be able to get enough into the patient's bloodstream to have a therapeutic effect. The first thing we need is a data set of measured solubilities for real molecules. One of the core components of DeepChem is MoleculeNet, a diverse collection of chemical and molecular data sets. For this tutorial, we can use the Delaney solubility data set. The property of solubility in this data set is reported in log(solubility) where solubility is measured in moles/liter.
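Before building a model, it can help to peek at what was just loaded. This is a quick sketch assuming the usual DeepChem dataset attributes (.y and .ids), which the MoleculeNet loaders return:
```python
print(train_dataset)          # short summary of the training split
print(train_dataset.y[:3])    # measured log(solubility) labels
print(train_dataset.ids[:3])  # SMILES strings identifying the molecules
```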
End of explanation
"""
model = dc.models.GraphConvModel(n_tasks=1, mode='regression', dropout=0.2)
"""
Explanation: I won't say too much about this code right now. We will see many similar examples in later tutorials. There are two details I do want to draw your attention to. First, notice the featurizer argument passed to the load_delaney() function. Molecules can be represented in many ways. We therefore tell it which representation we want to use, or in more technical language, how to "featurize" the data. Second, notice that we actually get three different data sets: a training set, a validation set, and a test set. Each of these serves a different function in the standard deep learning workflow.
Now that we have our data, the next step is to create a model. We will use a particular kind of model called a "graph convolutional network", or "graphconv" for short.
End of explanation
"""
model.fit(train_dataset, nb_epoch=100)
"""
Explanation: Here again I will not say much about the code. Later tutorials will give lots more information about GraphConvModel, as well as other types of models provided by DeepChem.
We now need to train the model on the data set. We simply give it the data set and tell it how many epochs of training to perform (that is, how many complete passes through the data to make).
End of explanation
"""
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
print("Training set score:", model.evaluate(train_dataset, [metric], transformers))
print("Test set score:", model.evaluate(test_dataset, [metric], transformers))
"""
Explanation: If everything has gone well, we should now have a fully trained model! But do we? To find out, we must evaluate the model on the test set. We do that by selecting an evaluation metric and calling evaluate() on the model. For this example, let's use the squared Pearson correlation coefficient, known as r<sup>2</sup>, as our metric. We can evaluate it on both the training set and test set.
End of explanation
"""
solubilities = model.predict_on_batch(test_dataset.X[:10])
for molecule, solubility, test_solubility in zip(test_dataset.ids, solubilities, test_dataset.y):
print(solubility, test_solubility, molecule)
"""
Explanation: Notice that it has a higher score on the training set than the test set. Models usually perform better on the particular data they were trained on than they do on similar but independent data. This is called "overfitting", and it is the reason it is essential to evaluate your model on an independent test set.
Our model still has quite respectable performance on the test set. For comparison, a model that produced totally random outputs would have a correlation of 0, while one that made perfect predictions would have a correlation of 1. Our model does quite well, so now we can use it to make predictions about other molecules we care about.
Since this is just a tutorial and we don't have any other molecules we specifically want to predict, let's just use the first ten molecules from the test set. For each one we print out the chemical structure (represented as a SMILES string) and the predicted log(solubility). To put these predictions in
context, we print out the log(solubility) values from the test set as well.
End of explanation
"""
|
flowersteam/naminggamesal
|
notebooks/5_Intro_Experiment.ipynb
|
agpl-3.0
|
import naminggamesal.ngsimu as ngsimu
"""
Explanation: Experiments
End of explanation
"""
xp_cfg={
'pop_cfg':{
'voc_cfg':{
'voc_type':'matrix',
'M':5,
'W':10
},
'strat_cfg':{
'strat_type':'success_threshold',
'voc_update':'Minimal'
},
'interact_cfg':{
'interact_type':'speakerschoice'
},
'nbagent':10
},
'step':1
}
testexp=ngsimu.Experiment(**xp_cfg)
testexp
print(testexp)
testexp.continue_exp(1)
print(testexp)
testexp.visual()
"""
Explanation: Let's create an experiment
End of explanation
"""
Tvec=[20,50,100]
for i in range(100):
testexp.continue_exp()
#print str(testexp._poplist[-1])
for i in Tvec:
testexp.visual(tmax=i)
"""
Explanation: Let's see the evolution of this vocabulary, after 20, 50 and 100 interactions.
End of explanation
"""
#testexp.graph("srtheo").show()
test=testexp.graph("srtheo")
test.show()
testexp.graph("Nlinksurs").show()
testexp.graph("entropy").show()
testexp.graph("entropycouples").show()
"""
Explanation: We can graph measures on this population (more info on other possible measures: Design_newMeasures.ipynb):
End of explanation
"""
|
jeffzhengye/pylearn
|
.ipynb_checkpoints/jpx-tokyo-simple-lstm-network-scuec-checkpoint.ipynb
|
unlicense
|
# check gpu env with torch
import torch
print(torch.__version__)  # current torch version
print(torch.version.cuda)  # CUDA version this torch build was compiled with
print("is_cuda_available:", torch.cuda.is_available())  # check whether CUDA is usable with this version of torch
print('gpu count:', torch.cuda.device_count())
# check the capability and name of a given GPU
device = "cuda:0"
print(f"{device} capability:", torch.cuda.get_device_capability(device))
print(f"{device} name:", torch.cuda.get_device_name(device))
"""
Explanation: Introduction
The competition is to predict the highest future returns for stocks that are actually traded on the Japan Exchange Group, Inc.
In this notebook, we will work with the jpx_tokyo_market_prediction API, which may be unfamiliar to Kaggle beginners, and show how to extract the relevant data from the training data.
Table of Contents
Explanation of data
jpx_tokyo_market_prediction
Create models and submit data
TODO
add Sharpe ratio metric for evaluation
improve prediction with DataFrame operations
pytorch gpu check
End of explanation
"""
import numpy as np
import pandas as pd
"""
Explanation: Explanation of data
Loading Modules
First, load the required modules.
In this case, we will use pandas to load the data.
End of explanation
"""
stock_price_df = pd.read_csv("/mnt/d/dataset/quant/kaggle22/train_files/stock_prices.csv")
test_stock_price_df = pd.read_csv("/mnt/d/dataset/quant/kaggle22/supplemental_files/stock_prices.csv")
# stock_price_df = pd.read_csv("../input/jpx-tokyo-stock-exchange-prediction//train_files/stock_prices.csv")
# test_stock_price_df = pd.read_csv("../input/jpx-tokyo-stock-exchange-prediction/supplemental_files/stock_prices.csv")
from datetime import datetime
from sklearn.preprocessing import StandardScaler
stdsc = StandardScaler()
# columns = ['Open', 'High', 'Low', 'Close', 'Volume', 'AdjustmentFactor', 'ExpectedDividend', 'SupervisionFlag']
def preprocess(df, processor, columns = ['Open', 'High', 'Low', 'Close', 'Volume', 'AdjustmentFactor', 'ExpectedDividend', 'SupervisionFlag'], is_fit=True):
df = df.copy()
df['ExpectedDividend'] = df['ExpectedDividend'].fillna(0)
df['SupervisionFlag'] = df['SupervisionFlag'].map({True: 1, False: 0})
df['Date'] = pd.to_datetime(df['Date'])
# df.info()
df = df.dropna(how='any')
df[columns] = processor.fit_transform(df[columns])
return df
train_df = preprocess(stock_price_df, stdsc, is_fit=True)
test_df = preprocess(test_stock_price_df, stdsc, is_fit=False)
train_df
"""
Explanation: Check the data
Read stock_prices.csv using read_csv in pandas.
End of explanation
"""
def calc_spread_return_sharpe(df: pd.DataFrame, portfolio_size: int = 200, toprank_weight_ratio: float = 2, rank='Rank') -> float:
"""
Args:
df (pd.DataFrame): predicted results
portfolio_size (int): # of equities to buy/sell
toprank_weight_ratio (float): the relative weight of the most highly ranked stock compared to the least.
Returns:
(float): sharpe ratio
"""
def _calc_spread_return_per_day(df, portfolio_size, toprank_weight_ratio, rank='Rank'):
"""
Args:
df (pd.DataFrame): predicted results
portfolio_size (int): # of equities to buy/sell
toprank_weight_ratio (float): the relative weight of the most highly ranked stock compared to the least.
Returns:
(float): spread return
"""
assert df[rank].min() == 0
assert df[rank].max() == len(df[rank]) - 1
weights = np.linspace(start=toprank_weight_ratio, stop=1, num=portfolio_size)
purchase = (df.sort_values(by=rank)['Target'][:portfolio_size] * weights).sum() / weights.mean()
short = (df.sort_values(by=rank, ascending=False)['Target'][:portfolio_size] * weights).sum() / weights.mean()
return purchase - short
buf = df.groupby('Date').apply(_calc_spread_return_per_day, portfolio_size, toprank_weight_ratio, rank)
sharpe_ratio = buf.mean() / buf.std()
return sharpe_ratio
# add rank according to Target for train
train_df['Rank'] = train_df.groupby("Date")['Target'].transform('rank', ascending=False, method="first") - 1
train_df['Rank'] = train_df['Rank'].astype(int)
# print(train_df['Rank'].min())
df_astock = train_df[train_df['Date'] == '2021-12-03']
# make sure it's correct
print(df_astock['Rank'].min(), df_astock['Rank'].max())
df_astock.sort_values(by=['Target'], ascending=False)
# sharp = calc_spread_return_sharpe(train_df)
tmpdf = test_df.copy()
tmpdf["Close_shift1"] = tmpdf["Close"].shift(-1)
tmpdf["Close_shift2"] = tmpdf["Close"].shift(-2)
tmpdf["rate"] = (tmpdf["Close_shift2"] - tmpdf["Close_shift1"]) / tmpdf["Close_shift1"]
tmpdf.fillna(value={'rate': 0.}, inplace=True)
tmpdf['Rank'] = tmpdf.groupby("Date")['Target'].transform('rank', ascending=False, method="first") - 1
tmpdf['Rank'] = tmpdf['Rank'].astype(int)
tmpdf
test_df
sharp_train = calc_spread_return_sharpe(train_df)
sharp_test = calc_spread_return_sharpe(tmpdf)
print(f"train={sharp_train}, test={sharp_test}")
train_df.drop(['Rank'], axis=1, inplace=True)
"""
Explanation: Define the metric (Sharpe ratio) and compute it for the training data, where the Target is known
End of explanation
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
class LSTM(nn.Module):
def __init__(self, d_feat=6, hidden_size=64, num_layers=2, dropout=0.0):
super().__init__()
self.rnn = nn.LSTM(
input_size=d_feat,
hidden_size=hidden_size,
num_layers=num_layers,
batch_first=True,
dropout=dropout,
)
self.fc_out = nn.Linear(hidden_size, 1)
self.d_feat = d_feat
def forward(self, x):
# x: [N, F*T]
# x = x.reshape(len(x), self.d_feat, -1) # [N, F, T]
# x = x.permute(0, 2, 1) # [N, T, F]
out, _ = self.rnn(x)
return self.fc_out(out[:, -1, :]).squeeze(dim=-1)
"""
Explanation: Check the shape of this data (nrows, columns) and its contents.
The data contained in stock_prices.csv is as follows.
* SecuritiesCode ... Securities code (number assigned to each stock)
* Open ... Opening price (price per share at the beginning of the day (9:00 am))
* High ... The highest price of the day
* Low ... The lowest price of the day
* Close ... Closing price
* Volume ... Volume (number of shares traded in a day)
* AdjustmentFactor ... Used to calculate the theoretical stock price and volume at the time of a stock split or reverse stock split
* ExpectedDividend ... Expected dividend on the ex-rights date
* SupervisionFlag ... Flag for supervised issues and delisted issues
* Target ... Percentage change in adjusted closing price (from one day to the next)
Although much other data is available for this competition, we will implement this using only the information in stock_prices.csv.
jpx_tokyo_market_prediction
Next, we will check the usage of the API named jpx_tokyo_market_prediction.
First, import it as you would any other module.
Since jpx_tokyo_market_prediction can only be executed once per session, we show the code as Markdown instead of running it here.
python
import jpx_tokyo_market_prediction
env = jpx_tokyo_market_prediction.make_env()
iter_test = env.iter_test()
The environment was created by executing make_env() and the object was created by executing iter_test().
As shown below, looking at the type, iter_test is a generator, so we can confirm that it is an object that can be called one by one with a for statement.
python
print(type(iter_test))
[Output]
<class 'generator'>
By iterating with a for statement, we can check its behavior as follows.
python
count = 0
for (prices, options, financials, trades, secondary_prices, sample_prediction) in iter_test:
print(prices.head())
env.predict(sample_prediction)
count += 1
break
[Output]
```
This version of the API is not optimized and should not be used to estimate the runtime of your code on the hidden test set.
Date RowId SecuritiesCode Open High Low Close \
0 2021-12-06 20211206_1301 1301 2982.0 2982.0 2965.0 2971.0
1 2021-12-06 20211206_1332 1332 592.0 599.0 588.0 589.0
2 2021-12-06 20211206_1333 1333 2368.0 2388.0 2360.0 2377.0
3 2021-12-06 20211206_1375 1375 1230.0 1239.0 1224.0 1224.0
4 2021-12-06 20211206_1376 1376 1339.0 1372.0 1339.0 1351.0
Volume AdjustmentFactor ExpectedDividend SupervisionFlag
0 8900 1.0 NaN False
1 1360800 1.0 NaN False
2 125900 1.0 NaN False
3 81100 1.0 NaN False
4 6200 1.0 NaN False
```
The names of each variable are as follows.
* price ... Data for each stock on the target day, the same as the information in stock_price.csv without Target.
* options ... Same information as options.csv for the target date.
* finacials ... Same information as finacials.csv for the target date.
* trades ... Same information as trades.csv of the target date
* secondary_prices ... Same information as secondary_stock_price.csv without Target for the target date.
* sample_prediction ... Data from sample_prediction.csv for the target date.
Thus, if we call the 2000 stocks of the target date one day at a time using jpx_tokyo_market_prediction, forecast them with the model we created, and then create the submitted data with env.predict, we can produce a score.
Create models and submit data
Here, we will create a simple training model using stock_prices.csv and take it all the way to submission.
Create Model (LSTM)
We use a model called LSTM (Long Short-Term Memory).
LSTM is a type of RNN used for sequence data and is able to learn long-term dependencies.
We will implement the LSTM using PyTorch.
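As a quick shape check of the model class above (a sketch with assumed sizes: a batch of 8 windows, 14 time steps, 5 features per step):
```python
_model = LSTM(d_feat=5, hidden_size=64, num_layers=2)
_x = torch.randn(8, 14, 5)   # [N, T, F]
print(_model(_x).shape)      # expected: torch.Size([8]), one prediction per window
```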
End of explanation
"""
# stock_price_df['ExpectedDividend'] = stock_price_df['ExpectedDividend'].fillna(0)
# stock_price_df['SupervisionFlag'] = stock_price_df['SupervisionFlag'].map({True: 1, False: 0})
# stock_price_df['Date'] = pd.to_datetime(stock_price_df['Date'])
# stock_price_df.info()
"""
Explanation: Create Dataset
Create a dataset that can be retrieved per SecuritiesCode.
First, convert NaN values in stock_price_df to 0, booleans to int, and 'Date' to datetime.
End of explanation
"""
# stock_price_df = stock_price_df.dropna(how='any')
# # Confirmation of missing information
# stock_price_df_na = (stock_price_df.isnull().sum() / len(stock_price_df)) * 100
# stock_price_df_na = stock_price_df_na.drop(stock_price_df_na[stock_price_df_na == 0].index).sort_values(ascending=False)[:30]
# missing_data = pd.DataFrame({'Missing Ratio' :stock_price_df_na})
# missing_data.head(22)
"""
Explanation: Some of them contained missing values, so they were removed.
End of explanation
"""
# from sklearn.preprocessing import StandardScaler
# stdsc = StandardScaler()
# columns = ['Open', 'High', 'Low', 'Close', 'Volume', 'AdjustmentFactor', 'ExpectedDividend', 'SupervisionFlag']
# stock_price_df[columns] = stdsc.fit_transform(stock_price_df[columns])
# stock_price_df.head()
"""
Explanation: 数据预处理是否应该把所有股票放在一起?
stdscStandardize the features (other than RowId, Date, and SecuritiesCode) to be used in this project using sklearn's StandardScaler.
End of explanation
"""
dataset_dict = {}
for sc in train_df['SecuritiesCode'].unique():
dataset_dict[str(sc)] = train_df[train_df['SecuritiesCode'] == sc].values[:, 3:].astype(np.float32)
print(dataset_dict['1301'].shape)
"""
Explanation: Store the data for each stock in a dictionary so that it can be retrieved per stock.
End of explanation
"""
from torch.utils.data.sampler import SubsetRandomSampler
class MyDataset(torch.utils.data.Dataset):
def __init__(self, X, sequence_num=31, y=None, mode='train'):
self.data = X
self.teacher = y
self.sequence_num = sequence_num
self.mode = mode
def __len__(self):
return len(self.teacher)
def __getitem__(self, idx):
out_data = self.data[idx]
if self.mode == 'train':
out_label = self.teacher[idx[-1]]
return out_data, out_label
else:
return out_data
def create_dataloader(dataset, dataset_num, sequence_num=31, input_size=8, batch_size=32, shuffle=False):
sampler = np.array([list(range(i, i+sequence_num)) for i in range(dataset_num-sequence_num+1)])
if shuffle is True:
np.random.shuffle(sampler)
dataloader = torch.utils.data.DataLoader(dataset, batch_size, sampler=sampler)
return dataloader
test_df.loc[test_df['Date'] == "2021-12-06"]
"""
Explanation: Use a PyTorch DataLoader to load the data in mini-batches.
End of explanation
"""
import matplotlib.pyplot as plt
plt.plot(log_train[0][1:], log_train[1][1:])
plt.plot(log_eval[0][1:], log_eval[1][1:])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
from tqdm import tqdm
import time
import os
import copy
output_dir = "output_lstm"
if not os.path.exists(output_dir):
os.makedirs(output_dir)
epochs = 20
batch_size = 512
seq_len = 14
num_layers = 2
input_size = 5
lstm_dim = 64
dropout = 0.
# Check wheter GPU is available
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Model Instantiation
model = LSTM(d_feat=input_size, hidden_size=lstm_dim, num_layers=num_layers, dropout=dropout)
model.to(device)
model.train()
# setting optimizer
lr = 0.0001
weight_decay = 1.0e-05
optimizer = torch.optim.Adagrad(model.parameters(), lr=lr, weight_decay=weight_decay)
# setting criterion
criterion = nn.MSELoss()
def train_epoch(train_df, model, seq_len=30, batch_size=512):
groups = train_df.groupby(['SecuritiesCode'])
total_loss = 0.
iteration = 0
model.train()
def collect_batch_index(): # index for a stock with seq_len continuous days
batch_index = []
for sc, group in groups:
indices = np.arange(len(group))
for i in range(len(indices))[:: seq_len]:
if len(indices) - i < seq_len:
break
batch_index.append(group.index[i: i + seq_len])
return batch_index
batch_index = collect_batch_index()
indices = np.arange(len(batch_index))
np.random.shuffle(indices)
for i in range(len(indices))[:: batch_size]:
# if len(indices) - i < batch_size:
# break
x_train = []
y_train = []
for index in indices[i: i + batch_size]:
values = train_df.loc[batch_index[index]].values
x_train.append(values[:, 3: 3 + input_size].astype(np.float32))
y_train.append(values[:, -1][-1])
# print(y_train)
feature = torch.from_numpy(np.vstack(x_train).reshape((len(y_train), seq_len, -1))).float().to(device)
label = torch.from_numpy(np.vstack(y_train)).flatten().float().to(device)
# print(feature.size(), label.size())
pred = model(feature)
# print(pred.size(), label.size())
loss = criterion(pred, label)
total_loss += loss.item()
# if list(label.size())[0] < batch_size:
# print('train', pred.size(), label.size(), feature.size())
optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_value_(model.parameters(), 3.0)
optimizer.step()
iteration += 1
return total_loss/iteration
def test_epoch(train_df, model, seq_len=30, batch_size=512):
groups = train_df.groupby(['SecuritiesCode'])
total_loss = 0.
iteration = 0
model.eval()
tmp_df = train_df.copy()
    tmp_df['pred'] = 0  # rows without a prediction default to 0
def collect_batch_index(): # index for a stock with seq_len continuous days
batch_index = []
for sc, group in groups:
indices = np.arange(len(group))
for i in range(len(indices))[:: seq_len]:
if len(indices) - i < seq_len:
break
batch_index.append(group.index[i: i + seq_len])
return batch_index
batch_index = collect_batch_index()
indices = np.arange(len(batch_index))
pre_indices = [index[-1] for index in batch_index]
pred_array = np.array([])
for i in range(len(indices))[:: batch_size]:
# if len(indices) - i < batch_size:
# break
x_train = []
y_train = []
for index in indices[i: i + batch_size]:
values = train_df.loc[batch_index[index]].values
# see the train_df format upstair
x_train.append(values[:, 3: 3 + input_size].astype(np.float32))
y_train.append(values[:, -1][-1])
feature = torch.from_numpy(np.vstack(x_train).reshape((len(y_train), seq_len, -1))).float().to(device)
label = torch.from_numpy(np.vstack(y_train)).flatten().float().to(device)
pred = model(feature)
loss = criterion(pred, label)
# if list(label.size())[0] < batch_size:
# print('test', pred.size(), label.size(), feature.size())
total_loss += loss.item()
# print(pred)
pred_array = np.append(pred_array, pred.detach().cpu().numpy())
iteration += 1
# print(len(pre_indices), len(pred_array))
tmp_df.loc[pre_indices, 'pred'] = pred_array
tmp_df['Rank'] = tmp_df.groupby("Date")['pred'].transform('rank', ascending=False, method="first") - 1
tmp_df['Rank'] = tmp_df['Rank'].astype(int)
sharp = calc_spread_return_sharpe(tmp_df)
return total_loss/iteration, sharp
log_train = [[0], [np.inf]]
log_eval = [[0], [np.inf]]
best_eval_loss = np.inf
best_model_path = 'Unknown'
if True:
model.eval()
train_loss, train_sharp = test_epoch(train_df, model, batch_size=batch_size, seq_len=seq_len)
test_loss, test_sharp = test_epoch(test_df, model, batch_size=batch_size, seq_len=seq_len)
print("with training, random train_loss={}, train_sharp={}, eval_loss={}, eval_sharp={}".format(train_loss, train_sharp, test_loss, test_sharp))
_tqdm = tqdm(range(epochs))
for epoch in _tqdm:
epoch_loss = 0.0
# set iteration counter
iteration = 0
start_time = time.time()
epoch_loss = train_epoch(train_df, model, seq_len=seq_len, batch_size=batch_size)
end_time = time.time()
# print('epoch_loss={}'.format(epoch_loss))
log_train[0].append(epoch)
log_train[1].append(epoch_loss)
# eval
eval_loss, sharp = test_epoch(test_df, model, seq_len=seq_len, batch_size=batch_size)
train_loss, train_sharp = test_epoch(train_df, model, seq_len=seq_len, batch_size=batch_size)
log_eval[0].append(epoch)
log_eval[1].append(eval_loss)
if best_eval_loss > eval_loss:
best_eval_loss = eval_loss
best_model_path = f"{output_dir}/{epoch}.pt"
# print("epoch {}, run_time={}, train loss={}, eval_loss={}".format(epoch, (end_time - start_time), epoch_loss, eval_loss))
print("epoch {}, run_time={}, train loss={}, eval_loss={}, eval_sharp={}".format(epoch, (end_time - start_time), epoch_loss, eval_loss, sharp))
print("\t train_loss={}, train_sharp={}".format(train_loss, train_sharp))
# _tqdm.set_description("epoch {}, run_time={}, train loss={}, eval_loss={}, eval_sharp={}".format(epoch, (end_time - start_time), epoch_loss, eval_loss, sharp))
# save mode
save_path = f"{output_dir}/{epoch}.pt"
param = copy.deepcopy(model.state_dict())
torch.save(param, save_path)
print(best_model_path)
# best_model_path = "output_lstm/17.pt"
model.load_state_dict(torch.load(best_model_path))
"""
Explanation: Training
For each stock, LSTM training is carried out by repeatedly building a dataset and updating the model.
To monitor learning, check that the loss decreases over the epochs.
→ The model is able to learn.
End of explanation
"""
from datetime import datetime
columns = ['Open', 'High', 'Low', 'Close', 'Volume', 'AdjustmentFactor', 'ExpectedDividend', 'SupervisionFlag']
def predict(model, X_df, sequence=30):
pred_df = X_df[['Date', 'SecuritiesCode']]
# Grouping by `groupby` and retrieving one by one
code_group = X_df.groupby('SecuritiesCode')
X_all = np.array([])
for sc, group in code_group:
# Standardize target data
group_std = stdsc.transform(group[columns])
# Calling up past data of the target data
X = dataset_dict[str(sc)][-1*(sequence-1):, :-1]
# concat
group_std_add = np.zeros((group_std.shape[0], group_std.shape[1]+1))
group_std_add[:, :-1] = group_std
dataset_dict[str(sc)] = np.vstack((dataset_dict[str(sc)], group_std_add))
X = np.vstack((X[:, :input_size], group_std[:, :input_size]))
X_all = np.append(X_all, X)
X_all = X_all.reshape(-1, sequence, X.shape[1])
y_pred = np.array([])
for it in range(X_all.shape[0]//512+1):
data = X_all[it*512:(it+1)*512]
data = torch.from_numpy(data.astype(np.float32)).clone()
data = data.to(torch.float32)
data = data.to(device)
print('input size', data.size())
# print(data)
output = model.forward(data)
# print(output)
# output = output.view(1, -1)
output = output.to('cpu').detach().numpy().copy()
y_pred = np.append(y_pred, output)
pred_df['target'] = y_pred
# print(y_pred, y_pred.shape)
pred_df['Rank'] = pred_df["target"].rank(ascending=False, method="first") - 1
pred_df['Rank'] = pred_df['Rank'].astype(int)
pred_df = pred_df.drop('target', axis=1)
return pred_df
"""
Explanation: Prediction
The trained model will be used to make predictions on the submitted data.
DataFrame → Ndarray → tensor and transform the data to make predictions.
End of explanation
"""
import sys
sys.path.append("/mnt/d/dataset/quant/kaggle22/")
import jpx_tokyo_market_prediction
env = jpx_tokyo_market_prediction.make_env()
iter_test = env.iter_test()
count = 0
for (prices, options, financials, trades, secondary_prices, sample_prediction) in iter_test:
prices = prices.fillna(0)
prices['SupervisionFlag'] = prices['SupervisionFlag'].map({True: 1, False: 0})
prices['Date'] = pd.to_datetime(prices['Date'])
pred_df = predict(model, prices)
# print(pred_df)
env.predict(pred_df)
count += 1
pred_df
"""
Explanation: Submission
Prepare the predictions and submit them through the jpx_tokyo_market_prediction API.
End of explanation
"""
|
microsoft/dowhy
|
docs/source/example_notebooks/load_graph_example.ipynb
|
mit
|
import os, sys
import random
sys.path.append(os.path.abspath("../../../"))
import numpy as np
import pandas as pd
import dowhy
from dowhy import CausalModel
from IPython.display import Image, display
"""
Explanation: Different ways to load an input graph
We recommend using the GML graph format to load a graph. You can also use the DOT format, which requires additional dependencies (either pydot or pygraphviz).
DoWhy supports both loading a graph as a string, or as a file (with the extensions 'gml' or 'dot').
Below is an example showing the different ways of loading the same graph.
End of explanation
"""
z=[i for i in range(10)]
random.shuffle(z)
df = pd.DataFrame(data = {'Z': z, 'X': range(0,10), 'Y': range(0,100,10)})
df
"""
Explanation: I. Generating dummy data
We generate some dummy data for three variables: X, Y and Z.
End of explanation
"""
# With GML string
model=CausalModel(
data = df,
treatment='X',
outcome='Y',
graph="""graph[directed 1 node[id "Z" label "Z"]
node[id "X" label "X"]
node[id "Y" label "Y"]
edge[source "Z" target "X"]
edge[source "Z" target "Y"]
edge[source "X" target "Y"]]"""
)
model.view_model()
display(Image(filename="causal_model.png"))
# With GML file
model=CausalModel(
data = df,
treatment='X',
outcome='Y',
graph="../example_graphs/simple_graph_example.gml"
)
model.view_model()
display(Image(filename="causal_model.png"))
"""
Explanation: II. Loading GML or DOT graphs
GML format
End of explanation
"""
# With DOT string
model=CausalModel(
data = df,
treatment='X',
outcome='Y',
graph="digraph {Z -> X;Z -> Y;X -> Y;}"
)
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
# With DOT file
model=CausalModel(
data = df,
treatment='X',
outcome='Y',
graph="../example_graphs/simple_graph_example.dot"
)
model.view_model()
display(Image(filename="causal_model.png"))
"""
Explanation: DOT format
End of explanation
"""
|
mwcraig/reducer
|
reducer/reducer-template.ipynb
|
bsd-3-clause
|
import reducer.gui
import reducer.astro_gui as astro_gui
from reducer.image_browser import ImageBrowser
from ccdproc import ImageFileCollection
from reducer import __version__
print(__version__)
"""
Explanation: Reducer: (Put your name here)
Reviewer: (Put your name here)
jupyter notebook crash course
Click on a code cell (has grey background) then press Shift-Enter (at the same time) to run a code cell. That will add the controls (buttons, etc) you use to do the reduction one-by-one; then use them for reduction.
reducer crash course
Rule 0: Run the code cells in order
The world won't end if you break this rule, but you are more likely to end up with nonsensical results or errors. Incidentally, welcome to python indexing, which starts numbering at zero.
Rule 1: Do not run this notebook in the directory containing your unreduced data
reducer will not overwrite anything but the idea is that you will keep a copy of this notebook with your reduced data.
Rule 2: Keep the cells you need, delete the ones you don't
IPython notebooks are essentially customizable apps. If you don't shoot dark frames, for example, delete the stuff related to darks.
Rule 3: If you find bugs, please report them
You can report bugs, make feature requests or (best of all) submit pull requests from reducer's home on github
Bonus Pro Tip: Feel free to ignore the code in the code cells
Code is there so that people who know python can see what is going on, but if you don't know python you should still be able to use the notebook. Just remember to Shift-Enter on each code cell to run it, then fill in the form(s) that appear in the notebook.
End of explanation
"""
# To use the sample data set:
data_dir = reducer.notebook_dir.get_data_path()
# Or, uncomment line below and modify as needed
# data_dir = 'path/to/your/data'
destination_dir = '.'
"""
Explanation: Enter name of directory that contains your data in the cell below, or...
...leave it unchanged to try out reducer with low-resolution dataset
That low-resolution dataset will expand to about 300MB when uncompressed
End of explanation
"""
images = ImageFileCollection(location=data_dir, keywords='*')
"""
Explanation: Type any comments about this dataset here
Double-click on the cell to start editing it.
Load your data set
End of explanation
"""
fits_browser = ImageBrowser(images, keys=['imagetyp', 'exposure'])
fits_browser.display()
"""
Explanation: Image Summary
Includes browser and image/metadata viewer
This is not, strictly speaking, part of reduction, but is a handy way to take a quick look at your files.
End of explanation
"""
im_a_tree_too = ImageBrowser(images, keys=['filter', 'imagetyp', 'exposure'])
im_a_tree_too.display()
"""
Explanation: You can reconfigure the image browser if you want (or not)
By passing different keys into the tree constructor you can generate a navigable tree based on any keys you want.
End of explanation
"""
bias_reduction = astro_gui.Reduction(description='Reduce bias frames',
toggle_type='button',
allow_bias=False,
allow_dark=False,
allow_flat=False,
input_image_collection=images,
apply_to={'imagetyp': 'bias'},
destination=destination_dir)
bias_reduction.display()
print(bias_reduction)
"""
Explanation: Make a combined bias image
Reduce the bias images
End of explanation
"""
reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')
bias = astro_gui.Combiner(description="Combined Bias Settings",
toggle_type='button',
file_name_base='combined_bias',
image_source=reduced_collection,
apply_to={'imagetyp': 'bias'},
destination=destination_dir)
bias.display()
print(bias)
"""
Explanation: Combine bias images to make combined bias
End of explanation
"""
reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')
dark_reduction = astro_gui.Reduction(description='Reduce dark frames',
toggle_type='button',
allow_bias=True,
master_source=reduced_collection,
allow_dark=False,
allow_flat=False,
input_image_collection=images,
destination=destination_dir,
apply_to={'imagetyp': 'dark'})
dark_reduction.display()
print(dark_reduction)
"""
Explanation: Make a combined dark
Reduce dark images
End of explanation
"""
reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')
dark = astro_gui.Combiner(description="Make Combined Dark(s)",
toggle_type='button',
file_name_base='combined_dark',
group_by='exposure',
image_source=reduced_collection,
apply_to={'imagetyp': 'dark'},
destination=destination_dir)
dark.display()
print(dark)
"""
Explanation: Combine reduced darks
Note the Group by option in the controls that appear after you execute the cell below. reducer will make a master for each value of the FITS keyword listed in Group by. By default this keyword is named exposure for darks, so if you have darks with exposure times of 10 sec, 15 sec and 120 sec you will get three master darks, one for each exposure time.
End of explanation
"""
reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')
flat_reduction = astro_gui.Reduction(description='Reduce flat frames',
toggle_type='button',
allow_bias=True,
master_source=reduced_collection,
allow_dark=True,
allow_flat=False,
input_image_collection=images,
destination=destination_dir,
apply_to={'imagetyp': 'flat'})
flat_reduction.display()
print(flat_reduction)
"""
Explanation: Make combined flats
Reduce flat images
End of explanation
"""
reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')
flat = astro_gui.Combiner(description="Make Combined Flat(s)",
toggle_type='button',
file_name_base='combined_flat',
group_by='filter',
image_source=reduced_collection,
apply_to={'imagetyp': 'flat'},
destination=destination_dir)
flat.display()
print(flat)
"""
Explanation: Build combined flats
Again, note the presence of Group by. If you typically use twilight flats you will almost certainly want to group by filter, not by filter and exposure.
End of explanation
"""
reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')
light_reduction = astro_gui.Reduction(description='Reduce light frames',
toggle_type='button',
allow_cosmic_ray=True,
master_source=reduced_collection,
input_image_collection=images,
destination=destination_dir,
apply_to={'imagetyp': 'light'})
light_reduction.display()
"""
Explanation: Reduce the science images
There is some automatic matching going on here:
If darks are subtracted, a dark of the same exposure time will be used, if available. If not, and dark scaling is enabled, the dark with the closest exposure time will be scaled to match the science image.
If the dark you want to scale appears not to be bias-subtracted an error will be raised.
Flats are matched by filter.
End of explanation
"""
reduced_collection = ImageFileCollection(location=destination_dir, keywords='*')
reduced_browser = ImageBrowser(reduced_collection, keys=['imagetyp', 'filter'])
reduced_browser.display()
"""
Explanation: Wonder what the reduced images look like? Make another image browser...
End of explanation
"""
|
bjodah/aqchem
|
examples/kinetics_cstr.ipynb
|
bsd-2-clause
|
from collections import defaultdict
import numpy as np
from IPython.display import Latex
import matplotlib.pyplot as plt
from pyodesys.symbolic import SymbolicSys
from chempy import Substance, ReactionSystem
from chempy.kinetics.ode import get_odesys
from chempy.units import SI_base_registry, default_units as u
from chempy.util.graph import rsys2graph
%matplotlib inline
rsys = ReactionSystem.from_string("A -> B; 'k'", substance_factory=lambda k: Substance(k))
rsys
odesys, extra = get_odesys(rsys, include_params=False)
odesys.names, odesys.param_names
"""
Explanation: Continuously stirred tank reactor (CSTR)
This notebook shows how to solve chemical kinetics problems for a continuously stirred tank reactor using ChemPy.
End of explanation
"""
t, c1, c2, IA, IB, f, k, fc_A, fc_B = map(odesys.be.Symbol, 't c1 c2 I_A I_B f k phi_A phi_B'.split())
newp = f, fc_A, fc_B
c_feed = {'A': fc_A, 'B': fc_B}
cstr = SymbolicSys(
[
(dep, expr - f*dep + f*c_feed[name]) for name, dep, expr
in zip(odesys.names, odesys.dep, odesys.exprs)
],
params=list(odesys.params) + list(newp),
names=odesys.names,
param_names=list(odesys.param_names) + [p.name for p in newp],
dep_by_name=True,
par_by_name=True,
)
Latex('$' + r'\\ '.join(map(lambda x: ':'.join(x), zip(map(lambda x: r'\frac{d%s}{dt}' % x, cstr.names),
map(cstr.be.latex, cstr.exprs)))) + '$')
cstr.param_names
init_c, pars = {'A': .15, 'B': .1}, {'k': 0.8, 'f': 0.3, 'phi_A': .7, 'phi_B': .1}
res = cstr.integrate(10, init_c, pars, integrator='cvode')
res.plot()
"""
Explanation: We can change the expressions of the ODE system manually to account for source and sink terms from the flux:
End of explanation
"""
k = cstr.params[0]
e = cstr.be.exp
exprs = [
fc_A*f/(f + k) + c1 * e(-t*(f + k)),
(fc_A*k + fc_B*(f + k))/(f + k) - c1*e(-f*t)*(e(-t*k) - 1) + c2*e(-f*t)
]
cstr.be.init_printing()
exprs
exprs0 = [expr.subs(t, 0) for expr in exprs]
exprs0
sol = cstr.be.solve([expr - c0 for expr, c0 in zip(exprs0, (IA, IB))], (c1, c2))
sol
exprs2 = [expr.subs(sol) for expr in exprs]
exprs2
IA
import sympy as sp
cses, expr_cse = sp.cse([expr.subs({fc_A: sp.Symbol('fr'), fc_B: sp.Symbol('fp'), f: sp.Symbol('fv'),
IA: sp.Symbol('r'), IB: sp.Symbol('p')}) for expr in exprs2])
s = '\n'.join(['%s = %s' % (lhs, rhs) for lhs, rhs in cses] + [str(tuple(expr_cse))])
print(s.replace('exp', 'be.exp').replace('\n(', '\nreturn ('))
exprs2_0 = [expr.subs(t, 0).simplify() for expr in exprs2]
exprs2_0
_cb = cstr.be.lambdify([t, IA, IB, k, f, fc_A, fc_B], exprs2)
def analytic(x, c0, params):
return _cb(x, c0['A'], c0['B'], params['k'], params['f'], params['phi_A'], params['phi_B'])
def get_ref(result, parameters=None):
drctn = -1 if result.odesys.names[0] == 'B' else 1
return np.array(analytic(
result.xout,
{k: result.named_dep(k)[0] for k in result.odesys.names},
parameters or {k: result.named_param(k) for k in result.odesys.param_names})).T[:, ::drctn]
yref = get_ref(res)
yref.shape
"""
Explanation: We can derive an analytic solution to the ODE system:
End of explanation
"""
fig, axes = plt.subplots(1, 2, figsize=(14, 4))
res.plot(ax=axes[0])
res.plot(ax=axes[0], y=yref)
res.plot(ax=axes[1], y=res.yout - yref)
"""
Explanation: Plotting the error (by comparison to the analytic solution):
End of explanation
"""
cstr2, extra2 = get_odesys(rsys, include_params=False, cstr=True)
cstr2.exprs
"""
Explanation: Automatically generating CSTR expressions using chempy
ChemPy has support for generating the CSTR expressions directly in ReactionSystem.rates & get_odesys. This simplifies the code the user needs to write considerably:
End of explanation
"""
cstr2.param_names
renamed_pars = {'k': pars['k'], 'fc_A': pars['phi_A'], 'fc_B': pars['phi_B'], 'feedratio': pars['f']}
res2 = cstr2.integrate(10, init_c, renamed_pars, integrator='cvode')
ref2 = get_ref(res2, pars)
res2.plot(y=res2.yout - ref2)
"""
Explanation: Note how we only needed to pass cstr=True to get_odesys to get the right expressions.
End of explanation
"""
from chempy.kinetics.integrated import unary_irrev_cstr
help(unary_irrev_cstr)
"""
Explanation: The analytic solution of a unary reaction in a CSTR is also available in ChemPy:
End of explanation
"""
fr, fc = extra2['cstr_fr_fc']
def get_ref2(result):
drctn = -1 if result.odesys.names[0] == 'B' else 1
return np.array(unary_irrev_cstr(
result.xout,
result.named_param('k'),
result.named_dep('A')[0],
result.named_dep('B')[0],
result.named_param(fc['A']),
result.named_param(fc['B']),
result.named_param(fr))).T[:, ::drctn]
res2.plot(y=res2.yout - get_ref2(res2))
assert np.allclose(res2.yout, get_ref2(res2))
"""
Explanation: The symbols of the feedratio and feed concentrations are available in the second output of get_odesys (the extra dictionary):
End of explanation
"""
|
mdda/pycon.sg-2015_deep-learning
|
ipynb/blocks-introduction-mnist.ipynb
|
mit
|
from theano import tensor
x = tensor.matrix('features')
"""
Explanation: Introduction tutorial
In this tutorial we will perform handwriting recognition by training a
multilayer perceptron (MLP)
on the MNIST handwritten digit database.
The Task
MNIST is a dataset which consists of 70,000 handwritten digits. Each
digit is a grayscale image of 28 by 28 pixels. Our task is to classify
each of the images into one of the 10 categories representing the
numbers from 0 to 9.
The Model
We will train a simple MLP with a single hidden layer that uses the
rectifier
activation function. Our output layer will consist of a
softmax function with
10 units; one for each class. Mathematically speaking, our model is
parametrized by $\mathbf{\theta}$, defined as the weight matrices
$\mathbf{W}^{(1)}$ and $\mathbf{W}^{(2)}$, and bias vectors
$\mathbf{b}^{(1)}$ and $\mathbf{b}^{(2)}$. The rectifier
activation function is defined as
\begin{equation}
\mathrm{ReLU}(\mathbf{x})_i = \max(0, \mathbf{x}_i)
\end{equation}
and our softmax output function is defined as
\begin{equation}
\mathrm{softmax}(\mathbf{x})_i = \frac{e^{\mathbf{x}_i}}{\sum_{j=1}^n e^{\mathbf{x}_j}}
\end{equation}
Hence, our complete model is
\begin{equation}
f(\mathbf{x}; \mathbf{\theta}) = \mathrm{softmax}(\mathbf{W}^{(2)}\mathrm{ReLU}(\mathbf{W}^{(1)}\mathbf{x} + \mathbf{b}^{(1)}) + \mathbf{b}^{(2)})
\end{equation}
Since the output of a softmax sums to 1, we can interpret it as a
categorical probability distribution: $f(\mathbf{x})_c = \hat p(y = c \mid\mathbf{x})$, where $\mathbf{x}$ is the 784-dimensional (28 × 28)
input and $c \in \{0, \ldots, 9\}$ one of the 10 classes. We can train
the parameters of our model by minimizing the negative log-likelihood
i.e. the cross-entropy between our model's output and the target
distribution. This means we will minimize the sum of
\begin{equation}
l(\mathbf{f}(\mathbf{x}), y) = -\sum_{c=0}^9 \mathbf{1}_{(y=c)} \log f(\mathbf{x})_c = -\log f(\mathbf{x})_y
\end{equation}
(where $\mathbf{1}$ is the indicator function) over all examples. We
use stochastic gradient
descent
(SGD) on mini-batches for this.
Building the model
Blocks uses "bricks" to build models. Bricks are parametrized Theano
operations. You can read more about it in the
building with bricks tutorial.
Constructing the model with Blocks is very simple. We start by defining
the input variable using Theano.
End of explanation
"""
from blocks.bricks import Linear, Rectifier, Softmax
input_to_hidden = Linear(name='input_to_hidden', input_dim=784,output_dim=100)
h = Rectifier().apply(input_to_hidden.apply(x))
hidden_to_output = Linear(name='hidden_to_output', input_dim=100, output_dim=10)
y_hat = Softmax().apply(hidden_to_output.apply(h))
"""
Explanation: Note that we picked the name 'features' for our input. This is
important, because the name needs to match the name of the data source
we want to train on. MNIST defines two data sources: 'features' and
'targets'.
For the sake of this tutorial, we will go through building an MLP the
long way. For a much quicker way, skip right to the end of the next
section. We begin with applying the linear transformations and
activations.
We start by initializing bricks with certain parameters e.g.
input_dim. After initialization we can apply our bricks on Theano
variables to build the model we want. We'll talk more about bricks in
the next tutorial, bricks_overview.
End of explanation
"""
y = tensor.lmatrix('targets')
from blocks.bricks.cost import CategoricalCrossEntropy
cost = CategoricalCrossEntropy().apply(y.flatten(), y_hat)
"""
Explanation: Loss function and regularization
Now that we have built our model, let's define the cost to minimize. For
this, we will need the Theano variable representing the target labels.
End of explanation
"""
from blocks.bricks import WEIGHT
from blocks.graph import ComputationGraph
from blocks.filter import VariableFilter
cg = ComputationGraph(cost)
W1, W2 = VariableFilter(roles=[WEIGHT])(cg.variables)
cost = cost + 0.005 * (W1 ** 2).sum() + 0.005 * (W2 ** 2).sum()
cost.name = 'cost_with_regularization'
"""
Explanation: To reduce the risk of overfitting, we can penalize excessive values of
the parameters by adding an $L_2$-regularization term (also known as
weight decay) to the objective function:
\begin{equation}
l(\mathbf{f}(\mathbf{x}), y) = -\log f(\mathbf{x})_y + \lambda_1\|\mathbf{W}^{(1)}\|^2 + \lambda_2\|\mathbf{W}^{(2)}\|^2
\end{equation}
To get the weights from our model, we will use Blocks' annotation
features (read more about them in the cg tutorial).
End of explanation
"""
from blocks.bricks import MLP
mlp = MLP(
activations=[Rectifier(), Softmax()],
dims=[784, 100, 10]
).apply(x)
"""
Explanation: note
Note that we explicitly gave our variable a name. We do this so that
when we monitor the performance of our model, the progress monitor
will know what name to report in the logs.
Here we set $\lambda_1 = \lambda_2 = 0.005$. And that's it! We now
have the final objective function we want to optimize.
But creating a simple MLP this way is rather cumbersome. In practice, we
would have used the MLP class instead.
End of explanation
"""
from blocks.initialization import IsotropicGaussian,Constant
input_to_hidden.weights_init = hidden_to_output.weights_init = IsotropicGaussian(0.01)
input_to_hidden.biases_init = hidden_to_output.biases_init = Constant(0)
input_to_hidden.initialize()
hidden_to_output.initialize()
"""
Explanation: Initializing the parameters
When we constructed the Linear bricks to build our model, they
automatically allocated Theano shared variables to store their
parameters in. All of these parameters were initially set to NaN.
Before we start training our network, we will want to initialize these
parameters by sampling them from a particular probability distribution.
Bricks can do this for you.
End of explanation
"""
W1.get_value()
# array([[ 0.01624345, -0.00611756, -0.00528172, ..., 0.00043597, ...
"""
Explanation: We have now initialized our weight matrices with entries drawn from a
normal distribution with a standard deviation of 0.01.
End of explanation
"""
from fuel.datasets import MNIST
mnist = MNIST("train")
"""
Explanation: Training your model
Besides helping you build models, Blocks also provides the main other
features needed to train a model. It has a set of training algorithms
(like SGD), an interface to datasets, and a training loop that allows
you to monitor and control the training process.
We want to train our model on the training set of MNIST. We load the
data using the Fuel framework.
Have a look at this
tutorial
to get started.
After having configured Fuel, you can load the dataset.
End of explanation
"""
from fuel.streams import DataStream
from fuel.schemes import SequentialScheme
from fuel.transformers import Flatten
data_stream = Flatten(DataStream.default_stream( mnist,
iteration_scheme=SequentialScheme(mnist.num_examples,batch_size=256)
))
"""
Explanation: Datasets only provide an interface to the data. For actual training, we
will need to iterate over the data in minibatches. This is done by
initiating a data stream which makes use of a particular iteration
scheme. We will use an iteration scheme that iterates over our MNIST
examples sequentially in batches of size 256.
End of explanation
"""
from blocks.algorithms import GradientDescent, Scale
algorithm = GradientDescent(
cost=cost,
params=cg.parameters,
step_rule=Scale(learning_rate=0.1)
)
"""
Explanation: The training algorithm we will use is straightforward SGD with a fixed
learning rate.
End of explanation
"""
mnist_test = MNIST("test")
data_stream_test = Flatten(DataStream.default_stream(
mnist_test,
iteration_scheme=SequentialScheme(
mnist_test.num_examples,
batch_size=1024)
)
)
"""
Explanation: During training we will want to monitor the performance of our model on
a separate set of examples. Let's create a new data stream for that.
End of explanation
"""
from blocks.extensions.monitoring import DataStreamMonitoring
monitor = DataStreamMonitoring(
variables=[cost],
data_stream=data_stream_test,
prefix="test"
)
"""
Explanation: In order to monitor our performance on this data stream during training,
we need to use one of Blocks' extensions, namely the
DataStreamMonitoring extension.
End of explanation
"""
from blocks.main_loop import MainLoop
from blocks.extensions import FinishAfter, Printing
main_loop = MainLoop(
data_stream=data_stream,
algorithm=algorithm,
extensions=[monitor, FinishAfter(after_n_epochs=1), Printing()])
main_loop.run()
"""
Explanation: We can now use the MainLoop class to combine all the different bits and
pieces. We use two more extensions to make our training stop after a
single epoch and to make sure that our progress is printed.
End of explanation
"""
|
zoofIO/flexx-notebooks
|
flexx_tutorial_event.ipynb
|
bsd-3-clause
|
%gui asyncio
from flexx import event
"""
Explanation: Tutorial for flexx.event - properties and events
End of explanation
"""
class MyObject(event.Component):
@event.reaction('!foo')
def on_foo(self, *events):
print('received the foo event %i times' % len(events))
ob = MyObject()
for i in range(3):
ob.emit('foo', {})
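# A small extra sketch: the dict passed to emit() ends up on the event object,
# which can be read either by item or by attribute (ev['button'] / ev.button).
class ClickObject(event.Component):

    @event.reaction('!click')
    def on_click(self, *events):
        for ev in events:
            print('button', ev.button, '==', ev['button'])

clicker = ClickObject()
clicker.emit('click', {'button': 3})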
"""
Explanation: Events
In flexx.event, events are represented with dictionary objects that
provide information about the event (such as what button was pressed,
or the new value of a property). A custom Dict class is used that inherits from dict and allows attribute access,
e.g. ev.button as an alternative to ev['button'].
Reactions
Events originate from Component objects. When an event is emitted, it can be reacted upon.
End of explanation
"""
ob.on_foo()
"""
Explanation: Note how the reaction is connected using a "connection string", which (in this case) indicates that we connect to the "foo" event of the object. The connection string allows some powerful mechanics, as we will see later in this tutorial. Here, we prefixed the connection string with "!" to suppress a warning that Flexx would otherwise give, because it does not know about the "foo" event.
Also note how the reaction accepts multiple events at once. This means that in situations where we only care about something being changed, we can skip "duplicate" events. In situations where each individual event needs processing, use for ev in events: ....
Reactions can also be used as normal methods:
End of explanation
"""
class MyObject(event.Component):
@event.reaction('!foo', '!bar')
def on_foo_or_bar(self, *events):
for ev in events:
print('received the %s event' % ev.type)
ob = MyObject()
ob.emit('foo', {}); ob.emit('foo', {}); ob.emit('bar', {})
"""
Explanation: A reaction can also connect to multiple events:
End of explanation
"""
class MyObject(event.Component):
foo = event.IntProp(2, settable=True)
@event.reaction('foo')
def on_foo(self, *events):
print('foo changed from', events[0].old_value, 'to', events[-1].new_value)
ob = MyObject()
ob.set_foo(7)
print(ob.foo)
"""
Explanation: Properties
Properties represent the state of a component.
End of explanation
"""
ob = MyObject(foo=12)
"""
Explanation: Properties can also be set during initialization.
End of explanation
"""
class MyObject(event.Component):
foo = event.IntProp(2, settable=True)
@event.action
def increase_foo(self):
self._mutate_foo(self.foo + 1)
@event.reaction('foo')
def on_foo(self, *events):
print('foo changed from', events[0].old_value, 'to', events[-1].new_value)
ob = MyObject()
ob.increase_foo()
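# note: as explained below, the action is applied in a later event-loop iteration,
# so the reaction's printed change appears asynchronously rather than right away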
"""
Explanation: Properties are read-only. This may seem like a limitation at first, but it helps make apps more predictable, especially as they become larger. Properties can be mutated using actions. In the above example, a setter action was created automatically because we specified settable=True in the definition of the property.
Actions
Actions are special functions that are invoked asynchronously: when they are called, the action itself is applied in a future iteration of the event loop. (The %gui asyncio at the top of the notebook makes sure that Flexx's event loop is running.)
End of explanation
"""
class MyObject(event.Component):
@event.emitter
def mouse_down(self, js_event):
''' Event emitted when the mouse is pressed down. '''
return dict(button=js_event['button'])
@event.reaction('mouse_down')
def on_bar(self, *events):
for ev in events:
print('detected mouse_down, button', ev.button)
ob = MyObject()
ob.mouse_down({'button': 1})
ob.mouse_down({'button': 2})
"""
Explanation: The action above mutates the foo property. Properties can only be mutated by actions. This ensures that the state of a component (and of the whole app) is consistent during the handling of reactions.
Emitters
Emitters make it easy to generate events from specific input (e.g. an event from another kind of event system) and act as a placeholder for the docs of public events.
End of explanation
"""
class MyObject(event.Component):
foo = event.IntProp(2, settable=True)
@event.reaction
def on_foo(self):
print('foo changed is now', self.foo)
ob = MyObject()
ob.set_foo(99)
"""
Explanation: Implicit reactions
Implicit reactions make it easy to write concise code that needs to keep track of state. To create an implicit reaction, simply provide no connection strings. The reaction will now automatically track all properties that the reaction is accessing. This even works dynamically, e.g. when accessing a property on each element in a list property, the reaction will automatically "reconnect" when the list changes.
End of explanation
"""
class MyObject(event.Component):
@event.reaction('!foo:bb')
def foo_handler1(self, *events):
print('foo B')
@event.reaction('!foo:cc')
def foo_handler2(self, *events):
print('foo C')
@event.reaction('!foo:aa')
def foo_handler3(self, *events):
print('foo A')
ob = MyObject()
ob.emit('foo', {})
ob.disconnect('foo:bb')
ob.emit('foo', {})
"""
Explanation: Labels
Labels are a feature that makes it possible to influence the order by
which event handlers are called, and provide a means to disconnect
specific (groups of) handlers. The label is part of the connection
string: 'foo.bar:label'.
End of explanation
"""
class Root(event.Component):
children = event.TupleProp([], settable=True)
@event.reaction('children', 'children*.count')
def update_total_count(self, *events):
total_count = sum([child.count for child in self.children])
print('total count is', total_count)
class Sub(event.Component):
count = event.IntProp(0, settable=True)
root = Root()
sub1, sub2, sub3 = Sub(count=1), Sub(count=2), Sub(count=3)
root.set_children([sub1, sub2, sub3])
"""
Explanation: Dynamism
Dynamism is a concept that allows one to connect to events for which the source can change. It essentially allows events to be connected automatically, which greatly reduces boilerplate code. It makes it easy to connect different parts of an application in a robust way.
End of explanation
"""
sub1.set_count(100)
"""
Explanation: Updating the count property on any of its children will invoke the callback:
End of explanation
"""
root.set_children([sub2, sub3])
"""
Explanation: We also connected to the children property, so that the handler is also invoked when the children are added/removed:
End of explanation
"""
sub4 = Sub()
root.set_children([sub3, sub4])
sub4.set_count(10)
sub1.set_count(1000) # no update, sub1 is not part of root's children
"""
Explanation: Naturally, when the count on new children changes, the reaction is invoked as well, while objects that are no longer children stop triggering it:
End of explanation
"""
|
dwhswenson/openpathsampling
|
examples/tests/test_snapshot.ipynb
|
mit
|
from __future__ import print_function
import numpy as np
import openpathsampling as paths
import openpathsampling.engines.features as features
"""
Explanation: Some testing and analysis of the new Snapshot implementation
End of explanation
"""
from IPython.display import Markdown
def code_to_md(snapshot_class):
md = '```py\n'
for f, s in snapshot_class.__features__.debug.items():
if s is not None:
md += s
else:
md += 'def ' + f + '(...):\n # user defined\n pass'
md += '\n\n'
md += '```'
return md
"""
Explanation: Function to show the generated source code
End of explanation
"""
EmptySnap = paths.engines.snapshot.SnapshotFactory('no', [], 'Empty', use_lazy_reversed=False)
"""
Explanation: Check generated source code
Generate simple Snapshot without any features using factory
End of explanation
"""
@features.base.attach_features([
features.velocities,
features.coordinates,
features.box_vectors,
features.topology
])
class A(paths.BaseSnapshot):
def copy(self):
return 'copy'
"""
Explanation: Generate Snapshot with overridden .copy method.
End of explanation
"""
#! lazy
# lazy because of some issue with Py3k comparing strings
try:
@features.base.attach_features([
])
class B(A):
pass
except RuntimeWarning as e:
print(e)
else:
raise RuntimeError('Should have raised a RUNTIME warning')
a = A()
assert(a.copy() == 'copy')
# NBVAL_IGNORE_OUTPUT
Markdown(code_to_md(A))
# NBVAL_IGNORE_OUTPUT
Markdown(code_to_md(EmptySnap))
SuperSnap = paths.engines.snapshot.SnapshotFactory(
'my', [
paths.engines.features.coordinates,
paths.engines.features.box_vectors,
paths.engines.features.velocities
], 'No desc', use_lazy_reversed=False)
# NBVAL_IGNORE_OUTPUT
Markdown(code_to_md(SuperSnap))
MegaSnap = paths.engines.snapshot.SnapshotFactory(
'mega', [
paths.engines.features.statics,
paths.engines.features.kinetics,
paths.engines.features.engine
], 'Long desc', use_lazy_reversed=False)
# NBVAL_IGNORE_OUTPUT
Markdown(code_to_md(MegaSnap))
"""
Explanation: Check that subclassing with overridden copy needs more overriding.
End of explanation
"""
@features.base.attach_features([
])
class HyperSnap(MegaSnap):
pass
"""
Explanation: Test subclassing
End of explanation
"""
@features.base.attach_features([
paths.engines.features.statics,
])
class HyperSnap(MegaSnap):
pass
"""
Explanation: Test subclassing with redundant features (should work / be ignored)
End of explanation
"""
try:
@features.base.attach_features([
paths.engines.features.statics,
paths.engines.features.coordinates
])
class HyperSnap(MegaSnap):
pass
except RuntimeWarning as e:
print(e)
else:
raise RuntimeError('Should have raised a RUNTIME warning')
# NBVAL_IGNORE_OUTPUT
Markdown(code_to_md(paths.engines.openmm.MDSnapshot))
"""
Explanation: Test subclassing with conflicting features (should not work)
End of explanation
"""
|
gwu-libraries/notebooks
|
20181127-top-hashtags-json.ipynb
|
mit
|
!cat 50tweets.json | jq -cr '[.entities.hashtags][0][].text'
!cat tweets4hashtags.json | jq -cr '[.entities.hashtags][0][].text' > allhashtags.txt
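# A rough pure-Python sketch of the same extraction plus the counting done later
# in this notebook, assuming tweets4hashtags.json holds one tweet JSON object per line:
import json
from collections import Counter

def top_hashtags(path, n=50):
    counts = Counter()
    with open(path) as f:
        for line in f:
            tweet = json.loads(line)
            for tag in tweet.get('entities', {}).get('hashtags', []):
                counts[tag['text']] += 1
    return counts.most_common(n)

# top_hashtags('tweets4hashtags.json')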
"""
Explanation: Computing the top hashtags (JSON)
So you have tweets in a JSON file, and you'd like to get a list of the hashtags, from the most frequently occurring hashtags on down.
There are many, many different ways to accomplish this. Since we're working with the tweets in JSON format, this solution will use jq, as well as a few bash shell / command line tools: cat, sort, uniq, and wc. If you haven't used jq yet, our Working with Twitter Using jq notebook is a good place to start.
Where are the hashtags in tweet JSON?
When we look at a tweet, we see that it has a key called entities, and that the value of entities contains a key called hashtags. The value of hashtags is a list (note the square brackets); each item in the list contains the text of a single hashtag, and the indices of the characters in the tweet text where the hashtag begins and ends.
```
{
created_at: "Tue Oct 30 09:15:45 +0000 2018",
id: 1057199367411679200,
id_str: "1057199367411679234",
text: "Lesson from Indra's elephant https://t.co/h5K3y5g4Ju #India #Hinduism #Buddhism #History #Culture https://t.co/qFyipqzPnE",
...
entities: {
hashtags: [
{
text: "India",
indices: [
54,
60
]
},
{
text: "Hinduism",
indices: [
61,
70
]
},
{
text: "Buddhism",
indices: [
71,
80
]
},
{
text: "History",
indices: [
81,
89
]
},
{
text: "Culture",
indices: [
90,
98
]
}
],
...
```
When we use jq, we'll need to construct a filter that pulls out the hashtag text values.
End of explanation
"""
!wc -l allhashtags.txt
"""
Explanation: Let's see how many hashtags we extracted:
End of explanation
"""
!cat allhashtags.txt | sort | uniq -c | sort -nr > rankedhashtags.txt
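# For example (a tiny sketch): printf 'a\nb\na\n' | sort | uniq -c | sort -nr
# prints "2 a" above "1 b"; uniq -c right-aligns the counts with leading spaces.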
"""
Explanation: What we'd like to do now is to count up how many of each hashtag we have. We'll use a combination of bash's sort and uniq commands for that. We'll also use the -c option for uniq, which prefaces each line with the count of lines it collapsed together in the process of uniqing a group of identical lines. sort's -nr options will allow us to sort by just the count on each line.
End of explanation
"""
!head -n 50 rankedhashtags.txt
"""
Explanation: Let's take a look at what we have now.
End of explanation
"""
!wc -l rankedhashtags.txt
"""
Explanation: Personally, I have no idea what most of these hashtags are about, but this is apparently what people were tweeting about on October 31, 2018.
And as for how many unique hashtags are in this set:
End of explanation
"""
|
MTG/essentia
|
src/examples/python/musicbricks-tutorials/1-stft_analsynth.ipynb
|
agpl-3.0
|
# import essentia in streaming mode
import essentia
import essentia.streaming as es
"""
Explanation: STFT Analysis/Synthesis - MusicBricks Tutorial
Introduction
This tutorial will guide you through some tools for performing spectral analysis and synthesis using the Essentia library (http://www.essentia.upf.edu). STFT stands for Short-Time Fourier Transform, and it processes an input audio signal as a sequence of spectral frames. Spectral frames are complex-valued arrays that contain the frequency representation of the windowed input signal.
This algorithm shows how to analyze the input signal and resynthesize it again, allowing you to apply new transformations directly in the spectral domain.
You should first install the Essentia library with Python bindings. Installation instructions are detailed here: http://essentia.upf.edu/documentation/installing.html .
Processing steps
End of explanation
"""
# import matplotlib for plotting
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: After importing Essentia library, let's import other numerical and plotting tools
End of explanation
"""
# algorithm parameters
framesize = 1024
hopsize = 256
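# with a 1024-sample frame and a 256-sample hop, consecutive frames overlap by 75%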
"""
Explanation: Define the parameters of the STFT workflow
End of explanation
"""
inputFilename = 'singing-female.wav'
outputFilename = 'singing-female-stft.wav'
# create an audio loader and import audio file
out = np.array(0)
loader = es.MonoLoader(filename = inputFilename, sampleRate = 44100)
pool = essentia.Pool()
"""
Explanation: Specify input and output audio filenames
End of explanation
"""
# algorithm instantiation
fcut = es.FrameCutter(frameSize = framesize, hopSize = hopsize, startFromZero = False)
w = es.Windowing(type = "hann")
fft = es.FFT(size = framesize)
ifft = es.IFFT(size = framesize)
overl = es.OverlapAdd(frameSize = framesize, hopSize = hopsize, gain = 1./framesize)
awrite = es.MonoWriter(filename = outputFilename, sampleRate = 44100)
"""
Explanation: Define algorithm chain for frame-by-frame process:
FrameCutter -> Windowing -> FFT -> IFFT -> OverlapAdd -> AudioWriter
End of explanation
"""
loader.audio >> fcut.signal
fcut.frame >> w.frame
w.frame >> fft.frame
fft.fft >> ifft.fft
ifft.frame >> overl.frame
overl.signal >> awrite.audio
overl.signal >> (pool, 'audio')
"""
Explanation: Now we set the algorithm network and store the processed audio samples in the output file
End of explanation
"""
essentia.run(loader)
"""
Explanation: Finally we run the process that will store an output file in a WAV file
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/ipsl/cmip6/models/sandbox-1/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-1', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momemtum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momemtum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momemtum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embedded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
jvcarr/portfolio
|
projects/West-Nile-Final.ipynb
|
mit
|
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.cross_validation import cross_val_score, StratifiedKFold, train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report, roc_curve, auc
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
import xgboost as xgb
from sklearn.grid_search import GridSearchCV
# cleaning up the notebook
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('/Users/jcarr/downloads/train.csv')
"""
Explanation: Predicting West Nile Virus in Chicago
This project was to predict the probability that a mosquito trap in Chicago will have captured a mosquito with West Nile Virus (WNV). This is a closed competition on Kaggle, but still accepts submissions and will tell you where your submission would have ranked and what your area under the curve (AUC) score would be on the test data set.
The training data contains information on whether WNV was present in a trap when it was checked in 2007, 2009, 2011, and 2013, along with the date it was checked, the species of mosquito found, and the number of mosquitos found. The model was tested against data from 2008, 2010, 2012, and 2014, which includes the same information except for whether West Nile Virus was present when the trap was checked and the number of mosquitos in the trap.
First, we need to read in our data and import all necessary libraries.
End of explanation
"""
df['Date'] = pd.to_datetime(df['Date'])
df['month'] = df.Date.apply(lambda x: x.month)
df['year'] = df.Date.apply(lambda x: x.year)
df['WkNb'] = df.Date.apply(lambda x: float(x.strftime("%U")))
df['Trap'] = df.Trap.str[:4]
## Create column w just '1' in each column to sum and weight traps with more mosquitos
df['weight'] = 1
## Sum of traps having WNV by month, put in new DFs
df_2 = df.groupby(['Date','Trap']).weight.sum().reset_index()
df_target = df.groupby(['Date','Trap']).WnvPresent.max().reset_index()
## extract month and year from date format
df_2['Date'] = pd.to_datetime(df_2['Date'])
df_2['month'] = df_2.Date.apply(lambda x: x.month)
df_2['year'] = df_2.Date.apply(lambda x: x.year)
df_2['WkNb'] = df_2.Date.apply(lambda x: float(x.strftime("%U")))
"""
Explanation: The data given requires several transformations. The date field needed to be converted from a string to a datetime, and then the month, year, and week number of the year extracted from it.
The main transformation was to count the number of records for each trap checked on a given day. The competition did not provide the number of mosquitos found in a trap in the test data, although it was included in the training data. Reviewing the training data showed that each row represents a group of up to 50 mosquitos: if a trap was checked and had 150 mosquitos in it, the data contains 3 rows, with 3 separate groups of 50 mosquitos evaluated for the presence of WNV. Counting the number of records for a trap on a given day therefore serves as a suitable proxy for the number of mosquitos found in that trap, even though the actual count is not available in the test set.
End of explanation
"""
## get weight of traps by month... num of records w wnv present over total records for trap and month
df_test = df.groupby(['Date','Trap','Species','WnvPresent']).weight.sum().reset_index()
## Same conversions for date
df_test['Date'] = pd.to_datetime(df_test['Date'])
df_test['month'] = df_test.Date.apply(lambda x: x.month)
df_test['year'] = df_test.Date.apply(lambda x: x.year)
df_test_2 = df_test.groupby(['Trap','month','WnvPresent']).weight.sum().reset_index()
df_test_2_full = df_test_2.groupby(['Trap','month']).weight.sum().reset_index()
df_test_2_y = df_test_2[df_test_2.WnvPresent == 1].groupby(['Trap','month']).weight.sum().reset_index()
df_test_2_y.rename(columns={'weight':'WNV'}, inplace = True)
df_ratio = pd.merge(df_test_2_full, df_test_2_y, how = 'left', on = ['Trap','month'])
df_ratio.fillna(0, inplace = True)
df_ratio['WNV_ratio'] = df_ratio.WNV / df_ratio.weight
df_ratio.head(15)
"""
Explanation: After the transformations above, my partner and I decided to assign weights to traps based on the prevalence of WNV in a given trap in the years we knew WNV was there. (Note: this was a bit of a hack given that this was a Kaggle competition; it likely would not have been as good an option if the competition were not set up this way, but it proved effective for our purposes.) We assigned each trap a weight for each month equal to the number of rows with WNV present divided by the total number of rows for that trap and month over the 4 years of data. The first few rows of this weighted data are output by the cell below.
End of explanation
"""
## Automatically set specific probabilities to zero
null_traps = df_test_2_y.groupby(['Trap']).WNV.sum().reset_index()
null_traps.rename(columns={'WNV':'WnvEver'}, inplace = True)
df_ratio = pd.merge(df_ratio, null_traps, how = 'left', on = ['Trap'])
## Adjust weight - max ratio is 0.5, so adding 0.5 to make at least some probabilities = 1
#df_ratio_2['WNV_ratio'] = df_ratio_2.WNV_ratio + 0.5
df_ratio.loc[df_ratio.WnvEver == 0, 'WNV_ratio'] = 0.0
df_ratio.loc[df_ratio.month == 5, 'WNV_ratio'] = 0.0
df_ratio.loc[df_ratio.month == 6, 'WNV_ratio'] = 0.0
df_ratio.loc[df_ratio.month == 10, 'WNV_ratio'] = 0.0
## Encode traps, since they are categorical values
le = LabelEncoder()
traps = le.fit_transform(df_ratio.Trap)
traps = pd.DataFrame(data = traps, columns = ['Trap_Encode'])
df_ratio_2 = pd.concat([df_ratio, traps], axis = 1)
## Joining predicted probabilities to original dataframe w West Nile predictions
prob_pred = pd.merge(df, df_ratio_2, how = 'left', on = ['Trap','month'])
### Transforming Kaggle submission file below
test = pd.read_csv('/Users/jcarr/downloads/test.csv')
traps = le.fit_transform(test.Trap)
traps = pd.DataFrame(traps, columns = ['Trap_Encode'])
test['Date'] = pd.to_datetime(test['Date'])
test['month'] = test.Date.apply(lambda x: x.month)
test['year'] = test.Date.apply(lambda x: x.year)
test['WkNb'] = test.Date.apply(lambda x: float(x.strftime("%U")))
test['Trap'] = test.Trap.str[:4]
test = pd.concat([test, traps], axis = 1)
test = pd.merge(test, df_ratio, how = 'left', on = ['Trap','month'])
test.head()
test.loc[test.WnvEver == 0, 'WNV_ratio'] = 0.0
test.loc[test.Species.str.contains('SALINARIUS'), 'WNV_ratio'] = 0.0
test.loc[test.Species.str.contains('TERRITANS'), 'WNV_ratio'] = 0.0
test.loc[test.Species.str.contains('TARSALIS'), 'WNV_ratio'] = 0.0
test.loc[test.Species.str.contains('ERRATICUS'), 'WNV_ratio'] = 0.0
test.loc[test.Species.str.contains('UNSPECIFIED'), 'WNV_ratio'] = 0.0
test.loc[test.month == 5, 'WNV_ratio'] = 0.0
test.loc[test.month == 6, 'WNV_ratio'] = 0.0
test.loc[test.month == 10, 'WNV_ratio'] = 0.0
test['weight'] = 1
## Calculate rates of traps having WNV by month, put in new DFs
test_weight = test.groupby(['month','year','Trap']).weight.sum().reset_index()
test_weight.rename(columns = {'weight': 'leakage'}, inplace = True)
test_2 = pd.merge(test, test_weight, how = 'left', on = ['month','year','Trap'])
test_2['WNV_ratio_2'] = test_2.WNV_ratio * test_2.leakage
test_2.fillna(0, inplace = True)
"""
Explanation: The cell below was also tailored to the Kaggle setup. In an effort to increase our model's AUC and prevent false positives, we manually assigned certain traps a predicted probability of zero. These were traps that never caught a mosquito with WNV in the 4 years of training data, as well as the months of the year (May, June, and October) that either never had a trap with WNV or, in the case of June and October, had at most 2 traps over the 4-year period catch a mosquito with WNV.
End of explanation
"""
X_train = prob_pred[['Trap_Encode', 'month', 'year', 'WNV_ratio', 'Latitude', 'Longitude', 'WkNb']]
y_train = prob_pred.WnvPresent
cv_params = {'max_depth': [3,5,7], 'min_child_weight': [1,3,5], 'learning_rate': [0.1, 0.01], 'subsample': [0.7,0.8,0.9]}
ind_params = {'n_estimators': 1000, 'seed':0, 'colsample_bytree': 0.8,
'objective': 'binary:logistic'}
optimized_GBM = GridSearchCV(xgb.XGBClassifier(**ind_params),
cv_params,
scoring = 'roc_auc', cv = 5, n_jobs = -1)
"""
Explanation: XGBoost, a gradient-boosted decision tree classifier, provided us with the best scores as determined by AUC. We attempted using a Random Forest Classifier, as well as just using our created weighted trap value as the probability that a trap had WNV at the point it was checked. The process to create predictions with the XGBoost model is below.
The features used are the trap, month, year, and week checked, latitude and longitude, and then the weighted value that was calculated for each trap.
End of explanation
"""
actual = prob_pred.WnvPresent
ratio = prob_pred.WNV_ratio
FPR = dict()
TPR = dict()
ROC_AUC = dict()
# For class 1, find the area under the curve
FPR[1], TPR[1], _ = roc_curve(actual, ratio)
ROC_AUC[1] = auc(FPR[1], TPR[1])
# Plot of a ROC curve for class 1
plt.plot(FPR[1], TPR[1], label='ROC curve (area = %0.2f)' % ROC_AUC[1], linewidth=4)
plt.plot([0, 1], [0, 1], 'k--', linewidth=4)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC for West Nile Prediction - Weighted Trap Probability')
plt.legend(loc="lower right")
plt.show()
optimized_GBM.fit(X_train, y_train)
"""
Explanation: The code above set up the grid search over those parameters, and now it is fit to the training data below, which selects the best parameter combination by cross-validated AUC.
End of explanation
"""
X_test = test_2[['Trap_Encode', 'month', 'year', 'WNV_ratio', 'Latitude', 'Longitude', 'WkNb']]
results = optimized_GBM.predict_proba(X_test)
xgbres = pd.DataFrame(results[:,1], columns=['xgbres'])
final = test_2.join(xgbres)
p = []
p = pd.DataFrame(p)
p['Id'] = final.Id
p['WnvPresent'] = final.xgbres
"""
Explanation: Below, the same transformations are applied to the test data that Kaggle uses to score the model, and the file is created that is submitted to Kaggle for scoring.
End of explanation
"""
|
Intel-Corporation/tensorflow
|
tensorflow/lite/g3doc/tutorials/model_maker_speech_recognition.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2022 The TensorFlow Authors.
End of explanation
"""
! pip install tflite-model-maker
import os
import glob
import random
import shutil
import librosa
import soundfile as sf
from IPython.display import Audio
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
import tflite_model_maker as mm
from tflite_model_maker import audio_classifier
from tflite_model_maker.config import ExportFormat
print(f"TensorFlow Version: {tf.__version__}")
print(f"Model Maker Version: {mm.__version__}")
"""
Explanation: Retrain a speech recognition model with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_speech_recognition"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_speech_recognition.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_speech_recognition.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_speech_recognition.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this colab notebook, you'll learn how to use the TensorFlow Lite Model Maker to train a speech recognition model that can classify spoken words or short phrases using one-second sound samples. The Model Maker library uses transfer learning to retrain an existing TensorFlow model with a new dataset, which reduces the amount of sample data and time required for training.
By default, this notebook retrains the model (BrowserFft, from the TFJS Speech Command Recognizer) using a subset of words from the speech commands dataset (such as "up," "down," "left," and "right"). Then it exports a TFLite model that you can run on a mobile device or embedded system (such as a Raspberry Pi). It also exports the trained model as a TensorFlow SavedModel.
This notebook is also designed to accept a custom dataset of WAV files, uploaded to Colab in a ZIP file. The more samples you have for each class, the better your accuracy will be, but because the transfer learning process uses feature embeddings from the pre-trained model, you can still get a fairly accurate model with only a few dozen samples in each of your classes.
Note: The model we'll be training is optimized for speech recognition with one-second samples. If you want to perform more generic audio classification (such as detecting different types of music), we suggest you instead follow this Colab to retrain an audio classifier.
If you want to run the notebook with the default speech dataset, you can run the whole thing now by clicking Runtime > Run all in the Colab toolbar. However, if you want to use your own dataset, then continue down to Prepare the dataset and follow the instructions there.
Import the required packages
You'll need TensorFlow, TFLite Model Maker, and some modules for audio manipulation, playback, and visualizations.
End of explanation
"""
use_custom_dataset = False #@param ["False", "True"] {type:"raw"}
"""
Explanation: Prepare the dataset
To train with the default speech dataset, just run all the code below as-is.
But if you want to train with your own speech dataset, follow these steps:
Note:
The model you'll retrain expects input data to be roughly one second of audio at 44.1 kHz. Model Maker performs automatic resampling for the training dataset, so there's no need to resample your dataset if it has a sample rate other than 44.1 kHz. But beware that audio samples longer than one second will be split into multiple one-second chunks, and the final chunk will be discarded if it's shorter than one second.
Be sure each sample in your dataset is in WAV file format, about one second long. Then create a ZIP file with all your WAV files, organized into separate subfolders for each classification. For example, each sample for a speech command "yes" should be in a subfolder named "yes". Even if you have only one class, the samples must be saved in a subdirectory with the class name as the directory name. (This script assumes your dataset is not split into train/validation/test sets and performs that split for you.)
Click the Files tab in the left panel and just drag-drop your ZIP file there to upload it.
Use the following drop-down option to set use_custom_dataset to True.
Then skip to Prepare a custom audio dataset to specify your ZIP filename and dataset directory name.
End of explanation
"""
tf.keras.utils.get_file('speech_commands_v0.01.tar.gz',
'http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz',
cache_dir='./',
cache_subdir='dataset-speech',
extract=True)
tf.keras.utils.get_file('background_audio.zip',
'https://storage.googleapis.com/download.tensorflow.org/models/tflite/sound_classification/background_audio.zip',
cache_dir='./',
cache_subdir='dataset-background',
extract=True)
"""
Explanation: Generate a background noise dataset
Whether you're using the default speech dataset or a custom dataset, you should have a good set of background noises so your model can distinguish speech from other noises (including silence).
Because the following background samples are provided in WAV files that are a minute long or longer, we need to split them up into smaller one-second samples so we can reserve some for our test dataset. We'll also combine a couple different sample sources to build a comprehensive set of background noises and silence:
End of explanation
"""
# Create a list of all the background wav files
files = glob.glob(os.path.join('./dataset-speech/_background_noise_', '*.wav'))
files = files + glob.glob(os.path.join('./dataset-background', '*.wav'))
background_dir = './background'
os.makedirs(background_dir, exist_ok=True)
# Loop through all files and split each into several one-second wav files
for file in files:
filename = os.path.basename(os.path.normpath(file))
print('Splitting', filename)
name = os.path.splitext(filename)[0]
rate = librosa.get_samplerate(file)
length = round(librosa.get_duration(filename=file))
for i in range(length - 1):
start = i * rate
stop = (i * rate) + rate
data, _ = sf.read(file, start=start, stop=stop)
sf.write(os.path.join(background_dir, name + str(i) + '.wav'), data, rate)
"""
Explanation: Note: Although there is a newer version available, we're using v0.01 of the speech commands dataset because it's a smaller download. v0.01 includes 30 commands, while v0.02 adds five more ("backward", "forward", "follow", "learn", and "visual").
End of explanation
"""
if not use_custom_dataset:
commands = [ "up", "down", "left", "right", "go", "stop", "on", "off", "background"]
dataset_dir = './dataset-speech'
test_dir = './dataset-test'
# Move the processed background samples
shutil.move(background_dir, os.path.join(dataset_dir, 'background'))
# Delete all directories that are not in our commands list
dirs = glob.glob(os.path.join(dataset_dir, '*/'))
for dir in dirs:
name = os.path.basename(os.path.normpath(dir))
if name not in commands:
shutil.rmtree(dir)
# Count is per class
sample_count = 150
test_data_ratio = 0.2
test_count = round(sample_count * test_data_ratio)
# Loop through child directories (each class of wav files)
dirs = glob.glob(os.path.join(dataset_dir, '*/'))
for dir in dirs:
files = glob.glob(os.path.join(dir, '*.wav'))
random.seed(42)
random.shuffle(files)
# Move test samples:
for file in files[sample_count:sample_count + test_count]:
class_dir = os.path.basename(os.path.normpath(dir))
os.makedirs(os.path.join(test_dir, class_dir), exist_ok=True)
os.rename(file, os.path.join(test_dir, class_dir, os.path.basename(file)))
# Delete remaining samples
for file in files[sample_count + test_count:]:
os.remove(file)
"""
Explanation: Prepare the speech commands dataset
We already downloaded the speech commands dataset, so now we just need to prune the number of classes for our model.
This dataset includes over 30 speech command classifications, and most of them have over 2,000 samples. But because we're using transfer learning, we don't need that many samples. So the following code does a few things:
Specify which classifications we want to use, and delete the rest.
Keep only 150 samples of each class for training (to prove that transfer learning works well with smaller datasets and simply to reduce the training time).
Create a separate directory for a test dataset so we can easily run inference with them later.
End of explanation
"""
if use_custom_dataset:
# Specify the ZIP file you uploaded:
!unzip YOUR-FILENAME.zip
# Specify the unzipped path to your custom dataset
# (this path contains all the subfolders with classification names):
dataset_dir = './YOUR-DIRNAME'
"""
Explanation: Prepare a custom dataset
If you want to train the model with our own speech dataset, you need to upload your samples as WAV files in a ZIP (as described above) and modify the following variables to specify your dataset:
End of explanation
"""
def move_background_dataset(dataset_dir):
dest_dir = os.path.join(dataset_dir, 'background')
if os.path.exists(dest_dir):
files = glob.glob(os.path.join(background_dir, '*.wav'))
for file in files:
shutil.move(file, dest_dir)
else:
shutil.move(background_dir, dest_dir)
if use_custom_dataset:
# Move background samples into custom dataset
move_background_dataset(dataset_dir)
# Now we separate some of the files that we'll use for testing:
test_dir = './dataset-test'
test_data_ratio = 0.2
dirs = glob.glob(os.path.join(dataset_dir, '*/'))
for dir in dirs:
files = glob.glob(os.path.join(dir, '*.wav'))
test_count = round(len(files) * test_data_ratio)
random.seed(42)
random.shuffle(files)
# Move test samples:
for file in files[:test_count]:
class_dir = os.path.basename(os.path.normpath(dir))
os.makedirs(os.path.join(test_dir, class_dir), exist_ok=True)
os.rename(file, os.path.join(test_dir, class_dir, os.path.basename(file)))
print('Moved', test_count, 'images from', class_dir)
"""
Explanation: After changing the filename and path name above, you're ready to train the model with your custom dataset. In the Colab toolbar, select Runtime > Run all to run the whole notebook.
The following code integrates our new background noise samples into your dataset and then separates a portion of all samples to create a test set.
End of explanation
"""
def get_random_audio_file(samples_dir):
files = os.path.abspath(os.path.join(samples_dir, '*/*.wav'))
files_list = glob.glob(files)
random_audio_path = random.choice(files_list)
return random_audio_path
def show_sample(audio_path):
audio_data, sample_rate = sf.read(audio_path)
class_name = os.path.basename(os.path.dirname(audio_path))
print(f'Class: {class_name}')
print(f'File: {audio_path}')
print(f'Sample rate: {sample_rate}')
print(f'Sample length: {len(audio_data)}')
plt.title(class_name)
plt.plot(audio_data)
display(Audio(audio_data, rate=sample_rate))
random_audio = get_random_audio_file(test_dir)
show_sample(random_audio)
"""
Explanation: Play a sample
To be sure the dataset looks correct, let's play a random sample from the test set:
End of explanation
"""
spec = audio_classifier.BrowserFftSpec()
"""
Explanation: Define the model
When using Model Maker to retrain any model, you have to start by defining a model spec. The spec defines the base model from which your new model will extract feature embeddings to begin learning new classes. The spec for this speech recognizer is based on the pre-trained BrowserFft model from TFJS.
The model expects input as an audio sample that's 44.1 kHz, and just under a second long: the exact sample length must be 44034 frames.
You don't need to do any resampling with your training dataset. Model Maker takes care of that for you. But when you later run inference, you must be sure that your input matches that expected format.
All you need to do here is instantiate the BrowserFftSpec:
End of explanation
"""
if not use_custom_dataset:
train_data_ratio = 0.8
train_data = audio_classifier.DataLoader.from_folder(
spec, dataset_dir, cache=True)
train_data, validation_data = train_data.split(train_data_ratio)
test_data = audio_classifier.DataLoader.from_folder(
spec, test_dir, cache=True)
"""
Explanation: Load your dataset
Now you need to load your dataset according to the model specifications. Model Maker includes the DataLoader API, which will load your dataset from a folder and ensure it's in the expected format for the model spec.
We already reserved some test files by moving them to a separate directory, which makes it easier to run inference with them later. Now we'll create a DataLoader for each split: the training set, the validation set, and the test set.
Load the speech commands dataset
End of explanation
"""
if use_custom_dataset:
train_data_ratio = 0.8
train_data = audio_classifier.DataLoader.from_folder(
spec, dataset_dir, cache=True)
train_data, validation_data = train_data.split(train_data_ratio)
test_data = audio_classifier.DataLoader.from_folder(
spec, test_dir, cache=True)
"""
Explanation: Load a custom dataset
Note: Setting cache=True is important to make training faster (especially when the dataset must be re-sampled) but it will also require more RAM to hold the data. If you use a very large custom dataset, caching might exceed your RAM capacity.
End of explanation
"""
# If your dataset has fewer than 100 samples per class,
# you might want to try a smaller batch size
batch_size = 25
epochs = 25
model = audio_classifier.create(train_data, spec, validation_data, batch_size, epochs)
"""
Explanation: Train the model
Now we'll use the Model Maker create() function to create a model based on our model spec and training dataset, and begin training.
If you're using a custom dataset, you might want to change the batch size as appropriate for the number of samples in your train set.
Note: The first epoch takes longer because it must create the cache.
End of explanation
"""
model.evaluate(test_data)
"""
Explanation: Review the model performance
Even if the accuracy/loss looks good from the training output above, it's important to also run the model using test data that the model has not seen yet, which is what the evaluate() method does here:
End of explanation
"""
def show_confusion_matrix(confusion, test_labels):
"""Compute confusion matrix and normalize."""
confusion_normalized = confusion.astype("float") / confusion.sum(axis=1)
sns.set(rc = {'figure.figsize':(6,6)})
sns.heatmap(
confusion_normalized, xticklabels=test_labels, yticklabels=test_labels,
cmap='Blues', annot=True, fmt='.2f', square=True, cbar=False)
plt.title("Confusion matrix")
plt.ylabel("True label")
plt.xlabel("Predicted label")
confusion_matrix = model.confusion_matrix(test_data)
show_confusion_matrix(confusion_matrix.numpy(), test_data.index_to_label)
"""
Explanation: View the confusion matrix
When training a classification model such as this one, it's also useful to inspect the confusion matrix. The confusion matrix gives you detailed visual representation of how well your classifier performs for each classification in your test data.
End of explanation
"""
TFLITE_FILENAME = 'browserfft-speech.tflite'
SAVE_PATH = './models'
print(f'Exporting the model to {SAVE_PATH}')
model.export(SAVE_PATH, tflite_filename=TFLITE_FILENAME)
model.export(SAVE_PATH, export_format=[mm.ExportFormat.SAVED_MODEL, mm.ExportFormat.LABEL])
"""
Explanation: Export the model
The last step is exporting your model into the TensorFlow Lite format for execution on mobile/embedded devices and into the SavedModel format for execution elsewhere.
When you export a .tflite file from Model Maker, it includes model metadata that describes various details that can later help during inference. It even includes a copy of the classification labels file, so you don't need a separate labels.txt file. (In the next section, we show how to use this metadata to run an inference.)
End of explanation
"""
# This library provides the TFLite metadata API
! pip install -q tflite_support
from tflite_support import metadata
import json
def get_labels(model):
"""Returns a list of labels, extracted from the model metadata."""
displayer = metadata.MetadataDisplayer.with_model_file(model)
labels_file = displayer.get_packed_associated_file_list()[0]
labels = displayer.get_associated_file_buffer(labels_file).decode()
return [line for line in labels.split('\n')]
def get_input_sample_rate(model):
"""Returns the model's expected sample rate, from the model metadata."""
displayer = metadata.MetadataDisplayer.with_model_file(model)
metadata_json = json.loads(displayer.get_metadata_json())
input_tensor_metadata = metadata_json['subgraph_metadata'][0][
'input_tensor_metadata'][0]
input_content_props = input_tensor_metadata['content']['content_properties']
return input_content_props['sample_rate']
"""
Explanation: Run inference with TF Lite model
Now your TFLite model can be deployed and run using any of the supported inferencing libraries or with the new TFLite AudioClassifier Task API. The following code shows how you can run inference with the .tflite model in Python.
End of explanation
"""
# Get a WAV file for inference and list of labels from the model
tflite_file = os.path.join(SAVE_PATH, TFLITE_FILENAME)
labels = get_labels(tflite_file)
random_audio = get_random_audio_file(test_dir)
# Ensure the audio sample fits the model input
interpreter = tf.lite.Interpreter(tflite_file)
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_size = input_details[0]['shape'][1]
sample_rate = get_input_sample_rate(tflite_file)
audio_data, _ = librosa.load(random_audio, sr=sample_rate)
if len(audio_data) < input_size:
audio_data.resize(input_size)
audio_data = np.expand_dims(audio_data[:input_size], axis=0)
# Run inference
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], audio_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
# Display prediction and ground truth
top_index = np.argmax(output_data[0])
label = labels[top_index]
score = output_data[0][top_index]
print('---prediction---')
print(f'Class: {label}\nScore: {score}')
print('----truth----')
show_sample(random_audio)
"""
Explanation: To observe how well the model performs with real samples, run the following code block over and over. Each time, it will fetch a new test sample and run inference with it, and you can listen to the audio sample below.
End of explanation
"""
try:
from google.colab import files
except ImportError:
pass
else:
files.download(tflite_file)
"""
Explanation: Download the TF Lite model
Now you can deploy the TF Lite model to your mobile or embedded device. You don't need to download the labels file because you can instead retrieve the labels from .tflite file metadata, as shown in the previous inferencing example.
End of explanation
"""
|
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
|
ex26-Identify Marine Heatwaves from High-resolution Daily SST Data.ipynb
|
mit
|
%matplotlib inline
import numpy as np
from datetime import date
from matplotlib import pyplot as plt
# Load marineHeatWaves definition module
import marineHeatWaves as mhw
"""
Explanation: Identify Marine Heatwaves from High-resolution Daily SST Data
Marine ecosystems are strongly influenced by heatwaves, a kind of extreme climatic event. Marine heatwaves (MHWs), which can be caused by a combination of atmospheric and oceanographic processes, have a strong influence on marine ecosystem structure and function, including mass mortality of abalone, benthic habitat loss and altered human use of the ocean. MHWs have been observed around the world and are expected to increase in intensity and frequency under anthropogenic climate change (Oliver et al., 2017).
A general definition of MHW has been proposed by Hobday et al.(2016). A MHW is defined as a prolonged discrete anomalously warm water event that can be described by its duration, intensity, rate of evolution, and spatial extent. Specifically, an anomalously warm event is considered to be a MHW if it lasts for five or more days, with temperatures warmer than the 90th percentile based on a 30-year historical baseline period.
The Python module marineHeatWaves implements the Marine Heatwave (MHW) definition proposed by Hobday et al. (2016). Moreover, the marineHeatWaves module provides three examples that show how to apply the MHW definition to observed SST records and identify three historical MHWs: the 2011 Western Australia event, the 2012 Northwest Atlantic event, and the 2003 Mediterranean event.
We take the 2011 Western Australia event from the original tutorial (Hobday et al., 2016) as an example and reproduce it in this notebook. The MHW took place during the austral summer of 2011 off Western Australia (the so-called 'Ningaloo Niño'). It was largely driven by atmospheric and oceanographic processes associated with the strong 2010/11 La Niña, which led to anomalous advection of warm tropical waters poleward into temperate regions (Feng et al., 2013; Benthuysen et al., 2014). This Western Australia MHW caused major shifts in benthic ecosystem structure and functioning in a tropical–temperate transition zone, through widespread mortality of cool-water habitat forming species (Wernberg et al., 2013; Smale and Wernberg, 2013), and impacted a valuable fishery (Caputi et al., 2015).
In this notebook, the NOAA 1/4° daily Optimum Interpolation Sea Surface Temperature (or daily OISST) is used to identify MHWs. The data is an analysis constructed by combining observations from different platforms (satellites, ships, buoys) on a regular global grid. A spatially complete SST map is produced by interpolating to fill in gaps. The methodology includes bias adjustment of satellite and ship observations (referenced to buoys) to compensate for platform differences and sensor biases (See more at https://www.ncdc.noaa.gov/oisst).
1. Load all needed libraries
End of explanation
"""
sst = np.loadtxt('data/sst_WA.csv', delimiter=',')
# Generate time vector using datetime format (January 1 of year 1 is day 1)
t = np.arange(date(1982,1,1).toordinal(),date(2017,12,31).toordinal()+1)
dates = [date.fromordinal(tt.astype(int)) for tt in t]
"""
Explanation: 2. Load daily SST data
The daily time series of SST off Western Australia at the location of [112.5$^∘$E, 29.5$^∘$S] has been preprocessed over the 1982 to 2017 period in advance. This can be done using NCO, CDO, Matlab, or Python itself. The location is right at the center of domain [112.375~112.625$^∘$E, 29.375~29.625$^∘$S]. So the daily time series was produced from the nearest 4 grids over the domain using a bilinear interpolation method. The data is stored as a CSV file of sst_WA.csv.
End of explanation
"""
mhws, clim = mhw.detect(t, sst)
"""
Explanation: 3. Detect Marine Heatwave
The marineHeatWaves (mhw) module consists of a number of functions for the detection and characterization of MHWs. The main function is the detection function (detect) which takes as input a time series of temperature (and a corresponding time vector) and outputs a set of detected MHWs.
3.1 Detect
Run the MHW detection algorithm which returns the variable mhws, consisting of the detected MHWs, and clim, consisting of the climatological (varying by day-of-year) seasonal cycle and extremes threshold.
End of explanation
"""
mhws['n_events']
"""
Explanation: 3.2 Check properties of MHWs
The number of MHW events:
End of explanation
"""
mhws['intensity_max'][0:10]
"""
Explanation: Maximum intensities (in $^∘$C) of the first ten events
End of explanation
"""
ev = np.argmax(mhws['intensity_max']) # Find largest event
print('Maximum intensity:', mhws['intensity_max'][ev], 'deg. C')
print('Average intensity:', mhws['intensity_mean'][ev], 'deg. C')
print('Cumulative intensity:', mhws['intensity_cumulative'][ev], 'deg. C-days')
print('Duration:', mhws['duration'][ev], 'days')
print('Start date:', mhws['date_start'][ev].strftime("%d %B %Y"))
print('End date:', mhws['date_end'][ev].strftime("%d %B %Y"))
"""
Explanation: Properties of the event with the largest maximum intensity
End of explanation
"""
plt.figure(figsize=(14,10))
plt.subplot(2,1,1)
# Plot SST, seasonal cycle, and threshold
plt.plot(dates, sst, 'k-')
plt.plot(dates, clim['thresh'], 'g-')
plt.plot(dates, clim['seas'], 'b-')
plt.title('SST (black), seasonal climatology (blue), \
threshold (green), detected MHW events (shading)')
plt.xlim(t[0], t[-1])
plt.ylim(sst.min()-0.5, sst.max()+0.5)
plt.ylabel(r'SST [$^\circ$C]')
plt.subplot(2,1,2)
# Find indices for all ten MHWs before and after event of interest and shade accordingly
for ev0 in np.arange(ev-10, ev+11, 1):
t1 = np.where(t==mhws['time_start'][ev0])[0][0]
t2 = np.where(t==mhws['time_end'][ev0])[0][0]
plt.fill_between(dates[t1:t2+1], sst[t1:t2+1], clim['thresh'][t1:t2+1], \
color=(1,0.6,0.5))
# Find indices for MHW of interest (2011 WA event) and shade accordingly
t1 = np.where(t==mhws['time_start'][ev])[0][0]
t2 = np.where(t==mhws['time_end'][ev])[0][0]
plt.fill_between(dates[t1:t2+1], sst[t1:t2+1], clim['thresh'][t1:t2+1], \
color='r')
# Plot SST, seasonal cycle, threshold, shade MHWs with main event in red
plt.plot(dates, sst, 'k-', linewidth=2)
plt.plot(dates, clim['thresh'], 'g-', linewidth=2)
plt.plot(dates, clim['seas'], 'b-', linewidth=2)
plt.title('SST (black), seasonal climatology (blue), \
threshold (green), detected MHW events (shading)')
plt.xlim(mhws['time_start'][ev]-150, mhws['time_end'][ev]+150)
plt.ylim(clim['seas'].min() - 1, clim['seas'].max() + mhws['intensity_max'][ev] + 0.5)
plt.ylabel(r'SST [$^\circ$C]')
"""
Explanation: 4. Visualize
From the properties of the event with the largest maximum intensity, it can be found that it is the most famous 2011 MHW off WA.
4.1 Plot the SST time series and have a closer look at the identified MHW event
End of explanation
"""
plt.figure(figsize=(15,7))
# Duration
plt.subplot(2,2,1)
evMax = np.argmax(mhws['duration'])
plt.bar(range(mhws['n_events']), mhws['duration'], width=0.6, \
color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['duration'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['duration'][ev], width=0.6, edgecolor=(1,0.,0.), \
color='none')
plt.xlim(0, mhws['n_events'])
plt.ylabel('[days]')
plt.title('Duration')
# Maximum intensity
plt.subplot(2,2,2)
evMax = np.argmax(mhws['intensity_max'])
plt.bar(range(mhws['n_events']), mhws['intensity_max'], width=0.6, \
color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['intensity_max'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['intensity_max'][ev], width=0.6, edgecolor=(1,0.,0.), \
color='none')
plt.xlim(0, mhws['n_events'])
plt.ylabel(r'[$^\circ$C]')
plt.title('Maximum Intensity')
# Mean intensity
plt.subplot(2,2,4)
evMax = np.argmax(mhws['intensity_mean'])
plt.bar(range(mhws['n_events']), mhws['intensity_mean'], width=0.6, \
color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['intensity_mean'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['intensity_mean'][ev], width=0.6, edgecolor=(1,0.,0.), \
color='none')
plt.xlim(0, mhws['n_events'])
plt.title('Mean Intensity')
plt.ylabel(r'[$^\circ$C]')
plt.xlabel('MHW event number')
# Cumulative intensity
plt.subplot(2,2,3)
evMax = np.argmax(mhws['intensity_cumulative'])
plt.bar(range(mhws['n_events']), mhws['intensity_cumulative'], width=0.6, \
color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['intensity_cumulative'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['intensity_cumulative'][ev], width=0.6, edgecolor=(1,0.,0.), \
color='none')
plt.xlim(0, mhws['n_events'])
plt.title(r'Cumulative Intensity')
plt.ylabel(r'[$^\circ$C$\times$days]')
plt.xlabel('MHW event number')
"""
Explanation: Yep, it's certainly picked out the largest event in the series (dark red shading). This event also seems to have been preceded and succeeded by a number of shorter, weaker events (light red shading).
4.2 Visualize distributions of MHW statistics across all the detected events
End of explanation
"""
|
bashtage/statsmodels
|
examples/notebooks/statespace_fixed_params.ipynb
|
bsd-3-clause
|
%matplotlib inline
from importlib import reload
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from pandas_datareader.data import DataReader
"""
Explanation: Estimating or specifying parameters in state space models
In this notebook we show how to fix specific values of certain parameters in statsmodels' state space models while estimating others.
In general, state space models allow users to:
Estimate all parameters by maximum likelihood
Fix some parameters and estimate the rest
Fix all parameters (so that no parameters are estimated)
End of explanation
"""
endog = DataReader('CPIAPPNS', 'fred', start='1980').asfreq('MS')
endog.plot(figsize=(15, 3));
"""
Explanation: To illustrate, we will use the Consumer Price Index for Apparel, which has a time-varying level and a strong seasonal component.
End of explanation
"""
# Run the HP filter with lambda = 129600
hp_cycle, hp_trend = sm.tsa.filters.hpfilter(endog, lamb=129600)
# The unobserved components model above is the local linear trend, or "lltrend", specification
mod = sm.tsa.UnobservedComponents(endog, 'lltrend')
print(mod.param_names)
"""
Explanation: It is well known (e.g. Harvey and Jaeger [1993]) that the HP filter output can be generated by an unobserved components model given certain restrictions on the parameters.
The unobserved components model is:
$$
\begin{aligned}
y_t & = \mu_t + \varepsilon_t & \varepsilon_t \sim N(0, \sigma_\varepsilon^2) \\
\mu_t &= \mu_{t-1} + \beta_{t-1} + \eta_t & \eta_t \sim N(0, \sigma_\eta^2) \\
\beta_t &= \beta_{t-1} + \zeta_t & \zeta_t \sim N(0, \sigma_\zeta^2) \\
\end{aligned}
$$
For the trend to match the output of the HP filter, the parameters must be set as follows:
$$
\begin{aligned}
\frac{\sigma_\varepsilon^2}{\sigma_\zeta^2} & = \lambda \\
\sigma_\eta^2 & = 0
\end{aligned}
$$
where $\lambda$ is the parameter of the associated HP filter. For the monthly data that we use here, it is usually recommended that $\lambda = 129600$.
End of explanation
"""
res = mod.smooth([1., 0, 1. / 129600])
print(res.summary())
"""
Explanation: The parameters of the unobserved components model (UCM) are written as:
$\sigma_\varepsilon^2 = \text{sigma2.irregular}$
$\sigma_\eta^2 = \text{sigma2.level}$
$\sigma_\zeta^2 = \text{sigma2.trend}$
To satisfy the above restrictions, we will set $(\sigma_\varepsilon^2, \sigma_\eta^2, \sigma_\zeta^2) = (1, 0, 1 / 129600)$.
Since we are fixing all parameters here, we do not need to use the fit method at all, since that method is used to perform maximum likelihood estimation. Instead, we can directly run the Kalman filter and smoother at our chosen parameters using the smooth method.
End of explanation
"""
ucm_trend = pd.Series(res.level.smoothed, index=endog.index)
"""
Explanation: The estimate that corresponds to the HP filter's trend estimate is given by the smoothed estimate of the level (which is $\mu_t$ in the notation above):
End of explanation
"""
fig, ax = plt.subplots(figsize=(15, 3))
ax.plot(hp_trend, label='HP estimate')
ax.plot(ucm_trend, label='UCM estimate')
ax.legend();
"""
Explanation: It is easy to see that the estimate of the smoothed level from the UCM is equal to the output of the HP filter:
End of explanation
"""
# Construct a local linear trend model with a stochastic seasonal component of period 1 year
mod = sm.tsa.UnobservedComponents(endog, 'lltrend', seasonal=12, stochastic_seasonal=True)
print(mod.param_names)
"""
Explanation: Adding a seasonal component
However, unobserved components models are more flexible than the HP filter. For example, the data shown above is clearly seasonal, but with time-varying seasonal effects (the seasonality is much weaker at the beginning than at the end). One of the benefits of the unobserved components framework is that we can add a stochastic seasonal component. In this case, we will estimate the variance of the seasonal component by maximum likelihood while still including the restriction on the parameters implied above so that the trend corresponds to the HP filter concept.
Adding the stochastic seasonal component adds one new parameter, sigma2.seasonal.
End of explanation
"""
# Here we restrict the first three parameters to specific values
with mod.fix_params({'sigma2.irregular': 1, 'sigma2.level': 0, 'sigma2.trend': 1. / 129600}):
# Now we fit any remaining parameters, which in this case
# is just `sigma2.seasonal`
res_restricted = mod.fit()
"""
Explanation: In this case, we will continue to restrict the first three parameters as described above, but we want to estimate the value of sigma2.seasonal by maximum likelihood. Therefore, we will use the fit method along with the fix_params context manager.
The fix_params method takes a dictionary of parameter names and associated values. Within the generated context, those parameters will be used in all cases. In the case of the fit method, only the parameters that were not fixed will be estimated.
End of explanation
"""
res_restricted = mod.fit_constrained({'sigma2.irregular': 1, 'sigma2.level': 0, 'sigma2.trend': 1. / 129600})
"""
Explanation: Alternatively, we could have simply used the fit_constrained method, which also accepts a dictionary of constraints:
End of explanation
"""
print(res_restricted.summary())
"""
Explanation: The summary output includes all parameters, but indicates that the first three were fixed (and so were not estimated).
End of explanation
"""
res_unrestricted = mod.fit()
"""
Explanation: For comparison, we construct the unrestricted maximum likelihood estimates (MLE). In this case, the estimate of the level will no longer correspond to the HP filter concept.
End of explanation
"""
# Construct the smoothed level estimates
unrestricted_trend = pd.Series(res_unrestricted.level.smoothed, index=endog.index)
restricted_trend = pd.Series(res_restricted.level.smoothed, index=endog.index)
# Construct the smoothed estimates of the seasonal pattern
unrestricted_seasonal = pd.Series(res_unrestricted.seasonal.smoothed, index=endog.index)
restricted_seasonal = pd.Series(res_restricted.seasonal.smoothed, index=endog.index)
"""
Explanation: Finally, we can retrieve the smoothed estimates of the trend and seasonal components.
End of explanation
"""
fig, ax = plt.subplots(figsize=(15, 3))
ax.plot(unrestricted_trend, label='MLE, with seasonal')
ax.plot(restricted_trend, label='Fixed parameters, with seasonal')
ax.plot(hp_trend, label='HP filter, no seasonal')
ax.legend();
"""
Explanation: Comparing the estimated level, it is clear that the seasonal UCM with fixed parameters still produces a trend that corresponds very closely (although no longer exactly) to the HP filter output.
Meanwhile, the estimated level from the model with no parameter restrictions (the MLE model) is much less smooth than these.
End of explanation
"""
fig, ax = plt.subplots(figsize=(15, 3))
ax.plot(unrestricted_seasonal, label='MLE')
ax.plot(restricted_seasonal, label='Fixed parameters')
ax.legend();
"""
Explanation: Finally, the UCM with the parameter restrictions is still able to pick up the time-varying seasonal component quite well.
End of explanation
"""
|
hesam-setareh/nest-simulator
|
pynest/examples/gif_pop_psc_exp.ipynb
|
gpl-2.0
|
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import nest
"""
Explanation: Population rate model of generalized integrate-and-fire neurons
This script simulates a finite network of generalized integrate-and-fire (GIF) neurons directly on the mesoscopic population level using the effective stochastic population rate dynamics derived in the paper [Schwalger et al. PLoS Comput Biol. 2017]. The stochastic population dynamics is implemented in the NEST model gif_pop_psc_exp. We demonstrate this model using the example of a Brunel network of two coupled populations, one excitatory and one inhibitory population.
Note that the population model represents the mesoscopic level description of the corresponding microscopic network based on the NEST model gif_psc_exp.
First, we load the necessary modules:
End of explanation
"""
#all times given in milliseconds
dt=0.5
dt_rec=1.
#Simulation time
t_end=2000.
#Parameters
size = 200
N = np.array([ 4, 1 ]) * size
M = len(N) #number of populations
#neuronal parameters
t_ref = 4. * np.ones(M) #absolute refractory period
tau_m = 20 * np.ones(M) #membrane time constant
mu = 24. * np.ones(M) #constant base current mu=R*(I0+Vrest)
c = 10. * np.ones(M) #base rate of exponential link function
Delta_u = 2.5 * np.ones(M) #softness of exponential link function
V_reset = 0. * np.ones(M) #Reset potential
V_th = 15. * np.ones(M) #baseline threshold (non-accumulating part)
tau_sfa_exc = [100., 1000.] #adaptation time constants of excitatory neurons
tau_sfa_inh = [100., 1000.] #adaptation time constants of inhibitory neurons
J_sfa_exc = [1000.,1000.] #size of feedback kernel theta (= area under exponential) in mV*ms
J_sfa_inh = [1000.,1000.] #in mV*ms
tau_theta = np.array([tau_sfa_exc, tau_sfa_inh])
J_theta = np.array([J_sfa_exc, J_sfa_inh ])
#connectivity
J = 0.3 #excitatory synaptic weight in mV if number of input connections is C0 (see below)
g = 5. #inhibition-to-excitation ratio
pconn = 0.2 * np.ones((M, M))
delay = 1. * np.ones((M, M))
C0 = np.array([[ 800, 200 ], [800, 200]]) * 0.2 #constant reference matrix
C = np.vstack((N,N)) * pconn #numbers of input connections
J_syn = np.array([[ J, -g * J], [J, -g * J]]) * C0 / C #final synaptic weights scaling as 1/C
taus1_ = [3., 6.] #time constants of exc. and inh. post-synaptic currents (PSC's)
taus1 = np.array([taus1_ for k in range(M)])
#step current input
step=[[20.],[20.]] #jump size of mu in mV
tstep=np.array([[1500.],[1500.]]) #times of jumps
#synaptic time constants of excitatory and inhibitory connections
tau_ex = 3. # in ms
tau_in = 6. # in ms
"""
Explanation: Next, we set the parameters of the microscopic model
End of explanation
"""
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
nest.SetKernelStatus({'resolution': dt, 'print_time': True, 'local_num_threads': 1})
t0=nest.GetKernelStatus('time')
nest_pops = nest.Create('gif_pop_psc_exp', M)
C_m = 250. # irrelevant value for the membrane capacitance; it cancels out in the simulation
g_L = C_m / tau_m
for i, nest_i in enumerate( nest_pops ):
nest.SetStatus([nest_i], {
'C_m': C_m,
'I_e': mu[i] * g_L[i],
'lambda_0': c[i], # in Hz!
'Delta_V': Delta_u[i],
'tau_m': tau_m[i],
'tau_sfa': tau_theta[i],
'q_sfa': J_theta[i] / tau_theta[i], # [J_theta]= mV*ms -> [q_sfa]=mV
'V_T_star': V_th[i],
'V_reset': V_reset[i],
'len_kernel': -1, # -1 triggers automatic history size
'N': N[i],
't_ref': t_ref[i],
'tau_syn_ex': max([tau_ex, dt]),
'tau_syn_in': max([tau_in, dt]),
'E_L': 0.
})
# connect the populations
g_syn = np.ones_like(J_syn) #synaptic conductance
g_syn[:,0] = C_m / tau_ex
g_syn[:,1] = C_m / tau_in
for i, nest_i in enumerate( nest_pops ):
for j, nest_j in enumerate( nest_pops ):
nest.SetDefaults('static_synapse', {
'weight': J_syn[i,j] * g_syn[i,j] * pconn[i,j],
'delay': delay[i,j]} )
nest.Connect( [nest_j], [nest_i], 'all_to_all')
"""
Explanation: Simulation on the mesoscopic level
To directly simulate the mesoscopic population activities (i.e. generating the activity of a finite-size population without simulating single neurons), we can build the populations using the NEST model gif_pop_psc_exp:
End of explanation
"""
# monitor the output using a multimeter, this only records with dt_rec!
nest_mm = nest.Create('multimeter')
nest.SetStatus( nest_mm, {'record_from':['n_events', 'mean'],
'withgid': True,
'withtime': False,
'interval': dt_rec})
nest.Connect(nest_mm, nest_pops, 'all_to_all')
# monitor the output using a spike detector
nest_sd = []
for i, nest_i in enumerate( nest_pops ):
nest_sd.append( nest.Create('spike_detector') )
nest.SetStatus( nest_sd[i], {'withgid': False,
'withtime': True,
'time_in_steps': True})
nest.SetDefaults('static_synapse', {'weight': 1.,
'delay': dt} )
nest.Connect( [nest_pops[i]], nest_sd[i], 'all_to_all')
"""
Explanation: To record the instantaneous population rate $\bar A(t)$ we use a multimeter, and to get the population activity $A_N(t)$ we use a spike detector:
End of explanation
"""
#set initial value (at t0+dt) of step current generator to zero
tstep = np.hstack((dt * np.ones((M,1)), tstep))
step = np.hstack((np.zeros((M,1)), step))
# create the step current devices
nest_stepcurrent = nest.Create('step_current_generator', M )
# set the parameters for the step currents
for i in range(M):
nest.SetStatus( [nest_stepcurrent[i]], {
'amplitude_times': tstep[i] + t0,
'amplitude_values': step[i] *g_L[i], 'origin': t0, 'stop': t_end})
pop_ = nest_pops[i]
if type(nest_pops[i])==int:
pop_ = [pop_]
nest.Connect( [nest_stepcurrent[i]], pop_, syn_spec={'weight':1.} )
"""
Explanation: All neurons in a given population will be stimulated with a step input current:
End of explanation
"""
local_num_threads = 1
seed=1
msd =local_num_threads * seed + 1 #master seed
nest.SetKernelStatus({'rng_seeds': range(msd, msd + local_num_threads)})
t = np.arange(0., t_end, dt_rec)
A_N = np.ones( (t.size, M) ) * np.nan
Abar = np.ones_like( A_N ) * np.nan
#simulate 1 step longer to make sure all t are simulated
nest.Simulate(t_end + dt)
data_mm = nest.GetStatus( nest_mm )[0]['events']
for i, nest_i in enumerate( nest_pops ):
a_i = data_mm['mean'][ data_mm['senders']==nest_i ]
a = a_i / N[i] / dt
min_len = np.min([len(a), len(Abar)])
Abar[:min_len,i] = a[:min_len]
data_sd = nest.GetStatus(nest_sd[i], keys=['events'])[0][0]['times'] * dt - t0
bins = np.concatenate((t, np.array([t[-1] + dt_rec])))
A = np.histogram(data_sd, bins=bins)[0] / float(N[i]) / dt_rec
A_N[:,i]=A
"""
Explanation: We can now start the simulation:
End of explanation
"""
plt.clf()
plt.subplot(2,1,1)
plt.plot(t,A_N*1000) #plot population activities (in Hz)
plt.ylabel(r'$A_N$')
plt.subplot(2,1,2)
plt.plot(t,Abar*1000) #plot instantaneous population rates (in Hz)
plt.ylabel(r'$\bar A$')
"""
Explanation: and plot the activity:
End of explanation
"""
nest.ResetKernel()
nest.SetKernelStatus({'resolution': dt, 'print_time': True, 'local_num_threads': 1})
t0=nest.GetKernelStatus('time')
nest_pops = []
for k in range(M):
nest_pops.append( nest.Create('gif_psc_exp', N[k]) )
# set single neuron properties
for i, nest_i in enumerate( nest_pops ):
nest.SetStatus(nest_i, {
'C_m': C_m,
'I_e': mu[i] * g_L[i],
'lambda_0': c[i], # in Hz!
'Delta_V': Delta_u[i],
'g_L': g_L[i],
'tau_sfa': tau_theta[i],
'q_sfa': J_theta[i] / tau_theta[i], # [J_theta]= mV*ms -> [q_sfa]=mV
'V_T_star': V_th[i],
'V_reset': V_reset[i],
't_ref': t_ref[i],
'tau_syn_ex': max([tau_ex, dt]),
'tau_syn_in': max([tau_in, dt]),
'E_L': 0.,
'V_m': 0.
})
# connect the populations
for i, nest_i in enumerate( nest_pops ):
for j, nest_j in enumerate( nest_pops ):
nest.SetDefaults('static_synapse', {
'weight': J_syn[i,j] * g_syn[i,j],
'delay': delay[i,j]} )
if np.allclose( pconn[i,j], 1. ):
conn_spec = {'rule': 'all_to_all'}
else:
conn_spec = {'rule': 'fixed_indegree', 'indegree': int(pconn[i,j] * N[j])}
nest.Connect( nest_j, nest_i, conn_spec )
"""
Explanation: Microscopic ("direct") simulation
As mentioned above, the population model gif_pop_psc_exp directly simulates the mesoscopic population activities, i.e. without the need to simulate single neurons. On the other hand, if we want to know single neuron activities, we must simulate on the microscopic level. This is possible by building a corresponding network of gif_psc_exp neuron models:
End of explanation
"""
# monitor the output using a multimeter and a spike detector
nest_sd = []
for i, nest_i in enumerate(nest_pops ):
nest_sd.append( nest.Create('spike_detector') )
nest.SetStatus(nest_sd[i], {'withgid': False,
'withtime': True, 'time_in_steps': True})
nest.SetDefaults('static_synapse', {'weight': 1., 'delay': dt} )
#record all spikes from population to compute population activity
nest.Connect(nest_pops[i], nest_sd[i], 'all_to_all')
Nrecord=[5,0] #for each population i the first Nrecord[i] neurons are recorded
nest_mm_Vm = []
for i, nest_i in enumerate( nest_pops ):
nest_mm_Vm.append( nest.Create('multimeter') )
nest.SetStatus(nest_mm_Vm[i], {'record_from':['V_m'], \
'withgid': True, 'withtime': True, \
'interval': dt_rec})
nest.Connect(nest_mm_Vm[i], list( np.array(nest_pops[i])[:Nrecord[i]]), 'all_to_all')
"""
Explanation: We want to record all spikes of each population in order to compute the mesoscopic population activities $A_N(t)$ from the microscopic simulation. We also record the membrane potentials of five example neurons:
End of explanation
"""
# create the step current devices if they do not exist already
nest_stepcurrent = nest.Create('step_current_generator', M )
# set the parameters for the step currents
for i in range(M):
nest.SetStatus( [nest_stepcurrent[i]], {
'amplitude_times': tstep[i] + t0,
'amplitude_values': step[i] *g_L[i], 'origin': t0, 'stop': t_end #, 'stop': sim_T + t0
})
pop_ = nest_pops[i]
if type(nest_pops[i])==int:
pop_ = [pop_]
nest.Connect( [nest_stepcurrent[i]], pop_, syn_spec={'weight':1.} )
"""
Explanation: As before, all neurons in a given population will be stimulated with a step input current. The following code block is identical to the one for the mesoscopic simulation above:
End of explanation
"""
local_num_threads = 1
seed=1
msd =local_num_threads * seed + 1 #master seed
nest.SetKernelStatus({'rng_seeds': range(msd, msd + local_num_threads)})
t = np.arange(0., t_end, dt_rec)
A_N = np.ones( (t.size, M) ) * np.nan
#simulate 1 step longer to make sure all t are simulated
nest.Simulate(t_end + dt)
"""
Explanation: We can now start the microscopic simulation:
End of explanation
"""
for i, nest_i in enumerate( nest_pops ):
data_sd = nest.GetStatus(nest_sd[i], keys=['events'])[0][0]['times'] * dt - t0
bins = np.concatenate((t, np.array([t[-1] + dt_rec])))
A = np.histogram(data_sd, bins=bins)[0] / float(N[i]) / dt_rec
A_N[:,i]=A * 1000 #in Hz
t = np.arange(dt,t_end+dt,dt_rec)
plt.plot(t, A_N[:,0])
plt.xlabel('time [ms]')
plt.ylabel('population activity [Hz]')
"""
Explanation: Let's retrieve the data of the spike detector and plot the activity of the excitatory population (in Hz):
End of explanation
"""
voltage=[]
for i in range(M):
if Nrecord[i]>0:
senders = nest.GetStatus(nest_mm_Vm[i])[0]['events']['senders']
v = nest.GetStatus(nest_mm_Vm[i])[0]['events']['V_m']
voltage.append( np.array([v[np.where(senders==j)] for j in set(senders)]) )
else:
voltage.append(np.array([]))
f, axarr = plt.subplots(Nrecord[0], sharex=True)
for i in range(Nrecord[0]):
axarr[i].plot(voltage[0][i])
axarr[i].set_yticks((0,15,30))
axarr[i].set_xlabel('time [ms]')
"""
Explanation: This looks similar to the population activity obtained from the mesoscopic simulation based on the NEST model gif_pop_psc_exp (cf. previous figure). Now we retrieve the data of the multimeter, which allows us to look at the membrane potentials of single neurons. Here we plot the voltage traces (in mV) of five example neurons:
End of explanation
"""
|
QuantScientist/Deep-Learning-Boot-Camp
|
day03/2.3 Deep Convolutional Neural Networks.ipynb
|
mit
|
from keras.applications import VGG16
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
import os
# -- Jupyter/IPython way to see documentation
# please focus on parameters (e.g. include top)
VGG16??
vgg16 = VGG16(include_top=True, weights='imagenet')
"""
Explanation: Deep CNN Models
Constructing and training your own ConvNet from scratch can be a hard and time-consuming task.
A common trick used in Deep Learning is to use a pre-trained model and finetune it to the specific data it will be used for.
Famous Models with Keras
This notebook contains code and reference for the following Keras models (gathered from https://github.com/fchollet/keras/tree/master/keras/applications)
VGG16
VGG19
ResNet50
Inception v3
Xception
... more to come
References
Very Deep Convolutional Networks for Large-Scale Image Recognition - please cite this paper if you use the VGG models in your work.
Deep Residual Learning for Image Recognition - please cite this paper if you use the ResNet model in your work.
Rethinking the Inception Architecture for Computer Vision - please cite this paper if you use the Inception v3 model in your work.
All architectures are compatible with both TensorFlow and Theano, and upon instantiation the models will be built according to the image dimension ordering set in your Keras configuration file at ~/.keras/keras.json.
For instance, if you have set image_data_format="channels_last", then any model loaded from this repository will get built according to the TensorFlow dimension ordering convention, "Width-Height-Depth".
VGG16
<img src="imgs/vgg16.png" >
VGG19
<img src="imgs/vgg19.png" >
keras.applications
End of explanation
"""
IMAGENET_FOLDER = 'imgs/imagenet' #in the repo
!ls imgs/imagenet
"""
Explanation: If you're wondering where these HDF5 files with weights are stored, please take a look at ~/.keras/models/
HandsOn VGG16 - Pre-trained Weights
End of explanation
"""
from keras.preprocessing import image
import numpy as np
img_path = os.path.join(IMAGENET_FOLDER, 'strawberry_1157.jpeg')
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
preds = vgg16.predict(x)
print('Predicted:', decode_predictions(preds))
"""
Explanation: <img src="imgs/imagenet/strawberry_1157.jpeg" >
End of explanation
"""
img_path = os.path.join(IMAGENET_FOLDER, 'apricot_696.jpeg')
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
preds = vgg16.predict(x)
print('Predicted:', decode_predictions(preds))
"""
Explanation: <img src="imgs/imagenet/apricot_696.jpeg" >
End of explanation
"""
img_path = os.path.join(IMAGENET_FOLDER, 'apricot_565.jpeg')
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
preds = vgg16.predict(x)
print('Predicted:', decode_predictions(preds))
"""
Explanation: <img src="imgs/imagenet/apricot_565.jpeg" >
End of explanation
"""
# from keras.applications import VGG19
"""
Explanation: Hands On:
Try to do the same with VGG19 Model
End of explanation
"""
## from keras.applications import ...
"""
Explanation: Residual Networks
<img src="imgs/resnet_bb.png" >
ResNet 50
<img src="imgs/resnet34.png" >
End of explanation
"""
|
sdss/marvin
|
docs/sphinx/jupyter/Shanghai_Demo_Tools.ipynb
|
bsd-3-clause
|
from __future__ import print_function, division, absolute_import
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Marvin Workshop (Shanghai 2016)
This Jupyter notebook will guide you through the installation of Marvin and will give you a hint of its capabilities. But enough talk, let's begin by installing Marvin. For that, run
pip install sdss-marvin
Now that you have installed Marvin, it's time to take your first steps. If you want to learn more about how Marvin works, then go see General Info to learn about Marvin Modes, Versions, or Downloading. If you just want to play, then read on.
First let's run some boilerplate code for Python 2/3 compatibility and plotting in the notebook:
End of explanation
"""
import marvin
"""
Explanation: Now, let’s import Marvin:
End of explanation
"""
marvin.config.release
"""
Explanation: Let's see what release we're using. Releases can be either MPLs (e.g. MPL-5) or DRs (e.g. DR13), however DRs are currently disabled in Marvin.
End of explanation
"""
from marvin import config
config.setRelease('MPL-4')
print('MPL:', config.release)
"""
Explanation: On initial import, Marvin will set the default data release to use the latest MPL available, currently MPL-5. You can change the version of MaNGA data using the Marvin Config.
End of explanation
"""
config.setMPL('MPL-5')
print('MPL:', config.release)
"""
Explanation: In general, we recommend using MPL-5 unless you have already started a science project using MPL-4. So, let's go back to MPL-5
End of explanation
"""
from marvin.tools.cube import Cube
"""
Explanation: My First Cube
Now let’s play with a Marvin Cube!
Import the Marvin-Tools Cube class:
End of explanation
"""
#----- EDIT THIS CELL -----#
# Point filename to the location of the Cube you want to load
filename = './manga-8485-1901-LOGCUBE.fits.gz'
drpall = './drpall-v2_0_1.fits'
"""
Explanation: Let's load a cube from a local file. Start by specifying the full path and name of the file, such as:
./manga-8485-1901-LOGCUBE.fits.gz
EDIT Next Cell if necessary
End of explanation
"""
cc = Cube(filename=filename, drpall=drpall)
"""
Explanation: Create a Cube object:
End of explanation
"""
print(cc)
"""
Explanation: Now we have a Cube object:
End of explanation
"""
cc.ra, cc.dec, cc.header['SRVYMODE']
"""
Explanation: How about we look at some meta-data
End of explanation
"""
cc.targetbit
cc.qualitybit
"""
Explanation: ...and the quality and target bits
End of explanation
"""
spax = cc[10,10]
# print the spaxel to see the x,y coord from the lower left, and the coords relative to the cube center, x_cen/y_cen
spax
"""
Explanation: Get a Spaxel
Cubes have several functions currently available: getSpaxel, getMaps, getAperture. Let's look at spaxels. We can retrieve spaxels from a cube easily via indexing. In this manner, spaxels are 0-indexed from the lower left corner. Let's get spaxel (x=10, y=10):
End of explanation
"""
# let's grab the central spaxel
spax = cc.getSpaxel(x=0, y=0)
spax
spax.spectrum.wavelength
spax.spectrum.flux
"""
Explanation: Each spaxel has a spectrum associated with it, containing the wavelengths and fluxes of each spectral channel:
Alternatively, grab a spaxel with getSpaxel. Use the xyorig keyword to set the coordinate origin point: 'lower' or 'center'. The default is "center"
End of explanation
"""
# turn on interactive plotting
%matplotlib notebook
spax.spectrum.plot()
"""
Explanation: Plot the spectrum!
End of explanation
"""
# To save the plot, we need to draw it in the same cell as the save command.
spax.spectrum.plot()
import os
plt.savefig(os.getenv('HOME') + '/Downloads/my-first-spectrum.png')
# NOTE - if you are using the latest version of iPython and Jupyter notebooks, then interactive matplotlib plots
# should be enabled. You can save the figure with the save icon in the interactive toolbar.
"""
Explanation: Save plot to Downloads directory:
End of explanation
"""
spax.properties
# Gets the ha_alpha flux and the ivar
ha_flux = spax.properties['emline_gflux_ha_6564']
print(ha_flux.value, ha_flux.ivar, ha_flux.mask)
"""
Explanation: By default, the spaxel object contains the DAP properties for that spaxel in the unbinned MAPS.
End of explanation
"""
# import the maps
from marvin.tools.maps import Maps
# Load a MPL-5 map
mapfile = './manga-8485-1901-MAPS-SPX-GAU-MILESHC.fits.gz'
# Let's get a default map of
maps = Maps(filename=mapfile)
print(maps)
"""
Explanation: Marvin Maps
Marvin Maps makes it easy to work with the DAP MAPS FITS files. You can retrieve maps in several ways. Let's take a look.
From a Marvin Maps
Marvin Maps takes the same inputs as cube: filename, plateifu, or mangaid. It also accepts keywords bintype and template_kin. These uniquely define a DAP MAPS file. By default, Marvin will load a MAPS file of bintype=SPX and template_kin=GAU-MILESHC for MPL-5. For MPL-4, the defaults are bintype=NONE, and template_kin=MIUSCAT-THIN.
End of explanation
"""
# Let's grab the H-alpha flux emission line map
haflux = maps.getMap('emline_gflux', channel='ha_6564')
print(haflux)
# turn on interactive plotting
%matplotlib notebook
# let's plot it
haflux.plot()
# You can get the flux and ivar arrays
haflux.value, haflux.mask
"""
Explanation: Once you have a maps object, you can access the raw maps file and header and extensions via maps.header and maps.data. Alternatively, you can access individual maps using the getMap method. getMap works by specifying a parameter and a channel. The parameter and channels names are equivalent to those found in the MAPS FITS extensions and headers, albeit lowercased.
End of explanation
"""
# Let's look at the NII-to-Halpha emission-line ratio map
niiha = maps.getMapRatio('emline_gflux', 'nii_6585', 'ha_6564')
print(niiha)
niiha.plot()
"""
Explanation: From the maps object, we can also easily plot the ratio between two maps, e.g. emission-line ratios, using the getMapRatio method. Map ratios are Map objects the same as any other, so you can access their array values or plot them
End of explanation
"""
maps = cc.getMaps()
print(maps)
"""
Explanation: From a Marvin Cube
Once we have a cube, we can get its maps using the getMaps method. getMaps is just a wrapper to the Marvin Maps Tool. Once we have the maps, we can do all the same things as before.
End of explanation
"""
from marvin.tools.modelcube import ModelCube
# For the sake of variety, let's open this ModelCube remotely. For that, simply use the plate-ifu of the target.
model_cube = ModelCube(plateifu='8485-1901', bintype='VOR10')
print(model_cube)
"""
Explanation: Note that the cube was opened from remote!
Marvin: now with ModelCubes
MPL-5 introduced LOGCUBE files, which contain the fitted spectra for each spaxel. LOGCUBES are called ModelCube in Marvin, and they are instantiated in the same way as a Cube or a Maps. Let's see it.
End of explanation
"""
sp = model_cube.getSpaxel(x=0, y=0)
print(sp.cube)
print(sp.maps)
print(sp.modelcube)
print('Spaxel bintype is:', sp.bintype)
"""
Explanation: IMPORTANT: the remote mode rocks!
Let's see a bit more about the model_cube. We can get a spaxel, which will include the properties and the cube spectrum.
End of explanation
"""
cube_spectrum = sp.spectrum
model_spectrum = sp.model
ax = cube_spectrum.plot(xlim=[6500, 9000])
ax.plot(model_spectrum.wavelength, model_spectrum.flux, 'r')
print(model_spectrum.mask)
"""
Explanation: VERY IMPORTANT: although the original ModelCube was binned (VOR10), a Spaxel is always unbinned (SPX)!!!
We can get the model spectrum for this spaxel and plot it.
End of explanation
"""
sp.save('such_a_great_spaxel.mpf')
"""
Explanation: Saving the spaxel
Starting in Marvin Beta, you can pickle all the objects and restore them later. Let's assume you want to save that last spaxel. You simply do.
End of explanation
"""
from marvin.tools.spaxel import Spaxel
restored_spaxel = Spaxel.restore('such_a_great_spaxel.mpf')
print(restored_spaxel)
"""
Explanation: And then you can restore by doing
End of explanation
"""
from marvin.tools.maps import Maps
remote_maps = Maps(plateifu='8485-1902', mode='remote')
print(remote_maps)
"""
Explanation: It works for all the objects!
Downlading data
We have seen that the remote mode allows you to access all the Manga data without worrying where it actually lives. However, once you have located the data you want to use for your science you probably will want to download it for faster/airplane access. That easy to do with Marvin. Let's start by creating a maps from remote.
End of explanation
"""
filename = remote_maps.download()
print(filename)
"""
Explanation: Now let's download it!
End of explanation
"""
from marvin.tools.bin import Bin
bin_100 = Bin(binid=100, plateifu='8485-1901', bintype='VOR10')
print(bin_100)
"""
Explanation: Binned data
For many science purposes, you will want to use binned data. The DAP produces MAPS and LOGCUBES with different types of binning schemas and kinnematic templates. Refer to the DAP documentation for more information.
Marvin makes very easy to access binned data without needing to know much about the gory details of the data model. We have seen already how to get a binned Maps. Now, let's see how to get a Bin and access its spaxels.
End of explanation
"""
print(bin_100.properties['emline_gflux_ha_6564'])
bin_100.model.plot()
"""
Explanation: You can access all the properties and model data for the bin as you do with an spaxel.
End of explanation
"""
print(bin_100.spaxels)
"""
Explanation: But you can also access the (unbinned) spaxels for that bin.
End of explanation
"""
bin_100.spaxels[0].load()
print(bin_100.spaxels)
bin_100.spaxels[0].bintype
bin_100.spaxels[0].properties['emline_gflux_ha_6564']
"""
Explanation: You will note that the spaxels are not loaded. That means that the spectra and properties have not yet been retrieved for each spaxel. That is called lazy loading, and we do it to improve loading time. You can then load any (or all) of the spaxels by doing
End of explanation
"""
|
kit-cel/lecture-examples
|
mloc/ch4_Deep_Learning/pytorch/pytorch_tutorial_2.ipynb
|
gpl-2.0
|
import torch
import numpy as np
%matplotlib inline
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from IPython import display
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print("We are using the following device for learning:",device)
"""
Explanation: PyTorch Tutorial - Part 2
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
* Get accustomed to the basics of pytorch
* Do simple operations
End of explanation
"""
np.random.seed(0)
x = np.arange(-2, 5, 0.1)
y = x**3 - 4*x**2 - 2*x + 2
y_noise = y + np.random.normal(0, 1.5, size=(len(x),))
# simple function to get a random mini-batch
def get_batch(x, y, batch_size=20):
idxs = np.random.randint(0, len(x), (batch_size))
return x[idxs], y[idxs]
"""
Explanation: Generate data from a simple polynomial and corrupt it with noise
End of explanation
"""
num_iter = 100*300
image_cycle = 300
mini_batch_size = 10
neurons_H1 = 4
neurons_H2 = 5
# contains all the x values. We need to expand the dimensions of the input tensor
x_eval_tensor = torch.from_numpy(np.expand_dims(x,1)).float().to(device)
# predefined linear layers, parameters are input and output neurons
layer1 = torch.nn.Linear(1, neurons_H1).to(device)
layer2 = torch.nn.Linear(neurons_H1, neurons_H2).to(device)
layer3 = torch.nn.Linear(neurons_H2, 1, bias=False).to(device) # do not use bias on final layer
# Activation function
activation_function = torch.nn.Tanh()
# gather parameters of both layers
parameters = list(layer1.parameters()) + list(layer2.parameters()) + list(layer3.parameters())
# Adam and MSE Loss
optimizer = torch.optim.Adam(parameters)
loss_fn = torch.nn.MSELoss(reduction='mean')
"""
Explanation: Define the model as a graph of layers: use 2 hidden layers, both with tanh activation (overkill for this example, but for illustration purposes).
End of explanation
"""
fig,ax = plt.subplots(1,1,figsize=(5,5))
plt.ion()
plt.show()
fig.show()
fig.canvas.draw()
# main loop
for step in range(num_iter):
batch_x, batch_y = get_batch(x,y_noise, mini_batch_size)
x_train_tensor = torch.from_numpy(np.expand_dims(batch_x,1)).float().to(device)
y_train_tensor = torch.from_numpy(np.expand_dims(batch_y,1)).float().to(device)
yhat = layer3(activation_function(layer2(activation_function(layer1(x_train_tensor)))))
loss = loss_fn(yhat, y_train_tensor)
# compute gradients
loss.backward()
# carry out one optimization step with Adam
optimizer.step()
# reset gradients to zero
optimizer.zero_grad()
# plot result of learning
if step % image_cycle == 0:
y_est = layer3(activation_function(layer2(activation_function(layer1(x_eval_tensor)))))
ax.clear()
ax.scatter(x, y_noise)
ax.plot(x, y)
ax.plot(x, y_est.detach().cpu().numpy())
fig.canvas.draw()
"""
Explanation: Main loop of learning: calculate the output of a mini-batch, compute the loss and the respective gradients, then do the optimization step. It is important to reset the gradients to zero after each step.
End of explanation
"""
|
Kaggle/learntools
|
notebooks/intro_to_programming/raw/ex5.ipynb
|
apache-2.0
|
from learntools.core import binder
binder.bind(globals())
from learntools.intro_to_programming.ex5 import *
print('Setup complete.')
"""
Explanation: In the tutorial, you learned how to define and modify Python lists. In this exercise, you will use your new knowledge to solve several problems.
Set up the notebook
Run the next code cell without changes to set up the notebook.
End of explanation
"""
# Do not change: Initial menu for your restaurant
menu = ['stewed meat with onions', 'bean soup', 'risotto with trout and shrimp',
'fish soup with cream and onion', 'gyro']
# TODO: remove 'bean soup', and add 'roasted beet salad' to the end of the menu
____
# Do not change: Check your answer
q1.check()
#%%RM_IF(PROD)%%
# has extra values that need to be removed
q1.assert_check_failed()
#%%RM_IF(PROD)%%
# not a python list
menu = 2
q1.assert_check_failed()
#%%RM_IF(PROD)%%
# items missing
menu = ['stewed meat with onions', 'bean soup']
q1.assert_check_failed()
#%%RM_IF(PROD)%%
# items duplicated
menu = ['stewed meat with onions', 'fish soup with cream and onion', 'gyro',
'roasted beet salad', 'risotto with trout and shrimp', 'risotto with trout and shrimp']
q1.assert_check_failed()
#%%RM_IF(PROD)%%
# items out of order
menu = ['stewed meat with onions', 'fish soup with cream and onion', 'gyro',
'roasted beet salad', 'risotto with trout and shrimp']
q1.assert_check_passed()
#%%RM_IF(PROD)%%
# solution works
menu = ['stewed meat with onions', 'bean soup', 'risotto with trout and shrimp',
'fish soup with cream and onion', 'gyro']
# Remove 'bean soup', and add 'roasted beet salad' to the end of the menu
menu.remove('bean soup')
menu.append('roasted beet salad')
q1.assert_check_passed()
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q1.hint()
# Uncomment to see the solution
#_COMMENT_IF(PROD)_
q1.solution()
"""
Explanation: Question 1
You own a restaurant with five food dishes, organized in the Python list menu below. One day, you decide to:
- remove bean soup ('bean soup') from the menu, and
- add roasted beet salad ('roasted beet salad') to the menu.
Implement this change to the list below. While completing this task,
- do not change the line that creates the menu list.
- your answer should use .remove() and .append().
End of explanation
"""
# Do not change: Number of customers each day for the last month
num_customers = [137, 147, 135, 128, 170, 174, 165, 146, 126, 159,
141, 148, 132, 147, 168, 153, 170, 161, 148, 152,
141, 151, 131, 149, 164, 163, 143, 143, 166, 171]
# TODO: Fill in values for the variables below
avg_first_seven = ____
avg_last_seven = ____
max_month = ____
min_month = ____
# Do not change: Check your answer
q2.check()
#%%RM_IF(PROD)%%
# Fill in values for the variables below
avg_first_seven = sum(num_customers[:7])/7
avg_last_seven = sum(num_customers[-7:])/7
max_month = max(num_customers)
min_month = min(num_customers)
q2.assert_check_passed()
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q2.hint()
# Uncomment to see the solution
#_COMMENT_IF(PROD)_
q2.solution()
"""
Explanation: Question 2
The list num_customers contains the number of customers who came into your restaurant every day over the last month (which lasted thirty days). Fill in values for each of the following:
- avg_first_seven - average number of customers who visited in the first seven days
- avg_last_seven - average number of customers who visited in the last seven days
- max_month - number of customers on the day that got the most customers in the last month
- min_month - number of customers on the day that got the least customers in the last month
Answer this question by writing code. For instance, if you have to find the minimum value in a list, use min() instead of scanning for the smallest value and directly filling in a number.
End of explanation
"""
flowers = "pink primrose,hard-leaved pocket orchid,canterbury bells,sweet pea,english marigold,tiger lily,moon orchid,bird of paradise,monkshood,globe thistle"
"""
Explanation: Question 3
In the tutorial, we gave an example of a Python string with information that was better as a list.
End of explanation
"""
print(flowers.split(","))
"""
Explanation: You can actually use Python to quickly turn this string into a list with .split(). In the parentheses, we need to provide the character that should be used to mark the end of one list item and the beginning of another, and enclose it in quotation marks. In this case, that character is a comma.
End of explanation
"""
# Do not change: Define two Python strings
alphabet = "A.B.C.D.E.F.G.H.I.J.K.L.M.N.O.P.Q.R.S.T.U.V.W.X.Y.Z"
address = "Mr. H. Potter,The cupboard under the Stairs,4 Privet Drive,Little Whinging,Surrey"
# TODO: Convert strings into Python lists
letters = ____
formatted_address = ____
# Do not change: Check your answer
q3.check()
#%%RM_IF(PROD)%%
letters = alphabet.split(".")
formatted_address = address.split(",")
q3.assert_check_passed()
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q3.hint()
# Uncomment to see the solution
#_COMMENT_IF(PROD)_
q3.solution()
"""
Explanation: Now it is your turn to try this out! Create two Python lists:
- letters should be a Python list where each entry is an uppercase letter of the English alphabet. For instance, the first two entries should be "A" and "B", and the final two entries should be "Y" and "Z". Use the string alphabet to create this list.
- address should be a Python list where each row in address is a different item in the list. Currently, each row in address is separated by a comma.
End of explanation
"""
test_ratings = [1, 2, 3, 4, 5]
"""
Explanation: Question 4
In the Python course, you'll learn all about list comprehensions, which allow you to create a list based on the values in another list. In this question, you'll get a brief preview of how they work.
Say we're working with the list below.
End of explanation
"""
test_liked = [i>=4 for i in test_ratings]
print(test_liked)
"""
Explanation: Then we can use this list (test_ratings) to create a new list (test_liked) where each item has been turned into a boolean, depending on whether or not the item is greater than or equal to four.
End of explanation
"""
def percentage_liked(ratings):
list_liked = [i>=4 for i in ratings]
# TODO: Complete the function
percentage_liked = ____
return percentage_liked
# Do not change: should return 0.5
percentage_liked([1, 2, 3, 4, 5, 4, 5, 1])
# Do not change: Check your answer
q4.check()
#%%RM_IF(PROD)%%
def percentage_liked(ratings):
list_liked = [i >= 4 for i in ratings]
percentage_liked = sum(list_liked)/len(list_liked)
return percentage_liked
q4.assert_check_passed()
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q4.hint()
# Uncomment to see the solution
#_COMMENT_IF(PROD)_
q4.solution()
"""
Explanation: In this question, you'll use this list comprehension to define a function percentage_liked() that takes one argument as input:
- ratings: list of ratings that people gave to a movie, where each rating is a number between 1-5, inclusive
We say someone liked the movie, if they gave a rating of either 4 or 5. Your function should return the percentage of people who liked the movie.
For instance, if we supply a value of [1, 2, 3, 4, 5, 4, 5, 1], then 50% (4/8) of the people liked the movie, and the function should return 0.5.
Part of the function has already been completed for you. You need only use list_liked to calculate percentage_liked.
End of explanation
"""
# TODO: Edit the function
def percentage_growth(num_users, yrs_ago):
growth = (num_users[len(num_users)-1] - num_users[len(num_users)-yrs_ago])/num_users[len(num_users)-2]
return growth
# Do not change: Variable for calculating some test examples
num_users_test = [920344, 1043553, 1204334, 1458996, 1503323, 1593432, 1623463, 1843064, 1930992, 2001078]
# Do not change: Should return .036
print(percentage_growth(num_users_test, 1))
# Do not change: Should return 0.66
print(percentage_growth(num_users_test, 7))
# Do not change: Check your answer
q5.check()
#%%RM_IF(PROD)%%
#default answer fails
q5.assert_check_failed()
#%%RM_IF(PROD)%%
def percentage_growth(num_users, yrs_ago):
growth = (num_users[len(num_users)-1] - num_users[len(num_users)-yrs_ago-1])/num_users[len(num_users)-yrs_ago-1]
return growth
q5.assert_check_passed()
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q5.hint()
# Uncomment to see the solution
#_COMMENT_IF(PROD)_
q5.solution()
"""
Explanation: 🌶️ Question 5
Say you're doing analytics for a website. You need to write a function that returns the percentage growth in the total number of users relative to a specified number of years ago.
Your function percentage_growth() should take two arguments as input:
- num_users = Python list with the total number of users each year. So num_users[0] is the total number of users in the first year, num_users[1] is the total number of users in the second year, and so on. The final entry in the list gives the total number of users in the most recently completed year.
- yrs_ago = number of years to go back in time when calculating the growth percentage
For instance, say num_users = [920344, 1043553, 1204334, 1458996, 1503323, 1593432, 1623463, 1843064, 1930992, 2001078].
- if yrs_ago = 1, we want the function to return a value of about 0.036. This corresponds to a percentage growth of approximately 3.6%, calculated as (2001078 - 1930992)/1930992.
- if yrs_ago = 7, we would want to return approximately 0.66. This corresponds to a percentage growth of approximately 66%, calculated as (2001078 - 1204334)/1204334.
Your coworker sent you a draft of a function, but it doesn't seem to be doing the correct calculation. Can you figure out what has gone wrong and make the needed changes?
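One way to reason about the indexing (a hedged sketch of the idea, not necessarily the exercise's intended solution): the most recent year is the last entry of the list, and the value from `yrs_ago` years earlier sits `yrs_ago` positions before it, so negative indexing keeps the arithmetic simple:

```python
num_users = [920344, 1043553, 1204334, 1458996, 1503323, 1593432, 1623463, 1843064, 1930992, 2001078]
yrs_ago = 7
latest = num_users[-1]               # most recently completed year
earlier = num_users[-yrs_ago - 1]    # yrs_ago years before that
print((latest - earlier) / earlier)  # approximately 0.66
```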
End of explanation
"""
|
dereneaton/ipyrad
|
tests/cookbook-bucky.ipynb
|
gpl-3.0
|
## conda install -c BioBuilds mrbayes
## conda install -c ipyrad ipyrad
## conda install -c ipyrad bucky
## import Python libraries
import ipyrad.analysis as ipa
import ipyparallel as ipp
"""
Explanation: Cookbook for running BUCKy in parallel in a Jupyter notebook
This notebook uses the Pedicularis example data set from the first empirical ipyrad tutorial. Here I show how to run BUCKy on a large set of loci parsed from the output file with the .alleles.loci ending. All code in this notebook is Python. You can simply follow along and execute this same code in a Jupyter notebook of your own.
Software requirements for this notebook
All required software can be installed through conda by running the commented out code below in a terminal.
End of explanation
"""
## look for running ipcluster instance, and create load-balancer
ipyclient = ipp.Client()
print "{} engines found".format(len(ipyclient))
"""
Explanation: Cluster setup
To execute code in parallel we will use the ipyparallel Python library. A quick guide to starting a parallel cluster locally can be found here, and instructions for setting up a remote cluster on an HPC system are available here. In either case, this notebook assumes you have started an ipcluster instance that it can find, which the cell below will test.
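If you have not started a cluster yet, a minimal local setup is to run the following in a terminal before executing the connection cell (the engine count of 4 is just an example; use however many cores you have available):

```
ipcluster start --n=4
```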
End of explanation
"""
## make a list of sample names you wish to include in your BUCKy analysis
samples = [
"29154_superba",
"30686_cyathophylla",
"41478_cyathophylloides",
"33413_thamno",
"30556_thamno",
"35236_rex",
"40578_rex",
"38362_rex",
"33588_przewalskii",
]
## initiate a bucky object
c = ipa.bucky(
name="buckytest",
data="analysis-ipyrad/pedic_outfiles/pedic.alleles.loci",
workdir="analysis-bucky",
samples=samples,
minsnps=0,
maxloci=100,
)
## print the params dictionary
c.params
"""
Explanation: Create a bucky analysis object
The two required arguments are the name and data arguments. The data argument should be a .loci file or a .alleles.loci file. The name will be used to name output files, which will be written to {workdir}/{name}/{number}.nexus. Bucky doesn't deal well with missing data, so loci will only be included if they contain data for all samples in the analysis. By default, all samples found in the loci file will be used, unless you enter a list of names (the samples argument) to subsample taxa, which we do here. It is best to select one individual per species or subspecies. You can set a number of additional parameters in the .params dictionary. Here I use the maxloci argument to limit the total number of loci so that the example analysis will finish faster. But in practice, BUCKy runs quite fast and I would typically just use all of your loci in a real analysis.
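Parameters can also be adjusted on the object after it has been created, for example (attribute-style access on .params is an assumption based on how bucky_alpha is referenced later in this notebook; the values here are only illustrative):

```python
## hedged sketch: tweak parameters before running
c.params.maxloci = 200
c.params.bucky_alpha = [0.1, 1.0, 10.0]
```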
End of explanation
"""
## This will write nexus files to {workdir}/{name}/[number].nex
c.write_nexus_files(force=True)
"""
Explanation: Write data to nexus files
As you will see below, one step of this analysis is to convert the data into nexus files with a mrbayes code block. Let's run that step quickly here just to see what the converted files look like.
End of explanation
"""
## print an example nexus file
! cat analysis-bucky/buckytest/1.nex
"""
Explanation: An example nexus file
End of explanation
"""
## run the complete analysis
c.run(force=True, ipyclient=ipyclient)
"""
Explanation: Complete a BUCKy analysis
There are four parts to a full BUCKy analysis. The first is converting the data into nexus files; the remaining steps are .run_mrbayes(), then .run_mbsum(), and finally .run_bucky(). Each step uses the files produced by the previous one. You can use the force flag to overwrite existing files. An ipyclient should be provided to distribute the jobs in parallel. The parallelization is especially important for the mrbayes analyses, where more cores will lead to approximately linear speed improvements. An ipyrad.bucky analysis object will run all four steps sequentially by simply calling the .run() command. See the end of the notebook for results.
End of explanation
"""
## (1) This will write nexus files to {workdir}/{name}/[number].nex
c.write_nexus_files(force=True)
## (2) distributes mrbayes jobs across the parallel client
c.run_mrbayes(force=True, ipyclient=ipyclient)
## (3) this step is fast, simply summing the gene-tree posteriors
c.run_mbsum(force=True, ipyclient=ipyclient)
## (4) infer concordance factors with BUCKy. This will run in parallel
## for however many alpha values are in c.params.bucky_alpha list
c.run_bucky(force=True, ipyclient=ipyclient)
"""
Explanation: Alternatively, you can run each step separately
End of explanation
"""
## print first 50 lines of a results files
! head -n 50 analysis-bucky/buckytest/CF-a1.0.concordance
"""
Explanation: Convenient access to results
View the results in the file [workdir]/[name]/CF-{alpha-value}.concordance. We haven't yet developed any further ipyrad tools for parsing these results, but hope to do so in the future. The main results you are typically interested in are the Primary Concordance Tree and the Splits in the Primary Concordance Tree.
End of explanation
"""
|
ababino/circles_metacog
|
circles_metacog_analysis.ipynb
|
mit
|
%matplotlib inline
from __future__ import unicode_literals
import pandas as pd
import numpy as np
from glob import glob
from matplotlib import pyplot as plt
import seaborn as sns
from metacog_utils import add_sdt_utils, metacog_dfs, jointplot_group
from IPython.display import display
"""
Explanation: Circles Metacog Analysis
Imports
End of explanation
"""
dfs = []
for f in glob('data_anto/*.csv'):
dfs.append(pd.read_csv(f, encoding='utf-8'))
df = pd.concat(dfs)
df = add_sdt_utils(df)
means, counts, proba, mecog = metacog_dfs(df)
"""
Explanation: Load Data
The metacog_dfs function creates 4 dataframes with metacognitive information
End of explanation
"""
g = sns.FacetGrid(df[~df['TrialType'].str.contains('easy|hard')], col='Name', col_wrap=5)
g.map(plt.plot, 'Trial', 'Scale')
g = sns.FacetGrid(df[~df['TrialType'].str.contains('easy|hard')], col='Name', col_wrap=5)
g.map(plt.plot, 'trials.thisTrialN', 'Signal')
g = sns.FacetGrid(df[~df['TrialType'].str.contains('easy|hard')], col='Name', col_wrap=5)
g.map(plt.plot, 'trials.thisTrialN', 'cmax2')
"""
Explanation: Sanity check
Let's see if the scale converges in order to keep the performance level constant
End of explanation
"""
df.groupby('Name')[['Response', 'Signal', 'Confidence', 'Wager']].mean()
sns.jointplot('Response', 'Wager', means, marginal_kws={'hist': False, 'kde': True}, stat_func=None)
sns.jointplot('Response', 'Confidence', means, marginal_kws={'hist': False, 'kde': True}, stat_func=None)
sns.jointplot('Wager', 'Confidence', means, marginal_kws={'hist': False, 'kde': True}, stat_func=None)
df.groupby('Name')[['Response RT', 'Wager RT', 'Confidence RT']].mean()
df.groupby('Name')[['Response', 'Wager', 'Confidence']].count()
df[df['TrialType'].str.contains('easy|hard')].pivot_table(index='Name', columns='TrialType', values='Response')
"""
Explanation: It is also important to see that no subject has a Wager value of 1 (or 0)
End of explanation
"""
|
matthias-k/pysaliency-examples
|
notebooks/Demo_Saliency_Maps.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

import pysaliency
import pysaliency.external_datasets
data_location = 'cache/datasets'
mit_stimuli, mit_fixations = pysaliency.external_datasets.get_mit1003(location=data_location)
index = 0
plt.imshow(mit_stimuli.stimuli[index])
f = mit_fixations[mit_fixations.n == index]
plt.scatter(f.x, f.y, color='r')
_ = plt.axis('off')
"""
Explanation: Pysaliency
Saliency Map Models
pysaliency comes with a variety of features to evaluate saliency map models. This notebooks demonstrates these features.
First we load the MIT1003 dataset:
End of explanation
"""
cutoff = 10
short_stimuli = pysaliency.FileStimuli(filenames=mit_stimuli.filenames[:cutoff])
short_fixations = mit_fixations[mit_fixations.n < cutoff]
"""
Explanation: As some evaluation methods can take quite a long time to run, we prepare a smaller dataset consisting of only the first 10 stimuli:
End of explanation
"""
aim = pysaliency.AIM(location='cache/model_sources', cache_location='cache/model_caches/AIM')
smap = aim.saliency_map(mit_stimuli[10])
plt.imshow(smap)
plt.axis('off');
"""
Explanation: We will use the saliency model AIM by Bruce and Tsotsos
End of explanation
"""
aim.AUC(short_stimuli, short_fixations, nonfixations='uniform', verbose=True)
"""
Explanation: Evaluating Saliency Map Models
Pysaliency provides a variety of methods for evaluating saliency models, both saliency-map-based models and probabilistic models. Here we demonstrate the evaluation of saliency map models.
We can evaluate area under the curve with respect to a uniform nonfixation distribution:
End of explanation
"""
aim.AUC(short_stimuli, short_fixations, nonfixations='shuffled', verbose=True)
"""
Explanation: By setting nonfixations='shuffled' the fixations from all other stimuli will be used:
End of explanation
"""
aim.AUC(short_stimuli, short_fixations, nonfixations=short_fixations, verbose=True)
"""
Explanation: Also, you can hand over arbitrary Fixations instances as nonfixations:
End of explanation
"""
perf = aim.fixation_based_KL_divergence(short_stimuli, short_fixations, nonfixations='uniform')
print('Fixation based KL-divergence wrt. uniform nonfixations: {:.02f}'.format(perf))
perf = aim.fixation_based_KL_divergence(short_stimuli, short_fixations, nonfixations='shuffled')
print('Fixation based KL-divergence wrt. shuffled nonfixations: {:.02f}'.format(perf))
perf = aim.fixation_based_KL_divergence(short_stimuli, short_fixations, nonfixations=short_fixations)
print('Fixation based KL-divergence wrt. identical nonfixations: {:.02f}'.format(perf))
"""
Explanation: Another popular saliency metric is the fixation based KL-Divergence as introduced by Itti. Usually it is just called KL-Divergence which creates confusion as there is also another completely different saliency metric called KL-Divergence (here called image based KL-Divergence, see below).
Like AUC, fixation based KL-Divergence needs a nonfixation distribution to compare to. Again, you can use uniform, shuffled, or any Fixations instance for this.
End of explanation
"""
gold_standard = pysaliency.FixationMap(short_stimuli, short_fixations, kernel_size=30)
perf = aim.image_based_kl_divergence(short_stimuli, gold_standard)
print("Image based KL-divergence: {} bit".format(perf / np.log(2)))
"""
Explanation: The image based KL-Divergence can be calculated, too. Unlike all previous metrics, it needs a gold standard to compare to. Here we use a fixation map that has been blurred with a Gaussian kernel of size 30px. Often a kernel size of one degree of visual angle is used.
End of explanation
"""
gold_standard.image_based_kl_divergence(short_stimuli, gold_standard, minimum_value=1e-20)
"""
Explanation: The gold standard is assumed to be the real distribution, hence it has an image based KL divergence of zero:
End of explanation
"""
class MySaliencyMapModel(pysaliency.SaliencyMapModel):
def _saliency_map(self, stimulus):
return np.ones((stimulus.shape[0], stimulus.shape[1]))
msmm = MySaliencyMapModel()
"""
Explanation: To implement your own saliency map model, inherit from pysaliency.SaliencyMapModel and implement the _saliency_map method.
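Once defined, such a model can be evaluated with the same methods used above, for example (a quick sketch; a constant saliency map will of course only score at chance level):

```python
msmm.AUC(short_stimuli, short_fixations, nonfixations='uniform')
```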
End of explanation
"""
|
ethanrowe/flowz
|
userguide/02. Intro to Artifacts.ipynb
|
mit
|
# An ExtantArtifact that will be used here and elsewhere in the guide
class GuideExtantArtifact(ExtantArtifact):
def __init__(self, num):
super(GuideExtantArtifact, self).__init__(self.get_me, name='GuideExtantArtifact')
self.num = num
@gen.coroutine
def get_me(self):
# O, pardon! since a crooked figure may attest in little place a million;
# On your imaginary forces work... Piece out our imperfections with your thoughts;
# Think when we talk of horses, that you see them printing their proud hoofs in the receiving earth;
# For 'tis your thoughts that now must deck our kings...
# (in other words, pretend this got some impressive data asynchronously)
raise gen.Return((self.num, self.num * 100))
chan = IterChannel([GuideExtantArtifact(i) for i in range(3)])
print_chans(chan)
"""
Explanation: Introduction to Artifacts
Background
If channels only used sources like lists and dictionaries already in memory, they would have little value. The true value comes when getting data from external sources or from heavy computations that take time. In such an environment, accessing the data asynchronously and concurrently -- and even "out of order" in some cases -- can lead to nice performance benefits, and possibly more elegant code.
A significant boost in unlocking that asynchrony is artifacts. Artifacts are objects that know how to retrieve, compute, or transform their data, but don't necessarily do it right away. They delay getting the data until requested to do so, and then they use the tornado/futures infrastructure to get their data asynchronously. Once ready, their data is available to others.
ExtantArtifact
An ExtantArtifact is an artifact that represents data that is known to exist, and it uses a tornado coroutine to get its data. It is particularly suitable for fetching data via existing asynchronous mechanisms, like httpclient.AsyncHTTPClient.
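For instance, an ExtantArtifact that fetches a URL asynchronously might look roughly like the following (a hedged sketch that just mirrors the GuideExtantArtifact pattern above; the class name and URL handling are made up, and tornado's AsyncHTTPClient is assumed to be available):

```python
from tornado import gen, httpclient

class UrlArtifact(ExtantArtifact):
    def __init__(self, url):
        super(UrlArtifact, self).__init__(self.get_me, name='UrlArtifact')
        self.url = url

    @gen.coroutine
    def get_me(self):
        # fetch the page asynchronously and hand back its body
        response = yield httpclient.AsyncHTTPClient().fetch(self.url)
        raise gen.Return(response.body)
```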
End of explanation
"""
def lame_deriver(num):
return (num, -10 * num)
chan = IterChannel(DerivedArtifact(lame_deriver, i) for i in range(3))
print_chans(chan)
"""
Explanation: Surprise! Artifacts have logging built into them. In the case of ExtantArtifact, it logs before calling the getter (which then yields) and after it has completed the retrieval of the data. (These log messages are a blend of DEBUG and INFO level, so the detail will vary at times in this guide.)
DerivedArtifact
A DerivedArtifact is an artifact that uses a normal synchronous function as a deriver of its data. That function will be passed any number of "sources", and flowz will make sure that all of the sources have been fully resolved before being passed as parameters. For instance, if the sources are artifacts, they will be resolved to their values.
End of explanation
"""
def not_as_lame_deriver(val):
num, extant_val = val
return (num, extant_val / -10)
chan = IterChannel(GuideExtantArtifact(i) for i in range(3)).map(lambda a: DerivedArtifact(not_as_lame_deriver, a))
print_chans(chan)
"""
Explanation: Here again we see logging, but a bit more. Before the deriver is called, flowz first makes sure that each one of the sources is resolved. Then the deriver is called -- synchronously -- and the results are ready.
Note above that there are two "ready" messages before the first value is actually printed. That, again, is an indicator of the asynchronous processing of the channels.
In practice, DerivedArtifacts are used to gather and transform data that began with ExtantArtifacts.
End of explanation
"""
from concurrent import futures
executor = futures.ThreadPoolExecutor(1)
chan = IterChannel(ThreadedDerivedArtifact(executor, lame_deriver, i) for i in range(3))
print_chans(chan)
# Okay, no more logging!
config_logging('WARN')
"""
Explanation: Now you can see that the firing of each DerivedArtifact caused its source to be resolved, which meant that its wrapped GuideExtantArtifact retrieved its value. Only after that was the result (a tuple) passed into the deriver.
ThreadedDerivedArtifact
A ThreadedDerivedArtifact is just like a DerivedArtifact, but it is passed a concurrent.futures.ThreadPoolExecutor on which it will run.
Some things are IO-bound and thus quite amenable to the async IO pattern around which tornado is built. But they aren't implemented in terms of async IO. In such cases, you can get good results by pushing the blocking IO onto a thread pool executor, which this enables. The individual threads can block on the synchronous IO, but the main IOLoop continues on its merry way all the while. So if you're dealing with synchronous IO-bound clients, put 'em in here. (NOTE: boto and boto3 are prime examples of this.)
Some routines are pretty hoggy in terms of computation, and they'll starve the IOLoop unless you take steps to offload them. In such cases, getting them onto a thread pool executor (and a shallow pool, at that) can be helpful.
End of explanation
"""
chan = IterChannel(TransformedArtifact(GuideExtantArtifact(i), transformer=not_as_lame_deriver) for i in range(3))
print_chans(chan)
"""
Explanation: TransformedArtifact
A TransformedArtifact wraps another artifact and transforms its value. It's not that different from a DerivedArtifact. One advantage that it has -- inherited from its superclass WrappedArtifact, which is rarely used directly -- is that indexing and attribute calls are passed on through to the underlying artifact.
End of explanation
"""
|
arne-cl/alt-mulig
|
python/rstdt-lisp-import-test.ipynb
|
gpl-3.0
|
import os
import sys
import glob
import nltk
RSTDT_MAIN_ROOT = os.path.expanduser('~/repos/rst_discourse_treebank/data/RSTtrees-WSJ-main-1.0')
RSTDT_DOUBLE_ROOT = os.path.expanduser('~/repos/rst_discourse_treebank/data/RSTtrees-WSJ-double-1.0')
RSTDT_TOKENIZED_ROOT = os.path.expanduser('~/repos/rst_discourse_treebank/data/RSTtrees-WSJ-main-1.0-tokenized')
RSTDT_TEST_FILE = os.path.join(RSTDT_MAIN_ROOT, 'TEST', 'wsj_1306.out.dis')
RSTDT_TOKENIZED_TEST_FILE = os.path.join(RSTDT_TOKENIZED_ROOT, 'TEST', 'wsj_1306.out.dis')
PTB_WSJ_ROOT_DIR = os.path.expanduser('~/corpora/pennTreebank/parsed/mrg/wsj')
"""
Explanation: parse RST-DT documents in LISP/S-Expression format
RST-DT *.rs3 files are broken, cf. my notebook on RST-DT/PTB merging
only use the *.dis files, the *.lisp.name and *.step.name may be broken, too
the RST-DT people probably used Marcu's tools
to convert their annotations into *.dis format
The RST Discourse Treebank contains 385 WSJ articles from PTB with Rhetorical Structure Theory (RST) annotations.
The following information was taken from the RST-DT documentation:
RSTtrees-WSJ-main-1.0
This directory contains 385 Wall Street Journal articles, broken into TRAINING (347 documents) and TEST (38 documents) sub-directories.
Filenames are in one of two forms:
* wsj_####.ext (380 documents)
* file#.ext(5 documents)
The 5 files named file# were identified as the following filenames in Treebank-2:
file1 - 07/wsj_0764
file2 - 04/wsj_0430
file3 - 07/wsj_0766
file4 - 07/wsj_0778
file5 - 21/wsj_2172
(More information is available in a
compressed file
via ftp, which provides the relationship between the 2,499 PTB filenames and the corresponding WSJ DOCNO strings in TIPSTER.)
<docno>.rst/
A directory with three files:
<docno>.lisp.name - discourse structure created by a human judge for a text.
<docno>.step.name - list of all human actions taken
during the creation of the discourse structure
## -- a file with an integer as its name - temp file;
contains last human action during creation of the discourse structure
All annotations were produced using a discourse annotation tool that can be downloaded from http://www.isi.edu/~marcu/discourse.
The files in the .rst directories are provided only to enable interested users to visualize and print in a convenient format the discourse annotations in the corpus.
**<docno>.dis** - contains the manually annotated discourse structure
of the file **<docno>**
The **.dis** files were generated automatically from the **.step** and **.lisp**
files using a mapping program.
More information about this program is available at http://www.isi.edu/~marcu/discourse.
IMPORTANT NOTE: The .lisp files may contain errors introduced by the discourse annotation tool. Please use the .lisp and .step files only for visualizing the trees.
Use the .dis files for training/testing purposes (the mapping program that produced the .dis file was written so as to eliminate the errors introduced by the annotation tool).
<docno>.edus - edus (elementary discourse units) listed line by line.
RSTtrees-WSJ-double-1.0
This directory contains the same types of files as the subdirectory RSTtrees-WSJ-main-1.0, for 53 documents which were reviewed by a second analyst.
End of explanation
"""
FILES_UNPARSABLE_WITH_NLTK = set([
'/home/arne/corpora/rst_discourse_treebank/data/RSTtrees-WSJ-main-1.0/TRAINING/wsj_1107.out.dis',
'/home/arne/corpora/rst_discourse_treebank/data/RSTtrees-WSJ-main-1.0/TRAINING/wsj_2353.out.dis',
'/home/arne/corpora/rst_discourse_treebank/data/RSTtrees-WSJ-main-1.0/TRAINING/wsj_2367.out.dis'])
def get_nodelabel(node):
"""returns the node label of an nltk Tree or one of its subtrees"""
if isinstance(node, nltk.tree.Tree):
return node.label()
elif isinstance(node, unicode):
return node.encode('utf-8')
else:
raise ValueError("Unexpected node type: {}, {}".format(type(node), node))
from nltk.corpus.reader import BracketParseCorpusReader
def parse_rstfile_nltk(rst_filepath):
"""parse a *.dis RST file into an nltk.tree.Tree"""
rst_path, rst_filename = os.path.split(rst_filepath)
parsed_doc = BracketParseCorpusReader(rst_path, [rst_filename])
parsed_sents_iter = parsed_doc.parsed_sents()
return parsed_sents_iter[0] # there's only one tree in a *.dis
from collections import defaultdict
def nested_tree_count(tree, result_dict=None):
if not result_dict:
result_dict = defaultdict(lambda : defaultdict(int))
for i, subtree in enumerate(tree):
if isinstance(subtree, nltk.tree.Tree) and subtree.label() in ('Nucleus', 'Satellite'):
rhs = tuple([get_nodelabel(st) for st in subtree])
result_dict[get_nodelabel(subtree)][rhs] += 1
if rhs[0] == u'leaf' and len(rhs) != 3: # (leaf, rel2par, text)
raise ValueError('Badly escaped s-expression\n{}\n'.format(subtree))
nested_tree_count(subtree, result_dict)
"""
Explanation: Find unparsable files
only 3 files that nltk's Bracket parser can't handle at all
End of explanation
"""
# BADLY_ESCAPED_FILES = set()
# for folder in ('TEST', 'TRAINING'):
# for rst_fpath in glob.glob(os.path.join(RSTDT_MAIN_ROOT, folder, '*.dis')):
# if rst_fpath not in FILES_UNPARSABLE_WITH_NLTK:
# rst_tree = parse_rstfile_nltk(rst_fpath)
# try:
# nested_tree_count(rst_tree)
# except ValueError as e:
# BADLY_ESCAPED_FILES.add(rst_fpath)
# len(BADLY_ESCAPED_FILES) # 22 files
"""
Explanation: Files with bad escaping
22 badly escaped files
"(" and ")" aren't escaped in text field!
/home/arne/corpora/rst_discourse_treebank/data/RSTtrees-WSJ-main-1.0/TRAINING/wsj_0612.out.dis
( Satellite (span 22 28) (rel2par elaboration-set-member-e)
( Nucleus (span 22 23) (rel2par span)
( Nucleus (leaf 22) (rel2par span) (text _!Canadian Imperial Bank of Commerce_!) )
( Satellite (leaf 23) (rel2par elaboration-additional) (text _!(Canada) --_!) )
)
End of explanation
"""
import sys
import traceback
import sexpdata
def parse_rstfile_sexpdata(rst_filepath):
with open(rst_filepath) as rstfile:
try:
return sexpdata.load(rstfile)
except sexpdata.ExpectClosingBracket as e:
raise ValueError(u"{}\n{}\n\n".format(rst_fpath, e))
except sexpdata.ExpectNothing as e:
error_msg = e.args[0][:100] # complete msg would contain the whole document
raise ValueError(u"{}\n{}...\n\n".format(rst_fpath, e.args[0][:100]))
except AssertionError as e:
raise ValueError(u"{}\n{}\n\n".format(rst_fpath, traceback.format_exc()))
except AttributeError as e:
raise ValueError(u"{}\n{}\n\n".format(rst_fpath, traceback.format_exc()))
# FILES_UNPARSABLE_WITH_SEXPDATA = set()
# for folder in ('TEST', 'TRAINING'):
# for rst_fpath in glob.glob(os.path.join(RSTDT_MAIN_ROOT, folder, '*.dis')):
# try:
# parse_rstfile_sexpdata(rst_fpath)
# except ValueError as e:
# FILES_UNPARSABLE_WITH_SEXPDATA.add(rst_fpath)
# len(FILES_UNPARSABLE_WITH_SEXPDATA) # 113 unparsable files
"""
Explanation: Files unparsable with sexpdata (due to bad bracketing)
113 files that aren't valid s-expressions (nltk parses them, as it is very forgiving)
End of explanation
"""
# ALL_UNPARSABLE_FILES = FILES_UNPARSABLE_WITH_NLTK.union(FILES_UNPARSABLE_WITH_SEXPDATA).union(BADLY_ESCAPED_FILES)
# len(ALL_UNPARSABLE_FILES) # 124 unparsable files
"""
Explanation: set of all 'unparsable' files (before tokenization and text escaping)
End of explanation
"""
sexp_tree = parse_rstfile_sexpdata(RSTDT_TEST_FILE)
# a list that contains Symbol instances (and lists of Symbol instances and integers)
root = sexp_tree[0]
print sexp_tree[1]
print sexp_tree[1][0]
print sexp_tree[1][0].value()
nuc_tree = sexp_tree[2]
print nuc_tree[1][0].value()
print nuc_tree[1][1], nuc_tree[1][2]
for i, e in enumerate(nuc_tree):
print i, e, '\n'
"""
Explanation: try parsing files into graphs
Summary of RST tree rules
Root --> span (N+ | N S | S N)
Nucleus --> leaf rel2par text (N | S | re.compile('.*_!') )?
Nucleus --> span rel2par (N+ | N S | S N | S N S)
Satellite --> leaf rel2par text (N | re.compile('.*_!') )?
Satellite --> span rel2par (N+ | N S | S | S N | S N S)
rel2par --> any RST relation string
End of explanation
"""
import discoursegraphs as dg
from collections import Counter
class RSTLispDocumentGraph(dg.DiscourseDocumentGraph):
"""
A directed graph with multiple edges (based on a networkx
MultiDiGraph) that represents the rhetorical structure of a
document.
Attributes
----------
name : str
name, ID of the document or file name of the input file
ns : str
the namespace of the document (default: rst)
root : str
name of the document root node ID
tokens : list of str
sorted list of all token node IDs contained in this document graph
"""
def __init__(self, dis_filepath, name=None, namespace='rst',
tokenize=True, precedence=False):
"""
Creates an RSTLispDocumentGraph from a Rhetorical Structure *.dis file and adds metadata
to it.
Parameters
----------
dis_filepath : str
absolute or relative path to the Rhetorical Structure *.dis file to be
parsed.
name : str or None
the name or ID of the graph to be generated. If no name is
given, the basename of the input file is used.
namespace : str
the namespace of the document (default: rst)
precedence : bool
If True, add precedence relation edges
(root precedes token1, which precedes token2 etc.)
"""
# super calls __init__() of base class DiscourseDocumentGraph
super(RSTLispDocumentGraph, self).__init__()
self.name = name if name else os.path.basename(dis_filepath)
self.ns = namespace
self.root = 0
self.add_node(self.root, layers={self.ns}, label=self.ns+':root_node')
if 'discoursegraph:root_node' in self:
self.remove_node('discoursegraph:root_node')
self.tokenized = tokenize
self.tokens = []
self.rst_tree = parse_rstfile_sexpdata(dis_filepath)
self.parse_rst_tree(self.rst_tree)
def parse_rst_tree(self, rst_tree, indent=0):
tree_type = self.get_tree_type(rst_tree)
assert tree_type in ('Root', 'Nucleus', 'Satellite')
if tree_type == 'Root':
span, children = rst_tree[1], rst_tree[2:]
for child in children:
self.parse_rst_tree(child, indent=indent+1)
else: # tree_type in ('Nucleus', 'Satellite')
node_id = self.get_node_id(rst_tree)
node_type = self.get_node_type(rst_tree)
relation_type = self.get_relation_type(rst_tree)
if node_type == 'leaf':
edu_text = self.get_edu_text(rst_tree[3])
self.add_node(node_id, attr_dict={self.ns+':text': edu_text,
'label': u'{}: {}'.format(node_id, edu_text[:20])})
if self.tokenized:
edu_tokens = edu_text.split()
for i, token in enumerate(edu_tokens):
token_node_id = '{}_{}'.format(node_id, i)
self.tokens.append(token_node_id)
self.add_node(token_node_id, attr_dict={self.ns+':token': token,
'label': token})
self.add_edge(node_id, '{}_{}'.format(node_id, i))
else: # node_type == 'span'
self.add_node(node_id, attr_dict={self.ns+':rel_type': relation_type,
self.ns+':node_type': node_type})
children = rst_tree[3:]
child_types = self.get_child_types(children)
expected_child_types = set(['Nucleus', 'Satellite'])
unexpected_child_types = set(child_types).difference(expected_child_types)
assert not unexpected_child_types, \
"Node '{}' contains unexpected child types: {}\n".format(node_id, unexpected_child_types)
if 'Satellite' not in child_types:
# span only contains nuclei -> multinuc
for child in children:
child_node_id = self.get_node_id(child)
self.add_edge(node_id, child_node_id, attr_dict={self.ns+':rel_type': relation_type})
elif len(child_types['Satellite']) == 1 and len(child_types['Nucleus']) == 1:
# standard RST relation, where one satellite is dominated by one nucleus
nucleus_index = child_types['Nucleus'][0]
satellite_index = child_types['Satellite'][0]
nucleus_node_id = self.get_node_id(children[nucleus_index])
satellite_node_id = self.get_node_id(children[satellite_index])
self.add_edge(node_id, nucleus_node_id, attr_dict={self.ns+':rel_type': 'span'},
edge_type=dg.EdgeTypes.spanning_relation)
self.add_edge(nucleus_node_id, satellite_node_id,
attr_dict={self.ns+':rel_type': relation_type},
edge_type=dg.EdgeTypes.dominance_relation)
else:
raise ValueError("Unexpected child combinations: {}\n".format(child_types))
for child in children:
self.parse_rst_tree(child, indent=indent+1)
def get_child_types(self, children):
"""
maps from (sub)tree type (i.e. Nucleus or Satellite) to a list
of all children of this type
"""
child_types = defaultdict(list)
for i, child in enumerate(children):
child_types[self.get_tree_type(child)].append(i)
return child_types
def get_edu_text(self, text_subtree):
assert text_subtree[0].value() == 'text'
return u' '.join(word.value().decode('utf-8')
if isinstance(word, sexpdata.Symbol) else word.decode('utf-8')
for word in text_subtree[1:])
def get_tree_type(self, tree):
"""returns the type of the (sub)tree: Root, Nucleus or Satellite"""
tree_type = tree[0].value()
return tree_type
def get_node_type(self, tree):
"""returns the node type (leaf or span) of a subtree (i.e. Nucleus or Satellite)"""
node_type = tree[1][0].value()
assert node_type in ('leaf', 'span')
return node_type
def get_relation_type(self, tree):
"""returns the RST relation type attached to the parent node of an RST relation"""
return tree[2][1].value()
def get_node_id(self, nuc_or_sat):
node_type = self.get_node_type(nuc_or_sat)
if node_type == 'leaf':
leaf_id = nuc_or_sat[1][1]
return '{}:{}'.format(self.ns, leaf_id)
else: # node_type == 'span'
span_start = nuc_or_sat[1][1]
span_end = nuc_or_sat[1][2]
return '{}:span:{}-{}'.format(self.ns, span_start, span_end)
RSTDT_TOKENIZED_ROOT = os.path.expanduser('~/repos/rst_discourse_treebank/data/RSTtrees-WSJ-main-1.0-tokenized')
import traceback
# for folder in ('TEST', 'TRAINING'):
# for rst_fpath in glob.glob(os.path.join(RSTDT_TOKENIZED_ROOT, folder, '*.dis')):
# try:
# RSTLispDocumentGraph(rst_fpath)
# # print rst_fpath
# except ValueError as e:
# sys.stderr.write("Error in file '{}'\n{}\n".format(rst_fpath, e))
# TODO: error in attachment: rst:span:18-20 -> 18-19
rdg = RSTLispDocumentGraph(RSTDT_TOKENIZED_TEST_FILE, tokenize=False)
# %load_ext gvmagic
# %dotstr dg.print_dot(rdg)
"""
Explanation: SEXPDATA fail: ' must be escaped
```python
sexpdata.loads("(text this won't hurt)")
[Symbol('text'), Symbol('this'), Symbol('won'), Quoted(True), Symbol('hurt')]
```
Epic fail: RST-DT files contain superfluous //TT_ERR strings
I fixed the files in the RSTtrees-WSJ-main-1.0-tokenized directory
arne@ziegelstein ~/repos/rst_discourse_treebank $ ack-grep -cl "//TT_ERR"
data/RSTtrees-WSJ-main-1.0/TRAINING/wsj_2367.out.dis:102
data/RSTtrees-WSJ-main-1.0/TRAINING/wsj_2353.out.dis:53
data/RSTtrees-WSJ-main-1.0-tokenized/TRAINING/wsj_2367.out.dis:102
data/RSTtrees-WSJ-main-1.0-tokenized/TRAINING/wsj_2353.out.dis:53
End of explanation
"""
RSTDT_NLTK_TOKENIZED_ROOT = os.path.expanduser('~/repos/rst_discourse_treebank/data/RSTtrees-WSJ-main-1.0-nltk-tokenized')
dis_file = os.path.join(RSTDT_NLTK_TOKENIZED_ROOT, 'TEST/wsj_2386.out.dis')
mrg_file = os.path.join(PTB_WSJ_ROOT_DIR, '23/wsj_2386.mrg')
rdg = RSTLispDocumentGraph(dis_file)
pdg = dg.read_ptb(mrg_file)
for t in rdg.tokens[:10]: print t,
print
for t in pdg.tokens[:10]: print t,
print dis_file
rdg.merge_graphs(pdg, verbose=True)
import re
import glob
import sys
WSJ_SUBDIR_REGEX = re.compile('wsj_(\d{2})')
WSJ_DOCID_REGEX = re.compile('wsj_(\d{4})')
for folder in ('TEST', 'TRAINING'):
for rst_fpath in glob.glob(os.path.join(RSTDT_NLTK_TOKENIZED_ROOT, folder, '*.dis')):
doc_id = os.path.basename(rst_fpath).split('.')[0]
try:
rdg = RSTLispDocumentGraph(rst_fpath)
rst_fname = os.path.basename(rst_fpath).lower()
doc_id = WSJ_DOCID_REGEX.match(rst_fname).groups()[0]
wsj_subdir = WSJ_SUBDIR_REGEX.match(rst_fname).groups()[0]
ptb_file = os.path.join(PTB_WSJ_ROOT_DIR, wsj_subdir, 'wsj_{}.mrg'.format(doc_id))
pdg = dg.read_ptb(ptb_file)
try:
rdg.merge_graphs(pdg)
print "merged: {}\n".format(rst_fpath)
except Exception as e:
sys.stderr.write("Error in {}: {}\n".format(rst_fpath, e))
except Exception as e:
sys.stderr.write("Error in {}: {}\n".format(rst_fpath, e))
os.path.basename(RSTDT_TEST_FILE)
PTB_TEST_FILE = os.path.expanduser('~/corpora/pennTreebank/parsed/mrg/wsj/13/wsj_1306.mrg')
sent0_root = pdg.sentences[0]
ptb_1306_tokens = list(pdg.get_tokens(token_strings_only=True))
"""
Explanation: Do RSTDT-CoreNLP tokenizations match PTB?
End of explanation
"""
RSTDT_TEST_FILE
rst_tree = parse_rstfile_nltk(RSTDT_TEST_FILE)
span_tree = rst_tree[0]
print span_tree, span_tree.productions(), span_tree.leaves()
print rst_tree[1][1]
# print open(RSTDT_TEST_FILE).read()
"""
Explanation: Epic Fail: we can't use nltk's Bracket parser, as it parses (span 1 5) as (span 1)
End of explanation
"""
|
antonpetkoff/learning
|
information-retreival/2018_10_08_inverted_index.ipynb
|
gpl-3.0
|
sample_bbc_news_sentences = [
"China confirms Interpol chief detained",
"Turkish officials believe the Washington Post writer was killed in the Saudi consulate in Istanbul.",
"US wedding limousine crash kills 20",
"Bulgarian journalist killed in park",
"Kanye West deletes social media profiles",
"Brazilians vote in polarised election",
"Bull kills woman at French festival",
"Indonesia to wrap up tsunami search",
"Tina Turner reveals wedding night ordeal",
"Victory for Trump in Supreme Court battle",
"Clashes at German far-right rock concert",
"The Walking Dead actor dies aged 76",
"Jogger in Netherlands finds lion cub",
"Monkey takes the wheel of Indian bus"
]
#basic tokenization
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
sample_bbc_news_sentences_tokenized = [tokenizer.tokenize(sent) for sent in sample_bbc_news_sentences]
sample_bbc_news_sentences_tokenized[0]
sample_bbc_news_sentences_tokenized_lower = [[_t.lower() for _t in _s] for _s in sample_bbc_news_sentences_tokenized]
sample_bbc_news_sentences_tokenized_lower[0]
#get all unique tokens
unique_tokens = set(sum(sample_bbc_news_sentences_tokenized_lower, []))
unique_tokens
# create incidence matrix (term-document frequency)
import numpy as np
incidence_matrix = np.array([[sent.count(token) for sent in sample_bbc_news_sentences_tokenized_lower]
for token in unique_tokens])
print(incidence_matrix)
"""
Explanation: Incidence Matrixes
End of explanation
"""
!ls data/mini_newsgroups/sci.electronics/
!tail -50 data/mini_newsgroups/sci.electronics/52464 | head -10
"""
Explanation: For a bigger vocabulary this matrix can take too much memory (number of tokens * number of documents), while also being very sparse!
Which words will have the highest and which the lowest total frequency?
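One standard remedy is to keep only the non-zero entries, which is essentially what an inverted index does. A toy version for the BBC sample above can be built as a plain dictionary (a small sketch that only uses the variables defined earlier):

```python
# map each token to the set of document ids that contain it
toy_inverted_index = {
    token: {doc_id for doc_id, sent in enumerate(sample_bbc_news_sentences_tokenized_lower) if token in sent}
    for token in unique_tokens
}
print(toy_inverted_index['killed'])   # document ids of headlines mentioning 'killed'
```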
Dataset
https://archive.ics.uci.edu/ml/datasets/Twenty+Newsgroups
End of explanation
"""
import nltk
nltk.download('punkt')
from nltk.tokenize import sent_tokenize
from collections import defaultdict, Counter
import os
def prepare_dataset(documents_dir):
tokenized_documents = []
for document in os.listdir(documents_dir):
with open(os.path.join(documents_dir, document)) as outf:
sentence_tokens = [tokenizer.tokenize(sent.lower()) for sent in sent_tokenize(outf.read())]
tokenized_documents.append(np.array(sum(sentence_tokens, [])))
print("Found documents: ", len(tokenized_documents))
return tokenized_documents
def document_frequency(tokenized_documents):
all_unique_tokens = set(token for doc in tokenized_documents for token in doc)  # works whether docs are lists or numpy arrays
print("Found unique tokens: ", len(all_unique_tokens))
tokens = {token: sum([1 for doc in tokenized_documents if token in doc])
for token in all_unique_tokens}
return tokens
# Load data
selected_category = 'data/mini_newsgroups/sci.crypt/'
print(selected_category)
tokenized_dataset = prepare_dataset(selected_category)
print("Sample tokenized document:")
print(tokenized_dataset[0])
# statistics
all_terms = np.concatenate(tokenized_dataset).ravel()
unique_terms = np.unique(all_terms)
unique_terms.sort()
document_count = len(tokenized_dataset)
all_terms_count = len(all_terms)
unique_terms_count = len(unique_terms)
print("documents count: {}".format(document_count))
print("total term count: {}".format(all_terms_count))
print("unique term count: {}".format(unique_terms_count))
# incidence matrix
# rows are documents
# columns are terms
incidence_matrix = np.zeros([document_count, unique_terms_count])
# construct postings array of tuples
# each tuple is of the form: (term_id, doc_id, frequency of term in doc, positions of term in doc)
# the tuple can be expanded even more
postings = []
for term_id, term in enumerate(unique_terms):
for doc_id, doc in enumerate(tokenized_dataset):
positions_of_term_in_doc = np.where(doc == term)[0]
term_count_in_doc = positions_of_term_in_doc.size
if term_count_in_doc > 0:
postings.append((term_id, doc_id, term_count_in_doc, positions_of_term_in_doc))
incidence_matrix[doc_id, term_id] = term_count_in_doc
# construct lexicon
# key: term
# value: [total term frequency, document frequency of term]
lexicon = {term: [
incidence_matrix[:, term_id].sum(), # total term frequency
np.count_nonzero(incidence_matrix[:, term_id]) # document frequency of term
]
for term_id, term in enumerate(unique_terms)}
postings
"""
Explanation: You will now have to construct the Inverted Index - only the dictionary part (term and #docs)
End of explanation
"""
|
rajul/tvb-library
|
tvb/simulator/demos/region_deterministic_smooth_parameter_variation.ipynb
|
gpl-2.0
|
from tvb.simulator.lab import *
"""
Explanation: Demonstrate using the simulator at the region level, with deterministic integration, and how to smoothly change a model parameter at run time.
Run Time ~ 3 seconds
End of explanation
"""
#rs.configure()
LOG.info("Configuring...")
#Initialise a Model, Coupling, and Connectivity.
oscillator = models.Generic2dOscillator()
white_matter = connectivity.Connectivity(load_default=True)
white_matter.speed = numpy.array([4.0])
white_matter_coupling = coupling.Linear(a=0.0154)
#Initialise an Integrator
heunint = integrators.HeunDeterministic(dt=2 ** -6)
#Initialise some Monitors with period in physical time
momo = monitors.Raw()
mama = monitors.TemporalAverage(period=2 ** -2)
#Bundle them
what_to_watch = (momo, mama)
#Initialise a Simulator -- Model, Connectivity, Integrator, and Monitors.
sim = simulator.Simulator(model=oscillator, connectivity=white_matter,
coupling=white_matter_coupling,
integrator=heunint, monitors=what_to_watch)
sim.configure()
simulation_length = numpy.array([2 ** 6, ])
# Define a model parameter as a function of time
equation = True
par_length = simulation_length[0] / sim.integrator.dt / mama.istep
# a) as an equally spaced range
if not equation:
a = numpy.r_[0.0:4.2:par_length.astype(complex)]
# b) using an Equation datatype
else:
t = numpy.linspace((sim.integrator.dt * mama.istep) / 2,
float(simulation_length[0]),
par_length)
eqn_t = equations.Gaussian()
eqn_t.parameters["amp"] = 4.2
eqn_t.parameters["midpoint"] = simulation_length[0] / 2.0
eqn_t.pattern = t
a = eqn_t.pattern
LOG.info("Starting simulation...")
#Perform the simulation
raw_data, raw_time = [], []
tavg_data, tavg_time = [], []
for raw, tavg in sim(simulation_length=float(simulation_length[0])):
if raw is not None:
raw_time.append(raw[0])
raw_data.append(raw[1])
if tavg is not None:
tavg_time.append(tavg[0])
tavg_data.append(tavg[1])
# Change a model parameter at runtime
sim.model.a = a[len(tavg_time) - 1]
LOG.info("Finished simulation.")
"""
Explanation: Perform the simulation
End of explanation
"""
#Plot defaults in a few combinations
#Make the lists numpy.arrays for easier use.
RAW = numpy.array(raw_data)
TAVG = numpy.array(tavg_data)
#Plot raw time series
figure(1)
plot(raw_time, RAW[:, 0, :, 0])
title("Raw -- State variable 0")
figure(2)
plot(raw_time, RAW[:, 1, :, 0])
title("Raw -- State variable 1")
#Plot temporally averaged time series + parameter
figure(3)
plot(tavg_time, TAVG[:, 0, :, 0])
plot(tavg_time, a, 'r', linewidth=2)
title("Temporal average")
#Show them
show()
"""
Explanation: Plot pretty pictures of what we just did
End of explanation
"""
|
Pittsburgh-NEH-Institute/Institute-Materials-2017
|
schedule/week_2/Near_matching.ipynb
|
gpl-3.0
|
from collatex import *
collation = Collation()
collation.add_plain_witness("A", "The gray koala")
collation.add_plain_witness("B", "The big grey koala")
alignment_table = collate(collation, segmentation=False)
print(alignment_table)
from collatex import *
collation = Collation()
collation.add_plain_witness("A", "The gray koala")
collation.add_plain_witness("B", "The big grey koala")
alignment_table = collate(collation, segmentation=False, near_match=True)
print(alignment_table)
"""
Explanation: Near matching
What is near matching and why do we use it?
Near matching is another term for fuzzy matching; that is, it is based on the idea that two items (such as two word tokens in a collation) should sometimes be considered matching even when they are not string equal (that is, not identical in every character). More precisely, near matching is a strategy for finding the closest match in situations where there may not be an exact match.
Consider the following example from the Rus′ primary chronicle:
<img src="images/pvl_3.5.png"/>
The last column contains slightly differing forms of fraci, but the second witness, Tro, reads fraki. Normalization, including Soundex, takes care of the slight variation during Gothenburg stage 2, but we don’t want to merge c and k globally because the difference between them is usually significant.
When CollateX cannot find an exact match and there is more than one possible alignment for a token, it defaults to placing the token to the left. This means that without near matching, fraki, which does not exactly match either i or fraci, would be misaligned. With near matching, though, CollateX can recognize that fraci is more like fraki than it is like i, and thus place it in the correct (right) column.
How does near matching work in CollateX?
The way near matching currently works in CollateX is that if the user has turned it on (it is off by default), after the alignment stage has been completed, the system looks for situations that cannot be resolved solely by exact matching, that is, entirely at the alignment stage. Those situations must show both of the following properties:
Different numbers of tokens: Between two blocks (vertical sets of tokens) that are firmly aligned there is an unequal number of tokens in the witnesses. In the example above, the alignment of “The” and “koala” is clear because both witnesses have the identical reading (that is, they are complete vertical blocks), but in one witness there is one token between them and in the other witness there are two tokens. If, on the other hand, the two witnesses read “The gray koala” and “The grey koala”, although the middle tokens don’t match, there’s no ambiguity because the alignment is forced: since “gray” and “grey” are sandwiched between perfect matches before and after, they can only be aligned with each other, so there is no need for near matching.
No exact match: The tokens in the shorter witnesses do not have an exact string match in the longer witnesses. If they did have an exact match, that would have been found at the alignment stage and there would be no need for near matching. For example, if the two witnesses read “The grey koala” and “The big grey koala”, although there are three tokens in the first witness and four in the second, each token in the first has an exact string match in the second, which means that it can be aligned at the alignment stage.
If and only if both of those conditions are met, CollateX compares the floating token in the shorter witness (“gray” in the example above) to all possible matches (“big” and “grey” in this example) and calculates the nearest match using a measure called edit distance or Levenshtein distance (see https://en.wikipedia.org/wiki/Edit_distance for more information). CollateX calculates the edit distance between the floating token and the tokens in the other witnesses at all of the locations where it could be placed, and determines the best placement.
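The selection step itself amounts to picking the candidate with the smallest edit distance, which can be sketched in a few lines (an illustration of the idea only, not CollateX's actual internals; the Levenshtein package used here is introduced in the next section):

```python
import Levenshtein

def nearest(token, candidates):
    # return the candidate whose edit distance to `token` is smallest
    return min(candidates, key=lambda c: Levenshtein.distance(token, c))

nearest('gray', ['big', 'grey'])   # 'grey' (distance 1, versus 4 for 'big')
```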
Edit distance and computational complexity
Calculating exact matches is relatively efficient computationally. If the strings are different lengths, we don’t have to look at any characters to know that they don’t match. If they’re the same length, we can look character by character, and as soon as we hit a non-match, we don’t have to look further. (This is not necessarily how exact matching would be implemented; there may be other tricks that would get a quick yes-or-no answer.)
Calculating edit distance to find the closest match is expensive because the entire strings have to be evaluated. In a tradition with a large number of witnesses and large gaps, the number of comparisons grows quickly, which means that you don’t want to calculate edit distance except where you need to. Performing computationally inexpensive exact string matching first (in the alignment stage) and then calculating the more expensive edit distance (in the analysis stage) only where alignment has failed to give a satisfactory answer reduces the amount of computation required.
Example of near matching in CollateX
End of explanation
"""
import Levenshtein
Levenshtein.distance('gray', 'grey')
"""
Explanation: Edit distance
The edit distance between two strings is the number of single-character changes required to convert one into the other. The most common distance measure is called Levenshtein distance, and it counts insertions, deletions, and substitutions. A variant, Damerau-Levenshtein distance, also counts transpositions, which in classic Levenshtein distance would be two steps, an insertion and a deletion.
When you installed CollateX, you installed the Python Levenshtein package, which is what CollateX uses to find the closest match. You can practice with it independently by changing the strings below to your own text:
End of explanation
"""
Levenshtein.distance('gray','Grey')
"""
Explanation: The Levenshtein module does its comparison on the basis of Unicode values, so upper and lowercase characters are different:
End of explanation
"""
print(Levenshtein.distance('color','colour'))
print(Levenshtein.distance('clod', 'cloud'))
"""
Explanation: How is Levenshtein distance a useful way of finding the closest match? That is, are all single-character differences of equal importance philologically? What about the following:
End of explanation
"""
print(Levenshtein.distance('book', 'books'))
print(Levenshtein.distance('book', 'cook'))
"""
Explanation: What about:
End of explanation
"""
|
sharefm/DSF
|
project.ipynb
|
gpl-3.0
|
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from collections import Counter
%matplotlib inline
"""
Explanation: Project for Data Science Fundamentals course
Loay Abdulatif & Sharef Mustafa
The question that we are investigating is:
Who are the attackers of a website?
End of explanation
"""
df = pd.read_csv('pdata.csv')
print(df.columns.values)
print('\nNumber of hits on the website is %s hit' %len(df['Source IP']))
"""
Explanation: The data sets that we will work on are logs from the firewall. We will use the small file pdata.csv to simplify the initial readings, and then the huge file projectdata.csv, which contains logs of source IP addresses, timestamps, HTTP events, etc.
for the period from Feb 1st 2017 till Feb 28 2016
End of explanation
"""
total_hits = Counter(df['Source IP'])
# filterout IP addresses with hit count > 100 hit
hits = {k:v for k,v in total_hits.items() if(v > 100)}
print('\n Size of filtered hits is %s hits' %len(hits))
# x contains the IP addresses
sourceIPs = hits.keys()
# y contains the number of hits per that IP
num_hits = hits.values()
# indexes will facilitate in drawing as index of IP addresses
indexes = np.arange(len(hits))
plt.bar(indexes, num_hits,width=0.5)
plt.xlabel("Source IP Addresses")
plt.ylabel("Number of hits")
plt.show()
"""
Explanation: We will start by using a bar plot to visualise IP addresses vs. their hit count; however, due to the huge number of entries, we will select only samples with a hit count > 100.
End of explanation
"""
sns.boxplot(pd.Series(hits))
"""
Explanation: As shown in the bar chart above, a few samples have extreme values; we have to investigate them more deeply.
End of explanation
"""
full_df = pd.read_csv('projectdata.csv')
total_hits = Counter(full_df['Source IP'])
# filterout IP addresses with hit count > 700 hit
hits = {k:v for k,v in total_hits.items() if(v > 700)}
print('\n Size of filtered hits is %s hits' %len(hits))
# x contains the IP addresses
#sourceIPs = hits.keys()
# y contains the number of hits per that IP
num_hits = hits.values()
# indexes will facilitate in drawing as index of IP addresses
indexes = np.arange(len(hits))
plt.bar(indexes, num_hits,width=0.5)
plt.xlabel("Source IP Addresses")
plt.ylabel("Number of hits")
plt.show()
"""
Explanation: The box plot above shows the outliers outside of the box. It also shows that the box and most of the variance lie closer to the 1st quartile, which indicates that we should use a hit-count threshold of 700 instead of 100 in order to isolate the outliers.
End of explanation
"""
plt.scatter(indexes,pd.Series(hits))
"""
Explanation: As shown in the bar chart above, there are 455 source IP addresses with a hit count over 700, and there are also a few outliers with more than 10000 hits.
End of explanation
"""
num_of_IPs = len(hits)
suspects = {k:v for k,v in hits.items() if(v > 10000)}
print('There are %s attacker out of %s suspected IPs\n' %(len(suspects), len(hits)))
suspects
"""
Explanation: The scatter plot above shows the top suspected attackers; as we can see, there are a handful of source IPs whose hit counts stand far above the rest.
End of explanation
"""
|
ethen8181/machine-learning
|
data_science_is_software/notebooks/data_science_is_software.ipynb
|
mit
|
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style = 'custom2.css', plot_style = False)
os.chdir(path)
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Data-Science-is-Software" data-toc-modified-id="Data-Science-is-Software-1"><span class="toc-item-num">1 </span>Data Science is Software</a></span><ul class="toc-item"><li><span><a href="#Section-1:--Environment-Reproducibility" data-toc-modified-id="Section-1:--Environment-Reproducibility-1.1"><span class="toc-item-num">1.1 </span>Section 1: Environment Reproducibility</a></span><ul class="toc-item"><li><span><a href="#watermark-extension" data-toc-modified-id="watermark-extension-1.1.1"><span class="toc-item-num">1.1.1 </span><a href="https://github.com/rasbt/watermark" target="_blank">watermark</a> extension</a></span></li><li><span><a href="#Create-A-Separate-Environment" data-toc-modified-id="Create-A-Separate-Environment-1.1.2"><span class="toc-item-num">1.1.2 </span>Create A Separate Environment</a></span></li><li><span><a href="#The-pip-requirements.txt-file" data-toc-modified-id="The-pip-requirements.txt-file-1.1.3"><span class="toc-item-num">1.1.3 </span>The pip <a href="https://pip.readthedocs.org/en/1.1/requirements.html" target="_blank">requirements.txt</a> file</a></span></li><li><span><a href="#Separation-of-configuration-from-codebase" data-toc-modified-id="Separation-of-configuration-from-codebase-1.1.4"><span class="toc-item-num">1.1.4 </span>Separation of configuration from codebase</a></span></li></ul></li><li><span><a href="#Section-2:--Writing-code-for-reusability" data-toc-modified-id="Section-2:--Writing-code-for-reusability-1.2"><span class="toc-item-num">1.2 </span>Section 2: Writing code for reusability</a></span><ul class="toc-item"><li><span><a href="#No-more-docs-guessing" data-toc-modified-id="No-more-docs-guessing-1.2.1"><span class="toc-item-num">1.2.1 </span>No more docs-guessing</a></span></li><li><span><a href="#No-more-copying-pasting" data-toc-modified-id="No-more-copying-pasting-1.2.2"><span class="toc-item-num">1.2.2 </span>No more copying-pasting</a></span></li><li><span><a href="#No-more-copy-pasting-between-notebooks" data-toc-modified-id="No-more-copy-pasting-between-notebooks-1.2.3"><span class="toc-item-num">1.2.3 </span>No more copy-pasting between notebooks</a></span></li><li><span><a href="#I'm-too-good!-Now-this-code-is-useful-to-other-projects!" data-toc-modified-id="I'm-too-good!-Now-this-code-is-useful-to-other-projects!-1.2.4"><span class="toc-item-num">1.2.4 </span>I'm too good! 
Now this code is useful to other projects!</a></span></li></ul></li><li><span><a href="#Section-3--Don't-let-others-break-your-toys" data-toc-modified-id="Section-3--Don't-let-others-break-your-toys-1.3"><span class="toc-item-num">1.3 </span>Section 3 Don't let others break your toys</a></span><ul class="toc-item"><li><span><a href="#numpy.testing" data-toc-modified-id="numpy.testing-1.3.1"><span class="toc-item-num">1.3.1 </span>numpy.testing</a></span></li><li><span><a href="#engarde-decorators" data-toc-modified-id="engarde-decorators-1.3.2"><span class="toc-item-num">1.3.2 </span><a href="https://github.com/TomAugspurger/engarde" target="_blank">engarde</a> decorators</a></span></li><li><span><a href="#Creating-a-test-suite-with-pytest" data-toc-modified-id="Creating-a-test-suite-with-pytest-1.3.3"><span class="toc-item-num">1.3.3 </span>Creating a test suite with pytest</a></span></li></ul></li><li><span><a href="#Other-Tips-and-Tricks" data-toc-modified-id="Other-Tips-and-Tricks-1.4"><span class="toc-item-num">1.4 </span>Other Tips and Tricks</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
# once it is installed, we'll just need this in future notebooks:
%load_ext watermark
%watermark -a "Ethen" -d -t -v -p numpy,pandas,seaborn,watermark,matplotlib
"""
Explanation: Data Science is Software
Developer life hacks for Data Scientists.
Section 1: Environment Reproducibility
watermark extension
Tell everyone when you ran the notebook, and the package versions you were using. Listing this dependency information at the top of a notebook is especially useful for nbviewer, blog posts, and other media where you are not sharing the notebook as executable code.
End of explanation
"""
import os
from dotenv import load_dotenv
# load the .env file
load_dotenv('.env')
# obtain the value of the variable
os.environ.get('FOO')
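# For reference, the .env file loaded above might contain nothing more than a line such as:
#   FOO=bar
# (FOO is only a placeholder name; real projects would keep things like database URLs or API keys here.)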
"""
Explanation: Here, we're only importing the watermark extension, but it's also a good idea to do all of our other imports at the first cell of the notebook.
Create A Separate Environment
Continuum's conda tool provides a way to create isolated environments. The conda env functionality lets you create an isolated environment on your machine, so that we can:
Start from "scratch" on each project
Choose Python 2 or 3 as appropriate
To create an empty environment:
conda create -n <name> python=3
Note: python=2 will create a Python 2 environment; python=3 will create a Python 3 environment.
To work in a particular virtual environment:
source activate <name>
To leave a virtual environment:
source deactivate
Note: on Windows, the commands are just activate and deactivate, no need to type source.
There are other Python tools for environment isolation, but none of them are perfect. If you're interested in the other options, virtualenv and pyenv both provide environment isolation. There are sometimes compatibility issues between the Anaconda Python distribution and these packages, so if you've got Anaconda on your machine you can use conda env to create and manage environments.
<p>
<div class="alert alert-info">
Create a new environment for every project you work on
</div>
### The pip [requirements.txt](https://pip.readthedocs.org/en/1.1/requirements.html) file
It's a convention in the Python ecosystem to track a project's dependencies in a file called `requirements.txt`. We recommend using this file to keep track of your MRE, "Minimum reproducible environment". An example of `requirements.txt` might look something like the following:
```text
pandas>=0.19.2
matplotlib>=2.0.0
```
The format for a line in the requirements file is:
| Syntax | Result |
| --- | --- |
| `package_name` | for whatever the latest version on PyPI is |
| `package_name==X.X.X` | for an exact match of version X.X.X |
| `package_name>=X.X.X` | for at least version X.X.X |
Now, contributors can create a new virtual environment (using conda or any other tool) and install your dependencies just by running:
`pip install -r requirements.txt`
<p>
<div class="alert alert-info">
Never again run `pip install [package]`. Instead, update `requirements.txt` and run `pip install -r requirements.txt`. For data science projects, favor `package>=0.0.0` rather than `package==0.0.0`; this prevents you from having many versions of large packages (e.g. numpy, scipy, pandas) with complex dependencies sitting around
</div>
Usually the package version will adhere to [semantic versioning](http://semver.org/). Let’s take 0.19.2 as an example and break down what each number represents.
- (**0**.19.2) The first number in this chain is called the major version.
- (0.**19**.2) The second number is called the minor version.
- (0.19.**2**) The third number is called the patch version.
These versions are incremented when code changes are introduced. Depending on the nature of the change, a different number is incremented.
- The major version (first number) is incremented when backwards-incompatible changes, i.e. changes that break the old API, are released. Usually, when a major version is released there's a guide on how to update from the old version to the new one
- The minor version (second number) is incremented when backwards-compatible changes are released. Functionality is added (or speed is improved) without breaking any existing functionality, at least not the public API that end users rely on
- The patch version (third number) is for backwards compatible bug fixes. Bug fixes are in contrast here with features (adding functionality). These patches go out when something is wrong with existing functionality or when improvements to existing functionality are implemented
Both the `requirements.txt` file and `conda` virtual environments are ways to isolate each project's environment and dependencies, so that we, or anyone else trying to reproduce our work, can save a lot of time recreating the environment.
### Separation of configuration from codebase
There are some things you don't want to be openly reproducible: your private database url, your AWS credentials for downloading the data, your SSN, which you decided to use as a hash. These shouldn't live in source control, but may be essential for collaborators or others reproducing your work.
This is a situation where we can learn from some software engineering best practices. The [12-factor app principles](http://12factor.net/) give a set of best-practices for building web applications. Many of these principles are relevant for best practices in the data-science codebases as well.
Using a dependency manifest like `requirements.txt` satisfies [II. Explicitly declare and isolate dependencies](http://12factor.net/dependencies). Another important principle is [III. Store config in the environment](http://12factor.net/config):
> An app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc). Apps sometimes store config as constants in the code. This is a violation of twelve-factor, which requires strict separation of config from code. Config varies substantially across deploys, code does not. A litmus test for whether an app has all config correctly factored out of the code is whether the codebase could be made open source at any moment, without compromising any credentials.
The [`dotenv` package](https://github.com/theskumar/python-dotenv) allows you to easily store these variables in a file that is not in source control (as long as you keep the line `.env` in your `.gitignore` file!). You can then reference these variables as environment variables in your application with `os.environ.get('VARIABLE_NAME')`.
End of explanation
"""
import os
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
pump_data_path = os.path.join('..', 'data', 'raw', 'pumps_train_values.csv')
df = pd.read_csv(pump_data_path)
df.head(1)
"""
Explanation: Note that there's also configparser.
Section 2: Writing code for reusability
If the code prints out some output and we want the reader to see it within some context (e.g. presenting a data story), then jupyter notebook it a ideal place for it to live. However, we wish to use the same piece of code in multiple notebooks then we should save it to a standalone .py file to prevent copying and pasting the same piece of code every single time. Finally, if the code is going to used in multiple data analysis project then we should consider creating a package for it.
No more docs-guessing
Don't edit-run-repeat to try to remember the name of a function or argument. Jupyter provides great docs integration and easy ways to remember the arguments to a function.
To check the docs, we can simply add a question mark ? after the method, or press Shift+Tab (both keys at the same time) inside the method's parentheses to display its arguments. The Tab key can also be used for auto-completion of method names and arguments.
Consider the following example. To follow along, please download the dataset pumps_train_values.csv from the following link and move it to the ../data/raw file path, or change the pump_data_path below to where you like to store it.
End of explanation
"""
# we can do ?pd.read_csv or just check the
# documentation online since it usually looks nicer ...
df = pd.read_csv(pump_data_path, index_col = 0)
df.head(1)
"""
Explanation: After reading in the data, we discovered that it provides an id column, which we wish to use as the index column. But we forgot the parameter for doing so.
End of explanation
"""
# 1. magic for inline plot
# 2. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.rcParams['figure.figsize'] = 8, 6
# create a chart, and we might be tempted to
# paste the code for 'construction_year'
# paste the code for 'gps_height'
plot_data = df['amount_tsh']
sns.kdeplot(plot_data, bw = 1000)
plt.show()
"""
Explanation: No more copying-pasting
End of explanation
"""
def kde_plot(dataframe, variable, upper = None, lower = None, bw = 0.1):
"""
Plots a density plot for a variable with optional upper and
lower bounds on the data (inclusive)
Parameters
----------
dataframe : DataFrame
variable : str
input column, must exist in the input dataframe.
upper : int
upper bound for the input column, i.e. data points
exceeding this threshold will be excluded.
lower : int
lower bound for the input column, i.e. data points
below this threshold will be excluded.
bw : float, default 0.1
bandwidth for density plot's line.
References
----------
Numpy style docstring
- http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_numpy.html#example-numpy
"""
plot_data = dataframe[variable]
if upper is not None:
plot_data = plot_data[plot_data <= upper]
if lower is not None:
plot_data = plot_data[plot_data >= lower]
sns.kdeplot(plot_data, bw = bw)
plt.show()
kde_plot(df, variable = 'amount_tsh', bw = 1000, lower = 0)
kde_plot(df, variable = 'construction_year', bw = 1, lower = 1000, upper = 2016)
kde_plot(df, variable = 'gps_height', bw = 100)
"""
Explanation: After making this plot, we might want to do the same for other numeric variables. To do this we can copy the entire cell and modify the parameters. This might be ok in a draft, but after a while the notebook can become quite unmanageable.
When we realize we're starting to step on our own toes, that we are no longer effective and development is becoming clumsy, it is time to organize the notebook. Start over, keep the good code, and rewrite and generalize the bad parts.
Back to our original task of plotting the same graph for other numeric variables: instead of copying and pasting the cell multiple times, we should refactor a little so we don't repeat ourselves, i.e., create a function to do it. And for that function, write an appropriate docstring.
End of explanation
"""
# add local python functions
import sys
# add the 'src' directory as one where we can import modules
src_dir = os.path.join('..', 'src')
sys.path.append(src_dir)
# import my method from the source code,
# which drops rows with 0 in them
from features.build_features import remove_invalid_data
df = remove_invalid_data(pump_data_path)
df.shape
"""
Explanation: No more copy-pasting between notebooks
Have a method that gets used in multiple notebooks? Refactor it into a separate .py file so it can live a happy life! Note: In order to import your local modules, you must do three things:
put the .py file in a separate folder.
add an empty __init__.py file to the folder so the folder can be recognized as a package.
add that folder to the Python path with sys.path.append.
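For illustration, here is a minimal sketch of what `src/features/build_features.py` could contain. The function name and call signature match the import above; the exact cleaning logic is an assumption for illustration, not necessarily the project's actual code:
```python
# src/features/build_features.py (illustrative sketch only)
import pandas as pd


def remove_invalid_data(path):
    # read the raw csv and drop rows that contain a 0 in any column
    df = pd.read_csv(path, index_col=0)
    return df[(df != 0).all(axis=1)]
```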
End of explanation
"""
# Load the "autoreload" extension
# it comes with jupyter notebook
%load_ext autoreload
# always reload all modules
%autoreload 2
# or we can reload modules marked with "%aimport"
# import my method from the source code
# %autoreload 1
# %aimport features.build_features
"""
Explanation: Python caches imported modules, so after importing a method for the first time Jupyter will keep using that version, even if we change the source file afterwards. To overcome this "issue" we can use a jupyter notebook extension that reloads the module every time it changes.
End of explanation
"""
# the randomly generated data is drawn from a normal distribution with mean 0 and standard deviation 1,
# so its sample mean should be almost equal to 0, hence no assertion error occurs
import numpy as np
data = np.random.normal(0.0, 1.0, 1000000)
np.testing.assert_almost_equal(np.mean(data), 0.0, decimal = 2)
"""
Explanation: I'm too good! Now this code is useful to other projects!
Importing local code is great if you want to use it in multiple notebooks, but once you want to use the code in multiple projects or repositories, it gets complicated. This is when we get serious about isolation!
We can build a python package to solve that! In fact, there is a cookiecutter to create Python packages.
Once we create this package, we can install it in "editable" mode, which means that as we change the code the changes will get picked up if the package is used. The process looks like
```bash
install cookiecutter first
pip install cookiecutter
cookiecutter https://github.com/wdm0006/cookiecutter-pipproject
cd package_name
pip install -e .
```
Now we can have a separate repository for this code and it can be used across projects without having to maintain code in multiple places.
Section 3 Don't let others break your toys
Include tests.
numpy.testing
Provides useful assertion methods for values that are numerically close and for numpy arrays.
End of explanation
"""
# pip install engarde
import engarde.decorators as ed
test_data = pd.DataFrame({'a': np.random.normal(0, 1, 100),
'b': np.random.normal(0, 1, 100)})
@ed.none_missing()
def process(dataframe):
dataframe.loc[10, 'a'] = 1 # change the 1 to np.nan and the code assertion will break
return dataframe
process(test_data).head()
"""
Explanation: Also check the docs for numpy.isclose and numpy.allclose. When making assertions about data, small probabilistic changes or machine precision may result in numbers that aren't exactly equal, so consider using these instead of == for any numbers whose values may be influenced by randomness.
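A quick, minimal illustration of the difference:
```python
import numpy as np

a = np.array([0.1 + 0.2, 1.0])
b = np.array([0.3, 1.0])
print(a == b)             # [False  True] -- exact comparison trips over floating point
print(np.isclose(a, b))   # [ True  True] -- elementwise check within a tolerance
print(np.allclose(a, b))  # True          -- a single boolean for the whole array
```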
engarde decorators
A library that lets you practice defensive programming -- specifically with pandas DataFrame objects. It provides a set of decorators that check the return value of any function that returns a DataFrame and confirm that it conforms to the rules you specify.
End of explanation
"""
import os
import pytest
import pandas as pd
@pytest.fixture()
def df():
"""read in the raw data file and return the dataframe"""
pump_data_path = os.path.join('..', 'data', 'raw', 'pumps_train_values.csv')
df = pd.read_csv(pump_data_path)
return df
def test_df_fixture(df):
assert df.shape == (59400, 40)
useful_columns = ['amount_tsh', 'gps_height', 'longitude', 'latitude', 'region',
'population', 'construction_year', 'extraction_type_class',
'management_group', 'quality_group', 'source_type',
'waterpoint_type', 'status_group']
for column in useful_columns:
assert column in df.columns
"""
Explanation: engarde has an awesome set of decorators:
none_missing - no NaNs (great for machine learning--sklearn does not care for NaNs)
has_dtypes - make sure the dtypes are what you expect
verify - runs an arbitrary function on the dataframe
verify_all - makes sure every element returns true for a given function
More can be found in the docs.
Creating a test suite with pytest
We can create a test suite with pytest to start checking the functions we've written. To pytest, test_-prefixed functions or methods are test items. For more info, check the getting started guide.
The term "test fixtures" refers to known objects or mock data used to put other pieces of the system to the test. We want these to have the same, known state every time.
For those familiar with unittest, this might be data that you read in as part of the setUp method. pytest does things a bit differently; you define functions that return expected fixtures, and use a special decorator so that your tests automatically get passed the fixture data when you add the fixture function name as an argument.
We need to set up a way to get some data in here for testing. There are two basic choices — reading in the actual data or a known subset of it, or making up some smaller, fake data. You can choose whatever you think works best for your project.
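If you go with the fake-data route, a minimal sketch of such a fixture might look like the following (the column names come from the dataset used above, but the values are made up purely for illustration):
```python
import pandas as pd
import pytest


@pytest.fixture()
def fake_df():
    # a tiny, hand-made dataframe with a known shape and known values
    return pd.DataFrame({'amount_tsh': [0.0, 50.0, 25.0],
                         'gps_height': [1390, 1399, 686]})


def test_fake_df(fake_df):
    assert fake_df.shape == (3, 2)
```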
Remove the failing test from above and copy the following into your testing file:
End of explanation
"""
|
matthewzimmer/traffic-sign-classification
|
plotting/matplotlib/plotting.ipynb
|
mit
|
# pylab-style namespace (assumed here) provides linspace, figure, plot, xlabel, ...
from pylab import *

x = linspace(0, 5, 10)
y = x ** 2
figure()
plot(x, y, 'r')
xlabel('x')
ylabel('y')
title('title')
show()
"""
Explanation: plot example
End of explanation
"""
from __future__ import division
from IPython.display import display
from sympy.interactive import printing
printing.init_printing(use_latex='mathjax')
import sympy as sym
from sympy import *
x, y, z = symbols("x y z")
k, m, n = symbols("k m n", integer=True)
f, g, h = map(Function, 'fgh')
Rational(3,2)*pi + exp(I*x) / (x**2 + y)
"""
Explanation: Writing Formulae
$$c = \sqrt{a^2 + b^2}$$
As you see, PyCharm's IPython Notebook integration makes it possible to use LaTeX notation and render formulae, labels and text.
End of explanation
"""
|
tommyogden/maxwellbloch
|
docs/examples/mbs-lambda-weak-pulse-more-atoms-with-coupling.ipynb
|
mit
|
mb_solve_json = """
{
"atom": {
"fields": [
{
"coupled_levels": [[0, 1]],
"detuning": 0.0,
"detuning_positive": true,
"label": "probe",
"rabi_freq": 1.0e-3,
"rabi_freq_t_args":
{
"ampl": 1.0,
"centre": 0.0,
"fwhm": 1.0
},
"rabi_freq_t_func": "gaussian"
},
{
"coupled_levels": [[1, 2]],
"detuning": 0.0,
"detuning_positive": false,
"label": "coupling",
"rabi_freq": 5.0,
"rabi_freq_t_args":
{
"ampl": 1.0,
"fwhm": 0.2,
"on": -1.0,
"off": 9.0
},
"rabi_freq_t_func": "ramp_onoff"
}
],
"num_states": 3
},
"t_min": -2.0,
"t_max": 10.0,
"t_steps": 120,
"z_min": -0.2,
"z_max": 1.2,
"z_steps": 100,
"z_steps_inner": 2,
"interaction_strengths": [10.0, 10.0],
"savefile": "mbs-lambda-weak-pulse-more-atoms-no-coupling"
}
"""
from maxwellbloch import mb_solve
mbs = mb_solve.MBSolve().from_json_str(mb_solve_json)
%time Omegas_zt, states_zt = mbs.mbsolve(recalc=False)
"""
Explanation: Λ-Type Three-Level: Weak Pulse, With Coupling
Define and Solve
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
sns.set_style('darkgrid')
fig = plt.figure(1, figsize=(16, 6))
ax = fig.add_subplot(111)
cmap_range = np.linspace(0.0, 1.0e-3, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[0]/(2*np.pi)),
cmap_range, cmap=plt.cm.Blues)
ax.set_title(r'Rabi Frequency ($\Gamma / 2\pi $)')
ax.set_xlabel(r'Time ($1/\Gamma$)')
ax.set_ylabel(r'Distance ($L$)')
for y in [0.0, 1.0]:
ax.axhline(y, c='grey', lw=1.0, ls='dotted')
plt.colorbar(cf);
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
sns.set_style('darkgrid')
fig = plt.figure(1, figsize=(16, 6))
ax = fig.add_subplot(111)
cmap_range = np.linspace(0.0, 10.0, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[1]/(2*np.pi)),
cmap_range, cmap=plt.cm.Greens)
ax.set_title(r'Rabi Frequency ($\Gamma / 2\pi $)')
ax.set_xlabel(r'Time ($1/\Gamma$)')
ax.set_ylabel(r'Distance ($L$)')
for y in [0.0, 1.0]:
ax.axhline(y, c='grey', lw=1.0, ls='dotted')
plt.colorbar(cf);
"""
Explanation: Field Output
End of explanation
"""
|
mathcoding/programming
|
notebooks_v3/Lab1_Introduzione.v3.ipynb
|
mit
|
345
"""
Explanation: Elements of Programming
A programming language serves both to instruct a machine to carry out computations and to organize our ideas about how those computations should be carried out. For this reason, when choosing a programming language, we must keep in mind which tools the language offers for building more complex ideas starting from a few simple ones.
Every programming language should have at least three characteristics to achieve this goal:
Primitive expressions, which represent the simplest entities of the language
Methods for combining primitive elements into compound elements
Methods for abstracting primitive concepts, so that compound elements can in turn be used as primitive elements of even more complex entities
In programming we have two kinds of elements: PROCEDURES and DATA.
Informally, we can define data as the objects we would like to manipulate, and procedures as the description of the rules for manipulating data. So, as explained above, a language should have primitive data and primitive procedures, and it should have methods for combining and abstracting both data and procedures.
Numeric data and procedures
Let's start with some simple interaction with the Python interpreter: if we type an expression on the keyboard, the interpreter responds by evaluating that expression. For example, if we type the number 345 and then press the key combination shift + enter, the interpreter evaluates the expression we have just written:
End of explanation
"""
339 + 6
345 - 6
2.7 / 12.1
345 - 12/6
"""
Explanation: Simple numeric expressions can be combined using primitive procedures that represent the application of procedures to those numbers. For example:
End of explanation
"""
# Import all the procedures (functions) defined in the "operator" module
from operator import *
"""
Explanation: Note how in this case, for these simple numeric procedures corresponding to the arithmetic operators, a notation called infix is implicitly used. By importing the operator library it is possible to express the same expressions in prefix notation:
End of explanation
"""
add(339, 6)
"""
Explanation: We recommend reading the documentation of the operator library on the Python website. The main functions we will use in this notebook are:
add(a,b) corresponds to a+b
sub(a,b) corresponds to a-b
mul(a,b) corresponds to a*b
truediv(a,b) corresponds to a/b
For example:
End of explanation
"""
sub(345, truediv(12, 6))
mul(add(2,3), (sub(add(2,2), add(3,2))))
"""
Explanation: One of the advantages of prefix notation is that it always makes clear which operator/procedure is to be performed and which data it is applied to: add is the name of the operator, while the two numeric operands to which the operation must be applied are given inside the parentheses.
End of explanation
"""
a = 13
"""
Explanation: Note how the previous expression would be clearer if written as:
mul(
add(2, 3),
sub(
add(2, 2),
add(3, 2)
)
)
In this case the interpreter works in a loop called "read-evaluate-print": it reads compound expressions and primitive expressions, evaluates them in the order in which it finds them, and finally prints the result.
Assigning names to objects
A critical aspect of programming is the way names are assigned to computational objects.
We say that a name identifies a variable whose value is the object it is associated with. For example:
End of explanation
"""
3*a
add(a, add(a,a))
pi = 3.14159
raggio = 5
circonferenza = 2*pi*raggio
circonferenza
raggio = 10
circonferenza
"""
Explanation: In this case we have a variable, which we have called a, whose value is the number 13. At this point we can use the variable a as a numeric object:
End of explanation
"""
who
"""
Explanation: In this case, the language interpreter first evaluated the expression 2*pi*raggio, and then assigned the value obtained from evaluating the expression to the variable named circonferenza.
Here, the assignment operator = represents the simplest abstraction mechanism, because it allows us to give a name to the result of more complex operations.
In practice, any program is built by constructing, step by step, ever more complex computational objects.
Using an interpreter, which incrementally evaluates the expressions passed to it, encourages the definition of many small procedures, nested one inside the other.
It should be clear at this point that the interpreter must maintain a kind of MEMORY that keeps track of all the assignments of names to objects, called the global environment. To see which names are stored in memory we use the who command:
End of explanation
"""
quadrato = mul(3, 3)
quadrato
power = mul(x,x)
print(power)
x = 7
"""
Explanation: Evaluating compound expressions
One of the goals of this course is to teach how to think "algorithmically". Let's analyze how the language interpreter evaluates compound operations like the ones seen before. In practice, the evaluation of compound operations happens through the following procedure:
To evaluate a compound expression:
1.1 First, evaluate the sub-expressions of the compound expression
1.2 Apply the procedure indicated by the leftmost sub-expression (the operator) to the arguments, which are the values of the other sub-expressions (the operands).
Note how this procedure for evaluating a compound operation must first apply the evaluation process to every element of the compound expression. The rule for evaluating an expression is therefore intrinsically RECURSIVE, that is, it includes a call to itself as one of its steps.
NOTE: show a recursion tree for the previous expression on the blackboard.
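For reference, the previous expression reduces step by step from the inside out:
mul(add(2, 3), sub(add(2, 2), add(3, 2)))
= mul(5, sub(4, 5))
= mul(5, -1)
= -5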
Defining compound procedures
We have identified some elements that must belong to a programming language:
Numbers and arithmetic operations are primitive data and procedures (in jargon, they are called builtins)
Nesting compound expressions provides a mechanism for composing operations
Assigning variable names to values offers a rather limited level of abstraction
We therefore need a way to define new procedures, so that a new operation can be defined as a composition of simpler operations.
Consider, for example, a procedure for squaring a number.
End of explanation
"""
def Quadrato(numero):
return mul(numero, numero)
Quadrato
who
Quadrato(532)
Quadrato(mul(3,2))
"""
Explanation: To obtain a higher level of abstraction we need a mechanism (a syntax of the language) for defining new procedures (functions). The syntax is the following:
def <Name>(<formal parameters>):
<body of the procedure>
Note that the first reserved keyword of the language has appeared: def. Moreover, <Name> is the name we want to associate with the procedure (function) we are defining, and the <formal parameters> (called the arguments of the procedure) are variables that do not belong directly to the working environment (that is, the interpreter's MEMORY), but are "visible" only inside the procedure in which they are defined.
If we go back to the example of defining a procedure for squaring, we can write:
End of explanation
"""
def SommaQuadrati(x, y):
return add(Quadrato(x), Quadrato(y))
SommaQuadrati(4,3)
x
del x
"""
Explanation: At this point we can also define new procedures in terms of the procedure we have just defined, for example a new procedure called SommaQuadrati:
End of explanation
"""
def F(a):
return SommaQuadrati(add(a, 1), mul(a, 2))
F(5)
"""
Explanation: EXAMPLE: consider the following compound expression:
End of explanation
"""
|
yashdeeph709/Algorithms
|
PythonBootCamp/Complete-Python-Bootcamp-master/List Comprehensions.ipynb
|
apache-2.0
|
# Grab every letter in string
lst = [x for x in 'word']
# Check
lst
"""
Explanation: Comprehensions
In addition to sequence operations and list methods, Python includes a more advanced operation called a list comprehension.
List comprehensions allow us to build out lists using a different notation. You can think of it as essentially a one line for loop built inside of brackets. For a simple example:
Example 1
End of explanation
"""
# Square numbers in range and turn into list
lst = [x**2 for x in range(0,11)]
lst
"""
Explanation: This is the basic idea of a list comprehension. If you're familiar with mathematical notation, this format should feel familiar, for example: { x^2 : x in {0, 1, 2, ..., 10} }
Let's see a few more examples of list comprehensions in Python:
Example 2
End of explanation
"""
# Check for even numbers in a range
lst = [x for x in range(11) if x % 2 == 0]
lst
"""
Explanation: Example 3
Let's see how to add in if statements:
End of explanation
"""
# Convert Celsius to Fahrenheit
celsius = [0,10,20.1,34.5]
fahrenheit = [ ((float(9)/5)*temp + 32) for temp in celsius ]
fahrenheit
"""
Explanation: Example 4
We can also do more complicated arithmetic:
End of explanation
"""
lst = [ x**2 for x in [x**2 for x in range(11)]]
lst
"""
Explanation: Example 5
We can also perform nested list comprehensions, for example:
End of explanation
"""
|
garibaldu/boundary-seekers
|
boundary-seeker.ipynb
|
mit
|
def sigmoid(phi):
return 1.0/(1.0 + np.exp(-phi))
def calc_prob_class1(params):
# Sigmoid perceptron ('logistic regression')
tildex = X - params['mean']
W = params['wgts']
phi = np.dot(tildex, W)
return sigmoid(phi) # Sigmoid perceptron ('logistic regression')
def calc_membership(params):
# NB. this is just a helper function for training_loss really.
tildex = X - params['mean']
W, r2, R2 = params['wgts'], params['r2'], params['R2']
Dr2 = np.power(np.dot(tildex, W), 2.0)
L2X = (np.power(tildex, 2.0)).sum(1)
DR2 = L2X - Dr2
dist2 = (Dr2/r2) + (DR2/R2) # rescaled 'distance' to the shifted 'origin'
membership = np.exp(-0.5*dist2)
#print(membership)
return np.array(membership)
def classification_loss(params):
membership = calc_membership(params)
Y = calc_prob_class1(params)
return np.sum(membership*(Targ*np.log(Y) + (1-Targ)*np.log(1-Y)))
"""
Explanation: Loss under a Local Perceptron model
End of explanation
"""
def MoG_loss(params):
membership = calc_membership(params)
return np.sum(membership)
"""
Explanation: Loss under a Mixture of Gaussians model
End of explanation
"""
classification_gradient = grad(classification_loss)
MoG_gradient = grad(MoG_loss)
"""
Explanation: We use autograd for functions that deliver gradients of those losses
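As a minimal, self-contained illustration of what autograd's grad does (independent of the loss functions above):
```python
import autograd.numpy as np
from autograd import grad

def f(w):
    return np.sum(w ** 2)   # simple scalar-valued function of a vector

df = grad(f)                # df(w) returns the gradient of f evaluated at w
print(df(np.array([1.0, 2.0, 3.0])))   # [2. 4. 6.]
```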
End of explanation
"""
# Be able to show the current solution, against the data in 2D.
def show_result(params, X, Targ):
print("Parameters:")
for key in params.keys():
print(key,'\t', params[key])
print("Loss:", training_loss(params))
membership = calc_membership(params)
Y = calc_prob_class1(params)
pl.clf()
marksize = 8
cl ={0:'red', 1:'black'}
for i, x in enumerate(X):
pl.plot(x[0],x[1],'x',color=cl[int(Targ[i])],alpha=.4,markersize=marksize)
pl.plot(x[0],x[1],'o',color=cl[int(Targ[i])],alpha=1.-float(abs(Targ[i]-Y[i])),markersize=marksize)
pl.axis('equal')
s = X.ravel().max() - X.ravel().min()
m, w = params['mean'], params['wgts']
# Show the mean in blue
#pl.arrow(0, 0, m[0], m[1], head_width=0.25, head_length=0.5, fc='b', ec='b', linewidth=1, alpha=.95)
# Show the perceptron decision boundary, in green
pl.arrow(m[0]-w[0], m[1]-w[1], w[0], w[1], head_width=s, head_length=s/5, fc='g', ec='g', linewidth=3, alpha=.5)
pl.show()
"""
Explanation: Just a pretty display
Red and Black are target 0 and 1 patterns respectively.
They will get "filled in" once the perceptron is getting them correct.
End of explanation
"""
def do_one_learning_step(params,X,Targ,rate):
grads = classification_gradient(params)
params['wgts'] = params['wgts'] + rate * grads['wgts'] # one step of learning
params['mean'] = params['mean'] + rate * grads['mean'] # one step of learning
return (params)
init_w = rng.normal(0,1,size=(Nins))
init_m = 4.*rng.normal(0,1,size=(Nins))
rate = 0.5 / Npats
params = {'wgts':init_w, 'mean':init_m, 'r2':1000.0, 'R2':1000.0}
for t in range(250):
params = do_one_learning_step(params,X,Targ,rate)
show_result(params, X, Targ)
Y = sigmoid(np.dot(X-params['mean'], params['wgts']))
print('vanilla loss: ', np.sum(Targ*np.log(Y) + (1-Targ)*np.log(1-Y)))
"""
Explanation: Learning, starting from random weights and bias.
End of explanation
"""
|
AdityaSoni19031997/Machine-Learning
|
Coursera_DL/Python+Basics+With+Numpy+v3.ipynb
|
mit
|
### START CODE HERE ### (≈ 1 line of code)
test = 'Hello World'
### END CODE HERE ###
print ("test: " + test)
"""
Explanation: Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions:
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
After this assignment you will:
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
About iPython Notebooks
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
Exercise: Set test to "Hello World" in the cell below to print "Hello World" and run the two cells below.
End of explanation
"""
# GRADED FUNCTION: basic_sigmoid
import math
import numpy as np
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1 + np.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
"""
Explanation: Expected output:
test: Hello World
<font color='blue'>
What you need to remember:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
1 - Building basic functions with numpy
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
1.1 - sigmoid function, np.exp()
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
Reminder:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
End of explanation
"""
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(np.asarray(x)) # the exercise's math.exp version would raise an error for a list input; this np.exp implementation works once x is converted to an array
"""
Explanation: Expected Output:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
End of explanation
"""
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
"""
Explanation: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
End of explanation
"""
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
"""
Explanation: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
End of explanation
"""
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(np.asarray(-x)))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
"""
Explanation: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise: Implement the sigmoid function using numpy.
Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
End of explanation
"""
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = 1/(1+np.exp((-x)))
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
"""
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
End of explanation
"""
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape((image.shape[0]*image.shape[1]*image.shape[2], 1))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
"""
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
1.3 - Reshaping arrays
Two common numpy functions used in deep learning are np.shape and np.reshape().
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length \times height \times 3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
```python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
End of explanation
"""
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord = 2, axis = 1, keepdims = True)
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
"""
Explanation: Expected Output:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x_{normalized} = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
End of explanation
"""
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis=1, keepdims= True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
"""
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
Note:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
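A minimal sketch of the broadcasting that makes that division work:
```python
import numpy as np

x = np.array([[0., 3., 4.],
              [2., 6., 4.]])                          # shape (2, 3)
x_norm = np.linalg.norm(x, axis=1, keepdims=True)     # shape (2, 1)
print(x / x_norm)   # x_norm is "stretched" across the 3 columns before dividing
```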
1.5 - Broadcasting and the softmax function
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
Instructions:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
$\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
End of explanation
"""
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
"""
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
Note:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
What you need to remember:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
End of explanation
"""
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
"""
Explanation: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align}\tag{6}$$
End of explanation
"""
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum((y - yhat)**2)
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
"""
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
Exercise: Implement the numpy vectorized version of the L2 loss. There are several way of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\sum_{j=0}^n x_j^{2}$.
L2 loss is defined as $$\begin{align} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align}\tag{7}$$
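As a sketch of the np.dot route hinted at above (equivalent to summing the squared differences, as done in the graded cell):
```python
import numpy as np

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
diff = y - yhat
print(np.dot(diff, diff))   # 0.43, the same value as np.sum((y - yhat)**2)
```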
End of explanation
"""
|
delsner/dl-exploration
|
notebooks/04 - Backpropagation .ipynb
|
mit
|
import numpy as np
from matplotlib import pyplot as plt
"""
Explanation: Backpropagation
This is meant to deepen the understanding of backpropagation and (stochastic) gradient descent in NN.
Softmax Linear Classifier
Initially a linear classifier, then move to 2-layer NN.
End of explanation
"""
# Generate a spiral dataset
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N * K, D)) # data matrix (each row = single example)
y = np.zeros(N * K, dtype='uint8') # class labels
for j in range(K):
ix = range(N * j, N * (j + 1))
r = np.linspace(0.0, 1, N) # radius
t = np.linspace(j * 4, (j + 1) * 4, N) + np.random.randn(N) * 0.2 # theta
X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
y[ix] = j
# lets visualize the data:
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.show()
# initialize parameters randomly
W = 0.01 * np.random.randn(D, K)
b = np.zeros((1, K))
scores = np.dot(X, W) + b
scores
"""
Explanation: Normally we would want to preprocess the dataset so that each feature has zero mean and unit standard deviation, but in this case the features are already in a nice range from -1 to 1, so we skip this step.
End of explanation
"""
scores.shape
scores[:4]
# compute loss of the scores
num_examples = X.shape[0]
# get unnormalized probabilities
exp_scores = np.exp(scores)
# normalize them for each example
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# each row contains the class probabilities
probs[:4]
# get log of probabilities of the actual classes
# the array indexing/querying here works as follows np.array([...])[[ROW_INDICES], [COL_INDICES]]
correct_probs = probs[range(num_examples),y]
corect_logprobs = -np.log(correct_probs)
reg = 0.5 # regularization strength
# compute the loss: average cross-entropy loss and regularization
data_loss = np.sum(corect_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W)
loss = data_loss + reg_loss
"""
Explanation: Compute the Loss for the Softmax classifier:
$L_i = -\log\left(\frac{e^{f_{y_i}}}{ \sum_j e^{f_j} }\right)$
Softmax classifier interprets every element of f as holding the (unnormalized) log probabilities of the three classes. We exponentiate these to get (unnormalized) probabilities and then normalize them to get probabilities.
As $-\log(x)$ coverges towards infinity for x=0 and 0 for x=1 the loss is high if the probability inside the parentheses is small and low if it is large.
The full Softmax classifier loss is then defined as the average cross-entropy loss over all training examples:
$ L = \underbrace{ \frac{1}{N} \sum_i L_i }_\text{data loss} + \underbrace{ \frac{1}{2} \lambda \sum_k\sum_l W_{k,l}^2 }_\text{regularization loss} $
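A tiny numeric illustration of the softmax step (the values are only an example):
```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])         # unnormalized log probabilities (scores)
p = np.exp(f) / np.sum(np.exp(f))     # softmax
print(p)                              # ~[0.09  0.245 0.665], sums to 1
```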
End of explanation
"""
# probs are probabilities of all classes (as rows)
dscores = np.copy(probs)
dscores[range(num_examples),y] -= 1 # apply the formula derived above: subtract 1 at the correct class (p_k - 1)
# avg gradients on scores
dscores /= num_examples
"""
Explanation: Computing the analytic gradient with backpropagation.
Loss for one example is:
$ p_k = \frac{e^{f_k}}{ \sum_j e^{f_j} } \hspace{1in} L_i =-\log\left(p_{y_i}\right) $
We now want to understand how the computed scores inside $f$ should change to decrease the loss $L_i$. In other words derive the gradient $ \partial L_i / \partial f_k $ .
Chain rule:
$ \frac{\partial L_i}{\partial f_k} = \frac{\partial L_i}{\partial p} \frac{\partial p}{\partial f_k} $
$ \frac{\partial L_i }{ \partial f_k } = p_k - \mathbb{1}(y_i = k) $
That means that for probabilities p = [0.2, 0.3, 0.5], with the correct class being the middle one, the gradient on the scores would be df = [0.2, -0.7, 0.5].
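That claim can be checked directly with a minimal sketch:
```python
import numpy as np

p = np.array([0.2, 0.3, 0.5])
df = p.copy()
df[1] -= 1          # subtract 1 at the index of the correct (middle) class
print(df)           # [ 0.2 -0.7  0.5]
```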
End of explanation
"""
# backpropagate into W and b
dW = np.dot(X.T, dscores)
db = np.sum(dscores, axis=0, keepdims=True)
dW += reg*W # don't forget the regularization gradient
step_size = 1e-0
# Perform a parameter update in the negative gradient direction to decrease loss!
W += -step_size * dW
b += -step_size * db
# putting it all together
# initialize parameters randomly
W = 0.01 * np.random.randn(D, K)
b = np.zeros((1, K))
# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength
# gradient descent loop
num_examples = X.shape[0]
for i in range(200):
# evaluate class scores, [N x K]
scores = np.dot(X, W) + b
# compute the class probabilities
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
# compute the loss: average cross-entropy loss and regularization
corect_logprobs = -np.log(probs[range(num_examples), y])
data_loss = np.sum(corect_logprobs) / num_examples
reg_loss = 0.5 * reg * np.sum(W * W)
loss = data_loss + reg_loss
if i % 10 == 0:
print("iteration %d: loss %f" % (i, loss))
# compute the gradient on scores
dscores = probs
dscores[range(num_examples), y] -= 1
dscores /= num_examples
# backpropate the gradient to the parameters (W,b)
dW = np.dot(X.T, dscores)
db = np.sum(dscores, axis=0, keepdims=True)
dW += reg * W # regularization gradient
# perform a parameter update
W += -step_size * dW
b += -step_size * db
# evaluate training set accuracy
scores = np.dot(X, W) + b
predicted_class = np.argmax(scores, axis=1)
print('training accuracy: %.2f' % (np.mean(predicted_class == y)))
"""
Explanation: Note that the regularization gradient has the very simple form reg*W since we used the constant 0.5 for its loss contribution (i.e. $ \frac{d}{dw} ( \frac{1}{2} \lambda w^2) = \lambda w $)
End of explanation
"""
|
spatialaudio/sweep
|
software_sweep.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Simulation of Impulse Response Measurements
The software (https://github.com/franzpl/sweep) has been written in the context of my bachelor thesis with the topic "On the influence of windowing of sweep signals for room impulse measurements" at the University of Rostock.
Impulse responses are an important tool to determine acoustic properties of a Device Under Test. The main requirement is that all desired frequencies cover the interesting frequency range with sufficient energy. Therefore, sweep signals and white noise are usally favored to excite DUT's. In this context sweep signals and LTI-Systems were used. However, the design of sweep signals in time domain causes ripple in the excitation spectrum at the start and stop frequency. It is possible to reduce ripple with the use of convenient windows. With this software, you can evaluate the effect of windowing of sweep signals on impulse responses under the influence of noise. This Ipython3 Notebook shows an examplary impulse response measurement (Sweep -> DUT -> System Response -> Impulse Response -> Quality of Impulse Response). You can also use the software for real measurements, because measurement module and simulation module are seperated strictly. <br>
Let's start the simulation of an impulse response measurement!
Imports
First, you need imports from Python and the software.
Python Modules
End of explanation
"""
import generation
import plotting
import ir_imitation
import calculation
import windows
import measurement_chain
"""
Explanation: Software Modules
End of explanation
"""
fs = 44100
fstart = 1
fstop = 22050
duration = 1 # seconds
pad = 5 # attach 5 seconds zeros to excitation signal
"""
Explanation: Excitation
Then, you have to design the excitation signal.
Excitation Parameters
End of explanation
"""
excitation = generation.log_sweep(fstart, fstop, duration, fs)
"""
Explanation: Excitation Signal
Generate an excitation signal with the excitation parameters above.
End of explanation
"""
plotting.plot_time(excitation, fs);
"""
Explanation: Plot Time Domain
End of explanation
"""
plotting.plot_freq(excitation, fs, scale='db')
plt.xscale('log')
plt.xlim([1, fs/2])
plt.ylim([-55, -14]);
"""
Explanation: Plot Frequency Domain
As shown in this figure, the excitation spectrum is characterized by ripple at the start and stop frequencies.
End of explanation
"""
fade_in = 50 # ms
fade_out = 10 # ms
beta = 7 # kaiser window
"""
Explanation: Window Parameters
A window reduces these ringing artifacts. The fade-in and fade-out parameters of the window help to produce a smoother spectrum.
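For intuition only, here is a rough sketch of how such a fade-in/fade-out window could be assembled from Kaiser half-windows with plain NumPy. This is an illustration under assumptions, not the actual windows.window_kaiser implementation used below.

```python
import numpy as np

def fade_window_sketch(n_total, fade_in_ms, fade_out_ms, fs, beta):
    # illustration only: ones in the middle, Kaiser-shaped fades at both ends
    n_in = int(fade_in_ms / 1000 * fs)
    n_out = int(fade_out_ms / 1000 * fs)
    rise = np.kaiser(2 * n_in, beta)[:n_in]    # rising half of a Kaiser window
    fall = np.kaiser(2 * n_out, beta)[n_out:]  # falling half of a Kaiser window
    return np.concatenate([rise, np.ones(n_total - n_in - n_out), fall])
```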
End of explanation
"""
window = windows.window_kaiser(len(excitation), fade_in, fade_out, fs, beta)
excitation_windowed = window * excitation
"""
Explanation: Windowed Sweep
End of explanation
"""
excitation_windowed_zeropadded = generation.zero_padding(excitation_windowed, pad, fs)
"""
Explanation: Zeropadding
Zero padding makes space for the recorded system response, including its decay.
End of explanation
"""
plotting.plot_time(excitation_windowed_zeropadded, fs);
"""
Explanation: Plot Zeropadded Windowed Sweep
End of explanation
"""
dirac = measurement_chain.convolution([1])
"""
Explanation: System
Now, you have to design the DUT.
FIR-Filter System
For a better understanding, a Dirac impulse is used as the filter in this example.
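As a tiny sanity check (not part of the sweep software itself): convolution with a unit impulse leaves a signal unchanged, so this DUT behaves as an ideal all-pass system.

```python
import numpy as np

x = np.array([0.5, -1.0, 0.25])
# convolving with a unit impulse returns the signal unchanged
print(np.allclose(np.convolve(x, [1.0]), x))  # True
```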
End of explanation
"""
noise_level = -30 # RMS (dB)
awgn = measurement_chain.additive_noise(noise_level)
"""
Explanation: Noise System
In addition, you can define the system with additive noise.
End of explanation
"""
system = measurement_chain.chained(dirac, awgn)
"""
Explanation: Combine System Elements
Finally, the system elements must be combined. Feel free to add more elements (lowpass, bandpass, gain, ...) to the system.
End of explanation
"""
system_response = system(excitation_windowed_zeropadded)
"""
Explanation: System Response
To record the system response, you have to simply pass the excitation signal to the system.
End of explanation
"""
plotting.plot_time(system_response,fs);
"""
Explanation: Plot System Response
End of explanation
"""
ir = calculation.deconv_process(excitation_windowed_zeropadded, system_response, fs)[:len(excitation_windowed_zeropadded)]
"""
Explanation: Impulse Response
The impulse response is calculated via the FFT and IFFT. That's it! Plots with linear and dB scales show you the characteristics of the IR.
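Conceptually, the deconvolution divides the spectrum of the system response by the spectrum of the excitation. A bare-bones sketch of that idea follows; the actual calculation.deconv_process may additionally handle regularization, scaling and padding.

```python
import numpy as np

def naive_deconvolution(excitation, response):
    # illustration only: h = IFFT( FFT(response) / FFT(excitation) )
    # (no protection against zeros in the excitation spectrum)
    n = len(response)
    H = np.fft.fft(response, n) / np.fft.fft(excitation, n)
    return np.real(np.fft.ifft(H))
```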
End of explanation
"""
plotting.plot_time(ir, fs)
plt.xlim([-1, 5])
plt.ylim([-0.1, 1.1]);
"""
Explanation: Plot Impulse Response (linear)
End of explanation
"""
plotting.plot_time(ir, fs, scale='db')
plt.xlim([-1, 5])
plt.ylim([-60, 2]);
"""
Explanation: Plot Impulse Response (dB)
End of explanation
"""
pnr = calculation.pnr_db(ir[0], ir[fs:pad*fs])
print(str(pnr), 'dB')
"""
Explanation: Impulse Response Quality
The 'Peak to Noise Ratio' provides information about the quality of the IR.
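The exact definition lives in the calculation module; a plausible sketch of such a peak-to-noise ratio in dB, assuming the peak sample and a noise-only tail are passed in as in the call above, could look like this:

```python
import numpy as np

def pnr_db_sketch(peak_sample, noise_tail):
    # illustration only: peak amplitude relative to the RMS of the noise floor, in dB
    rms_noise = np.sqrt(np.mean(np.asarray(noise_tail) ** 2))
    return 20 * np.log10(np.abs(peak_sample) / rms_noise)
```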
End of explanation
"""
|
danielbultrini/FXFEL
|
Particle Distribution Visualization.ipynb
|
bsd-3-clause
|
import processing_tools as pt
"""
Explanation: First, import the processing tools that contain classes and methods to read, plot and process standard unit particle distribution files.
End of explanation
"""
filepath = './example/example.h5'
data = pt.ParticleDistribution(filepath)
data.su2si
data.dict['x']
"""
Explanation: The module consists of a class 'ParticleDistribution' that initializes to a dictionary containing the following entries given a filepath:
|key | value |
|----|-----------|
|'x' | x position|
|'y' | y position|
|'z' | z position|
|'px'| x momentum|
|'py'| y momentum|
|'pz'| z momentum|
|'NE'| number of electrons per macroparticle|
The units are in line with the Standard Unit specifications, but can be converted to SI using the class method su2si (see data.su2si above).
Values can then be accessed through the 'dict' attribute:
End of explanation
"""
panda_data = data.DistFrame()
panda_data[0:5]
"""
Explanation: Alternatively, one can ask for a pandas dataframe in which each column is one of the above properties and each row is a macroparticle.
End of explanation
"""
import matplotlib.pyplot as plt
plt.style.use('ggplot')  # optional
x_axis = 'py'
y_axis = 'px'
plot = panda_data.plot(kind='scatter',x=x_axis,y=y_axis)
#sets axis limits
plot.set_xlim([panda_data[x_axis].min(),panda_data[x_axis].max()])
plot.set_ylim([panda_data[y_axis].min(),panda_data[y_axis].max()])
plt.show(plot)
"""
Explanation: This allows for quick plotting using the inbuilt pandas methods
End of explanation
"""
stats = pt.Statistics(filepath)
#preparing the statistics
stats.slice(100)
stats.calc_emittance()
stats.calc_CoM()
stats.calc_current()
#display pandas example
panda_stats = stats.StatsFrame()
panda_stats[0:5]
ax = panda_stats.plot(x='z_pos',y='CoM_y')
panda_stats.plot(ax=ax, x='z_pos',y='std_y',c='b') #first option allows shared axes
plt.show()
"""
Explanation: If further statistical analysis is required, the class 'Statistics' is provided. It contains methods to process standard properties of the electron bunch and is created by giving a filepath to 'Statistics'. The following operations can be performed:
| Function | Effect and dict keys |
|---------------------|-------------------------------------------------------------------------------------------------------------------------------|
| calc_emittance | Calculates the emittance of all the slices, accessible by 'e_x' and 'e_y' |
| calc_CoM | Calculates the weighed averages and standard deviations per slice of every parameter and beta functions, see below for keys. |
| calc_current | Calculates current per slice, accessible in the dict as 'current'. |
|slice | Slices the data into the given integer number of equal slices. |
This is a subclass of the ParticleDistribution and all the methods previously described work.
| CoM Keys | Parameter (per slice) |
|------------------------|------------------------------------------------------------|
| CoM_x, CoM_y, CoM_z | Centre of mass of x, y, z positions |
| std_x, std_y, std_z | Standard deviation of x, y, z positions |
| CoM_px, CoM_py, CoM_pz | Centre of mass of x, y, z momenta |
| std_px, std_py, std_pz | Standard deviation of x, y, z momenta |
| beta_x, beta_y | Beta functions (assuming Gaussian distribution) in x and y |
Furthermore, there is 'Step_Z', which returns the size of a slice, as well as 'z_pos', which gives you the central position of a given slice.
And from this class both the DistFrame (containing the same data as above) and StatsFrame can be called:
End of explanation
"""
FEL = pt.ProcessedData(filepath,num_slices=100,undulator_period=0.00275,k_fact=2.7)
panda_FEL = FEL.FELFrame()
panda_stats= FEL.StatsFrame()
panda_FEL[0:5]
"""
Explanation: Finally, there is FEL_Approximations, which calculates simple FEL properties per slice. This is a subclass of Statistics, and as such every method described above is callable.
This class contains the 'undulator' function, which calculates planar undulator parameters given a period and either a peak magnetic field or a K value.
The data must be sliced and most statistics have to be run before the other calculations can take place.
These are 'pierce', which calculates the Pierce parameter and 1D gain length for a given slice, and 'gain length', which calculates the Ming Xie gain; together they return three entries in the dict, 'MX_gain', '1D_gain' and 'pierce', each holding an array of these values per slice.
'FELFrame' returns a pandas dataframe with these and 'z_pos' for reference.
To make this easier, the class ProcessedData takes a filepath, a number of slices, an undulator period, and a magnetic field or K value, and performs all the necessary steps automatically. As this is a subclass of FEL_Approximations, all the values described above are accessible from here.
End of explanation
"""
import pandas as pd
cat = pd.concat([panda_FEL,panda_stats], axis=1, join_axes=[panda_FEL.index]) #joins the two if you need to plot
#FEL parameters as well as slice statistics on the same plot
cat['1D_gain']=cat['1D_gain']*40000000000 #one can scale to allow for visual comparison if needed
az = cat.plot(x='z_pos',y='1D_gain')
cat.plot(ax=az, x='z_pos',y='MX_gain',c='b')
plt.show()
"""
Explanation: If it is important to plot the statistical data alongside the FEL data, that can easily be achieved by concatenating the two sets, as shown below.
End of explanation
"""
|
danielfrg/danielfrg.github.io-source
|
content/blog/notebooks/2016/02/ssn-names.ipynb
|
apache-2.0
|
%matplotlib inline
import pandas as pd
import os
data_dir = os.path.expanduser("~/data/names/names")
files = os.listdir(data_dir)
data = pd.DataFrame(columns=["year", "name", "sex", "occurrences"])
for fname in files:
if fname.endswith(".txt"):
fpath = os.path.join(data_dir, fname)
df = pd.read_csv(fpath, header=None, names=["name", "sex", "occurrences"])
df["year"] = int(fname[3:7])
data = data.append(df)
data.year = data.year.astype(int)
data.head()
data.shape
data.dtypes
"""
Explanation: <p class="note">
ReproduceIt is a series of articles that reproduce the results from data analysis articles focusing on having open data and open code.
</p>
Today, as a small return for the ReproduceIt series,
I try to reproduce a simple but nice data analysis and webapp that braid.io did
called Most Beyonces are 14 years old and most Kanyes are about 11.
The article analyses the trend of names of some music artists (Beyonce, Kanye and Madonna) in the US; it also has some nice possible explanations for the ups and downs over time, and it's a quick read. The data is based on Social Security Administration records and can be downloaded from the SSA website: Beyond the Top 1000 Names
The data is very small, so loading it into pandas and plotting it using bokeh was very easy.
End of explanation
"""
beyonce = data[data["name"] == "Beyonce"][["year", "occurrences"]]
from bokeh.charts import ColumnDataSource, Bar, output_notebook, show
from bokeh.models import HoverTool
output_notebook()
p = Bar(data=beyonce, label="year", values="occurrences", title="No. Babies named Beyoncé",
color="#0277BD", ylabel='', tools="save,reset")
show(p)
"""
Explanation: Beyonce
Now that the data is in a simple dataframe, we can just filter by the name we want and make a bar chart.
End of explanation
"""
|
coursemdetw/reveal2
|
content/notebook/Elements of Evolutionary Algorithms.ipynb
|
mit
|
import random
from deap import algorithms, base, creator, tools
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)
def evalOneMax(individual):
return (sum(individual),)
"""
Explanation: <img src='http://www.puc-rio.br/sobrepuc/admin/vrd/brasao/download/ass_vertpb_reduz4.jpg' align='left'/>
Demonstration Class 02
Elements of Evolutionary Algorithms
Luis Martí, LIRA/DEE/PUC-Rio
http://lmarti.com; lmarti@ele.puc-rio.br
Advanced Evolutionary Computation: Theory and Practice
The notebook is better viewed rendered as slides. You can convert it to slides and view them by:
- using nbconvert with a command like:
```bash
$ ipython nbconvert --to slides --post serve <this-notebook-name.ipynb>
```
- installing Reveal.js - Jupyter/IPython Slideshow Extension
- using the online IPython notebook slide viewer (some slides of the notebook might not be properly rendered).
This and other related IPython notebooks can be found at the course github repository:
* https://github.com/lmarti/evolutionary-computation-course
In this demonstration class we will deal with the features and problems shared by most evolutionary algorithms.
Note: Most of the material used in this notebook comes from DEAP documentation.
Elements to take into account using evolutionary algorithms
Individual representation (binary, Gray, floating-point, etc.);
evaluation and fitness assignment;
mating selection, that establishes a partial order of individuals in the population using their fitness function value as reference and determines the degree at which individuals in the population will take part in the generation of new (offspring) individuals.
variation, that applies a range of evolution-inspired operators, like crossover, mutation, etc., to synthesize offspring individuals from the current (parent) population. This process is supposed to prime the fittest individuals so they play a bigger role in the generation of the offspring.
environmental selection, that merges the parent and offspring individuals to produce the population that will be used in the next iteration. This process often involves the deletion of some individuals using a given criterion in order to keep the number of individuals below a certain threshold.
stopping criterion, that determines when the algorithm should be stopped, either because the optimum was reached or because the optimization process is not progressing.
Hence a 'general' evolutionary algorithm can be described as
```
def evolutionary_algorithm():
'Pseudocode of an evolutionary algorithm'
populations = [] # a list with all the populations
populations[0] = initialize_population(pop_size)
t = 0
while not stop_criterion(populations[t]):
fitnesses = evaluate(populations[t])
        offspring = mating_and_variation(populations[t], fitnesses)
populations[t+1] = environmental_selection(
populations[t],
offspring)
t = t+1
```
Python libraries for evolutionary computation
PaGMO/PyGMO
Inspyred
Distributed Evolutionary Algorithms in Python (DEAP)
There are potentially many more, feel free to give me some feedback on this.
<table>
<tr>
<td width='47%'>
<img src='https://raw.githubusercontent.com/DEAP/deap/master/doc/_static/deap_long.png' title="DEAP logo" width='92%' align='center'/>
</td>
<td>
<ul>
<li> Open source Python library with,
<li> genetic algorithm using any representation;
<li> evolutionary strategies (including CMA-ES);
<li> multi-objective optimization from the start;
<li> co-evolution (cooperative and competitive) of multiple populations;
<li> parallelization of the evaluations (and more) using SCOOP;
<li> statistics keeping, and;
<li> benchmarks module containing some common test functions.
<li> [https://github.com/DEAP/deap](https://github.com/DEAP/deap)
</ul>
</td>
</tr>
</table>
Let's start with an example and analyze it
The One Max problem
Maximize the number of ones in a binary string (list, vector, etc.).
More formally, from the set of binary strings of length $n$,
$$\mathcal{S}=\left\{s_1,\ldots,s_n\right\}, \text{ with } s_i\in\left\{0,1\right\}.$$
Find $s^\ast\in\mathcal{S}$ such that
$$s^\ast = \operatorname*{arg\,max}_{s\in\mathcal{S}} \sum_{i=1}^{n}{s_i}.$$
It's clear that the optimum is an all-ones string.
Coding the problem
End of explanation
"""
toolbox = base.Toolbox()
toolbox.register("attr_bool", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual,
toolbox.attr_bool, n=100)
toolbox.register("population", tools.initRepeat, list,
toolbox.individual)
toolbox.register("evaluate", evalOneMax)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)
"""
Explanation: Defining the elements
End of explanation
"""
pop = toolbox.population(n=300)
"""
Explanation: Running the experiment
End of explanation
"""
result = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2,
ngen=10, verbose=False)
print('Current best fitness:', evalOneMax(tools.selBest(pop, k=1)[0]))
result = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2,
ngen=50, verbose=False)
print('Current best fitness:', evalOneMax(tools.selBest(pop, k=1)[0]))
"""
Explanation: Let's run only 10 generations
End of explanation
"""
import random
from deap import base
from deap import creator
from deap import tools
IND_SIZE = 5
creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessMin)
toolbox1 = base.Toolbox()
toolbox1.register("attr_float", random.random)
toolbox1.register("individual", tools.initRepeat, creator.Individual,
toolbox1.attr_float, n=IND_SIZE)
"""
Explanation: Essential features
deap.creator: a meta-factory that allows you to create classes fulfilling the needs of your evolutionary algorithms.
deap.base.Toolbox: A toolbox for evolution that contains the evolutionary operators. You may populate the toolbox with any other function by using the register() method.
deap.base.Fitness([values]): The fitness is a measure of quality of a solution. If values are provided as a tuple, the fitness is initialized using those values, otherwise it is empty (or invalid). You should inherit from this class to define your custom fitnesses.
Defining an individual
First import the required modules and register the different functions required to create individuals that are lists of floats with a two-objective minimizing fitness.
End of explanation
"""
ind1 = toolbox1.individual()
"""
Explanation: The first individual can now be built
End of explanation
"""
print(ind1)
print(ind1.fitness.valid)
"""
Explanation: Printing the individual ind1 and checking if its fitness is valid will give something like this
End of explanation
"""
def evaluate(individual):
# Do some hard computing on the individual
a = sum(individual)
b = len(individual)
return a, 1. / b
ind1.fitness.values = evaluate(ind1)
print(ind1.fitness.valid)
print(ind1.fitness)
"""
Explanation: The individual is printed as its base class representation (here a list) and the fitness is invalid because it contains no values.
Evaluation
The evaluation is the most "personal" part of an evolutionary algorithm
* it is the only part of the library that you must write yourself.
* A typical evaluation function takes one individual as argument and returns its fitness as a tuple.
* A fitness is a list of floating point values and has a property valid to know if this individual shall be re-evaluated.
* The fitness is set by setting the values to the associated tuple.
For example, the following evaluates the previously created individual ind1 and assigns its fitness to the corresponding values.
End of explanation
"""
mutant = toolbox1.clone(ind1)
ind2, = tools.mutGaussian(mutant, mu=0.0, sigma=0.2, indpb=0.2)
del mutant.fitness.values
"""
Explanation: Dealing with a single-objective fitness is no different: the evaluation function must still return a tuple, because single-objective is treated as a special case of multi-objective.
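For instance, a single-objective setup only changes the weights tuple and still returns a 1-tuple from the evaluation function. A minimal sketch (the class names here are illustrative, not part of the example above):

```python
from deap import base, creator

# minimal single-objective sketch (illustrative class names)
creator.create("FitnessMinSingle", base.Fitness, weights=(-1.0,))
creator.create("IndividualSingle", list, fitness=creator.FitnessMinSingle)

def evaluate_single(individual):
    return (sum(individual),)  # note the trailing comma: a 1-tuple
```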
Mutation
The next kind of operator that we will present is the mutation operator.
There is a variety of mutation operators in the deap.tools module.
Each mutation has its own characteristics and may be applied to different types of individuals.
Be careful to read the documentation of the selected operator in order to avoid undesirable behaviour.
The general rule for mutation operators is that they only mutate; this means that an independent copy must be made prior to mutating the individual if the original individual has to be kept or is a reference to another individual (see the selection operator).
In order to apply a mutation (here a Gaussian mutation) on the individual ind1, simply apply the desired function.
End of explanation
"""
print(ind2 is mutant)
print(mutant is ind2)
"""
Explanation: The fitness values are deleted because they are no longer related to the individual. As stated above, the mutation only mutates an individual; it is not responsible for invalidating the fitness or anything else. The following shows that ind2 and mutant are in fact the same individual.
End of explanation
"""
child1, child2 = [toolbox1.clone(ind) for ind in (ind1, ind2)]
tools.cxBlend(child1, child2, 0.5)
del child1.fitness.values
del child2.fitness.values
"""
Explanation: Crossover
There is a variety of crossover operators in the deap.tools module.
Each crossover has its own characteristics and may be applied to different types of individuals.
Be careful to read the documentation of the selected operator in order to avoid undesirable behaviour.
The general rule for crossover operators is that they only mate individuals; this means that independent copies must be made prior to mating if the original individuals have to be kept or are references to other individuals (see the selection operator).
Let's apply a crossover operation to produce the two children, which are cloned beforehand.
End of explanation
"""
selected = tools.selBest([child1, child2], 2)
print(child1 in selected)
"""
Explanation: Selection
Selection is made among a population by the selection operators that are available in the deap.tools module.
The selection operator usually takes as first argument an iterable container of individuals and the number of individuals to select. It returns a list containing the references to the selected individuals.
The selection is made as follows.
End of explanation
"""
from deap import base
from deap import tools
toolbox1 = base.Toolbox()
def evaluateInd(individual):
# Do some computation
return result,
toolbox1.register("mate", tools.cxTwoPoint)
toolbox1.register("mutate", tools.mutGaussian, mu=0, sigma=1, indpb=0.2)
toolbox1.register("select", tools.selTournament, tournsize=3)
toolbox1.register("evaluate", evaluateInd)
"""
Explanation: Using the Toolbox
The toolbox is intended to contain all the evolutionary tools, from the object initializers to the evaluation operator.
It allows easy configuration of each algorithm.
The toolbox has basically two methods, register() and unregister(), that are used to add or remove tools from the toolbox.
The usual names for the evolutionary tools are mate(), mutate(), evaluate() and select(); however, any name can be registered as long as it is unique. Here is how they are registered in the toolbox.
End of explanation
"""
def checkBounds(min, max):
    def decorator(func):
        def wrapper(*args, **kargs):
            offspring = func(*args, **kargs)
            for child in offspring:
                for i in range(len(child)):
                    if child[i] > max:
                        child[i] = max
                    elif child[i] < min:
                        child[i] = min
            return offspring
        return wrapper
    return decorator
toolbox.register("mate_example", tools.cxBlend, alpha=0.2)
toolbox.register("mutate_example", tools.mutGaussian, mu=0, sigma=2)
MIN = 0; MAX = 10
toolbox.decorate("mate_example", checkBounds(MIN, MAX))
toolbox.decorate("mutate_example", checkBounds(MIN, MAX))
"""
Explanation: Tool Decoration
A powerful feature that helps to control very precise things during an evolution without changing anything in the algorithm or operators.
A decorator is a wrapper that is called instead of a function.
It performs some initialization and termination work before and after the actual function is called.
For example, in the case of a constrained domain, one can apply a decorator to the mutation and crossover in order to keep any individual from being out-of-bound.
The following defines a decorator that checks if any attribute in the list is out-of-bound and clips it if it is the case.
* The decorator is defined using three functions in order to receive the min and max arguments.
* Whenever the mutation or crossover is called, bounds will be checked on the resulting individuals.
End of explanation
"""
from deap import algorithms
NGEN = 20 # number of generations
CXPB = 0.6
MUTPB = 0.05
for g in range(NGEN):
# Select and clone the next generation individuals
offspring = map(toolbox.clone, toolbox.select(pop, len(pop)))
# Apply crossover and mutation on the offspring
offspring = algorithms.varAnd(offspring, toolbox, CXPB, MUTPB)
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
# The population is entirely replaced by the offspring
pop[:] = offspring
"""
Explanation: This will work on crossover and mutation because both return a tuple of individuals. The mutation is often considered to return a single individual, but again, as for the evaluation, the single-individual case is a special case of the multiple-individual case.
Variations
Variations allow you to build simple algorithms using predefined small building blocks.
In order to use a variation, the toolbox must be set to contain the required operators.
For example, in the complete algorithm presented above, the crossover and mutation are regrouped in the varAnd() function; this function requires the toolbox to contain the mate() and mutate() functions. The variations can be used to simplify the writing of an algorithm as follows.
End of explanation
"""
from deap import algorithms
result = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=50)
"""
Explanation: Algorithms
There are several algorithms implemented in the algorithms module.
They are very simple and reflect the basic types of evolutionary algorithms present in the literature.
The algorithms use a Toolbox as defined in the last sections.
In order to set up a toolbox for an algorithm, you must register the desired operators under the specified names; refer to the documentation of the selected algorithm for more details.
Once the toolbox is ready, it is time to launch the algorithm.
The simple evolutionary algorithm takes 5 arguments, a population, a toolbox, a probability of mating each individual at each generation (cxpb), a probability of mutating each individual at each generation (mutpb) and a number of generations to accomplish (ngen).
End of explanation
"""
stats = tools.Statistics(key=lambda ind: ind.fitness.values)
"""
Explanation: Computing Statistics
Often, one wants to compile statistics on what is going on in the optimization. The Statistics object is able to compile such data on arbitrary attributes of any designated object. To do that, one needs to register the desired statistic functions inside the stats object using the exact same syntax as for the toolbox.
End of explanation
"""
import numpy
stats.register("avg", numpy.mean)
stats.register("std", numpy.std)
stats.register("min", numpy.min)
stats.register("max", numpy.max)
"""
Explanation: The statistics object is created using a key as its first argument. This key must be a function that will later be applied to the data on which the statistics are computed. The previous code sample uses the fitness.values attribute of each element.
End of explanation
"""
pop, logbook = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=0,
stats=stats, verbose=True)
"""
Explanation: The statistical functions are now registered.
The register function expects an alias as first argument and a function operating on vectors as second argument.
Any subsequent argument is passed to the function when called. The creation of the statistics object is now complete.
Predefined Algorithms
When using a predefined algorithm such as eaSimple(), eaMuPlusLambda(), eaMuCommaLambda(), or eaGenerateUpdate(), the statistics object previously created can be given as argument to the algorithm.
End of explanation
"""
record = stats.compile(pop)
"""
Explanation: Statistics will automatically be computed on the population every generation.
The verbose argument prints the statistics on screen while the optimization takes place.
Once the algorithm returns, the final population and a Logbook are returned.
See the next section or the Logbook documentation for more information.
Writing Your Own Algorithm
When writing your own algorithm, including statistics is very simple: one need only compile the statistics on the desired object.
For example, compiling the statistics on a given population is done by calling the compile() method.
End of explanation
"""
print(record)
# example output: {'std': 4.96, 'max': 63.0, 'avg': 50.2, 'min': 39.0}
"""
Explanation: The argument to the compile function must be an iterable of elements on which the key will be called. Here, our population (pop) contains individuals.
The statistics object will call the key function on every individual to retrieve their fitness.values attribute.
The resulting array of values is finally given to each statistic function, and the result is put into the record dictionary under the key associated with the function.
Printing the record reveals its nature.
End of explanation
"""
logbook = tools.Logbook()
logbook.record(gen=0, evals=30, **record)
"""
Explanation: Logging Data
Once the data is produced by the statistics, one can save it for further use in a Logbook.
The logbook is intended to be a chronological sequence of entries (as dictionaries).
It is directly compliant with the type of data returned by the statistics objects, but not limited to this data.
In fact, anything can be incorporated in an entry of the logbook.
End of explanation
"""
gen, avg = logbook.select("gen", "avg")
"""
Explanation: The record() method takes a variable number of arguments, each of which is a datum to be recorded. In the last example, we saved the generation, the number of evaluations and everything contained in the record produced by the statistics object, using the star magic. All records will be kept in the logbook until its destruction.
After a number of records, one may want to retrieve the information contained in the logbook.
End of explanation
"""
logbook.header = "gen", "avg", "spam"
"""
Explanation: The select() method provides a way to retrieve all the information associated with a keyword in all records. This method takes a variable number of string arguments, which are the keywords used in the record or statistics object. Here, we retrieved the generation and the average fitness using a single call to select.
Printing to Screen
A logbook can be printed to screen or file.
Its __str__() method returns a header of each key inserted in the first record and the complete logbook for each of these keys.
The rows are in chronological order of insertion, while the columns are in an undefined order.
The easiest way to specify an order is to set the header attribute to a list of strings specifying the order of the columns.
End of explanation
"""
print(logbook)
"""
Explanation: The result is:
End of explanation
"""
gen = logbook.select("gen")
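# note: logbook.chapters assumes the logbook was produced with a tools.MultiStatistics containing 'fitness' and 'size' chapters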
fit_mins = logbook.chapters["fitness"].select("min")
size_avgs = logbook.chapters["size"].select("avg")
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax1 = plt.subplots()
line1 = ax1.plot(gen, fit_mins, "b-", label="Minimum Fitness")
ax1.set_xlabel("Generation")
ax1.set_ylabel("Fitness", color="b")
for tl in ax1.get_yticklabels():
tl.set_color("b")
ax2 = ax1.twinx()
line2 = ax2.plot(gen, size_avgs, "r-", label="Average Size")
ax2.set_ylabel("Size", color="r")
for tl in ax2.get_yticklabels():
tl.set_color("r")
lns = line1 + line2
labs = [l.get_label() for l in lns]
ax1.legend(lns, labs, loc="center right")
plt.show()
"""
Explanation: Plotting Features
One of the most common operations when an optimization is finished is to plot the data collected during the evolution.
The Logbook allows you to do this very efficiently.
Using the select method, one can retrieve the desired data and plot it using matplotlib.
End of explanation
"""
from math import sin
from deap import base
from deap import tools
def evalFct(individual):
"""Evaluation function for the individual."""
x = individual[0]
return (x - 5)**2 * sin(x) * (x/3),
def feasible(individual):
"""Feasability function for the individual. Returns True if feasible False
otherwise."""
if 3 < individual[0] < 5:
return True
return False
def distance(individual):
"""A distance function to the feasability region."""
return (individual[0] - 5.0)**2
toolbox = base.Toolbox()
toolbox.register("evaluate", evalFct)
toolbox.decorate("evaluate", tools.DeltaPenality(feasible, 7.0, distance))
"""
Explanation: <img src='http://deap.readthedocs.org/en/master/_images/twin_logbook.png' width='92%'/>
Constraint Handling
We have already seen some alternatives.
Penalty functions are the most basic way of handling constraints for individuals that cannot be evaluated, or that are forbidden for problem-specific reasons, when they fall in a given region.
The penalty function gives a fitness disadvantage to these individuals based on the amount of constraint violation in the solution.
<img src='http://deap.readthedocs.org/en/master/_images/constraints.png' width='92%'/>
In DEAP, a penalty function can be added to any evaluation function using the DeltaPenality decorator provided in the tools module.
End of explanation
"""
|
jupyter/nbgrader
|
nbgrader/docs/source/user_guide/autograded/hacker/ps1/problem1.ipynb
|
bsd-3-clause
|
NAME = "Alyssa P. Hacker"
COLLABORATORS = "Ben Bitdiddle"
"""
Explanation: Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below:
End of explanation
"""
def squares(n):
"""Compute the squares of numbers from 1 to n, such that the
ith element of the returned list equals i^2.
"""
if n < 1:
raise ValueError
return [i ** 2 for i in range(1, n + 1)]
"""
Explanation: For this problem set, we'll be using the Jupyter notebook:
Part A (2 points)
Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a ValueError.
End of explanation
"""
squares(10)
"""Check that squares returns the correct output for several inputs"""
assert squares(1) == [1]
assert squares(2) == [1, 4]
assert squares(10) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
assert squares(11) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
"""Check that squares raises an error for invalid inputs"""
try:
squares(0)
except ValueError:
pass
else:
raise AssertionError("did not raise")
try:
squares(-4)
except ValueError:
pass
else:
raise AssertionError("did not raise")
"""
Explanation: Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does:
End of explanation
"""
def sum_of_squares(n):
"""Compute the sum of the squares of numbers from 1 to n."""
return sum(squares(n))
"""
Explanation: Part B (1 point)
Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality.
End of explanation
"""
sum_of_squares(10)
"""Check that sum_of_squares returns the correct answer for various inputs."""
assert sum_of_squares(1) == 1
assert sum_of_squares(2) == 5
assert sum_of_squares(10) == 385
assert sum_of_squares(11) == 506
"""Check that sum_of_squares relies on squares."""
orig_squares = squares
del squares
try:
sum_of_squares(1)
except NameError:
pass
else:
raise AssertionError("sum_of_squares does not use squares")
finally:
squares = orig_squares
"""
Explanation: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:
End of explanation
"""
import math
def hypotenuse(n):
"""Finds the hypotenuse of a right triangle with one side of length n and
the other side of length n-1."""
# find (n-1)**2 + n**2
if (n < 2):
raise ValueError("n must be >= 2")
elif n == 2:
sum1 = 5
sum2 = 0
else:
sum1 = sum_of_squares(n)
sum2 = sum_of_squares(n-2)
return math.sqrt(sum1 - sum2)
print(hypotenuse(2))
print(math.sqrt(2**2 + 1**2))
print(hypotenuse(10))
print(math.sqrt(10**2 + 9**2))
"""
Explanation: Part C (1 point)
Using LaTeX math notation, write out the equation that is implemented by your sum_of_squares function.
$\sum_{i=1}^n i^2$
Part D (2 points)
Find a usecase for your sum_of_squares function and implement that usecase in the cell below.
End of explanation
"""
|
Merinorus/adaisawesome
|
Homework/05 - Taming Text/HW05_awesometeam_Q2.ipynb
|
gpl-3.0
|
import pandas as pd
import pycountry
from nltk.sentiment import *
import numpy as np
import matplotlib.pyplot as plt
import codecs
import math
import re
import string
"""
Explanation: Question 2) Find all the mentions of world countries in the whole corpus,
using the pycountry utility (HINT: remember that there will be different surface forms
for the same country in the text, e.g., Switzerland, switzerland, CH, etc.)
Perform sentiment analysis on every email message using the demo methods
in the nltk.sentiment.util module. Aggregate the polarity information of all
the emails by country, and plot a histogram (ordered and colored by polarity level)
that summarizes the perception of the different countries. Repeat the aggregation and plotting steps using different demo methods from the sentiment analysis module.
Can you find substantial differences?
End of explanation
"""
emails = pd.read_csv("hillary-clinton-emails/Emails.csv")
# Drop columns that won't be used
emails = emails.drop(['DocNumber', 'MetadataPdfLink','DocNumber', 'ExtractedDocNumber', 'MetadataCaseNumber'], axis=1)
emails.head()
emails_cut = emails[['ExtractedBodyText']].copy()
emails_cut.head()
emails_cut = emails_cut.dropna()
emails_cut.head()
"""
Explanation: Pre Process the Data, Dropping Irrelevant Columns
End of explanation
"""
from nltk import word_tokenize
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
tokenizer = RegexpTokenizer(r'\w+')
emails_tokenized = emails_cut.copy()
for index, row in emails_tokenized.iterrows():
row['ExtractedBodyText'] = tokenizer.tokenize(row['ExtractedBodyText'])
emails_tokenized.columns = ['TokenizedText']
emails_tokenized.reset_index(drop=True, inplace=True)
emails_tokenized.head()
"""
Explanation: Now we must tokenize the data...
End of explanation
"""
words_delete = ['IT', 'RE','LA','AND', 'AM', 'AT', 'IN', 'I', 'ME', 'DO',
'A', 'AN','BUT', 'IF', 'OR','AS','OF','BY', 'TO', 'UP','ON','ANY', 'NO', 'NOR', 'NOT','SO',
'S', 'T','DON','D', 'LL', 'M', 'O','VE', 'Y','PM', 'TV','CD','PA','ET', 'BY', 'IE','MS', 'MP', 'CC',
'GA','VA', 'BI','CV', 'AL','VAT', 'VA','AI', 'MD', 'SM', 'FM', 'EST', 'BB', 'BRB', 'AQ', 'MA', 'MAR', 'JAM', 'BM',
'Lybia', 'LY', 'LBY', 'MC', 'MCO', 'MO', 'MAC', 'NC', 'PG', 'PNG', 'SUR', 'VI', 'lybia', 'ARM']
emails_final = emails_tokenized.copy()
emails_final['TokenizedText'] = emails_final['TokenizedText'].apply(lambda x: [item for item in x if item not in words_delete])
emails_final.head()
"""
Explanation: Figure out what words to remove...
End of explanation
"""
countries_cited = []
for emails in emails_final['TokenizedText']:
for word in emails:
try:
country_name = pycountry.countries.get(alpha_2=word)
countries_cited.append(country_name.name)
except KeyError:
try:
country_name = pycountry.countries.get(alpha_3=word)
countries_cited.append(country_name.name)
except KeyError:
try:
country = pycountry.countries.get(name=word)
countries_cited.append(country_name.name)
except KeyError: pass
"""
Explanation: Create list of countries
End of explanation
"""
#List with Unique Entries of Countries Cited
final_countries = list(set(countries_cited))
size = len(final_countries)
final_countries
#Create New DataFrame for the Counts
Country_Sent = pd.DataFrame(index=range(0,size),columns=['Country', 'Count'])
Country_Sent['Country']=final_countries
Country_Sent.head()
count_list = []
for country in Country_Sent['Country']:
count = countries_cited.count(country)
count_list.append(count)
Country_Sent['Count']=count_list
Country_Sent.head()
#Take out countries cited fewer than 15 times
Country_Sent= Country_Sent[Country_Sent['Count'] > 14]
Country_Sent = Country_Sent.reset_index(drop=True)
Country_Sent.head()
#plot to see frequencies
Country_Sent.plot.bar(x='Country', y='Count')
plt.show()
#We have repeatedly plotted this, identifying weird occurrences (small countries with high counts),
#and then eliminating them from the data set and repeating the process
#create a list with all possible names of the countries above
countries_used_name = []
countries_used_alpha_2 =[]
countries_used_alpha_3 =[]
for country in Country_Sent['Country']:
country_names = pycountry.countries.get(name=country)
countries_used_name.append(country_names.name)
countries_used_alpha_2.append(country_names.alpha_2)
countries_used_alpha_3.append(country_names.alpha_3)
Country_Sent['Alpha_2']=countries_used_alpha_2
Country_Sent['Alpha_3']=countries_used_alpha_3
Country_Sent.head()
len(Country_Sent)
"""
Explanation: Organize List and Count Occurrence of Each Country
End of explanation
"""
sentiments = []
vader_analyzer = SentimentIntensityAnalyzer()
size = len(Country_Sent['Alpha_2'])
for i in range(1, size):
    country_score = []
    for email in emails_final['TokenizedText']:
        if Country_Sent['Alpha_2'][i] in email or Country_Sent['Alpha_3'][i] in email or Country_Sent['Country'][i] in email:
            str_email = ' '.join(email)
            sentiment = vader_analyzer.polarity_scores(str_email)
            score = sentiment['compound']
            country_score.append(score)
        else: pass
    if len(country_score) != 0:
        sentiment_score = sum(country_score) / float(len(country_score))
        sentiments.append(sentiment_score)
    else:
        sentiments.append(999)
sentiments
#the loop above starts at index 1, so the first country (NZ here) was never scored and must be dropped
Country_Sent = Country_Sent.drop(Country_Sent.index[[0]])
len(Country_Sent)
#add sentiment list to data frame
Country_Sent['Sentiment'] = sentiments
Country_Sent.head()
#delete any row with sentiment value of 999
Country_Sent = Country_Sent[Country_Sent['Sentiment'] != 999]
Country_Sent.head()
#reorder dataframe in ascending order of sentiment
Country_Sent.sort_values(['Sentiment'], ascending=True, inplace=True)
Country_Sent.head()
#reorder index
Country_Sent = Country_Sent.reset_index(drop=True)
Country_Sent.head()
"""
Explanation: Now we check sentiment on emails around these names
End of explanation
"""
#We must normalize the sentiment scores and create a color gradient based on them (red for negative, green for positive)
#first we handle the ones that are below zero, then the ones above zero
color_grad = []
size = len(Country_Sent['Sentiment'])
for i in range(0,size):
if Country_Sent['Sentiment'][i] < 0:
high = 0
low = np.min(sentiments)
rg = low-high
new_entry = (low-Country_Sent['Sentiment'][i])/rg
red = 1 - new_entry
color_grad.append((red,0,0))
else:
high = np.max(sentiments)
low = 0
rg2 = high-low
new_entry = (Country_Sent['Sentiment'][i]-low)/rg2
green = 1 - new_entry
color_grad.append((0,green,0))
Country_Sent['color_grad'] = color_grad
Country_Sent.head()
#Now we create the bar plot based on this palette
import seaborn as sns
plt.figure(figsize=(30,20))
plot = sns.barplot(x='Country', y='Sentiment', data=Country_Sent, orient='vertical', palette=color_grad)
plt.ylabel('Country Sentiment');
plt.show()
#Now we create a bar plot with an automatic gradient based on sentiment
size = len(Country_Sent['Sentiment'])
plt.figure(figsize=(30,20))
grad = sns.diverging_palette(10, 225, n=32)
plot = sns.barplot(x='Country', y='Sentiment', data=Country_Sent, orient='vertical', palette = grad )
plt.xticks(rotation=60);
plt.ylabel('Country Sentiment');
plt.show()
"""
Explanation: Now we make a color gradient for the histogram
End of explanation
"""
|
iagapov/ocelot
|
demos/ipython_tutorials/4_wake.ipynb
|
gpl-3.0
|
# the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
# this python library provides generic shallow (copy) and deep copy (deepcopy) operations
from copy import deepcopy
import time
# import from Ocelot main modules and functions
from ocelot import *
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
# load beam distribution
# this function convert Astra beam distribution to Ocelot format - ParticleArray. ParticleArray is designed for tracking.
# in order to work with converters we have to import specific module from ocelot.adaptors
from ocelot.adaptors.astra2ocelot import *
"""
Explanation: *This notebook was created by Sergey Tomin and Igor Zagorodnov for Workshop: Designing future X-ray FELs. Source and license info is on GitHub. August 2016. *
Tutorial N4. Wakefields.
Chirper.
Influence of corrugated structure on the electron beam.
This example based on the work: I. Zagorodnov, G. Feng, T. Limberg. Corrugated structure insertion for extending the SASE bandwidth up to 3% at the European XFEL.
Geometry of the corrugated structure. The blue ellipse represents an electron beam
propagating along the z axis.
<img src="4_corrugated_str.png" />
Wakefields
In order to take into account the impact of the wake field on the beam, the longitudinal wake function
of a point charge is used through its second-order Taylor expansion.
In the general case, 13 one-dimensional functions are used to represent the longitudinal component of the wake
function for arbitrary sets of source and witness particles near the reference axis.
The wake field impact on the beam is included as a series of kicks.
The implementation of the wakefields follows closely the approach described
in:
* O. Zagorodnova, T. Limberg, Impedance budget database for the European XFEL,
in Proceedings of 2009 Particle Accelerator Conference,(Vancouver, Canada, 2009)
* M. Dohlus, K. Floettmann, C. Henning, Fast particle tracking with wake
fields, Report No. DESY 12-012, 2012.
This example will cover the following topics:
Initialization of the wakes and the places where they are applied
second-order tracking with wakes
Requirements
beam_chirper.ast - input file, initial beam distribution in ASTRA format (was obtained from s2e simulation performed with ASTRA and CSRtrack).
wake_vert_1m.txt - wake table of the vertical corrugated structure (was calculated with ECHO)
wake_hor_1m.txt - wake table of the horizontal corrugated structure (was calculated with ECHO)
Import of modules
End of explanation
"""
D00m25 = Drift(l = 0.25)
D01m = Drift(l = 1)
D02m = Drift(l = 2)
# Create markers for defining places of the wakes applying
w1_start = Marker()
w1_stop = Marker()
w2_start = Marker()
w2_stop = Marker()
w3_start = Marker()
w3_stop = Marker()
w4_start = Marker()
w4_stop = Marker()
w5_start = Marker()
w5_stop = Marker()
w6_start = Marker()
w6_stop = Marker()
# quadrupoles
Q1 = Quadrupole(l = 0.5, k1 = 0.215)
# lattice
lattice = (D01m, w1_start, D02m, w1_stop, w2_start, D02m, w2_stop, w3_start, D02m, w3_stop, D00m25, Q1,
D00m25, w4_start, D02m, w4_stop, w5_start, D02m, w5_stop, w6_start, D02m, w6_stop, D01m)
# creation MagneticLattice
method = MethodTM()
method.global_method = SecondTM
lat = MagneticLattice(lattice, method=method)
# calculate twiss functions with initial twiss parameters
tws0 = Twiss()
tws0.E = 14 # in GeV
tws0.beta_x = 22.5995
tws0.beta_y = 22.5995
tws0.alpha_x = -1.4285
tws0.alpha_y = 1.4285
tws = twiss(lat, tws0, nPoints=None)
# plotting twiss parameters.
plot_opt_func(lat, tws, top_plot=["Dx"], fig_name="i1", legend=False)
plt.show()
"""
Explanation: Layout of the corrugated structure insertion. Create Ocelot lattice <img src="4_layout.png" />
End of explanation
"""
# load and convert ASTRA file to OCELOT beam distribution
# p_array_init = astraBeam2particleArray(filename='beam_chirper.ast')
# save ParticleArray to compresssed numpy array
# save_particle_array("chirper_beam.npz", p_array_init)
p_array_init = load_particle_array("chirper_beam.npz")
plt.plot(-p_array_init.tau()*1000, p_array_init.p(), "r.")
plt.grid(True)
plt.xlabel(r"$\tau$, mm")
plt.ylabel(r"$\frac{\Delta E}{E}$")
plt.show()
"""
Explanation: Load beam file
End of explanation
"""
from ocelot.cpbd.wake3D import *
# load wake tables of corrugated structures
wk_vert = WakeTable('wake_vert_1m.txt')
wk_hor = WakeTable('wake_hor_1m.txt')
# creation of wake object with parameters
wake_v1 = Wake()
# w_sampling - defines the number of the equidistant sampling points for the one-dimensional
# wake coefficients in the Taylor expansion of the 3D wake function.
wake_v1.w_sampling = 500
wake_v1.wake_table = wk_vert
wake_v1.step = 1 # step in Navigator.unit_step, dz = Navigator.unit_step * wake.step [m]
wake_h1 = Wake()
wake_h1.w_sampling = 500
wake_h1.wake_table = wk_hor
wake_h1.step = 1
wake_v2 = deepcopy(wake_v1)
wake_h2 = deepcopy(wake_h1)
wake_v3 = deepcopy(wake_v1)
wake_h3 = deepcopy(wake_h1)
"""
Explanation: Initialization of the wakes and the places where they are applied
End of explanation
"""
navi = Navigator(lat)
# add physics proccesses
navi.add_physics_proc(wake_v1, w1_start, w1_stop)
navi.add_physics_proc(wake_h1, w2_start, w2_stop)
navi.add_physics_proc(wake_v2, w3_start, w3_stop)
navi.add_physics_proc(wake_h2, w4_start, w4_stop)
navi.add_physics_proc(wake_v3, w5_start, w5_stop)
navi.add_physics_proc(wake_h3, w6_start, w6_stop)
# defining the unit step in [m]
navi.unit_step = 0.2
# deep copy of the initial beam distribution
p_array = deepcopy(p_array_init)
print("tracking with Wakes .... ")
start = time.time()
tws_track, p_array = track(lat, p_array, navi)
print("\n time exec:", time.time() - start, "sec")
"""
Explanation: Add the wakes in the lattice
Navigator defines the tracking step (dz) and which physics process, if any, will be applied on each step.
In order to add collective effects (space charge, CSR or wake), the method add_physics_proc() must be called.
Method:
* Navigator.add_physics_proc(physics_proc, elem1, elem2)
- physics_proc - physics process, can be CSR, SpaceCharge or Wake,
- elem1 and elem2 - first and last elements between which the physics process will be applied.
Also, unit_step must be defined in [m] (1 m by default); unit_step is the minimal tracking step for any collective effect.
For each collective effect, a number of unit_steps must be defined, so the step at which the physics process is applied is
dz = unit_step*step [m]
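For example, with navi.unit_step = 0.2 and wake.step = 1 as set in this notebook, each wake kick is applied every dz = 0.2 * 1 = 0.2 m of tracking.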
End of explanation
"""
tau0 = p_array_init.tau()
p0 = p_array_init.p()
tau1 = p_array.tau()
p1 = p_array.p()
print(len(p1))
plt.figure(1)
plt.plot(-tau0*1000, p0, "r.", -tau1*1000, p1, "b.")
plt.legend(["before", "after"], loc=4)
plt.grid(True)
plt.xlabel(r"$\tau$, mm")
plt.ylabel(r"$\frac{\Delta E}{E}$")
plt.show()
"""
Explanation: Longitudinal beam distribution
End of explanation
"""
tau = np.array([p.tau for p in p_array])
dp = np.array([p.p for p in p_array])
x = np.array([p.x for p in p_array])
y = np.array([p.y for p in p_array])
ax1 = plt.subplot(311)
ax1.plot(-tau*1000, x*1000, 'r.')
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel("x, mm")
plt.grid(True)
ax2 = plt.subplot(312, sharex=ax1)
ax2.plot(-tau*1000, y*1000, 'r.')
plt.setp(ax2.get_xticklabels(), visible=False)
plt.ylabel("y, mm")
plt.grid(True)
ax3 = plt.subplot(313, sharex=ax1)
ax3.plot(-tau*1000, dp, 'r.')
plt.ylabel("dp/p")
plt.xlabel("s, mm")
plt.grid(True)
# plotting twiss parameters.
plot_opt_func(lat, tws_track, top_plot=["Dx"], fig_name="i1", legend=False)
plt.show()
"""
Explanation: Beam distribution
End of explanation
"""
|
ireapps/cfj-2017
|
completed/00. Python Fundamentals (Part 1).ipynb
|
mit
|
# variable assignment
# https://www.digitalocean.com/community/tutorials/how-to-use-variables-in-python-3
# strings -- enclose in single or double quotes, just make sure they match
my_name = 'Cody'
# numbers
int_num = 6
float_num = 6.4
# the print function
print(8)
print('Hello!')
print(my_name)
print(int_num)
print(float_num)
# booleans
print(True)
print(False)
print(4 > 6)
print(6 == 6)
print('ell' in 'Hello')
"""
Explanation: Python fundamentals
A quick introduction to the Python programming language and Jupyter notebooks. (We're using Python 3, not Python 2.)
Basic data types and the print() function
End of explanation
"""
# addition
add_eq = 4 + 2
# subtraction
sub_eq = 4 - 2
# multiplication
mult_eq = 4 * 2
# division
div_eq = 4 / 2
# etc.
"""
Explanation: Basic math
You can do basic math with Python. (You can also do more advanced math.)
End of explanation
"""
# create a list: name, hometown, age
# an item's position in the list is the key thing
cody = ['Cody', 'Midvale, WY', 32]
# create another list of mixed data
my_list = [1, 2, 3, 'hello', True, ['a', 'b', 'c']]
# use len() to get the number of items in the list
my_list_count = len(my_list)
print('There are', my_list_count, 'items in my list.')
# use square brackets [] to access items in a list
# (counting starts at zero in Python)
# get the first item
first_item = my_list[0]
print(first_item)
# you can do negative indexing to get items from the end of your list
# get the last item
last_item = my_list[-1]
print(last_item)
# Use colons to get a range of items in a list
# get the first two items
# the last number in a list slice is the first list item that's ~not~ included in the result
my_range = my_list[0:2]
print(my_range)
# if you leave the last number off, it takes the item at the first number's index and everything afterward
# get everything from the third item onward
my_open_range = my_list[2:]
print(my_open_range)
# Use append() to add things to a list
my_list.append(5)
print(my_list)
# Use pop() to remove items from the end of a list
my_list.pop()
print(my_list)
# use join() to join items from a list into a string with a delimiter of your choosing
letter_list = ['a', 'b', 'c']
joined_list = '-'.join(letter_list)
print(joined_list)
"""
Explanation: Lists
A comma-separated collection of items between square brackets: []. Python keeps track of the order of things inside a list.
End of explanation
"""
my_dict = {'name': 'Cody', 'title': 'Training director', 'organization': 'IRE'}
# Access items in a dictionary using square brackets and the key (typically a string)
my_name = my_dict['name']
print(my_name)
# You can also use the `get()` method to retrieve values
# you can optionally provide a second argument as the default value
# if the key doesn't exist (otherwise defaults to `None`)
my_name = my_dict.get('name', 'Jefferson Humperdink')
print(my_name)
# Use the .keys() method to get the keys of a dictionary
print(my_dict.keys())
# Use the .values() method to get the values
print(my_dict.values())
# add items to a dictionary using square brackets, the name of the key (typically a string)
# and set the value like you'd set a variable, with =
my_dict['my_age'] = 32
print(my_dict)
# delete an item from a dictionary with `del`
del my_dict['my_age']
print(my_dict)
"""
Explanation: Dictionaries
A data structure that maps keys to values inside curly brackets: {}. Items in the dictionary are separated by commas. Python does not keep track of the order of items in a dictionary; if you need to keep track of insertion order, use an OrderedDict instead.
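For example (and note that, as of Python 3.7, plain dicts also preserve insertion order):

```python
from collections import OrderedDict

od = OrderedDict()
od['first'] = 1
od['second'] = 2
od['third'] = 3
print(list(od.keys()))  # ['first', 'second', 'third'] -- insertion order is preserved
```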
End of explanation
"""
# this is a one-line comment
"""
This is a
multi-line comment
~~~
"""
"""
Explanation: Commenting your code
Python skips lines that begin with a hashtag # -- these lines are used to write comments to help explain the code to others (and to your future self).
Multi-line comments are enclosed between triple quotes: """ """
End of explanation
"""
4 > 6
'Hello!' == 'Hello!'
(2 + 2) != (4 * 2)
100.2 >= 100
"""
Explanation: Comparison operators
When you want to compare values, you can use these symbols:
< means less than
> means greater than
== means equal
>= means greater than or equal
<= means less than or equal
!= means not equal
End of explanation
"""
whitespace_str = ' hello! '
print(whitespace_str.strip())
"""
Explanation: String functions
Python has a number of built-in methods to work with strings. They're useful if, say, you're using Python to clean data. Here are a few of them:
strip()
Call strip() on a string to remove whitespace from either side. It's like using the =TRIM() function in Excel.
End of explanation
"""
my_name = 'Cody'
my_name_upper = my_name.upper()
print(my_name_upper)
my_name_lower = my_name.lower()
print(my_name_lower)
"""
Explanation: upper() and lower()
Call .upper() on a string to make the characters uppercase. Call .lower() on a string to make the characters lowercase. This can be useful when testing strings for equality.
End of explanation
"""
company = 'Bausch & Lomb'
company_no_ampersand = company.replace('&', 'and')
print(company_no_ampersand)
"""
Explanation: replace()
Use .replace() to substitute bits of text.
End of explanation
"""
date = '6/4/2011'
date_split = date.split('/')
print(date_split)
"""
Explanation: split()
Use .split() to split a string on some delimiter. If you don't specify a delimiter, it splits on any run of whitespace by default.
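For example:

```python
sentence = 'hello   world  foo'
print(sentence.split())     # ['hello', 'world', 'foo'] -- splits on runs of whitespace
print(sentence.split(' '))  # ['hello', '', '', 'world', '', 'foo'] -- splits on every single space
```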
End of explanation
"""
mangled_zip = '2301'
fixed_zip = mangled_zip.zfill(5)
print(fixed_zip)
num_zip = 2301
fixed_num_zip = str(num_zip).zfill(5)
print(fixed_num_zip)
"""
Explanation: zfill()
Among other things, you can use .zfill() to add zero padding -- for instance, if you're working with ZIP code data that was saved as a number somewhere and you've lost the leading zeroes for that handful of ZIP codes that begin with 0.
Note: .zfill() is a string method, so if you want to apply it to a number, you'll need to first coerce it to a string with str().
End of explanation
"""
my_string = 'supercalifragilisticexpialidocious'
chunk = my_string[9:20]
print(chunk)
"""
Explanation: slicing
Like lists, strings are sequences, so you can use slicing to grab chunks.
End of explanation
"""
str_to_test = 'hello'
print(str_to_test.startswith('hel'))
print(str_to_test.endswith('lo'))
print('el' in str_to_test)
print(str_to_test in ['hi', 'whatsup', 'salutations', 'hello'])
"""
Explanation: startswith(), endswith() and in
If you need to test whether a string starts with a series of characters, use .startswith(). If you need to test whether a string ends with a series of characters, use .endswith(). If you need to test whether a string is part of another string -- or a list of strings -- use the in operator.
These are case sensitive, so you'd typically .upper() or .lower() the strings you're comparing to ensure an apples-to-apples comparison.
End of explanation
"""
# date in m/d/yyyy format
in_date = '8/17/1982'
# split out individual pieces of the date
# using a shortcut method to assign variables to the resulting list
month, day, year = in_date.split('/')
# reshuffle as yyyy-mm-dd using .format()
# use a formatting option (:0>2) to left-pad month/day numbers with a zero
out_date = '{}-{:0>2}-{:0>2}'.format(year, month, day)
print(out_date)
# construct a greeting template
greeting = 'Hello, {}! My name is {}.'
your_name = 'Pat'
my_name = 'Cody'
print(greeting.format(your_name, my_name))
"""
Explanation: String formatting
Using curly brackets with the various options available to the .format() method, you can create string templates for your data. Some examples:
End of explanation
"""
# two strings of numbers
num_1 = '100'
num_2 = '200'
# what happens when you add them without coercing?
concat = num_1 + num_2
print(concat)
# coerce to integer, then add them
added = int(num_1) + int(num_2)
print(added)
"""
Explanation: Type coercion
Consider:
```python
# this is a number, can't do string-y things to it
age = 32
# this is a string, can't do number-y things to it
age = '32'
```
There are several functions you can use to coerce a value of one type to a value of another type. Here are a couple of them:
int() tries to convert to an integer
str() tries to convert to a string
float() tries to convert to a float
End of explanation
"""
|
ageron/ml-notebooks
|
06_decision_trees.ipynb
|
apache-2.0
|
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "decision_trees"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
"""
Explanation: Chapter 6 – Decision Trees
This notebook contains all the sample code and solutions to the exercises in chapter 6.
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml/blob/master/06_decision_trees.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
Warning: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
"""
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
X = iris.data[:, 2:] # petal length and width
y = iris.target
tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42)
tree_clf.fit(X, y)
from sklearn.tree import export_graphviz
def image_path(fig_id):
return os.path.join(IMAGES_PATH, fig_id)
export_graphviz(
tree_clf,
out_file=image_path("iris_tree.dot"),
feature_names=iris.feature_names[2:],
class_names=iris.target_names,
rounded=True,
filled=True
)
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[0, 7.5, 0, 3], iris=True, legend=False, plot_training=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
if not iris:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
if plot_training:
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor")
plt.plot(X[:, 0][y==2], X[:, 1][y==2], "g^", label="Iris-Virginica")
plt.axis(axes)
if iris:
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
else:
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
if legend:
plt.legend(loc="lower right", fontsize=14)
plt.figure(figsize=(8, 4))
plot_decision_boundary(tree_clf, X, y)
plt.plot([2.45, 2.45], [0, 3], "k-", linewidth=2)
plt.plot([2.45, 7.5], [1.75, 1.75], "k--", linewidth=2)
plt.plot([4.95, 4.95], [0, 1.75], "k:", linewidth=2)
plt.plot([4.85, 4.85], [1.75, 3], "k:", linewidth=2)
plt.text(1.40, 1.0, "Depth=0", fontsize=15)
plt.text(3.2, 1.80, "Depth=1", fontsize=13)
plt.text(4.05, 0.5, "(Depth=2)", fontsize=11)
save_fig("decision_tree_decision_boundaries_plot")
plt.show()
"""
Explanation: Training and visualizing
End of explanation
"""
tree_clf.predict_proba([[5, 1.5]])
tree_clf.predict([[5, 1.5]])
"""
Explanation: Predicting classes and class probabilities
End of explanation
"""
X[(X[:, 1]==X[:, 1][y==1].max()) & (y==1)] # widest Iris-Versicolor flower
not_widest_versicolor = (X[:, 1]!=1.8) | (y==2)
X_tweaked = X[not_widest_versicolor]
y_tweaked = y[not_widest_versicolor]
tree_clf_tweaked = DecisionTreeClassifier(max_depth=2, random_state=40)
tree_clf_tweaked.fit(X_tweaked, y_tweaked)
plt.figure(figsize=(8, 4))
plot_decision_boundary(tree_clf_tweaked, X_tweaked, y_tweaked, legend=False)
plt.plot([0, 7.5], [0.8, 0.8], "k-", linewidth=2)
plt.plot([0, 7.5], [1.75, 1.75], "k--", linewidth=2)
plt.text(1.0, 0.9, "Depth=0", fontsize=15)
plt.text(1.0, 1.80, "Depth=1", fontsize=13)
save_fig("decision_tree_instability_plot")
plt.show()
from sklearn.datasets import make_moons
Xm, ym = make_moons(n_samples=100, noise=0.25, random_state=53)
deep_tree_clf1 = DecisionTreeClassifier(random_state=42)
deep_tree_clf2 = DecisionTreeClassifier(min_samples_leaf=4, random_state=42)
deep_tree_clf1.fit(Xm, ym)
deep_tree_clf2.fit(Xm, ym)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_decision_boundary(deep_tree_clf1, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False)
plt.title("No restrictions", fontsize=16)
plt.subplot(122)
plot_decision_boundary(deep_tree_clf2, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False)
plt.title("min_samples_leaf = {}".format(deep_tree_clf2.min_samples_leaf), fontsize=14)
save_fig("min_samples_leaf_plot")
plt.show()
angle = np.pi / 180 * 20
rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
Xr = X.dot(rotation_matrix)
tree_clf_r = DecisionTreeClassifier(random_state=42)
tree_clf_r.fit(Xr, y)
plt.figure(figsize=(8, 3))
plot_decision_boundary(tree_clf_r, Xr, y, axes=[0.5, 7.5, -1.0, 1], iris=False)
plt.show()
np.random.seed(6)
Xs = np.random.rand(100, 2) - 0.5
ys = (Xs[:, 0] > 0).astype(np.float32) * 2
angle = np.pi / 4
rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
Xsr = Xs.dot(rotation_matrix)
tree_clf_s = DecisionTreeClassifier(random_state=42)
tree_clf_s.fit(Xs, ys)
tree_clf_sr = DecisionTreeClassifier(random_state=42)
tree_clf_sr.fit(Xsr, ys)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_decision_boundary(tree_clf_s, Xs, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False)
plt.subplot(122)
plot_decision_boundary(tree_clf_sr, Xsr, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False)
save_fig("sensitivity_to_rotation_plot")
plt.show()
"""
Explanation: Sensitivity to training set details
End of explanation
"""
# Quadratic training set + noise
np.random.seed(42)
m = 200
X = np.random.rand(m, 1)
y = 4 * (X - 0.5) ** 2
y = y + np.random.randn(m, 1) / 10
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg.fit(X, y)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(random_state=42, max_depth=2)
tree_reg2 = DecisionTreeRegressor(random_state=42, max_depth=3)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)
def plot_regression_predictions(tree_reg, X, y, axes=[0, 1, -0.2, 1], ylabel="$y$"):
x1 = np.linspace(axes[0], axes[1], 500).reshape(-1, 1)
y_pred = tree_reg.predict(x1)
plt.axis(axes)
plt.xlabel("$x_1$", fontsize=18)
if ylabel:
plt.ylabel(ylabel, fontsize=18, rotation=0)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_regression_predictions(tree_reg1, X, y)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
plt.plot([split, split], [-0.2, 1], style, linewidth=2)
plt.text(0.21, 0.65, "Depth=0", fontsize=15)
plt.text(0.01, 0.2, "Depth=1", fontsize=13)
plt.text(0.65, 0.8, "Depth=1", fontsize=13)
plt.legend(loc="upper center", fontsize=18)
plt.title("max_depth=2", fontsize=14)
plt.subplot(122)
plot_regression_predictions(tree_reg2, X, y, ylabel=None)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
plt.plot([split, split], [-0.2, 1], style, linewidth=2)
for split in (0.0458, 0.1298, 0.2873, 0.9040):
plt.plot([split, split], [-0.2, 1], "k:", linewidth=1)
plt.text(0.3, 0.5, "Depth=2", fontsize=13)
plt.title("max_depth=3", fontsize=14)
save_fig("tree_regression_plot")
plt.show()
export_graphviz(
tree_reg1,
out_file=image_path("regression_tree.dot"),
feature_names=["x1"],
rounded=True,
filled=True
)
tree_reg1 = DecisionTreeRegressor(random_state=42)
tree_reg2 = DecisionTreeRegressor(random_state=42, min_samples_leaf=10)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)
x1 = np.linspace(0, 1, 500).reshape(-1, 1)
y_pred1 = tree_reg1.predict(x1)
y_pred2 = tree_reg2.predict(x1)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred1, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", fontsize=18, rotation=0)
plt.legend(loc="upper center", fontsize=18)
plt.title("No restrictions", fontsize=14)
plt.subplot(122)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred2, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.title("min_samples_leaf={}".format(tree_reg2.min_samples_leaf), fontsize=14)
save_fig("tree_regression_regularization_plot")
plt.show()
"""
Explanation: Regression trees
End of explanation
"""
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=10000, noise=0.4, random_state=42)
"""
Explanation: Exercise solutions
1. to 6.
See appendix A.
7.
Exercise: train and fine-tune a Decision Tree for the moons dataset.
a. Generate a moons dataset using make_moons(n_samples=10000, noise=0.4).
Adding random_state=42 to make this notebook's output constant:
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
"""
Explanation: b. Split it into a training set and a test set using train_test_split().
End of explanation
"""
from sklearn.model_selection import GridSearchCV
params = {'max_leaf_nodes': list(range(2, 100)), 'min_samples_split': [2, 3, 4]}
grid_search_cv = GridSearchCV(DecisionTreeClassifier(random_state=42), params, n_jobs=-1, verbose=1, cv=3)
grid_search_cv.fit(X_train, y_train)
grid_search_cv.best_estimator_
"""
Explanation: c. Use grid search with cross-validation (with the help of the GridSearchCV class) to find good hyperparameter values for a DecisionTreeClassifier. Hint: try various values for max_leaf_nodes.
End of explanation
"""
from sklearn.metrics import accuracy_score
y_pred = grid_search_cv.predict(X_test)
accuracy_score(y_test, y_pred)
"""
Explanation: d. Train it on the full training set using these hyperparameters, and measure your model's performance on the test set. You should get roughly 85% to 87% accuracy.
By default, GridSearchCV trains the best model found on the whole training set (you can change this by setting refit=False), so we don't need to do it again. We can simply evaluate the model's accuracy:
End of explanation
"""
from sklearn.model_selection import ShuffleSplit
n_trees = 1000
n_instances = 100
mini_sets = []
rs = ShuffleSplit(n_splits=n_trees, test_size=len(X_train) - n_instances, random_state=42)
for mini_train_index, mini_test_index in rs.split(X_train):
X_mini_train = X_train[mini_train_index]
y_mini_train = y_train[mini_train_index]
mini_sets.append((X_mini_train, y_mini_train))
"""
Explanation: 8.
Exercise: Grow a forest.
a. Continuing the previous exercise, generate 1,000 subsets of the training set, each containing 100 instances selected randomly. Hint: you can use Scikit-Learn's ShuffleSplit class for this.
End of explanation
"""
from sklearn.base import clone
forest = [clone(grid_search_cv.best_estimator_) for _ in range(n_trees)]
accuracy_scores = []
for tree, (X_mini_train, y_mini_train) in zip(forest, mini_sets):
tree.fit(X_mini_train, y_mini_train)
y_pred = tree.predict(X_test)
accuracy_scores.append(accuracy_score(y_test, y_pred))
np.mean(accuracy_scores)
"""
Explanation: b. Train one Decision Tree on each subset, using the best hyperparameter values found above. Evaluate these 1,000 Decision Trees on the test set. Since they were trained on smaller sets, these Decision Trees will likely perform worse than the first Decision Tree, achieving only about 80% accuracy.
End of explanation
"""
Y_pred = np.empty([n_trees, len(X_test)], dtype=np.uint8)
for tree_index, tree in enumerate(forest):
Y_pred[tree_index] = tree.predict(X_test)
from scipy.stats import mode
y_pred_majority_votes, n_votes = mode(Y_pred, axis=0)
"""
Explanation: c. Now comes the magic. For each test set instance, generate the predictions of the 1,000 Decision Trees, and keep only the most frequent prediction (you can use SciPy's mode() function for this). This gives you majority-vote predictions over the test set.
End of explanation
"""
accuracy_score(y_test, y_pred_majority_votes.reshape([-1]))
"""
Explanation: d. Evaluate these predictions on the test set: you should obtain a slightly higher accuracy than your first model (about 0.5 to 1.5% higher). Congratulations, you have trained a Random Forest classifier!
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
development/tutorials/l3.ipynb
|
gpl-3.0
|
#!pip install -I "phoebe>=2.4,<2.5"
"""
Explanation: "Third" Light
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
"""
b.filter(qualifier='l3_mode')
"""
Explanation: Relevant Parameters
An l3_mode parameter exists for each LC dataset, which determines whether third light will be provided in flux units, or as a fraction of the total flux.
Since this is passband-dependent and only used for flux measurements, it does not yet exist for a new, empty Bundle.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
"""
Explanation: So let's add an LC dataset
End of explanation
"""
print(b.filter(qualifier='l3*'))
"""
Explanation: We now see that the LC dataset created an 'l3_mode' parameter, and since l3_mode is set to 'flux' the 'l3' parameter is also visible.
End of explanation
"""
print(b.filter(qualifier='l3*'))
print(b.get_parameter('l3'))
"""
Explanation: l3_mode = 'flux'
When l3_mode is set to 'flux', the l3 parameter defines (in flux units) how much extraneous light is added to the light curve in that particular passband/dataset.
End of explanation
"""
print(b.compute_l3s())
"""
Explanation: To compute the fractional third light from the provided value in flux units, call b.compute_l3s. This assumes that the flux of the system is the sum of the extrinsic passband luminosities (see the pblum tutorial for more details on intrinsic vs extrinsic passband luminosities) divided by $4\pi$ at t0@system, and according to the compute options.
Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.
End of explanation
"""
b.set_value('l3_mode', 'fraction')
print(b.filter(qualifier='l3*'))
print(b.get_parameter('l3_frac'))
"""
Explanation: l3_mode = 'fraction'
When l3_mode is set to 'fraction', the l3 parameter is now replaced by an l3_frac parameter.
End of explanation
"""
print(b.compute_l3s())
"""
Explanation: Similarly to above, we can convert to actual flux units (under the same assumptions), by calling b.compute_l3s.
Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.
End of explanation
"""
b.run_compute(irrad_method='none', model='no_third_light')
b.set_value('l3_mode', 'flux')
b.set_value('l3', 5)
b.run_compute(irrad_method='none', model='with_third_light')
"""
Explanation: Influence on Light Curves (Fluxes)
"Third" light is simply additional flux added to the light curve from some external source - whether it be crowding from a background object, light from the sky, or an extra component in the system that is unaccounted for in the system hierarchy.
To see this we'll compare a light curve with and without "third" light.
End of explanation
"""
afig, mplfig = b['lc01'].plot(model='no_third_light')
afig, mplfig = b['lc01'].plot(model='with_third_light', legend=True, show=True)
"""
Explanation: As expected, adding 5 W/m^2 of third light simply shifts the light curve up by that exact same amount.
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['intensities@lc01', 'abs_intensities@lc01'])
b.set_value('l3', 0.0)
b.run_compute(irrad_method='none', model='no_third_light', overwrite=True)
b.set_value('l3', 5)
b.run_compute(irrad_method='none', model='with_third_light', overwrite=True)
print("no_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='no_third_light')))
print("with_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='with_third_light')))
print("no_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='no_third_light')))
print("with_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='with_third_light')))
"""
Explanation: Influence on Meshes (Intensities)
"Third" light does not affect the intensities stored in the mesh (including those in relative units). In other words, like distance, "third" light only scales the fluxes.
NOTE: this is different than pblums which DO affect the relative intensities. Again, see the pblum tutorial for more details.
To see this we can run both of our models again and look at the values of the intensities in the mesh.
End of explanation
"""
|
Esri/gis-stat-analysis-py-tutor
|
notebooks/NeighborhoodSearching.ipynb
|
apache-2.0
|
import Weights as WEIGHTS
import os as OS
inputFC = r'../data/CA_Polygons.shp'
fullFC = OS.path.abspath(inputFC)
fullPath, fcName = OS.path.split(fullFC)
masterField = "MYID"
"""
Explanation: Neighborhood Structures in the ArcGIS Spatial Statistics Library
Spatial Weights Matrix
On-the-fly Neighborhood Iterators [GA Table]
Constructing PySAL Spatial Weights
Spatial Weight Matrix File
Stores the spatial weights so they do not have to be re-calculated for each analysis.
In row-compressed format.
Little endian byte encoded.
Requires a unique long/short field to identify each feature. Can NOT be the OID/FID.
Construction
End of explanation
"""
swmFile = OS.path.join(fullPath, "fixed250k.swm")
fixedSWM = WEIGHTS.distance2SWM(fullFC, swmFile, masterField,
threshold = 250000)
"""
Explanation: Distance-Based Options
INPUTS:
inputFC (str): path to the input feature class
swmFile (str): path to the SWM file.
masterField (str): field in table that serves as the mapping.
fixed (boolean): fixed (1) or inverse (0) distance?
concept: {str, EUCLIDEAN}: EUCLIDEAN or MANHATTAN
exponent {float, 1.0}: distance decay
threshold {float, None}: distance threshold
kNeighs (int): number of neighbors to return
rowStandard {bool, True}: row standardize weights?
Example: Fixed Distance
End of explanation
"""
swmFile = OS.path.join(fullPath, "inv2_250k.swm")
fixedSWM = WEIGHTS.distance2SWM(fullFC, swmFile, masterField, fixed = False,
exponent = 2.0, threshold = 250000)
"""
Explanation: Example: Inverse Distance Squared
End of explanation
"""
swmFile = OS.path.join(fullPath, "knn8.swm")
fixedSWM = WEIGHTS.kNearest2SWM(fullFC, swmFile, masterField, kNeighs = 8)
"""
Explanation: k-Nearest Neighbors Options
INPUTS:
inputFC (str): path to the input feature class
swmFile (str): path to the SWM file.
masterField (str): field in table that serves as the mapping.
concept: {str, EUCLIDEAN}: EUCLIDEAN or MANHATTAN
kNeighs {int, 1}: number of neighbors to return
rowStandard {bool, True}: row standardize weights?
Example: 8-nearest neighbors
End of explanation
"""
swmFile = OS.path.join(fullPath, "fixed250k_knn8.swm")
fixedSWM = WEIGHTS.distance2SWM(fullFC, swmFile, masterField, kNeighs = 8,
threshold = 250000)
"""
Explanation: Example: Fixed Distance - k-nearest neighbor hybrid [i.e. at least k neighbors but may have more...]
End of explanation
"""
swmFile = OS.path.join(fullPath, "delaunay.swm")
fixedSWM = WEIGHTS.delaunay2SWM(fullFC, swmFile, masterField)
"""
Explanation: Delaunay Triangulation Options
INPUTS:
inputFC (str): path to the input feature class
swmFile (str): path to the SWM file.
masterField (str): field in table that serves as the mapping.
rowStandard {bool, True}: row standardize weights?
Example: delaunay
End of explanation
"""
swmFile = OS.path.join(fullPath, "rook_bin.swm")
WEIGHTS.polygon2SWM(inputFC, swmFile, masterField, rowStandard = False)
"""
Explanation: Polygon Contiguity Options <a id="poly_options"></a>
``` INPUTS:
inputFC (str): path to the input feature class
swmFile (str): path to the SWM file.
masterField (str): field in table that serves as the mapping.
concept: {str, EUCLIDEAN}: EUCLIDEAN or MANHATTAN
kNeighs {int, 0}: number of neighbors to return (1)
rowStandard {bool, True}: row standardize weights?
contiguityType {str, Rook}: {Rook = Edges Only, Queen = Edges/Vertices}
NOTES:
(1) kNeighs is an option often used when you know there are polygon
features that are not contiguous (e.g. islands). A kNeighs value
of 2 will assure that ALL features have at least 2 neighbors.
If a polygon is determined to only touch a single other polygon,
then a nearest neighbor search based on true centroids is used
to find the additional neighbor.
```
Example: Rook [Binary]
End of explanation
"""
swmFile = OS.path.join(fullPath, "queen.swm")
WEIGHTS.polygon2SWM(inputFC, swmFile, masterField, contiguityType = "QUEEN")
"""
Explanation: Example: Queen Contiguity [Row Standardized]
End of explanation
"""
swmFile = OS.path.join(fullPath, "hybrid.swm")
WEIGHTS.polygon2SWM(inputFC, swmFile, masterField, kNeighs = 4)
"""
Explanation: Example: Queen Contiguity - KNN Hybrid [Prevents Islands w/ no Neighbors]
(1)
End of explanation
"""
import SSDataObject as SSDO
inputFC = r'../data/CA_Polygons.shp'
ssdo = SSDO.SSDataObject(inputFC)
uniqueIDField = ssdo.oidName
ssdo.obtainData(uniqueIDField, requireSearch = True)
"""
Explanation: On-the-fly Neighborhood Iterators [GA Table]
Reads centroids of input features into spatial tree structure.
Distance Based Queries.
Scalable: In-memory/disk-space swap for large data.
Requires a unique long/short field to identify each feature. Can be the OID/FID.
Uses requireSearch = True when using ssdo.obtainData
Pre-Example: Load the Data into GA Version of SSDataObject
End of explanation
"""
import arcgisscripting as ARC
import WeightsUtilities as WU
import gapy as GAPY
gaSearch = GAPY.ga_nsearch(ssdo.gaTable)
concept, gaConcept = WU.validateDistanceMethod('EUCLIDEAN', ssdo.spatialRef)
gaSearch.init_nearest(0.0, 4, gaConcept)
neighSearch = ARC._ss.NeighborSearch(ssdo.gaTable, gaSearch)
for i in range(len(neighSearch)):
neighOrderIDs = neighSearch[i]
if i < 5:
print(neighOrderIDs)
import arcgisscripting as ARC
import WeightsUtilities as WU
import gapy as GAPY
import SSUtilities as UTILS
inputGrid = r'D:\Data\UC\UC17\Island\Dykstra\Dykstra.gdb\emerge'
ssdo = SSDO.SSDataObject(inputGrid)
ssdo.obtainData(ssdo.oidName, requireSearch = True)
gaSearch = GAPY.ga_nsearch(ssdo.gaTable)
concept, gaConcept = WU.validateDistanceMethod('EUCLIDEAN', ssdo.spatialRef)
gaSearch.init_nearest(300., 0, gaConcept)
neighSearch = ARC._ss.NeighborSearch(ssdo.gaTable, gaSearch)
print(ssdo.distanceInfo.name)
for i in range(len(neighSearch)):
neighOrderIDs = neighSearch[i]
x0,y0 = ssdo.xyCoords[i]
if i < 5:
nhs = ", ".join([str(i) for i in neighOrderIDs])
dist = []
for nh in neighOrderIDs:
x1,y1 = ssdo.xyCoords[nh]
dij = WU.euclideanDistance(x0,x1,y0,y1)
dist.append(UTILS.formatValue(dij, "%0.2f"))
print("ID {0} has {1} neighs, they are {2}".format(i, len(neighOrderIDs), nhs))
print("The Distances are... {0}".format(", ".join(dist)))
"""
Explanation: Example: NeighborSearch - When you only need your Neighbor IDs
gaSearch.init_nearest(distance_band, minimum_num_neighs, {"euclidean", "manhattan"})
End of explanation
"""
gaSearch = GAPY.ga_nsearch(ssdo.gaTable)
gaSearch.init_nearest(250000, 0, gaConcept)
neighSearch = ARC._ss.NeighborWeights(ssdo.gaTable, gaSearch, weight_type = 0, exponent = 2.0)
for i in range(len(neighSearch)):
neighOrderIDs, neighWeights = neighSearch[i]
if i < 3:
print(neighOrderIDs)
print(neighWeights)
"""
Explanation: Example: NeighborWeights - When you need non-uniform spatial weights (E.g. Inverse Distance Squared)
NeighborWeights(gaTable, gaSearch, weight_type [0: inverse_distance, 1: fixed_distance], exponent = 1.0, row_standard = True, include_self = False)
End of explanation
"""
import pysal as PYSAL
import WeightsUtilities as WU
import SSUtilities as UTILS
def swm2Weights(ssdo, swmfile):
"""Converts ArcGIS Sparse Spatial Weights Matrix (*.swm) file to
PySAL Sparse Spatial Weights Class.
INPUTS:
ssdo (class): instance of SSDataObject [1,2]
swmFile (str): full path to swm file
NOTES:
(1) Data must already be obtained using ssdo.obtainData()
(2) The masterField for the swm file and the ssdo object must be
the same and may NOT be the OID/FID/ObjectID
"""
neighbors = {}
weights = {}
#### Create SWM Reader Object ####
swm = WU.SWMReader(swmfile)
#### SWM May NOT be a Subset of the Data ####
if ssdo.numObs > swm.numObs:
ARCPY.AddIDMessage("ERROR", 842, ssdo.numObs, swm.numObs)
raise SystemExit()
#### Parse All SWM Records ####
for r in UTILS.ssRange(swm.numObs):
info = swm.swm.readEntry()
masterID, nn, nhs, w, sumUnstandard = info
#### Must Have at Least One Neighbor ####
if nn:
#### Must be in Selection Set (If Exists) ####
if masterID in ssdo.master2Order:
outNHS = []
outW = []
#### Transform Master ID to Order ID ####
orderID = ssdo.master2Order[masterID]
#### Neighbors and Weights Adjusted for Selection ####
for nhInd, nhVal in enumerate(nhs):
try:
nhOrder = ssdo.master2Order[nhVal]
outNHS.append(nhOrder)
weightVal = w[nhInd]
if swm.rowStandard:
weightVal = weightVal * sumUnstandard[0]
outW.append(weightVal)
except KeyError:
pass
#### Add Selected Neighbors/Weights ####
if len(outNHS):
neighbors[orderID] = outNHS
weights[orderID] = outW
swm.close()
#### Construct PySAL Spatial Weights and Standardize as per SWM ####
w = PYSAL.W(neighbors, weights)
if swm.rowStandard:
w.transform = 'R'
return w
def poly2Weights(ssdo, contiguityType = "ROOK", rowStandard = True):
"""Uses GP Polygon Neighbor Tool to construct contiguity relationships
and stores them in PySAL Sparse Spatial Weights class.
INPUTS:
ssdo (class): instance of SSDataObject [1]
contiguityType {str, ROOK}: ROOK or QUEEN contiguity
rowStandard {bool, True}: whether to row standardize the spatial weights
NOTES:
(1) Data must already be obtained using ssdo.obtainData() or ssdo.obtainDataGA ()
"""
neighbors = {}
weights = {}
polyNeighDict = WU.polygonNeighborDict(ssdo.inputFC, ssdo.masterField,
contiguityType = contiguityType)
for masterID, neighIDs in UTILS.iteritems(polyNeighDict):
orderID = ssdo.master2Order[masterID]
neighbors[orderID] = [ssdo.master2Order[i] for i in neighIDs]
w = PYSAL.W(neighbors)
if rowStandard:
w.transform = 'R'
return w
def distance2Weights(ssdo, neighborType = 1, distanceBand = 0.0, numNeighs = 0, distanceType = "euclidean",
exponent = 1.0, rowStandard = True, includeSelf = False):
"""Uses ArcGIS Neighborhood Searching Structure to create a PySAL Sparse Spatial Weights Matrix.
INPUTS:
ssdo (class): instance of SSDataObject [1]
neighborType {int, 1}: 0 = inverse distance, 1 = fixed distance,
2 = k-nearest-neighbors, 3 = delaunay
distanceBand {float, 0.0}: return all neighbors within this distance for inverse/fixed distance
numNeighs {int, 0}: number of neighbors for k-nearest-neighbor, can also be used to set a minimum
number of neighbors for inverse/fixed distance
distanceType {str, euclidean}: manhattan or euclidean distance [2]
exponent {float, 1.0}: distance decay factor for inverse distance
rowStandard {bool, True}: whether to row standardize the spatial weights
includeSelf {bool, False}: whether to return self as a neighbor
NOTES:
(1) Data must already be obtained using ssdo.obtainDataGA()
(2) Chordal Distance is used for GCS Data
"""
neighbors = {}
weights = {}
gaSearch = GAPY.ga_nsearch(ssdo.gaTable)
if neighborType == 3:
gaSearch.init_delaunay()
neighSearch = ARC._ss.NeighborWeights(ssdo.gaTable, gaSearch, weight_type = 1)
else:
if neighborType == 2:
distanceBand = 0.0
weightType = 1
else:
weightType = neighborType
concept, gaConcept = WU.validateDistanceMethod(distanceType.upper(), ssdo.spatialRef)
gaSearch.init_nearest(distanceBand, numNeighs, gaConcept)
neighSearch = ARC._ss.NeighborWeights(ssdo.gaTable, gaSearch, weight_type = weightType,
exponent = exponent, include_self = includeSelf)
for i in range(len(neighSearch)):
neighOrderIDs, neighWeights = neighSearch[i]
neighbors[i] = neighOrderIDs
weights[i] = neighWeights
w = PYSAL.W(neighbors, weights)
if rowStandard:
w.transform = 'R'
return w
"""
Explanation: Constructing PySAL Spatial Weights
Convert masterID to orderID when using ssdo.obtainData (SWM File, Polygon Contiguity)
Data is already in orderID when using ssdo.obtainDataGA (Distance Based)
The methods in the next cell can be imported from pysal2ArcGIS.py
End of explanation
"""
import WeightConvertor as W_CONVERT
swmFile = OS.path.join(fullPath, "queen.swm")
galFile = OS.path.join(fullPath, "queen.gal")
convert = W_CONVERT.WeightConvertor(swmFile, galFile, inputFC, "MYID", "SWM", "GAL")
convert.createOutput()
"""
Explanation: Converting Spatial Weight Matrix Formats (e.g. .swm, .gwt, *.gal)
Follow directions at the PySAL-ArcGIS-Toolbox Git Repository [https://github.com/Esri/PySAL-ArcGIS-Toolbox]
Please make note of the section on Adding a Git Project to your ArcGIS Installation Python Path.
End of explanation
"""
import numpy as NUM
NUM.random.seed(100)
ssdo = SSDO.SSDataObject(inputFC)
uniqueIDField = "MYID"
fieldNames = ['PCR2010', 'POP2010', 'PERCNOHS']
ssdo.obtainDataGA(uniqueIDField, fieldNames)
df = ssdo.getDataFrame()
X = df.as_matrix()
swmFile = OS.path.join(fullPath, "rook_bin.swm")
w = swm2Weights(ssdo, swmFile)
maxp = PYSAL.region.Maxp(w, X[:,0:2], 3000000., floor_variable = X[:,2])
maxpGroups = NUM.empty((ssdo.numObs,), int)
for regionID, orderIDs in enumerate(maxp.regions):
maxpGroups[orderIDs] = regionID
print((regionID, orderIDs))
"""
Explanation: Calling MaxP Regions Using SWM Based on Rook Contiguity, No Row Standardization
End of explanation
"""
NUM.random.seed(100)
w = poly2Weights(ssdo, rowStandard = False)
maxp = PYSAL.region.Maxp(w, X[:,0:2], 3000000., floor_variable = X[:,2])
maxpGroups = NUM.empty((ssdo.numObs,), int)
for regionID, orderIDs in enumerate(maxp.regions):
maxpGroups[orderIDs] = regionID
print((regionID, orderIDs))
"""
Explanation: Calling MaxP Regions Using Rook Contiguity, No Row Standardization
End of explanation
"""
NUM.random.seed(100)
w = distance2Weights(ssdo, distanceBand = 250000.0, numNeighs = 2)
maxp = PYSAL.region.Maxp(w, X[:,0:2], 3000000., floor_variable = X[:,2])
maxpGroups = NUM.empty((ssdo.numObs,), int)
for regionID, orderIDs in enumerate(maxp.regions):
maxpGroups[orderIDs] = regionID
print((regionID, orderIDs))
"""
Explanation: Identical results because the random seed was set to 100 and they have the same spatial neighborhood
Calling MaxP Regions Using Fixed Distance 250000, Hybrid to Assure at least 2 Neighbors
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
machine-learning/calculate_the_trace_of_a_matrix.ipynb
|
mit
|
# Load library
import numpy as np
"""
Explanation: Title: Calculate The Trace Of A Matrix
Slug: calculate_the_trace_of_a_matrix
Summary: How to calculate the trace of a matrix in Python.
Date: 2017-09-02 12:00
Category: Machine Learning
Tags: Vectors Matrices Arrays
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create matrix
matrix = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
"""
Explanation: Create Matrix
End of explanation
"""
# Calculate the trace of the matrix
matrix.diagonal().sum()
"""
Explanation: Calculate The Trace
End of explanation
"""
|
fdmazzone/Ecuaciones_Diferenciales
|
Teoria_Basica/scripts/Segundo Parcial 2015.ipynb
|
gpl-2.0
|
from sympy import *
init_printing()
x,y=symbols('x,y')
u=y*x**2-x**2/y**2
(x*u.diff(x)+y*u.diff(y)).simplify()
u.subs(y,1)
"""
Explanation: Exercise 1 Solve the following initial value problem for a partial differential equation
$$x\frac{\partial u}{\partial x}+y\frac{\partial u}{\partial y}=3x^2y$$
$$u(x,1)=0$$
The characteristic equation is
$$\frac{dy}{dx}=\frac{y}{x}\Rightarrow \frac{dy}{y}=\frac{dx}{x}\Rightarrow \ln(|y|)=\ln(|x|)+C_1,\quad \text{where } C_1\in\mathbb{R} $$
Hence
$$|y|=e^{C_1}|x|\Rightarrow y=\pm C_2 x,\quad \text{with } C_2=e^{C_1}>0 \Rightarrow y= C_3 x,\quad \text{with } C_3\in\mathbb{R}$$
For $u$ we now have
$$\frac{du}{dx}=\frac{3x^2y}{x}=3xy=3C_3x^2\Rightarrow u=\int du=C_3\int 3x^2dx=C_3x^3+f(C_3)=yx^2+f\left(\frac{y}{x}\right)$$
Now
$$u(x,1)=0\Rightarrow x^2+f\left(\frac{1}{x}\right)=0\Rightarrow f\left(\frac{1}{x}\right)=-x^2\Rightarrow f\left(z\right)=-\frac{1}{z^2}\quad (\text{setting } z=\frac{1}{x})$$
Finally
$$u= yx^2-\frac{x^2}{y^2}$$
Let us check
End of explanation
"""
x,r=symbols('x,r',real=True)
y=x**r
Ecua=x**3*y.diff(x,3)-6*x*y.diff(x)+12*y
Ecua
Ecua.factor()
"""
Explanation: Exercise 2 Given the equation
$$x^3y'''(x)-6xy'(x)+12y=0,$$
propose a solution of the form $y=x^r$ and show that there exist three linearly independent solutions of that form.
End of explanation
"""
y1=x**2
y2=x**3
y3=x**(-2)
A=Matrix([[y1,y2,y3],[y1.diff(x,1),y2.diff(x,1),y3.diff(x,1) ],[y1.diff(x,2),y2.diff(x,2),y3.diff(x,2) ] ])
A.det()
"""
Explanation: Clearly, the expression tells us that we must have $r=3,2,-2$. Let us check that these values of $r$ give linearly independent solutions. We use the Wronskian
End of explanation
"""
orden=5
coef=symbols('a:5')
coef
m=symbols('m',real=True)
y=x**m* sum([coef[i]*x**i for i in range(orden)])
y
a,b=symbols('a,b',real=True)
Ecua=y.diff(x,2)+(b-x)/x*y.diff(x)-a/x*y
Ecua=Ecua/x**(m-2)
Ecua=Ecua.expand()
Ecua
Ecuaciones=[Ecua.diff(x,i).subs(x,0)/factorial(i) for i in range(orden)]
Ecuaciones
"""
Explanation: Since $W\neq 0$, the solutions are linearly independent.
Exercise 3 The following equation:
$$xy''+(b-x)y'-ay=0,$$
with $a,b\in\mathbb{R}$, is known as Kummer's equation.
* Justify that $x=0$ is a regular singular point of this equation.
* Find the indicial equation and its solutions (Answer: $m=0$ and $m=1-b$).
* Justify that when $b\notin \mathbb{Z}$ the equation has two linearly independent solutions that can be expanded in Frobenius series. The radius of convergence of these series is infinite.
* Justify that, when $b$ is not an integer less than or equal to zero, we obtain a solution that is an entire function and that, choosing the initial condition appropriately, equals:
$$
M(a,b;x)=1+\frac{a}{b}x+\frac{a(a+1)}{b(b+1)}\frac{x^2}{2!}+\cdots+\frac{a(a+1)\cdots(a+n-1)}{b(b+1)\cdots
(b+n-1)}\frac{x^n}{n!}+\cdots$$
This function goes by the pompous name of Kummer's confluent hypergeometric function.
Solution
We have $p(x)=\frac{b-x}{x}$ and $q(x)=-\frac{a}{x}$, so $x=0$ is a singular point of the equation. Moreover,
$xp(x)=b-x$ and $x^2q(x)=-ax$ are polynomials and therefore analytic functions. Thus $x=0$ is a regular singular point.
In what follows we will rely on SymPy
End of explanation
"""
Sol_Ecua_Ind=solve(Ecuaciones[0],m)
Sol_Ecua_Ind
"""
Explanation: Let us find the solutions of the indicial equation
End of explanation
"""
for i in range(1,orden):
Recu=solve(Ecuaciones[i],coef[i])[0].factor()
pprint(coef[i])
pprint(Recu)
"""
Explanation: This settles part 2 of the exercise. To obtain the recurrence relation we must solve each equation for $a_n$.
End of explanation
"""
Sol_a_n=solve(Ecuaciones,coef[1:])
y.subs(Sol_a_n).subs(m,0)
"""
Explanation: We obtain
$$a_n=\frac{a+m+n-1}{(m+n)(b+m+n-1)}a_{n-1}$$
If $b\notin\mathbb{Z}$ then the difference between the roots of the indicial equation, $0-(1-b)=b$, is not an integer. Thus, by the theorem we have seen, there are two Frobenius-series solutions
$$y_1=x^0\sum\limits_{n=0}^{\infty}a_nx^n=\sum\limits_{n=0}^{\infty}a_nx^n,\quad\text{with }a_0\neq 0$$
and
$$y_2=x^{1-b}\sum\limits_{n=0}^{\infty}b_nx^n=\sum\limits_{n=0}^{\infty}b_nx^{n+1-b},\quad\text{with }b_0\neq 0$$
They are linearly independent (I believe this is stated somewhere in the lecture notes) because, since $1-b\neq 0$,
$$\lim_{x\to 0^+}\frac{y_1}{y_2}=\lim_{x\to 0^+}\frac{1}{x^{1-b}}\frac{a_0}{b_0}$$
and this limit is 0 or $\infty$ according to whether $1-b<0$ or $1-b>0$, respectively. Hence $y_1/y_2$ cannot be constant.
When $b\notin \mathbb{Z}$, the root $m=0$ determines one solution. But if $b$ is a positive integer, the root $m=0$ is greater than or equal to the root $m=1-b$, so in this case too $m=0$ gives a Frobenius-series solution.
In either case the radius of convergence is infinite, since
$$\lim_{n\to\infty}\frac{|a_n||x|^n}{|a_{n-1}||x|^{n-1}}=\lim_{n\to\infty}\frac{a+m+n-1}{(m+n)(b+m+n-1)}|x|=0$$
and the ratio test applies.
End of explanation
"""
y.subs(Sol_a_n).subs(m,0).subs(coef[0],1)
"""
Explanation: To obtain exactly what the exercise states, we must choose $a_0=1$
End of explanation
"""
y.subs(Sol_a_n).subs(m,1-b)
"""
Explanation: In the same way, if $b\notin \mathbb{Z}$ or $b$ is an integer less than or equal to zero, $m=1-b$ gives a Frobenius-series solution. Let us find it
End of explanation
"""
for an in coef[1:]:
Sol_a_n[an]=Sol_a_n[an].factor().subs(m,1-b)
y.subs(Sol_a_n)
"""
Explanation: The pattern the expression follows is not very clear because SymPy does not display the denominators in factored form. Let us ask it to do so and, while we are at it, substitute $1-b$ for $m$
End of explanation
"""
orden=10
coef=symbols('a:10')
y=sum([coef[i]*x**i for i in range(orden)])
a,b=symbols('a,b',real=True)
Ecua=y.diff(x,2)-y.diff(x)-y
Ecua
Ecuaciones=[Ecua.diff(x,i).subs(x,0)/factorial(i) for i in range(orden)]
Ecuaciones
"""
Explanation: From this we see that, taking $a_0=1$, the other solution is
$$y=x^{1-b}\left(1+\sum_{n=1}^{\infty}\frac{(a-b+1)(a-b+2)\cdots(a-b+n)}{n!(2-b)(3-b)\cdots (n+1-b)}x^n\right)$$
Exercise 4 Consider the IVP:
$$y''=y'+y,\quad y(0)=0,\quad y'(0)=1.$$
Derive the power series solution
$$y(x)=\sum\limits_{n=1}^{\infty} \frac{F_n}{n!}x^n,$$
where $F_n$ is the sequence of Fibonacci numbers, defined by $F_1=1$, $F_2=1$ and $F_n=F_{n-1}+F_{n-2}$.
Show that the above series has an infinite radius of convergence.
Solution
Let us find the recurrence relation
End of explanation
"""
Ecuaciones=Ecuaciones[:-2]
Ecuaciones
for i in range(6):
Recu=solve(Ecuaciones[i],coef[i+2])[0].factor()
pprint(coef[i+2])
pprint(Recu)
"""
Explanation: The last two equations are of no use, since the truncation of the series leaves them incomplete
End of explanation
"""
|
gfeiden/Notebook
|
Projects/senap/common_blocks.ipynb
|
mit
|
import fileinput as fi
"""
Explanation: MARCS Common Blocks
Identifying Fortran common blocks used throughout the MARCS model atmosphere package. The goal is to have a list of common blocks with an index of each routine they appear in.
End of explanation
"""
!head -n 5 marcs_common_blocks.txt
"""
Explanation: I have already run grep from the command line using
bash
grep -i -n "common" *.f > marcs_common_blocks.txt
The option -i indicates that the search should be case insensitive and -n returns the line number on which the search phrase is used. I've also piped the output to a file called marcs_common_blocks.txt for easy manipulation. All MARCS files have the .f Fortran extension, so all instances should be returned using this search.
Now, let's look at the file structure.
End of explanation
"""
marcs_common_blocks = [line.split(':') for line in fi.input('marcs_common_blocks.txt')]
"""
Explanation: The basic structure is filename.f:##: followed by the contents on the line. Since older Fortran required users to start in the 7th column, there is ample whitespace between the file information and the line content. The only exception is when a line is commented out.
We can read the data in and separate it using the colon, :, as a delimeter.
End of explanation
"""
marcs_common_blocks[0]
"""
Explanation: Check to make sure we've acheived what we set out to do.
End of explanation
"""
common_block_names = [entry[2].rstrip('\n').lower().replace(' ', '')
for entry in marcs_common_blocks if entry[2][0].lower() not in ['c', '!', '*']]
"""
Explanation: Now we need to figure out whether we can easily access common block names. They are always surrounded by / /, but we need to be careful to avoid irregular spacings. It is therefore advantageous to trim all whitespace in the third column before populating the list. We also want to strip new line characters and convert everything to lower case. However, let us also avoid commented lines and focus only on active common blocks. Comments are indicated by either c, !, or *.
End of explanation
"""
common_block_names = [entry[entry.find('/') + 1:entry.rfind('/')] for entry in common_block_names
if entry[0].lower() == 'c']
"""
Explanation: With commented entries removed, all common blocks can be identified by their initial c character. This will ensure that all unwanted entries that spuriously ended up in the list are removed. Then, we extract common block names by looking at what lies between the / /.
End of explanation
"""
common_block_names[0], common_block_names[50], common_block_names[-1]
"""
Explanation: Check whether we've isolated common block names.
End of explanation
"""
common_block_names = list(set(common_block_names))
"""
Explanation: Remove duplicates from the list.
End of explanation
"""
common_block_names.sort()
common_block_names
"""
Explanation: Here's a full listing.
End of explanation
"""
second_round_names = [entry[entry.rfind('/') + 1:] for entry in common_block_names if entry.rfind('/') != -1]
second_round_names
"""
Explanation: There are clearly some issues related to programming styles. Most repeated occurrences are the result of the user "closing" the common block or of including multiple common blocks on a single line. Let's remove those with some brute-force tactics.
End of explanation
"""
first_round_names = [entry for entry in common_block_names if entry.rfind('/') == -1]
"""
Explanation: Only one entry has three common block names, but luckily the third name is already indexed, so we can move on. Get only the first common block name from the original list.
End of explanation
"""
common_block_names = list(set(first_round_names + second_round_names))
common_block_names.sort()
"""
Explanation: Combine the two lists and remove duplicate entries.
End of explanation
"""
key = common_block_names[0]
print key.upper()
for entry in marcs_common_blocks:
if entry[2].lower().find(key) != -1:
print "\t {:16s} on line: {:4s}".format(entry[0], entry[1])
"""
Explanation: Now we are in a position to create a table of contents for our common blocks. It may be best for visualization if we write it in both plain text and markdown.
First, a test to get a proper formatting.
End of explanation
"""
for key in common_block_names:
print key.upper()
for entry in marcs_common_blocks:
if entry[2].lower().find(key) != -1:
print "\t {:16s} on line: {:4s}".format(entry[0], entry[1])
else:
pass
"""
Explanation: That seems to be quite reasonable. Now for all keys,
End of explanation
"""
plaint = open('common_block_index.txt', 'w')
for key in common_block_names:
plaint.write(key.upper() + '\n')
for entry in marcs_common_blocks:
if entry[2].lower().find(key) != -1:
plaint.write("\t {:30s} on line: {:4s} \n".format(entry[0], entry[1]))
else:
pass
plaint.write('\n')
plaint.close()
"""
Explanation: That clearly works, so let's output that information to a plain text file.
End of explanation
"""
markd = open('common_block_index.md', 'w')
for key in common_block_names:
markd.write('## ' + key.upper() + '\n')
for entry in marcs_common_blocks:
if entry[2].lower().find(key) != -1:
markd.write("\t {:30s} on line: {:4s} \n".format(entry[0], entry[1]))
else:
pass
markd.write('\n')
markd.close()
"""
Explanation: And in markdown for easy reading online.
End of explanation
"""
|
Krastanov/cutiepy
|
examples/Schroedinger_Equation_Solver_Examples-with_code.ipynb
|
bsd-3-clause
|
import cutiepy.codegen
from cutiepy import *
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
cutiepy.codegen.DEBUG = True
"""
Explanation: Table of Contents
Rabi Oscillations
Simulating the Full Hamiltonian
With Rotating Wave Approximation
Coherent State in a Harmonic Oscillator
Jaynes-Cummings Revival
Definite Photon State
Coherent State
The only addition to this notebook compared to the original is the DEBUG=True setting that prints all of the generated Cython code.
End of explanation
"""
initial_state = basis(2, 0)
initial_state
ω0 = 1
Δ = 0.002
Ω = 0.005
ts = 2*np.pi/Ω*np.linspace(0,1,40)
H = ω0/2 * sigmaz() + Ω * sigmax() * sin((ω0+Δ)*t)
H
res = sesolve(H, initial_state, ts)
σz_expect = expect(sigmaz(), res)
plt.plot(ts*Ω/np.pi, σz_expect, 'r.', label='numerical result')
Ωp = (Ω**2+Δ**2)**0.5
plt.plot(ts*Ω/np.pi, 1-(Ω/Ωp)**2*2*np.sin(Ωp*ts/2)**2, 'b-',
label=r'$1-2(\Omega^\prime/\Omega)^2\sin^2(\Omega^\prime t/2)$')
plt.title(r'$\langle\sigma_z\rangle$-vs-$t\Omega/\pi$ at '
r'$\Delta/\Omega=%.2f$, $\omega_0/\Omega=%.2f$'%(Δ/Ω, ω0/Ω))
plt.ylim(-1,1)
plt.legend(loc=3);
"""
Explanation: Rabi Oscillations
Simulating the Full Hamiltonian
$\hat{H} = \hat{H}_0 + \Omega \sin((\omega_0+\Delta)t) \hat{\sigma}_x$
$\hat{H}_0 = \frac{\omega_0}{2}\hat{\sigma}_z$
End of explanation
"""
Hp = Δ/2 * sigmaz() + Ω/2 * sigmax()
Hp
res = sesolve(Hp, initial_state, ts)
σz_expect = expect(sigmaz(), res)
plt.plot(ts*Ω/np.pi, σz_expect, 'r.', label='numerical result')
Ωp = (Ω**2+Δ**2)**0.5
plt.plot(ts*Ω/np.pi, 1-(Ω/Ωp)**2*2*np.sin(Ωp*ts/2)**2, 'b-',
label=r'$1-2(\Omega^\prime/\Omega)^2\sin^2(\Omega^\prime t/2)$')
plt.title(r'$\langle\sigma_z\rangle$-vs-$t\Omega/\pi$ at '
r'$\Delta/\Omega=%.2f$ in RWA'%(Δ/Ω))
plt.ylim(-1,1)
plt.legend(loc=3);
"""
Explanation: With Rotating Wave Approximation
$\hat{H}^\prime = e^{i\hat{H}_0 t}\hat{H} e^{-i\hat{H}_0 t} \approx \frac{\Delta}{2} \hat{\sigma}_z + \frac{\Omega}{2} \hat{\sigma}_x$
End of explanation
"""
N_cutoff = 40
α = 2.5
initial_state = coherent(N_cutoff, α)
initial_state
H = num(N_cutoff)
H
ts = 2*np.pi*np.linspace(0,1,41)
res = sesolve(H, initial_state, ts)
a = destroy(N_cutoff)
a_expect = expect(a, res, keep_complex=True)
plt.figure(figsize=(4,4))
plt.plot(np.real(a_expect), np.imag(a_expect), 'b-')
for t, alpha in list(zip(ts,a_expect))[:40:4]:
plt.plot(np.real(alpha), np.imag(alpha), 'r.')
plt.text(np.real(alpha), np.imag(alpha), r'$t=%.1f\pi$'%(t/np.pi), fontsize=14)
plt.title(r'$\langle\hat{a}\rangle$-vs-$t$')
plt.ylabel(r'$\mathcal{I}(\alpha)$')
plt.xlabel(r'$\mathcal{R}(\alpha)$');
"""
Explanation: Coherent State in a Harmonic Oscillator
$|\alpha\rangle$ evolving under $\hat{H} = \hat{n}$
End of explanation
"""
ω = 1
g = 0.1
ts = np.pi/g*np.linspace(0,1,150)
N_cutoff = 40
H0 = ω*(tensor(num(N_cutoff), identity(2)) + 0.5 * tensor(identity(N_cutoff), sigmaz()))
Hp = g*(tensor(destroy(N_cutoff),sigmap()) + tensor(create(N_cutoff), sigmam()))
H0 + Hp
"""
Explanation: Jaynes-Cummings Revival
$\hat{H} = \hat{H}_0 + \hat{H}^\prime$
$\hat{H}_0 = \omega \hat{n} + \omega \frac{1}{2} \hat{\sigma}_z$
$\hat{H}^\prime = g (\hat{\sigma}_+\hat{a} + \hat{\sigma}_-\hat{a}^\dagger)$
End of explanation
"""
n = 3
n_p = tensor(basis(N_cutoff,n), basis(2,0))
np1_m = tensor(basis(N_cutoff,n+1), basis(2,1))
n_p
res = sesolve(H0 + Hp, n_p, ts)
ovlps = overlap([n_p, np1_m], res)
plt.plot(ts*g/np.pi, np.abs(ovlps)**2)
plt.legend([r'$|%d,+\rangle$'%n, r'$|%d,-\rangle$'%(n+1)])
plt.title(r'Population-vs-$gt/\pi$');
n = 8
n_p = tensor(basis(N_cutoff,n), basis(2,0))
np1_m = tensor(basis(N_cutoff,n+1), basis(2,1))
res = sesolve(H0 + Hp, n_p, ts)
ovlps = overlap([n_p, np1_m], res)
plt.plot(ts*g/np.pi, np.abs(ovlps)**2)
plt.legend([r'$|%d,+\rangle$'%n, r'$|%d,-\rangle$'%(n+1)])
plt.title(r'Population-vs-$gt/\pi$');
"""
Explanation: Definite Photon State
End of explanation
"""
alpha = 5
coh = tensor(coherent(N_cutoff, alpha), basis(2,0))
coh
ts = 80/g*np.linspace(0,1,4000)
res = sesolve(H0 + Hp, coh, ts)
inversion = expect(tensor(identity(N_cutoff), sigmaz()), res)
plt.plot(ts*g, inversion)
plt.title(r'$\langle \hat{\sigma}_z \rangle$-vs-$gt$');
"""
Explanation: Coherent State
End of explanation
"""
|
google/eng-edu
|
ml/pc/exercises/image_classification_part2.ipynb
|
apache-2.0
|
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 Google LLC.
End of explanation
"""
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
"""
Explanation: Cat vs. Dog Image Classification
Exercise 2: Reducing Overfitting
Estimated completion time: 30 minutes
In this notebook we will build on the model we created in Exercise 1 to classify cats vs. dogs, and improve accuracy by employing a couple strategies to reduce overfitting: data augmentation and dropout.
We will follow these steps:
Explore how data augmentation works by making random transformations to training images.
Add data augmentation to our data preprocessing.
Add dropout to the convnet.
Retrain the model and evaluate loss and accuracy.
Let's get started!
Exploring Data Augmentation
Let's get familiar with the concept of data augmentation, an essential way to fight overfitting for computer vision models.
In order to make the most of our few training examples, we will "augment" them via a number of random transformations, so that at training time, our model will never see the exact same picture twice. This helps prevent overfitting and helps the model generalize better.
This can be done by configuring a number of random transformations to be performed on the images read by our ImageDataGenerator instance. Let's get started with an example:
End of explanation
"""
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip -O \
/tmp/cats_and_dogs_filtered.zip
import os
import zipfile
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
train_cat_fnames = os.listdir(train_cats_dir)
train_dog_fnames = os.listdir(train_dogs_dir)
"""
Explanation: These are just a few of the options available (for more, see the Keras documentation. Let's quickly go over what we just wrote:
rotation_range is a value in degrees (0–180), a range within which to randomly rotate pictures.
width_shift and height_shift are ranges (as a fraction of total width or height) within which to randomly translate pictures vertically or horizontally.
shear_range is for randomly applying shearing transformations.
zoom_range is for randomly zooming inside pictures.
horizontal_flip is for randomly flipping half of the images horizontally. This is relevant when there are no assumptions of horizontal asymmetry (e.g. real-world pictures).
fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.
Let's take a look at our augmented images. First let's set up our example files, as in Exercise 1.
NOTE: The 2,000 images used in this exercise are excerpted from the "Dogs vs. Cats" dataset available on Kaggle, which contains 25,000 images. Here, we use a subset of the full dataset to decrease training time for educational purposes.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from tensorflow.keras.preprocessing.image import array_to_img, img_to_array, load_img
img_path = os.path.join(train_cats_dir, train_cat_fnames[2])
img = load_img(img_path, target_size=(150, 150)) # this is a PIL image
x = img_to_array(img) # Numpy array with shape (150, 150, 3)
x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 150, 150, 3)
# The .flow() command below generates batches of randomly transformed images
# It will loop indefinitely, so we need to `break` the loop at some point!
i = 0
for batch in datagen.flow(x, batch_size=1):
plt.figure(i)
imgplot = plt.imshow(array_to_img(batch[0]))
i += 1
if i % 5 == 0:
break
"""
Explanation: Next, let's apply the datagen transformations to a cat image from the training set to produce five random variants. Rerun the cell a few times to see fresh batches of random variants.
End of explanation
"""
# Adding rescale, rotation_range, width_shift_range, height_shift_range,
# shear_range, zoom_range, and horizontal flip to our ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
val_datagen = ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using val_datagen generator
validation_generator = val_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
"""
Explanation: Add Data Augmentation to the Preprocessing Step
Now let's add our data-augmentation transformations from Exploring Data Augmentation to our data preprocessing configuration:
End of explanation
"""
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.optimizers import RMSprop
# Our input feature map is 150x150x3: 150x150 for the image pixels, and 3 for
# the three color channels: R, G, and B
img_input = layers.Input(shape=(150, 150, 3))
# First convolution extracts 16 filters that are 3x3
# Convolution is followed by max-pooling layer with a 2x2 window
x = layers.Conv2D(16, 3, activation='relu')(img_input)
x = layers.MaxPooling2D(2)(x)
# Second convolution extracts 32 filters that are 3x3
# Convolution is followed by max-pooling layer with a 2x2 window
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(2)(x)
# Third convolution extracts 64 filters that are 3x3
# Convolution is followed by max-pooling layer with a 2x2 window
x = layers.Convolution2D(64, 3, activation='relu')(x)
x = layers.MaxPooling2D(2)(x)
# Flatten feature map to a 1-dim tensor
x = layers.Flatten()(x)
# Create a fully connected layer with ReLU activation and 512 hidden units
x = layers.Dense(512, activation='relu')(x)
# Add a dropout rate of 0.5
x = layers.Dropout(0.5)(x)
# Create output layer with a single node and sigmoid activation
output = layers.Dense(1, activation='sigmoid')(x)
# Configure and compile the model
model = Model(img_input, output)
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['acc'])
"""
Explanation: If we train a new network using this data augmentation configuration, our network will never see the same input twice. However, the inputs it sees are still heavily intercorrelated, so this might not be quite enough to completely get rid of overfitting.
Adding Dropout
Another popular strategy for fighting overfitting is to use dropout.
TIP: To learn more about dropout, see Training Neural Networks in Machine Learning Crash Course.
Let's reconfigure our convnet architecture from Exercise 1 to add some dropout, right before the final classification layer:
End of explanation
"""
# WRITE CODE TO TRAIN THE MODEL ON ALL 2000 IMAGES FOR 30 EPOCHS, AND VALIDATE
# ON ALL 1,000 VALIDATION IMAGES
"""
Explanation: Retrain the Model
With data augmentation and dropout in place, let's retrain our convnet model. This time, let's train on all 2,000 images available, for 30 epochs, and validate on all 1,000 validation images. (This may take a few minutes to run.) See if you can write the code yourself:
End of explanation
"""
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50,
verbose=2)
"""
Explanation: Solution
Click below for the solution.
End of explanation
"""
# Retrieve a list of accuracy results on training and validation data
# sets for each training epoch
acc = history.history['acc']
val_acc = history.history['val_acc']
# Retrieve a list of loss results on training and validation data
# sets for each training epoch
loss = history.history['loss']
val_loss = history.history['val_loss']
# Get number of epochs
epochs = range(len(acc))
# Plot training and validation accuracy per epoch
plt.plot(epochs, acc)
plt.plot(epochs, val_acc)
plt.title('Training and validation accuracy')
plt.figure()
# Plot training and validation loss per epoch
plt.plot(epochs, loss)
plt.plot(epochs, val_loss)
plt.title('Training and validation loss')
"""
Explanation: Note that with data augmentation in place, the 2,000 training images are randomly transformed each time a new training epoch runs, which means that the model will never see the same image twice during training.
Evaluate the Results
Let's evaluate the results of model training with data augmentation and dropout:
End of explanation
"""
import os, signal
os.kill(os.getpid(), signal.SIGKILL)
"""
Explanation: Much better! We are no longer overfitting, and we have gained ~3 validation accuracy percentage points (see the green line in the top chart). In fact, judging by our training profile, we could keep fitting our model for 30+ more epochs and we could probably get to ~80%!
Clean Up
Before running the next exercise, run the following cell to terminate the kernel and free memory resources:
End of explanation
"""
|
kfollette/AST337-Fall2017
|
Labs/Lab6/Lab6.ipynb
|
mit
|
# The standard fare:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
# Recall our use of this module to work with FITS files in Lab 4:
from astropy.io import fits
# This lets us use various Unix (or Unix-like) commands within Python:
import os
# We will see what this does shortly.
import glob
"""
Explanation: Lab 6: Working with FITS Image Data and Manipulating Arrays
<u>Names:</u>
The main goal of this lab is to create "master" calibration frames from the raw calibration frames we took at Smith. Your homework will then involve applying the calibrations you created in this lab to the cluster and standard star images we took, so that you can work with calibrated science frames.
To work with the calibration frames, we will learn new methods for organizing files, working with arrays of image data, using the FITS data we took from our first observing night at Smith. Along the way, we will learn to use a few Unix tasks from within Python, and we will use and write a number of functions that we will later use outside of the Jupyter Notebook environment.
Part 1: Sorting Data
When you pulled the class Git repository for this lab, you will have also downloaded the zipped folder with all of the FITS files we took at Smith.
First, unzip this file, which will decompress all of the files into a single folder, 2017oct04.
To start working with different groups of images, it will be helpful to first organize all of those files into subfolders. We will import our usual Python modules and a few others needed for this lab:
End of explanation
"""
cd 2017oct04/
ls
"""
Explanation: 1.1 Using Glob
glob is an extremely useful function in Python. First, move into your data directory using cd, then ls to see the file list:
End of explanation
"""
a_few_files = glob.glob('science*.fit')
a_few_files
"""
Explanation: Now, execute the following two cells.
End of explanation
"""
# Finish this cell
all_fits =
"""
Explanation: What does glob do? What kind of Python object is a_few_files?
Answer:
<br><br>
Now use glob to create a new variable, all_fits, that contains the names of all of the FITS files in your directory. We will use this variable a number of times.
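One possible completion, assuming every FITS file in this directory ends in .fit (as the earlier science*.fit example suggests):
```
all_fits = glob.glob('*.fit')  # assumes all raw files end in .fit
len(all_fits)                  # quick check on how many files were found
```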
End of explanation
"""
# Complete this cell to save the header of the zeroth (that is, the first) FITS file in the all_fits list:
test_header =
# Now, view the header info here:
"""
Explanation: 1.2 Iterating to View Header Info
We now would like to take all of our FITS files and sort them into subfolders based on the type of image, e.g., calibration, science, standard star, etc. Helpfully, the file names include some clues about what sort of files we have. However, if you remember from our observing run and log sheets, not all of the file names are always correct. This can happen quite easily at the telescope, because camera software programs typically have various automated ways of naming files (often different from observatory to observatory) and sometimes require the user to remember to change settings while observing.
(To avoid this, some observatories name their files in a uniform way, like "Image_00001.fits" or with a time/date stamp, like "NACO_2017-10-05T00:04:18.fits", which provide unique identifiers.)
Therefore, to check our data and sort it accurately, we'll look at the header information first to get a sense of the type of files we're dealing with. There are a couple different ways we can look at FITS headers; in Lab 4, we used fits.open, which is very versatile. Here, we'll use two new functions to quickly view the header and data:
fits.getheader('filename')
and
fits.getdata('filename')
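For example, the cell below might be completed along these lines (assuming all_fits was built with glob as above):
```
test_header = fits.getheader(all_fits[0])
test_header   # display the full header of the first file
```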
End of explanation
"""
test_header['IMAGETYP']
test_header['OBJECT']
"""
Explanation: We can also use our new variable to view specific header keyword values, such as the image type, or object:
("IMAGETYP" and "OBJECT", since FITS files can only have 8 character keywords)
End of explanation
"""
# Here is an example for loop that provides all of the image types.
# (1) Run this function to view all of the 'IMAGETYP' values, then
# (2) Edit and re-run the function to view all of the 'OBJECT' values, then
# (3) Edit and re-run once more to view a header keyword value of your choice.
for filename in all_fits:
header = fits.getheader(filename)
filetype = header['IMAGETYP']
print(filetype)
"""
Explanation: We are going to use for loops to iterate over each FITS file in our directory, so we can quickly see what kinds of FITS files we have based on keyword.
If iteration is newer to you (or you'd like a brief refresher), check out the other notebook in the main directory ('Unix_Programming_Refresher') for some quick reference exercises.
End of explanation
"""
def filesorter(filename, foldername, fitskeyword_to_check, keyword):
'''
Edit this docstring to describe the function purpose and use.
'''
if os.path.exists(filename):
pass
else:
print(filename + " does not exist or has already been moved.")
return
header = fits.getheader(filename)
fits_type = header[keyword]
if os.path.exists(foldername):
pass
else:
print("Making new directory: " + foldername)
os.mkdir(foldername)
if fits_type == fitskeyword_to_check:
destination = foldername + '/'
print("Moving " + filename + " to: ./" + destination + filename)
os.rename(filename, destination + filename)
return
"""
Explanation: 1.3 The Sorting Function
Because data taken in different filters and with different exposure times require matching calibration files, our goal is to create the following directory structure:
<img src="./folder_flowchart.png" width='50%'>
We want a generic function that:
(1) takes as input:
the name of a single FITS file,
the desired folder name,
the desired type of FITS file,
the header keyword to match the FITS file type
(2) reads in the file's header information,
(3) reads in the file's type based on the keyword
(4) checks if the desired folder name exists, and makes a new directory if it doesn't
(5) checks if the file's type matches the desired type of fitsfile
(6) moves the file into the new or existing folder, if the file matches the right file type.
<br>
We've provided a function that does all of these steps, but lacks a docstring. You'll be using this function multiple times, so discuss within your group exactly how the function works, then update the docstring and add comments within the function (using #).
End of explanation
"""
filesorter()
"""
Explanation: Now test out the filesorter function for a single file, cal-001_Rflat.fit. We know this should be a calibration file from the name, but the 'calibration' folder doesn't exist yet, and we can test whether or not it is actually a 'cal' file by checking the 'OBJECT' keyword.
Complete and execute the cell below:
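A minimal sketch of that call, following the argument order (filename, foldername, fitskeyword_to_check, keyword) from the function definition; the 'cal' value is an assumption about how the OBJECT keyword is recorded in these headers:
```
# 'cal' is an assumed OBJECT value -- check a header first if unsure
filesorter('cal-001_Rflat.fit', 'calibration', 'cal', 'OBJECT')
```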
End of explanation
"""
# Your for loop to sort calibration data here:
# Move into the calibration folder you created:
# Re-glob the fits files in the calibration director to a new variable
cal_files =
"""
Explanation: 1.4 Sort all the calibration data
Below, write your own for loop that goes through each file in all_fits, and applies filesorter to each file based on your choice of folder name ('calibration'), keyword value, and keyword.
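One way the loop might look (again assuming calibration frames carry the value 'cal' in their OBJECT keyword):
```
for filename in all_fits:
    filesorter(filename, 'calibration', 'cal', 'OBJECT')
```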
End of explanation
"""
# Loop to sort flats:
# Loop to sort bias frames:
# Loop to sort dark frames:
# Below, list the contents of each subfolder to make sure things moved correctly:
ls flats
ls biasframes
ls darks
"""
Explanation: In the following three cells, use the same for loop structure to make subdirectories for biasframes, darks, and flats. We won't be using 'OBJECT' as the keyword, since that's a more generic category -- decide which keyword is most appropriate here.
End of explanation
"""
cd biasframes
ls
"""
Explanation: Part 2: Working with Array Data!
Now that our calibration data are all sorted into folders, we'll start to work with the raw frames to make the master calibration files.
2.1 Bias Frames
It's simplest to create a master bias frame first, so we'll start with the bias frames. Change directories into your newly-created biasframes folder and ls to make sure everything has transferred correctly:
End of explanation
"""
# Complete to create a new bias frame list:
biasfiles =
"""
Explanation: Determining bias frame properties
The first thing we'll need to do, as before, is to use glob to create a new list of only bias frames to work with:
End of explanation
"""
# Complete: Define a new variable, n, and determine how many bias files there are (length of biasfiles):
n =
# And use fits.getdata to read in the data for only the first bias frame (the zeroth element of the bias list):
first_bias_data =
"""
Explanation: As discussed in your reading and in lecture, what we ultimately want is a median combination of all of the individual bias frames. So we will create a stack of bias images, and then we will take the median values at each pixel location in the stack to create the final combined frame.
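A sketch of how this cell might be completed:
```
n = len(biasfiles)
first_bias_data = fits.getdata(biasfiles[0])
```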
End of explanation
"""
np.array([[1, 2, 3],[4, 5, 6],[7, 8, 9]])
"""
Explanation: Of course, our FITS images are 2-D, and we will be working with arrays of various dimensions -- not all FITS images are the same size (e.g. 1024 x 1024). In numpy, we can define arrays of various sizes as follows. Execute the cell, and Jupyter will print the formatted version:
End of explanation
"""
test_array = np.array([[1, 2, 3],[4, 5, 6],[7, 8, 9]])
test_array.shape
"""
Explanation: To determine the shape of the array, we can simply use the .shape command:
End of explanation
"""
# Complete to get the dimensions of first_bias_data, then check the values
imsize_y, imsize_x =
imsize_y, imsize_x
"""
Explanation: In the cell below, use this method to determine the dimensions of the first bias image and define them as new variables:
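One way to complete the cell, unpacking the shape tuple into the two dimension variables:
```
imsize_y, imsize_x = first_bias_data.shape
imsize_y, imsize_x
```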
End of explanation
"""
# Create blank stack of arrays to hold each of the frames
# **IMPORTANT** Note that Y is listed first! This is a peculiarity in how python reads arrays:
biasarray = np.zeros((imsize_y, imsize_x , n), dtype = np.float32)
# Now check the shape of our new "blank" array:
biasarray.shape
# In this cell, check what the values in the biasarray look like now:
"""
Explanation: How many bias files are there?
Answer:
What are the dimensions of the first bias frame?
Answer:
Creating a blank 3-D array and adding images to it
In order to take the median value of the stacked bias frames, we'll need to insert them into a larger array first. We can do this by creating a "blank" 3-D array filled with zero values with dimensions of (y dimension, x dimension, number of images):
End of explanation
"""
# Insert each bias frame into a three dimensional stack, one by one:
for ii in range(0, n):
im = fits.getdata(biasfiles[ii])
biasarray[:,:,ii] = im
# How do the biasarray values look now?
biasarray
"""
Explanation: We can make an image stack of bias frames by adding the data from each of the FITS files into the "blank" stack, one by one:
End of explanation
"""
# Take the median of the 3-D stack:
med_bias = np.median(biasarray, axis = 2) # axis=2 means take the median along the *third* axis, since python is zero-indexed
# And get the header for the first image in the list:
biasheader = fits.getheader(biasfiles[0])
# Define a name for the new output FITS file:
master_bias = 'Master_Bias.fit'
# Save your new FITS file!
fits.writeto(master_bias, med_bias, biasheader, clobber=True)
"""
Explanation: Taking the median and saving the master bias:
Now the final steps to create a master bias frame are to:
(1) take the median of the 3-D array, along the appropriate axis, which will collapse the image to a regular two dimensional array,
and
(2) save this new 2-D array -- the master bias -- as a brand-new fits file with a new name, giving it the same header as the first bias image for simplicity.
End of explanation
"""
def mediancombine(filelist):
'''
Edit this docstring accordingly!
'''
n = len(filelist)
first_frame_data = fits.getdata(filelist[0])
imsize_y, imsize_x = first_frame_data.shape
fits_stack = np.zeros((imsize_y, imsize_x , n), dtype = np.float32)
for ii in range(0, n):
im = fits.getdata(filelist[ii])
fits_stack[:,:,ii] = im
med_frame = np.median(fits_stack, axis=2)
return med_frame
# Now our step to create a median of all the bias frames is much simpler!
median_bias = mediancombine(biasfiles)
# Below, how would you save the new median_bias frame as a FITS file?
# Complete the function below and save the duplicate master bias as "Backup_MasterBias.fit")
fits.writeto()
"""
Explanation: What inputs does the fits.writeto function require to save a new FITS file?
Answer:
The final step is to check the new master bias frame to see if it appears normal. In DS9, open up a single raw calibration bias frame as well as the new Master_Bias.fit that you just created.
How do the two compare? Do the pixel values seem reasonable? Do the dimensions of the image make sense?
Answer:
Median Combination Function
For ease of use, let's write all of those preceding steps into a single function that we can re-use later. Edit the docstring below and add any comments on how to use the function that future you will find helpful.
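For reference, once median_bias has been computed with mediancombine, the duplicate save at the end of that cell might look something like this (reusing the biasheader variable defined earlier):
```
fits.writeto('Backup_MasterBias.fit', median_bias, biasheader, clobber=True)
```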
End of explanation
"""
master_bias_path = os.getcwd() + '/Master_Bias.fit'
master_bias_path
"""
Explanation: One last note on the master bias -- we will need to determine the path to the Master_Bias.fit file, because we will use functions that call it from different folders. We can do this using os.getcwd (get current working directory) and adding a string with the filename, as follows:
End of explanation
"""
cd ../darks
darkfiles = glob.glob('*fit')
# Write your bias subtraction function here:
def bias_subtract(filename, path_to_bias):
'''
Add your docstring here.
'''
# Your code goes here.
fits.writeto('b_' + filename, ) # finish this code too to save the FITS
return
# Test out your function on an individual frame (remember, we defined "master_bias_path" just before Section 2.2:
bias_subtract('cal-001_dark60.fit', master_bias_path)
# Did it work? You can test whether the bias subtraction worked by viewing the before/after frames in DS9.
# Now write and execute a for loop that subtracts the bias from each of the dark frames.
"""
Explanation: 2.2 Dark Frames
Bias subtracting the darks:
We want to median combine our darks, but the bias contribution is present in every image taken, so our first step after creating a master bias frame is always to subtract it from every other image. Array subtraction itself is straightforward -- as long as two arrays have the same dimensions, they can be subtracted from one another on a pixel-by-pixel basis.
Below, write a generalized function that subtracts Master_Bias.fit from a single frame (a sketch follows the list of steps). Your function should:
1) take a FITS file name and path to the master bias as inputs,
2) load in the data for the file to be calibrated,
3) load in the header information for that file,
4) load in the data for Master_Bias.fit as a variable (remember, it's located in a different folder than where you are now! Hence the second input),
5) subtract the bias from the input FITS file, and
6) save (writeto) the new bias-subtracted FITS file with a modified name (e.g., cal-001_dark60.fit would become b_cal-001_dark60.fit, for bias-subtraction).
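A minimal sketch of such a function, following the steps above (the exact variable names are up to you):
```
def bias_subtract(filename, path_to_bias):
    '''Subtract the master bias from a single FITS image and save it with a b_ prefix.'''
    data = fits.getdata(filename)                  # data to be calibrated
    header = fits.getheader(filename)              # keep the original header
    master_bias_data = fits.getdata(path_to_bias)  # master bias lives in another folder
    bias_subtracted = data - master_bias_data
    fits.writeto('b_' + filename, bias_subtracted, header, clobber=True)
    return
```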
Once you've written and tested your function, you will apply it to all of the dark frames.
The first step is to move into the darks directory from your current location and glob the dark files.
End of explanation
"""
ls
"""
Explanation: You should now have twice the number of dark frames in your directory, half of which have the prefix 'b_'. These are the frames we want to median combine into the master dark!
End of explanation
"""
# Your lines of code here:
ls
"""
Explanation: In the cell below, use the mediancombine function from earlier to combine all of the bias-subtracted dark frames into a single master dark. We will call it Master_Dark_60s.fit, since dark frames need to match exposure times.
Be careful using glob to select only the darks that have been bias-subtracted!
End of explanation
"""
cd ../flats/
# Sort by filter into new subfolders below, using the filesorter function and updating its inputs as needed:
ls
"""
Explanation: How does the master dark compare to a single raw dark frame? Take a look in DS9 and compare:
Answer:
2.3 Flat Fields
The final master calibration we want to create is a master flat field (flat). As you may have noticed during our observing night, the features that appear in the flats are highly specific to the filters in which they are taken -- so we will end up with two master flat fields. Therefore, our first steps will be to cd into the flats folder, glob files, and run our filesorter function to make the two flat subfolders in our diagram, 'Vband' and 'Rband'.
End of explanation
"""
cd Vband
# Bias-subtract the flat fields:
ls
"""
Explanation: We'll work just with Vband for now, so go into that directory, and you'll work with the Rband reduction in the homework. Like always, our first step is to subtract the master bias. Do this below for all of the files, using your bias_subtract function.
End of explanation
"""
# Check the flat field exposure times here for the files in your directory:
"""
Explanation: Dark subtract the flat fields:
Now, we will want to subtract the dark contribution from the flat fields. This can be accomplished by creating a new function below, dark_subtract, that looks very much like your bias_subtract function.
Most importantly, make sure that the dark you subtract matches the exposure time of the flat fields!
Check the flat exposure times in the cell below. What is/are the value(s)?
Answer:
End of explanation
"""
# Copy other master darks to directory in the following cell:
# Write your dark subtraction function here:
def dark_subtract(filename, path_to_dark):
'''
Add your docstring here.
'''
# Your code goes here.
return
# Now dark subtract the bias-subtracted flat fields:
# Did that work?
ls
"""
Explanation: Typically you would have to scale the 60s master dark to different exposure times, but to save you a bit of effort, we've scaled them for you ahead of time. Both 1s and 10s master dark frames can be found in the "ExtraFiles" folder in the top level directory. Copy these files into your 'darks' folder.
When you save your dark-subtracted FITS file, be sure to add another prefix to the file name, and it's important to only dark subtract the bias-subtracted images. For example, this would change 'b_cal-001_Vflat.fit' to 'db_cal-001_Vflat.fit'.
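A sketch of what dark_subtract might look like, mirroring bias_subtract so that 'b_...' files become 'db_...' files:
```
def dark_subtract(filename, path_to_dark):
    '''Subtract a master dark from a (bias-subtracted) FITS image and save it with a d prefix.'''
    data = fits.getdata(filename)
    header = fits.getheader(filename)
    master_dark_data = fits.getdata(path_to_dark)
    dark_subtracted = data - master_dark_data
    fits.writeto('d' + filename, dark_subtracted, header, clobber=True)  # 'b_...' becomes 'db_...'
    return
```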
End of explanation
"""
def norm_combine_flats(filelist):
'''
Edit this docstring accordingly!
'''
n = len(filelist)
first_frame_data = fits.getdata(filelist[0])
imsize_y, imsize_x = first_frame_data.shape
fits_stack = np.zeros((imsize_y, imsize_x , n), dtype = np.float32)
for ii in range(0, n):
im = fits.getdata(filelist[ii])
norm_im = # finish new line here to normalize flats
fits_stack[:,:,ii] = norm_im
med_frame = np.median(fits_stack, axis=2)
return med_frame
"""
Explanation: 2.4 Making a Master Flat Field
The final step in creating master calibrations is to make a master flat field. Before we median combine into a single image, we want to divide each individual flat by its median pixel value, so that the pixel values in each frame are approximately 1.0 -- that is, we want to normalize them. Only then do we median combine the stack of normalized flat fields to create a master flat.
We can do this in a single function by editing the mediancombine function from earlier, and simply adding a single new line of code.
In line 11 below, add this extra line of code that normalizes im before adding it to the 3D array of fits_stack:
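The missing line might look something like this, dividing each image by its own median so that typical pixel values land near 1.0:
```
norm_im = im / np.median(im)
```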
End of explanation
"""
# Make your list of files first, as usual:
# Apply norm_combine_flats to that list:
# Look at the output of the variable you defined in the previous cell to check the values:
# As a final step, finish the code below to save the master flat as a new fits file (Master_Flat_Vband.fit),
# with the header taken from the first frame in the flat list.
flat_header =
fits.writeto('Master_Flat_Vband.fit', )
"""
Explanation: In the following cells, run norm_combine_flats on your list of dark-subtracted, bias-subtracted frames (only 3 images!), and then check the values of the output to ensure they're close to 1.0.
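A sketch of these steps, assuming the dark- and bias-subtracted flats carry the db_ prefix:
```
db_flatfiles = glob.glob('db_*.fit')       # assumes the calibrated flats start with db_
master_flat_V = norm_combine_flats(db_flatfiles)
master_flat_V                              # values should be close to 1.0

flat_header = fits.getheader(db_flatfiles[0])
fits.writeto('Master_Flat_Vband.fit', master_flat_V, flat_header, clobber=True)
```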
End of explanation
"""
|
mamrehn/machine-learning-tutorials
|
ipynb/[tinydb] First steps.ipynb
|
cc0-1.0
|
path = './testData.json'
from tinydb import TinyDB, where
db = TinyDB(path)
"""
Explanation: TinyDB
TinyDB is a small and lightweight NoSQL database framework based on simple JSON files.
Source
Official Website:
- getting started
- advanced usage
Code
Some examples to create a database and insert, delete and search for elements.
Basics
Generate a new database. If path leads to an existing file, the data is read. Otherwise a new database is created.
End of explanation
"""
print(db.insert({'a':1, 'b':3})) # returns the element's id (1, 2, ...)
db.all() # returns list of dicts
"""
Explanation: Insert some data into db.
End of explanation
"""
db.purge()
from sklearn import datasets
iris = datasets.load_iris()
data = iris.data
keys = iris.feature_names
print('== data ==\n', data[:5], '\n ...')
print('== keys ==\n', keys)
if len(db) == 0:
for d in data:
db.insert({
keys[0]: d[0],
keys[1]: d[1],
keys[2]: d[2],
keys[3]: d[3]
})
# or
# db.insert_multiple([ {}, {}, ...]) # returns list of ids
db.search((where(keys[0]) >= 7.6) & (where(keys[1]) >= 4.2))  # combine conditions with &, not the Python 'and' keyword
"""
Explanation: Fill with some data (iris).
End of explanation
"""
try:
r = db.search(where(keys[0]) > 1000)[0]
print(r)
except:
print('Not in db.')
r = db.get(where(keys[0]) > 1000)
if not r:
print('Not in db.') # r is None
"""
Explanation: Error Handling
End of explanation
"""
db.contains(where(keys[2]) == 1)
db.count(where(keys[2]) == 1)
"""
Explanation: If the actual data is irrelevant and you only want to check for the existence of an element, use contains or count.
End of explanation
"""
elem = db.get(where(keys[2]) == 1)
elem.eid
if db.contains(eids=[11, 12]):
e1 = db.get(eid=11)
e2 = db.get(eid=12)
db.remove(eids=[11, 12])
db.insert_multiple([e1, e2])
print(len(db))
# db.remove(eids=list(arange(70, 75)))
# print(len(db))
"""
Explanation: IDs
End of explanation
"""
# db.remove(where('field').has('name').has('last_name') == 'Doe')
db.insert({'field': {'name': {'first_name': 'John', 'last_name': 'Doe'}}})
# print(db.search(where('field.name.last_name') == 'Doe'))
# print(db.search(where('field.name.last_name').matches('[0-9]*')),'\n')
db.remove(where('field').any(where('val') == 1))
db.insert({'field': [{'val': 1}, {'val': 2}, {'val': 3}]})
print(db.search(where('field').any(where('val') == 1)))
"""
Explanation: Regex and nested queries
End of explanation
"""
|
rishuatgithub/MLPy
|
nlp/UPDATED_NLP_COURSE/01-NLP-Python-Basics/01-Tokenization.ipynb
|
apache-2.0
|
# Import spaCy and load the language library
import spacy
nlp = spacy.load('en_core_web_sm')
# Create a string that includes opening and closing quotation marks
mystring = '"We\'re moving to L.A.!"'
print(mystring)
# Create a Doc object and explore tokens
doc = nlp(mystring)
for token in doc:
print(token.text, end=' | ')
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Tokenization
The first step in creating a Doc object is to break down the incoming text into component pieces or "tokens".
End of explanation
"""
doc2 = nlp(u"We're here to help! Send snail-mail, email support@oursite.com or visit us at http://www.oursite.com!")
for t in doc2:
print(t)
"""
Explanation: <img src="../tokenization.png" width="600">
Prefix: Character(s) at the beginning ▸ $ ( “ ¿
Suffix: Character(s) at the end ▸ km ) , . ! ”
Infix: Character(s) in between ▸ - -- / ...
Exception: Special-case rule to split a string into several tokens or prevent a token from being split when punctuation rules are applied ▸ St. U.S.
Notice that tokens are pieces of the original text. That is, we don't see any conversion to word stems or lemmas (base forms of words) and we haven't seen anything about organizations/places/money etc. Tokens are the basic building blocks of a Doc object - everything that helps us understand the meaning of the text is derived from tokens and their relationship to one another.
Prefixes, Suffixes and Infixes
spaCy will isolate punctuation that does not form an integral part of a word. Quotation marks, commas, and punctuation at the end of a sentence will be assigned their own token. However, punctuation that exists as part of an email address, website or numerical value will be kept as part of the token.
End of explanation
"""
doc3 = nlp(u'A 5km NYC cab ride costs $10.30')
for t in doc3:
print(t)
"""
Explanation: <font color=green>Note that the exclamation points, comma, and the hyphen in 'snail-mail' are assigned their own tokens, yet both the email address and website are preserved.</font>
End of explanation
"""
doc4 = nlp(u"Let's visit St. Louis in the U.S. next year.")
for t in doc4:
print(t)
"""
Explanation: <font color=green>Here the distance unit and dollar sign are assigned their own tokens, yet the dollar amount is preserved.</font>
Exceptions
Punctuation that exists as part of a known abbreviation will be kept as part of the token.
End of explanation
"""
len(doc)
"""
Explanation: <font color=green>Here the abbreviations for "Saint" and "United States" are both preserved.</font>
Counting Tokens
Doc objects have a set number of tokens:
End of explanation
"""
len(doc.vocab)
"""
Explanation: Counting Vocab Entries
Vocab objects contain a full library of items!
End of explanation
"""
doc5 = nlp(u'It is better to give than to receive.')
# Retrieve the third token:
doc5[2]
# Retrieve three tokens from the middle:
doc5[2:5]
# Retrieve the last four tokens:
doc5[-4:]
"""
Explanation: <font color=green>NOTE: This number changes based on the language library loaded at the start, and any new lexemes introduced to the vocab when the Doc was created.</font>
Tokens can be retrieved by index position and slice
Doc objects can be thought of as lists of token objects. As such, individual tokens can be retrieved by index position, and spans of tokens can be retrieved through slicing:
End of explanation
"""
doc6 = nlp(u'My dinner was horrible.')
doc7 = nlp(u'Your dinner was delicious.')
# Try to change "My dinner was horrible" to "My dinner was delicious"
doc6[3] = doc7[3]
"""
Explanation: Tokens cannot be reassigned
Although Doc objects can be considered lists of tokens, they do not support item reassignment:
End of explanation
"""
doc8 = nlp(u'Apple to build a Hong Kong factory for $6 million')
for token in doc8:
print(token.text, end=' | ')
print('\n----')
for ent in doc8.ents:
print(ent.text+' - '+ent.label_+' - '+str(spacy.explain(ent.label_)))
"""
Explanation: Named Entities
Going a step beyond tokens, named entities add another layer of context. The language model recognizes that certain words are organizational names while others are locations, and still other combinations relate to money, dates, etc. Named entities are accessible through the ents property of a Doc object.
End of explanation
"""
len(doc8.ents)
"""
Explanation: <font color=green>Note how two tokens combine to form the entity Hong Kong, and three tokens combine to form the monetary entity: $6 million</font>
End of explanation
"""
doc9 = nlp(u"Autonomous cars shift insurance liability toward manufacturers.")
for chunk in doc9.noun_chunks:
print(chunk.text)
doc10 = nlp(u"Red cars do not carry higher insurance rates.")
for chunk in doc10.noun_chunks:
print(chunk.text)
doc11 = nlp(u"He was a one-eyed, one-horned, flying, purple people-eater.")
for chunk in doc11.noun_chunks:
print(chunk.text)
"""
Explanation: Named Entity Recognition (NER) is an important machine learning tool applied to Natural Language Processing.<br>We'll do a lot more with it in an upcoming section. For more info on named entities visit https://spacy.io/usage/linguistic-features#named-entities
Noun Chunks
Similar to Doc.ents, Doc.noun_chunks are another object property. Noun chunks are "base noun phrases" – flat phrases that have a noun as their head. You can think of noun chunks as a noun plus the words describing the noun – for example, in Sheb Wooley's 1958 song, a "one-eyed, one-horned, flying, purple people-eater" would be one long noun chunk.
End of explanation
"""
from spacy import displacy
doc = nlp(u'Apple is going to build a U.K. factory for $6 million.')
displacy.render(doc, style='dep', jupyter=True, options={'distance': 110})
"""
Explanation: We'll look at additional noun_chunks components besides .text in an upcoming section.<br>For more info on noun_chunks visit https://spacy.io/usage/linguistic-features#noun-chunks
Built-in Visualizers
spaCy includes a built-in visualization tool called displaCy. displaCy is able to detect whether you're working in a Jupyter notebook, and will return markup that can be rendered in a cell right away. When you export your notebook, the visualizations will be included as HTML.
For more info visit https://spacy.io/usage/visualizers
Visualizing the dependency parse
Run the cell below to import displacy and display the dependency graphic
End of explanation
"""
doc = nlp(u'Over the last quarter Apple sold nearly 20 thousand iPods for a profit of $6 million.')
displacy.render(doc, style='ent', jupyter=True)
"""
Explanation: The optional 'distance' argument sets the distance between tokens. If the distance is made too small, text that appears beneath short arrows may become too compressed to read.
Visualizing the entity recognizer
End of explanation
"""
doc = nlp(u'This is a sentence.')
displacy.serve(doc, style='dep')
"""
Explanation: Creating Visualizations Outside of Jupyter
If you're using another Python IDE or writing a script, you can choose to have spaCy serve up html separately:
End of explanation
"""
|
anandha2017/udacity
|
nd101 Deep Learning Nanodegree Foundation/DockerImages/20_transfer_learning/notebooks/transfer-learning/Transfer_Learning.ipynb
|
mit
|
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
"""
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.
git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.
End of explanation
"""
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
"""
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
"""
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
"""
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
"""
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
"""
Explanation: Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
End of explanation
"""
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
"""
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
"""
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
"""
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
"""
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
"""
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
"""
def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
"""
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
"""
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
"""
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
"""
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
"""
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
"""
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation
"""
|
JrtPec/opengrid
|
notebooks/Demo/Demo_caching.ipynb
|
apache-2.0
|
import pandas as pd
from opengrid.library import misc
from opengrid.library import houseprint
from opengrid.library import caching
from opengrid.library import analysis  # needed for the DailyAgg aggregation used below
import charts
hp = houseprint.Houseprint()
"""
Explanation: Demo caching
This notebook shows how caching of daily results is organised. First we show the low-level approach, then a high-level function is used.
Low-level approach
End of explanation
"""
cache_water = caching.Cache(variable='water_daily_min')
df_cache = cache_water.get(sensors=hp.get_sensors(sensortype='water'))
charts.plot(df_cache.ix[-8:], stock=True, show='inline')
"""
Explanation: We demonstrate the caching for the minimal daily water consumption (should be close to zero unless there is a water leak). We create a cache object by specifying what we like to store and retrieve through this object. The cached data is saved as a single csv per sensor in a folder specified in the opengrid.cfg. Add the path to a folder where you want these csv-files to be stored as follows to your opengrid.cfg
[data]
folder: path_to_folder
End of explanation
"""
hp.sync_tmpos()
start = pd.Timestamp('now') - pd.Timedelta(weeks=1)
df_water = hp.get_data(sensortype='water', head=start, )
df_water.info()
"""
Explanation: If this is the first time you run this demo, no cached data will be found, and you get an empty graph.
Let's store some results in this cache. We start from the water consumption of last week.
End of explanation
"""
daily_min = analysis.DailyAgg(df_water, agg='min').result
daily_min.info()
daily_min
cache_water.update(daily_min)
"""
Explanation: We use the DailyAgg class from the analysis module (with agg='min') to obtain a dataframe with daily minima for each sensor.
End of explanation
"""
sensors = hp.get_sensors(sensortype='water') # sensor objects
charts.plot(cache_water.get(sensors=sensors, start=start, end=None), show='inline', stock=True)
"""
Explanation: Now we can get the daily water minima from the cache directly. Pass a start or end date to limit the returned dataframe.
End of explanation
"""
import pandas as pd
from opengrid.library import misc
from opengrid.library import houseprint
from opengrid.library import caching
from opengrid.library import analysis
import charts
hp = houseprint.Houseprint()
#hp.sync_tmpos()
sensors = hp.get_sensors(sensortype='water')
caching.cache_results(hp=hp, sensors=sensors, resultname='water_daily_min', AnalysisClass=analysis.DailyAgg, agg='min')
cache = caching.Cache('water_daily_min')
daily_min = cache.get(sensors = sensors, start = '20151201')
charts.plot(daily_min, stock=True, show='inline')
"""
Explanation: A high-level cache function
The caching of daily results is very similar for all kinds of results. Therefore, a high-level function is defined that can be parametrised to cache a lot of different things.
End of explanation
"""
|
Mahdisadjadi/phoenixcrime
|
map.ipynb
|
mit
|
import shapefile
import matplotlib.patches as patches
from matplotlib.collections import PatchCollection
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
%matplotlib inline
"""
Explanation: Inspired by this gist!
To get data from go to this website:
http://www.census.gov/cgi-bin/geo/shapefiles2010/main
used this: http://www.christianpeccei.com/zipmap/
states: ftp://ftp2.census.gov/geo/tiger/TIGER2010/STATE/2010/
End of explanation
"""
df = pd.read_csv('./data/cleaneddataset.csv')
#list of unique zipcodes
#zipcodes = df['zip'].unique().tolist()
zipval = df['zip'].value_counts()
zipval = zipval[zipval>10] # only more than 10 crimes
#normalized
zipval = zipval/zipval.max()
#list of unique zipcodes
zipcodes = zipval.index.tolist()
"""
Explanation: Find which zipcodes we need first
End of explanation
"""
sfile = shapefile.Reader('./tl_2010_04_zcta510/tl_2010_04_zcta510.shp')
shape_recs = sfile.shapeRecords()
fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(111)
allpatches=[]
for rec in shape_recs:
# points that create each zipcode
points = rec.shape.points
# metadata
meta = rec.record
zipcode=int(meta[1])
# color map
cmap = plt.cm.PuRd
# If this zipcode is part of our database, plot it!
if zipcode in zipcodes:
# pick out the right color
c = cmap(zipval[zipcode]) #np.random.rand(3,1)
#create a patch
patch = patches.Polygon(points,closed=True,facecolor=c,
edgecolor=(0.3, 0.3, 0.3, 1.0), linewidth=0.2)
# collect the patches
# allpatches.append(patch)
ax.add_patch(patch)
# if you want to see irrelevant zipcodes
#else:
# patch = patches.Polygon(points,True,facecolor='k',edgecolor='white',linewidth=0.2)
# ax.add_patch(patch)
#p = PatchCollection(allpatches, match_original=False, alpha=0.3 , linewidth=1)
#ax.add_collection(p)
ax.autoscale()
ax.set_title('Number of Crimes per ZIP Code in Phoenix (2016)')
plt.tight_layout()
plt.axis('off')
plt.savefig("my_map.png")
"""
Explanation: Plot them
End of explanation
"""
|
tensorflow/docs
|
site/en/guide/migrate/tensorboard.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
import tensorflow.compat.v1 as tf1
import tensorflow as tf
import tempfile
import numpy as np
import datetime
%load_ext tensorboard
mnist = tf.keras.datasets.mnist # The MNIST dataset.
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
"""
Explanation: Migrate TensorBoard: TensorFlow's visualization toolkit
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/tensorboard">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/tensorboard.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/tensorboard.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/tensorboard.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
TensorBoard is a built-in tool for providing measurements and visualizations in TensorFlow. Common machine learning experiment metrics, such as accuracy and loss, can be tracked and displayed in TensorBoard. TensorBoard is compatible with TensorFlow 1 and 2 code.
In TensorFlow 1, tf.estimator.Estimator saves summaries for TensorBoard by default. In comparison, in TensorFlow 2, summaries can be saved using a tf.keras.callbacks.TensorBoard <a href="https://keras.io/api/callbacks/" class="external">callback</a>.
This guide demonstrates how to use TensorBoard, first, in TensorFlow 1 with Estimators, and then, how to carry out the equivalent process in TensorFlow 2.
Setup
End of explanation
"""
%reload_ext tensorboard
feature_columns = [tf1.feature_column.numeric_column("x", shape=[28, 28])]
config = tf1.estimator.RunConfig(save_summary_steps=1,
save_checkpoints_steps=1)
path = tempfile.mkdtemp()
classifier = tf1.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[256, 32],
optimizer=tf1.train.AdamOptimizer(0.001),
n_classes=10,
dropout=0.1,
model_dir=path,
config = config
)
train_input_fn = tf1.estimator.inputs.numpy_input_fn(
x={"x": x_train},
y=y_train.astype(np.int32),
num_epochs=10,
batch_size=50,
shuffle=True,
)
test_input_fn = tf1.estimator.inputs.numpy_input_fn(
x={"x": x_test},
y=y_test.astype(np.int32),
num_epochs=10,
shuffle=False
)
train_spec = tf1.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10)
eval_spec = tf1.estimator.EvalSpec(input_fn=test_input_fn,
steps=10,
throttle_secs=0)
tf1.estimator.train_and_evaluate(estimator=classifier,
train_spec=train_spec,
eval_spec=eval_spec)
%tensorboard --logdir {classifier.model_dir}
"""
Explanation: TensorFlow 1: TensorBoard with tf.estimator
In this TensorFlow 1 example, you instantiate a tf.estimator.DNNClassifier, train and evaluate it on the MNIST dataset, and use TensorBoard to display the metrics:
End of explanation
"""
%reload_ext tensorboard
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
steps_per_execution=10)
log_dir = tempfile.mkdtemp()
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir,
histogram_freq=1) # Enable histogram computation with each epoch.
model.fit(x=x_train,
y=y_train,
epochs=10,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback])
%tensorboard --logdir {tensorboard_callback.log_dir}
"""
Explanation: <!-- <img class="tfo-display-only-on-site" src="images/tensorboard_TF1.png"/> -->
TensorFlow 2: TensorBoard with a Keras callback and Model.fit
In this TensorFlow 2 example, you create and store logs with the tf.keras.callbacks.TensorBoard callback, and train the model. The callback tracks the accuracy and loss per epoch. It is passed to Model.fit in the callbacks list.
End of explanation
"""
|
BDannowitz/polymath-progression-blog
|
jlab-ml-lunch-2/notebooks/02-Recommender-System-Surprise.ipynb
|
gpl-2.0
|
import pandas as pd
from surprise import Dataset, Reader
from surprise.model_selection import cross_validate
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from jlab import load_test_data, get_test_detector_plane
"""
Explanation: 02 - Surprise Recommender System
Use a well-supported recommender package
Instead of homebrew matrix decomposition
End of explanation
"""
# scaler = MinMaxScaler(feature_range=(1, 5))  # the movie-style 1-5 scale mentioned below
scaler = StandardScaler()  # overrides the line above; standardized values are what's actually used
# Load, fit the scaler, transform
X_train = pd.read_csv('MLchallenge2_training.csv')
X_train_scaled_values = scaler.fit_transform(X_train)
X_train_scaled = pd.DataFrame(X_train_scaled_values, columns=X_train.columns,
index=X_train.index)
# Load, transform
X_test = load_test_data('test_in.csv')
X_test_scaled_values = scaler.transform(X_test)
X_test_scaled = pd.DataFrame(X_test_scaled_values, columns=X_test.columns,
index=X_test.index)
# While we're at it, get the detector plane that'll be used for evaluation
eval_planes = get_test_detector_plane(X_test)
# Combine datasets
X = (pd.concat([X_test_scaled, X_train_scaled], axis=0)
.reset_index(drop=True))
# Melt the dataframe into a user/item/rating format
# For our purposes, it's trackID / kinematic / value
X.index.name = "track_id"
X_melt = X.reset_index().melt(id_vars=['track_id'])
# Also, load our truth values
X_true = pd.read_csv('test_prediction.csv', names=['x', 'y', 'px', 'py', 'pz'],
header=None)
X.head()
X_melt.sample(10)
X_true.head()
MIN = X.min().min()
MAX = X.max().max()
"""
Explanation: Load up and prep the datasets
Surprise requires a User, Item, Rating system
"Ratings" also need to be on the same scale with the same min/max values
Use melt and MinMaxScaler to achieve these things
In the spirit of the movie ratings system that's popularly used with Surprise, let's set Min/Max to 1/5
End of explanation
"""
from surprise import (
SVD, SVDpp, SlopeOne, NMF, CoClustering,
KNNBasic, KNNWithMeans, KNNWithZScore,
NormalPredictor, BaselineOnly
)
"""
Explanation: Train some Surprise predictors
End of explanation
"""
# A reader is still needed, but only the rating_scale param is required.
reader = Reader(rating_scale=(MIN, MAX))
# The columns must correspond to user id, item id and ratings (in that order).
data = Dataset.load_from_df(X_melt[['track_id', 'variable', 'value']]
.query('track_id >= 10000 and track_id < 11000'),
reader)
algo = SVD()
cross_validate(algo, data, measures=['RMSE', 'MAE'], cv=3, verbose=True)
"""
Explanation: Simple workflow
Train with just 1k full tracks
Train set (with all detectors) starts after track_id 10000
End of explanation
"""
algo_dict = {'SVD': SVD(),
'SVDpp': SVDpp(),
'SlopeOne': SlopeOne(),
'CoClustering': CoClustering(),
'KNNWithMeans': KNNWithMeans(),
'NormalPredictor': NormalPredictor(),
'BaselineOnly': BaselineOnly()}
for algo in algo_dict:
print(algo)
print(cross_validate(algo_dict[algo], data, measures=['RMSE', 'MAE'], cv=3, verbose=True))
"""
Explanation: Give them all a shot
See which ones to pursue
End of explanation
"""
data = Dataset.load_from_df(X_melt[['track_id', 'variable', 'value']]
.query('track_id < 50000'),
reader)
trainset = data.build_full_trainset()
algo = SVDpp(n_factors=20, n_epochs=20)
algo.fit(trainset)
"""
Explanation: SVD, SVDpp, and KNN do well
Probably do even better with more data, but it takes time...
Move forward with SVDpp
Train on a larger slice of tracks this time, then create the prediction workflow for the detector of choice
End of explanation
"""
def get_kinematic_pred(algo, track_id, kinematic):
return algo.predict(track_id, kinematic).est
def get_track_kinematic_pred_for_plane(algo, track_id, plane):
kinematics = [k + str(int(plane))
for k in ['x', 'y', 'px', 'py', 'pz']]
plane_dict = {kin: get_kinematic_pred(algo, track_id, kin)
for kin in kinematics}
return plane_dict
get_track_kinematic_pred_for_plane(algo, 0, 15)
def fill_eval_plane_for_track(algo, X, track_id):
plane = get_test_detector_plane(X.loc[track_id])
plane_dict = get_track_kinematic_pred_for_plane(algo, track_id, plane)
for kin in plane_dict:
X.loc[track_id, kin] = plane_dict[kin]
X_pred_scaled = X_test_scaled.copy()
for ix in X_pred_scaled.index.values:
fill_eval_plane_for_track(algo, X_pred_scaled, ix)
X_pred_values = scaler.inverse_transform(X_pred_scaled)
X_pred = pd.DataFrame(X_pred_values, columns=X_pred_scaled.columns,
index=X_pred_scaled.index)
"""
Explanation: Time to make predictions
Make a copy of our X_test
For each track, for each plane that we need to predict, predict x, y, px, py, pz
End of explanation
"""
for track in [20, 50, 1000, 5000]:
plane = get_test_detector_plane(X_test.loc[track])
print("PRED:\n", X_pred.loc[track, [kin + str(int(plane))
for kin in ['x', 'y', 'px', 'py', 'pz']]],
"\n")
print("TRUE:\n", X_true.loc[track], "\n\n-------------\n")
"""
Explanation: Spot check!
End of explanation
"""
|