repo_name: stringlengths (6-77)
path: stringlengths (8-215)
license: stringclasses (15 values)
content: stringlengths (335-154k)
gojomo/gensim
docs/notebooks/doc2vec-wikipedia.ipynb
lgpl-2.1
from gensim.corpora.wikicorpus import WikiCorpus from gensim.models.doc2vec import Doc2Vec, TaggedDocument from pprint import pprint import multiprocessing """ Explanation: Doc2Vec on Wikipedia articles We replicate the experiments of Document Embedding with Paragraph Vectors (http://arxiv.org/abs/1507.07998). The paper reports only DBOW results on Wikipedia data, so we run the experiments with both DBOW and DM. Basic Setup Let's import the Doc2Vec module. End of explanation """ wiki = WikiCorpus("enwiki-latest-pages-articles.xml.bz2") #wiki = WikiCorpus("enwiki-YYYYMMDD-pages-articles.xml.bz2") """ Explanation: Preparing the corpus First, download the dump of all Wikipedia articles from here (you want the file enwiki-latest-pages-articles.xml.bz2, or enwiki-YYYYMMDD-pages-articles.xml.bz2 for date-specific dumps). Second, convert the articles to a WikiCorpus. WikiCorpus constructs a corpus from a Wikipedia (or other MediaWiki-based) database dump. For more details on WikiCorpus, see Corpus from a Wikipedia dump. End of explanation """ class TaggedWikiDocument(object): def __init__(self, wiki): self.wiki = wiki self.wiki.metadata = True def __iter__(self): for content, (page_id, title) in self.wiki.get_texts(): yield TaggedDocument([c.decode("utf-8") for c in content], [title]) documents = TaggedWikiDocument(wiki) """ Explanation: Define the TaggedWikiDocument class to convert the WikiCorpus into a form suitable for Doc2Vec. End of explanation """ pre = Doc2Vec(min_count=0) pre.scan_vocab(documents) for num in range(0, 20): print('min_count: {}, size of vocab: '.format(num), pre.scale_vocab(min_count=num, dry_run=True)['memory']['vocab']/700) """ Explanation: Preprocessing To match the vocabulary size of the original paper, we first calculate the optimal min_count parameter. End of explanation """ cores = multiprocessing.cpu_count() models = [ # PV-DBOW Doc2Vec(dm=0, dbow_words=1, size=200, window=8, min_count=19, iter=10, workers=cores), # PV-DM w/average Doc2Vec(dm=1, dm_mean=1, size=200, window=8, min_count=19, iter=10, workers=cores), ] models[0].build_vocab(documents) print(str(models[0])) models[1].reset_from(models[0]) print(str(models[1])) """ Explanation: In the original paper, the vocabulary size is 915,715. We get a similar vocabulary size by setting min_count = 19 (size of vocab = 898,725). Training the Doc2Vec Model To train Doc2Vec with both methods, DBOW and DM, we define a list of models. End of explanation """ for model in models: %%time model.train(documents, total_examples=model.corpus_count, epochs=model.iter) """ Explanation: Now we're ready to train Doc2Vec on the English Wikipedia. End of explanation """ for model in models: print(str(model)) pprint(model.docvecs.most_similar(positive=["Machine learning"], topn=20)) """ Explanation: Similarity interface After that, let's test both models! The DBOW model shows results similar to the original paper. First, we calculate the cosine similarity for "Machine learning" using Paragraph Vectors. Word Vectors and Document Vectors are stored separately; we have to use the model's .docvecs attribute to extract Document Vectors from the Doc2Vec model. End of explanation """ for model in models: print(str(model)) pprint(model.docvecs.most_similar(positive=["Lady Gaga"], topn=10)) """ Explanation: The DBOW model interprets 'Machine learning' as part of the Computer Science field, and the DM model as a Data Science related field. Second, we calculate the cosine similarity for "Lady Gaga" using Paragraph Vectors. 
End of explanation """ for model in models: print(str(model)) vec = [model.docvecs["Lady Gaga"] - model["american"] + model["japanese"]] pprint([m for m in model.docvecs.most_similar(vec, topn=11) if m[0] != "Lady Gaga"]) """ Explanation: The DBOW model reveals similar singers in the U.S., while the DM model mostly finds that Lady Gaga's own songs are the items most similar to the tag "Lady Gaga". Third, we calculate the cosine similarity of "Lady Gaga" - "American" + "Japanese" using a Document Vector and Word Vectors. "American" and "Japanese" are Word Vectors, not Paragraph Vectors, and they are already lowercased by WikiCorpus. End of explanation """
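For reference, here is a minimal sketch of the same queries under the newer gensim 4.x API (assuming a 4.x install and the documents iterable defined above); several parameter and attribute names changed there, so treat this as an illustration rather than a drop-in replacement.
from gensim.models.doc2vec import Doc2Vec
# gensim 4.x renames: size -> vector_size, iter -> epochs, model.docvecs -> model.dv
model_dbow = Doc2Vec(dm=0, dbow_words=1, vector_size=200, window=8,
                     min_count=19, epochs=10, workers=multiprocessing.cpu_count())
model_dbow.build_vocab(documents)
model_dbow.train(documents, total_examples=model_dbow.corpus_count,
                 epochs=model_dbow.epochs)
# Document vectors live in model.dv, word vectors in model.wv
pprint(model_dbow.dv.most_similar("Machine learning", topn=5))
vec = model_dbow.dv["Lady Gaga"] - model_dbow.wv["american"] + model_dbow.wv["japanese"]
pprint(model_dbow.dv.most_similar([vec], topn=5))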
dwhswenson/contact_map
examples/contact_trajectory.ipynb
lgpl-2.1
from __future__ import print_function %matplotlib inline import matplotlib.pyplot as plt import numpy as np from contact_map import ContactTrajectory, RollingContactFrequency import mdtraj as md traj = md.load("data/gsk3b_example.h5") print(traj) # to see number of frames; size of system """ Explanation: Contact Trajectories Sometimes you're interested in how contacts evolve in a trajectory, frame-by-frame. Contact Map Explorer provides the ContactTrajectory class for this purpose. We'll look at this using a trajectory of a specific inhibitor during its binding process to GSK3B. This system is also studied in the notebook on contact concurrences (with very similar initial discussion). End of explanation """ topology = traj.topology yyg = topology.select('resname YYG and element != "H"') protein = topology.select('protein and element != "H"') """ Explanation: First, we'll use MDTraj's atom selection language to split out the protein and the ligand, which has residue name YYG in the input files. We're only interested in contacts between the protein and the ligand (not contacts within the protein). We'll also only look at heavy atom contacts. End of explanation """ contacts = ContactTrajectory(traj, query=yyg, haystack=protein) """ Explanation: Making an accessing a contact trajectory Contact trajectories have the same keyword arguments as other contact objects End of explanation """ contacts[0].residue_contacts.most_common() contacts.residue_contacts[0].most_common() """ Explanation: Once the ContactTrajectory has been made, contacts for individual frames can be accessed either by taking the index of the ContactTrajectory itself, or by getting the list of contact (e.g., all the residue contacts frame-by-frame) and selecting the frame of interest. End of explanation """ for contact in contacts[50:80:4]: print(contact.residue_contacts.most_common()[:3]) """ Explanation: Advanced Python indexing is also allowed. In this example, note how the most common partners for YYG change! This is also what we see in the contact concurrences example. End of explanation """ freq = contacts.contact_frequency() fig, ax = plt.subplots(figsize=(5.5,5)) freq.residue_contacts.plot_axes(ax=ax) """ Explanation: We can easily turn the ContactTrajectory into ContactFrequency: End of explanation """ RollingContactFrequency(contacts, width=30, step=14) rolling_frequencies = contacts.rolling_frequency(window_size=30, step=14) rolling_frequencies """ Explanation: Rolling Contact Frequencies A ContactTrajectory keeps all the time-dependent information about the contacts, whereas a ContactFrequency, as plotted above, loses all of it. What about something in between? For this, we have a RollingContactFrequency, which acts like a rolling average. It creates a contact frequency over a certain window of frames, with a certain step size between each window. This can be created either with the RollingContactFrequency object, or, more easily, with the ContactTrajectory.rolling_frequency() method. End of explanation """ fig, axs = plt.subplots(3, 2, figsize=(12, 10)) for ax, freq in zip(axs.flatten(), rolling_frequencies): freq.residue_contacts.plot_axes(ax=ax) """ Explanation: Now we'll plot each windowed frequency, and we will see the transition as some contacts fade out and others grow in. End of explanation """
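For intuition about how the width and step parameters carve up a trajectory, here is a small standalone sketch in plain Python (independent of Contact Map Explorer; the rolling_windows helper is hypothetical) that lists the frame ranges a rolling window covers.
def rolling_windows(n_frames, width=30, step=14):
    # Start indices advance by `step`; each window spans `width` frames.
    return [(start, start + width)
            for start in range(0, n_frames - width + 1, step)]

# e.g. a 101-frame trajectory with width=30, step=14 gives six windows:
print(rolling_windows(101, width=30, step=14))
# [(0, 30), (14, 44), (28, 58), (42, 72), (56, 86), (70, 100)]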
jskDr/jamespy_py3
wireless/algorithm_nb/qsort_by_numba.ipynb
mit
from numba import jit, int32 import numpy as np """ Explanation: Quick Sort Algorithm Code Implemented by Numba in Python 파이썬의 numba 패키지를 이용한 퀵 정렬 알고리즘 구현 Numba는 파이썬 코드를 실시간으로 C로 번역해 속도를 높힌다. Numba로 구현했을 때와 일반적인 파이썬을 사용한 경우의 속도를 비교한다. 길이 1000짜리 정수 배열을 아래 알고리즘으로 퀵정렬한 경우, numba를 사용한 경우의 속도가 266배 빠르다. End of explanation """ def qsort_nojit(A, left, right): if left < right: #print(left, right) pivot = partisioning_nojit(A, left, right) qsort_nojit(A, left, pivot-1) qsort_nojit(A, pivot+1, right) def partisioning_nojit(A, left, right): st = left ed = right p_item = A[left] while st < ed: while st <= right and A[st] <= p_item: st += 1 while ed >= left and A[ed] > p_item: ed -= 1 if st < ed: A[st], A[ed] = A[ed], A[st] #print(st, ed) A[left] = A[ed] A[ed] = p_item return ed def quicksort_nojit(A): qsort_nojit(A, 0, len(A)-1) return A quicksort_nojit(np.array([6, 5, 38, 42, 3, 4, 7, 2, 1, 10, 100, 20])) """ Explanation: Numba를 사용하지 않은 파이썬 퀵정렬 코드 End of explanation """ @jit def qsort(A, left, right): if left < right: pivot = partisioning(A, left, right) qsort(A, left, pivot-1) qsort(A, pivot+1, right) return A @jit def partisioning(A, left, right): st = left ed = right p_item = A[left] while st < ed: while st <= right and A[st] <= p_item: st += 1 while ed >= left and A[ed] > p_item: ed -= 1 if st < ed: A[st], A[ed] = A[ed], A[st] #print(st, ed) A[left] = A[ed] A[ed] = p_item return ed @jit def quicksort(A): qsort(A, 0, len(A)-1) return A quicksort(np.array([6, 5, 38, 42, 3, 4, 7, 2, 1, 10, 100, 20])) """ Explanation: Numba를 사용한 경우 파이썬 퀵정렬 코드 End of explanation """ A = np.random.randint(1000, size=1000) %timeit B = quicksort_nojit(A) %timeit B = quicksort(A) 75.3e3 / 283 """ Explanation: Numba를 사용한 경우와 그렇지 않은 경우를 비교 배열의 길이가 1000인 경우에 대해 상호 비교 Numba를 사용한 경우의 속도가 266배 빠르다. End of explanation """
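A quick correctness check is a reasonable companion to the timing comparison; the sketch below assumes the functions defined above and verifies both variants against NumPy's reference sort on a fresh random array.
A_check = np.random.randint(1000, size=1000)
expected = np.sort(A_check)
# quicksort / quicksort_nojit sort in place and return the array, so pass copies
assert np.array_equal(quicksort(A_check.copy()), expected)
assert np.array_equal(quicksort_nojit(A_check.copy()), expected)
print("both variants agree with np.sort")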
tensorflow/docs-l10n
site/en-snapshot/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb
apache-2.0
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """ Explanation: Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """ # Install seaborn for pretty visualizations !pip3 install --quiet seaborn # Install SentencePiece package # SentencePiece package is needed for Universal Sentence Encoder Lite. We'll # use it for all the text processing and sentence feature ID lookup. !pip3 install --quiet sentencepiece from absl import logging import tensorflow.compat.v1 as tf tf.disable_v2_behavior() import tensorflow_hub as hub import sentencepiece as spm import matplotlib.pyplot as plt import numpy as np import os import pandas as pd import re import seaborn as sns """ Explanation: Universal Sentence Encoder-Lite demo <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a href="https://tfhub.dev/google/universal-sentence-encoder-lite/2"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a> </td> </table> This Colab illustrates how to use the Universal Sentence Encoder-Lite for sentence similarity task. This module is very similar to Universal Sentence Encoder with the only difference that you need to run SentencePiece processing on your input sentences. The Universal Sentence Encoder makes getting sentence level embeddings as easy as it has historically been to lookup the embeddings for individual words. The sentence embeddings can then be trivially used to compute sentence level meaning similarity as well as to enable better performance on downstream classification tasks using less supervised training data. 
Getting started Setup End of explanation """ module = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-lite/2") input_placeholder = tf.sparse_placeholder(tf.int64, shape=[None, None]) encodings = module( inputs=dict( values=input_placeholder.values, indices=input_placeholder.indices, dense_shape=input_placeholder.dense_shape)) """ Explanation: Load the module from TF-Hub End of explanation """ with tf.Session() as sess: spm_path = sess.run(module(signature="spm_path")) sp = spm.SentencePieceProcessor() with tf.io.gfile.GFile(spm_path, mode="rb") as f: sp.LoadFromSerializedProto(f.read()) print("SentencePiece model loaded at {}.".format(spm_path)) def process_to_IDs_in_sparse_format(sp, sentences): # An utility method that processes sentences with the sentence piece processor # 'sp' and returns the results in tf.SparseTensor-similar format: # (values, indices, dense_shape) ids = [sp.EncodeAsIds(x) for x in sentences] max_len = max(len(x) for x in ids) dense_shape=(len(ids), max_len) values=[item for sublist in ids for item in sublist] indices=[[row,col] for row in range(len(ids)) for col in range(len(ids[row]))] return (values, indices, dense_shape) """ Explanation: Load SentencePiece model from the TF-Hub Module The SentencePiece model is conveniently stored inside the module's assets. It has to be loaded in order to initialize the processor. End of explanation """ # Compute a representation for each message, showing various lengths supported. word = "Elephant" sentence = "I am a sentence for which I would like to get its embedding." paragraph = ( "Universal Sentence Encoder embeddings also support short paragraphs. " "There is no hard limit on how long the paragraph is. Roughly, the longer " "the more 'diluted' the embedding will be.") messages = [word, sentence, paragraph] values, indices, dense_shape = process_to_IDs_in_sparse_format(sp, messages) # Reduce logging output. logging.set_verbosity(logging.ERROR) with tf.Session() as session: session.run([tf.global_variables_initializer(), tf.tables_initializer()]) message_embeddings = session.run( encodings, feed_dict={input_placeholder.values: values, input_placeholder.indices: indices, input_placeholder.dense_shape: dense_shape}) for i, message_embedding in enumerate(np.array(message_embeddings).tolist()): print("Message: {}".format(messages[i])) print("Embedding size: {}".format(len(message_embedding))) message_embedding_snippet = ", ".join( (str(x) for x in message_embedding[:3])) print("Embedding: [{}, ...]\n".format(message_embedding_snippet)) """ Explanation: Test the module with a few examples End of explanation """ def plot_similarity(labels, features, rotation): corr = np.inner(features, features) sns.set(font_scale=1.2) g = sns.heatmap( corr, xticklabels=labels, yticklabels=labels, vmin=0, vmax=1, cmap="YlOrRd") g.set_xticklabels(labels, rotation=rotation) g.set_title("Semantic Textual Similarity") def run_and_plot(session, input_placeholder, messages): values, indices, dense_shape = process_to_IDs_in_sparse_format(sp,messages) message_embeddings = session.run( encodings, feed_dict={input_placeholder.values: values, input_placeholder.indices: indices, input_placeholder.dense_shape: dense_shape}) plot_similarity(messages, message_embeddings, 90) """ Explanation: Semantic Textual Similarity (STS) task example The embeddings produced by the Universal Sentence Encoder are approximately normalized. The semantic similarity of two sentences can be trivially computed as the inner product of the encodings. 
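To make the sparse format concrete, here is a tiny worked example with made-up token IDs (no SentencePiece model needed; toy_ids is hypothetical) showing the (values, indices, dense_shape) triple that the function above produces.
toy_ids = [[7, 11, 42], [3, 9]]            # hypothetical outputs of sp.EncodeAsIds
max_len = max(len(x) for x in toy_ids)     # 3
print("dense_shape:", (len(toy_ids), max_len))                      # (2, 3)
print("values:", [i for sub in toy_ids for i in sub])               # [7, 11, 42, 3, 9]
print("indices:", [[r, c] for r in range(len(toy_ids))
                   for c in range(len(toy_ids[r]))])
# [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]]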
End of explanation """ messages = [ # Smartphones "I like my phone", "My phone is not good.", "Your cellphone looks great.", # Weather "Will it snow tomorrow?", "Recently a lot of hurricanes have hit the US", "Global warming is real", # Food and health "An apple a day, keeps the doctors away", "Eating strawberries is healthy", "Is paleo better than keto?", # Asking about age "How old are you?", "what is your age?", ] with tf.Session() as session: session.run(tf.global_variables_initializer()) session.run(tf.tables_initializer()) run_and_plot(session, input_placeholder, messages) """ Explanation: Similarity visualized Here we show the similarity in a heat map. The final graph is a 9x9 matrix where each entry [i, j] is colored based on the inner product of the encodings for sentence i and j. End of explanation """ import pandas import scipy import math def load_sts_dataset(filename): # Loads a subset of the STS dataset into a DataFrame. In particular both # sentences and their human rated similarity score. sent_pairs = [] with tf.gfile.GFile(filename, "r") as f: for line in f: ts = line.strip().split("\t") # (sent_1, sent_2, similarity_score) sent_pairs.append((ts[5], ts[6], float(ts[4]))) return pandas.DataFrame(sent_pairs, columns=["sent_1", "sent_2", "sim"]) def download_and_load_sts_data(): sts_dataset = tf.keras.utils.get_file( fname="Stsbenchmark.tar.gz", origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz", extract=True) sts_dev = load_sts_dataset( os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv")) sts_test = load_sts_dataset( os.path.join( os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv")) return sts_dev, sts_test sts_dev, sts_test = download_and_load_sts_data() """ Explanation: Evaluation: STS (Semantic Textual Similarity) Benchmark The STS Benchmark provides an intristic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. Pearson correlation is then used to evaluate the quality of the machine similarity scores against human judgements. Download data End of explanation """ sts_input1 = tf.sparse_placeholder(tf.int64, shape=(None, None)) sts_input2 = tf.sparse_placeholder(tf.int64, shape=(None, None)) # For evaluation we use exactly normalized rather than # approximately normalized. 
sts_encode1 = tf.nn.l2_normalize( module( inputs=dict(values=sts_input1.values, indices=sts_input1.indices, dense_shape=sts_input1.dense_shape)), axis=1) sts_encode2 = tf.nn.l2_normalize( module( inputs=dict(values=sts_input2.values, indices=sts_input2.indices, dense_shape=sts_input2.dense_shape)), axis=1) sim_scores = -tf.acos(tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1)) """ Explanation: Build evaluation graph End of explanation """ #@title Choose dataset for benchmark dataset = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"} values1, indices1, dense_shape1 = process_to_IDs_in_sparse_format(sp, dataset['sent_1'].tolist()) values2, indices2, dense_shape2 = process_to_IDs_in_sparse_format(sp, dataset['sent_2'].tolist()) similarity_scores = dataset['sim'].tolist() def run_sts_benchmark(session): """Returns the similarity scores""" scores = session.run( sim_scores, feed_dict={ sts_input1.values: values1, sts_input1.indices: indices1, sts_input1.dense_shape: dense_shape1, sts_input2.values: values2, sts_input2.indices: indices2, sts_input2.dense_shape: dense_shape2, }) return scores with tf.Session() as session: session.run(tf.global_variables_initializer()) session.run(tf.tables_initializer()) scores = run_sts_benchmark(session) pearson_correlation = scipy.stats.pearsonr(scores, similarity_scores) print('Pearson correlation coefficient = {0}\np-value = {1}'.format( pearson_correlation[0], pearson_correlation[1])) """ Explanation: Evaluate sentence embeddings End of explanation """
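STS results are often reported with Spearman rank correlation as well; assuming the scores and similarity_scores computed above, it can be added with one extra call.
spearman_correlation = scipy.stats.spearmanr(scores, similarity_scores)
print('Spearman correlation coefficient = {0}\np-value = {1}'.format(
    spearman_correlation.correlation, spearman_correlation.pvalue))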
geektoni/shogun
doc/ipython-notebooks/multiclass/KNN.ipynb
bsd-3-clause
import numpy as np import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') from scipy.io import loadmat, savemat from numpy import random from os import path import matplotlib.pyplot as plt %matplotlib inline import shogun as sg mat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat')) Xall = mat['data'] Yall = np.array(mat['label'].squeeze(), dtype=np.double) # map from 1..10 to 0..9, since shogun # requires multiclass labels to be # 0, 1, ..., K-1 Yall = Yall - 1 random.seed(0) subset = random.permutation(len(Yall)) Xtrain = Xall[:, subset[:5000]] Ytrain = Yall[subset[:5000]] Xtest = Xall[:, subset[5000:6000]] Ytest = Yall[subset[5000:6000]] Nsplit = 2 all_ks = range(1, 21) print(Xall.shape) print(Xtrain.shape) print(Xtest.shape) """ Explanation: K-Nearest Neighbors (KNN) by Chiyuan Zhang and S&ouml;ren Sonnenburg This notebook illustrates the <a href="http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm">K-Nearest Neighbors</a> (KNN) algorithm on the USPS digit recognition dataset in Shogun. Further, the effect of <a href="http://en.wikipedia.org/wiki/Cover_tree">Cover Trees</a> on speed is illustrated by comparing KNN with and without it. Finally, a comparison with <a href="http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM">Multiclass Support Vector Machines</a> is shown. The basics The training of a KNN model basically does nothing but memorizing all the training points and the associated labels, which is very cheap in computation but costly in storage. The prediction is implemented by finding the K nearest neighbors of the query point, and voting. Here K is a hyper-parameter for the algorithm. Smaller values for K give the model low bias but high variance; while larger values for K give low variance but high bias. In SHOGUN, you can use KNN to perform KNN learning. To construct a KNN machine, you must choose the hyper-parameter K and a distance function. Usually, we simply use the standard EuclideanDistance, but in general, any subclass of Distance could be used. For demonstration, in this tutorial we select a random subset of 1000 samples from the USPS digit recognition dataset, and run 2-fold cross validation of KNN with varying K. First we load and init data split: End of explanation """ def plot_example(dat, lab): for i in range(5): ax=plt.subplot(1,5,i+1) plt.title(int(lab[i])) ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest') ax.set_xticks([]) ax.set_yticks([]) _=plt.figure(figsize=(17,6)) plt.gray() plot_example(Xtrain, Ytrain) _=plt.figure(figsize=(17,6)) plt.gray() plot_example(Xtest, Ytest) """ Explanation: Let us plot the first five examples of the train data (first row) and test data (second row). 
End of explanation """ labels = sg.create_labels(Ytrain) feats = sg.create_features(Xtrain) k=3 dist = sg.create_distance('EuclideanDistance') knn = sg.create_machine("KNN", k=k, distance=dist, labels=labels) labels_test = sg.create_labels(Ytest) feats_test = sg.create_features(Xtest) knn.train(feats) pred = knn.apply(feats_test) print("Predictions", pred.get("labels")[:5]) print("Ground Truth", Ytest[:5]) evaluator = sg.create_evaluation("MulticlassAccuracy") accuracy = evaluator.evaluate(pred, labels_test) print("Accuracy = %2.2f%%" % (100*accuracy)) """ Explanation: Then we import shogun components and convert the data to shogun objects: End of explanation """ idx=np.where(pred != Ytest)[0] Xbad=Xtest[:,idx] Ybad=Ytest[idx] _=plt.figure(figsize=(17,6)) plt.gray() plot_example(Xbad, Ybad) """ Explanation: Let's plot a few missclassified examples - I guess we all agree that these are notably harder to detect. End of explanation """ knn.put('k', 13) multiple_k=knn.get("classify_for_multiple_k") print(multiple_k.shape) """ Explanation: Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform Cross-validation, we just use a small trick here that saves us lots of computation time: When we have to determine the $K\geq k$ nearest neighbors we will know the nearest neigbors for all $k=1...K$ and can thus get the predictions for multiple k's in one step: End of explanation """ for k in range(13): print("Accuracy for k=%d is %2.2f%%" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest))) """ Explanation: We have the prediction for each of the 13 k's now and can quickly compute the accuracies: End of explanation """ %%time knn.put('k', 3) knn.put('knn_solver', "KNN_BRUTE") pred = knn.apply(feats_test) # FIXME: causes SEGFAULT # %%time # knn.put('k', 3) # knn.put('knn_solver', "KNN_COVER_TREE") # pred = knn.apply(feats_test) """ Explanation: So k=3 seems to have been the optimal choice. Accellerating KNN Obviously applying KNN is very costly: for each prediction you have to compare the object against all training objects. While the implementation in SHOGUN will use all available CPU cores to parallelize this computation it might still be slow when you have big data sets. In SHOGUN, you can use Cover Trees to speed up the nearest neighbor searching process in KNN. Just call set_use_covertree on the KNN machine to enable or disable this feature. We also show the prediction time comparison with and without Cover Tree in this tutorial. So let's just have a comparison utilizing the data above: End of explanation """ def evaluate(labels, feats, use_cover_tree=False): import time split = sg.create_splitting_strategy("CrossValidationSplitting", labels=labels, num_subsets=Nsplit) split.build_subsets() accuracy = np.zeros((Nsplit, len(all_ks))) acc_train = np.zeros(accuracy.shape) time_test = np.zeros(accuracy.shape) for i in range(Nsplit): idx_train = split.generate_subset_inverse(i) idx_test = split.generate_subset_indices(i) for j, k in enumerate(all_ks): #print "Round %d for k=%d..." 
% (i, k) feats.add_subset(idx_train) labels.add_subset(idx_train) dist = sg.create_distance('EuclideanDistance') dist.init(feats, feats) knn = sg.create_machine("KNN", k=k, distance=dist, labels=labels) #knn.set_store_model_features(True) #FIXME: causes SEGFAULT if use_cover_tree: continue # knn.put('knn_solver', "KNN_COVER_TREE") else: knn.put('knn_solver', "KNN_BRUTE") knn.train() evaluator = sg.create_evaluation("MulticlassAccuracy") pred = knn.apply() acc_train[i, j] = evaluator.evaluate(pred, labels) feats.remove_subset() labels.remove_subset() feats.add_subset(idx_test) labels.add_subset(idx_test) t_start = time.clock() pred = knn.apply_multiclass(feats) time_test[i, j] = (time.clock() - t_start) / labels.get_num_labels() accuracy[i, j] = evaluator.evaluate(pred, labels) feats.remove_subset() labels.remove_subset() return {'eout': accuracy, 'ein': acc_train, 'time': time_test} """ Explanation: So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN: End of explanation """ labels = sg.create_labels(Ytest) feats = sg.create_features(Xtest) print("Evaluating KNN...") wo_ct = evaluate(labels, feats, use_cover_tree=False) # wi_ct = evaluate(labels, feats, use_cover_tree=True) print("Done!") """ Explanation: Evaluate KNN with and without Cover Tree. This takes a few seconds: End of explanation """ fig = plt.figure(figsize=(8,5)) plt.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*') # plt.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*') plt.legend(["Test Accuracy", "Training Accuracy"]) plt.xlabel('K') plt.ylabel('Accuracy') plt.title('KNN Accuracy') plt.tight_layout() fig = plt.figure(figsize=(8,5)) plt.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*') # plt.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d') plt.xlabel("K") plt.ylabel("time") plt.title('KNN time') plt.legend(["Plain KNN", "CoverTree KNN"], loc='center right') plt.tight_layout() """ Explanation: Generate plots with the data collected in the evaluation: End of explanation """ width=80 C=1 gk=sg.create_kernel("GaussianKernel", width=width) svm=sg.create_machine("GMNPSVM", C=C, kernel=gk, labels=labels) _=svm.train(feats) """ Explanation: Although simple and elegant, KNN is generally very resource costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of objects (not just numerical data) - as long as one can design a suitable distance function. In pratice k-NN used with bagging can create improved and more robust results. Comparison to Multiclass Support Vector Machines In contrast to KNN - multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from one another. They compare examples utilizing similarity measures (so called Kernels) instead of distances like KNN does. When applied, they are in Big-O notation computationally as expensive as KNN but involve another (costly) training step. 
They do not scale very well to problems with a huge number of classes but usually give favorable results when the number of classes is small. So for reference let us compare how a standard multiclass SVM performs relative to KNN on the USPS data set from above. Let us first train a multiclass SVM using a Gaussian kernel (kind of the SVM equivalent to the Euclidean distance). End of explanation """ out=svm.apply(feats_test) evaluator = sg.create_evaluation("MulticlassAccuracy") accuracy = evaluator.evaluate(out, labels_test) print("Accuracy = %2.2f%%" % (100*accuracy)) """ Explanation: Let's apply the SVM to the same test data set to compare results: End of explanation """ Xrem=Xall[:,subset[6000:]] Yrem=Yall[subset[6000:]] feats_rem=sg.create_features(Xrem) labels_rem=sg.create_labels(Yrem) out=svm.apply(feats_rem) evaluator = sg.create_evaluation("MulticlassAccuracy") accuracy = evaluator.evaluate(out, labels_rem) print("Accuracy = %2.2f%%" % (100*accuracy)) idx=np.where(out.get("labels") != Yrem)[0] Xbad=Xrem[:,idx] Ybad=Yrem[idx] _=plt.figure(figsize=(17,6)) plt.gray() plot_example(Xbad, Ybad) """ Explanation: Since the SVM performs notably better on this task, let's apply it to all the data we did not use in training. End of explanation """
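A per-class breakdown often says more than a single accuracy number; the sketch below (assuming out and Yrem from the cell above, and 10 digit classes labelled 0..9) builds a plain NumPy confusion matrix for the SVM predictions.
pred_labels = out.get("labels").astype(int)
true_labels = Yrem.astype(int)
conf_mat = np.zeros((10, 10), dtype=int)   # 10 digit classes
for t, p in zip(true_labels, pred_labels):
    conf_mat[t, p] += 1
print(conf_mat)   # rows: true digit, columns: predicted digit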
xtr33me/deep-learning
tensorboard/Anna_KaRNNa_Summaries.ipynb
mit
import time from collections import namedtuple import numpy as np import tensorflow as tf """ Explanation: Anna KaRNNa In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> End of explanation """ with open('anna.txt', 'r') as f: text=f.read() vocab = set(text) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32) text[:100] chars[:100] """ Explanation: First we'll load the text file and convert it into integers for our network to use. End of explanation """ def split_data(chars, batch_size, num_steps, split_frac=0.9): """ Split character data into training and validation sets, inputs and targets for each set. Arguments --------- chars: character array batch_size: Size of examples in each of batch num_steps: Number of sequence steps to keep in the input and pass to the network split_frac: Fraction of batches to keep in the training set Returns train_x, train_y, val_x, val_y """ slice_size = batch_size * num_steps n_batches = int(len(chars) / slice_size) # Drop the last few characters to make only full batches x = chars[: n_batches*slice_size] y = chars[1: n_batches*slice_size + 1] # Split the data into batch_size slices, then stack them into a 2D matrix x = np.stack(np.split(x, batch_size)) y = np.stack(np.split(y, batch_size)) # Now x and y are arrays with dimensions batch_size x n_batches*num_steps # Split into training and validation sets, keep the virst split_frac batches for training split_idx = int(n_batches*split_frac) train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps] val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:] return train_x, train_y, val_x, val_y train_x, train_y, val_x, val_y = split_data(chars, 10, 200) train_x.shape train_x[:,:10] """ Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text. Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches. The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set. 
End of explanation """ def get_batch(arrs, num_steps): batch_size, slice_size = arrs[0].shape n_batches = int(slice_size/num_steps) for b in range(n_batches): yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs] def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): if sampling == True: batch_size, num_steps = 1, 1 tf.reset_default_graph() # Declare placeholders we'll feed into the graph with tf.name_scope('inputs'): inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs') x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot') with tf.name_scope('targets'): targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets') y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot') y_reshaped = tf.reshape(y_one_hot, [-1, num_classes]) keep_prob = tf.placeholder(tf.float32, name='keep_prob') # Build the RNN layers with tf.name_scope("RNN_cells"): lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) with tf.name_scope("RNN_init_state"): initial_state = cell.zero_state(batch_size, tf.float32) # Run the data through the RNN layers with tf.name_scope("RNN_forward"): outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state) final_state = state # Reshape output so it's a bunch of rows, one row for each cell output with tf.name_scope('sequence_reshape'): seq_output = tf.concat(outputs, axis=1,name='seq_output') output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output') # Now connect the RNN outputs to a softmax layer and calculate the cost with tf.name_scope('logits'): softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1), name='softmax_w') softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b') logits = tf.matmul(output, softmax_w) + softmax_b tf.summary.histogram('softmax_w', softmax_w) tf.summary.histogram('softmax_b', softmax_b) with tf.name_scope('predictions'): preds = tf.nn.softmax(logits, name='predictions') tf.summary.histogram('predictions', preds) with tf.name_scope('cost'): loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss') cost = tf.reduce_mean(loss, name='cost') tf.summary.scalar('cost', cost) # Optimizer for training, using gradient clipping to control exploding gradients with tf.name_scope('train'): tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) merged = tf.summary.merge_all() # Export the nodes export_nodes = ['inputs', 'targets', 'initial_state', 'final_state', 'keep_prob', 'cost', 'preds', 'optimizer', 'merged'] Graph = namedtuple('Graph', export_nodes) local_dict = locals() graph = Graph(*[local_dict[each] for each in export_nodes]) return graph """ Explanation: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch. 
End of explanation """ batch_size = 100 num_steps = 100 lstm_size = 512 num_layers = 2 learning_rate = 0.001 """ Explanation: Hyperparameters Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability. End of explanation """ !mkdir -p checkpoints/anna epochs = 10 save_every_n = 100 train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps) model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph) test_writer = tf.summary.FileWriter('./logs/2/test') # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/anna20.ckpt') n_batches = int(train_x.shape[1]/num_steps) iterations = n_batches * epochs for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1): iteration = e*n_batches + b start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: 0.5, model.initial_state: new_state} summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost, model.final_state, model.optimizer], feed_dict=feed) loss += batch_loss end = time.time() print('Epoch {}/{} '.format(e+1, epochs), 'Iteration {}/{}'.format(iteration, iterations), 'Training loss: {:.4f}'.format(loss/b), '{:.4f} sec/batch'.format((end-start))) train_writer.add_summary(summary, iteration) if (iteration%save_every_n == 0) or (iteration == iterations): # Check performance, notice dropout has been set to 1 val_loss = [] new_state = sess.run(model.initial_state) for x, y in get_batch([val_x, val_y], num_steps): feed = {model.inputs: x, model.targets: y, model.keep_prob: 1., model.initial_state: new_state} summary, batch_loss, new_state = sess.run([model.merged, model.cost, model.final_state], feed_dict=feed) val_loss.append(batch_loss) test_writer.add_summary(summary, iteration) print('Validation loss:', np.mean(val_loss), 'Saving checkpoint!') #saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss))) tf.train.get_checkpoint_state('checkpoints/anna') """ Explanation: Training Time for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint. 
End of explanation """ def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): prime = "Far" samples = [c for c in prime] model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt" samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) """ Explanation: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. End of explanation """
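The top-N filtering is easy to check on a toy distribution, independent of any trained model; with the hypothetical probabilities below, only the three most likely indices (1, 3 and 4) can ever be drawn.
toy_preds = np.array([[0.05, 0.4, 0.1, 0.3, 0.15]])
# pass a copy each time because pick_top_n zeroes out the discarded entries in place
draws = [pick_top_n(toy_preds.copy(), vocab_size=5, top_n=3) for _ in range(20)]
print(sorted(set(draws)))   # subset of [1, 3, 4]; indices 0 and 2 are filtered out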
lesonkorenac/dataquest-projects
1. Python Introduction/Exploring Gun Deaths in the US/Exploring Gun Deaths in the US.ipynb
mit
census = list(csv.reader(open("census.csv", 'r'))) for index, column in enumerate(census[0]): print("{} - {}: {}".format(index, column, census[1][index])) def get_race_count(census, column_indexes): return sum([int(census[1][index]) for index in column_indexes]) race_percentage = { "Black": get_race_count(census, [12]), "Asian/Pacific Islander": get_race_count(census, [14, 15]), "White": get_race_count(census, [10]), "Hispanic": get_race_count(census, [11]), "Native American/Native Alaskan": get_race_count(census, [13]) } race_rate_by_100k = {} for key in race_counts: race_rate_by_100k[key] = float(race_counts[key]) / race_percentage[key] * 100000 plot_counts(race_rate_by_100k, "race rate by 100 000") homicide_by_race_counts = extract_counts([row for row in data if row[3] == 'Homicide'], lambda row: row[7]) plot_counts(homicide_by_race_counts, "race (homicides)") set([row[3] for row in data]) homicide_by_race_rate_by_100k = {} for key in homicide_by_race_counts: homicide_by_race_rate_by_100k[key] = float(homicide_by_race_counts[key]) / race_percentage[key] * 100000 plot_counts(homicide_by_race_rate_by_100k, "race rate by 100 000 (homicide)") """ Explanation: Observations Date of death There is no significant difference between the years 2012, 2013 and 2014. There is a seasonal pattern across months within each year: the lowest death count is in February, and the highest death counts occur in summer. Gender Many more men than women have died by gun. Race Most gun deaths are of white people; however, this raw count does not yet account for the population distribution by race. End of explanation """ for death_type in set([row[3] for row in data]): race_counts = extract_counts([row for row in data if row[3] == death_type], lambda row: row[7]) plot_counts(race_counts, "race (%s)" % death_type) race_rate_by_100k = {} for key in race_counts: race_rate_by_100k[key] = float(race_counts[key]) / race_percentage[key] * 100000 plot_counts(race_rate_by_100k, "race rate by 100 000 (%s)" % death_type) """ Explanation: Death by race We have learned that black people are much more likely to be killed by gun than people of other races. Given the homicide rate by race, it could also be interesting to evaluate the other death types by race. Other steps There are many things to explore in this data set. It is a (sadly) interesting data set given recent developments in the United States. End of explanation """
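The per-100,000 rate arithmetic used above is worth seeing on its own; with hypothetical counts (not values taken from the census file), the calculation is:
deaths = 1000          # hypothetical death count for one group
population = 2000000   # hypothetical population of that group
rate_per_100k = float(deaths) / population * 100000
print(rate_per_100k)   # 50.0 deaths per 100,000 people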
jornvdent/WUR-Geo-Scripting-Course
Lesson 14/Lesson 14 - Assignment.ipynb
gpl-3.0
from twython import TwythonStreamer import string, json, pprint import urllib from datetime import datetime from datetime import date from time import * import string, os, sys, subprocess, time import psycopg2 import re from osgeo import ogr """ Explanation: Import modules End of explanation """ # get access to the twitter API APP_KEY = 'fQCYxyQmFDUE6aty0JEhDoZj7' APP_SECRET = 'ZwVIgnWMpuEEVd1Tlg6TWMuyRwd3k90W3oWyLR2Ek1tnjnRvEG' OAUTH_TOKEN = '824520596293820419-f4uGwMV6O7PSWUvbPQYGpsz5fMSVMct' OAUTH_TOKEN_SECRET = '1wq51Im5HQDoSM0Fb5OzAttoP3otToJtRFeltg68B8krh' """ Explanation: Enter your details for twitter API End of explanation """ dbname = "demo" user = "user" password = "user" table = "tweets" """ Explanation: Set up details for PostGIS DB, run in terminal: We are going to use a PostGis database, which requires you to have an empty database. Enter these steps into the terminal to set up you databse. In this example we use "demo" as the name of our database. Feel free to give you database another name, but replace "demo" with the name you have chosen. Connect to postgres psql -d postgres" Create database postgres=# CREATE DATABASE demo; Switch to new DB postgres=# \c demo Add PostGIS extension to new DB demo=# create extension postgis; Add Table demo=# CREATE TABLE tweets (id serial primary key, tweet_id BIGINT, text varchar(140), date DATE, time TIME, geom geometry(POINT,4326) ); Enter your database connection details: End of explanation """ def insert_into_DB(tweet_id, tweet_text, tweet_date, tweet_time, tweet_lat, tweet_lon): try: conn = psycopg2.connect(dbname = dbname, user = user, password = password) cur = conn.cursor() # enter stuff in database sql = "INSERT INTO " + str(table) + " (tweet_id, text, date, time, geom) \ VALUES (" + str(tweet_id) + ", '" + str(tweet_text) + "', '" + str(tweet_date) + "', '" + str(tweet_time) + "', \ ST_GeomFromText('POINT(" + str(tweet_lon) + " " + str(tweet_lat) + ")', 4326))" cur.execute(sql) conn.commit() conn.close() except psycopg2.DatabaseError, e: print 'Error %s' % e """ Explanation: Function which connects to PostGis database and inserts data End of explanation """ def remove_link(text): pattern = r'(https://)' matcher = re.compile(pattern) match = matcher.search(text) if match != None: text = text[:match.start(1)] return text """ Explanation: Function to remove the hyperlinks from the text End of explanation """ #Class to process JSON data comming from the twitter stream API. 
Extract relevant fields class MyStreamer(TwythonStreamer): def on_success(self, data): tweet_lat = 0.0 tweet_lon = 0.0 tweet_name = "" retweet_count = 0 if 'id' in data: tweet_id = data['id'] if 'text' in data: tweet_text = data['text'].encode('utf-8').replace("'","''").replace(';','') tweet_text = remove_link(tweet_text) if 'coordinates' in data: geo = data['coordinates'] if geo is not None: latlon = geo['coordinates'] tweet_lon = latlon[0] tweet_lat = latlon[1] if 'created_at' in data: dt = data['created_at'] tweet_datetime = datetime.strptime(dt, '%a %b %d %H:%M:%S +0000 %Y') tweet_date = str(tweet_datetime)[:11] tweet_time = str(tweet_datetime)[11:] if 'user' in data: users = data['user'] tweet_name = users['screen_name'] if 'retweet_count' in data: retweet_count = data['retweet_count'] if tweet_lat != 0: # call function to write to DB insert_into_DB(tweet_id, tweet_text, tweet_date, tweet_time, tweet_lat, tweet_lon) def on_error(self, status_code, data): print "OOPS FOUTJE: " +str(status_code) #self.disconnect """ Explanation: Process JSON twitter streamd data End of explanation """ def main(): try: stream = MyStreamer(APP_KEY, APP_SECRET,OAUTH_TOKEN, OAUTH_TOKEN_SECRET) print 'Connecting to twitter: will take a minute' except ValueError: print 'OOPS! that hurts, something went wrong while making connection with Twitter: '+str(ValueError) # Filter based on bounding box see twitter api documentation for more info try: stream.statuses.filter(locations='-0.351468, 51.38494, 0.148271, 51.672343') except ValueError: print 'OOPS! that hurts, something went wrong while getting the stream from Twitter: '+str(ValueError) if __name__ == '__main__': main() """ Explanation: Main procedure End of explanation """
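As a hedged alternative to building the INSERT by string concatenation, psycopg2 query parameters avoid the manual quote escaping done above; this sketch assumes the same dbname/user/password settings and the tweets table created earlier (the function name is ours, and the table name is written out literally because identifiers cannot be passed as parameters).
def insert_into_db_parameterized(tweet_id, tweet_text, tweet_date, tweet_time, tweet_lat, tweet_lon):
    conn = psycopg2.connect(dbname=dbname, user=user, password=password)
    cur = conn.cursor()
    # geometry built with PostGIS helpers; note the (lon, lat) order for POINT(x y)
    sql = ("INSERT INTO tweets (tweet_id, text, date, time, geom) "
           "VALUES (%s, %s, %s, %s, ST_SetSRID(ST_MakePoint(%s, %s), 4326))")
    cur.execute(sql, (tweet_id, tweet_text, tweet_date, tweet_time, tweet_lon, tweet_lat))
    conn.commit()
    conn.close()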
tensorflow/docs-l10n
site/en-snapshot/probability/examples/Probabilistic_Layers_VAE.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 The TensorFlow Probability Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """ #@title Import { display-mode: "form" } import numpy as np import tensorflow.compat.v2 as tf tf.enable_v2_behavior() import tensorflow_datasets as tfds import tensorflow_probability as tfp tfk = tf.keras tfkl = tf.keras.layers tfpl = tfp.layers tfd = tfp.distributions """ Explanation: TFP Probabilistic Layers: Variational Auto Encoder <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/Probabilistic_Layers_VAE"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_VAE.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_VAE.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_VAE.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> In this example we show how to fit a Variational Autoencoder using TFP's "probabilistic layers." Dependencies & Prerequisites End of explanation """ if tf.test.gpu_device_name() != '/device:GPU:0': print('WARNING: GPU device not found.') else: print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name())) """ Explanation: Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU". The following snippet will verify that we have access to a GPU. End of explanation """ datasets, datasets_info = tfds.load(name='mnist', with_info=True, as_supervised=False) def _preprocess(sample): image = tf.cast(sample['image'], tf.float32) / 255. # Scale to unit interval. image = image < tf.random.uniform(tf.shape(image)) # Randomly binarize. return image, image train_dataset = (datasets['train'] .map(_preprocess) .batch(256) .prefetch(tf.data.AUTOTUNE) .shuffle(int(10e3))) eval_dataset = (datasets['test'] .map(_preprocess) .batch(256) .prefetch(tf.data.AUTOTUNE)) """ Explanation: Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) 
Load Dataset End of explanation """ input_shape = datasets_info.features['image'].shape encoded_size = 16 base_depth = 32 prior = tfd.Independent(tfd.Normal(loc=tf.zeros(encoded_size), scale=1), reinterpreted_batch_ndims=1) encoder = tfk.Sequential([ tfkl.InputLayer(input_shape=input_shape), tfkl.Lambda(lambda x: tf.cast(x, tf.float32) - 0.5), tfkl.Conv2D(base_depth, 5, strides=1, padding='same', activation=tf.nn.leaky_relu), tfkl.Conv2D(base_depth, 5, strides=2, padding='same', activation=tf.nn.leaky_relu), tfkl.Conv2D(2 * base_depth, 5, strides=1, padding='same', activation=tf.nn.leaky_relu), tfkl.Conv2D(2 * base_depth, 5, strides=2, padding='same', activation=tf.nn.leaky_relu), tfkl.Conv2D(4 * encoded_size, 7, strides=1, padding='valid', activation=tf.nn.leaky_relu), tfkl.Flatten(), tfkl.Dense(tfpl.MultivariateNormalTriL.params_size(encoded_size), activation=None), tfpl.MultivariateNormalTriL( encoded_size, activity_regularizer=tfpl.KLDivergenceRegularizer(prior)), ]) decoder = tfk.Sequential([ tfkl.InputLayer(input_shape=[encoded_size]), tfkl.Reshape([1, 1, encoded_size]), tfkl.Conv2DTranspose(2 * base_depth, 7, strides=1, padding='valid', activation=tf.nn.leaky_relu), tfkl.Conv2DTranspose(2 * base_depth, 5, strides=1, padding='same', activation=tf.nn.leaky_relu), tfkl.Conv2DTranspose(2 * base_depth, 5, strides=2, padding='same', activation=tf.nn.leaky_relu), tfkl.Conv2DTranspose(base_depth, 5, strides=1, padding='same', activation=tf.nn.leaky_relu), tfkl.Conv2DTranspose(base_depth, 5, strides=2, padding='same', activation=tf.nn.leaky_relu), tfkl.Conv2DTranspose(base_depth, 5, strides=1, padding='same', activation=tf.nn.leaky_relu), tfkl.Conv2D(filters=1, kernel_size=5, strides=1, padding='same', activation=None), tfkl.Flatten(), tfpl.IndependentBernoulli(input_shape, tfd.Bernoulli.logits), ]) vae = tfk.Model(inputs=encoder.inputs, outputs=decoder(encoder.outputs[0])) """ Explanation: Note that preprocess() above returns image, image rather than just image because Keras is set up for discriminative models with an (example, label) input format, i.e. $p\theta(y|x)$. Since the goal of the VAE is to recover the input x from x itself (i.e. $p_\theta(x|x)$), the data pair is (example, example). VAE Code Golf Specify model. End of explanation """ negloglik = lambda x, rv_x: -rv_x.log_prob(x) vae.compile(optimizer=tf.optimizers.Adam(learning_rate=1e-3), loss=negloglik) _ = vae.fit(train_dataset, epochs=15, validation_data=eval_dataset) """ Explanation: Do inference. End of explanation """ # We'll just examine ten random digits. x = next(iter(eval_dataset))[0][:10] xhat = vae(x) assert isinstance(xhat, tfd.Distribution) #@title Image Plot Util import matplotlib.pyplot as plt def display_imgs(x, y=None): if not isinstance(x, (np.ndarray, np.generic)): x = np.array(x) plt.ioff() n = x.shape[0] fig, axs = plt.subplots(1, n, figsize=(n, 1)) if y is not None: fig.suptitle(np.argmax(y, axis=1)) for i in range(n): axs.flat[i].imshow(x[i].squeeze(), interpolation='none', cmap='gray') axs.flat[i].axis('off') plt.show() plt.close() plt.ion() print('Originals:') display_imgs(x) print('Decoded Random Samples:') display_imgs(xhat.sample()) print('Decoded Modes:') display_imgs(xhat.mode()) print('Decoded Means:') display_imgs(xhat.mean()) # Now, let's generate ten never-before-seen digits. 
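To see what the negloglik loss above computes, here is a tiny standalone check on a toy Bernoulli distribution (no training involved; toy_rv and toy_x are made up): the loss is simply the negative log-probability of the data under the output distribution.
toy_rv = tfd.Independent(tfd.Bernoulli(logits=tf.zeros([2, 3])),
                         reinterpreted_batch_ndims=1)
toy_x = tf.constant([[1., 0., 1.], [0., 0., 1.]])
print(negloglik(toy_x, toy_rv).numpy())   # about [2.079, 2.079], i.e. 3*log(2) per example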
z = prior.sample(10) xtilde = decoder(z) assert isinstance(xtilde, tfd.Distribution) print('Randomly Generated Samples:') display_imgs(xtilde.sample()) print('Randomly Generated Modes:') display_imgs(xtilde.mode()) print('Randomly Generated Means:') display_imgs(xtilde.mean()) """ Explanation: Look Ma, No ~~Hands~~Tensors! End of explanation """
sorig/shogun
doc/ipython-notebooks/multiclass/naive_bayes.ipynb
bsd-3-clause
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import numpy as np
import pylab as pl

np.random.seed(0)
n_train = 300

models = [{'mu': [8, 0], 'sigma': np.array([[np.cos(-np.pi/4),-np.sin(-np.pi/4)],
                                            [np.sin(-np.pi/4), np.cos(-np.pi/4)]]).dot(np.diag([1,4]))},
          {'mu': [0, 0], 'sigma': np.array([[np.cos(-np.pi/4),-np.sin(-np.pi/4)],
                                            [np.sin(-np.pi/4), np.cos(-np.pi/4)]]).dot(np.diag([1,4]))},
          {'mu': [-8,0], 'sigma': np.array([[np.cos(-np.pi/4),-np.sin(-np.pi/4)],
                                            [np.sin(-np.pi/4), np.cos(-np.pi/4)]]).dot(np.diag([1,4]))}]
""" Explanation: Naive Bayes by Chiyuan Zhang This notebook illustrates <a href="http://en.wikipedia.org/wiki/Multiclass_classification">multiclass</a> learning using <a href="http://en.wikipedia.org/wiki/Naive_Bayes_classifier">Naive Bayes</a> in Shogun. A semi-formal introduction to <a href="http://en.wikipedia.org/wiki/Logistic_regression">Logistic Regression</a> is provided at the end. Naive Bayes computes the posterior probability of each class $k$ via Bayes' rule: $$ P\left( Y=k | X = x \right) = \frac{P(X=x|Y=k)P(Y=k)}{P(X=x)} $$ The prediction is then made by $$ y = \operatorname*{argmax}_{k\in\{1,\ldots,K\}}\; P(Y=k|X=x) $$ Since $P(X=x)$ is a constant factor for all $P(Y=k|X=x)$, $k=1,\ldots,K$, there is no need to compute it. In SHOGUN, CGaussianNaiveBayes implements the Naive Bayes algorithm. It is prefixed with "Gaussian" because the probability model for $P(X=x|Y=k)$ for each $k$ is taken to be a multi-variate Gaussian distribution. Furthermore, each dimension of the feature vector $X$ is assumed to be independent. The Naive independence assumption enables us to learn the model by estimating the parameters for each feature dimension independently, thus the whole learning algorithm runs very quickly. And this is also the reason for its name. However, this assumption can be very restrictive. In this demo, we show a simple 2D example. There are 3 linearly separable classes. The scattered points are training samples with colors indicating their labels. The filled areas indicate the hypothesis learned by CGaussianNaiveBayes. The training samples are actually generated from three Gaussian distributions. But since the covariance matrices of those Gaussian distributions are not diagonal (i.e. there are rotations), the GNB algorithm cannot handle them properly.
We first init the models for generating samples for this demo: End of explanation """ def gen_samples(n_samples): X_all = np.zeros((2, 0)) Y_all = np.zeros(0) for i, model in enumerate(models): Y = np.zeros(n_samples) + i+1 X = np.array(model['sigma']).dot(np.random.randn(2, n_samples)) + np.array(model['mu']).reshape((2,1)) X_all = np.hstack((X_all, X)) Y_all = np.hstack((Y_all, Y)) return (X_all, Y_all) """ Explanation: A helper function is defined to generate samples: End of explanation """ from shogun import GaussianNaiveBayes from shogun import features from shogun import MulticlassLabels X_train, Y_train = gen_samples(n_train) machine = GaussianNaiveBayes() machine.put('features', features(X_train)) machine.put('labels', MulticlassLabels(Y_train)) machine.train() """ Explanation: Then we train the GNB model with SHOGUN: End of explanation """ delta = 0.1 x = np.arange(-20, 20, delta) y = np.arange(-20, 20, delta) X,Y = np.meshgrid(x,y) Z = machine.apply_multiclass(features(np.vstack((X.flatten(), Y.flatten())))).get_labels() """ Explanation: Run classification over the whole area to generate color regions: End of explanation """ pl.figure(figsize=(8,5)) pl.contourf(X, Y, Z.reshape(X.shape), np.arange(0, len(models)+1)) pl.scatter(X_train[0,:],X_train[1,:], c=Y_train) pl.axis('off') pl.tight_layout() """ Explanation: Plot figure: End of explanation """
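As a quick sanity check (a sketch that is not part of the original notebook; it only reuses the gen_samples helper, the features factory and the trained machine from above, and the held-out sample size is an arbitrary choice), we can score the trained classifier on freshly generated data:

# Generate a held-out sample from the same three Gaussian models and measure accuracy with NumPy.
X_test, Y_test = gen_samples(100)  # 100 points per class (arbitrary)
Y_pred = machine.apply_multiclass(features(X_test)).get_labels()
print("held-out accuracy: %.3f" % np.mean(Y_pred == Y_test))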
yhilpisch/dx
03_dx_valuation_single_risk.ipynb
agpl-3.0
from dx import * from pylab import plt plt.style.use('seaborn') """ Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4"> Single-Risk Derivatives Valuation This part introduces into the modeling and valuation of derivatives instruments (contingent claims) based on a single risk factor (e.g. a stock price, stock index level or interest rate). It also shows how to model and value portfolios composed of such instruments. End of explanation """ r = constant_short_rate('r', 0.06) me = market_environment('me', dt.datetime(2015, 1, 1)) me.add_constant('initial_value', 36.) me.add_constant('volatility', 0.2) me.add_constant('final_date', dt.datetime(2015, 12, 31)) me.add_constant('currency', 'EUR') me.add_constant('frequency', 'W') me.add_constant('paths', 25000) me.add_curve('discount_curve', r) gbm = geometric_brownian_motion('gbm', me) """ Explanation: The following single risk factor valuation classes are available: valuation_mcs_european_single for derivatives with European exercise valuation_mcs_american_single for derivatives with American/Bermudan exercise Modeling the Risk Factor Before moving on to the valuation classes, we need to model an instantiate an underlying risk factor, in this case a geometric_brownian_motion object. Background information is provided in the respective part of the documentation about model classes. End of explanation """ me.add_constant('maturity', dt.datetime(2015, 12, 31)) me.add_constant('strike', 40.) """ Explanation: valuation_mcs_european_single The first instrument we value is a European call option written on the single relevant risk factor as embodied by the gbm model object. To this end, we add a maturity date to the market environment and a strike price. End of explanation """ call_eur = valuation_mcs_european_single( name='call_eur', underlying=gbm, mar_env=me, payoff_func='np.maximum(maturity_value - strike, 0)') """ Explanation: To instantiate a the valuation_mcs_european_single class, the following information/data is to be provided: name as a string object instance of a model class market environment payoff of the instrument a string object and containing "regular" Python/NumPy code End of explanation """ payoff = 'np.maximum(np.minimum(maturity_value) * 2 - 50, 0)' """ Explanation: In this case, the payoff is that of a regular, plain vanilla European call option. If $T$ is the maturity date, $S_T$ the value of the relevant risk factor at that date and $K$ the strike price, the payoff $h_T$ at maturity of such an option is given by $$ h_T = \max[S_T - K, 0] $$ maturity_value represents the value vector of the risk factor at maturity. Any other "sensible" payoff definition is possible. For instance, the following works as well: End of explanation """ call_eur.present_value() """ Explanation: Other standardized payoff elemenets include mean_value, max_value and min_value representing maturity value vectors with the pathwise means, maxima and minima. Using these payoff elements allows the easy definition of options with Asian features. Having instantiated the valuation class, the present_value method returns the present value Monte Carlo estimator for the call option. End of explanation """ call_eur.delta() call_eur.vega() """ Explanation: Similarly, the delta and vega methods return the delta and the vega of the option, estimated numerically by a forward difference scheme and Monte Carlo simulation. 
End of explanation """
%%time
k_list = np.arange(26., 46.1, 2)
pv = []; de = []; ve = []; th = []; rh = []; ga = []
for k in k_list:
    call_eur.update(strike=k)
    pv.append(call_eur.present_value())
    de.append(call_eur.delta(0.5))
    ve.append(call_eur.vega(0.2))
    th.append(call_eur.theta())
    rh.append(call_eur.rho())
    ga.append(call_eur.gamma())

%matplotlib inline
""" Explanation: This approach allows us to work with such a valuation object much like an analytical valuation formula such as the one of Black-Scholes-Merton (1973). For example, you can estimate and plot present values, deltas, gammas, vegas, thetas and rhos for a range of different initial values of the risk factor. End of explanation """
plot_option_stats_full(k_list, pv, de, ga, ve, th, rh)
""" Explanation: There is a little plot helper function available to plot these statistics conveniently. End of explanation """
me.add_constant('initial_value', 36.)  # reset initial_value
put_ame = valuation_mcs_american_single(
    name='put_eur',
    underlying=gbm,
    mar_env=me,
    payoff_func='np.maximum(strike - instrument_values, 0)')
""" Explanation: valuation_mcs_american_single The modeling and valuation of derivatives with American/Bermudan exercise is almost completely the same as in the simpler case of European exercise. End of explanation """
put_ame.present_value()
""" Explanation: The only difference to consider here is that for American options, where exercise can take place at any time before maturity, the inner value of the option (payoff of immediate exercise) is relevant over the whole set of dates. Therefore, maturity_value needs to be replaced by instrument_values in the definition of the payoff function. End of explanation """
put_ame.delta()
put_ame.vega()

%%time
k_list = np.arange(26., 46.1, 2.)
pv = []; de = []; ve = []
for k in k_list:
    put_ame.update(strike=k)
    pv.append(put_ame.present_value())
    de.append(put_ame.delta(.5))
    ve.append(put_ame.vega(0.2))

plot_option_stats(k_list, pv, de, ve)
""" Explanation: Since DX Analytics relies on Monte Carlo simulation and other numerical methods, the calculation of the delta and vega of such an option is identical to the European exercise case. End of explanation """
me.add_constant('model', 'gbm')
""" Explanation: Portfolio Valuation In general, market players (asset managers, investment banks, hedge funds, insurance companies, etc.) have to value not only single derivatives instruments but rather portfolios composed of several derivatives instruments. A consistent derivatives portfolio valuation is particularly important when there are multiple derivatives written on the same risk factor and/or correlations between different risk factors. These are the classes available for a consistent portfolio valuation: derivatives_position to model a portfolio position derivatives_portfolio to model a derivatives portfolio derivatives_position We work with the market_environment object from before and add information about the risk factor model we are using. End of explanation """
put = derivatives_position(
    name='put',  # name of position
    quantity=1,  # number of instruments
    underlyings=['gbm'],  # relevant risk factors
    mar_env=me,  # market environment
    otype='American single',  # the option type
    payoff_func='np.maximum(40. - instrument_values, 0)')  # the payoff function
""" Explanation: A derivatives position consists of "data only" and not instantiated model or valuation objects. The necessary model and valuation objects are instantiated during the portfolio valuation.
End of explanation """
put.get_info()
""" Explanation: The method get_info prints an overview of all the relevant information stored for the respective derivatives_position object. End of explanation """
me_jump = market_environment('me_jump', dt.datetime(2015, 1, 1))
me_jump.add_environment(me)
me_jump.add_constant('lambda', 0.8)
me_jump.add_constant('mu', -0.8)
me_jump.add_constant('delta', 0.1)
me_jump.add_constant('model', 'jd')
""" Explanation: derivatives_portfolio The derivatives_portfolio class implements the core portfolio valuation tasks. This sub-section illustrates two cases, one with uncorrelated underlyings and another one with correlated underlyings. Uncorrelated Underlyings The first example is based on a portfolio with two single-risk factor instruments on two different risk factors which are not correlated. In addition to the gbm object, we define a jump_diffusion object. End of explanation """
call_jump = derivatives_position(
    name='call_jump',
    quantity=3,
    underlyings=['jd'],
    mar_env=me_jump,
    otype='European single',
    payoff_func='np.maximum(maturity_value - 36., 0)')
""" Explanation: Based on this new risk factor model object, a European call option is defined. End of explanation """
risk_factors = {'gbm': me, 'jd' : me_jump}
positions = {'put' : put, 'call_jump' : call_jump}
""" Explanation: Our relevant market now takes on the following form (defined as dictionary objects): End of explanation """
val_env = market_environment('general', dt.datetime(2015, 1, 1))
val_env.add_constant('frequency', 'M')
val_env.add_constant('paths', 50000)
val_env.add_constant('starting_date', val_env.pricing_date)
val_env.add_constant('final_date', val_env.pricing_date)
val_env.add_curve('discount_curve', r)
""" Explanation: To instantiate the derivatives_portfolio class, a valuation environment (instance of market_environment class) is needed. End of explanation """
port = derivatives_portfolio(
    name='portfolio',  # name
    positions=positions,  # derivatives positions
    val_env=val_env,  # valuation environment
    risk_factors=risk_factors,  # relevant risk factors
    correlations=False,  # correlation between risk factors
    fixed_seed=False,  # fixed seed for random number generation
    parallel=False)  # parallel valuation of portfolio positions
""" Explanation: For the instantiation, we pass all the elements to the portfolio class. End of explanation """
%%time
stats = port.get_statistics()
stats
""" Explanation: Once instantiated, the method get_statistics provides major portfolio statistics like position values, position deltas and position vegas. End of explanation """
stats[['pos_value', 'pos_delta', 'pos_vega']].sum()
""" Explanation: The method returns a standard pandas DataFrame object with which you can work as you are used to. End of explanation """
%time port.get_values()
""" Explanation: The method get_values only calculates the present values of the derivatives instruments and positions and is therefore a bit less compute- and time-intensive. End of explanation """
port.get_positions()
""" Explanation: The method get_positions provides detailed information about the single derivatives positions of the derivatives_portfolio object. End of explanation """
correlations = [['gbm', 'jd', 0.9]]
""" Explanation: Correlated Underlyings The second example case is exactly the same but now with a highly positive correlation between the two relevant risk factors. Correlations are to be provided as a list of list objects using the risk factor model names to reference them.
End of explanation """
port = derivatives_portfolio(
    name='portfolio',
    positions=positions,
    val_env=val_env,
    risk_factors=risk_factors,
    correlations=correlations,
    fixed_seed=True,
    parallel=False)

port.get_statistics()
""" Explanation: Apart from now passing this new object, the application and usage remain the same. End of explanation """
port.val_env.lists['cholesky_matrix']
""" Explanation: The Cholesky matrix has been added to the valuation environment (which gets passed to the risk factor model objects). End of explanation """
path_no = 0
paths1 = port.underlying_objects['gbm'].get_instrument_values()[:, path_no]
paths2 = port.underlying_objects['jd'].get_instrument_values()[:, path_no]
""" Explanation: Let us pick two specific simulated paths, one for each risk factor, and let us visualize these. End of explanation """
plt.figure(figsize=(10, 6))
plt.plot(port.time_grid, paths1, 'r', label='gbm')
plt.plot(port.time_grid, paths2, 'b', label='jd')
plt.gcf().autofmt_xdate()
plt.legend(loc=0); plt.grid(True)
# highly correlated underlyings
# -- with a large jump for one risk factor
""" Explanation: The plot illustrates that the two paths are indeed highly positively correlated. However, in this case a large jump occurs for the jump_diffusion object. End of explanation """
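As a quick numerical cross-check of that statement (a sketch that is not part of the original notebook; it only uses NumPy, on which the notebook already relies via np.arange above, plus the paths1 and paths2 arrays just selected), we can estimate the correlation of the simulated log returns along this single path. It should come out strongly positive, close to the 0.9 input correlation, up to sampling noise and the jump component:

# Log returns of the two simulated paths and their empirical correlation.
rets1 = np.diff(np.log(paths1))
rets2 = np.diff(np.log(paths2))
print('estimated log-return correlation: %.3f' % np.corrcoef(rets1, rets2)[0, 1])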
tien-le/kaggle-titanic
Titanic - Machine Learning from Disaster - Applying Machine Learning Techniques.ipynb
gpl-3.0
import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns import random """ Explanation: Titanic: Machine Learning from Disaster - Applying Machine Learning Techniques Homepage: https://github.com/tien-le/kaggle-titanic unbelivable ... to achieve 1.000. How did they do this? Just curious, how did they cheat the score? ANS: maybe, we have the information existing in https://www.encyclopedia-titanica.org/titanic-victims/ Competition Description The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships. One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class. In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy. References https://www.kaggle.com/c/titanic https://triangleinequality.wordpress.com/2013/09/08/basic-feature-engineering-with-the-titanic-data/ https://triangleinequality.wordpress.com/2013/05/19/machine-learning-with-python-first-steps-munging/ https://www.kaggle.com/mrisdal/exploring-survival-on-the-titanic https://github.com/justmarkham/scikit-learn-videos End of explanation """ #Training Corpus trn_corpus_after_preprocessing = pd.read_csv("output/trn_corpus_after_preprocessing.csv") #Testing Corpus tst_corpus_after_preprocessing = pd.read_csv("output/tst_corpus_after_preprocessing.csv") #tst_corpus_after_preprocessing[tst_corpus_after_preprocessing["Fare"].isnull()] trn_corpus_after_preprocessing.info() print("-"*36) tst_corpus_after_preprocessing.info() """ Explanation: Load Corpus After Preprocessing ... 
End of explanation """ trn_corpus_after_preprocessing.columns list_of_non_preditor_variables = ['Survived','PassengerId'] #Method 1 #x_train = trn_corpus_after_preprocessing.ix[:, trn_corpus_after_preprocessing.columns != 'Survived'] #y_train = trn_corpus_after_preprocessing.ix[:,"Survived"] #Method 2 x_train = trn_corpus_after_preprocessing[trn_corpus_after_preprocessing.columns.difference(list_of_non_preditor_variables)].copy() y_train = trn_corpus_after_preprocessing['Survived'].copy() #y_train = trn_corpus_after_preprocessing.iloc[:,-1] #y_train = trn_corpus_after_preprocessing[trn_corpus_after_preprocessing.columns[-1]] #x_train #y_train x_train.columns # check the types of the features and response #print(type(x_train)) #print(type(x_test)) #Method 1 #x_test = tst_corpus_after_preprocessing.ix[:, trn_corpus_after_preprocessing.columns != 'Survived'] #y_test = tst_corpus_after_preprocessing.ix[:,"Survived"] #Method 2 x_test = tst_corpus_after_preprocessing[tst_corpus_after_preprocessing.columns.difference(list_of_non_preditor_variables)].copy() y_test = tst_corpus_after_preprocessing['Survived'].copy() #y_test = tst_corpus_after_preprocessing.iloc[:,-1] #y_test = tst_corpus_after_preprocessing[tst_corpus_after_preprocessing.columns[-1]] #x_test #y_test # display the first 5 rows x_train.head() # display the last 5 rows x_train.tail() # check the shape of the DataFrame (rows, columns) x_train.shape """ Explanation: Basic & Advanced machine learning tools Agenda What is machine learning? What are the two main categories of machine learning? What are some examples of machine learning? How does machine learning "work"? What is machine learning? One definition: "Machine learning is the semi-automated extraction of knowledge from data" Knowledge from data: Starts with a question that might be answerable using data Automated extraction: A computer provides the insight Semi-automated: Requires many smart decisions by a human What are the two main categories of machine learning? Supervised learning: Making predictions using data Example: Is a given email "spam" or "ham"? There is an outcome we are trying to predict Unsupervised learning: Extracting structure from data Example: Segment grocery store shoppers into clusters that exhibit similar behaviors There is no "right answer" How does machine learning "work"? High-level steps of supervised learning: First, train a machine learning model using labeled data "Labeled data" has been labeled with the outcome "Machine learning model" learns the relationship between the attributes of the data and its outcome Then, make predictions on new data for which the label is unknown The primary goal of supervised learning is to build a model that "generalizes": It accurately predicts the future rather than the past! Questions about machine learning How do I choose which attributes of my data to include in the model? How do I choose which model to use? How do I optimize this model for best performance? How do I ensure that I'm building a model that will generalize to unseen data? Can I estimate how well my model is likely to perform on unseen data? 
Benefits and drawbacks of scikit-learn Benefits: Consistent interface to machine learning models Provides many tuning parameters but with sensible defaults Exceptional documentation Rich set of functionality for companion tasks Active community for development and support Potential drawbacks: Harder (than R) to get started with machine learning Less emphasis (than R) on model interpretability Further reading: Ben Lorica: Six reasons why I recommend scikit-learn scikit-learn authors: API design for machine learning software Data School: Should you teach Python or R for data science? Types of supervised learning Classification: Predict a categorical response Regression: Predict a ordered/continuous response Note that each value we are predicting is the response (also known as: target, outcome, label, dependent variable) Model evaluation metrics Regression problems: Mean Absolute Error, Mean Squared Error, Root Mean Squared Error Classification problems: Classification accuracy Load Corpus End of explanation """ print(x_train.shape) display(x_train.head()) display(x_train.describe()) """ Explanation: What are the features? - AgeClass: - AgeClassSquared: - AgeSquared: - ... What is the response? - Survived: 1-Yes, 0-No What else do we know? - Because the response variable is dicrete, this is a Classification problem. - There are 200 observations (represented by the rows), and each observation is a single market. Note that if the response variable is continuous, this is a regression problem. End of explanation """ from sklearn import tree clf = tree.DecisionTreeClassifier() clf = clf.fit(x_train, y_train) #Once trained, we can export the tree in Graphviz format using the export_graphviz exporter. #Below is an example export of a tree trained on the entire iris dataset: with open("output/titanic.dot", 'w') as f: f = tree.export_graphviz(clf, out_file=f) #Then we can use Graphviz’s dot tool to create a PDF file (or any other supported file type): #dot -Tpdf titanic.dot -o titanic.pdf. import os os.unlink('output/titanic.dot') #Alternatively, if we have Python module pydotplus installed, we can generate a PDF file #(or any other supported file type) directly in Python: import pydotplus dot_data = tree.export_graphviz(clf, out_file=None) graph = pydotplus.graph_from_dot_data(dot_data) graph.write_pdf("output/titanic.pdf") #The export_graphviz exporter also supports a variety of aesthetic options, #including coloring nodes by their class (or value for regression) #and using explicit variable and class names if desired. 
#IPython notebooks can also render these plots inline using the Image() function: """from IPython.display import Image dot_data = tree.export_graphviz(clf, out_file=None, feature_names= list(x_train.columns[1:]), #iris.feature_names, class_names= ["Survived"], #iris.target_names, filled=True, rounded=True, special_characters=True) graph = pydotplus.graph_from_dot_data(dot_data) Image(graph.create_png())""" print("accuracy score: ", clf.score(x_test,y_test)) """ Explanation: Decision Trees Classification End of explanation """ #After being fitted, the model can then be used to predict the class of samples: y_pred_class = clf.predict(x_test); #Alternatively, the probability of each class can be predicted, #which is the fraction of training samples of the same class in a leaf: clf.predict_proba(x_test); # calculate accuracy from sklearn import metrics print(metrics.accuracy_score(y_test, y_pred_class)) """ Explanation: Classification accuracy: percentage of correct predictions End of explanation """ # examine the class distribution of the testing set (using a Pandas Series method) y_test.value_counts() # calculate the percentage of ones y_test.mean() # calculate the percentage of zeros 1 - y_test.mean() # calculate null accuracy (for binary classification problems coded as 0/1) max(y_test.mean(), 1 - y_test.mean()) # calculate null accuracy (for multi-class classification problems) y_test.value_counts().head(1) / len(y_test) """ Explanation: Null accuracy: accuracy that could be achieved by always predicting the most frequent class End of explanation """ # print the first 25 true and predicted responses from __future__ import print_function print('True:', y_test.values[0:25]) print('Pred:', y_pred_class[0:25]) """ Explanation: Comparing the true and predicted response values End of explanation """ # IMPORTANT: first argument is true values, second argument is predicted values print(metrics.confusion_matrix(y_test, y_pred_class)) """ Explanation: Conclusion: ??? Classification accuracy is the easiest classification metric to understand But, it does not tell you the underlying distribution of response values And, it does not tell you what "types" of errors your classifier is making Confusion matrix Table that describes the performance of a classification model End of explanation """ # save confusion matrix and slice into four pieces confusion = metrics.confusion_matrix(y_test, y_pred_class) TP = confusion[1, 1] TN = confusion[0, 0] FP = confusion[0, 1] FN = confusion[1, 0] print(TP, TN, FP, FN) """ Explanation: Basic terminology True Positives (TP): we correctly predicted that they do have diabetes True Negatives (TN): we correctly predicted that they don't have diabetes False Positives (FP): we incorrectly predicted that they do have diabetes (a "Type I error") False Negatives (FN): we incorrectly predicted that they don't have diabetes (a "Type II error") End of explanation """ print((TP + TN) / float(TP + TN + FP + FN)) print(metrics.accuracy_score(y_test, y_pred_class)) """ Explanation: Metrics computed from a confusion matrix Classification Accuracy: Overall, how often is the classifier correct? End of explanation """ print((FP + FN) / float(TP + TN + FP + FN)) print(1 - metrics.accuracy_score(y_test, y_pred_class)) """ Explanation: Classification Error: Overall, how often is the classifier incorrect? Also known as "Misclassification Rate" End of explanation """ print(TN / float(TN + FP)) """ Explanation: Specificity: When the actual value is negative, how often is the prediction correct? 
How "specific" (or "selective") is the classifier in predicting positive instances? End of explanation """ print(FP / float(TN + FP)) """ Explanation: False Positive Rate: When the actual value is negative, how often is the prediction incorrect? End of explanation """ print(TP / float(TP + FP)) print(metrics.precision_score(y_test, y_pred_class)) print("Presicion: ", metrics.precision_score(y_test, y_pred_class)) print("Recall: ", metrics.recall_score(y_test, y_pred_class)) print("F1 score: ", metrics.f1_score(y_test, y_pred_class)) """ Explanation: Precision: When a positive value is predicted, how often is the prediction correct? How "precise" is the classifier when predicting positive instances? End of explanation """ from sklearn import svm model = svm.LinearSVC() # fit a model to the data model.fit(x_train, y_train) acc_score = model.score(x_test, y_test) print("Accuracy score: ", acc_score) y_pred_class = model.predict(x_test) from sklearn import metrics confusion_matrix = metrics.confusion_matrix(y_test, y_pred_class) print(confusion_matrix) # summarize the fit of the model print(metrics.classification_report(y_test, y_pred_class)) print(metrics.confusion_matrix(y_test, y_pred_class)) """ Explanation: Many other metrics can be computed: F1 score, Matthews correlation coefficient, etc. Conclusion: Confusion matrix gives you a more complete picture of how your classifier is performing Also allows you to compute various classification metrics, and these metrics can guide your model selection Which metrics should you focus on? Choice of metric depends on your business objective Spam filter (positive class is "spam"): Optimize for precision or specificity because false negatives (spam goes to the inbox) are more acceptable than false positives (non-spam is caught by the spam filter) Fraudulent transaction detector (positive class is "fraud"): Optimize for sensitivity because false positives (normal transactions that are flagged as possible fraud) are more acceptable than false negatives (fraudulent transactions that are not detected) Support Vector Machine (SVM) Linear Support Vector Classification. Similar to SVC with parameter kernel=’linear’, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples. 
Ref: http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html#sklearn.svm.LinearSVC End of explanation """ from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.gaussian_process import GaussianProcessClassifier from sklearn.gaussian_process.kernels import RBF from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.neural_network import MLPClassifier from sklearn.naive_bayes import GaussianNB from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from sklearn.datasets import make_classification from sklearn.preprocessing import StandardScaler from matplotlib.colors import ListedColormap #classifiers #x_train #sns.pairplot(x_train) x_train_scaled = StandardScaler().fit_transform(x_train) x_test_scaled = StandardScaler().fit_transform(x_test) x_train_scaled[0] len(x_train_scaled[0]) df_x_train_scaled = pd.DataFrame(columns=x_train.columns, data=x_train_scaled) df_x_train_scaled.head() #sns.pairplot(df_x_train_scaled) """ Explanation: Classifier comparison http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html A comparison of a several classifiers in scikit-learn on synthetic datasets. The point of this example is to illustrate the nature of decision boundaries of different classifiers. This should be taken with a grain of salt, as the intuition conveyed by these examples does not necessarily carry over to real datasets. Particularly in high-dimensional spaces, data can more easily be separated linearly and the simplicity of classifiers such as naive Bayes and linear SVMs might lead to better generalization than is achieved by other classifiers. The plots show training points in solid colors and testing points semi-transparent. The lower right shows the classification accuracy on the test set. End of explanation """ from sklearn.linear_model import LogisticRegression from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC from sklearn import model_selection # prepare configuration for cross validation test harness seed = 7 # prepare models models = [] models.append(('LR', LogisticRegression())) models.append(('LDA', LinearDiscriminantAnalysis())) models.append(('KNN', KNeighborsClassifier())) models.append(('CART', DecisionTreeClassifier())) models.append(('NB', GaussianNB())) models.append(('SVM', SVC())) # evaluate each model in turn results = [] names = [] scoring = 'accuracy' for name, model in models: kfold = model_selection.KFold(n_splits=10, random_state=seed) cv_results = model_selection.cross_val_score(model, x_train, y_train, cv=kfold, scoring=scoring) results.append(cv_results) names.append(name) msg = "%s: %f (+-%f)" % (name, cv_results.mean(), cv_results.std()) print(msg) # boxplot algorithm comparison fig = plt.figure() fig.suptitle('Algorithm Comparison') ax = fig.add_subplot(111) plt.boxplot(results) ax.set_xticklabels(names) plt.show() """ Explanation: How To Compare Machine Learning Algorithms in Python with scikit-learn Ref: http://machinelearningmastery.com/compare-machine-learning-algorithms-python-scikit-learn/ Choose The Best Machine Learning Model How do you choose the best model for your problem? When you work on a machine learning project, you often end up with multiple good models to choose from. 
Each model will have different performance characteristics. Using resampling methods like cross validation, you can get an estimate for how accurate each model may be on unseen data. You need to be able to use these estimates to choose one or two best models from the suite of models that you have created. Compare Machine Learning Models Carefully When you have a new dataset, it is a good idea to visualize the data using different techniques in order to look at the data from different perspectives. The same idea applies to model selection. You should use a number of different ways of looking at the estimated accuracy of your machine learning algorithms in order to choose the one or two to finalize. A way to do this is to use different visualization methods to show the average accuracy, variance and other properties of the distribution of model accuracies. Compare Machine Learning Algorithms Consistently The key to a fair comparison of machine learning algorithms is ensuring that each algorithm is evaluated in the same way on the same data. You can achieve this by forcing each algorithm to be evaluated on a consistent test harness. In the example below 6 different algorithms are compared: Logistic Regression Linear Discriminant Analysis K-Nearest Neighbors Classification and Regression Trees Naive Bayes Support Vector Machines The problem is a standard binary classification dataset from the UCI machine learning repository called the Pima Indians onset of diabetes problem. The problem has two classes and eight numeric input variables of varying scales. The 10-fold cross validation procedure is used to evaluate each algorithm, importantly configured with the same random seed to ensure that the same splits to the training data are performed and that each algorithms is evaluated in precisely the same way. Each algorithm is given a short name, useful for summarizing results afterward. End of explanation """ names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Decision Tree", "Random Forest", "Neural Net", "AdaBoost", "Naive Bayes", "QDA", "Gaussian Process"] classifiers = [ KNeighborsClassifier(3), SVC(kernel="linear", C=0.025), SVC(gamma=2, C=1), DecisionTreeClassifier(max_depth=5), RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1), MLPClassifier(alpha=1), AdaBoostClassifier(), GaussianNB(), QuadraticDiscriminantAnalysis() #, GaussianProcessClassifier(1.0 * RBF(1.0), warm_start=True), # Take too long... ] # iterate over classifiers for name, model in zip(names, classifiers): # fit a model to the data model.fit(x_train_scaled, y_train) # make predictions - not used # summarize the fit of the model acc_score = model.score(x_test_scaled, y_test) print(name, " - accuracy score: ", acc_score) #end for names_classifiers = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Decision Tree", "Random Forest", "Neural Net", "AdaBoost", "Naive Bayes", "QDA", "Gaussian Process"] classifiers = [ KNeighborsClassifier(3), SVC(kernel="linear", C=0.025), SVC(gamma=2, C=1), DecisionTreeClassifier(max_depth=5), RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1), MLPClassifier(alpha=1), AdaBoostClassifier(), GaussianNB(), QuadraticDiscriminantAnalysis() #, GaussianProcessClassifier(1.0 * RBF(1.0), warm_start=True), # Take too long... 
] # prepare configuration for cross validation test harness seed = 7 models = zip(names_classifiers, classifiers) # evaluate each model in turn results = [] names = [] scoring = 'accuracy' for name, model in models: kfold = model_selection.KFold(n_splits=10, random_state=seed) cv_results = model_selection.cross_val_score(model, x_train_scaled, y_train, cv=kfold, scoring=scoring) results.append(cv_results) names.append(name) msg = "%s: %f (+-%f)" % (name, cv_results.mean(), cv_results.std()) print(msg) # boxplot algorithm comparison fig = plt.figure(figsize=(16, 6)) fig.suptitle('Algorithm Comparison') ax = fig.add_subplot(111) plt.boxplot(results) ax.set_xticklabels(names) plt.show() """ Explanation: Comment: From above results, it would suggest that both logistic regression, linear discriminate analysis, CART and NB are perhaps worthy of further study on this problem. Note that, we use x_train for cross validation. Using some hyperparameters ... End of explanation """ from xgboost import XGBClassifier from sklearn.metrics import accuracy_score # fit model no training data model = XGBClassifier() model.fit(x_train, y_train) # make predictions for test data y_pred = model.predict(x_test) predictions = [round(value) for value in y_pred] # evaluate predictions accuracy = accuracy_score(y_test, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) # Save Model Using pickle import pickle # fit model no training data model = XGBClassifier() model.fit(x_train, y_train) # save the model to disk filename = 'output/XGBClassifier_model-pickle.sav' pickle.dump(model, open(filename, 'wb')) # some time later... # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) # make predictions for test data y_pred = loaded_model.predict(x_test) predictions = [round(value) for value in y_pred] # evaluate predictions accuracy = accuracy_score(y_test, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) # Save Model Using joblib from sklearn.externals import joblib # fit model no training data model = XGBClassifier() model.fit(x_train, y_train) # save the model to disk filename = 'output/XGBClassifier_model-joblib.sav' joblib.dump(model, filename) # some time later... # load the model from disk loaded_model = joblib.load(filename) # make predictions for test data y_pred = loaded_model.predict(x_test) predictions = [round(value) for value in y_pred] # evaluate predictions accuracy = accuracy_score(y_test, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) # Using data after scaling ... 
# fit model no training data model = XGBClassifier() model.fit(x_train_scaled, y_train) # make predictions for test data y_pred = model.predict(x_test_scaled) predictions = [round(value) for value in y_pred] # evaluate predictions accuracy = accuracy_score(y_test, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) from sklearn.model_selection import KFold, cross_val_score seed = 7 kfold = KFold(n_splits=10, random_state=seed) model = XGBClassifier() results = cross_val_score(model, x_train, y_train, cv=kfold) print(results) print("max: ", results.max()) print("min: ", results.min()) print("mean: ", results.mean()) print("Accuracy: %0.2f (+/- %0.2f)" % (results.mean(), results.std() * 2)) #for train_indices, test_indices in kfold.split(x_train): # print('Train: %s | test: %s' % (train_indices, test_indices)) """ Explanation: Comment: From above results, it would suggest that both Nearest Neighbors, Decision Tree, Random Forest and AdaBoost are perhaps worthy of further study on this problem. Note that, we use x_train_scaled for cross validation. How to Develop Your First XGBoost Model in Python with scikit-learn Ref: + http://machinelearningmastery.com/develop-first-xgboost-model-python-scikit-learn/ + http://machinelearningmastery.com/stochastic-gradient-boosting-xgboost-scikit-learn-python/ End of explanation """ from sklearn.ensemble import BaggingClassifier from sklearn.tree import DecisionTreeClassifier from sklearn import model_selection seed = 7 kfold = model_selection.KFold(n_splits=10, random_state=seed) num_trees = 100 clf = DecisionTreeClassifier() model = BaggingClassifier(base_estimator=clf, n_estimators=num_trees, random_state=seed) results = model_selection.cross_val_score(model, x_train, y_train, cv=kfold) print(results) print("max: ", results.max()) print("min: ", results.min()) print("mean: ", results.mean()) # We get a robust estimate of model accuracy. print("Accuracy: %0.2f (+/- %0.2f)" % (results.mean(), results.std() * 2)) """ Explanation: Ensemble Machine Learning Algorithms Ref: http://machinelearningmastery.com/ensemble-machine-learning-algorithms-python-scikit-learn/ Combine Model Predictions Into Ensemble Predictions The three most popular methods for combining the predictions from different models are: Bagging: Building multiple models (typically of the same type) from different subsamples of the training dataset. Boosting: Building multiple models (typically of the same type) each of which learns to fix the prediction errors of a prior model in the chain. Voting: Building multiple models (typically of differing types) and simple statistics (like calculating the mean) are used to combine predictions. Each ensemble algorithm is demonstrated using 10 fold cross validation, a standard technique used to estimate the performance of any machine learning algorithm on unseen data. In this part, we discovered ensemble machine learning algorithms for improving the performance of models on our problems. + Bagging Ensembles including Bagged Decision Trees, Random Forest and Extra Trees. + Boosting Ensembles including AdaBoost and Stochastic Gradient Boosting. + Voting Ensembles for averaging the predictions for any arbitrary models. Bagging Algorithms Bootstrap Aggregation or bagging involves taking multiple samples from your training dataset (with replacement) and training a model for each sample. The final output prediction is averaged across the predictions of all of the sub-models. 
The three bagging models covered in this section are as follows: Bagged Decision Trees Random Forest Extra Trees Bagged Decision Trees Bagging performs best with algorithms that have high variance. A popular example are decision trees, often constructed without pruning. In the example below see an example of using the BaggingClassifier with the Classification and Regression Trees algorithm (DecisionTreeClassifier). A total of 100 trees are created. End of explanation """ # Random Forest Classification from sklearn import model_selection from sklearn.ensemble import RandomForestClassifier seed = 7 num_trees = 100 max_features = 3 kfold = model_selection.KFold(n_splits=10, random_state=seed) model = RandomForestClassifier(n_estimators=num_trees, max_features=max_features) results = model_selection.cross_val_score(model, x_train, y_train, cv=kfold) print(results) print("max: ", results.max()) print("min: ", results.min()) print("mean: ", results.mean()) # We get a mean estimate of classification accuracy. print("Accuracy: %0.2f (+/- %0.2f)" % (results.mean(), results.std() * 2)) """ Explanation: Random Forest Random forest is an extension of bagged decision trees. Samples of the training dataset are taken with replacement, but the trees are constructed in a way that reduces the correlation between individual classifiers. Specifically, rather than greedily choosing the best split point in the construction of the tree, only a random subset of features are considered for each split. We can construct a Random Forest model for classification using the RandomForestClassifier (http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) class. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and use averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is always the same as the original input sample size but the samples are drawn with replacement if bootstrap=True (default). The example below provides an example of Random Forest for classification with 100 trees and split points chosen from a random selection of 3 features. End of explanation """ # Extra Trees Classification import pandas from sklearn import model_selection from sklearn.ensemble import ExtraTreesClassifier seed = 7 num_trees = 100 max_features = 7 kfold = model_selection.KFold(n_splits=10, random_state=seed) model = ExtraTreesClassifier(n_estimators=num_trees, max_features=max_features) results = model_selection.cross_val_score(model, x_train, y_train, cv=kfold) print(results) print("max: ", results.max()) print("min: ", results.min()) print("mean: ", results.mean()) # We get a mean estimate of classification accuracy. print("Accuracy: %0.2f (+/- %0.2f)" % (results.mean(), results.std() * 2)) """ Explanation: Extra Trees Extra Trees are another modification of bagging where random trees are constructed from samples of the training dataset. You can construct an Extra Trees model for classification using the ExtraTreesClassifier (http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html) class. This class implements a meta estimator that fits a number of randomized decision trees (a.k.a. extra-trees) on various sub-samples of the dataset and use averaging to improve the predictive accuracy and control over-fitting. The example below provides a demonstration of extra trees with the number of trees set to 100 and splits chosen from 7 random features. 
End of explanation """ # AdaBoost Classification from sklearn import model_selection from sklearn.ensemble import AdaBoostClassifier seed = 7 num_trees = 30 kfold = model_selection.KFold(n_splits=10, random_state=seed) model = AdaBoostClassifier(n_estimators=num_trees, random_state=seed) results = model_selection.cross_val_score(model, x_train, y_train, cv=kfold) print(results) print("max: ", results.max()) print("min: ", results.min()) print("mean: ", results.mean()) # We get a mean estimate of classification accuracy. print("Accuracy: %0.2f (+/- %0.2f)" % (results.mean(), results.std() * 2)) """ Explanation: Boosting Algorithms Boosting ensemble algorithms creates a sequence of models that attempt to correct the mistakes of the models before them in the sequence. Once created, the models make predictions which may be weighted by their demonstrated accuracy and the results are combined to create a final output prediction. The two most common boosting ensemble machine learning algorithms are: AdaBoost Stochastic Gradient Boosting AdaBoost AdaBoost was perhaps the first successful boosting ensemble algorithm. It generally works by weighting instances in the dataset by how easy or difficult they are to classify, allowing the algorithm to pay or or less attention to them in the construction of subsequent models. You can construct an AdaBoost model for classification using the AdaBoostClassifier (http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) class. An AdaBoost [1] classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset but where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases. This class implements the algorithm known as AdaBoost-SAMME [2]. The example below demonstrates the construction of 30 decision trees in sequence using the AdaBoost algorithm. End of explanation """ # Stochastic Gradient Boosting Classification from sklearn import model_selection from sklearn.ensemble import GradientBoostingClassifier seed = 7 num_trees = 100 kfold = model_selection.KFold(n_splits=10, random_state=seed) model = GradientBoostingClassifier(n_estimators=num_trees, random_state=seed) results = model_selection.cross_val_score(model, x_train, y_train, cv=kfold) print(results) print("max: ", results.max()) print("min: ", results.min()) print("mean: ", results.mean()) # We get a mean estimate of classification accuracy. print("Accuracy: %0.2f (+/- %0.2f)" % (results.mean(), results.std() * 2)) """ Explanation: Stochastic Gradient Boosting Stochastic Gradient Boosting (also called Gradient Boosting Machines) are one of the most sophisticated ensemble techniques. It is also a technique that is proving to be perhaps of the the best techniques available for improving performance via ensembles. You can construct a Gradient Boosting model for classification using the GradientBoostingClassifier (http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html) class. Gradient Boosting for classification - GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage n_classes_ regression trees are fit on the negative gradient of the binomial or multinomial deviance loss function. Binary classification is a special case where only a single regression tree is induced. 
The example below demonstrates Stochastic Gradient Boosting for classification with 100 trees. End of explanation """ # Voting Ensemble for Classification from sklearn import model_selection from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.svm import SVC from sklearn.ensemble import VotingClassifier seed = 7 kfold = model_selection.KFold(n_splits=10, random_state=seed) # create the sub models estimators = [] model1 = LogisticRegression() estimators.append(('logistic', model1)) model2 = DecisionTreeClassifier() estimators.append(('cart', model2)) model3 = SVC() estimators.append(('svm', model3)) # create the ensemble model ensemble = VotingClassifier(estimators) results = model_selection.cross_val_score(ensemble, x_train, y_train, cv=kfold) print(results) print("max: ", results.max()) print("min: ", results.min()) print("mean: ", results.mean()) # We get a mean estimate of classification accuracy. print("Accuracy: %0.2f (+/- %0.2f)" % (results.mean(), results.std() * 2)) """ Explanation: Voting Ensemble Voting is one of the simplest ways of combining the predictions from multiple machine learning algorithms. It works by first creating two or more standalone models from your training dataset. A Voting Classifier can then be used to wrap your models and average the predictions of the sub-models when asked to make predictions for new data. The predictions of the sub-models can be weighted, but specifying the weights for classifiers manually or even heuristically is difficult. More advanced methods can learn how to best weight the predictions from submodels, but this is called stacking (stacked aggregation) and is currently not provided in scikit-learn. You can create a voting ensemble model for classification using the VotingClassifier (http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingClassifier.html) class. Soft Voting/Majority Rule classifier for unfitted estimators. New in version 0.17. The code below provides an example of combining the predictions of logistic regression, classification and regression trees and support vector machines together for a classification problem. End of explanation """
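The note above says that stacking (stacked aggregation) was not provided in scikit-learn at the time of writing; newer releases (0.22 and later) do ship a StackingClassifier. The following is only a sketch, not part of the original notebook: it assumes such a scikit-learn version is installed and reuses the x_train, y_train, x_test and y_test frames defined earlier, stacking the same three sub-models under a logistic-regression meta-learner.

# Stacking sketch: requires scikit-learn >= 0.22 for StackingClassifier.
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

base_learners = [('logistic', LogisticRegression()),
                 ('cart', DecisionTreeClassifier()),
                 ('svm', SVC())]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(),
                           cv=5)  # meta-learner is fit on out-of-fold predictions
stack.fit(x_train, y_train)
print("stacking accuracy on the test set: %.3f" % stack.score(x_test, y_test))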
henchc/Data-on-the-Mind-2017-scraping-apis
01-APIs/solutions/01-API_solutions.ipynb
mit
import requests # to make the GET request import json # to parse the JSON response to a Python dictionary import time # to pause after each API call import csv # to write our data to a CSV import pandas # to see our CSV """ Explanation: Accessing Databases via Web APIs In this lesson we'll learn what an API (Application Programming Interface) is, how it's normally used, and how we can collect data from it. We'll then look at how Python can help us quickly gather data from APIs, parse the data, and write to a CSV. There are four sections: Constructing an API GET request Parsing the JSON response Looping through result pages Exporting to CSV First we'll import the required Python libraries End of explanation """ # set API key var key = "" # set base url var base_url = "http://api.nytimes.com/svc/search/v2/articlesearch" # set response format var response_format = ".json" """ Explanation: All of these are standard Python libraries, so no matter your distribution, these should be installed. 1. Constructing an API GET request We're going to use the New York Times API. You'll need to first sign up for an API key. We know that every call to any API will require us to provide: a base URL for the API, (usually) some authorization code or key, and a format for the response. Let's write this information to some variables: End of explanation """ # set search parameters search_params = {"q": "Duke Ellington", "api-key": key} """ Explanation: Notice we assign each variable as a string. While the requests library will convert integers, it's better to be consistent and use strings for all parameters of a GET request. We choose JSON as the response format, as it is easy to parse quickly with Python, though XML is often an viable frequently offered alternative. JSON stands for "Javascript object notation." It has a very similar structure to a python dictionary -- both are built on key/value pairs. You often want to send some sort of data in the URL’s query string. This data tells the API what information you want. In our case, we're going to look for articles about Duke Ellington. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. In addition to the search term q, we have to put in the api-key term. We know these key names from the NYT API documentation. End of explanation """ # make request response = requests.get(base_url + response_format, params=search_params) """ Explanation: Now we're ready to make the request. We use the .get method from the requests library to make an HTTP GET Request. End of explanation """ print(response.url) """ Explanation: Now, we have a response object called response. We can get all the information we need from this object. For instance, we can see that the URL has been correctly encoded by printing the URL. Click on the link to see what happens. End of explanation """ # set date parameters here search_params = {"q": "Duke Ellington", "api-key": key, "begin_date": "20150101", # date must be in YYYYMMDD format "end_date": "20151231"} # uncomment to test r = requests.get(base_url + response_format, params=search_params) print(r.url) """ Explanation: Click on that link to see it returns! Notice that all Python is doing here for us is helping us construct a complicated URL built with &amp; and = signs. You just noticed we could just as well copy and paste this URL to a browser and then save the response, but Python's requests library is much easier and scalable when making multiple queries in succession. 
Challenge 1: Adding a date range What if we only want to search within a particular date range? The NYT Article API allows us to specify start and end dates. Alter the search_params code above so that the request only searches for articles in the year 2015. You're going to need to look at the documentation to see how to do this. End of explanation """ # set page parameters here search_params["page"] = 0 # uncomment to test r = requests.get(base_url + response_format, params=search_params) print(r.url) """ Explanation: Challenge 2: Specifying a results page The above will return the first 10 results. To get the next ten, you need to add a "page" parameter. Change the search parameters above to get the second 10 results. End of explanation """ # inspect the content of the response, parsing the result as text response_text = r.text print(response_text[:1000]) """ Explanation: 2. Parsing the JSON response We can read the content of the server’s response using .text End of explanation """ # convert JSON response to a dictionary data = json.loads(response_text) print(data) """ Explanation: What you see here is JSON text, encoded as unicode text. As mentioned, JSON is bascially a Python dictionary, and we can convert this string text to a Python dictionary by using the loads to load from a string. End of explanation """ print(data.keys()) # this is boring print(data['status']) # so is this print(data['copyright']) # this is what we want! print(data['response']) print(data['response'].keys()) print(data['response']['meta'].keys()) print(data['response']['meta']['hits']) """ Explanation: That looks intimidating! But it's really just a big dictionary. The most time-consuming part of using APIs is traversing the various key-value trees to see where the information you want resides. Let's see what keys we got in there. End of explanation """ print(data['response']['docs']) """ Explanation: Looks like there were 93 hits total for our query. Let's take a look: End of explanation """ print(type(data['response']['docs'])) """ Explanation: It starts with a square bracket, so it looks like a list, and from a glance it looks like the list of articles we're interested in. End of explanation """ docs = data['response']['docs'] print(docs[0]) """ Explanation: Let's just save this list to a new variable. Often when using web APIs, you'll spend the majority of your time restructuring the response data to how you want it. End of explanation """ print(len(docs)) """ Explanation: Wow! That's a lot of information about just one article! But wait... End of explanation """ # get number of hits total (in any page we request) hits = data['response']['meta']['hits'] print("number of hits: ", str(hits)) # get number of pages pages = hits // 10 + 1 # make an empty list where we'll hold all of our docs for every page all_docs = [] # now we're ready to loop through the pages for i in range(pages): print("collecting page", str(i)) # set the page parameter search_params['page'] = i # make request r = requests.get(base_url + response_format, params=search_params) # get text and convert to a dictionary data = json.loads(r.text) # get just the docs docs = data['response']['docs'] # add those docs to the big list all_docs = all_docs + docs # IMPORTANT pause between calls time.sleep(5) print(len(all_docs)) """ Explanation: 3. Looping through result pages We're making progress, but we only have 10 items. The original response said we had 93 hits! Which means we have to make 93 /10, or 10 requests to get them all. 
Sounds like a job for a loop! End of explanation """ final_docs = [] for d in all_docs: # create empty dict for each doc to collect info targeted_info = {} targeted_info['id'] = d['_id'] targeted_info['headline'] = d['headline']['main'] targeted_info['date'] = d['pub_date'][0:10] # cutting time of day. targeted_info['word_count'] = d['word_count'] targeted_info['keywords'] = [keyword['value'] for keyword in d['keywords']] try: # some docs don't have this info targeted_info['lead_paragraph'] = d['lead_paragraph'] except: pass # append final doc info to list final_docs.append(targeted_info) """ Explanation: 4. Exporting to CSV Great, now we have all the articles. Let's just take out some bits of information and write to a CSV. End of explanation """ header = final_docs[1].keys() with open('all-docs.csv', 'w') as output_file: dict_writer = csv.DictWriter(output_file, header) dict_writer.writeheader() dict_writer.writerows(final_docs) pandas.read_csv('all-docs.csv') """ Explanation: We can write our sifted information to a CSV now: End of explanation """
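# Optional aside (not part of the original walkthrough): because final_docs is a
# list of dictionaries, the same export can also be done in one line with pandas.
pandas.DataFrame(final_docs).to_csv('all-docs-pandas.csv', index=False)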
ibm-cds-labs/spark.samples
notebook/Twitter Sentiment with Watson TA and PI.ipynb
apache-2.0
!pip install --user python-twitter !pip install --user watson-developer-cloud """ Explanation: Twitter Sentiment analysis with Watson Tone Analyzer and Watson Personality Insights <img style="max-width: 800px; padding: 25px 0px;" src="https://ibm-watson-data-lab.github.io/spark.samples/Twitter%20Sentiment%20with%20Watson%20TA%20and%20PI%20architecture%20diagram.png"/> In this notebook, we perform the following steps: 1. Install python-twitter and watson-developer-cloud modules 2. Install the streaming Twitter jar using PixieDust packageManager 3. Invoke the streaming Twitter app using the PixieDust Scala Bridge to get a DataFrame containing all the tweets enriched with Watson Tone Analyzer scores 4. Create a new RDD that groups the tweets by author and concatenates all the associated tweets into one blob 5. For each author and aggregated text, invoke the Watson Personality Insights to get the scores 6. Visualize results using PixieDust display Learn more Watson Tone Analyzer Watson Personality Insights python-twitter watson-developer-cloud PixieDust Realtime Sentiment Analysis of Twitter Hashtags with Spark Install python-twitter and watson-developer-cloud If you haven't already installed the following modules, run these 2 cells: End of explanation """ !pip install --upgrade --user pixiedust """ Explanation: Install latest pixiedust Make sure you are running the latest pixiedust version. After upgrading restart the kernel before continuing to the next cells. End of explanation """ import pixiedust jarPath = "https://github.com/ibm-watson-data-lab/spark.samples/raw/master/dist/streaming-twitter-assembly-1.6.jar" pixiedust.installPackage(jarPath) print("done") """ Explanation: Install the streaming Twitter jar in the notebook from the Github repo This jar file contains the Spark Streaming application (written in Scala) that connects to Twitter to fetch the tweets and send them to Watson Tone Analyzer for analysis. The resulting scores are then added to the tweets dataframe as separate columns. End of explanation """ import pixiedust sqlContext=SQLContext(sc) #Set up the twitter credentials, they will be used both in scala and python cells below consumerKey = "XXXX" consumerSecret = "XXXX" accessToken = "XXXX" accessTokenSecret = "XXXX" #Set up the Watson Personality insight credentials piUserName = "XXXX" piPassword = "XXXX" #Set up the Watson Tone Analyzer credentials taUserName = "XXXX" taPassword = "XXXX" %%scala val demo = com.ibm.cds.spark.samples.StreamingTwitter demo.setConfig("twitter4j.oauth.consumerKey",consumerKey) demo.setConfig("twitter4j.oauth.consumerSecret",consumerSecret) demo.setConfig("twitter4j.oauth.accessToken",accessToken) demo.setConfig("twitter4j.oauth.accessTokenSecret",accessTokenSecret) demo.setConfig("watson.tone.url","https://gateway.watsonplatform.net/tone-analyzer/api") demo.setConfig("watson.tone.password",taPassword) demo.setConfig("watson.tone.username",taUserName) import org.apache.spark.streaming._ demo.startTwitterStreaming(sc, Seconds(30)) //Run the application for a limited time """ Explanation: <h3>If PixieDust or the streaming Twitter jar were just installed or upgraded, <span style="color: red">restart the kernel</span> before continuing.</h3> Use Scala Bridge to run the command line version of the app Insert your credentials for Twitter, Watson Tone Analyzer, and Watson Personality Insights. Then run the following cell. Read how to provision these services and get credentials. 
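One possible refinement (a sketch, not shown in the original notebook) is to read the secrets from environment variables instead of hard-coding them; the environment variable names below are just placeholders, and the same pattern applies to the Personality Insights credentials:
import os
consumerKey = os.environ.get("TWITTER_CONSUMER_KEY", "XXXX")
consumerSecret = os.environ.get("TWITTER_CONSUMER_SECRET", "XXXX")
accessToken = os.environ.get("TWITTER_ACCESS_TOKEN", "XXXX")
accessTokenSecret = os.environ.get("TWITTER_ACCESS_TOKEN_SECRET", "XXXX")
taUserName = os.environ.get("WATSON_TA_USERNAME", "XXXX")
taPassword = os.environ.get("WATSON_TA_PASSWORD", "XXXX")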
End of explanation """ %%scala val demo = com.ibm.cds.spark.samples.StreamingTwitter val (__sqlContext, __df) = demo.createTwitterDataFrames(sc) """ Explanation: Create a tweets dataframe from the data fetched above and transfer it to Python Notice the __ prefix for each variable which is used to signal PixieDust that the variable needs to be transfered back to Python End of explanation """ import pyspark.sql.functions as F usersDF = __df.groupby("author", "userid").agg(F.avg("Anger").alias("Anger"), F.avg("Disgust").alias("Disgust")) usersDF.show() """ Explanation: Group the tweets by author and userid This will be used later to fetch the last 200 tweets for each author End of explanation """ import twitter api = twitter.Api(consumer_key=consumerKey, consumer_secret=consumerSecret, access_token_key=accessToken, access_token_secret=accessTokenSecret) #print(api.VerifyCredentials()) """ Explanation: Set up the Twitter API from python-twitter module End of explanation """ def getTweets(screenName): statuses = api.GetUserTimeline(screen_name=screenName, since_id=None, max_id=None, count=200, include_rts=False, trim_user=False, exclude_replies=True) return statuses usersWithTweetsRDD = usersDF.flatMap(lambda s: [(s.user.screen_name, s.text.encode('ascii', 'ignore')) for s in getTweets(s['userid'])]) print(usersWithTweetsRDD.count()) """ Explanation: For each author, fetch the last 200 tweets use flatMap to return a new RDD that contains a list of tuples composed of userid and tweets text: (userid, tweetText) End of explanation """ import re usersWithTweetsRDD2 = usersWithTweetsRDD.map(lambda s: (s[0], s[1])).reduceByKey(lambda s,t: s + '\n' + t)\ .filter(lambda s: len(re.findall(r'\w+', s[1])) > 100 ) print(usersWithTweetsRDD2.count()) #usersWithTweetsRDD2.take(2) """ Explanation: Concatenate all the tweets for each user so we have enough words to send to Watson Personality Insights Use map to create an RDD of key, value pair composed of userId and tweets Use reduceByKey to group all record with same author and concatenate the tweets End of explanation """ from pyspark.sql.types import * from watson_developer_cloud import PersonalityInsightsV3 broadCastPIUsername = sc.broadcast(piUserName) broadCastPIPassword = sc.broadcast(piPassword) def getPersonalityInsight(text, schema=False): personality_insights = PersonalityInsightsV3( version='2016-10-20', username=broadCastPIUsername.value, password=broadCastPIPassword.value) try: p = personality_insights.profile( text, content_type='text/plain', raw_scores=True, consumption_preferences=True) if schema: return \ [StructField(t['name'], FloatType()) for t in p["needs"]] + \ [StructField(t['name'], FloatType()) for t in p["values"]] + \ [StructField(t['name'], FloatType()) for t in p['personality' ]] else: return \ [t['raw_score'] for t in p["needs"]] + \ [t['raw_score'] for t in p["values"]] + \ [t['raw_score'] for t in p['personality']] except: return [] usersWithPIRDD = usersWithTweetsRDD2.map(lambda s: [s[0]] + getPersonalityInsight(s[1])).filter(lambda s: len(s)>1) print(usersWithPIRDD.count()) #usersWithPIRDD.take(2) """ Explanation: Call Watson Personality Insights on the text for each author Watson Personality Insights requires at least 100 words from its lexicon to be available, which may not exist for each user. This is why the getPersonlityInsight helper function guards against exceptions from calling Watson PI. If an exception occurs, then an empty array is returned. Each record with empty array is filtered out of the resulting RDD. 
Note also that we use broadcast variables to propagate the userName and password to the cluster End of explanation """ #convert to dataframe schema = StructType( [StructField('userid',StringType())] + getPersonalityInsight(usersWithTweetsRDD2.take(1)[0][1], schema=True) ) usersWithPIDF = sqlContext.createDataFrame( usersWithPIRDD, schema ) usersWithPIDF.cache() display(usersWithPIDF) """ Explanation: Convert the RDD back to a DataFrame and call PixieDust display to visualize the results The schema is automatically created from introspecting a sample payload result from Watson Personality Insights End of explanation """ candidates = "realDonaldTrump HillaryClinton".split(" ") candidatesRDD = sc.parallelize(candidates)\ .flatMap(lambda s: [(t.user.screen_name, t.text.encode('ascii', 'ignore')) for t in getTweets(s)])\ .map(lambda s: (s[0], s[1]))\ .reduceByKey(lambda s,t: s + '\n' + t)\ .filter(lambda s: len(re.findall(r'\w+', s[1])) > 100 )\ .map(lambda s: [s[0]] + getPersonalityInsight(s[1])) candidatesPIDF = sqlContext.createDataFrame( candidatesRDD, schema ) c = candidatesPIDF.collect() broadCastTrumpPI = sc.broadcast(c[0][1:]) broadCastHillaryPI = sc.broadcast(c[1][1:]) display(candidatesPIDF) candidatesPIDF.select('userid','Emotional range','Agreeableness', 'Extraversion','Conscientiousness', 'Openness').show() usersWithPIDF.describe(['Emotional range']).show() usersWithPIDF.describe(['Agreeableness']).show() usersWithPIDF.describe(['Extraversion']).show() usersWithPIDF.describe(['Conscientiousness']).show() usersWithPIDF.describe(['Openness']).show() """ Explanation: Compare Twitter users Personality Insights scores with this year presidential candidates For a quick look on the difference in Personality Insights scores Spark provides a describe() function that computes stddev and mean values off the dataframe. Compare differences in the scores of twitter users and presidential candidates. End of explanation """ import numpy as np from pyspark.sql.types import Row def addEuclideanDistance(s): dict = s.asDict() def getEuclideanDistance(a,b): return np.linalg.norm(np.array(a) - np.array(b)).item() dict["distDonaldTrump"]=getEuclideanDistance(s[1:], broadCastTrumpPI.value) dict["distHillary"]=getEuclideanDistance(s[1:], broadCastHillaryPI.value) dict["closerHillary"] = "Yes" if dict["distHillary"] < dict["distDonaldTrump"] else "No" return Row(**dict) #add euclidean distances to Trump and Hillary euclideanDF = sqlContext.createDataFrame(usersWithPIDF.map(lambda s: addEuclideanDistance(s))) #Reorder columns to have userid and distances first cols = euclideanDF.columns reorderCols = ["userid","distHillary","distDonaldTrump", "closerHillary"] euclideanDF = euclideanDF.select(reorderCols + [x for x in cols if x not in reorderCols]) #PixieDust display. 
#To visualize the distribution, select the bar chart display, use closerHillary as key and value and aggregation=count display(euclideanDF) """ Explanation: Calculate Euclidean distance (norm) between each Twitter user and the presidential candidates using the Personality Insights scores Add the distances into 2 extra columns and display the results End of explanation """ tweets=__df tweets.count() display(tweets) """ Explanation: Optional: do some extra data science on the tweets End of explanation """ #create an array that will hold the count for each sentiment sentimentDistribution=[0] * 13 #For each sentiment, run a sql query that counts the number of tweets for which the sentiment score is greater than 60% #Store the data in the array for i, sentiment in enumerate(tweets.columns[-13:]): sentimentDistribution[i]=__sqlContext.sql("SELECT count(*) as sentCount FROM tweets where " + sentiment + " > 60")\ .collect()[0].sentCount %matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt ind=np.arange(13) width = 0.35 bar = plt.bar(ind, sentimentDistribution, width, color='g', label = "distributions") params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches( (plSize[0]*2.5, plSize[1]*2) ) plt.ylabel('Tweet count') plt.xlabel('Tone') plt.title('Distribution of tweets by sentiments > 60%') plt.xticks(ind+width, tweets.columns[-13:]) plt.legend() plt.show() """ Explanation: Compute the sentiment distributions for tweets with scores greater than 60% and create matplotlib chart visualization End of explanation """ from operator import add import re tagsRDD = tweets.flatMap( lambda t: re.split("\s", t.text))\ .filter( lambda word: word.startswith("#") )\ .map( lambda word : (word, 1 ))\ .reduceByKey(add, 10).map(lambda (a,b): (b,a)).sortByKey(False).map(lambda (a,b):(b,a)) top10tags = tagsRDD.take(10) %matplotlib inline import matplotlib import matplotlib.pyplot as plt params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches( (plSize[0]*2, plSize[1]*2) ) labels = [i[0] for i in top10tags] sizes = [int(i[1]) for i in top10tags] colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral', "beige", "paleturquoise", "pink", "lightyellow", "coral"] plt.pie(sizes, labels=labels, colors=colors,autopct='%1.1f%%', shadow=True, startangle=90) plt.axis('equal') plt.show() """ Explanation: Compute the top hashtags used in each tweet End of explanation """ cols = tweets.columns[-13:] def expand( t ): ret = [] for s in [i[0] for i in top10tags]: if ( s in t.text ): for tone in cols: ret += [s.replace(':','').replace('-','') + u"-" + unicode(tone) + ":" + unicode(getattr(t, tone))] return ret def makeList(l): return l if isinstance(l, list) else [l] #Create RDD from tweets dataframe tagsRDD = tweets.map(lambda t: t ) #Filter to only keep the entries that are in top10tags tagsRDD = tagsRDD.filter( lambda t: any(s in t.text for s in [i[0] for i in top10tags] ) ) #Create a flatMap using the expand function defined above, this will be used to collect all the scores #for a particular tag with the following format: Tag-Tone-ToneScore tagsRDD = tagsRDD.flatMap( expand ) #Create a map indexed by Tag-Tone keys tagsRDD = tagsRDD.map( lambda fullTag : (fullTag.split(":")[0], float( fullTag.split(":")[1]) )) #Call combineByKey to format the data as follow #Key=Tag-Tone #Value=(count, sum_of_all_score_for_this_tone) tagsRDD = tagsRDD.combineByKey((lambda x: (x,1)), (lambda x, y: (x[0] + y, x[1] + 1)), (lambda x, y: (x[0] + y[0], x[1] + y[1]))) #ReIndex the 
map to have the key be the Tag and value be (Tone, Average_score) tuple #Key=Tag #Value=(Tone, average_score) tagsRDD = tagsRDD.map(lambda (key, ab): (key.split("-")[0], (key.split("-")[1], round(ab[0]/ab[1], 2)))) #Reduce the map on the Tag key, value becomes a list of (Tone,average_score) tuples tagsRDD = tagsRDD.reduceByKey( lambda x, y : makeList(x) + makeList(y) ) #Sort the (Tone,average_score) tuples alphabetically by Tone tagsRDD = tagsRDD.mapValues( lambda x : sorted(x) ) #Format the data as expected by the plotting code in the next cell. #map the Values to a tuple as follow: ([list of tone], [list of average score]) #e.g. #someTag:([u'Agreeableness', u'Analytical', u'Anger', u'Cheerfulness', u'Confident', u'Conscientiousness', u'Negative', u'Openness', u'Tentative'], [1.0, 0.0, 0.0, 1.0, 0.0, 0.48, 0.0, 0.02, 0.0]) tagsRDD = tagsRDD.mapValues( lambda x : ([elt[0] for elt in x],[elt[1] for elt in x]) ) #Use custom sort function to sort the entries by order of appearance in top10tags def customCompare( key ): for (k,v) in top10tags: if k == key: return v return 0 tagsRDD = tagsRDD.sortByKey(ascending=False, numPartitions=None, keyfunc = customCompare) #Take the mean tone scores for the top 10 tags top10tagsMeanScores = tagsRDD.take(10) %matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches( (plSize[0]*3, plSize[1]*2) ) top5tagsMeanScores = top10tagsMeanScores[:5] width = 0 ind=np.arange(13) (a,b) = top5tagsMeanScores[0] labels=b[0] colors = ["beige", "paleturquoise", "pink", "lightyellow", "coral", "lightgreen", "gainsboro", "aquamarine","c"] idx=0 for key, value in top5tagsMeanScores: plt.bar(ind + width, value[1], 0.15, color=colors[idx], label=key) width += 0.15 idx += 1 plt.xticks(ind+0.3, labels) plt.ylabel('AVERAGE SCORE') plt.xlabel('TONES') plt.title('Breakdown of top hashtags by sentiment tones') plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc='center',ncol=5, mode="expand", borderaxespad=0.) 
plt.show() """ Explanation: Compute the aggregate sentiment distribution for all the tweets that contain the top hashtags End of explanation """ %%scala val demo = com.ibm.cds.spark.samples.PixiedustStreamingTwitter demo.setConfig("twitter4j.oauth.consumerKey",consumerKey) demo.setConfig("twitter4j.oauth.consumerSecret",consumerSecret) demo.setConfig("twitter4j.oauth.accessToken",accessToken) demo.setConfig("twitter4j.oauth.accessTokenSecret",accessTokenSecret) demo.setConfig("watson.tone.url","https://gateway.watsonplatform.net/tone-analyzer/api") demo.setConfig("watson.tone.password",taPassword) demo.setConfig("watson.tone.username",taUserName) demo.setConfig("checkpointDir", System.getProperty("user.home") + "/pixiedust/ssc") !pip install --upgrade --user pixiedust-twitterdemo from pixiedust_twitterdemo import * twitterDemo() """ Explanation: Optional: Use Twitter demo embedded app to run the same app with a UI End of explanation """ display(__tweets) from pyspark.sql import Row from pyspark.sql.types import * emotions=__tweets.columns[-13:] distrib = __tweets.flatMap(lambda t: [(x,t[x]) for x in emotions]).filter(lambda t: t[1]>60)\ .toDF(StructType([StructField('emotion',StringType()),StructField('score',DoubleType())])) display(distrib) __tweets.registerTempTable("pixiedust_tweets") #create an array that will hold the count for each sentiment sentimentDistribution=[0] * 13 #For each sentiment, run a sql query that counts the number of tweets for which the sentiment score is greater than 60% #Store the data in the array for i, sentiment in enumerate(__tweets.columns[-13:]): sentimentDistribution[i]=sqlContext.sql("SELECT count(*) as sentCount FROM pixiedust_tweets where " + sentiment + " > 60")\ .collect()[0].sentCount %matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt ind=np.arange(13) width = 0.35 bar = plt.bar(ind, sentimentDistribution, width, color='g', label = "distributions") params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches( (plSize[0]*2.5, plSize[1]*2) ) plt.ylabel('Tweet count') plt.xlabel('Tone') plt.title('Distribution of tweets by sentiments > 60%') plt.xticks(ind+width, __tweets.columns[-13:]) plt.legend() plt.show() from operator import add import re tagsRDD = __tweets.flatMap( lambda t: re.split("\s", t.text))\ .filter( lambda word: word.startswith("#") )\ .map( lambda word : (word, 1 ))\ .reduceByKey(add, 10).map(lambda (a,b): (b,a)).sortByKey(False).map(lambda (a,b):(b,a)) top10tags = tagsRDD.take(10) %matplotlib inline import matplotlib import matplotlib.pyplot as plt params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches( (plSize[0]*2, plSize[1]*2) ) labels = [i[0] for i in top10tags] sizes = [int(i[1]) for i in top10tags] colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral', "beige", "paleturquoise", "pink", "lightyellow", "coral"] plt.pie(sizes, labels=labels, colors=colors,autopct='%1.1f%%', shadow=True, startangle=90) plt.axis('equal') plt.show() cols = __tweets.columns[-13:] def expand( t ): ret = [] for s in [i[0] for i in top10tags]: if ( s in t.text ): for tone in cols: ret += [s.replace(':','').replace('-','') + u"-" + unicode(tone) + ":" + unicode(getattr(t, tone))] return ret def makeList(l): return l if isinstance(l, list) else [l] #Create RDD from tweets dataframe tagsRDD = __tweets.map(lambda t: t ) #Filter to only keep the entries that are in top10tags tagsRDD = tagsRDD.filter( lambda t: any(s in t.text for s in [i[0] for i in top10tags] ) ) #Create a flatMap 
using the expand function defined above, this will be used to collect all the scores #for a particular tag with the following format: Tag-Tone-ToneScore tagsRDD = tagsRDD.flatMap( expand ) #Create a map indexed by Tag-Tone keys tagsRDD = tagsRDD.map( lambda fullTag : (fullTag.split(":")[0], float( fullTag.split(":")[1]) )) #Call combineByKey to format the data as follow #Key=Tag-Tone #Value=(count, sum_of_all_score_for_this_tone) tagsRDD = tagsRDD.combineByKey((lambda x: (x,1)), (lambda x, y: (x[0] + y, x[1] + 1)), (lambda x, y: (x[0] + y[0], x[1] + y[1]))) #ReIndex the map to have the key be the Tag and value be (Tone, Average_score) tuple #Key=Tag #Value=(Tone, average_score) tagsRDD = tagsRDD.map(lambda (key, ab): (key.split("-")[0], (key.split("-")[1], round(ab[0]/ab[1], 2)))) #Reduce the map on the Tag key, value becomes a list of (Tone,average_score) tuples tagsRDD = tagsRDD.reduceByKey( lambda x, y : makeList(x) + makeList(y) ) #Sort the (Tone,average_score) tuples alphabetically by Tone tagsRDD = tagsRDD.mapValues( lambda x : sorted(x) ) #Format the data as expected by the plotting code in the next cell. #map the Values to a tuple as follow: ([list of tone], [list of average score]) #e.g. #someTag:([u'Agreeableness', u'Analytical', u'Anger', u'Cheerfulness', u'Confident', u'Conscientiousness', u'Negative', u'Openness', u'Tentative'], [1.0, 0.0, 0.0, 1.0, 0.0, 0.48, 0.0, 0.02, 0.0]) tagsRDD = tagsRDD.mapValues( lambda x : ([elt[0] for elt in x],[elt[1] for elt in x]) ) #Use custom sort function to sort the entries by order of appearance in top10tags def customCompare( key ): for (k,v) in top10tags: if k == key: return v return 0 tagsRDD = tagsRDD.sortByKey(ascending=False, numPartitions=None, keyfunc = customCompare) #Take the mean tone scores for the top 10 tags top10tagsMeanScores = tagsRDD.take(10) %matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches( (plSize[0]*3, plSize[1]*2) ) top5tagsMeanScores = top10tagsMeanScores[:5] width = 0 ind=np.arange(13) (a,b) = top5tagsMeanScores[0] labels=b[0] colors = ["beige", "paleturquoise", "pink", "lightyellow", "coral", "lightgreen", "gainsboro", "aquamarine","c"] idx=0 for key, value in top5tagsMeanScores: plt.bar(ind + width, value[1], 0.15, color=colors[idx], label=key) width += 0.15 idx += 1 plt.xticks(ind+0.3, labels) plt.ylabel('AVERAGE SCORE') plt.xlabel('TONES') plt.title('Breakdown of top hashtags by sentiment tones') plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc='center',ncol=5, mode="expand", borderaxespad=0.) plt.show() """ Explanation: The embedded app has generated a DataFrame called __tweets. Let's use it to do some data science End of explanation """
harper/dlnd_thirdproject
seq2seq/sequence_to_sequence_implementation.ipynb
mit
import helper source_path = 'data/letters_source.txt' target_path = 'data/letters_target.txt' source_sentences = helper.load_data(source_path) target_sentences = helper.load_data(target_path) """ Explanation: Character Sequence to Sequence In this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models. <img src="images/sequence-to-sequence.jpg"/> Dataset The dataset lives in the /data/ folder. At the moment, it is made up of the following files: * letters_source.txt: The list of input letter sequences. Each sequence is its own line. * letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number. End of explanation """ source_sentences[:50].split('\n') """ Explanation: Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols. End of explanation """ target_sentences[:50].split('\n') """ Explanation: target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. target_sentences contains a sorted characters of the line. End of explanation """ def extract_character_vocab(data): special_words = ['<pad>', '<unk>', '<s>', '<\s>'] set_words = set([character for line in data.split('\n') for character in line]) int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))} vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()} return int_to_vocab, vocab_to_int # Build int2letter and letter2int dicts source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences) target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences) # Convert characters to ids source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<unk>']) for letter in line] for line in source_sentences.split('\n')] target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<unk>']) for letter in line] for line in target_sentences.split('\n')] print("Example source sequence") print(source_letter_ids[:3]) print("\n") print("Example target sequence") print(target_letter_ids[:3]) """ Explanation: Preprocess To do anything useful with it, we'll need to turn the characters into a list of integers: End of explanation """ def pad_id_sequences(source_ids, source_letter_to_int, target_ids, target_letter_to_int, sequence_length): new_source_ids = [sentence + [source_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \ for sentence in source_ids] new_target_ids = [sentence + [target_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \ for sentence in target_ids] return new_source_ids, new_target_ids # Use the longest sequence as sequence length sequence_length = max( [len(sentence) for sentence in source_letter_ids] + [len(sentence) for sentence in target_letter_ids]) # Pad all sequences up to sequence length source_ids, target_ids = pad_id_sequences(source_letter_ids, source_letter_to_int, target_letter_ids, target_letter_to_int, sequence_length) print("Sequence Length") print(sequence_length) print("\n") print("Input sequence example") print(source_ids[:3]) print("\n") print("Target sequence example") print(target_ids[:3]) """ Explanation: The last step in 
the preprocessing stage is to determine the the longest sequence size in the dataset we'll be using, then pad all the sequences to that length. End of explanation """ from distutils.version import LooseVersion import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) """ Explanation: This is the final shape we need them to be in. We can now proceed to building the model. Model Check the Version of TensorFlow This will check to make sure you have the correct version of TensorFlow End of explanation """ # Number of Epochs epochs = 60 # Batch Size batch_size = 128 # RNN Size rnn_size = 50 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 13 decoding_embedding_size = 13 # Learning Rate learning_rate = 0.001 """ Explanation: Hyperparameters End of explanation """ input_data = tf.placeholder(tf.int32, [batch_size, sequence_length]) targets = tf.placeholder(tf.int32, [batch_size, sequence_length]) lr = tf.placeholder(tf.float32) """ Explanation: Input End of explanation """ source_vocab_size = len(source_letter_to_int) # Encoder embedding enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size) # Encoder enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) _, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, dtype=tf.float32) """ Explanation: Sequence to Sequence The decoder is probably the most complex part of this model. We need to declare a decoder for the training phase, and a decoder for the inference/prediction phase. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model). First, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM. Then, we'll need to hookup a fully connected layer to the output of decoder. The output of this layer tells us which word the RNN is choosing to output at each time step. Let's first look at the inference/prediction decoder. It is the one we'll use when we deploy our chatbot to the wild (even though it comes second in the actual code). <img src="images/sequence-to-sequence-inference-decoder.png"/> We'll hand our encoder hidden state to the inference decoder and have it process its output. TensorFlow handles most of the logic for us. We just have to use tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder and supply them with the appropriate inputs. Notice that the inference decoder feeds the output of each time step as an input to the next. As for the training decoder, we can think of it as looking like this: <img src="images/sequence-to-sequence-training-decoder.png"/> The training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters). Encoding Embed the input data using tf.contrib.layers.embed_sequence Pass the embedded input into a stack of RNNs. Save the RNN state and ignore the output. 
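For intuition, tf.contrib.layers.embed_sequence is roughly equivalent to creating an embedding matrix and looking it up yourself (a sketch for illustration only, not the code used above):
# roughly what embed_sequence does internally
enc_embeddings = tf.Variable(tf.random_uniform([source_vocab_size, encoding_embedding_size]))
enc_embed_input_manual = tf.nn.embedding_lookup(enc_embeddings, input_data)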
End of explanation """ import numpy as np # Process the input we'll feed to the decoder ending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<s>']), ending], 1) demonstration_outputs = np.reshape(range(batch_size * sequence_length), (batch_size, sequence_length)) sess = tf.InteractiveSession() print("Targets") print(demonstration_outputs[:2]) print("\n") print("Processed Decoding Input") print(sess.run(dec_input, {targets: demonstration_outputs})[:2]) """ Explanation: Process Decoding Input End of explanation """ target_vocab_size = len(target_letter_to_int) # Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) # Decoder RNNs dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) with tf.variable_scope("decoding") as decoding_scope: # Output Layer output_fn = lambda x: tf.contrib.layers.fully_connected(x, target_vocab_size, None, scope=decoding_scope) """ Explanation: Decoding Embed the decoding input Build the decoding RNNs Build the output layer in the decoding scope, so the weight and bias can be shared between the training and inference decoders. End of explanation """ with tf.variable_scope("decoding") as decoding_scope: # Training Decoder train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(enc_state) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope) # Apply output function train_logits = output_fn(train_pred) """ Explanation: Decoder During Training Build the training decoder using tf.contrib.seq2seq.simple_decoder_fn_train and tf.contrib.seq2seq.dynamic_rnn_decoder. Apply the output layer to the output of the training decoder End of explanation """ with tf.variable_scope("decoding", reuse=True) as decoding_scope: # Inference Decoder infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, enc_state, dec_embeddings, target_letter_to_int['<s>'], target_letter_to_int['<\s>'], sequence_length - 1, target_vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) """ Explanation: Decoder During Inference Reuse the weights the biases from the training decoder using tf.variable_scope("decoding", reuse=True) Build the inference decoder using tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder. The output function is applied to the output in this step End of explanation """ # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([batch_size, sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Optimization Our loss function is tf.contrib.seq2seq.sequence_loss provided by the tensor flow seq2seq module. It calculates a weighted cross-entropy loss for the output logits. 
End of explanation """ import numpy as np train_source = source_ids[batch_size:] train_target = target_ids[batch_size:] valid_source = source_ids[:batch_size] valid_target = target_ids[:batch_size] sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source}) train_acc = np.mean(np.equal(target_batch, np.argmax(batch_train_logits, 2))) valid_acc = np.mean(np.equal(valid_target, np.argmax(batch_valid_logits, 2))) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_ids) // batch_size, train_acc, valid_acc, loss)) """ Explanation: Train We're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size. End of explanation """ input_sentence = 'hello' input_sentence = [source_letter_to_int.get(word, source_letter_to_int['<unk>']) for word in input_sentence.lower()] input_sentence = input_sentence + [0] * (sequence_length - len(input_sentence)) batch_shell = np.zeros((batch_size, sequence_length)) batch_shell[0] = input_sentence chatbot_logits = sess.run(inference_logits, {input_data: batch_shell})[0] print('Input') print(' Word Ids: {}'.format([i for i in input_sentence])) print(' Input Words: {}'.format([source_int_to_letter[i] for i in input_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(chatbot_logits, 1)])) print(' Chatbot Answer Words: {}'.format([target_int_to_letter[i] for i in np.argmax(chatbot_logits, 1)])) """ Explanation: Prediction End of explanation """
liumengjun/cn-deep-learning
language-translation/dlnd_language_translation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) """ Explanation: 语言翻译 在此项目中,你将了解神经网络机器翻译这一领域。你将用由英语和法语语句组成的数据集,训练一个序列到序列模型(sequence to sequence model),该模型能够将新的英语句子翻译成法语。 获取数据 因为将整个英语语言内容翻译成法语需要大量训练时间,所以我们提供了一小部分的英语语料库。 End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words in source: {}'.format(len({word: None for word in source_text.split()}))) print('Roughly the number of unique words in target: {}'.format(len({word: None for word in target_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: 探索数据 研究 view_sentence_range,查看并熟悉该数据的不同部分。 End of explanation """ def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_sentences = source_text.split('\n') source_id_text = [] for sentence in source_sentences: source_id_text.append([source_vocab_to_int[word] for word in sentence.split()]) target_sentences = target_text.split('\n') target_id_text = [] for sentence in target_sentences: target_id_text.append([target_vocab_to_int[word] for word in sentence.split()]+[target_vocab_to_int['<EOS>']]) return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) """ Explanation: 实现预处理函数 文本到单词 id 和之前的 RNN 一样,你必须首先将文本转换为数字,这样计算机才能读懂。在函数 text_to_ids() 中,你需要将单词中的 source_text 和 target_text 转为 id。但是,你需要在 target_text 中每个句子的末尾,添加 &lt;EOS&gt; 单词 id。这样可以帮助神经网络预测句子应该在什么地方结束。 你可以通过以下代码获取 &lt;EOS&gt; 单词ID: python target_vocab_to_int['&lt;EOS&gt;'] 你可以使用 source_vocab_to_int 和 target_vocab_to_int 获得其他单词 id。 End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) """ Explanation: 预处理所有数据并保存 运行以下代码单元,预处理所有数据,并保存到文件中。 End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() """ Explanation: 检查点 这是你的第一个检查点。如果你什么时候决定再回到该记事本,或需要重新启动该记事本,可以从这里继续。预处理的数据已保存到磁盘上。 End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 
'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: 检查 TensorFlow 版本,确认可访问 GPU 这一检查步骤,可以确保你使用的是正确版本的 TensorFlow,并且能够访问 GPU。 End of explanation """ def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function input_ = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, None, name='keep_prob') return input_, targets, learning_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) """ Explanation: 构建神经网络 你将通过实现以下函数,构建出要构建一个序列到序列模型所需的组件: model_inputs process_decoding_input encoding_layer decoding_layer_train decoding_layer_infer decoding_layer seq2seq_model 输入 实现 model_inputs() 函数,为神经网络创建 TF 占位符。该函数应该创建以下占位符: 名为 “input” 的输入文本占位符,并使用 TF Placeholder 名称参数(等级(Rank)为 2)。 目标占位符(等级为 2)。 学习速率占位符(等级为 0)。 名为 “keep_prob” 的保留率占位符,并使用 TF Placeholder 名称参数(等级为 0)。 在以下元祖(tuple)中返回占位符:(输入、目标、学习速率、保留率) End of explanation """ def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for dencoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function del_last_datas = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1,1]) go_ids = tf.fill([batch_size, 1], target_vocab_to_int['<GO>']) return tf.concat([go_ids, del_last_datas], axis=1) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) """ Explanation: 处理解码输入 使用 TensorFlow 实现 process_decoding_input,以便删掉 target_data 中每个批次的最后一个单词 ID,并将 GO ID 放到每个批次的开头。 End of explanation """ def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function def lstm_cell(): lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)]) outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32) return final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) """ Explanation: 编码 实现 encoding_layer(),以使用 tf.nn.dynamic_rnn() 创建编码器 RNN 层级。 End of explanation """ def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TenorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: 
Train Logits """ # TODO: Implement Function # Training Decoder train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope) # Apply output function train_logits = output_fn(tf.nn.dropout(train_pred, keep_prob)) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) """ Explanation: 解码 - 训练 使用 tf.contrib.seq2seq.simple_decoder_fn_train() 和 tf.contrib.seq2seq.dynamic_rnn_decoder() 创建训练分对数(training logits)。将 output_fn 应用到 tf.contrib.seq2seq.dynamic_rnn_decoder() 输出上。 End of explanation """ def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # TODO: Implement Function infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length - 1, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) return inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) """ Explanation: 解码 - 推论 使用 tf.contrib.seq2seq.simple_decoder_fn_inference() 和 tf.contrib.seq2seq.dynamic_rnn_decoder() 创建推论分对数(inference logits)。 End of explanation """ def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function # Decoder RNNs def lstm_cell(): lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) dec_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)]) with tf.variable_scope("decoding") as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected( x, vocab_size, activation_fn=None, scope=decoding_scope) training_decoder_output = decoding_layer_train( encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) with tf.variable_scope("decoding", reuse=True) as decoding_scope: start_of_sequence_id = target_vocab_to_int["<GO>"] end_of_sequence_id = target_vocab_to_int["<EOS>"] inference_decoder_output = decoding_layer_infer( encoder_state, dec_cell, dec_embeddings,start_of_sequence_id, end_of_sequence_id, 
sequence_length, vocab_size, decoding_scope, output_fn, keep_prob) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) """ Explanation: 构建解码层级 实现 decoding_layer() 以创建解码器 RNN 层级。 使用 rnn_size 和 num_layers 创建解码 RNN 单元。 使用 lambda 创建输出函数,将输入,也就是分对数转换为类分对数(class logits)。 使用 decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) 函数获取训练分对数。 使用 decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) 函数获取推论分对数。 注意:你将需要使用 tf.variable_scope 在训练和推论分对数间分享变量。 End of explanation """ def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function # Encoder embedding enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) # Encoder enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob) # Process Decoding Input dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) # Decoder Embedding dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) train_logits, refer_logits = decoding_layer( dec_embed_input, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return train_logits, refer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) """ Explanation: 构建神经网络 应用你在上方实现的函数,以: 向编码器的输入数据应用嵌入。 使用 encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob) 编码输入。 使用 process_decoding_input(target_data, target_vocab_to_int, batch_size) 函数处理目标数据。 向解码器的目标数据应用嵌入。 使用 decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) 解码编码的输入数据。 End of explanation """ # Number of Epochs epochs = 5 # Batch Size batch_size = 256 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 300 # Learning Rate learning_rate = 0.002 # Dropout Keep Probability keep_probability = 0.8 """ Explanation: 训练神经网络 超参数 调试以下参数: 将 epochs 设为 epoch 次数。 将 batch_size 设为批次大小。 将 rnn_size 设为 RNN 的大小。 将 num_layers 设为层级数量。 将 encoding_embedding_size 设为编码器嵌入大小。 将 decoding_embedding_size 设为解码器嵌入大小 将 learning_rate 设为训练速率。 将 keep_probability 设为丢弃保留率(Dropout keep probability)。 End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, 
target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: 构建图表 使用你实现的神经网络构建图表。 End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') """ Explanation: 训练 利用预处理的数据训练神经网络。如果很难获得低损失值,请访问我们的论坛,看看其他人是否遇到了相同的问题。 End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) """ Explanation: 保存参数 保存 batch_size 和 save_path 参数以进行推论(for inference)。 End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() 
""" Explanation: 检查点 End of explanation """ def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function return [vocab_to_int.get(word, vocab_to_int["<UNK>"]) for word in sentence.lower().split()] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) """ Explanation: 句子到序列 要向模型提供要翻译的句子,你首先需要预处理该句子。实现函数 sentence_to_seq() 以预处理新的句子。 将句子转换为小写形式 使用 vocab_to_int 将单词转换为 id 如果单词不在词汇表中,将其转换为&lt;UNK&gt; 单词 id End of explanation """ translate_sentence = 'he saw a old yellow truck .' """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) """ Explanation: 翻译 将 translate_sentence 从英语翻译成法语。 End of explanation """
tpin3694/tpin3694.github.io
regex/match_urls.ipynb
mit
# Load regex package import re """ Explanation: Title: Match URLs Slug: match_urls Summary: Match URLs Date: 2016-05-01 12:00 Category: Regex Tags: Basics Authors: Chris Albon Source: StackOverflow Preliminaries End of explanation """ # Create a variable containing a text string text = 'My blog is http://www.chrisalbon.com and not http://chrisalbon.com' """ Explanation: Create some text End of explanation """ # Find any URLs in the text re.findall(r'(http|ftp|https):\/\/([\w\-_]+(?:(?:\.[\w\-_]+)+))([\w\-\.,@?^=%&:/~\+#]*[\w\-\@?^=%&/~\+#])?', text) """ Explanation: Apply regex End of explanation """
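# Optional aside (not part of the original recipe): findall() returns the captured groups
# as tuples; to recover each full matched URL string instead, iterate with finditer().
url_pattern = r'(http|ftp|https):\/\/([\w\-_]+(?:(?:\.[\w\-_]+)+))([\w\-\.,@?^=%&:/~\+#]*[\w\-\@?^=%&/~\+#])?'
[match.group(0) for match in re.finditer(url_pattern, text)]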
jseabold/statsmodels
examples/notebooks/statespace_seasonal.ipynb
bsd-3-clause
%matplotlib inline import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib.pyplot as plt plt.rc("figure", figsize=(16,8)) plt.rc("font", size=14) """ Explanation: Seasonality in time series data Consider the problem of modeling time series data that contain multiple seasonal components with different periodicities. Let us take the time series $y_t$ and decompose it explicitly into a level component and two seasonal components. $$ y_t = \mu_t + \gamma^{(1)}_t + \gamma^{(2)}_t $$ where $\mu_t$ represents the trend or level, $\gamma^{(1)}_t$ represents a seasonal component with a relatively short period, and $\gamma^{(2)}_t$ represents another seasonal component of longer period. We will have a fixed intercept term for our level and consider both $\gamma^{(1)}_t$ and $\gamma^{(2)}_t$ to be stochastic so that the seasonal patterns can vary over time. In this notebook, we will generate synthetic data conforming to this model and showcase modeling of the seasonal terms in a few different ways under the unobserved components modeling framework. End of explanation """
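# For intuition before the full simulation below (a sketch, not part of the original
# notebook): a single stochastic harmonic with frequency lambda is advanced one step by
# a rotation of its two-dimensional state, plus noise.
lam_demo = 2 * np.pi / 10.0  # frequency of a period-10 harmonic
rotation = np.array([[np.cos(lam_demo), np.sin(lam_demo)],
                     [-np.sin(lam_demo), np.cos(lam_demo)]])
state_demo = np.array([1.0, 0.0])  # (gamma_jt, gamma_star_jt)
next_state_demo = rotation.dot(state_demo) + 0.1 * np.random.randn(2)  # stochastic update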
End of explanation """ model = sm.tsa.UnobservedComponents(series.values, level='fixed intercept', freq_seasonal=[{'period': 10, 'harmonics': 3}, {'period': 100, 'harmonics': 2}]) res_f = model.fit(disp=False) print(res_f.summary()) # The first state variable holds our estimate of the intercept print("fixed intercept estimated as {0:.3f}".format(res_f.smoother_results.smoothed_state[0,-1:][0])) res_f.plot_components() plt.show() model.ssm.transition[:, :, 0] """ Explanation: Unobserved components (frequency domain modeling) The next method is an unobserved components model, where the trend is modeled as a fixed intercept and the seasonal components are modeled using trigonometric functions with primary periodicities of 10 and 100, respectively, and number of harmonics 3 and 2, respectively. Note that this is the correct, generating model. The process for the time series can be written as: $$ \begin{align} y_t & = \mu_t + \gamma^{(1)}t + \gamma^{(2)}_t + \epsilon_t\ \mu{t+1} & = \mu_t \ \gamma^{(1)}{t} &= \sum{j=1}^2 \gamma^{(1)}{j, t} \ \gamma^{(2)}{t} &= \sum_{j=1}^3 \gamma^{(2)}{j, t}\ \gamma^{(1)}{j, t+1} &= \gamma^{(1)}{j, t}\cos(\lambda_j) + \gamma^{, (1)}{j, t}\sin(\lambda_j) + \omega^{(1)}{j,t}, ~j = 1, 2, 3\ \gamma^{, (1)}{j, t+1} &= -\gamma^{(1)}{j, t}\sin(\lambda_j) + \gamma^{, (1)}_{j, t}\cos(\lambda_j) + \omega^{, (1)}{j, t}, ~j = 1, 2, 3\ \gamma^{(2)}{j, t+1} &= \gamma^{(2)}{j, t}\cos(\lambda_j) + \gamma^{, (2)}{j, t}\sin(\lambda_j) + \omega^{(2)}{j,t}, ~j = 1, 2\ \gamma^{, (2)}{j, t+1} &= -\gamma^{(2)}{j, t}\sin(\lambda_j) + \gamma^{, (2)}_{j, t}\cos(\lambda_j) + \omega^{, (2)}_{j, t}, ~j = 1, 2\ \end{align} $$ where $\epsilon_t$ is white noise, $\omega^{(1)}{j,t}$ are i.i.d. $N(0, \sigma^2_1)$, and $\omega^{(2)}{j,t}$ are i.i.d. $N(0, \sigma^2_2)$, where $\sigma_1 = 2.$ End of explanation """ model = sm.tsa.UnobservedComponents(series, level='fixed intercept', seasonal=10, freq_seasonal=[{'period': 100, 'harmonics': 2}]) res_tf = model.fit(disp=False) print(res_tf.summary()) # The first state variable holds our estimate of the intercept print("fixed intercept estimated as {0:.3f}".format(res_tf.smoother_results.smoothed_state[0,-1:][0])) fig = res_tf.plot_components() fig.tight_layout(pad=1.0) """ Explanation: Observe that the fitted variances are pretty close to the true variances of 4 and 9. Further, the individual seasonal components look pretty close to the true seasonal components. The smoothed level term is kind of close to the true level of 10. Finally, our diagnostics look solid; the test statistics are small enough to fail to reject our three tests. Unobserved components (mixed time and frequency domain modeling) The second method is an unobserved components model, where the trend is modeled as a fixed intercept and the seasonal components are modeled using 10 constants summing to 0 and trigonometric functions with a primary periodicities of 100 with 2 harmonics total. Note that this is not the generating model, as it presupposes that there are more state errors for the shorter seasonal component than in reality. 
The process for the time series can be written as: $$ \begin{align} y_t & = \mu_t + \gamma^{(1)}t + \gamma^{(2)}_t + \epsilon_t\ \mu{t+1} & = \mu_t \ \gamma^{(1)}{t + 1} &= - \sum{j=1}^9 \gamma^{(1)}{t + 1 - j} + \omega^{(1)}_t\ \gamma^{(2)}{j, t+1} &= \gamma^{(2)}{j, t}\cos(\lambda_j) + \gamma^{, (2)}{j, t}\sin(\lambda_j) + \omega^{(2)}{j,t}, ~j = 1, 2\ \gamma^{, (2)}{j, t+1} &= -\gamma^{(2)}{j, t}\sin(\lambda_j) + \gamma^{, (2)}_{j, t}\cos(\lambda_j) + \omega^{, (2)}{j, t}, ~j = 1, 2\ \end{align} $$ where $\epsilon_t$ is white noise, $\omega^{(1)}{t}$ are i.i.d. $N(0, \sigma^2_1)$, and $\omega^{(2)}{j,t}$ are i.i.d. $N(0, \sigma^2_2)$. End of explanation """ model = sm.tsa.UnobservedComponents(series, level='fixed intercept', freq_seasonal=[{'period': 100}]) res_lf = model.fit(disp=False) print(res_lf.summary()) # The first state variable holds our estimate of the intercept print("fixed intercept estimated as {0:.3f}".format(res_lf.smoother_results.smoothed_state[0,-1:][0])) fig = res_lf.plot_components() fig.tight_layout(pad=1.0) """ Explanation: The plotted components look good. However, the estimated variance of the second seasonal term is inflated from reality. Additionally, we reject the Ljung-Box statistic, indicating we may have remaining autocorrelation after accounting for our components. Unobserved components (lazy frequency domain modeling) The third method is an unobserved components model with a fixed intercept and one seasonal component, which is modeled using trigonometric functions with primary periodicity 100 and 50 harmonics. Note that this is not the generating model, as it presupposes that there are more harmonics then in reality. Because the variances are tied together, we are not able to drive the estimated covariance of the non-existent harmonics to 0. What is lazy about this model specification is that we have not bothered to specify the two different seasonal components and instead chosen to model them using a single component with enough harmonics to cover both. We will not be able to capture any differences in variances between the two true components. The process for the time series can be written as: $$ \begin{align} y_t & = \mu_t + \gamma^{(1)}t + \epsilon_t\ \mu{t+1} &= \mu_t\ \gamma^{(1)}{t} &= \sum{j=1}^{50}\gamma^{(1)}{j, t}\ \gamma^{(1)}{j, t+1} &= \gamma^{(1)}{j, t}\cos(\lambda_j) + \gamma^{, (1)}{j, t}\sin(\lambda_j) + \omega^{(1}{j,t}, ~j = 1, 2, \dots, 50\ \gamma^{, (1)}{j, t+1} &= -\gamma^{(1)}{j, t}\sin(\lambda_j) + \gamma^{, (1)}_{j, t}\cos(\lambda_j) + \omega^{, (1)}{j, t}, ~j = 1, 2, \dots, 50\ \end{align} $$ where $\epsilon_t$ is white noise, $\omega^{(1)}_{t}$ are i.i.d. $N(0, \sigma^2_1)$. End of explanation """ model = sm.tsa.UnobservedComponents(series, level='fixed intercept', seasonal=100) res_lt = model.fit(disp=False) print(res_lt.summary()) # The first state variable holds our estimate of the intercept print("fixed intercept estimated as {0:.3f}".format(res_lt.smoother_results.smoothed_state[0,-1:][0])) fig = res_lt.plot_components() fig.tight_layout(pad=1.0) """ Explanation: Note that one of our diagnostic tests would be rejected at the .05 level. Unobserved components (lazy time domain seasonal modeling) The fourth method is an unobserved components model with a fixed intercept and a single seasonal component modeled using a time-domain seasonal model of 100 constants. 
The process for the time series can be written as: $$ \begin{align} y_t & =\mu_t + \gamma^{(1)}t + \epsilon_t\ \mu{t+1} &= \mu_{t} \ \gamma^{(1)}{t + 1} &= - \sum{j=1}^{99} \gamma^{(1)}_{t + 1 - j} + \omega^{(1)}_t\ \end{align} $$ where $\epsilon_t$ is white noise, $\omega^{(1)}_{t}$ are i.i.d. $N(0, \sigma^2_1)$. End of explanation """ # Assign better names for our seasonal terms true_seasonal_10_3 = terms[0] true_seasonal_100_2 = terms[1] true_sum = true_seasonal_10_3 + true_seasonal_100_2 time_s = np.s_[:50] # After this they basically agree fig1 = plt.figure() ax1 = fig1.add_subplot(111) idx = np.asarray(series.index) h1, = ax1.plot(idx[time_s], res_f.freq_seasonal[0].filtered[time_s], label='Double Freq. Seas') h2, = ax1.plot(idx[time_s], res_tf.seasonal.filtered[time_s], label='Mixed Domain Seas') h3, = ax1.plot(idx[time_s], true_seasonal_10_3[time_s], label='True Seasonal 10(3)') plt.legend([h1, h2, h3], ['Double Freq. Seasonal','Mixed Domain Seasonal','Truth'], loc=2) plt.title('Seasonal 10(3) component') plt.show() time_s = np.s_[:50] # After this they basically agree fig2 = plt.figure() ax2 = fig2.add_subplot(111) h21, = ax2.plot(idx[time_s], res_f.freq_seasonal[1].filtered[time_s], label='Double Freq. Seas') h22, = ax2.plot(idx[time_s], res_tf.freq_seasonal[0].filtered[time_s], label='Mixed Domain Seas') h23, = ax2.plot(idx[time_s], true_seasonal_100_2[time_s], label='True Seasonal 100(2)') plt.legend([h21, h22, h23], ['Double Freq. Seasonal','Mixed Domain Seasonal','Truth'], loc=2) plt.title('Seasonal 100(2) component') plt.show() time_s = np.s_[:100] fig3 = plt.figure() ax3 = fig3.add_subplot(111) h31, = ax3.plot(idx[time_s], res_f.freq_seasonal[1].filtered[time_s] + res_f.freq_seasonal[0].filtered[time_s], label='Double Freq. Seas') h32, = ax3.plot(idx[time_s], res_tf.freq_seasonal[0].filtered[time_s] + res_tf.seasonal.filtered[time_s], label='Mixed Domain Seas') h33, = ax3.plot(idx[time_s], true_sum[time_s], label='True Seasonal 100(2)') h34, = ax3.plot(idx[time_s], res_lf.freq_seasonal[0].filtered[time_s], label='Lazy Freq. Seas') h35, = ax3.plot(idx[time_s], res_lt.seasonal.filtered[time_s], label='Lazy Time Seas') plt.legend([h31, h32, h33, h34, h35], ['Double Freq. Seasonal','Mixed Domain Seasonal','Truth', 'Lazy Freq. Seas', 'Lazy Time Seas'], loc=1) plt.title('Seasonal components combined') plt.tight_layout(pad=1.0) """ Explanation: The seasonal component itself looks good--it is the primary signal. The estimated variance of the seasonal term is very high ($>10^5$), leading to a lot of uncertainty in our one-step-ahead predictions and slow responsiveness to new data, as evidenced by large errors in one-step ahead predictions and observations. Finally, all three of our diagnostic tests were rejected. Comparison of filtered estimates The plots below show that explicitly modeling the individual components results in the filtered state being close to the true state within roughly half a period. The lazy models took longer (almost a full period) to do the same on the combined true state. End of explanation """
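# (Added sketch, not in the original notebook.) To make the visual comparison of the
# filtered seasonal estimates quantitative, we could score each model's combined
# filtered seasonal component against the known truth with a simple mean squared
# error. The names below (res_f, res_tf, res_lf, res_lt, true_sum) are the objects
# created earlier and are assumed to still be in scope.
combined_estimates = {
    'double frequency': res_f.freq_seasonal[0].filtered + res_f.freq_seasonal[1].filtered,
    'mixed domain': res_tf.seasonal.filtered + res_tf.freq_seasonal[0].filtered,
    'lazy frequency': res_lf.freq_seasonal[0].filtered,
    'lazy time': res_lt.seasonal.filtered,
}
for name, estimate in combined_estimates.items():
    mse = np.mean((np.asarray(estimate) - np.asarray(true_sum)) ** 2)
    print('{0:>16s} filtered seasonal MSE: {1:.2f}'.format(name, mse))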
abatula/MachineLearningIntro
InstructorNotebooks/Iris_DataSet_Instructor.ipynb
gpl-2.0
# Print figures in the notebook %matplotlib inline import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn import datasets # Import datasets from scikit-learn # Import patch for drawing rectangles in the legend from matplotlib.patches import Rectangle # Create color maps cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF']) # Create a legend for the colors, using rectangles for the corresponding colormap colors labelList = [] for color in cmap_bold.colors: labelList.append(Rectangle((0, 0), 1, 1, fc=color)) """ Explanation: What is a dataset? A dataset is a collection of information (or data) that can be used by a computer. A dataset typically has some number of examples, where each example has features associated with it. Some datasets also include labels, which is an identifying piece of information that is of interest. What is an example? An example is a single element of a dataset, typically a row (similar to a row in a table). Multiple examples are used to generalize trends about the dataset as a whole. When predicting the list price of a house, each house would be considered a single example. Examples are often referred to with the letter $x$. What is a feature? A feature is a measurable characteristic that describes an example in a dataset. Features make up the information that a computer can use to learn and make predictions. If your examples are houses, your features might be: the square footage, the number of bedrooms, or the number of bathrooms. Some features are more useful than others. When predicting the list price of a house the number of bedrooms is a useful feature while the number of floorboards is not, even though they both describe the house. Features are sometimes specified as a single element of an example, $x_i$ What is a label? A label identifies a piece of information about an example that is of particular interest. In machine learning, the label is the information we want the computer to learn to predict. In our housing example, the label would be the list price of the house. Labels can be continuous (e.g. price, length, width) or they can be a category label (e.g. color). They are typically specified by the letter $y$. The Iris Dataset Here, we use the Iris dataset, available through scikit-learn. Scikit-learn's explanation of the dataset is here. This dataset contains information on three species of iris flowers (Setosa, Versicolour, and Virginica. |<img src="Images/Setosa.jpg" width=200>|<img src="Images/Versicolor.jpg" width=200>|<img src="Images/Virginica.jpg" width=200>| |:-------------------------------------:|:-----------------------------------------:|:----------------------------------------:| | Iris Setosa source | Iris Versicolour source | Iris Virginica source | Each example has four features (or measurements): sepal length, sepal width, petal length, and petal width. All measurements are in cm. |<img src="Images/Petal-sepal.jpg" width=200>| |:------------------------------------------:| |Petal and sepal of a primrose plant. From wikipedia| Examples The datasets consists of 150 examples, 50 examples from each species of iris. Features The features are the columns of the dataset. In order from left to right (or 0-3) they are: sepal length, sepal width, petal length, and petal width Our goal The goal, for this dataset, is to train a computer to predict the species of a new iris plant, given only the measured length and width of its sepal and petal. 
Setup Tell matplotlib to print figures in the notebook. Then import numpy (for numerical data), pyplot (for plotting figures), and ListedColormap (for plotting colors), datasets. Also create the color maps to use to color the plotted data, and "labelList", which is a list of colored rectangles to use in plotted legends End of explanation """ # Import some data to play with iris = datasets.load_iris() # List the data keys print('Keys: ' + str(iris.keys())) print('Label names: ' + str(iris.target_names)) print('Feature names: ' + str(iris.feature_names)) print('') # Store the labels (y), label names, features (X), and feature names y = iris.target # Labels are stored in y as numbers labelNames = iris.target_names # Species names corresponding to labels 0, 1, and 2 X = iris.data featureNames = iris.feature_names # Show the first five examples print(iris.data[1:5,:]) """ Explanation: Import the dataset Import the dataset and store it to a variable called iris. This dataset is similar to a python dictionary, with the keys: ['DESCR', 'target_names', 'target', 'data', 'feature_names'] The data features are stored in iris.data, where each row is an example from a single flow, and each column is a single feature. The feature names are stored in iris.feature_names. Labels are stored as the numbers 0, 1, or 2 in iris.target, and the names of these labels are in iris.target_names. End of explanation """ # Plot the data # Sepal length and width X_sepal = X[:,:2] # Get the minimum and maximum values with an additional 0.5 border x_min, x_max = X_sepal[:, 0].min() - .5, X_sepal[:, 0].max() + .5 y_min, y_max = X_sepal[:, 1].min() - .5, X_sepal[:, 1].max() + .5 plt.figure(figsize=(8, 6)) # Plot the training points plt.scatter(X_sepal[:, 0], X_sepal[:, 1], c=y, cmap=cmap_bold) plt.xlabel('Sepal length (cm)') plt.ylabel('Sepal width (cm)') plt.title('Sepal width vs length') # Set the plot limits plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.legend(labelList, labelNames) plt.show() """ Explanation: Visualizing the data Visualizing the data can help us better understand the data and make use of it. The following block of code will create a plot of sepal length (x-axis) vs sepal width (y-axis). The colors of the datapoints correspond to the labeled species of iris for that example. After plotting, look at the data. What do you notice about the way it is arranged? End of explanation """ # Put your code here! # Plot the data # Petal length and width X_petal = X[:,2:] # Get the minimum and maximum values with an additional 0.5 border x_min, x_max = X_petal[:, 0].min() - .5, X_petal[:, 0].max() + .5 y_min, y_max = X_petal[:, 1].min() - .5, X_petal[:, 1].max() + .5 plt.figure(figsize=(8, 6)) # Plot the training points plt.scatter(X_petal[:, 0], X_petal[:, 1], c=y, cmap=cmap_bold) plt.xlabel('Petal length (cm)') plt.ylabel('Petal width (cm)') plt.title('Petal width vs length') # Set the plot limits plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.legend(labelList, labelNames) plt.show() """ Explanation: Make your own plot Below, try making your own plots. First, modify the previous code to create a similar plot, showing the petal width vs the petal length. You can start by copying and pasting the previous block of code to the cell below, and modifying it to work. How is the data arranged differently? Do you think these additional features would be helpful in determining to which species of iris a new plant should be categorized? What about plotting other feature combinations, like petal length vs sepal length? 
Once you've plotted the data several different ways, think about how you would predict the species of a new iris plant, given only the length and width of its sepals and petals. End of explanation """ from sklearn import cross_validation X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.3) print('Original dataset size: ' + str(X.shape)) print('Training dataset size: ' + str(X_train.shape)) print('Test dataset size: ' + str(X_test.shape)) """ Explanation: Training and Testing Sets In order to evaluate our data properly, we need to divide our dataset into training and testing sets. Training Set A portion of the data, usually a majority, used to train a machine learning classifier. These are the examples that the computer will learn in order to try to predict data labels. Testing Set A portion of the data, smaller than the training set (usually about 30%), used to test the accuracy of the machine learning classifier. The computer does not "see" this data while learning, but tries to guess the data labels. We can then determine the accuracy of our method by determining how many examples it got correct. Creating training and testing sets Below, we create a training and testing set from the iris dataset using using the train_test_split() function. End of explanation """
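# (Added note, not part of the original notebook.) In newer scikit-learn versions the
# same train_test_split helper lives in sklearn.model_selection rather than
# sklearn.cross_validation. A quick, hedged sanity check on the 70/30 split created
# above: count how many examples of each species landed in each set.
print('Training class counts: ' + str(np.bincount(y_train.astype(int))))
print('Test class counts: ' + str(np.bincount(y_test.astype(int))))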
encima/Comp_Thinking_In_Python
Session_5/5_IO, Formatting Strings and Functions.ipynb
mit
name = "bob" print(name) name * 5 print(name) print(name * 10) #This output is kinda useless, right? name = 11 print("Name is equal to " + str(name)) print("Something about: ") print(name) name *= 5 print("Name has been multiplied by 5 and is now equal to " + name) #slightly more informative """ Explanation: Input, Ouput, Formatting Strings and Functions Dr. Chris Gwilliams gwilliamsc@cardiff.ac.uk So Far and Overview Output (we have covered this with the print statement String formatting Input (how do we get user input) Casting types the import keyword the standard modules in Python writing your own modules Running scripts with command line input easy_install and pip Standard Output The last few sessions have seen us printing to the screen like maniacs. We have printed variables, literals and a few different types. python print("hello") name = "Chris" print(name) When I introduced strings, we saw there are a few different ways to write them, the same is true for formatting a string. Just printing a variable to the screen is a terrible idea, especially if you are using print statements to debug your code End of explanation """ def return_a_sum(arg0, arg1): return arg0 + arg1 returned_variable = return_a_sum(40, 2) #call the function here and return something print(returned_variable) """ Explanation: A bit of a contrived example but you can see how this is useful Exercise Try writing the above example with a variable with the value 11 What happens? Functions We have seen these a lot in the course so far. Every time you print something or want to find out the type of a variable, these are all functions built into the standard library in Python. Exercise Give me 4 examples of functions in Python Writing Your Own Function: python def method_name(argument0, argument1): variable_inside_method = "I am indented, so I am part of the function" return variable_inside_method def is the keyword used to define the start of a declaration the name follows it and make sure it conforms to what the method does the arguments of the method go inside the brackets NOTE THE COLON AT THE END OF THE LINE!! ALL code that is part of the method needs to be indented Once a method has run, we can return a value, but we do not have to! End of explanation """ print("My name is {}".format('Fred')) print("My name is {} {}".format(4, 'Terry')) print("My name is {0}".format(True)) # format does not care about types """ Explanation: Methods Methods are functions that are attached to objects. While we call a function like this: function(argument1, argument2) Methods are called like this: object.method(argument1, argument2) Objects in Python have methods attached to them that you can use. Let's learn about methods attached to strings! What is the name of the function in Python that lists the methods attached to an object? String Formatting Creating a string like this: python bit_to_add = "something else" my_string = "something something " + bit_to_add + " blah blah blah" This is the more basic way of handling strings and it means we have to deal with different types (that we will cover later in the session) .format Python now has support to do something commonly known as string formatting. Format is a method (we will learn about these as a concept later) built into the string type that allows you to specify what it contains. End of explanation """ `"Never {0} give you {1}, Never {0} let you {2}".format("gonna", "up", "down")` We can reuse arguments by referring to their index here! 
""" Explanation: Zero-Index The 0 in the curly braces refers to the number of arguments for the format method. How many arguments do you see in the format method? Many languages start at a zero index, this means that when you have 10 elements, their index ranges from 0 to 9. (We will see this in more detail when we come to lists in Python) python INDEX: 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | ELEMENT: "a"|"b"|"c"|"d"|"e"|"f"|"g"|"h"|"i"|"j"| Exercise "Never {0} give you {1}, Never {2} let you {3}".format("gonna", "up", "gonna", "down") How many arguments are in the formatted string method above? What is the index of the last argument? Do you think this could be written better? End of explanation """ my_string = "Super Cool Words" print(len(my_string)) print("super cool {0}".format("words")) print(my_string.lower()) print(type(my_string)) print(my_string.upper()) print(int(my_string)) """ Explanation: Exercise Given the following string and arguments, complete the format method so they match the famous MeatLoaf song. Note: You do not have to use all of the arguments! I would {0} {1} for {2} but I won't {0} {3} climb - anything swim drive Everest that do sleep kill truck something eat anything Methods vs Functions We have introduced both methods and functions so far, and we will cover them in more detail as the course progresses, but what is the difference? Functions are blocks of code to perform some operations and provide an output Methods are attached to objects Exercise Which of the calls below are methods or functions? Write what each of these methods does in one line. End of explanation """ pet_name = "Mr. Woofington" print("Hello " + pet_name) pet_name = 14055 print("Hello " + pet_name) #Why is this? """ Explanation: function method method function method function Casting Types We can instantiate a variable to be of a particular type, and we can change the type by changing the value assigned, like so: End of explanation """ print(type(1)) print(type("what am I?")) print(type(True)) """ Explanation: Checking types We can use the built in type function to check what type a value is: End of explanation """ print(str(1)) #convert int to string print(int(True)) #bool to int print(int("12")) #string to int """ Explanation: Casting Types Using built in functions, we can convert values from one type to another. End of explanation """ global_var = "I am a global variable" def a_function(): local_var = "I am local" print(local_var) a_function() print(global_var) print(local_var) # What happens here? """ Explanation: Exercise Write a script that: Prints out a request for the user's name, favourite movie and IMDb rating Cast the rating to an int and turn it into a percentage Print a formatted string that tells the user what their percentage is If the rating is more than 70%: print out 'We have a winner' If it is between 50 and 70: print out 'Not bad' If it is between 30 and 50: print out 'Not a hit' If it less than 30: print out 'Rotten tomato' Exercise Write a function that takes a string as an input and returns the string in all upper case Call the function with different strings and print the result What happens when you pass an integer as the argument? Modify the function so it can handle arguments of different types Scope Scope defines where we are in our program and what variables/functions/objects etc we have access to. This gives us local and global scope. 
global means it can be accessed from anywehre local means it can only be accessed from within the scope it was created in End of explanation """ def func(): local = "I am local" return(local) global_var = func() print(global_var) age = 200 def change_age(): age = "test" print(age) change_age() print(age) #What should this equal? """ Explanation: Scope local_var was created within the scope of a_function and was not in the global scope, so it could not be printed. Exercise Write a function that delcares a variable within it Return the variable from the function and assign it to a global variable Print the global variable Now you have changed the scope of a variable from local to global End of explanation """ age = 200 def change_age(): global age age = "test" print(age) change_age() print(age) def change_age_no_glob(age): age = "test" return age age = change_age_no_glob(age) """ Explanation: Scope This is a practical example of local scope vesus global scope. age is a global variable that has the value: 200. Within the change_age function, a new variable, also called age, is created that is only within the local scope. How do we access the global variable within a function? End of explanation """ def function_to_do_stuff(): g_var = "Hello" function_to_do_stuff() print(g_var) def function_to_do_stuff(): global g_var # makes new variable available in global scope g_var = "Hello" function_to_do_stuff() print(g_var) """ Explanation: global The global keyword allows you to use variables in the global scope inside the scope of a function. global is an excellent way to showcase scope. Note: This is a feature that is primarily unique to Python (and PHP) Another Example End of explanation """
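# (Added example, not in the original session notes.) One scope pitfall worth knowing:
# if a function assigns to a name anywhere in its body, Python treats that name as
# local for the whole function, so reading it before the assignment fails unless you
# declare it with the global keyword.
counter = 0

def broken_increment():
    counter = counter + 1 # fails: counter is treated as local here
    return counter

def working_increment():
    global counter # use the module-level counter instead
    counter = counter + 1
    return counter

try:
    broken_increment()
except UnboundLocalError as err:
    print("broken_increment raised: " + str(err))

print("working_increment returns: " + str(working_increment()))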
staeiou/assorted-notebooks
infinite_scream/2017-07-27/infinite_scream.ipynb
mit
!pip install tweepy pandas seaborn """ Explanation: Graphing the number of favorites to @infinite_scream over time By R. Stuart Geiger (@staeiou), Released CC-BY 4.0 & MIT License Setup Installing dependencies End of explanation """ import random import twitter_login # a file containing my API keys import tweepy import pandas as pd import matplotlib %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import datetime import pytz """ Explanation: Importing libraries End of explanation """ CONSUMER_KEY = twitter_login.CONSUMER_KEY CONSUMER_SECRET = twitter_login.CONSUMER_SECRET ACCESS_TOKEN = twitter_login.ACCESS_TOKEN ACCESS_TOKEN_SECRET = twitter_login.ACCESS_TOKEN_SECRET # Authenticate auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET) auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET) api = tweepy.API(auth) """ Explanation: Authenticate with Twitter End of explanation """ # by yanofsky, https://gist.github.com/yanofsky/5436496 screen_name="infinite_scream" #initialize a list to hold all the tweepy Tweets alltweets = [] #make initial request for most recent tweets (200 is the maximum allowed count) new_tweets = api.user_timeline(screen_name = screen_name,count=200) #save most recent tweets alltweets.extend(new_tweets) #save the id of the oldest tweet less one oldest = alltweets[-1].id - 1 #keep grabbing tweets until there are no tweets left to grab while len(new_tweets) > 0: print("getting tweets before %s" % (oldest)) #all subsiquent requests use the max_id param to prevent duplicates new_tweets = api.user_timeline(screen_name = screen_name,count=200,max_id=oldest) #save most recent tweets alltweets.extend(new_tweets) #update the id of the oldest tweet less one oldest = alltweets[-1].id - 1 print("...%s tweets downloaded so far" % (len(alltweets))) """ Explanation: Getting the tweets This is limited to the last ~3200 tweets from the API. :( End of explanation """ tweets = [[tweet.id_str, tweet.created_at, tweet.favorite_count, tweet.retweet_count,\ tweet.favorite_count + tweet.retweet_count, tweet.in_reply_to_screen_name is not None,\ tweet.text, len(tweet.text)] for tweet in alltweets] """ Explanation: Data processing First, go through all the tweets and pull out a few key variables, putting it into a two-dimensional array. End of explanation """ tweets_df = pd.DataFrame.from_dict(tweets) tweets_df.columns = ["id_str", "created_at", "favorite_count", "retweet_count", "fav_rt_count", \ "is_reply", "tweet_text", "tweet_length"] """ Explanation: Convert to a pandas DataFrame for easy processing End of explanation """ tweets_df[1000:1005] """ Explanation: Peek into this dataframe, pulling out rows 1000 to 1005 End of explanation """ tweets_noreplies_df = tweets_df[tweets_df['is_reply'] == False] """ Explanation: Filter out tweets that were replies to specific users. 
End of explanation """ fav_count = tweets_noreplies_df.set_index("created_at")["fav_rt_count"] fav_count = fav_count.tz_localize(pytz.utc).tz_convert(pytz.timezone('US/Pacific')) fav_count[0:10] """ Explanation: Time is in UTC / GMT-0 from the API, convert to US/Pacific End of explanation """ ax = fav_count.plot(figsize=[14,8], fontsize=16) ax.set_title("Engagement to @infinite_scream's tweets over time", {"fontsize":20}) ax.set_ylabel("Number of favorites + retweets", {"fontsize":20}) ax.set_xlabel("Time (Pacific Standard Time / GMT-8)", {"fontsize":20}) xlim_start, xlim_end = plt.xlim() xlim_end = xlim_end + .25 plt.xlim(xlim_start, xlim_end) """ Explanation: Visualization End of explanation """ tweets_df.index.name = "tweet_num" tweets_df.to_csv("infinite_scream.csv") tweets_df.to_json("infinite_scream.json") !head infinite_scream.csv """ Explanation: Output to file End of explanation """
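# (Added sketch, not part of the original analysis.) A possible follow-up once the
# CSV/JSON files are written: resample the favorite + retweet series to daily totals
# to smooth out per-tweet noise. This assumes fav_count (created above) is still in
# scope and that the installed pandas supports the resample().sum() style.
daily_engagement = fav_count.resample('D').sum()
print(daily_engagement.head())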
albahnsen/ML_RiskManagement
notebooks/09_StatisticalInference.ipynb
mit
import pandas as pd data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Credit.csv', index_col=0) data.head(10) """ Explanation: 09 - Statistical Inference by Alejandro Correa Bahnsen & Iván Torroledo version 1.2, Feb 2018 Part of the class Machine Learning for Risk Management This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. How we do Statiscal Inference in Machine Learning? It's usually acepted that Machine Learning algorithms have a huge power to predict and describe unknown data based on observed data. However, Machine Learning algorithms is not generally concern about the statistical inference for example as significance of predictions or estimated parameters. This focus it's usually true for traditional quantitative areas like econometrics, psicometrics that use significance as a evaluation metrics of models. The following data is a sample of demographic and bank information of certain group of clients. End of explanation """ import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline sns.pairplot(data,hue='Gender',palette="husl",markers="+") """ Explanation: Usually, this kind of data it's commonly used to create scoring models. With the tools already studied, we could achieve this task easily. However this time, we would like to know which variables are important to explain Balance account of a given client?. In other words, we would like to know if it is a statistical relation between Balance and the other variables. For now, take Gender to test this hypothesis. Question: Is Gender statistically relevant to explain Balance account of a client? To answer this question we could find if there are a difference in Balance account between Males and Females. But, first analyze data visually to get a sense of data: End of explanation """ # splitting data male_= data[data.Gender==' Male'].Balance female_ = data[data.Gender=='Female'].Balance fig = plt.figure(figsize=(14,7)) n, bins, patches = plt.hist(male_, bins =50, facecolor='blue', alpha=0.5,label='Male') n, bins, patches = plt.hist(female_, bins =50,facecolor='red', alpha=0.5,label='Female') plt.axvline(male_.mean(),linestyle='--',color='blue',) plt.axvline(female_.mean(),linestyle='--',color='red',) plt.xlabel('Balance') plt.legend(); Gender_differences = data.groupby('Gender').mean() Gender_differences print('The mean difference in Balance by Gender is : '+ str(Gender_differences.loc[' Male','Balance']-Gender_differences.loc['Female','Balance'])) """ Explanation: It seems that Balance account distribution doesn't change across Gender. But, if we calculate the mean value of the Balance by Male and Female? End of explanation """ # Building features and target variable X = data.Gender.map({' Male': 1, 'Female':0}) Y = data.Balance """ Explanation: So, we got it?, is this difference between Male and Female Balance enough to answer the initial question? Short Answer: No! Long Answer: No, we calculate a mean difference, but we haven't checked yet whether this value is statistically significant Hypothesis Testing To check the statistical significance of the mean difference estimated above we can postulate the following hypothesis: Ho: There is no difference in Balance account between Male and Female Ha: There is a statistical difference in Balance account between Male and Female We want to calculate the p-value of our estimation to compare with a accepted threshold of significance choosen by us: $\alpha = (1\%,5\%,10\%)$ How we calculate the P-value? 
We can use the traditional method of statistics: assume a distribution for the data, calculate a statistcs like t distribution. Using the data and some sampling techniques we can computing the empirical distrubutin of data, and to check what is the probability asocieated with our estimation (P-value). As we know traditional method (1), lets do the uncommon approach. We are going to see that this method can be implemented easily and have a huge power in more complicated tasks. Data Sampling: Shuffling Algorithm The Shuffling algorithm is a sampling technique commonly used to simulate empirical distributions from the data. The basic idea is to simulate the distribution by shuffling the labels (Male and Female) repeatedly and computing a desired statistic. In our case, the choosen statistic is the mean difference. If the labels (Male and Female) really don't matter to explain Balance, then switching them randomly sould not change the result we got. Steps: Shuffle labels in the data. Rearrange Compunte the statistics: mean by Gender. End of explanation """ original_difference = female_.mean() - male_.mean() print('The difference in Balance by Gender (in the data) is: '+ str(original_difference)) # Create a Data Frame with desiered variables dataframe = pd.DataFrame(X) dataframe['Balance'] = Y dataframe.head() # Step 1 & 2 def shuffle_data(frame): vec = np.zeros(frame.Gender.count())#.astype(float) vec[np.random.choice(frame.Gender.count(),int(sum(frame.Gender)),replace=False)] = 1 frame['Gender'] = vec return frame # Step 3 def mean_difference(frame): return frame.groupby('Gender').mean().loc[0,'Balance'] - frame.groupby('Gender').mean().loc[1,'Balance'] import numpy as np def simulate_distribution(frame, N=100): a = [] for i in range(N): a.append(mean_difference(shuffle_data(dataframe))) return a def plot_distribution(dist,data,color='blue',bins=bins,orig=True): fig = plt.figure(figsize=(10,6)) n, bins, patches = plt.hist(dist, bins = bins, normed=1.0, facecolor=color, alpha=0.5) values, base = np.histogram(dist, bins = bins) if orig: plt.axvline(np.mean(data), color=color, linestyle='dashed', linewidth=2,label='Original data') plt.legend() plt.title('Mean difference') ## Simulation N = 1000 distribution = simulate_distribution(dataframe,N) plot_distribution(distribution,original_difference,'blue',100) # Calculating P-Value def pvalue(dist,estimation): return float(sum(np.array(dist)>estimation))/len(dist) p_value = pvalue(distribution,original_difference) p_value """ Explanation: First calculate the statistics (mean difference) in the data. End of explanation """
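# (Added sketch, not part of the original notebook.) For reference, the 'traditional'
# route mentioned above can be checked against the permutation result with a classic
# two-sample t-test; male_ and female_ are the Balance series built earlier and
# p_value is the permutation p-value computed above.
from scipy import stats
t_stat, t_pvalue = stats.ttest_ind(male_, female_, equal_var=False)
print('Welch t-test: t = {0:.3f}, p-value = {1:.3f}'.format(t_stat, t_pvalue))
print('Permutation p-value: {0:.3f}'.format(p_value))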
kkkddder/dmc
notebooks/week-4/01-tensorflow ANN for regression.ipynb
apache-2.0
%matplotlib inline import math import random import seaborn as sns import matplotlib.pyplot as plt import pandas as pd from sklearn.datasets import load_boston import numpy as np import tensorflow as tf sns.set(style="ticks", color_codes=True) """ Explanation: Lab 4 - Tensorflow ANN for regression In this lab we will use Tensorflow to build an Artificial Neuron Network (ANN) for a regression task. As opposed to the low-level implementation from the previous week, here we will use Tensorflow to automate many of the computation tasks in the neural network. Tensorflow is a higher-level open-source machine learning library released by Google last year which is made specifically to optimize and speed up the development and training of neural networks. At its core, Tensorflow is very similar to numpy and other numerical computation libraries. Like numpy, it's main function is to do very fast computation on multi-dimensional datasets (such as computing the dot product between a vector of input values and a matrix of values representing the weights in a fully connected network). While numpy refers to such multi-dimensional data sets as 'arrays', Tensorflow calls them 'tensors', but fundamentally they are the same thing. The two main advantages of Tensorflow over custom low-level solutions are: While it has a Python interface, much of the low-level computation is implemented in C/C++, making it run much faster than a native Python solution. Many common aspects of neural networks such as computation of various losses and a variety of modern optimization techniques are implemented as built in methods, reducing their implementation to a single line of code. This also helps in development and testing of various solutions, as you can easily swap in and try various solutions without having to write all the code by hand. You can get more details about various popular machine learning libraries in this comparison. To test our basic network, we will use the Boston Housing Dataset, which represents data on 506 houses in Boston across 14 different features. One of the features is the median value of the house in $1000’s. This is a common data set for testing regression performance of machine learning algorithms. All 14 features are continuous values, making them easy to plug directly into a neural network (after normalizing ofcourse!). The common goal is to predict the median house value using the other columns as features. This lab will conclude with two assignments: Assignment 1 (at bottom of this notebook) asks you to experiment with various regularization parameters to reduce overfitting and improve the results of the model. Assignment 2 (in the next notebook) asks you to take our regression problem and convert it to a classification problem. Let's start by importing some of the libraries we will use for this tutorial: End of explanation """ #load data from scikit-learn library dataset = load_boston() #load data as DataFrame houses = pd.DataFrame(dataset.data, columns=dataset.feature_names) #add target data to DataFrame houses['target'] = dataset.target #print first 5 entries of data print houses.head() """ Explanation: Next, let's import the Boston housing prices dataset. This is included with the scikit-learn library, so we can import it directly from there. The data will come in as two numpy arrays, one with all the features, and one with the target (price). We will use pandas to convert this data to a DataFrame so we can visualize it. 
We will then print the first 5 entries of the dataset to see the kind of data we will be working with. End of explanation """ print dataset['DESCR'] """ Explanation: You can see that the dataset contains only continuous features, which we can feed directly into the neural network for training. The target is also a continuous variable, so we can use regression to try to predict the exact value of the target. You can see more information about this dataset by printing the 'DESCR' object stored in the data set. End of explanation """ # Create a datset of correlations between house features corrmat = houses.corr() # Set up the matplotlib figure f, ax = plt.subplots(figsize=(9, 6)) # Draw the heatmap using seaborn sns.set_context("notebook", font_scale=0.7, rc={"lines.linewidth": 1.5}) sns.heatmap(corrmat, annot=True, square=True) f.tight_layout() """ Explanation: Next, we will do some exploratory data visualization to get a general sense of the data and how the different features are related to each other and to the target we will try to predict. First, let's plot the correlations between each feature. Larger positive or negative correlation values indicate that the two features are related (large positive or negative correlation), while values closer to zero indicate that the features are not related (no correlation). End of explanation """ sns.jointplot(houses['target'], houses['RM'], kind='hex') sns.jointplot(houses['target'], houses['LSTAT'], kind='hex') """ Explanation: We can get a more detailed picture of the relationship between any two variables in the dataset by using seaborn's jointplot function and passing it two features of our data. This will show a single-dimension histogram distribution for each feature, as well as a two-dimension density scatter plot for how the two features are related. From the correlation matrix above, we can see that the RM feature has a strong positive correlation to the target, while the LSTAT feature has a strong negative correlation to the target. Let's create jointplots for both sets of features to see how they relate in more detail: End of explanation """ # convert housing data to numpy format houses_array = houses.as_matrix().astype(float) # split data into feature and target sets X = houses_array[:, :-1] y = houses_array[:, -1] # normalize the data per feature by dividing by the maximum value in each column X = X / X.max(axis=0) # split data into training and test sets trainingSplit = int(.7 * houses_array.shape[0]) X_train = X[:trainingSplit] y_train = y[:trainingSplit] X_test = X[trainingSplit:] y_test = y[trainingSplit:] print('Training set', X_train.shape, y_train.shape) print('Test set', X_test.shape, y_test.shape) """ Explanation: As expected, the plots show a positive relationship between the RM feature and the target, and a negative relationship between the LSTAT feature and the target. This type of exploratory visualization is not strictly necessary for using machine learning, but it does help to formulate your solution, and to troubleshoot your implementation incase you are not getting the results you want. For example, if you find that two features have a strong correlation with each other, you might want to include only one of them to speed up the training process. Similarly, you may want to exclude features that show little correlation to the target, since they have little influence over its value. Now that we know a little bit about the data, let's prepare it for training with our neural network. 
We will follow a process similar to the previous lab: We will first re-split the data into a feature set (X) and a target set (y) Then we will normalize the feature set so that the values range from 0 to 1 Finally, we will split both data sets into a training and test set. End of explanation """ # helper variables num_samples = X_train.shape[0] num_features = X_train.shape[1] num_outputs = 1 # Hyper-parameters batch_size = 50 num_hidden_1 = 12 num_hidden_2 = 12 learning_rate = 0.0001 training_epochs = 100 dropout_keep_prob = 0.3 # set to no dropout by default # variable to control the resolution at which the training results are stored display_step = 1 """ Explanation: Next, we set up some variables that we will use to define our model. The first group are helper variables taken from the dataset which specify the number of samples in our training set, the number of features, and the number of outputs. The second group are the actual hyper-parameters which define how the model is structured and how it performs. In this case we will be building a neural network with two hidden layers, and the size of each hidden layer is controlled by a hyper-parameter. The other hyper-parameters include: batch size, which sets how many training samples are used at a time learning rate which controls how quickly the gradient descent algorithm works training epochs which sets how many rounds of training occurs dropout keep probability, a regularization technique which controls how many neurons are 'dropped' randomly during each training step (note in Tensorflow this is specified as the 'keep probability' from 0 to 1, with 0 representing all neurons dropped, and 1 representing all neurons kept). You can read more about dropout here. End of explanation """ def accuracy(predictions, targets): error = np.absolute(predictions.reshape(-1) - targets) return np.mean(error) def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) """ Explanation: Next, we define a few helper functions which will dictate how error will be measured for our model, and how the weights and biases should be defined. The accuracy() function defines how we want to measure error in a regression problem. The function will take in two lists of values - predictions which represent predicted values, and targets which represent actual target values. In this case we simply compute the absolute difference between the two (the error) and return the average error using numpy's mean() fucntion. The weight_variable() and bias_variable() functions help create parameter variables for our neural network model, formatted in the proper type for Tensorflow. Both functions take in a shape parameter and return a variable of that shape using the specified initialization. In this case we are using a 'truncated normal' distribution for the weights, and a constant value for the bias. For more information about various ways to initialize parameters in Tensorflow you can consult the documentation End of explanation """ '''First we create a variable to store our graph''' graph = tf.Graph() '''Next we build our neural network within this graph variable''' with graph.as_default(): '''Our training data will come in as x feature data and y target data. 
We need to create tensorflow placeholders to capture this data as it comes in''' x = tf.placeholder(tf.float32, shape=(None, num_features)) _y = tf.placeholder(tf.float32, shape=(None)) '''Another placeholder stores the hyperparameter that controls dropout''' keep_prob = tf.placeholder(tf.float32) '''Finally, we convert the test and train feature data sets to tensorflow constants so we can use them to generate predictions on both data sets''' tf_X_test = tf.constant(X_test, dtype=tf.float32) tf_X_train = tf.constant(X_train, dtype=tf.float32) '''Next we create the parameter variables for the model. Each layer of the neural network needs it's own weight and bias variables which will be tuned during training. The sizes of the parameter variables are determined by the number of neurons in each layer.''' W_fc1 = weight_variable([num_features, num_hidden_1]) b_fc1 = bias_variable([num_hidden_1]) W_fc2 = weight_variable([num_hidden_1, num_hidden_2]) b_fc2 = bias_variable([num_hidden_2]) W_fc3 = weight_variable([num_hidden_2, num_outputs]) b_fc3 = bias_variable([num_outputs]) '''Next, we define the forward computation of the model. We do this by defining a function model() which takes in a set of input data, and performs computations through the network until it generates the output.''' def model(data, keep): # computing first hidden layer from input, using relu activation function fc1 = tf.nn.relu(tf.matmul(data, W_fc1) + b_fc1) # adding dropout to first hidden layer fc1_drop = tf.nn.dropout(fc1, keep) # computing second hidden layer from first hidden layer, using relu activation function fc2 = tf.nn.relu(tf.matmul(fc1_drop, W_fc2) + b_fc2) # adding dropout to second hidden layer fc2_drop = tf.nn.dropout(fc2, keep) # computing output layer from second hidden layer # the output is a single neuron which is directly interpreted as the prediction of the target value fc3 = tf.matmul(fc2_drop, W_fc3) + b_fc3 # the output is returned from the function return fc3 '''Next we define a few calls to the model() function which will return predictions for the current batch input data (x), as well as the entire test and train feature set''' prediction = model(x, keep_prob) test_prediction = model(tf_X_test, 1.0) train_prediction = model(tf_X_train, 1.0) '''Finally, we define the loss and optimization functions which control how the model is trained. For the loss we will use the basic mean square error (MSE) function, which tries to minimize the MSE between the predicted values and the real values (_y) of the input dataset. For the optimization function we will use basic Gradient Descent (SGD) which will minimize the loss using the specified learning rate.''' loss = tf.reduce_mean(tf.square(tf.sub(prediction, _y))) optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) '''We also create a saver variable which will allow us to save our trained model for later use''' saver = tf.train.Saver() """ Explanation: Now we are ready to build our neural network model in Tensorflow. Tensorflow operates in a slightly different way than the procedural logic we have been using in Python so far. Instead of telling Tensorflow the exact operations to run line by line, we build the entire neural network within a structure called a Graph. 
The Graph does several things: describes the architecture of the network, including how many layers it has and how many neurons are in each layer initializes all the parameters of the network describes the 'forward' calculation of the network, or how input data is passed through the network layer by layer until it reaches the result defines the loss function which describes how well the model is performing specifies the optimization function which dictates how the parameters are tuned in order to minimize the loss Once this graph is defined, we can work with it by 'executing' it on sets of training data and 'calling' different parts of the graph to get back results. Every time the graph is executed, Tensorflow will only do the minimum calculations necessary to generate the requested results. This makes Tensorflow very efficient, and allows us to structure very complex models while only testing and using certain portions at a time. In programming language theory, this type of programming is called 'lazy evaluation'. End of explanation """ # create an array to store the results of the optimization at each epoch results = [] '''First we open a session of Tensorflow using our graph as the base. While this session is active all the parameter values will be stored, and each step of training will be using the same model.''' with tf.Session(graph=graph) as session: '''After we start a new session we first need to initialize the values of all the variables.''' tf.initialize_all_variables().run() print('Initialized') '''Now we iterate through each training epoch based on the hyper-parameter set above. Each epoch represents a single pass through all the training data. The total number of training steps is determined by the number of epochs and the size of mini-batches relative to the size of the entire training set.''' for epoch in range(training_epochs): '''At the beginning of each epoch, we create a set of shuffled indexes so that we are using the training data in a different order each time''' indexes = range(num_samples) random.shuffle(indexes) '''Next we step through each mini-batch in the training set''' for step in range(int(math.floor(num_samples/float(batch_size)))): offset = step * batch_size '''We subset the feature and target training sets to create each mini-batch''' batch_data = X_train[indexes[offset:(offset + batch_size)]] batch_labels = y_train[indexes[offset:(offset + batch_size)]] '''Then, we create a 'feed dictionary' that will feed this data, along with any other hyper-parameters such as the dropout probability, to the model''' feed_dict = {x : batch_data, _y : batch_labels, keep_prob: dropout_keep_prob} '''Finally, we call the session's run() function, which will feed in the current training data, and execute portions of the graph as necessary to return the data we ask for. The first argument of the run() function is a list specifying the model variables we want it to compute and return from the function. The most important is 'optimizer' which triggers all calculations necessary to perform one training step. We also include 'loss' and 'prediction' because we want these as ouputs from the function so we can keep track of the training process. The second argument specifies the feed dictionary that contains all the data we want to pass into the model at each training step.''' _, l, p = session.run([optimizer, loss, prediction], feed_dict=feed_dict) '''At the end of each epoch, we will calcule the error of predictions on the full training and test data set. 
We will then store the epoch number, along with the mini-batch, training, and test accuracies to the 'results' array so we can visualize the training process later. How often we save the data to this array is specified by the display_step variable created above''' if (epoch % display_step == 0): batch_acc = accuracy(p, batch_labels) train_acc = accuracy(train_prediction.eval(session=session), y_train) test_acc = accuracy(test_prediction.eval(session=session), y_test) results.append([epoch, batch_acc, train_acc, test_acc]) '''Once training is complete, we will save the trained model so that we can use it later''' save_path = saver.save(session, "model_houses.ckpt") print("Model saved in file: %s" % save_path) """ Explanation: Now that we have specified our model, we are ready to train it. We do this by iteratively calling the model, with each call representing one training step. At each step, we: Feed in a new set of training data. Remember that with SGD we only have to feed in a small set of data at a time. The size of each batch of training data is determined by the 'batch_size' hyper-parameter specified above. Call the optimizer function by asking tensorflow to return the model's 'optimizer' variable. This starts a chain reaction in Tensorflow that executes all the computation necessary to train the model. The optimizer function itself will compute the gradients in the model and modify the weight and bias parameters in a way that minimizes the overall loss. Because it needs this loss to compute the gradients, it will also trigger the loss function, which will in turn trigger the model to compute predictions based on the input data. This sort of chain reaction is at the root of the 'lazy evaluation' model used by Tensorflow. End of explanation """ df = pd.DataFrame(data=results, columns = ["epoch", "batch_acc", "train_acc", "test_acc"]) df.set_index("epoch", drop=True, inplace=True) fig, ax = plt.subplots(1, 1, figsize=(10, 4)) ax.plot(df) ax.set(xlabel='Epoch', ylabel='Error', title='Training result') ax.legend(df.columns, loc=1) print "Minimum test loss:", np.min(df["test_acc"]) """ Explanation: Now that the model is trained, let's visualize the training process by plotting the error we achieved in the small training batch, the full training set, and the test set at each epoch. We will also print out the minimum loss we were able to achieve in the test set over all the training steps. End of explanation """
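# (Added sketch, not part of the original lab.) The checkpoint saved above could later
# be restored into a fresh session to reuse the trained parameters without retraining,
# along these lines; graph, saver, test_prediction and accuracy() are the objects
# defined earlier in this notebook.
with tf.Session(graph=graph) as session:
    saver.restore(session, "model_houses.ckpt")
    restored_test_error = accuracy(test_prediction.eval(session=session), y_test)
    print("Restored model test error: %s" % restored_test_error)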
CopernicusMarineInsitu/INSTACTraining
PythonNotebooks/IndexFilePlots/plot_positions_latest_global.ipynb
mit
datadir = "~/CMEMS_INSTAC/INSITU_GLO_NRT_OBSERVATIONS_013_030/latest/20151201/" %matplotlib inline import matplotlib.pyplot as plt import glob import os import netCDF4 import numpy as np """ Explanation: In this exercise we will plot all the data locations available for a given day in the latest directory of the global product (INSITU_GLO_NRT_OBSERVATIONS_013_030) directory. We assume the data have been downloaded in the following directory: End of explanation """ datadir = os.path.expanduser(datadir) filelist = sorted(glob.glob(datadir + '*nc')) nfiles = len(filelist) print("Number of files = %u" % (nfiles)) """ Explanation: File reading We create a list of the files available in the directory: End of explanation """ for datafiles in filelist[0:10]: print os.path.basename(datafiles) """ Explanation: Now we can loop on the files (just the first 10) to check if it's okay: End of explanation """ lon = np.zeros(nfiles) lat = np.zeros(nfiles) for count, datafiles in enumerate(filelist): with netCDF4.Dataset(datafiles) as nc: lon[count] = np.nanmean(nc.variables['LONGITUDE'][:]) lat[count] = np.nanmean(nc.variables['LATITUDE'][:]) """ Explanation: Basic plot We read the coordinate variables from the file and plot them on the map. Then we loop on the files: End of explanation """ lon = np.ma.masked_outside(lon, -180., 360.) lat = np.ma.masked_outside(lat, -90., 90.) fig = plt.figure(figsize=(8, 8)) plt.plot(lon, lat, 'ko', markersize=2) plt.show() """ Explanation: We also mask the bad values of coordinates: End of explanation """ from mpl_toolkits.basemap import Basemap m = Basemap(projection='moll', lon_0=0, resolution='c') lon_p, lat_p = m(lon, lat) import matplotlib font = {'family' : 'serif', 'size' : 16} matplotlib.rc('font', **font) fig = plt.figure(figsize=(10,8)) m.plot(lon_p, lat_p, 'ko', markersize=2) m.drawparallels(np.arange(-80, 80., 20.),labels=[True,False,False,True], zorder=2) m.drawmeridians(np.arange(-180, 180., 30.), zorder=2) m.fillcontinents(color='gray', zorder=3) m.drawcoastlines(linewidth=0.5) plt.title('Platform locations on\n December 1st, 2015', fontsize=20) plt.show() """ Explanation: Improved plot A projection is created, so we have access to a coastline, land mask, ... End of explanation """ datatypelist = ('BA', 'BO', 'CT', 'DB', 'DC', 'FB', 'GL', 'MO', 'ML', 'PF', 'RF', 'TE', 'TS', 'XB') for datatype in datatypelist: filelist = sorted(glob.glob(datadir + '*LATEST_*_' + datatype + '*nc')) nfiles = len(filelist) print("Number of files of type %s = %u" % (datatype, nfiles)) """ Explanation: It is interesting to see the highly variable coverage.<br/> For example near the Equator in the Atlantic Ocean, the coverage is very low. Plot by platform The idea is to have a different color depending on the platform type: * drifting buoys, * gliders, * moorings... To do so, we will create a list of files for each parameter.<br/> There is an extensive description of all the data types in this document. 
End of explanation """ datatypename = {'DB': 'Drifting buoys', 'DC': 'Drifting buoy reporting calculated sea water current', 'MO': 'Fixed buoys or mooring time series', 'PF': 'Profiling floats vertical profiles', 'RF': 'River flows', 'TE': 'TESAC messages on GTS'} datatypename.keys() """ Explanation: We see that some data types are not often present in the list, so we will only take those with at least 50 data files.<br> We create a dictionnary with the abbreviation and the corresponding data types: End of explanation """ colorlist = ['red', 'yellowgreen', 'lightskyblue', 'gold', 'violet', 'lightgray'] m = Basemap(projection='robin', lat_0=0, lon_0=0., resolution='l') fig = plt.figure(figsize=(10,8)) ax = plt.subplot(111) nfiles = np.zeros(len(datatypename.keys())) for ntype, datatype in enumerate(datatypename.keys()): filelist = sorted(glob.glob(datadir + '*LATEST_*_' + datatype + '*nc')) nfiles[ntype] = len(filelist) lon = np.zeros(nfiles[ntype]) lat = np.zeros(nfiles[ntype]) for count, datafiles in enumerate(filelist): with netCDF4.Dataset(datafiles) as nc: lon[count] = np.nanmean(nc.variables['LONGITUDE'][:]) lat[count] = np.nanmean(nc.variables['LATITUDE'][:]) lon_p, lat_p = m(lon, lat) m.plot(lon_p, lat_p, 'o', markerfacecolor=colorlist[ntype], markeredgecolor=colorlist[ntype], markersize=3, label=datatypename[datatype] + ': ' + str(int(nfiles[ntype])) + ' files') box = ax.get_position() ax.set_position([box.x0, box.y0, box.width * 0.8, box.height]) ax.legend(loc='center left', bbox_to_anchor=(1, 0.5),fontsize=14) m.drawparallels(np.arange(-80, 80., 30.),labels=[True,False,False,True], zorder=2) m.drawmeridians(np.arange(-180, 180., 90.),labels=[True,False,False,True], zorder=2) m.fillcontinents(color='gray', zorder=3) m.drawcoastlines(linewidth=0.5) plt.title('Platform locations on\n December 1st, 2015', fontsize=20) plt.savefig('./figures/platform_types_20151201.png', dpi=300) plt.show() """ Explanation: We create a list of colors for the plot: End of explanation """ fig2 = plt.figure(figsize=(10, 8)) plt.pie(nfiles, labels=datatypename.values(), colors=colorlist, autopct='%1.1f%%', startangle=90) plt.savefig('./figures/platform_piechart_20151201.png', dpi=300) plt.show() """ Explanation: Pie chart It is also useful to have a pie chart showing the relative importance of each data type. End of explanation """
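The per-file coordinate extraction used in the loops above can be wrapped in a small helper so that further plots stay short. This is a refactoring sketch, not part of the original notebook; it reuses only the netCDF4, numpy and glob calls already shown.
# Hypothetical helper (not in the original notebook): mean observation position of one file.
def mean_position(filename):
    with netCDF4.Dataset(filename) as nc:
        return (np.nanmean(nc.variables['LONGITUDE'][:]),
                np.nanmean(nc.variables['LATITUDE'][:]))

# Example usage on the drifting-buoy files only:
db_files = sorted(glob.glob(datadir + '*LATEST_*_DB*nc'))
db_positions = [mean_position(f) for f in db_files]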
takanory/python-machine-learning
pydata-tokyo-tutorial.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import pandas as pd df = pd.read_csv("train.csv") df.head() """ Explanation: PyData.Tokyo Tutorial https://pydata.tokyo/ipynb/tutorial-1/dh.html df[df.Embarked=='C'] # filter rows where Embarked=='C' df[df.Embarked=='C']['Survived'] # take the Survived column of the df filtered on Embarked=='C' df[(df.Embarked=='C') & (df.Sex=='male')] # combine several filter conditions df.query("Embarked=='C' & Sex=='male'") # df.query keeps this tidy, but it breaks on awkward column names (spaces, slashes, clashes with Python keywords) df[['PassengerId', 'Survived', 'Pclass']] # build a df consisting of the 'PassengerId', 'Survived' and 'Pclass' columns 1-1 Loading the data Download data.csv from https://www.kaggle.com/c/titanic End of explanation """ # extract the passengers aged 65 df[df.Age == 65][['Name', 'Age']] """ Explanation: The data contains the following columns: PassengerId: passenger ID Survived: 1 = survived, 0 = died Pclass: ticket class Name: name Sex: sex Age: age Parch: number of parents/children aboard Ticket: ticket number Fare: fare Cabin: cabin number Embarked: port of embarkation End of explanation """ # number of records len(df) # check the first 2 rows df.head(2) # check the last 5 rows (5 is the default) df.tail() # show only name, age and sex for the first 3 rows df[['Name', 'Age', 'Sex']].head(3) """ Explanation: Let's look at the data End of explanation """ # only numeric columns are included df.describe() max_age = df['Age'].max() print('Maximum age: {0}'.format(max_age)) mean_age = df['Age'].mean() print('Mean age: {0}'.format(mean_age)) # check the ten oldest women df[df.Sex=='female'][['Name', 'Sex', 'Age']].sort_values('Age', ascending=0).head(10) """ Explanation: 1-2. Aggregation The describe() function gives an overview of the DataFrame. count: number of records. mean: mean value. std: standard deviation. min: minimum value. 25%, 50%, 75%: first quartile, median, third quartile. max: maximum value. End of explanation """ # columns such as Cabin contain many missing values # 687 entries are missing df['Cabin'].isnull().sum() # Ticket does not look useful for this analysis df[['Name', 'Ticket']].head() # drop the Cabin and Ticket columns # axis=1 means we are dropping columns, not rows df = df.drop(['Ticket', 'Cabin'], axis=1) # df is overwritten, so running this cell twice raises an error saying Ticket and Cabin no longer exist df.head() """ Explanation: 1-3.
Data preprocessing Removing unneeded columns End of explanation """ # confirm that Age, Cabin and other columns contain missing values (NaN) df.loc[4:10] # the interpolate() function fills missing values by interpolation # that said, interpolating ages from neighbouring rows is not a good idea df.loc[4:6][['Name', 'Age']].interpolate() df.loc[4:6][['Name', 'Age']] # fill missing ages with the mean age per sex female_age_mean = round(df[df.Sex=='female']['Age'].mean()) male_age_mean = round(df[df.Sex=='male']['Age'].mean()) print('The mean age is {0} for women and {1} for men. We impute missing ages with these means.'.format(female_age_mean, male_age_mean)) # example of a woman with a missing age df[df.PassengerId==20][['PassengerId', 'Name', 'Sex', 'Age']] # example of a man with a missing age df[df.PassengerId==6][['PassengerId', 'Name', 'Sex', 'Age']] # fill the missing ages with the means dff = df[df.Sex=='female'].fillna({'Age': female_age_mean}) dfm = df[df.Sex=='male'].fillna({'Age': male_age_mean}) df2 = dff.append(dfm) # in the new DataFrame the mean age has been filled in df2[df2.PassengerId==20][['PassengerId', 'Name', 'Sex', 'Age']] df2[df2.PassengerId==6][['PassengerId', 'Name', 'Sex', 'Age']] """ Explanation: Imputing missing values End of explanation """ # bin the ages into classes and assign a number to each def classification_age(age): if age <= 19: return '1' elif age <= 34: return '2' elif age <= 49: return '3' elif age >= 50: return '4' else: return '0' # df.Age is the same as df['Age'] df = df2 df['AgeClass'] = df.Age.map(classification_age) df.head() """ Explanation: Adding a column End of explanation """ # look at the data along two axes: 0 = died, 1 = survived df['Survived'].plot(alpha=0.6, kind='hist', bins=2) plt.xlabel('Survived') plt.ylabel('N') # plot died/survived for men and women fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(6, 3)) for i, sex in enumerate(['male', 'female']): df['Survived'][df.Sex==sex].hist(alpha=0.5, bins=2, ax=axes[i]) axes[i].set_title(sex) fig.subplots_adjust(hspace=0.3) fig.tight_layout() # plot died/survived per age for men plt.hist([df[(df.Survived==0) & (df.Sex=='male')]['Age'], df[(df.Survived==1) & (df.Sex=='male')]['Age']], alpha=0.6, range=(1,80), bins=10, stacked=True, label=('Died', 'Survived')) plt.legend() plt.xlabel('Age') plt.ylabel('N') plt.title('male') # plot died/survived per age for women plt.hist([df[(df.Survived==0) & (df.Sex=='female')]['Age'], df[(df.Survived==1) & (df.Sex=='female')]['Age']], alpha=0.6, range=(1,80), bins=10, stacked=True, label=('Died', 'Survived')) plt.legend() plt.xlabel('Age') plt.ylabel('N') plt.title('female') # draw the female and male plots side by side with matching y-axes fig = plt.figure(figsize=[15, 5]) # http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.subplot # 121 means nrows, ncols, plot_number ax1 = fig.add_subplot(121) plt.hist([df[(df.Survived==0) & (df.Sex=='female')]['Age'], df[(df.Survived==1) & (df.Sex=='female')]['Age']], alpha=0.6, range=(1,80), bins=10, stacked=True, label=('Died', 'Survived')) plt.xlabel('Age') plt.yticks([0, 50, 100, 150, 200, 250]) plt.ylabel('N') plt.title('female') plt.legend() ax2 = fig.add_subplot(122) plt.hist([df[(df.Survived==0) & (df.Sex=='male')]['Age'], df[(df.Survived==1) & (df.Sex=='male')]['Age']], alpha=0.6, range=(1,80), bins=10, stacked=True, label=('Died', 'Survived')) plt.xlabel('Age') plt.yticks([0, 50, 100, 150, 200, 250]) plt.ylabel('N') plt.title('male') plt.legend() plt.show() mean_age = df['Age'].mean() for pclass in [1, 2, 3]: fig, axes = plt.subplots(nrows=2, ncols=2, figsize=[10, 10]) sex_n=0 for sex in ['male', 'female']: for survived in [0, 1]: fig = df[((df.Survived==survived) & (df.Sex==sex) & (df.Pclass==pclass) )].Age.hist(alpha=0.6, bins=10, ax=axes[sex_n][survived]) fig.set_xlabel("Age") fig.set_ylabel('N ('+sex+str(survived)+' )') axes[sex_n][survived].set_ylim(0,70) fig.set_title('Pclass = {0} / mean_age = {1}'.format(pclass, round(mean_age))) sex_n += 1 plt.subplots_adjust(hspace=0.5) plt.show() """
Explanation: 1-4. Data visualization End of explanation """
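As a possible follow-up to the plots above, the grouped survival rates can be computed directly with groupby. This is a hypothetical addition, assuming df still holds the AgeClass column created earlier.
# Hypothetical addition (not in the original tutorial): survival rate per age class and sex.
survival_rate = df.groupby(['AgeClass', 'Sex'])['Survived'].mean()
print(survival_rate)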
Python4AstronomersAndParticlePhysicists/PythonWorkshop-ICE
notebooks/10_04_Astronomy_Astroplan.ipynb
mit
%matplotlib inline import numpy as np import math import matplotlib.pyplot as plt import seaborn from astropy.io import fits from astropy import units as u from astropy.coordinates import SkyCoord plt.rcParams['figure.figsize'] = (12, 8) plt.rcParams['font.size'] = 14 plt.rcParams['lines.linewidth'] = 2 plt.rcParams['xtick.labelsize'] = 13 plt.rcParams['ytick.labelsize'] = 13 plt.rcParams['axes.titlesize'] = 14 plt.rcParams['legend.fontsize'] = 13 """ Explanation: Planning Observations Planning of observations is currently a weakness in python. We have the following packages to perform this task: pyephem - works well, but is not maitained anymore and a bit obsolete astropy - has basic operations (i.e. compute altaz of sources) but has no high-level functionalities astroplan - aims many interesting and useful features but it is at a very early stage and some features do not work (and it is slow!) Here we will mostly review astroplan as we believe it will be adopted by astropy and possibly become a reference. End of explanation """ from astropy.coordinates import SkyCoord from astroplan import FixedTarget coordinates = SkyCoord('18h18m48.0s', '−13d49m0.0s', frame='icrs') eagle_nebula = FixedTarget(name='M16', coord=coordinates) print (eagle_nebula) """ Explanation: In order to know the altitude and azimuth of a fixed target in the sky we will mainly need to know: The location of the target (on Sky) The location of the observer (on Earth) The time Let's define first the target we want to observe: End of explanation """ eagle_nebula = FixedTarget.from_name('M16') print (eagle_nebula) """ Explanation: You can also search by its name if it is in CDS... End of explanation """ from astroplan import Observer observer = Observer.at_site('lapalma') print (observer) """ Explanation: Now we should specify where the observer will be on Earth: End of explanation """ import astropy.units as u from astropy.coordinates import EarthLocation #from pytz import timezone from astroplan import Observer longitude = '-17d52m54s' latitude = '28d45m38s' elevation = 2344 * u.m location = EarthLocation.from_geodetic(longitude, latitude, elevation) observer = Observer(name='WHT', location=location, pressure=0.615 * u.bar, relative_humidity=0.04, temperature=18 * u.deg_C, #timezone=timezone('US/Hawaii'), description="Our beloved William Herschel Telescope") print (observer) """ Explanation: But maybe we are picky and want the exact location (or specify a different location that is not present in the database...) End of explanation """ from astropy.time import Time time = Time('2017-09-15 23:30:00') """ Explanation: Finally we need to set up the time, which by default is set in UTC. End of explanation """ observer.target_is_up(time, eagle_nebula) """ Explanation: Let's ask python if we can see the Nebula tonight from la palma: End of explanation """ observer.is_night(time) """ Explanation: We assume that at 11.30 pm will be dark, but let's make sure... End of explanation """ from astroplan import download_IERS_A download_IERS_A() """ Explanation: You might get an IERS warning (International Earth Rotation and Reference Systems Service) to update the Earth Location. 
For more info: http://astroplan.readthedocs.io/en/latest/faq/iers.html Let's do it: End of explanation """ observer.sun_set_time(time, which='nearest').iso """ Explanation: Calculate rise/set/meridian transit times It can also provide information about all the twilight times: End of explanation """ observer.sun_set_time(time, which='next').iso observer.twilight_evening_civil(time, which='nearest').iso observer.twilight_evening_nautical(time, which='nearest').iso observer.twilight_evening_astronomical(time, which='nearest').iso """ Explanation: By default it set's the nearest sunset but you can specify also next or previous. End of explanation """ observer.target_rise_time(time, eagle_nebula).iso observer.target_set_time(time, eagle_nebula).iso """ Explanation: Similarly, we can ask when the target will be raising or setting: End of explanation """ altaz_eagle = observer.altaz(time, eagle_nebula) altaz_eagle.alt, altaz_eagle.az """ Explanation: Calculate alt/az positions for targets and Airmass With this information we can also ask what is the Altitute and Azimuth of our target at that specific time End of explanation """ altaz_eagle.secz """ Explanation: With the integrated sec function we can easily get the Airmass End of explanation """ from astropy.time import TimeDelta time_list = [] airmass_list = [] current_time = observer.sun_set_time(time, which='nearest') while current_time < observer.sun_rise_time(time, which='nearest'): current_altaz = observer.altaz(current_time, eagle_nebula) if current_altaz.alt > 0: airmass_list.append(current_altaz.alt.value) else: airmass_list.append(0) time_list.append(current_time.datetime) current_time += TimeDelta(3600, format='sec') plt.plot(time_list, airmass_list) """ Explanation: We can now aim to make an altitude plot scanning the altitude of our target every hour End of explanation """ from astroplan.plots import plot_airmass middle_of_the_night = Time('2017-09-16 01:00:00') plot_airmass(targets=eagle_nebula, observer=observer, time=middle_of_the_night, #brightness_shading=True, #altitude_yaxis=True ) plt.legend() """ Explanation: Fortunately, there is a function that does it (much faster) within the day around the date we provide: End of explanation """ from astroplan.plots import dark_style_sheet start_time = observer.sun_set_time(time, which='nearest') end_time = observer.sun_rise_time(time, which='nearest') delta_t = end_time - start_time observe_time = start_time + delta_t*np.linspace(0, 1, 75) andromeda = FixedTarget.from_name('M31') pleiades = FixedTarget.from_name('M45') some_nice_stuff_to_look_tonight = [eagle_nebula, andromeda, pleiades] plot_airmass(targets=some_nice_stuff_to_look_tonight, observer=observer, time=observe_time, brightness_shading=True, altitude_yaxis=True, #style_sheet=dark_style_sheet ) plt.legend() """ Explanation: We can also give a range of dates to focus on a specific region of time (dark time) End of explanation """ from astroplan.plots import plot_sky plot_sky(eagle_nebula, observer, middle_of_the_night) plot_sky(pleiades, observer, middle_of_the_night) plot_sky(andromeda, observer, middle_of_the_night) plt.legend() observe_time = Time('2000-03-15 17:00:00') observe_time = observe_time + np.linspace(-4, 5, 10)*u.hour plot_sky(pleiades, observer, observe_time) plt.legend(loc='center left', bbox_to_anchor=(1.25, 0.5)) plt.show() """ Explanation: Making sky charts End of explanation """ from astroplan.plots import plot_finder_image plot_finder_image(eagle_nebula, survey='DSS', log=True) """ Explanation: Finder Chart 
Image Astroplan also provides the option to display sky charts from a list of surveys (but it goes really slow..) End of explanation """
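To close the session plan, the altitude and airmass of each target can be tabulated at the chosen time using only the calls introduced above. This is a hypothetical wrap-up, not part of the original notebook.
# Hypothetical summary table (not in the original notebook).
for target in some_nice_stuff_to_look_tonight:
    altaz = observer.altaz(middle_of_the_night, target)
    print(target.name, altaz.alt, altaz.secz)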
tacaswell/altair
notebooks/ChartExamples.ipynb
bsd-3-clause
import random from IPython.display import HTML, display import numpy as np import pandas as pd import altair.api as alt from altair import html """ Explanation: Altair Basic Charting This notebook seeks to walk you through many of the basic chart types you're going to be building with Altair, such as line charts, bar charts, histograms, etc. End of explanation """ # Dict of lists list_data = {'col_1': [1, 2, 3, 4, 5, 6, 7], 'col_2': [10, 20, 30, 20, 15, 30, 45]} df = pd.DataFrame(list_data) fig_1 = (alt.Viz(list_data) .encode(x="col_1", y="col_2") .configure(width=600, height=400, singleWidth=500, singleHeight=300)) """ Explanation: First we're going to generate some data that might be representative of datasets you're using. Altair works with Pandas DataFrames, so our examples will move from regular Python data structures to DataFrames. First we're going to create a simple bar chart out of an iterable of x/y pairs. We'll first encode the col_1 and col_2 data to x and y, and then specify widths/heights. Vega-lite has faceting built-in, so it has two dimension specifications: * width/height: The width/height of the entire chart * singleWidth/singleHeight: The width/height of a single facet In the following example, we only have one facet, so we'll bound our box appropriately. End of explanation """ fig_1.encoding.x, fig_1.encoding.y """ Explanation: Let's take a look a quick look at our dat encoding End of explanation """ out = html.render(fig_1) display(HTML(html.render(fig_1))) """ Explanation: As you can see, Altair figured out that both data types where Quantitative, or Q. Let's make a chart: End of explanation """ list_data = {'col_1': ['A', 'B', 'C', 'D', 'E', 'F', 'G'], 'col_2': [10, 20, 30, 20, 15, 30, 45]} df = pd.DataFrame(list_data) fig_2 = (alt.Viz(list_data) .encode(x="col_1:O", y="col_2:Q") .configure(width=600, height=400, singleWidth=500, singleHeight=300) .bar()) fig_2.encoding.x.band = alt.Band(size=60) display(HTML(html.render(fig_2))) """ Explanation: By default Altair created a scatterplot with a Quantitative axis. Let's take a look at similar data, but with categorical types in the first column. End of explanation """ df = pd.DataFrame(list_data) fig_3 = (alt.Viz(list_data) .encode(x="col_1", y="col_2") .set_single_dims(width=500, height=300) .circle()) display(HTML(html.render(fig_3))) """ Explanation: You'll see a couple things above. First is that we presented some type hints for the x and y encoding that were parsed by Altair. Second is that the band on the x encoding above was important- this dictates the width of each categorical element on that axis. In the case of Ordinal or Nominal data (O or N), the band width determines the total chart width. However, you might not want to spend a lot of time fiddling with band widths and so on. If you just want to create a single chart and fiddle with the dimensions, you can use the set_single_dims helper; End of explanation """ normal = np.random.normal(size=1000) df = pd.DataFrame({"normal": normal}) fig_4 = (alt.Viz(df) .hist(x="normal:O", bins=20) .configure(width=700, height=500, singleHeight=300)) # Band width will largely depend on your number of bins fig_4.encoding.x.band = alt.Band(size=30) display(HTML(html.render(fig_4))) """ Explanation: Altair supports some higher-level charts, such as histograms. 
For these high-level charts, you can pass your x-encoding straight into the hist method call: End of explanation """ normal = np.random.normal(size=1000) cats = np.random.choice(["A", "B", "C", "D"], size=1000) df = pd.DataFrame({"normal": normal, "cats": cats}) fig_5 = (alt.Viz(df) .hist(x="normal:O", color="cats", bins=20) .configure(width=700, height=500, singleHeight=300)) # Band width will largely depend on your number of bins fig_5.encoding.x.band = alt.Band(size=30) display(HTML(html.render(fig_5))) """ Explanation: You can also facet histograms by a second column/dimension, using the color keyword in the hist call: End of explanation """ cats = ['y1', 'y2', 'y3', 'y4'] date_cats = pd.Categorical.from_array(pd.date_range('1/1/2015', periods=365, freq='D')).astype(str) date_data = {"date": date_cats, "values": np.random.randint(0, 100, size=365), "categories": np.random.choice(cats, size=365)} df = pd.DataFrame(date_data) fig_6 = (alt.Viz(df) .encode(x="date:T", y="avg(values):Q") .configure(width=900, height=500, singleHeight=400) .line()) fig_6.encoding.x.band = alt.Band(size=50) fig_6.encoding.x.timeUnit = "month" display(HTML(html.render(fig_6))) """ Explanation: We can also build line charts with Time dimension types. The following example also uses the aggregation shorthand to average the y-values. End of explanation """
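The same aggregation shorthand can be combined with the bar mark shown earlier, for example to compare the average value per category. This is a hypothetical extra example; it assumes this pre-release Altair API accepts the avg() shorthand for bar charts just as it does for line charts.
# Hypothetical example (not in the original notebook): average value per category as bars.
fig_7 = (alt.Viz(df)
         .encode(x="categories:N", y="avg(values):Q")
         .configure(width=600, height=400, singleWidth=500, singleHeight=300)
         .bar())
fig_7.encoding.x.band = alt.Band(size=80)
display(HTML(html.render(fig_7)))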
KIPAC/StatisticalMethods
tutorials/microlensing.ipynb
gpl-2.0
exec(open('tbc.py').read()) # define TBC and TBC_above import numpy as np import matplotlib matplotlib.use('TkAgg') import matplotlib.pyplot as plt import scipy.stats as st %matplotlib inline import incredible as cr from corner import corner TBC() # dat = np.loadtxt('../ignore/phot.dat') # edit path if needed t = dat[:,0] I = dat[:,1] Ierr = dat[:,2] t0 = 2450000. t -= t0 """ Explanation: Tutorial: Fitting a Simple Microlensing Model to an OGLE Lightcurve Here we'll pick up from the introduction in the OGLE notebook. Our goal will be to 1. see our simpleminded Metropolis implementation struggle with a real-world problem, and then 2. apply a more efficient implementation or algorithm (using a package) for comparison. Data and model Let's read in the data, as in the last notebook. End of explanation """ data = {'t':t, 'I':I, 'Ierr':Ierr, 't0':t0} """ Explanation: For convenience, we'll organize the data in a dictionary as follows. End of explanation """ def model_I(t, I0, p, tmax, tE): """ Return the model lightcurve in magnitude units, I(t) """ TBC() TBC_above() """ Explanation: Next, copy over your model evaluation function from the ogle_lightcurve notebook. End of explanation """ TBC() # params = {'I0': ... put in broadly reasonable starting parameters from the previous notebook def log_prior(**params): TBC() TBC_above() # sanity check log_prior(**params) def log_likelihood(data, **params): TBC() TBC_above() # sanity check log_likelihood(data, **params) def log_posterior(data, **params): TBC() TBC_above() # sanity check log_posterior(data, **params) """ Explanation: Next, sketch the PGM and write out the probability expressions corresponding to the data set and the model given in the ogle_lightcurve notebook. (We'll think about the priors below.) TBC Finally, we need to chose priors. As always, you can experiment with different choices if you think they're justified. But for concreteness, and to enable comparison with a known solution, consider the following as a default. This seems like a situation where uniform priors are reasonable for all parameters. Note that $p\geq0$ is a physical requirement of the model definition (and $p>0$ is a numerical requirement, to avoid dividing by zero). Bounds for the prior distributions in $I_0$, $t_\mathrm{max}$ and $t_\mathrm{E}$ may not be obvious (strictly) a priori, but could be based on an absolutely minimal use of the data. For example, given that these lightcurves correspond to intervals the OGLE pipeline believes it found a microlensing event, it's reasonable to assume that $t_\mathrm{max}$ lies somewhere within the lightcurve, and similarly that the width $t_\mathrm{E}$ be less than the duration of the lightcurve, and that, for e.g., $I_0$ lies between the minimum and maximum of the measured $I(t)$ (maybe with an extra buffer of 1-2 magnitudes, if you want). Write down your chosen priors here. TBC Model implementation Implement log-prior, log-likelihood and log-posterior functions. The prototypes are of the same form we've been using, which is hopefully familiar now. For concreteness, and to agree with the argument list of model_I, let's call the parameters I0, p, tmax and tE in code, and also define a params dictionary as usual. The data argument will be the dictionary we actually called data above. End of explanation """ TBC() # proposal_distribution = {'I0': ... """ Explanation: You can either use your guess as a starting point for a chain, or insert some cells here and use a numerical minimizer to get params closer to the best fit. 
Fitting with simple Metropolis We'll first try to use your Metropolis implementation from the AGN photometry tutorials, and see how well that does. Define a proposal distribution with guesses for step sizes for each parameter, as we did in that notebook. End of explanation """ def propose(current_params, dists): TBC() def step(data, current_params, current_lnP, proposal_dists): TBC() TBC_above() """ Explanation: You can copy over your propose and step functions from that notebook also. End of explanation """ %%time current_lnP = log_posterior(data, **params) samples = np.zeros((10000, len(params))) for i in range(samples.shape[0]): params, current_lnP = step(data, params, current_lnP, proposal_distribution) samples[i,:] = [params['I0'], params['p'], params['tmax'], params['tE']] """ Explanation: Let's run a single chain to see how things are working. As before, you might want to go back and adjust the proposal distribution based on what you see below. There is nothing magic about the length of 10000 other than it's a nice round number that should be more than enough when we switch to a more advanced sampler. It should also be long enough for even a struggling sampler to move around at least a bit. End of explanation """ param_labels = [r'$I_0$', r'$p$', r'$t_{max}$', r'$t_E$'] plt.rcParams['figure.figsize'] = (16.0, 12.0) fig, ax = plt.subplots(len(param_labels), 1); cr.plot_traces(samples, ax, labels=param_labels) """ Explanation: Here we plot the traces as usual. End of explanation """ %%time chains = [np.zeros((10000, len(params))) for j in range(4)] for newsamples in chains: params = {'I0':st.norm.rvs(params['I0'], 5*np.std(samples[:,0])), 'p':st.norm.rvs(params['p'], 5*np.std(samples[:,1])), 'tmax':st.norm.rvs(params['tmax'], 5*np.std(samples[:,2])), 'tE':st.norm.rvs(params['tE'], 5*np.std(samples[:,3])) } current_lnP = log_posterior(data, **params) for i in range(samples.shape[0]): params, current_lnP = step(data, params, current_lnP, proposal_distribution) newsamples[i,:] = [params['I0'], params['p'], params['tmax'], params['tE']] print("Done with a chain") """ Explanation: Assuming this looks broadly reasonable, we can now run a few more, with dispersed starting points. Remember that these chains don't need to look perfect, since we're also going to use a more advanced sampler below. In fact, this is a pretty nasty parameter space, at least for my stupid Metropolis sampler, so dispersing starting points within very wide priors is a bad idea. The cell below disperses parameters by something like 5x the standard deviations of your test chain, which will hopefully be ok. End of explanation """ plt.rcParams['figure.figsize'] = (16.0, 12.0) fig, ax = plt.subplots(len(param_labels), 1); cr.plot_traces(chains, ax, labels=param_labels, Line2D_kwargs={'markersize':1.0}) """ Explanation: As always, our next move is to inspect the trace plots. Remember what we're looking for from the MCMC Diagnostics notebook? End of explanation """ TBC() # burn = ... chains = [chain[burn:,:] for chain in chains] """ Explanation: Remove burn-in: End of explanation """ TBC() # compute the Gelman-Rubin criterion for each parameter TBC() # compute the effective number of samples for each parameter """ Explanation: Have a look at the other diagnostics we covered. End of explanation """ np.mean(np.concatenate(chains, axis=0), axis=0) """ Explanation: Checkpoint: Without too much fiddling, I was able to get good convergence, but not a particularly large $n_\mathrm{eff}$ (hundreds). 
If your chains are similar, you're in good shape, considering! In fact, let's also quickly look at the posterior mean of each parameter as a cross check that your solution is broadly sound. Mine are about [19.822, 0.2662, 7434.4, 194.5]. Of course, this is only useful to know if you used the same data and priors. End of explanation """ corner(np.concatenate(chains, axis=0), labels=param_labels); """ Explanation: For completeness, let's make a quick triangle plot. If you have as few effectively independent samples as I do, it will be ugly! End of explanation """ TBC() # all the stuff above """ Explanation: Fit with a better sampler Now the fun, and more open-ended part! Fit the same data and model, but using a different sampler. This sampler can be provided by some Python package that you can pip or conda install. In fact, we encourage this, as it forces you to learn to use software that might be useful in general. You can use a more efficient Metropolis-Hastings sampler (e.g. with adaptation) or some other sampling method entirely. However, we would discourage treating these things as black boxes, so stick to methods where you have a reasonable idea what's happening under the hood. We refer you to the notes on More Sampling Methods and our incomplete list of sampling packages, though in principle you need not restrict yourself to these. Once you've installed and figured out how to use one of these things, run several chains as we did above and look at the usual diagnostics. (This assumes that the concept of "multiple chains" makes sense for the method you're using. If not, show and discuss whatever diagnostics make sense for that method.) Verify that you get essentially the same results as above (modulo the poorer sampling of simple our Metropolis implementation - a visual check is fine for this), and comment on the relative efficiency of the two algorithms. For compatibility with the remainder of the notebook, store your final list of samples in a single $N\times4$ array called samples. For multiple chains arranged as we're used to, like in the previous section, this could be done by samples = np.concatenate(chains, axis=0) (after removing burn-in). End of explanation """ mean_params = np.mean(samples, axis=0) plt.rcParams['figure.figsize'] = (7.0, 5.0) plt.errorbar(t, I, yerr=Ierr, fmt='none'); plt.xlabel('HJD - '+str(t0), fontsize=14); plt.ylabel('I-band magnitude', fontsize=14); plt.gca().invert_yaxis(); tgrid = np.linspace(t.min(), t.max(), 1000) plt.plot(tgrid, model_I(tgrid, *mean_params)); """ Explanation: TBC comments on the efficiency, results Compare the fitted model to the data As a simple check of whether the fit is reasonable, the cell below will plot the model curve defined by the posterior mean over the data. We can't possibly claim to be finished without looking at this! End of explanation """ TBC() """ Explanation: Summary of results Similarly, we're not done without finding the 1D marginalized best values and 68.3% credible intervals for each parameter. Do so. End of explanation """
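One simple way to produce the requested summary is to report the posterior median and the central 68.3% interval of each column of samples. The sketch below is a suggestion rather than the notebook's official solution; the percentile bounds 15.85 and 84.15 bracket the central 68.3%.
# Hypothetical summary (one possible solution; assumes `samples` is the final N-by-4 array).
for name, column in zip(['I0', 'p', 'tmax', 'tE'], samples.T):
    lo, med, hi = np.percentile(column, [15.85, 50.0, 84.15])
    print("{}: {:.4g} (+{:.2g} / -{:.2g})".format(name, med, hi - med, med - lo))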
MarkPinches/Metrum-Institute
MI250 Lab1 simple regression example.ipynb
gpl-3.0
from pymc3 import Model, Normal, Uniform, NUTS, sample, find_MAP, traceplot, summary, df_summary, trace_to_dataframe import numpy as np import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline """ Explanation: Introduction This notebook is a port to pymc3 of the example given in the Metrum Institutes MI250: Introduction to Bayesian PK-PD Modelling and Simulation(2010) Lab1 session. It illustrates the ease with which Bayesian models can be created and analysed with pymc3. It utilises a very simple linear regression model as described in the youtube video, that can be found here... https://www.youtube.com/watch?v=ozbQ9MtKuj4 other acknowledgements go to Jonathan Sedar for his pymc3 blog posts http://blog.applied.ai/bayesian-inference-with-pymc3-part-1/ From which I used some code to draw the posterior predictive check function Import the required modules and set matplotlib to plot in the notebook End of explanation """ x = np.array([1,2,3,4,5,6,7,8,9,10]) y =np.array([5.19, 6.56, 9.19, 8.09, 7.6, 7.08, 6.74, 9.3, 9.98, 11.5]) df = pd.DataFrame({'x':x, 'y':y}) plt.scatter(x,y) plt.xlabel('x') plt.ylabel('y') """ Explanation: Create dataset and plot End of explanation """ basic_model = Model() with basic_model: # Priors for unknown model parameters alpha = Normal('alpha', mu=0, sd=100) beta = Normal('beta', mu=0, sd=100) sigma = Uniform('sigma', lower=0, upper=10000) # Expected value of outcome mu = alpha + beta*x # Likelihood (sampling distribution) of observations Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=y) """ Explanation: Create the model pymc3 follows a standard structure for model creation. create an instance of the model with that model, assign the priors, the expected outcome and then the Likelihood distribution of the expected values End of explanation """ #find_MAP can be used to establish a useful starting point for the sampling map_estimate = find_MAP(model=basic_model) print(map_estimate) """ Explanation: Priors.... 
End of explanation """ import scipy with basic_model: # obtain starting values via MAP start = find_MAP(fmin=scipy.optimize.fmin_powell) # draw 2000 posterior samples trace = sample(2000, start=start) """ Explanation: Find a starting point via MAP, then perform MCMC using the NUTS sampler for 2000 iterations End of explanation """
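For clarity, the NUTS step can also be constructed explicitly instead of letting sample choose it. The variant below is hypothetical and assumes the pymc3 version used here accepts a scaling argument for NUTS and a step keyword for sample; it re-runs the sampling into a separate trace.
with basic_model:
    # Hypothetical variant (not in the original notebook): explicit No-U-Turn sampler,
    # scaled at the MAP point found above.
    step = NUTS(scaling=start)
    trace_explicit = sample(2000, step=step, start=start)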
esa-as/2016-ml-contest
LiamLearn/K-fold_CV_F1_score__MATT.ipynb
apache-2.0
import pandas as pd training_data = pd.read_csv('../training_data.csv') """ Explanation: 'Grouped' k-fold CV A quick demo by Matt In cross-validating, we'd like to drop out one well at a time. LeaveOneGroupOut is good for this: End of explanation """ X = training_data.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1).values y = training_data['Facies'].values """ Explanation: Isolate X and y: End of explanation """ wells = training_data["Well Name"].values """ Explanation: We want the well names to use as groups in the k-fold analysis, so we'll get those too: End of explanation """ from sklearn.svm import SVC from sklearn.model_selection import LeaveOneGroupOut logo = LeaveOneGroupOut() for train, test in logo.split(X, y, groups=wells): well_name = wells[test[0]] score = SVC().fit(X[train], y[train]).score(X[test], y[test]) print("{:>20s} {:.3f}".format(well_name, score)) """ Explanation: Now we train as normal, but LeaveOneGroupOut gives us the approriate indices from X and y to test against one well at a time: End of explanation """
amehrjou/amehrjou.github.io
markdown_generator/publications.ipynb
mit
!cat publications.tsv """ Explanation: Publications markdown generator for academicpages Takes a TSV of publications with metadata and converts them for use with academicpages.github.io. This is an interactive Jupyter notebook (see more info here). The core python code is also in publications.py. Run either from the markdown_generator folder after replacing publications.tsv with one containing your data. TODO: Make this work with BibTex and other databases of citations, rather than Stuart's non-standard TSV format and citation style. Data format The TSV needs to have the following columns: pub_date, title, venue, excerpt, citation, site_url, and paper_url, with a header at the top. excerpt and paper_url can be blank, but the others must have values. pub_date must be formatted as YYYY-MM-DD. url_slug will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be YYYY-MM-DD-[url_slug].md and the permalink will be https://[yourdomain]/publications/YYYY-MM-DD-[url_slug] This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create). End of explanation """ import pandas as pd """ Explanation: Import pandas We are using the very handy pandas library for dataframes. End of explanation """ publications = pd.read_csv("publications.tsv", sep="\t", header=0) publications """ Explanation: Import TSV Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or \t. I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others. End of explanation """ html_escape_table = { "&": "&amp;", '"': "&quot;", "'": "&apos;" } def html_escape(text): """Produce entities within text.""" return "".join(html_escape_table.get(c,c) for c in text) """ Explanation: Escape special characters YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivilents. This makes them look not so readable in raw format, but they are parsed and rendered nicely. End of explanation """ import os for row, item in publications.iterrows(): md_filename = str(item.pub_date) + "-" + item.url_slug + ".md" html_filename = str(item.pub_date) + "-" + item.url_slug year = item.pub_date[:4] ## YAML variables md = "---\ntitle: \"" + item.title + '"\n' md += """collection: publications""" md += """\npermalink: /publication/""" + html_filename if len(str(item.excerpt)) > 5: md += "\nexcerpt: '" + html_escape(item.excerpt) + "'" md += "\ndate: " + str(item.pub_date) md += "\nvenue: '" + html_escape(item.venue) + "'" if len(str(item.paper_url)) > 5: md += "\npaperurl: '" + item.paper_url + "'" md += "\ncitation: '" + html_escape(item.citation) + "'" md += "\n---" ## Markdown description for individual page if len(str(item.excerpt)) > 5: md += "\n" + html_escape(item.excerpt) + "\n" if len(str(item.paper_url)) > 5: md += "\n[Download paper here](" + item.paper_url + ")\n" md += "\nRecommended citation: " + item.citation md_filename = os.path.basename(md_filename) with open("../_publications/" + md_filename, 'w') as f: f.write(md) """ Explanation: Creating the markdown files This is where the heavy lifting is done. 
This loops through all the rows in the TSV dataframe, then starts to concatentate a big string (md) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page. End of explanation """ !ls ../_publications/ !cat ../_publications/2009-10-01-paper-title-number-1.md """ Explanation: These files are in the publications directory, one directory below where we're working from. End of explanation """
QuantCrimAtLeeds/PredictCode
notebooks/kernel_estimation.ipynb
artistic-2.0
data = np.random.normal(loc=2.0, scale=1.5, size=20) kernel = scipy.stats.gaussian_kde(data) fig, ax = plt.subplots(figsize=(10,5)) x = np.linspace(-1, 5, 100) var = 2 * 1.5 ** 2 y = np.exp(-(x-2)**2/var) / np.sqrt(var * np.pi) ax.plot(x, y, color="red", linewidth=1) y = kernel(x) ax.plot(x, y, color="blue", linewidth=2) _ = ax.legend(["Actual", "Predicted"]) _ = ax.scatter(data, [0] * data) """ Explanation: Scipy gaussian kernel estimation Scipy comes with gaussian KDE out of the box: scipy.stats.gaussian_kde. As the docs say: It includes automatic bandwidth determination. The estimation works best for a unimodal distribution; bimodal or multi-modal distributions tend to be oversmoothed. For multi-dimensional data, the convention is: ... otherwise a 2-D array with shape (# of dims, # of data). End of explanation """ def measure(n): m1 = np.random.normal(size=n) m2 = np.random.normal(scale=0.5, size=n) return m1 + m2, m1 - m2 def actual_kernel(point): x, y = point[0], point[1] # m2 = 0.5 * np.random.normal # Transform matrix is: A = 1 1/2 # 1 -1/2 # So covariance matrix is AA^* = 5/4 3/4 # 3/4 5/4 a = x * (5 * x - 3 * y) / 4 + y * (-3 * x + 5 * y) / 4 return np.exp(-a/2) / 2*np.pi m1, m2 = measure(2000) xmin, xmax = min(m1), max(m1) ymin, ymax = min(m2), max(m2) data_2d = np.vstack([m1, m2]) kernel_2d = scipy.stats.gaussian_kde(data_2d) X, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j] positions = np.vstack([X.ravel(), Y.ravel()]) Z = np.reshape(kernel_2d(positions).T, X.shape) Z_actual = np.reshape(actual_kernel(positions).T, X.shape) fig, ax = plt.subplots(ncols=3, figsize=(16,10)) for i, z in enumerate([Z, Z, Z_actual]): ax[i].imshow(np.rot90(z), cmap=plt.cm.gist_earth_r, extent=[xmin, xmax, ymin, ymax]) ax[i].set_aspect(1) ax[0].plot(m1, m2, 'k.', markersize=2, alpha=0.3) ax[0].set_title("Estimated kernel and data") ax[1].set_title("Estimated kernel") ax[2].set_title("Actual kernel") None """ Explanation: Two dimensional case This example is from the scipy docs End of explanation """ import open_cp.kernels def plot_1d_knn(ax, k): kth_kernel_1d = open_cp.kernels.kth_nearest_neighbour_gaussian_kde(data, k=k) x = np.linspace(-1, 5, 100) var = 2 * 1.5 ** 2 y = np.exp(-(x-2)**2/var) / np.sqrt(var * np.pi) ax.plot(x, y, color="red", linewidth=1) y = kth_kernel_1d(x) ax.plot(x, y, color="blue", linewidth=2) ax.legend(["Actual", "Predicted"]) ax.scatter(data, data * 0, alpha=0.5) ax.set_title("Using k={}".format(k)) fig, ax = plt.subplots(ncols=3, figsize=(18,5)) plot_1d_knn(ax[0], 3) plot_1d_knn(ax[1], 6) plot_1d_knn(ax[2], 9) """ Explanation: Nearest neighbour bandwidth selection This algorithm is described in Mohler et al, "Self-Exciting Point Process Modeling of Crime", Journal of the American Statistical Association, 2011 doi:10.1198/jasa.2011.ap09546. See the Appendix. Suppose we have data points $(x_i)$ where each $x_i = (x^{(i)}_1, \cdots, x^{(i)}_n)$ is a vector. Fix an integer $k$. Let $\sigma_i^2$ be the sample variance for each coordinate Scale each coordinate by $\sigma_i$. For the scaled data, let $D_j$ be the distance from point $j$ to the $k$th nearest neighbour in the scaled dataset Each point $j$ in coordinate $i$ will the contribute a gaussian kernel centred at $x^{(j)}_i$ and with variance $D_j \sigma_i$. 
Thus the total estimated kernel is $$ k(x) = \frac{1}{N} \sum_{j=1}^N \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma_i^2 D_j^2}} \exp\Big( \frac{(x_i - x^{(j)}_i)^2}{2\sigma_i^2D_j^2} \Big) $$ End of explanation """ def distances(pts, point): dists = [abs(x - point) for x in pts] dists.sort() return dists dists = [ distances(data, x)[6] for x in data ] fig, ax = plt.subplots(figsize=(12,6)) x = np.linspace(-1, 5, 150) yy = np.zeros_like(x) for pt, d in zip(data, dists): y = scipy.stats.norm(loc=pt, scale=d).pdf(x) yy += y ax.plot(x, y, alpha=0.5) ax.plot(x, yy/len(data), color="blue") clamped = data[ (-1<=data) & (data<=5) ] ax.scatter(clamped, [-0.1 for _ in clamped]) ax.set(xlim=[-1,5]) None """ Explanation: In the one dimensional case, we don't need to rescale the data, so we can visualise easily what happens. Let's look at $k=6$ What we end up with is a load of gaussian curves, one for each data point, centred at that data point, and with a variance chosen so that, in this case, the 6 nearest neighbours lie in one standard deviation. We then take the average to get our estimated kernel. End of explanation """ kth_kernel_2d = open_cp.kernels.kth_nearest_neighbour_gaussian_kde(data_2d, k=20) X, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j] positions = np.vstack([X.ravel(), Y.ravel()]) Z = np.reshape(kth_kernel_2d(positions).T, X.shape) fig, ax = plt.subplots(ncols=3, figsize=(16,10)) for i, z in enumerate([Z, Z, Z_actual]): ax[i].imshow(np.rot90(z), cmap=plt.cm.gist_earth_r, extent=[xmin, xmax, ymin, ymax]) ax[i].set_aspect(1) ax[0].plot(m1, m2, 'k.', markersize=2, alpha=0.3) ax[0].set_title("Estimated kernel and data") ax[1].set_title("Estimated kernel") ax[2].set_title("Actual kernel") None """ Explanation: Two dimensional case Here with $k=20$ which is probably somewhat too small for this data. End of explanation """ import scipy.integrate def x_marginal(kernel, x): return scipy.integrate.quad(lambda y : kernel([x,y]), -5, 5)[0] def y_marginal(kernel, y): return scipy.integrate.quad(lambda x : kernel([x,y]), -5, 5)[0] fig, ax = plt.subplots(ncols=2, figsize=(15,5)) x = np.linspace(-4, 4, 100) inted = [x_marginal(kth_kernel_2d, xx) for xx in x] ax[0].scatter(x, inted, alpha=0.7) marginal = open_cp.kernels.marginal_knng(data_2d, 0, k=20) comp = marginal(x) ax[0].plot(x, comp, color="red") inted = [y_marginal(kth_kernel_2d, xx) for xx in x] ax[1].scatter(x, inted, alpha=0.7) marginal = open_cp.kernels.marginal_knng(data_2d, 1, k=20) comp = marginal(x) ax[1].plot(x, comp, color="red") None """ Explanation: One downside to this algorithm is that it is rather slow: the resulting kernel takes $O(Nk)$ to evaluation, with $N$ data points and $k$ dimensions. Computing marginals In the analysis of the kernels produced, it is often useful to compute marginals-- i.e. integrate out all but one variable. This is slow to do numerically, but given the special form of the kernel, quite easy to calculate explicity. 
The kernel is $$ k(x) = \frac{1}{N} \sum_{j=1}^N \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma_i D_j}} \exp\Big( - \frac{(x_i - x^{(j)}_i)^2}{2\sigma_i^2D_j^2} \Big) $$ If we (without loss of generality) integrate out all but the first variable, as everything is independent and a normalised Gaussian, we get $$ \int \cdots \int k((x_1,\cdots,x_n)) \ d x_2 \cdots \ d x_n = \frac{1}{N} \sum_{j=1}^N \frac{1}{\sqrt{2\pi\sigma_1^2 D_j^2}} \exp\Big( - \frac{(x_1 - x^{(j)}_1)^2}{2\sigma_1^2D_j^2} \Big) $$ End of explanation """ def space_time_sample(length=1): t = np.random.exponential(10, size=length) x = np.random.normal(scale=0.01, size=length) y = np.random.normal(scale=0.1, size=length) return np.vstack([t,x,y]) data_3d = space_time_sample(100) kth_kernel_3d = open_cp.kernels.kth_nearest_neighbour_gaussian_kde(data_3d, k=20) def plot_actual_marginals(): def actual_t_marginal(t): return np.exp(-t/10)/10 def actual_x_marginal(x): return np.exp(-x*x/(2*0.01**2)) / np.sqrt(2*np.pi*0.01**2) def actual_y_marginal(y): return np.exp(-y*y/(2*0.1**2)) / np.sqrt(2*np.pi*0.1**2) fig, ax = plt.subplots(ncols=3, figsize=(16,5)) tc = np.linspace(0,60,100) actual = actual_t_marginal(tc) ax[0].plot(tc, actual, color="red", linewidth=1) xc = np.linspace(-0.05,0.05,100) actual = actual_x_marginal(xc) ax[1].plot(xc, actual, color="red", linewidth=1) yc = np.linspace(-0.5,0.5,100) actual = actual_y_marginal(yc) ax[2].plot(yc, actual, color="red", linewidth=1) return ax def tx_marginal(t, x): return scipy.integrate.quad(lambda y : kth_kernel_3d([t,x,y]), -1, 1)[0] def ty_marginal(t, y): return scipy.integrate.quad(lambda x : kth_kernel_3d([t,x,y]), -0.2, 0.2)[0] def t_marginal(t): return scipy.integrate.quad(lambda x : tx_marginal(t, x), -0.2, 0.2)[0] def x_marginal(x): return scipy.integrate.quad(lambda t : tx_marginal(t, x), -30, 100)[0] def y_marginal(y): return scipy.integrate.quad(lambda t : ty_marginal(t, y), -30, 100)[0] def plot_calculated_estimated_marginals(ax, data, k): tc = np.linspace(0,60,100) ax[0].plot(tc, open_cp.kernels.marginal_knng(data, 0, k)(tc)) xc = np.linspace(-0.05,0.05,100) ax[1].plot(xc, open_cp.kernels.marginal_knng(data, 1, k)(xc)) yc = np.linspace(-0.5,0.5,100) ax[2].plot(yc, open_cp.kernels.marginal_knng(data, 2, k)(yc)) return tc, xc, yc ax = plot_actual_marginals() tc, xc, yc = plot_calculated_estimated_marginals(ax, data_3d, k=20) actual = [t_marginal(t) for t in tc] ax[0].scatter(tc, actual, alpha=0.5) actual = [x_marginal(x) for x in xc] ax[1].scatter(xc, actual, alpha=0.5) actual = [y_marginal(y) for y in yc] ax[2].scatter(yc, actual, alpha=0.5) None """ Explanation: Example from Mohler et al. We sample from a three dimensional kernel: $$ g(t,x,y) = \frac{\omega}{2\pi\sigma_x\sigma_y} \exp(-\omega t) \exp\Big(-\frac{x^2}{2\sigma_x^2}\Big)\exp\Big(-\frac{y^2}{2\sigma_y^2}\Big) $$ with parameters $\sigma_x=0.01, \sigma_y=0.1, \omega^{-1}=10$. Let us start with a small sample, and k=20. 
End of explanation """ def two_dim_slices(pts, alpha=0.5): fig, ax = plt.subplots(ncols=3, figsize=(16,5)) ax[0].scatter(pts[0], pts[1], alpha=alpha) ax[0].set(xlabel="t", ylabel="x") ax[1].scatter(pts[0], pts[2], alpha=alpha) ax[1].set(xlabel="t", ylabel="y") ax[2].scatter(pts[1], pts[2], alpha=alpha) ax[2].set(xlabel="x", ylabel="y") normalised = data_3d / np.std(data_3d, axis=1, ddof=1)[:,None] two_dim_slices(normalised) data_3d = space_time_sample(1000) normalised = data_3d / np.std(data_3d, axis=1, ddof=1)[:,None] two_dim_slices(normalised, alpha=0.2) kth_kernel_3d = open_cp.kernels.kth_nearest_neighbour_gaussian_kde(data_3d, k=20) ax = plot_actual_marginals() _ = plot_calculated_estimated_marginals(ax, data_3d, k=20) """ Explanation: We see very good agreement between the numerical integration and the calculated marginals. However, we see rather poor estimation of the actual kernels! End of explanation """
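The agreement seen in the plots can also be quantified crudely by comparing the estimated time marginal with the true exponential density on a grid. This is a hypothetical addition that reuses only functions already introduced above.
# Hypothetical numerical check (not in the original notebook).
tc = np.linspace(0, 60, 100)
estimated = open_cp.kernels.marginal_knng(data_3d, 0, k=20)(tc)
true_density = np.exp(-tc / 10) / 10  # the omega * exp(-omega t) time marginal with omega = 1/10
print("Mean absolute deviation of the time marginal:", np.mean(np.abs(estimated - true_density)))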
RoebideBruijn/datascience-intensive-course
exercises/naive_bayes/Mini_Project_Naive_Bayes.ipynb
mit
%matplotlib inline import numpy as np import scipy as sp import matplotlib as mpl import matplotlib.cm as cm import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from six.moves import range import seaborn as sns # Setup Pandas pd.set_option('display.width', 500) pd.set_option('display.max_columns', 100) pd.set_option('display.notebook_repr_html', True) # Setup Seaborn sns.set_style("whitegrid") sns.set_context("poster") """ Explanation: Basic Text Classification with Naive Bayes In the mini-project, you'll learn the basics of text analysis using a subset of movie reviews from the rotten tomatoes database. You'll also use a fundamental technique in Bayesian inference, called Naive Bayes. This mini-project is based on Lab 10 of Harvard's CS109 class. Please free to go to the original lab for additional exercises and solutions. End of explanation """ critics = pd.read_csv('./critics.csv') #let's drop rows with missing quotes critics = critics[~critics.quote.isnull()] critics.head() """ Explanation: Table of Contents Rotten Tomatoes Dataset Explore The Vector Space Model and a Search Engine In Code Naive Bayes Multinomial Naive Bayes and Other Likelihood Functions Picking Hyperparameters for Naive Bayes and Text Maintenance Interpretation Rotten Tomatoes Dataset End of explanation """ n_reviews = len(critics) n_movies = critics.rtid.unique().size n_critics = critics.critic.unique().size print("Number of reviews: {:d}".format(n_reviews)) print("Number of critics: {:d}".format(n_critics)) print("Number of movies: {:d}".format(n_movies)) df = critics.copy() df['fresh'] = df.fresh == 'fresh' grp = df.groupby('critic') counts = grp.critic.count() # number of reviews by each critic means = grp.fresh.mean() # average freshness for each critic means[counts > 100].hist(bins=10, edgecolor='w', lw=1) plt.xlabel("Average Rating per critic") plt.ylabel("Number of Critics") plt.yticks([0, 2, 4, 6, 8, 10]); """ Explanation: Explore End of explanation """ # Most critics give on average more times the qualification 'fresh' than 'rotten'. # None of them give only 'fresh' or 'rotten'. # There is a remarkable dip at 55-60% and peak at 60-65%. # The distribution looks a bit like a Bernouilli distribution (2 separate ones). # The dip and peak is interesting and no extremes. # Nobody reviews only films he/she likes or hates. # There a few sour people who write more reviews about films they hate than like, # but most rather make a review about a film they like or most people like more movies they watch than hate. # The dip/peak could be that there are people who prefer to write about what they like # and people who prefer to write about what they hate. There are not that many poeple in between 55-60%. """ Explanation: <div class="span5 alert alert-info"> <h3>Exercise Set I</h3> <br/> <b>Exercise:</b> Look at the histogram above. Tell a story about the average ratings per critic. What shape does the distribution look like? What is interesting about the distribution? What might explain these interesting things? 
</div> End of explanation """ from sklearn.feature_extraction.text import CountVectorizer text = ['Hop on pop', 'Hop off pop', 'Hop Hop hop'] print("Original text is\n{}".format('\n'.join(text))) vectorizer = CountVectorizer(min_df=0) # call `fit` to build the vocabulary vectorizer.fit(text) # call `transform` to convert text to a bag of words x = vectorizer.transform(text) # CountVectorizer uses a sparse array to save memory, but it's easier in this assignment to # convert back to a "normal" numpy array x = x.toarray() print("") print("Transformed text vector is \n{}".format(x)) # `get_feature_names` tracks which word is associated with each column of the transformed x print("") print("Words for each feature:") print(vectorizer.get_feature_names()) # Notice that the bag of words treatment doesn't preserve information about the *order* of words, # just their frequency def make_xy(critics, vectorizer=None): #Your code here if vectorizer is None: vectorizer = CountVectorizer() X = vectorizer.fit_transform(critics.quote) X = X.tocsc() # some versions of sklearn return COO format y = (critics.fresh == 'fresh').values.astype(np.int) return X, y X, y = make_xy(critics) """ Explanation: The Vector Space Model and a Search Engine All the diagrams here are snipped from Introduction to Information Retrieval by Manning et. al. which is a great resource on text processing. For additional information on text mining and natural language processing, see Foundations of Statistical Natural Language Processing by Manning and Schutze. Also check out Python packages nltk, spaCy, pattern, and their associated resources. Also see word2vec. Let us define the vector derived from document $d$ by $\bar V(d)$. What does this mean? Each document is treated as a vector containing information about the words contained in it. Each vector has the same length and each entry "slot" in the vector contains some kind of data about the words that appear in the document such as presence/absence (1/0), count (an integer) or some other statistic. Each vector has the same length because each document shared the same vocabulary across the full collection of documents -- this collection is called a corpus. To define the vocabulary, we take a union of all words we have seen in all documents. We then just associate an array index with them. So "hello" may be at index 5 and "world" at index 99. Suppose we have the following corpus: A Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree. The grapes seemed ready to burst with juice, and the Fox's mouth watered as he gazed longingly at them. Suppose we treat each sentence as a document $d$. 
The vocabulary (often called the lexicon) is the following: $V = \left{\right.$ a, along, and, as, at, beautiful, branches, bunch, burst, day, fox, fox's, from, gazed, grapes, hanging, he, juice, longingly, mouth, of, one, ready, ripe, seemed, spied, the, them, to, trained, tree, vine, watered, with$\left.\right}$ Then the document A Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree may be represented as the following sparse vector of word counts: $$\bar V(d) = \left( 4,1,0,0,0,1,1,1,0,1,1,0,1,0,1,1,0,0,0,0,2,1,0,1,0,0,1,0,0,0,1,1,0,0 \right)$$ or more succinctly as [(0, 4), (1, 1), (5, 1), (6, 1), (7, 1), (9, 1), (10, 1), (12, 1), (14, 1), (15, 1), (20, 2), (21, 1), (23, 1), (26, 1), (30, 1), (31, 1)] along with a dictionary { 0: a, 1: along, 5: beautiful, 6: branches, 7: bunch, 9: day, 10: fox, 12: from, 14: grapes, 15: hanging, 19: mouth, 20: of, 21: one, 23: ripe, 24: seemed, 25: spied, 26: the, 30: tree, 31: vine, } Then, a set of documents becomes, in the usual sklearn style, a sparse matrix with rows being sparse arrays representing documents and columns representing the features/words in the vocabulary. Notice that this representation loses the relative ordering of the terms in the document. That is "cat ate rat" and "rat ate cat" are the same. Thus, this representation is also known as the Bag-Of-Words representation. Here is another example, from the book quoted above, although the matrix is transposed here so that documents are columns: Such a matrix is also catted a Term-Document Matrix. Here, the terms being indexed could be stemmed before indexing; for instance, jealous and jealousy after stemming are the same feature. One could also make use of other "Natural Language Processing" transformations in constructing the vocabulary. We could use Lemmatization, which reduces words to lemmas: work, working, worked would all reduce to work. We could remove "stopwords" from our vocabulary, such as common words like "the". We could look for particular parts of speech, such as adjectives. This is often done in Sentiment Analysis. And so on. It all depends on our application. From the book: The standard way of quantifying the similarity between two documents $d_1$ and $d_2$ is to compute the cosine similarity of their vector representations $\bar V(d_1)$ and $\bar V(d_2)$: $$S_{12} = \frac{\bar V(d_1) \cdot \bar V(d_2)}{|\bar V(d_1)| \times |\bar V(d_2)|}$$ There is a far more compelling reason to represent documents as vectors: we can also view a query as a vector. Consider the query q = jealous gossip. This query turns into the unit vector $\bar V(q)$ = (0, 0.707, 0.707) on the three coordinates below. The key idea now: to assign to each document d a score equal to the dot product: $$\bar V(q) \cdot \bar V(d)$$ Then we can use this simple Vector Model as a Search engine. In Code End of explanation """ #your turn from sklearn.model_selection import train_test_split from sklearn.naive_bayes import MultinomialNB X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123) clf = MultinomialNB() clf.fit(X_train, y_train) print('train:', round(clf.score(X_train, y_train) * 100, 2), '%') print('test:', round(clf.score(X_test, y_test) * 100, 2), '%') # The classifier overfitted a lot on the training set, it's not general. # The accuracy score is a lot higher on the training set. 
""" Explanation: Naive Bayes From Bayes' Theorem, we have that $$P(c \vert f) = \frac{P(c \cap f)}{P(f)}$$ where $c$ represents a class or category, and $f$ represents a feature vector, such as $\bar V(d)$ as above. We are computing the probability that a document (or whatever we are classifying) belongs to category c given the features in the document. $P(f)$ is really just a normalization constant, so the literature usually writes Bayes' Theorem in context of Naive Bayes as $$P(c \vert f) \propto P(f \vert c) P(c) $$ $P(c)$ is called the prior and is simply the probability of seeing class $c$. But what is $P(f \vert c)$? This is the probability that we see feature set $f$ given that this document is actually in class $c$. This is called the likelihood and comes from the data. One of the major assumptions of the Naive Bayes model is that the features are conditionally independent given the class. While the presence of a particular discriminative word may uniquely identify the document as being part of class $c$ and thus violate general feature independence, conditional independence means that the presence of that term is independent of all the other words that appear within that class. This is a very important distinction. Recall that if two events are independent, then: $$P(A \cap B) = P(A) \cdot P(B)$$ Thus, conditional independence implies $$P(f \vert c) = \prod_i P(f_i | c) $$ where $f_i$ is an individual feature (a word in this example). To make a classification, we then choose the class $c$ such that $P(c \vert f)$ is maximal. There is a small caveat when computing these probabilities. For floating point underflow we change the product into a sum by going into log space. This is called the LogSumExp trick. So: $$\log P(f \vert c) = \sum_i \log P(f_i \vert c) $$ There is another caveat. What if we see a term that didn't exist in the training data? This means that $P(f_i \vert c) = 0$ for that term, and thus $P(f \vert c) = \prod_i P(f_i | c) = 0$, which doesn't help us at all. Instead of using zeros, we add a small negligible value called $\alpha$ to each count. This is called Laplace Smoothing. $$P(f_i \vert c) = \frac{N_{ic}+\alpha}{N_c + \alpha N_i}$$ where $N_{ic}$ is the number of times feature $i$ was seen in class $c$, $N_c$ is the number of times class $c$ was seen and $N_i$ is the number of times feature $i$ was seen globally. $\alpha$ is sometimes called a regularization parameter. Multinomial Naive Bayes and Other Likelihood Functions Since we are modeling word counts, we are using variation of Naive Bayes called Multinomial Naive Bayes. This is because the likelihood function actually takes the form of the multinomial distribution. $$P(f \vert c) = \frac{\left( \sum_i f_i \right)!}{\prod_i f_i!} \prod_{f_i} P(f_i \vert c)^{f_i} \propto \prod_{i} P(f_i \vert c)$$ where the nasty term out front is absorbed as a normalization constant such that probabilities sum to 1. There are many other variations of Naive Bayes, all which depend on what type of value $f_i$ takes. If $f_i$ is continuous, we may be able to use Gaussian Naive Bayes. First compute the mean and variance for each class $c$. 
Then the likelihood, $P(f \vert c)$ is given as follows $$P(f_i = v \vert c) = \frac{1}{\sqrt{2\pi \sigma^2_c}} e^{- \frac{\left( v - \mu_c \right)^2}{2 \sigma^2_c}}$$ <div class="span5 alert alert-info"> <h3>Exercise Set II</h3> <p><b>Exercise:</b> Implement a simple Naive Bayes classifier:</p> <ol> <li> split the data set into a training and test set <li> Use `scikit-learn`'s `MultinomialNB()` classifier with default parameters. <li> train the classifier over the training set and test on the test set <li> print the accuracy scores for both the training and the test sets </ol> What do you notice? Is this a good classifier? If not, why not? </div> End of explanation """ # Your turn. X_df = pd.DataFrame(X.toarray()) print(X_df.shape) freq = X_df.sum() print(max(freq)) # to know the nr of bins so each nr of words is in one bin print(sum(freq==1)/len(freq)) # to check if the plot is ok (gives this proportion at unique words for docs) plt.hist(freq, cumulative=True, normed=True, bins=16805) plt.hist(freq, cumulative=True, normed=True, bins=16805) plt.xlim([0,100]) # to see where the plateau starts # I would put max_df at 20 plt.hist(freq, cumulative=True, normed=True, bins=16805) plt.xlim([0,20]) # to see the steep climb # It starts to climb steeply immediately so I would choose 2 """ Explanation: Picking Hyperparameters for Naive Bayes and Text Maintenance We need to know what value to use for $\alpha$, and we also need to know which words to include in the vocabulary. As mentioned earlier, some words are obvious stopwords. Other words appear so infrequently that they serve as noise, and other words in addition to stopwords appear so frequently that they may also serve as noise. First, let's find an appropriate value for min_df for the CountVectorizer. min_df can be either an integer or a float/decimal. If it is an integer, min_df represents the minimum number of documents a word must appear in for it to be included in the vocabulary. If it is a float, it represents the minimum percentage of documents a word must appear in to be included in the vocabulary. From the documentation: min_df: When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None. <div class="span5 alert alert-info"> <h3>Exercise Set III</h3> <p><b>Exercise:</b> Construct the cumulative distribution of document frequencies (df). The $x$-axis is a document count $x_i$ and the $y$-axis is the percentage of words that appear less than $x_i$ times. For example, at $x=5$, plot a point representing the percentage or number of words that appear in 5 or fewer documents.</p> <p><b>Exercise:</b> Look for the point at which the curve begins climbing steeply. This may be a good value for `min_df`. If we were interested in also picking `max_df`, we would likely pick the value where the curve starts to plateau. What value did you choose?</p> </div> End of explanation """ from sklearn.model_selection import KFold def cv_score(clf, X, y, scorefunc): result = 0. nfold = 5 for train, test in KFold(nfold).split(X): # split data into train/test groups, 5 times clf.fit(X[train], y[train]) # fit the classifier, passed is as clf. 
result += scorefunc(clf, X[test], y[test]) # evaluate score function on held-out data return result / nfold # average """ Explanation: The parameter $\alpha$ is chosen to be a small value that simply avoids having zeros in the probability computations. This value can sometimes be chosen arbitrarily with domain expertise, but we will use K-fold cross validation. In K-fold cross-validation, we divide the data into $K$ non-overlapping parts. We train on $K-1$ of the folds and test on the remaining fold. We then iterate, so that each fold serves as the test fold exactly once. The function cv_score performs the K-fold cross-validation algorithm for us, but we need to pass a function that measures the performance of the algorithm on each fold. End of explanation """ def log_likelihood(clf, x, y): prob = clf.predict_log_proba(x) rotten = y == 0 fresh = ~rotten return prob[rotten, 0].sum() + prob[fresh, 1].sum() """ Explanation: We use the log-likelihood as the score here in scorefunc. The higher the log-likelihood, the better. Indeed, what we do in cv_score above is to implement the cross-validation part of GridSearchCV. The custom scoring function scorefunc allows us to use different metrics depending on the decision risk we care about (precision, accuracy, profit etc.) directly on the validation set. You will often find people using roc_auc, precision, recall, or F1-score as the scoring function. End of explanation """ from sklearn.model_selection import train_test_split _, itest = train_test_split(range(critics.shape[0]), train_size=0.7) mask = np.zeros(critics.shape[0], dtype=np.bool) mask[itest] = True """ Explanation: We'll cross-validate over the regularization parameter $\alpha$. Let's set up the train and test masks first, and then we can run the cross-validation procedure. End of explanation """ # The log likelihood function calculates the summed logged probabilities and adds these for both classes. # A higher loglikelihood means a better algorithm. # Namely, we try to optimize that the algorithm predicts with a high probability that the sample # belongs to the class it should belong too. # Then the data becomes less important and the regularization has more influence. # Hence the algorithm will have a harder time to learn from the data and will become more random. from sklearn.naive_bayes import MultinomialNB #the grid of parameters to search over alphas = [.1, 1, 5, 10, 50] best_min_df = 2 # YOUR TURN: put your value of min_df here. #Find the best value for alpha and min_df, and the best classifier best_alpha = 1 maxscore=-np.inf for alpha in alphas: vectorizer = CountVectorizer(min_df=best_min_df) Xthis, ythis = make_xy(critics, vectorizer) Xtrainthis = Xthis[mask] ytrainthis = ythis[mask] # your turn clf = MultinomialNB(alpha=alpha) print(alpha, cv_score(clf, Xtrainthis, ytrainthis, log_likelihood)) print("alpha: {}".format(best_alpha)) """ Explanation: <div class="span5 alert alert-info"> <h3>Exercise Set IV</h3> <p><b>Exercise:</b> What does using the function `log_likelihood` as the score mean? What are we trying to optimize for?</p> <p><b>Exercise:</b> Without writing any code, what do you think would happen if you choose a value of $\alpha$ that is too high?</p> <p><b>Exercise:</b> Using the skeleton code below, find the best values of the parameter `alpha`, and use the value of `min_df` you chose in the previous exercise set. 
Use the `cv_score` function above with the `log_likelihood` function for scoring.</p> </div> End of explanation """ # Old accuracies: train: 92.17 %, test: 77.28 % # So the training is very slightly better, but test is worse by 3%. # And still hugely overfits. Even though we used CV for the alpha selection. # The alpha we picked was the default so no difference there. # The min_df only seems to slightly better the result of the test set. # So the difference seems to be in the train/test split and not so much in the algorithm. # Maybe we should have tried more alpha's to get to a better result. # Picking the default will of course change nothing. vectorizer = CountVectorizer(min_df=best_min_df) X, y = make_xy(critics, vectorizer) xtrain=X[mask] ytrain=y[mask] xtest=X[~mask] ytest=y[~mask] clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain) #your turn. Print the accuracy on the test and training dataset training_accuracy = clf.score(xtrain, ytrain) test_accuracy = clf.score(xtest, ytest) print("Accuracy on training data: {:2f}".format(training_accuracy)) print("Accuracy on test data: {:2f}".format(test_accuracy)) from sklearn.metrics import confusion_matrix print(confusion_matrix(ytest, clf.predict(xtest))) """ Explanation: <div class="span5 alert alert-info"> <h3>Exercise Set V: Working with the Best Parameters</h3> <p><b>Exercise:</b> Using the best value of `alpha` you just found, calculate the accuracy on the training and test sets. Is this classifier better? Why (not)?</p> </div> End of explanation """ words = np.array(vectorizer.get_feature_names()) x = np.eye(xtest.shape[1]) probs = clf.predict_log_proba(x)[:, 0] ind = np.argsort(probs) good_words = words[ind[:10]] bad_words = words[ind[-10:]] good_prob = probs[ind[:10]] bad_prob = probs[ind[-10:]] print("Good words\t P(fresh | word)") for w, p in zip(good_words, good_prob): print("{:>20}".format(w), "{:.2f}".format(1 - np.exp(p))) print("Bad words\t P(fresh | word)") for w, p in zip(bad_words, bad_prob): print("{:>20}".format(w), "{:.2f}".format(1 - np.exp(p))) """ Explanation: Interpretation What are the strongly predictive features? We use a neat trick to identify strongly predictive features (i.e. words). first, create a data set such that each row has exactly one feature. This is represented by the identity matrix. use the trained classifier to make predictions on this matrix sort the rows by predicted probabilities, and pick the top and bottom $K$ rows End of explanation """ # You test every word that way for what the probability of that word is that it belongs to class 'fresh'. # It works because it tests each word once without other words and the classifier was already trained on # the training set with all the words. # Words that have a high probability to belong to the class 'fresh' are predictive of this class. # Words with a low probability are more predictive for the other class. """ Explanation: <div class="span5 alert alert-info"> <h3>Exercise Set VI</h3> <p><b>Exercise:</b> Why does this method work? 
What does the probability for each row in the identity matrix represent</p> </div> End of explanation """ x, y = make_xy(critics, vectorizer) prob = clf.predict_proba(x)[:, 0] predict = clf.predict(x) bad_rotten = np.argsort(prob[y == 0])[:5] bad_fresh = np.argsort(prob[y == 1])[-5:] print("Mis-predicted Rotten quotes") print('---------------------------') for row in bad_rotten: print(critics[y == 0].quote.iloc[row]) print("") print("Mis-predicted Fresh quotes") print('--------------------------') for row in bad_fresh: print(critics[y == 1].quote.iloc[row]) print("") """ Explanation: The above exercise is an example of feature selection. There are many other feature selection methods. A list of feature selection methods available in sklearn is here. The most common feature selection technique for text mining is the chi-squared $\left( \chi^2 \right)$ method. Prediction Errors We can see mis-predictions as well. End of explanation """ #your turn vectorizer = CountVectorizer(min_df=best_min_df) X, y = make_xy(critics, vectorizer) clf = MultinomialNB(alpha=best_alpha).fit(X, y) line = 'This movie is not remarkable, touching, or superb in any way' print(clf.predict_proba(vectorizer.transform([line]))[:, 1]) clf.predict(vectorizer.transform([line])) # It predicts 'fresh' because almost all words are positive. # The 'not' is hard to give the weight it would need in this sentence # aka reversing the probabilities of all the words belonging to the 'not'. """ Explanation: <div class="span5 alert alert-info"> <h3>Exercise Set VII: Predicting the Freshness for a New Review</h3> <br/> <div> <b>Exercise:</b> <ul> <li> Using your best trained classifier, predict the freshness of the following sentence: *'This movie is not remarkable, touching, or superb in any way'* <li> Is the result what you'd expect? Why (not)? </ul> </div> </div> End of explanation """ # http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction # http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref from sklearn.feature_extraction.text import TfidfVectorizer tfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english') Xtfidf=tfidfvectorizer.fit_transform(critics.quote) """ Explanation: Aside: TF-IDF Weighting for Term Importance TF-IDF stands for Term-Frequency X Inverse Document Frequency. In the standard CountVectorizer model above, we used just the term frequency in a document of words in our vocabulary. In TF-IDF, we weight this term frequency by the inverse of its popularity in all documents. For example, if the word "movie" showed up in all the documents, it would not have much predictive value. It could actually be considered a stopword. By weighing its counts by 1 divided by its overall frequency, we downweight it. We can then use this TF-IDF weighted features as inputs to any classifier. TF-IDF is essentially a measure of term importance, and of how discriminative a word is in a corpus. There are a variety of nuances involved in computing TF-IDF, mainly involving where to add the smoothing term to avoid division by 0, or log of 0 errors. 
The formula for TF-IDF in scikit-learn differs from that of most textbooks: $$\mbox{TF-IDF}(t, d) = \mbox{TF}(t, d)\times \mbox{IDF}(t) = n_{td} \log{\left( \frac{\vert D \vert}{\vert d : t \in d \vert} + 1 \right)}$$ where $n_{td}$ is the number of times term $t$ occurs in document $d$, $\vert D \vert$ is the number of documents, and $\vert d : t \in d \vert$ is the number of documents that contain $t$ End of explanation """ # If we try min_n_grams 1 and see up till 6 as max which is the best, # we see that 1 will be selected and that's what we had already, hence not useful. # If we select min and max n_grams the same, we can select (6,6), but the accuracies on the test set are very bad. from sklearn.naive_bayes import MultinomialNB #the grid of parameters to search over n_grams = [1, 2, 3, 4, 5, 6] best_min_df = 2 # YOUR TURN: put your value of min_df here. best_alpha = 1 maxscore=-np.inf for n_gram in n_grams: vectorizer = CountVectorizer(min_df=best_min_df, ngram_range=(1, n_gram)) Xthis, ythis = make_xy(critics, vectorizer) Xtrainthis = Xthis[mask] ytrainthis = ythis[mask] clf = MultinomialNB(alpha=best_alpha) print(n_gram, cv_score(clf, Xtrainthis, ytrainthis, log_likelihood)) print() for n_gram in n_grams: vectorizer = CountVectorizer(min_df=best_min_df, ngram_range=(n_gram, n_gram)) Xthis, ythis = make_xy(critics, vectorizer) Xtrainthis = Xthis[mask] ytrainthis = ythis[mask] clf = MultinomialNB(alpha=best_alpha) print(n_gram, cv_score(clf, Xtrainthis, ytrainthis, log_likelihood)) vectorizer = CountVectorizer(min_df=best_min_df, ngram_range=(6, 6)) X, y = make_xy(critics, vectorizer) xtrain=X[mask] ytrain=y[mask] xtest=X[~mask] ytest=y[~mask] clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain) training_accuracy = clf.score(xtrain, ytrain) test_accuracy = clf.score(xtest, ytest) print("Accuracy on training data: {:2f}".format(training_accuracy)) print("Accuracy on test data: {:2f}".format(test_accuracy)) """ Explanation: <div class="span5 alert alert-info"> <h3>Exercise Set VIII: Enrichment</h3> <p> There are several additional things we could try. Try some of these as exercises: <ol> <li> Build a Naive Bayes model where the features are n-grams instead of words. N-grams are phrases containing n words next to each other: a bigram contains 2 words, a trigram contains 3 words, and 6-gram contains 6 words. This is useful because "not good" and "so good" mean very different things. On the other hand, as n increases, the model does not scale well since the feature set becomes more sparse. <li> Try a model besides Naive Bayes, one that would allow for interactions between words -- for example, a Random Forest classifier. <li> Try adding supplemental features -- information about genre, director, cast, etc. <li> Use word2vec or [Latent Dirichlet Allocation](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) to group words into topics and use those topics for prediction. <li> Use TF-IDF weighting instead of word counts. </ol> </p> <b>Exercise:</b> Try a few of these ideas to improve the model (or any other ideas of your own). Implement here and report on the result. </div> 1. n_grams End of explanation """ # RF overtrained even more drammatically. Logistic regression did better than RF, but not better than we had. 
from sklearn.ensemble import RandomForestClassifier vectorizer = CountVectorizer(min_df=best_min_df) X, y = make_xy(critics, vectorizer) xtrain=X[mask] ytrain=y[mask] xtest=X[~mask] ytest=y[~mask] clf = RandomForestClassifier(n_estimators=100).fit(xtrain, ytrain) training_accuracy = clf.score(xtrain, ytrain) test_accuracy = clf.score(xtest, ytest) print("Accuracy on training data: {:2f}".format(training_accuracy)) print("Accuracy on test data: {:2f}".format(test_accuracy)) from sklearn.linear_model import LogisticRegression vectorizer = CountVectorizer(min_df=best_min_df) X, y = make_xy(critics, vectorizer) xtrain=X[mask] ytrain=y[mask] xtest=X[~mask] ytest=y[~mask] clf = LogisticRegression(penalty='l1').fit(xtrain, ytrain) training_accuracy = clf.score(xtrain, ytrain) test_accuracy = clf.score(xtest, ytest) print("Accuracy on training data: {:2f}".format(training_accuracy)) print("Accuracy on test data: {:2f}".format(test_accuracy)) """ Explanation: 2. RandomForest and Logistic regression End of explanation """ # Also overtrained and worse than we had. from sklearn.feature_extraction.text import TfidfVectorizer tfidfvectorizer = TfidfVectorizer(min_df=2, stop_words='english') X, y = make_xy(critics, tfidfvectorizer) xtrain=X[mask] ytrain=y[mask] xtest=X[~mask] ytest=y[~mask] clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain) training_accuracy = clf.score(xtrain, ytrain) test_accuracy = clf.score(xtest, ytest) print("Accuracy on training data: {:2f}".format(training_accuracy)) print("Accuracy on test data: {:2f}".format(test_accuracy)) """ Explanation: 5. TF-IDF weighting End of explanation """
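# A small self-contained check (an addition, not from the original homework) tying the TF-IDF
# formula quoted earlier to what scikit-learn computes. Smoothing and normalization are turned
# off so the arithmetic stays transparent; the toy corpus below is made up for illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

toy_docs = ['the movie was good', 'the movie was bad', 'a truly great movie']
counts = CountVectorizer().fit_transform(toy_docs).toarray()

# scikit-learn's unsmoothed, unnormalized variant: tf * (ln(N / df) + 1)
tfidf = TfidfTransformer(smooth_idf=False, norm=None).fit_transform(counts).toarray()

# recompute the same quantity by hand
N = counts.shape[0]
df = (counts > 0).sum(axis=0)              # number of documents containing each term
manual = counts * (np.log(float(N) / df) + 1)

print(np.allclose(tfidf, manual))          # expected: True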
DistrictDataLabs/yellowbrick
examples/uricod/ShoeSizeToHeight.ipynb
apache-2.0
from sklearn.model_selection import train_test_split, KFold from sklearn.linear_model import LinearRegression, Ridge, SGDRegressor, ElasticNet from sklearn.kernel_ridge import KernelRidge from sklearn.svm import SVR from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor from yellowbrick.features import Rank2D, JointPlotVisualizer from yellowbrick.regressor import ResidualsPlot, PredictionError, ManualAlphaSelection, CooksDistance from yellowbrick.model_selection import cv_scores, LearningCurve, FeatureImportances, ValidationCurve import pandas as pd import numpy as np # GET THE CURRENT WORKING DIRECTORY SO YOU CAN LOAD THE PATH TO THE WO_MEN.XLSX FILE import os os.getcwd() """ Explanation: GOAL IS TO SHOW IF THERE IS LINEAR RELATIONSHIP BETWEEN HEIGHT AND SHOE SIZE - USING YELLOWBRICK End of explanation """ # YOU WILL HAVE TO INSTALL OPENPYXL - pip install openpyxl - TO BE ABLE TO OPEN EXCEL FILES WITH PANDAS df = pd.read_excel('data/wo_men.xlsx', sheet_name='wo_men') df.head(2) ds = df.drop(['time', 'sex', 'height', 'shoe_size - German', 'height in feet - String', 'height in inches'], axis=1) ds.shape ds.columns X = ds.drop(['shoe_size-american'], axis=1) y = ds['shoe_size-american'] """ Explanation: **DATASET WAS TAKEN FROM https://osf.io/ja9dw/ AND CONVERTED TO AMERICAN SHOE SIZES MANUALLY. SEE OTHER TABS IN WO_MEN.XLSX End of explanation """ viz = Rank2D(algorithm='pearson') viz.fit_transform(ds) viz.show() viz = JointPlotVisualizer(columns=['Height in Feet', 'shoe_size-american']) viz.fit_transform(ds) viz.show() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42) models = [ LinearRegression(), Ridge(alpha=2), SGDRegressor(max_iter=100), KernelRidge(alpha=2), SVR(), RandomForestRegressor(n_estimators=5), GradientBoostingRegressor(n_estimators=5) ] def visualize_model(X, y, estimator, **kwargs): viz = ResidualsPlot(estimator, **kwargs) viz.fit(X.values, y) viz.score(X.values, y) viz.show() viz = PredictionError(model) viz.fit(X_train.values, y_train) viz.score(X_test.values, y_test) viz.show() for model in models: visualize_model(X, y, model) """ Explanation: Women is coded as 1 vs Man being 0 so that's why there is negative correlation between sex and shoe size End of explanation """ model = RandomForestRegressor(n_estimators=5) cv = KFold(n_splits=5, shuffle=True, random_state=42) viz = cv_scores(model, X, y, cv=cv, scoring='r2') viz = LearningCurve(model, cv=cv, scoring='r2', ) viz.fit(X, y) viz.show() viz = FeatureImportances(model, stack=True, relative=False) viz.fit(X, y) viz.show() """ Explanation: SPECIFIC MODEL TUNING End of explanation """ viz = ValidationCurve(RandomForestRegressor(), param_name='n_estimators', param_range=range(1, 10), cv=cv, scoring='r2') viz.fit(X, y) viz.show() """ Explanation: RANDOM FOREST REGRESSOR HYPERPARAMETER TUNING End of explanation """ alphas = np.logspace(1, 2, 20) viz = ManualAlphaSelection(Ridge(), alphas=alphas, cv=cv, scoring='r2') viz.fit(X, y) viz.show() viz = CooksDistance() viz.fit(X, y) viz.show() """ Explanation: USE MODEL THAT HAS ALPHA AS PARAMETERS - RIDGE End of explanation """
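# A short scikit-learn-only follow-up (not in the original notebook): once an alpha has been
# read off the ManualAlphaSelection plot above, refit Ridge with it and report cross-validated
# R^2. The alpha below is a placeholder assumption, not the notebook's tuned value.
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

chosen_alpha = 10.0                    # placeholder; substitute the value picked from the plot
ridge = Ridge(alpha=chosen_alpha)

cv5 = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(ridge, X, y, cv=cv5, scoring='r2')
print('R^2 per fold: {}'.format(np.round(scores, 3)))
print('mean R^2: {:.3f} +/- {:.3f}'.format(scores.mean(), scores.std()))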
quantopian/research_public
notebooks/lectures/Maximum_Likelihood_Estimation/questions/notebook.ipynb
apache-2.0
# Useful Libraries import pandas as pd import math import matplotlib.pyplot as plt import numpy as np import scipy import scipy.stats as stats """ Explanation: Exercises: Maximum Likelihood Estimation By Christopher van Hoecke, Max Margenot, and Delaney Mackenzie Lecture Link : https://www.quantopian.com/lectures/maximum-likelihood-estimation IMPORTANT NOTE: This lecture corresponds to the Maximum Likelihood Estimation lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult. Part of the Quantopian Lecture Series: www.quantopian.com/lectures github.com/quantopian/research_public Notebook released under the Creative Commons Attribution 4.0 License. Key concepts Normal Distribution MLE Estimators: $$ \hat\mu = \frac{1}{T}\sum_{t=1}^{T} x_t \\qquad \hat\sigma = \sqrt{\frac{1}{T}\sum_{t=1}^{T}{(x_t - \hat\mu)^2}} $$ Exponential Distribution MLE Estimators: $$\hat\lambda = \frac{\sum_{t=1}^{T} x_t}{T}$$ End of explanation """ # Normal mean and standard deviation MLE estimators def normal_mu(X): # Get the number of observations T = #______# Your code goes here # Sum the observations s = #______# Your code goes here return 1.0/T * s def normal_sigma(X): T = #______# Your code goes here # Get the mu MLE mu = #______# Your code goes here # Sum the square of the differences s = #______# Your code goes here # Compute sigma^2 sigma_squared = return math.sqrt(sigma_squared) # Normal Distribution Sample Data TRUE_MEAN = 40 TRUE_STD = 10 X = np.random.normal(TRUE_MEAN, TRUE_STD, 10000000) # Use your functions to compute the MLE mu and sigma mu = #______# Your code goes here std = #______# Your code goes here print 'Maximum likelihood value of mu:', mu print 'Maximum likelihood value for sigma:', std # Fit the distribution using SciPy and compare those parameters with yours scipy_mu, scipy_std = #______# Your code goes here print 'Scipy Maximum likelihood value of mu:', scipy_mu print 'Scipy Maximum likelihood value for sigma:', scipy_std # Get the PDF, fill it with your calculated parameters, and plot it along x x = np.linspace(0, 80, 80) plt.hist(X, bins=x, normed='true') plt.plot(pdf(x, loc=mu, scale=std), color='red') plt.xlabel('Value') plt.ylabel('Observed Frequency') plt.legend(['Fitted Distribution PDF', 'Observed Data', ]); """ Explanation: Exercise 1: Normal Distribution Given the equations above, write down functions to calculate the MLE estimators $\hat{\mu}$ and $\hat{\sigma}$ of the normal distribution. Given the sample normally distributed set, find the maximum likelihood $\hat{\mu}$ and $\hat{\sigma}$. Fit the data to a normal distribution using SciPy. Compare SciPy's calculated parameters with your calculated values of $\hat{\mu}$ and $\hat{\sigma}$. 
Plot a normal distribution PDF with your estimated parameters End of explanation """ def exp_lambda(X): T = #______# Your code goes here s = #______# Your code goes here return s/T # Exponential distribution sample data TRUE_LAMBDA = 5 X = np.random.exponential(TRUE_LAMBDA, 1000) # Use your functions to compute the MLE lambda lam = #______# Your code goes here print "Lambda estimate: ", lam # Fit the distribution using SciPy and compare that parameter with yours _, l = #______# Your code goes here print 'Scipy lambds estimate: ', l # Get the PDF, fill it with your calculated parameter, and plot it along x x = range(0, 80) plt.hist(X, bins=x, normed='true') plt.plot(pdf(x, scale=l), color = 'red') plt.xlabel('Value') plt.ylabel('Observed Frequency') plt.legend(['Fitted Distribution PDF', 'Observed Data', ]); """ Explanation: Exercise 2: Exponential Distribution Given the equations above, write down functions to calculate the MLE estimator $\hat{\lambda}$ of the exponential distribution Given the sample exponentially distributed set, find the maximum likelihood Fit the data to an exponential distribution using SciPy. Compare SciPy's calculated parameter with your calculated values of $\hat{\lambda}$ Plot an exponential distribution PDF with your estimated parameter End of explanation """ prices = get_pricing('SPY', fields='price', start_date='2016-01-04', end_date='2016-01-05', frequency = 'minute') returns = prices.pct_change()[1:] mu = #______# Your code goes here std = #______# Your code goes here x = np.linspace(#______# Your code goes here) h = plt.hist(#______# Your code goes here) l = plt.plot(#______# Your code goes here) plt.show(h, l); """ Explanation: Exercise 3 : Fitting Data Using MLE Using the MLE estimators laid out in the lecture, fit the returns for SPY from 2014 to 2015 to a normal distribution. Check for normality using the Jarque-Bera test End of explanation """ alpha = 0.05 stat, pval = #______# Your code goes here print pval if pval > alpha: print 'Accept our null hypothesis' if pval < alpha: print 'Reject our null hypothesis' """ Explanation: Recall that this fit only makes sense if we have normally distributed data. End of explanation """
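# An illustrative aside, not part of the graded exercises: the closed-form MLE formulas from
# the "Key concepts" section applied to synthetic data, so you can sanity-check your own
# implementations. All names below are local to this cell.
import numpy as np
import scipy.stats as stats

demo = np.random.normal(loc=40, scale=10, size=100000)

mu_hat = demo.sum() / len(demo)                                # (1/T) * sum(x_t)
sigma_hat = np.sqrt(((demo - mu_hat) ** 2).sum() / len(demo))  # sqrt((1/T) * sum((x_t - mu)^2))
print('closed-form MLE: mu = {:.3f}, sigma = {:.3f}'.format(mu_hat, sigma_hat))

# scipy's normal fit is also a maximum-likelihood fit, so the two should agree closely
scipy_mu, scipy_sigma = stats.norm.fit(demo)
print('scipy.stats fit: mu = {:.3f}, sigma = {:.3f}'.format(scipy_mu, scipy_sigma))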
tensorflow/fairness-indicators
g3doc/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ !pip install apache_beam !pip install fairness-indicators !pip install witwidget import os import tempfile import apache_beam as beam import numpy as np import pandas as pd from datetime import datetime import tensorflow_hub as hub import tensorflow as tf import tensorflow_model_analysis as tfma import tensorflow_data_validation as tfdv from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators from tensorflow_model_analysis.addons.fairness.view import widget_view from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_predict as agnostic_predict from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_evaluate_graph from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_extractor from witwidget.notebook.visualization import WitConfigBuilder from witwidget.notebook.visualization import WitWidget """ Explanation: FaceSSD Fairness Indicators Example Colab <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Facessd_Fairness_Indicators_Example_Colab"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/fairness-indicators/tree/master/g3doc/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview In this activity, you'll use Fairness Indicators to explore the FaceSSD predictions on Labeled Faces in the Wild dataset. Fairness Indicators is a suite of tools built on top of TensorFlow Model Analysis that enable regular evaluation of fairness metrics in product pipelines. About the Dataset In this exercise, you'll work with the FaceSSD prediction dataset, approximately 200k different image predictions and groundtruths generated by FaceSSD API. About the Tools TensorFlow Model Analysis is a library for evaluating both TensorFlow and non-TensorFlow machine learning models. It allows users to evaluate their models on large amounts of data in a distributed manner, computing in-graph and other metrics over different slices of data and visualize in notebooks. TensorFlow Data Validation is one tool you can use to analyze your data. 
You can use it to find potential problems in your data, such as missing values and data imbalances, that can lead to Fairness disparities. With Fairness Indicators, users will be able to: Evaluate model performance, sliced across defined groups of users Feel confident about results with confidence intervals and evaluations at multiple thresholds Importing Run the following code to install the fairness_indicators library. This package contains the tools we'll be using in this exercise. Restart Runtime may be requested but is not necessary. End of explanation """ data_location = tf.keras.utils.get_file('lfw_dataset.tf', 'https://storage.googleapis.com/facessd_dataset/lfw_dataset.tfrecord') stats = tfdv.generate_statistics_from_tfrecord(data_location=data_location) tfdv.visualize_statistics(stats) """ Explanation: Download and Understand the Data Labeled Faces in the Wild is a public benchmark dataset for face verification, also known as pair matching. LFW contains more than 13,000 images of faces collected from the web. We ran FaceSSD predictions on this dataset to predict whether a face is present in a given image. In this Colab, we will slice data according to gender to observe if there are any significant differences between model performance for different gender groups. If there is more than one face in an image, gender is labeled as "MISSING". We've hosted the dataset on Google Cloud Platform for convenience. Run the following code to download the data from GCP, the data will take about a minute to download and analyze. End of explanation """ BASE_DIR = tempfile.gettempdir() tfma_eval_result_path = os.path.join(BASE_DIR, 'tfma_eval_result') compute_confidence_intervals = True slice_key = 'object/groundtruth/Gender' label_key = 'object/groundtruth/face' prediction_key = 'object/prediction/face' feature_map = { slice_key: tf.io.FixedLenFeature([], tf.string, default_value=['none']), label_key: tf.io.FixedLenFeature([], tf.float32, default_value=[0.0]), prediction_key: tf.io.FixedLenFeature([], tf.float32, default_value=[0.0]), } """ Explanation: Defining Constants End of explanation """ model_agnostic_config = agnostic_predict.ModelAgnosticConfig( label_keys=[label_key], prediction_keys=[prediction_key], feature_spec=feature_map) model_agnostic_extractors = [ model_agnostic_extractor.ModelAgnosticExtractor( model_agnostic_config=model_agnostic_config, desired_batch_size=3), tfma.extractors.slice_key_extractor.SliceKeyExtractor( [tfma.slicer.SingleSliceSpec(), tfma.slicer.SingleSliceSpec(columns=[slice_key])]) ] """ Explanation: Model Agnostic Config for TFMA End of explanation """ # Helper class for counting examples in beam PCollection class CountExamples(beam.CombineFn): def __init__(self, message): self.message = message def create_accumulator(self): return 0 def add_input(self, current_sum, element): return current_sum + 1 def merge_accumulators(self, accumulators): return sum(accumulators) def extract_output(self, final_sum): if final_sum: print("%s: %d"%(self.message, final_sum)) metrics_callbacks = [ tfma.post_export_metrics.fairness_indicators( thresholds=[0.1, 0.3, 0.5, 0.7, 0.9], labels_key=label_key, target_prediction_keys=[prediction_key]), tfma.post_export_metrics.auc( curve='PR', labels_key=label_key, target_prediction_keys=[prediction_key]), ] eval_shared_model = tfma.types.EvalSharedModel( add_metrics_callbacks=metrics_callbacks, construct_fn=model_agnostic_evaluate_graph.make_construct_fn( add_metrics_callbacks=metrics_callbacks, config=model_agnostic_config)) with 
beam.Pipeline() as pipeline: # Read data. data = ( pipeline | 'ReadData' >> beam.io.ReadFromTFRecord(data_location)) # Count all examples. data_count = ( data | 'Count number of examples' >> beam.CombineGlobally( CountExamples('Before filtering "Gender:MISSING"'))) # If there are more than one face in image, the gender feature is 'MISSING' # and we are filtering that image out. def filter_missing_gender(element): example = tf.train.Example.FromString(element) if example.features.feature[slice_key].bytes_list.value[0] != b'MISSING': yield element filtered_data = ( data | 'Filter Missing Gender' >> beam.ParDo(filter_missing_gender)) # Count after filtering "Gender:MISSING". filtered_data_count = ( filtered_data | 'Count number of examples after filtering' >> beam.CombineGlobally( CountExamples('After filtering "Gender:MISSING"'))) # Because LFW data set has always faces by default, we are adding # labels as 1.0 for all images. def add_face_groundtruth(element): example = tf.train.Example.FromString(element) example.features.feature[label_key].float_list.value[:] = [1.0] yield example.SerializeToString() final_data = ( filtered_data | 'Add Face Groundtruth' >> beam.ParDo(add_face_groundtruth)) # Run TFMA. _ = ( final_data | 'ExtractEvaluateAndWriteResults' >> tfma.ExtractEvaluateAndWriteResults( eval_shared_model=eval_shared_model, compute_confidence_intervals=compute_confidence_intervals, output_path=tfma_eval_result_path, extractors=model_agnostic_extractors)) eval_result = tfma.load_eval_result(output_path=tfma_eval_result_path) """ Explanation: Fairness Callbacks and Computing Fairness Metrics End of explanation """ widget_view.render_fairness_indicator(eval_result=eval_result, slicing_column=slice_key) """ Explanation: Render Fairness Indicators Render the Fairness Indicators widget with the exported evaluation results. Below you will see bar charts displaying performance of each slice of the data on selected metrics. You can adjust the baseline comparison slice as well as the displayed threshold(s) using the drop down menus at the top of the visualization. A relevant metric for this use case is true positive rate, also known as recall. Use the selector on the left hand side to choose the graph for true_positive_rate. These metric values match the values displayed on the model card. For some photos, gender is labeled as young instead of male or female, if the person in the photo is too young to be accurately annotated. End of explanation """
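# A plain-NumPy illustration (not part of the TFMA pipeline above) of what the
# "true_positive_rate" metric means here: recall computed separately per gender slice and per
# decision threshold. The arrays below are made-up stand-ins for the real FaceSSD predictions.
import numpy as np

labels = np.array([1, 1, 1, 1, 1, 1])                     # LFW images all contain a face
scores = np.array([0.95, 0.40, 0.88, 0.72, 0.15, 0.99])   # hypothetical detection scores
gender = np.array(['MALE', 'FEMALE', 'MALE', 'FEMALE', 'FEMALE', 'MALE'])

for threshold in (0.1, 0.5, 0.9):
    for g in ('MALE', 'FEMALE'):
        mask = gender == g
        true_positives = np.sum((scores[mask] >= threshold) & (labels[mask] == 1))
        tpr = true_positives / float(np.sum(labels[mask] == 1))
        print('threshold=%.1f slice=%-6s TPR=%.2f' % (threshold, g, tpr))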
mne-tools/mne-tools.github.io
0.12/_downloads/plot_spm_faces_dataset.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # Denis Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import os.path as op import matplotlib.pyplot as plt import mne from mne.datasets import spm_face from mne.preprocessing import ICA, create_eog_epochs from mne import io from mne.minimum_norm import make_inverse_operator, apply_inverse print(__doc__) data_path = spm_face.data_path() subjects_dir = data_path + '/subjects' """ Explanation: From raw data to dSPM on SPM Faces dataset Runs a full pipeline using MNE-Python: - artifact removal - averaging Epochs - forward model computation - source reconstruction using dSPM on the contrast : "faces - scrambled" .. note:: This example does quite a bit of processing, so even on a fast machine it can take several minutes to complete. End of explanation """ raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds' raw = io.read_raw_ctf(raw_fname % 1, preload=True) # Take first run # Here to save memory and time we'll downsample heavily -- this is not # advised for real data as it can effectively jitter events! raw.resample(120., npad='auto') picks = mne.pick_types(raw.info, meg=True, exclude='bads') raw.filter(1, 30, method='iir') events = mne.find_events(raw, stim_channel='UPPT001') # plot the events to get an idea of the paradigm mne.viz.plot_events(events, raw.info['sfreq']) event_ids = {"faces": 1, "scrambled": 2} tmin, tmax = -0.2, 0.6 baseline = None # no baseline as high-pass is applied reject = dict(mag=5e-12) epochs = mne.Epochs(raw, events, event_ids, tmin, tmax, picks=picks, baseline=baseline, preload=True, reject=reject) # Fit ICA, find and remove major artifacts ica = ICA(n_components=0.95, random_state=0).fit(raw, decim=1, reject=reject) # compute correlation scores, get bad indices sorted by score eog_epochs = create_eog_epochs(raw, ch_name='MRT31-2908', reject=reject) eog_inds, eog_scores = ica.find_bads_eog(eog_epochs, ch_name='MRT31-2908') ica.plot_scores(eog_scores, eog_inds) # see scores the selection is based on ica.plot_components(eog_inds) # view topographic sensitivity of components ica.exclude += eog_inds[:1] # we saw the 2nd ECG component looked too dipolar ica.plot_overlay(eog_epochs.average()) # inspect artifact removal ica.apply(epochs) # clean data, default in place evoked = [epochs[k].average() for k in event_ids] contrast = evoked[1] - evoked[0] evoked.append(contrast) for e in evoked: e.plot(ylim=dict(mag=[-400, 400])) plt.show() # estimate noise covarariance noise_cov = mne.compute_covariance(epochs, tmax=0, method='shrunk') """ Explanation: Load and filter data, set up epochs End of explanation """ trans_fname = data_path + ('/MEG/spm/SPM_CTF_MEG_example_faces1_3D_' 'raw-trans.fif') maps = mne.make_field_map(evoked[0], trans_fname, subject='spm', subjects_dir=subjects_dir, n_jobs=1) evoked[0].plot_field(maps, time=0.170) """ Explanation: Visualize fields on MEG helmet End of explanation """ # Make source space src_fname = data_path + '/subjects/spm/bem/spm-oct-6-src.fif' if not op.isfile(src_fname): src = mne.setup_source_space('spm', src_fname, spacing='oct6', subjects_dir=subjects_dir, overwrite=True) else: src = mne.read_source_spaces(src_fname) bem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif' forward = mne.make_forward_solution(contrast.info, trans_fname, src, bem) forward = mne.convert_forward_solution(forward, surf_ori=True) """ Explanation: Compute forward model End of explanation """ snr = 3.0 lambda2 = 1.0 / snr ** 2 method = 'dSPM' 
inverse_operator = make_inverse_operator(contrast.info, forward, noise_cov, loose=0.2, depth=0.8) # Compute inverse solution on contrast stc = apply_inverse(contrast, inverse_operator, lambda2, method, pick_ori=None) # stc.save('spm_%s_dSPM_inverse' % constrast.comment) # Plot contrast in 3D with PySurfer if available brain = stc.plot(hemi='both', subjects_dir=subjects_dir) brain.set_time(170.0) # milliseconds brain.show_view('ventral') # brain.save_image('dSPM_map.png') """ Explanation: Compute inverse solution End of explanation """
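# A small follow-up sketch (not in the original example): locate where and when the dSPM
# contrast peaks, using only the arrays stored on the SourceEstimate. It assumes the `stc`
# object from the cell above is still in memory.
import numpy as np

peak_source, peak_sample = np.unravel_index(np.argmax(np.abs(stc.data)), stc.data.shape)
peak_time_ms = stc.times[peak_sample] * 1e3

print('strongest |dSPM| value: %0.1f' % np.abs(stc.data).max())
print('at source index %d, t = %0.1f ms' % (peak_source, peak_time_ms))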
quoniammm/mine-tensorflow-examples
fastAI/deeplearning2/DCGAN.ipynb
mit
%matplotlib inline import importlib import utils2; importlib.reload(utils2) from utils2 import * from tqdm import tqdm """ Explanation: Generative Adversarial Networks in Keras End of explanation """ from keras.datasets import mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train.shape n = len(X_train) X_train = X_train.reshape(n, -1).astype(np.float32) X_test = X_test.reshape(len(X_test), -1).astype(np.float32) X_train /= 255.; X_test /= 255. """ Explanation: The original GAN! See this paper for details of the approach we'll try first for our first GAN. We'll see if we can generate hand-drawn numbers based on MNIST, so let's load that dataset first. We'll be refering to the discriminator as 'D' and the generator as 'G'. End of explanation """ def plot_gen(G, n_ex=16): plot_multi(G.predict(noise(n_ex)).reshape(n_ex, 28,28), cmap='gray') """ Explanation: Train This is just a helper to plot a bunch of generated images. End of explanation """ def noise(bs): return np.random.rand(bs,100) """ Explanation: Create some random data for the generator. End of explanation """ def data_D(sz, G): real_img = X_train[np.random.randint(0,n,size=sz)] X = np.concatenate((real_img, G.predict(noise(sz)))) return X, [0]*sz + [1]*sz def make_trainable(net, val): net.trainable = val for l in net.layers: l.trainable = val """ Explanation: Create a batch of some real and some generated data, with appropriate labels, for the discriminator. End of explanation """ def train(D, G, m, nb_epoch=5000, bs=128): dl,gl=[],[] for e in tqdm(range(nb_epoch)): X,y = data_D(bs//2, G) dl.append(D.train_on_batch(X,y)) make_trainable(D, False) gl.append(m.train_on_batch(noise(bs), np.zeros([bs]))) make_trainable(D, True) return dl,gl """ Explanation: Train a few epochs, and return the losses for D and G. In each epoch we: Train D on one batch from data_D() Train G to create images that the discriminator predicts as real. End of explanation """ MLP_G = Sequential([ Dense(200, input_shape=(100,), activation='relu'), Dense(400, activation='relu'), Dense(784, activation='sigmoid'), ]) MLP_D = Sequential([ Dense(300, input_shape=(784,), activation='relu'), Dense(300, activation='relu'), Dense(1, activation='sigmoid'), ]) MLP_D.compile(Adam(1e-4), "binary_crossentropy") MLP_m = Sequential([MLP_G,MLP_D]) MLP_m.compile(Adam(1e-4), "binary_crossentropy") dl,gl = train(MLP_D, MLP_G, MLP_m, 8000) """ Explanation: MLP GAN We'll keep thinks simple by making D & G plain ole' MLPs. End of explanation """ plt.plot(dl[100:]) plt.plot(gl[100:]) """ Explanation: The loss plots for most GANs are nearly impossible to interpret - which is one of the things that make them hard to train. End of explanation """ plot_gen() """ Explanation: This is what's known in the literature as "mode collapse". End of explanation """ X_train = X_train.reshape(n, 28, 28, 1) X_test = X_test.reshape(len(X_test), 28, 28, 1) """ Explanation: OK, so that didn't work. Can we do better?... DCGAN There's lots of ideas out there to make GANs train better, since they are notoriously painful to get working. The paper introducing DCGANs is the main basis for our next section. Add see https://github.com/soumith/ganhacks for many tips! Because we're using a CNN from now on, we'll reshape our digits into proper images. 
End of explanation """ CNN_G = Sequential([ Dense(512*7*7, input_dim=100, activation=LeakyReLU()), BatchNormalization(mode=2), Reshape((7, 7, 512)), UpSampling2D(), Convolution2D(64, 3, 3, border_mode='same', activation=LeakyReLU()), BatchNormalization(mode=2), UpSampling2D(), Convolution2D(32, 3, 3, border_mode='same', activation=LeakyReLU()), BatchNormalization(mode=2), Convolution2D(1, 1, 1, border_mode='same', activation='sigmoid') ]) """ Explanation: Our generator uses a number of upsampling steps as suggested in the above papers. We use nearest neighbor upsampling rather than fractionally strided convolutions, as discussed in our style transfer notebook. End of explanation """ CNN_D = Sequential([ Convolution2D(256, 5, 5, subsample=(2,2), border_mode='same', input_shape=(28, 28, 1), activation=LeakyReLU()), Convolution2D(512, 5, 5, subsample=(2,2), border_mode='same', activation=LeakyReLU()), Flatten(), Dense(256, activation=LeakyReLU()), Dense(1, activation = 'sigmoid') ]) CNN_D.compile(Adam(1e-3), "binary_crossentropy") """ Explanation: The discriminator uses a few downsampling steps through strided convolutions. End of explanation """ sz = n//200 x1 = np.concatenate([np.random.permutation(X_train)[:sz], CNN_G.predict(noise(sz))]) CNN_D.fit(x1, [0]*sz + [1]*sz, batch_size=128, nb_epoch=1, verbose=2) CNN_m = Sequential([CNN_G, CNN_D]) CNN_m.compile(Adam(1e-4), "binary_crossentropy") K.set_value(CNN_D.optimizer.lr, 1e-3) K.set_value(CNN_m.optimizer.lr, 1e-3) """ Explanation: We train D a "little bit" so it can at least tell a real image from random noise. End of explanation """ dl,gl = train(CNN_D, CNN_G, CNN_m, 2500) plt.plot(dl[10:]) plt.plot(gl[10:]) """ Explanation: Now we can train D & G iteratively. End of explanation """ plot_gen(CNN_G) """ Explanation: Better than our first effort, but still a lot to be desired:... End of explanation """
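# A quick diagnostic (an addition, not from the original notebook): after training, check how
# often the discriminator still separates real digits from generated ones. Rates near 50%
# suggest the generator is fooling D; rates near 100% mean D has the upper hand again.
import numpy as np

n_check = 256
real_batch = X_train[np.random.randint(0, n, size=n_check)]
fake_batch = CNN_G.predict(noise(n_check))

# D was trained with label 1 for generated images, so scores above 0.5 mean "called fake"
real_scores = CNN_D.predict(real_batch)
fake_scores = CNN_D.predict(fake_batch)

print('real digits scored as real: {:.1%}'.format(np.mean(real_scores < 0.5)))
print('generated digits scored as fake: {:.1%}'.format(np.mean(fake_scores >= 0.5)))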
bgroveben/python3_machine_learning_projects
learn_kaggle/machine_learning/pipelines.ipynb
mit
import pandas as pd from sklearn.model_selection import train_test_split data = pd.read_csv('input/melbourne_data.csv') cols_to_use = ['Rooms', 'Distance', 'Landsize', 'BuildingArea', 'YearBuilt'] X = data[cols_to_use] y = data.Price train_X, test_X, train_y, test_y = train_test_split(X, y) """ Explanation: Pipelines Pipelines are a simple way to keep your data processing and modeling code organized. Specifically, a pipeline bundles preprocessing and modeling steps so you can use the whole bundle as if it were a single step. Many data scientists hack together models without pipelines, but Pipelines have some important benefits, including: Cleaner Code: You won't need to keep track of your training (and validation) data at each step of processing. Accounting for data at each step of processing can get messy. With a pipeline, you don't need to manually keep track of each step. Fewer Bugs: There are fewer opportunities to misapply a step or forget a pre-processing step. Easier to Productionize: It can be surprisingly hard to transition a model from a prototype to something deployable at scale. We won't go into the many related concerns here, but pipelines can help. More Options For Model Testing: You will see an example in the next tutorial, which covers cross-validation. Example End of explanation """ from sklearn.ensemble import RandomForestRegressor from sklearn.pipeline import make_pipeline from sklearn.preprocessing import Imputer my_pipeline = make_pipeline(Imputer(), RandomForestRegressor()) """ Explanation: You have a modeling process that uses an Imputer to fill in missing values, followed by a RandomForestRegressor to make predictions. These can be bundled together with the make_pipeline() function. End of explanation """ my_pipeline.fit(train_X, train_y) predictions = my_pipeline.predict(test_X) predictions[:5] """ Explanation: Now you can use this pipeline for fitting and prediction: End of explanation """ my_imputer = Imputer() my_model = RandomForestRegressor() imputed_train_X = my_imputer.fit_transform(train_X) imputed_test_X = my_imputer.transform(test_X) my_model.fit(imputed_train_X, train_y) predictions = my_model.predict(imputed_test_X) predictions[:5] """ Explanation: Compared to the code without pipelines: End of explanation """
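# A brief preview (not required by this tutorial) of why the bundled pipeline pays off for
# model testing: it can be passed straight to cross_val_score, and the imputer is re-fit inside
# every fold, so no information leaks from the held-out data into preprocessing.
from sklearn.model_selection import cross_val_score

scores = cross_val_score(my_pipeline, X, y, scoring='neg_mean_absolute_error')
print('MAE per fold: {}'.format(-scores))
print('mean MAE: {:.0f}'.format(-scores.mean()))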
tridesclous/tridesclous
example/example_olfactory_bulb_dataset.ipynb
mit
%matplotlib inline import time import numpy as np import matplotlib.pyplot as plt import tridesclous as tdc from tridesclous import DataIO, CatalogueConstructor, Peeler """ Explanation: tridesclous example with olfactory bulb dataset End of explanation """ #download dataset localdir, filenames, params = tdc.download_dataset(name='olfactory_bulb') print(filenames) print(params) print() #create a DataIO import os, shutil dirname = 'tridesclous_olfactory_bulb' if os.path.exists(dirname): #remove is already exists shutil.rmtree(dirname) dataio = DataIO(dirname=dirname) # feed DataIO dataio.set_data_source(type='RawData', filenames=filenames, **params) dataio.add_one_channel_group(channels=list(range(14))) print(dataio) """ Explanation: DataIO = define datasource and working dir trideclous provide some datasets than can be downloaded. Note this dataset contains 3 trials in 3 different files. (the original contains more!) Each file is considers as a segment. tridesclous automatically deal with it. Theses 3 files are in RawData format this means binary format with interleaved channels. End of explanation """ cc = CatalogueConstructor(dataio=dataio) print(cc) """ Explanation: CatalogueConstructor End of explanation """ from pprint import pprint params = tdc.get_auto_params_for_catalogue(dataio, chan_grp=0) pprint(params) """ Explanation: Use automatic parameters and apply the whole chain tridesclous propose an automatic parameters choice and can apply in one function all the steps. End of explanation """ cc.apply_all_steps(params, verbose=True) print(cc) """ Explanation: apply all catalogue steps End of explanation """ %gui qt5 import pyqtgraph as pg app = pg.mkQApp() win = tdc.CatalogueWindow(cc) win.show() app.exec_() # necessary if manual change cc.make_catalogue_for_peeler() """ Explanation: Open CatalogueWindow for visual check At the end we can save the catalogue. End of explanation """ peeler_params = tdc.get_auto_params_for_peelers(dataio, chan_grp=0) pprint(peeler_params) catalogue = dataio.load_catalogue() peeler = Peeler(dataio) peeler.change_params(catalogue=catalogue, **peeler_params) t1 = time.perf_counter() peeler.run() t2 = time.perf_counter() print('peeler.run', t2-t1) print() for seg_num in range(dataio.nb_segment): spikes = dataio.get_spikes(seg_num) print('seg_num', seg_num, 'nb_spikes', spikes.size) print(spikes[:3]) """ Explanation: Peeler Use automatic parameters. End of explanation """ %gui qt5 import pyqtgraph as pg app = pg.mkQApp() win = tdc.PeelerWindow(dataio=dataio, catalogue=initial_catalogue) win.show() app.exec_() """ Explanation: Open PeelerWindow for visual checking End of explanation """
tuanavu/python-cookbook-3rd
notebooks/ch01/05_implementing_a_priority_queue.ipynb
mit
import heapq

class PriorityQueue:
    def __init__(self):
        self._queue = []
        self._index = 0

    def push(self, item, priority):
        heapq.heappush(self._queue, (-priority, self._index, item))
        self._index += 1

    def pop(self):
        return heapq.heappop(self._queue)[-1]
"""
Explanation: Implementing a Priority Queue
Problem
You want to implement a queue that sorts items by a given priority and always returns the item with the highest priority on each pop operation.
Solution
Use the heapq module to build a small PriorityQueue class, as in the code above: items are pushed onto a heap as (-priority, index, item) tuples, so each pop returns the highest-priority item.
End of explanation
"""
# Example use
class Item:
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return 'Item({!r})'.format(self.name)

q = PriorityQueue()
q.push(Item('foo'), 1)
q.push(Item('bar'), 5)
q.push(Item('spam'), 4)
q.push(Item('grok'), 1)
print("Should be bar:", q.pop())
print("Should be spam:", q.pop())
print("Should be foo:", q.pop())
print("Should be grok:", q.pop())
"""
Explanation: Example use
End of explanation
"""
a = (1, Item('foo'))
b = (5, Item('bar'))
a < b
c = (1, Item('grok'))
a < c
"""
Explanation: Understand the priority queue
In this recipe, the queue consists of tuples of the form (-priority, index, item). The priority value is negated to get the queue to sort items from highest priority to lowest priority.
If you make (priority, item) tuples, they can be compared as long as the priorities are different. However, if two tuples with equal priorities are compared, as in a < c above, the comparison fails because the Item instances themselves are not orderable.
End of explanation
"""
a = (1, 0, Item('foo'))
b = (5, 1, Item('bar'))
c = (1, 2, Item('grok'))
a < b # True
a < c # True
"""
Explanation: By introducing the extra index and making (priority, index, item) tuples, you avoid this problem entirely, since no two tuples will ever have the same value for index.
End of explanation
"""
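# A short extra example (not in the book's text): because of the running index, items that
# share a priority come back in the order they were inserted, i.e. FIFO within a priority level.
q = PriorityQueue()
q.push(Item('first'), 1)
q.push(Item('second'), 1)
q.push(Item('third'), 1)
print(q.pop())   # Item('first')
print(q.pop())   # Item('second')
print(q.pop())   # Item('third')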
matmodlab/matmodlab2
notebooks/Hyperfit.ipynb
bsd-3-clause
%load_ext autoreload %autoreload 2 from numpy import * import numpy as np from bokeh.plotting import * from pandas import read_excel from matmodlab2.fitting.hyperopt import * output_notebook() """ Explanation: Hyperelastic Model Fitting End of explanation """ # uniaxial data udf = read_excel('Treloar_hyperelastic_data.xlsx', sheetname='Uniaxial') ud = udf.as_matrix(columns=('Engineering Strain', 'Engineering Stress (MPa)')) # Biaxial data bdf = read_excel('Treloar_hyperelastic_data.xlsx', sheetname='Biaxial') bd = bdf.as_matrix(columns=('Engineering Strain', 'Engineering Stress (MPa)')) # Pure shear data sdf = read_excel('Treloar_hyperelastic_data.xlsx', sheetname='Pure Shear') sd = sdf.as_matrix(columns=('Engineering Strain', 'Engineering Stress (MPa)')) """ Explanation: Experimental Data End of explanation """ uf = hyperopt(UNIAXIAL_DATA, ud[:,0], ud[:,1]) print(uf.summary()) """ Explanation: Uniaxial Data Find the optimal fit to the uniaxial stress data with a hyperelastic polynomial model. The symbolic constant UNIAXIAL_DATA instructs hyperopt to interpret the input data as coming from a uniaxial stress experiment. End of explanation """ uf.popt """ Explanation: At this point, the optimal parameters have been determined and are accessible with the popt attribute: End of explanation """ uf.todict() """ Explanation: The optimal parameters are also available as a dictionary via the todict method: End of explanation """ uf.error """ Explanation: The error in the fit: End of explanation """ show(uf.bp_plot()) """ Explanation: Plots are generated with the bp_plot method End of explanation """ bf = hyperopt(BIAXIAL_DATA, bd[:,0], bd[:,1]) print(bf.summary()) show(bf.bp_plot()) """ Explanation: Biaxial Data Biaxial data is fit in a similar manner: End of explanation """ sf = hyperopt(SHEAR_DATA, sd[:,0], sd[:,1]) print(sf.summary()) show(sf.bp_plot()) """ Explanation: Shear Data Lastly, the shear data is fit End of explanation """ y1 = sf.eval(overlay=uf) y2 = sf.eval() err = sqrt(mean((y1-y2)**2)) / average(abs(y2)) print(err) show(sf.bp_plot(overlay=[bf, uf])) show(uf.bp_plot(overlay=[sf])) show(bf.bp_plot(overlay=[uf, sf])) """ Explanation: Comparison of Fits Examine the error in the shear fit using parameters from the uniaxial fit End of explanation """ f = hyperopt2(SHEAR_DATA, sd[:,0], sd[:,1], UNIAXIAL_DATA, ud[:,0], ud[:,1], BIAXIAL_DATA, bd[:,0], bd[:,1]) print(f.summary()) f.error2 p = f.bp_plot(strain=linspace(0,6.5), points=False) p.circle(sd[:,0], sd[:,1], color='black', legend='Shear data') p.circle(bd[:,0], bd[:,1], color='red', legend='Biaxial data') p.circle(ud[:,0], ud[:,1], color='green', legend='Uniaxial data') show(p) """ Explanation: hyperopt2 hyperopt2 attempts to find the model that fits all given data the best. End of explanation """
FedericoMuciaccia/SistemiComplessi
src/heatmap_and_range.ipynb
mit
roma = pandas.read_csv("../data/Roma_towers.csv")
coordinate = roma[['lat', 'lon']].values
heatmap = gmaps.heatmap(coordinate)
gmaps.display(heatmap)

# TODO write that behind these two simple lines lies an entire afternoon of swearing
colosseo = (41.890183, 12.492369)

import gmplot
from gmplot import GoogleMapPlotter

# gmap = gmplot.from_geocode("San Francisco")
mappa = gmplot.GoogleMapPlotter(41.890183, 12.492369, 11)

#gmap.plot(latitudes, longitudes, 'cornflowerblue', edge_width=10)
#gmap.plot((41.890183, 41.891183), (12.492369, 12.493369), 'cornflowerblue', edge_width=10)
#gmap.scatter(more_lats, more_lngs, '#3B0B39', size=40, marker=False)
#gmap.scatter(marker_lats, marker_lngs, 'k', marker=True)
#gmap.heatmap(heat_lats, heat_lngs)

#mappa.scatter((41.890183, 41.891183), (12.492369, 12.493369), color='#3B0B39', size=40, marker=False)

#mappa.scatter(roma.lat.values,
#              roma.lon.values,
#              color='#3333ff',
#              size=0,
#              marker=False)

mappa.heatmap(roma.lat.values, roma.lon.values)

mappa.draw("../html/heatmap.html")

#print a
"""
Explanation: Creating the map
Instead of a scatter plot with coverage radii, the library only lets us draw a (possibly weighted) heatmap.
End of explanation
"""
# filter conditions
raggioMin = 1
# raggioMax = 1000
raggiPositivi = roma.range >= raggioMin
# raggiCorti = roma.range < raggioMax

# query with the conditions
#romaFiltrato = roma[raggiPositivi & raggiCorti]
romaFiltrato = roma[raggiPositivi]

raggi = romaFiltrato.range
print max(raggi)

# logarithmic (base 2) binning in log-log (base 10) plots of integer histograms
def logBinnedHist(histogramResults):
    """
    histogramResults = numpy.histogram(...) OR matplotlib.pyplot.hist(...)
    returns x, y to be used with matplotlib.pyplot.step(x, y, where='post')
    """

    # TODO this only works with the pyplot histogram;
    # the numpy one returns just the tuple (values, binEdges)
    values, binEdges, others = histogramResults
    # print binEdges

    # TODO
    # if 0 in binEdges:
    #     return "error: log2(0) = ?"

    # print len(values), len(binEdges)
    # print binEdges

    # TODO check the case where the bins do not start at 1
    # int rounds down to the lower integer
    linMin = min(binEdges)
    linMax = max(binEdges)
    # print linMin, linMax
    logStart = int(numpy.log2(linMin))
    logStop = int(numpy.log2(linMax))
    # print logStart, logStop
    nLogBins = logStop - logStart + 1
    # print nLogBins
    logBins = numpy.logspace(logStart, logStop, num=nLogBins, base=2, dtype=int)
    # print logBins
    # 1,2,4,8,16,32,64,128,256,512,1024

    ######################

    linStart = 2**logStop + 1
    linStop = linMax
    # print linStart, linStop
    nLinBins = linStop - linStart + 1
    # print nLinBins
    linBins = numpy.linspace(linStart, linStop, num=nLinBins, dtype=int)
    # print linBins

    ######################

    bins = numpy.append(logBins, linBins)
    # print bins
    # print len(bins)

    # TODO make this function general!!!
    totalValues, binEdges, otherBinNumbers = scipy.stats.binned_statistic(raggi.values, raggi.values, statistic='count', bins=bins)
    # print totalValues
    # print len(totalValues)

    # use the properties of base-2 logarithms:
    # 2^(n+1) - 2^n = 2^n
    correzioniDatiCanalizzatiLog = numpy.delete(logBins, -1)
    # print correzioniDatiCanalizzatiLog
    # print len(correzioniDatiCanalizzatiLog)
    correzioniDatiCanalizzatiLin = numpy.ones(nLinBins, dtype=int)
    # print correzioniDatiCanalizzatiLin
    # print len(correzioniDatiCanalizzatiLin)
    correzioniDatiCanalizzati = numpy.append(correzioniDatiCanalizzatiLog, correzioniDatiCanalizzatiLin)
    # print correzioniDatiCanalizzati
    # print len(correzioniDatiCanalizzati)

    x = numpy.concatenate(([0], bins))
    conteggi = totalValues/correzioniDatiCanalizzati
    # TODO special case for the plot below
    # (so as not to show the part beyond the last power of 2)
    l = len(correzioniDatiCanalizzatiLin)
    conteggi[-l:] = numpy.zeros(l, dtype='int')
    y = numpy.concatenate(([0], conteggi, [0]))

    return x, y

# creation of a log-log histogram for the distribution of the coverage radius

# TODO try grouping the tails
# example: with bins=100
# or with base-2 logarithmic binning, but averaged
# so that it comes out evenly spaced in the logarithmic plot
# the program wants the data weighted, not the bins
# one could implement a map that weights the data
# according to the integer-division-by-log2 function

# TODO add a small circle marking the maximum range, or a red label "20341 m!"
# TODO explain why there are so many counts at 1,2,4,... metres
# TODO derive the range from the raw data, with a clustering algorithm
# over the various observations of each antenna. machine learning?

# TODO write a function that makes logarithmic plots with bins
# evenly spaced in the logarithmic plot (weighted bins)

# overall plot settings
# pyplot.figure(figsize=(20,8)) # size in inches
pyplot.figure(figsize=(10,10))
matplotlib.pyplot.xlim(10**0,10**5)
matplotlib.pyplot.ylim(10**-3,10**2)
pyplot.title('Distribution of the coverage radius')
pyplot.ylabel("Number of antennas")
pyplot.xlabel("Coverage [m]")
# pyplot.gca().set_xscale("log")
# pyplot.gca().set_yscale("log")
pyplot.xscale("log")
pyplot.yscale("log")

# lin binning
distribuzioneRange = pyplot.hist(raggi.values, bins=max(raggi)-min(raggi), histtype='step', color='#3385ff', label='linear binning')

# log_2 binning
xLog2, yLog2 = logBinnedHist(distribuzioneRange)
matplotlib.pyplot.step(xLog2, yLog2, where='post', color='#ff3300', linewidth=2, label='log_2 weighted binning') #where = mid OR post
# matplotlib.pyplot.plot(xLog2, yLog2)

# vertical line marking the maximum range
pyplot.axvline(x=max(raggi), color='#808080', linestyle='dotted', label='max range (41832m)')

# legend and saving
pyplot.legend(loc='lower left', frameon=False)
pyplot.savefig('../img/range/infinite_log_binning.svg', format='svg', dpi=600, transparent=True)
"""
Explanation: NOTES from looking at the map
There seem to be problems with the positions of the antennas: there are antennas on the Tiber, on Ponte Sisto, inside the little park of Castel Sant'Angelo, in the middle of the big lawn at La Sapienza, on top of the Physics department...
Moreover, there seems to be a strange clustering along the main traffic routes. That is reasonable if the goal is to guarantee coverage in a city with heavy tourist flows like Rome, but probably not to the point of making 7 antennas around Piazza del Pantheon plausible.
There are also isolated pairs of antennas that seem to be only a few metres apart. These are probably reconstruction artifacts.
Mozilla's reconstruction algorithm probably has several problems. If this is the situation for the cell antennas, I do not dare imagine the situation for wifi routers.
These measurements and reconstructions need to be precise, because their future geolocation service will rest on them. Someone should point this out to them (maybe they will hire us :-) )
Analysis of the antennas' coverage radius
Since we will need a plot with logarithmic scales, we keep only the data with range != 0
End of explanation
"""
# histogram over the integers
unique, counts = numpy.unique(raggi.values, return_counts=True)
# print numpy.asarray((unique, counts)).T

rank = numpy.arange(1,len(unique)+1)
frequency = numpy.array(sorted(counts, reverse=True))

pyplot.figure(figsize=(20,10))
pyplot.title('Distribution of the coverage radius')
pyplot.ylabel("Number of antennas")
pyplot.xlabel("Coverage [m] or ranking")
pyplot.xscale("log")
pyplot.yscale("log")
matplotlib.pyplot.xlim(10**0,10**4)
matplotlib.pyplot.ylim(10**0,10**2)

matplotlib.pyplot.step(x=rank, y=frequency, where='post', label='frequency-rank', color='#00cc44')
matplotlib.pyplot.scatter(x=unique, y=counts, marker='o', color='#3385ff', label='linear binning (scatter)')
matplotlib.pyplot.step(xLog2, yLog2, where='post', color='#ff3300', label='log_2 weighted binning')

pyplot.legend(loc='lower left', frameon=False)
pyplot.savefig('../img/range/range_distribution.svg', format='svg', dpi=600, transparent=True)
"""
Explanation: Frequency-rank
End of explanation
"""
conteggi, binEdges = numpy.histogram(raggi.values, bins=max(raggi)-min(raggi))
conteggiCumulativi = numpy.cumsum(conteggi)
valoriRaggi = numpy.delete(binEdges, -1)
N = len(raggi.values)

pyplot.figure(figsize=(12,10))
pyplot.title('Coverage radius')
pyplot.ylabel("Number of antennas")
pyplot.xlabel("Coverage [m]")
pyplot.xscale("log")
pyplot.yscale("log")
matplotlib.pyplot.xlim(10**0,10**5)
matplotlib.pyplot.ylim(10**0,10**4)

matplotlib.pyplot.step(x=valoriRaggi, y=conteggiCumulativi, where='post', label='Cumulative', color='#009999')
matplotlib.pyplot.step(x=valoriRaggi, y=N-conteggiCumulativi, where='post', label='N - Cumulative', color='#ff0066')
pyplot.axhline(y=N, color='#808080', linestyle='dotted', label='N_max = 6505')

pyplot.legend(loc='lower left', frameon=False)
pyplot.savefig('../img/range/range_cumulated_distribution.svg', format='svg', dpi=600, transparent=True)

# TODO do the fit by hand and check the relations between the various exponents
"""
Explanation: Cumulative histogram
The cumulative distribution function cdf(x) is the probability that a real-valued random variable X will take a value less than or equal to x.
End of explanation
"""
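As a cross-check of the cumulative curves above, the empirical CDF can also be built directly from the sorted ranges, with no call to numpy.histogram. This sketch reuses the raggi series and the numpy/pyplot imports from the cells above; note that the last CCDF value is zero and therefore drops out of the log-scaled plot.
# Empirical CDF / CCDF straight from the sorted ranges (no binning involved)
sorted_raggi = numpy.sort(raggi.values)
N_antennas = len(sorted_raggi)
cumulative_counts = numpy.arange(1, N_antennas + 1)   # antennas with range <= x

pyplot.figure(figsize=(12, 10))
pyplot.xscale("log")
pyplot.yscale("log")
pyplot.step(sorted_raggi, cumulative_counts, where='post', label='empirical CDF (counts)')
pyplot.step(sorted_raggi, N_antennas - cumulative_counts, where='post', label='empirical CCDF (counts)')
pyplot.legend(loc='lower left', frameon=False)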
dnc1994/MachineLearning-UW
ml-classification/blank/module-5-decision-tree-assignment-2-blank.ipynb
mit
import graphlab """ Explanation: Implementing binary decision trees The goal of this notebook is to implement your own binary decision tree classifier. You will: Use SFrames to do some feature engineering. Transform categorical variables into binary variables. Write a function to compute the number of misclassified examples in an intermediate node. Write a function to find the best feature to split on. Build a binary decision tree from scratch. Make predictions using the decision tree. Evaluate the accuracy of the decision tree. Visualize the decision at the root node. Important Note: In this assignment, we will focus on building decision trees where the data contain only binary (0 or 1) features. This allows us to avoid dealing with: * Multiple intermediate nodes in a split * The thresholding issues of real-valued features. This assignment may be challenging, so brace yourself :) Fire up Graphlab Create Make sure you have the latest version of GraphLab Create. End of explanation """ loans = graphlab.SFrame('lending-club-data.gl/') """ Explanation: Load the lending club dataset We will be using the same LendingClub dataset as in the previous assignment. End of explanation """ loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1) loans = loans.remove_column('bad_loans') """ Explanation: Like the previous assignment, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan. End of explanation """ features = ['grade', # grade of the loan 'term', # the term of the loan 'home_ownership', # home_ownership status: own, mortgage or rent 'emp_length', # number of years of employment ] target = 'safe_loans' loans = loans[features + [target]] """ Explanation: Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical features: grade of the loan the length of the loan term the home ownership status: own, mortgage, rent number of years of employment. Since we are building a binary decision tree, we will have to convert these categorical features to a binary representation in a subsequent section using 1-hot encoding. End of explanation """ loans """ Explanation: Let's explore what the dataset looks like. End of explanation """ safe_loans_raw = loans[loans[target] == 1] risky_loans_raw = loans[loans[target] == -1] # Since there are less risky loans than safe loans, find the ratio of the sizes # and use that percentage to undersample the safe loans. percentage = len(risky_loans_raw)/float(len(safe_loans_raw)) safe_loans = safe_loans_raw.sample(percentage, seed = 1) risky_loans = risky_loans_raw loans_data = risky_loans.append(safe_loans) print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data)) print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data)) print "Total number of loans in our new dataset :", len(loans_data) """ Explanation: Subsample dataset to make sure classes are balanced Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results. 
End of explanation """ loans_data = risky_loans.append(safe_loans) for feature in features: loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1}) loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature) # Change None's to 0's for column in loans_data_unpacked.column_names(): loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0) loans_data.remove_column(feature) loans_data.add_columns(loans_data_unpacked) """ Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods. Transform categorical data into binary features In this assignment, we will implement binary decision trees (decision trees for binary features, a specific case of categorical variables taking on two values, e.g., true/false). Since all of our features are currently categorical features, we want to turn them into binary features. For instance, the home_ownership feature represents the home ownership status of the loanee, which is either own, mortgage or rent. For example, if a data point has the feature {'home_ownership': 'RENT'} we want to turn this into three features: { 'home_ownership = OWN' : 0, 'home_ownership = MORTGAGE' : 0, 'home_ownership = RENT' : 1 } Since this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding. End of explanation """ features = loans_data.column_names() features.remove('safe_loans') # Remove the response variable features print "Number of features (after binarizing categorical variables) = %s" % len(features) """ Explanation: Let's see what the feature columns look like now: End of explanation """ loans_data['grade.A'] """ Explanation: Let's explore what one of these columns looks like: End of explanation """ print "Total number of grade.A loans : %s" % loans_data['grade.A'].sum() print "Expexted answer : 6422" """ Explanation: This column is set to 1 if the loan grade is A and 0 otherwise. Checkpoint: Make sure the following answers match up. End of explanation """ train_data, test_data = loans_data.random_split(.8, seed=1) """ Explanation: Train-test split We split the data into a train test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result. End of explanation """ def intermediate_node_num_mistakes(labels_in_node): # Corner case: If labels_in_node is empty, return 0 if len(labels_in_node) == 0: return 0 # Count the number of 1's (safe loans) ## YOUR CODE HERE # Count the number of -1's (risky loans) ## YOUR CODE HERE # Return the number of mistakes that the majority classifier makes. ## YOUR CODE HERE """ Explanation: Decision tree implementation In this section, we will implement binary decision trees from scratch. There are several steps involved in building a decision tree. For that reason, we have split the entire assignment into several sections. 
Function to count number of mistakes while predicting majority class Recall from the lecture that prediction at an intermediate node works by predicting the majority class for all data points that belong to this node. Now, we will write a function that calculates the number of missclassified examples when predicting the majority class. This will be used to help determine which feature is the best to split on at a given node of the tree. Note: Keep in mind that in order to compute the number of mistakes for a majority classifier, we only need the label (y values) of the data points in the node. Steps to follow : * Step 1: Calculate the number of safe loans and risky loans. * Step 2: Since we are assuming majority class prediction, all the data points that are not in the majority class are considered mistakes. * Step 3: Return the number of mistakes. Now, let us write the function intermediate_node_num_mistakes which computes the number of misclassified examples of an intermediate node given the set of labels (y values) of the data points contained in the node. Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in. End of explanation """ # Test case 1 example_labels = graphlab.SArray([-1, -1, 1, 1, 1]) if intermediate_node_num_mistakes(example_labels) == 2: print 'Test passed!' else: print 'Test 1 failed... try again!' # Test case 2 example_labels = graphlab.SArray([-1, -1, 1, 1, 1, 1, 1]) if intermediate_node_num_mistakes(example_labels) == 2: print 'Test passed!' else: print 'Test 2 failed... try again!' # Test case 3 example_labels = graphlab.SArray([-1, -1, -1, -1, -1, 1, 1]) if intermediate_node_num_mistakes(example_labels) == 2: print 'Test passed!' else: print 'Test 3 failed... try again!' """ Explanation: Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong. End of explanation """ def best_splitting_feature(data, features, target): best_feature = None # Keep track of the best feature best_error = 10 # Keep track of the best error so far # Note: Since error is always <= 1, we should intialize it with something larger than 1. # Convert to float to make sure error gets computed correctly. num_data_points = float(len(data)) # Loop through each feature to consider splitting on that feature for feature in features: # The left split will have all data points where the feature value is 0 left_split = data[data[feature] == 0] # The right split will have all data points where the feature value is 1 ## YOUR CODE HERE right_split = # Calculate the number of misclassified examples in the left split. # Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes) # YOUR CODE HERE left_mistakes = # Calculate the number of misclassified examples in the right split. ## YOUR CODE HERE right_mistakes = # Compute the classification error of this split. 
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points) ## YOUR CODE HERE error = # If this is the best error we have found so far, store the feature as best_feature and the error as best_error ## YOUR CODE HERE if error < best_error: return best_feature # Return the best feature we found """ Explanation: Function to pick best feature to split on The function best_splitting_feature takes 3 arguments: 1. The data (SFrame of data which includes all of the feature columns and label column) 2. The features to consider for splits (a list of strings of column names to consider for splits) 3. The name of the target/label column (string) The function will loop through the list of possible features, and consider splitting on each of them. It will calculate the classification error of each split and return the feature that had the smallest classification error when split on. Recall that the classification error is defined as follows: $$ \mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}} $$ Follow these steps: * Step 1: Loop over each feature in the feature list * Step 2: Within the loop, split the data into two groups: one group where all of the data has feature value 0 or False (we will call this the left split), and one group where all of the data has feature value 1 or True (we will call this the right split). Make sure the left split corresponds with 0 and the right split corresponds with 1 to ensure your implementation fits with our implementation of the tree building process. * Step 3: Calculate the number of misclassified examples in both groups of data and use the above formula to compute the classification error. * Step 4: If the computed error is smaller than the best error found so far, store this feature and its error. This may seem like a lot, but we have provided pseudocode in the comments in order to help you implement the function correctly. Note: Remember that since we are only dealing with binary features, we do not have to consider thresholds for real-valued features. This makes the implementation of this function much easier. Fill in the places where you find ## YOUR CODE HERE. There are five places in this function for you to fill in. End of explanation """ if best_splitting_feature(train_data, features, 'safe_loans') == 'term. 36 months': print 'Test passed!' else: print 'Test failed... try again!' """ Explanation: To test your best_splitting_feature function, run the following code: End of explanation """ def create_leaf(target_values): # Create a leaf node leaf = {'splitting_feature' : None, 'left' : None, 'right' : None, 'is_leaf': } ## YOUR CODE HERE # Count the number of data points that are +1 and -1 in this node. num_ones = len(target_values[target_values == +1]) num_minus_ones = len(target_values[target_values == -1]) # For the leaf node, set the prediction to be the majority class. # Store the predicted class (1 or -1) in leaf['prediction'] if num_ones > num_minus_ones: leaf['prediction'] = ## YOUR CODE HERE else: leaf['prediction'] = ## YOUR CODE HERE # Return the leaf node return leaf """ Explanation: Building the tree With the above functions implemented correctly, we are now ready to build our decision tree. Each node in the decision tree is represented as a dictionary which contains the following keys and possible values: { 'is_leaf' : True/False. 'prediction' : Prediction at the leaf node. 'left' : (dictionary corresponding to the left tree). 'right' : (dictionary corresponding to the right tree). 
'splitting_feature' : The feature that this node splits on. } First, we will write a function that creates a leaf node given a set of target values. Fill in the places where you find ## YOUR CODE HERE. There are three places in this function for you to fill in. End of explanation """ def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10): remaining_features = features[:] # Make a copy of the features. target_values = data[target] print "--------------------------------------------------------------------" print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values)) # Stopping condition 1 # (Check if there are mistakes at current node. # Recall you wrote a function intermediate_node_num_mistakes to compute this.) if == 0: ## YOUR CODE HERE print "Stopping condition 1 reached." # If not mistakes at current node, make current node a leaf node return create_leaf(target_values) # Stopping condition 2 (check if there are remaining features to consider splitting on) if remaining_features == : ## YOUR CODE HERE print "Stopping condition 2 reached." # If there are no remaining features to consider, make current node a leaf node return create_leaf(target_values) # Additional stopping condition (limit tree depth) if current_depth >= : ## YOUR CODE HERE print "Reached maximum depth. Stopping for now." # If the max tree depth has been reached, make current node a leaf node return create_leaf(target_values) # Find the best splitting feature (recall the function best_splitting_feature implemented above) ## YOUR CODE HERE # Split on the best feature that we found. left_split = data[data[splitting_feature] == 0] right_split = ## YOUR CODE HERE remaining_features.remove(splitting_feature) print "Split on feature %s. (%s, %s)" % (\ splitting_feature, len(left_split), len(right_split)) # Create a leaf node if the split is "perfect" if len(left_split) == len(data): print "Creating leaf node." return create_leaf(left_split[target]) if len(right_split) == len(data): print "Creating leaf node." ## YOUR CODE HERE # Repeat (recurse) on left and right subtrees left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth) ## YOUR CODE HERE right_tree = return {'is_leaf' : False, 'prediction' : None, 'splitting_feature': splitting_feature, 'left' : left_tree, 'right' : right_tree} """ Explanation: We have provided a function that learns the decision tree recursively and implements 3 stopping conditions: 1. Stopping condition 1: All data points in a node are from the same class. 2. Stopping condition 2: No more features to split on. 3. Additional stopping condition: In addition to the above two stopping conditions covered in lecture, in this assignment we will also consider a stopping condition based on the max_depth of the tree. By not letting the tree grow too deep, we will save computational effort in the learning process. Now, we will write down the skeleton of the learning algorithm. Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in. End of explanation """ def count_nodes(tree): if tree['is_leaf']: return 1 return 1 + count_nodes(tree['left']) + count_nodes(tree['right']) """ Explanation: Here is a recursive function to count the nodes in your tree: End of explanation """ small_data_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 3) if count_nodes(small_data_decision_tree) == 13: print 'Test passed!' 
else: print 'Test failed... try again!' print 'Number of nodes found :', count_nodes(small_data_decision_tree) print 'Number of nodes that should be there : 13' """ Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding. End of explanation """ # Make sure to cap the depth at 6 by using max_depth = 6 """ Explanation: Build the tree! Now that all the tests are passing, we will train a tree model on the train_data. Limit the depth to 6 (max_depth = 6) to make sure the algorithm doesn't run for too long. Call this tree my_decision_tree. Warning: This code block may take 1-2 minutes to learn. End of explanation """ def classify(tree, x, annotate = False): # if the node is a leaf node. if tree['is_leaf']: if annotate: print "At leaf, predicting %s" % tree['prediction'] return tree['prediction'] else: # split on feature. split_feature_value = x[tree['splitting_feature']] if annotate: print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value) if split_feature_value == 0: return classify(tree['left'], x, annotate) else: ### YOUR CODE HERE """ Explanation: Making predictions with a decision tree As discussed in the lecture, we can make predictions from the decision tree with a simple recursive function. Below, we call this function classify, which takes in a learned tree and a test point x to classify. We include an option annotate that describes the prediction path when set to True. Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in. End of explanation """ test_data[0] print 'Predicted class: %s ' % classify(my_decision_tree, test_data[0]) """ Explanation: Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point. End of explanation """ classify(my_decision_tree, test_data[0], annotate=True) """ Explanation: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class: End of explanation """ def evaluate_classification_error(tree, data): # Apply the classify(tree, x) to each row in your data prediction = data.apply(lambda x: classify(tree, x)) # Once you've made the predictions, calculate the classification error and return it ## YOUR CODE HERE """ Explanation: Quiz question: What was the feature that my_decision_tree first split on while making the prediction for test_data[0]? Quiz question: What was the first feature that lead to a right split of test_data[0]? Quiz question: What was the last feature split on before reaching a leaf node for test_data[0]? Evaluating your decision tree Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset. Again, recall that the classification error is defined as follows: $$ \mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}} $$ Now, write a function called evaluate_classification_error that takes in as input: 1. tree (as described above) 2. data (an SFrame) This function should return a prediction (class label) for each row in data using the decision tree. Fill in the places where you find ## YOUR CODE HERE. There is one place in this function for you to fill in. End of explanation """ evaluate_classification_error(my_decision_tree, test_data) """ Explanation: Now, let's use this function to evaluate the classification error on the test set. 
End of explanation """ def print_stump(tree, name = 'root'): split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months' if split_name is None: print "(leaf, label: %s)" % tree['prediction'] return None split_feature, split_value = split_name.split('.') print ' %s' % name print ' |---------------|----------------|' print ' | |' print ' | |' print ' | |' print ' [{0} == 0] [{0} == 1] '.format(split_name) print ' | |' print ' | |' print ' | |' print ' (%s) (%s)' \ % (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'), ('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree')) print_stump(my_decision_tree) """ Explanation: Quiz Question: Rounded to 2nd decimal point, what is the classification error of my_decision_tree on the test_data? Printing out a decision stump As discussed in the lecture, we can print out a single decision stump (printing out the entire tree is left as an exercise to the curious reader). End of explanation """ print_stump(my_decision_tree['left'], my_decision_tree['splitting_feature']) """ Explanation: Quiz Question: What is the feature that is used for the split at the root node? Exploring the intermediate left subtree The tree is a recursive dictionary, so we do have access to all the nodes! We can use * my_decision_tree['left'] to go left * my_decision_tree['right'] to go right End of explanation """ print_stump(my_decision_tree['left']['left'], my_decision_tree['left']['splitting_feature']) """ Explanation: Exploring the left subtree of the left subtree End of explanation """
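To make the splitting criterion used throughout this assignment concrete without spelling out the SFrame-based answers, here is a small, self-contained sketch of the same two ideas, majority-class mistake counting and picking the binary feature with the lowest classification error, written against plain Python lists of +1/-1 labels. The names majority_mistakes and pick_best_split are illustrative only; they are not the functions the grader expects.
def majority_mistakes(labels):
    # Mistakes made by predicting the majority class for a list of +1/-1 labels.
    if len(labels) == 0:
        return 0
    positives = sum(1 for y in labels if y == +1)
    negatives = len(labels) - positives
    return min(positives, negatives)

def pick_best_split(rows, labels, feature_names):
    # Return the binary feature whose 0/1 split gives the lowest classification error.
    best_feature, best_error = None, float('inf')
    for j, name in enumerate(feature_names):
        left  = [y for x, y in zip(rows, labels) if x[j] == 0]
        right = [y for x, y in zip(rows, labels) if x[j] == 1]
        error = (majority_mistakes(left) + majority_mistakes(right)) / float(len(labels))
        if error < best_error:
            best_feature, best_error = name, error
    return best_feature

# Tiny smoke test with two binary features: f0 separates the labels perfectly.
rows = [(0, 1), (0, 0), (1, 1), (1, 0)]
labels = [-1, -1, +1, +1]
print(pick_best_split(rows, labels, ['f0', 'f1']))   # 'f0'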
lucasmaystre/choix
notebooks/intro-pairwise.ipynb
mit
import choix import networkx as nx import numpy as np %matplotlib inline np.set_printoptions(precision=3, suppress=True) """ Explanation: Introduction using pairwise-comparison data This notebook provides a gentle introduction to the choix library. We consider the case of pairwise-comparison outcomes between items from some set. End of explanation """ n_items = 5 data = [ (1, 0), (0, 4), (3, 1), (0, 2), (2, 4), (4, 3), ] """ Explanation: In choix, items are represented by $n$ consecutive integers ${0, \ldots, n-1 }$. The event "item $i$ wins over item $j$" is represented by the Python tuple (i, j). Note that the winning item always comes first in the tuple. We start by defining a small dataset of comparison outcomes. End of explanation """ graph = nx.DiGraph(data=data) nx.draw(graph, with_labels=True) """ Explanation: This dataset can be visually represented by using a graph: each node is an item, and there is an edge from node $i$ to node $j$ for every observation "$i$ wins over $j$". End of explanation """ params = choix.ilsr_pairwise(n_items, data) print(params) """ Explanation: Suppose that we want to fit a Bradley-Terry model on this data. choix provides several algorithms to do this; below, we use a maximum-likelihood inference algorithm called I-LSR. End of explanation """ print("ranking (worst to best):", np.argsort(params)) """ Explanation: The parameters can be thought of as the "strength" (or utility) of each item. It is possible to use them to rank the items: simply order the items by increasing parameter value. End of explanation """ prob_1_wins, prob_4_wins = choix.probabilities([1, 4], params) print("Prob(1 wins over 4): {:.2f}".format(prob_1_wins)) """ Explanation: It is also possible to use the parameters to predict outcomes of future comparisons. End of explanation """ n_items = 4 data = [(3, 2), (2, 1), (1, 0)] graph = nx.DiGraph(data=data) nx.draw(graph, with_labels=True) """ Explanation: Dealing with sparsity When the comparison graph is not connected, the maximum-likelihood estimate is not well defined. This happens for example when one item always wins, or always loses. In the following example, item $3$ always wins, and item $0$ always loses. End of explanation """ choix.ilsr_pairwise(n_items, data) """ Explanation: In these cases, most of the estimators will fail by default. End of explanation """ choix.ilsr_pairwise(n_items, data, alpha=0.01) """ Explanation: The problem can be solved by adding a little bit of regularization as follows. End of explanation """
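For intuition about what choix.probabilities returned earlier, the pairwise win probability in a Bradley-Terry model is a two-way softmax of the item parameters. The sketch below assumes the conventional parameterisation in which params holds log-strengths, so it should reproduce prob_1_wins from the first fitted model up to numerical noise.
import numpy as np

def bt_win_probability(params, i, j):
    # P(item i beats item j) under a Bradley-Terry model with log-strength parameters.
    return np.exp(params[i]) / (np.exp(params[i]) + np.exp(params[j]))

print("Prob(1 wins over 4): {:.2f}".format(bt_win_probability(params, 1, 4)))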
tatjanus/cianparser
cian_parser2.0.ipynb
bsd-2-clause
import requests import re from bs4 import BeautifulSoup import pandas as pd import time import numpy as np def html_stripper(text): return re.sub('<[^<]+?>', '', str(text)) """ Explanation: Посмотрев на свой предыдущий ноутбук, я ощутила острое желание все переделать и реструктурировать. Прошлая версия по сути была больше изготовлением кирпичиков, из которых сейчас уже я соберу полноценный парсер. Как функцию, а не последовательность ячеек. Все необходимое я перенесла сюда, также добавила что-то новое. В результате я хочу получить функцию cianParser(), которая возвращает DataFrame. End of explanation """ links = dict([('NW' ,[np.nan]), ('C',[np.nan]), ('N',[np.nan]), ('NE',[np.nan]), ('E',[np.nan]), ('SE',[np.nan]), ('S',[np.nan]), ('SW',[np.nan]), ('W',[np.nan])]) """ Explanation: Тут я соберу все ссылки на квартиры по каждому округу, запишу их в словарь links. Я хочу хранить ссылки в словаре. Ключами будут округи, значениями - все найденные ссылки для округа. End of explanation """ districts = {1: 'NW', 4: 'C', 5:'N', 6:'NE', 7:'E', 8:'SE', 9:'S', 10:'SW', 11:'W'} zone_bounds = {1: (125, 133), 4: (13, 23), 5: (23, 39), 6:(39, 56), 7:(56, 72), 8:(72, 84), 9:(84, 100), 10:(100, 112), 11:(112, 125)} for i in districts.keys(): left = zone_bounds[i][0] right = zone_bounds[i][1] for j in range(left, right): zone = 'http://www.cian.ru/cat.php?deal_type=sale&district%5B0%5D=' + str(j) + '&engine_version=2&offer_type=flat&room1=1&room2=1&room3=1&room4=1&room5=1&room6=1' for page in range(1, 31): page_url = zone.format(page) search_page = requests.get(page_url) search_page = search_page.content search_page = BeautifulSoup(search_page, 'lxml') flat_urls = search_page.findAll('div', attrs = {'ng-class':"{'serp-item_removed': offer.remove.state, 'serp-item_popup-opened': isPopupOpen}"}) flat_urls = re.split('http://www.cian.ru/sale/flat/|/" ng-class="', str(flat_urls)) for link in flat_urls: if link.isdigit(): links[districts.get(i)].append(link) """ Explanation: Cian предлагает для просмотра не более 30 страниц выдачи для каждого запроса. Всего у нас получится не более 30 страниц х 25 квартир на странице х 9 округов = 6750 объектов. Вроде нормально. Но практика показала, что если выбросить дубликаты, то останется около 900 объектов. С чем это может быть связано? Я заметила, что некоторые объявления фигурируют больше, чем в одном округе (я даже видела дом по Бирюлевской улице, отнесенный не только к своему, но и к Тверскому району ЦАО), какие-то объявления появляются в топе выдачи не только первой страницы запроса. Но причины тут не так важны, их наверняка побольше. Важно другое - мне захотелось в итоге иметь побольше датасет. Тогда я решила сделать свои запросы чуть более подробными и пройтись не по 9 округам, а по всем районам (в каждом округе их довольно много, порядка 15-20). Итого получилось прохождение по результатам 120 поисковых запросов. End of explanation """ def getPrice(flat_page): price = flat_page.find('div', attrs={'class':'object_descr_price'}) price = re.split('<div>|руб|\W', str(price)) price = "".join([i for i in price if i.isdigit()][-4:]) dollar = '808080' if dollar in price: price = price[6:] return int(price) """ Explanation: Здесь идет блок функций, наработанных и объясненных в прошлом ноутбуке. В этом я буду лишь указывать их предназначение. 
Цена По сравнению с предыдущим ноутбуком тут появилась обработка ситуации цены в долларах, которая мне встретилась только на этот раз End of explanation """ from math import radians, cos, sin, asin, sqrt AVG_EARTH_RADIUS = 6371 def haversine(point1, point2): # извлекаем долготу и широту lat1, lng1 = point1 lat2, lng2 = point2 # переводим все эти значения в радианы lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2)) # вычисляем расстояние по формуле lat = lat2 - lat1 lng = lng2 - lng1 d = sin(lat * 0.5) ** 2 + cos(lat1) * cos(lat2) * sin(lng * 0.5) ** 2 h = 2 * AVG_EARTH_RADIUS * asin(sqrt(d)) return h def getCoords(flat_page): coords = flat_page.find('div', attrs={'class':'map_info_button_extend'}).contents[1] coords = re.split('&amp|center=|%2C', str(coords)) coords_list = [] for item in coords: if item[0].isdigit(): coords_list.append(item) lat = float(coords_list[0]) lon = float(coords_list[1]) return lat, lon def getDistance(coords): MSC_POINT_ZERO = (55.755831, 37.617673) return haversine(MSC_POINT_ZERO, coords) """ Explanation: Расстояние до центра города (вспомогательные: гаверсинус - расстояние между двумя точками на сфере, получение координат) End of explanation """ def getRoom(flat_page): rooms_n = flat_page.find('div', attrs={'class':'object_descr_title'}) rooms_n = html_stripper(rooms_n) room_number = '' flag = 0 for i in re.split('-|\n', rooms_n): if 'много' in i: flag = 1 break elif 'комн' in i: break else: room_number += i if (flag): room_number = '6' room_number = "".join(room_number.split()) return int(room_number) """ Explanation: Количество комнат Тут я сменила значение mult на 6, так как решила все же впоследствии многокомнатным квартирам присваивать это число в это поле. End of explanation """ def getMetroDistance(flat_page): metro = flat_page.find('div', attrs={'class':'object_descr_metro'}) metro = re.split('metro_name|мин', str(metro)) if (len(metro) > 2): metro_dist = 0 power = 0 flag = 0 for i in range(0, len(metro[1])): if metro[1][-i-1].isdigit(): flag = 1 metro_dist += int(metro[1][-i-1]) * 10 ** power power += 1 elif (flag == 1): break else: metro_dist = np.nan return metro_dist """ Explanation: Расстояние до метро End of explanation """ def getMetroWalking(flat_page): metro = flat_page.find('div', attrs={'class':'object_descr_metro'}) metro = re.split('metro_name|мин', str(metro)) if (len(metro) > 2): if 'пешк' in metro[2]: walking = 1 elif 'машин' in metro[2]: walking = 0 else: walking = np.nan else: walking = np.nan return walking """ Explanation: До метро пешком/на машине End of explanation """ def getBrick(flat_page): table = flat_page.find('table', attrs = {'class':'object_descr_props'}) table = html_stripper(table) brick = np.nan building_block = re.split('Этаж|Тип продажи', table)[1] if 'Тип дом' in building_block: if (('кирпич' in building_block) | ('монолит' in building_block)): brick = 1 elif (('панельн' in building_block) | ('деревян' in building_block) | ('сталин' in building_block) | ('блочн' in building_block)): brick = 0 return brick def getNew(flat_page): table = flat_page.find('table', attrs = {'class':'object_descr_props'}) table = html_stripper(table) new = np.nan building_block = re.split('Этаж|Тип продажи', table)[1] if 'Тип дом' in building_block: if 'новостр' in building_block: new = 1 elif 'втор' in building_block: new = 0 return new """ Explanation: Тип дома: материал, новостройка/вторичка End of explanation """ def getFloor(flat_page): table = flat_page.find('table', attrs = {'class':'object_descr_props'}) table = 
html_stripper(table) floor_is = 0 building_block = re.split('Этаж|Тип продажи', table)[1] floor_block = re.split('\xa0/\xa0|\n|\xa0', building_block) for i in range(1, len(floor_block[2]) + 1): if(floor_block[2][-i].isdigit()): floor_is += int(floor_block[2][-i]) * 10**(i - 1) return floor_is def getNFloor(flat_page): table = flat_page.find('table', attrs = {'class':'object_descr_props'}) table = html_stripper(table) floors_count = np.nan building_block = re.split('Этаж|Тип продажи', table)[1] floor_block = re.split('\xa0/\xa0|\n|\xa0', building_block) if floor_block[3].isdigit(): floors_count = int(floor_block[3]) return floors_count """ Explanation: Этаж, этажность End of explanation """ def myStrToFloat(string): delimiter = 0 value = 0 for i in range(0, len(string)): if string[i] == ',': delimiter = i for i in range(0, delimiter): value += int(string[delimiter - i - 1]) * 10 ** i for i in range(1, len(string) - delimiter): value += (int(string[delimiter + i]) * (10 ** (-i))) return value def getTotsp(flat_page): table = flat_page.find('table', attrs = {'class':'object_descr_props'}) table = html_stripper(table) space_block = re.split('Общая площадь', table)[1] total = re.split('Площадь комнат', space_block)[0] total_space = re.split('\n|\xa0', total)[2] if total_space.isdigit(): total_space = int(total_space) else: total_space = myStrToFloat(total_space) return total_space """ Explanation: Общая площадь, жилая, площадь кухни (+ вспомогательный конвертор strToFloat() для чисел, для которых cian (в отличие от python) использует в качестве разделителя запятую, а не точку) Немного подправила конвертор, который ошибался на данных, которых не бывает на Циане, но всё же End of explanation """ def getLivesp(flat_page): table = flat_page.find('table', attrs = {'class':'object_descr_props'}) table = html_stripper(table) space_block = re.split('Общая площадь', table)[1] living = re.split('Жилая площадь', space_block)[1] living_space = re.split('\n|\xa0', living)[2] if living_space.isdigit(): living_space = int(living_space) elif (living_space == '–'): living_space = np.nan else: living_space = myStrToFloat(living_space) return living_space def getKitsp(flat_page): table = flat_page.find('table', attrs = {'class':'object_descr_props'}) table = html_stripper(table) space_block = re.split('Общая площадь', table)[1] optional_block = re.split('Жилая площадь', space_block)[1] kitchen_space = np.nan if 'Площадь кухни' in optional_block: kitchen_block = re.split('Площадь кухни', optional_block)[1] if re.split('\n|\xa0', kitchen_block)[2] != '–': if re.split('\n|\xa0', kitchen_block)[2].isdigit(): kitchen_space = int(re.split('\n|\xa0', kitchen_block)[2]) else: kitchen_space = myStrToFloat(re.split('\n|\xa0', kitchen_block)[2]) return kitchen_space """ Explanation: Добавила обработку ситуации с прочерком End of explanation """ def getBal(flat_page): table = flat_page.find('table', attrs = {'class':'object_descr_props'}) table = html_stripper(table) space_block = re.split('Общая площадь', table)[1] optional_block = re.split('Жилая площадь', space_block)[1] balcony = np.nan if 'Балкон' in optional_block: balcony_block = re.split('Балкон', optional_block)[1] if re.split('\n', balcony_block)[1] != 'нет': if re.split('\n', balcony_block)[1] != '–': balcony = int(re.split('\n', balcony_block)[1][0]) else: balcony = 0 return balcony def getTel(flat_page): table = flat_page.find('table', attrs = {'class':'object_descr_props'}) table = html_stripper(table) space_block = re.split('Общая площадь', table)[1] 
optional_block = re.split('Жилая площадь', space_block)[1] telephone = np.nan if 'Телефон' in optional_block: telephone_block = re.split('Телефон', optional_block)[1] if re.split('\n', telephone_block)[1] == 'да': telephone = 1 elif re.split('\n', telephone_block)[1] == 'нет': telephone = 0 return telephone """ Explanation: Отсутствие балкона/их количество, наличие телефона End of explanation """ def getFlatPage(link): flat_url = 'http://www.cian.ru/sale/flat/' + str(link) + '/' flat_page = requests.get(flat_url) flat_page = flat_page.content flat_page = BeautifulSoup(flat_page, 'lxml') return flat_page def getFlatUrl(page): page_url = district.format(page) search_page = requests.get(page_url) search_page = search_page.content search_page = BeautifulSoup(search_page, 'lxml') flat_url = search_page.findAll('div', attrs = {'ng-class':"{'serp-item_removed': offer.remove.state, 'serp-item_popup-opened': isPopupOpen}"}) flat_url = re.split('http://www.cian.ru/sale/flat/|/" ng-class="', str(flat_url)) return flat_url """ Explanation: Теперь новые функции Эти выведены в отдельные исключительно для удобства при чтении бОльших, содержащих эти части кода функций End of explanation """ def getInfo(link): flat_page = getFlatPage(link) price = getPrice(flat_page) coords = getCoords(flat_page) distance = getDistance(coords) rooms = getRoom(flat_page) metrdist = getMetroDistance(flat_page) metro_walking = getMetroWalking(flat_page) brick = getBrick(flat_page) new = getNew(flat_page) floor = getFloor(flat_page) nfloors = getNFloor(flat_page) bal = getBal(flat_page) kitsp = getKitsp(flat_page) livesp = getLivesp(flat_page) tel = getTel(flat_page) totsp = getTotsp(flat_page) walk = getMetroWalking(flat_page) info = [bal, brick, distance, floor, kitsp, livesp, metrdist, new, nfloors, price, rooms, tel, totsp, walk] return info """ Explanation: getInfo() возвращает полную информацию по квартире, вызывая внутри себя вышеперечисленные функции для определения значений признаков End of explanation """ def districtParser(links): apartments = [] for link in links: apartment = getInfo(link) apartment.append(link) apartments.append(apartment) return apartments districts def cianParser(districts, links): tmp = dict([(0 ,[np.nan]), (1,[np.nan]), (2,[np.nan]), (3,[np.nan]), (4,[np.nan]), (5,[np.nan]), (6,[np.nan]), (7,[np.nan]), (8,[np.nan]), (9,[np.nan]), (10,[np.nan]), (11,[np.nan]), ('Distr', [np.nan])]) data = pd.DataFrame(tmp) for i in districts.keys(): district_name = districts.get(i) tmp_links = links[links['Distr'] == district_name] tmp_links = tmp_links['link'] data_tmp = pd.DataFrame(districtParser(tmp_links)) data_tmp['Distr'] = district_name data = data.append(data_tmp) print('district', districts.get(i), 'is done!') return data """ Explanation: Мне видится удобной схема из двух функций: парсер по округу и большой парсер. Большой парсер внутри себя вызывает парсер по округу для всех интересующих нас округов (их 9, я не рассматриваю Зеленоград, Новую Москву и т.д.) End of explanation """ full_links = pd.read_csv('/Users/tatanakuzenko/lbNW.csv') full_links['Distr'] = 'NW' full_links.head() """ Explanation: Схема готова. Теперь поработаем со ссылками Выше была вырезана часть, где я сохраняла собранные ссылки по округам в соответствующие csv. Теперь я подгружу оттуда данные. 
End of explanation """ districts_cut = {4: 'C', 5:'N', 6:'NE', 7:'E', 8:'SE', 9:'S', 10:'SW', 11:'W'} for i in districts_cut.values(): links_append = pd.read_csv('/Users/tatanakuzenko/lb' + i + '.csv') links_append['Distr'] = i print(links_append.shape) full_links = full_links.append(links_append) full_links.shape """ Explanation: Проделаю это для всех остальных округов, соединю все вертикально, тк дальше буду удалять дубликаты. End of explanation """ full_links = full_links.dropna() full_links.shape """ Explanation: Nan был нулевым элементом для списка ссылок каждого округа. Удалим их (первая размерность должна стать на 9 меньше) End of explanation """ full_links = full_links.drop_duplicates() full_links.shape """ Explanation: Теперь удалим дубликаты End of explanation """ full_links.index = [x for x in range(len(full_links.index))] full_links.rename(columns={'0' : 'link'}, inplace = True) full_links['link'] = full_links['link'].astype(np.int32) full_links.head() """ Explanation: Вот данных и поубавилось. Но нам хватит. Приведем в порядок ссылки: установим верные индексы, переименуем колонку из 0 в link и переведем значения этой колонки в int. End of explanation """ data = cianParser(districts, full_links) data.head() data.shape """ Explanation: Теперь можно запускать парсер End of explanation """ data.to_csv('cian_full_data.csv', index = False) """ Explanation: Данные получены, сохраним их и приступим к очистке и визуализации. End of explanation """
adityaka/misc_scripts
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/04_03/Final/.ipynb_checkpoints/Indexing-checkpoint.ipynb
bsd-3-clause
import pandas as pd import numpy as np produce_dict = {'veggies': ['potatoes', 'onions', 'peppers', 'carrots'],'fruits': ['apples', 'bananas', 'pineapple', 'berries']} produce_df = pd.DataFrame(produce_dict) produce_df """ Explanation: Indexing and Selection | Operation | Syntax | Result | |-------------------------------|----------------|-----------| | Select column | df[col] | Series | | Select row by label | df.loc[label] | Series | | Select row by integer | df.iloc[loc] | Series | | Select rows | df[start:stop] | DataFrame | | Select rows with boolean mask | df[mask] | DataFrame | documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html End of explanation """ produce_df['fruits'] """ Explanation: selection using dictionary-like string End of explanation """ produce_df[ ['fruits', 'veggies'] ] """ Explanation: list of strings as index (note: double square brackets) End of explanation """ produce_df.iloc[2] """ Explanation: select row using integer index End of explanation """ produce_df.iloc[0:2] produce_df.iloc[:-2] """ Explanation: select rows using integer slice End of explanation """ produce_df + produce_df.iloc[0] """ Explanation: + is over-loaded as concatenation operator End of explanation """ df = pd.DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D']) df2 = pd.DataFrame(np.random.randn(7, 3), columns=['A', 'B', 'C']) sum_df = df + df2 sum_df """ Explanation: Data alignment and arithmetic Data alignment between DataFrame objects automatically align on both the columns and the index (row labels). Note locations for 'NaN' End of explanation """ sum_df>0 sum_df[sum_df>0] """ Explanation: Boolean indexing End of explanation """ mask = sum_df['B'] < 0 mask sum_df[mask] """ Explanation: first select rows in column B whose values are less than zero then, include information for all columns in that row in the resulting data set End of explanation """ produce_df.isin(['apples', 'onions']) """ Explanation: isin function End of explanation """ produce_df.where(produce_df > 'k') """ Explanation: where function End of explanation """
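The selection table at the top of this notebook also lists label-based row selection with df.loc, which the cells above do not exercise. A quick illustration, using a copy of produce_df re-indexed by veggie name so that the row labels are strings rather than the default integers:
produce_by_veggie = produce_df.set_index('veggies')
produce_by_veggie.loc['onions']            # one row, returned as a Series
produce_by_veggie.loc['onions', 'fruits']  # a single cell, by row label and column label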
AlexDaciuk/Algoritmos
Random_Forest.ipynb
gpl-3.0
import base64
token = base64.b64decode("Njk4ZGVjMWE5Y2YyNDQ5ZmNhY2FkOWU4NDdjMDk5NWU1NTZhMDk5Yw====").decode("utf-8")
! rm -rf tp-datos-2c2020 datos
! git clone https://{token}@github.com/AlexDaciuk/tp-datos-2c2020.git
! mv tp-datos-2c2020 datos
from datos.preproc import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from matplotlib import pyplot as plt
from sklearn.tree import plot_tree
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_curve, auc, roc_auc_score, confusion_matrix

df_all = preprocessing.get_data()
"""
Explanation: <a href="https://colab.research.google.com/github/AlexDaciuk/Algoritmos/blob/master/Random_Forest.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
"""
df_forest = preprocessing.rforest_preproc(df_all)

# Separate test and train data
X_train, X_test, y_train, y_test = train_test_split(df_forest.drop('volveria', 1), df_forest['volveria'])

model_rfr = RandomForestClassifier(max_depth=5)
model_rfr.fit(X_train, y_train)

preprocessing.report(model_rfr, X_train, y_train, X_test, y_test)

pred_rfr = model_rfr.predict(X_test)
"""
Explanation: Random Forest Classifier
End of explanation
"""
len(model_rfr.estimators_)
"""
Explanation: Number of trees trained:
End of explanation
"""
with plt.style.context("classic"):
    plt.figure(figsize=(20, 10))
    plot_tree(model_rfr.estimators_[0], filled=True)
"""
Explanation: Let's look at the first one of them:
End of explanation
"""
pred_first_estimator = model_rfr.estimators_[0].predict(X_test)

accuracy_score(y_test, pred_first_estimator)

precision_score(y_test, pred_first_estimator)

recall_score(y_test, pred_first_estimator)

f1_score(y_test, pred_first_estimator)

pred_estimators = [
    estimator.predict(X_test) for estimator in model_rfr.estimators_
]

acc_estimators = [accuracy_score(y_test, pred) for pred in pred_estimators]

plt.figure(figsize=(10, 10))
plt.hist(acc_estimators)
plt.xlabel("Accuracy", weight="bold", fontsize=15)
plt.ylabel("Frequency", weight="bold", fontsize=15)
plt.title(
    "Histogram of the accuracy of the trees in the RF model", weight="bold", fontsize=16
)

max(acc_estimators)

min(acc_estimators)
"""
Explanation: And their metrics:
End of explanation
"""
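confusion_matrix is imported at the top of this notebook but never used; as a complement to the per-tree accuracy histogram above, it can be applied to the whole forest's test-set predictions (pred_rfr). A small sketch, following scikit-learn's convention of rows indexed by true class and columns by predicted class:
cm = confusion_matrix(y_test, pred_rfr)   # rows: true class, columns: predicted class
print(cm)
print("Forest accuracy:", accuracy_score(y_test, pred_rfr))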
dkirkby/bossdata
examples/nb/StackingWithSpeclite.ipynb
mit
%pylab inline import speclite print(speclite.version.version) import bossdata print(bossdata.__version__) finder = bossdata.path.Finder() mirror = bossdata.remote.Manager() """ Explanation: Examples of Stacking BOSS Spectra using Speclite Examples of using the speclite package to perform basic operations on spectral data accessed with the bossdata package. To keep the examples small, we use data from a single BOSS plate (6641 observed on MJD 56383) and show how to work with both the individual spec-lite files and the combined spPlate file (see here for details on the different data products). Package Initialization End of explanation """ spAll = bossdata.meta.Database(lite=True) sky_table = spAll.select_all(where='PLATE=6641 and OBJTYPE="SKY"') print('Found {0} sky fibers for plate 6641.'.format(len(sky_table))) """ Explanation: Stacked Sky Get a list of sky spectra on plate 6641: End of explanation """ def plot_stack(data, truncate_percentile): valid = data['ivar'] > 0 wlen = data['wavelength'][valid] flux = data['flux'][valid] dflux = data['ivar'][valid]**(-0.5) plt.figure(figsize=(12,5)) plt.fill_between(wlen, flux, lw=0, color='red') plt.errorbar(wlen, flux, dflux, color='black', alpha=0.5, ls='None', capthick=0) plt.xlim(np.min(wlen), np.max(wlen)) plt.ylim(0, np.percentile(flux + dflux, truncate_percentile)) plt.xlabel('Wavelength ($\AA$)') plt.ylabel('Flux $10^{-17}$ erg/(s cm$^2 \AA$)') plt.tight_layout(); """ Explanation: Plot a stacked spectrum: End of explanation """ spec_sky = None for row in sky_table: filename = finder.get_spec_path(plate=row['PLATE'], mjd=row['MJD'], fiber=row['FIBER'], lite=True) spectrum = bossdata.spec.SpecFile(mirror.get(filename)) data = spectrum.get_valid_data(include_sky=True, use_ivar=True, fiducial_grid=True) spec_sky = speclite.accumulate(spec_sky, data, data_out=spec_sky, join='wavelength', add=('flux', 'sky'), weight='ivar') spec_sky['flux'] += spec_sky['sky'] plot_stack(spec_sky, truncate_percentile=97.5) """ Explanation: Stack individual Spec-lite files Loop over all sky spectra on the plate. The necessary spec-lite files will be automatically downloaded, if necessary, which will take several minutes. End of explanation """ plate_sky = None filename = finder.get_plate_spec_path(plate=6641, mjd=56383) plate = bossdata.plate.PlateFile(mirror.get(filename)) plate_data = plate.get_valid_data(sky_table['FIBER'], include_sky=True, use_ivar=True, fiducial_grid=True) for data in plate_data: plate_sky = speclite.accumulate(plate_sky, data, data_out=plate_sky, join='wavelength', add=('flux', 'sky'), weight='ivar') plate_sky['flux'] += plate_sky['sky'] plot_stack(plate_sky, truncate_percentile=97.5) """ Explanation: Stack Spectra from one Plate file Accumulate the sky spectra from a Plate file, which will be automatically downloaded if necessary. End of explanation """ DR12Q = bossdata.meta.Database(finder, mirror, quasar_catalog=True) qso_table = DR12Q.select_all(where='PLATE=6641 and ZWARNING=0', what='PLATE,MJD,FIBER,Z_VI') print('Found {0} QSO targets for plate 6641.'.format(len(qso_table))) """ Explanation: Stacked Quasars Get a list of sky spectra on plate 6641, observed on MJD 56383: End of explanation """ plt.hist(qso_table['Z_VI'], bins=25); plt.xlabel('Redshift z') plt.ylabel('Quasars') plt.tight_layout(); """ Explanation: Plot the redshift distribution of the selected quasars: End of explanation """ fiducial_grid = np.arange(1000.,3000.) 
rest_frame, resampled, spec_qso = None, None, None for row in qso_table: filename = finder.get_spec_path(plate=row['PLATE'], mjd=row['MJD'], fiber=row['FIBER'], lite=True) spectrum = bossdata.spec.SpecFile(mirror.get(filename)) data = spectrum.get_valid_data(use_ivar=True, fiducial_grid=True) rest_frame = speclite.redshift(z_in=row['Z_VI'], z_out=0, data_in=data, data_out=rest_frame, rules=[ dict(name='wavelength', exponent=+1), dict(name='flux', exponent=-1), dict(name='ivar', exponent=+2)]) resampled = speclite.resample(rest_frame, x_in='wavelength', x_out=fiducial_grid, y=('flux', 'ivar'), data_out=resampled) spec_qso = speclite.accumulate(spec_qso, resampled, data_out=spec_qso, join='wavelength', add='flux', weight='ivar') plot_stack(spec_qso, truncate_percentile=99.5) """ Explanation: Stack spectra from individual Spec-lite files Loop over all quasar spectra on the plate. The necessary spec-lite files will be automatically downloaded, if necessary, which will take several minutes. End of explanation """ filename = finder.get_plate_spec_path(plate=6641, mjd=56383) plate = bossdata.plate.PlateFile(mirror.get(filename)) plate_data = plate.get_valid_data(qso_table['FIBER'], use_ivar=True, fiducial_grid=True) zorder = np.argsort(qso_table['Z_VI']) """ Explanation: Stack spectra from one Plate file End of explanation """ z_in = qso_table['Z_VI'][:,np.newaxis] plate_data = speclite.redshift(z_in=z_in, z_out=0, data_in=plate_data, data_out=plate_data, rules=[ dict(name='wavelength', exponent=+1), dict(name='flux', exponent=-1), dict(name='ivar', exponent=+2) ]) """ Explanation: Transform each spectrum to its quasar rest frame. We perform this operation in place (re-using the memory of the input array) and in parallel on all spectra. End of explanation """ resampled, plate_qso = None, None for data in plate_data: resampled = speclite.resample(data, x_in='wavelength', x_out=fiducial_grid, y=('flux', 'ivar'), data_out=resampled) plate_qso = speclite.accumulate(spec_qso, resampled, data_out=plate_qso, join='wavelength', add='flux', weight='ivar') plot_stack(plate_qso, truncate_percentile=99.5) """ Explanation: Resample each spectrum to a uniform rest wavelength grid and stack them together to calculate the mean rest-frame quasar spectrum. The resample() and accumulate() operations re-use the same memory for each input spectrum, so this loop has fixed (small) memory requirements, independent of the number of spectra being stacked. End of explanation """
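# A minimal numpy sketch (illustrative only, not taken from the speclite or bossdata docs)
# of the inverse-variance weighted mean that the speclite.accumulate calls above perform
# once the spectra share a common wavelength grid.  `fluxes` and `ivars` are stand-ins for
# per-spectrum arrays of shape (num_spectra, num_pixels); masked pixels carry ivar = 0 and
# therefore drop out of the stack.
import numpy as np

def ivar_weighted_coadd(fluxes, ivars):
    """Return (coadded flux, coadded ivar) for spectra already on one common grid."""
    ivar_sum = ivars.sum(axis=0)
    weighted_flux = (fluxes * ivars).sum(axis=0)
    # pixels with no valid data (ivar_sum == 0) come back as 0
    coadd = np.divide(weighted_flux, ivar_sum, out=np.zeros_like(weighted_flux), where=ivar_sum > 0)
    return coadd, ivar_sum
"""
Explanation: An aside added for illustration (not part of the original example): when accumulate is given weight='ivar', each stacked pixel is effectively sum(ivar * flux) / sum(ivar), and the summed ivar is the inverse variance of the stacked pixel. The function and variable names in this sketch are illustrative only.
End of explanation
"""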
GuillaumeDec/machine-learning
tutorials/deep-lstm-time-series.ipynb
gpl-3.0
from __future__ import print_function
import os
import mxnet as mx
from mxnet import nd, autograd
import numpy as np
import warnings
from collections import defaultdict

# we use cpus here
ctx = mx.cpu(0)

warnings.filterwarnings('ignore', category=DeprecationWarning, module='.*/IPython/.*')
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from datetime import datetime
sns.set_style('whitegrid')
sns.set_context('poster')
# Make inline plots vector graphics instead of raster graphics
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'png')

# +1 because of the time shift between inputs and labels
SEQ_LENGTH = 100 + 1
NUM_SAMPLES_TRAINING = 5000 + 1
# a few samples for testing our prediction outputs vs true labels
NUM_SAMPLES_TESTING = 100 + 1
# set to False if you already have the data files locally
CREATE_DATA_SETS = True
"""
Explanation: Deep Long Short Term Memory RNNs
For 1D time-series prediction.
End of explanation
"""
# return one random scalar from 0 to 1 using mxnet's nd
def gimme_one_random_number():
    return nd.random_uniform(low=0, high=1, shape=(1,1)).asnumpy()[0][0]

# create one sine time series with fixed random frequency and amplitude, from Lakshmanan V. Medium's post
def create_one_time_series(seq_length=10):
    freq = (gimme_one_random_number()*0.5) + 0.1  # 0.1 to 0.6
    ampl = gimme_one_random_number() + 0.5        # 0.5 to 1.5
    x = np.sin(np.arange(0, seq_length) * freq) * ampl
    return x

def create_batch_time_series(seq_length=10, num_samples=4):
    """Create a dataframe with a batch of random sine time series.
    inputs:
        seq_length: number of time steps in each time series
        num_samples: number of time series
    outputs:
        df: pandas dataframe of shape (num_samples, seq_length) with each row being one time series,
            each column is a timestep
    """
    column_labels = ['t'+str(i) for i in range(0, seq_length)]
    df = pd.DataFrame(create_one_time_series(seq_length=seq_length)).transpose()
    df.columns = column_labels
    df.index = ['s'+str(0)]
    for i in range(1, num_samples):
        more_df = pd.DataFrame(create_one_time_series(seq_length=seq_length)).transpose()
        more_df.columns = column_labels
        more_df.index = ['s'+str(i)]
        df = pd.concat([df, more_df], axis=0)
    return df

########################
# Create some time-series for training and testing
########################
# build predictable random time-series
mx.random.seed(123)

# store the data in this directory, create it if it does not exist
if not os.path.exists('../data/timeseries/'):
    os.makedirs('../data/timeseries/')

if CREATE_DATA_SETS:
    data_train = create_batch_time_series(seq_length=SEQ_LENGTH, num_samples=NUM_SAMPLES_TRAINING)
    data_test = create_batch_time_series(seq_length=SEQ_LENGTH, num_samples=NUM_SAMPLES_TESTING)
    # Write data to csvs
    data_train.to_csv("../data/timeseries/train.csv")
    data_test.to_csv("../data/timeseries/test.csv")
else:
    data_train = pd.read_csv("../data/timeseries/train.csv", index_col=0)
    data_test = pd.read_csv("../data/timeseries/test.csv", index_col=0)
"""
Explanation: Dataset: "Some time-series"
End of explanation
"""
# pick 3 time series at random and plot them
(data_train.sample(3).transpose().iloc[range(0, SEQ_LENGTH)]).plot()
"""
Explanation: Check the data real quick
End of explanation
"""
# number of samples to use for each batch when doing minibatch training on the Deep net
batch_size = 64
# number of samples to use for testing.
1 is simple, to quickly evaluate prediction power on one time series batch_size_test = 1 # sequence length for training & testing. This is equal to the number of RNN input (and output) cells when unrolling the RNN net. Can be adjusted to your liking or the problem you are trying to solve. Longer sequences may require deeper nets. seq_length = 16 # number of minibatches available for training and testing num_batches_train = data_train.shape[0] // batch_size num_batches_test = data_test.shape[0] // batch_size_test # we do 1D time series for now, this is equivalent to vocab_size = 1 for text num_features = 1 # inputs are from t0 to t_seq_length - 1. because the last point is kept for the output ("labels") of the penultimate point data_train_inputs = data_train.loc[:,data_train.columns[:-1]] data_train_labels = data_train.loc[:,data_train.columns[1:]] data_test_inputs = data_test.loc[:,data_test.columns[:-1]] data_test_labels = data_test.loc[:,data_test.columns[1:]] # reshape the data to prepare for training & testing ingestion train_data_inputs = nd.array(data_train_inputs.values).reshape((num_batches_train, batch_size, seq_length, num_features)) train_data_labels = nd.array(data_train_labels.values).reshape((num_batches_train, batch_size, seq_length, num_features)) test_data_inputs = nd.array(data_test_inputs.values).reshape((num_batches_test, batch_size_test, seq_length, num_features)) test_data_labels = nd.array(data_test_labels.values).reshape((num_batches_test, batch_size_test, seq_length, num_features)) train_data_inputs = nd.swapaxes(train_data_inputs, 1, 2) train_data_labels = nd.swapaxes(train_data_labels, 1, 2) test_data_inputs = nd.swapaxes(test_data_inputs, 1, 2) test_data_labels = nd.swapaxes(test_data_labels, 1, 2) print('num_samples_training={0} | num_batches_train={1} | batch_size={2} | seq_length={3}'.format(NUM_SAMPLES_TRAINING, num_batches_train, batch_size, seq_length)) """ Explanation: Preparing the data for training End of explanation """ # for a 1D time series, this is just a scalar equal to 1 num_inputs = num_features # same comment num_outputs = num_features # num of hidden units in each hidden LSTM layer. This effectively sets the number of layers in the Deep Net. 
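# (clarifying comment, not in the original: each entry below is the number of hidden units
#  in one LSTM layer, so the *length* of this list is what sets the number of stacked layers)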
num_hidden_units = [8, 8, 8] # num of hidden LSTM layers num_hidden_layers = len(num_hidden_units) # num of units in each layer but the output layer num_units_layers = [num_features] + num_hidden_units ######################## # Weights connecting the inputs to the hidden layers ######################## # weights and biases are now dictionaries because there is one set of them per layer Wxg, Wxi, Wxf, Wxo, Whg, Whi, Whf, Who, bg, bi, bf, bo = {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {} for i_layer in range(1, num_hidden_layers+1): num_inputs = num_units_layers[i_layer-1] num_hidden_units = num_units_layers[i_layer] Wxg[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01 Wxi[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01 Wxf[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01 Wxo[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01 ######################## # Recurrent weights connecting the hidden layer across time steps ######################## Whg[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01 Whi[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01 Whf[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01 Who[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01 ######################## # Bias vector for hidden layer ######################## bg[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01 bi[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01 bf[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01 bo[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01 ######################## # Weights to the output nodes ######################## Why = nd.random_normal(shape=(num_units_layers[-1], num_outputs), ctx=ctx) * .01 by = nd.random_normal(shape=num_outputs, ctx=ctx) * .01 """ Explanation: Long short-term memory (LSTM) RNNs An LSTM block has mechanisms to enable "memorizing" information for an extended number of time steps. We use the LSTM block with the following transformations that map inputs to outputs across blocks at consecutive layers and consecutive time steps: $\newcommand{\xb}{\mathbf{x}} \newcommand{\RR}{\mathbb{R}}$ $$g_t = \text{tanh}(X_t W_{xg} + h_{t-1} W_{hg} + b_g),$$ $$i_t = \sigma(X_t W_{xi} + h_{t-1} W_{hi} + b_i),$$ $$f_t = \sigma(X_t W_{xf} + h_{t-1} W_{hf} + b_f),$$ $$o_t = \sigma(X_t W_{xo} + h_{t-1} W_{ho} + b_o),$$ $$c_t = f_t \odot c_{t-1} + i_t \odot g_t,$$ $$h_t = o_t \odot \text{tanh}(c_t),$$ where $\odot$ is an element-wise multiplication operator, and for all $\xb = [x_1, x_2, \ldots, x_k]^\top \in \RR^k$ the two activation functions: $$\sigma(\xb) = \left[\frac{1}{1+\exp(-x_1)}, \ldots, \frac{1}{1+\exp(-x_k)}]\right]^\top,$$ $$\text{tanh}(\xb) = \left[\frac{1-\exp(-2x_1)}{1+\exp(-2x_1)}, \ldots, \frac{1-\exp(-2x_k)}{1+\exp(-2x_k)}\right]^\top.$$ In the transformations above, the memory cell $c_t$ stores the "long-term" memory in the vector form. In other words, the information accumulatively captured and encoded until time step $t$ is stored in $c_t$ and is only passed along the same layer over different time steps. Given the inputs $c_t$ and $h_t$, the input gate $i_t$ and forget gate $f_t$ will help the memory cell to decide how to overwrite or keep the memory information. 
The output gate $o_t$ further lets the LSTM block decide how to retrieve the memory information to generate the current state $h_t$ that is passed to both the next layer of the current time step and the next time step of the current layer. Such decisions are made using the hidden-layer parameters $W$ and $b$ with different subscripts: these parameters are learned during the training phase.
Deep LSTM
Deep LSTMs are multi-layered, which means they contain at least two LSTM layers stacked together. One deep LSTM layer, indexed by $i$ (namely, the $i$-th hidden LSTM layer, starting with $i=1$), receives the hidden state at time $t$, noted $h^{[i-1]}_t$, of the LSTM cell in the previous layer, and its own hidden state at time $t-1$, noted $h^{[i]}_{t-1}$ (that is, its own hidden state at the previous timestep, just like in any regular one-layered LSTM RNN). As in a regular one-layered LSTM, the entire sequence is fed through the LSTM units, and this is done for every sequence in the minibatch before backpropagation updates the parameters. Everything else is left unchanged. The output layer receives its inputs from the last hidden layer.
Allocate parameters
End of explanation
"""
params = []
for i_layer in range(1, num_hidden_layers+1):
    params += [Wxg[i_layer], Wxi[i_layer], Wxf[i_layer], Wxo[i_layer],
               Whg[i_layer], Whi[i_layer], Whf[i_layer], Who[i_layer],
               bg[i_layer], bi[i_layer], bf[i_layer], bo[i_layer]]
# add the output layer
params += [Why, by]

for param in params:
    param.attach_grad()
"""
Explanation: Attach the gradients
End of explanation
"""
# we use standard rmse loss here, good enough for 1D time series
def rmse(yhat, y):
    return nd.mean(nd.sqrt(nd.sum(nd.power(y - yhat, 2), axis=0, exclude=True)))

def average_rmse_loss(outputs, labels):
    assert(len(outputs) == len(labels))
    total_loss = 0.
    for (output, label) in zip(outputs, labels):
        total_loss = total_loss + rmse(output, label)
    return total_loss / len(outputs)
"""
Explanation: Averaging the loss over the sequence
End of explanation
"""
# Standard Gradient Descent
def SGD(params, learning_rate):
    for param in params:
        param[:] = param - learning_rate * param.grad

# Adam Optimizer. Inspired from Agustinus Kristiadi's blog post on Optimizers
def adam(params, learning_rate, M, R, index_adam_call, beta1, beta2, eps):
    k = -1
    for param in params:
        k += 1
        M[k] = beta1 * M[k] + (1. - beta1) * param.grad
        R[k] = beta2 * R[k] + (1. - beta2) * (param.grad)**2
        # bias correction since we initialized M & R to zeros, they're biased toward zero on the first few iterations
        m_k_hat = M[k] / (1. - beta1**(index_adam_call))
        r_k_hat = R[k] / (1. - beta2**(index_adam_call))
        if (np.isnan(M[k].asnumpy())).any() or (np.isnan(R[k].asnumpy())).any():
            raise ValueError('Error nans propagating in Adam Optimizer')
        param[:] = param - learning_rate * m_k_hat / (nd.sqrt(r_k_hat) + eps)
    return params, M, R
"""
Explanation: Optimizer
End of explanation
"""
def single_lstm_unit_calcs(X, c, Wxg, h, Whg, bg, Wxi, Whi, bi, Wxf, Whf, bf, Wxo, Who, bo):
    """
    Function that does the LSTM computations for a single cell.
    input: all parameters (W, biases), input X, memory cell c.
    output:
        c: updated value for the memory cell.
'updated' means going from timestep t-1 to timestep t h: updated value for the hidden state """ g = nd.tanh(nd.dot(X, Wxg) + nd.dot(h, Whg) + bg) i = nd.sigmoid(nd.dot(X, Wxi) + nd.dot(h, Whi) + bi) f = nd.sigmoid(nd.dot(X, Wxf) + nd.dot(h, Whf) + bf) o = nd.sigmoid(nd.dot(X, Wxo) + nd.dot(h, Who) + bo) ####################### c = f * c + i * g h = o * nd.tanh(c) return c, h def deep_lstm_rnn(inputs, h, c, temperature=1.0): """ Do one forward pass of the entire deep net (accross all layers) for one minibatch 'inputs' h: dict of nd.arrays, each key is the index of a hidden layer (from 1 to num_hidden_layers). Index 0, if any, is the input layer """ outputs = [] # inputs is one MINIBATCH of sequences so its shape is number_of_seq, seq_length, features_dim # Note that features_dim is 1 for a time series, vocab_size for a character, n for a n-D times series) for X in inputs: # this loop is therefore a loop on the timesteps # X is one minibatch of one timestep value, NOT the entire sequence. E.g. if each batch has 37 sequences, then the first value of X will be a set of the 37 first values of each of the 37 sequences # that means each iteration on X corresponds to one timestep value, but it is done in batches of different sequences h[0] = X # the first hidden layer takes the input X as input for i_layer in range(1, num_hidden_layers+1): # lstm units now have the 2 following inputs: # i) h_t from the previous layer (equivalent to the input X for a non-deep lstm net), # ii) h_t-1 from the current layer (same as for non-deep lstm nets) c[i_layer], h[i_layer] = single_lstm_unit_calcs(h[i_layer-1], c[i_layer], Wxg[i_layer], h[i_layer], Whg[i_layer], bg[i_layer], Wxi[i_layer], Whi[i_layer], bi[i_layer], Wxf[i_layer], Whf[i_layer], bf[i_layer], Wxo[i_layer], Who[i_layer], bo[i_layer]) yhat_linear = nd.dot(h[num_hidden_layers], Why) + by # yhat is a batch of several values of the same timestep # this is basically the prediction of the next timestep value # we cannot use a 1.0-bounded activation function like tanh for example, since amplitudes can be greater than 1.0 yhat = yhat_linear # outputs has same shape as inputs, i.e. a list of minibatches of data points. outputs is thus a minibatch of output sequences outputs.append(yhat) return (outputs, h, c) """ Explanation: Define the model End of explanation """ def test_prediction(one_input_seq, one_label_seq, temperature=1.0): ##################################### # Set the initial state of the hidden representation ($h_0$) to the zero vector ##################################### # some better initialization needed?? 
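    # (note added here: zero vectors for h and c are the standard choice when no earlier
    #  context is available, so this initialization is fine for scoring a fresh test sequence)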
h, c = {}, {} for i_layer in range(1, num_hidden_layers+1): h[i_layer] = nd.zeros(shape=(batch_size_test, num_units_layers[i_layer]), ctx=ctx) c[i_layer] = nd.zeros(shape=(batch_size_test, num_units_layers[i_layer]), ctx=ctx) outputs, h, c = deep_lstm_rnn(one_input_seq, h, c, temperature=temperature) loss = rmse(outputs[-1][0], one_label_seq) return outputs[-1][0].asnumpy()[-1], one_label_seq.asnumpy()[-1], loss.asnumpy()[-1], outputs, one_label_seq def check_prediction(index): """ Computes one prediction and its associated relative error on the index sequence of the test dataset input: index: index of the sequence in the test_data_inputs output: df: pandas dataframe that contains predicted values vs true values (called 'labels') """ o, label, loss, outputs, labels = test_prediction(test_data_inputs[index], test_data_labels[index], temperature=1.0) prediction = round(o, 3) true_label = round(label, 3) outputs = [float(i.asnumpy().flatten()) for i in outputs] true_labels = list(test_data_labels[index].asnumpy().flatten()) df = pd.DataFrame([outputs, true_labels]).transpose() df.columns = ['predicted', 'true'] return df # one epoch is one pass over the entire training dataset. epochs = 32 moving_loss = 0. # 0.001 works for a [8, 8, 8] after about 70 epochs of 32-sized batches. # Can be somewhat larger for deeper nets or when using SGD, to speed up training. learning_rate = 0.001 # # Adam Optimizer parameters beta1 = .9 beta2 = .999 index_adam_call = 0 # M & R arrays to keep track of momenta in adam optimizer. params is a list that contains all ndarrays of parameters M = {k: nd.zeros_like(v) for k, v in enumerate(params)} R = {k: nd.zeros_like(v) for k, v in enumerate(params)} # initialize dataframes that keep track of Loss and Errors of test predictions, for visualization purposes df_moving_loss = pd.DataFrame(columns=['Loss', 'Error']) df_moving_loss.index.name = 'Epoch' # needed to update plots on the fly %matplotlib notebook fig, axes_fig1 = plt.subplots(1,1, figsize=(6,3)) # for plotting predicted vs true time series fig2, axes_fig2 = plt.subplots(1,1, figsize=(6,3)) # for plotting training Loss and Relative error on predictions for a randomly chosen test sequence for e in range(epochs): ############################ # Attenuate the learning rate by a factor of 2 every 100 epochs. Only use for SGD, not needed (valid?) for Adam. 
############################ if ((e+1) % 100 == 0): learning_rate = learning_rate / 2.0 # initialize hidden and memory states for the net h, c = {}, {} for i_layer in range(1, num_hidden_layers+1): h[i_layer] = nd.zeros(shape=(batch_size, num_units_layers[i_layer]), ctx=ctx) c[i_layer] = nd.zeros(shape=(batch_size, num_units_layers[i_layer]), ctx=ctx) for i in range(num_batches_train): data_one_hot = train_data_inputs[i] label_one_hot = train_data_labels[i] with autograd.record(): outputs, h, c = deep_lstm_rnn(data_one_hot, h, c) loss = average_rmse_loss(outputs, label_one_hot) loss.backward() # SGD(params, learning_rate) index_adam_call += 1 # needed for bias correction in Adam optimizer params, M, R = adam(params, learning_rate, M, R, index_adam_call, beta1, beta2, 1e-8) ########################## # Keep a moving average of the losses ########################## if (i == 0) and (e == 0): moving_loss = nd.mean(loss).asscalar() else: moving_loss = .99 * moving_loss + .01 * nd.mean(loss).asscalar() df_moving_loss.loc[e] = round(moving_loss, 4) ############################ # Predictions and plots ############################ # use the epoch index to randomly select one test data sequence. data_prediction_df = check_prediction(index=e) axes_fig1.clear() data_prediction_df.plot(ax=axes_fig1) fig.canvas.draw() prediction = round(data_prediction_df.tail(1)['predicted'].values.flatten()[-1], 3) true_label = round(data_prediction_df.tail(1)['true'].values.flatten()[-1], 3) rel_error = round(100. * np.abs(prediction / true_label - 1.0), 2) axes_fig2.clear() if e == 0: moving_rel_error = rel_error else: moving_rel_error = .9 * moving_rel_error + .1 * rel_error df_moving_loss.loc[e, ['Error']] = moving_rel_error axes_loss_plot = df_moving_loss.plot(ax=axes_fig2, secondary_y='Loss', color=['r','b']) axes_loss_plot.right_ax.grid(False) axes_loss_plot.right_ax.set_yscale('log') fig2.canvas.draw() print("Epoch = {0} | Loss = {1} | Out of Sample Prediction = {2} True = {3} Error = {4}".format(e, moving_loss, prediction, true_label, moving_rel_error )) %matplotlib inline """ Explanation: Test and visualize predictions End of explanation """
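# A possible extension, added as a sketch (not in the original notebook): once trained, the
# network can be rolled forward recursively, feeding each one-step prediction back in as the
# next input.  Everything below reuses objects defined above (deep_lstm_rnn, num_hidden_layers,
# num_units_layers, ctx, test_data_inputs); `recursive_forecast` and `n_steps` are new,
# illustrative names.
def recursive_forecast(seed_sequence, n_steps=20):
    """Warm up the LSTM on seed_sequence, then predict n_steps values autoregressively."""
    h, c = {}, {}
    for i_layer in range(1, num_hidden_layers + 1):
        h[i_layer] = nd.zeros(shape=(1, num_units_layers[i_layer]), ctx=ctx)
        c[i_layer] = nd.zeros(shape=(1, num_units_layers[i_layer]), ctx=ctx)
    # warm-up pass over the seed sequence to build up the hidden and memory states
    outputs, h, c = deep_lstm_rnn(seed_sequence, h, c)
    last = outputs[-1]
    forecast = []
    for _ in range(n_steps):
        # feed the latest prediction back in as the next (single-timestep) input
        outputs, h, c = deep_lstm_rnn([last], h, c)
        last = outputs[-1]
        forecast.append(last.asnumpy().flatten()[0])
    return forecast

# e.g. seed with the first test sequence (shape: seq_length x 1 x num_features)
print(recursive_forecast(test_data_inputs[0], n_steps=5))
"""
Explanation: A possible next step, added here as a sketch rather than part of the original notebook: recursive multi-step forecasting. The idea is to run the trained deep LSTM over a seed sequence to build up its hidden and memory states, then repeatedly feed the latest one-step prediction back in as the next input. Accuracy degrades as errors compound, which is the usual caveat for autoregressive rollouts.
End of explanation
"""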
karlstroetmann/Artificial-Intelligence
Python/2 Constraint Solver/N-Queens-Problem-CSP.ipynb
gpl-2.0
def create_csp(n): S = range(1, n+1) Variables = { f'V{i}' for i in S } Values = set(S) DifferentCols = { f'V{i} != V{j}' for i in S for j in S if i < j } DifferentDiags = { f'abs(V{j} - V{i}) != {j - i}' for i in S for j in S if i < j } return Variables, Values, DifferentCols | DifferentDiags """ Explanation: The N-Queens-Problem as a CSP The function create_csp(n) takes a natural number n as argument and returns a constraint satisfaction problem that encodes the n-queens puzzle. A constraint satisfaction problem $\mathcal{P}$ is a triple of the form $$ \mathcal{P} = \langle \mathtt{Vars}, \mathtt{Values}, \mathtt{Constraints} \rangle $$ where - Vars is a set of strings which serve as variables. The idea is that $V_i$ specifies the column of the queen that is placed in row $i$. Values is a set of values that can be assigned to the variables in $\mathtt{Vars}$. In the 8-queens-problem we will have $\texttt{Values} = {1,\cdots,8}$. - Constraints is a set of formulas from first order logic. Each of these formulas is called a constraint of $\mathcal{P}$. There are two different types of constraints. * We have constraints that express that no two queens that are positioned in different rows share the same column. To capture these constraints, we define $$\texttt{DifferentCol} := \bigl{ \texttt{V}_i \not= \texttt{V}_j \bigm| i \in {1,\cdots,8} \wedge j \in {1,\cdots,8} \wedge j < i \bigr}.$$ Here the condition $j < i$ ensures that, for example, while we have the constraint $\texttt{V}_2 \not= \texttt{V}_1$ we do not also have the constraint $\texttt{V}_1 \not= \texttt{V}_2$, as the latter constraint would be redundant if the former constraint had already been established. * We have constraints that express that no two queens positioned in different rows share the same diagonal. The queens in row $i$ and row $j$ share the same diagonal iff the equation $$ |i - j| = |V_i - V_j| $$ holds. The expression $|i-j|$ is the absolute value of the difference of the rows of the queens in row $i$ and row $j$, while the expression $|V_i - V_j|$ is the absolute value of the difference of the columns of these queens. To capture these constraints, we define $$ \texttt{DifferentDiag} := \bigl{ |i - j| \not= |\texttt{V}_i - \texttt{V}_j| \bigm| i \in {1,\cdots,8} \wedge j \in {1,\cdots,8} \wedge j < i \bigr}. $$ End of explanation """ def main(): Vars, Values, Constraints = create_csp(4) print('Variables: ', Vars) print('Values: ', Values) print('Constraints:') for c in Constraints: print(' ', c) main() """ Explanation: The function main() creates a CSP representing the 4-queens puzzle and prints the CSP. It is included for testing purposes. End of explanation """ import chess """ Explanation: Displaying the Solution End of explanation """ def show_solution(Solution): board = chess.Board(None) # create empty chess board queen = chess.Piece(chess.QUEEN, True) for row in range(1, 8+1): col = Solution['V'+str(row)] field_number = (row - 1) * 8 + col - 1 board.set_piece_at(field_number, queen) display(board) """ Explanation: The function show_solution(Solution) takes a dictionary that contains a variable assignment that represents a solution to the 8-queens puzzle. It displays this Solution on a chess board. End of explanation """
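# A minimal backtracking solver sketch, added for illustration only (it is not necessarily
# the constraint solver developed elsewhere in this course material).  It consumes the CSP
# produced by create_csp and evaluates the constraint strings with eval(), skipping any
# constraint that still mentions an unassigned variable.
def solve(csp):
    Variables, Values, Constraints = csp
    Variables = sorted(Variables)          # fix an assignment order: V1, V2, ...
    def consistent(assignment):
        for constraint in Constraints:
            try:
                if not eval(constraint, {}, dict(assignment, abs=abs)):
                    return False
            except NameError:              # constraint refers to an unassigned variable
                continue
        return True
    def backtrack(assignment):
        if len(assignment) == len(Variables):
            return assignment
        var = Variables[len(assignment)]
        for value in Values:
            assignment[var] = value
            if consistent(assignment):
                result = backtrack(assignment)
                if result is not None:
                    return result
            del assignment[var]
        return None
    return backtrack({})

Solution = solve(create_csp(8))
print(Solution)
show_solution(Solution)
"""
Explanation: For illustration only (this simple solver is an addition here, not necessarily the solver developed in the accompanying lecture material): a plain chronological backtracking search over the variables V1, ..., Vn. Because the constraints are Python expressions stored as strings, eval with the current partial assignment suffices to check consistency; a NameError signals that a constraint still refers to an unassigned variable and can safely be skipped for now. The call at the end solves the 8-queens instance and displays the result with show_solution.
End of explanation
"""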
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/noresm2-mm/land.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'land') """ Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: NCC Source ID: NORESM2-MM Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:24 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmopshere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. 
Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the soil properties vary continuously with depth? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependancies on snow free albedo calculations End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General describe how drainage is included in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.5. 
Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.11. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.7. 
Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile are varying with time End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.15. Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treatment of the anthropogenic carbon pool End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintenance respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the nitrogen cycle tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.6. 
Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. 
River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation """
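As a closing illustration of how the cells above are meant to be completed: the property ids below are real ids from this notebook, but every value is a hypothetical placeholder rather than a statement about any particular model. An ENUM property of cardinality 1.1 takes exactly one of its listed valid choices, the cardinality 1.N cells suggest calling DOC.set_value once per applicable choice (assumed here), a BOOLEAN property takes an unquoted True or False, and INTEGER/STRING properties take a plain number or free text.

# Hypothetical example only -- replace these values with those describing your own model
DOC.set_id('cmip6.land.lakes.method.albedo')
DOC.set_value("prognostic")        # ENUM, cardinality 1.1: one of the listed valid choices
DOC.set_id('cmip6.land.lakes.method.dynamics')
DOC.set_value("vertical")          # ENUM, cardinality 1.N: one call per applicable choice (assumed)
DOC.set_value("horizontal")
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
DOC.set_value(True)                # BOOLEAN: unquoted True/False
DOC.set_id('cmip6.land.lakes.time_step')
DOC.set_value(3600)                # INTEGER: e.g. a one-hour time step (illustrative)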
AlphaGit/deep-learning
language-translation/dlnd_language_translation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) """ Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation """ def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ source_sentences = source_text.split('\n') source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_sentences] target_sentences = target_text.split('\n') target_id_text = [[target_vocab_to_int[word] for word in sentence.split() + ['<EOS>']] for sentence in target_sentences] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) """ Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation """ def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function input = tf.placeholder(dtype=tf.int32, shape=[None, None], name="input") targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name="targets") learning_rate = tf.placeholder(dtype=tf.float32, shape=None, name="learning_rate") keep_prob = tf.placeholder(dtype=tf.float32, shape=None, name="keep_prob") return input, targets, learning_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) """ Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoding_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. 
Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability) End of explanation """ def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # get all the target_data, except the last portion # strided_slice will get all "columns" except the last one # we cannot use slice because the last dimension is defined as None everything_but_last = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) gos = tf.fill([batch_size, 1], target_vocab_to_int['<GO>']) preprocessed_input = tf.concat([gos, everything_but_last], 1) return preprocessed_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) """ Explanation: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch. End of explanation """ def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ rnn_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) rnn_cell = tf.contrib.rnn.DropoutWrapper(rnn_cell, output_keep_prob=keep_prob) cells = tf.contrib.rnn.MultiRNNCell([rnn_cell] * num_layers) _, encoding_layer = tf.nn.dynamic_rnn(cells, rnn_inputs, dtype=tf.float32) return encoding_layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) """ Explanation: Encoding Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn(). End of explanation """ def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ train_decoder = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) rnn_decoder = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, train_decoder, dec_embed_input, sequence_length, scope=decoding_scope) rnn_output = rnn_decoder[0] train_logits = output_fn(rnn_output) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) """ Explanation: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. 
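Here output_fn is simply whatever callable maps the decoder cell outputs to vocabulary-sized logits; it is supplied later by decoding_layer, and a minimal sketch of such a function (illustrative only -- the version used below in decoding_layer also pins it to a variable scope) is output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None).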
End of explanation """ def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ inferring_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length - 1, vocab_size) dropped_out_decoder = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) rnn_decoder = tf.contrib.seq2seq.dynamic_rnn_decoder(dropped_out_decoder, inferring_decoder_fn, scope=decoding_scope) inference_logits = rnn_decoder[0] return inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) """ Explanation: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). End of explanation """ def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ lstm_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) decoder_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell] * num_layers) with tf.variable_scope("dec_scope") as dec_scope: output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=dec_scope) train_logits = decoding_layer_train(encoder_state, decoder_cell, dec_embed_input, sequence_length, dec_scope, output_fn, keep_prob) with tf.variable_scope("dec_scope", reuse=True) as dec_scope: infer_logits = decoding_layer_infer(encoder_state, decoder_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length, vocab_size, dec_scope, output_fn, keep_prob) return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) """ Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output function using lambda to transform its input, logits, to class logits. Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. 
Note: You'll need to use tf.variable_scope to share variables between training and inference. End of explanation """ def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ encoder_embedded_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) encoder = encoding_layer(encoder_embedded_input, rnn_size, num_layers, keep_prob) processed_decoder_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) # this is the one we'll actually train: which embedding to lookup from the decoder input decoder_embedding = tf.Variable(tf.random_normal([target_vocab_size, dec_embedding_size], stddev=0.1)) decoder_embedding_input = tf.nn.embedding_lookup(decoder_embedding, processed_decoder_input) # from the same embedding (inputs) we'll generate both decoders train_logits, infer_logits = decoding_layer(decoder_embedding_input, decoder_embedding, encoder, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return train_logits, infer_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) """ Explanation: Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). End of explanation """ # Number of Epochs -- TODO modify to bigger values for Floydhub # Floydhub: # """ epochs = 10 # Batch Size batch_size = 1024 # RNN Size rnn_size = 100 # Number of Layers num_layers = 4 # Embedding Size encoding_embedding_size = 300 decoding_embedding_size = 300 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.75 # """ # Local: """ epochs = 1 # Batch Size batch_size = 64 # RNN Size rnn_size = 20 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 100 decoding_embedding_size = 100 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.75 """ "Done." """ Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. 
Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') """ Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) """ Explanation: Save Parameters Save the batch_size and save_path parameters for inference. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() """ Explanation: Checkpoint End of explanation """ def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ lowercase_sentence = sentence.lower() unknown_term = vocab_to_int['<UNK>'] return [ vocab_to_int.get(word, unknown_term) for word in sentence.split() ] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) """ Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. End of explanation """ translate_sentence = 'the fruit is orange .' """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) """ Explanation: Translate This will translate translate_sentence from English to French. End of explanation """
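As a small, optional extension that is not part of the original project template: the saved graph can be reused to translate several sentences in one session. The sketch below follows exactly the loading pattern shown above; the example sentences are made up, and any word missing from the source vocabulary is mapped to <UNK> by sentence_to_seq, so the output for such words is only approximate.

# Illustrative sketch only: translate a few extra (hypothetical) sentences with the saved model
more_sentences = ['he saw a shiny little car .', 'paris is sometimes cold during autumn .']
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load the trained model exactly as in the cell above
    loader = tf.train.import_meta_graph(load_path + '.meta')
    loader.restore(sess, load_path)
    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('logits:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
    for sentence in more_sentences:
        # Convert to word ids, run inference, and map the argmax ids back to French words
        seq = sentence_to_seq(sentence, source_vocab_to_int)
        translation_logits = sess.run(logits, {input_data: [seq], keep_prob: 1.0})[0]
        french = ' '.join(target_int_to_vocab[i] for i in np.argmax(translation_logits, 1))
        print('{} -> {}'.format(sentence, french))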
jerkos/cobrapy
documentation_builder/milp.ipynb
lgpl-2.1
cone_selling_price = 7. cone_production_cost = 3. popsicle_selling_price = 2. popsicle_production_cost = 1. starting_budget = 100. """ Explanation: Mixed-Integer Linear Programming Ice Cream This example was originally contributed by Joshua Lerman. An ice cream stand sells cones and popsicles. It wants to maximize its profit, but is subject to a budget. We can write this problem as a linear program: max cone $\cdot$ cone_margin + popsicle $\cdot$ popsicle margin subject to cone $\cdot$ cone_cost + popsicle $\cdot$ popsicle_cost $\le$ budget End of explanation """ from cobra import Model, Metabolite, Reaction cone = Reaction("cone") popsicle = Reaction("popsicle") # constrainted to a budget budget = Metabolite("budget") budget._constraint_sense = "L" budget._bound = starting_budget cone.add_metabolites({budget: cone_production_cost}) popsicle.add_metabolites({budget: popsicle_production_cost}) # objective coefficient is the profit to be made from each unit cone.objective_coefficient = cone_selling_price - cone_production_cost popsicle.objective_coefficient = popsicle_selling_price - \ popsicle_production_cost m = Model("lerman_ice_cream_co") m.add_reactions((cone, popsicle)) m.optimize().x_dict """ Explanation: This problem can be written as a cobra.Model End of explanation """ cone.variable_kind = "integer" popsicle.variable_kind = "integer" m.optimize().x_dict """ Explanation: In reality, cones and popsicles can only be sold in integer amounts. We can use the variable kind attribute of a cobra.Reaction to enforce this. End of explanation """ from IPython.display import Image Image(url=r"http://imgs.xkcd.com/comics/np_complete.png") """ Explanation: Now the model makes both popsicles and cones. Restaurant Order To tackle the less immediately obvious problem from the following XKCD comic: End of explanation """ total_cost = Metabolite("constraint") total_cost._bound = 15.05 costs = {"mixed_fruit": 2.15, "french_fries": 2.75, "side_salad": 3.35, "hot_wings": 3.55, "mozarella_sticks": 4.20, "sampler_plate": 5.80} m = Model("appetizers") for item, cost in costs.items(): r = Reaction(item) r.add_metabolites({total_cost: cost}) r.variable_kind = "integer" m.add_reaction(r) # To add to the problem, suppose we don't want to eat all mixed fruit. m.reactions.mixed_fruit.objective_coefficient = 1 m.optimize(objective_sense="minimize").x_dict """ Explanation: We want a solution satisfying the following constraints: $\left(\begin{matrix}2.15&2.75&3.35&3.55&4.20&5.80\end{matrix}\right) \cdot \vec v = 15.05$ $\vec v_i \ge 0$ $\vec v_i \in \mathbb{Z}$ This problem can be written as a COBRA model as well. End of explanation """ m.optimize(objective_sense="maximize").x_dict """ Explanation: There is another solution to this problem, which would have been obtained if we had maximized for mixed fruit instead of minimizing. 
End of explanation """ import cobra.test model = cobra.test.create_test_model("textbook") # an indicator for pgi pgi = model.reactions.get_by_id("PGI") # make a boolean variable pgi_indicator = Reaction("indicator_PGI") pgi_indicator.lower_bound = 0 pgi_indicator.upper_bound = 1 pgi_indicator.variable_kind = "integer" # create constraint for v - 1000 b <= 0 pgi_plus = Metabolite("PGI_plus") pgi_plus._constraint_sense = "L" # create constraint for v + 1000 b >= 0 pgi_minus = Metabolite("PGI_minus") pgi_minus._constraint_sense = "G" pgi_indicator.add_metabolites({pgi_plus: -1000, pgi_minus: 1000}) pgi.add_metabolites({pgi_plus: 1, pgi_minus: 1}) model.add_reaction(pgi_indicator) # an indicator for zwf zwf = model.reactions.get_by_id("G6PDH2r") zwf_indicator = Reaction("indicator_ZWF") zwf_indicator.lower_bound = 0 zwf_indicator.upper_bound = 1 zwf_indicator.variable_kind = "integer" # create constraint for v - 1000 b <= 0 zwf_plus = Metabolite("ZWF_plus") zwf_plus._constraint_sense = "L" # create constraint for v + 1000 b >= 0 zwf_minus = Metabolite("ZWF_minus") zwf_minus._constraint_sense = "G" zwf_indicator.add_metabolites({zwf_plus: -1000, zwf_minus: 1000}) zwf.add_metabolites({zwf_plus: 1, zwf_minus: 1}) # add the indicator reactions to the model model.add_reaction(zwf_indicator) """ Explanation: Boolean Indicators To give a COBRA-related example, we can create boolean variables as integers, which can serve as indicators for a reaction being active in a model. For a reaction flux $v$ with lower bound -1000 and upper bound 1000, we can create a binary variable $b$ with the following constraints: $b \in {0, 1}$ $-1000 \cdot b \le v \le 1000 \cdot b$ To introduce the above constraints into a cobra model, we can rewrite them as follows $v \le b \cdot 1000 \Rightarrow v- 1000\cdot b \le 0$ $-1000 \cdot b \le v \Rightarrow v + 1000\cdot b \ge 0$ End of explanation """ solution = model.optimize() print("PGI indicator = %d" % solution.x_dict["indicator_PGI"]) print("ZWF indicator = %d" % solution.x_dict["indicator_ZWF"]) print("PGI flux = %.2f" % solution.x_dict["PGI"]) print("ZWF flux = %.2f" % solution.x_dict["G6PDH2r"]) """ Explanation: In a model with both these reactions active, the indicators will also be active End of explanation """ or_constraint = Metabolite("or") or_constraint._bound = 1 zwf_indicator.add_metabolites({or_constraint: 1}) pgi_indicator.add_metabolites({or_constraint: 1}) solution = model.optimize() print("PGI indicator = %d" % solution.x_dict["indicator_PGI"]) print("ZWF indicator = %d" % solution.x_dict["indicator_ZWF"]) print("PGI flux = %.2f" % solution.x_dict["PGI"]) print("ZWF flux = %.2f" % solution.x_dict["G6PDH2r"]) """ Explanation: Because these boolean indicators are in the model, additional constraints can be applied on them. For example, we can prevent both reactions from being active at the same time by adding the following constraint: $b_\text{pgi} + b_\text{zwf} = 1$ End of explanation """
dmytroKarataiev/MachineLearning
creating_customer_segments/customer_segments.ipynb
mit
# Import libraries necessary for this project import numpy as np import pandas as pd import renders as rs from IPython.display import display # Allows the use of display() for DataFrames # Show matplotlib plots inline (nicely formatted in the notebook) %matplotlib inline # Load the wholesale customers dataset try: data = pd.read_csv("customers.csv") data.drop(['Region', 'Channel'], axis = 1, inplace = True) print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape) except: print "Dataset could not be loaded. Is the dataset missing?" """ Explanation: Machine Learning Engineer Nanodegree Unsupervised Learning Project 3: Creating Customer Segments Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting Started In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer. The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers. Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported. End of explanation """ # Display a description of the dataset display(data.describe()) """ Explanation: Data Exploration In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project. Run the code block below to observe a statistical description of the dataset. 
Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase. End of explanation """ # Select three indices of your choice you wish to sample from the dataset indices = [4, 81, 390] # Create a DataFrame of the chosen samples samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True) print "Chosen samples of wholesale customers dataset:" display(samples) print "Diff with the mean of the dataset" display(samples - data.mean().round()) print "Diff with the median of the dataset" display(samples - data.median().round()) print "Quartile Visualization" # Import Seaborn, a very powerful library for Data Visualisation import seaborn as sns perc = data.rank(pct=True) perc = 100 * perc.round(decimals=3) perc = perc.iloc[indices] sns.heatmap(perc, vmin=1, vmax=99, annot=True) samples_bar = samples.append(data.describe().loc['mean']) samples_bar.index = indices + ['mean'] _ = samples_bar.plot(kind='bar', figsize=(14,6)) """ Explanation: Implementation: Selecting Samples To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another. End of explanation """ # Make a copy of the DataFrame, using the 'drop' function to drop the given feature features = data.columns for feature in features: new_data = data.drop(feature, axis = 1) target = data[feature] # Split the data into training and testing sets using the given feature as the target from sklearn import cross_validation X_train, X_test, y_train, y_test = cross_validation.train_test_split( new_data, target, test_size = 0.25, random_state = 0) # Create a decision tree regressor and fit it to the training set from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state = 0) regressor.fit(X_train, y_train) # Report the score of the prediction using the testing set score = regressor.score(X_test, y_test) print feature, score """ Explanation: Question 1 Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers. What kind of establishment (customer) could each of the three samples you've chosen represent? Hint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant. Answer: I've chosen three different indices which represent three completely different types of establishments: Index 4: Probably a big supermarket in a good neighbourhood. All spendings are well above the median for each category, which means that the scale is large. Sales of Fresh and Delicatessen are in 86% and 97% respectively, which probably makes it almost an outlier in the data. Index 81: A shop near a poor neighbourhood (taking into account the amount of Fresh and Delicatessen sales). This point of sales is focused on sales of Milk, Groceries and Detergents, with all sales in 80+ percentile (their sales are much higher than the mean and the median). 
Index 390: considering low sales of Milk, Grocery and Detergents - probably a fast food of some kind with a focus on saled of Frozen (84 percentile). Implementation: Feature Relevance One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature. In the code block below, you will need to implement the following: - Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function. - Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets. - Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state. - Import a decision tree regressor, set a random_state, and fit the learner to the training data. - Report the prediction score of the testing set using the regressor's score function. End of explanation """ # Correlations between segments corr = data.corr() mask = np.zeros_like(corr) mask[np.triu_indices_from(mask)] = True with sns.axes_style("white"): ax = sns.heatmap(corr, mask=mask, square=True, annot=True, cmap='RdBu') # Produce a scatter matrix for each pair of features in the data pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde') """ Explanation: Question 2 Which feature did you attempt to predict? What was the reported prediction score? Is this feature is necessary for identifying customers' spending habits? Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data. Answer: I've tried to predict each feature of the set to understand if we have some features with a high r^2. The highest prediction score was 0.73 with Detergents_Paper. I don't think this feature is necessary for identifying customer habits, as we have a limited number of samples and that's why we shouldn't use highly correlated features in the dataset. And also we can get the same information from other features. Visualize Feature Distributions To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix. 
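If the scatter matrix is hard to read at a glance, the same pairwise relationships can also be ranked numerically. A minimal sketch using only the data DataFrame already loaded above (it assumes a pandas version that provides Series.sort_values):

# Rank unique feature pairs by the strength of their linear correlation
corr_pairs = data.corr().unstack()
corr_pairs = corr_pairs[corr_pairs < 1.0]  # drop the trivial self-correlations
print(corr_pairs.sort_values(ascending=False).drop_duplicates().head(5))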
End of explanation """ # Scale the data using the natural logarithm log_data = np.log(data) # Scale the sample data using the natural logarithm log_samples = np.log(samples) # Produce a scatter matrix for each pair of newly-transformed features pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde') ## Checked the difference between data and cleaned data (with 1 deleted outlier) ## pd.scatter_matrix(good_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde') """ Explanation: Question 3 Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed? Hint: Is the data normally distributed? Where do most of the data points lie? Answer: There are several pairs of features: Milk - Detergents, Milk - Grocery, Grocery - Detergents. The biggest correlation is between Grocery and Detergents_Paper features. This picture confirmed my suspicions about the relevance of the Groceries feature. In each case the data is skewed to the right. Data Preprocessing In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful. Implementation: Feature Scaling If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm. In the code block below, you will need to implement the following: - Assign a copy of the data to log_data after applying a logarithm scaling. Use the np.log function for this. - Assign a copy of the sample data to log_samples after applying a logrithm scaling. Again, use np.log. End of explanation """ # Display the log-transformed sample data display(log_samples) """ Explanation: Observation After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before). Run the code below to see how the sample data has changed after having the natural logarithm applied to it. 
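As an aside, the Box-Cox transform mentioned earlier can be used to double-check that the plain logarithm is a reasonable choice: a fitted Box-Cox lambda close to zero means the log transform is already near optimal. A small sketch (it assumes scipy is installed and that every feature value is strictly positive, which holds for this dataset):

from scipy import stats

for feature in data.columns:
    transformed, lmbda = stats.boxcox(data[feature])
    print('{}: optimal Box-Cox lambda = {:.2f}'.format(feature, lmbda))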
End of explanation """ # For each feature find the data points with extreme high or low values for feature in log_data.keys(): # Calculate Q1 (25th percentile of the data) for the given feature Q1 = np.percentile(log_data[feature], 25) # Calculate Q3 (75th percentile of the data) for the given feature Q3 = np.percentile(log_data[feature], 75) # Use the interquartile range to calculate an outlier step (1.5 times the interquartile range) step = 1.5 * (Q3 - Q1) # Display the outliers print "Data points considered outliers for the feature '{}':".format(feature) display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]) # OPTIONAL: Select the indices for data points you wish to remove outliers = [75] # Remove the outliers, if any were specified good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True) """ Explanation: Implementation: Outlier Detection Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identfying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal. In the code block below, you will need to implement the following: - Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this. - Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile. - Assign the calculation of an outlier step for the given feature to step. - Optionally remove data points from the dataset by adding indices to the outliers list. NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points! Once you have performed this implementation, the dataset will be stored in the variable good_data. End of explanation """ # Apply PCA to the good data with the same number of dimensions as features from sklearn.decomposition import PCA pca = PCA(n_components=len(good_data.columns)) pca.fit(good_data) # Apply a PCA transformation to the sample log-data pca_samples = pca.transform(log_samples) # Generate PCA results plot pca_results = rs.pca_results(good_data, pca) """ Explanation: Question 4 Are there any data points considered outliers for more than one feature? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why. Answer: There are several points, which are outliers for more than 1 feature: 65, 66, 75, 128, 154. I think, only point 75 should be removed as it really changes the trend in the data and it is very far from the rest of data points. Probably, it is a recording error. After deleting we can see a better picture of the data. Feature Transformation In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers. 
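The pca object fitted in the cell above keeps all six components, so a quick way to see how much structure each additional dimension contributes is the cumulative explained variance. A minimal sketch reusing that fitted object:

import numpy as np

# Cumulative share of variance captured as dimensions are added
cum_var = np.cumsum(pca.explained_variance_ratio_)
for i, v in enumerate(cum_var):
    print('First {} component(s) explain {:.1%} of the variance'.format(i + 1, v))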
Implementation: PCA Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data. In the code block below, you will need to implement the following: - Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca. - Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples. End of explanation """ # Display sample log-data after having a PCA transformation applied display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values)) """ Explanation: Question 5 How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending. Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the indivdual feature weights. Answer: ~72% of data explained by 1 and 2 principal components. 93.4% of data explained by 1-4 components. The first dimension has the biggest positive weight on Detergents and slightly lower weights on Groceries and Milk, which are the 3 features with the highest correlation (based on the plots). It also shows that these customers are buying Fresh and Frozen products in a much lesser proportion. - type of a customer: modern trade store (supermarket type) with a variety of products, probably with a smaller assortment (to maintain larger sales per meter from a shelf and to keep prices lower). - customers with a high value are buying Detergents more than any other product, sligthly less Milk and Groceries, while sales of Fresh and Frozen are low. - customers with a negative value are the opposite: they almost do not buy Detergents, Groceries and Milk. The second dimension probably is orthogonal to the first, reducing the impact of Milk, Grocery, and Detergents, and instead puting high weights on sales of Fresh, Frozen and Delicatessen items. - type of a customer: HoReCa point of sales due to the prevalence of sales of foods which are needed to be cooked (Fresh, Frozen, Delicatessen). - customers with a positive value are buying Fresh, Frozen and Delicatessen products. - customers with a negative value almost aren't buying Fresh, Frozen and Delicatessen products. The third dimension has a high Fresh weight and a very negative Delicatessen weight. - type of a customer: looks like an open market to me (farmers market for example) as it sells Fresh products more than anything else. - customers with a high positive value are buying a lot Fresh products. - customers with a negative value are buying primarily Delicatessen products with a slight increase in Frozen. The 4th dimension is mostly focused on Frozen with a high weight and with a very low Delicatessen weight. 
- type of a customer: could be a place which sells frozen meat (but the Frozen category can be anything: ice cream, meat, etc - I don't know for sure). Considering the fact that this data is from Portugal, I would argue that this is a stall type of a client selling frozen meat products. - customers with a positive value buy Frozen and spend very little on Delicatessen. - customers with a negative value buy Delicatessen with a slight increase in Fresh products. Observation Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points. End of explanation """ # Fit PCA to the good data using only two dimensions pca = PCA(n_components=2) pca.fit(good_data) # Apply a PCA transformation the good data reduced_data = pca.transform(good_data) # Apply a PCA transformation to the sample log-data pca_samples = pca.transform(log_samples) # Create a DataFrame for the reduced data reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2']) """ Explanation: Implementation: Dimensionality Reduction When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards. In the code block below, you will need to implement the following: - Assign the results of fitting PCA in two dimensions with good_data to pca. - Apply a PCA transformation of good_data using pca.transform, and assign the reuslts to reduced_data. - Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples. End of explanation """ # Display sample log-data after applying PCA transformation in two dimensions display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2'])) """ Explanation: Observation Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions. 
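Because principal components are nested, dropping to two components should not change those first two values; the sketch below makes the check explicit by refitting the full decomposition for comparison (scikit-learn's PCA resolves component signs deterministically, so the comparison is expected to hold):

import numpy as np
from sklearn.decomposition import PCA

# The 2D scores should equal the first two columns of the full 6D decomposition
full_pca = PCA(n_components=6).fit(good_data)
print(np.allclose(reduced_data.values, full_pca.transform(good_data)[:, :2]))
print('Variance kept by the 2 retained components: {:.1%}'.format(pca.explained_variance_ratio_.sum()))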
End of explanation """ # Apply your clustering algorithm of choice to the reduced data from sklearn.cluster import KMeans from sklearn.metrics import silhouette_score clusters = range(2, 11) best = (0, 0.0) for each in clusters: clusterer = KMeans(n_clusters=each, random_state=0).fit(reduced_data) # Predict the cluster for each data point preds = clusterer.predict(reduced_data) # Find the cluster centers centers = clusterer.cluster_centers_ # Predict the cluster for each transformed sample data point sample_preds = clusterer.predict(pca_samples) # Calculate the mean silhouette coefficient for the number of clusters chosen score = silhouette_score(reduced_data, preds) print "Clusters:", each, "score:", score if score > best[1]: best = (each, score) clusterer = KMeans(n_clusters=best[0], random_state=0).fit(reduced_data) # Predict the cluster for each data point preds = clusterer.predict(reduced_data) # Find the cluster centers centers = clusterer.cluster_centers_ # Predict the cluster for each transformed sample data point sample_preds = clusterer.predict(pca_samples) # Calculate the mean silhouette coefficient for the number of clusters chosen score = silhouette_score(reduced_data, preds) print "The best n of Clusters:", best[0], "\nScore:", best[1] """ Explanation: Clustering In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. Question 6 *What are the advantages to using a K-Means clustering algorithm? * *What are the advantages to using a Gaussian Mixture Model clustering algorithm? * Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why? Answer: K-Means clustering: - It uses a hard assignment of points to clusters. - In practice, the k-means algorithm is very fast (one of the fastest clustering algorithms available) and scalable. - Gives best result when data set are distinct or well separated from each other. Gaussian Mixture Model clustering: - The GMM algorithm is a good algorithm to use for the classification of static postures and non-temporal pattern recognition. - The fastest algorithm for learning mixture models, but it is slower than the K-Means due to using information about the data distribution — e.g., probabilities of points belonging to clusters. - It uses a soft classification, which means a sample will not be classified fully to one class but it will have different probabilities in several classes. I will start with the K-Means, as I don't have a complete understanding of the dataset and K-means is usually used as a first algorithm to use for clustering. Implementation: Creating Clusters Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). 
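For intuition, those per-point values can be inspected directly before averaging; a minimal sketch (silhouette_samples is the per-point counterpart of silhouette_score, and reduced_data / preds are the variables produced by the clustering cell above):

from sklearn.metrics import silhouette_samples

# One silhouette value per data point, in the same order as reduced_data
point_scores = silhouette_samples(reduced_data, preds)
print('Lowest / highest individual silhouette: {:.2f} / {:.2f}'.format(point_scores.min(), point_scores.max()))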
Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering. In the code block below, you will need to implement the following: - Fit a clustering algorithm to the reduced_data and assign it to clusterer. - Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds. - Find the cluster centers using the algorithm's respective attribute and assign them to centers. - Predict the cluster for each sample data point in pca_samples and assign them sample_preds. - Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds. - Assign the silhouette score to score and print the result. End of explanation """ # Display the results of the clustering from implementation rs.cluster_results(reduced_data, preds, centers, pca_samples) """ Explanation: Question 7 Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score? Answer: I've tried 9 different numbers of clusters: Clusters: 2 score: 0.420795773671 Clusters: 3 score: 0.396034911432 Clusters: 4 score: 0.331704488262 Clusters: 5 score: 0.349383709753 Clusters: 6 score: 0.361735087656 Clusters: 7 score: 0.363059697196 Clusters: 8 score: 0.360593881403 Clusters: 9 score: 0.354722206188 Clusters: 10 score: 0.349422838857 The best number of Clusters: 2 with a score: 0.420795773671 Cluster Visualization Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters. End of explanation """ # TODO: Inverse transform the centers log_centers = pca.inverse_transform(centers) # TODO: Exponentiate the centers true_centers = np.exp(log_centers) # Display the true centers segments = ['Segment {}'.format(i) for i in range(0,len(centers))] true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys()) true_centers.index = segments display(true_centers) display(data.describe()) print samples """ Explanation: Implementation: Data Recovery Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations. In the code block below, you will need to implement the following: - Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers. - Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers. 
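Once true_centers has been recovered, one convenient way to characterize each segment is to compare it with the overall medians rather than reading the raw numbers; a short sketch using the true_centers and data objects from this notebook:

# Positive values: the segment buys more of that category than the typical customer
print(true_centers - data.median().round())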
End of explanation """ # Display the predictions for i, pred in enumerate(sample_preds): print "Sample point", i, "predicted to be in Cluster", pred import matplotlib.pyplot as plt # check if samples' spending closer to segment 0 or 1 df_diffs = (np.abs(samples-true_centers.iloc[0]) < np.abs(samples-true_centers.iloc[1])).applymap(lambda x: 0 if x else 1) # see how cluster predictions align with similariy of spending in each category df_preds = pd.concat([df_diffs, pd.Series(sample_preds, name='PREDICTION')], axis=1) sns.heatmap(df_preds, annot=True, cbar=False, yticklabels=['sample 0', 'sample 1', 'sample 2'], square=True) plt.title('Samples closer to\ncluster 0 or 1?') plt.xticks(rotation=45, ha='center') plt.yticks(rotation=0); """ Explanation: Question 8 Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent? Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'. Answer: Cluster 0: considering the sales of Milk, Groceries and Detergents which are much higher than mean we can think of this cluster as a retailers channel. Cluster 1: high sales of Fresh products - probably so-called HoReCa channel: hotels, restaraunts, cafes. Question 9 For each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this? Run the code block below to find which cluster each sample point is predicted to be. End of explanation """ # Display the clustering results based on 'Channel' data rs.channel_results(reduced_data, outliers, pca_samples) """ Explanation: Answer: Point 0 is consistent with the predictions. Almost every product is selling in high volumes, including Detergents. Point 1 is also consistent because of the high sales of both Groceries and Detergents. Point 2 consistent, sales in Fresh are the most important. Conclusion Question 10 Companies often run A/B tests when making small changes to their products or services. If the wholesale distributor wanted to change its delivery service from 5 days a week to 3 days a week, how would you use the structure of the data to help them decide on a group of customers to test? Hint: Would such a change in the delivery service affect all customers equally? How could the distributor identify who it affects the most? Answer: I would choose some percentage of customers from both of the clusters (let's say 5%) and I would test both methods of deliveries on them, for example: Pick 22 customers from the segment 0: 11 - 5 days/week, 11 - 3 days/week. Gather feedback, make a decision. Repeat the same for the segment 1. Question 11 Assume the wholesale distributor wanted to predict a new feature for each customer based on the purchasing information available. How could the wholesale distributor use the structure of the clustering data you've found to assist a supervised learning analysis? Hint: What other input feature could the supervised learner use besides the six product features to help make a prediction? Answer: Supervised learner coud have used numbers of segments predicted by the K-means algorithm. 
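Concretely, a minimal sketch of that idea is shown below; the 'Segment' column name is arbitrary, and it reuses the 2-component pca and the fitted clusterer from the cells above (the log transform is valid here because every spending value in this dataset is strictly positive):

import numpy as np

# Attach the learned segment as an extra input feature for a later supervised model
engineered = data.copy()
engineered['Segment'] = clusterer.predict(pca.transform(np.log(data)))
print(engineered.head())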
Visualizing Underlying Distributions At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier on to the original dataset. Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling. End of explanation """
jbocharov-mids/W207-Machine-Learning
reference/firstname_lastname_p1.ipynb
apache-2.0
# This tells matplotlib not to try opening a new window for each plot. %matplotlib inline # Import a bunch of libraries. import time import numpy as np import matplotlib.pyplot as plt from matplotlib.ticker import MultipleLocator from sklearn.pipeline import Pipeline from sklearn.datasets import fetch_mldata from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import confusion_matrix from sklearn.linear_model import LinearRegression from sklearn.naive_bayes import BernoulliNB from sklearn.naive_bayes import MultinomialNB from sklearn.naive_bayes import GaussianNB from sklearn.grid_search import GridSearchCV from sklearn.metrics import classification_report # Set the randomizer seed so results are the same each time. np.random.seed(0) """ Explanation: Project 1: Digit Classification with KNN and Naive Bayes In this project, you'll implement your own image recognition system for classifying digits. Read through the code and the instructions carefully and add your own code where indicated. Each problem can be addressed succinctly with the included packages -- please don't add any more. Grading will be based on writing clean, commented code, along with a few short answers. As always, you're welcome to work on the project in groups and discuss ideas on the course wall, but <b> please prepare your own write-up (with your own code). </b> If you're interested, check out these links related to digit recognition: Yann Lecun's MNIST benchmarks: http://yann.lecun.com/exdb/mnist/ Stanford Streetview research and data: http://ufldl.stanford.edu/housenumbers/ End of explanation """ # Load the digit data either from mldata.org, or once downloaded to data_home, from disk. The data is about 53MB so this cell # should take a while the first time your run it. mnist = fetch_mldata('MNIST original', data_home='~/datasets/mnist') X, Y = mnist.data, mnist.target # Rescale grayscale values to [0,1]. X = X / 255.0 # Shuffle the input: create a random permutation of the integers between 0 and the number of data points and apply this # permutation to X and Y. # NOTE: Each time you run this cell, you'll re-shuffle the data, resulting in a different ordering. shuffle = np.random.permutation(np.arange(X.shape[0])) X, Y = X[shuffle], Y[shuffle] print 'data shape: ', X.shape print 'label shape:', Y.shape # Set some variables to hold test, dev, and training data. test_data, test_labels = X[61000:], Y[61000:] dev_data, dev_labels = X[60000:61000], Y[60000:61000] train_data, train_labels = X[:60000], Y[:60000] mini_train_data, mini_train_labels = X[:1000], Y[:1000] """ Explanation: Load the data. Notice that we are splitting the data into training, development, and test. We also have a small subset of the training data called mini_train_data and mini_train_labels that you should use in all the experiments below, unless otherwise noted. End of explanation """ #def P1(num_examples=10): ### STUDENT START ### ### STUDENT END ### #P1(10) """ Explanation: (1) Create a 10x10 grid to visualize 10 examples of each digit. Python hints: plt.rc() for setting the colormap, for example to black and white plt.subplot() for creating subplots plt.imshow() for rendering a matrix np.array.reshape() for reshaping a 1D feature vector into a 2D matrix (for rendering) End of explanation """ #def P2(k_values): ### STUDENT START ### ### STUDENT END ### #k_values = [1, 3, 5, 7, 9] #P2(k_values) """ Explanation: (2) Evaluate a K-Nearest-Neighbors model with k = [1,3,5,7,9] using the mini training set. Report accuracy on the dev set. 
For k=1, show precision, recall, and F1 for each label. Which is the most difficult digit? KNeighborsClassifier() for fitting and predicting classification_report() for producing precision, recall, F1 results End of explanation """ #def P3(train_sizes, accuracies): ### STUDENT START ### ### STUDENT END ### #train_sizes = [100, 200, 400, 800, 1600, 3200, 6400, 12800, 25000] #accuracies = [] #P3(train_sizes, accuracies) """ Explanation: ANSWER: (3) Using k=1, report dev set accuracy for the training set sizes below. Also, measure the amount of time needed for prediction with each training size. time.time() gives a wall clock value you can use for timing operations End of explanation """ #def P4(): ### STUDENT START ### ### STUDENT END ### #P4() """ Explanation: (4) Fit a regression model that predicts accuracy from training size. What does it predict for n=60000? What's wrong with using regression here? Can you apply a transformation that makes the predictions more reasonable? Remember that the sklearn fit() functions take an input matrix X and output vector Y. So each input example in X is a vector, even if it contains only a single value. End of explanation """ #def P5(): ### STUDENT START ### ### STUDENT END ### #P5() """ Explanation: ANSWER: Fit a 1-NN and output a confusion matrix for the dev data. Use the confusion matrix to identify the most confused pair of digits, and display a few example mistakes. confusion_matrix() produces a confusion matrix End of explanation """ #def P6(): ### STUDENT START ### ### STUDENT END ### #P6() """ Explanation: (6) A common image processing technique is to smooth an image by blurring. The idea is that the value of a particular pixel is estimated as the weighted combination of the original value and the values around it. Typically, the blurring is Gaussian -- that is, the weight of a pixel's influence is determined by a Gaussian function over the distance to the relevant pixel. Implement a simplified Gaussian blur by just using the 8 neighboring pixels: the smoothed value of a pixel is a weighted combination of the original value and the 8 neighboring values. Try applying your blur filter in 3 ways: - preprocess the training data but not the dev data - preprocess the dev data but not the training data - preprocess both training and dev data Note that there are Guassian blur filters available, for example in scipy.ndimage.filters. You're welcome to experiment with those, but you are likely to get the best results with the simplified version I described above. End of explanation """ #def P7(): ### STUDENT START ### ### STUDENT END ### #P7() """ Explanation: ANSWER: (7) Fit a Naive Bayes classifier and report accuracy on the dev data. Remember that Naive Bayes estimates P(feature|label). While sklearn can handle real-valued features, let's start by mapping the pixel values to either 0 or 1. You can do this as a preprocessing step, or with the binarize argument. With binary-valued features, you can use BernoulliNB. Next try mapping the pixel values to 0, 1, or 2, representing white, grey, or black. This mapping requires MultinomialNB. Does the multi-class version improve the results? Why or why not? End of explanation """ #def P8(alphas): ### STUDENT START ### ### STUDENT END ### #alphas = {'alpha': [0.0, 0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 2.0, 10.0]} #nb = P8(alphas) #print nb.best_params_ """ Explanation: ANSWER: (8) Use GridSearchCV to perform a search over values of alpha (the Laplace smoothing parameter) in a Bernoulli NB model. 
What is the best value for alpha? What is the accuracy when alpha=0? Is this what you'd expect? Note that GridSearchCV partitions the training data so the results will be a bit different than if you used the dev data for evaluation. End of explanation """ #def P9(): ### STUDENT END ### ### STUDENT END ### #gnb = P9() """ Explanation: ANSWER: (9) Try training a model using GuassianNB, which is intended for real-valued features, and evaluate on the dev data. You'll notice that it doesn't work so well. Try to diagnose the problem. You should be able to find a simple fix that returns the accuracy to around the same rate as BernoulliNB. Explain your solution. Hint: examine the parameters estimated by the fit() method, theta_ and sigma_. End of explanation """ #def P10(num_examples): ### STUDENT START ### ### STUDENT END ### #P10(20) """ Explanation: ANSWER: (10) Because Naive Bayes is a generative model, we can use the trained model to generate digits. Train a BernoulliNB model and then generate a 10x20 grid with 20 examples of each digit. Because you're using a Bernoulli model, each pixel output will be either 0 or 1. How do the generated digits compare to the training digits? You can use np.random.rand() to generate random numbers from a uniform distribution The estimated probability of each pixel is stored in feature_log_prob_. You'll need to use np.exp() to convert a log probability back to a probability. End of explanation """ #def P11(buckets, correct, total): ### STUDENT START ### ### STUDENT END ### #buckets = [0.5, 0.9, 0.999, 0.99999, 0.9999999, 0.999999999, 0.99999999999, 0.9999999999999, 1.0] #correct = [0 for i in buckets] #total = [0 for i in buckets] #P11(buckets, correct, total) #for i in range(len(buckets)): # accuracy = 0.0 # if (total[i] > 0): accuracy = correct[i] / total[i] # print 'p(pred) <= %.13f total = %3d accuracy = %.3f' %(buckets[i], total[i], accuracy) """ Explanation: ANSWER: (11) Remember that a strongly calibrated classifier is rougly 90% accurate when the posterior probability of the predicted class is 0.9. A weakly calibrated classifier is more accurate when the posterior is 90% than when it is 80%. A poorly calibrated classifier has no positive correlation between posterior and accuracy. Train a BernoulliNB model with a reasonable alpha value. For each posterior bucket (think of a bin in a histogram), you want to estimate the classifier's accuracy. So for each prediction, find the bucket the maximum posterior belongs to and update the "correct" and "total" counters. How would you characterize the calibration for the Naive Bayes model? End of explanation """ #def P12(): ### STUDENT START ### ### STUDENT END ### #P12() """ Explanation: ANSWER: (12) EXTRA CREDIT Try designing extra features to see if you can improve the performance of Naive Bayes on the dev set. Here are a few ideas to get you started: - Try summing the pixel values in each row and each column. - Try counting the number of enclosed regions; 8 usually has 2 enclosed regions, 9 usually has 1, and 7 usually has 0. Make sure you comment your code well! End of explanation """
mtasende/Machine-Learning-Nanodegree-Capstone
notebooks/dev/.ipynb_checkpoints/n16_hallucinating_with_predictor-checkpoint.ipynb
mit
# Basic imports import os import pandas as pd import matplotlib.pyplot as plt import numpy as np import datetime as dt import scipy.optimize as spo import sys from time import time from sklearn.metrics import r2_score, median_absolute_error %matplotlib inline %pylab inline pylab.rcParams['figure.figsize'] = (20.0, 10.0) %load_ext autoreload %autoreload 2 sys.path.append('../../') from sklearn.externals import joblib """ Explanation: In this notebook the predictor will be used to estimate the new states and rewards for the dyna (hallucinated) iterations, of the Q-learning agent. End of explanation """ best_params_df = pd.read_pickle('../../data/best_params_final_df.pkl') best_params_df import predictor.feature_extraction as fe from predictor.linear_predictor import LinearPredictor import utils.misc as misc import predictor.evaluation as ev ahead_days = 1 # Get some parameters train_days = int(best_params_df.loc[ahead_days, 'train_days']) GOOD_DATA_RATIO, \ train_val_time, \ base_days, \ step_days, \ ahead_days, \ SAMPLES_GOOD_DATA_RATIO, \ x_filename, \ y_filename = misc.unpack_params(best_params_df.loc[ahead_days,:]) pid = 'base{}_ahead{}'.format(base_days, ahead_days) # Get the datasets x_train = pd.read_pickle('../../data/x_{}.pkl'.format(pid)) y_train = pd.read_pickle('../../data/y_{}.pkl'.format(pid)) x_test = pd.read_pickle('../../data/x_{}_test.pkl'.format(pid)).sort_index() y_test = pd.DataFrame(pd.read_pickle('../../data/y_{}_test.pkl'.format(pid))).sort_index() # Let's cut the training set to use only the required number of samples end_date = x_train.index.levels[0][-1] start_date = fe.add_market_days(end_date, -train_days) x_sub_df = x_train.loc[(slice(start_date,None),slice(None)),:] y_sub_df = pd.DataFrame(y_train.loc[(slice(start_date,None),slice(None))]) # Create the estimator and train estimator = LinearPredictor() estimator.fit(x_sub_df, y_sub_df) # Get the training and test predictions y_train_pred = estimator.predict(x_sub_df) y_test_pred = estimator.predict(x_test) # Get the training and test metrics for each symbol metrics_train = ev.get_metrics_df(y_sub_df, y_train_pred) metrics_test = ev.get_metrics_df(y_test, y_test_pred) # Show the mean metrics metrics_df = pd.DataFrame(columns=['train', 'test']) metrics_df['train'] = metrics_train.mean() metrics_df['test'] = metrics_test.mean() print('Mean metrics: \n{}\n{}'.format(metrics_df,'-'*70)) # Plot the metrics in time metrics_train_time = ev.get_metrics_in_time(y_sub_df, y_train_pred, base_days + ahead_days) metrics_test_time = ev.get_metrics_in_time(y_test, y_test_pred, base_days + ahead_days) plt.plot(metrics_train_time[2], metrics_train_time[0], label='train', marker='.') plt.plot(metrics_test_time[2], metrics_test_time[0], label='test', marker='.') plt.title('$r^2$ metrics') plt.legend() plt.figure() plt.plot(metrics_train_time[2], metrics_train_time[1], label='train', marker='.') plt.plot(metrics_test_time[2], metrics_test_time[1], label='test', marker='.') plt.title('MRE metrics') plt.legend() """ Explanation: First, let's try to instantiate the best predictor that was found End of explanation """ print('The first training day for the predictor is: {}.'.format(start_date)) print('The last training day for the predictor is: {}.'.format(fe.add_market_days(end_date, base_days))) print('The testing data for the recommender') total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature') total_data_test_df.head() print('The first TEST day for the recommender is: 
{}'.format(total_data_test_df.index[-0])) """ Explanation: Let's see the range of the test set (to check that no data from the recommender test set is in the training set for the predictor) End of explanation """ joblib.dump(estimator, '../../data/best_predictor.pkl') """ Explanation: Good! The predictor will be used as it is, without retraining, for simplicity and computational performance End of explanation """ estimator_reloaded = joblib.load('../../data/best_predictor.pkl') # Get the training and test predictions y_train_pred = estimator_reloaded.predict(x_sub_df) y_test_pred = estimator_reloaded.predict(x_test) # Get the training and test metrics for each symbol metrics_train = ev.get_metrics_df(y_sub_df, y_train_pred) metrics_test = ev.get_metrics_df(y_test, y_test_pred) # Show the mean metrics metrics_df = pd.DataFrame(columns=['train', 'test']) metrics_df['train'] = metrics_train.mean() metrics_df['test'] = metrics_test.mean() print('Mean metrics: \n{}\n{}'.format(metrics_df,'-'*70)) # Plot the metrics in time metrics_train_time = ev.get_metrics_in_time(y_sub_df, y_train_pred, base_days + ahead_days) metrics_test_time = ev.get_metrics_in_time(y_test, y_test_pred, base_days + ahead_days) plt.plot(metrics_train_time[2], metrics_train_time[0], label='train', marker='.') plt.plot(metrics_test_time[2], metrics_test_time[0], label='test', marker='.') plt.title('$r^2$ metrics') plt.legend() plt.figure() plt.plot(metrics_train_time[2], metrics_train_time[1], label='train', marker='.') plt.plot(metrics_test_time[2], metrics_test_time[1], label='test', marker='.') plt.title('MRE metrics') plt.legend() """ Explanation: Let's test the saved predictor... just in case. End of explanation """ # Get the data SYMBOL = 'SPY' total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature') data_train_df = total_data_train_df[SYMBOL].unstack()[['Close', 'Volume']] data_train_df.head() def generate_samples(data_df): start_date = data_df.index[0] close_sample = pd.DataFrame(data_df['Close'].values, columns=[start_date]).T close_sample = close_sample / close_sample.iloc[0,0] volume_sample = pd.DataFrame(data_df['Volume'].values, columns=[start_date]).T volume_sample = volume_sample / volume_sample.iloc[0,0] return close_sample, volume_sample data_df = data_train_df[:112] start_date = data_df.index[0] close_sample = pd.DataFrame(data_df['Close'].values, columns=[start_date]).T close_sample = close_sample / close_sample.iloc[0,0] volume_sample = pd.DataFrame(data_df['Volume'].values, columns=[start_date]).T volume_sample = volume_sample / volume_sample.iloc[0,0] close_sample close_sample, volume_sample = generate_samples(data_df) close_sample volume_sample """ Explanation: Looks good to me. 
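One detail worth making explicit before moving on: generate_samples rescales each window by its first value, so the predictor always sees prices and volumes relative to the start of the window rather than absolute levels. A tiny check on the sample just built above:

# The first entry of every normalized window should be exactly 1.0
print(close_sample.iloc[0, 0], volume_sample.iloc[0, 0])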
Let's assume that the data comes as real values for one ticker End of explanation """ history_df = data_train_df[:112] estimator_close = joblib.load('../../data/best_predictor.pkl') estimator_volume = joblib.load('../../data/best_volume_predictor.pkl') h_history_df = history_df.copy() def predict_one_step(h_history_df, keep=False): close_sample, volume_sample = generate_samples(h_history_df) estimated_close = estimator_close.predict(close_sample).iloc[0,0] * h_history_df['Close'].iloc[0] estimated_volume = estimator_volume.predict(volume_sample).iloc[0,0] * h_history_df['Volume'].iloc[0] predicted_date = fe.add_market_days(h_history_df.index[-1], 1) h_history_df = h_history_df.drop(h_history_df.index[0]) h_history_df.loc[predicted_date,:] = {'Close': estimated_close,'Volume': estimated_volume} return h_history_df close_sample, volume_sample = generate_samples(h_history_df) estimated_close = estimator_close.predict(close_sample).iloc[0,0] * h_history_df['Close'].iloc[0] estimated_volume = estimator_volume.predict(volume_sample).iloc[0,0] * h_history_df['Volume'].iloc[0] estimator_close.predict(close_sample).iloc[0,0] predicted_date = fe.add_market_days(h_history_df.index[-1], 1) predicted_date history_df h_history_df = h_history_df.drop(h_history_df.index[0]) h_history_df.loc[predicted_date,:] = {'Close': estimated_close,'Volume': estimated_volume} h_history_df h_history_df = history_df.copy() for i in range(20): h_history_df = predict_one_step(h_history_df.copy()) """ Explanation: Now, let's predict one step End of explanation """ h_history_df = history_df.copy() predicted_df = pd.DataFrame() for i in range(112): h_history_df = predict_one_step(h_history_df.copy()) predicted_df = predicted_df.append(h_history_df.iloc[-1]) predicted_df real_df = history_df.append(data_train_df[112:224]) plt.plot(real_df.index, real_df['Close'], 'b', label='real') plt.plot(predicted_df.index, predicted_df['Close'], 'r', label='predicted') plt.legend() plt.show() """ Explanation: Just for fun, let's see some predictions... End of explanation """
mne-tools/mne-tools.github.io
0.22/_downloads/7bbeb6a728b7d16c6e61cd487ba9e517/plot_morph_volume_stc.ipynb
bsd-3-clause
# Author: Tommy Clausner <tommy.clausner@gmail.com> # # License: BSD (3-clause) import os import nibabel as nib import mne from mne.datasets import sample, fetch_fsaverage from mne.minimum_norm import apply_inverse, read_inverse_operator from nilearn.plotting import plot_glass_brain print(__doc__) """ Explanation: Morph volumetric source estimate This example demonstrates how to morph an individual subject's :class:mne.VolSourceEstimate to a common reference space. We achieve this using :class:mne.SourceMorph. Data will be morphed based on an affine transformation and a nonlinear registration method known as Symmetric Diffeomorphic Registration (SDR) by :footcite:AvantsEtAl2008. Transformation is estimated from the subject's anatomical T1 weighted MRI (brain) to FreeSurfer's 'fsaverage' T1 weighted MRI (brain) &lt;https://surfer.nmr.mgh.harvard.edu/fswiki/FsAverage&gt;__. Afterwards the transformation will be applied to the volumetric source estimate. The result will be plotted, showing the fsaverage T1 weighted anatomical MRI, overlaid with the morphed volumetric source estimate. End of explanation """ sample_dir_raw = sample.data_path() sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample') subjects_dir = os.path.join(sample_dir_raw, 'subjects') fname_evoked = os.path.join(sample_dir, 'sample_audvis-ave.fif') fname_inv = os.path.join(sample_dir, 'sample_audvis-meg-vol-7-meg-inv.fif') fname_t1_fsaverage = os.path.join(subjects_dir, 'fsaverage', 'mri', 'brain.mgz') fetch_fsaverage(subjects_dir) # ensure fsaverage src exists fname_src_fsaverage = subjects_dir + '/fsaverage/bem/fsaverage-vol-5-src.fif' """ Explanation: Setup paths End of explanation """ evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0)) inverse_operator = read_inverse_operator(fname_inv) # Apply inverse operator stc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, "dSPM") # To save time stc.crop(0.09, 0.09) """ Explanation: Compute example data. For reference see sphx_glr_auto_examples_inverse_plot_compute_mne_inverse_volume.py Load data: End of explanation """ src_fs = mne.read_source_spaces(fname_src_fsaverage) morph = mne.compute_source_morph( inverse_operator['src'], subject_from='sample', subjects_dir=subjects_dir, niter_affine=[10, 10, 5], niter_sdr=[10, 10, 5], # just for speed src_to=src_fs, verbose=True) """ Explanation: Get a SourceMorph object for VolSourceEstimate subject_from can typically be inferred from :class:src &lt;mne.SourceSpaces&gt;, and subject_to is set to 'fsaverage' by default. subjects_dir can be None when set in the environment. In that case SourceMorph can be initialized taking src as only argument. See :class:mne.SourceMorph for more details. The default parameter setting for zooms will cause the reference volumes to be resliced before computing the transform. A value of '5' would cause the function to reslice to an isotropic voxel size of 5 mm. The higher this value the less accurate but faster the computation will be. The recommended way to use this is to morph to a specific destination source space so that different subject_from morphs will go to the same space.` A standard usage for volumetric data reads: End of explanation """ stc_fsaverage = morph.apply(stc) """ Explanation: Apply morph to VolSourceEstimate The morph can be applied to the source estimate data, by giving it as the first argument to the :meth:morph.apply() &lt;mne.SourceMorph.apply&gt; method. 
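Because estimating the affine and SDR registration is by far the most expensive step, it can be convenient to persist the SourceMorph object and reload it in a later session instead of recomputing it. The sketch below assumes the save / read helpers provided by this version of MNE (worth double-checking against the API reference), and the file name is arbitrary:

# Save the estimated morph once, then reuse it later without re-registration
morph.save('sample-fsaverage-vol-morph.h5', overwrite=True)
morph_reloaded = mne.read_source_morph('sample-fsaverage-vol-morph.h5')
stc_fsaverage = morph_reloaded.apply(stc)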
<div class="alert alert-info"><h4>Note</h4><p>Volumetric morphing is much slower than surface morphing because the volume for each time point is individually resampled and SDR morphed. The :meth:`mne.SourceMorph.compute_vol_morph_mat` method can be used to compute an equivalent sparse matrix representation by computing the transformation for each source point individually. This generally takes a few minutes to compute, but can be :meth:`saved <mne.SourceMorph.save>` to disk and be reused. The resulting sparse matrix operation is very fast (about 400× faster) to :meth:`apply <mne.SourceMorph.apply>`. This approach is more efficient when the number of time points to be morphed exceeds the number of source space points, which is generally in the thousands. This can easily occur when morphing many time points and multiple conditions.</p></div> End of explanation """ # Create mri-resolution volume of results img_fsaverage = morph.apply(stc, mri_resolution=2, output='nifti1') """ Explanation: Convert morphed VolSourceEstimate into NIfTI We can convert our morphed source estimate into a NIfTI volume using :meth:morph.apply(..., output='nifti1') &lt;mne.SourceMorph.apply&gt;. End of explanation """ # Load fsaverage anatomical image t1_fsaverage = nib.load(fname_t1_fsaverage) # Plot glass brain (change to plot_anat to display an overlaid anatomical T1) display = plot_glass_brain(t1_fsaverage, title='subject results to fsaverage', draw_cross=False, annotate=True) # Add functional data as overlay display.add_overlay(img_fsaverage, alpha=0.75) """ Explanation: Plot results End of explanation """
lionell/laboratories
math_modelling/lab3/lab3.ipynb
mit
def fmap(fs, x): return np.array([f(*x) for f in fs]) def runge_kutta4_system(fs, x, y0): h = x[1] - x[0] y = np.ndarray((len(x), len(y0))) y[0] = y0 for i in range(1, len(x)): k1 = h * fmap(fs, [x[i - 1], *y[i - 1]]) k2 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k1/2)]) k3 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k2/2)]) k4 = h * fmap(fs, [x[i - 1] + h, *(y[i - 1] + k3)]) y[i] = y[i - 1] + (k1 + 2*k2 + 2*k3 + k4) / 6 return y """ Explanation: Runge-Kutta methods for SDE End of explanation """ def gen_A(c1, m1, m2, c2, c3, c4, m3): # correct A = np.zeros((6, 6)) A[0, 1] = 1 A[1, 0] = -(c2 + c1)/m1 A[1, 2] = c2/m1 A[2, 3] = 1 A[3, 0] = c2/m2 A[3, 2] = -(c3 + c2)/m2 A[3, 4] = c3/m2 A[4, 5] = 1 A[5, 2] = c3/m3 A[5, 4] = -(c4 + c3)/m3 return A """ Explanation: Now let's check if method works correctly. To do so, we are going to use scipy.integrate.odeint. We are going to solve the next problem $$ \frac{dy}{dt} = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \ -\frac{c_2 + c_1}{m_1} & 0 & \frac{c_2}{m_1} & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 \ \frac{c_2}{m_2} & 0 & -\frac{c_3 + c_2}{m_2} & 0 & \frac{c_3}{m_2} & 0 \ 0 & 0 & 0 & 0 & 0 & 1 \ 0 & 0 & \frac{c_3}{m_3} & 0 & -\frac{c_4 + c_3}{m_3} & 0 \end{pmatrix} y = Ay, \ y(t_0) = y_{ans}(t_0) $$ First we have to generate matrix $A$ End of explanation """ c1, c2, c3, c4 = 4, 4, 6, 3 m1, m2, m3 = 4, 2, 3 t = np.linspace(0, 10, 101) y0 = np.array([0, 1, 0, 3, 0, 2]) A = gen_A(c1, m1, m2, c2, c3, c4, m3) A """ Explanation: Now we need some values to substitute variables $c_1, \dots, m_3$ in matrix $A$. Let's assume that $$ (c_1, c_2, c_3, c_4) = (4, 4, 6, 3), \ (m_1, m_2, m_3) = (4, 2, 3), \ t \in [0, 10], \Delta t = 0.1, \ y(0) = (0, 1, 0, 3, 0, 2) $$ End of explanation """ def eval_y(A, t, y0): # correct fs = [] for i in range(6): fun = (lambda i: lambda *args: np.dot(A[i], np.array(args[1:])))(i) fs.append(fun) return runge_kutta4_system(fs, t, y0) """ Explanation: Then let's write function to evaluate this system using Runge-Kutta method. End of explanation """ def dydt(y, t): dy = [None] * 6 dy[0] = y[1] dy[1] = -2 * y[0] + y[2] dy[2] = y[3] dy[3] = 2 * y[0] -5 * y[2] + 3 * y[4] dy[4] = y[5] dy[5] = 2 * y[2] - 3 * y[4] return dy """ Explanation: To run scipy.integrate.odeint we need to use different function for derivatives End of explanation """ ys_rk4 = eval_y(A, t, y0) ys_sp = odeint(dydt, y0, t) np.linalg.norm(ys_rk4 - ys_sp) """ Explanation: Now we can compare results, using Frobenius norm $$ ||A||F = \Big[ \sum{i,j} \big|a_{i,j}\big|^2 \Big]^{1/2} $$ End of explanation """ def eval_U(A, Bs, t): h = t[1] - t[0] q, m, n = Bs.shape Us = np.empty((q // 2 + q % 2, m, n)) for i in range(n): fs = [] for j in range(m): fun = (lambda i, j: lambda *args: Bs[int(round(args[0] / h)), j, i] + np.dot(A[j], np.array(args[1:])))(i, j) fs.append(fun) x = t[::2] y0 = np.zeros(m) Us[:, :, i] = runge_kutta4_system(fs, x, y0) return Us """ Explanation: As we can check it's pretty small, and if we are going to decrease step $h$ error will go down. Runge-Kutta methods for matrices We are going to solve next problem $$ \frac{dU(t)}{dt} = \frac{\partial (Ay)}{\partial y^T} U(t) + \frac{\partial (Ay)}{\partial \beta^T}(t), \ U(t_0) = 0, $$ In our case we have $$ \frac{\partial (Ay)}{\partial y^T} = A $$ Let's denote $$ B(t) = \frac{\partial (Ay)}{\partial \beta^T} $$ Finally we have $$ \frac{dU(t)}{dt} = A \cdot U(t) + B(t), \ U(t_0) = 0, $$ NOTE! We can compute $B(t)$ only in some points. So we are going to compress our variable $t$ twice to use RK4. 
End of explanation """ def gen_Bs(ys, c1, m1, m2, c2, c3, c4, m3): # correct q = ys.shape[0] Bs = np.zeros((q, 6, 3)) Bs[:, 1, 0] = -1/m1 * ys[:, 0] Bs[:, 1, 1] = (c2 + c1)/m1**2 * ys[:, 0] - c2/m1**2 * ys[:, 2] Bs[:, 3, 2] = -c2/m2**2 * ys[:, 0] + (c3 + c2)/m2**2 * ys[:, 2] - c3/m2**2 * ys[:, 4] return Bs """ Explanation: We need function to generate $B(t)$ $$ B(t) = \frac{\partial}{\partial \beta^T} \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \ -\frac{c_2 + c_1}{m_1} & 0 & \frac{c_2}{m_1} & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 \ \frac{c_2}{m_2} & 0 & -\frac{c_3 + c_2}{m_2} & 0 & \frac{c_3}{m_2} & 0 \ 0 & 0 & 0 & 0 & 0 & 1 \ 0 & 0 & \frac{c_3}{m_3} & 0 & -\frac{c_4 + c_3}{m_3} & 0 \end{pmatrix} y = \frac{\partial}{\partial \beta^T} \begin{pmatrix} y_2 \ -\frac{c_2 + c_1}{m_1} y_1 + \frac{c_2}{m_1} y_3 \ y_4 \ \frac{c_2}{m_2} y_1 - \frac{c_3 + c_2}{m_2} y_3 + \frac{c_3}{m_2} y_5 \ y_6 \ \frac{c_3}{m_3} y_3 - \frac{c_4 + c_3}{m_3} y_5 \end{pmatrix} = \ =\begin{pmatrix} 0 & 0 & 0 \ -\frac{1}{m_1} y_1 & \frac{c_2 + c_1}{m_1^2} y_1 - \frac{c_2}{m_1^2} y_3 & 0 \ 0 & 0 & 0 \ 0 & 0 & -\frac{c_2}{m_2^2} y_1 + \frac{c_3 + c_2}{m_2^2} y_3 - \frac{c_3}{m_2^2} y_5 \ 0 & 0 & 0 \ 0 & 0 & 0 \end{pmatrix} $$ End of explanation """ def eval_delta(Us, ys, ys_ans): # correct q = Us.shape[0] T1 = np.zeros((3, 3)) for i in range(q): T1 = T1 + np.dot(Us[i].T, Us[i]) T2 = np.zeros((3, 1)) ys = ys[::2] ys_ans = ys_ans[::2] for i in range(q): T2 = T2 + np.dot(Us[i].T, np.reshape(ys_ans[i] - ys[i], (6, 1))) return np.dot(np.linalg.inv(T1), T2) """ Explanation: Result evaluation $$ \Delta \beta = \Big( \int_{t_0}^{t_k} U^T(t)U(t)dt \Big)^{-1} \int_{t_0}^{t_k} U^T(t)(y_{ans}(t) - y(t))dt $$ End of explanation """ def eval_diff(ys, ys_ans): # correct q = ys.shape[0] ans = 0 for i in range(q): ans = ans + np.dot(ys_ans[i] - ys[i], ys_ans[i] - ys[i]) return ans def eval_beta(beta0, other, ys_ans, t): beta = beta0 for i in range(100): A = gen_A(*beta, *other) ys = eval_y(A, t, y0) err = eval_diff(ys, ys_ans) print(err) if (err < EPS): break Bs = gen_Bs(ys, *beta, *other) Us = eval_U(A, Bs, t) delta = eval_delta(Us, ys, ys_ans) beta = beta + delta[:, 0] return beta """ Explanation: $$ I(\beta) = \int_{t_0}^{t_k} (y_{ans}(t) - y(t))^T(y_{ans}(t) - y(t))dt $$ End of explanation """ ys_ans = np.loadtxt(open('data/y1.txt', 'r')).T y0 = ys_ans[0] t = np.linspace(0, 50, 251) c2, c3, c4, m3 = 0.3, 0.2, 0.12, 18 beta0 = np.array([0.1, 11, 23]) plt.figure(figsize=(15, 10)) plt.plot(t, ys_ans[:, 0], 'r', label='y1(t)') plt.plot(t, ys_ans[:, 1], 'r--', label='y2(t)') plt.plot(t, ys_ans[:, 2], 'b', label='y3(t)') plt.plot(t, ys_ans[:, 3], 'b--', label='y4(t)') plt.plot(t, ys_ans[:, 4], 'g', label='y5(t)') plt.plot(t, ys_ans[:, 5], 'g--', label='y6(t)') plt.xlabel('t') plt.ylabel('y') plt.title('Data from data/y1.txt') plt.legend(loc='best') plt.show() beta_res = eval_beta(beta0, [c2, c3, c4, m3], ys_ans, t) beta_res """ Explanation: Data to process Here we have $$ (c_2, c_3, c_4, m_3) = (0.3, 0.2, 0.12, 18), \ \beta = (c_1, m_1, m_2)^T, \ \beta_0 = (0.1, 11, 23)^T, \ t \in [0, 50], \Delta t = 0.2, $$ End of explanation """ c1, m1, m2 = beta_res A = gen_A(c1, m1, m2, c2, c3, c4, m3) ys_gen = eval_y(A, t, y0) plt.figure(figsize=(15, 10)) plt.plot(t, ys_gen[:, 0], 'r', label='y1(t)') plt.plot(t, ys_gen[:, 1], 'r--', label='y2(t)') plt.plot(t, ys_gen[:, 2], 'b', label='y3(t)') plt.plot(t, ys_gen[:, 3], 'b--', label='y4(t)') plt.plot(t, ys_gen[:, 4], 'g', label='y5(t)') plt.plot(t, ys_gen[:, 5], 'g--', label='y6(t)') plt.xlabel('t') 
plt.ylabel('y')
plt.title('Data generated using pre-calculated params')
plt.legend(loc='best')
plt.show()
"""
Explanation: It looks like the iteration converged to an estimate of beta. Let's check it by regenerating the trajectories with the recovered parameters (c1, m1, m2) and plotting them, so they can be compared against the measured data plotted earlier.
End of explanation
"""
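As an extra sanity check of runge_kutta4_system on a problem with a known closed-form solution (this check is not part of the original lab; the test equation and step size are only illustrative, and it assumes numpy is already imported as np, as in the cells above), we can integrate $y' = -y$, $y(0) = 1$ and compare against $e^{-t}$:
# Illustrative accuracy check for the RK4 implementation above.
t_chk = np.linspace(0, 5, 51)  # step h = 0.1
y_chk = runge_kutta4_system([lambda t, y: -y], t_chk, np.array([1.0]))
np.max(np.abs(y_chk[:, 0] - np.exp(-t_chk)))  # should be tiny, scaling roughly like h**4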
sdss/marvin
docs/sphinx/jupyter/dap_spaxel_queries.ipynb
bsd-3-clause
from marvin import config from marvin.tools.query import Query config.mode='remote' """ Explanation: DAP Zonal Queries (or Spaxel Queries) Marvin allows you to perform queries on individual spaxels within and across the MaNGA dataset. End of explanation """ config.setRelease('MPL-5') f = 'emline_gflux_ha_6564 > 25' q = Query(search_filter=f) print(q) # let's run the query r = q.run() r.totalcount r.results """ Explanation: Let's grab all spaxels with an Ha-flux > 25 from MPL-5. End of explanation """ # get a list of the plate-ifus plateifu = r.getListOf('plateifu') # look at the unique values with Python set print('unique galaxies', set(plateifu), len(set(plateifu))) """ Explanation: Spaxel queries are queries on individual spaxels, and thus will always return a spaxel x and y satisfying your input condition. There is the potential of returning a large number of results that span only a few actual galaxies. Let's see how many.. End of explanation """ f = 'emline_gflux_ha_6564 > 25 and bintype.name == SPX' q = Query(search_filter=f, return_params=['template.name']) print(q) # run it r = q.run() r.results """ Explanation: Optimize your query Unless specified, spaxel queries will query across all bintypes and stellar templates. If you only want to search over a certain binning mode, this must be specified. If your query is taking too long, or returning too many results, consider filtering on a specific bintype and template. End of explanation """ f = 'nsa.sersic_logmass > 9.5 and nsa.z < 0.1 and emline_sew_ha_6564 > 3' q = Query(search_filter=f) print(q) r = q.run() # Let's see how many spaxels we returned from how many galaxies plateifu = r.getListOf('plateifu') print('spaxels returned', r.totalcount) print('from galaxies', len(set(plateifu))) r.results[0:5] """ Explanation: Global+Local Queries To combine global and local searches, simply combine them together in one filter condition. Let's look for all spaxels that have an H-alpha EW > 3 in galaxies with NSA redshift < 0.1 and a log sersic_mass > 9.5 End of explanation """ config.mode='remote' config.setRelease('MPL-4') f = 'npergood(emline_gflux_ha_6564 > 5) >= 20' q = Query(search_filter=f) r = q.run() r.results """ Explanation: Query Functions Marvin also contains more advanced queries in the form of predefined functions. For example, let's say you want to ask Marvin "Give me all galaxies that have an H-alpha flux > 25 in more than 20% of their good spaxels" you can do so using the query function npergood. npergood accepts as input a standard filter expression condition. E.g., the syntax for the above query would be input as npergood(emline_gflux_ha_6564 > 25) >= 20 The syntax is FUNCTION(Conditional Expression) Operator Value Let's try it... End of explanation """
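As with the spaxel queries earlier in this tutorial, it can be useful to check how many distinct galaxies the npergood results cover. Here is a small follow-up sketch (not part of the original notebook) that reuses the Results object r and the getListOf method shown above:
# Follow-up sketch reusing API calls already demonstrated in this tutorial.
plateifu = r.getListOf('plateifu')
print('total results:', r.totalcount)
print('unique galaxies:', len(set(plateifu)))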
quantumlib/Cirq
docs/tutorials/google/identifying_hardware_changes.ipynb
apache-2.0
try: import cirq except ImportError: !pip install --quiet cirq --pre import matplotlib.pyplot as plt import networkx as nx import numpy as np import cirq import cirq_google as cg """ Explanation: Identifying Hardware Changes <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/tutorials/google/identifying_hardware_changes"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/google/identifying_hardware_changes.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/identifying_hardware_changes.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/google/identifying_hardware_changes.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table> You've run your circuit with Google's Quantum Computing Service and you're getting results that unexpectedly differ from those you saw when you ran your experiment last week. What's the cause of this and what can you do about it? Your experience may be due to changes in the device that have occurred since the most recent maintenance Calibration. Every few days, the QCS devices are calibrated for the highest performance across all of their available qubits and operations. However, in the hours or days since the most recent maintenance calibration, the performance of the device hardware may have changed significantly, affecting your circuit's results. The rest of this tutorial will describe these hardware changes, demonstrate how to collect error metrics for identifying if changes have occurred, and provide some examples of how you can compare your metric results to select the most performant qubits for your circuit. For more further reading on qubit picking methodology, see the Best Practices guide and Qubit Picking with Loschmidt Echoes tutorial. The method presented in the Loschmidt Echoes tutorial is an alternative way to identify hardware changes. Hardware Changes The device hardware changes occur in both the qubits themselves and the control electronics used to drive gates and measure the state of the qubits. As analog devices, both the qubits and control electronics are subject to interactions with their environment that manifest as a meaningful change to the qubits gate or readout fidelity. Quantum processors based on frequency tunable superconducting qubits use a direct current (DC) bias current to set the frequency of the qubits' $|0\rangle$ state to $|1\rangle$ state transition. These DC biases are generated by classical analog control electronics, where resistors and other components can be affected by environmental temperature changes in an interaction called thermal drift. Uncompensated thermal drift results in a change in the qubit's transition frequency, which can cause unintended state transitions in the qubits during circuit execution or incorrect readout of the qubits' state. These manifest as changes to the error rates associated with gate and readout operations. 
Additionally, the qubits may unexpectedly couple to other local energy systems and exchange energy with or lose energy to them. Because a qubit is only able to identify the presence of two levels in the parasitic local system, these interacting states are often referred to as two-level systems (TLS). While the exact physical origin of these states is unknown, defects in the hardware materials are a plausible explanation. It has been observed that interactions with these TLS can result in coherence fluctuations in time and frequency, again causing unintended state transitions or incorrect readouts, affecting error rates. For more information on DC Bias and TLS and how they affect the devices, see arXiv:1809.01043. Qubit Error Metrics There are many Calibration Metrics available to measure gate and readout error rates and see if they have changed. The Visualizing Calibration Metrics tutorial demonstrates how to collect and visualize each of these available metrics. You can apply the comparison methods presented in this tutorial to any such metric, but the examples below focus on the two following metrics: two_qubit_parallel_sqrt_iswap_gate_xeb_pauli_error_per_cycle: This metric captures the estimated probability for the quantum state on two neighboring qubits to depolarize (as if a Pauli gate was applied to either or both qubits) after applying an $\sqrt{i\mathrm{SWAP}}$ gate. This metric includes some coherent error like the error introduced by control hardware. This metric is computed using Cross Entropy Benchmarking (XEB) during maintenance calibration and in this tutorial. parallel_p11_error: This metric estimates the probability for a readout register to correctly measure a $|1\rangle$ state on a qubit that was prepared to be in the $|1\rangle$ state. The Simultaneous Readout experiment used to collect this metric evaluates all of the qubits in parallel/simultaneously. Note: The two-qubit metric uses Pauli error, which has two other multiplicatively-related variants: Average error and Incoherent error. Disclaimer: The data shown in this tutorial is an example and not representative of the QCS in production. Data Collection Setup First, install Cirq and import the necessary packages. Note: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via pip install cirq --pre. End of explanation """ from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook # Set key variables project_id = "your_project_id_here" #@param {type:"string"} processor_id = "your_processor_id_here" #@param {type:"string"} repetitions = 2000 #@param {type:"integer"} # Get device sampler qcs_objects = get_qcs_objects_for_notebook(project_id=project_id, processor_id=processor_id) device = qcs_objects.device sampler = qcs_objects.sampler # Get qubit set qubits = device.qubit_set() # Limit device qubits to only those before row/column `device_limit` device_limit = 10 #@param {type:"integer"} qubits = {qb for qb in qubits if qb.row<device_limit and qb.col<device_limit} # Visualize the qubits on a grid by putting them in a throwaway device object used only for this print statement print(cg.devices.XmonDevice(0,0,0,qubits)) """ Explanation: Next, authorize to use the Quantum Computing Service with a project_id and processor_id, and get a sampler to run your experiments. Set the number of repetitions you'll use for all experiments. Note: You can select a subset of the qubits to shorten the runtime of the experiment. 
Note: You need to input a real QCS project_id and processor_id in the next cell. Otherwise, the code will assume you're running with a simulator, causing issues later. End of explanation """ # Retreive maintenance calibration data. calibration = cg.get_engine_calibration(processor_id=processor_id) # Heatmap the two metrics. two_qubit_gate_metric = "two_qubit_parallel_sqrt_iswap_gate_xeb_pauli_error_per_cycle" #@param {type:"string"} readout_metric = "parallel_p11_error" #@param {type:"string"} # Plot heatmaps with integrated histogram calibration.plot(two_qubit_gate_metric, fig=plt.figure(figsize=(22, 10))) calibration.plot(readout_metric, fig=plt.figure(figsize=(22, 10))) """ Explanation: Maintenance Calibration Data Query for the calibration data with cirq_google.get_engine_calibration, select the two metrics by name from the calibration object, and visualize them with its plot() method. End of explanation """ """Setup for parallel XEB experiment.""" from cirq.experiments import random_quantum_circuit_generation as rqcg from itertools import combinations random_seed = 52 # Generate library of two-qubit XEB circuits. circuit_library = rqcg.generate_library_of_2q_circuits( n_library_circuits=20, two_qubit_gate=cirq.SQRT_ISWAP, random_state=random_seed, ) device_graph = nx.Graph((q1,q2) for (q1,q2) in combinations(qubits, 2) if q1.is_adjacent(q2)) # Generate different possible pairs of qubits, and randomly assign circuit (indices) to then, n_combinations times. combinations_by_layer = rqcg.get_random_combinations_for_device( n_library_circuits=len(circuit_library), n_combinations=10, device_graph=device_graph, random_state=random_seed, ) # Prepare the circuit depths the circuits will be truncated to. cycle_depths = np.arange(3, 100, 20) """ Explanation: You may have already seen this existing maintenance calibration data when you did qubit selection in the first place. Next, you'll run device characterization experiments to collect the same data metrics from the device, to see if their values have changed since the previous calibration. Current Two-Qubit Metric Data with XEB This section is a shortened version of the Parallel XEB tutorial, which runs characterization experiments to collect data for the two_qubit_parallel_sqrt_iswap_gate_xeb_pauli_error_per_cycle metric. First, generate a library of two qubit circuits using the $\sqrt{i\mathrm{SWAP}}$ gate . These circuits will be run in parallel in larger circuits according to combinations_by_layer. End of explanation """ """Collect all data by executing circuits.""" from cirq.experiments.xeb_sampling import sample_2q_xeb_circuits from cirq.experiments.xeb_fitting import benchmark_2q_xeb_fidelities, fit_exponential_decays # Run XEB circuits on the processor. sampled_df = sample_2q_xeb_circuits( sampler=sampler, circuits=circuit_library, cycle_depths=cycle_depths, combinations_by_layer=combinations_by_layer, shuffle=np.random.RandomState(random_seed), repetitions=repetitions, ) # Run XEB circuits on a simulator and fit exponential decays to get fidelities. fidelity_data = benchmark_2q_xeb_fidelities( sampled_df=sampled_df, circuits=circuit_library, cycle_depths=cycle_depths, ) fidelities = fit_exponential_decays(fidelity_data) #Grab (pair, sqrt_iswap_pauli_error_per_cycle) data for all qubit pairs. 
pxeb_results = { pair: (1.0 - fidelity) / (4 / 3) #Scalar to get Pauli error for (_, _, pair), fidelity in fidelities.layer_fid.items() } """ Explanation: Then, run the circuits on the device, combining them into larger circuits and truncating the circuits by length, with cirq.experiments.xeb_sampling.sample_2q_xeb_circuits. Afterwards, run the same circuits on a perfect simulator, and compare them to the sampled results. Finally, fit the collected data to an exponential decay curve to estimate the error rate per appication of each two-qubit $\sqrt{i\mathrm{SWAP}}$ gate. End of explanation """ # Run experiment sq_result = cirq.estimate_parallel_single_qubit_readout_errors(sampler, qubits=qubits, repetitions=repetitions) # Use P11 errors p11_results = sq_result.one_state_errors """ Explanation: Note: The parallel XEB errors are scaled in pxeb_results. This is because the collected fidelities are the estimated depolarization fidelities, not the Pauli error metrics available from the calibration data. See the XEB Theory tutorial for an explanation why, and Calibration Metrics for more information on the difference between these values. Current Readout Metric Data with Simultaneous Readout To evaluate performance changes in the readout registers, collect the Parallel P11 error data for each qubit with the Simultaneous Readout experiment, accessible with estimate_parallel_single_qubit_readout_errors. This function runs the experiment to estimate P00 and P11 errors for each qubit (as opposed to querying for the most recent calibration data). The experiment prepares each qubit in the $|0\rangle$ and $|1\rangle$ states, measures them, and evaluates how often the qubits are measured in the expected state. End of explanation """ from matplotlib.colors import LogNorm # Plot options. You may need to change these if you data shows a lot of the same colors. vmin = 5e-3 vmax = 3e-2 options = {"norm": LogNorm()} format = "0.3f" fig, (ax1,ax2,ax3) = plt.subplots(ncols=3, figsize=(30, 9)) # Calibration two qubit data calibration.heatmap(two_qubit_gate_metric).plot( ax=ax1, title="Calibration", vmin=vmin, vmax=vmax, collection_options=options, annotation_format=format, ) # Current two qubit data cirq.TwoQubitInteractionHeatmap(pxeb_results).plot( ax=ax2, title="Current", vmin=vmin, vmax=vmax, collection_options=options, annotation_format=format, ) # Calculate difference in two-qubit metric twoq_diffs = {} for pair,calibration_err in calibration[two_qubit_gate_metric].items(): # The order of the qubits in the result dictionary keys is sometimes swapped. Eg: (Q1,Q2):0.04 vs (Q2,Q1):0.06 if pair in pxeb_results: characterization_err = pxeb_results[pair] else: characterization_err = pxeb_results[tuple(reversed(pair))] twoq_diffs[pair] = characterization_err - calibration_err[0] # Two qubit difference data cirq.TwoQubitInteractionHeatmap(twoq_diffs).plot( ax=ax3, title='Difference in Two Qubit Metrics', annotation_format=format, ) # Add titles plt.figtext(0.5,0.97, two_qubit_gate_metric.replace("_"," ").title(), ha="center", va="top", fontsize=14) """ Explanation: Heatmap Comparisons For each metric, plot the calibration and collected characterization data side by side, on the same scale. Also plot the difference between the two datasets (on a different scale). Two-Qubit Metric Heatmap Comparison End of explanation """ # Plot options, with different vmin and vmax for readout data. 
vmin = 3e-2
vmax = 1.1e-1
options = {"norm": LogNorm()}
format = "0.3f"
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(30, 9))

# Calibration readout data
calibration.heatmap(readout_metric).plot(
    ax=ax1,
    title="Calibration",
    vmin=vmin,
    vmax=vmax,
    collection_options=options,
    annotation_format=format,
)

# Current readout data
cirq.Heatmap(p11_results).plot(
    ax=ax2,
    title="Current",
    vmin=vmin,
    vmax=vmax,
    collection_options=options,
    annotation_format=format,
)

# Collect difference in readout metrics
readout_diffs = {q[0]: p11_results[q[0]] - err[0] for q, err in calibration[readout_metric].items()}

# Readout difference data
cirq.Heatmap(readout_diffs).plot(
    ax=ax3,
    title='Difference in Readout Metrics',
    annotation_format=format,
)

# Add title
plt.figtext(0.5, 0.97, readout_metric.replace("_", " ").title(), ha="center", va="top", fontsize=14)
"""
Explanation: The large number of zero-or-below values (green and darker colors) in the difference heatmap indicates that the device's two-qubit $\sqrt{i\mathrm{SWAP}}$ gates have improved noticeably across the device. In fact, only a couple of qubit pairs towards the bottom of the device have worsened since the previous calibration. You should try to make use of the qubit pairs $(Q(5,2),Q(5,3))$ and $(Q(5,1),Q(6,1))$, which were previously average but have become the most reliable $\sqrt{i\mathrm{SWAP}}$ gates in the device. Qubit pairs $(Q(6,2),Q(7,2))$, $(Q(7,2),Q(7,3))$ and especially $(Q(6,4),Q(7,4))$ were the worst qubit pairs on the device, but have improved so significantly that they are within an acceptable range of $0.010$ to $0.016$ Pauli error. You may not need to avoid them now, if you were previously. It's important to note that, if you have the option to use a consistently high-reliability qubit or qubit pair instead of one that demonstrates inconsistent performance, you should do so. For example, qubit pairs $(Q(5,1),Q(5,2))$ and $(Q(5,2),Q(6,2))$ have not changed much, are still around $0.010$ Pauli error, and happen to be near the other two good qubit pairs mentioned earlier, making them good candidates for inclusion. Readout Metric Heatmap Comparisons
End of explanation
"""
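One way to act on these comparisons programmatically is sketched below (this is not part of the original tutorial; the readout threshold is an arbitrary example value). It ranks qubit pairs by the freshly measured two-qubit Pauli error stored in pxeb_results and drops pairs whose qubits show poor readout in p11_results:
# Illustrative qubit-pair ranking using the dictionaries built above.
readout_threshold = 0.05  # example cutoff, not a recommended value
ranked_pairs = sorted(pxeb_results.items(), key=lambda item: item[1])
usable_pairs = [
    (pair, err) for pair, err in ranked_pairs
    if all(p11_results[q] < readout_threshold for q in pair)
]
for pair, err in usable_pairs[:5]:
    print(pair, round(err, 4))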
obulpathi/datascience
scikit/Chapter 9/Summary.ipynb
apache-2.0
from sklearn.datasets import load_digits from sklearn.linear_model import LogisticRegression from sklearn.cross_validation import cross_val_score digits = load_digits() X, y = digits.data / 16., digits.target cross_val_score(LogisticRegression(), X, y, cv=5) from sklearn.grid_search import GridSearchCV from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y) grid = GridSearchCV(LogisticRegression(), param_grid={'C': np.logspace(-3, 2, 6)}) grid.fit(X_train, y_train) grid.score(X_test, y_test) """ Explanation: Summary scikit-learn API X : data, 2d numpy array or scipy sparse matrix of shape (n_samples, n_features) y : targets, 1d numpy array of shape (n_samples,) <table> <tr style="border:None; font-size:20px; padding:10px;"><th colspan=2>``model.fit(X_train, [y_train])``</td></tr> <tr style="border:None; font-size:20px; padding:10px;"><th>``model.predict(X_test)``</th><th>``model.transform(X_test)``</th></tr> <tr style="border:None; font-size:20px; padding:10px;"><td>Classification</td><td>Preprocessing</td></tr> <tr style="border:None; font-size:20px; padding:10px;"><td>Regression</td><td>Dimensionality Reduction</td></tr> <tr style="border:None; font-size:20px; padding:10px;"><td>Clustering</td><td>Feature Extraction</td></tr> <tr style="border:None; font-size:20px; padding:10px;"><td>&nbsp;</td><td>Feature selection</td></tr> </table> Model evaluation and parameter selection End of explanation """ from sklearn.pipeline import make_pipeline from sklearn.feature_selection import SelectKBest pipe = make_pipeline(SelectKBest(k=59), LogisticRegression()) pipe.fit(X_train, y_train) pipe.score(X_test, y_test) """ Explanation: Model complexity, overfitting, underfitting Pipelines End of explanation """ cross_val_score(LogisticRegression(C=.01), X, y == 3, cv=5) cross_val_score(LogisticRegression(C=.01), X, y == 3, cv=5, scoring="roc_auc") """ Explanation: Scoring metrics End of explanation """ from sklearn.preprocessing import OneHotEncoder X = np.array([[15.9, 1], # from Tokyo [21.5, 2], # from New York [31.3, 0], # from Paris [25.1, 2], # from New York [63.6, 1], # from Tokyo [14.4, 1], # from Tokyo ]) y = np.array([0, 1, 1, 1, 0, 0]) encoder = OneHotEncoder(categorical_features=[1], sparse=False) pipe = make_pipeline(encoder, LogisticRegression()) pipe.fit(X, y) pipe.score(X, y) """ Explanation: Data Wrangling End of explanation """
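The pipeline and grid search pieces above can also be combined. The sketch below (not from the original notebook; the parameter values are illustrative) grid-searches over both the feature-selection step and the classifier of a pipeline, using the digits train/test split created earlier:
# Illustrative combination of make_pipeline and GridSearchCV (both imported above).
param_grid = {'selectkbest__k': [20, 40, 59],
              'logisticregression__C': [0.01, 1, 100]}
grid_pipe = GridSearchCV(make_pipeline(SelectKBest(), LogisticRegression()),
                         param_grid=param_grid)
grid_pipe.fit(X_train, y_train)
grid_pipe.score(X_test, y_test)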
ES-DOC/esdoc-jupyterhub
notebooks/csiro-bom/cmip6/models/sandbox-1/land.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-1', 'land') """ Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: CSIRO-BOM Source ID: SANDBOX-1 Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:55 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmopshere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. 
Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the soil properties vary continuously with depth? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependancies on snow free albedo calculations End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General describe how drainage is included in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.5. 
Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.11. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.7. 
Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile are varying with time End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.15. Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treament of the anthropogenic carbon pool End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dyanmics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintainence respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the notrogen cycle tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.6. 
Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. 
River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation """
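Every property cell above follows the same pattern: a property has a type (STRING, INTEGER, BOOLEAN or ENUM), a cardinality such as 1.1 (exactly one value) or 0.N (zero or more), and, for ENUMs, a fixed list of valid choices. As a minimal, purely illustrative sketch of that pattern — the helper below is not part of the ES-DOC notebook API, and the choice values are copied from the heat-storage cell above — one might validate a value against its cardinality and choices before handing it to DOC.set_value:
# Hypothetical helper for illustration only; not part of the ES-DOC notebook API.
def check_property(values, cardinality="1.1", valid_choices=None):
    low, high = cardinality.split(".")
    if len(values) < int(low):
        raise ValueError(f"at least {low} value(s) required")
    if high != "N" and len(values) > int(high):
        raise ValueError(f"at most {high} value(s) allowed")
    if valid_choices and any(v not in valid_choices for v in values):
        raise ValueError(f"values must come from {valid_choices}")
    return values

# Example: property 14.5 (heat storage) is an ENUM with cardinality 1.1.
check_property(["Explicit diffusion"], "1.1",
               ["Force-restore", "Explicit diffusion", "Other: [Please specify]"])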
mespe/SolRad
exploration/ozone.ipynb
mit
ozone_daily['site'][ozone_daily['site'].isin([2778, 2783])]
"""
Explanation: Although these two sites are listed in the Location file, they are not found in the 'ozone' data set.
Check out the "MSA name" column in Location.xlsx. I think they are not used for monitoring air quality parameters.
End of explanation
"""
len(ozone_daily['site'].unique())
# this was done once in the beginning to
# save the data set as a ".csv" file.
#ozone_daily.to_csv('daily_ozone_obs_1980_2014.csv', sep = ',')
locations = pd.read_excel('Location.xlsx')
def get_county_site(locations, county = 'Colusa'):
    county_of_interest = (locations.set_index(['County Name', 'Site']).loc[county])
    county_of_interest = county_of_interest.reset_index()
    county_sites = county_of_interest['Site']
    return county_sites
colusa_sites = get_county_site(locations).dropna()
colusa_daily_ozone = ozone_daily[ozone_daily['site'].isin(colusa_sites)]
colusa_daily_ozone = (colusa_daily_ozone.reset_index().
                      drop('index', axis = 1))
# this also was done only once, to save the output as a csv file.
colusa_daily_ozone.to_csv('colusa_daily_ozone_1980_2014.csv', sep = ',')
colusa_daily_ozone.head()
colusa_daily_ozone['site'].unique()
"""
Explanation: The number of unique sites in the Location file is around 2100. However, as you can see below, this number is 485 in the "ozone" data set.
End of explanation
"""
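To make the gap between the roughly 2100 sites in the Location file and the 485 sites with ozone observations concrete, one can list the sites that never appear in the observation data. A small sketch, reusing the locations and ozone_daily frames loaded above (column names taken from the cells shown):
# Sites listed in Location.xlsx that have no daily ozone observations.
monitored = set(ozone_daily['site'].unique())
listed = set(locations['Site'].dropna().unique())
unmonitored = sorted(listed - monitored)
print(f"{len(unmonitored)} of {len(listed)} listed sites have no ozone observations")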
mjirik/pyseg_base
examples/pretrain_model.ipynb
bsd-3-clause
from imcut import pycut
import numpy as np
import scipy.ndimage
import matplotlib.pyplot as plt
from datetime import datetime

def make_data(sz=32, offset=0, sigma=80):
    seeds = np.zeros([sz, sz, sz], dtype=np.int8)
    seeds[offset + 12, offset + 9 : offset + 14, offset + 10] = 1
    seeds[offset + 20, offset + 18 : offset + 21, offset + 12] = 1
    img = np.ones([sz, sz, sz])
    img = img - seeds
    seeds[
        offset + 3 : offset + 15, offset + 2 : offset + 6, offset + 27 : offset + 29
    ] = 2
    img = scipy.ndimage.morphology.distance_transform_edt(img)
    segm = img < 7
    img = (100 * segm + sigma * np.random.random(img.shape)).astype(np.uint8)
    return img, segm, seeds

# make_data()
"""
Explanation: Use a pretrained model
End of explanation
"""
img, seg, seeds = make_data(64, 20)
i = 30
plt.imshow(img[i, :, :], cmap='gray')
"""
Explanation: Get the image data
End of explanation
"""
segparams = {
    # 'method':'graphcut',
    "method": "graphcut",
    "use_boundary_penalties": False,
    "boundary_dilatation_distance": 2,
    "boundary_penalties_weight": 1,
    "modelparams": {
        "type": "gmmsame",
        "fv_type": "intensity",
        # 'fv_extern': fv_function,
        "adaptation": "original_data",
    },
}
gc = pycut.ImageGraphCut(img, segparams=segparams)
gc.set_seeds(seeds)
t0 = datetime.now()
gc.run()
print(f"time consumed={datetime.now()-t0}")
plt.imshow(img[i, :, :], cmap='gray')
plt.contour(gc.segmentation[i,:,:])
plt.show()
mdl_stored_file = "test_model.p"
gc.save(mdl_stored_file)
"""
Explanation: Train a Gaussian mixture model and save it to a file
End of explanation
"""
# forget
gc = None
img, seg, seeds = make_data(56, 18)
gc = pycut.ImageGraphCut(img)
gc.load(mdl_stored_file)
gc.set_seeds(seeds)
t0 = datetime.now()
gc.run(run_fit_model=False)
print(f"time consumed={datetime.now()-t0}")
plt.imshow(img[i, :, :], cmap='gray')
plt.contour(gc.segmentation[i,:,:])
plt.show()
"""
Explanation: Run segmentation faster by loading the model from a file
The advantage grows with the number of seeds.
End of explanation
"""
# forget
gc = None
img, seg, seeds = make_data(56, 18)
gc = pycut.ImageGraphCut(img)
gc.load(mdl_stored_file)
t0 = datetime.now()
gc.run(run_fit_model=False)
print(f"time consumed={datetime.now()-t0}")
plt.imshow(img[i, :, :], cmap='gray')
plt.contour(gc.segmentation[i,:,:])
plt.show()
"""
Explanation: The seeds do not have to be provided if the model is loaded from a file
End of explanation
"""
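To quantify how well the reloaded model does without seeds, the result can be compared against the synthetic ground truth seg returned by make_data. A rough sketch; since the label convention of gc.segmentation (whether 0 or 1 marks the object) is not stated above, both polarities are scored and the better one is reported:
# Dice overlap between the graph-cut result and the synthetic ground truth.
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

score = max(dice(gc.segmentation == 0, seg), dice(gc.segmentation == 1, seg))
print(f"Dice overlap with ground truth: {score:.3f}")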
jbwhit/jupyter-best-practices
notebooks/03-Git-and-Autoreload.ipynb
mit
df = pd.read_csv("../data/coal_prod_cleaned.csv")
df.head()
df.shape
df.columns
qgrid_widget = qgrid.show_grid(
    df[["Year", "Mine_State", "Labor_Hours", "Production_short_tons"]],
    show_toolbar=True,
)
qgrid_widget
df2 = df.groupby('Mine_State').sum()
df3 = df.groupby('Mine_State').sum()
df2.loc['Wyoming', 'Production_short_tons'] = 5.181732e+08
# have to run the next line then restart your kernel
# !cd ../insight; python setup.py develop
%aimport insight.plotting
insight.plotting.plot_prod_vs_hours(df2, color_index=1)
insight.plotting.plot_prod_vs_hours(df3, color_index=0)
def plot_prod_vs_hours(
    df, color_index=0, output_file="../img/production-vs-hours-worked.png"
):
    fig, ax = plt.subplots(figsize=(10, 8))
    sns.regplot(
        df["Labor_Hours"],
        df["Production_short_tons"],
        ax=ax,
        color=sns.color_palette()[color_index],
    )
    ax.set_xlabel("Labor Hours Worked")
    ax.set_ylabel("Total Amount Produced")
    x = ax.set_xlim(-9506023.213266129, 204993853.21326613)
    y = ax.set_ylim(-51476801.43653282, 746280580.4034251)
    fig.tight_layout()
    fig.savefig(output_file)
plot_prod_vs_hours(df2, color_index=0)
plot_prod_vs_hours(df3, color_index=1)
# make a change via qgrid
df3 = qgrid_widget.get_changed_df()
"""
Explanation: QGrid
Interactive pandas dataframes: https://github.com/quantopian/qgrid
End of explanation
"""
qgrid_widget = qgrid.show_grid(
    df2[["Year", "Labor_Hours", "Production_short_tons"]],
    show_toolbar=True,
)
qgrid_widget
"""
Explanation: Github
https://github.com/jbwhit/jupyter-tips-and-tricks/commit/d3f2c0cef4dfd28eb3b9077595f14597a3022b1c?short_path=04303fc#diff-04303fce5e9bb38bcee25d12d9def22e
End of explanation
"""
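Because qgrid_widget.get_changed_df() returns a copy that includes any edits made interactively, a quick way to see exactly which rows were touched is to compare it with the frame that was displayed. A minimal sketch, assuming no rows were added or removed in the grid so that the indexes still align:
# Show rows that differ between the displayed frame and the edited grid copy.
original = df2[["Year", "Labor_Hours", "Production_short_tons"]]
edited = qgrid_widget.get_changed_df()
diff_mask = (original != edited) & ~(original.isna() & edited.isna())
edited[diff_mask.any(axis=1)]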
karlstroetmann/Formal-Languages
Ply/Symbolic-Calculator.ipynb
gpl-2.0
import ply.lex as lex """ Explanation: A Simple Symbolic Calculator This file shows how a simple symbolic calculator can be implemented using Ply. The grammar for the language implemented by this parser is as follows: $$ \begin{array}{lcl} \texttt{stmnt} & \rightarrow & \;\texttt{IDENTIFIER} \;\texttt{':='}\; \texttt{expr}\; \texttt{';'}\ & \mid & \;\texttt{expr}\; \texttt{';'} \[0.2cm] \texttt{expr} & \rightarrow & \;\texttt{expr}\; \texttt{'+'} \; \texttt{product} \ & \mid & \;\texttt{expr}\; \texttt{'-'} \; \texttt{product} \ & \mid & \;\texttt{product} \[0.2cm] \texttt{product} & \rightarrow & \;\texttt{product}\; \texttt{'*'} \;\texttt{factor} \ & \mid & \;\texttt{product}\; \texttt{'/'} \;\texttt{factor} \ & \mid & \;\texttt{factor} \[0.2cm] \texttt{factor} & \rightarrow & \texttt{'('} \; \texttt{expr} \;\texttt{')'} \ & \mid & \;\texttt{NUMBER} \ & \mid & \;\texttt{IDENTIFIER} \end{array} $$ Specification of the Scanner End of explanation """ tokens = [ 'NUMBER', 'IDENTIFIER', 'ASSIGN_OP' ] """ Explanation: There are only three tokens that need to be defined via regular expressions. The other tokens consist only of a single character and can therefore be defined as literals. End of explanation """ def t_NUMBER(t): r'0|[1-9][0-9]*(\.[0-9]+)?([eE][+-]?([1-9][0-9]*))?' t.value = float(t.value) return t """ Explanation: The token NUMBER specifies a fully featured floating point number. End of explanation """ def t_IDENTIFIER(t): r'[a-zA-Z][a-zA-Z0-9_]*' return t """ Explanation: The token IDENTIFIER specifies the name of a variable. End of explanation """ def t_ASSIGN_OP(t): r':=' return t """ Explanation: The token ASSIGN_OP specifies the assignment operator. As this operator consists of two characters, it can't be defined as a literal. End of explanation """ literals = ['+', '-', '*', '/', '(', ')', ';'] """ Explanation: literals is a list operator symbols that consist of a single character. End of explanation """ t_ignore = ' \t' """ Explanation: Blanks and tabulators are ignored. End of explanation """ def t_newline(t): r'\n+' t.lexer.lineno += t.value.count('\n') """ Explanation: Newlines are counted in order to give precise error messages. Otherwise they are ignored. End of explanation """ def t_error(t): print(f"Illegal character '{t.value[0]}' at character number {t.lexer.lexpos} in line {t.lexer.lineno}.") t.lexer.skip(1) __file__ = 'main' """ Explanation: Unkown characters are reported as lexical errors. End of explanation """ lexer = lex.lex() """ Explanation: We generate the lexer. End of explanation """ import ply.yacc as yacc """ Explanation: Specification of the Parser End of explanation """ start = 'stmnt' """ Explanation: The start variable of our grammar is statement. End of explanation """ def p_stmnt_assign(p): "stmnt : IDENTIFIER ASSIGN_OP expr ';'" Names2Values[p[1]] = p[3] def p_stmnt_expr(p): "stmnt : expr ';'" print(p[1]) """ Explanation: There are two grammar rules for stmnts: stmnt : IDENTIFIER ":=" expr ";" | expr ';' ; - If a stmnt is an assignment, the expression on the right hand side of the assignment operator is evaluated and the value is stored in the dictionary Names2Values. The key used in this dictionary is the name of the variable on the left hand side ofthe assignment operator. - If a stmnt is an expression, the expression is evaluated and the result of this evaluation is printed. It is <b>very important</b> that in the grammar rules below the : is surrounded by space characters, for otherwise Ply will throw mysterious error messages at us! 
Below, Names2Values is a dictionary mapping variable names to their values. It will be defined later. End of explanation """ def p_expr_plus(p): "expr : expr '+' prod" p[0] = p[1] + p[3] def p_expr_minus(p): "expr : expr '-' prod" p[0] = p[1] - p[3] def p_expr_prod(p): "expr : prod" p[0] = p[1] """ Explanation: An expr is a sequence of prods that are combined with the operators + and -. The corresponding grammar rules are: expr : expr '+' prod | expr '-' prod | prod ; End of explanation """ def p_prod_mult(p): "prod : prod '*' factor" p[0] = p[1] * p[3] def p_prod_div(p): "prod : prod '/' factor" p[0] = p[1] / p[3] def p_prod_factor(p): "prod : factor" p[0] = p[1] """ Explanation: A prod is a sequence of factors that are combined with the operators * and /. The corresponding grammar rules are: prod : prod '*' factor | prod '/' factor | factor ; End of explanation """ def p_factor_group(p): "factor : '(' expr ')'" p[0] = p[2] def p_factor_number(p): "factor : NUMBER" p[0] = p[1] def p_factor_id(p): "factor : IDENTIFIER" p[0] = Names2Values.get(p[1], float('NaN')) """ Explanation: A factor can is either an expression in parenthesis, a number, or an identifier. factor : '(' expr ')' | NUMBER | IDENTIFIER ; End of explanation """ float('NaN'), float('Inf'), float('Inf') - float('Inf') """ Explanation: The expression float('NaN') stands for an undefined number. End of explanation """ def p_error(p): if p: print(f"Syntax error at character number {p.lexer.lexpos} at token '{p.value}' in line {p.lexer.lineno}.") else: print('Syntax error at end of input.') """ Explanation: The method p_error is called if a syntax error occurs. The argument p is the token that could not be read. If p is None then there is a syntax error at the end of input. End of explanation """ parser = yacc.yacc(write_tables=False, debug=True) """ Explanation: Setting the optional argument write_tables to False <B style="color:red">is required</B> to prevent an obscure bug where the parser generator tries to read an empty parse table. We set debug to True so that the parse tables are dumped into the file parser.out. End of explanation """ !type parser.out !cat parser.out """ Explanation: Let's look at the action table that is generated. End of explanation """ Names2Values = {} """ Explanation: Names2Values is the dictionary that maps variable names to their values. Initially the dictionary is empty as no variables has yet been defined. End of explanation """ def main(): while True: s = input('calc> ') if s == '': break yacc.parse(s) main() """ Explanation: The parser is invoked by calling the method yacc.parse(s) where s is a string that is to be parsed. End of explanation """
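Instead of typing statements into the interactive main() loop, the calculator can also be exercised non-interactively, which is handy for quick checks. A small sketch that runs after the cells above (so that the parser and Names2Values already exist); assignments update Names2Values and bare expressions are printed by p_stmnt_expr:
# Drive the calculator without the input() loop.
for stmnt in ['x := 2 * (3 + 4);', 'x / 7;', 'y;']:
    print(f'calc> {stmnt}')
    yacc.parse(stmnt)      # prints 2.0 for the second statement, nan for the undefined y
print(Names2Values)        # {'x': 14.0}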
google/starthinker
colabs/cm_user_editor.ipynb
apache-2.0
!pip install git+https://github.com/google/starthinker
"""
Explanation: CM360 Bulk User Editor
A tool for rapidly bulk editing Campaign Manager profiles, roles, and sub accounts.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation.
There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed
and CAN BE fully modified, white labeled, and disassembled by your team.
This code was generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes; this only needs to be done once, then click play.
End of explanation
"""
from starthinker.util.configuration import Configuration

CONFIG = Configuration(
    project="",
    client={},
    service={},
    user="/content/user.json",
    verbose=True
)
"""
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
- Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
- If you have user credentials: set the configuration user value to your user credentials JSON.
- If you DO NOT have user credentials: set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
- Set the configuration service value to downloaded service credentials.
End of explanation
"""
FIELDS = {
    'recipe_name':'',  # Name of document to deploy to.
}

print("Parameters Set To: %s" % FIELDS)
"""
Explanation: 3. Enter CM360 Bulk User Editor Recipe Parameters
Add this card to a recipe and save it. Then click Run Now to deploy. Follow the instructions for setup.
Modify the values below for your use case; this can be done multiple times, then click play.
End of explanation
"""
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields

TASKS = [
    {
        'drive':{
            'auth':'user',
            'hour':[
            ],
            'copy':{
                'source':'https://docs.google.com/spreadsheets/d/1Mw4kDJfaWVloyjSayJSkgE8i28Svoj1756fyQtIpmRE/',
                'destination':{'field':{'name':'recipe_name','prefix':'CM User Editor For ','kind':'string','order':1,'description':'Name of document to deploy to.','default':''}}
            }
        }
    }
]

json_set_fields(TASKS, FIELDS)

execute(CONFIG, TASKS, force=True)
"""
Explanation: 4. Execute CM360 Bulk User Editor
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation
"""
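To preview what json_set_fields substitutes into the task structure, one can fill in FIELDS and inspect the copy destination before deploying. A hedged sketch meant to run in place of the final cell above; the recipe name 'Acme' is a made-up value, and the printed result assumes the field's prefix is simply prepended to it, as the field definition suggests:
# Preview the resolved copy destination before deploying.
FIELDS['recipe_name'] = 'Acme'   # hypothetical value, for illustration only
json_set_fields(TASKS, FIELDS)
print(TASKS[0]['drive']['copy']['destination'])   # expected: 'CM User Editor For Acme'
# execute(CONFIG, TASKS, force=True)   # deploy once the destination looks right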
eusebioaguilera/scalablemachinelearning
Lab04/ML_lab4_ctr_student.ipynb
gpl-3.0
labVersion = 'cs190_week4_v_1_3' """ Explanation: Click-Through Rate Prediction Lab This lab covers the steps for creating a click-through rate (CTR) prediction pipeline. You will work with the Criteo Labs dataset that was used for a recent Kaggle competition. This lab will cover: Part 1: Featurize categorical data using one-hot-encoding (OHE) Part 2: Construct an OHE dictionary Part 3: Parse CTR data and generate OHE features Visualization 1: Feature frequency Part 4: CTR prediction and logloss evaluation Visualization 2: ROC curve Part 5: Reduce feature dimension via feature hashing Visualization 3: Hyperparameter heat map Note that, for reference, you can look up the details of the relevant Spark methods in Spark's Python API and the relevant NumPy methods in the NumPy Reference End of explanation """ # Data for manual OHE # Note: the first data point does not include any value for the optional third feature sampleOne = [(0, 'mouse'), (1, 'black')] sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')] sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')] sampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree]) # TODO: Replace <FILL IN> with appropriate code sampleOHEDictManual = {} sampleOHEDictManual[(0,'bear')] = 0 sampleOHEDictManual[(0,'cat')] = 1 sampleOHEDictManual[(0,'mouse')] = 2 sampleOHEDictManual[(1,'black')] = 3 sampleOHEDictManual[(1,'tabby')] = 4 sampleOHEDictManual[(2,'mouse')] = 5 sampleOHEDictManual[(2,'salmon')] = 6 # TEST One-hot-encoding (1a) from test_helper import Test Test.assertEqualsHashed(sampleOHEDictManual[(0,'bear')], 'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c', "incorrect value for sampleOHEDictManual[(0,'bear')]") Test.assertEqualsHashed(sampleOHEDictManual[(0,'cat')], '356a192b7913b04c54574d18c28d46e6395428ab', "incorrect value for sampleOHEDictManual[(0,'cat')]") Test.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')], 'da4b9237bacccdf19c0760cab7aec4a8359010b0', "incorrect value for sampleOHEDictManual[(0,'mouse')]") Test.assertEqualsHashed(sampleOHEDictManual[(1,'black')], '77de68daecd823babbb58edb1c8e14d7106e83bb', "incorrect value for sampleOHEDictManual[(1,'black')]") Test.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')], '1b6453892473a467d07372d45eb05abc2031647a', "incorrect value for sampleOHEDictManual[(1,'tabby')]") Test.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')], 'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4', "incorrect value for sampleOHEDictManual[(2,'mouse')]") Test.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')], 'c1dfd96eea8cc2b62785275bca38ac261256e278', "incorrect value for sampleOHEDictManual[(2,'salmon')]") Test.assertEquals(len(sampleOHEDictManual.keys()), 7, 'incorrect number of keys in sampleOHEDictManual') """ Explanation: Part 1: Featurize categorical data using one-hot-encoding (1a) One-hot-encoding We would like to develop code to convert categorical features to numerical ones, and to build intuition, we will work with a sample unlabeled dataset with three data points, with each data point representing an animal. The first feature indicates the type of animal (bear, cat, mouse); the second feature describes the animal's color (black, tabby); and the third (optional) feature describes what the animal eats (mouse, salmon). In a one-hot-encoding (OHE) scheme, we want to represent each tuple of (featureID, category) via its own binary feature. We can do this in Python by creating a dictionary that maps each tuple to a distinct integer, where the integer corresponds to a binary feature. 
To start, manually enter the entries in the OHE dictionary associated with the sample dataset by mapping the tuples to consecutive integers starting from zero, ordering the tuples first by featureID and next by category. Later in this lab, we'll use OHE dictionaries to transform data points into compact lists of features that can be used in machine learning algorithms.
End of explanation
"""
import numpy as np
from pyspark.mllib.linalg import SparseVector

# TODO: Replace <FILL IN> with appropriate code
aDense = np.array([0., 3., 0., 4.])
aSparse = SparseVector(4, [1, 3], [3., 4.])
bDense = np.array([0., 0., 0., 1.])
bSparse = SparseVector(4, [(3, 1.)])
w = np.array([0.4, 3.1, -1.4, -.5])
print aDense.dot(w)
print aSparse.dot(w)
print bDense.dot(w)
print bSparse.dot(w)

# TEST Sparse Vectors (1b)
Test.assertTrue(isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(isinstance(bSparse, SparseVector), 'bSparse needs to be an instance of SparseVector')
Test.assertTrue(aDense.dot(w) == aSparse.dot(w),
                'dot product of aDense and w should equal dot product of aSparse and w')
Test.assertTrue(bDense.dot(w) == bSparse.dot(w),
                'dot product of bDense and w should equal dot product of bSparse and w')
"""
Explanation: (1b) Sparse vectors
Data points can typically be represented with a small number of non-zero OHE features relative to the total number of features that occur in the dataset. By leveraging this sparsity and using sparse vector representations of OHE data, we can reduce storage and computational burdens. Below are a few sample vectors represented as dense numpy arrays. Use SparseVector to represent them in a sparse fashion, and verify that both the sparse and dense representations yield the same results when computing dot products (we will later use MLlib to train classifiers via gradient descent, and MLlib will need to compute dot products between SparseVectors and dense parameter vectors).
Use SparseVector(size, *args) to create a new sparse vector where size is the length of the vector and args is either a dictionary, a list of (index, value) pairs, or two separate arrays of indices and values (sorted by index).
You'll need to create a sparse vector representation of each dense vector aDense and bDense.
End of explanation """ # Reminder of the sample features # sampleOne = [(0, 'mouse'), (1, 'black')] # sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')] # sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')] # TODO: Replace <FILL IN> with appropriate code sampleOneOHEFeatManual = SparseVector(7, [(2, 1.), (3, 1.)]) sampleTwoOHEFeatManual = SparseVector(7, [(1, 1.), (4, 1.), (5, 1.)]) sampleThreeOHEFeatManual = SparseVector(7, [(0, 1.), (3, 1.), (6, 1.)]) # TEST OHE Features as sparse vectors (1c) Test.assertTrue(isinstance(sampleOneOHEFeatManual, SparseVector), 'sampleOneOHEFeatManual needs to be a SparseVector') Test.assertTrue(isinstance(sampleTwoOHEFeatManual, SparseVector), 'sampleTwoOHEFeatManual needs to be a SparseVector') Test.assertTrue(isinstance(sampleThreeOHEFeatManual, SparseVector), 'sampleThreeOHEFeatManual needs to be a SparseVector') Test.assertEqualsHashed(sampleOneOHEFeatManual, 'ecc00223d141b7bd0913d52377cee2cf5783abd6', 'incorrect value for sampleOneOHEFeatManual') Test.assertEqualsHashed(sampleTwoOHEFeatManual, '26b023f4109e3b8ab32241938e2e9b9e9d62720a', 'incorrect value for sampleTwoOHEFeatManual') Test.assertEqualsHashed(sampleThreeOHEFeatManual, 'c04134fd603ae115395b29dcabe9d0c66fbdc8a7', 'incorrect value for sampleThreeOHEFeatManual') """ Explanation: (1c) OHE features as sparse vectors Now let's see how we can represent the OHE features for points in our sample dataset. Using the mapping defined by the OHE dictionary from Part (1a), manually define OHE features for the three sample data points using SparseVector format. Any feature that occurs in a point should have the value 1.0. For example, the DenseVector for a point with features 2 and 4 would be [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0]. End of explanation """ # TODO: Replace <FILL IN> with appropriate code def oneHotEncoding(rawFeats, OHEDict, numOHEFeats): """Produce a one-hot-encoding from a list of features and an OHE dictionary. Note: You should ensure that the indices used to create a SparseVector are sorted. Args: rawFeats (list of (int, str)): The features corresponding to a single observation. Each feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne) OHEDict (dict): A mapping of (featureID, value) to unique integer. numOHEFeats (int): The total number of unique OHE features (combinations of featureID and value). Returns: SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique identifiers for the (featureID, value) combinations that occur in the observation and with values equal to 1.0. 
""" myList = [OHEDict[f] for f in rawFeats] sortedMyList = sorted(myList) valueList = [1 for f in rawFeats] return SparseVector(numOHEFeats, sortedMyList, valueList) # Calculate the number of features in sampleOHEDictManual numSampleOHEFeats = len(sampleOHEDictManual) # Run oneHotEnoding on sampleOne sampleOneOHEFeat = oneHotEncoding(sampleOne, sampleOHEDictManual, numSampleOHEFeats) print sampleOneOHEFeat # TEST Define an OHE Function (1d) Test.assertTrue(sampleOneOHEFeat == sampleOneOHEFeatManual, 'sampleOneOHEFeat should equal sampleOneOHEFeatManual') Test.assertEquals(sampleOneOHEFeat, SparseVector(7, [2,3], [1.0,1.0]), 'incorrect value for sampleOneOHEFeat') Test.assertEquals(oneHotEncoding([(1, 'black'), (0, 'mouse')], sampleOHEDictManual, numSampleOHEFeats), SparseVector(7, [2,3], [1.0,1.0]), 'incorrect definition for oneHotEncoding') """ Explanation: (1d) Define a OHE function Next we will use the OHE dictionary from Part (1a) to programatically generate OHE features from the original categorical data. First write a function called oneHotEncoding that creates OHE feature vectors in SparseVector format. Then use this function to create OHE features for the first sample data point and verify that the result matches the result from Part (1c). End of explanation """ # TODO: Replace <FILL IN> with appropriate code sampleOHEData = sampleDataRDD.map(lambda x : oneHotEncoding(x, sampleOHEDictManual, len(sampleOHEDictManual))) print sampleOHEData.collect() # TEST Apply OHE to a dataset (1e) sampleOHEDataValues = sampleOHEData.collect() Test.assertTrue(len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements') Test.assertEquals(sampleOHEDataValues[0], SparseVector(7, {2: 1.0, 3: 1.0}), 'incorrect OHE for first sample') Test.assertEquals(sampleOHEDataValues[1], SparseVector(7, {1: 1.0, 4: 1.0, 5: 1.0}), 'incorrect OHE for second sample') Test.assertEquals(sampleOHEDataValues[2], SparseVector(7, {0: 1.0, 3: 1.0, 6: 1.0}), 'incorrect OHE for third sample') """ Explanation: (1e) Apply OHE to a dataset Finally, use the function from Part (1d) to create OHE features for all 3 data points in the sample dataset. End of explanation """ # TODO: Replace <FILL IN> with appropriate code sampleDistinctFeats = (sampleDataRDD. flatMap(lambda x : x).distinct()) # TEST Pair RDD of (featureID, category) (2a) Test.assertEquals(sorted(sampleDistinctFeats.collect()), [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon')], 'incorrect value for sampleDistinctFeats') """ Explanation: Part 2: Construct an OHE dictionary (2a) Pair RDD of (featureID, category) To start, create an RDD of distinct (featureID, category) tuples. In our sample dataset, the 7 items in the resulting RDD are (0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon'). Notably 'black' appears twice in the dataset but only contributes one item to the RDD: (1, 'black'), while 'mouse' also appears twice and contributes two items: (0, 'mouse') and (2, 'mouse'). Use flatMap and distinct. End of explanation """ # TODO: Replace <FILL IN> with appropriate code sampleOHEDict = (sampleDistinctFeats. 
zipWithIndex().collectAsMap()) print sampleOHEDict # TEST OHE Dictionary from distinct features (2b) Test.assertEquals(sorted(sampleOHEDict.keys()), [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon')], 'sampleOHEDict has unexpected keys') Test.assertEquals(sorted(sampleOHEDict.values()), range(7), 'sampleOHEDict has unexpected values') """ Explanation: (2b) OHE Dictionary from distinct features Next, create an RDD of key-value tuples, where each (featureID, category) tuple in sampleDistinctFeats is a key and the values are distinct integers ranging from 0 to (number of keys - 1). Then convert this RDD into a dictionary, which can be done using the collectAsMap action. Note that there is no unique mapping from keys to values, as all we require is that each (featureID, category) key be mapped to a unique integer between 0 and the number of keys. In this exercise, any valid mapping is acceptable. Use zipWithIndex followed by collectAsMap. In our sample dataset, one valid list of key-value tuples is: [((0, 'bear'), 0), ((2, 'salmon'), 1), ((1, 'tabby'), 2), ((2, 'mouse'), 3), ((0, 'mouse'), 4), ((0, 'cat'), 5), ((1, 'black'), 6)]. The dictionary defined in Part (1a) illustrates another valid mapping between keys and integers. End of explanation """ # TODO: Replace <FILL IN> with appropriate code def createOneHotDict(inputData): """Creates a one-hot-encoder dictionary based on the input data. Args: inputData (RDD of lists of (int, str)): An RDD of observations where each observation is made up of a list of (featureID, value) tuples. Returns: dict: A dictionary where the keys are (featureID, value) tuples and map to values that are unique integers. """ return (inputData.flatMap(lambda x : x).distinct().zipWithIndex().collectAsMap()) sampleOHEDictAuto = createOneHotDict(sampleDataRDD) print sampleOHEDictAuto # TEST Automated creation of an OHE dictionary (2c) Test.assertEquals(sorted(sampleOHEDictAuto.keys()), [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon')], 'sampleOHEDictAuto has unexpected keys') Test.assertEquals(sorted(sampleOHEDictAuto.values()), range(7), 'sampleOHEDictAuto has unexpected values') """ Explanation: (2c) Automated creation of an OHE dictionary Now use the code from Parts (2a) and (2b) to write a function that takes an input dataset and outputs an OHE dictionary. Then use this function to create an OHE dictionary for the sample dataset, and verify that it matches the dictionary from Part (2b). End of explanation """ # Run this code to view Criteo's agreement from IPython.lib.display import IFrame IFrame("http://labs.criteo.com/downloads/2014-kaggle-display-advertising-challenge-dataset/", 600, 350) # TODO: Replace <FILL IN> with appropriate code # Just replace <FILL IN> with the url for dac_sample.tar.gz import glob import os.path import tarfile import urllib import urlparse # Paste url, url should end with: dac_sample.tar.gz url = 'http://labs.criteo.com/wp-content/uploads/2015/04/dac_sample.tar.gz' url = url.strip() baseDir = os.path.join('data') inputPath = os.path.join('cs190', 'dac_sample.txt') fileName = os.path.join(baseDir, inputPath) inputDir = os.path.split(fileName)[0] def extractTar(check = False): # Find the zipped archive and extract the dataset tars = glob.glob('dac_sample*.tar.gz*') if check and len(tars) == 0: return False if len(tars) > 0: try: tarFile = tarfile.open(tars[0]) except tarfile.ReadError: if not check: print 'Unable to open tar.gz file. Check your URL.' 
return False tarFile.extract('dac_sample.txt', path=inputDir) print 'Successfully extracted: dac_sample.txt' return True else: print 'You need to retry the download with the correct url.' print ('Alternatively, you can upload the dac_sample.tar.gz file to your Jupyter root ' + 'directory') return False if os.path.isfile(fileName): print 'File is already available. Nothing to do.' elif extractTar(check = True): print 'tar.gz file was already available.' elif not url.endswith('dac_sample.tar.gz'): print 'Check your download url. Are you downloading the Sample dataset?' else: # Download the file and store it in the same directory as this notebook try: urllib.urlretrieve(url, os.path.basename(urlparse.urlsplit(url).path)) except IOError: print 'Unable to download and store: {0}'.format(url) extractTar() import os.path baseDir = os.path.join('data') inputPath = os.path.join('cs190', 'dac_sample.txt') fileName = os.path.join(baseDir, inputPath) if os.path.isfile(fileName): rawData = (sc .textFile(fileName, 2) .map(lambda x: x.replace('\t', ','))) # work with either ',' or '\t' separated data print rawData.take(1) """ Explanation: Part 3: Parse CTR data and generate OHE features Before we can proceed, you'll first need to obtain the data from Criteo. If you have already completed this step in the setup lab, just run the cells below and the data will be loaded into the rawData variable. Below is Criteo's data sharing agreement. After you accept the agreement, you can obtain the download URL by right-clicking on the "Download Sample" button and clicking "Copy link address" or "Copy Link Location", depending on your browser. Paste the URL into the # TODO cell below. The file is 8.4 MB compressed. The script below will download the file to the virtual machine (VM) and then extract the data. If running the cell below does not render a webpage, open the Criteo agreement in a separate browser tab. After you accept the agreement, you can obtain the download URL by right-clicking on the "Download Sample" button and clicking "Copy link address" or "Copy Link Location", depending on your browser. Paste the URL into the # TODO cell below. Note that the download could take a few minutes, depending upon your connection speed. End of explanation """ # TODO: Replace <FILL IN> with appropriate code weights = [.8, .1, .1] seed = 42 # Use randomSplit with weights and seed rawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed) # Cache the data rawTrainData.cache() rawValidationData.cache() rawTestData.cache() nTrain = rawTrainData.count() nVal = rawValidationData.count() nTest = rawTestData.count() print nTrain, nVal, nTest, nTrain + nVal + nTest print rawData.take(1) # TEST Loading and splitting the data (3a) Test.assertTrue(all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]), 'you must cache the split data') Test.assertEquals(nTrain, 79911, 'incorrect value for nTrain') Test.assertEquals(nVal, 10075, 'incorrect value for nVal') Test.assertEquals(nTest, 10014, 'incorrect value for nTest') """ Explanation: (3a) Loading and splitting the data We are now ready to start working with the actual CTR data, and our first task involves splitting it into training, validation, and test sets. Use the randomSplit method with the specified weights and seed to create RDDs storing each of these datasets, and then cache each of these RDDs, as we will be accessing them multiple times in the remainder of this lab. Finally, compute the size of each dataset. 
End of explanation """ # TODO: Replace <FILL IN> with appropriate code def parsePoint(point): """Converts a comma separated string into a list of (featureID, value) tuples. Note: featureIDs should start at 0 and increase to the number of features - 1. Args: point (str): A comma separated string where the first value is the label and the rest are features. Returns: list: A list of (featureID, value) tuples. """ mypoints = point.split(',') return [(i, item) for i, item in enumerate(mypoints[1:])] parsedTrainFeat = rawTrainData.map(parsePoint) numCategories = (parsedTrainFeat .flatMap(lambda x: x) .distinct() .map(lambda x: (x[0], 1)) .reduceByKey(lambda x, y: x + y) .sortByKey() .collect()) print numCategories[2][1] # TEST Extract features (3b) Test.assertEquals(numCategories[2][1], 855, 'incorrect implementation of parsePoint') Test.assertEquals(numCategories[32][1], 4, 'incorrect implementation of parsePoint') """ Explanation: (3b) Extract features We will now parse the raw training data to create an RDD that we can subsequently use to create an OHE dictionary. Note from the take() command in Part (3a) that each raw data point is a string containing several fields separated by some delimiter. For now, we will ignore the first field (which is the 0-1 label), and parse the remaining fields (or raw features). To do this, complete the implemention of the parsePoint function. End of explanation """ # TODO: Replace <FILL IN> with appropriate code ctrOHEDict = createOneHotDict(parsedTrainFeat) numCtrOHEFeats = len(ctrOHEDict.keys()) print numCtrOHEFeats print ctrOHEDict[(0, '')] # TEST Create an OHE dictionary from the dataset (3c) Test.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDict') Test.assertTrue((0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict') """ Explanation: (3c) Create an OHE dictionary from the dataset Note that parsePoint returns a data point as a list of (featureID, category) tuples, which is the same format as the sample dataset studied in Parts 1 and 2 of this lab. Using this observation, create an OHE dictionary using the function implemented in Part (2c). Note that we will assume for simplicity that all features in our CTR dataset are categorical. End of explanation """ from pyspark.mllib.regression import LabeledPoint # TODO: Replace <FILL IN> with appropriate code def parseOHEPoint(point, OHEDict, numOHEFeats): """Obtain the label and feature vector for this raw observation. Note: You must use the function `oneHotEncoding` in this implementation or later portions of this lab may not function as expected. Args: point (str): A comma separated string where the first value is the label and the rest are features. OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer. numOHEFeats (int): The number of unique features in the training dataset. Returns: LabeledPoint: Contains the label for the observation and the one-hot-encoding of the raw features based on the provided OHE dictionary. 
""" parsedPoints = parsePoint(point) label = point.split(',')[0] features = oneHotEncoding(parsedPoints, OHEDict, numOHEFeats) return LabeledPoint(label, features) OHETrainData = rawTrainData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats)) OHETrainData.cache() print OHETrainData.take(1) # Check that oneHotEncoding function was used in parseOHEPoint backupOneHot = oneHotEncoding oneHotEncoding = None withOneHot = False try: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats) except TypeError: withOneHot = True oneHotEncoding = backupOneHot # TEST Apply OHE to the dataset (3d) numNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5)) numNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5)) Test.assertEquals(numNZ, numNZAlt, 'incorrect implementation of parseOHEPoint') Test.assertTrue(withOneHot, 'oneHotEncoding not present in parseOHEPoint') """ Explanation: (3d) Apply OHE to the dataset Now let's use this OHE dictionary by starting with the raw training data and creating an RDD of LabeledPoint objects using OHE features. To do this, complete the implementation of the parseOHEPoint function. Hint: parseOHEPoint is an extension of the parsePoint function from Part (3b) and it uses the oneHotEncoding function from Part (1d). End of explanation """ def bucketFeatByCount(featCount): """Bucket the counts by powers of two.""" for i in range(11): size = 2 ** i if featCount <= size: return size return -1 featCounts = (OHETrainData .flatMap(lambda lp: lp.features.indices) .map(lambda x: (x, 1)) .reduceByKey(lambda x, y: x + y)) featCountsBuckets = (featCounts .map(lambda x: (bucketFeatByCount(x[1]), 1)) .filter(lambda (k, v): k != -1) .reduceByKey(lambda x, y: x + y) .collect()) print featCountsBuckets import matplotlib.pyplot as plt x, y = zip(*featCountsBuckets) x, y = np.log(x), np.log(y) def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999', gridWidth=1.0): """Template for generating the plot layout.""" plt.close() fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white') ax.axes.tick_params(labelcolor='#999999', labelsize='10') for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]: axis.set_ticks_position('none') axis.set_ticks(ticks) axis.label.set_color('#999999') if hideLabels: axis.set_ticklabels([]) plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-') map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right']) return fig, ax # generate layout and plot data fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2)) ax.set_xlabel(r'$\log_e(bucketSize)$'), ax.set_ylabel(r'$\log_e(countInBucket)$') plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75) pass """ Explanation: Visualization 1: Feature frequency We will now visualize the number of times each of the 233,286 OHE features appears in the training data. We first compute the number of times each feature appears, then bucket the features by these counts. The buckets are sized by powers of 2, so the first bucket corresponds to features that appear exactly once ( $ \scriptsize 2^0 $ ), the second to features that appear twice ( $ \scriptsize 2^1 $ ), the third to features that occur between three and four ( $ \scriptsize 2^2 $ ) times, the fifth bucket is five to eight ( $ \scriptsize 2^3 $ ) times and so on. 
The scatter plot below shows the logarithm of the bucket thresholds versus the logarithm of the number of features that have counts that fall in the buckets. End of explanation """ # TODO: Replace <FILL IN> with appropriate code def oneHotEncoding(rawFeats, OHEDict, numOHEFeats): """Produce a one-hot-encoding from a list of features and an OHE dictionary. Note: If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be ignored. Args: rawFeats (list of (int, str)): The features corresponding to a single observation. Each feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne) OHEDict (dict): A mapping of (featureID, value) to unique integer. numOHEFeats (int): The total number of unique OHE features (combinations of featureID and value). Returns: SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique identifiers for the (featureID, value) combinations that occur in the observation and with values equal to 1.0. """ myList = [OHEDict[f] for f in rawFeats if f in OHEDict] sortedMyList = sorted(myList) valueList = [1 for f in rawFeats if f in OHEDict] return SparseVector(numOHEFeats, sortedMyList, valueList) OHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats)) OHEValidationData.cache() print OHEValidationData.take(1) # TEST Handling unseen features (3e) numNZVal = (OHEValidationData .map(lambda lp: len(lp.features.indices)) .sum()) Test.assertEquals(numNZVal, 372080, 'incorrect number of features') """ Explanation: (3e) Handling unseen features We naturally would like to repeat the process from Part (3d), e.g., to compute OHE features for the validation and test datasets. However, we must be careful, as some categorical values will likely appear in new data that did not exist in the training data. To deal with this situation, update the oneHotEncoding() function from Part (1d) to ignore previously unseen categories, and then compute OHE features for the validation data. End of explanation """ from pyspark.mllib.classification import LogisticRegressionWithSGD # fixed hyperparameters numIters = 50 stepSize = 10. regParam = 1e-6 regType = 'l2' includeIntercept = True # TODO: Replace <FILL IN> with appropriate code model0 = LogisticRegressionWithSGD.train(OHETrainData, numIters, stepSize, regParam=regParam, regType=regType, intercept=includeIntercept) sortedWeights = sorted(model0.weights) print sortedWeights[:5], model0.intercept # TEST Logistic regression (4a) Test.assertTrue(np.allclose(model0.intercept, 0.56455084025), 'incorrect value for model0.intercept') Test.assertTrue(np.allclose(sortedWeights[0:5], [-0.45899236853575609, -0.37973707648623956, -0.36996558266753304, -0.36934962879928263, -0.32697945415010637]), 'incorrect value for model0.weights') """ Explanation: Part 4: CTR prediction and logloss evaluation (4a) Logistic regression We are now ready to train our first CTR classifier. A natural classifier to use in this setting is logistic regression, since it models the probability of a click-through event rather than returning a binary response, and when working with rare events, probabilistic predictions are useful. First use LogisticRegressionWithSGD to train a model using OHETrainData with the given hyperparameter configuration. LogisticRegressionWithSGD returns a LogisticRegressionModel. Next, use the LogisticRegressionModel.weights and LogisticRegressionModel.intercept attributes to print out the model's parameters. 
Note that these are the names of the object's attributes and should be called using a syntax like model.weights for a given model. End of explanation """ # TODO: Replace <FILL IN> with appropriate code from math import log def computeLogLoss(p, y): """Calculates the value of log loss for a given probabilty and label. Note: log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it and when p is 1 we need to subtract a small value (epsilon) from it. Args: p (float): A probabilty between 0 and 1. y (int): A label. Takes on the values 0 and 1. Returns: float: The log loss value. """ epsilon = 10e-12 logLoss = None # For undefined values of log(p) if p == 0: p += epsilon elif p == 1: p -= epsilon if y == 1: logLoss = -log(p) else: logLoss = -log(1-p) return logLoss print computeLogLoss(.5, 1) print computeLogLoss(.5, 0) print computeLogLoss(.99, 1) print computeLogLoss(.99, 0) print computeLogLoss(.01, 1) print computeLogLoss(.01, 0) print computeLogLoss(0, 1) print computeLogLoss(1, 1) print computeLogLoss(1, 0) # TEST Log loss (4b) Test.assertTrue(np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)], [0.69314718056, 0.0100503358535, 4.60517018599]), 'computeLogLoss is not correct') Test.assertTrue(np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)], [25.3284360229, 1.00000008275e-11, 25.3284360229]), 'computeLogLoss needs to bound p away from 0 and 1 by epsilon') """ Explanation: (4b) Log loss Throughout this lab, we will use log loss to evaluate the quality of models. Log loss is defined as: $$ \begin{align} \scriptsize \ell_{log}(p, y) = \begin{cases} -\log (p) & \text{if } y = 1 \\ -\log(1-p) & \text{if } y = 0 \end{cases} \end{align} $$ where $ \scriptsize p$ is a probability between 0 and 1 and $ \scriptsize y$ is a label of either 0 or 1. Log loss is a standard evaluation criterion when predicting rare-events such as click-through rate prediction (it is also the criterion used in the Criteo Kaggle competition). Write a function to compute log loss, and evaluate it on some sample inputs. End of explanation """ # TODO: Replace <FILL IN> with appropriate code # Note that our dataset has a very high click-through rate by design # In practice click-through rate can be one to two orders of magnitude lower classOneFracTrain = OHETrainData.map(lambda x : x.label).reduce(lambda x, y: x + y) / OHETrainData.count() print classOneFracTrain logLossTrBase = OHETrainData.map(lambda x : computeLogLoss(classOneFracTrain, x.label)).sum() / OHETrainData.count() print 'Baseline Train Logloss = {0:.3f}\n'.format(logLossTrBase) # TEST Baseline log loss (4c) Test.assertTrue(np.allclose(classOneFracTrain, 0.22717773523), 'incorrect value for classOneFracTrain') Test.assertTrue(np.allclose(logLossTrBase, 0.535844), 'incorrect value for logLossTrBase') """ Explanation: (4c) Baseline log loss Next we will use the function we wrote in Part (4b) to compute the baseline log loss on the training data. A very simple yet natural baseline model is one where we always make the same prediction independent of the given datapoint, setting the predicted value equal to the fraction of training points that correspond to click-through events (i.e., where the label is one). Compute this value (which is simply the mean of the training labels), and then use it to compute the training log loss for the baseline model. The log loss for multiple observations is the mean of the individual log loss values. 
End of explanation """ # TODO: Replace <FILL IN> with appropriate code from math import exp # exp(-t) = e^-t def getP(x, w, intercept): """Calculate the probability for an observation given a set of weights and intercept. Note: We'll bound our raw prediction between 20 and -20 for numerical purposes. Args: x (SparseVector): A vector with values of 1.0 for features that exist in this observation and 0.0 otherwise. w (DenseVector): A vector of weights (betas) for the model. intercept (float): The model's intercept. Returns: float: A probability between 0 and 1. """ rawPrediction = x.dot(w) + intercept # Bound the raw prediction value rawPrediction = min(rawPrediction, 20) rawPrediction = max(rawPrediction, -20) return 1. / (1. + exp(-rawPrediction)) trainingPredictions = OHETrainData.map(lambda x : getP(x.features, model0.weights, model0.intercept)) print trainingPredictions.take(5) # TEST Predicted probability (4d) Test.assertTrue(np.allclose(trainingPredictions.sum(), 18135.4834348), 'incorrect value for trainingPredictions') """ Explanation: (4d) Predicted probability In order to compute the log loss for the model we trained in Part (4a), we need to write code to generate predictions from this model. Write a function that computes the raw linear prediction from this logistic regression model and then passes it through a sigmoid function $ \scriptsize \sigma(t) = (1+ e^{-t})^{-1} $ to return the model's probabilistic prediction. Then compute probabilistic predictions on the training data. Note that when incorporating an intercept into our predictions, we simply add the intercept to the value of the prediction obtained from the weights and features. Alternatively, if the intercept was included as the first weight, we would need to add a corresponding feature to our data where the feature has the value one. This is not the case here. End of explanation """ # TODO: Replace <FILL IN> with appropriate code def evaluateResults(model, data): """Calculates the log loss for the data given the model. Args: model (LogisticRegressionModel): A trained logistic regression model. data (RDD of LabeledPoint): Labels and features for each observation. Returns: float: Log loss for the data. """ return data.map(lambda x : computeLogLoss(getP(x.features, model.weights, model.intercept), x.label)).sum() / data.count() logLossTrLR0 = evaluateResults(model0, OHETrainData) print ('OHE Features Train Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}' .format(logLossTrBase, logLossTrLR0)) # TEST Evaluate the model (4e) Test.assertTrue(np.allclose(logLossTrLR0, 0.456903), 'incorrect value for logLossTrLR0') """ Explanation: (4e) Evaluate the model We are now ready to evaluate the quality of the model we trained in Part (4a). To do this, first write a general function that takes as input a model and data, and outputs the log loss. Then run this function on the OHE training data, and compare the result with the baseline log loss. 
End of explanation """ # TODO: Replace <FILL IN> with appropriate code logLossValBase = OHEValidationData.map(lambda x : computeLogLoss(classOneFracTrain, x.label)).sum() / OHEValidationData.count() logLossValLR0 = evaluateResults(model0, OHEValidationData) print ('OHE Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}' .format(logLossValBase, logLossValLR0)) # TEST Validation log loss (4f) Test.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase') Test.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0') """ Explanation: (4f) Validation log loss Next, following the same logic as in Parts (4c) and 4(e), compute the validation log loss for both the baseline and logistic regression models. Notably, the baseline model for the validation data should still be based on the label fraction from the training dataset. End of explanation """ labelsAndScores = OHEValidationData.map(lambda lp: (lp.label, getP(lp.features, model0.weights, model0.intercept))) labelsAndWeights = labelsAndScores.collect() labelsAndWeights.sort(key=lambda (k, v): v, reverse=True) labelsByWeight = np.array([k for (k, v) in labelsAndWeights]) length = labelsByWeight.size truePositives = labelsByWeight.cumsum() numPositive = truePositives[-1] falsePositives = np.arange(1.0, length + 1, 1.) - truePositives truePositiveRate = truePositives / numPositive falsePositiveRate = falsePositives / (length - numPositive) # Generate layout and plot data fig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1)) ax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05) ax.set_ylabel('True Positive Rate (Sensitivity)') ax.set_xlabel('False Positive Rate (1 - Specificity)') plt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.) plt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model pass """ Explanation: Visualization 2: ROC curve We will now visualize how well the model predicts our target. To do this we generate a plot of the ROC curve. The ROC curve shows us the trade-off between the false positive rate and true positive rate, as we liberalize the threshold required to predict a positive outcome. A random model is represented by the dashed line. End of explanation """ from collections import defaultdict import hashlib def hashFunction(numBuckets, rawFeats, printMapping=False): """Calculate a feature dictionary for an observation's features based on hashing. Note: Use printMapping=True for debug purposes and to better understand how the hashing works. Args: numBuckets (int): Number of buckets to use as features. rawFeats (list of (int, str)): A list of features for an observation. Represented as (featureID, value) tuples. printMapping (bool, optional): If true, the mappings of featureString to index will be printed. Returns: dict of int to float: The keys will be integers which represent the buckets that the features have been hashed to. The value for a given key will contain the count of the (featureID, value) tuples that have hashed to that key. 
""" mapping = {} for ind, category in rawFeats: featureString = category + str(ind) mapping[featureString] = int(int(hashlib.md5(featureString).hexdigest(), 16) % numBuckets) if(printMapping): print mapping sparseFeatures = defaultdict(float) for bucket in mapping.values(): sparseFeatures[bucket] += 1.0 return dict(sparseFeatures) # Reminder of the sample values: # sampleOne = [(0, 'mouse'), (1, 'black')] # sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')] # sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')] # TODO: Replace <FILL IN> with appropriate code # Use four buckets sampOneFourBuckets = hashFunction(4, sampleOne, True) sampTwoFourBuckets = hashFunction(4, sampleTwo, True) sampThreeFourBuckets = hashFunction(4, sampleThree, True) # Use one hundred buckets sampOneHundredBuckets = hashFunction(100, sampleOne, True) sampTwoHundredBuckets = hashFunction(100, sampleTwo, True) sampThreeHundredBuckets = hashFunction(100, sampleThree, True) print '\t\t 4 Buckets \t\t\t 100 Buckets' print 'SampleOne:\t {0}\t\t {1}'.format(sampOneFourBuckets, sampOneHundredBuckets) print 'SampleTwo:\t {0}\t\t {1}'.format(sampTwoFourBuckets, sampTwoHundredBuckets) print 'SampleThree:\t {0}\t {1}'.format(sampThreeFourBuckets, sampThreeHundredBuckets) # TEST Hash function (5a) Test.assertEquals(sampOneFourBuckets, {2: 1.0, 3: 1.0}, 'incorrect value for sampOneFourBuckets') Test.assertEquals(sampThreeHundredBuckets, {72: 1.0, 5: 1.0, 14: 1.0}, 'incorrect value for sampThreeHundredBuckets') """ Explanation: Part 5: Reduce feature dimension via feature hashing (5a) Hash function As we just saw, using a one-hot-encoding featurization can yield a model with good statistical accuracy. However, the number of distinct categories across all features is quite large -- recall that we observed 233K categories in the training data in Part (3c). Moreover, the full Kaggle training dataset includes more than 33M distinct categories, and the Kaggle dataset itself is just a small subset of Criteo's labeled data. Hence, featurizing via a one-hot-encoding representation would lead to a very large feature vector. To reduce the dimensionality of the feature space, we will use feature hashing. Below is the hash function that we will use for this part of the lab. We will first use this hash function with the three sample data points from Part (1a) to gain some intuition. Specifically, run code to hash the three sample points using two different values for numBuckets and observe the resulting hashed feature dictionaries. End of explanation """ # TODO: Replace <FILL IN> with appropriate code def parseHashPoint(point, numBuckets): """Create a LabeledPoint for this observation using hashing. Args: point (str): A comma separated string where the first value is the label and the rest are features. numBuckets: The number of buckets to hash to. Returns: LabeledPoint: A LabeledPoint with a label (0.0 or 1.0) and a SparseVector of hashed features. 
""" parsedPoints = parsePoint(point) label = point.split(',')[0] features = hashFunction(numBuckets, parsedPoints, printMapping=False) return LabeledPoint(label, SparseVector(numBuckets, features)) numBucketsCTR = 2 ** 15 hashTrainData = rawTrainData.map(lambda x : parseHashPoint(x, numBucketsCTR)) hashTrainData.cache() hashValidationData = rawValidationData.map(lambda x : parseHashPoint(x, numBucketsCTR)) hashValidationData.cache() hashTestData = rawTestData.map(lambda x : parseHashPoint(x, numBucketsCTR)) hashTestData.cache() print hashTrainData.take(1) # TEST Creating hashed features (5b) hashTrainDataFeatureSum = sum(hashTrainData .map(lambda lp: len(lp.features.indices)) .take(20)) hashTrainDataLabelSum = sum(hashTrainData .map(lambda lp: lp.label) .take(100)) hashValidationDataFeatureSum = sum(hashValidationData .map(lambda lp: len(lp.features.indices)) .take(20)) hashValidationDataLabelSum = sum(hashValidationData .map(lambda lp: lp.label) .take(100)) hashTestDataFeatureSum = sum(hashTestData .map(lambda lp: len(lp.features.indices)) .take(20)) hashTestDataLabelSum = sum(hashTestData .map(lambda lp: lp.label) .take(100)) Test.assertEquals(hashTrainDataFeatureSum, 772, 'incorrect number of features in hashTrainData') Test.assertEquals(hashTrainDataLabelSum, 24.0, 'incorrect labels in hashTrainData') Test.assertEquals(hashValidationDataFeatureSum, 776, 'incorrect number of features in hashValidationData') Test.assertEquals(hashValidationDataLabelSum, 16.0, 'incorrect labels in hashValidationData') Test.assertEquals(hashTestDataFeatureSum, 774, 'incorrect number of features in hashTestData') Test.assertEquals(hashTestDataLabelSum, 23.0, 'incorrect labels in hashTestData') """ Explanation: (5b) Creating hashed features Next we will use this hash function to create hashed features for our CTR datasets. First write a function that uses the hash function from Part (5a) with numBuckets = $ \scriptsize 2^{15} \approx 33K $ to create a LabeledPoint with hashed features stored as a SparseVector. Then use this function to create new training, validation and test datasets with hashed features. Hint: parsedHashPoint is similar to parseOHEPoint from Part (3d). End of explanation """ # TODO: Replace <FILL IN> with appropriate code def computeSparsity(data, d, n): """Calculates the average sparsity for the features in an RDD of LabeledPoints. Args: data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation. d (int): The total number of features. n (int): The number of observations in the RDD. Returns: float: The average of the ratio of features in a point to total features. """ return float(data.map(lambda x: len(x.features.indices)).sum()) / d / n averageSparsityHash = computeSparsity(hashTrainData, numBucketsCTR, nTrain) averageSparsityOHE = computeSparsity(OHETrainData, numCtrOHEFeats, nTrain) print 'Average OHE Sparsity: {0:.7e}'.format(averageSparsityOHE) print 'Average Hash Sparsity: {0:.7e}'.format(averageSparsityHash) # TEST Sparsity (5c) Test.assertTrue(np.allclose(averageSparsityOHE, 1.6717677e-04), 'incorrect value for averageSparsityOHE') Test.assertTrue(np.allclose(averageSparsityHash, 1.1805561e-03), 'incorrect value for averageSparsityHash') """ Explanation: (5c) Sparsity Since we have 33K hashed features versus 233K OHE features, we should expect OHE features to be sparser. Verify this hypothesis by computing the average sparsity of the OHE and the hashed training datasets. 
Note that if you have a SparseVector named sparse, calling len(sparse) returns the total number of features, not the number features with entries. SparseVector objects have the attributes indices and values that contain information about which features are nonzero. Continuing with our example, these can be accessed using sparse.indices and sparse.values, respectively. End of explanation """ numIters = 500 regType = 'l2' includeIntercept = True # Initialize variables using values from initial model training bestModel = None bestLogLoss = 1e10 # TODO: Replace <FILL IN> with appropriate code stepSizes = [1, 10] regParams = [1e-6, 1e-3] for stepSize in stepSizes: for regParam in regParams: model = (LogisticRegressionWithSGD .train(hashTrainData, numIters, stepSize, regParam=regParam, regType=regType, intercept=includeIntercept)) logLossVa = evaluateResults(model, hashValidationData) print ('\tstepSize = {0:.1f}, regParam = {1:.0e}: logloss = {2:.3f}' .format(stepSize, regParam, logLossVa)) if (logLossVa < bestLogLoss): bestModel = model bestLogLoss = logLossVa print ('Hashed Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}' .format(logLossValBase, bestLogLoss)) # TEST Logistic model with hashed features (5d) Test.assertTrue(np.allclose(bestLogLoss, 0.4481683608), 'incorrect value for bestLogLoss') """ Explanation: (5d) Logistic model with hashed features Now let's train a logistic regression model using the hashed features. Run a grid search to find suitable hyperparameters for the hashed features, evaluating via log loss on the validation data. Note: This may take a few minutes to run. Use 1 and 10 for stepSizes and 1e-6 and 1e-3 for regParams. End of explanation """ from matplotlib.colors import LinearSegmentedColormap # Saved parameters and results. Eliminate the time required to run 36 models stepSizes = [3, 6, 9, 12, 15, 18] regParams = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2] logLoss = np.array([[ 0.45808431, 0.45808493, 0.45809113, 0.45815333, 0.45879221, 0.46556321], [ 0.45188196, 0.45188306, 0.4518941, 0.4520051, 0.45316284, 0.46396068], [ 0.44886478, 0.44886613, 0.44887974, 0.44902096, 0.4505614, 0.46371153], [ 0.44706645, 0.4470698, 0.44708102, 0.44724251, 0.44905525, 0.46366507], [ 0.44588848, 0.44589365, 0.44590568, 0.44606631, 0.44807106, 0.46365589], [ 0.44508948, 0.44509474, 0.44510274, 0.44525007, 0.44738317, 0.46365405]]) numRows, numCols = len(stepSizes), len(regParams) logLoss = np.array(logLoss) logLoss.shape = (numRows, numCols) fig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7), hideLabels=True, gridWidth=0.) ax.set_xticklabels(regParams), ax.set_yticklabels(stepSizes) ax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Step Size') colors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2) image = plt.imshow(logLoss,interpolation='nearest', aspect='auto', cmap = colors) pass """ Explanation: Visualization 3: Hyperparameter heat map We will now perform a visualization of an extensive hyperparameter search. Specifically, we will create a heat map where the brighter colors correspond to lower values of logLoss. The search was run using six step sizes and six values for regularization, which required the training of thirty-six separate models. We have included the results below, but omitted the actual search to save time. 
End of explanation """ # TODO: Replace <FILL IN> with appropriate code # Log loss for the best model from (5d) logLossTest = evaluateResults(bestModel, hashTestData) # Log loss for the baseline model logLossTestBaseline = hashTestData.map(lambda x: computeLogLoss(classOneFracTrain, x.label)).sum() / hashTestData.count() print ('Hashed Features Test Log Loss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}' .format(logLossTestBaseline, logLossTest)) # TEST Evaluate on the test set (5e) Test.assertTrue(np.allclose(logLossTestBaseline, 0.537438), 'incorrect value for logLossTestBaseline') Test.assertTrue(np.allclose(logLossTest, 0.455616931), 'incorrect value for logLossTest') """ Explanation: (5e) Evaluate on the test set Finally, evaluate the best model from Part (5d) on the test set. Compare the resulting log loss with the baseline log loss on the test set, which can be computed in the same way that the validation log loss was computed in Part (4f). End of explanation """
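# An optional extra that is not part of the original lab or its autograded tests: a hedged
# sketch that reuses the ROC-curve code from Visualization 2 on the hashed-feature model.
# It assumes `bestModel`, `hashTestData`, `getP`, and `preparePlot` from the cells above
# are still defined in this session.
labelsAndScoresHash = hashTestData.map(lambda lp: (lp.label,
                                                   getP(lp.features, bestModel.weights, bestModel.intercept)))
labelsAndWeightsHash = labelsAndScoresHash.collect()
labelsAndWeightsHash.sort(key=lambda (k, v): v, reverse=True)
labelsByWeightHash = np.array([k for (k, v) in labelsAndWeightsHash])

lengthHash = labelsByWeightHash.size
truePositivesHash = labelsByWeightHash.cumsum()
numPositiveHash = truePositivesHash[-1]
falsePositivesHash = np.arange(1.0, lengthHash + 1, 1.) - truePositivesHash

truePositiveRateHash = truePositivesHash / numPositiveHash
falsePositiveRateHash = falsePositivesHash / (lengthHash - numPositiveHash)

# Generate layout and plot data, mirroring Visualization 2
fig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1))
ax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05)
ax.set_ylabel('True Positive Rate (Sensitivity)')
ax.set_xlabel('False Positive Rate (1 - Specificity)')
plt.plot(falsePositiveRateHash, truePositiveRateHash, color='#8cbfd0', linestyle='-', linewidth=3.)
plt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.)  # Random-guess baseline
pass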
PLOS/allofplos
allofplos/allofplos_basics.ipynb
mit
import datetime from allofplos.plos_regex import (validate_doi, show_invalid_dois, find_valid_dois) from allofplos.samples.corpus_analysis import (get_random_list_of_dois, get_all_local_dois, get_all_plos_dois) from allofplos.corpus.plos_corpus import (get_uncorrected_proofs, get_all_solr_dois) from allofplos import Article """ Explanation: Examples of basic allofplos functions End of explanation """ example_dois = get_random_list_of_dois(count=10) example_doi = example_dois[0] article = Article(example_doi) example_file = article.filepath example_url = article.url print("Three ways to represent an article\nArticle as DOI: {}\nArticle as local file: {}\nArticle as url: {}" \ .format(example_doi, example_file, example_url)) example_corrections_dois = ['10.1371/journal.pone.0166537', '10.1371/journal.ppat.1005301', '10.1371/journal.pone.0100397'] example_retractions_dois = ['10.1371/journal.pone.0180272', '10.1371/journal.pone.0155388', '10.1371/journal.pone.0102411'] example_vor_doi = '10.1371/journal.ppat.1006307' example_uncorrected_proofs = get_uncorrected_proofs() """ Explanation: Get example DOIs: get_random_list_of_dois() End of explanation """ validate_doi('10.1371/journal.pbio.2000797') validate_doi('10.1371/journal.pone.12345678') # too many trailing digits doi_list = ['10.1371/journal.pbio.2000797', '10.1371/journal.pone.12345678', '10.1371/journal.pmed.1234567'] show_invalid_dois(doi_list) """ Explanation: Validate PLOS DOI format: validate.doi(string), show_invalid_dois(list) End of explanation """ article = Article('10.1371/journal.pbio.2000797') # working DOI article.check_if_doi_resolves() article = Article('10.1371/annotation/b8b66a84-4919-4a3e-ba3e-bb11f3853755') # working DOI article.check_if_doi_resolves() article = Article('10.1371/journal.pone.1111111') # valid DOI structure, but article doesn't exist article.check_if_doi_resolves() """ Explanation: Check if a DOI resolves correctly: article.check_if_doi_resolves() End of explanation """ article = Article(next(iter(example_uncorrected_proofs))) article.proof article = Article(example_vor_doi) article.proof """ Explanation: Check if uncorrected proof: article.proof End of explanation """ find_valid_dois("ever seen 10.1371/journal.pbio.2000797, it's great! or maybe 10.1371/journal.pone.1234567?") """ Explanation: Find PLOS DOIs in a string: find_valid_dois(string) End of explanation """ # returns a datetime object article = Article(example_doi) article.pubdate # datetime object can be transformed into any string format article = Article(example_doi) dates = article.get_dates(string_=True, string_format='%Y-%b-%d') print(dates['epub']) """ Explanation: Get article pubdate: article.pubdate End of explanation """ article = Article(example_doi) article.authors article = Article(example_corrections_dois[0]) article.type_ article = Article(example_retractions_dois[0]) article.type_ """ Explanation: Check (JATS) article type of article file: article.type_ End of explanation """ article = Article(example_corrections_dois[0]) article.related_dois article = Article(example_retractions_dois[0]) article.related_dois """ Explanation: Get related DOIs: article.related_dois For corrections and retractions, get the DOI(s) of the PLOS articles being retracted or corrected. 
End of explanation """ solr_dois = get_all_solr_dois() print(len(solr_dois), "articles indexed on Solr.") """ Explanation: Working with many articles at once Get list of every article DOI indexed on the PLOS search API, Solr: get_all_solr_dois() End of explanation """ all_articles = get_all_local_dois() print(len(all_articles), "articles on local computer.") """ Explanation: Get list of every PLOS article you have downloaded: get_all_local_dois() End of explanation """ plos_articles = get_all_plos_dois() download_updated_xml('allofplos_xml/journal.pcbi.0030158.xml') """ Explanation: Combine local and solr articles: get_all_plos_dois() End of explanation """
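# A small follow-up sketch that is not in the original notebook: compare the Solr index
# with the local corpus using plain set arithmetic. It assumes `solr_dois` and
# `all_articles` from the cells above are still in memory.
solr_set = set(solr_dois)
local_set = set(all_articles)

missing_locally = solr_set - local_set   # indexed on Solr, but not downloaded yet
extra_locally = local_set - solr_set     # downloaded locally, but not (or no longer) on Solr

print("{} DOIs indexed on Solr but missing from the local corpus.".format(len(missing_locally)))
print("{} local DOIs not found in the Solr index.".format(len(extra_locally)))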
wiki-ai/editquality
ipython/reverted_detection_demo.ipynb
mit
# Magical ipython notebook stuff puts the result of this command into a variable revids_f = !wget http://quarry.wmflabs.org/run/65415/output/0/tsv?download=true -qO- revids = [int(line) for line in revids_f[1:]] len(revids) """ Explanation: Basic damage detection in Wikipedia This notebook demonstrates the basic contruction of a vandalism classification system using the revscoring library that we have developed specifically for classification models of MediaWiki stuff. The basic process that we'll follow is this: Gather example of human judgement applied to Wikipedia edits. In this case, we'll take advantage of reverts. Split the data into a training and testing set Training the machine learning model Testing the machine learning model And then we'll have some fun applying the model to some edits using RCStream. The following diagram gives a good sense for the whole process of training and evaluating a model. <img style="text-align: center;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/09/Supervised_machine_learning_in_a_nutshell.svg/640px-Supervised_machine_learning_in_a_nutshell.svg.png" /> Part 1: Getting labeled observations <img style="float: right; margin: 1ex;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/Machine_learning_nutshell_--_Gather_labeled_observations.svg/300px-Machine_learning_nutshell_--_Gather_labeled_observations.svg.png" /> Regretfully, running SQL queries isn't something we can do directly from the notebook yet. So, we'll use Quarry to generate a nice random sample of edits. 20,000 observations should do just fine. Here's the query I want to run: SQL USE enwiki_p; SELECT rev_id FROM revision WHERE rev_timestamp BETWEEN "20150201" AND "20160201" ORDER BY RAND() LIMIT 20000; See http://quarry.wmflabs.org/query/7530. By clicking around the UI, I can see that this URL will download my tab-separated file: http://quarry.wmflabs.org/run/65415/output/0/tsv?download=true End of explanation """ import sys, traceback import mwreverts.api import mwapi # We'll use the mwreverts API check. In order to do that, we need an API session session = mwapi.Session("https://en.wikipedia.org", user_agent="Revert detection demo <ahalfaker@wikimedia.org>") # For each revision, find out if it was "reverted" and label it so. rev_reverteds = [] for rev_id in revids[:20]: # NOTE: Limiting to the first 20!!!! try: _, reverted, reverted_to = mwreverts.api.check( session, rev_id, radius=5, # most reverts within 5 edits window=48*60*60, # 2 days rvprop={'user', 'ids'}) # Some properties we'll make use of except (RuntimeError, KeyError) as e: sys.stderr.write(str(e)) continue if reverted is not None: reverted_doc = [r for r in reverted.reverteds if r['revid'] == rev_id][0] if 'user' not in reverted_doc or 'user' not in reverted.reverting: continue # self-reverts self_revert = \ reverted_doc['user'] == reverted.reverting['user'] # revisions that are reverted back to by others reverted_back_to = \ reverted_to is not None and \ 'user' in reverted_to.reverting and \ reverted_doc['user'] != \ reverted_to.reverting['user'] # If we are reverted, not by self or reverted back to by someone else, # then, let's assume it was damaging. damaging_reverted = not (self_revert or reverted_back_to) else: damaging_reverted = False rev_reverteds.append((rev_id, damaging_reverted)) sys.stderr.write("r" if damaging_reverted else ".") """ Explanation: OK. Now that we have a set of revisions, we need to label them. In this case, we're going to label them as reverted/not. 
We want to exclude a few different types of reverts -- e.g. when a user reverts themself or when an edit is reverted back to by someone else. For this, we'll use the mwreverts and mwapi libraries. End of explanation """ rev_reverteds_f = !bzcat ../datasets/demo/enwiki.rev_reverted.20k_2015.tsv.bz2 rev_reverteds = [line.strip().split("\t") for line in rev_reverteds_f[1:]] rev_reverteds = [(int(rev_id), reverted == "True") for rev_id, reverted in rev_reverteds] len(rev_reverteds) """ Explanation: Eeek! This takes too long. You get the idea. So, I uploaded dataset that has already been labeled here @ ../datasets/demo/enwiki.rev_reverted.20k_2015.tsv.bz2 End of explanation """ train_set = rev_reverteds[:15000] test_set = rev_reverteds[15000:] print("training:", len(train_set)) print("testing:", len(test_set)) """ Explanation: OK. It looks like we got an error when trying to extract the reverted status of ~132 edits, which is an acceptable loss. Now just to make sure we haven't gone crazy, let's check some of the reverted edits: https://en.wikipedia.org/wiki/?diff=695071713 (section blanking) https://en.wikipedia.org/wiki/?diff=667375206 (unexplained addition of nonsense) https://en.wikipedia.org/wiki/?diff=670204366 (vandalism "I don't know") https://en.wikipedia.org/wiki/?diff=680329354 (adds non-existent category) https://en.wikipedia.org/wiki/?diff=668682186 (test edit -- removes punctuation) https://en.wikipedia.org/wiki/?diff=666882037 (adds spamlink) https://en.wikipedia.org/wiki/?diff=663302354 (adds nonsense special char) https://en.wikipedia.org/wiki/?diff=675803278 (unconstructive link changes) https://en.wikipedia.org/wiki/?diff=680203994 (vandalism -- "Pepe meme") https://en.wikipedia.org/wiki/?diff=656734057 ("JELENAS BOOTY UNDSO") OK. Looks like we are doing pretty good. :) Part 2: Split the data into a training and testing set <img style="float: right; margin: 1ex;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/88/Machine_learning_nutshell_--_Split_into_train-test_set.svg/320px-Machine_learning_nutshell_--_Split_into_train-test_set.svg.png" /> Before we move on with training, it's important that we hold back some of the data for testing later. If we train on the same data we'll test with, we risk overfitting and not noticing! In this section, we'll both split the training and testing set and gather prective features for each of the labeled observations. End of explanation """ from revscoring.features import wikitext, revision_oriented, temporal from revscoring.languages import english features = [ # Catches long key mashes like kkkkkkkkkkkk wikitext.revision.diff.longest_repeated_char_added, # Measures the size of the change in added words wikitext.revision.diff.words_added, # Measures the size of the change in removed words wikitext.revision.diff.words_removed, # Measures the proportional change in "badwords" english.badwords.revision.diff.match_prop_delta_sum, # Measures the proportional change in "informals" english.informals.revision.diff.match_prop_delta_sum, # Measures the proportional change meaningful words english.stopwords.revision.diff.non_stopword_prop_delta_sum, # Is the user anonymous revision_oriented.revision.user.is_anon, # Is the user a bot or a sysop revision_oriented.revision.user.in_group({'bot', 'sysop'}), # How long ago did the user register? temporal.revision.user.seconds_since_registration ] """ Explanation: OK. In order to train the machine learning model, we'll need to give it a source of signal. This is where "features" come into play. 
A feature represents a simple numerical statistic that we can extract from our observations that we think will be predictive of our outcome. Luckily, revscoring provides a whole suite of features that work well for damage detection. In this case, we'll be looking at features of the edit diff. End of explanation """ from revscoring.extractors import api api_extractor = api.Extractor(session) revisions = [695071713, 667375206] for rev_id in revisions: print("https://en.wikipedia.org/wiki/?diff={0}".format(rev_id)) print(list(api_extractor.extract(rev_id, features))) # Now for the whole set! training_features_reverted = [] for rev_id, reverted in train_set[:20]: try: feature_values = list(api_extractor.extract(rev_id, features)) observation = {"rev_id": rev_id, "cache": feature_values, "reverted": reverted} except RuntimeError as e: sys.stderr.write(str(e)) continue sys.stderr.write(".") training_features_reverted.append(observation) # Uncomment to regenerate the observations file. #import bz2 #from revscoring.utilities.util import dump_observation # #f = bz2.open("../datasets/demo/enwiki.features_reverted.training.20k_2015.json.bz2", "wt") #for observation in training_features_reverted: # dump_observation(observation, f) #f.close() """ Explanation: Now, we'll need to turn to revscorings feature extractor to help us get us feature values for each revision. End of explanation """ from revscoring.utilities.util import read_observations training_features_reverted_f = !bzcat ../datasets/demo/enwiki.features_reverted.training.20k_2015.json.bz2 training_features_reverted = list(read_observations(training_features_reverted_f)) len(training_features_reverted) """ Explanation: Eeek! Again this takes too long, so again, I uploaded a dataset with features already extracted @ ../datasets/demo/enwiki.features_reverted.training.20k_2015.tsv.bz2 End of explanation """ from revscoring.scoring.models import GradientBoosting is_reverted = GradientBoosting(features, labels=[True, False], version="live demo!", learning_rate=0.01, max_features="log2", n_estimators=700, max_depth=5, population_rates={False: 0.5, True: 0.5}, scale=True, center=True) training_unpacked = [(o["cache"], o["reverted"]) for o in training_features_reverted] is_reverted.train(training_unpacked) """ Explanation: Part 3: Training the model <img style="float: right; margin: 1ex;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/7/7a/Machine_learning_nutshell_--_Train_a_machine_learning_model.svg/320px-Machine_learning_nutshell_--_Train_a_machine_learning_model.svg.png" /> Now that we have a set of features extracted for our training set, it's time to train a model. revscoring provides a set of different classifier algorithms. From past experience, I know a gradient boosting classifier works well, so we'll use that. 
End of explanation """ reverted_obs = [rev_id for rev_id, reverted in test_set if reverted] non_reverted_obs = [rev_id for rev_id, reverted in test_set if not reverted] for rev_id in reverted_obs[:10]: feature_values = list(api_extractor.extract(rev_id, features)) score = is_reverted.score(feature_values) print(True, "https://en.wikipedia.org/wiki/?diff=" + str(rev_id), score['prediction'], round(score['probability'][True], 2)) for rev_id in non_reverted_obs[:10]: feature_values = list(api_extractor.extract(rev_id, features)) score = is_reverted.score(feature_values) print(False, "https://en.wikipedia.org/wiki/?diff=" + str(rev_id), score['prediction'], round(score['probability'][True], 2)) """ Explanation: We now have a trained model that we can play around with. Let's try a few edits from our test set. End of explanation """ testing_features_reverted_f = !bzcat ../datasets/demo/enwiki.features_reverted.testing.20k_2015.json.bz2 testing_features_reverted = list(read_observations(testing_features_reverted_f)) testing_unpacked = [(o["cache"], o["reverted"]) for o in testing_features_reverted] len(testing_unpacked) """ Explanation: Part 4: Testing the model So, the above analysis can help give us a sense for whether the model is working or not, but it's hard to standardize between models. So, we can apply some metrics that are specially crafted for machine learning models. <center> <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/49/Machine_learning_nutshell_--_Test_the_machine_learning_model.svg/640px-Machine_learning_nutshell_--_Test_the_machine_learning_model.svg.png" /> </center> But first, I'll need to load the pre-generated feature values. End of explanation """ is_reverted.test(testing_unpacked) print(is_reverted.info.format()) """ Explanation: Accuracy -- The proportion of correct predictions Precision -- The proportion of correct positive predictions Recall -- The proportion of positive examples predicted as positive Filter rate at 90% recall -- The proportion of observations that can be ignored while still catching 90% of "reverted" edits. We'll use revscoring statistics to measure these against the test set. End of explanation """ import json from sseclient import SSEClient as EventSource url = 'https://stream.wikimedia.org/v2/stream/recentchange' for event in EventSource(url): if event.event == 'message': try: change = json.loads(event.data) if change['type'] not in ('new', 'edit'): continue rev_id = change['revision']['new'] feature_values = list(api_extractor.extract(rev_id, features)) score = is_reverted.score(feature_values) if score['prediction']: print("!!!Please review", "https://en.wikipedia.org/wiki/?diff=" + str(rev_id), round(score['probability'][True], 2), flush=True) else: print("Good edit", "https://en.wikipedia.org/wiki/?diff=" + str(rev_id), round(score['probability'][True], 2), flush=True) except ValueError: pass """ Explanation: Bonus round! Let's listen to Wikipedia's vandalism! So we don't have the most powerful damage detection classifier, but then again, we're only including 9 features. Usually we run with ~60 features and get to much higher levels of fitness. but this model is still useful and it should help us detect the most egregious vandalism in Wikipedia. In order to listen to Wikipedia, we'll need to connect to RCStream -- the same live feed that powers listen to Wikipedia. End of explanation """
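REVIEW_THRESHOLD = 0.8  # hypothetical cutoff, not part of the original demo

def needs_review(feature_values, threshold=REVIEW_THRESHOLD):
    # Re-use the trained model from above and flag an edit only when the
    # probability of being reverted clears the chosen cutoff.
    score = is_reverted.score(feature_values)
    return score['probability'][True] >= threshold, score['probability'][True]

# Spot-check the stricter cutoff on a few held-out observations.
for cache, reverted in testing_unpacked[:5]:
    flagged, p_true = needs_review(cache)
    print(reverted, flagged, round(p_true, 2))
"""
Explanation: The sketch above is an optional aside, not part of the original walkthrough. It assumes the is_reverted model and the testing_unpacked list built earlier are still in scope, and that score() accepts the cached feature values exactly as in the cells above. REVIEW_THRESHOLD and needs_review are made-up names; the idea is simply that raising the probability cutoff above the model's default decision trades some recall for fewer false alarms when flagging edits from the live stream.
End of explanation
"""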
zzsza/bigquery-tutorial
tutorials/02-Utils/02. Connect Datalab.ipynb
mit
import google.datalab.bigquery as bq # Create the query query_string = ''' #standardSQL SELECT corpus AS title, COUNT(*) AS unique_words FROM `publicdata.samples.shakespeare` GROUP BY title ORDER BY unique_words DESC LIMIT 10 ''' query = bq.Query(query_string) output_options = bq.QueryOutput.table(use_cache=True) result = query.execute(output_options=output_options).result() # run the query result """ Explanation: 02. Connect datalab Datalab can be thought of as a Jupyter notebook service hosted on Google Cloud. Since the 20170818 release it supports both python2 and python3 kernels (previously python2 only). Datalab has very fast IO against Google Cloud Storage / BigQuery and similar services. datalab install datalab connect [datalab instance name] <img src="../images/007_connect_datalab.png" width="700" height="700"> - What Google Datalab looks like End of explanation """ pandas_df = result.to_dataframe() pandas_df sample_dataset = bq.Dataset('bigquery-public-data.samples') # check whether the dataset exists sample_dataset.exists() """ Explanation: Convert the result to a pandas dataframe End of explanation """ %bq datasets list --project cloud-datalab-samples """ Explanation: List the project's sample datasets with the %bq magic command End of explanation """ %%bq query #standardSQL SELECT corpus AS title, COUNT(*) AS unique_words FROM `publicdata.samples.shakespeare` GROUP BY title ORDER BY unique_words DESC LIMIT 10 """ Explanation: Issue a query directly with %%bq query End of explanation """ %%bq query -n requests SELECT timestamp, latency, endpoint FROM `cloud-datalab-samples.httplogs.logs_20140615` WHERE endpoint = 'Popular' OR endpoint = 'Recent' df = requests.execute(output_options=bq.QueryOutput.dataframe()).result() len(df) df.head() """ Explanation: Create a pandas data frame using the magic command End of explanation """ %%bq query --name data WITH quantiles AS ( SELECT APPROX_QUANTILES(LOG10(latency), 50) AS timearray FROM `cloud-datalab-samples.httplogs.logs_20140615` WHERE latency <> 0 ) select row_number() over(order by time) as percentile, time from quantiles cross join unnest(quantiles.timearray) as time order by percentile %chart columns --data data --fields percentile,time """ Explanation: You can draw a chart directly using the Google Charts API End of explanation """
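latency_df = data.execute(output_options=bq.QueryOutput.dataframe()).result()

import matplotlib.pyplot as plt
plt.plot(latency_df['percentile'], latency_df['time'])
plt.xlabel('percentile')
plt.ylabel('log10(latency)')
plt.show()
"""
Explanation: A small optional follow-up, not in the original tutorial: assuming the %%bq query named data above can be executed the same way as the requests query (execute() with QueryOutput.dataframe()), the percentile table can also be plotted with plain matplotlib instead of the %chart magic. latency_df is a made-up variable name; the percentile and time columns come from the query above.
End of explanation
"""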
herruzojm/udacity-deep-learning
autoencoder/Convolutional_Autoencoder_Solution.ipynb
mit
%matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) img = mnist.train.images[2] plt.imshow(img.reshape((28, 28)), cmap='Greys_r') """ Explanation: Convolutional Autoencoder Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data. End of explanation """ inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x16 maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same') # Now 14x14x16 conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x8 maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same') # Now 7x7x8 conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x8 encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same') # Now 4x4x8 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7)) # Now 7x7x8 conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x8 upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14)) # Now 14x14x8 conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x8 upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28)) # Now 28x28x8 conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x16 logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None) #Now 28x28x1 decoded = tf.nn.sigmoid(logits, name='decoded') loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(0.001).minimize(cost) """ Explanation: Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. <img src='assets/convolutional_autoencoder.png' width=500px> Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoder Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. 
For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose. However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling. Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor. End of explanation """ sess = tf.Session() epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) imgs = batch[0].reshape((-1, 28, 28, 1)) batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([in_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) sess.close() """ Explanation: Training As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays. 
End of explanation """ inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x32 maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same') # Now 14x14x32 conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x32 maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same') # Now 7x7x32 conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x16 encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same') # Now 4x4x16 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7)) # Now 7x7x16 conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x16 upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14)) # Now 14x14x16 conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x32 upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28)) # Now 28x28x32 conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x32 logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None) #Now 28x28x1 decoded = tf.nn.sigmoid(logits, name='decoded') loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(0.001).minimize(cost) sess = tf.Session() epochs = 100 batch_size = 200 # Set's how much noise we're adding to the MNIST images noise_factor = 0.5 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images from the batch imgs = batch[0].reshape((-1, 28, 28, 1)) # Add random noise to the input images noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape) # Clip the images to be between 0 and 1 noisy_imgs = np.clip(noisy_imgs, 0., 1.) # Noisy images as inputs, original images as targets batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) """ Explanation: Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before. Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers. 
End of explanation """ fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape) noisy_imgs = np.clip(noisy_imgs, 0., 1.) reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([noisy_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) """ Explanation: Checking out the performance Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is. End of explanation """
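clean = in_imgs.reshape((10, 28, 28, 1))
mse_noisy = np.mean((noisy_imgs.reshape((10, 28, 28, 1)) - clean) ** 2)
mse_denoised = np.mean((reconstructed - clean) ** 2)
print("MSE of noisy images vs clean:    {:.4f}".format(mse_noisy))
print("MSE of denoised images vs clean: {:.4f}".format(mse_denoised))
"""
Explanation: An optional numeric sanity check to go with the plots, assuming in_imgs, noisy_imgs and reconstructed from the cell above are still in scope: the mean squared error of the denoised reconstructions against the clean test digits should come out well below that of the noisy inputs if the autoencoder is doing its job. The variable names here exist only for this sketch.
End of explanation
"""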
tcstewar/testing_notebooks
spike trains - poisson and regular.ipynb
gpl-2.0
class PoissonSpikingApproximate(object): def __init__(self, size, seed, dt=0.001): self.rng = np.random.RandomState(seed=seed) self.dt = dt self.value = 1.0 / dt self.size = size self.output = np.zeros(size) def __call__(self, t, x): self.output[:] = 0 p = 1.0 - np.exp(-x*self.dt) self.output[p>self.rng.rand(self.size)] = self.value return self.output """ Explanation: Here, we just check if there are any events within dt time, and if there are, we assume there is only one of them. This is a horrible approximation. Don't do this. End of explanation """ class PoissonSpikingExactBad(object): def __init__(self, size, seed, dt=0.001): self.rng = np.random.RandomState(seed=seed) self.dt = dt self.value = 1.0 / dt self.size = size self.output = np.zeros(size) def __call__(self, t, x): p = 1.0 - np.exp(-x*self.dt) self.output[:] = 0 s = np.where(p>self.rng.rand(self.size))[0] self.output[s] += self.value count = len(s) while count > 0: s2 = np.where(p[s]>self.rng.rand(count))[0] s = s[s2] self.output[s] += self.value count = len(s) return self.output """ Explanation: For this attempt, I'm screwing something up in the logic, as it's firing more frequently than it should. The idea is to use the same approach as above to see if any spikes happen during the dt. If there are any spikes, then count that as 1 spike. But now I need to know if there are any more spikes (given that we know at least one spike happened). Since one spike happened at one instant in time, but all the other points in time during that dt could also have a spike (since the whole point of a poisson process is that everything's independent), we can remove that one infinitessimal point in take from dt, leaving us with dt, and we can check if any spikes happened in that remaining time. In other words, just do the same process over again, only considering those neurons who did spike on the previous iteration of this logic. Continue until no neurons have another spike. I'm not sure if my logic is bad or my coding is bad, but this doesn't work. 
End of explanation """ class PoissonSpikingExact(object): def __init__(self, size, seed, dt=0.001): self.rng = np.random.RandomState(seed=seed) self.dt = dt self.value = 1.0 / dt self.size = size self.output = np.zeros(size) def next_spike_times(self, rate): return -np.log(1.0-self.rng.rand(len(rate))) / rate def __call__(self, t, x): self.output[:] = 0 next_spikes = self.next_spike_times(x) s = np.where(next_spikes<self.dt)[0] count = len(s) self.output[s] += self.value while count > 0: next_spikes[s] += self.next_spike_times(x[s]) s2 = np.where(next_spikes[s]<self.dt)[0] count = len(s2) s = s[s2] self.output[s] += self.value return self.output model = nengo.Network() with model: freq=10 stim = nengo.Node(lambda t: np.sin(t*np.pi*2*freq)) ens = nengo.Ensemble(n_neurons=5, dimensions=1, neuron_type=nengo.LIFRate(), seed=1) nengo.Connection(stim, ens, synapse=None) regular_spikes = nengo.Node(RegularSpiking(ens.n_neurons), size_in=ens.n_neurons) nengo.Connection(ens.neurons, regular_spikes, synapse=None) poisson_spikes = nengo.Node(PoissonSpikingExact(ens.n_neurons, seed=1), size_in=ens.n_neurons) nengo.Connection(ens.neurons, poisson_spikes, synapse=None) p_rate = nengo.Probe(ens.neurons) p_regular = nengo.Probe(regular_spikes) p_poisson = nengo.Probe(poisson_spikes) sim = nengo.Simulator(model) sim.run(0.1) pylab.figure(figsize=(10,8)) pylab.subplot(3,1,1) pylab.plot(sim.trange(), sim.data[p_rate]) pylab.xlim(0, sim.time) pylab.ylabel('rate') pylab.subplot(3,1,2) import nengo.utils.matplotlib nengo.utils.matplotlib.rasterplot(sim.trange(), sim.data[p_regular]) pylab.xlim(0, sim.time) pylab.ylabel('regular spiking') pylab.subplot(3,1,3) nengo.utils.matplotlib.rasterplot(sim.trange(), sim.data[p_poisson]) pylab.xlim(0, sim.time) pylab.ylabel('poisson spiking') pylab.show() """ Explanation: Now for one that works. Here we do the approach of actually figuring out when during the time step the events happen, and continue until we fall off the end of the timestep. This is how everyone says to do it. End of explanation """ def test_accuracy(cls, rate, T=1): test_model = nengo.Network() with test_model: stim = nengo.Node(rate) spikes = nengo.Node(cls(1, seed=1), size_in=1) nengo.Connection(stim, spikes, synapse=None) p = nengo.Probe(spikes) sim = nengo.Simulator(test_model) sim.run(T, progress_bar=False) return np.mean(sim.data[p]) rates = np.linspace(0, 1000, 11) result_approx = [test_accuracy(PoissonSpikingApproximate, r) for r in rates] result_bad = [test_accuracy(PoissonSpikingExactBad, r) for r in rates] result_exact = [test_accuracy(PoissonSpikingExact, r) for r in rates] pylab.plot(rates, result_approx, label='spike rate approx') pylab.plot(rates, result_bad, label='spike rate bad') pylab.plot(rates, result_exact, label='spike rate exact') pylab.plot(rates, rates, ls='--', c='k', label='ideal') pylab.legend(loc='best') pylab.show() """ Explanation: Let's test the accuracy of these models. End of explanation """
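def isi_cv(spike_array, dt=0.001, neuron=0):
    # spike_array is a (timesteps, neurons) probe output of 0s and 1/dt values;
    # recover the spike times of one neuron and return the coefficient of
    # variation of its inter-spike intervals.
    spike_steps = np.nonzero(spike_array[:, neuron])[0]
    if len(spike_steps) < 3:
        return np.nan
    isis = np.diff(spike_steps) * dt
    return np.std(isis) / np.mean(isis)

print("regular spiking ISI CV: {}".format(isi_cv(sim.data[p_regular])))
print("poisson spiking ISI CV: {}".format(isi_cv(sim.data[p_poisson])))
"""
Explanation: A small optional check on the two spike generators, assuming sim, p_regular and p_poisson from the run above are still in scope: for a roughly constant rate, regular spiking should give an inter-spike-interval coefficient of variation near 0 while a Poisson process should sit near 1. The input above is a sinusoid and the run is short, so treat the numbers as a rough indication only; isi_cv is a made-up helper for this sketch.
End of explanation
"""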
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
Improving Deep Neural Networks/Initialization.ipynb
mit
import numpy as np import matplotlib.pyplot as plt import sklearn import sklearn.datasets from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec %matplotlib inline plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # load image dataset: blue/red dots in circles train_X, train_Y, test_X, test_Y = load_dataset() """ Explanation: Initialization Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can: - Speed up the convergence of gradient descent - Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify. End of explanation """ def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"): """ Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID. Arguments: X -- input data, of shape (2, number of examples) Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples) learning_rate -- learning rate for gradient descent num_iterations -- number of iterations to run gradient descent print_cost -- if True, print the cost every 1000 iterations initialization -- flag to choose which initialization to use ("zeros","random" or "he") Returns: parameters -- parameters learnt by the model """ grads = {} costs = [] # to keep track of the loss m = X.shape[1] # number of examples layers_dims = [X.shape[0], 10, 5, 1] # Initialize parameters dictionary. if initialization == "zeros": parameters = initialize_parameters_zeros(layers_dims) elif initialization == "random": parameters = initialize_parameters_random(layers_dims) elif initialization == "he": parameters = initialize_parameters_he(layers_dims) # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID. a3, cache = forward_propagation(X, parameters) # Loss cost = compute_loss(a3, Y) # Backward propagation. grads = backward_propagation(X, Y, cache) # Update parameters. parameters = update_parameters(parameters, grads, learning_rate) # Print the loss every 1000 iterations if print_cost and i % 1000 == 0: print("Cost after iteration {}: {}".format(i, cost)) costs.append(cost) # plot the loss plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters """ Explanation: You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - Zeros initialization -- setting initialization = "zeros" in the input argument. 
- Random initialization -- setting initialization = "random" in the input argument. This initializes the weights to large random values. - He initialization -- setting initialization = "he" in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. Instructions: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this model() calls. End of explanation """ # GRADED FUNCTION: initialize_parameters_zeros def initialize_parameters_zeros(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ parameters = {} L = len(layers_dims) # number of layers in the network for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.zeros((layers_dims[l],layers_dims[l-1])) parameters['b' + str(l)] = np.zeros((layers_dims[l],1)) ### END CODE HERE ### return parameters parameters = initialize_parameters_zeros([3,2,1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) """ Explanation: 2 - Zero initialization There are two types of parameters to initialize in a neural network: - the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$ - the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$ Exercise: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but lets try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes. End of explanation """ parameters = model(train_X, train_Y, initialization = "zeros") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) """ Explanation: Expected Output: <table> <tr> <td> **W1** </td> <td> [[ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td> **b1** </td> <td> [[ 0.] [ 0.]] </td> </tr> <tr> <td> **W2** </td> <td> [[ 0. 0.]] </td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> Run the following code to train your model on 15,000 iterations using zeros initialization. End of explanation """ print ("predictions_train = " + str(predictions_train)) print ("predictions_test = " + str(predictions_test)) plt.title("Model with Zeros initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) """ Explanation: The performance is really bad, and the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Lets look at the details of the predictions and the decision boundary: End of explanation """ # GRADED FUNCTION: initialize_parameters_random def initialize_parameters_random(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. 
Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ np.random.seed(3) # This seed makes sure your "random" numbers will be the as ours parameters = {} L = len(layers_dims) # integer representing the number of layers for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1]) *10 parameters['b' + str(l)] = np.zeros((layers_dims[l],1)) ### END CODE HERE ### return parameters parameters = initialize_parameters_random([3, 2, 1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) """ Explanation: The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. <font color='blue'> What you should remember: - The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initialization To break symmetry, lets intialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are intialized randomly, but to very large values. Exercise: Implement the following function to initialize your weights to large random values (scaled by *10) and your biases to zeros. Use np.random.randn(..,..) * 10 for weights and np.zeros((.., ..)) for biases. We are using a fixed np.random.seed(..) to make sure your "random" weights match ours, so don't worry if running several times your code gives you always the same initial values for the parameters. End of explanation """ parameters = model(train_X, train_Y, initialization = "random") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) """ Explanation: Expected Output: <table> <tr> <td> **W1** </td> <td> [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] </td> </tr> <tr> <td> **b1** </td> <td> [[ 0.] [ 0.]] </td> </tr> <tr> <td> **W2** </td> <td> [[-0.82741481 -6.27000677]] </td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> Run the following code to train your model on 15,000 iterations using random initialization. End of explanation """ print (predictions_train) print (predictions_test) plt.title("Model with large random initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) """ Explanation: If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. 
Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s. End of explanation """ # GRADED FUNCTION: initialize_parameters_he def initialize_parameters_he(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ np.random.seed(3) parameters = {} L = len(layers_dims) - 1 # integer representing the number of layers for l in range(1, L + 1): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1]) * np.sqrt(2/layers_dims[l-1]) parameters['b' + str(l)] = np.zeros((layers_dims[l],1)) ### END CODE HERE ### return parameters parameters = initialize_parameters_he([2, 4, 1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) """ Explanation: Observations: - The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity. - Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization. <font color='blue'> In summary: - Initializing weights to very large random values does not work well. - Hopefully intializing with small random values does better. The important question is: how small should be these random values be? Lets find out in the next part! 4 - He initialization Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of sqrt(1./layers_dims[l-1]) where He initialization would use sqrt(2./layers_dims[l-1]).) Exercise: Implement the following function to initialize your parameters with He initialization. Hint: This function is similar to the previous initialize_parameters_random(...). The only difference is that instead of multiplying np.random.randn(..,..) by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation. End of explanation """ parameters = model(train_X, train_Y, initialization = "he") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) plt.title("Model with He initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) """ Explanation: Expected Output: <table> <tr> <td> **W1** </td> <td> [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] </td> </tr> <tr> <td> **b1** </td> <td> [[ 0.] [ 0.] [ 0.] 
[ 0.]] </td> </tr> <tr> <td> **W2** </td> <td> [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] </td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> Run the following code to train your model on 15,000 iterations using He initialization. End of explanation """
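np.random.seed(1)
n = 512
a0 = np.random.randn(1000, n)

def relu_layer(a_prev, scale):
    # one dense layer with weights drawn at the given scale, followed by ReLU
    W = np.random.randn(n, n) * scale
    return np.maximum(0, a_prev.dot(W))

a_he, a_small = a0, a0
for layer in range(5):
    a_he = relu_layer(a_he, np.sqrt(2.0 / n))        # He scaling
    a_small = relu_layer(a_small, 1.0 / np.sqrt(n))  # plain 1/sqrt(n) scaling
    print("layer {}: var(He) = {:.3f}, var(1/sqrt(n)) = {:.3f}".format(
        layer + 1, a_he.var(), a_small.var()))
"""
Explanation: The sketch above is an optional numpy-only illustration, separate from the graded assignment code: it pushes random data through a few ReLU layers and prints how the activation variance evolves when weights are scaled by sqrt(2/n) (He) versus 1/sqrt(n). The layer sizes and the relu_layer helper are made up for the demonstration; the point is simply that the He factor keeps activations from shrinking layer after layer, which is why it pairs well with ReLU networks like the one in this notebook.
End of explanation
"""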
WNoxchi/Kaukasos
quantum/openfermion-forest-demo-codealong.ipynb
mit
from openfermion.ops import QubitOperator from forestopenfermion import pyquilpauli_to_qubitop, qubitop_to_pyquilpauli """ Explanation: OpenFermion – Forest demo Wayne H Nixalo – 2018/6/26 A codealong of Forest-OpenFermion_demo.ipynb Generating and simulating circuits with OpenFermion Forest The QubitOperator datastructure in OpenFermion is the main point of contact between OpenFermion and Forest. Translation of the QubitOperator to PauliTerms and PauliSums is the interface that is constructed in the OpenFermion–Forest module. Fortunately, when traversing layers of abstraction in OpenFermion, the QubitOperator naturally appears in all types of simulations. Upon translation into the language of pyQuil, connections to the Forest-QVM or an alternative QVM (such as reference-qvm) that understand pyQuil Program objects can be initialized. The following demonstration starts with the interface between the QubitOperator data structure and the PauliTerm and PauliSum data structures of pyQuil, and then demonstrates how to cnostruct and simulate Hamiltonians starting from OpenFermion. End of explanation """ qubit_op = QubitOperator('X0 Y1 Z2') pauli_term = qubitop_to_pyquilpauli(qubit_op) print(pauli_term) qubit_op_sum = QubitOperator('X1 Y5 Z2', coefficient=8) + QubitOperator('Y4 Z2', coefficient=1.5) pauli_term_sum = qubitop_to_pyquilpauli(qubit_op_sum) print(pauli_term_sum) """ Explanation: The interface contains 2 methods that mediate the translation of PauliTerm and PauliSums to QubitOperators and vice-versa. End of explanation """ reversed_term = pyquilpauli_to_qubitop(pauli_term) print(reversed_term.isclose(qubit_op)) # should return True reversed_sum = pyquilpauli_to_qubitop(pauli_term_sum) print(reversed_sum.isclose(qubit_op_sum)) # shuold return True """ Explanation: We can convert back from a PauliSum object to a QubitOperator End of explanation """ from openfermion.ops import FermionOperator from openfermion.transforms import jordan_wigner from openfermion.utils import hermitian_conjugated # we'll construct the Hamiltonian terms hopping_hamiltonian = FermionOperator() spatial_orbitals = 4 for i in range(spatial_orbitals): electron_hop_alpha = FermionOperator(((2*i, 1), (2*((i+1) % spatial_orbitals), 0))) electron_hop_beta = FermionOperator(((2*i+1, 1), ((2*((i+1) % spatial_orbitals) + 1), 0))) hopping_hamiltonian += electron_hop_alpha + hermitian_conjugated(electron_hop_alpha) hopping_hamiltonian += electron_hop_beta + hermitian_conjugated(electron_hop_beta) """ Explanation: Let's generate the hopping terms for the Hubbard model Hamiltonian on 4-spatial sites. End of explanation """ from pyquil.quil import Program from forestopenfermion import exponentiate hopping_term_generator = jordan_wigner(hopping_hamiltonian) pyquil_program = exponentiate(hopping_term_generator) print(pyquil_program) """ Explanation: We can turn the hopping hamiltonian into QubitOperator terms on 2 * (spatial_orbital) qubits using the OpenFermion Jorda-Wigner routine. OpenFermion-Forest provides an interface to convert the QubitOperator objects into pyquil objects and generate a Quil program from their exponentiation. The Quil program was generated by taking each PauliTerm and converting to a set of gates according to arXiv:1306.3991. Once the user has data in the pyQuil format, more pyquil tools, such as a Trotterization engine, can be used. 
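import numpy as np

X_ = np.array([[0, 1], [1, 0]])
Y_ = np.array([[0, -1j], [1j, 0]])
Z_ = np.array([[1, 0], [0, -1]])

# 'X0 Y1 Z2' denotes X on qubit 0, Y on qubit 1 and Z on qubit 2; up to the
# qubit-ordering convention this is the Kronecker product X ⊗ Y ⊗ Z.
pauli_xyz = np.kron(np.kron(X_, Y_), Z_)
print(pauli_xyz.shape)  # (8, 8)
"""
Explanation: The snippet above is a plain-numpy aside, not part of the OpenFermion or pyQuil APIs: it only spells out, up to qubit-ordering convention, the 8x8 matrix that a three-qubit Pauli string such as 'X0 Y1 Z2' stands for, so the QubitOperator and PauliTerm objects used below have a concrete picture behind them. The variable names are arbitrary.
End of explanation
"""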
End of explanation """ from pyquil.api import QVMConnection qvm = QVMConnection() wf = qvm.wavefunction(pyquil_program) """ Explanation: The returned value from exponentiate is a pyQuil Program object. The object has some nice features such as a dagger function 🗡, easy classical control-flow construction, and introspection 🧐. The circuit can be simulated w/ or w/o noise by running on the Forest-QVM or on reference-qvm. In order to pyquil Programs on the Forest-QVM you'll need to sign up on the Forest Home Page for a key. End of explanation """ print(wf.amplitudes) """ Explanation: The resulting Wavefunction object from pyQuil contains pretty printing features and the ability to access the wavefunction. End of explanation """ print(wf) """ Explanation: We can also pretty-print the wavefunction (on by default) which prints the amplitudes and bitstrings in an easy to read fromat. End of explanation """
griffinfoster/fundamentals_of_interferometry
2_Mathematical_Groundwork/2_6_cross_correlation_and_auto_correlation.ipynb
gpl-2.0
import numpy as np import matplotlib.pyplot as plt %matplotlib inline from IPython.display import HTML HTML('../style/course.css') #apply general CSS """ Explanation: Outline Glossary 2. Mathematical Groundwork Previous: 2.5 Convolution Next: 2.7 Fourier Theorems Import standard modules: End of explanation """ pass """ Explanation: Import section specific modules: End of explanation """
ffyu/Build_Model_from_Scratch
6_Principal_Component_Analysis.ipynb
mit
import numpy as np class PCA(): def __init__(self, num_components): self.num_components = num_components self.U = None self.S = None def fit(self, X): # perform pca m = X.shape[0] X_mean = np.mean(X, axis=0) X -= X_mean cov = X.T.dot(X) * 1.0 / m self.U, self.S, _ = np.linalg.svd(cov) return self def project(self, X): # project data based on reduced dimension U_reduce = self.U[:, :self.num_components] X_reduce = X.dot(U_reduce) return X_reduce def inverse(self, X_reduce): # recover the original data based on the reduced form U_reduce = self.U[:, :self.num_components] X = X_reduce.dot(U_reduce.T) return X def explained_variance(self): # print the ratio of explained variance with the pca explained = np.sum(self.S[:self.num_components]) total = np.sum(self.S) return explained * 1.0 / total """ Explanation: Principal Component Analysis (PCA) is widely used in Machine Learning pipelines as a means to compress data or help visualization. This notebook aims to walk through the basic idea of the PCA and build the algorithm from scratch in Python. Before diving directly into the PCA, let's first talk about several import concepts - the "eigenvectors & eigenvalues" and "Singular Value Decomposition (SVD)". An eigenvector of a square matrix is a column vector that satisfies: $$Av=\lambda v$$ Where A is a $[n\times n]$ square matrix, v is a $[n\times 1]$ eigenvector, and $\lambda$ is a scalar value which is also known as the eigenvalue. If A is both a square and symmetric matrix (like a typical variance-covariance matrix), then we can write A as: $$A=U\Sigma U^T$$ Here columns of matrix U are eigenvectors of matrix A; and $\Sigma$ is a diaonal matrix containing the corresponding eigenvalues. This is also a special case of the well-known theorem "Singular Value Decomposition" (SVD), where a rectangular matrix M can be expressed as: $$M=U\Sigma V^T$$ With SVD, we can calcuate the eigenvectors and eigenvalues of a square & symmetric matrix. This will be the key to solve the PCA. The goal of the PCA is to find a lower dimension surface to maxmize total variance of the projection, or in other means, to minimize the projection error. The entire algorithm can be summarized as the following: 1) Given a data matrix $X$ with $m$ rows (number of records) and $n$ columns (number of dimensions), we should first substract the column mean for each dimension. 2) Then we can calculate the variance-covariance matrix using the equation (X here already has zero mean for each column from step 1): $$cov=\frac{1}{m}X^TX$$ 3) We can then use SVD to compute the eigenvectors and corresponding eigenvalues of the above covariance matrix "$cov$": $$cov=U\Sigma U^T$$ 4) If our target dimension is $p$ ($p<n$), then we will select the first $p$ columns of the $U$ matrix and get matrix $U_{reduce}$. 5) To get the compressed data set, we can do the transformation as below: $$X_{reduce}=XU_{reduce}$$ 6) To appoximate the original data set given the compressed data, we can use: $$X=X_{reduce}U_{reduce}^T$$ Note this is true because $U_{reduce}^{-1}=U_{reduce}^T$ (in this case, all the eigenvectors are unit vectors). In practice, it is also important to choose the proper number of principal components. For data compression, we want to retain as much variation in the original data while reducing the dimension. 
Luckily, with SVD, we can get an estimate of the retained variation by: $$\%\ of\ variance\ retained = \frac{\sum_{i=1}^{p}S_{ii}}{\sum_{i=1}^{n}S_{ii}}$$ Where $S_{ii}$ is the $ith$ diagonal element of the $\Sigma$ matrix, $p$ is the number of reduced dimensions, and $n$ is the dimension of the original data. For data visualization purposes, we usually choose 2 or 3 dimensions to plot the compressed data. The following class PCA() implements the idea of principal component analysis. End of explanation """ from sklearn.datasets import load_iris iris = load_iris() X = iris['data'] y = iris['target'] print X.shape """ Explanation: Now we can use a demo data set to show dimensionality reduction and data visualization. We will use the Iris Data set as always. End of explanation """ pca = PCA(num_components=2) pca.fit(X) X_reduce = pca.project(X) print X_reduce.shape """ Explanation: We can find that the dimension of the original $X$ matrix is 4. We can then compress it to 2 using the PCA technique with the PCA() class that we defined above. End of explanation """ print "{:.2%}".format(pca.explained_variance()) """ Explanation: Now that the data has been compressed, we can check the retained variance. End of explanation """ %pylab inline pylab.rcParams['figure.figsize'] = (10, 6) from matplotlib import pyplot as plt for c, marker, class_num in zip(['green', 'r', 'cyan'], ['o', '^', 's'], np.unique(y)): plt.scatter(x=X_reduce[:, 0][y == class_num], y=X_reduce[:, 1][y == class_num], c=c, marker=marker, label="Class {}".format(class_num), alpha=0.7, s=30) plt.xlabel("Component 1") plt.ylabel("Component 2") plt.legend() plt.show() """ Explanation: We have 97.76% of variance retained. This is okay for data visualization purposes. But if we used PCA in supervised learning pipelines, we might want to add more dimensions to keep more than 99% of the variation from the original data. Finally, with the compressed dimension, we can plot to see the distribution of the iris dataset. End of explanation """
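from sklearn.decomposition import PCA as SklearnPCA

sk_pca = SklearnPCA(n_components=2)
X_reduce_sk = sk_pca.fit_transform(iris['data'])

# Retained variance should match the ~97.76% reported above, and the projections
# should agree up to a sign flip per component and small numerical noise.
print(sk_pca.explained_variance_ratio_.sum())
print(np.allclose(np.abs(X_reduce_sk), np.abs(X_reduce), atol=1e-6))
"""
Explanation: A small optional cross-check, not part of the original write-up: scikit-learn's PCA fitted on the same iris data should report essentially the same retained variance and, up to per-component sign flips and small numerical noise, the same 2-D projection as the from-scratch class above. SklearnPCA is just an import alias to avoid clashing with the custom PCA class, and the tolerance is an arbitrary choice for the sketch.
End of explanation
"""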
tensorflow/docs-l10n
site/zh-cn/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb
apache-2.0
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """ Explanation: Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """ %%capture !pip3 install seaborn """ Explanation: Universal Sentence Encoder <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View 在 TensorFlow.org 上查看</a> </td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 中查看源代码</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td> <td> <a href="https://tfhub.dev/s?q=google%2Funiversal-sentence-encoder%2F4%20OR%20google%2Funiversal-sentence-encoder-large%2F5"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">查看 TF Hub 模型</a> </td> </table> 此笔记本演示了如何访问 Universal Sentence Encoder,并将它用于句子相似度和句子分类任务。 Universal Sentence Encoder 使获取句子级别的嵌入向量变得与以往查找单个单词的嵌入向量一样容易。之后,您可以轻松地使用句子嵌入向量计算句子级别的语义相似度,以及使用较少监督的训练数据在下游分类任务中实现更好的性能。 设置 本部分将设置访问 TF Hub 上通用句子编码器的环境,并提供将编码器应用于单词、句子和段落的示例。 End of explanation """ #@title Load the Universal Sentence Encoder's TF Hub module from absl import logging import tensorflow as tf import tensorflow_hub as hub import matplotlib.pyplot as plt import numpy as np import os import pandas as pd import re import seaborn as sns module_url = "https://tfhub.dev/google/universal-sentence-encoder/4" #@param ["https://tfhub.dev/google/universal-sentence-encoder/4", "https://tfhub.dev/google/universal-sentence-encoder-large/5"] model = hub.load(module_url) print ("module %s loaded" % module_url) def embed(input): return model(input) #@title Compute a representation for each message, showing various lengths supported. word = "Elephant" sentence = "I am a sentence for which I would like to get its embedding." paragraph = ( "Universal Sentence Encoder embeddings also support short paragraphs. " "There is no hard limit on how long the paragraph is. Roughly, the longer " "the more 'diluted' the embedding will be.") messages = [word, sentence, paragraph] # Reduce logging output. 
logging.set_verbosity(logging.ERROR) message_embeddings = embed(messages) for i, message_embedding in enumerate(np.array(message_embeddings).tolist()): print("Message: {}".format(messages[i])) print("Embedding size: {}".format(len(message_embedding))) message_embedding_snippet = ", ".join( (str(x) for x in message_embedding[:3])) print("Embedding: [{}, ...]\n".format(message_embedding_snippet)) """ Explanation: 有关安装 Tensorflow 的更多详细信息,请访问 https://tensorflow.google.cn/install/。 End of explanation """ def plot_similarity(labels, features, rotation): corr = np.inner(features, features) sns.set(font_scale=1.2) g = sns.heatmap( corr, xticklabels=labels, yticklabels=labels, vmin=0, vmax=1, cmap="YlOrRd") g.set_xticklabels(labels, rotation=rotation) g.set_title("Semantic Textual Similarity") def run_and_plot(messages_): message_embeddings_ = embed(messages_) plot_similarity(messages_, message_embeddings_, 90) """ Explanation: 语义文本相似度任务示例 Universal Sentence Encoder 生成的嵌入向量会被近似归一化。两个句子的语义相似度可以作为编码的内积轻松进行计算。 End of explanation """ messages = [ # Smartphones "I like my phone", "My phone is not good.", "Your cellphone looks great.", # Weather "Will it snow tomorrow?", "Recently a lot of hurricanes have hit the US", "Global warming is real", # Food and health "An apple a day, keeps the doctors away", "Eating strawberries is healthy", "Is paleo better than keto?", # Asking about age "How old are you?", "what is your age?", ] run_and_plot(messages) """ Explanation: 可视化相似度 下面,我们在热图中显示相似度。最终的图形是一个 9x9 矩阵,其中每个条目 [i, j] 都根据句子 i 和 j 的编码的内积进行着色。 End of explanation """ import pandas import scipy import math import csv sts_dataset = tf.keras.utils.get_file( fname="Stsbenchmark.tar.gz", origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz", extract=True) sts_dev = pandas.read_table( os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"), error_bad_lines=False, skip_blank_lines=True, usecols=[4, 5, 6], names=["sim", "sent_1", "sent_2"]) sts_test = pandas.read_table( os.path.join( os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"), error_bad_lines=False, quoting=csv.QUOTE_NONE, skip_blank_lines=True, usecols=[4, 5, 6], names=["sim", "sent_1", "sent_2"]) # cleanup some NaN values in sts_dev sts_dev = sts_dev[[isinstance(s, str) for s in sts_dev['sent_2']]] """ Explanation: 评估:STS(语义文本相似度)基准 STS 基准会根据从句子嵌入向量计算得出的相似度得分与人为判断的一致程度,提供内部评估。该基准要求系统为不同的句子对选择返回相似度得分。然后使用皮尔逊相关来评估机器相似度得分相对于人为判断的质量。 下载数据 End of explanation """ sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"} def run_sts_benchmark(batch): sts_encode1 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_1'].tolist())), axis=1) sts_encode2 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_2'].tolist())), axis=1) cosine_similarities = tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1) clip_cosine_similarities = tf.clip_by_value(cosine_similarities, -1.0, 1.0) scores = 1.0 - tf.acos(clip_cosine_similarities) / math.pi """Returns the similarity scores""" return scores dev_scores = sts_data['sim'].tolist() scores = [] for batch in np.array_split(sts_data, 10): scores.extend(run_sts_benchmark(batch)) pearson_correlation = scipy.stats.pearsonr(scores, dev_scores) print('Pearson correlation coefficient = {0}\np-value = {1}'.format( pearson_correlation[0], pearson_correlation[1])) """ Explanation: 评估句子嵌入向量 End of explanation """
Aniruddha-Tapas/Applied-Machine-Learning
Miscellaneous/Gesture-Phase-Detection.ipynb
mit
%matplotlib inline import pandas as pd import numpy as np from sklearn.cross_validation import train_test_split from sklearn import cross_validation, metrics from sklearn import preprocessing import matplotlib import matplotlib.pyplot as plt # read .csv from provided dataset csv_filename1="a1_raw.csv" csv_filename2="a1_va3.csv" # df=pd.read_csv(csv_filename,index_col=0) df1=pd.read_csv(csv_filename1 , skiprows=[1,2,3,4]) df2=pd.read_csv(csv_filename2) df1.head() df1.shape df1['phase'].unique() from sklearn.preprocessing import LabelEncoder le = LabelEncoder() df1['phase'] = le.fit_transform(df1['phase']) df1['phase'].unique() df2.head() df2.shape df2['Phase'].unique() df1.columns df2.columns df2.rename(columns={'Phase': 'phase'}, inplace=True) df1.phase.unique() df2.phase.unique() a = df2.phase == 'D' b = df2.phase == 'P' c = df2.phase == 'S' d = df2.phase == 'H' e = df2.phase == 'R' df2.loc[a,'phase'] = 'Rest' df2.loc[b,'phase'] = 'Preparation' df2.loc[c,'phase'] = 'Stroke' df2.loc[d,'phase'] = 'Hold' df2.loc[e,'phase'] = 'Retraction' df2.head(3) df2.phase.unique() from sklearn.preprocessing import LabelEncoder le = LabelEncoder() df2['phase'] = le.fit_transform(df2['phase']) df2.phase.unique() df1.groupby('phase').count() df2.groupby('phase').count() df1.sort('phase',inplace=True) df2.sort('phase',inplace=True) df2.tail() left = pd.DataFrame({ ....: 'key2': ['0', '2', '1', '3','0','1'], ....: 'A': ['A0', 'A1', 'A2', 'A3','A4','A5'], ....: 'B': ['B0', 'B1', 'B2', 'B3','B4','B5']}) ....: right = pd.DataFrame({ ....: 'key2': ['0', '1', '2', '0', '1', '3'], ....: 'C': ['C0', 'C1', 'C2', 'C3', 'C4', 'C5'], ....: 'D': ['D0', 'D1', 'D2', 'D3', 'D4', 'D5']}) ....: left right left.sort('key2',inplace=True) left right.sort('key2',inplace=True) right result = pd.merge(left, right, on=['key2']) result result2 = pd.merge(left, right, on=['key2'], how='right') result2 df = pd.merge(df1, df2, on='phase') df.head() df.columns df[:1] df1.shape,df2.shape,df.shape df.drop('timestamp', axis=1, inplace=True) cols = list(df.columns) features = cols features.remove('phase') len(features) df1.shape,df2.shape,df.shape df1.drop('phase',axis=1,inplace=True) df_1 = pd.concat([df1,df2],axis=1) df_1.drop('timestamp' , axis=1, inplace=True ) df_1.shape cols = list(df_1.columns) features = cols features.remove('phase') X = df_1[features] y = df_1['phase'] # split dataset to 60% training and 40% testing X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0) print (X_train.shape, y_train.shape) """ Explanation: Gesture-Phase-Detection Dataset : https://archive.ics.uci.edu/ml/datasets/Gesture+Phase+Segmentation The dataset is composed by features extracted from 7 videos with people gesticulating, aiming at studying Gesture Phase Segmentation. Each video is represented by two files: a raw file, which contains the position of hands, wrists, head and spine of the user in each frame; and a processed file, which contains velocity and acceleration of hands and wrists. See the data set description for more information on the dataset. Raw files: 18 numeric attributes (double), a timestamp and a class attribute (nominal). Processed files: 32 numeric attributes (double) and a class attribute (nominal). A feature vector with up to 50 numeric attributes can be generated with the two files mentioned above. 
End of explanation """ len(features) # Apply PCA with the same number of dimensions as variables in the dataset from sklearn.decomposition import PCA pca = PCA(n_components=50) pca.fit(X) # Print the components and the amount of variance in the data contained in each dimension print(pca.components_) print(pca.explained_variance_ratio_) %matplotlib inline import matplotlib.pyplot as plt plt.plot(list(pca.explained_variance_ratio_),'-o') plt.title('Explained variance ratio as function of PCA components') plt.ylabel('Explained variance ratio') plt.xlabel('Component') plt.show() # First we reduce the data to two dimensions using PCA to capture variation pca = PCA(n_components=2) reduced_data = pca.fit_transform(X) print(reduced_data[:10]) # print upto 10 elements # Import clustering modules from sklearn.cluster import KMeans from sklearn.mixture import GMM kmeans = KMeans(n_clusters=5) clusters = kmeans.fit(reduced_data) print(clusters) # Plot the decision boundary by building a mesh grid to populate a graph. x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1 y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1 hx = (x_max-x_min)/1000. hy = (y_max-y_min)/1000. xx, yy = np.meshgrid(np.arange(x_min, x_max, hx), np.arange(y_min, y_max, hy)) # Obtain labels for each point in mesh. Use last trained model. Z = clusters.predict(np.c_[xx.ravel(), yy.ravel()]) # Find the centroids for KMeans or the cluster means for GMM centroids = kmeans.cluster_centers_ print('*** K MEANS CENTROIDS ***') print(centroids) # TRANSFORM DATA BACK TO ORIGINAL SPACE FOR ANSWERING 7 print('*** CENTROIDS TRANSFERED TO ORIGINAL SPACE ***') print(pca.inverse_transform(centroids)) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(1) plt.clf() plt.imshow(Z, interpolation='nearest', extent=(xx.min(), xx.max(), yy.min(), yy.max()), cmap=plt.cm.Paired, aspect='auto', origin='lower') plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2) plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='w', zorder=10) plt.title('Clustering on the seeds dataset (PCA-reduced data)\n' 'Centroids are marked with white cross') plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.xticks(()) plt.yticks(()) plt.show() """ Explanation: Unsupervised Learning PCA End of explanation """ from sklearn.cluster import AgglomerativeClustering ac = AgglomerativeClustering(n_clusters=5, affinity='euclidean', linkage='complete') labels = ac.fit_predict(X) print('Cluster labels: %s' % labels) """ Explanation: Applying agglomerative clustering via scikit-learn End of explanation """ X = df_1[features] y = df_1['phase'] # split dataset to 60% training and 40% testing X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0) """ Explanation: End of explanation """ from sklearn import cluster clf = cluster.KMeans(init='k-means++', n_clusters=5, random_state=5) clf.fit(X_train) print clf.labels_.shape print clf.labels_ # Predict clusters on testing data y_pred = clf.predict(X_test) from sklearn import metrics print "Addjusted rand score:{:.2}".format(metrics.adjusted_rand_score(y_test, y_pred)) print "Homogeneity score:{:.2} ".format(metrics.homogeneity_score(y_test, y_pred)) print "Completeness score: {:.2} ".format(metrics.completeness_score(y_test, y_pred)) print "Confusion matrix" print metrics.confusion_matrix(y_test, y_pred) """ Explanation: K Means End of explanation """ # Affinity propagation aff = 
cluster.AffinityPropagation() aff.fit(X_train) print aff.cluster_centers_indices_.shape y_pred = aff.predict(X_test) from sklearn import metrics print "Addjusted rand score:{:.2}".format(metrics.adjusted_rand_score(y_test, y_pred)) print "Homogeneity score:{:.2} ".format(metrics.homogeneity_score(y_test, y_pred)) print "Completeness score: {:.2} ".format(metrics.completeness_score(y_test, y_pred)) print "Confusion matrix" print metrics.confusion_matrix(y_test, y_pred) """ Explanation: Affinity Propogation End of explanation """ ms = cluster.MeanShift() ms.fit(X_train) y_pred = ms.predict(X_test) from sklearn import metrics print "Addjusted rand score:{:.2}".format(metrics.adjusted_rand_score(y_test, y_pred)) print "Homogeneity score:{:.2} ".format(metrics.homogeneity_score(y_test, y_pred)) print "Completeness score: {:.2} ".format(metrics.completeness_score(y_test, y_pred)) print "Confusion matrix" print metrics.confusion_matrix(y_test, y_pred) """ Explanation: MeanShift End of explanation """ from sklearn import mixture # Define a heldout dataset to estimate covariance type X_train_heldout, X_test_heldout, y_train_heldout, y_test_heldout = train_test_split( X_train, y_train,test_size=0.25, random_state=42) for covariance_type in ['spherical','tied','diag','full']: gm=mixture.GMM(n_components=100, covariance_type=covariance_type, random_state=42, n_init=5) gm.fit(X_train_heldout) y_pred=gm.predict(X_test_heldout) print "Adjusted rand score for covariance={}:{:.2}".format(covariance_type, metrics.adjusted_rand_score(y_test_heldout, y_pred)) """ Explanation: Mixture of Guassian Models End of explanation """ pca = PCA(n_components=2) X = pca.fit_transform(X) c = [] from matplotlib.pyplot import cm n=6 color=iter(cm.rainbow(np.linspace(0,1,n))) for i in range(n): c.append(next(color)) n = 5 f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8,6)) km = KMeans(n_clusters= n , random_state=0) y_km = km.fit_predict(X) for i in range(n): ax1.scatter(X[y_km==i,0], X[y_km==i,1], c=c[i], marker='o', s=40, label='cluster{}'.format(i)) ax1.set_title('K-means clustering') ac = AgglomerativeClustering(n_clusters=n, affinity='euclidean', linkage='complete') y_ac = ac.fit_predict(X) for i in range(n): ax2.scatter(X[y_ac==i,0], X[y_ac==i,1], c=c[i], marker='o', s=40, label='cluster{}'.format(i)) ax2.set_title('Agglomerative clustering') # Put a legend below current axis plt.legend() plt.tight_layout() #plt.savefig('./figures/kmeans_and_ac.png', dpi=300) plt.show() """ Explanation: End of explanation """ import os from sklearn.tree import DecisionTreeClassifier, export_graphviz import pandas as pd import numpy as np from sklearn.cross_validation import train_test_split from sklearn import cross_validation, metrics from sklearn.ensemble import RandomForestClassifier from sklearn.naive_bayes import BernoulliNB from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from time import time from sklearn.pipeline import Pipeline from sklearn.metrics import roc_auc_score , classification_report from sklearn.grid_search import GridSearchCV from sklearn.pipeline import Pipeline from sklearn.metrics import precision_score, recall_score, accuracy_score, classification_report X = df_1[features] y = df_1['phase'] # split dataset to 60% training and 40% testing X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0) print (X_train.shape, y_train.shape,X_test.shape, y_test.shape) """ Explanation: Classification End of explanation """ t0=time() print 
("DecisionTree") dt = DecisionTreeClassifier(min_samples_split=20,random_state=99) # dt = DecisionTreeClassifier(min_samples_split=20,max_depth=5,random_state=99) clf_dt=dt.fit(X_train,y_train) print ("Acurracy: ", clf_dt.score(X_test,y_test)) t1=time() print ("time elapsed: ", t1-t0) tt0=time() print ("cross result========") scores = cross_validation.cross_val_score(dt, X,y, cv=5) print (scores) print (scores.mean()) tt1=time() print ("time elapsed: ", tt1-tt0) """ Explanation: Decision Tree accuracy and time elapsed caculation End of explanation """ t2=time() print ("RandomForest") rf = RandomForestClassifier(n_estimators=100,n_jobs=-1) clf_rf = rf.fit(X_train,y_train) print ("Acurracy: ", clf_rf.score(X_test,y_test)) t3=time() print ("time elapsed: ", t3-t2) tt2=time() print ("cross result========") scores = cross_validation.cross_val_score(rf, X,y, cv=5) print (scores) print (scores.mean()) tt3=time() print ("time elapsed: ", tt3-tt2) """ Explanation: Random Forest accuracy and time elapsed caculation End of explanation """ t4=time() print ("NaiveBayes") nb = BernoulliNB() clf_nb=nb.fit(X_train,y_train) print ("Acurracy: ", clf_nb.score(X_test,y_test)) t5=time() print ("time elapsed: ", t5-t4) tt4=time() print ("cross result========") scores = cross_validation.cross_val_score(nb, X,y, cv=5) print (scores) print (scores.mean()) tt5=time() print ("time elapsed: ", tt5-tt4) """ Explanation: Naive Bayes accuracy and time elapsed caculation End of explanation """ t6=time() print ("KNN") # knn = KNeighborsClassifier(n_neighbors=3) knn = KNeighborsClassifier() clf_knn=knn.fit(X_train, y_train) print ("Acurracy: ", clf_knn.score(X_test,y_test) ) t7=time() print ("time elapsed: ", t7-t6) tt6=time() print ("cross result========") scores = cross_validation.cross_val_score(knn, X,y, cv=5) print (scores) print (scores.mean()) tt7=time() print ("time elapsed: ", tt7-tt6) """ Explanation: KNN accuracy and time elapsed caculation End of explanation """ t7=time() print ("SVM") svc = SVC() clf_svc=svc.fit(X_train, y_train) print ("Acurracy: ", clf_svc.score(X_test,y_test) ) t8=time() print ("time elapsed: ", t8-t7) tt7=time() print ("cross result========") scores = cross_validation.cross_val_score(svc, X,y, cv=5) print (scores) print (scores.mean()) tt8=time() print ("time elapsed: ", tt7-tt6) from sklearn.svm import SVC from sklearn.cross_validation import cross_val_score from sklearn.pipeline import Pipeline from sklearn import grid_search svc = SVC() parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]} grid = grid_search.GridSearchCV(svc, parameters, n_jobs=-1, verbose=1, scoring='accuracy') grid.fit(X_train, y_train) print ('Best score: %0.3f' % grid.best_score_) print ('Best parameters set:') best_parameters = grid.best_estimator_.get_params() for param_name in sorted(parameters.keys()): print ('\t%s: %r' % (param_name, best_parameters[param_name])) predictions = grid.predict(X_test) print (classification_report(y_test, predictions)) pipeline = Pipeline([ ('clf', SVC(kernel='rbf', gamma=0.01, C=100)) ]) parameters = { 'clf__gamma': (0.01, 0.03, 0.1, 0.3, 1), 'clf__C': (0.1, 0.3, 1, 3, 10, 30), } grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='accuracy') grid_search.fit(X_train, y_train) print ('Best score: %0.3f' % grid_search.best_score_) print ('Best parameters set:') best_parameters = grid_search.best_estimator_.get_params() for param_name in sorted(parameters.keys()): print ('\t%s: %r' % (param_name, best_parameters[param_name])) predictions = 
grid_search.predict(X_test) print (classification_report(y_test, predictions)) """ Explanation: SVM accuracy and time elapsed caculation End of explanation """ from sklearn.base import BaseEstimator from sklearn.base import ClassifierMixin from sklearn.preprocessing import LabelEncoder from sklearn.externals import six from sklearn.base import clone from sklearn.pipeline import _name_estimators import numpy as np import operator class MajorityVoteClassifier(BaseEstimator, ClassifierMixin): """ A majority vote ensemble classifier Parameters ---------- classifiers : array-like, shape = [n_classifiers] Different classifiers for the ensemble vote : str, {'classlabel', 'probability'} (default='label') If 'classlabel' the prediction is based on the argmax of class labels. Else if 'probability', the argmax of the sum of probabilities is used to predict the class label (recommended for calibrated classifiers). weights : array-like, shape = [n_classifiers], optional (default=None) If a list of `int` or `float` values are provided, the classifiers are weighted by importance; Uses uniform weights if `weights=None`. """ def __init__(self, classifiers, vote='classlabel', weights=None): self.classifiers = classifiers self.named_classifiers = {key: value for key, value in _name_estimators(classifiers)} self.vote = vote self.weights = weights def fit(self, X, y): """ Fit classifiers. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Matrix of training samples. y : array-like, shape = [n_samples] Vector of target class labels. Returns ------- self : object """ if self.vote not in ('probability', 'classlabel'): raise ValueError("vote must be 'probability' or 'classlabel'" "; got (vote=%r)" % self.vote) if self.weights and len(self.weights) != len(self.classifiers): raise ValueError('Number of classifiers and weights must be equal' '; got %d weights, %d classifiers' % (len(self.weights), len(self.classifiers))) # Use LabelEncoder to ensure class labels start with 0, which # is important for np.argmax call in self.predict self.lablenc_ = LabelEncoder() self.lablenc_.fit(y) self.classes_ = self.lablenc_.classes_ self.classifiers_ = [] for clf in self.classifiers: fitted_clf = clone(clf).fit(X, self.lablenc_.transform(y)) self.classifiers_.append(fitted_clf) return self def predict(self, X): """ Predict class labels for X. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Matrix of training samples. Returns ---------- maj_vote : array-like, shape = [n_samples] Predicted class labels. """ if self.vote == 'probability': maj_vote = np.argmax(self.predict_proba(X), axis=1) else: # 'classlabel' vote # Collect results from clf.predict calls predictions = np.asarray([clf.predict(X) for clf in self.classifiers_]).T maj_vote = np.apply_along_axis( lambda x: np.argmax(np.bincount(x, weights=self.weights)), axis=1, arr=predictions) maj_vote = self.lablenc_.inverse_transform(maj_vote) return maj_vote def predict_proba(self, X): """ Predict class probabilities for X. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns ---------- avg_proba : array-like, shape = [n_samples, n_classes] Weighted average probability for each class per sample. 
""" probas = np.asarray([clf.predict_proba(X) for clf in self.classifiers_]) avg_proba = np.average(probas, axis=0, weights=self.weights) return avg_proba def get_params(self, deep=True): """ Get classifier parameter names for GridSearch""" if not deep: return super(MajorityVoteClassifier, self).get_params(deep=False) else: out = self.named_classifiers.copy() for name, step in six.iteritems(self.named_classifiers): for key, value in six.iteritems(step.get_params(deep=True)): out['%s__%s' % (name, key)] = value return out from sklearn.cross_validation import cross_val_score from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.pipeline import Pipeline import numpy as np from sklearn.preprocessing import StandardScaler clf1 = LogisticRegression(penalty='l2', C=0.001, random_state=0) clf2 = DecisionTreeClassifier(max_depth=1, criterion='entropy', random_state=0) clf3 = KNeighborsClassifier(n_neighbors=1, p=2, metric='minkowski') pipe1 = Pipeline([['sc', StandardScaler()], ['clf', clf1]]) pipe3 = Pipeline([['sc', StandardScaler()], ['clf', clf3]]) clf_labels = ['Logistic Regression', 'Decision Tree', 'KNN'] print('10-fold cross validation:\n') for clf, label in zip([pipe1, clf2, pipe3], clf_labels): scores = cross_val_score(estimator=clf, X=X_train, y=y_train, cv=10, scoring='accuracy') print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label)) # Majority Rule (hard) Voting mv_clf = MajorityVoteClassifier( classifiers=[pipe1, clf2, pipe3]) clf_labels += ['Majority Voting'] all_clf = [pipe1, clf2, pipe3, mv_clf] for clf, label in zip(all_clf, clf_labels): scores = cross_val_score(estimator=clf, X=X_train, y=y_train, cv=10, scoring='accuracy') print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label)) mv_clf.get_params() from sklearn.grid_search import GridSearchCV params = {'decisiontreeclassifier__max_depth': [1, 2], 'pipeline-1__clf__C': [0.001, 0.1, 100.0]} grid = GridSearchCV(estimator=mv_clf, param_grid=params, cv=10, scoring='accuracy') grid.fit(X_train, y_train) for params, mean_score, scores in grid.grid_scores_: print("%0.3f+/-%0.2f %r" % (mean_score, scores.std() / 2, params)) print('Best parameters: %s' % grid.best_params_) print('Accuracy: %.2f' % grid.best_score_) """ Explanation: Ensemble Learning End of explanation """ from sklearn.ensemble import BaggingClassifier from sklearn.tree import DecisionTreeClassifier tree = DecisionTreeClassifier(criterion='entropy', max_depth=None) bag = BaggingClassifier(base_estimator=tree, n_estimators=500, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, n_jobs=1, random_state=1) from sklearn.metrics import accuracy_score tree = tree.fit(X_train, y_train) y_train_pred = tree.predict(X_train) y_test_pred = tree.predict(X_test) tree_train = accuracy_score(y_train, y_train_pred) tree_test = accuracy_score(y_test, y_test_pred) print('Decision tree train/test accuracies %.3f/%.3f' % (tree_train, tree_test)) bag = bag.fit(X_train, y_train) y_train_pred = bag.predict(X_train) y_test_pred = bag.predict(X_test) bag_train = accuracy_score(y_train, y_train_pred) bag_test = accuracy_score(y_test, y_test_pred) print('Bagging train/test accuracies %.3f/%.3f' % (bag_train, bag_test)) """ Explanation: Bagging -- Building an ensemble of classifiers from bootstrap samples End of explanation """ from sklearn.ensemble import AdaBoostClassifier tree = 
DecisionTreeClassifier(criterion='entropy', max_depth=1) ada = AdaBoostClassifier(base_estimator=tree, n_estimators=500, learning_rate=0.1, random_state=0) tree = tree.fit(X_train, y_train) y_train_pred = tree.predict(X_train) y_test_pred = tree.predict(X_test) tree_train = accuracy_score(y_train, y_train_pred) tree_test = accuracy_score(y_test, y_test_pred) print('Decision tree train/test accuracies %.3f/%.3f' % (tree_train, tree_test)) ada = ada.fit(X_train, y_train) y_train_pred = ada.predict(X_train) y_test_pred = ada.predict(X_test) ada_train = accuracy_score(y_train, y_train_pred) ada_test = accuracy_score(y_test, y_test_pred) print('AdaBoost train/test accuracies %.3f/%.3f' % (ada_train, ada_test)) """ Explanation: Leveraging weak learners via adaptive boosting End of explanation """
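As a compact wrap-up (not part of the original notebook), the individual classifiers trained above can also be compared side by side in one cross-validated table. This is only a sketch: it assumes X and y as constructed earlier, and it reuses the same older scikit-learn cross_validation API that this notebook already imports.

# Sketch: 5-fold CV accuracy for the classifiers used above, collected in one table.
import pandas as pd
from sklearn.cross_validation import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier

candidates = [
    ('DecisionTree', DecisionTreeClassifier(min_samples_split=20, random_state=99)),
    ('RandomForest', RandomForestClassifier(n_estimators=100, n_jobs=-1)),
    ('NaiveBayes', BernoulliNB()),
    ('KNN', KNeighborsClassifier()),
]

summary = []
for name, clf in candidates:
    scores = cross_val_score(clf, X, y, cv=5)   # same CV protocol used above
    summary.append({'model': name, 'cv_mean': scores.mean(), 'cv_std': scores.std()})

print(pd.DataFrame(summary, columns=['model', 'cv_mean', 'cv_std']))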
deepchem/deepchem
examples/tutorials/Introduction_to_Gaussian_Processes.ipynb
mit
%pip install --pre deepchem """ Explanation: Introduction to Gaussian Processes In the world of cheminformatics and machine learning, models are often trees (random forest, XGBoost, etc.) or artifical neural networks (deep neural networks, graph convolutional networks, etc.). These models are known as "Frequentist" models. However, there is another category known as Bayesian models. Today we will be experimenting with a Bayesian model implemented in scikit-learn known as gaussian processes (GP). For a deeper dive on GP, there is a great tutorial paper on how GP works for regression. There is also an academic paper that applies GP to a real world problem. As a short intro, GP allows us to build up our statistical model using an infinite number of Gaussian functions over our n-dimensional space, where n is the number of features. However, we pick these functions based on how well they fit the data we pass it. We end up with a statistical model built from an ensemble of Gaussian functions which can actually vary quite a bit. The result is that for points we have trained the model on, the variance in our ensemble should be very low. For test set points close to the training set points, the variance should be higher but still low as the ensemble was picked to predict well in its neighborhood. For points far from the training set points, however, we did not pick our ensemble of Gaussian functions to fit them so we'd expect the variance in our ensemble to be high. In this way, we end up with a statistical model that allows for a natural generation of uncertainty. Colab This tutorial and the rest in the sequences are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. Setup The first step is to get DeepChem up and running. We recommend using Google Colab to work through this tutorial series. You'll need to run the following commands to get DeepChem installed on your colab notebook. End of explanation """ import deepchem as dc from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.gaussian_process.kernels import RBF, WhiteKernel import numpy as np import matplotlib.pyplot as plt """ Explanation: Gaussian Processes As stated earlier, GP is already implemented in scikit-learn so we will be using DeepChem's scikit-learn wrapper. SklearnModel is a subclass of DeepChem's Model class. It acts as a wrapper around a sklearn.base.BaseEstimator. Here we import deepchem and the GP regressor model from sklearn. End of explanation """ tasks, datasets, transformers = dc.molnet.load_bace_regression(featurizer='ecfp', splitter='random') train_dataset, valid_dataset, test_dataset = datasets """ Explanation: Loading data Next we need a dataset that presents a regression problem. For this tutorial we will be using the BACE dataset from MoleculeNet. End of explanation """ print(f'The tasks are: {tasks}') print(f'The transformers are: {transformers}') print(f'The transformer normalizes the outputs (y values): {transformers[0].transform_y}') """ Explanation: I always like to get a close look at what the objects in my code are storing. We see that tasks is a list of tasks that we are trying to predict. The transformer is a NormalizationTransformer that normalizes the outputs (y values) of the dataset. End of explanation """ print(train_dataset) print(valid_dataset) print(test_dataset) """ Explanation: Here we see that the data has already been split into a training set, a validation set, and a test set. 
We will train the model on the training set and test the accuracy of the model on the test set. If we were to do any hyperparameter tuning, we would use the validation set. The split was ~80/10/10 train/valid/test. End of explanation """ output_variance = 7.908735015054668 length_scale = 6.452349252677817 noise_level = 0.10475507755839343 kernel = output_variance**2 * RBF(length_scale=length_scale, length_scale_bounds='fixed') + WhiteKernel(noise_level=noise_level, noise_level_bounds='fixed') alpha = 4.989499481123432e-09 sklearn_gpr = GaussianProcessRegressor(kernel=kernel, alpha=alpha) model = dc.models.SklearnModel(sklearn_gpr) """ Explanation: Using the SklearnModel Here we first create the model using the GaussianProcessRegressor we imported from sklearn. Then we wrap it in DeepChem's SklearnModel. To learn more about the model, you can either read the sklearn API or run help(GaussianProcessRegressor) in a code block. As you see, the values I picked for the parameters seem awfully specific. This is because I needed to do some hyperparameter tuning beforehand to get model that wasn't wildly overfitting the training set. You can learn more about how I tuned the model in the Appendix at the end of this tutorial. End of explanation """ model.fit(train_dataset) metric1 = dc.metrics.Metric(dc.metrics.mean_squared_error) metric2 = dc.metrics.Metric(dc.metrics.r2_score) print(f'Training set score: {model.evaluate(train_dataset, [metric1, metric2])}') print(f'Test set score: {model.evaluate(test_dataset, [metric1, metric2])}') """ Explanation: Then we fit our model to the data and see how it performs both on the training set and on the test set. End of explanation """ def predict_with_error(dc_model, X, y_transformer): samples = model.model.sample_y(X, 100) means = y_transformer.untransform(np.mean(samples, axis=1)) stds = y_transformer.y_stds[0] * np.std(samples, axis=1) return means, stds """ Explanation: Analyzing the Results We can also visualize how well the predicted values match up to the measured values. First we need a function that allows us to obtain both the mean predicted value and the standard deviation of the value. This is done by sampling 100 predictions from each set of inputs X and calculating the mean and standard deviation. End of explanation """ y_meas_train = transformers[0].untransform(train_dataset.y) y_pred_train, y_pred_train_stds = predict_with_error(model, train_dataset.X, transformers[0]) plt.xlim([2.5, 10.5]) plt.ylim([2.5, 10.5]) plt.scatter(y_meas_train, y_pred_train) """ Explanation: For our training set, we see a pretty good correlation between the measured values (x-axis) and the predicted values (y-axis). Note that we use the transformer from earlier to untransform our predicted values. End of explanation """ y_meas_test = transformers[0].untransform(test_dataset.y) y_pred_test, y_pred_test_stds = predict_with_error(model, test_dataset.X, transformers[0]) plt.xlim([2.5, 10.5]) plt.ylim([2.5, 10.5]) plt.scatter(y_meas_test, y_pred_test) """ Explanation: We now do the same for our test set. We see a fairly good correlation! However, it is certainly not as tight. This is reflected in the difference between the R2 scores calculated above. 
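As an optional quick check (not part of the original tutorial), the same train/test gap can be quantified on the untransformed scale directly from the arrays computed above:

# Sketch: summarize fit quality on the original (untransformed) scale.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

for name, meas, pred in [('train', y_meas_train, y_pred_train),
                         ('test', y_meas_test, y_pred_test)]:
    rmse = np.sqrt(mean_squared_error(meas, pred))
    print(f'{name}: R2 = {r2_score(meas, pred):.3f}, RMSE = {rmse:.3f}')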
End of explanation """ def percent_within_std(y_meas, y_pred, y_std): assert len(y_meas) == len(y_pred) and len(y_meas) == len(y_std), 'length of y_meas and y_pred must be the same' count_within_error = 0 for i in range(len(y_meas)): if abs(y_meas[i][0]-y_pred[i]) < y_std[i]: count_within_error += 1 return count_within_error/len(y_meas) """ Explanation: We can also write a function to calculate how many of the predicted values fall within the predicted error range. This is done by counting up how many samples have a true error smaller than its standard deviation calculated earlier. One standard deviation is a 68% confidence interval. End of explanation """ percent_within_std(y_meas_train, y_pred_train, y_pred_train_stds) percent_within_std(y_meas_test, y_pred_test, y_pred_test_stds) """ Explanation: For the train set, >90% of the samples are within a standard deviation. In comparison, only ~70% of the samples are within a standard deviation for the test set. A standard deviation is a 68% confidence interval so we see that for the training set, the uncertainty is close. However, this model overpredicts uncertainty on the training set. End of explanation """ plt.hist(y_pred_test_stds) plt.show() """ Explanation: We can also take a look at the distributions of the standard deviations for the test set predictions. We see a very roughly Gaussian distribution in the predicted errors. End of explanation """ %pip install optuna import optuna def get_model(trial): output_variance = trial.suggest_float('output_variance', 0.1, 10, log=True) length_scale = trial.suggest_float('length_scale', 1e-5, 1e5, log=True) noise_level = trial.suggest_float('noise_level', 1e-5, 1e5, log=True) params = { 'kernel': output_variance**2 * RBF(length_scale=length_scale, length_scale_bounds='fixed') + WhiteKernel(noise_level=noise_level, noise_level_bounds='fixed'), 'alpha': trial.suggest_float('alpha', 1e-12, 1e-5, log=True), } sklearn_gpr = GaussianProcessRegressor(**params) return dc.models.SklearnModel(sklearn_gpr) def objective(trial): model = get_model(trial) model.fit(train_dataset) metric = dc.metrics.Metric(dc.metrics.mean_squared_error) return model.evaluate(valid_dataset, [metric])['mean_squared_error'] study = optuna.create_study(direction='minimize') study.optimize(objective, n_trials=100) print(study.best_params) """ Explanation: For now, this is the end of our tutorial. We plan to follow up soon with a deeper dive into uncertainty estimation and in particular, calibrated uncertainty estimation. We will see you then! Appendix: Hyperparameter Optimization As hyperparameter optimization is outside the scope of this tutorial, I will not explain how to use Optuna to tune hyperparameters. But the code is still included for the sake of completeness. End of explanation """
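One natural follow-up (not shown in the original tutorial) is to rebuild a model from the best hyperparameters reported by the study above and score it on the held-out test set. The sketch below mirrors the kernel construction used earlier in this notebook and only relies on objects already defined here.

# Sketch: refit a GP with the tuned hyperparameters and evaluate on the test set.
best = study.best_params
tuned_kernel = (best['output_variance'] ** 2
                * RBF(length_scale=best['length_scale'], length_scale_bounds='fixed')
                + WhiteKernel(noise_level=best['noise_level'], noise_level_bounds='fixed'))
tuned_model = dc.models.SklearnModel(
    GaussianProcessRegressor(kernel=tuned_kernel, alpha=best['alpha']))
tuned_model.fit(train_dataset)

metrics_to_report = [dc.metrics.Metric(dc.metrics.mean_squared_error),
                     dc.metrics.Metric(dc.metrics.r2_score)]
print('Tuned model, test set:', tuned_model.evaluate(test_dataset, metrics_to_report))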
huizhuzhao/jupyter_notebook
RNNLM.ipynb
mit
import csv import itertools import operator import numpy as np import nltk import sys from datetime import datetime from utils import * import matplotlib.pyplot as plt %matplotlib inline # Download NLTK model data (you need to do this once) nltk.download("book") """ Explanation: Recurrent Neural Networks Tutorial, Part 2 – Implementing a Language Model RNN with Python, Numpy and Theano End of explanation """ vocabulary_size = 8000 unknown_token = "UNKNOWN_TOKEN" sentence_start_token = "SENTENCE_START" sentence_end_token = "SENTENCE_END" # Read the data and append SENTENCE_START and SENTENCE_END tokens print "Reading CSV file..." with open('data/reddit-comments-2015-08.csv', 'rb') as f: reader = csv.reader(f, skipinitialspace=True) reader.next() # Split full comments into sentences sentences = itertools.chain(*[nltk.sent_tokenize(x[0].decode('utf-8').lower()) for x in reader]) # Append SENTENCE_START and SENTENCE_END sentences = ["%s %s %s" % (sentence_start_token, x, sentence_end_token) for x in sentences] print "Parsed %d sentences." % (len(sentences)) # Tokenize the sentences into words tokenized_sentences = [nltk.word_tokenize(sent) for sent in sentences] # Count the word frequencies word_freq = nltk.FreqDist(itertools.chain(*tokenized_sentences)) print "Found %d unique words tokens." % len(word_freq.items()) # Get the most common words and build index_to_word and word_to_index vectors vocab = word_freq.most_common(vocabulary_size-1) index_to_word = [x[0] for x in vocab] index_to_word.append(unknown_token) word_to_index = dict([(w,i) for i,w in enumerate(index_to_word)]) print "Using vocabulary size %d." % vocabulary_size print "The least frequent word in our vocabulary is '%s' and appeared %d times." % (vocab[-1][0], vocab[-1][1]) # Replace all words not in our vocabulary with the unknown token for i, sent in enumerate(tokenized_sentences): tokenized_sentences[i] = [w if w in word_to_index else unknown_token for w in sent] print "\nExample sentence: '%s'" % sentences[0] print "\nExample sentence after Pre-processing: '%s'" % tokenized_sentences[0] # Create the training data X_train = np.asarray([[word_to_index[w] for w in sent[:-1]] for sent in tokenized_sentences]) y_train = np.asarray([[word_to_index[w] for w in sent[1:]] for sent in tokenized_sentences]) """ Explanation: This the second part of the Recurrent Neural Network Tutorial. The first part is here. In this part we will implement a full Recurrent Neural Network from scratch using Python and optimize our implementation using Theano, a library to perform operations on a GPU. The full code is available on Github. I will skip over some boilerplate code that is not essential to understanding Recurrent Neural Networks, but all of that is also on Github. Language Modeling Our goal is to build a Language Model using a Recurrent Neural Network. Here's what that means. Let's say we have sentence of $m$ words. Language Model allows us to predict the probability of observing the sentence (in a given dataset) as: $ \begin{aligned} P(w_1,...,w_m) = \prod_{i=1}^{m}P(w_i \mid w_1,..., w_{i-1}) \end{aligned} $ In words, the probability of a sentence is the product of probabilities of each word given the words that came before it. So, the probability of the sentence "He went to buy some chocolate" would be the probability of "chocolate" given "He went to buy some", multiplied by the probability of "some" given "He went to buy", and so on. Why is that useful? Why would we want to assign a probability to observing a sentence? 
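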
First, such a model can be used as a scoring mechanism. For example, a Machine Translation system typically generates multiple candidates for an input sentence. You could use a language model to pick the most probable sentence. Intuitively, the most probable sentence is likely to be grammatically correct. Similar scoring happens in speech recognition systems. But solving the Language Modeling problem also has a cool side effect. Because we can predict the probability of a word given the preceding words, we are able to generate new text. It's a generative model. Given an existing sequence of words we sample a next word from the predicted probabilities, and repeat the process until we have a full sentence. Andrej Karpathy has a great post that demonstrates what language models are capable of. His models are trained on single characters as opposed to full words, and can generate anything from Shakespeare to Linux code. Note that in the above equation the probability of each word is conditioned on all previous words. In practice, many models have a hard time representing such long-term dependencies due to computational or memory constraints. They are typically limited to looking at only a few of the previous words. RNNs can, in theory, capture such long-term dependencies, but in practice it's a bit more complex. We'll explore that in a later post. Training Data and Preprocessing To train our language model we need text to learn from. Fortunately we don't need any labels to train a language model, just raw text. I downloaded 15,000 longish reddit comments from a dataset available on Google's BigQuery. Text generated by our model will sound like reddit commenters (hopefully)! But as with most Machine Learning projects we first need to do some pre-processing to get our data into the right format. 1. Tokenize Text We have raw text, but we want to make predictions on a per-word basis. This means we must tokenize our comments into sentences, and sentences into words. We could just split each of the comments by spaces, but that wouldn't handle punctuation properly. The sentence "He left!" should be 3 tokens: "He", "left", "!". We'll use NLTK's word_tokenize and sent_tokenize methods, which do most of the hard work for us. 2. Remove infrequent words Most words in our text will only appear one or two times. It's a good idea to remove these infrequent words. Having a huge vocabulary will make our model slow to train (we'll talk about why that is later), and because we don't have a lot of contextual examples for such words we wouldn't be able to learn how to use them correctly anyway. That's quite similar to how humans learn. To really understand how to appropriately use a word you need to have seen it in different contexts. In our code we limit our vocabulary to the vocabulary_size most common words (which I set to 8000, but feel free to change it). We replace all words not included in our vocabulary by UNKNOWN_TOKEN. For example, if we don't include the word "nonlinearities" in our vocabulary, the sentence "nonlinearities are important in neural networks" becomes "UNKNOWN_TOKEN are important in Neural Networks". The word UNKNOWN_TOKEN will become part of our vocabulary and we will predict it just like any other word. When we generate new text we can replace UNKNOWN_TOKEN again, for example by taking a randomly sampled word not in our vocabulary, or we could just generate sentences until we get one that doesn't contain an unknown token. 3. Prepend special start and end tokens
We also want to learn which words tend to start and end a sentence. To do this we prepend a special SENTENCE_START token, and append a special SENTENCE_END token to each sentence. This allows us to ask: Given that the first token is SENTENCE_START, what is the likely next word (the actual first word of the sentence)? 4. Build training data matrices The inputs to our Recurrent Neural Network are vectors, not strings. So we create a mapping between words and indices, index_to_word, and word_to_index. For example, the word "friendly" may be at index 2001. A training example $x$ may look like [0, 179, 341, 416], where 0 corresponds to SENTENCE_START. The corresponding label $y$ would be [179, 341, 416, 1]. Remember that our goal is to predict the next word, so y is just the x vector shifted by one position with the last element being the SENTENCE_END token. In other words, the correct prediction for word 179 above would be 341, the actual next word. End of explanation """ # Print a training data example x_example, y_example = X_train[17], y_train[17] print "x:\n%s\n%s" % (" ".join([index_to_word[x] for x in x_example]), x_example) print "\ny:\n%s\n%s" % (" ".join([index_to_word[x] for x in y_example]), y_example) """ Explanation: Here's an actual training example from our text: End of explanation """ class RNNNumpy: def __init__(self, word_dim, hidden_dim=100, bptt_truncate=4): # Assign instance variables self.word_dim = word_dim self.hidden_dim = hidden_dim self.bptt_truncate = bptt_truncate # Randomly initialize the network parameters self.U = np.random.uniform(-np.sqrt(1./word_dim), np.sqrt(1./word_dim), (hidden_dim, word_dim)) self.V = np.random.uniform(-np.sqrt(1./hidden_dim), np.sqrt(1./hidden_dim), (word_dim, hidden_dim)) self.W = np.random.uniform(-np.sqrt(1./hidden_dim), np.sqrt(1./hidden_dim), (hidden_dim, hidden_dim)) """ Explanation: Building the RNN For a general overview of RNNs take a look at the first part of the tutorial. Let's get concrete and see what the RNN for our language model looks like. The input $x$ will be a sequence of words (just like the example printed above) and each $x_t$ is a single word. But there's one more thing: Because of how matrix multiplication works we can't simply use a word index (like 36) as an input. Instead, we represent each word as a one-hot vector of size vocabulary_size. For example, the word with index 36 would be the vector of all 0's and a 1 at position 36. So, each $x_t$ will become a vector, and $x$ will be a matrix, with each row representing a word. We'll perform this transformation in our Neural Network code instead of doing it in the pre-processing. The output of our network $o$ has a similar format. Each $o_t$ is a vector of vocabulary_size elements, and each element represents the probability of that word being the next word in the sentence. Let's recap the equations for the RNN from the first part of the tutorial: $ \begin{aligned} s_t &= \tanh(Ux_t + Ws_{t-1}) \\ o_t &= \mathrm{softmax}(Vs_t) \end{aligned} $ I always find it useful to write down the dimensions of the matrices and vectors. Let's assume we pick a vocabulary size $C = 8000$ and a hidden layer size $H = 100$. You can think of the hidden layer size as the "memory" of our network. Making it bigger allows us to learn more complex patterns, but also results in additional computation.
Then we have: $ \begin{aligned} x_t & \in \mathbb{R}^{8000} \ o_t & \in \mathbb{R}^{8000} \ s_t & \in \mathbb{R}^{100} \ U & \in \mathbb{R}^{100 \times 8000} \ V & \in \mathbb{R}^{8000 \times 100} \ W & \in \mathbb{R}^{100 \times 100} \ \end{aligned} $ This is valuable information. Remember that $U,V$ and $W$ are the parameters of our network we want to learn from data. Thus, we need to learn a total of $2HC + H^2$ parameters. In the case of $C=8000$ and $H=100$ that's 1,610,000. The dimensions also tell us the bottleneck of our model. Note that because $x_t$ is a one-hot vector, multiplying it with $U$ is essentially the same as selecting a column of U, so we don't need to perform the full multiplication. Then, the biggest matrix multiplication in our network is $Vs_t$. That's why we want to keep our vocabulary size small if possible. Armed with this, it's time to start our implementation. Initialization We start by declaring a RNN class an initializing our parameters. I'm calling this class RNNNumpy because we will implement a Theano version later. Initializing the parameters $U,V$ and $W$ is a bit tricky. We can't just initialize them to 0's because that would result in symmetric calculations in all our layers. We must initialize them randomly. Because proper initialization seems to have an impact on training results there has been lot of research in this area. It turns out that the best initialization depends on the activation function ($\tanh$ in our case) and one recommended approach is to initialize the weights randomly in the interval from $\left[-\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}\right]$ where $n$ is the number of incoming connections from the previous layer. This may sound overly complicated, but don't worry too much about it. As long as you initialize your parameters to small random values it typically works out fine. End of explanation """ def forward_propagation(self, x): # The total number of time steps T = len(x) # During forward propagation we save all hidden states in s because need them later. # We add one additional element for the initial hidden, which we set to 0 s = np.zeros((T + 1, self.hidden_dim)) s[-1] = np.zeros(self.hidden_dim) # The outputs at each time step. Again, we save them for later. o = np.zeros((T, self.word_dim)) # For each time step... for t in np.arange(T): # Note that we are indxing U by x[t]. This is the same as multiplying U with a one-hot vector. s[t] = np.tanh(self.U[:,x[t]] + self.W.dot(s[t-1])) o[t] = softmax(self.V.dot(s[t])) return [o, s] RNNNumpy.forward_propagation = forward_propagation """ Explanation: Above, word_dim is the size of our vocabulary, and hidden_dim is the size of our hidden layer (we can pick it). Don't worry about the bptt_truncate parameter for now, we'll explain what that is later. Forward Propagation Next, let's implement the forward propagation (predicting word probabilities) defined by our equations above: End of explanation """ def predict(self, x): # Perform forward propagation and return index of the highest score o, s = self.forward_propagation(x) return np.argmax(o, axis=1) RNNNumpy.predict = predict """ Explanation: We not only return the calculated outputs, but also the hidden states. We will use them later to calculate the gradients, and by returning them here we avoid duplicate computation. Each $o_t$ is a vector of probabilities representing the words in our vocabulary, but sometimes, for example when evaluating our model, all we want is the next word with the highest probability. 
We call this function predict: End of explanation """ np.random.seed(10) model = RNNNumpy(vocabulary_size) o, s = model.forward_propagation(X_train[10]) print o.shape print o """ Explanation: Let's try our newly implemented methods and see an example output: End of explanation """ predictions = model.predict(X_train[10]) print predictions.shape print predictions """ Explanation: For each word in the sentence (45 above), our model made 8000 predictions representing probabilities of the next word. Note that because we initialized $U,V,W$ to random values these predictions are completely random right now. The following gives the indices of the highest probability predictions for each word: End of explanation """ def calculate_total_loss(self, x, y): L = 0 # For each sentence... for i in np.arange(len(y)): o, s = self.forward_propagation(x[i]) # We only care about our prediction of the "correct" words correct_word_predictions = o[np.arange(len(y[i])), y[i]] # Add to the loss based on how off we were L += -1 * np.sum(np.log(correct_word_predictions)) return L def calculate_loss(self, x, y): # Divide the total loss by the number of training examples N = np.sum((len(y_i) for y_i in y)) return self.calculate_total_loss(x,y)/N RNNNumpy.calculate_total_loss = calculate_total_loss RNNNumpy.calculate_loss = calculate_loss """ Explanation: Calculating the Loss To train our network we need a way to measure the errors it makes. We call this the loss function $L$, and our goal is find the parameters $U,V$ and $W$ that minimize the loss function for our training data. A common choice for the loss function is the cross-entropy loss. If we have $N$ training examples (words in our text) and $C$ classes (the size of our vocabulary) then the loss with respect to our predictions $o$ and the true labels $y$ is given by: $ \begin{aligned} L(y,o) = - \frac{1}{N} \sum_{n \in N} y_{n} \log o_{n} \end{aligned} $ The formula looks a bit complicated, but all it really does is sum over our training examples and add to the loss based on how off our prediction are. The further away $y$ (the correct words) and $o$ (our predictions), the greater the loss will be. We implement the function calculate_loss: End of explanation """ # Limit to 1000 examples to save time print "Expected Loss for random predictions: %f" % np.log(vocabulary_size) print "Actual loss: %f" % model.calculate_loss(X_train[:1000], y_train[:1000]) """ Explanation: Let's take a step back and think about what the loss should be for random predictions. That will give us a baseline and make sure our implementation is correct. We have $C$ words in our vocabulary, so each word should be (on average) predicted with probability $1/C$, which would yield a loss of $L = -\frac{1}{N} N \log\frac{1}{C} = \log C$: End of explanation """ def bptt(self, x, y): T = len(y) # Perform forward propagation o, s = self.forward_propagation(x) # We accumulate the gradients in these variables dLdU = np.zeros(self.U.shape) dLdV = np.zeros(self.V.shape) dLdW = np.zeros(self.W.shape) delta_o = o delta_o[np.arange(len(y)), y] -= 1. # For each output backwards... 
for t in np.arange(T)[::-1]: dLdV += np.outer(delta_o[t], s[t].T) # Initial delta calculation delta_t = self.V.T.dot(delta_o[t]) * (1 - (s[t] ** 2)) # Backpropagation through time (for at most self.bptt_truncate steps) for bptt_step in np.arange(max(0, t-self.bptt_truncate), t+1)[::-1]: # print "Backpropagation step t=%d bptt step=%d " % (t, bptt_step) dLdW += np.outer(delta_t, s[bptt_step-1]) dLdU[:,x[bptt_step]] += delta_t # Update delta for next step delta_t = self.W.T.dot(delta_t) * (1 - s[bptt_step-1] ** 2) return [dLdU, dLdV, dLdW] RNNNumpy.bptt = bptt """ Explanation: Pretty close! Keep in mind that evaluating the loss on the full dataset is an expensive operation and can take hours if you have a lot of data! Training the RNN with SGD and Backpropagation Through Time (BPTT) Remember that we want to find the parameters $U,V$ and $W$ that minimize the total loss on the training data. The most common way to do this is SGD, Stochastic Gradient Descent. The idea behind SGD is pretty simple. We iterate over all our training examples and during each iteration we nudge the parameters into a direction that reduces the error. These directions are given by the gradients on the loss: $\frac{\partial L}{\partial U}, \frac{\partial L}{\partial V}, \frac{\partial L}{\partial W}$. SGD also needs a learning rate, which defines how big of a step we want to make in each iteration. SGD is the most popular optimization method not only for Neural Networks, but also for many other Machine Learning algorithms. As such there has been a lot of research on how to optimize SGD using batching, parallelism and adaptive learning rates. Even though the basic idea is simple, implementing SGD in a really efficient way can become very complex. If you want to learn more about SGD this is a good place to start. Due to its popularity there are a wealth of tutorials floating around the web, and I don't want to duplicate them here. I'll implement a simple version of SGD that should be understandable even without a background in optimization. But how do we calculate those gradients we mentioned above? In a traditional Neural Network we do this through the backpropagation algorithm. In RNNs we use a slightly modified version of the this algorithm called Backpropagation Through Time (BPTT). Because the parameters are shared by all time steps in the network, the gradient at each output depends not only on the calculations of the current time step, but also the previous time steps. If you know calculus, it really is just applying the chain rule. The next part of the tutorial will be all about BPTT, so I won't go into detailed derivation here. For a general introduction to backpropagation check out this and this post. For now you can treat BPTT as a black box. It takes as input a training example $(x,y)$ and returns the gradients $\frac{\partial L}{\partial U}, \frac{\partial L}{\partial V}, \frac{\partial L}{\partial W}$. End of explanation """ def gradient_check(self, x, y, h=0.001, error_threshold=0.01): # Calculate the gradients using backpropagation. We want to checker if these are correct. bptt_gradients = model.bptt(x, y) # List of all parameters we want to check. model_parameters = ['U', 'V', 'W'] # Gradient check for each parameter for pidx, pname in enumerate(model_parameters): # Get the actual parameter value from the mode, e.g. model.W parameter = operator.attrgetter(pname)(self) print "Performing gradient check for parameter %s with size %d." 
% (pname, np.prod(parameter.shape)) # Iterate over each element of the parameter matrix, e.g. (0,0), (0,1), ... it = np.nditer(parameter, flags=['multi_index'], op_flags=['readwrite']) while not it.finished: ix = it.multi_index # Save the original value so we can reset it later original_value = parameter[ix] # Estimate the gradient using (f(x+h) - f(x-h))/(2*h) parameter[ix] = original_value + h gradplus = model.calculate_total_loss([x],[y]) parameter[ix] = original_value - h gradminus = model.calculate_total_loss([x],[y]) estimated_gradient = (gradplus - gradminus)/(2*h) # Reset parameter to original value parameter[ix] = original_value # The gradient for this parameter calculated using backpropagation backprop_gradient = bptt_gradients[pidx][ix] # calculate The relative error: (|x - y|/(|x| + |y|)) relative_error = np.abs(backprop_gradient - estimated_gradient)/(np.abs(backprop_gradient) + np.abs(estimated_gradient)) # If the error is to large fail the gradient check if relative_error > error_threshold: print "Gradient Check ERROR: parameter=%s ix=%s" % (pname, ix) print "+h Loss: %f" % gradplus print "-h Loss: %f" % gradminus print "Estimated_gradient: %f" % estimated_gradient print "Backpropagation gradient: %f" % backprop_gradient print "Relative Error: %f" % relative_error return it.iternext() print "Gradient check for parameter %s passed." % (pname) RNNNumpy.gradient_check = gradient_check # To avoid performing millions of expensive calculations we use a smaller vocabulary size for checking. grad_check_vocab_size = 100 np.random.seed(10) model = RNNNumpy(grad_check_vocab_size, 10, bptt_truncate=1000) model.gradient_check([0,1,2,3], [1,2,3,4]) """ Explanation: Gradient Checking Whenever you implement backpropagation it is good idea to also implement gradient checking, which is a way of verifying that your implementation is correct. The idea behind gradient checking is that derivative of a parameter is equal to the slope at the point, which we can approximate by slightly changing the parameter and then dividing by the change: $ \begin{aligned} \frac{\partial L}{\partial \theta} \approx \lim_{h \to 0} \frac{J(\theta + h) - J(\theta -h)}{2h} \end{aligned} $ We then compare the gradient we calculated using backpropagation to the gradient we estimated with the method above. If there's no large difference we are good. The approximation needs to calculate the total loss for every parameter, so that gradient checking is very expensive (remember, we had more than a million parameters in the example above). So it's a good idea to perform it on a model with a smaller vocabulary. End of explanation """ # Performs one step of SGD. 
def numpy_sdg_step(self, x, y, learning_rate): # Calculate the gradients dLdU, dLdV, dLdW = self.bptt(x, y) # Change parameters according to gradients and learning rate self.U -= learning_rate * dLdU self.V -= learning_rate * dLdV self.W -= learning_rate * dLdW RNNNumpy.sgd_step = numpy_sdg_step # Outer SGD Loop # - model: The RNN model instance # - X_train: The training data set # - y_train: The training data labels # - learning_rate: Initial learning rate for SGD # - nepoch: Number of times to iterate through the complete dataset # - evaluate_loss_after: Evaluate the loss after this many epochs def train_with_sgd(model, X_train, y_train, learning_rate=0.005, nepoch=100, evaluate_loss_after=5): # We keep track of the losses so we can plot them later losses = [] num_examples_seen = 0 for epoch in range(nepoch): # Optionally evaluate the loss if (epoch % evaluate_loss_after == 0): loss = model.calculate_loss(X_train, y_train) losses.append((num_examples_seen, loss)) time = datetime.now().strftime('%Y-%m-%d %H:%M:%S') print "%s: Loss after num_examples_seen=%d epoch=%d: %f" % (time, num_examples_seen, epoch, loss) # Adjust the learning rate if loss increases if (len(losses) > 1 and losses[-1][1] > losses[-2][1]): learning_rate = learning_rate * 0.5 print "Setting learning rate to %f" % learning_rate sys.stdout.flush() # For each training example... for i in range(len(y_train)): # One SGD step model.sgd_step(X_train[i], y_train[i], learning_rate) num_examples_seen += 1 """ Explanation: SGD Implementation Now that we are able to calculate the gradients for our parameters we can implement SGD. I like to do this in two steps: 1. A function sdg_step that calculates the gradients and performs the updates for one batch. 2. An outer loop that iterates through the training set and adjusts the learning rate. End of explanation """ np.random.seed(10) model = RNNNumpy(vocabulary_size) %timeit model.sgd_step(X_train[10], y_train[10], 0.005) """ Explanation: Done! Let's try to get a sense of how long it would take to train our network: End of explanation """ np.random.seed(10) # Train on a small subset of the data to see what happens model = RNNNumpy(vocabulary_size) losses = train_with_sgd(model, X_train[:100], y_train[:100], nepoch=10, evaluate_loss_after=1) """ Explanation: Uh-oh, bad news. One step of SGD takes approximately 350 milliseconds on my laptop. We have about 80,000 examples in our training data, so one epoch (iteration over the whole data set) would take several hours. Multiple epochs would take days, or even weeks! And we're still working with a small dataset compared to what's being used by many of the companies and researchers out there. What now? Fortunately there are many ways to speed up our code. We could stick with the same model and make our code run faster, or we could modify our model to be less computationally expensive, or both. Researchers have identified many ways to make models less computationally expensive, for example by using a hierarchical softmax or adding projection layers to avoid the large matrix multiplications (see also here or here). But I want to keep our model simple and go the first route: Make our implementation run faster using a GPU. Before doing that though, let's just try to run SGD with a small dataset and check if the loss actually decreases: End of explanation """ from rnn_theano import RNNTheano, gradient_check_theano np.random.seed(10) # To avoid performing millions of expensive calculations we use a smaller vocabulary size for checking. 
grad_check_vocab_size = 5 model = RNNTheano(grad_check_vocab_size, 10) gradient_check_theano(model, [0,1,2,3], [1,2,3,4]) np.random.seed(10) model = RNNTheano(vocabulary_size) %timeit model.sgd_step(X_train[10], y_train[10], 0.005) """ Explanation: Good, it seems like our implementation is at least doing something useful and decreasing the loss, just like we wanted. Training our Network with Theano and the GPU I have previously written a tutorial on Theano, and since all our logic will stay exactly the same I won't go through optimized code here again. I defined a RNNTheano class that replaces the numpy calculations with corresponding calculations in Theano. Just like the rest of this post, the code is also available Github. End of explanation """ from utils import load_model_parameters_theano, save_model_parameters_theano model = RNNTheano(vocabulary_size, hidden_dim=50) # losses = train_with_sgd(model, X_train, y_train, nepoch=50) # save_model_parameters_theano('./data/trained-model-theano.npz', model) load_model_parameters_theano('./data/trained-model-theano.npz', model) """ Explanation: This time, one SGD step takes 70ms on my Mac (without GPU) and 23ms on a g2.2xlarge Amazon EC2 instance with GPU. That's a 15x improvement over our initial implementation and means we can train our model in hours/days instead of weeks. There are still a vast number of optimizations we could make, but we're good enough for now. To help you avoid spending days training a model I have pre-trained a Theano model with a hidden layer dimensionality of 50 and a vocabulary size of 8000. I trained it for 50 epochs in about 20 hours. The loss was was still decreasing and training longer would probably have resulted in a better model, but I was running out of time and wanted to publish this post. Feel free to try it out yourself and trian for longer. You can find the model parameters in data/trained-model-theano.npz in the Github repository and load them using the load_model_parameters_theano method: End of explanation """ def generate_sentence(model): # We start the sentence with the start token new_sentence = [word_to_index[sentence_start_token]] # Repeat until we get an end token while not new_sentence[-1] == word_to_index[sentence_end_token]: next_word_probs = model.forward_propagation(new_sentence) sampled_word = word_to_index[unknown_token] # We don't want to sample unknown words while sampled_word == word_to_index[unknown_token]: samples = np.random.multinomial(1, next_word_probs[-1]) sampled_word = np.argmax(samples) new_sentence.append(sampled_word) sentence_str = [index_to_word[x] for x in new_sentence[1:-1]] return sentence_str num_sentences = 10 senten_min_length = 7 for i in range(num_sentences): sent = [] # We want long sentences, not sentences with one or two words while len(sent) < senten_min_length: sent = generate_sentence(model) print " ".join(sent) """ Explanation: Generating Text Now that we have our model we can ask it to generate new text for us! Let's implement a helper function to generate new sentences: End of explanation """
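A small optional extension (not in the original post): adding a temperature parameter to the word-sampling step is a common way to trade off diversity against safety in the generated text. The helper below is only a sketch; to use it, the two sampling lines inside generate_sentence could be replaced by a single call such as sampled_word = sample_word(next_word_probs[-1], temperature=0.8).

# Sketch: temperature-controlled sampling of the next word.
# temperature=1.0 reproduces the original sampling; lower values are more conservative.
import numpy as np

def sample_word(probs, temperature=1.0):
    probs = np.asarray(probs, dtype=np.float64)
    scaled = np.log(probs + 1e-10) / temperature   # rescale log-probabilities
    scaled = np.exp(scaled - np.max(scaled))       # exponentiate, guarding against overflow
    scaled /= np.sum(scaled)                       # renormalize to a valid distribution
    return np.argmax(np.random.multinomial(1, scaled))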
emredjan/emredjan.github.io
code/plot_lm.ipynb
mit
%matplotlib inline import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import statsmodels.formula.api as smf from statsmodels.graphics.gofplots import ProbPlot plt.style.use('seaborn') # pretty matplotlib plots plt.rc('font', size=14) plt.rc('figure', titlesize=18) plt.rc('axes', labelsize=15) plt.rc('axes', titlesize=18) """ Explanation: Let's start with the necessary imports and setup commands: End of explanation """ auto = pd.read_csv('../../../../data/ISLR/datasets/Auto.csv', na_values=['?']) auto.dropna(inplace=True) auto.reset_index(drop=True, inplace=True) """ Explanation: Loading the data, and getting rid of NAs: End of explanation """ model_f = 'mpg ~ cylinders + \ displacement + \ horsepower + \ weight + \ acceleration + \ year + \ origin' model = smf.ols(formula=model_f, data=auto) model_fit = model.fit() """ Explanation: The fitted linear regression model, using statsmodels R style formula API: End of explanation """ # fitted values (need a constant term for intercept) model_fitted_y = model_fit.fittedvalues # model residuals model_residuals = model_fit.resid # normalized residuals model_norm_residuals = model_fit.get_influence().resid_studentized_internal # absolute squared normalized residuals model_norm_residuals_abs_sqrt = np.sqrt(np.abs(model_norm_residuals)) # absolute residuals model_abs_resid = np.abs(model_residuals) # leverage, from statsmodels internals model_leverage = model_fit.get_influence().hat_matrix_diag # cook's distance, from statsmodels internals model_cooks = model_fit.get_influence().cooks_distance[0] """ Explanation: Calculations required for some of the plots: End of explanation """ plot_lm_1 = plt.figure(1) plot_lm_1.set_figheight(8) plot_lm_1.set_figwidth(12) plot_lm_1.axes[0] = sns.residplot(model_fitted_y, 'mpg', data=auto, lowess=True, scatter_kws={'alpha': 0.5}, line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8}) plot_lm_1.axes[0].set_title('Residuals vs Fitted') plot_lm_1.axes[0].set_xlabel('Fitted values') plot_lm_1.axes[0].set_ylabel('Residuals') # annotations abs_resid = model_abs_resid.sort_values(ascending=False) abs_resid_top_3 = abs_resid[:3] for i in abs_resid_top_3.index: plot_lm_1.axes[0].annotate(i, xy=(model_fitted_y[i], model_residuals[i])); """ Explanation: And now, the actual plots: 1. Residual plot First plot that's generated by plot() in R is the residual plot, which draws a scatterplot of fitted values against residuals, with a "locally weighted scatterplot smoothing (lowess)" regression line showing any apparent trend. This one can be easily plotted using seaborn residplot with fitted values as x parameter, and the dependent variable as y. lowess=True makes sure the lowess regression line is drawn. Additional parameters are passed to underlying matplotlib scatter and line functions using scatter_kws and line_kws, also titles and labels are set using matplotlib methods. The ; in the end gets rid of the output text &lt;matplotlib.text.Text at 0x000000000&gt; at the top of the plot <sup>1</sup>. 
Top 3 absolute residuals are also annotated: End of explanation """ QQ = ProbPlot(model_norm_residuals) plot_lm_2 = QQ.qqplot(line='45', alpha=0.5, color='#4C72B0', lw=1) plot_lm_2.set_figheight(8) plot_lm_2.set_figwidth(12) plot_lm_2.axes[0].set_title('Normal Q-Q') plot_lm_2.axes[0].set_xlabel('Theoretical Quantiles') plot_lm_2.axes[0].set_ylabel('Standardized Residuals'); # annotations abs_norm_resid = np.flip(np.argsort(np.abs(model_norm_residuals)), 0) abs_norm_resid_top_3 = abs_norm_resid[:3] for r, i in enumerate(abs_norm_resid_top_3): plot_lm_2.axes[0].annotate(i, xy=(np.flip(QQ.theoretical_quantiles, 0)[r], model_norm_residuals[i])); """ Explanation: 2. QQ plot This one shows how well the distribution of residuals fit the normal distribution. This plots the standardized (z-score) residuals against the theoretical normal quantiles. Anything quite off the diagonal lines may be a concern for further investigation. For this, I'm using ProbPlot and its qqplot method from statsmodels graphics API. statsmodels actually has a qqplot method that we can use directly, but it's not very customizable, hence this two-step approach. Annotations were a bit tricky, as theoretical quantiles from ProbPlot are already sorted: End of explanation """ plot_lm_3 = plt.figure(3) plot_lm_3.set_figheight(8) plot_lm_3.set_figwidth(12) plt.scatter(model_fitted_y, model_norm_residuals_abs_sqrt, alpha=0.5) sns.regplot(model_fitted_y, model_norm_residuals_abs_sqrt, scatter=False, ci=False, lowess=True, line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8}) plot_lm_3.axes[0].set_title('Scale-Location') plot_lm_3.axes[0].set_xlabel('Fitted values') plot_lm_3.axes[0].set_ylabel('$\sqrt{|Standardized Residuals|}$'); # annotations abs_sq_norm_resid = np.flip(np.argsort(model_norm_residuals_abs_sqrt), 0) abs_sq_norm_resid_top_3 = abs_sq_norm_resid[:3] for i in abs_norm_resid_top_3: plot_lm_3.axes[0].annotate(i, xy=(model_fitted_y[i], model_norm_residuals_abs_sqrt[i])); """ Explanation: 3. Scale-Location Plot This is another residual plot, showing their spread, which you can use to assess heteroscedasticity. It's essentially a scatter plot of absolute square-rooted normalized residuals and fitted values, with a lowess regression line. Scatterplot is a standard matplotlib function, lowess line comes from seaborn regplot. 
Top 3 absolute square-rooted normalized residuals are also annotated: End of explanation """ plot_lm_4 = plt.figure(4) plot_lm_4.set_figheight(8) plot_lm_4.set_figwidth(12) plt.scatter(model_leverage, model_norm_residuals, alpha=0.5) sns.regplot(model_leverage, model_norm_residuals, scatter=False, ci=False, lowess=True, line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8}) plot_lm_4.axes[0].set_xlim(0, 0.20) plot_lm_4.axes[0].set_ylim(-3, 5) plot_lm_4.axes[0].set_title('Residuals vs Leverage') plot_lm_4.axes[0].set_xlabel('Leverage') plot_lm_4.axes[0].set_ylabel('Standardized Residuals') # annotations leverage_top_3 = np.flip(np.argsort(model_cooks), 0)[:3] for i in leverage_top_3: plot_lm_4.axes[0].annotate(i, xy=(model_leverage[i], model_norm_residuals[i])) # shenanigans for cook's distance contours def graph(formula, x_range, label=None): x = x_range y = formula(x) plt.plot(x, y, label=label, lw=1, ls='--', color='red') p = len(model_fit.params) # number of model parameters graph(lambda x: np.sqrt((0.5 * p * (1 - x)) / x), np.linspace(0.001, 0.200, 50), 'Cook\'s distance') # 0.5 line graph(lambda x: np.sqrt((1 * p * (1 - x)) / x), np.linspace(0.001, 0.200, 50)) # 1 line plt.legend(loc='upper right'); """ Explanation: 4. Leverage plot This plot shows if any outliers have influence over the regression fit. Anything outside the group and outside "Cook's Distance" lines, may have an influential effect on model fit. statsmodels has a built-in leverage plot for linear regression, but again, it's not very customizable. Digging around the source of the statsmodels.graphics package, it's pretty straightforward to implement it from scratch and customize with standard matplotlib functions. There are three parts to this plot: First is the scatterplot of leverage values (got from statsmodels fitted model using get_influence().hat_matrix_diag) vs. standardized residuals. Second one is the lowess regression line for that. And the third and the most tricky part is the Cook's distance lines, which I currently couldn't figure out how to draw in Python. But statsmodels has Cook's distance already calculated, so we can use that to annotate top 3 influencers on the plot: Update: I think I figured out how to draw Cook's distance ($D_i$) contours for $D_i=0.5$ and $D_i=1$ The trick was rearranging the formula $p{D_i} = r_i^2 h_i/(1-h_i)$ to plot the lines at 0.5 and 1. End of explanation """
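Spelling out that rearrangement, with $r_i$ the standardized residual, $h_i$ the leverage and $p$ the number of model parameters, Cook's distance is

$$D_i = \frac{r_i^2}{p} \cdot \frac{h_i}{1 - h_i}$$

so fixing a contour level $D_i = c$ (here $c = 0.5$ and $c = 1$) and solving for the residual gives the curve plotted as a function of leverage $h_i$:

$$r_i = \pm\sqrt{\frac{c \, p \, (1 - h_i)}{h_i}}$$

which is exactly what the `graph(lambda x: np.sqrt((c * p * (1 - x)) / x), ...)` calls above draw.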
GoogleCloudPlatform/cloudml-samples
notebooks/keras/cascade.ipynb
apache-2.0
!gsutil cp gs://cloud-samples-data/air/fruits360/fruits360-combined.zip .
!ls
!unzip -qn fruits360-combined.zip
"""
Explanation: Cascade (HD-CNN Model Derivative)
Objective
This notebook demonstrates building a hierarchical image classifier based on an HD-CNN derivative, which uses cascading classifiers to predict the class of a label from a coarse class down to a finer class. In this demonstration, we have two levels in the hierarchy: fruits and varieties of fruit. The model will first predict the coarse class (type of fruit) and then, within that class of fruit, the variety. For example, if given an image of Apple Granny Smith, it would first predict 'Apple' (fruit) and then predict 'Apple Granny Smith' (variety).
This derivative of the HD-CNN is designed to demonstrate both the methodology of hierarchical classification, as well as design improvements not available at the time (2014) when the model was first published by Zhicheng Yan et al.
General Approach
Our HD-CNN derivative architecture consists of:
1. A stem convolutional block.
 - The output from the stem convolutional head is shared with the coarse and finer classifiers (referred to as the shared layers in the paper).
2. A coarse classifier.
 - Convolution and Dense layers for classifying the coarse level class.
3. A set of finer classifiers, one per coarse level class.
 - Convolution and Dense layers per coarse level class for classifying the corresponding finer level class.
4. A conditional execution step for predicting a specific finer classifier based on the output of the coarse classifier.
 - The coarse level classifier is predicted.
 - The index of the prediction is used to select a finer classifier.
 - An in-memory copy of the shared bottleneck layer (i.e., the last convolution layer in the stem) is passed as the input to the finer level classifier.
Our HD-CNN derivative is trained as follows:
1. Train the coarse level classifier using the coarse level labels in the dataset.
<img src='arch-1.png'>
2. Train the finer level classifier per coarse level class, using the corresponding subset (with finer labels) from the dataset.
<img src='arch-2.png'>
<br/>
Dataset
We will be using the Fruits-360 dataset, which was formerly a Kaggle competition. It consists of images of fruit labeled by fruit type and variety.
1. There are a total of 47 types of fruit (e.g., Apple, Orange, Pear, etc.) and 81 varieties.
2. On average, there are 656 images per variety.
3. Each image is 128x128 RGB.
<div>
<img src='Training/Apple/Apple Golden 2/0_100.jpg' style='float: left'>
<img src='Training/Apple/Apple Red 1/0_100.jpg' style='float: left'>
<img src='Training/Apple/Apple Red 1/0_100.jpg' style='float: left'>
<img src='Training/Orange/Orange/0_100.jpg' style = 'float: left'>
<img src='Training/Pear/Pear/0_100.jpg' style = 'float: left'>
</div>
Objective
The objective is to train a hierarchical image classifier (coarse and then finer label) using a cascading layer architecture. First, the shared layers and coarse classifier are trained. Then the cascading finer classifiers are trained. For prediction, the outcome (softmax) of the coarse classifier will conditionally execute the corresponding finer classifier and reuse the feature maps from the shared layers.
Costs
This notebook requires 17GB of memory. It will not run on a Standard TF JaaS instance (15GB). You will need to select an instance with memory > 17GB.
Prerequisites
Download the Fruits 360 dataset from the GCS public bucket into this JaaS instance.
Some of the cells in the notebook display images.
The images will not appear until the cell for copying the training data/misc from GCS into the JaaS instance is executed. End of explanation """ import os from keras.applications.resnet50 import ResNet50 from keras.preprocessing import image from keras.applications.resnet50 import preprocess_input, decode_predictions from keras.preprocessing.image import ImageDataGenerator from keras.layers import GlobalAveragePooling2D, Dense from keras import Sequential, Model, Input from keras.layers import Conv2D, Flatten, MaxPooling2D, Dense, Dropout, BatchNormalization, ReLU from keras import Model, optimizers from keras.models import load_model from keras.utils import to_categorical import keras.layers as layers from sklearn.model_selection import train_test_split import tensorflow as tf import numpy as np import cv2 """ Explanation: Getting Started We will be using the fully frameworks and Python modules: 1. Keras framework for building and training models. 2. Keras builtin models (resnet50). 3. Keras preprocessing for feeding and augmenting the dataset during training. 4. Gap data engineering framework for preprocessing the image data. 5. Numpy for general image/matrix manipulation. End of explanation """ def Fruits(root): n_label = 0 images = [] labels = [] classes = {} os.chdir(root) classes_ = os.scandir('./') for class_ in classes_: print(class_.name) os.chdir(class_.name) classes[class_.name] = n_label # Finer Level Subdirectories per Coarse Level subclasses = os.scandir('./') for subclass in subclasses: os.chdir(subclass.name) files = os.listdir('./') for file in files: image = cv2.imread(file) images.append(image) labels.append(n_label) os.chdir('../') os.chdir('../') n_label += 1 os.chdir('../') images = np.asarray(images) images = (images / 255.0).astype(np.float32) labels = to_categorical(labels, n_label) print("Images", images.shape, "Labels", labels.shape, "Classes", classes) # Split the processed image dataset into training and test data x_train, x_test, y_train, y_test = train_test_split(images, labels, test_size=0.20, shuffle=True) return x_train, x_test, y_train, y_test, classes """ Explanation: Make Datasets Make Coarse Category Dataset This makes the by fruit type dataset. 
End of explanation """ def Varieties(root): ''' Generate Cascade (Finer) Level Dataset for Fruit Varieties''' datasets = {} os.chdir(root) fruits = os.scandir('./') for fruit in fruits: n_label = 0 images = [] labels = [] classes = {} print('FRUIT', fruit.name) os.chdir(fruit.name) varieties = os.scandir('./') for variety in varieties: print('VARIETY', variety.name) classes[variety.name] = n_label os.chdir(variety.name) files = os.listdir('./') for file in files: image = cv2.imread(file) images.append(image) labels.append(n_label) os.chdir('../') n_label += 1 images = np.asarray(images) images = (images / 255.0).astype(np.float32) labels = to_categorical(labels, n_label) x_train, x_test, y_train, y_test = train_test_split(images, labels, test_size=0.20, shuffle=True) datasets[fruit.name] = (x_train, x_test, y_train, y_test, classes) os.chdir('../') print("IMAGES", x_train.shape, y_train.shape, "CLASSES", classes) os.chdir('../') return datasets """ Explanation: Make Finer Category Datasets This makes the by Fruit Variety datasets End of explanation """ !free -m x_train, x_test, y_train, y_test, fruits_classes = Fruits('Training') !free -m """ Explanation: Generate the preprocessed Coarse Dataset End of explanation """ # Split out 10% of Train to use for Validation pivot = int(len(x_train) * 0.9) x_val = x_train[pivot:] y_val = y_train[pivot:] x_train = x_train[:pivot] y_train = y_train[:pivot] print("train", x_train.shape, y_train.shape) print("val ", x_val.shape, y_val.shape) print("test ", x_test.shape, y_test.shape) !free -m """ Explanation: Split Coarse Dataset (by Fruit) into Train, Validation and Test First split into train and test. Then split out 10% of train to use for validation during training. - Train: 80% - Train: 90% - Validation: 10% - Test : 20% End of explanation """ def Feeder(): datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True, rotation_range=30) return datagen """ Explanation: Make Trainers Create the routines we will use for training. Make Feeder Prepare the Feeder mechanism for training the neural networkm using ImageDataGenerator. Add image augmentation for: 1. Horizontal Flip 2. Verticial Flip 3. Random Rotation +/- 30 degrees End of explanation """ def Train(model, datagen, x_train, y_train, x_test, y_test, epochs=10, batch_size=32): model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size, shuffle=True), steps_per_epoch=len(x_train) / batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) scores = model.evaluate(x_train, y_train, verbose=1) print("Train", scores) """ Explanation: Make Trainer Prepare a training session: 1. Epochs defaults to 10 2. Batch size defaults to 32 3. Train with validation data 4. Final evaluation with test data (holdout set). 
End of explanation """ def ResNet(shape=(128, 128, 3), nclasses=47, optimizer='adam', weights=None): base_model = ResNet50(weights=weights, include_top=False, input_shape=shape) for i, layer in enumerate(base_model.layers): # first: train only the top layers (which were randomly initialized) for Transfer Learning if weights is not None: layer.trainable = False # label the last convolutional layer in the base model as the bottleneck layer.name = 'bottleneck' # Get the last convolutional layer of the ResNet base model x = base_model.output # add a global spatial average pooling layer x = GlobalAveragePooling2D()(x) # let's add a fully-connected layer #x = Dense(1024, activation='relu')(x) # and a logistic layer predictions = Dense(nclasses, activation='softmax')(x) # this is the model we will train model = Model(inputs=base_model.input, outputs=predictions) # compile the model (should be done *after* setting layers to non-trainable) model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy']) model.summary() return model """ Explanation: Make Model Stem Convolutional Block (Base Model) We will use this base model as the stem convolutional block of cascading model: 1. The output of this model are a set of pooled feature maps. 2. The last layer that produces this set of pooled feature maps is referred to as the bottleneck layer. Coarse Classifier The coarse classifier is an independent block layer for classifying the coarse level label: 1. Input is the bottleneck layer from the stem convolutional block. 2. Layer consists of a convolution layer and a dense layer, where the dense layer is the classifier. Finer Classifier The finer classifiers are a set of independent block layers for classifying the finer label. There is one finer classifier per unique coarse level label. 1. Input is the bottleneck layer from the stem convolutional block. 2. Layer consists of a convolution layer and a dense layer, where the dense layer is the classifier. 3. The finer classifer is conditionally executed based on the softmax output from the coarse classifier. ResNet for Transfer Learning Use a prebuilt Keras model (ResNet 50). Either as: 1. Transfer Learning: The layers are pretrained with imagenet weights. 2. Full Training: layers are not pretrained (weights = None) End of explanation """ def ConvNet(shape=(128, 128, 3), nclasses=47, optimizer='adam'): model = Sequential() # stem convolutional group model.add(Conv2D(16, (3,3), padding='same', activation='relu', input_shape=shape)) # conv block - double filters model.add(Conv2D(32, (3,3), padding='same')) model.add(ReLU()) model.add(Dropout(0.50)) model.add(MaxPooling2D((2,2))) # conv block - double filters model.add(Conv2D(64, (3,3), padding='same')) model.add(ReLU()) model.add(MaxPooling2D((2,2))) # conv block - double filters + bottleneck layer model.add(Conv2D(128, (3,3), padding='same', activation='relu')) model.add(MaxPooling2D((2,2), name="bottleneck")) # dense block model.add(Flatten()) model.add(Dense(1024, activation='relu')) model.add(Dropout(0.25)) # classifier model.add(Dense(nclasses, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy']) model.summary() return model """ Explanation: Simple ConvNet The stem convolutional block consists of a mini-VGG, which consists of: 1. A convolutional input (stem) 2. Three convolutional groups, each doubling the number of filers. 3. Each convolutional group consists of one convolutional block. 4. 
A dropout of 50% is added to the first convolutional group.
The coarse classifier consists of:
1. A 1024-node dense layer
2. A 47-node dense layer for classification.
End of explanation
"""
# Select the model for the stem convolutional group (shared layers)
stem = 'ConvNet'

if stem == 'ConvNet':
    model = ConvNet(shape=(100, 100, 3))
elif stem == 'ResNet-imagenet':
    model = ResNet(weights='imagenet', optimizer='adagrad')
elif stem == 'ResNet':
    model = ResNet()
# load previously stored model
else:
    model = load_model('model.h5')
"""
Explanation: Start Training
1. Train the Coarse Classifier
2. Add the Finer Classifiers
3. Train the Finer Classifiers
Generate Coarse Model
Choose between:
1. An untrained simple VGG ConvNet as Stem Convolution Group, or
2. Pre-trained ResNet50 (imagenet weights) for Transfer Learning
End of explanation
"""
datagen = Feeder()
Train(model, datagen, x_train, y_train, x_val, y_val, 5)

scores = model.evaluate(x_test, y_test, verbose=1)
print("Test", scores)
"""
Explanation: Train the Coarse Model
End of explanation
"""
# Save the model and weights
model.save("model-coarse.h5")
"""
Explanation: Save the Coarse Model
End of explanation
"""
def Bottleneck(model):
    for layer in model.layers:
        layer.trainable = False
        if layer.name == 'bottleneck':
            bottleneck = layer
    print("BOTTLENECK", bottleneck.output.shape)
    return bottleneck
"""
Explanation: Prepare Coarse CNN for cascade training
1. Freeze all layers
2. Find the bottleneck layer
End of explanation
"""
# Conserve memory by releasing training data for coarse model
import gc
x_train = y_train = x_val = y_val = x_test = y_test = None
gc.collect()

varieties_datasets = Varieties('Training')

for key, dataset in varieties_datasets.items():
    _x_train, _x_test, _y_train, _y_test, classes = dataset

    # Separate out 10% of train for validation
    pivot = int(len(_x_train) * 0.9)
    _x_val = _x_train[pivot:]
    _y_val = _y_train[pivot:]
    _x_train = _x_train[:pivot]
    _y_train = _y_train[:pivot]

    # save the dataset for this fruit (key)
    varieties_datasets[key] = { 'classes': classes,
                                'train': (_x_train, _y_train),
                                'val': (_x_val, _y_val),
                                'test': (_x_test, _y_test) }

!free -m
"""
Explanation: Generate the preprocessed Finer Datasets
Split Finer (by Variety) Datasets into Train, Validation and Test
1. For each fruit type, split the corresponding variety images into train, validation and test.
2. Save each split dataset in a dictionary, using the fruit name as the key.
End of explanation
"""
bottleneck = Bottleneck(model)
cascades = []

for key, val in varieties_datasets.items():
    classes = val['classes']
    print("KEY", key, classes)

    # if only one subclassifier, then skip (i.e., coarse == finer)
    if len(classes) == 1:
        continue

    # finer classifier: a convolution block on the shared bottleneck, followed by a dense classifier
    x = layers.Conv2D(128, (3,3), padding='same', activation='relu')(bottleneck.output)
    x = BatchNormalization()(x)
    x = MaxPooling2D((2,2))(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1024, activation='relu')(x)
    x = layers.Dense(len(classes), activation='softmax', name=key.replace(' ', ''))(x)
    cascades.append(x)
"""
Explanation: Add Each Cascade (Finer) Classifier
1. Get the bottleneck layer for the coarse CNN
2.
Add an independent finer classifier per fruit from the bottleneck layer End of explanation """ classifiers = [] for cascade in cascades: _model = Model(model.input, cascade) _model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) _model.summary() classifiers.append(_model) """ Explanation: Compile each finer classifier End of explanation """ for classifier in classifiers: # get the output layer for this subclassifier last = classifier.layers[len(classifier.layers)-1] print(last, last.name) # find the corresponding variety dataset for key, dataset in varieties_datasets.items(): if key == last.name: x_train, y_train = dataset['train'] x_val, y_val = dataset['val'] datagen = Feeder() Train(classifier, datagen, x_train, y_train, x_val, y_val, 5) """ Explanation: Train the finer classifiers End of explanation """ for classifier in classifiers: # get the output layer for this subclassifier last = classifier.layers[len(classifier.layers)-1] print(last, last.name) # find the corresponding variety dataset for key, dataset in varieties_datasets.items(): if key == last.name: x_test, y_test = dataset['test'] scores = classifier.evaluate(x_test, y_test, verbose=1) print("Test", scores) """ Explanation: Evaluate the Model 1. Evaluate the Model for each finer classifier. End of explanation """ n = 0 for classifier in classifiers: classifier.save('model-finer-' + str(n) + '.h5') n += 1 """ Explanation: Save the Finer Models End of explanation """ import random # Let's make a prediction for each type of fruit for key, dataset in varieties_datasets.items(): # Get the variety test data for this type of fruit x_test, y_test = dataset['test'] # pick a random image in the variety datast index = random.randint(0, len(x_test)) # use the coarse model to predict the type of fruit yhat = np.argmax( model.predict(x_test[index:index+1]) ) # let's find the class name (type of fruit) for this predicted label for fruit, label in fruits_classes.items(): if label == yhat: break print("Yhat", yhat, "Coarse Prediction", key, "=", fruit) # Prediction was correct if key == fruit: if len(dataset['classes']) == 1: print("No Finer Classifier") continue # find the corresponding finer classifier for this type of fruit for classifier in classifiers: # get the output layer for this subclassifier last = classifier.layers[len(classifier.layers)-1] if last.name == fruit: # use the finer model to predict the variety of this type of fruit yhat = np.argmax(classifier.predict(x_test[index:index+1])) for variety, value in dataset['classes'].items(): if value == np.argmax(y_test[index]): break for yhat_variety, value in dataset['classes'].items(): if value == yhat: break print("Yhat", yhat, "Finer Prediction", variety, "=", yhat_variety) break """ Explanation: Let's do some cascading predictions We will take one random selected image per type of fruit, and: 1. Run the image through the coarse classifier (by fruit). 2. Based on the predicted output, select the corresponding finer classifier (by variety). 3. Run the image through the corresponding finer classifier. End of explanation """ # extractfeatures = Model(input=model.input, output=model.get_layer('bottleneck').output) """ Explanation: End of Notebook End of explanation """
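As a compact illustration of the conditional-execution idea used in the prediction loop above, here is a hypothetical helper (cascade_predict is an assumed name, not defined in the original notebook) that chains the coarse model and the finer classifiers built earlier (model, classifiers, fruits_classes):

def cascade_predict(image, coarse_model, classifiers, fruits_classes):
    # image: a single preprocessed sample of shape (1, height, width, 3)
    # 1. coarse prediction: which type of fruit?
    coarse_label = int(np.argmax(coarse_model.predict(image)))
    fruit = [name for name, label in fruits_classes.items() if label == coarse_label][0]

    # 2. conditionally execute the finer classifier whose output layer carries this fruit's name
    for classifier in classifiers:
        if classifier.layers[-1].name == fruit:
            finer_label = int(np.argmax(classifier.predict(image)))
            return fruit, finer_label

    # fruits with a single variety have no finer classifier
    return fruit, None

# hypothetical usage, assuming a variety test set is still in memory:
# fruit, variety_label = cascade_predict(x_test[0:1], model, classifiers, fruits_classes)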
rickiepark/tfk-notebooks
tensorflow_for_beginners/3. Linear Regression.ipynb
mit
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: We import matplotlib so that we can draw plots. %matplotlib inline embeds the figures inside the Jupyter notebook instead of opening them in a new window.
End of explanation
"""
import tensorflow as tf

sess = tf.Session()

x_raw = tf.random_normal([1000], mean=0.0, stddev=0.55)
x = sess.run(x_raw)
"""
Explanation: Import TensorFlow under the name tf, and create a session object with tf.Session():
sess = tf.Session()
We want to generate some random sample data: 1,000 samples drawn from a normal distribution with mean 0 and standard deviation 0.55.
x_raw = tf.random_normal([...], mean=.., stddev=..)
x = sess.run(x_raw)
End of explanation
"""
y_raw = 0.1 * x + 0.3 + tf.random_normal([1000], mean=0.0, stddev=0.03)
y = sess.run(y_raw)
"""
Explanation: Now that we have the x-axis values, we create the corresponding y-axis values. The y values follow 0.1*x + 0.3, but we mix in a little random noise so that they look like real data. Here the noise is normally distributed with mean 0 and standard deviation 0.03.
y_raw = 0.1 * x + 0.3 + tf.random_normal([...], mean=.., stddev=..)
y = sess.run(y_raw)
End of explanation
"""
W = tf.Variable(tf.zeros([1]))
b = tf.Variable(tf.zeros([1]))
y_hat = W * x + b
"""
Explanation: Let's show the sample data as a scatter plot. We pass the x and y values to the plot command, draw the points as circles 'o', and give the markers black edges.
plt.plot(x, y, 'o', markeredgecolor='k')
Then we create the two variables W and b used for linear regression and build the equation of the line.
W = tf.Variable(tf.zeros([.]))
b = ...(tf.zeros([.]))
y_hat = W * x + b
End of explanation
"""
loss = tf.losses.mean_squared_error(y, y_hat)
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
"""
Explanation: The loss function for regression is the mean squared error. We use TensorFlow's tf.losses.mean_squared_error() to create a node for the loss; the arguments passed to this function are the true values y and the predictions y_hat.
loss = tf.losses.mean_squared_error(y, y_hat)
Gradient descent is implemented in tf.train.GradientDescentOptimizer(). We create the optimizer with a learning rate of 0.5.
optimizer = tf.train.GradientDescentOptimizer(0.5)
Passing the loss to optimizer.minimize() creates the final op that we will run for training.
train = optimizer.minimize(loss)
End of explanation
"""
# initialize all variables in the computation graph
init = tf.global_variables_initializer()
sess.run(init)
"""
Explanation: Initialize the variables needed by the computation graph.
End of explanation
"""
costs = []
for step in range(10):
    _, w_, b_, c = sess.run([train, W, b, loss])
    costs.append(c)
    print(step, w_, b_, c)

# draw the scatter plot
plt.plot(x, y, 'o', markeredgecolor='k')
# draw the fitted line
plt.plot(x, w_ * x + b_)

# label the x and y axes and set the limits of each axis
plt.xlabel('x')
plt.xlim(-2,2)
plt.ylim(0.1,0.6)
plt.ylabel('y')
plt.show()
"""
Explanation: We can run the necessary operations with the sess.run() method. The op that must be run is train; for printing we also compute W, b and loss and get their values back.
_, w_, b_, c = sess.run([train, W, b, loss])
The returned c is appended to the costs list so that we can plot the loss curve later. Using w_ and b_ we draw the fitted line on top of the scatter plot.
plt.plot(x, w_ * x + b_)
End of explanation
"""
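The costs list collected in the training loop is never actually plotted in the cells shown here. A minimal sketch of that final step, assuming the loop above has been run:

# plot how the mean squared error shrinks over the 10 training steps
plt.plot(costs)
plt.xlabel('step')
plt.ylabel('loss (MSE)')
plt.show()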
nadvamir/deep-learning
dcgan-svhn/DCGAN_Exercises.ipynb
mit
%matplotlib inline import pickle as pkl import matplotlib.pyplot as plt import numpy as np from scipy.io import loadmat import tensorflow as tf !mkdir data """ Explanation: Deep Convolutional GANs In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here. You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same. End of explanation """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm data_dir = 'data/' if not isdir(data_dir): raise Exception("Data directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(data_dir + "train_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat', data_dir + 'train_32x32.mat', pbar.hook) if not isfile(data_dir + "test_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat', data_dir + 'test_32x32.mat', pbar.hook) """ Explanation: Getting the data Here you can download the SVHN dataset. Run the cell above and it'll download to your machine. End of explanation """ trainset = loadmat(data_dir + 'train_32x32.mat') testset = loadmat(data_dir + 'test_32x32.mat') """ Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above. End of explanation """ idx = np.random.randint(0, trainset['X'].shape[3], size=36) fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),) for ii, ax in zip(idx, axes.flatten()): ax.imshow(trainset['X'][:,:,:,ii], aspect='equal') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) plt.subplots_adjust(wspace=0, hspace=0) """ Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake. 
End of explanation
"""
def scale(x, feature_range=(-1, 1)):
    # scale to (0, 1)
    x = ((x - x.min())/(255 - x.min()))

    # scale to feature_range
    min, max = feature_range
    x = x * (max - min) + min
    return x

class Dataset:
    def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
        split_idx = int(len(test['y'])*(1 - val_frac))
        self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
        self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
        self.train_x, self.train_y = train['X'], train['y']

        self.train_x = np.rollaxis(self.train_x, 3)
        self.valid_x = np.rollaxis(self.valid_x, 3)
        self.test_x = np.rollaxis(self.test_x, 3)

        if scale_func is None:
            self.scaler = scale
        else:
            self.scaler = scale_func
        self.shuffle = shuffle

    def batches(self, batch_size):
        if self.shuffle:
            idx = np.arange(len(self.train_x))
            np.random.shuffle(idx)
            self.train_x = self.train_x[idx]
            self.train_y = self.train_y[idx]

        n_batches = len(self.train_y)//batch_size
        for ii in range(0, len(self.train_y), batch_size):
            x = self.train_x[ii:ii+batch_size]
            y = self.train_y[ii:ii+batch_size]

            yield self.scaler(x), self.scaler(y)
"""
Explanation: Here we need to do a bit of preprocessing and get the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
End of explanation
"""
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')

    return inputs_real, inputs_z
"""
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
"""
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        # First fully connected layer, reshaped to 4x4x512
        x = tf.layers.dense(z, 512*16)
        x = tf.reshape(x, (-1, 4, 4, 512))

        # leaky relu
        leaky_relu = lambda x: tf.maximum(x*alpha, x)

        # transposed convolutions
        conv_1 = tf.layers.conv2d_transpose(x, 256, 5, strides=2, padding='same') # 8x8x256
        conv_1 = tf.layers.batch_normalization(conv_1, training=training)
        conv_1 = leaky_relu(conv_1)

        conv_2 = tf.layers.conv2d_transpose(conv_1, 128, 5, strides=2, padding='same') # 16x16x128
        conv_2 = tf.layers.batch_normalization(conv_2, training=training)
        conv_2 = leaky_relu(conv_2)

        # Output layer, 32x32x3 (built on conv_2 so the spatial size doubles one last time)
        logits = tf.layers.conv2d_transpose(conv_2, 3, 5, strides=2, padding='same')

        out = tf.tanh(logits)

        return out
"""
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper: Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. Exercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one. End of explanation """ def discriminator(x, reuse=False, alpha=0.2): with tf.variable_scope('discriminator', reuse=reuse): # Input layer is 32x32x3 x = tf.layers.conv2d(x, 16, 4, (2, 2), padding='same') x = tf.maximum(x * alpha, x) # now 16x16x16 x = tf.layers.conv2d(x, 32, 4, (2, 2), padding='same') x = tf.layers.batch_normalization(x, training=True) x = tf.maximum(x * alpha, x) # now 8x8x32 x = tf.layers.conv2d(x, 64, 4, (2, 2), padding='same') x = tf.layers.batch_normalization(x, training=True) x = tf.maximum(x * alpha, x) # now 4x4x64 x = tf.reshape(x, (-1, 4*4*64)) logits = tf.layers.dense(x, 1, name='output') out = tf.nn.sigmoid(logits) return out, logits """ Explanation: Discriminator Here you'll build the discriminator. This is basically just a convolutional classifier like you've build before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers. You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU. Exercise: Build the convolutional network for the discriminator. The input is a 32x32x3 images, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first. End of explanation """ def model_loss(input_real, input_z, output_dim, alpha=0.2): """ Get the loss for the discriminator and generator :param input_real: Images from the real dataset :param input_z: Z input :param out_channel_dim: The number of channels in the output image :return: A tuple of (discriminator loss, generator loss) """ g_model = generator(input_z, output_dim, alpha=alpha) d_model_real, d_logits_real = discriminator(input_real, alpha=alpha) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha) d_loss_real = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real))) d_loss_fake = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake))) g_loss = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake))) d_loss = d_loss_real + d_loss_fake return d_loss, g_loss """ Explanation: Model Loss Calculating the loss like before, nothing new here. 
End of explanation """ def model_opt(d_loss, g_loss, learning_rate, beta1): """ Get optimization operations :param d_loss: Discriminator loss Tensor :param g_loss: Generator loss Tensor :param learning_rate: Learning Rate Placeholder :param beta1: The exponential decay rate for the 1st moment in the optimizer :return: A tuple of (discriminator training operation, generator training operation) """ # Get weights and bias to update t_vars = tf.trainable_variables() d_vars = [var for var in t_vars if var.name.startswith('discriminator')] g_vars = [var for var in t_vars if var.name.startswith('generator')] # Optimize d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars) return d_train_opt, g_train_opt """ Explanation: Optimizers Again, nothing new here. End of explanation """ class GAN: def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5): tf.reset_default_graph() self.input_real, self.input_z = model_inputs(real_size, z_size) self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z, real_size[2], alpha=0.2) self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, 0.5) """ Explanation: Building the model Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object. End of explanation """ def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)): fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.axis('off') img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8) ax.set_adjustable('box-forced') im = ax.imshow(img) plt.subplots_adjust(wspace=0, hspace=0) return fig, axes """ Explanation: Here is a function for displaying generated images. End of explanation """ def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)): saver = tf.train.Saver() sample_z = np.random.uniform(-1, 1, size=(50, z_size)) samples, losses = [], [] steps = 0 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for x, y in dataset.batches(batch_size): steps += 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z}) _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z}) if steps % print_every == 0: # At the end of each epoch, get the losses and print them out train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x}) train_loss_g = net.g_loss.eval({net.input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) if steps % show_every == 0: gen_samples = sess.run( generator(net.input_z, 3, reuse=True), feed_dict={net.input_z: sample_z}) samples.append(gen_samples) _ = view_samples(-1, samples, 5, 10, figsize=figsize) plt.show() saver.save(sess, './checkpoints/generator.ckpt') with open('samples.pkl', 'wb') as f: pkl.dump(samples, f) return losses, samples """ Explanation: And another function we can use to train our network. 
End of explanation
"""
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 1
alpha = 0.2
beta1 = 0.5

# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)

# Load the data and train the network here
dataset = Dataset(trainset, testset)

losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))

fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()

_ = view_samples(-1, samples, 5, 10, figsize=(10,5))
"""
Explanation: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3, which means it is correctly classifying images as fake or real about 50% of the time.
End of explanation
"""
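Since train() writes a checkpoint to ./checkpoints/generator.ckpt, new digits can be sampled later without retraining. This is a minimal sketch, assuming the cells above have been run so that net, z_size, generator and view_samples are all still in scope, and that the checkpoint exists:

# restore the trained variables from the checkpoint written by train()
saver = tf.train.Saver()

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    # sample fresh noise and run it through the generator in inference mode
    sample_z = np.random.uniform(-1, 1, size=(25, z_size))
    gen_samples = sess.run(generator(net.input_z, 3, reuse=True, training=False),
                           feed_dict={net.input_z: sample_z})

_ = view_samples(0, [gen_samples], 5, 5)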
minxuancao/shogun
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
gpl-3.0
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')

from scipy.io import loadmat
from modshogun import RealFeatures, MulticlassLabels, Math

# load the dataset
dataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))

Xall = dataset['data']
# the usps dataset has the digits labeled from 1 to 10
# we'll subtract 1 to make them in the 0-9 range instead
Yall = np.array(dataset['label'].squeeze(), dtype=np.double)-1

# 4000 examples for training
Xtrain = RealFeatures(Xall[:,0:4000])
Ytrain = MulticlassLabels(Yall[0:4000])
# the rest for testing
Xtest = RealFeatures(Xall[:,4000:-1])
Ytest = MulticlassLabels(Yall[4000:-1])

# initialize the random number generator with a fixed seed, for repeatability
Math.init_random(10)
"""
Explanation: Deep Autoencoders
by Khaled Nasr as a part of a <a href="https://www.google-melange.com/gsoc/project/details/google/gsoc2014/khalednasr92/5657382461898752">GSoC 2014 project</a> mentored by Theofanis Karaletsos and Sergey Lisitsyn
This notebook illustrates how to train and evaluate a deep autoencoder using Shogun. We'll look at both regular fully-connected autoencoders and convolutional autoencoders.
Introduction
A (single layer) autoencoder is a neural network that has three layers: an input layer, a hidden (encoding) layer, and a decoding layer. The network is trained to reconstruct its inputs, which forces the hidden layer to try to learn good representations of the inputs.
In order to encourage the hidden layer to learn good input representations, certain variations on the simple autoencoder exist. Shogun currently supports two of them: Denoising Autoencoders [1] and Contractive Autoencoders [2]. In this notebook we'll focus on denoising autoencoders.
For denoising autoencoders, each time a new training example is introduced to the network, it's randomly corrupted in some manner, and the target is set to the original example. The autoencoder will try to recover the original data from its noisy version, which is why it's called a denoising autoencoder. This process will force the hidden layer to learn a good representation of the input, one which is not affected by the corruption process.
A deep autoencoder is an autoencoder with multiple hidden layers. Training such autoencoders directly is usually difficult; however, they can be pre-trained as a stack of single layer autoencoders. That is, we train the first hidden layer to reconstruct the input data, then train the second hidden layer to reconstruct the states of the first hidden layer, and so on. After pre-training, we can train the entire deep autoencoder to fine-tune all the parameters together. We can also use the autoencoder to initialize a regular neural network and train it in a supervised manner.
In this notebook we'll apply deep autoencoders to the USPS dataset for handwritten digits. We'll start by loading the data and dividing it into a training set and a test set:
End of explanation
"""
from modshogun import NeuralLayers, DeepAutoencoder

layers = NeuralLayers()
layers = layers.input(256).rectified_linear(512).rectified_linear(128).rectified_linear(512).linear(256).done()

ae = DeepAutoencoder(layers)
"""
Explanation: Creating the autoencoder
Similar to regular neural networks in Shogun, we create a deep autoencoder using an array of NeuralLayer-based classes, which can be created using the utility class NeuralLayers.
However, for deep autoencoders there's a restriction that the layer sizes in the network have to be symmetric, that is, the first layer has to have the same size as the last layer, the second layer has to have the same size as the second-to-last layer, and so on. This restriction is necessary for pre-training to work. More details on that can found in the following section. We'll create a 5-layer deep autoencoder with following layer sizes: 256->512->128->512->256. We'll use rectified linear neurons for the hidden layers and linear neurons for the output layer. End of explanation """ from modshogun import AENT_DROPOUT, NNOM_GRADIENT_DESCENT ae.pt_noise_type.set_const(AENT_DROPOUT) # use dropout noise ae.pt_noise_parameter.set_const(0.5) # each input has a 50% chance of being set to zero ae.pt_optimization_method.set_const(NNOM_GRADIENT_DESCENT) # train using gradient descent ae.pt_gd_learning_rate.set_const(0.01) ae.pt_gd_mini_batch_size.set_const(128) ae.pt_max_num_epochs.set_const(50) ae.pt_epsilon.set_const(0.0) # disable automatic convergence testing # uncomment this line to allow the training progress to be printed on the console #from modshogun import MSG_INFO; ae.io.set_loglevel(MSG_INFO) # start pre-training. this might take some time ae.pre_train(Xtrain) """ Explanation: Pre-training Now we can pre-train the network. To illustrate exactly what's going to happen, we'll give the layers some labels: L1 for the input layer, L2 for the first hidden layer, and so on up to L5 for the output layer. In pre-training, an autoencoder will formed for each encoding layer (layers up to the middle layer in the network). So here we'll have two autoencoders: L1->L2->L5, and L2->L3->L4. The first autoencoder will be trained on the raw data and used to initialize the weights and biases of layers L2 and L5 in the deep autoencoder. After the first autoencoder is trained, we use it to transform the raw data into the states of L2. These states will then be used to train the second autoencoder, which will be used to initialize the weights and biases of layers L3 and L4 in the deep autoencoder. The operations described above are performed by the the pre_train() function. Pre-training parameters for each autoencoder can be controlled using the pt_* public attributes of DeepAutoencoder. Each of those attributes is an SGVector whose length is the number of autoencoders in the deep autoencoder (2 in our case). It can be used to set the parameters for each autoencoder indiviually. SGVector's set_const() method can also be used to assign the same parameter value for all autoencoders. Different noise types can be used to corrupt the inputs in a denoising autoencoder. Shogun currently supports 2 noise types: dropout noise, where a random portion of the inputs is set to zero at each iteration in training, and gaussian noise, where the inputs are corrupted with random gaussian noise. The noise type and strength can be controlled using pt_noise_type and pt_noise_parameter. Here, we'll use dropout noise. End of explanation """ ae.set_noise_type(AENT_DROPOUT) # same noise type we used for pre-training ae.set_noise_parameter(0.5) ae.set_max_num_epochs(50) ae.set_optimization_method(NNOM_GRADIENT_DESCENT) ae.set_gd_mini_batch_size(128) ae.set_gd_learning_rate(0.0001) ae.set_epsilon(0.0) # start fine-tuning. this might take some time _ = ae.train(Xtrain) """ Explanation: Fine-tuning After pre-training, we can train the autoencoder as a whole to fine-tune the parameters. 
Training the whole autoencoder is performed using the train() function. Training parameters are controlled through the public attributes, same as a regular neural network. End of explanation """ # get a 50-example subset of the test set subset = Xtest[:,0:50].copy() # corrupt the first 25 examples with multiplicative noise subset[:,0:25] *= (random.random((256,25))>0.5) # corrupt the other 25 examples with additive noise subset[:,25:50] += random.random((256,25)) # obtain the reconstructions reconstructed_subset = ae.reconstruct(RealFeatures(subset)) # plot the corrupted data and the reconstructions figure(figsize=(10,10)) for i in range(50): ax1=subplot(10,10,i*2+1) ax1.imshow(subset[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r) ax1.set_xticks([]) ax1.set_yticks([]) ax2=subplot(10,10,i*2+2) ax2.imshow(reconstructed_subset[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r) ax2.set_xticks([]) ax2.set_yticks([]) """ Explanation: Evaluation Now we can evaluate the autoencoder that we trained. We'll start by providing it with corrupted inputs and looking at how it will reconstruct them. The function reconstruct() is used to obtain the reconstructions: End of explanation """ # obtain the weights matrix of the first hidden layer # the 512 is the number of biases in the layer (512 neurons) # the transpose is because numpy stores matrices in row-major format, and Shogun stores # them in column major format w1 = ae.get_layer_parameters(1)[512:].reshape(256,512).T # visualize the weights between the first 100 neurons in the hidden layer # and the neurons in the input layer figure(figsize=(10,10)) for i in range(100): ax1=subplot(10,10,i+1) ax1.imshow(w1[i,:].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r) ax1.set_xticks([]) ax1.set_yticks([]) """ Explanation: The figure shows the corrupted examples and their reconstructions. The top half of the figure shows the ones corrupted with multiplicative noise, the bottom half shows the ones corrupted with additive noise. We can see that the autoencoders can provide decent reconstructions despite the heavy noise. Next we'll look at the weights that the first hidden layer has learned. To obtain the weights, we can call the get_layer_parameters() function, which will return a vector containing both the weights and the biases of the layer. The biases are stored first in the array followed by the weights matrix in column-major format. End of explanation """ from modshogun import NeuralSoftmaxLayer nn = ae.convert_to_neural_network(NeuralSoftmaxLayer(10)) nn.set_max_num_epochs(50) nn.set_labels(Ytrain) _ = nn.train(Xtrain) """ Explanation: Now, we can use the autoencoder to initialize a supervised neural network. The network will have all the layer of the autoencoder up to (and including) the middle layer. We'll also add a softmax output layer. So, the network will look like: L1->L2->L3->Softmax. 
The network is obtained by calling convert_to_neural_network(): End of explanation """ from modshogun import MulticlassAccuracy predictions = nn.apply_multiclass(Xtest) accuracy = MulticlassAccuracy().evaluate(predictions, Ytest) * 100 print "Classification accuracy on the test set =", accuracy, "%" """ Explanation: Next, we'll evaluate the accuracy on the test set: End of explanation """ from modshogun import DynamicObjectArray, NeuralInputLayer, NeuralConvolutionalLayer, CMAF_RECTIFIED_LINEAR conv_layers = DynamicObjectArray() # 16x16 single channel images conv_layers.append_element(NeuralInputLayer(16,16,1)) # the first encoding layer: 5 feature maps, filters with radius 2 (5x5 filters) # and max-pooling in a 2x2 region: its output will be 10 8x8 feature maps conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 5, 2, 2, 2, 2)) # the second encoding layer: 15 feature maps, filters with radius 2 (5x5 filters) # and max-pooling in a 2x2 region: its output will be 20 4x4 feature maps conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 15, 2, 2, 2, 2)) # the first decoding layer: same structure as the first encoding layer conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 5, 2, 2)) # the second decoding layer: same structure as the input layer conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 1, 2, 2)) conv_ae = DeepAutoencoder(conv_layers) """ Explanation: Convolutional Autoencoders Convolutional autoencoders [3] are the adaptation of autoencoders to images (or other spacially-structured data). They are built with convolutional layers where each layer consists of a number of feature maps. Each feature map is produced by convolving a small filter with the layer's inputs, adding a bias, and then applying some non-linear activation function. Additionally, a max-pooling operation can be performed on each feature map by dividing it into small non-overlapping regions and taking the maximum over each region. In this section we'll pre-train a convolutional network as a stacked autoencoder and use it for classification. In Shogun, convolutional autoencoders are constructed and trained just like regular autoencoders. Except that we build the autoencoder using CNeuralConvolutionalLayer objects: End of explanation """ conv_ae.pt_noise_type.set_const(AENT_DROPOUT) # use dropout noise conv_ae.pt_noise_parameter.set_const(0.3) # each input has a 30% chance of being set to zero conv_ae.pt_optimization_method.set_const(NNOM_GRADIENT_DESCENT) # train using gradient descent conv_ae.pt_gd_learning_rate.set_const(0.002) conv_ae.pt_gd_mini_batch_size.set_const(100) conv_ae.pt_max_num_epochs[0] = 30 # max number of epochs for pre-training the first encoding layer conv_ae.pt_max_num_epochs[1] = 10 # max number of epochs for pre-training the second encoding layer conv_ae.pt_epsilon.set_const(0.0) # disable automatic convergence testing # start pre-training. this might take some time conv_ae.pre_train(Xtrain) """ Explanation: Now we'll pre-train the autoencoder: End of explanation """ conv_nn = ae.convert_to_neural_network(NeuralSoftmaxLayer(10)) # train the network conv_nn.set_epsilon(0.0) conv_nn.set_max_num_epochs(50) conv_nn.set_labels(Ytrain) # start training. 
this might take some time _ = conv_nn.train(Xtrain) """ Explanation: And then convert the autoencoder to a regular neural network for classification: End of explanation """ predictions = conv_nn.apply_multiclass(Xtest) accuracy = MulticlassAccuracy().evaluate(predictions, Ytest) * 100 print "Classification accuracy on the test set =", accuracy, "%" """ Explanation: And evaluate it on the test set: End of explanation """
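A quick visual check of the convolutional autoencoder itself can be done the same way as for the fully-connected model earlier, by calling reconstruct() on a few test images. This is only a sketch, under the assumption that conv_ae exposes the same DeepAutoencoder API used for ae above; it is not part of the original notebook flow:

# reconstruct the first 10 test digits with the convolutional autoencoder
conv_recons = conv_ae.reconstruct(RealFeatures(Xtest[:,0:10].copy()))

figure(figsize=(10,2))
for i in range(10):
    ax = subplot(2, 10, i+1)
    ax.imshow(Xtest[:,i].reshape((16,16)), interpolation='nearest', cmap=cm.Greys_r)
    ax.set_xticks([]); ax.set_yticks([])
    ax = subplot(2, 10, i+11)
    ax.imshow(conv_recons[:,i].reshape((16,16)), interpolation='nearest', cmap=cm.Greys_r)
    ax.set_xticks([]); ax.set_yticks([])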
nbelaid/nbelaid.github.io
dev/_trush/mooc_python-machine-learning/Assignment+1.ipynb
mit
import numpy as np import pandas as pd from sklearn.datasets import load_breast_cancer cancer = load_breast_cancer() #print(cancer.DESCR) # Print the data set description """ Explanation: You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource. Assignment 1 - Introduction to Machine Learning For this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below). End of explanation """ cancer.keys() """ Explanation: The object returned by load_breast_cancer() is a scikit-learn Bunch object, which is similar to a dictionary. End of explanation """ # You should write your whole answer within the function provided. The autograder will call # this function and compare the return value against the correct solution value def answer_zero(): # This function returns the number of features of the breast cancer dataset, which is an integer. # The assignment question description will tell you the general format the autograder is expecting return len(cancer['feature_names']) # You can examine what your function returns by calling it in the cell. If you have questions # about the assignment formats, check out the discussion forums for any FAQs answer_zero() """ Explanation: Question 0 (Example) How many features does the breast cancer dataset have? This function should return an integer. End of explanation """ def answer_one(): # Your code here cancerDF = pd.DataFrame(cancer['data'], index=range(0, 569, 1)) cancerDF.columns=cancer['feature_names'] cancerDF['target'] = cancer['target'] return cancerDF # Return your answer answer_one() """ Explanation: Question 1 Scikit-learn works with lists, numpy arrays, scipy-sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does however help make many things easier such as munging data, so let's practice creating a classifier with a pandas DataFrame. Convert the sklearn.dataset cancer to a DataFrame. *This function should return a (569, 31) DataFrame with * *columns = * ['mean radius', 'mean texture', 'mean perimeter', 'mean area', 'mean smoothness', 'mean compactness', 'mean concavity', 'mean concave points', 'mean symmetry', 'mean fractal dimension', 'radius error', 'texture error', 'perimeter error', 'area error', 'smoothness error', 'compactness error', 'concavity error', 'concave points error', 'symmetry error', 'fractal dimension error', 'worst radius', 'worst texture', 'worst perimeter', 'worst area', 'worst smoothness', 'worst compactness', 'worst concavity', 'worst concave points', 'worst symmetry', 'worst fractal dimension', 'target'] *and index = * RangeIndex(start=0, stop=569, step=1) End of explanation """ def answer_two(): cancerdf = answer_one() # Your code here classDistribution = cancerdf['target'].value_counts() classDistribution.index = ['benign', 'malignant'] return classDistribution # Return your answer answer_two() """ Explanation: Question 2 What is the class distribution? (i.e. how many instances of malignant (encoded 0) and how many benign (encoded 1)?) 
This function should return a Series named target of length 2 with integer values and index = ['malignant', 'benign'] End of explanation """ print def answer_three(): cancerdf = answer_one() # Your code here X = cancerdf.loc[:, cancerdf.columns != 'target'] y = cancerdf.loc[:,'target'] return X,y answer_three() """ Explanation: Question 3 Split the DataFrame into X (the data) and y (the labels). This function should return a tuple of length 2: (X, y), where * X has shape (569, 30) * y has shape (569,). End of explanation """ from sklearn.model_selection import train_test_split def answer_four(): X, y = answer_three() # Your code here X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) return X_train, X_test, y_train, y_test """ Explanation: Question 4 Using train_test_split, split X and y into training and test sets (X_train, X_test, y_train, and y_test). Set the random number generator state to 0 using random_state=0 to make sure your results match the autograder! This function should return a tuple of length 4: (X_train, X_test, y_train, y_test), where * X_train has shape (426, 30) * X_test has shape (143, 30) * y_train has shape (426,) * y_test has shape (143,) End of explanation """ from sklearn.neighbors import KNeighborsClassifier def answer_five(): X_train, X_test, y_train, y_test = answer_four() # Your code here knn = KNeighborsClassifier(n_neighbors = 1) knn.fit(X_train, y_train) return knn # Return your answer answer_five() """ Explanation: Question 5 Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with X_train, y_train and using one nearest neighbor (n_neighbors = 1). *This function should return a * sklearn.neighbors.classification.KNeighborsClassifier. End of explanation """ def answer_six(): cancerdf = answer_one() means = cancerdf.mean()[:-1].values.reshape(1, -1) # Your code here knn = answer_five() y_predict = knn.predict(means) return y_predict # Return your answer answer_six() """ Explanation: Question 6 Using your knn classifier, predict the class label using the mean value for each feature. Hint: You can use cancerdf.mean()[:-1].values.reshape(1, -1) which gets the mean value for each feature, ignores the target column, and reshapes the data from 1 dimension to 2 (necessary for the precict method of KNeighborsClassifier). This function should return a numpy array either array([ 0.]) or array([ 1.]) End of explanation """ def answer_seven(): X_train, X_test, y_train, y_test = answer_four() knn = answer_five() # Your code here y_predict = knn.predict(X_test) return y_predict # Return your answer answer_seven() """ Explanation: Question 7 Using your knn classifier, predict the class labels for the test set X_test. This function should return a numpy array with shape (143,) and values either 0.0 or 1.0. End of explanation """ def answer_eight(): X_train, X_test, y_train, y_test = answer_four() knn = answer_five() # Your code here score = knn.score(X_test, y_test) return score# Return your answer answer_eight() """ Explanation: Question 8 Find the score (mean accuracy) of your knn classifier using X_test and y_test. This function should return a float between 0 and 1 End of explanation """ def accuracy_plot(): import matplotlib.pyplot as plt %matplotlib notebook X_train, X_test, y_train, y_test = answer_four() # Find the training and testing accuracies by target value (i.e. 
malignant, benign) mal_train_X = X_train[y_train==0] mal_train_y = y_train[y_train==0] ben_train_X = X_train[y_train==1] ben_train_y = y_train[y_train==1] mal_test_X = X_test[y_test==0] mal_test_y = y_test[y_test==0] ben_test_X = X_test[y_test==1] ben_test_y = y_test[y_test==1] knn = answer_five() scores = [knn.score(mal_train_X, mal_train_y), knn.score(ben_train_X, ben_train_y), knn.score(mal_test_X, mal_test_y), knn.score(ben_test_X, ben_test_y)] plt.figure() # Plot the scores as a bar chart bars = plt.bar(np.arange(4), scores, color=['#4c72b0','#4c72b0','#55a868','#55a868']) # directly label the score onto the bars for bar in bars: height = bar.get_height() plt.gca().text(bar.get_x() + bar.get_width()/2, height*.90, '{0:.{1}f}'.format(height, 2), ha='center', color='w', fontsize=11) # remove all the ticks (both axes), and tick labels on the Y axis plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='off', labelbottom='on') # remove the frame of the chart for spine in plt.gca().spines.values(): spine.set_visible(False) plt.xticks([0,1,2,3], ['Malignant\nTraining', 'Benign\nTraining', 'Malignant\nTest', 'Benign\nTest'], alpha=0.8); plt.title('Training and Test Accuracies for Malignant and Benign Cells', alpha=0.8) # Uncomment the plotting function to see the visualization, # Comment out the plotting function when submitting your notebook for grading #accuracy_plot() """ Explanation: Optional plot Try using the plotting function below to visualize the differet predicition scores between training and test sets, as well as malignant and benign cells. End of explanation """
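def accuracy_by_class():
    # Non-plotting companion to accuracy_plot (an optional sketch, not a graded answer):
    # report the knn accuracy separately for malignant (0) and benign (1) cells on the
    # training and test splits, reusing the helper functions defined above.
    X_train, X_test, y_train, y_test = answer_four()
    knn = answer_five()
    scores = {}
    for split_name, X_, y_ in [('training', X_train, y_train), ('test', X_test, y_test)]:
        for label, class_name in [(0, 'malignant'), (1, 'benign')]:
            mask = (y_ == label)
            scores['{} {}'.format(class_name, split_name)] = knn.score(X_[mask], y_[mask])
    return scores

# Uncomment to inspect the numbers behind the bar chart
# accuracy_by_class()
"""
Explanation: If you prefer raw numbers over the bar chart, the small optional helper below computes the same four accuracies and returns them as a dictionary.
End of explanation
"""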
celiasmith/syde556
SYDE 556 Lecture 1 Introduction.ipynb
gpl-2.0
from IPython.display import YouTubeVideo YouTubeVideo('U_Q6Xjz9QHg', width=720, height=400, loop=1, autoplay=0, playlist='U_Q6Xjz9QHg') """ Explanation: SYDE 556/750: Simulating Neurobiological Systems Accompanying Readings: Chapter 1 End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('jHxyP-nUhUY', width=500, height=400, autoplay=0, start=60) """ Explanation: Overall Goal <img src="files/lecture1/book_cover.png" width="200" style="float:right"> - Building brains! - Why? - To figure out how brains work (health applications) - To apply this knowledge to building systems (AI applications) Administration Course website: http://compneuro.uwaterloo.ca/courses/syde-750.html Contact information Chris Eliasmith: <a href="mailto:celiasmith@uwaterloo.ca">celiasmith@uwaterloo.ca</a> Course times: Mon & Wed. 9:00a-10:20a (10:30a-11:20p Wed for 750) Location: Mon: E5-6004, Wed: E7-5343 Office hours: by appointment Coursework Four assignments (60%) 20%, 20%, 10%, 10% About two weeks for each assignment Everyone writes their own code, generates their own graphs, writes their own answers Final project (40%) Make a model of some neural system For 556 students, this can be an extension of something seen in class For 750 students, this must be more of a research project with more novelty ideas Get your idea approved via email before Reading Week (Feb 18) Schedule <table style="border: 1px solid black;" cellspacing="10"> <tr><th>Week</th><th>Reading</th><th>Topic</th><th>Assignments</th></tr> <tr><td style="padding:0 15px 0 15px;">Jan 7</td><td style="padding:0 15px 0 15px;">Chpt 1</td><td>Introduction</td><td style="padding:0 15px 0 15px;"></td></tr> <tr><td style="padding:0 15px 0 15px;">Jan 9, 14</td><td style="padding:0 15px 0 15px;">Chpt 2,4</td><td>Neurons, Population Representation</td><td style="padding:0 15px 0 15px;">#1 posted</td></tr> <tr><td style="padding:0 15px 0 15px;">Jan 16, 21</td><td style="padding:0 15px 0 15px;">Chpt 4</td><td>Temporal Representation</td><td style="padding:0 15px 0 15px;"></td></tr> <tr><td style="padding:0 15px 0 15px;">Jan 23, 28, 30</td><td style="padding:0 15px 0 15px;">Chpt 5,6</td><td>Feedforward Transformations</td><td style="padding:0 15px 0 15px;">#1 due (23rd at midnight); #2 posted</td></tr> <tr><td style="padding:0 15px 0 15px;">Feb 4, 6, 11</td><td style="padding:0 15px 0 15px;">Chpt 6,8</td><td>Dynamics</td><td style="padding:0 15px 0 15px;"></td></tr> <tr><td style="padding:0 15px 0 15px;">Feb 13, 25</td><td style="padding:0 15px 0 15px;">Chpt 7</td><td>Analysis of Representations</td><td style="padding:0 15px 0 15px;">#2 due (15th at midnight); #3 posted</td></tr> <tr><td style="padding:0 15px 0 15px;">Feb 18, 20</td><td></td><td>*Reading Week*</td><td></td></tr> <tr><td style="padding:0 15px 0 15px;">Feb 27, Mar 4</td><td style="padding:0 15px 0 15px;">Provided</td><td>Symbols</td><td style="padding:0 15px 0 15px;"></td></tr> <tr><td style="padding:0 15px 0 15px;">Mar 6, 11</td><td style="padding:0 15px 0 15px;">Chpt 8</td><td>Memory</td><td style="padding:0 15px 0 15px;">#3 due (6th at midnight)</td></tr> <tr><td style="padding:0 15px 0 15px;">Mar 13, 18</td><td style="padding:0 15px 0 15px;">Provided</td><td>Action Selection</td><td style="padding:0 15px 0 15px;">#4 due (20th at midnight)</td></tr> <tr><td style="padding:0 15px 0 15px;">Mar 20, 25</td><td style="padding:0 15px 0 15px;">Chpt 9</td><td>Learning</td><td style="padding:0 15px 0 15px;"></td></tr> <tr><td style="padding:0 15px 0 15px;">Mar 27</td><td 
style="padding:0 15px 0 15px;"></td><td>Conclusion</td><td style="padding:0 15px 0 15px;"></td></tr> <tr><td style="padding:0 15px 0 15px;">Apr 1, Apr 3</td><td style="padding:0 15px 0 15px;"></td><td>Project Presentations</td><td style="padding:0 15px 0 15px;"></td></tr> </table> To Do: Get textbook (Eliasmith & Anderson, 2003, Neural Engineering), start reading. Be able to run Juypter (old: ipython) notebooks Anaconda is probably simplest Decide what language you'll do your assignments in (Python highly recommended, need permission for others) Start thinking about a project... already! Focus of the Course Theoretical Neuroscience <img src="files/lecture1/neuro_levels.png" width="600" style="float:right"> - How does the mind work? - Most complex and most interesting system humanity has ever studied - Why study anything else? - How should we go about studying it? - What techniques/tools? - How do we know if we're making progress? - How do we deal with the complexity? A Useful Analogy <img src="files/lecture1/physics_analogy.png" width="500" style="float:right"> - What is Theoretical Neuroscience? - A useful analogy is to theoretical physics - Similarities - Methods are similar - Goals are similar (quantification) - Differences - Central question "What exists? vs Who are we?" - More simulation (because of nonlinearities) in biology Neural Modelling Let's build it <img src="files/lecture1/understand.gif" style="float:right"> Specify theory in enough detail that this is possible Tends to get complex, so need computer simulation Bring together levels and modeling methods Single neuron models (levels of detail; e.g. spikes, spatial structure, various ion channels, etc.) Small network models (levels of detail; e.g. spiking neurons, rate neurons, mean fields, etc.) Large network/cognitive models (levels of detail; e.g. biophysics, pure computation, anatomy, etc.) Ideally allow all levels of detail below any higher level to be included as desired. 'Correct' level depends on questions being asked. Problems with current approaches Large-scale neural models (e.g. Human Brain Project, Synapse Project, etc.) Lack of function or behaviour Can't compare to psychological data Assumes canonical algorithm repeats e.g., Measurements from one small part (hippocampus) are valid everywhere But, different parts of the brain are very different (connectivity, cell types, inputs/outputs) Expects intelligence to 'emerge' Unclear what 'emergence' means, how it will work, or what it explains Wishful thinking? Cognitve models (e.g. ACT-R, Soar, etc.) Disconnected from neuroscience, can't compare to neural data Trying to map components of the model to brain areas When a component is active, maybe neurons in that area are more active? No "bridging laws" Like having rules of chemistry that never mention that it's all built out of atoms and electrons No constraints on the equations Just anything that can be written down Many possibilities; hard to figure out what matches human data best Maybe that's okay Do we understand the brain enough to make this connection and constrain theories? When understanding a word processor, do we worry about transistors? 
The Brain 2 kg (2% of body weight) 20 Watts (25% of power consumption) Area: 4 sheets of paper Neurons: 100 billion (150,000 per $mm^2$) End of explanation """ from IPython.display import HTML HTML('<H2>A Neuron</H2><p><iframe src="https://rawgit.com/celiasmith/syde556/master/lecture1/neuron.html" width=825 height=475></iframe>') """ Explanation: Brain structures Lots of visually obvious structure Lots of greek and latin names to remember locus coeruleus, thalamus, amygdala, hypothalamus, substantia nigra, etc etc <img src="files/lecture1/brain1.jpg" width="300" style="float:left"> <img src="files/lecture1/brain2.png" width="300" float:right> End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('F37kuXObIBU', width=720, height=500, start=8*60+35) """ Explanation: Neurons in the brain 100 billion 100's or 1000's of distinct types (distinguished via anatomy and/or physiology) Axon length: from $10^{-4}$ to $5$ m Each neuron: 500-200,000 inputs and outputs 72km of axons Communication: 100's of different neurotransmitters Neuron communication: Synapses <img src="files/lecture1/NeuronStructure.jpg"> What it really looks like <img src="files/lecture1/brainbow2.jpg"> What it really really looks like End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('KE952yueVLA', width=640, height=390) """ Explanation: Kinds of data from the brain Lesion studies What are the effects of damaging different parts of the brain? Occipital cortex: blindness (really blindsight) Inferior frontal gyrus: can't speak (Broca's area) Posterior superior temporal gyrus: can't understand speech (Wernicke's area) Fusiform gyrus: can't recognize faces (and other visually complex objects) Ventral medial prefrontal cortex: moral judgement??? 
(Phineas Gage) etc, etc, etc fMRI Functional Magnetic Resonance Imaging <img src="files/lecture1/fMRI.jpg" style="float:right"> Measure blood oxygenation levels in the brain show the difference between two tasks averaged over many trials and patients Measured while performing tasks ~4 second between scans some attempts at going faster, but blood vessels don't change much faster than this Shows where energy is being used in the brain equivalent to figuring out how a CPU works by measuring temperature a bit more fine-grained than lesion studies Good spatial resolution, low temporal resolution Neurosynth EEG Electrical activity at the scalp Large-scale communication between areas High time resolution, low spatial resolution <img src="files/lecture1/eeg-person.jpg" width="200" style="float:left"> <img src="files/lecture1/eeg-data.jpg" width="200" style="float:left"> <img src="files/lecture1/p300.jpg" width="200" style="float:left"> Single cell recording Place electrodes (one or many) into the brain, record from it not necessarily right at a neuron Pick up local electrical potentials You can hear neural 'spikes' High temporal resolution only one (or a few) cells End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('lfNVv0A8QvI', width=640, height=390) """ Explanation: <img src="files/lecture1/catgratings.gif" width="400" style="float:left"> Multielectrode recordings Put 'tetrodes' or multi-electrode arrays into the brain Post-processing: "Spike sorting" Local field potentials (LFPs) High temporal resolution, max ~100 cells <img src="files/lecture1/lfps.png" width="400" style="float:left"><img src="files/lecture1/multielectrode.jpg" width="300" style="float:right"> End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('DGBy-BGiZIM', width=640, height=360) """ Explanation: Calcium Imaging Use calcium to glow when Ca2+ ions bond Happens a lot during neural activity and spike generation Good spatial and good temporal resolution E.g. In a fish embryo End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('CpejbZ-XEyM', width=640, height=360) """ Explanation: In a stalking fish End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('v7uRFVR9BPU', width=640, height=390) """ Explanation: Optogenetics Allows stimulation and recording from select parts of the brain Just those parts expressing a light sensitive proteins that are stimulated High spatial and temporal resolution (but local) End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('_UFOSHZ22q4', width=600, height=400, start=60) """ Explanation: What do we know so far? Lots of details Data: "The proportion of type A neurons in area X is Y" Conclusion: "Therefore, the proportion of type A neurons in area X is Y". Hard to get a big picture No good methods for generalizing from data "Data-rich and theory-poor" (Churchland & Sejnowski, 1994; still true) Need some way to connect these details Need unifying theory Recall: Neural Modeling What I cannot create, I do not understand <img src="files/lecture1/understand.gif" style="float:right"> Build a computer simulation Do to neuroscience what Newton did to physics Too complex to be analytically tractable, so use computer simulation Can we use this to connect the levels? 
Single neuron simulation Hodgkin & Huxley, 1952 <img src="files/lecture1/hh-neuron1.png" width="600"> Single neuron simulation Hodgkin & Huxley, 1952 <img src="files/lecture1/hh-neuron2.png" width="600"> Single neuron simulation <img src="files/lecture1/hh-circuit2.jpg" width="400" style="float:left"> <img src="files/lecture1/hh-circuit.png" width="400" style="float:left"> <img src="files/lecture1/hh-circuit3.gif" width="400" style="float:left"> Millions of neurons End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('WmChhExovzY', width=600, height=400) """ Explanation: Billions of neurons Simplify the neuron model and you can run more of them End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('2j9rRHChtXk', width=640, height=390) """ Explanation: The controversy What level of detail for the neurons? How should they be connected? IBM SyNAPSE project (Dharmendra Modha) Billions of neurons, but very simple models Randomly connected 2009: "Cat"-scale brain (1 billion neurons) 2012: "Human"-scale brain (500 billion neurons; 5x human!) Called a "hoax and PR stunt" by: Blue Brain (Henry Markram) Much more detailed neurons Statistically connected (i.e. similar to hippocampus) How much detail is enough? How could we know? What actually matters... Connecting brain models to behaviour How can we build models that actually do something? How should we connect realistic neurons so they work together? The Neural Engineering Framework <img src="files/lecture1/nef_book.png" width="200" style="border:1px solid black; float:right"> Our attempt Probably wrong, but got to start somewhere Three principles Representation Transformation Dynamics Building behaviour out of detailed low-level components Representation How do neurons represent information? (What is the neural code?) What is the mapping between a value to be stored and the activity of a group of neurons? Examples: Edge detection in retina Place cells Every group of neurons can be thought of as representing something Each neuron has some preferred value(s) Neurons fire more strongly the closer the value is to that preferred value Values are vectors Transformation Connections compute functions on those vectors Activity of one group of neurons causes another group to fire One group may represent $x$, connected to another group representing $y$ Whatever firing pattern we get in $y$ due to $x$ is a function $y = f(x)$ Can find what class of functions are well approximated this way Puts limits on the algorithms we can implement with neurons Dynamics Recurrent connections (feedback) Turns out to allow us to compute functions of this form: ${dx \over dt} = f(x, u)$ $x$ is what the neurons represent, $u$ is the input neurons and $f()$ is the transformation Great for implementing all of control theory (i.e., dynamical systems) Example: memory: (${dx \over dt} = u$) Examples This approach gives us a neural compiler Given a quantitative description of a behaviour (e.g. an algorithm), you can solve for the connections between neurons that will approximate that behaviour Works for a wide variety of neuron models Number of neurons affects accuracy Neuron properties influence timing and computation Can make predictions (e.g. 
rats head direction and path integration) Vision: character recognition End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('VWUhCzUDZ70', width=640, height=390) """ Explanation: Vision: >1000 categories End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('sUvHCs5y0o8', width=640, height=360) """ Explanation: Problem solving: Tower of Hanoi End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('f6Ul5TYK5-o', width=640, height=360) """ Explanation: Spaun: digit recognition End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('WNnMhF7rnYo', width=640, height=390) """ Explanation: Spaun: copy drawing End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('mP7DX6x9PX8', width=640, height=390) """ Explanation: Spaun: addition by counting End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('Q_LRvnwnYp8', width=640, height=390) """ Explanation: Spaun: pattern completion End of explanation """
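import nengo
import numpy as np

# A minimal sketch of Principle 3 (dynamics): a recurrently connected population implementing
# the integrator dx/dt = u, i.e. a simple neural memory. Using the Nengo package here is an
# assumption -- the simulator itself is only introduced later in the course.
model = nengo.Network(label='NEF integrator sketch')
with model:
    stim = nengo.Node(lambda t: 1.0 if t < 0.2 else 0.0)   # brief input pulse u(t)
    x = nengo.Ensemble(n_neurons=100, dimensions=1)        # population representing x
    tau = 0.1                                              # synaptic time constant
    nengo.Connection(stim, x, transform=tau, synapse=tau)  # input scaled by tau
    nengo.Connection(x, x, synapse=tau)                    # recurrent connection gives dx/dt = u
    probe = nengo.Probe(x, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
# sim.data[probe] should hold a value near 0.2 after the pulse ends -- a working memory
"""
Explanation: To make the three principles concrete, here is a minimal sketch (assuming the nengo package is installed; it is introduced properly later in the course) of Principle 3: a recurrently connected population that implements the integrator ${dx \over dt} = u$, i.e. the memory example mentioned above.
End of explanation
"""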
UWPRG/Python
tutorials/MetaD countours.ipynb
mit
import numpy as np import matplotlib.pyplot as plt unbiasedCVs = np.genfromtxt('NVT_monitor/COLVAR',comments='#'); biasedCVs = np.genfromtxt('MetaD/COLVAR',comments='#'); unbiasedCVsHOT = np.genfromtxt('NVT_monitor/hot/COLVAR',comments='#'); """ Explanation: Jim's notebook on contour plots, showing projection of 2D data on top of the countour plot End of explanation """ %matplotlib inline fig = plt.figure(figsize=(6,6)) axes = fig.add_subplot(111) stride=5 xlabel='$\Phi$' ylabel='$\Psi$' axes.plot(biasedCVs[::stride,1],biasedCVs[::stride,2],marker='o',markersize=4,linestyle='none') axes.plot(unbiasedCVs[::stride,1],unbiasedCVs[::stride,2],marker='o',markersize=4,linestyle='none',markerfacecolor='yellow') axes.set_xlabel(xlabel, fontsize=20) axes.set_ylabel(ylabel, fontsize=20) plt.show() """ Explanation: Plotting biased and unbiased CVS End of explanation """ #read the data in from a text file fesdata = np.genfromtxt('MetaD/fes.dat',comments='#'); fesdata = fesdata[:,0:3] #what was your grid size? this calculates it dim=int(np.sqrt(np.size(fesdata)/3)) #some post-processing to be compatible with contourf X=np.reshape(fesdata[:,0],[dim,dim],order="F") #order F was 20% faster than A/C Y=np.reshape(fesdata[:,1],[dim,dim],order="F") Z=np.reshape((fesdata[:,2]-np.min(fesdata[:,2]))/4.184,[dim,dim],order="F") #convert to kcal/mol #what spacing do you want? assume units are in kJ/mol spacer=1 lines=20 levels=np.linspace(0,lines*spacer,num=(lines+1),endpoint=True) fig=plt.figure(figsize=(10,8)) axes = fig.add_subplot(111) plt.contourf(X, Y, Z, levels, cmap=plt.cm.bone,) plt.colorbar() plt.xlabel('$\Phi$') plt.ylabel('$\Psi$') axes.set_xlabel(xlabel, fontsize=20) axes.set_ylabel(ylabel, fontsize=20) stride=10 #axes.plot(biasedCVs[::stride,1],biasedCVs[::stride,2],marker='o',markersize=8,linestyle='none',markerfacecolor='cyan') axes.plot(unbiasedCVs[::stride,1],unbiasedCVs[::stride,2],marker='o',markersize=8,linestyle='none',markerfacecolor='blue') #axes.plot(unbiasedCVsHOT[::stride,1],unbiasedCVsHOT[::stride,2],marker='o',markersize=8,linestyle='none',markerfacecolor='red') unbiasedCVs = np.genfromtxt('NVT_monitor/other_basin/COLVAR',comments='#'); stride=5 axes.plot(unbiasedCVs[::stride,1],unbiasedCVs[::stride,2],marker='o',markersize=8,linestyle='none',markerfacecolor='yellow') plt.savefig('fes_bias.png') plt.show() """ Explanation: Plotting contour plot of biased FES End of explanation """
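# A possible follow-up (illustrative sketch): estimate the free-energy difference between two
# basins directly from the FES grid computed above. The basin definitions below are rough
# placeholders -- adjust the (Phi, Psi) ranges to the regions you actually care about.
basinA = (X < 0) & (Y > 1.0)
basinB = (X < 0) & (Y < 0.5)
dG = Z[basinB].min() - Z[basinA].min()
print('Estimated free-energy difference between the two basins: %.2f kcal/mol' % dG)
"""
Explanation: Once the contour plot looks reasonable, a number that is often wanted next is the free-energy difference between basins; the short sketch below reads it off the same FES grid (the basin boundaries are placeholders, tune them to your own landscape).
End of explanation
"""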
unmrds/cc-python
Name_Data.ipynb
apache-2.0
# http://api.census.gov/data/2010/surname import requests import json import pandas as pd import matplotlib.pyplot as plt """ Explanation: An Introductory Python Workflow: US Census Surname Data This notebook provides working examples of many of the concepts introduced earlier: Importing modules or libraries to extend basic Python functionality Declaring and using variables Python data types and data structures Flow control Using the 2010 surname data from the US Census, we will develop a workflow to accomplish the following: Retrieve information about the dataset API Retrieve data about a single surname Output surname data in tabular form Visualize surname data using a pie chart Sample dataset Decennial Census Surname Files (2010) https://www.census.gov/data/developers/data-sets/surnames.html https://api.census.gov/data/2010/surname.html Citation US Census Bureau (2016) Decennial Census Surname Files (2010) Retrieved from https://api.census.gov/data/2010/surname.jsonstructures 1. Import modules The modules used in this exercise are popular and under active development. Follow the links for more information about methods, syntax, etc. Requests: http://docs.python-requests.org/en/master/ JSON: https://docs.python.org/3/library/json.html Pandas: http://pandas.pydata.org/ Matplotlib: https://matplotlib.org/ Look for information about or links to the API, developer's documentation, etc. Helpful examples are often included. Note that we are providing an alias for Pandas and matplotlib. Whenever we need to call a method from those module, we can use the alias. End of explanation """ # First, get the basic info about the dataset. # References: Dataset API (https://api.census.gov/data/2010/surname.html) # Requests API (http://docs.python-requests.org/en/master/) # Python 3 JSON API (https://docs.python.org/3/library/json.html) api_base_url = "http://api.census.gov/data/2010/surname" api_info = requests.get(api_base_url) api_json = api_info.json() # Uncomment the next line(s) to see the response content. # NOTE: JSON and TEXT don't look much different to us. They can look very different to a machine! #print(api_info.text) print(json.dumps(api_json, indent=4)) # The output is a dictionary - data are stored as key:value pairs and can be nested. # Request and store a local copy of the dataset variables. # Note that the URL could be hard coded just from referencing the API, but # we are navigating the JSON data. var_link = api_json['dataset'][0]['c_variablesLink'] print(var_link) # Use the variable info link to make a new request variables = requests.get(var_link) jsonData = variables.json() variable_data = jsonData['variables'] # Note that this is a dictionary of dictionaries. print(json.dumps(variable_data, indent=4)) print(variable_data.keys()) """ Explanation: 2. Basic interactions with the Census dataset API Combining data from two API endpoints into a human-readable table The dataset in our example is not excessively large, so we can explore different approaches to interacting with it: Download some or all data to the computer we're using ('local'). Keep the data in memory and do stuff. Only download what we need when we need it. Doing stuff may require additional calls to the API. Both have pros and cons. Both are used in the following examples. 
Some points of interest: The data are not provided in tabular form Human-readable variable names and definitions are stored separately from the data In order to make a human readable table we need to: Download variable definitions Download data Replace shorthand variable codes with human readable names Reformat the data into a table Get API and variable information: End of explanation """ # References: Pandas (http://pandas.pydata.org/) # Default vars: 'RANK,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC' desired_vars = 'NAME,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC&RANK=1:10' # Top ten names base_url = 'http://api.census.gov/data/2010/surname?get=' query_url = base_url + desired_vars name_stats = requests.get(query_url) surname_data = name_stats.json() # The response data are not very human readable. # Note that this is a list of lists. Data within lists are typically accessed by position number. (There are no keys.) print('Raw response data:\n') print(json.dumps(surname_data, indent=4)) """ Explanation: Get surname data: End of explanation """ # Pass the data to a Pandas dataframe. # In addition to being easier to read, dataframes simplify further analysis. # The simplest dataframe would use the variable names returned with the data. Example: PCTWHITE # It's easier to read the descriptive labels provide via the variables API. # The code block below replaces variable names with labels as it builds the dataframe. column_list = [] for each in surname_data[0]: # For each variable in the response data (stored as surname_data[0]) label = variable_data[each]['label'] # look up that variable's label in the variable dictionary column_list.append(label) # add the variable's label to the list of column headers print(each, ":", label) print('\n', column_list) """ Explanation: Laying out the API response like a table helps illustrate what we're doing here. For easier reading the "surname_data" variable has been replace with "d" in the image below. The variable codes in d[0] will be replaced with human readable descriptions from the variable list (v). Replace variable codes with human readable labels End of explanation """ df = pd.DataFrame([surname_data[1]], columns=column_list) # Create a dataframe using the column names created above. Data # for the dataframe comes from rows 2-10 (positions 1-9) # of surname_data. # The table we just created is empty. Here we add the surname data: for surname in d[2:]: tdf = pd.DataFrame([surname], columns=column_list) df = df.append(tdf) print('\n\nPandas dataframe:') df.sort_values(by=["National Rank"]) """ Explanation: Create a dataframe (table) with variable labels as column names and append data End of explanation """ # Try 'STEUBEN' in order to break the first pie chart example. # Update 2020-02-26: Surnames should be all caps! name = 'WHEELER' name_query = '&NAME=' + name """ Explanation: Exercises Change the table to include the 50 most common surnames. Alternatively, create a table for the 11th - 20th most common surnames. Sort the output table by surname or a demographic. Change the request/table to include only surname, rank, and count. Correct the sort order in the example table. 3. Download and tabulate statistics for a given surname Now let's find out the rank and demographic breakdown of a particular name. To make it easy to change the name we're looking up, assign it to a variable. 
End of explanation """ # Default vars: 'RANK,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC' desired_vars = 'RANK,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC' """ Explanation: Referring to the variables API, decide which variables are of interest and edit accordingly. End of explanation """ # References: Pandas (http://pandas.pydata.org/) base_url = 'http://api.census.gov/data/2010/surname?get=' query_url = base_url + desired_vars + name_query name_stats = requests.get(query_url) d = name_stats.json() # The response data are not very human readable. print('Raw response data:\n') print(d) # Pass the data to a Pandas dataframe. # In addition to being easier to read, dataframes simplify further analysis. # The simplest dataframe would use the variable names returned with the data. Example: PCTWHITE # It's easier to read the descriptive labels provide via the variables API. # The code block below replaces variable names with labels as it builds the dataframe. column_list = [] for each in d[0]: # For each variable in the response data (stored as d[0]) label = v[each]['label'] # look up that variable's label in the variable dictionary column_list.append(label) # add the variable's label to the list of column headers df = pd.DataFrame([d[1]], columns=column_list) # Create a dataframe using the column names created above. Data # for the dataframe comes from d[1] print('\n\nPandas dataframe:') df """ Explanation: Build the query URL and send the request. Pass the response data into a Pandas dataframe for viewing. End of explanation """ # Using index positions is good for doing something quick, but in this case makes code easy to break. # Selecting different surname dataset variables or re-ordering variables will result in errors. print(d) pcts = d[1][2:8] print('\n\n',pcts) # Create the labels and get the data for the pie chart. # Note that we are using the downloaded source data, not the dataframe # used for the table above. labels = ['White', 'Asian', '2+ Races', 'Native American', 'Black', 'Hispanic'] pcts = d[1][2:8] #print(pcts) # Create a pie chart (https://matplotlib.org/2.0.2/examples/pie_and_polar_charts/pie_demo_features.html) plt.pie( # using data percentages pcts, # Use labels defined above labels=labels, # with no shadows shadow=False, # with the start angle at 90% startangle=90, # with the percent listed as a fraction autopct='%1.1f%%', ) # View the plot drop above plt.axis('equal') # View the plot plt.tight_layout() plt.show() """ Explanation: 4. Create a name demographic pie-chart End of explanation """ # First try - just replace string with a zero. # Here, the for loop iterates through items in a list. pcts2 = [] for p in pcts: if p != '(S)': pcts2.append(p) else: pcts2.append(0) # Create a pie chart (https://matplotlib.org/2.0.2/examples/pie_and_polar_charts/pie_demo_features.html) plt.pie( # using data percentages pcts2, # Use labels defined above labels=labels, # with no shadows shadow=False, # with the start angle at 90% startangle=90, # with the percent listed as a fraction autopct='%1.1f%%', ) # View the plot drop above plt.axis('equal') # View the plot plt.tight_layout() plt.show() # Second try - exclude and corresponding label if source data for a given demographic == (S) # This requires the list index of the data and the label. # The for loop in this case iterates across a range of integers equal to the length of the list. 
pcts3 = [] edit_labels = [] for i in range(len(pcts)): print(pcts[i]) if pcts[i] != '(S)': pcts3.append(pcts[i]) edit_labels.append(labels[i]) else: pass # Create a pie chart (https://matplotlib.org/2.0.2/examples/pie_and_polar_charts/pie_demo_features.html) plt.pie( # using data percentages pcts3, # Use labels defined above labels=edit_labels, # with no shadows shadow=False, # with the start angle at 90% startangle=90, # with the percent listed as a fraction autopct='%1.1f%%', ) # View the plot drop above plt.axis('equal') # View the plot plt.tight_layout() plt.show() """ Explanation: 5. Fix the data type error End of explanation """
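# A compact helper that wraps the request, label lookup and '(S)' clean-up steps into one
# function (an optional sketch; it reuses the variable_data dictionary built earlier).
def surname_series(name):
    url = ('http://api.census.gov/data/2010/surname?get='
           'RANK,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC'
           '&NAME=' + name.upper())
    header, values = requests.get(url).json()
    labels = [variable_data[code]['label'] if code in variable_data else code for code in header]
    s = pd.Series(values, index=labels)
    # Coerce to numbers: suppressed cells ('(S)') and any echoed text fields become NaN,
    # which keeps them out of sums and pie charts.
    return pd.to_numeric(s, errors='coerce')

# surname_series('WHEELER')
"""
Explanation: As a wrap-up, the helper below bundles the request, label lookup and '(S)' clean-up into a single reusable function (a sketch that leans on the variable_data dictionary from section 2).
End of explanation
"""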
jrbourbeau/cr-composition
notebooks/legacy/learning-curve.ipynb
mit
import sys sys.path.append('/home/jbourbeau/cr-composition') print('Added to PYTHONPATH') import argparse from collections import defaultdict import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap import seaborn.apionly as sns from sklearn.metrics import accuracy_score from sklearn.model_selection import cross_val_score, learning_curve import composition as comp import composition.analysis.plotting as plotting # Plotting-related sns.set_palette('muted') sns.set_color_codes() color_dict = {'P': 'b', 'He': 'g', 'Fe': 'm', 'O': 'r'} %matplotlib inline """ Explanation: Learning curve Table of contents Data preprocessing Fitting random forest Feature importance End of explanation """ df, cut_dict = comp.load_sim(return_cut_dict=True) selection_mask = np.array([True] * len(df)) standard_cut_keys = ['lap_reco_success', 'lap_zenith', 'num_hits_1_30', 'IT_signal', 'max_qfrac_1_30', 'lap_containment', 'energy_range_lap'] for key in standard_cut_keys: selection_mask *= cut_dict[key] df = df[selection_mask] feature_list, feature_labels = comp.get_training_features() print('training features = {}'.format(feature_list)) X_train, X_test, y_train, y_test, le = comp.get_train_test_sets( df, feature_list, train_he=True, test_he=True) print('number training events = ' + str(y_train.shape[0])) """ Explanation: Data preprocessing Load simulation dataframe and apply specified quality cuts Extract desired features from dataframe Get separate testing and training datasets End of explanation """ pipeline = comp.get_pipeline('RF') train_sizes, train_scores, test_scores =\ learning_curve(estimator=pipeline, X=X_train, y=y_train, train_sizes=np.linspace(0.1, 1.0, 10), cv=10, n_jobs=20, verbose=3) train_mean = np.mean(train_scores, axis=1) train_std = np.std(train_scores, axis=1) test_mean = np.mean(test_scores, axis=1) test_std = np.std(test_scores, axis=1) plt.plot(train_sizes, train_mean, color='b', linestyle='-', marker='o', markersize=5, label='training accuracy') plt.fill_between(train_sizes, train_mean + train_std, train_mean - train_std, alpha=0.15, color='blue') plt.plot(train_sizes, test_mean, color='g', linestyle='--', marker='s', markersize=5, label='validation accuracy') plt.fill_between(train_sizes, test_mean + test_std, test_mean - test_std, alpha=0.15, color='green') plt.grid() plt.xlabel('Number of training samples') plt.ylabel('Accuracy') plt.title('RF Classifier') plt.legend() # plt.ylim([0.8, 1.0]) plt.tight_layout() plt.show() """ Explanation: Produce 10-fold CV learning curve End of explanation """
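# A natural companion diagnostic (optional sketch): a validation curve over a single random
# forest hyperparameter, using the same pipeline and 10-fold CV as above. The step name
# 'classifier__max_depth' is an assumption about how comp.get_pipeline('RF') names its
# estimator -- check pipeline.named_steps and adjust accordingly.
from sklearn.model_selection import validation_curve

param_range = [2, 4, 6, 8, 10, 12]
train_scores_vc, test_scores_vc = validation_curve(
    estimator=comp.get_pipeline('RF'), X=X_train, y=y_train,
    param_name='classifier__max_depth', param_range=param_range,
    cv=10, n_jobs=20)

plt.plot(param_range, train_scores_vc.mean(axis=1), 'o-', color='b', label='training accuracy')
plt.plot(param_range, test_scores_vc.mean(axis=1), 's--', color='g', label='validation accuracy')
plt.xlabel('max_depth')
plt.ylabel('Accuracy')
plt.grid()
plt.legend()
plt.tight_layout()
plt.show()
"""
Explanation: Produce a validation curve over one hyperparameter (an optional companion to the learning curve; the 'classifier__max_depth' step name is an assumption about the pipeline and should be checked against pipeline.named_steps).
End of explanation
"""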
nicococo/tilitools
lectures/optimization_solution.ipynb
mit
from functools import partial from scipy.optimize import check_grad, minimize import numpy as np import cvxopt as cvx import matplotlib.pyplot as plt %matplotlib inline """ Explanation: Exercise: Optimization We will implement various optimization algorithms and examine their performance for various tasks. First-order, smooth optimization using gradient descent Implement basic gradient descent solver Implement gradient descent with armijo backtracking Smooth One-class SVM Implement hinge- and Huber loss functions Implement objective and derivative of the smooth one-class svm Use check_grad to verify the implementation First-order, non-smooth optimization using sub-gradient descent Implement objective and derivative of $\ell_p$-norm regularized one-class svm Utilizing Available QP Solver Packages: CVXOPT Use cvxopt qp solver to solve the primal one-class svm optimization problem Utilizing Available Solver Packages: SciPy's Optimization Suite Apply scipy's minimize function on your implementation of the objective function of the smooth one-class svm End of explanation """ def fun_l2_logistic_regression(w, X, y, param): w = w.reshape(w.size, 1) t1 = 1. + np.exp(-y * w.T.dot(X).T) f = param/2.*w.T.dot(w) + np.sum(np.log(t1)) return f[0,0] def grad_l2_logistic_regression(w, X, y, param): w = w.reshape(w.size, 1) t2 = 1. + np.exp(y * w.T.dot(X).T) grad = param*w + (-y/t2).T.dot(X.T).T return grad.ravel() def findMin0(fun, grad, w, alpha, max_evals=10, eps=1e-2, verbosity=1): f = fun(w) g = grad(w) evals = 1 while evals < max_evals: w = w - alpha * g f = fun(w) g = grad(w) evals += 1 opt_cond = np.linalg.norm(g, ord=np.inf) if verbosity > 0: print('{0} {1:5.5f} {2:5.5f} {3:5.5f}'.format(evals, alpha, f, opt_cond)) if opt_cond < eps: break print('{0} {1:5.5f} {2:5.5f} {3:5.5f}'.format(evals, alpha, f, opt_cond)) return w """ Explanation: 1. First-order, smooth optimization using gradient descent In this first part, we want use various variants of gradient descent for continuous and smooth optimization. A well-known continuous, convex, smooth method is l2-norm regularized logistic regression. Which has the following objective function: $f(w) = \frac{\lambda}{2} \|w\|^2 + \sum_{i=1}^n \log(1+\exp(-y_i\langle w, x_i \rangle))$ In order to apply gradient descent, we will further need the first derivative: $f'(w) = \lambda w + \sum_{i=1}^n \frac{-y_i}{1+\exp(y_i(\langle w, x_i \rangle))}x_i$. End of explanation """ # Generate some test data np.random.seed(42) X = np.random.randn(10, 100) w = np.random.randn(10, 1) reg_y = w.T.dot(X) median = np.median(reg_y) y = -np.ones((reg_y.size, 1), dtype=np.int) y[reg_y.ravel() >= median] = +1 """ Explanation: Ok, let us generate some small data set to try out our optimization schemes. End of explanation """ fun = partial(fun_l2_logistic_regression, X=X, y=y, param=1.) grad = partial(grad_l2_logistic_regression, X=X, y=y, param=1.) print(check_grad(fun, grad, 0.0*np.random.randn(10))) wstar = findMin0(fun, grad, 0.0*np.random.randn(10), 0.001, max_evals=1000, eps=1e-8, verbosity=0) """ Explanation: We have a look at the most basic gradient descent method we can think of and start playing with the step-size $\alpha$. Try some, e.g. - $\alpha=1.0$ - $\alpha=1e-6$ - $\alpha=0.001$ What do you notice? End of explanation """ def findMinBT(fun, grad, w, alpha, gamma, max_evals=10, eps=1e-2, verbosity=1): f = fun(w) g = grad(w) evals = 1 while evals < max_evals: while fun(w - alpha * g) > f - gamma*alpha*g.dot(g): alpha /= 2. 
w = w - alpha * g f = fun(w) g = grad(w) evals += 1 opt_cond = np.linalg.norm(g, ord=np.inf) if verbosity > 0: print('{0} {1:5.5f} {2:5.5f} {3:5.5f}'.format(evals, alpha, f, opt_cond)) if opt_cond < eps: break print('{0} {1:5.5f} {2:5.5f} {3:5.5f}'.format(evals, alpha, f, opt_cond)) return w fun = partial(fun_l2_logistic_regression, X=X, y=y, param=1.) grad = partial(grad_l2_logistic_regression, X=X, y=y, param=1.) print(check_grad(fun, grad, 0.0*np.random.randn(10))) wstar = findMinBT(fun, grad, 0.0*np.random.randn(10), 1., 0.001, max_evals=100, verbosity=0) """ Explanation: We do not want to tweak the $\alpha$'s for every single optimization problem. This is where line search steps in. End of explanation """ def hinge_loss(x): l_x = np.zeros(x.size) inds = np.argwhere(x > 0).ravel() l_x[inds] = x[inds] return l_x def huber_loss(x, delta, epsilon): l_x = np.zeros(x.size) inds = np.argwhere(x >= delta + epsilon).ravel() l_x[inds] = x[inds] - delta inds = np.argwhere(np.logical_and((delta - epsilon <= x), (x <= delta + epsilon))).ravel() l_x[inds] = (epsilon+x[inds]-delta)*(epsilon+x[inds]-delta) / (4.*epsilon) return l_x xs = np.linspace(-1, +1, 1000) plt.plot(xs, hinge_loss(xs), '-r', linewidth=2.0) plt.plot(xs, huber_loss(xs, 0.0, 0.5), '--b', linewidth=2.0) plt.grid() plt.tight_layout() def fun_smooth_ocsvm(var, X, nu, delta, epsilon): rho = var[0] w = var[1:] w = w.reshape(w.size, 1) n = X.shape[1] d = X.shape[0] inner = (rho - w.T.dot(X)).ravel() loss = np.zeros(n) inds = np.argwhere(inner >= delta + epsilon) loss[inds] = inner[inds] - delta inds = np.argwhere(np.logical_and((delta - epsilon <= inner), (inner <= delta + epsilon))).ravel() loss[inds] = (epsilon + inner[inds]- delta)*(epsilon + inner[inds] -delta) / (4.*epsilon) f = 1./2.*w.T.dot(w) - rho + np.sum(loss) / (n*nu) return f[0,0] def grad_smooth_ocsvm(var, X, nu, delta, epsilon): rho = var[0] w = var[1:] w = w.reshape(w.size, 1) n = X.shape[1] d = X.shape[0] inner = (rho - w.T.dot(X)).ravel() grad_loss_rho = np.zeros(n) grad_loss_w = np.zeros((n,d)) inds = np.argwhere(inner >= delta + epsilon).ravel() grad_loss_rho[inds] = 1. grad_loss_w[inds, :] = -X[:, inds].T inds = np.argwhere(np.logical_and((delta - epsilon <= inner), (inner <= delta + epsilon))).ravel() grad_loss_rho[inds] = (-delta + epsilon + inner[inds]) / (2.*epsilon) grad_loss_w[inds, :] = ((-delta + epsilon + inner[inds]) / (2.*epsilon) * (-X[:, inds])).T grad = np.zeros(d+1) grad[0] = -1 + np.sum(grad_loss_rho) / (n*nu) grad[1:] = w.ravel() + np.sum(grad_loss_w, axis=0) / (n*nu) return grad.ravel() # Generate some test data np.random.seed(42) X = np.random.randn(10, 100) fun = partial(fun_smooth_ocsvm, X=X, nu=1.0, delta=0., epsilon=0.5) grad = partial(grad_smooth_ocsvm, X=X, nu=1.0, delta=0., epsilon=0.5) # First, check gradient vs numerical gradient. # This should give very small results. print(check_grad(fun, grad, np.random.randn(10+1))) xstar = findMinBT(fun, grad, 0.0*np.random.randn(10+1), 1., 0.0001, max_evals=1000, eps=1e-4, verbosity=0) wstar = xstar[1:] print(wstar) print(np.mean(X, axis=1)) print(np.linalg.norm(wstar - np.mean(X, axis=1))) """ Explanation: More elaborate optimization methods (e.g. Newton descent) will use second-order information in order to find a better step-length. We will come back to this later. 2. Smooth One-class SVM Since this is an anomaly detection workshop, we want to train some anomaly detectors. 
So, here is our one-class SVM primal problem again: $\min_{w,\rho,\xi} \frac{1}{2}\|w\|^2 - \rho + \frac{1}{n\nu} \sum_{i=1}^n \xi_i$ subject to the following constraints: $\xi_i \geq 0\;, \quad \langle w, x_i \rangle \geq \rho - \xi_i \; , \quad \forall \; i$ This OP is unfortunately neither smooth nor unconstrained. So, lets change this. 1. We will get rid of the constraints by re-formulating them $\xi_i \geq 0\;, \quad \langle w, x_i \rangle \geq \rho - \xi_i \; \Rightarrow \xi_i \geq 0\;, \quad \xi_i \geq \rho - \langle w, x_i \rangle$ Since we minimize the objective, the RHS will hold with equality. Hence we can replace $\xi_i$ in the objective with the RHS. However, we need to take care also of the LHS which states that if LHS is smaller than $0$ the value should stay $0$. This can be achieved by taking a $max(0, RHS)$. Hence, we land at the following unconstrained problem: $\min_{w,\rho} \frac{1}{2}\|w\|^2 - \rho + \frac{1}{n\nu} \sum_{i=1}^n max(0,\rho - \langle w, x_i \rangle)$, which can be written in terms of a general loss function $\ell$ as $\min_{w,\rho} \frac{1}{2}\|w\|^2 - \rho + \frac{1}{n\nu} \sum_{i=1}^n \ell(\rho - \langle w, x_i \rangle)$. This is now unconstrained but still not smooth as the max (which is BTW called the hinge-loss) introduces some non-smoothness into the problem and gradient descent solvers can not readily applied. So, lets make it differentiable by approximation. Approximating the hinge-loss by differentiable Huber-loss $\ell_{\Delta,\epsilon}(x) := \left{\begin{array}{lr} x -\Delta, & \text{for } x \geq \Delta + \epsilon\ \frac{(\epsilon + x - \Delta)^2}{4\epsilon}, & \text{for } \Delta - \epsilon\leq x\leq \Delta + \epsilon\ 0, & \text{else} \end{array}\right}$ ..and the corresponding derivative is (I hope): $\frac{\partial}{\partial x}\ell_{\Delta,\epsilon}(x) := \left{\begin{array}{lr} 1, & \text{for } x \geq \Delta + \epsilon\ \frac{(\epsilon + x - \Delta)}{2\epsilon}, & \text{for } \Delta - \epsilon\leq x\leq \Delta + \epsilon\ 0, & \text{else} \end{array}\right}$ For our purposes, $\Delta=0.0$ and $\epsilon=0.5$ will suffice. (1) Implement the hinge loss $\ell(x) = \max(0,x)$ (2) Implement the Huber loss as defined above End of explanation """ def findMinSG(fun, grad, x0, rate, max_evals=1000, eps=1e-2, step_method=1, verbosity=1): dims = x0.size x = x0 best_x = x best_obj = np.float64(1e20) obj_bak = -1e10 evals = 0 is_converged = False while not is_converged and evals < max_evals: obj = fun(x) # this is subgradient, hence need to store the best solution so far if best_obj >= obj: best_x = x best_obj = obj # stop, if progress is too slow if np.abs((obj-obj_bak)) < eps: is_converged = True continue obj_bak = obj # gradient step for threshold g = grad(x) if step_method == 1: # constant step alpha = rate elif step_method == 2: # non-summable dimishing step size alpha = rate / np.sqrt(np.float(evals+1.)) / np.linalg.norm(g) else: # const. 
step length alpha = rate / np.linalg.norm(g) if verbosity > 0: print('{0} {1:5.5f} {2:5.5f} {3:5.5f}'.format(evals, alpha, obj, np.abs((obj-obj_bak)))) # update x = x - alpha*g evals += 1 print('{0} {1:5.5f} {2:5.5f} {3:5.5f}'.format(evals, alpha, obj, np.abs((obj-obj_bak)))) return best_x def fun_lp_norm(w, p=2.): pnorm = np.sum(np.abs(w)**p)**(1./p) return pnorm def grad_lp_norm(w, p=2.): pnorm1 = np.sum(np.abs(w)**p)**((p-1.)/p) grad_pnorm = (w*np.abs(w)**(p-2.)) / pnorm1 return grad_pnorm.ravel() fun = partial(fun_lp_norm, p=1.2) grad = partial(grad_lp_norm, p=1.2) print(check_grad(fun, grad, np.random.randn(100))) def fun_lp_norm_ocsvm(var, X, p, nu): feat, n = X.shape w = var[1:] rho = var[0] pnorm = np.sum(np.abs(w)**p)**(1./p) slacks = rho - w.T.dot(X) slacks[slacks < 0.] = 0. return (pnorm - rho + np.sum(slacks) / (n*nu)) def grad_lp_norm_ocsvm(var, X, p, nu): feats, n = X.shape w = var[1:] rho = var[0] pnorm1 = np.sum(np.abs(w)**p)**((p-1.)/p) grad_pnorm = (w*np.abs(w)**(p-2.)) / pnorm1 slacks = rho - w.T.dot(X) inds = np.argwhere(slacks >= 0.0) grad = np.zeros(feats+1) grad[0] = -1. + np.float(inds.size) / np.float(n*nu) grad[1:] = grad_pnorm - np.sum(X[:, inds], axis=1).T / (n*nu) return grad.ravel() # Generate some test data np.random.seed(42) X = np.random.randn(10, 1000) fun = partial(fun_lp_norm_ocsvm, X=X, p=2.0, nu=1.0) grad = partial(grad_lp_norm_ocsvm, X=X, p=2.0, nu=1.0) xstar = findMinSG(fun, grad, np.random.randn(10+1), 0.01, max_evals=2000, eps=1e-3, step_method=1, verbosity=0) wstar = xstar[1:] print(wstar) print(np.mean(X, axis=1)) print(np.linalg.norm(wstar - np.mean(X, axis=1))) """ Explanation: 3. First-order, non-smooth optimization using sub-gradient descent Unfortunately, many interesting methods do contain a non-smooth part in their objective. Examples include support vector machines (SVMs), one-class support vector machines (OCSVM), and support vector data descriptions. Here, we gonna implement a version of the primal one-class SVM with a $\ell_p$-norm regularizer. This will allow us to control the sparsity of the found solution vector: $\min_{w,\rho} \|w\|p - \rho + \frac{1}{n\nu} \sum{i=1}^n max(0,\rho - \langle w, x_i \rangle)$, The resulting optimization problem is unconstrained but non-smooth. We will use a subgradient descent solver for this problem. End of explanation """ np.random.seed(42) xs = np.array([1.0, 1.5, 2.0, 4.0, 100.0]) sparsity = np.zeros((xs.size, X.shape[0])) for i in range(xs.size): fun = partial(fun_lp_norm_ocsvm, X=X, p=xs[i], nu=1.0) grad = partial(grad_lp_norm_ocsvm, X=X, p=xs[i], nu=1.0) xstar = findMinSG(fun, grad, np.random.randn(10+1), 0.01, max_evals=2000, eps=1e-3, step_method=1, verbosity=0) wstar = xstar[1:] wstar = np.abs(wstar) wstar /= np.max(wstar) sparsity[i, :] = wstar plt.subplot(1, xs.size, i+1) plt.bar(np.arange(X.shape[0]), sparsity[i, :]) plt.title('p={0:1.2f}'.format(xs[i])) plt.grid() plt.tight_layout() """ Explanation: Let's have a look on how the sparsity is controlled by varying $p$. End of explanation """ def calculate_primal_qp_solution(X, nu): # Solution vector 'x' is a concatenation of w \in R^dims, xi \inR^n, rho \in R # and hence has a dimensionality of dims+n+rho. d = X.shape[0] n = X.shape[1] # 1. xi_i >= 0 -> -xi_i <= 0 G1 = np.concatenate([np.zeros((n, d)), -np.eye(n), np.zeros((n, 1))], axis=1) h1 = np.zeros(n) # 2. <w, x_i> >= rho - xi_i -> -<w, x_i> + rho - xi_i <= 0 G2 = np.concatenate([-X.T, -np.eye(n), np.ones((n, 1))], axis=1) h2 = np.zeros(n) # 3. 
Final inequality constraints G = np.concatenate([G1, G2], axis=0) h = np.concatenate([h1, h2]) # 4. Build squared part of the objective P = np.zeros((d+n+1, d+n+1)) P[np.diag_indices(d)] = 1 # 5. Build linear part of the objective q = np.ones(d+n+1) / (n*nu) q[:d] = 0 q[-1] = -1 # solve qp sol = cvx.solvers.qp(cvx.matrix(P), cvx.matrix(q), cvx.matrix(G), cvx.matrix(h)) return np.array(sol['x'])[:d].ravel(), np.array(sol['x'])[d:d+n].ravel(), np.array(sol['x'])[-1] wstar, xistar, rhostar = calculate_primal_qp_solution(X, 1.) print('Optimized solution: ', wstar) print('Truth: ', np.mean(X, axis=1)) print('Difference: ', np.linalg.norm(wstar - np.mean(X, axis=1))) """ Explanation: 4. Utilizing Available QP Solver Packages: CVXOPT There are very good general purpose solver for certain types of optimization problems available. Most important are cplex, mosek, and cvxopt where the latter is for free and contains interfaces for comercial solvers (cplex and mosek). Back again at the one-class SVM primal problem: $\min_{w,\rho,\xi} \frac{1}{2}\|w\|^2 - \rho + \frac{1}{n\nu} \sum_{i=1}^n \xi_i$ subject to the following constraints: $\xi_i \geq 0\;, \quad \langle w, x_i \rangle \geq \rho - \xi_i \; , \quad \forall \; i$ Use cvxopt's qp method to solve this problem (cvxopt.solvers.qp(P, q, G, h)) Hence, the above problem needs to be re-written as: $\min_x \frac{1}{2}x^T P x + q^T x$ subject to $Gx \leq h$ and $Ax=b$. End of explanation """ # Generate some test data np.random.seed(42) X = np.random.randn(10, 100) fun = partial(fun_smooth_ocsvm, X=X, nu=1.0, delta=0., epsilon=0.5) res = minimize(fun, 0.0*np.random.randn(10+1), method='L-BFGS-B', options={'gtol': 1e-6, 'disp': True}) xstar = res.x wstar = xstar[1:] print(wstar) print(np.mean(X, axis=1)) print(np.linalg.norm(wstar - np.mean(X, axis=1))) """ Explanation: As you might notice, coding the derivatives is not trivial and takes the most of the time. There are some methods build-in scipy that help you with that and also optimize using more elaborate techniques such as second-order L-BFGS (a memory-limited newton descent). Here, let's recycle some of our functions... 5. Utilizing Available Solver Packages: SciPy's Optimization Suite Here is a link to the scipy 'minimize' function which implements lots of solvers for smooth (un-)constrained optimization problems: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html We will recycle our smooth one-class SVM objective function. End of explanation """
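# The same smooth one-class SVM once more, but now handing scipy the analytic gradient we
# already implemented (a small optional sketch): with jac=grad, L-BFGS-B skips the
# finite-difference gradient approximation and typically needs far fewer function evaluations.
fun = partial(fun_smooth_ocsvm, X=X, nu=1.0, delta=0., epsilon=0.5)
grad = partial(grad_smooth_ocsvm, X=X, nu=1.0, delta=0., epsilon=0.5)
res = minimize(fun, 0.0*np.random.randn(10+1), jac=grad, method='L-BFGS-B',
               options={'gtol': 1e-6, 'disp': True})
wstar = res.x[1:]
print(np.linalg.norm(wstar - np.mean(X, axis=1)))
"""
Explanation: Since we already derived the gradient in Section 2, we can also pass it to scipy's minimize via the jac argument (an optional sketch); for nu=1 the solution should again match the empirical mean.
End of explanation
"""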
simkovic/matustools
Statformulas.ipynb
mit
model = """ data { int<lower=0> N; //nr subjects real<lower=0> k; real<lower=0> t; }generated quantities{ real<lower=0> y; y=gamma_rng(k,1/t); } """ smGammaGen = pystan.StanModel(model_code=model) model = """ data { int<lower=0> N; //nr subjects real<lower=0> y[N]; }parameters{ real<lower=0> k; real<lower=0> t; }model{ for (n in 1:N) y[n]~gamma(k,1/t); } """ smGamma = pystan.StanModel(model_code=model) N=1000 fit=smGammaGen.sampling(data={'N':N,'k':10,'t':np.exp(-10)}, chains=1,n_jobs=1,seed=1,thin=1,iter=N,warmup=0,algorithm="Fixed_param") w=fit.extract() y=w['y'] print(y.shape) fit=smGamma.sampling(data={'N':N,'y':y}, chains=4,n_jobs=4,seed=1,thin=2,iter=2000,warmup=1000) print(fit) w=fit.extract() t=np.log(w['t']) g=pg(0,w['k'])+t w=fit.extract() plt.plot(g,t,'.') np.corrcoef(g,t)[0,1] invgammafun='''functions{ vector invdigamma(vector x){ vector[num_elements(x)] y; vector[num_elements(x)] L; for (i in 1:num_elements(x)){ if (x[i]==digamma(1)){ y[i]=1; }else{ if (x[i]>=-2.22){ y[i]=(exp(x[i])+0.5); }else{ y[i]=1/(x[i]-digamma(1)); }}} L=digamma(y)-x; while (min(L)>10^-12){ y=y-L ./trigamma(y); L=digamma(y)-x; } return y;} real invdigammaR(real x){ real y; real L; if (x==digamma(1)){ y=1; }else{ if (x>=-2.22){ y=(exp(x)+0.5); }else{ y=1/(x-digamma(1)); }} L=digamma(y)-x; while (abs(L)>1e-5){ y=y-L ./trigamma(y); L=digamma(y)-x; } return y; }} ''' model = """ data { int<lower=0> N; //nr subjects real<lower=0> y[N]; }parameters{ real<lower=-100,upper=100> g; real<lower=-100,upper=100> t; }transformed parameters{ real k; k=invdigammaR(g-t); }model{ for (n in 1:N) y[n]~gamma(k,exp(-t)); } """ smGammaGeom = pystan.StanModel(model_code=invgammafun+model) N=10 fit=smGammaGen.sampling(data={'N':N,'k':1,'t':np.exp(0)}, chains=1,n_jobs=1,seed=1,thin=1,iter=N,warmup=0,algorithm="Fixed_param") w=fit.extract() y=w['y'] fit=smGammaGeom.sampling(data={'N':N,'y':y}, chains=4,n_jobs=4,seed=2,thin=1,iter=500,warmup=200) #control={'adapt_delta':0.99}) print(fit) w=fit.extract() #plt.plot(pg(0,w['k']),w['g']-w['t'],'.') #np.max(np.abs(pg(0,w['k'])-w['g']+w['t'])) plt.plot(w['g'],w['t'],'.') """ Explanation: Notation $Y$ generic random variable $U$ latent random variable $V$ residual random variable $X$ predictor Parameters $\eta$ and $\nu$ generic parameters $\mu=E[Y]$ mean parameter $\gamma=E[\log Y]$ geometric mean parameter $\sigma^2=E[(Y-\mu)^2]$ standard deviation parameter $Y=\alpha+U$ shift parameter $Y= U/\theta$ scale parameter $Y= U \lambda$ inverse-scale (rate) parameter $Y=e^{-\tau} U$ log-scale parameter $Y=U^\kappa$ exponent parameter $Y=f(U,\rho)$ shape parameter $Y=\alpha + \beta X$ linear predictor $\psi$ digamma function $\pi$ pi number $\phi$ measurement scale $\delta$ dirac function $\zeta,\epsilon,\varepsilon,\vartheta,\iota,\xi,\varpi,\varrho,\varsigma,\varphi,\chi,\omega$ Gamma distribution Paremeters $\eta$ and $\nu$ are orthogonal if $$\operatorname{E}_Y \left[ \frac{\partial \log f(Y;\eta,\nu)}{\partial\eta \ \partial\nu} \right]=0$$ The probability density function of Gamma distribution parametrized by shape parameter $\rho$ and scale parameter $\theta$ is $$f(Y=y;\rho,\theta)=\frac{1}{\Gamma(\rho) \theta^\rho} y^{\rho - 1} e^{-\frac{y}{\theta}}$$ with Fisher information $$I_{\rho \theta} = \begin{pmatrix} \psi'(\rho) & \theta^{-1} \ \theta^{-1} & \rho \theta^{-2} \end{pmatrix} $$ Consider parametrization in terms of logarithm of geometric mean $\gamma=E[\log Y]=\psi(\rho)+\log \theta$ and log-scale $\tau=\log(\theta)$, where $\psi$ is the digamma function. 
Then the logarithm of density function parametrized by $\gamma$ and $\tau$ is $$\log f(Y=y;\gamma,\tau)=-\log{\Gamma(\omega(\gamma-\tau)) -\tau \omega(\gamma-\tau) + (\omega(\gamma-\tau)-1)\log y- y e^{-\tau}}$$ where we use $\omega$ to label the inverse digamma function. By $\omega'(y)$ $\omega''(y)$ and we denote the first and second derivative of inverse digamma function with respect to $y$. Next, we compute the first derivative of the log-density with respect to $\gamma$: $$\begin{align} \frac{\partial}{\partial\gamma}\log f(Y;\gamma,\tau) &= -\psi(\omega(\gamma-\tau)) \omega'(\gamma-\tau)-\tau \omega'(\gamma-\tau) + \omega'(\gamma-\tau) \log y \ &= -(\gamma-\tau) \omega'(\gamma-\tau)-\tau \omega'(\gamma-\tau) + \omega'(\gamma-\tau) \log y \ &= (\log y - \gamma)\omega'(\gamma -\tau)\end{align}$$ Next we obtain derivative with respect to $\gamma$ and $\tau$: $$\begin{align} \frac{\partial}{\partial\gamma \partial\tau}\log f(Y;\gamma,\tau) &= \frac{\partial}{\partial\tau}\left[(\log y - \gamma)\omega'(\gamma -\tau)\right]\ &= (\gamma-\log y)\omega''(\gamma-\tau) \end{align}$$ Finally, compute the expectation $$\begin{align} \operatorname{E}_Y \left[ \frac{\partial \log f(Y;\tau,\gamma)}{\partial\tau\ \partial\gamma} \right]&= \operatorname{E}\left[\omega''(\gamma-\tau)(\gamma-\log y)\right] \ &=\omega''(\gamma-\tau)(\gamma-\operatorname{E}[\log y])\ &=\omega''(\gamma-\tau)(\gamma-\gamma)\ &=0 \end{align}$$ Note that $\operatorname{E}[\log y]$ is the logarithm of geometric mean and hence $\operatorname{E}[\log y]=\gamma$ $$I_{\gamma \tau} = \begin{pmatrix} \omega'(\gamma-\tau) & 0\ 0 & \omega(\gamma-\tau)-\omega'(\gamma-\tau)\end{pmatrix} $$ $$I_{\rho \tau} = \begin{pmatrix} \psi'(\rho) & 1 \ 1 & \rho \end{pmatrix} $$ $$I_{\rho, \tau+\log \rho} = \begin{pmatrix} \psi'(\rho)-1/\rho & 0 \ 0 & \rho \end{pmatrix} $$ $$I_{\psi(\rho), \tau} = \begin{pmatrix} \psi'(\rho)^{-1} & \psi'(\rho)^{-1} \ \psi'(\rho)^{-1} & \rho \end{pmatrix} $$ $$I_{\psi(\rho)+\tau, \tau} = \begin{pmatrix} \psi'(\rho)^{-1} & 0 \ 0& \rho-\psi'(\rho)^{-1} \end{pmatrix} $$ End of explanation """ model = """ data { int<lower=0> N; //nr subjects int<lower=0> M; real gm; real gs; real t; }generated quantities{ real g[N]; real<lower=0> y[N,M]; for (n in 1:N){ g[n]=normal_rng(gm,gs); for (m in 1:M){ y[n,m]=gamma_rng(invdigammaR(g[n]-t),exp(t)); }}} """ smGammaGen = pystan.StanModel(model_code=invgammafun+model) N=10;M=20 fit=smGammaGen.sampling(data={'N':N,'M':M,'gm':5,'gs':2,'t':1}, chains=4,n_jobs=4,seed=1,thin=1,iter=30,warmup=0,algorithm="Fixed_param") w=fit.extract() y=w['y'][0,:,:] print(y.shape) model = """ data { int<lower=0> N; //nr subjects int<lower=0> M; real<lower=0> y[N,M]; }parameters{ real g[N]; real gm; real<lower=0> gs; real t; }model{ for (n in 1:N){ g[n]~normal(gm,gs); for (m in 1:M){ y[n,m]~gamma(invdigammaR(g[n]-t),exp(t)); }}} """ smGamma = pystan.StanModel(model_code=invgammafun+model) fit=smGamma.sampling(data={'N':N,'M':M,'y':y}, chains=4,n_jobs=4,seed=2,thin=1,iter=1000,warmup=500) print(fit) %pylab inline plt.plot(w['gm']) """ Explanation: Hierarchical parameter recovery End of explanation """ 1/H """ Explanation: Weibull distribution $$f(y)=\frac{\kappa}{y}\left(\frac{y}{\theta}\right)^{\kappa}e^{-\left(\frac{y}{\theta} \right)^\kappa}$$ $$I_{\theta \kappa} = \begin{pmatrix} \frac{\kappa^2}{\theta^2} & -\frac{\psi(2)}{\theta}\ . 
& \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$ $E[\log Y]= \log \theta + \psi(1)/\kappa$ $E[Y^s]=\theta^s \Gamma(1+s/\kappa)$ $E[Y^\kappa]=\theta^\kappa $ $\mathrm{Var}[\log Y]=\psi'(1)/\kappa^2$ $E[(Y/\theta)^\kappa]=1$ $\mathrm{Var}[(Y/\theta)^\kappa]=1$ $E[\log (Y/\theta^\kappa)]= \psi(1)$ $E[\log^2 (Y/\theta^\kappa)]= \psi'(1)+\psi(1)^2$ $E[(Y/\theta)^\kappa \log (Y/\theta)^\kappa ]= \psi(2)= \psi(1)+1$ $E[(Y/\theta)^\kappa \log^2(Y/\theta)^\kappa ]= \psi'(2)+\psi(2)^2$ $$I_{\tau \kappa} = \begin{pmatrix} \kappa^2 & - \psi(2)\ . & \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$ $\tau=\log \theta$ $r_{\tau \kappa}=\psi(2)/\sqrt{\psi'(1)+\psi(2)^2}=0.31$ This is orthogonal parametrization $$\kappa= \frac{1}{\xi-H \tau}$$ $$\xi=\frac{1}{\kappa}+H \tau $$ $H=\frac{\psi(2)}{\psi'(1)+\psi(2)^2}=0.232$ $$I_{\tau \xi} = \frac{H}{(\xi-\tau)^{2}} \begin{pmatrix} \left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \ . & \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix} $$ $$I_{\tau \kappa} = \begin{pmatrix} \kappa^2 & - \psi(2)\ . & \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$ $$I_{\tau,1/\kappa} =\kappa^{2} \begin{pmatrix} 1 & \psi(2)\ . & \psi'(1)+\psi(2)^2\end{pmatrix} $$ $$I_{\tau,1/\kappa-H\tau} =\kappa^{2} \begin{pmatrix} 1-\psi(2) H& 0\ . & \psi'(1)+\psi(2)^2\end{pmatrix} = \kappa^{2} \begin{pmatrix} 0.902 & 0\ . & 1.824\end{pmatrix}$$ $$I_{\tau,H\kappa} =\begin{pmatrix} \kappa^{2} & \psi(2) H\ . & \kappa^{-2} \psi(2) H\end{pmatrix} $$ $$I_{\tau,1/(H\kappa)} =\kappa^{2} H^2 \begin{pmatrix} 1 & \psi(2) H\ . & \psi(2) H\end{pmatrix} $$ $$I_{\tau,1/(H\kappa)+\tau} =\kappa^{2} H^2 \begin{pmatrix} 1-\psi(2) H& 0\ . & \psi(2) H\end{pmatrix} \= \kappa^{2} H^2 \begin{pmatrix} \left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \ . & \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix}= \kappa^{2} H^2 \begin{pmatrix} 0.902 & 0\ . & 0.098\end{pmatrix}$$ $$I_{\tau,\epsilon} =(\epsilon-\tau)^2\begin{pmatrix} \left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \ . 
& \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix}$$ Orthogonal from Cox and Reid (1987) $\epsilon= \exp(\log \theta + \psi(2)/\kappa)=\exp(1/\kappa)\exp(E[\log Y])=\exp E[(Y/\theta)^\kappa \log Y]$ $\theta= \epsilon \exp(-\psi(2)/\kappa)$ End of explanation """ pg(1,1) """ Explanation: $$J_{a/H,b}=\begin{pmatrix} H &0 \0 & 1 \end{pmatrix}$$ $$J^T \begin{pmatrix} H^{-2} A & H^{-1} B \ H^{-1} B & C \end{pmatrix} J= \begin{pmatrix} A &B \B & C\end{pmatrix}$$ $H=B/A$ $$J^T \begin{pmatrix} A & B \ B & C \end{pmatrix} J= \begin{pmatrix} B^2/A &B^2/A \B^2/A & C\end{pmatrix}$$ $$J_{a+b,b}=\begin{pmatrix} 1 &-1 \0 & 1 \end{pmatrix}$$ $$J^T\begin{pmatrix} A &A \A & B \end{pmatrix} J= \begin{pmatrix} A &0 \0 & B-A\end{pmatrix}$$ $$J_{\log a,b}=\begin{pmatrix} e^a &0 \0 & 1 \end{pmatrix}$$ $$J_{\log a,b}^T \begin{pmatrix} e^{-2a} A & e^{-a} B \e^{-a} B & C \end{pmatrix} J_{\log a,b}= \begin{pmatrix} A &B \B & C\end{pmatrix}$$ $$J_{e^a,b}=\begin{pmatrix} 1/a &0 \0 & 1 \end{pmatrix}$$ $$J^T \begin{pmatrix} a^2 A & a B \ a B & C \end{pmatrix} J= \begin{pmatrix} A &B \B & C\end{pmatrix}$$ $$J_{a^{-1},b}=\begin{pmatrix} -a^{2} &0 \0 & 1 \end{pmatrix}$$ $$J^T \begin{pmatrix} a^{-4} A & a^{-2} B \ a^{-2} B & C \end{pmatrix} J= \begin{pmatrix} A &-B \-B & C\end{pmatrix}$$ End of explanation """ model = """ data { int<lower=0> N; //nr subjects vector<lower=0>[N] y; }parameters { real<lower=0> k; real<lower=0> t; }model { y~weibull(k,t); } """ smWeibull = pystan.StanModel(model_code=model) model = """ data { int<lower=0> N; //nr subjects vector<lower=0>[N] y; }parameters { real t; real e; }model { y~weibull(4.313501020391736/(e-t),exp(t)); } """ smWeibullE = pystan.StanModel(model_code=model) model = """ data { int<lower=0> N; int<lower=0> M; vector<lower=0>[M] y[N]; }parameters { real lnk[N]; real lnt[N]; real km;real tm; real<lower=0> ks; real<lower=0> ts; }model { lnk~normal(km,ks); lnt~normal(tm,ts); for (n in 1:N) y[n]~weibull(exp(lnk[n]),exp(lnt[n])); } """ #smWeibullH = pystan.StanModel(model_code=model) model = """ data { int<lower=0> N; int<lower=0> M; vector<lower=0>[M] y[N]; }parameters { real<lower=0> lne[N]; real lnt[N]; real em;real tm; real<lower=0> es; real<lower=0> ts; }model { lne~normal(em,es); lnt~normal(tm,ts); for (n in 1:N) y[n]~weibull(4.313501020391736/(lne[n]),exp(lnt[n])); } """ smWeibullEH = pystan.StanModel(model_code=model) print(polygamma(0,1)) print(polygamma(0,2)) print(polygamma(1,1)) print(polygamma(1,2)) print(polygamma(1,1)**2) print(polygamma(1,2)**2) ts=[-10,-1,0,1,10] k=1 for t in ts: plt.subplot(2,3,k);k+=1 e=np.linspace(-10,10,101) plt.plot(e,4.313501020391736/(e-t)) fit.get_adaptation_info() from scipy import stats def prs(x): ts= x.rsplit('\n#') out=[ts[1].rsplit('=')[1]] out.extend(ts[3][:-2].rsplit(',')) return out def computeConvergence(ms,data,reps=50): from time import time D=[[],[]] R=[[],[]] for sd in range(reps): print(sd) for m in range(len(ms)): sm=ms[m] t0=time() try: fit=sm.sampling(data=data,chains=6,n_jobs=6, seed=1,thin=1,iter=1000,warmup=500) D[m].append(time()-t0) nfo=list(map(prs,fit.get_adaptation_info()) ) R[m].append(nfo) except: D[m].append(np.nan) R[m].append(np.zeros((6,3))*np.nan) D=np.array(D) #R=np.float32(R) print(np.mean(D,1)) return D, R t=-1;e=1 k=4.313501020391736/(e-t) print('k= ',k) temp={'y':stats.weibull_min.rvs(k,0,np.exp(t),size=100),'N':100} #D,R=computeConvergence([smWeibull, smWeibullE]) """ Explanation: old stuff 
$$\mathrm{Cov}(\gamma,\phi)=J_{12}J_{11}\frac{\kappa^2}{\theta^2}+J_{21}J_{22}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\frac{\psi(2)}{\theta}(J_{12}J_{11}+J_{21}J_{22}+J_{21}J_{12}+J_{11}J_{22})$$ $\theta=e^\phi$ $J_{11}=\frac{\partial \theta}{\partial \phi}=e^\phi=\theta$ $J_{12}=\frac{\partial \theta}{\partial \gamma}=0$ $$\mathrm{Cov}(\gamma,\phi)=J_{21}J_{22}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\frac{\psi(2)}{J_{11}}(J_{21}J_{22}+J_{11}J_{22})$$ $$\mathrm{Cov}(\gamma,\phi)=J_{22}\left(J_{21}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\psi(2)(J_{21}/J_{11}+1)\right)\ = J_{21}J_{22}\frac{\psi(2)}{\kappa^2}\left( \frac{\psi'(2)+\psi(2)^2+1}{\psi(2)}-\kappa^2\left(\frac{\partial \phi}{\partial \kappa}+e^{-\phi}\right)\right)\ = J_{21}J_{22}\psi(2)\left( \frac{\psi'(2)+\psi(2)^2+1}{\kappa^2\psi(2)}-e^{-\phi}-\frac{\partial \phi}{\partial \kappa}\right) $$ $\gamma=-\phi- \frac{\psi'(2)+\psi(2)^2+1}{\kappa \psi(2)}$ $\frac{\partial \gamma}{\partial \kappa}= -\frac{\psi'(2)+\psi(2)^2+1}{\kappa^2 \psi(2)}$ $\kappa=-\frac{\psi'(2)+\psi(2)^2+1}{(\gamma+\phi) \psi(2)}$ $\frac{\partial \kappa}{\partial \phi}= \frac{\psi'(2)+\psi(2)^2+1}{(\gamma+\phi)^2 \psi(2)}$ $$\mathrm{Cov}(\gamma,\phi)= J_{21}J_{22}\psi(2)\left( \frac{(\gamma+\phi)^2 \psi(2)}{\psi'(2)+\psi(2)^2+1}-e^{-\phi}- \frac{(\gamma+\phi)^2 \psi(2)}{\psi'(2)+\psi(2)^2+1} \right) $$ $c \mathrm{Ei}(\frac{c}{\kappa})-e^\frac{c}{\kappa}(e^{-\phi}+\kappa)=k$ End of explanation """ N=20 M=50 e=np.random.randn(N)*1+2 t=np.random.randn(N)*1+1 #t=-1;e=1 k=4.313501020391736/(np.abs(e-t)) #print('k= ',k) data={'y':stats.weibull_min.rvs(k,0,np.exp(t),size=(M,N)).T,'N':N,'M':M} ms=[smWeibullH, smWeibullEH] D,R=computeConvergence(ms,data,reps=50) D """ Explanation: Hierarchical weibull End of explanation """ import pystan ggcode='''functions{ //' Naive implementation of the generalized Gamma density. //' @param x Value to evaluate density at. //' @param alpha Shape parameter. //' @param beta Scale parameter. //' @param nu Tail parameter. 
real gengamma_pdf(real x, real k, real b, real q) { real d; d = q/(b*tgamma(k))*(x/b)^(k*q-1) * exp(-(x/b)^q); return d; } real gengamma_lpdf(real x, real k, real b, real q) { real d; d = log(q) - log(b) - lgamma(k) + (k*q-1)*(log(x) - log(b)) - (x/b)^q; return d; } real generalized_gamma_cdf(real x, real k, real b, real q) { real d; d = gamma_p(k, (x/b)^q); return d; } real generalized_gamma_lcdf(real x, real k, real b, real q) { real d; d = log(generalized_gamma_cdf(x, k, b, q)); return d; }}''' model = """ data { int<lower=0> N; //nr subjects vector<lower=0>[N] yLT; }parameters { real k; //real b; real q; }model { for (n in 1:N) yLT[n]~gengamma(exp(k),exp(0),exp(q)); } """ smGengamma = pystan.StanModel(model_code=ggcode+model) from scipy import stats x=np.linspace(0,10,101)[1:] #k,q,0,b k=2;b=1;q=3; plt.plot(x,stats.gengamma.pdf(x,k,q,0,b)) temp={'yLT':stats.gengamma.rvs(k,q,0,b,size=100),'N':100} fit=smGengamma.sampling(data=temp,chains=6,n_jobs=6, seed=1,thin=1,iter=10000,warmup=500) print(fit) w=fit.extract() p=np.exp(w['k']) #b=np.exp(w['b']) H=(pg(1,p+1)+np.square(pg(0,p+1))+1/p)/pg(0,p+1) e=H/np.exp(w['q'])+1 plt.figure() plt.plot(p,e,'.') np.corrcoef(p,e)[0,1] from scipy.special import gamma, digamma,polygamma plt.figure(figsize=(12,4)) g=np.log(b)+digamma(k)/q c=(polygamma(1,k+1)+polygamma(0,k+1)**2+1/k)*q/polygamma(0,k+1)+np.log(b) q1=g q2=np.log(b) q3=c #*np.exp(-a)+q2 plt.subplot(1,3,1) plt.plot(q1,q2,'.') plt.title(np.corrcoef(q1,q2)[0,1]) plt.subplot(1,3,2) plt.plot(q1,q3,'.') plt.title(np.corrcoef(q1,q3)[0,1]) plt.ylim([-1000,1000]) plt.subplot(1,3,3) plt.plot(q2,q3,'.') plt.title(np.corrcoef(q2,q3)[0,1]); plt.ylim([-50,50]) """ Explanation: Information matrix generalized gamma $$f(y)=\frac{\kappa}{y \Gamma(\rho)}\left(\frac{y}{\theta}\right)^{\kappa \rho}e^{-\left(\frac{y}{\theta} \right)^\kappa}$$ $$\log f(y)=\log \kappa- \log y -\log \Gamma(\rho) +\kappa \rho \log y - \kappa \rho \log \theta -\left(\frac{y}{\theta} \right)^\kappa$$ $$I_{\rho \theta \kappa} = \begin{pmatrix} \psi'(\rho) & \frac{\kappa}{\theta} &- \frac{\psi(\rho)}{\kappa} \ . & \frac{\rho \kappa^2}{\theta^2} & -\frac{\rho}{\theta}\psi(\rho+1)\ . & . & \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$ $\rho (\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{k})= \rho \psi'(\rho)+\rho \psi(\rho)^2 + 2\psi(\rho) +1$ $E[\log Y]= \log \theta + \psi(\rho)/\kappa$ $E[Y^s]=\theta^s \Gamma(\rho+s/\kappa)/\Gamma(\rho)$ $E[Y^\kappa]=\theta^\kappa \rho$ $E[Y^\kappa \log Y ]=\theta^\kappa \rho (\log \theta + \psi(\rho+1)/\kappa)= \theta^\kappa (\rho \log \theta + \rho \psi(\rho)/\kappa+1/\kappa)$ $E[\log^2 Y]= \log^2 \theta + 2 \log \theta \psi(\rho)/\kappa+(\psi'(\rho)+\psi(\rho)^2)/\kappa^2$ $E[Y^\kappa \log^2 Y]= \theta^\kappa \rho (\log^2 \theta + 2 \log \theta \psi(\rho+1)/\kappa+(\psi'(\rho+1)+\psi(\rho+1)^2)/\kappa^2)$ $E[Y^{2\kappa} \log^2 Y]= \theta^{2\kappa} (\rho+1) (\log^2 \theta + 2 \log \theta \psi(\rho+2)/\kappa+(\psi'(\rho+2)+\psi(\rho+2)^2)/\kappa^2)$ $\mathrm{Var}[\log Y]=\psi'(\rho)/\kappa^2$ $E[(Y/\theta)^\kappa]=\rho$ $\mathrm{Var}[(Y/\theta)^\kappa]=\rho$ $E[\log (Y/\theta)^\kappa]= \psi(\rho)$ $E[\log^2 (Y/\theta)^\kappa]= \psi'(\rho)+\psi(\rho)^2$ $E[(Y/\theta)^\kappa \log (Y/\theta)^\kappa ]= \rho \psi(\rho+1)= \rho \psi(\rho)+1$ $E[(Y/\theta)^\kappa \log^2(Y/\theta)^\kappa ]= \rho (\psi'(\rho+1)+\psi(\rho+1)^2)$ $$I_{\rho \tau \kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \frac{\psi(\rho)}{\kappa} \ . & \rho \kappa^2 & -\rho\psi(\rho+1)\ . & . 
& \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$ $$I_{\rho, \tau, \log \kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \psi(\rho) \ . & \rho \kappa^2 & -\kappa\rho\psi(\rho+1)\ . & . & \rho \left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$ $$I_{\rho \tau,1/\kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \kappa\psi(\rho) \ . & \rho \kappa^2 & -\kappa^2 \rho A\ . & . & \kappa^2 \rho B\end{pmatrix} $$ $$I_{\rho \tau,B/(A \kappa)} = \begin{pmatrix} \psi'(\rho) & \kappa &- \kappa\psi(\rho)A/B \ . & \rho \kappa^2 & -\kappa^2 \rho A^2/B\ . & . & \kappa^2 \rho A^2/B\end{pmatrix} $$ $$I_{\rho \tau,B/(A \kappa)-\tau} = \begin{pmatrix} \psi'(\rho) & \kappa-\kappa\psi(\rho)A/B &- \kappa\psi(\rho)A/B \ . & \rho \kappa^2 & 0\ . & . & \kappa^2 \rho A^2/B-\rho \kappa^2\end{pmatrix} $$ A=\psi(\rho+1) B=\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right) $\gamma=\tau+\psi(\rho)/\kappa$ $\rho=\omega(\kappa(\gamma-\tau))=\omega$ $$J=\begin{pmatrix}\kappa \omega' &-\kappa \omega' & (\gamma-\tau)\omega'\ 0&1 &0 \ 0& 0& 1 \end{pmatrix}$$ $$I_{\gamma \tau \kappa} = J^T\begin{pmatrix} \frac{1}{\omega'} & \kappa &-(\gamma-\tau) \ . & \omega \kappa^2 & -(\gamma-\tau)\omega-1\ . & . & \frac{R}{\kappa^2}\end{pmatrix} J $$ $$I_{\gamma \tau \kappa} = \begin{pmatrix} \kappa^2\omega' &0&0 \ . & \kappa^2(\omega -\omega')& (\gamma-\tau)(\kappa\omega'-\omega)-1\ . & . & \frac{R}{\kappa^2}-(\gamma-\tau)^2\omega'\end{pmatrix} $$ with $R=\frac{\omega}{\omega'} +\omega \kappa^2 (\gamma-\tau)^2 + 2\kappa (\gamma-\tau)+1$ Simplyfied Gamma $$f(y;\rho)=\frac{ y^{\rho-1} e^{-y}}{\Gamma(\rho)}$$ $$\log f(y;\rho)=\rho \log y -\log y -y-\log \Gamma(\rho)$$ $\Gamma(z+1) = \int_0^\infty x^{z} e^{-x}\, dx$ $\Gamma(z+1)/\Gamma(z)=z$ $\frac{d^n}{dx^n}\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} (\ln t)^n \, dt$ $\psi(x)=\log(\Gamma(x))'=\Gamma'(x)/\Gamma(x)$ $E[Y]= \int_0^\infty y^{\rho} e^{-y}\, dy / \Gamma(\rho)= \Gamma(\rho+1)/ \Gamma(\rho)=\rho$ $E[Y^s]=\Gamma(\rho+s)/ \Gamma(\rho)$ $\mathrm{Var}[Y]=E[Y^2]-E[Y]^2=\rho(\rho+1)-\rho^2=\rho$ $E[\log Y]=\Gamma'(\rho)/\Gamma(\rho)=\psi(\rho)$ $E[Y \log Y]=\Gamma'(\rho+1)/\Gamma(\rho)= \rho \psi(\rho+1)= \rho \psi(\rho)+1$ $E[1/Y]= \Gamma(\rho-1)/ \Gamma(\rho)=1/(\rho-1)$ $\mathrm{Var}[1/Y]=E[Y^2]-E[Y]^2=\frac{1}{(\rho-2)(\rho-1)^2}$ $E[\log^2 Y]=\Gamma''(\rho)/\Gamma(\rho)=\psi'(\rho)+\psi(\rho)^2$ use $\psi'(x)=(\Gamma'(x)/\Gamma(x))'=\Gamma''(x)/\Gamma(x)-(\Gamma'(x)/\Gamma(x))^2$ $E[Y \log^2 Y]=\Gamma''(\rho+1)/\Gamma(\rho)=\rho(\psi'(\rho+1)+\psi(\rho+1)^2)=\rho\psi'(\rho)+\rho\psi(\rho)^2+2\psi(\rho)$ Gengamma with $\theta=1$ $$f(y)=\frac{\kappa}{y \Gamma(\rho)}y^{\kappa \rho} e^{-y^\kappa}$$ $$\log f(y)=\log \kappa- \log y -\log \Gamma(\rho) +\kappa \rho \log y -y^\kappa$$ $$I_{\rho \kappa} = \begin{pmatrix} \psi'(\rho) & - \frac{\psi(\rho)}{\kappa} \ . & \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$ $$I_{\rho \log\kappa} = \begin{pmatrix} \psi'(\rho) & - \psi(\rho) \ . & \rho\psi'(\rho)+\rho\psi(\rho)^2+2\psi(\rho)+1\end{pmatrix} $$ $\gamma=\psi(\rho)/\kappa$ $\rho=\omega(\gamma \kappa)$ $1=d \psi(\omega(\gamma))/d \gamma$ $$I_{\gamma \kappa} = \begin{pmatrix} \kappa^2 \omega(\gamma\kappa) & 0 \ . & \frac{\omega(\gamma\kappa)}{\kappa\omega'(\gamma\kappa)}+ \omega(\gamma\kappa)\gamma^2\kappa+2\gamma+\frac{1}{\kappa^2}\end{pmatrix} $$ $$I_{\gamma \kappa} = \begin{pmatrix} \kappa^2 \omega(\gamma\kappa) & 0 \ . 
& \kappa^{-1}E[Y\log^2 Y]+\frac{1}{\kappa^2}\end{pmatrix} $$ TODO check the last result by transformation of $I_{\rho \kappa}$ orthogonal with End of explanation """ import pystan model = """ data { int<lower=0> N; //nr subjects vector<lower=0>[N] yLT; }parameters { real a; real b; }model { for (n in 1:N) yLT[n]~beta(exp(a),exp(b)); } """ smBeta = pystan.StanModel(model_code=model) from scipy import stats x=np.linspace(0,1,101)[1:] plt.plot(x,stats.beta.pdf(x,4,15,0,1)) temp={'yLT':stats.beta.rvs(4,15,0,1,size=100),'N':100} fit=smBeta.sampling(data=temp,chains=6,n_jobs=6, seed=1,thin=4,iter=55000,warmup=5000) print(fit) w=fit.extract() a=np.exp(w['a']) b=np.exp(w['b']) from scipy.special import gamma, digamma,polygamma,beta plt.figure(figsize=(12,12)) gA=digamma(a)-digamma(a+b) gB=digamma(b)-digamma(a+b) tA=polygamma(1,a)-polygamma(1,a+b) var=a*b/np.square(a+b)/(a+b+1) ex=a/(a+b) q1=ex q2=var #q2=g #k=np.exp(a) #l=np.exp(b) #q1=np.log(np.square(k)*digamma(2)+digamma(1))/(2*digamma(2))-g/(polygamma(1,1)+1) plt.plot(q1,q2,'.') #plt.ylim([0,1]) #plt.xlim([0,1]) np.corrcoef(q1,q2)[0,1] """ Explanation: Beta distribution Parameters $\alpha$ and $\beta$ are orthogonal if $$\operatorname{E}_X \left[ \frac{\partial \log f(X;\alpha,\beta)}{\partial\alpha \ \partial\beta} \right]=0$$ The probability density function of Beta distribution parametrized by shape parameters $\alpha$ and $\beta$ is $$f(X=x;\alpha,\beta)=\frac{ x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}$$ Consider parametrization in terms of logarithm of geometric mean $E[\log X]=\gamma=\psi(\alpha)-\psi(\alpha+\beta)$ and the logarithm of geometric mean of $1-X$: $E[\log (1-X)]=\phi=\psi(\beta)-\psi(\alpha+\beta)$ Then the fisher information matrix of the distribution parametrized by shape parameters is $$I_{\alpha,\beta}=\begin{pmatrix}\psi'(\alpha)-\psi'(\alpha+\beta) & -\psi'(\alpha+\beta)\ -\psi'(\alpha+\beta) & \psi'(\beta)-\psi'(\alpha+\beta) \end{pmatrix}$$ Fisher information matrix when parametrized by $\gamma$ and $\phi$ is $$I_{\gamma,\phi}=J^\mathrm{T} I_{\alpha,\beta} J$$ Where $J$ is the Jacobian matrix defined as $$J=\begin{pmatrix}\frac{\partial \alpha}{\partial \gamma} & \frac{\partial \alpha}{\partial \phi}\ \frac{\partial \beta}{\partial \gamma} & \frac{\partial \beta}{\partial \phi} \end{pmatrix}$$ Note that $I_{\alpha,\beta}$ can be written as: $$I_{\alpha,\beta}=\begin{pmatrix}\frac{\partial \gamma}{\partial \alpha} & \frac{\partial \phi}{\partial \alpha} \ \frac{\partial \gamma}{\partial \beta} & \frac{\partial \phi}{\partial \beta} \end{pmatrix}$$ $$\mathrm{Cov}(\gamma,\phi)=J_{12}J_{11}\psi'(\alpha)+J_{21}J_{22}\psi'(\beta)-\psi'(\alpha+\beta)(J_{12}J_{11}+J_{21}J_{22}+J_{21}J_{12}+J_{11}J_{22})$$ $\gamma=\psi(\alpha)-\psi(\alpha+\beta)$ $\phi=\psi(\beta)-\psi(\alpha+\beta)$ $\gamma-\phi=\psi(\alpha)-\psi(\beta)$ $\alpha=\omega(\phi-\psi(\beta))-\beta$ $\beta=\omega(\gamma-\psi(\alpha))-\alpha$ $$\gamma=\frac{\partial \log \mathrm{B}(\alpha,\beta)}{\partial \alpha}=\frac{\partial \log \Gamma(\alpha)}{\partial \alpha}-\frac{\partial \log \Gamma(\alpha+\beta)}{\partial \alpha}$$ $$\phi=\frac{\partial \log \mathrm{B}(\alpha,\beta)}{\partial \beta}=\frac{\partial \log \Gamma(\beta)}{\partial \beta}-\frac{\partial \log \Gamma(\alpha+\beta)}{\partial \beta}$$ $\psi'(\alpha)=\psi'(\alpha+\beta)\frac{\partial \beta}{\partial \alpha} -\frac{1}{J_{11}}$ $\psi'(\beta)=\psi'(\alpha+\beta)\frac{\partial \alpha}{\partial \beta} -\frac{1}{J_{22}}$ $I_{\alpha,\beta}=\begin{pmatrix}A+C & C \ C & B+C \end{pmatrix}$ 
$J^{-1}=\begin{pmatrix}A+C & C \ C & B+C \end{pmatrix}$ $I_{\gamma,\phi}=J^\mathrm{T} I_{\alpha,\beta} J= J^\mathrm{T} J^{-1} J=J$ $$J=\frac{1}{AB+BC+AC}\begin{pmatrix}B+C & -C \ -C & A+C \end{pmatrix} = \begin{pmatrix}\frac{1}{A+\frac{BC}{B+C}} & -\frac{1}{A+B+\frac{AB}{C}} \ -\frac{1}{A+B+\frac{AB}{C}} & \frac{1}{B+\frac{AC}{A+C}} \end{pmatrix}$$ $$J_{11}=(A+C)^{-1}$$ $$J_{12}=J_{21}= C^{-1}$$ $$J_{22}=-(B+C)^{-1}$$ $$\frac{J_{11}J_{22}}{J_{12}J_{21}}=1$$ $$\frac{-C^2}{(A+C)(B+C)}=1$$ $$\mathrm{Cov}(\gamma,\phi)=J_{12}J_{11}A+J_{21}J_{22}B+C(J_{12}J_{11}+J_{21}J_{22}+J_{21}J_{12}+J_{11}J_{22})$$ $$\mathrm{Cov}(\gamma,\phi)=\frac{A}{C(A+C)}-\frac{B}{C(B+C)}+\frac{1}{A+C}-\frac{1}{B+C} +\frac{1}{C} +\frac{1}{C}\frac{-C^2}{(A+C)(B+C)} = \frac{1}{C}\left(\frac{A}{A+C}-\frac{B}{B+C}+\frac{C}{A+C}-\frac{C}{B+C} +1 +1\right) = \frac{2}{C}$$ End of explanation """ 1/(1+pg(0,2)**2/pg(1,1)) pg(0,2) from scipy.special import gamma gamma(1) """ Explanation: Wald distribution Fisher information $$f(x)=\frac{\alpha}{\sigma \sqrt{2 \pi x^3}}\exp\left(-\frac{(\nu x-\alpha)^2}{2 \sigma^2 x}\right)$$ $E[X]=\alpha/\nu$ $E[1/X]=\nu/\alpha +\sigma^2/\alpha^2$ $$I_{\alpha \sigma \nu} = \begin{pmatrix} \frac{2}{\alpha^2}+\frac{\nu}{\sigma^2 \alpha} & \frac{2}{\sigma \alpha} & \frac{1}{\sigma}\ . & \frac{1}{\sigma^2} &0\ . & . & \frac{\alpha}{\sigma^2 \nu}\end{pmatrix} $$ $$I_{\log \alpha,\log \sigma \nu} = \begin{pmatrix} 2 \sigma+\frac{\nu \alpha}{\sigma} & 2 & \frac{1}{\sigma}\ . & 1 &0\ . & . & \frac{\alpha}{\sigma^2 \nu}\end{pmatrix} $$ End of explanation """
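# A quick illustrative Monte Carlo check of the orthogonality derived earlier for the
# Gamma distribution parametrized by the log geometric mean g = psi(rho) + tau and the
# log-scale tau (a sketch using SciPy rather than Stan; the parameter values are arbitrary).
# At the true parameters, the covariance of the score estimates the Fisher information,
# so its off-diagonal entry should be close to zero. Scores are taken by central finite
# differences, so no analytic derivative of the inverse digamma is needed.
import numpy as np
from scipy.special import digamma, gammaln
from scipy.optimize import brentq

def gamma_logpdf_reparam(y, g, t):
    rho = brentq(lambda r: digamma(r) - (g - t), 1e-6, 1e6)  # rho = inverse digamma of (g - t)
    return -gammaln(rho) - rho * t + (rho - 1.0) * np.log(y) - y * np.exp(-t)

rng = np.random.default_rng(0)
g0, t0 = 1.5, 0.3
rho0 = brentq(lambda r: digamma(r) - (g0 - t0), 1e-6, 1e6)
y = rng.gamma(shape=rho0, scale=np.exp(t0), size=5000)

eps = 1e-5
score_g = (gamma_logpdf_reparam(y, g0 + eps, t0) - gamma_logpdf_reparam(y, g0 - eps, t0)) / (2 * eps)
score_t = (gamma_logpdf_reparam(y, g0, t0 + eps) - gamma_logpdf_reparam(y, g0, t0 - eps)) / (2 * eps)
print(np.cov(np.vstack([score_g, score_t])))  # off-diagonal entry should be close to zero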
martinjrobins/hobo
examples/plotting/residuals-variance-diagnostics.ipynb
bsd-3-clause
import pints import pints.toy as toy import pints.plot import numpy as np import matplotlib.pyplot as plt # Use the toy logistic model model = toy.LogisticModel(initial_population_size=1500) real_parameters = [0.000025, 10] times = np.linspace(0, 1000, 1000) org_values = model.simulate(real_parameters, times) # Add independent Gaussian noise noise = 50 values = org_values + pints.noise.multiplicative_gaussian(2.0, 0.0001, org_values) # Set up the problem and run the optimisation problem = pints.SingleOutputProblem(model, times, values) score = pints.SumOfSquaresError(problem) boundaries = pints.RectangularBoundaries([0, 0], [1, 1000]) x0 = np.array([0.001, 500]) opt = pints.OptimisationController( score, x0, boundaries=boundaries, method=pints.XNES, ) opt.set_log_to_screen(False) found_parameters, found_value = opt.run() print('Score at true solution: ') print(score(real_parameters)) print('Found solution: True parameters:' ) for k, x in enumerate(found_parameters): print(pints.strfloat(x) + ' ' + pints.strfloat(real_parameters[k])) fig, ax = pints.plot.series(np.array([found_parameters]), problem, ref_parameters=real_parameters) fig.set_size_inches(15, 7.5) plt.show() """ Explanation: Noise model diagnostics: residuals standard deviation and magnitude This example introduces two noise model diagnostics which are useful for studying the variance in time series noise. The general procedure we follow in this notebook is to start by performing a fit assuming an IID noise process. Next, we generate the diagnostic plots from the IID residuals, and see if they suggest that a more complex noise process is appropriate. The two diagnostics demonstrated in this notebook are pints.residuals_diagnostics.plot_residuals_binned_std and pints.residuals_diagnostics.plot_residuals_vs_output. Both methods can take either a single best fit parameter or an MCMC chain of posterior samples. The diagnostic plots in the notebook can be used to study the variance of the residuals. Pints also contains diagnostics for studying correlated noise processes, which are covered in Evaluating noise models using autocorrelation plots of the residuals and Noise model autocorrelation diagnostic plots. Time series with non-IID noise First, we generate a time series from the logistic model, and add a noise process whose magnitude varies over time. Specifically we use multiplicative noise, in which the magnitude of the noise terms are proportional to the value of the time series. This noise process is discussed in further detail in Multiplicative Gaussian noise. We then fit the logistic parameters assuming standard IID noise, which will yield a best fit and corresponding residuals. End of explanation """ from pints.residuals_diagnostics import plot_residuals_binned_std fig = plot_residuals_binned_std( np.array([found_parameters]), problem, n_bins=10 ) plt.show() """ Explanation: Binned residuals standard deviation Having obtained the IID fit, we now generate some diagnostic plots to see whether the IID assumption is valid (since the data was generated using multiplicative noise, we expect to find evidence that IID is not appropriate). The first diagnostic plot divides the time series into bins, and displays the standard deviation of the residuals calculated within each bin over time. This function is available from Pints using pints.residuals_diagnostics.plot_residuals_binned_std. 
End of explanation
"""
from pints.residuals_diagnostics import plot_residuals_vs_output

fig = plot_residuals_vs_output(
    np.array([found_parameters]),
    problem
)
plt.show()
"""
Explanation: This plot shows a steady decrease in the residuals variance over time, suggesting (correctly) that the noise is not actually IID.
Residuals vs output
Another diagnostic plot which helps to analyse the residuals is available in Pints via the plot_residuals_vs_output function in pints.residuals_diagnostics. This plot compares the magnitude of the residuals to the values of the solution.
End of explanation
"""
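# A minimal manual version of the binned residual standard deviation, to make explicit
# what the diagnostic above is summarising (an illustrative sketch; it assumes
# problem.evaluate() returns the model output at the fitted parameters, as Pints
# problem classes do, and reuses the times/values arrays defined earlier).
residuals_manual = values - problem.evaluate(found_parameters)
n_bins = 10
bin_edges = np.linspace(times[0], times[-1], n_bins + 1)
bin_index = np.digitize(times, bin_edges[1:-1])
binned_std = np.array([np.std(residuals_manual[bin_index == b]) for b in range(n_bins)])

plt.figure(figsize=(8, 4))
plt.plot(0.5 * (bin_edges[:-1] + bin_edges[1:]), binned_std, 'o-')
plt.xlabel('Time')
plt.ylabel('Residual standard deviation (per bin)')
plt.show()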
quoniammm/mine-tensorflow-examples
fastAI/deeplearning2/spelling_bee_RNN.ipynb
mit
%matplotlib inline import importlib import utils2; importlib.reload(utils2) from utils2 import * np.set_printoptions(4) PATH = 'data/spellbee/' limit_mem() from sklearn.model_selection import train_test_split """ Explanation: Spelling Bee This notebook starts our deep dive (no pun intended) into NLP by introducing sequence-to-sequence learning on Spelling Bee. Data Stuff We take our data set from The CMU pronouncing dictionary End of explanation """ lines = [l.strip().split(" ") for l in open(PATH+"cmudict-0.7b", encoding='latin1') if re.match('^[A-Z]', l)] lines = [(w, ps.split()) for w, ps in lines] lines[0], lines[-1] """ Explanation: The CMU pronouncing dictionary consists of sounds/words and their corresponding phonetic description (American pronunciation). The phonetic descriptions are a sequence of phonemes. Note that the vowels end with integers; these indicate where the stress is. Our goal is to learn how to spell these words given the sequence of phonemes. The preparation of this data set follows the same pattern we've seen before for NLP tasks. Here we iterate through each line of the file and grab each word/phoneme pair that starts with an uppercase letter. End of explanation """ phonemes = ["_"] + sorted(set(p for w, ps in lines for p in ps)) phonemes[:5] len(phonemes) """ Explanation: Next we're going to get a list of the unique phonemes in our vocabulary, as well as add a null "_" for zero-padding. End of explanation """ p2i = dict((v, k) for k,v in enumerate(phonemes)) letters = "_abcdefghijklmnopqrstuvwxyz*" l2i = dict((v, k) for k,v in enumerate(letters)) """ Explanation: Then we create mappings of phonemes and letters to respective indices. Our letters include the padding element "_", but also "*" which we'll explain later. End of explanation """ maxlen=15 pronounce_dict = {w.lower(): [p2i[p] for p in ps] for w, ps in lines if (5<=len(w)<=maxlen) and re.match("^[A-Z]+$", w)} len(pronounce_dict) """ Explanation: Let's create a dictionary mapping words to the sequence of indices corresponding to it's phonemes, and let's do it only for words between 5 and 15 characters long. End of explanation """ a=['xyz','abc'] [o.upper() for o in a if o[0]=='x'], [[p for p in o] for o in a], [p for o in a for p in o] """ Explanation: Aside on various approaches to python's list comprehension: * the first list is a typical example of a list comprehension subject to a conditional * the second is a list comprehension inside a list comprehension, which returns a list of list * the third is similar to the second, but is read and behaves like a nested loop * Since there is no inner bracket, there are no lists wrapping the inner loop End of explanation """ maxlen_p = max([len(v) for k,v in pronounce_dict.items()]) pairs = np.random.permutation(list(pronounce_dict.keys())) n = len(pairs) input_ = np.zeros((n, maxlen_p), np.int32) labels_ = np.zeros((n, maxlen), np.int32) for i, k in enumerate(pairs): for j, p in enumerate(pronounce_dict[k]): input_[i][j] = p for j, letter in enumerate(k): labels_[i][j] = l2i[letter] go_token = l2i["*"] dec_input_ = np.concatenate([np.ones((n,1)) * go_token, labels_[:,:-1]], axis=1) """ Explanation: Split lines into words, phonemes, convert to indexes (with padding), split into training, validation, test sets. Note we also find the max phoneme sequence length for padding. 
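Once the arrays below are built, printing a single training example (an illustrative snippet; index 0 is arbitrary) makes the decoder input explicit: it is simply the label sequence shifted right by one position, with the "*" go token prepended for the teacher forcing used later on:
i = 0
print([phonemes[p] for p in input_[i] if p != 0])                    # phoneme indices decoded back to symbols
print(''.join(letters[l] for l in labels_[i]).rstrip('_'))           # target spelling
print(''.join(letters[int(l)] for l in dec_input_[i]).rstrip('_'))   # decoder input; starts with '*'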
End of explanation """ (input_train, input_test, labels_train, labels_test, dec_input_train, dec_input_test ) = train_test_split(input_, labels_, dec_input_, test_size=0.1) input_train.shape labels_train.shape input_vocab_size, output_vocab_size = len(phonemes), len(letters) input_vocab_size, output_vocab_size """ Explanation: Sklearn's <tt>train_test_split</tt> is an easy way to split data into training and testing sets. End of explanation """ parms = {'verbose': 0, 'callbacks': [TQDMNotebookCallback(leave_inner=True)]} lstm_params = {} dim = 240 """ Explanation: Next we proceed to build our model. Keras code End of explanation """ def get_rnn(return_sequences= True): return LSTM(dim, dropout_U= 0.1, dropout_W= 0.1, consume_less= 'gpu', return_sequences=return_sequences) """ Explanation: Without attention End of explanation """ inp = Input((maxlen_p,)) x = Embedding(input_vocab_size, 120)(inp) x = Bidirectional(get_rnn())(x) x = get_rnn(False)(x) x = RepeatVector(maxlen)(x) x = get_rnn()(x) x = get_rnn()(x) x = TimeDistributed(Dense(output_vocab_size, activation='softmax'))(x) """ Explanation: The model has three parts: * We first pass list of phonemes through an embedding function to get a list of phoneme embeddings. Our goal is to turn this sequence of embeddings into a single distributed representation that captures what our phonemes say. * Turning a sequence into a representation can be done using an RNN. This approach is useful because RNN's are able to keep track of state and memory, which is obviously important in forming a complete understanding of a pronunciation. * <tt>BiDirectional</tt> passes the original sequence through an RNN, and the reversed sequence through a different RNN and concatenates the results. This allows us to look forward and backwards. * We do this because in language things that happen later often influence what came before (i.e. in Spanish, "el chico, la chica" means the boy, the girl; the word for "the" is determined by the gender of the subject, which comes after). * Finally, we arrive at a vector representation of the sequence which captures everything we need to spell it. We feed this vector into more RNN's, which are trying to generate the labels. After this, we make a classification for what each letter is in the output sequence. * We use <tt>RepeatVector</tt> to help our RNN remember at each point what the original word is that it's trying to translate. End of explanation """ model = Model(inp, x) model.compile(Adam(), 'sparse_categorical_crossentropy', metrics=['acc']) hist=model.fit(input_train, np.expand_dims(labels_train,-1), validation_data=[input_test, np.expand_dims(labels_test,-1)], batch_size=64, **parms, nb_epoch=3) hist.history['val_loss'] """ Explanation: We can refer to the parts of the model before and after <tt>get_rnn(False)</tt> returns a vector as the encoder and decoder. The encoder has taken a sequence of embeddings and encoded it into a numerical vector that completely describes it's input, while the decoder transforms that vector into a new sequence. Now we can fit our model End of explanation """ def eval_keras(input): preds = model.predict(input, batch_size=128) predict = np.argmax(preds, axis = 2) return (np.mean([all(real==p) for real, p in zip(labels_test, predict)]), predict) """ Explanation: To evaluate, we don't want to know what percentage of letters are correct but what percentage of words are. 
End of explanation """ acc, preds = eval_keras(input_test); acc def print_examples(preds): print("pronunciation".ljust(40), "real spelling".ljust(17), "model spelling".ljust(17), "is correct") for index in range(20): ps = "-".join([phonemes[p] for p in input_test[index]]) real = [letters[l] for l in labels_test[index]] predict = [letters[l] for l in preds[index]] print (ps.split("-_")[0].ljust(40), "".join(real).split("_")[0].ljust(17), "".join(predict).split("_")[0].ljust(17), str(real == predict)) """ Explanation: The accuracy isn't great. End of explanation """ print_examples(preds) """ Explanation: We can see that sometimes the mistakes are completely reasonable, occasionally they're totally off. This tends to happen with the longer words that have large phoneme sequences. That's understandable; we'd expect larger sequences to lose more information in an encoding. End of explanation """ import attention_wrapper; importlib.reload(attention_wrapper) from attention_wrapper import Attention """ Explanation: Attention model This graph demonstrates the accuracy decay for a nueral translation task. With an encoding/decoding technique, larger input sequences result in less accuracy. <img src="https://smerity.com/media/images/articles/2016/bahdanau_attn.png" width="600"> This can be mitigated using an attentional model. End of explanation """ inp = Input((maxlen_p,)) inp_dec = Input((maxlen,)) emb_dec = Embedding(output_vocab_size, 120)(inp_dec) emb_dec = Dense(dim)(emb_dec) x = Embedding(input_vocab_size, 120)(inp) x = Bidirectional(get_rnn())(x) x = get_rnn()(x) x = get_rnn()(x) x = Attention(get_rnn, 3)([x, emb_dec]) x = TimeDistributed(Dense(output_vocab_size, activation='softmax'))(x) """ Explanation: The attentional model doesn't encode into a single vector, but rather a sequence of vectors. The decoder then at every point is passing through this sequence. For example, after the bi-directional RNN we have 16 vectors corresponding to each phoneme's output state. Each output state describes how each phoneme relates between the other phonemes before and after it. After going through more RNN's, our goal is to transform this sequence into a vector of length 15 so we can classify into characters. A smart way to take a weighted average of the 16 vectors for each of the 15 outputs, where each set of weights is unique to the output. For example, if character 1 only needs information from the first phoneme vector, that weight might be 1 and the others 0; if it needed information from the 1st and 2nd equally, those two might be 0.5 each. The weights for combining all the input states to produce specific outputs can be learned using an attentional model; we update the weights using SGD, and train it jointly with the encoder/decoder. Once we have the outputs, we can classify the character using softmax as usual. Notice below we do not have an RNN that returns a flat vector as we did before; we have a sequence of vectors as desired. We can then pass a sequence of encoded states into the our custom <tt>Attention</tt> model. This attention model also uses a technique called teacher forcing; in addition to passing the encoded hidden state, we also pass the correct answer for the previous time period. We give this information to the model because it makes it easier to train. In the beginning of training, the model will get most things wrong, and if your earlier character predictions are wrong then your later ones will likely be as well. 
Teacher forcing allows the model to still learn how to predict later characters, even if the earlier characters were all wrong. End of explanation """ model = Model([inp, inp_dec], x) model.compile(Adam(), 'sparse_categorical_crossentropy', metrics=['acc']) hist=model.fit([input_train, dec_input_train], np.expand_dims(labels_train,-1), validation_data=[[input_test, dec_input_test], np.expand_dims(labels_test,-1)], batch_size=64, **parms, nb_epoch=3) hist.history['val_loss'] K.set_value(model.optimizer.lr, 1e-4) hist=model.fit([input_train, dec_input_train], np.expand_dims(labels_train,-1), validation_data=[[input_test, dec_input_test], np.expand_dims(labels_test,-1)], batch_size=64, **parms, nb_epoch=5) np.array(hist.history['val_loss']) def eval_keras(): preds = model.predict([input_test, dec_input_test], batch_size=128) predict = np.argmax(preds, axis = 2) return (np.mean([all(real==p) for real, p in zip(labels_test, predict)]), predict) """ Explanation: We can now train, passing in the decoder inputs as well for teacher forcing. End of explanation """ acc, preds = eval_keras(); acc """ Explanation: Better accuracy! End of explanation """ print("pronunciation".ljust(40), "real spelling".ljust(17), "model spelling".ljust(17), "is correct") for index in range(20): ps = "-".join([phonemes[p] for p in input_test[index]]) real = [letters[l] for l in labels_test[index]] predict = [letters[l] for l in preds[index]] print (ps.split("-_")[0].ljust(40), "".join(real).split("_")[0].ljust(17), "".join(predict).split("_")[0].ljust(17), str(real == predict)) """ Explanation: This model is certainly performing better with longer words. The mistakes it's making are reasonable, and it even succesfully formed the word "partisanship". End of explanation """ nb_samples, nb_time, input_dim, output_dim = (64, 4, 32, 48) x = tf.placeholder(np.float32, (nb_samples, nb_time, input_dim)) xr = K.reshape(x,(-1,nb_time,1,input_dim)) W1 = tf.placeholder(np.float32, (input_dim, input_dim)); W1.shape W1r = K.reshape(W1, (1, input_dim, input_dim)) W1r2 = K.reshape(W1, (1, 1, input_dim, input_dim)) xW1 = K.conv1d(x,W1r,border_mode='same'); xW1.shape xW12 = K.conv2d(xr,W1r2,border_mode='same'); xW12.shape xW2 = K.dot(x, W1) x1 = np.random.normal(size=(nb_samples, nb_time, input_dim)) w1 = np.random.normal(size=(input_dim, input_dim)) res = sess.run(xW1, {x:x1, W1:w1}) res2 = sess.run(xW2, {x:x1, W1:w1}) np.allclose(res, res2) W2 = tf.placeholder(np.float32, (output_dim, input_dim)); W2.shape h = tf.placeholder(np.float32, (nb_samples, output_dim)) hW2 = K.dot(h,W2); hW2.shape hW2 = K.reshape(hW2,(-1,1,1,input_dim)); hW2.shape """ Explanation: Test code for the attention layer End of explanation """
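# A toy, NumPy-only illustration of the attention computation described above
# (a sketch with made-up shapes, not the actual Attention layer from attention_wrapper):
# each of the 15 outputs takes a softmax-weighted average ("context") over the
# 16 encoder state vectors.
import numpy as np

def toy_attention(encoder_states, decoder_states):
    # encoder_states: (input_len, dim), decoder_states: (output_len, dim)
    scores = decoder_states @ encoder_states.T                 # (output_len, input_len)
    scores = scores - scores.max(axis=1, keepdims=True)        # for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ encoder_states, weights                   # context: (output_len, dim)

enc = np.random.randn(16, 240)   # one state per input phoneme
dec = np.random.randn(15, 240)   # one query per output character
context, weights = toy_attention(enc, dec)
print(context.shape, weights.sum(axis=1))   # (15, 240); each row of weights sums to 1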
UWPRG/Python
tutorials/PEP8_compliance_tips.ipynb
mit
%%bash
ipython profile create blake
mkdir /Users/houghb/.ipython/profile_blake/static/
mkdir /Users/houghb/.ipython/profile_blake/static/custom/
touch /Users/houghb/.ipython/profile_blake/static/custom/custom.css
"""
Explanation: Tips to make it easier to comply with the PEP8 style guide
Read the style guide here. Please add any additional tips or tricks you discover!
Tip #1 - line length
One challenge I have in ipython notebook is writing lines that are too long (>79 char), but I don't realize it because the notebook window in my browser is set quite wide. Here is a way to make your ipython notebook cells show an 80 character-wide window so you know when you've written too long of a line.
You are going to make a new profile (or, if you want, you can edit the default) with a custom css file to change how the code cells look. You can do all this from ipython (as I've done here), or you can do it in terminal if you prefer. Change the directory names as appropriate below.
NOTE: these instructions for creating a custom profile don't work if you've upgraded to ipython or jupyter 4.x (+). Instructions for these newer versions are included at the end of this section
End of explanation
"""
%%file /Users/houghb/.ipython/profile_blake/static/custom/custom.css

/**This is Blake's custom css file**/

div.input{
    width:107ex; /* on my system this is an 80 char window */
}
"""
Explanation: The file that you edit in the next cell (custom.css) will change how your notebook cells look. You can add other customizations, but the one I've included here will make the shaded area of code cells 80 characters wide (assuming you have the same default font that I do; you may need to edit the width for your system if you change the font). You will still be able to type lines longer than 80 characters if you want, but will need to scroll left and right to see them.
If you want to make all cells in your notebook (including Markdown cells) only 80 characters wide, then substitute "cell" for "input" in this code.
End of explanation
"""
%%file /Users/houghb/.jupyter/custom/custom.css

/**This is Blake's custom css file**/

div.input{
    width:107ex; /* on my system this is an 80 char window */
}
"""
Explanation: To use your newly created custom.css you need to start ipython notebook with the following command:
ipython notebook --profile=blake
(substitute your own profile name instead of blake)
If you've updated to ipython or jupyter 4.X(+):
Instead of saving your file in .../.ipython/profile_blake/static/custom/custom.css, save it in the .jupyter directory:
End of explanation
"""
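# An additional tip (my own suggestion; the file name below is a placeholder):
# a linter will flag over-long lines (and other PEP8 issues) regardless of how wide
# your browser window is. For a .py file you can run, for example:
#
#   pip install flake8
#   flake8 --max-line-length=79 my_script.py
#
# And a quick pure-Python check of line lengths, runnable in any notebook cell:
code_lines = [
    "x = 1",
    "really_long_assignment = 'a' * 200  # deliberately much longer than seventy-nine characters, to trigger the check",
]
for line_number, line in enumerate(code_lines, start=1):
    if len(line) > 79:
        print("line {} is {} characters long".format(line_number, len(line)))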
brain-research/guided-evolutionary-strategies
Guided_Evolutionary_Strategies_Demo_TensorFlow2.ipynb
apache-2.0
# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 Google LLC. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """ import numpy as np import matplotlib.pyplot as plt import tensorflow as tf print(f'TensorFlow version: {tf.__version__}') """ Explanation: Guided Evolutionary Strategies Demo (using TensorFlow 2) Date: 06/08/2020 This is a self-contained notebook that reproduces the toy example in Fig. 1 of the Guided ES paper. End of explanation """ # Generate problem data. rs = np.random.RandomState(seed=0) m = 2000 # Number of data points. n = 1000 # Number of variables. A = rs.randn(m, n) b = rs.randn(m, 1) xstar = np.linalg.lstsq(A, b, rcond=None)[0] f_star = (0.5/float(m)) * np.linalg.norm(np.dot(A, xstar) - b) ** 2 A = tf.convert_to_tensor(A, dtype=tf.float32) b = tf.convert_to_tensor(b, dtype=tf.float32) # This is a bias vector that will be added to the gradient grad_bias = 1.0 * tf.nn.l2_normalize(tf.convert_to_tensor(rs.randn(n, 1), dtype=tf.float32)) @tf.function def loss_and_grad_fun(x): residual = tf.matmul(A, x) - b loss = 0.5 * tf.norm(residual) ** 2 / float(m) # The 'gradient' that we observe is a noisy, biased version of the true gradient. # This is meant to mimic scenarios where we only have access to biased gradients. err = tf.matmul(tf.transpose(A), residual) / float(m) grad_noise = 1.5 * tf.nn.l2_normalize(tf.random.normal(shape=(n, 1))) gradient = err + (grad_bias + grad_noise) * tf.norm(err) return loss, gradient """ Explanation: Problem setup We test the algorithms on a toy problem where we explicitly add bias and variance to the gradient. End of explanation """ opt = tf.keras.optimizers.SGD(5e-3) @tf.function def step_fun(x): loss, gradient = loss_and_grad_fun(x) opt.apply_gradients([(gradient, x)]) return loss %%time x = tf.Variable(tf.zeros((n, 1)), dtype=tf.float32) fobj = [] for _ in range(10000): fobj.append(step_fun(x)) # Store training curve for plotting later. f_gd = tf.stack(fobj).numpy().copy() """ Explanation: Algorithm 1: Gradient Descent Our first algorithm is gradient descent, applied directly to the biased gradients. End of explanation """ # Hyperparameters for Vanilla ES sigma = 0.1 beta = 1.0 learning_rate = 0.2 # Defines the distribution for sampling parameter perturbations. scale = sigma / np.sqrt(n) def sample(): return scale * tf.random.normal(shape=(n, 1), dtype=tf.float32) opt = tf.keras.optimizers.SGD(learning_rate) @tf.function def step_fun(x): epsilon = sample() # We utilize antithetic (positive and negative) samples. f_pos, _ = loss_and_grad_fun(x + epsilon) f_neg, _ = loss_and_grad_fun(x - epsilon) # This update is a stochastic finite difference estimate of the true gradient. update = (beta / (2 * sigma ** 2)) * (f_pos - f_neg) * epsilon opt.apply_gradients([(update, x)]) return loss_and_grad_fun(x)[0] %%time x = tf.Variable(tf.zeros((n, 1)), dtype=tf.float32) # Run the optmizer. fobj = [] for _ in range(10000): fobj.append(step_fun(x)) # Store training curve for plotting later. 
f_ves = tf.stack(fobj).numpy().copy() """ Explanation: Algorithm 2: Vanilla Evolutionary Strategies (Vanilla ES) Our next algorithm is vanilla (standard) evolutionary strategies. This is a zeroth-order optimization algorithm, which means that it only uses the function evaluation (and ignores the biased gradients). End of explanation """ # Hyperparameters for Guided ES sigma = 0.1 alpha = 0.5 beta = 1.0 k = 1 # Defines the dimensionality of the low-rank subspace. # Defines parameters of the distribution for sampling perturbations. a = sigma * np.sqrt(alpha / float(n)) c = sigma * np.sqrt((1. - alpha) / float(k)) def sample(gradient_subspace): epsilon_full = tf.random.normal(shape=(n, 1), dtype=tf.float32) epsilon_subspace = tf.random.normal(shape=(k, 1), dtype=tf.float32) Q, _ = tf.linalg.qr(gradient_subspace) epsilon = a * epsilon_full + c * tf.matmul(Q, epsilon_subspace) return epsilon opt = tf.keras.optimizers.SGD(0.2) @tf.function def step_fun(x): # We pass the gradient to our sampling function. loss, gradient = loss_and_grad_fun(x) epsilon = sample(gradient) # We utilize antithetic (positive and negative) samples. f_pos, _ = loss_and_grad_fun(x + epsilon) f_neg, _ = loss_and_grad_fun(x - epsilon) # This update is a stochastic finite difference estimate of the true gradient. update = (beta / (2 * sigma ** 2)) * (f_pos - f_neg) * epsilon opt.apply_gradients([(update, x)]) return loss_and_grad_fun(x)[0] %%time x = tf.Variable(tf.zeros((n, 1)), dtype=tf.float32) # Run the optmizer. fobj = [] for _ in range(10000): fobj.append(step_fun(x)) # Store training curve for plotting later. f_ges = tf.stack(fobj).numpy().copy() COLORS = {'ges': '#7570b3', 'ves': '#1b9e77', 'sgdm': '#d95f02'} plt.figure(figsize=(8, 6)) plt.plot(f_ves - f_star, color=COLORS['ves'], label='Vanilla ES') plt.plot(f_gd - f_star, color=COLORS['sgdm'], label='Grad. Descent') plt.plot(f_ges - f_star, color=COLORS['ges'], label='Guided ES') plt.legend(fontsize=16, loc=0) plt.xlabel('Iteration', fontsize=16) plt.ylabel('Loss', fontsize=16) plt.title('Demo of Guided Evolutionary Strategies', fontsize=16); """ Explanation: Algorithm 3: Guided Evolutionary Strategies (Guided ES) Guided ES is our proposed method. It uses a diagonal plus low-rank covariance matrix for drawing perturbations, where the low-rank subspace is spanned by the available gradient information. This allows it to incorporate the biased gradient information, while still minimizing the true loss function. End of explanation """
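# A quick empirical check (an illustrative sketch reusing the variables defined above):
# the Guided ES perturbations should have covariance
#   sigma^2 * (alpha/n * I + (1 - alpha)/k * Q Q^T),
# so the variance of a perturbation projected onto the gradient subspace Q should be
# close to sigma^2 * (alpha/n + (1 - alpha)/k).
_, gradient = loss_and_grad_fun(x)
Q, _ = tf.linalg.qr(gradient)
perturbations = tf.stack([sample(gradient) for _ in range(2000)])[:, :, 0]   # shape (2000, n)
empirical_variance = tf.math.reduce_variance(tf.matmul(perturbations, Q))
print(float(empirical_variance), sigma ** 2 * (alpha / n + (1 - alpha) / k))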
ydkahin/jupyter-notebooks
notebooks/quora-views-challenge/quora_views_challenge-partiii-EDA_and_feature_engineering.ipynb
mit
import pandas as pd import json json_data = open('../views/sample/input00.in') # Edit this to where you have put the input00.in file data = [] for line in json_data: data.append(json.loads(line)) data.remove(9000) data.remove(1000) df = pd.DataFrame(data) df['anonymous'] = df['anonymous'].map({False: 0, True:1}).astype(int) cleaned_df=df[:9000] # to make reading the question_text cells easier, remove the maximum column width pd.set_option('display.max_colwidth', -1) from plotnine import * """ Explanation: Quora Views Prediction Challenge - Part III - EDA and Feature Engineering End of explanation """ df[['question_text', 'topics', 'context_topic']].head() """ Explanation: Missing context_topic At Quora, context_topic has been depreciated since 2015. It used to be the primary topic assigned to every new question. And this topic tag was not visible to viewers, so they didn't have a way to filter it out (For this reason, I expect context_topic_followers to not contribute to view count.). Those with missing context_topic, which is the primary topic of the question. Although, as per the prompt, every question's context_topic json blob is said to be included in the topics array of each question, it is not. Let's investigate those with missing primary topics and try to derive some insight. End of explanation """ # make a copy of cleaned_df data_df = cleaned_df.copy() # create the column data_df['context_present'] = data_df['context_topic'].apply(lambda x: 0 if x==None else 1) """ Explanation: But we want to see the mean of cleaned_df.__ans__ with the ctm_w_ans rows removed. So, let's make a column of a boolean feature of whether context_topic is missing or not and do an np.mean pivot table. End of explanation """ test = data_df['context_present'] * data_df['num_answers'] context_xnum = pd.DataFrame(test, columns=['context_xnum']) test.corr(data_df.__ans__) """ Explanation: Let's see what happens if we create a column out of the product of num_answers and context_present. End of explanation """ (ggplot(pd.concat([data_df, context_xnum], axis=1), aes(x='context_xnum', y='__ans__')) + geom_point() + geom_smooth(method='lm') + theme_bw()) (ggplot(pd.concat([data_df, context_xnum], axis=1), aes(x='context_xnum', y='__ans__', color='factor(anonymous)')) + geom_point() + stat_smooth( method='lm') + facet_wrap('~anonymous')) """ Explanation: New Feature: Product of num_answers and context_present We see a moderate correlation with coefficient of 0.362 between the product context_xnum and __ans__. End of explanation """ data_df[data_df.context_present == 0][13:16] """ Explanation: Surprisingly, those missing a context topic have a higher __ans__ than those that do. You would imagine the opposite assuming those missing a primary topic would be ones that are ignored or those that didn't garner enough attention. But quite the opposite seems to be true. End of explanation """ topics_present = data_df['topics'].apply(lambda x: 0 if len(x) == 0 else 1) topics_present.value_counts() topics_present.corr(data_df['__ans__']) test_1 = topics_present * test # 0 if topics aren't present or num_answers = 0 or context_topic not present. test_1.corr(data_df['__ans__']) """ Explanation: How about topics column? Another insight from the missing context_topic dataframe is that there are rows with missing topics. These went undetected last time because they are empty arrays and not NaN or missing values. 
End of explanation """ # Short questionsb data_df['len_question_text'] = data_df.question_text.apply(lambda x: len(x)) # plot (ggplot(data_df, aes(x='len_question_text')) + geom_density() + theme_bw()) """ Explanation: The correlation hasn't changed, so at this point, we have nothing to do with the empty topics rows. End of explanation """ data_df[(data_df.len_question_text < 15)][['__ans__', 'question_text', 'len_question_text', 'num_answers']].sort_values(by="__ans__", ascending=False) #gives the correlation of `num_answers` to number of characters less than int_ def NumAnsCorrNumChar(int_): question_shorter_than_int_=data_df['len_question_text'].apply(lambda x: 1 if x <= int_ else 0) return question_shorter_than_int_.corr(data_df['num_answers']) #gives the correlation of `__ans__` to the product of num_answers and the ... # ... boolean column of number of characters less than int_ def TargetCorrNumChar(int_): question_shorter_than_int_=data_df['len_question_text'].apply(lambda x: 1 if x <= int_ else 0) prod = question_shorter_than_int_ * data_df['num_answers'] return prod.corr(data_df['__ans__']) some_list = [] for i in data_df['len_question_text'].values: some_list.append(NumAnsCorrNumChar(i)) min(some_list), max(some_list) #7 and 15 characters """ Explanation: Let's look at questions with less than 10 characters. End of explanation """ some_list_1 = [] for i in data_df['len_question_text'].values: some_list_1.append(TargetCorrNumChar(i)) min(some_list_1), max(some_list_1) #7 and 15 characters some_list_1.index(0.3820463704019823) #the index of the max data_df.iloc[141]['len_question_text'] # the number of characters """ Explanation: This is useless, did it out of curiousity. How about the correlation between the number of characters and __ans__? End of explanation """ data_df[(~data_df.question_text.str.contains("[?]"))][['__ans__', 'question_text', 'num_answers']].sort_values(by="__ans__", ascending=False).head() q_df = data_df.question_text.apply(lambda x: 1 if ( (any(~pd.Series(x).str.contains('[?]')))) else 0) q_df.corr(data_df.__ans__) """ Explanation: The feature deciding whether a question is less than 182 characters long (valued 1) or shorter has a moderate correlation with the target column __ans__. Observe that most of the questions in the bottom of the list have no question marks; so, let's explore that. End of explanation """ # Combination checker function `ccc=checkCorrComb` def CorrOR(str_): split = str_.split(', ') joined= '|'.join(split) # create a pd series with boolean values combined_df = data_df.question_text.apply(lambda x: 1 if any(pd.Series(x).str.contains(str(joined))) else 0) return combined_df.corr(data_df.__ans__) def CorrAND(str_): split = str_.split(', ') # create a pd series with boolean values combined_df = data_df.question_text.apply(lambda x: 1 if all(words in x for words in split) else 0) return combined_df.corr(data_df.__ans__) #Since Quora has a huge userbase in India and Pakistan, let's start there CorrOR('Pakistan, Pakistani, India, Indian, IIT, Delhi, Modi'), CorrAND('India, Pakistan') combination_test_1 = data_df.question_text.apply(lambda x: 1 if any(pd.Series(x).str.contains('Pakistan|Pakistani|India|Indian|IIT|Delhi|Modi')) else 0) combination_test_2 = data_df.question_text.apply(lambda x: 1 if all(words in x for words in ['India', 'Pakistan']) else 0) combination_test_1.corr(data_df.__ans__), combination_test_2.corr(data_df.__ans__) """ Explanation: Nothing significant! 
Correlation between __ans__ and certain keywords Let me start by clarifying the title of this section. I want to investigate if the appearance of certain words in the question_text affect views (__ans__ and technically the ratio views to age, but I will just call it views). Also, when I write "the correlation of some word," I am referring to the correlation of __ans__ to a boolean column corresponding to whether the question text in that row contains this word. Correlation Calculators for Keyword Combinations To make our lives easier, let's automate the process. End of explanation """ # Superlatives sup_list = ['angriest', 'worst', 'biggest', 'bitterest', 'blackest', 'blandest', 'bloodiest', 'bluest', 'boldest', 'bossiest', 'bravest', 'briefest', 'brightest', 'broadest', 'busiest', 'calmest', 'cheapest', 'chewiest', 'chubbiest', 'classiest', 'cleanest', 'clearest', 'cleverest', 'closest', 'cloudiest', 'clumsiest', 'coarsest', 'coldest', 'coolest', 'craziest', 'creamiest', 'creepiest', 'crispiest', 'cruellest', 'crunchiest', 'curliest', 'curviest', 'cutest', 'dampest', 'darkest', 'deadliest', 'deepest', 'densest', 'dirtiest', 'driest', 'dullest', 'dumbest', 'dustiest', 'earliest', 'easiest', 'faintest', 'fairest', 'fanciest', 'furthest/farthest', 'fastest', 'fattest', 'fewest', 'fiercest', 'filthiest', 'finest', 'firmest', 'fittest', 'flakiest', 'flattest', 'freshest', 'friendliest', 'fullest', 'funniest', 'gentlest', 'gloomiest', 'best', 'grandest', 'gravest', 'greasiest', 'greatest', 'greediest', 'grossest', 'guiltiest', 'hairiest', 'handiest', 'happiest', 'hardest', 'harshest', 'healthiest', 'heaviest', 'highest', 'hippest', 'hottest', 'humblest', 'hungriest', 'iciest', 'itchiest', 'juiciest', 'kindest', 'largest', 'latest', 'laziest', 'lightest', 'likeliest', 'littlest', 'liveliest', 'loneliest', 'longest', 'loudest', 'loveliest', 'lowest', 'maddest', 'meanest', 'messiest', 'mildest', 'moistest', 'narrowest', 'nastiest', 'naughtiest', 'nearest', 'neatest', 'neediest', 'newest', 'nicest', 'noisiest', 'oddest', 'oiliest', 'oldest/eldest', 'plainest', 'politest', 'poorest', 'prettiest', 'proudest', 'purest', 'quickest', 'quietest', 'rarest', 'rawest', 'richest', 'ripest', 'riskiest', 'roomiest', 'roughest', 'rudest', 'rustiest', 'saddest', 'safest', 'saltiest', 'sanest', 'scariest', 'shallowest', 'sharpest', 'shiniest', 'shortest', 'shyest', 'silliest', 'simplest', 'sincerest', 'skinniest', 'sleepiest', 'slimmest', 'slimiest', 'slowest', 'smallest', 'smartest', 'smelliest', 'smokiest', 'smoothest', 'softest', 'soonest', 'sorest', 'sorriest', 'sourest', 'spiciest', 'steepest', 'stingiest', 'strangest', 'strictest', 'strongest', 'sunniest', 'sweatiest', 'sweetest', 'tallest', 'tannest', 'tastiest', 'thickest', 'thinnest', 'thirstiest', 'tiniest', 'toughest', 'truest', 'ugliest', 'warmest', 'weakest', 'wealthiest', 'weirdest', 'wettest', 'widest', 'wildest', 'windiest', 'wisest', 'worldliest', 'worthiest', 'youngest'] data_df['qcontains_superlatives'] = data_df.question_text.apply(lambda x: 1 if any(st in x for st in sup_list) else 0) data_df['qcontains_superlatives'].corr(data_df['__ans__']) """ Explanation: Ok, the functions work well! Keywords: Superlative Adjectives Ok let's create another question_text filter. End of explanation """ # questions containing best, most, or epic CorrOR('best, most, epic') """ Explanation: Ok, not as promising as I thought! Let's go through the list and look for the best one. Keywords: "Best" et al. 
End of explanation """ # Calculates the correlation between [a boolean column of question texts containing `str_` multiplied by num_answers] and [num_answers] # eg. CorrORxNumAns('cat, dog, book, animals') def CorrORxNumAns(str_): split = str_.split(', ') joined= '|'.join(split) combined_df = data_df.question_text.apply(lambda x: 1 if any(pd.Series(x).str.contains(str(joined))) else 0) xnum_combined_df = combined_df * data_df['num_answers'] return xnum_combined_df.corr(data_df.__ans__) CorrORxNumAns('best, most, epic, university, stories') """ Explanation: Not as I expected. What if I multiply it by the number of answers? Feature Expander End of explanation """ # make a class of questions with answers >= 1 a=pd.DataFrame() a['num_answers_g0'] = data_df['num_answers'].apply(lambda x: 1 if x != 0 else 0) # multiply `qcontains_best` by the number of answers>=1 a['qcontains_best'] = data_df.question_text.apply(lambda x: 1 if any(pd.Series(x).str.contains(str('best|most|epic|India|'))) else 0) b = a['qcontains_best'] * a['num_answers_g0'] b.corr(data_df['__ans__']) """ Explanation: Let's check if it matters that the number of answers is greater than 1 (as opposed to just 0 or 1, for all with at least one answer). So lets do the following: End of explanation """ (ggplot(data_df, aes(x="num_answers")) +\ geom_histogram(binwidth = 5) + theme_bw()) """ Explanation: So, it does matter that some questions have more number of answers than others. Exploring num_answers Let's begin with a simple historgram. End of explanation """ (ggplot(data_df[data_df.num_answers > 6], aes(x="num_answers")) +\ geom_histogram(binwidth = 5) + theme_bw()) """ Explanation: How about questions with more than 6 answers? End of explanation """ (ggplot(data_df, aes(x="num_answers", y="__ans__")) + geom_point() + geom_smooth(method='lm') + theme_bw()) print('The correlation above is {}.'.format(data_df['num_answers'].corr(data_df['__ans__']))) (ggplot(data_df, aes('num_answers', '__ans__', color='factor(anonymous)')) + theme_bw() #black and white theme + geom_point(size=0.2) + geom_smooth(aes(color='factor(anonymous)'), method='lm') + facet_wrap('~anonymous', nrow=1, scales='free')) # divide the plot by the 'anonymous' column print('For the 0 plot, the coefficient of correlation is {0}, whereas for the 1 plot, it is {1}.'.format(data_df[data_df['anonymous']==0]['num_answers'].corr(data_df[data_df['anonymous']==0]['__ans__']), data_df[data_df['anonymous']==1]['num_answers'].corr(data_df[data_df['anonymous']==1]['__ans__']))) (ggplot(data_df[data_df.num_answers > 20], aes('num_answers', '__ans__', color='factor(anonymous)')) + theme(legend_position="left") + geom_point() + geom_smooth(aes(color='factor(anonymous)'), method='lm') + facet_wrap('~anonymous', nrow=1, scales='free') + theme_bw()) """ Explanation: Ok, let's see the scatter plot of num_answers and __ans__ (dependent variable). End of explanation """ def Corr_gNumAns(int_): # create a pd series with boolean values combined_df = data_df.num_answers.apply(lambda x: 1 if x > int_ else 0) return combined_df.corr(data_df.__ans__) Corr_gNumAns(0) s_df = pd.DataFrame() s_df['>=num_ans'] = data_df.num_answers s = data_df.num_answers.values # calculate correlation cor = [] for i in range(len(s)): cor.append(Corr_gNumAns(i)) # calculate correlation cor = [] for i in s: cor.append(Corr_gNumAns(i)) cor_df = pd.DataFrame(cor, columns=['cor_coef']) cor_df.tail() """ Explanation: This begs the question: which num_answers maximizes the correlation. 
End of explanation """ num_corcoef_df = pd.concat([s_df, cor_df, data_df['anonymous'], data_df['__ans__']], axis=1) num_corcoef_df.head() num_corcoef_df.iloc[2451] num_corcoef_df.iloc[2451] = num_corcoef_df.iloc[2452] (ggplot(num_corcoef_df, aes('>=num_ans', 'cor_coef', size='__ans__')) + geom_point(size=0.2) + geom_line(size=0.3) + theme_bw()) (ggplot(num_corcoef_df, aes('>=num_ans', 'cor_coef', size='__ans__', color='factor(anonymous)')) + geom_point(size=0.2) + geom_line(size=0.3) + theme_bw() + facet_wrap('~anonymous', scales='free')) """ Explanation: Since we have quite a few NaN, we will only plot the non-NaN ones. End of explanation """ num_corcoef_df.iloc[list(num_corcoef_df.cor_coef.values).index(max(list(num_corcoef_df.cor_coef.values)))] # there is probably a non-gory way of doing this """ Explanation: So where is the absolute max of cor_coef achieved? End of explanation """ data_df['num_ans>= 29'] = data_df['num_answers'].apply(lambda x: 1 if x>=29 else 0) data_df['num_ans>= 29'].describe() data_df['num_ans>= 29'].corr(data_df['__ans__']) """ Explanation: This tells us that setting a boolean column of whether the number of answers is more than 29 gives us a feature that has a correlation coefficient of 0.39 with our target label __ans__. New Feature: Questions with at least 29 Answers End of explanation """ data_df[data_df.question_text.str.contains(' story|Story|stories|Stories')][['question_text', '__ans__']].describe() # the space before story is in order to avoid mixing up of history """ Explanation: Only "stories" What about stories? One common theme I see among popular questions in Quora is questions about the best stories or one liners, etc. Let's see if questions with these terms have a more significant correlation than features we built earlier. End of explanation """ # Best stories data_df['qcontains_best_story'] = data_df.question_text.apply(lambda x: 1 if any(pd.Series(x).str.contains(" story|Story|stories|Stories")) else 0) # do a correlation data_df['qcontains_best_story'].corr(data_df['__ans__']) """ Explanation: There are 54 of these. End of explanation """ # Best one-liner # first let's get a hint of the keywords associated with "liners" data_df['qcontains_best_liner'] = data_df.question_text.apply(lambda x: 1 if any(pd.Series(x).str.contains("liners")) else 0) print(data_df['qcontains_best_liner'].corr(data_df['__ans__'])) data_df[data_df.question_text.str.contains('liners')][['question_text', '__ans__']] """ Explanation: But it's not promising. End of explanation """
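# Hedged sketch, not from the original analysis: a small reusable helper for the
# keyword experiments above. `data_df`, `question_text` and `__ans__` come from
# this notebook; the helper name and the `case`/`na` handling are my own choices.
def keyword_corr(keywords, target='__ans__', case=False):
    pattern = '|'.join(keywords)   # e.g. 'best|most|epic'
    flag = data_df.question_text.str.contains(pattern, case=case, na=False, regex=True).astype(int)
    return flag.corr(data_df[target])

keyword_corr(['best', 'most', 'epic']), keyword_corr([' story', 'Story', 'stories', 'Stories'])

"""
Explanation: To make the keyword experiments above easier to repeat, here is a small helper that builds the boolean "question contains any of these keywords" column and returns its correlation with __ans__ in one call. This is only a convenience sketch (it is not part of the original analysis); it assumes the data_df, question_text and __ans__ names used throughout this section.
End of explanation
"""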
pgmpy/pgmpy_notebook
notebooks/7. Parameterizing with Continuous Variables.ipynb
mit
from IPython.display import Image """ Explanation: Parameterizing with Continuous Variables End of explanation """ import numpy as np from scipy.special import beta # Two variable drichlet ditribution with alpha = (1,2) def drichlet_pdf(x, y): return (np.power(x, 1)*np.power(y, 2))/beta(x, y) from pgmpy.factors.continuous import ContinuousFactor drichlet_factor = ContinuousFactor(['x', 'y'], drichlet_pdf) drichlet_factor.scope(), drichlet_factor.assignment(5,6) """ Explanation: Continuous Factors Base Class for Continuous Factors Joint Gaussian Distributions Canonical Factors Linear Gaussian CPD In many situations, some variables are best modeled as taking values in some continuous space. Examples include variables such as position, velocity, temperature, and pressure. Clearly, we cannot use a table representation in this case. Nothing in the formulation of a Bayesian network requires that we restrict attention to discrete variables. The only requirement is that the CPD, $P(X | Y_1, Y_2, \cdots Y_n)$ represent, for every assignment of values $y_1 \in Val(Y_1), y_2 \in Val(Y_2), \cdots, y_n \in val(Y_n)$, a distribution over $X$. In this case, $X$ might be continuous, in which case the CPD would need to represent distributions over a continuum of values; we might also have $X$’s parents continuous, so that the CPD would also need to represent a continuum of different probability distributions. There exists implicit representations for CPDs of this type, allowing us to apply all the network machinery for the continuous case as well. Base Class for Continuous Factors This class will behave as a base class for the continuous factor representations. All the present and future factor classes will be derived from this base class. We need to specify the variable names and a pdf function to initialize this class. End of explanation """ def custom_pdf(x, y, z): return z*(np.power(x, 1)*np.power(y, 2))/beta(x, y) custom_factor = ContinuousFactor(['x', 'y', 'z'], custom_pdf) custom_factor.scope(), custom_factor.assignment(1, 2, 3) custom_factor.reduce([('y', 2)]) custom_factor.scope(), custom_factor.assignment(1, 3) from scipy.stats import multivariate_normal std_normal_pdf = lambda *x: multivariate_normal.pdf(x, [0, 0], [[1, 0], [0, 1]]) std_normal = ContinuousFactor(['x1', 'x2'], std_normal_pdf) std_normal.scope(), std_normal.assignment([1, 1]) std_normal.marginalize(['x2']) std_normal.scope(), std_normal.assignment(1) sn_pdf1 = lambda x: multivariate_normal.pdf([x], [0], [[1]]) sn_pdf2 = lambda x1,x2: multivariate_normal.pdf([x1, x2], [0, 0], [[1, 0], [0, 1]]) sn1 = ContinuousFactor(['x2'], sn_pdf1) sn2 = ContinuousFactor(['x1', 'x2'], sn_pdf2) sn3 = sn1 * sn2 sn4 = sn2 / sn1 sn3.assignment(0, 0), sn4.assignment(0, 0) """ Explanation: This class supports methods like marginalize, reduce, product and divide just like what we have with discrete classes. One caveat is that when there are a number of variables involved, these methods prove to be inefficient and hence we resort to certain Gaussian or some other approximations which are discussed later. End of explanation """ from pgmpy.factors.distributions import GaussianDistribution as JGD dis = JGD(['x1', 'x2', 'x3'], np.array([[1], [-3], [4]]), np.array([[4, 2, -2], [2, 5, -5], [-2, -5, 8]])) dis.variables dis.mean dis.covariance dis.pdf([0,0,0]) """ Explanation: The ContinuousFactor class also has a method discretize that takes a pgmpy Discretizer class as input. 
It will output a list of discrete probability masses or a Factor or TabularCPD object, depending upon the discretization method used. Although we do not have inbuilt discretization algorithms for multivariate distributions for now, users can always define their own Discretizer class by subclassing the pgmpy.BaseDiscretizer class.
Joint Gaussian Distributions
In its most common representation, a multivariate Gaussian distribution over $X_1 \cdots X_n$ is characterized by an n-dimensional mean vector $\mu$ and a symmetric $n \times n$ covariance matrix $\Sigma$. The density function is most commonly defined as
$$p(x) = \dfrac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}} \exp\left[-0.5 (x- \mu)^T \Sigma^{-1}(x- \mu)\right]$$
The class pgmpy.JointGaussianDistribution provides this representation. It is derived from the class pgmpy.ContinuousFactor. We need to specify the variable names, a mean vector and a covariance matrix for its initialization; it will automatically compute the pdf function given these parameters.
End of explanation
"""

dis1 = JGD(['x1', 'x2', 'x3'], np.array([[1], [-3], [4]]),
           np.array([[4, 2, -2], [2, 5, -5], [-2, -5, 8]]))
dis2 = JGD(['x3', 'x4'], [1, 2], [[2, 3], [5, 6]])
dis3 = dis1 * dis2
dis3.variables

dis3.mean

dis3.covariance

"""
Explanation: This class overrides the basic operation methods (marginalize, reduce, normalize, product and divide), as these operations are more efficient here than the ones in its parent class. Most of these operations involve a matrix inversion, which is $\mathcal{O}(n^3)$ with respect to the number of variables.
End of explanation
"""

from pgmpy.factors.continuous import CanonicalDistribution

phi1 = CanonicalDistribution(['x1', 'x2', 'x3'],
                             np.array([[1, -1, 0], [-1, 4, -2], [0, -2, 4]]),
                             np.array([[1], [4], [-1]]), -2)
phi2 = CanonicalDistribution(['x1', 'x2'], np.array([[3, -2], [-2, 4]]),
                             np.array([[5], [-1]]), 1)

phi3 = phi1 * phi2
phi3.variables

phi3.h

phi3.K

phi3.g

"""
Explanation: The other methods can also be used in a similar fashion.
Canonical Factors
While the Joint Gaussian representation is useful for certain sampling algorithms, a closer look reveals that it cannot be used directly in the sum-product algorithms. Why? Because operations like product and reduce, as mentioned above, involve matrix inversions at each step. So, in order to compactly describe the intermediate factors in a Gaussian network without the costly matrix inversions at each step, a simple parametric representation, known as the Canonical Factor, is used. This representation is closed under the basic operations used in inference: factor product, factor division, factor reduction, and marginalization. Thus, we can define a set of simple data structures that allow the inference process to be performed. Moreover, the integration operation required by marginalization is always well defined, and it is guaranteed to produce a finite integral under certain conditions; when it is well defined, it has a simple analytical solution.
A canonical form $C(X; K, h, g)$ is defined as:
$$C(X; K,h,g) = \exp(-0.5X^TKX + h^TX + g)$$
We can represent every Gaussian as a canonical form. 
Rewriting the joint Gaussian pdf, we obtain $N(\mu; \Sigma) = C(K, h, g)$ where:
$$K = \Sigma^{-1}$$
$$h = \Sigma^{-1} \mu$$
$$g = -0.5 \mu^T \Sigma^{-1} \mu - \log\left((2 \pi)^{n/2} |\Sigma|^{1/2}\right)$$
Similar to the JointGaussianDistribution class, the CanonicalFactor class is also derived from the ContinuousFactor class, but with its own implementations of the methods required for the sum-product algorithms, which are much more efficient than its parent class methods. Let us have a look at the API of a few methods in this class.
End of explanation
"""

phi = CanonicalDistribution(['x1', 'x2'], np.array([[3, -2], [-2, 4]]),
                            np.array([[5], [-1]]), 1)
jgd = phi.to_joint_gaussian()
jgd.variables

jgd.covariance

jgd.mean

"""
Explanation: This class also has a method, to_joint_gaussian, to convert the canonical representation back into the joint Gaussian distribution.
End of explanation
"""

# For P(Y| X1, X2, X3) = N(-2x1 + 3x2 + 7x3 + 0.2; 9.6)
from pgmpy.factors.continuous import LinearGaussianCPD
cpd = LinearGaussianCPD('Y', [0.2, -2, 3, 7], 9.6, ['X1', 'X2', 'X3'])
print(cpd)

"""
Explanation: Linear Gaussian CPD
A linear Gaussian conditional probability distribution is defined on a continuous variable whose parents are all also continuous. The mean of this variable is linearly dependent on the values of its parent variables, and the variance does not depend on the parents. For example,
$$P(Y \mid x_1, x_2, x_3) = N(\beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_0 ; \sigma^2)$$
More generally, let $Y$ be a linear Gaussian of its parents $X_1, \cdots, X_k$:
$$p(Y \mid x) = N(\beta_0 + \beta^T x ; \sigma^2)$$
Assume that $X_1, \cdots, X_k$ are jointly Gaussian with distribution $\mathcal{N}(\mu; \Sigma)$. Then the distribution of $Y$ is a normal distribution $p(Y)$ where:
$$\mu_Y = \beta_0 + \beta^T \mu$$
$$\sigma^2_Y = \sigma^2 + \beta^T \Sigma \beta$$
and the joint distribution over $\{X, Y\}$ is a normal distribution where:
$$Cov[X_i; Y] = \sum_{j=1}^{k} \beta_j \Sigma_{i,j}$$
For its representation pgmpy has a class named LinearGaussianCPD in the module pgmpy.factors.continuous. To instantiate an object of this class, one needs to provide a variable name, the value of the $\beta_0$ term, the variance, a list of the parent variable names and a list of the coefficient values of the linear equation (beta_vector); the list of parent variable names and the beta_vector are optional and default to None.
End of explanation
"""

from pgmpy.models import LinearGaussianBayesianNetwork

model = LinearGaussianBayesianNetwork([('x1', 'x2'), ('x2', 'x3')])
cpd1 = LinearGaussianCPD('x1', [1], 4)
cpd2 = LinearGaussianCPD('x2', [-5, 0.5], 4, ['x1'])
cpd3 = LinearGaussianCPD('x3', [4, -1], 3, ['x2'])

# This is a hack due to a bug in pgmpy (LinearGaussianCPD
# doesn't have `variables` attribute but `add_cpds` function
# wants to check that...)
cpd1.variables = [*cpd1.evidence, cpd1.variable]
cpd2.variables = [*cpd2.evidence, cpd2.variable]
cpd3.variables = [*cpd3.evidence, cpd3.variable]

model.add_cpds(cpd1, cpd2, cpd3)
jgd = model.to_joint_gaussian()
jgd.variables

jgd.mean

jgd.covariance

"""
Explanation: A Gaussian Bayesian network is defined as a network all of whose variables are continuous, and where all of the CPDs are linear Gaussians. These networks are of particular interest as they are an alternate form of representation of the Joint Gaussian distribution. These networks are implemented as the LinearGaussianBayesianNetwork class in the module pgmpy.models.continuous. 
This class is a subclass of the BayesianModel class in pgmpy.models and inherits most of its methods from it. It also has a special method, to_joint_gaussian, that returns an equivalent JointGaussianDistribution object for the model.
End of explanation
"""
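# Hedged sketch, not part of the original notebook: recover the joint mean of the
# x1 -> x2 -> x3 model above by hand, using mu_child = beta_0 + beta^T * mu_parents.
# Only numpy is assumed; the variable names below are my own.
import numpy as np

mu_x1 = 1.0                  # P(x1)      = N(1; 4)
mu_x2 = -5.0 + 0.5 * mu_x1   # P(x2 | x1) = N(-5 + 0.5 x1; 4)
mu_x3 = 4.0 - 1.0 * mu_x2    # P(x3 | x2) = N(4 - x2; 3)

print(np.array([mu_x1, mu_x2, mu_x3]))  # expected [1, -4.5, 8.5], which should match jgd.mean

"""
Explanation: As a quick sanity check (a sketch added here, not part of the original notebook), we can recover the joint mean of the x1 -> x2 -> x3 model by hand with the linear Gaussian formula $\mu_Y = \beta_0 + \beta^T \mu$: $\mu_{x_1} = 1$, $\mu_{x_2} = -5 + 0.5 \cdot 1 = -4.5$, $\mu_{x_3} = 4 - (-4.5) = 8.5$. This should agree with the mean returned by model.to_joint_gaussian() above.
End of explanation
"""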