repo_name (stringlengths: 6-77)
path (stringlengths: 8-215)
license (stringclasses: 15 values)
content (stringlengths: 335-154k)
tensorflow/agents
docs/tutorials/6_reinforce_tutorial.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2021 The TF-Agents Authors. End of explanation """ !sudo apt-get update !sudo apt-get install -y xvfb ffmpeg freeglut3-dev !pip install 'imageio==2.4.0' !pip install pyvirtualdisplay !pip install tf-agents[reverb] !pip install pyglet xvfbwrapper from __future__ import absolute_import from __future__ import division from __future__ import print_function import base64 import imageio import IPython import matplotlib.pyplot as plt import numpy as np import PIL.Image import pyvirtualdisplay import reverb import tensorflow as tf from tf_agents.agents.reinforce import reinforce_agent from tf_agents.drivers import py_driver from tf_agents.environments import suite_gym from tf_agents.environments import tf_py_environment from tf_agents.networks import actor_distribution_network from tf_agents.policies import py_tf_eager_policy from tf_agents.replay_buffers import reverb_replay_buffer from tf_agents.replay_buffers import reverb_utils from tf_agents.specs import tensor_spec from tf_agents.trajectories import trajectory from tf_agents.utils import common # Set up a virtual display for rendering OpenAI gym environments. display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start() """ Explanation: REINFORCE agent <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/agents/tutorials/6_reinforce_tutorial"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/6_reinforce_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/6_reinforce_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/6_reinforce_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Introduction This example shows how to train a REINFORCE agent on the Cartpole environment using the TF-Agents library, similar to the DQN tutorial. We will walk you through all the components in a Reinforcement Learning (RL) pipeline for training, evaluation and data collection. 
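As background (a standard textbook statement, not text from this notebook), REINFORCE performs gradient ascent on the expected return using the estimator

$$ \nabla_\theta J(\theta) \approx \sum_{t=0}^{T} G_t \, \nabla_\theta \log \pi_\theta(a_t \mid s_t), $$

where $\pi_\theta$ is the policy represented by the actor network and $G_t$ is the return collected from step $t$ onward. The `normalize_returns=True` option passed to the agent further below normalizes these returns before the update.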
Setup If you haven't installed the following dependencies, run: End of explanation """ env_name = "CartPole-v0" # @param {type:"string"} num_iterations = 250 # @param {type:"integer"} collect_episodes_per_iteration = 2 # @param {type:"integer"} replay_buffer_capacity = 2000 # @param {type:"integer"} fc_layer_params = (100,) learning_rate = 1e-3 # @param {type:"number"} log_interval = 25 # @param {type:"integer"} num_eval_episodes = 10 # @param {type:"integer"} eval_interval = 50 # @param {type:"integer"} """ Explanation: Hyperparameters End of explanation """ env = suite_gym.load(env_name) """ Explanation: Environment Environments in RL represent the task or problem that we are trying to solve. Standard environments can be easily created in TF-Agents using suites. We have different suites for loading environments from sources such as the OpenAI Gym, Atari, DM Control, etc., given a string environment name. Now let us load the CartPole environment from the OpenAI Gym suite. End of explanation """ #@test {"skip": true} env.reset() PIL.Image.fromarray(env.render()) """ Explanation: We can render this environment to see how it looks. A free-swinging pole is attached to a cart. The goal is to move the cart right or left in order to keep the pole pointing up. End of explanation """ print('Observation Spec:') print(env.time_step_spec().observation) print('Action Spec:') print(env.action_spec()) """ Explanation: The time_step = environment.step(action) statement takes action in the environment. The TimeStep tuple returned contains the environment's next observation and reward for that action. The time_step_spec() and action_spec() methods in the environment return the specifications (types, shapes, bounds) of the time_step and action respectively. End of explanation """ time_step = env.reset() print('Time step:') print(time_step) action = np.array(1, dtype=np.int32) next_time_step = env.step(action) print('Next time step:') print(next_time_step) """ Explanation: So, we see that observation is an array of 4 floats: the position and velocity of the cart, and the angular position and velocity of the pole. Since only two actions are possible (move left or move right), the action_spec is a scalar where 0 means "move left" and 1 means "move right." End of explanation """ train_py_env = suite_gym.load(env_name) eval_py_env = suite_gym.load(env_name) train_env = tf_py_environment.TFPyEnvironment(train_py_env) eval_env = tf_py_environment.TFPyEnvironment(eval_py_env) """ Explanation: Usually we create two environments: one for training and one for evaluation. Most environments are written in pure python, but they can be easily converted to TensorFlow using the TFPyEnvironment wrapper. The original environment's API uses numpy arrays, the TFPyEnvironment converts these to/from Tensors for you to more easily interact with TensorFlow policies and agents. End of explanation """ actor_net = actor_distribution_network.ActorDistributionNetwork( train_env.observation_spec(), train_env.action_spec(), fc_layer_params=fc_layer_params) """ Explanation: Agent The algorithm that we use to solve an RL problem is represented as an Agent. In addition to the REINFORCE agent, TF-Agents provides standard implementations of a variety of Agents such as DQN, DDPG, TD3, PPO and SAC. To create a REINFORCE Agent, we first need an Actor Network that can learn to predict the action given an observation from the environment. We can easily create an Actor Network using the specs of the observations and actions. 
We can specify the layers in the network which, in this example, is the fc_layer_params argument set to a tuple of ints representing the sizes of each hidden layer (see the Hyperparameters section above). End of explanation """ optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate) train_step_counter = tf.Variable(0) tf_agent = reinforce_agent.ReinforceAgent( train_env.time_step_spec(), train_env.action_spec(), actor_network=actor_net, optimizer=optimizer, normalize_returns=True, train_step_counter=train_step_counter) tf_agent.initialize() """ Explanation: We also need an optimizer to train the network we just created, and a train_step_counter variable to keep track of how many times the network was updated. End of explanation """ eval_policy = tf_agent.policy collect_policy = tf_agent.collect_policy """ Explanation: Policies In TF-Agents, policies represent the standard notion of policies in RL: given a time_step produce an action or a distribution over actions. The main method is policy_step = policy.action(time_step) where policy_step is a named tuple PolicyStep(action, state, info). The policy_step.action is the action to be applied to the environment, state represents the state for stateful (RNN) policies and info may contain auxiliary information such as log probabilities of the actions. Agents contain two policies: the main policy that is used for evaluation/deployment (agent.policy) and another policy that is used for data collection (agent.collect_policy). End of explanation """ #@test {"skip": true} def compute_avg_return(environment, policy, num_episodes=10): total_return = 0.0 for _ in range(num_episodes): time_step = environment.reset() episode_return = 0.0 while not time_step.is_last(): action_step = policy.action(time_step) time_step = environment.step(action_step.action) episode_return += time_step.reward total_return += episode_return avg_return = total_return / num_episodes return avg_return.numpy()[0] # Please also see the metrics module for standard implementations of different # metrics. """ Explanation: Metrics and Evaluation The most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows. End of explanation """ table_name = 'uniform_table' replay_buffer_signature = tensor_spec.from_spec( tf_agent.collect_data_spec) replay_buffer_signature = tensor_spec.add_outer_dim( replay_buffer_signature) table = reverb.Table( table_name, max_size=replay_buffer_capacity, sampler=reverb.selectors.Uniform(), remover=reverb.selectors.Fifo(), rate_limiter=reverb.rate_limiters.MinSize(1), signature=replay_buffer_signature) reverb_server = reverb.Server([table]) replay_buffer = reverb_replay_buffer.ReverbReplayBuffer( tf_agent.collect_data_spec, table_name=table_name, sequence_length=None, local_server=reverb_server) rb_observer = reverb_utils.ReverbAddEpisodeObserver( replay_buffer.py_client, table_name, replay_buffer_capacity ) """ Explanation: Replay Buffer In order to keep track of the data collected from the environment, we will use Reverb, an efficient, extensible, and easy-to-use replay system by Deepmind. It stores experience data when we collect trajectories and is consumed during training. This replay buffer is constructed using specs describing the tensors that are to be stored, which can be obtained from the agent using tf_agent.collect_data_spec. 
End of explanation """ #@test {"skip": true} def collect_episode(environment, policy, num_episodes): driver = py_driver.PyDriver( environment, py_tf_eager_policy.PyTFEagerPolicy( policy, use_tf_function=True), [rb_observer], max_episodes=num_episodes) initial_time_step = environment.reset() driver.run(initial_time_step) """ Explanation: For most agents, the collect_data_spec is a Trajectory named tuple containing the observation, action, reward etc. Data Collection As REINFORCE learns from whole episodes, we define a function to collect an episode using the given data collection policy and save the data (observations, actions, rewards etc.) as trajectories in the replay buffer. Here we are using 'PyDriver' to run the experience collecting loop. You can learn more about TF Agents driver in our drivers tutorial. End of explanation """ #@test {"skip": true} try: %%time except: pass # (Optional) Optimize by wrapping some of the code in a graph using TF function. tf_agent.train = common.function(tf_agent.train) # Reset the train step tf_agent.train_step_counter.assign(0) # Evaluate the agent's policy once before training. avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes) returns = [avg_return] for _ in range(num_iterations): # Collect a few episodes using collect_policy and save to the replay buffer. collect_episode( train_py_env, tf_agent.collect_policy, collect_episodes_per_iteration) # Use data from the buffer and update the agent's network. iterator = iter(replay_buffer.as_dataset(sample_batch_size=1)) trajectories, _ = next(iterator) train_loss = tf_agent.train(experience=trajectories) replay_buffer.clear() step = tf_agent.train_step_counter.numpy() if step % log_interval == 0: print('step = {0}: loss = {1}'.format(step, train_loss.loss)) if step % eval_interval == 0: avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes) print('step = {0}: Average Return = {1}'.format(step, avg_return)) returns.append(avg_return) """ Explanation: Training the agent The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing. The following will take ~3 minutes to run. End of explanation """ #@test {"skip": true} steps = range(0, num_iterations + 1, eval_interval) plt.plot(steps, returns) plt.ylabel('Average Return') plt.xlabel('Step') plt.ylim(top=250) """ Explanation: Visualization Plots We can plot return vs global steps to see the performance of our agent. In Cartpole-v0, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 200, the maximum possible return is also 200. End of explanation """ def embed_mp4(filename): """Embeds an mp4 file in the notebook.""" video = open(filename,'rb').read() b64 = base64.b64encode(video) tag = ''' <video width="640" height="480" controls> <source src="data:video/mp4;base64,{0}" type="video/mp4"> Your browser does not support the video tag. </video>'''.format(b64.decode()) return IPython.display.HTML(tag) """ Explanation: Videos It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab. 
End of explanation """ num_episodes = 3 video_filename = 'imageio.mp4' with imageio.get_writer(video_filename, fps=60) as video: for _ in range(num_episodes): time_step = eval_env.reset() video.append_data(eval_py_env.render()) while not time_step.is_last(): action_step = tf_agent.policy.action(time_step) time_step = eval_env.step(action_step.action) video.append_data(eval_py_env.render()) embed_mp4(video_filename) """ Explanation: The following code visualizes the agent's policy for a few episodes: End of explanation """
root-mirror/training
SoftwareCarpentry/08-rdataframe-features.ipynb
gpl-2.0
import ROOT df = ROOT.RDataFrame("dataset","data/example_file.root") df1 = df.Define("c","a+b") out_treename = "outtree" out_filename = "outtree.root" out_columns = ["a","b","c"] snapdf = df1.Snapshot(out_treename, out_filename, out_columns) """ Explanation: Save dataset to ROOT file after processing With RDataFrame, you can read your dataset, add new columns with processed values and finally use Snapshot to save the resulting data to a ROOT file in TTree format. End of explanation """ %%bash rootls -lt outtree.root """ Explanation: We can now check that the dataset was correctly stored in a file: End of explanation """ snapdf.Display().Print() """ Explanation: Result of a Snapshot is still an RDataFrame that can be further used: End of explanation """ df = ROOT.RDataFrame("sig_tree", "https://root.cern/files/Higgs_data.root") filter1 = df.Filter("lepton_eta > 0", "Lepton eta cut") filter2 = filter1.Filter("lepton_phi < 1", "Lepton phi cut") rep = df.Report() rep.Print() """ Explanation: Cutflow reports Filters applied to the dataset can be given a name. The Report method will gather information about filter efficiency and show the data flow between subsequent cuts on the original dataset. End of explanation """ %%cpp float asfloat(unsigned long long entrynumber){ return entrynumber; } """ Explanation: Using C++ functions in Python Since we still want to perform complex operations in Python but plain Python code is prone to be slow and not thread-safe, you can inject C++ functions doing the work in your event loop during runtime. This mechanism uses the C++ interpreter cling shipped with ROOT, making this possible in a single line of code. Let's start by defining a function that will allow us to change the type of a the RDataFrame dataset entry numbers (stored in the special column "rdfentry") from unsigned long long to float. End of explanation """ %%cpp float square(float val){ return val * val; } """ Explanation: Then let's define another function that takes a float values and computes its square. End of explanation """ # Create a new RDataFrame from scratch with 100 consecutive entries df = ROOT.RDataFrame(100) # Create a new column using the previously declared C++ functions df1 = df.Define("a", "asfloat(rdfentry_)") df2 = df1.Define("b", "square(a)") """ Explanation: And now let's use these functions with RDataFrame! 
We start by creating an empty RDataFrame with 100 consecutive entries and defining new columns on it: End of explanation """ # Show the two columns created in a graph c = ROOT.TCanvas() graph = df2.Graph("a","b") graph.SetMarkerStyle(20) graph.SetMarkerSize(0.5) graph.SetMarkerColor(ROOT.kBlue) graph.SetTitle("My graph") graph.Draw("AP") c.Draw() """ Explanation: We can now plot the values of the columns in a graph: End of explanation """ %%time # Get a first baseline measurement treename = "Events" filename = "root://eospublic.cern.ch//eos/opendata/cms/derived-data/AOD2NanoAODOutreachTool/Run2012BC_DoubleMuParked_Muons.root" df = ROOT.RDataFrame(treename, filename) df.Sum("nMuon").GetValue() %%time # Activate multithreading capabilities # By default takes all available cores on the machine ROOT.EnableImplicitMT() treename = "Events" filename = "root://eospublic.cern.ch//eos/opendata/cms/derived-data/AOD2NanoAODOutreachTool/Run2012BC_DoubleMuParked_Muons.root" df = ROOT.RDataFrame(treename, filename) df.Sum("nMuon").GetValue() # Disable implicit multithreading when done ROOT.DisableImplicitMT() """ Explanation: Using all cores of your machine with multi-threaded RDataFrame RDataFrame can transparently perform multi-threaded event loops to speed up the execution of its actions. Users have to call ROOT::EnableImplicitMT() before constructing the RDataFrame object to indicate that it should take advantage of a pool of worker threads. Each worker thread processes a distinct subset of entries, and their partial results are merged before returning the final values to the user. RDataFrame operations such as Histo1D or Snapshot are guaranteed to work correctly in multi-thread event loops. User-defined expressions, such as strings or lambdas passed to Filter, Define, Foreach, Reduce or Aggregate will have to be thread-safe, i.e. it should be possible to call them concurrently from different threads. End of explanation """
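One small aside, not part of the original tutorial: `ROOT.EnableImplicitMT` also accepts an optional thread count, which is useful on shared machines where you do not want to claim every core. A minimal sketch (the filename is a hypothetical local copy of the dataset used above):

```python
import ROOT

# Cap the implicit multithreading pool at 4 worker threads;
# calling EnableImplicitMT() with no argument (as above) uses all available cores.
ROOT.EnableImplicitMT(4)

df = ROOT.RDataFrame("Events", "Run2012BC_DoubleMuParked_Muons.root")  # hypothetical local file
print(df.Sum("nMuon").GetValue())

ROOT.DisableImplicitMT()
```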
HazyResearch/snorkel
tutorials/workshop/Workshop_7_Advanced_BRAT_Annotator.ipynb
apache-2.0
%load_ext autoreload %autoreload 2 %matplotlib inline import os import numpy as np # Connect to the database backend and initalize a Snorkel session from lib.init import * """ Explanation: Creating Gold Annotation Labels with BRAT This is a short tutorial on how to use BRAT (Brat Rapid Annotation Tool), an online environment for collaborative text annotation. http://brat.nlplab.org/ End of explanation """ Spouse = candidate_subclass('Spouse', ['person1', 'person2']) """ Explanation: Step 1: Define a Candidate Type End of explanation """ from snorkel.contrib.brat import BratAnnotator brat = BratAnnotator(session, Spouse, encoding='utf-8') """ Explanation: a) Select an example Candidate and Document Candidates are divided into 3 splits mapping to a unique integer id: - 0: training - 1: development - 2: testing In this tutorial, we'll load our training set candidates and create gold labels for a document using the BRAT interface Step 2: Launching BRAT BRAT runs as as seperate server application. When you first initialize this server, you need to provide your applications Candidate type. For this tutorial, we use the Spouse relation defined above, which consists of a pair of PERSON named entities connected by marriage. Currently, we only support 1 relation type per-application. End of explanation """ brat.init_collection("spouse/train", split=0) """ Explanation: a) Initialize our document collection BRAT creates a local copy of all the documents and annotations found in a split set. We initialize or document collection by passing in a set of candidates via the split id. Annotations are stored as plain text files in standoff format. <img align="left" src="imgs/brat-login.jpg" width="200px" style="margin-right:50px"> After launching the BRAT annotator for the first time, you will need to login to begin editing annotations. Navigate your mouse to the upper right-hand corner of the BRAT interface (see Fig. 1) click 'login' and enter the following information: login: brat password: brat Advanced BRAT users can setup multiple annotator accounts by adding USER/PASSWORD key pairs to the USER_PASSWORD dictionary found in snokel/contrib/brat/brat-v1.3_Crunchy_Frog/config.py. This is useful if you would like to keep track of multiple annotator judgements for later adjudication or use as labeling functions as per our tutorial on using Snorkel for Crowdsourcing. End of explanation """ brat.import_collection("data/brat-spouse.zip", overwrite=True) """ Explanation: We've already generated some BRAT annotations, so import and existing collection for purposes of this tutorial. End of explanation """ doc_name = '5ede8912-59c9-4ba9-93df-c58cebb542b7' doc = session.query(Document).filter(Document.name==doc_name).one() brat.view("spouse/train", doc) """ Explanation: b) Launch BRAT Interface in a New Window Once our collection is initialized, we can view specific documents for annotation. The default mode is to generate a HTML link to a new BRAT browser window. Click this link to connect to launch the annotator editor. End of explanation """ brat.view("spouse/train") """ Explanation: If you do not have a specific document to edit, you can optionally launch BRAT and use their file browser to navigate through all files found in the target collection. End of explanation """ train_cands = session.query(Candidate).filter(Candidate.split==0).all() """ Explanation: Step 3: Creating Gold Label Annotations a) Annotating Named Entities Spouse relations consist of 2 PERSON named entities. 
When annotating our validation documents, the first task is to identify our target entities. In this tutorial, we will annotate all PERSON mentions found in our example document, though for your application you may choose to only label those that particpate in a true relation. <img align="right" src="imgs/brat-anno-dialog.jpg" width="400px" style="margin-left:50px"> Begin by selecting and highlighting the text corresponding to a PERSON entity. Once highlighted, an annotation dialog will appear on your screen (see image of the BRAT Annotation Dialog Window to the right). If this is correct, click ok. Repeat this for every entity you find in the document. Annotation Guidelines When developing gold label annotations, you should always discuss and agree on a set of annotator guidelines to share with human labelers. These are the guidelines we used to label the Spouse relation: <span style="color:red">Do not</span> include formal titles associated with professional roles e.g., Pastor Jeff, Prime Minister Prayut Chan-O-Cha Do include English honorifics unrelated to a professional role, e.g., Mr. John Cleese. <span style="color:red">Do not</span> include family names/surnames that do not reference a single individual, e.g., the Duggar family. Do include informal titles, stage names, fictional characters, and nicknames, e.g., Dog the Bounty Hunter Include possessive's, e.g., Anna's. b) Annotating Relations To annotate Spouse relations, we look through all pairs of PERSON entities found within a single sentence. BRAT identifies the bounds of each sentence and renders a numbered row in the annotation window (see the left-most column in the image below). <img align="right" src="imgs/brat-relation.jpg" width="500px" style="margin-left:50px"> Annotating relations is done through simple drag and drop. Begin by clicking and holding on a single PERSON entity and then drag that entity to its corresponding spouse entity. That is it! Annotation Guidelines Restrict PERSON pairs to those found in the same sentence. The order of PERSON arguments does not matter in this application. <span style="color:red">Do not</span> include relations where a PERSON argument is wrong or otherwise incomplete. Step 4: Scoring Models using BRAT Labels a) Evaluating System Recall Creating gold validation data with BRAT is a critical evaluation step because it allows us to compute an estimate of our model's true recall. When we create labeled data over a candidate set created by Snorkel, we miss mentions of relations that our candidate extraction step misses. This causes us to overestimate the system's true recall. In the code below, we show how to map BRAT annotations to an existing set of Snorkel candidates and compute some associated metrics. End of explanation """ %time brat.import_gold_labels(session, "spouse/train", train_cands) """ Explanation: b) Mapping BRAT Annotations to Snorkel Candidates We annotated a single document using BRAT to illustrate the difference in scores when we factor in the effects of candidate generation. End of explanation """ test_cands = session.query(Spouse).filter(Spouse.split == 2).order_by(Spouse.id).all() from snorkel.learning.disc_models.rnn import reRNN lstm = reRNN(seed=1701, n_threads=None) lstm.load("spouse.lstm") marginals = lstm.marginals(test_cands) """ Explanation: Our candidate extractor only captures 7/14 (50%) of true mentions in this document. Our real system's recall is likely even worse, since we won't correctly predict the label for all true candidates. 
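To make the recall-correction idea concrete (this is a sketch of the concept, not necessarily the exact computation performed by `brat.score`): if the BRAT gold annotations contain mentions that the candidate extractor never proposed, those misses are counted as additional false negatives,

$$ \text{recall}_{corrected} = \frac{TP}{TP + FN + M}, $$

where $M$ is the number of gold mentions with no corresponding candidate. For the example document above, $M = 14 - 7 = 7$, so recall can be at most 50% regardless of how well the model labels the candidates it does see.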
c) Re-loading the Trained LSTM We'll load the LSTM model we trained in Workshop_4_Discriminative_Model_Training.ipynb and use to to predict marginals for our test candidates. End of explanation """ doc_ids = set(open("data/brat_test_docs.tsv","rb").read().splitlines()) cid_query = [c.id for c in test_cands if c.get_parent().document.name in doc_ids] brat.init_collection("spouse/test-subset", cid_query=cid_query) brat.view("spouse/test-subset") """ Explanation: d) Create a Subset of Test for Evaluation Our measures assume BRAT annotations are complete for the given set of documents! Rather than manually annotating the entire test set, we define a small subset of 10 test documents for hand lableing. We'll then compute the full, recall-corrected metrics for this subset. First, let's build a query to initalize this candidate collection. End of explanation """ import matplotlib.pyplot as plt plt.hist(marginals, bins=20) plt.show() from snorkel.annotations import load_gold_labels L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1, load_as_array=True, zero_one=True) L_gold_test = load_gold_labels(session, annotator_name='gold', split=2, zero_one=True) tp, fp, tn, fn = lstm.error_analysis(session, test_cands, L_gold_test) brat.score(session, test_cands, marginals, "spouse/test-subset") brat.score(session, test_cands, marginals, "spouse/test-subset", recall_correction=False) """ Explanation: e) Comparing Unadjusted vs. Adjusted Scores End of explanation """
changhoonhahn/centralMS
centralms/notebooks/notes_catalog.ipynb
mit
import numpy as np import catalog as Cat import matplotlib.pyplot as plt from ChangTools.plotting import prettycolors """ Explanation: notebook accompanying catalog.py Illustrates how to generate new subhalo accretion history catalogs End of explanation """ sig = 0.0 smf = 'li-march' nsnap0 = 20 subhist = Cat.SubhaloHistory(sigma_smhm=0., smf_source='li-march', nsnap_ancestor=20) subhist.Build() subhist._CheckHistory() """ Explanation: generate new subhalo and central subhalo catalogs End of explanation """ censub = Cat.PureCentralHistory(sigma_smhm=sig, smf_source=smf, nsnap_ancestor=nsnap0) censub.Build() censub.Downsample() """ Explanation: generate central subhalo accretion history catalogs End of explanation """
NEONScience/NEON-Data-Skills
tutorials-in-development/Python/neon_api/neon_api_05_taxonomy_py.ipynb
agpl-3.0
import requests import json #Choose values for each option SERVER = 'http://data.neonscience.org/api/v0/' FAMILY = 'Pinaceae' OFFSET = 11 LIMIT = 20 VERBOSE = 'false' #Create 'options' portion of API call OPTIONS = '?family={family}&offset={offset}&limit={limit}&verbose={verbose}'.format( family = FAMILY, offset = OFFSET, limit = LIMIT, verbose = VERBOSE) #Print out the completed options string. This is the query string that is appended to the endpoint URL in the taxonomy API call print(OPTIONS) #Make request pine_req = requests.get(SERVER+'taxonomy/'+OPTIONS) pine_json = pine_req.json() """ Explanation: syncID: title: "Querying Taxonomy Data with NEON API and Python" description: "Querying the 'taxonomy/' NEON API endpoint with Python and navigating the response" dateCreated: 2020-04-24 authors: Maxwell J. Burner contributors: Donal O'Leary estimatedTime: packagesLibraries: requests, json, pandas topics: api languagesTool: python dataProduct: code1: tutorialSeries: python-neon-api-series urlTitle: neon_api_taxonomy In this tutorial we will learn to query the taxonomy/ endpoint of the NEON API using Python. <div id="ds-objectives" markdown="1"> ### Objectives After completing this tutorial, you will be able to: * Query the taxonomy endpoint of the NEON API to obtain taxonomic data * Search NEON taxonomic data using different criteria * Use the various options of the taxonomy endpoint to customize the results of a call * Navigate the data returned by a call to the taxonomy endpoint of the NEON API * Navigate the parent-child relationships between NEON locations ### Install Python Packages * **requests** * **json** * **pandas** </div> In this tutorial we will learn to use Python and the taxonomy/ endpoint of the NEON API to query information from NEON's taxonomic data. NEON maintains a great deal of taxonomic data, used in species identification during field observations and laboratory processing of samples. NEON taxonomy data can be obtained through the API, or through an interactive interface called the Taxon Viewer. Just as the locations/ endpoint can provide more context for a location referenced in NEON studies, the taxonomy/ endpoint can provide additional information on species identified in NEON observational data. Making the Request Unlike other endpoints, the locations/ endpoint does not take a single target in its URL. Instead, the query can make use of a number of different options, which are specified in the URL string itself. Each option is assigned a value with an equals sign, for example 'family=Pineceae'; these are placed after a question mark '?' at the end of the endpoint URL, which signals a 'query string' will follow. Multiple query options are separated by an ampersand '&' in the URL string. 
Each call must have one of the following options, but cannot use multiple: * taxonTypeCode, a four-letter that indicates which NEON taxonomy is being queried, such as FISH or BIRD * One of the major taxonomic ranks from genus through kingdom * scientificName a specific name of format genus + specific epithet + (authority); this is used to search for an exact result In addition, any number of the following options can also be added to modify the results of the query: * verbose takes a 'true' for a more detailed response or 'false' for a shorter response * offset takes an integer indicating the number of starting rows of the list of results to skip; the default is 0 * limit takes an integer indicating the maximum length of the list returned; the default is 50 Let's request data on up to 20 members of the Pine family, skipping the first 11, with the short response. End of explanation """ #Print out values in the top level of the pine_json taxonomy dictionary, other than the 'data' entry. for key in pine_json.keys(): if(key != 'data'): print(key,':',pine_json[key]) """ Explanation: Navigating the Response Unlike most API call responses, the taxonomy JSON at the uppermost level has more elements that just 'data'. The other elements include: count- how many species were returned in this response total- how many species entries are available from NEON (if offset was zero and limit was infinity). prev- the API url that could get the 'previous' set of entries (if offset was not zero) matching the other parameters. next- the API url that could get the next set of entries (if limit was not infinity, and the limit parameter resulted in some entries being excluded). The prev and next urls could be used to effectively break up a larger API call into several segments; we ask for a smaller set than we actually want, then use the "next" url to get the next set of entries in a seperate call. End of explanation """ #Print data for one species sample = pine_json['data'][7] for key in sample.keys(): print("{:28}: {}".format(key, sample[key])) """ Explanation: Within the 'data' element is a list with entries for each taxa returned by the call. Each species entry is a dictionary with atttributes for: The full taxonomy, with a separate attribute for each taxonomic level The NEON taxonomy type the data was obtained from (taxonTypeCode) The short taxon code used by NEON (taxonID, acceptedTaxonID) The author of the scientific name The common/vernacular name, if any The reference text used (nameAccordingToID) End of explanation """ for species in pine_json['data']: print("{:19}| {}".format(species['dwc:vernacularName'], species['dwc:scientificName'])) """ Explanation: The "dwc" at the beginning of many atttribute names indicates that the terms used for each field are matched to those used by Darwin Core, an official standard maintained for biodiversity reference. The "gbif" refers to the Global Biodiversity Information Facility. We can also print vernacular names alongside the scientific names of each species entry. 
End of explanation """ #Set options SERVER = 'http://data.neonscience.org/api/v0/' TAXONCODE = 'FISH' OFFSET = 0 LIMIT = 20 VERBOSE = 'true' #Create 'options' portion of API call OPTIONS = '?taxonTypeCode={taxoncode}&offset={offset}&limit={limit}&verbose={verbose}'.format( taxoncode = TAXONCODE, offset = OFFSET, limit = LIMIT, verbose = VERBOSE) print(OPTIONS) #Make request fish_req = requests.get(SERVER+'taxonomy/'+OPTIONS) fish_json = fish_req.json() """ Explanation: Using Taxon Type Code Let's make another API call, using taxonTypeCode this time. We'll look through some of the NEON Fish Taxonomy, but try the verbose description. End of explanation """ #Print data for one species in the result sample = fish_json['data'][7] for key in sample.keys(): print("{:28}: {}".format(key, sample[key])) """ Explanation: Choose an arbitrary species and see what data its dictionary contains. End of explanation """ #Print common and scientific name for each fish for species in fish_json['data']: print(species['dwc:vernacularName'],'|', species['dwc:scientificName']) """ Explanation: This is a more verbose entry than what we've seen, so there are more attributes, though many lack values. The 'gbif' attributes indicate terms matched to those used by the Global Biodiversity Forum. End of explanation """ import pandas as pd #Establish target for API search SITECODE = 'TEAK' PRODUCTCODE = 'DP1.10003.001' #Get data on available files bird_request = requests.get(SERVER+'data/'+PRODUCTCODE+'/'+SITECODE+'/'+'2018-06') bird_json = bird_request.json() #Extract the URL for just the 'basic' package of the 'count' data, #and read that csv into a pandas data.frame falled 'bird_df' for file in bird_json['data']['files']: if('count' in file['name']): if('basic' in file['name']): bird_df = pd.read_csv(file['url']) #View all columns of the first 5 rows bird_df.head() """ Explanation: Finding a Specific Species Many NEON data products, such as the land bird breeding counts used in a previous tutorial, include species idetnification data in the form of species name. We can use the NEON taxonomy/ endpoint to search for a specific species mentioned in the NEON data. Let's look at the 2018-06 Lower Teakettle Bird Counts again, and get more detail on one of the observed species. End of explanation """ #Use pandas .unique method to see what species were observed bird_df['scientificName'].unique() """ Explanation: The unique method for Pandas series, which include individual columns of dataframes, returns the series with all duplicate values removed. End of explanation """ #Make request aedon_request = requests.get(SERVER+'taxonomy/'+'?scientificname=Troglodytes%20aedon') aedon_json = aedon_request.json() """ Explanation: More information on 'Troglodytes aedon' would be interesting. When using a scientific name in a taxonomy API call, which will be encoded as a URL, we replace any spaces in the name with '%20'; also, remember to capitalize the genus name, but not the species name. End of explanation """ #Print elements of JSON other than data for key in aedon_json.keys(): if(key != 'data'): print(key,':',aedon_json[key]) #Print elements of species dict in data list for key in aedon_json['data'][0].keys(): print(key,':',aedon_json['data'][0][key]) """ Explanation: Because only a single result was returned, count and total entries will be one, and there will be no urls for the previous or next batch of entries. It is important to note that the data element is still treated as a list; it is simply a list with only one element. 
End of explanation """
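As an aside on the '%20' encoding used above (not part of the original tutorial), the standard library can percent-encode the scientific name for you. A minimal sketch reusing the same endpoint and the top-level 'count' field:

```python
import requests
from urllib.parse import quote

SERVER = 'http://data.neonscience.org/api/v0/'
name = 'Troglodytes aedon'

# quote() replaces the space with '%20', matching the hand-built URL above
url = SERVER + 'taxonomy/?scientificname=' + quote(name)
resp = requests.get(url)
print(resp.json()['count'])
```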
D-K-E/cltk
notebooks/CLTK Demonstration.ipynb
mit
## Requires Python 3.7, 3.8, 3.9 on a POSIX-compliant OS ## The latest published beta: # !pip install cltk ## Or directly from this repo: # cd .. && make install # %load_ext autoreload # %autoreload 2 """ Explanation: Table of contents Introduction Install pre-release of CLTK Get data Run NLP pipeline with NLP() Inspect CLTK Doc Inspect CLTK Word Modeling morphology with MorphosyntacticFeature and MorphosyntacticFeatureBundle Modeling syntax with Form and DependencyTree Feature extraction Brief demonstration of NLP() for Ancient Greek Introduction <a name="introduction"></a> This notebook demonstrates how to use NLP(), the CLTK's primary interface, in Latin and Ancient Greek. Pipelines are available for 17 languages (see Languages in the docs). Full documentation available at https://docs.cltk.org/en/latest/cltk.html#cltk.nlp.NLP. Note that there is a large amount of code from this project's first six years (v. 0.1), not all of which has been or will be moved over to this v. 1.0. Docs for 0.1 are still available at https://legacy.cltk.org and tutorial notebooks at https://github.com/cltk/tutorials. Install CLTK <a name="install"></a> This notebook comes from https://github.com/cltk/cltk/tree/master/notebooks. Full instructions on installing the CLTK are available at https://docs.cltk.org/en/latest/installation.html. End of explanation """
End of explanation """ # For most users, this is the only import required from cltk import NLP # Load the default Pipeline for Latin cltk_nlp = NLP(language="lat") # Removing ``LatinLexiconProcess`` for this demo b/c it is slow (adds ~9 mins total) cltk_nlp.pipeline.processes.pop(-1) print(cltk_nlp.pipeline.processes) # Now execute NLP algorithms upon input text # Aside from download, execution time is ~50 sec on a 2015 Macbook Pro %time cltk_doc = cltk_nlp.analyze(text=livy) # You will be asked to download some models (from CLTK, fastText, and Stanza) """ Explanation: Run NLP pipeline with NLP() <a name="run-nlp"></a> End of explanation """ # We can now inspect the result print(type(cltk_doc)) # All accessors print([x for x in dir(cltk_doc) if not x.startswith("__")]) # Several of the more useful # List of tokens print(cltk_doc.tokens[:20]) # List of lemmas print(cltk_doc.lemmata[:20]) # Basic part-of-speech info print(cltk_doc.pos[:20]) # A list of list of tokens print(cltk_doc.sentences_tokens[:2]) """ Explanation: Inspect CLTK Doc <a name="inspect-doc"></a> End of explanation """ # One ``Word`` object for each token print(len(cltk_doc.words)) """ Explanation: Inspect CLTK Word <a name="inspect-word"></a> Most powerful, though, is the Doc.words accessor, which is a list of Word objects. These Word objects contain all information that was generated during the NLP pipeline End of explanation """ # Let's look at a non-trivial sentence from Book 1 print("Original:", cltk_doc.sentences_strings[6]) print("") print("Translation:", "Landing there, the Trojans, as men who, after their all but immeasurable wanderings, had nothing left but their swords and ships, were driving booty from the fields, when King Latinus and the Aborigines, who then occupied that country, rushed down from their city and their fields to repel with arms the violence of the invaders.") # source: http://www.perseus.tufts.edu/hopper/text?doc=Liv.+1+1+5&fromdoc=Perseus%3Atext%3A1999.02.0151 sentence_6 = cltk_doc.sentences[6] # type: List[Word] # Looking at one Word, 'concurrunt' ('they run together') a_word_concurrunt = sentence_6[40] print(a_word_concurrunt) """ Explanation: Users can go token-by-token via Doc.words or via the intermediary step of looping through sentences. End of explanation """ print("`Word.string`:", a_word_concurrunt.string) print("") # Part-of-speech is always be available at `.pos`. print("`Word.pos`:", a_word_concurrunt.pos) """ Explanation: In this word, you can see information for lexicography (.lemmata), semantics (.embedding), morphology (.pos, .features), syntax (.governor, .dependency_relation), plus other information most users would find helpful (.stop, .named_entity). Modeling morphology with MorphosyntacticFeature and MorphosyntacticFeatureBundle <a name="morph"></a> When a language's Pipeline builds each Word object, morphological information is stored at several accessors. Those of interest to most users are .pos and .features. End of explanation """ # type print("type(`Word.features`):", type(a_word_concurrunt.features)) print("") # str repr of `MorphosyntacticFeatureBundle` print("`Word.features`:", a_word_concurrunt.features) """ Explanation: The CLTK contains classes a specific class for the annotation types defined by v2 of the Universal Dependencies project. In the CLTK's codebase, these are located at cltk/morphology/universal_dependencies_features.py. 
For instance, a Latin verb requires a label for its https://universaldependencies.org/u/feat/all.html#al-u-feat/Mood (e.g., indicative), which the UD project defines as "a feature that expresses modality and subclassifies finite verb forms". Though morphological taggers may annnotate a verb's mood variously ("ind.", "indicative", "Indic", etc.), the CLTK maps the term into the following, standardized Mood. ``` python class Mood(MorphosyntacticFeature): """The mood of a verb. see https://universaldependencies.org/u/feat/Mood.html """ admirative = auto() conditional = auto() desiderative = auto() imperative = auto() indicative = auto() jussive = auto() necessitative = auto() optative = auto() potential = auto() purposive = auto() quotative = auto() subjunctive = auto() ``` Turning back to the the above example word, we can see such features at .features. End of explanation """ print("Mood:", a_word_concurrunt.features["Mood"]) # type: List[Mood] print("Number:", a_word_concurrunt.features["Number"]) # type: List[Number] print("Person:", a_word_concurrunt.features["Person"]) # type: List[Person] print("Tense:", a_word_concurrunt.features["Tense"]) # type: List[Tense] print("VerbForm:", a_word_concurrunt.features["VerbForm"]) # type: List[VerbForm] print("Voice:", a_word_concurrunt.features["Voice"]) # type: List[Voice] # Note: The values returned here are a list, though under normally only one # morphological form will be available """ Explanation: A user may inspect a MorphosyntacticFeatureBundle in a manner similar to a dict End of explanation """ a_mood_obj = a_word_concurrunt.features["Mood"][0] # see type print("type(a_mood_obj):", type(a_mood_obj)) print("") # See inheritance from enum import IntEnum print("Is `IntEnum`?", isinstance(a_mood_obj, IntEnum)) print("") # from cltk.morphology.morphosyntax import MorphosyntacticFeature print("`Mood` inherits from `MorphosyntacticFeature`?", isinstance(a_mood_obj, MorphosyntacticFeature)) # You can manipulate this object as any IntEnum plus a few extras print("`MorphosyntacticFeature` accessors:", [x for x in dir(a_mood_obj) if not x.startswith("__")]) print("") print("MorphosyntacticFeature.name:", a_mood_obj.name) # type: str # A stable int value is available, too, associated with this name print("MorphosyntacticFeature.value:", a_mood_obj.value) # type: int """ Explanation: Looking a bit closer at MorphosyntacticFeature, we can see how its data type inherits from the Python builtin IntEnu. 
End of explanation """ from cltk.morphology.morphosyntax import MorphosyntacticFeatureBundle from cltk.morphology.universal_dependencies_features import Mood, Number, Person, VerbForm, Voice latin_word_sim = "sim" mood = Mood.subjunctive voice = Voice.active person = Person.first number = Number.singular verb_form = VerbForm.finite latin_word_sim_bundle = MorphosyntacticFeatureBundle(mood, voice, person, number, verb_form) print(latin_word_sim_bundle) """ Explanation: Users can create their own MorphosyntacticFeature and MorphosyntacticFeatureBundle: End of explanation """ from cltk.core.data_types import Word print(Word(string="sim", features=latin_word_sim_bundle)) # For more on this or any other CLTK class, use `help()` # help(a_mood_obj) # help(MorphosyntacticFeatureBundle) # Note: Extra morphological info may be written in `str` type # to to the values at `.upos` and `.xpos` for languages using # Stanza project # Note: The particular annoations at these are often inconsistent across # languages or even treebanks within a single language; hence the benefit # of the CLTK's modeling at `.pos`. print("`Word.upos`:", a_word_concurrunt.upos) print("`Word.xpos`:", a_word_concurrunt.xpos) """ Explanation: Finally, we may even construct a Word with this information: End of explanation """ from cltk.dependency.tree import DependencyTree # Let's look at this sentence again print(cltk_doc.sentences_strings[6]) # text form of `sentence_6` a_tree = DependencyTree.to_tree(sentence_6) from pprint import pprint pprint(a_tree.get_dependencies()) a_tree.print_tree() """ Explanation: Modeling syntax with Form and DependencyTree <a name="syntax"></a> The CLTK uses the builtin xml library to build tree for modeling dependency parses. A Word is mapped into a Form, then ElemntTree is used to organize these Forms into a DependencyTree. With a tree, certain measurements are more efficient (counting depth, breadth, edge types). End of explanation """ from cltk.utils.feature_extraction import cltk_doc_to_features_table feature_names, list_of_list_features = cltk_doc_to_features_table(cltk_doc=cltk_doc) # See here the names of the features extracted print(feature_names) # Number of "inner lists" matches number of tokens print("Number tokens:", len(cltk_doc.words)) print("len() of feature instances (one for each token):", len(list_of_list_features)) # Look at one row of data `(variable name, variable value)` pprint(list(zip(feature_names, list_of_list_features[108]))) """ Explanation: Feature extraction <a name="features"></a> The CLTK offers the function cltk_doc_to_features_table(), which assist users when preparing a Doc for training data for machine learning. It converts the list of Word objects at Doc.words into a tabular list of lists. 
End of explanation """ # read the Ancient Greek file with open("grc-thucydides.txt") as fo: thucydides_full = fo.read() print("Text snippet:", thucydides_full[0:200]) print("Character count:", len(thucydides_full)) print("Approximate token count:", len(thucydides_full.split())) len(thucydides_full) // 7 # Cut this down to roughly 10k tokens for this demonstration's purposes thucydides = thucydides_full[:len(thucydides_full) // 7] print("Approximate token count:", len(thucydides.split())) thucydides[:200] cltk_nlp_grc = NLP(language="grc") # Execution time is 50 sec on a 2015 Macbook Pro %time cltk_doc_grc = cltk_nlp_grc.analyze(text=thucydides) # You will be asked to download some models (from CLTK, fastText, and Stanza) print("`Doc.tokens`:", cltk_doc_grc.tokens[:20]) print(cltk_doc_grc.words[4]) # πόλεμον ('war') a_tree_grc = DependencyTree.to_tree(cltk_doc_grc.sentences[0]) #81 pprint(a_tree_grc.get_dependencies()) print(cltk_doc_grc.sentences_strings[0]) print("") print("Translation:", "Thucydides, an Athenian, wrote the history of the war between the Peloponnesians and the Athenians, beginning at the moment that it broke out, and believing that it would be a great war, and more worthy of relation than any that had preceded it. This belief was not without its grounds. The preparations of both the combatants were in every department in the last state of perfection; and he could see the rest of the Hellenic race taking sides in the quarrel; those who delayed doing so at once having it in contemplation.") print("") a_tree_grc.print_tree() feature_names_grc, list_of_list_features_grc = cltk_doc_to_features_table(cltk_doc=cltk_doc_grc) print(feature_names_grc) print("len() of feature instances (one for each token):", len(list_of_list_features_grc)) print("") print("Example of one instance row:", list_of_list_features_grc[4]) # Putting these together for easier reading pprint(list(zip(feature_names_grc, list_of_list_features_grc[4]))) """ Explanation: Brief demonstration of NLP() for Ancient Greek <a name="greek-nlp"></a> The API for Greek is the same as Latin. End of explanation """
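One optional follow-up, not in the notebook itself: because `cltk_doc_to_features_table()` returns column names plus a list of row lists, the output drops straight into a tabular structure. A sketch assuming pandas is installed:

```python
import pandas as pd

# feature_names and list_of_list_features come from cltk_doc_to_features_table() above
features_df = pd.DataFrame(list_of_list_features, columns=feature_names)
print(features_df.shape)  # one row per token, one column per extracted feature
features_df.head()
```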
enbanuel/phys202-2015-work
days/day10/Interpolation.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np import seaborn as sns """ Explanation: Interpolation Learning Objective: Learn to interpolate 1d and 2d datasets of structured and unstructured points using SciPy. End of explanation """ x = np.linspace(0,4*np.pi,10) x """ Explanation: Overview We have already seen how to evaluate a Python function at a set of numerical points: $$ f(x) \rightarrow f_i = f(x_i) $$ Here is an array of points: End of explanation """ f = np.sin(x) f plt.plot(x, f, marker='o') plt.xlabel('x') plt.ylabel('f(x)'); """ Explanation: This creates a new array of points that are the values of $\sin(x_i)$ at each point $x_i$: End of explanation """ from scipy.interpolate import interp1d """ Explanation: This plot shows that the points in this numerical array are an approximation to the actual function as they don't have the function's value at all possible points. In this case we know the actual function ($\sin(x)$). What if we only know the value of the function at a limited set of points, and don't know the analytical form of the function itself? This is common when the data points come from a set of measurements. Interpolation is a numerical technique that enables you to construct an approximation of the actual function from a set of points: $$ {x_i,f_i} \rightarrow f(x) $$ It is important to note that unlike curve fitting or regression, interpolation doesn't not allow you to incorporate a statistical model into the approximation. Because of this, interpolation has limitations: It cannot accurately construct the function's approximation outside the limits of the original points. It cannot tell you the analytical form of the underlying function. Once you have performed interpolation you can: Evaluate the function at other points not in the original dataset. Use the function in other calculations that require an actual function. Compute numerical derivatives or integrals. Plot the approximate function on a finer grid that the original dataset. Warning: The different functions in SciPy work with a range of different 1d and 2d arrays. To help you keep all of that straight, I will use lowercase variables for 1d arrays (x, y) and uppercase variables (X,Y) for 2d arrays. 1d data We begin with a 1d interpolation example with regularly spaced data. The function we will use it interp1d: End of explanation """ x = np.linspace(0,4*np.pi,10) # only use 10 points to emphasize this is an approx f = np.sin(x) """ Explanation: Let's create the numerical data we will use to build our interpolation. End of explanation """ sin_approx = interp1d(x, f, kind='cubic') """ Explanation: To create our approximate function, we call interp1d as follows, with the numerical data. Options for the kind argument includes: linear: draw a straight line between initial points. nearest: return the value of the function of the nearest point. slinear, quadratic, cubic: use a spline (particular kinds of piecewise polynomial of a given order. The most common case you will want to use is cubic spline (try other options): End of explanation """ newx = np.linspace(0,4*np.pi,100) newf = sin_approx(newx) """ Explanation: The sin_approx variabl that interp1d returns is a callable object that can be used to compute the approximate function at other points. 
Compute the approximate function on a fine grid: End of explanation """ plt.plot(x, f, marker='o', linestyle='', label='original data') plt.plot(newx, newf, marker='.', label='interpolated'); plt.legend(); plt.xlabel('x') plt.ylabel('f(x)'); """ Explanation: Plot the original data points, along with the approximate interpolated values. It is quite amazing to see how the interpolation has done a good job of reconstructing the actual function with relatively few points. End of explanation """ plt.plot(newx, np.abs(np.sin(newx)-sin_approx(newx))) plt.xlabel('x') plt.ylabel('Absolute error'); """ Explanation: Let's look at the absolute error between the actual function and the approximate interpolated function: End of explanation """ x = 4*np.pi*np.random.rand(15) f = np.sin(x) sin_approx = interp1d(x, f, kind='cubic') # We have to be careful about not interpolating outside the range newx = np.linspace(np.min(x), np.max(x),100) newf = sin_approx(newx) plt.plot(x, f, marker='o', linestyle='', label='original data') plt.plot(newx, newf, marker='.', label='interpolated'); plt.legend(); plt.xlabel('x') plt.ylabel('f(x)'); plt.plot(newx, np.abs(np.sin(newx)-sin_approx(newx))) plt.xlabel('x') plt.ylabel('Absolute error'); """ Explanation: 1d non-regular data It is also possible to use interp1d when the x data is not regularly spaced. To show this, let's repeat the above analysis with randomly distributed data in the range $[0,4\pi]$. Everything else is the same. End of explanation """ from scipy.interpolate import interp2d """ Explanation: Notice how the absolute error is larger in the intervals where there are no points. 2d structured For the 2d case we want to construct a scalar function of two variables, given $$ {x_i, y_i, f_i} \rightarrow f(x,y) $$ For now, we will assume that the points ${x_i,y_i}$ are on a structured grid of points. This case is covered by the interp2d function: End of explanation """ def wave2d(x, y): return np.sin(2*np.pi*x)*np.sin(3*np.pi*y) """ Explanation: Here is the actual function we will use the generate our original dataset: End of explanation """ x = np.linspace(0.0, 1.0, 10) y = np.linspace(0.0, 1.0, 10) """ Explanation: Build 1d arrays to use as the structured grid: End of explanation """ X, Y = np.meshgrid(x, y) Z = wave2d(X, Y) """ Explanation: Build 2d arrays to use in computing the function on the grid points: End of explanation """ plt.pcolor(X, Y, Z) plt.colorbar(); plt.scatter(X, Y); plt.xlim(0,1) plt.ylim(0,1) plt.xlabel('x') plt.ylabel('y'); """ Explanation: Here is a scatter plot of the points overlayed with the value of the function at those points: End of explanation """ wave2d_approx = interp2d(X, Y, Z, kind='cubic') """ Explanation: You can see in this plot that the function is not smooth as we don't have its value on a fine grid. Now let's compute the interpolated function using interp2d. Notice how we are passing 2d arrays to this function: End of explanation """ xnew = np.linspace(0.0, 1.0, 40) ynew = np.linspace(0.0, 1.0, 40) Xnew, Ynew = np.meshgrid(xnew, ynew) # We will use these in the scatter plot below Fnew = wave2d_approx(xnew, ynew) # The interpolating function automatically creates the meshgrid! 
Fnew.shape """ Explanation: Compute the interpolated function on a fine grid: End of explanation """ plt.pcolor(xnew, ynew, Fnew); plt.colorbar(); plt.scatter(X, Y, label='original points') plt.scatter(Xnew, Ynew, marker='.', color='green', label='interpolated points') plt.xlim(0,1) plt.ylim(0,1) plt.xlabel('x') plt.ylabel('y'); plt.legend(bbox_to_anchor=(1.2, 1), loc=2, borderaxespad=0.); """ Explanation: Plot the original course grid of points, along with the interpolated function values on a fine grid: End of explanation """ from scipy.interpolate import griddata """ Explanation: Notice how the interpolated values (green points) are now smooth and continuous. The amazing thing is that the interpolation algorithm doesn't know anything about the actual function. It creates this nice approximation using only the original course grid (blue points). 2d unstructured It is also possible to perform interpolation when the original data is not on a regular grid. For this, we will use the griddata function: End of explanation """ x = np.random.rand(100) y = np.random.rand(100) """ Explanation: There is an important difference between griddata and the interp1d/interp2d: interp1d and interp2d return callable Python objects (functions). griddata returns the interpolated function evaluated on a finer grid. This means that you have to pass griddata an array that has the finer grid points to be used. Here is the course unstructured grid we will use: End of explanation """ f = wave2d(x, y) """ Explanation: Notice how we pass these 1d arrays to our function and don't use meshgrid: End of explanation """ plt.scatter(x, y); plt.xlim(0,1) plt.ylim(0,1) plt.xlabel('x') plt.ylabel('y'); """ Explanation: It is clear that our grid is very unstructured: End of explanation """ xnew = np.linspace(x.min(), x.max(), 40) ynew = np.linspace(y.min(), y.max(), 40) Xnew, Ynew = np.meshgrid(xnew, ynew) Xnew.shape, Ynew.shape Fnew = griddata((x,y), f, (Xnew, Ynew), method='cubic', fill_value=0.0) Fnew.shape plt.pcolor(Xnew, Ynew, Fnew, label="points") plt.colorbar() plt.scatter(x, y, label='original points') plt.scatter(Xnew, Ynew, marker='.', color='green', label='interpolated points') plt.xlim(0,1) plt.ylim(0,1) plt.xlabel('x') plt.ylabel('y'); plt.legend(bbox_to_anchor=(1.2, 1), loc=2, borderaxespad=0.); """ Explanation: To use griddata we need to compute the final (strcutured) grid we want to compute the interpolated function on: End of explanation """
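# Added check (a minimal sketch reusing wave2d, Xnew, Ynew and Fnew from above): since
# the true function is known here, we can estimate the interpolation error on the fine
# grid, just as we did for the 1d examples.
err = np.abs(wave2d(Xnew, Ynew) - Fnew)
print(np.max(err))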
qinjian623/dlnotes
cs231n/assignments/assignment1/knn.ipynb
gpl-3.0
# Run some setup code for this notebook. import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Some more magic so that the notebook will reload external python modules; # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 # Load the raw CIFAR-10 data. cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # As a sanity check, we print out the size of the training and test data. print 'Training data shape: ', X_train.shape print 'Training labels shape: ', y_train.shape print 'Test data shape: ', X_test.shape print 'Test labels shape: ', y_test.shape # Visualize some examples from the dataset. # We show a few examples of training images from each class. classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] num_classes = len(classes) samples_per_class = 7 for y, cls in enumerate(classes): idxs = np.flatnonzero(y_train == y) idxs = np.random.choice(idxs, samples_per_class, replace=False) for i, idx in enumerate(idxs): plt_idx = i * num_classes + y + 1 plt.subplot(samples_per_class, num_classes, plt_idx) plt.imshow(X_train[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls) plt.show() # Subsample the data for more efficient code execution in this exercise num_training = 5000 mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] num_test = 500 mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] X_train.shape # Reshape the image data into rows X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) print X_train.shape, X_test.shape from cs231n.classifiers import KNearestNeighbor # Create a kNN classifier instance. # Remember that training a kNN classifier is a noop: # the Classifier simply remembers the data and does no further processing classifier = KNearestNeighbor() classifier.train(X_train, y_train) """ Explanation: k-Nearest Neighbor (kNN) exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. The kNN classifier consists of two stages: During training, the classifier takes the training data and simply remembers it During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples The value of k is cross-validated In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code. End of explanation """ # Open cs231n/classifiers/k_nearest_neighbor.py and implement # compute_distances_two_loops. # Test your implementation: dists = classifier.compute_distances_two_loops(X_test) print dists.shape # We can visualize the distance matrix: each row is a single test example and # its distances to training examples plt.imshow(dists, interpolation='none') plt.show() """ Explanation: We would now like to classify the test data with the kNN classifier. 
Recall that we can break down this process into two steps: First we must compute the distances between all test examples and all train examples. Given these distances, for each test example we find the k nearest examples and have them vote for the label Lets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example. First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time. End of explanation """ # Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) """ Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.) What in the data is the cause behind the distinctly bright rows? What causes the columns? Your Answer: fill this in. End of explanation """ y_test_pred = classifier.predict_labels(dists, k=5) num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) """ Explanation: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5: End of explanation """ # Now lets speed up distance matrix computation by using partial vectorization # with one loop. Implement the function compute_distances_one_loop and run the # code below: dists_one = classifier.compute_distances_one_loop(X_test) # To ensure that our vectorized implementation is correct, we make sure that it # agrees with the naive implementation. There are many ways to decide whether # two matrices are similar; one of the simplest is the Frobenius norm. In case # you haven't seen it before, the Frobenius norm of two matrices is the square # root of the squared sum of differences of all elements; in other words, reshape # the matrices into vectors and compute the Euclidean distance between them. difference = np.linalg.norm(dists - dists_one, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Now implement the fully vectorized version inside compute_distances_no_loops # and run the code dists_two = classifier.compute_distances_no_loops(X_test) # check that the distance matrix agrees with the one we computed before: difference = np.linalg.norm(dists - dists_two, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Let's compare how fast the implementations are def time_function(f, *args): """ Call a function f with args and return the time (in seconds) that it took to execute. 
""" import time tic = time.time() f(*args) toc = time.time() return toc - tic two_loop_time = time_function(classifier.compute_distances_two_loops, X_test) print 'Two loop version took %f seconds' % two_loop_time one_loop_time = time_function(classifier.compute_distances_one_loop, X_test) print 'One loop version took %f seconds' % one_loop_time no_loop_time = time_function(classifier.compute_distances_no_loops, X_test) print 'No loop version took %f seconds' % no_loop_time # you should see significantly faster performance with the fully vectorized implementation """ Explanation: You should expect to see a slightly better performance than with k = 1. End of explanation """ num_folds = 5 k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100] X_train_folds = [] y_train_folds = [] ################################################################################ # TODO: # # Split up the training data into folds. After splitting, X_train_folds and # # y_train_folds should each be lists of length num_folds, where # # y_train_folds[i] is the label vector for the points in X_train_folds[i]. # # Hint: Look up the numpy array_split function. # ################################################################################ X_train_folds = np.array_split(X_train, num_folds) y_train_folds = np.array_split(y_train, num_folds) ################################################################################ # END OF YOUR CODE # ################################################################################ # A dictionary holding the accuracies for different values of k that we find # when running cross-validation. After running cross-validation, # k_to_accuracies[k] should be a list of length num_folds giving the different # accuracy values that we found when using that value of k. k_to_accuracies = {} ################################################################################ # TODO: # # Perform k-fold cross validation to find the best value of k. For each # # possible value of k, run the k-nearest-neighbor algorithm num_folds times, # # where in each case you use all but one of the folds as training data and the # # last fold as a validation set. Store the accuracies for all fold and all # # values of k in the k_to_accuracies dictionary. # ################################################################################ for kc in k_choices: k_to_accuracies[kc] = [] for val_idx in range(num_folds): XX = np.concatenate(X_train_folds[0:val_idx] + X_train_folds[val_idx+1: num_folds]) yy = np.concatenate(y_train_folds[0:val_idx] + y_train_folds[val_idx+1: num_folds]) classifier = KNearestNeighbor() classifier.train(XX, yy) # Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). 
y_test_pred = classifier.predict(X_train_folds[val_idx], k=kc) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_train_folds[val_idx]) accuracy = float(num_correct) / y_train_folds[val_idx].shape[0] k_to_accuracies[kc].append(accuracy) ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out the computed accuracies for k in sorted(k_to_accuracies): for accuracy in k_to_accuracies[k]: print 'k = %d, accuracy = %f' % (k, accuracy) # plot the raw observations for k in k_choices: accuracies = k_to_accuracies[k] plt.scatter([k] * len(accuracies), accuracies) # plot the trend line with error bars that correspond to standard deviation accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())]) accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())]) plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std) plt.title('Cross-validation on k') plt.xlabel('k') plt.ylabel('Cross-validation accuracy') plt.show() # Based on the cross-validation results above, choose the best value for k, # retrain the classifier using all the training data, and test it on the test # data. You should be able to get above 28% accuracy on the test data. best_k = 10 classifier = KNearestNeighbor() classifier.train(X_train, y_train) y_test_pred = classifier.predict(X_test, k=best_k) # Compute and display the accuracy num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) """ Explanation: Cross-validation We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. End of explanation """
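# Added sketch (not part of the original assignment files): one common way to obtain a
# fully vectorized distance matrix is the expansion ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2.
# Here Xte and Xtr are assumed to be (num_test, D) and (num_train, D) arrays.
def l2_distances_no_loops(Xte, Xtr):
    test_sq = np.sum(Xte ** 2, axis=1, keepdims=True)   # shape (num_test, 1)
    train_sq = np.sum(Xtr ** 2, axis=1)                  # shape (num_train,)
    cross = Xte.dot(Xtr.T)                               # shape (num_test, num_train)
    return np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0))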
sbu-python-summer/python-tutorial
day-4/matplotlib-exercises.ipynb
bsd-3-clause
import matplotlib.pyplot as plt import numpy as np %matplotlib inline """ Explanation: matplotlib exercises End of explanation """ a = np.array([0.39, 0.72, 1.00, 1.52, 5.20, 9.54, 19.22, 30.06, 39.48]) """ Explanation: Q1: planetary positions The distances of the planets from the Sun (technically, their semi-major axes) are: End of explanation """ P = np.array([0.24, 0.62, 1.00, 1.88, 11.86, 29.46, 84.01, 164.8, 248.09]) """ Explanation: These are in units where the Earth-Sun distance is 1 (astronomical units). The corresponding periods of their orbits (how long they take to go once around the Sun) are, in years End of explanation """ names = ["Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn", "Uranus", "Neptune", "Pluto"] """ Explanation: Finally, the names of the planets corresponding to these are: End of explanation """ f = open("shore_leave.txt", "r") for line in f: pass """ Explanation: (technically, pluto isn't a planet anymore, but we still love it :) Plot as points, the periods vs. distances for each planet on a log-log plot. Write the name of the planet next to the point for that planet on the plot Q2: drawing a circle For an angle $\theta$ in the range $\theta \in [0, 2\pi]$, the polar equations of a circle of radius $R$ are: $$ x = R\cos(\theta) $$ $$ y = R\sin(\theta) $$ We want to draw a circle. Create an array to hold the theta values&mdash;the more we use, the smoother the circle will be Create x and y arrays from theta for your choice of $R$ Plot y vs. x Now, look up the matplotlib fill() function, and draw a circle filled in with a solid color. Q3: Circles, circles, circles... Generalize your circle drawing commands to produce a function, draw_circle(x0, y0, R, color) that draws the circle. Here, (x0, y0) is the center of the circle, R is the radius, and color is the color of the circle. Now randomly draw 10 circles at different locations, with random radii, and random colors on the same plot. Q4: Climate Download the data file of global surface air temperature averages from here: https://raw.githubusercontent.com/sbu-python-summer/python-tutorial/master/day-4/nasa-giss.txt (this data comes from: https://data.giss.nasa.gov/gistemp/graphs/) There are 3 columns here: the year, the temperature change, and a smoothed representation of the temperature change. Read in this data using np.loadtxt(). Plot as a line the smoothed representation of the temperature changes. Plot as points the temperature change (no smoothing). Color the points blue if they are < 0 and color them red if they are >= 0 You might find the NumPy where() function useful. Q5: subplots matplotlib has a number of ways to create multiple axes in a figure -- look at plt.subplot() (http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.subplot) Create an x array using NumPy with a number of points, spanning from $[0, 2\pi]$. Create 3 axes vertically, and do the following: Define a new numpy array f initialized to a function of your choice. Plot f in the top axes Compute a numerical derivative of f, $$ f' = \frac{f_{i+1} - f_i}{\Delta x}$$ and plot this in the middle axes Do this again, this time on $f'$ to compute the second derivative and plot that in the bottom axes Q6: frequent words plotting In this exercise, we will read the file with the transcription of Star Trek TOS, Shore Leave and calculate the amount of time each word was found. We will then plot the 25 most frequent words and label the plot. 
6.1 Read the file and create the dictionary {'word':count} Open the shore_leave.txt file and create a dictionary of the form {'word':count}, where count is the number of times the word appears in the text. Remember to get rid of the punctuation ("." and ",") and to ensure that all words are lowercase.
End of explanation
"""
# your code here
"""
Explanation: 6.2 Plot the 25 most frequent words Plot a labelled bar chart of the 25 most frequent words with their frequencies.
End of explanation
"""
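# Added hint sketch (not part of the original exercise): it assumes you have built a
# {'word': count} dictionary, here called word_count, as described in 6.1.
def plot_top_words(word_count, n=25):
    top = sorted(word_count.items(), key=lambda kv: kv[1], reverse=True)[:n]
    words = [w for w, c in top]
    counts = [c for w, c in top]
    plt.bar(range(len(words)), counts)
    plt.xticks(range(len(words)), words, rotation=90)
    plt.ylabel('count')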
daviddesancho/MasterMSM
examples/brownian_dynamics_2D/2D_smFS_MSM.ipynb
gpl-2.0
%matplotlib inline %load_ext autoreload %autoreload 2 import time import itertools import h5py import numpy as np from scipy.stats import norm from scipy.stats import expon import matplotlib.pyplot as plt import matplotlib.cm as cm import seaborn as sns sns.set(style="ticks", color_codes=True, font_scale=1.5) sns.set_style({"xtick.direction": "in", "ytick.direction": "in"}) """ Explanation: MSM of Brownian dynamics simulations of diffusion on a 2D surface Here we analyze simulations on another simple mode system, but one that goes beyond one dimension. As always we start by importing some relevant libraries. End of explanation """ h5file = "data/cossio_kl1.3_Dx1_Dq1.h5" f = h5py.File(h5file, 'r') data = np.array(f['data']) f.close() """ Explanation: Here we upload the data obtained from Brownian Dynamics simulations of isotropic diffusion on a 2D potential. End of explanation """ fig, ax = plt.subplots(2,1,figsize=(12,3), sharex=True,sharey=False) ax[0].plot(data[:,0],data[:,1],'.', markersize=1) ax[1].plot(data[:,0],data[:,2],'g.', markersize=1) ax[0].set_ylim(-10,10) ax[1].set_xlim(0,25000) ax[0].set_ylabel('x') ax[1].set_ylabel('q') ax[1].set_xlabel('Time') plt.tight_layout(h_pad=0) fig, ax = plt.subplots(figsize=(6,4)) hist, bin_edges = np.histogram(data[:,1], bins=np.linspace(-7,7,20), \ density=True) bin_centers = [0.5*(bin_edges[i]+bin_edges[i+1]) \ for i in range(len(bin_edges)-1)] ax.plot(bin_centers, -np.log(hist),label="x") hist, bin_edges = np.histogram(data[:,2], bins=np.linspace(-7,7,20), \ density=True) bin_centers = [0.5*(bin_edges[i]+bin_edges[i+1]) \ for i in range(len(bin_edges)-1)] ax.plot(bin_centers, -np.log(hist),label="q") ax.set_xlim(-7,7) ax.set_ylim(1,9) #ax.set_xlabel('x') ax.set_ylabel('PMF ($k_BT$)') ax.legend() """ Explanation: Trajectory analysis End of explanation """ H, x_edges, y_edges = np.histogram2d(data[:,1],data[:,2], \ bins=[np.linspace(-7,7,20), np.linspace(-7,7,20)]) fig, ax = plt.subplots(figsize=(6,5)) pmf = -np.log(H.transpose()) pmf -= np.min(pmf) cs = ax.contourf(pmf, extent=[x_edges.min(), x_edges.max(), \ y_edges.min(), y_edges.max()], \ cmap=cm.rainbow, levels=np.arange(0, 6,0.5)) cbar = plt.colorbar(cs) ax.set_xlim(-7,7) ax.set_ylim(-7,7) ax.set_yticks(range(-5,6,5)) ax.set_xlabel('$x$', fontsize=20) ax.set_ylabel('$q$', fontsize=20) plt.tight_layout() """ Explanation: Representation of the bistable 2D free energy surface as a function of the measured q and molecular x extensions: End of explanation """ from scipy.stats import binned_statistic_2d statistic, x_edge, y_edge, binnumber = \ binned_statistic_2d(data[:,1],data[:,2],None,'count', \ bins=[np.linspace(-7,7,20), np.linspace(-7,7,20)]) fig, ax = plt.subplots(figsize=(6,5)) grid = ax.imshow(-np.log(statistic.transpose()),origin="lower",cmap=plt.cm.rainbow) cbar = plt.colorbar(grid) ax.set_yticks(range(0,20,5)) ax.set_xticks(range(0,20,5)) ax.set_xlabel('$x_{bin}$', fontsize=20) ax.set_ylabel('$q_{bin}$', fontsize=20) plt.tight_layout() fig,ax=plt.subplots(3,1,figsize=(12,6),sharex=True) plt.subplots_adjust(wspace=0, hspace=0) ax[0].plot(range(0,len(data[:,1])),data[:,1]) ax[1].plot(range(0,len(data[:,2])),data[:,2],color="g") ax[2].plot(binnumber) ax[0].set_ylabel('x') ax[1].set_ylabel('q') ax[2].set_ylabel("s") ax[2].set_xlabel("time (ps)") ax[2].set_xlim(0,2000) """ Explanation: Assignment Now we discretize the trajectory using the states obtained from partitioning the 2D free energy surface of the diffusion of the molecule. We first need to import the function that makes the grid. 
End of explanation """ from mastermsm.trajectory import traj from mastermsm.msm import msm distraj = traj.TimeSeries(distraj=list(binnumber), dt=1) distraj.find_keys() distraj.keys.sort() msm_2D = msm.SuperMSM([distraj]) """ Explanation: Master Equation Model End of explanation """ for i in [1, 2, 5, 10, 20, 50, 100]: msm_2D.do_msm(i) msm_2D.msms[i].do_trans(evecs=True) msm_2D.msms[i].boots() fig, ax = plt.subplots() for i in range(5): tau_vs_lagt = np.array([[x,msm_2D.msms[x].tauT[i],msm_2D.msms[x].tau_std[i]] \ for x in sorted(msm_2D.msms.keys())]) ax.errorbar(tau_vs_lagt[:,0],tau_vs_lagt[:,1],fmt='o-', yerr=tau_vs_lagt[:,2], markersize=10) #ax.fill_between(10**np.arange(-0.2,3,0.2), 1e-1, 10**np.arange(-0.2,3,0.2), facecolor='lightgray') ax.set_xlabel(r'$\Delta$t [ps]', fontsize=16) ax.set_ylabel(r'$\tau$ [ps]', fontsize=16) ax.set_xlim(0.8,200) ax.set_ylim(1,1000) ax.set_yscale('log') _ = ax.set_xscale('log') """ Explanation: Convergence Test End of explanation """ lt=2 plt.figure() plt.imshow(msm_2D.msms[lt].trans, interpolation='none', \ cmap='viridis_r',origin="lower") plt.ylabel('$\it{i}$') plt.xlabel('$\it{j}$') plt.colorbar() plt.figure() plt.imshow(np.log(msm_2D.msms[lt].trans), interpolation='none', \ cmap='viridis_r',origin="lower") plt.ylabel('$\it{i}$') plt.xlabel('$\it{j}$') plt.colorbar() fig, ax = plt.subplots() ax.errorbar(range(1,12),msm_2D.msms[lt].tauT[0:11], fmt='o-', \ yerr= msm_2D.msms[lt].tau_std[0:11], ms=10) ax.set_xlabel('Eigenvalue') ax.set_ylabel(r'$\tau_i$ [ns]') """ Explanation: There is no dependency of the relaxation times $\tau$ on the lag time $\Delta$t. Estimation End of explanation """ fig, ax = plt.subplots(figsize=(10,4)) ax.plot(msm_2D.msms[2].rvecsT[:,1]) ax.fill_between(range(len(msm_2D.msms[lt].rvecsT[:,1])), 0, \ msm_2D.msms[lt].rvecsT[:,1], \ where=msm_2D.msms[lt].rvecsT[:,1]>0,\ facecolor='c', interpolate=True,alpha=.4) ax.fill_between(range(len(msm_2D.msms[lt].rvecsT[:,1])), 0, \ msm_2D.msms[lt].rvecsT[:,1], \ where=msm_2D.msms[lt].rvecsT[:,1]<0,\ facecolor='g', interpolate=True,alpha=.4) ax.set_ylabel("$\Psi^R_1$") plt.show() """ Explanation: The first mode captured by $\lambda_1$ is significantly slower than the others. That mode, which is described by the right eigenvector $\psi^R_1$ as the transition of the protein between the folded and unfolded states. End of explanation """ fig,ax = plt.subplots(1,2,figsize=(10,5),sharey=True,sharex=True) rv_mat = np.zeros((20,20), float) for i in [x for x in zip(msm_2D.msms[lt].keep_keys, \ msm_2D.msms[lt].rvecsT[:,1])]: unr_ind=np.unravel_index(i[0],(21,21)) rv_mat[unr_ind[0]-1,unr_ind[1]-1] = -i[1] ax[0].imshow(rv_mat.transpose(), interpolation="none", \ cmap='bwr',origin="lower") ax[1].imshow(-np.log(statistic.transpose()), \ cmap=plt.cm.rainbow,origin="lower") ax[1].set_yticks(range(0,20,5)) ax[1].set_xticks(range(0,20,5)) """ Explanation: The projection of $\psi^R_1$ on the 2D grid shows the transitions between the two conformational states (red and blue). End of explanation """
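# Added cross-check (a sketch using only objects computed above; it assumes the leading
# eigenvalues of the transition matrix are real and positive): the relaxation times at
# lag time lt follow tau_i = -lt / ln(lambda_i).
evals = np.sort(np.real(np.linalg.eigvals(msm_2D.msms[lt].trans)))[::-1]
print(-lt / np.log(evals[1:6]))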
ML4DS/ML4all
R6.Gaussian_Processes/.ipynb_checkpoints/Bayesian_regression-checkpoint.ipynb
mit
# Import some libraries that will be necessary for working with data and displaying plots # To visualize plots in the notebook %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np import scipy.io # To read matlab files import pylab """ Explanation: Bayesian regression Notebook version: 1.0 (Oct 01, 2015) Author: Jerónimo Arenas García (jarenas@tsc.uc3m.es) Changes: v.1.0 - First version Pending changes: * Include regression on the stock data End of explanation """ n_points = 15 n_grid = 200 frec = 3 std_n = 0.2 n_val_16 = 5 degree = 12 X_tr = 3 * np.random.random((n_points,1)) - 0.5 S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1) X_grid = np.linspace(-.5,2.5,n_grid) S_grid = - np.cos(frec*X_grid) #Noise free for the true model X_16 = .3 * np.ones((n_val_16,)) S_16 = np.linspace(np.min(S_tr),np.max(S_tr),n_val_16) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) ax.plot(X_16,S_16,'ro',markersize=6) ax.plot(X_grid,S_grid,'r-',label='True model') for el in zip(X_16,S_16): #Add point to the training set X_tr_iter = np.append(X_tr,el[0]) S_tr_iter = np.append(S_tr,el[1]) #Obtain LS regression coefficients and evaluate it at X_grid w_LS = np.polyfit(X_tr_iter, S_tr_iter, degree) S_grid_iter = np.polyval(w_LS,X_grid) ax.plot(X_grid,S_grid_iter,'g-') ax.set_xlim(-.5,2.5) ax.set_ylim(S_16[0]-2,S_16[-1]+2) ax.legend(loc='best') """ Explanation: 3. Bayesian regression In the previous session we tackled the problem of fitting the following model using a LS criterion: $${\hat s}({\bf x}) = f({\bf x}) = {\bf w}^\top {\bf z}$$ where ${\bf z}$ is a vector with components which can be computed directly from the observed variables. Such model is includes a linear regression problem, where ${\bf z} = [1; {\bf x}]$, as well as any other non-linear model as long as it can be expressed as a <i>"linear in the parameters"</i> model. The LS solution was defined as the one minimizing the square of the residuals over the training set ${{\bf x}^{(k)}, s^{(k)}}_{k=1}^K$. As a result, a single parameter vector ${\bf w}^*$ was obtained, and correspondingly a single regression curve. In this session, rather than trying to obtain the best single model, we will work with a family of models or functions, and model the problem probabilistically, so that we can assign a probability value to each of the possible functions. 3.1 Maximum Likelihood estimation of the weights 3.1.1 Limitations of the LS approach. The need for assumptions Consider the same regression task of the previous session. 
We have a training dataset consisting of 15 points which are given, and depict the regression curves that would be obtained if adding an additional point at a fixed location, depending on the target value of that point: (You can run this code fragment several times, to check also the changes in the regression curves between executions, and depending also on the location of the training points) End of explanation """ n_points = 15 n_grid = 200 frec = 3 std_n = 0.2 degree = 12 nplots = 6 #Prior distribution parameters sigma_eps = 0.1 mean_w = np.zeros((degree+1,)) sigma_w = 0.3 var_w = sigma_w * np.eye(degree+1) X_tr = 3 * np.random.random((n_points,1)) - 0.5 S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1) X_grid = np.linspace(-.5,2.5,n_grid) S_grid = - np.cos(frec*X_grid) #Noise free for the true model fig = plt.figure() ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) for k in range(nplots): #Draw weigths fromt the prior distribution w_iter = np.random.multivariate_normal(mean_w,var_w) S_grid_iter = np.polyval(w_iter,X_grid) ax.plot(X_grid,S_grid_iter,'g-') ax.set_xlim(-.5,2.5) ax.set_ylim(S_16[0]-2,S_16[-1]+2) """ Explanation: You can control the degree of the polynomia, and check that when the degree is set to 15 (16 weights) all points will be fitted perfectly It seems obvious that we have not solved the problem ... * The regression curves overfit the training data * The regression curves change a lot when varying the label of just one pattern The key missing ingredient is assumptions !! Open questions Do we think that all models are equally probable... before we see any data? What does the term <i>model probability</i> mean? Do we need to choose a single "best" model or can we consider several simultaneously? Perhaps our training targets are contaminated with noise. What to do? We will start postulating a <i>generative model</i> for the training data that includes the presence of noise contaminating the targets, and work on this model to partly answer the other two questions. 3.1.2 Generative model Denoting by $f({\bf x}) = {{\bf w}}^\top {\bf z}$ the true function that we would like to obtain, we could assume that the observations in the training set are obtained as noisy values of the output of such function, i.e., $$s^{(k)} = f({\bf x}^{(k)}) + \varepsilon^{(k)}$$ We will further characterize the noise values as i.i.d. and normally distributed, with mean zero, and variance $\sigma_\varepsilon^2$, i.e., $$\varepsilon \sim {\cal N}\left(0, \sigma_\varepsilon^2\right)$$ 3.1.3 The maximum likelihood solution Joint distribution of the noise samples, ${\pmb \varepsilon} = \left[\varepsilon^{(1)}, \dots, \varepsilon^{(K)}\right]^\top$: $${\pmb \varepsilon} \sim \left( {\bf 0}, \sigma_{\varepsilon}^2 {\bf I}\right) \;\;\; p({\pmb \varepsilon}) = \left(\frac{1}{\sqrt{2\pi \sigma_{\varepsilon}^2}}\right)^K \exp\left(- \frac{{\pmb \varepsilon}^\top {\pmb \varepsilon}}{2 \sigma_{\varepsilon}^2}\right)$$ Denoting ${\bf s} = \left[s^{(1)}, \dots, s^{(K)} \right]^\top$ and ${\bf f} = \left[ f({\bf x}^{(1)}), \dots,f({\bf x}^{(K)})\right]^\top$, we have $${\bf s} = {\bf f} + {\pmb \varepsilon}$$ Conditioning on the values of the target function, ${\bf f}$, the pdf of the available targets is obtained as a shifted version of the distribution of the noise. 
More precisely:
\begin{align}p({\bf s}|{\bf f}) & = \left(\frac{1}{\sqrt{2\pi \sigma_{\varepsilon}^2}}\right)^K \exp\left(- \frac{\|{\bf s} - {\bf f}\|^2}{2 \sigma_{\varepsilon}^2}\right) \end{align}
For the particular parametric selection of $f({\bf x})$, ${\bf f} = {\bf Z} {\bf w}$, conditioning on ${\bf f}$ is equivalent to conditioning on ${\bf w}$, so that:
$$p({\bf s}|{\bf f}) = p({\bf s}|{\bf w}) = \left(\frac{1}{\sqrt{2\pi \sigma_{\varepsilon}^2}}\right)^K \exp\left(- \frac{\|{\bf s} - {\bf Z}{\bf w}\|^2}{2 \sigma_{\varepsilon}^2}\right)$$
The previous expression represents the probability of the observed targets given the weights, and is also known as the likelihood of the weights for a particular training set.
The <b>maximum likelihood</b> solution is then given by:
$${\bf w}_{ML} = \arg \max_{\bf w} p({\bf s}|{\bf w}) = \arg \min_{\bf w} \|{\bf s} - {\bf Z}{\bf w}\|^2$$
3.1.4 Multiple explanations of the data
With an additive Gaussian independent noise model, the maximum likelihood and the least squares solutions are the same. We have not improved much ...
However, we have already formulated the problem in a probabilistic way. This opens the door to reasoning in terms of a set of possible explanations, not just one.
We believe more than one of our models could have generated the data.
We do not believe all models are equally likely to have generated the data.
We may <b>believe</b> that a simpler model is more likely than a complex one.
3.2 Bayesian Inference
3.2.1 Posterior distribution of weights
If we express our <i>a priori</i> belief about the models using a prior distribution $p({\bf f})$, then we can infer the <i>a posteriori</i> distribution using Bayes' rule:
$$p({\bf f}|{\bf s}) = \frac{p({\bf s}|{\bf f})~p({\bf f})}{p({\bf s})}$$
In the previous expression:
* $p({\bf s}|{\bf f})$: is the likelihood function
* $p({\bf f})$: is the <i>prior</i> distribution of the models (assumptions are needed here)
* $p({\bf s})$: is the <i>marginal</i> distribution of the observed data, which could be obtained by integrating the numerator over all possible models. However, we normally do not need to explicitly compute $p({\bf s})$.
For the parametric model ${\bf f} = {\bf Z} {\bf w}$, the previous expressions become:
$$p({\bf w}|{\bf s}) = \frac{p({\bf s}|{\bf w})~p({\bf w})}{p({\bf s})}$$
Where:
* $p({\bf s}|{\bf w})$: is the likelihood function
* $p({\bf w})$: is the <i>prior</i> distribution of the weights (assumptions are needed here)
* $p({\bf s})$: is the <i>marginal</i> distribution of the observed data, which could be obtained by integrating the numerator over all possible weight vectors
3.2.2 Maximum likelihood vs Bayesian Inference. Making predictions
Following an <b>ML approach</b>, we retain a single model, ${\bf w}_{ML} = \arg \max_{\bf w} p({\bf s}|{\bf w})$. Then, the predictive distribution of the target value for a new point would be obtained as:
$$p(s^*|{\bf w}_{ML},{\bf x}^*)$$
For the generative model of Section 3.1.2 (additive i.i.d. Gaussian noise), this distribution is:
$$p(s^*|{\bf w}_{ML},{\bf x}^*) = \frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}_{ML}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$$
* The mean of $s^*$ is just the same as the prediction of the LS model, and the same uncertainty is assumed independently of the observation vector (i.e., the variance of the noise of the model).
* If a single value is to be kept, we would probably keep the mean of the distribution, which is equivalent to the LS prediction.
Using <b>Bayesian inference</b>, we retain all models. 
Then, the inference of the value $s^* = s({\bf x}^*)$ is carried out by mixing all models, according to the weights given by the posterior distribution.
\begin{align}p(s^*|{\bf x}^*,{\bf s}) & = \int p(s^*,{\bf w}~|~{\bf x}^*,{\bf s}) d{\bf w} \\
& = \int p(s^*~|~{\bf w},{\bf x}^*,{\bf s}) p({\bf w}~|~{\bf x}^*,{\bf s}) d{\bf w} \\
& = \int p(s^*~|~{\bf w},{\bf x}^*) p({\bf w}~|~{\bf s}) d{\bf w}\end{align}
where:
* $p(s^*|{\bf w},{\bf x}^*) = \displaystyle\frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$
* $p({\bf w}~|~{\bf s})$: is the posterior distribution of the weights, which can be computed using Bayes' Theorem.
3.2.3 Example: Selecting a Gaussian prior for the weights
Prior distribution of the weights
In this section, we consider a particular example in which we assume the following prior for the weights:
$${\bf w} \sim {\cal N}\left({\bf 0},{\pmb \Sigma}_{p} \right)$$
The following figure shows functions which are generated by drawing weight vectors from this distribution.
End of explanation
"""
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 12
nplots = 6

#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .3 * np.eye(degree+1)

X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-.5,2.5,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)

#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
    Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)

#Compute posterior distribution parameters
Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p))
posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()

for k in range(nplots):
    #Draw weights from the posterior distribution
    w_iter = np.random.multivariate_normal(posterior_mean,Sigma_w)
    #Note that polyval assumes the first element of the weight vector is the coefficient of
    #the highest degree term. Thus, we need to reverse w_iter
    S_grid_iter = np.polyval(w_iter[::-1],X_grid)
    ax.plot(X_grid,S_grid_iter,'g-')

#We plot also the least squares solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid,S_grid_iter,'m-',label='LS regression')

ax.set_xlim(-.5,2.5)
ax.set_ylim(S_16[0]-2,S_16[-1]+2)
ax.legend(loc='best')
"""
Explanation: Likelihood of the weights
According to the generative model, ${\bf s} = {\bf Z}{\bf w} + {\pmb \varepsilon}$
$${\bf s}~|~{\bf w} \sim {\cal N}\left({\bf Z}{\bf w},\sigma_\varepsilon^2 {\bf I} \right)$$
Posterior distribution of the weights
$$p({\bf w}|{\bf s}) = \frac{p({\bf s}|{\bf w})~p({\bf w})}{p({\bf s})}$$
Since both $p({\bf s}|{\bf w})$ and $p({\bf w})$ follow a Gaussian distribution, we know also that the joint distribution and the posterior distribution of ${\bf w}$ given ${\bf s}$ are also Gaussian. Therefore,
$${\bf w}~|~{\bf s} \sim {\cal N}\left({\bar{\bf w}} , {\pmb\Sigma}_{\bf w}\right)$$
where the mean and the covariance matrix of the distribution are to be determined. 
<b>Exercise:</b> Show that the posterior mean and posterior covariance matrix of ${\bf w}$ given ${\bf s}$ are:
$${\pmb\Sigma}_{\bf w} = \left[\frac{1}{\sigma_\varepsilon^2} {\bf Z}^{\top}{\bf Z} + {\pmb \Sigma}_p^{-1}\right]^{-1}$$
$${\bar{\bf w}} = {\sigma_\varepsilon^{-2}} {\pmb\Sigma}_{\bf w} {\bf Z}^\top {\bf s}$$
The following fragment of code draws random vectors from $p({\bf w}|{\bf s})$, and plots the corresponding regression curve along with the training points. Compare these curves with those extracted from the prior distribution of ${\bf w}$.
End of explanation
"""
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 12
nplots = 6

#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .5 * np.eye(degree+1)

X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-.5,2.5,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)

#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
    Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)

#Compute posterior distribution parameters
Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p))
posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()

#Plot the posterior mean
#Note that polyval assumes the first element of the weight vector is the coefficient of
#the highest degree term. Thus, we need to reverse posterior_mean
S_grid_iter = np.polyval(posterior_mean[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI')

#Plot confidence intervals for the Bayesian Inference
std_x = []
for el in X_grid:
    x_ast = np.array([el**k for k in range(degree+1)])
    std_x.append(np.sqrt(x_ast.dot(Sigma_w).dot(x_ast)[0,0]))
std_x = np.array(std_x)
plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x,
    alpha=0.2, edgecolor='#1B2ACC', facecolor='#089FFF',
    linewidth=4, linestyle='dashdot', antialiased=True)

#We plot also the least squares solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid,S_grid_iter,'m-',label='LS regression')

ax.set_xlim(-.5,2.5)
ax.set_ylim(S_16[0]-2,S_16[-1]+2)
ax.legend(loc='best')
"""
Explanation: Posterior distribution of the target
Since $f^* = f({\bf x}^*) = [{\bf x}^*]^\top {\bf w}$, $f^*$ is also a Gaussian variable whose posterior mean and variance can be calculated as follows:
$$\mathbb{E}\left\{{{\bf x}^*}^\top {\bf w}~|~{\bf s}, {\bf x}^*\right\} = {{\bf x}^*}^\top \mathbb{E}\left\{{\bf w}|{\bf s}\right\} = {\sigma_\varepsilon^{-2}} {{\bf x}^*}^\top {\pmb\Sigma}_{\bf w} {\bf Z}^\top {\bf s}$$
$$\text{Cov}\left[{{\bf x}^*}^\top {\bf w}~|~{\bf s}, {\bf x}^*\right] = {{\bf x}^*}^\top \text{Cov}\left[{\bf w}~|~{\bf s}\right] {{\bf x}^*} = {{\bf x}^*}^\top {\pmb \Sigma}_{\bf w} {{\bf x}^*}$$
Therefore, $f^*~|~{\bf s}, {\bf x}^* \sim {\cal N}\left({\sigma_\varepsilon^{-2}} {{\bf x}^*}^\top {\pmb\Sigma}_{\bf w} {\bf Z}^\top {\bf s}, {{\bf x}^*}^\top {\pmb \Sigma}_{\bf w} {{\bf x}^*} \right)$
Finally, for $s^* = f^* + \varepsilon^*$, the posterior distribution is $s^*~|~{\bf s}, {\bf x}^* \sim {\cal N}\left({\sigma_\varepsilon^{-2}} {{\bf x}^*}^\top {\pmb\Sigma}_{\bf w} {\bf Z}^\top {\bf s}, {{\bf x}^*}^\top {\pmb \Sigma}_{\bf w} {{\bf x}^*} + \sigma_\varepsilon^2\right)$
End of explanation
"""
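# Added sketch (not in the original notebook): posterior predictive mean and standard
# deviation at a single, hypothetical query point x_star, reusing the posterior
# parameters computed above (posterior_mean, Sigma_w, sigma_eps, degree).
x_star = 1.0
z_star = np.array([x_star**k for k in range(degree+1)])
f_mean = z_star.dot(posterior_mean)
f_var = z_star.dot(Sigma_w).dot(z_star)[0,0]
print('Predictive mean: %.3f, predictive std: %.3f' % (f_mean, np.sqrt(f_var + sigma_eps**2)))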
diging/methods
1.2 Change and difference/1.2.1 Linear model with OLS.ipynb
gpl-3.0
text_root = '../data/EmbryoProjectTexts/files' zotero_export_path = '../data/EmbryoProjectTexts' documents = nltk.corpus.PlaintextCorpusReader(text_root, 'https.+') metadata = zotero.read(zotero_export_path, index_by='link', follow_links=False) """ Explanation: 1.2. Change over time In computational humanities, we are often interested in whether and how phenomena change over time. In this notebook we will perform a simple time-series analysis of tokens in our corpus. This will get us moving toward analyizing more complex temporal trends. Load corpus and metadata Just as we did in the last notebook, we'll load our texts and metadata. End of explanation """ word_counts = nltk.FreqDist([normalize_token(token) for token in documents.words() if filter_token(token)]) """ Explanation: Has the prevalence of a token increased or decreased over time? In the last notebook, we looked at the distribution (frequency) of tokens over time. It is one thing to make a pretty figure; it is another to say with some degree of confidence that a token is becoming more or less prevalent over time. The simplest approach to this problem is a linear regression model: $Y_i = \beta_0 + \beta X_i + \epsilon_i$ where $Y$ is the response variable (frequency of a token), $X$ is the predictor variable (publication date), $\beta$ is the regression coefficient, and $\epsilon_i$ is the error for observation $i$. Up to now, we have discussed token frequency in terms of raw token counts. Since the number of texts per year may not be fixed, however, what we really want to model is the probability of a token. We don't have direct access to the probability of a token, but for most practical purposes the Maximum Likelihood Estimator for that probability is just the frequency $f(t) = \frac{N_{token}}{N_{total}}$ of the token. To get the Probability Distribution of a token, we first calculate the frequency distribution: End of explanation """ print 'N_e', word_counts['embryo'] print 'N', word_counts.N() """ Explanation: $N_{embryo}$ End of explanation """ word_counts.freq('embryo') """ Explanation: $f("embryo") = \frac{N_{embryo}}{N}$ End of explanation """ word_probs = nltk.MLEProbDist(word_counts) """ Explanation: ...and then we use NLTK's MLEProbDist (Maximum Likelihood Estimator) to obtain the probability distribution. End of explanation """ print word_probs.prob('embryo') # Probability of an observed token to be 'embryo'. """ Explanation: $p("embryo") ~= \hat{p}("embryo") = f("embryo") $ End of explanation """ word_counts_over_time = nltk.ConditionalFreqDist([ (metadata[fileid].date, normalize_token(token)) for fileid in documents.fileids() for token in documents.words(fileids=[fileid]) if filter_token(token) ]) embryo_counts = pd.DataFrame(columns=['Year', 'Count']) for i, (year, counts) in enumerate(word_counts_over_time.items()): embryo_counts.loc[i] = [year, counts['embryo']] embryo_counts plt.scatter(embryo_counts.Year, embryo_counts.Count) plt.ylabel('Word count') plt.xlabel('Year') plt.show() """ Explanation: Since we are interested in change over time, we need to generate a conditional probability distribution. 
Here is our conditional frequency distribution (as before): $N("embryo" \Bigm| year)$ End of explanation """ embryo_freq = pd.DataFrame(columns=['Year', 'Frequency']) for i, (year, counts) in enumerate(word_counts_over_time.items()): embryo_freq.loc[i] = [year, counts.freq('embryo')] plt.scatter(embryo_freq.Year, embryo_freq.Frequency) plt.ylabel('Word frequency') plt.xlabel('Year') plt.show() """ Explanation: $f("embryo" \Bigm| year)$ End of explanation """ word_probs_over_time = nltk.ConditionalProbDist(word_counts_over_time, nltk.MLEProbDist) embryo_prob = pd.DataFrame(columns=['Year', 'Probability']) for i, (year, probs) in enumerate(word_probs_over_time.items()): embryo_prob.loc[i] = [year, probs.prob('embryo')] plt.scatter(embryo_prob.Year, embryo_prob.Probability) plt.ylabel('Conditional word probability') plt.xlabel('Year') plt.show() print 'N(w|c=2016) =', word_counts_over_time[2016]['embryo'] print 'f(w|c=2016) =', word_counts_over_time[2016].freq('embryo') print '^p(w|c=2016) =', word_probs_over_time[2016].prob('embryo') """ Explanation: $\hat{p}("embryo" \Bigm| year)$ End of explanation """ chicken_data = pd.DataFrame(columns=['Year', 'Probability']) for i, (year, probs) in enumerate(word_probs_over_time.items()): chicken_data.loc[i] = [year, probs.prob('chicken')] chicken_data # Create a scatterplot. plt.scatter(chicken_data.Year, chicken_data.Probability) # Scale the Y axis. plt.ylim(chicken_data.Probability.min(), chicken_data.Probability.max()) # Scale the X axis. plt.xlim(chicken_data.Year.min(), chicken_data.Year.max()) plt.ylabel('$\\hat{p}(\'chicken\'|year)$') plt.show() # Render the figure. """ Explanation: Now we'll take a look at the token that we'd like to analyze. Let's try chicken. Here we get the probability for each year for the token chicken: End of explanation """ from scipy.stats import linregress Beta, Beta0, r, p, stde = linregress(chicken_data.Year, chicken_data.Probability) print '^Beta:', Beta print '^Beta_0:', Beta0 print 'r-squared:', r*r print 'p:', p plt.scatter(chicken_data.Year, chicken_data.Probability) plt.plot(chicken_data.Year, Beta0 + Beta*chicken_data.Year) # Array math! plt.ylim(chicken_data.Probability.min(), chicken_data.Probability.max()) plt.xlim(chicken_data.Year.min(), chicken_data.Year.max()) plt.ylabel('$\\hat{p}(\'chicken\'|year)$') plt.show() # Render the figure. """ Explanation: The SciPy package provides a Ordinary Least Squares linear regression function called linregress(). We can use that to estimate the model parameters from our data. End of explanation """ plt.hist(chicken_data.Probability) plt.show() """ Explanation: At first pass, our linear model looks like a remarkably good fit. Our r-squared value is not too bad, and we have a very low p value. The problem with interpreting that p-value, however, is that data derived from texts rarely satisfy the assumptions of the t-test used to assess significance. Aside from the fact that we have very few "observations", does the distribution of Y values (token probabilities) shown below look normally distributed to you? End of explanation """ import numpy as np # We can use underscores `_` for values that we don't want to keep. samples = pd.DataFrame(columns=['Beta_pi', 'Beta0_pi']) for i in xrange(1000): shuffled_probability = np.random.permutation(chicken_data.Probability) # linregress() returns five parameters; we only care about the first two. 
Beta_pi, Beta0_pi, _, _, _ = linregress(chicken_data.Year, shuffled_probability) samples.loc[i] = [Beta_pi, Beta0_pi] plt.figure(figsize=(10, 5)) plt.subplot(121) plt.hist(samples.Beta_pi) # Histogram of Beta values from permutations. plt.plot([Beta, Beta], [0, 200], # Beta from the observed data. lw=5, label='$\\hat{\\beta}$') # Plot the upper and lower bounds of the inner 95% probability. Beta_upper = np.percentile(samples.Beta_pi, 97.5) Beta_lower = np.percentile(samples.Beta_pi, 2.5) plt.plot([Beta_upper, Beta_upper], [0, 200], color='k', lw=2, label='$p = 0.05$') plt.plot([Beta_lower, Beta_lower], [0, 200], color='k', lw=2) plt.legend() plt.xlabel('$\\beta_{\\pi}$', fontsize=24) # Same procedure for Beta0. plt.subplot(122) plt.hist(samples.Beta0_pi) plt.plot([Beta0, Beta0], [0, 200], lw=5, label='$\\hat{\\beta_0}$') Beta0_upper = np.percentile(samples.Beta0_pi, 97.5) Beta0_lower = np.percentile(samples.Beta0_pi, 2.5) plt.plot([Beta0_upper, Beta0_upper], [0, 200], color='k', lw=2, label='$p = 0.05$') plt.plot([Beta0_lower, Beta0_lower], [0, 200], color='k', lw=2) plt.legend() plt.xlabel('$\\beta_{0\\pi}$', fontsize=24) plt.tight_layout() plt.show() """ Explanation: We need to use an hypothesis test that does not assume normality, and can handle the small sample size. One such approach is a permutation test. Our null hypothesis is that: $H_0: \beta = 0$ That is, that there is no change in the probability of our token (chicken, in this case) over time. We will shuffle our response variable, $Y$, a whole bunch of times. Each time we shuffle the data (a permutation), we will re-calculate the regression parameter $\beta_{\pi}$. We can reject $H_0$ with a confidence of $p = 0.05$ iff our observed $\hat{\beta}$ falls outside the inner 95% of the resampled distribution. End of explanation """
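# Added sketch: the permutation distribution stored in `samples` also yields an
# empirical two-sided p-value for the observed slope Beta, complementing the plot above.
p_perm = np.mean(np.abs(samples.Beta_pi) >= np.abs(Beta))
print 'Empirical permutation p-value for Beta:', p_perm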
cavestruz/MLPipeline
notebooks/clustering/ExampleGMM.ipynb
mit
gmms = [GMM(i).fit(X) for i in range(1,11)]
"""
Explanation: Fit X with a GMM for 1, 2, ..., 10 components. Hint: You should create 10 instances of a GMM model, e.g. GMM(?).fit(X) would be one instance of a GMM model with ? components.
End of explanation
"""
aics = [g.aic(X) for g in gmms]
bics = [g.bic(X) for g in gmms]
"""
Explanation: Calculate the AIC and BIC for each of these 10 models, and find the best model.
End of explanation
"""
plt.plot(aics)
plt.plot(bics)
"""
Explanation: Plot the AIC and the BIC.
End of explanation
"""
# Data x_i
x = np.linspace(-6,6,1000)
pdf = gmms[2].score_samples(x.reshape(-1,1))
"""
Explanation: Define your PDF by evenly distributing 1000 points in some range. Look up what the score_samples method of the model instance does (it was called eval in older scikit-learn versions), and evaluate it on your 1000 data points x. You should be able to extract a pdf, and the individual responsibilities for each of the components.
End of explanation
"""
plt.plot(np.linspace(-6,6,1000),np.exp(pdf[0]))
plt.hist(X,bins='auto',normed=True)
"""
Explanation: Plot the data X as a histogram, and the PDF values over your x_i values.
End of explanation
"""
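# Added sketch (not part of the original exercise solution): one common way to pick the
# "best" model is the one minimising BIC (or AIC). With the lists built above, the
# number of components is the list index plus one.
best_n = int(np.argmin(bics)) + 1
print("Lowest BIC is obtained with n_components =", best_n)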
BeatHubmann/17F-U-DLND
sentiment-analysis/Sentiment Analysis with TFLearn.ipynb
mit
import pandas as pd import numpy as np import tensorflow as tf import tflearn from tflearn.data_utils import to_categorical """ Explanation: Sentiment analysis with TFLearn In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you. We'll start off by importing all the modules we'll need, then load and prepare the data. End of explanation """ reviews = pd.read_csv('reviews.txt', header=None) labels = pd.read_csv('labels.txt', header=None) """ Explanation: Preparing the data Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this. Read the data Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way. End of explanation """ from collections import Counter total_counts = Counter() for idx, row in reviews.iterrows(): total_counts.update(row[0].split(' ')) print("Total words in data set: ", len(total_counts)) """ Explanation: Counting word frequency To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class. Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours. End of explanation """ vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000] print(vocab[:60]) """ Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words. End of explanation """ print(vocab[-1], ': ', total_counts[vocab[-1]]) """ Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words. End of explanation """ word2idx = dict(zip(list(vocab), range(len(vocab)))) """ Explanation: The last word in our vocabulary shows up 30 times in 25000 reviews. 
I think it's fair to say this is a tiny proportion. We are probably fine with this number of words. Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie. Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension. Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on. End of explanation """ def text_to_vector(text): word_vector = np.zeros(len(vocab), dtype=np.int) for word in text.split(' '): if word in word2idx: word_vector[word2idx[word]] += 1 return word_vector """ Explanation: Text to vector function Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this: Initialize the word vector with np.zeros, it should be the length of the vocabulary. Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here. For each word in that list, increment the element in the index associated with that word, which you get from word2idx. Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary. End of explanation """ text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] """ Explanation: If you do this right, the following code should return ``` text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]) ``` End of explanation """ word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_) for ii, (_, text) in enumerate(reviews.iterrows()): word_vectors[ii] = text_to_vector(text[0]) # Printing out the first 5 word vectors word_vectors[:5, :23] """ Explanation: Now, run through our entire review data set and convert each review to a word vector. End of explanation """ Y = (labels=='positive').astype(np.int_) records = len(labels) shuffle = np.arange(records) np.random.shuffle(shuffle) test_fraction = 0.9 train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):] trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2) testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2) trainY """ Explanation: Train, Validation, Test sets Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. 
Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later. End of explanation """ # Network building def build_model(): # This resets all parameters and variables, leave this here tf.reset_default_graph() #### Your code #### net = tflearn.input_data([None, len(vocab)]) # Input net = tflearn.fully_connected(net, 100, activation='ReLU') # Hidden 1 net = tflearn.fully_connected(net, 10, activation='ReLU') # Hidden 2 net = tflearn.fully_connected(net, 2, activation='softmax') # Output net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy') model = tflearn.DNN(net) return model """ Explanation: Building the network TFLearn lets you build the network by defining the layers. Input layer For the input layer, you just need to tell it how many units you have. For example, net = tflearn.input_data([None, 100]) would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size. The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units. Adding layers To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU') This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units). Output layer The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax. net = tflearn.fully_connected(net, 2, activation='softmax') Training To set how you train the network, use net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') Again, this is passing in the network you've been building. The keywords: optimizer sets the training method, here stochastic gradient descent learning_rate is the learning rate loss determines how the network error is calculated. In this example, with the categorical cross-entropy. Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like net = tflearn.input_data([None, 10]) # Input net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden net = tflearn.fully_connected(net, 2, activation='softmax') # Output net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') model = tflearn.DNN(net) Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc. 
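If you would rather experiment with different shapes without editing the function body each time, one possible variation — a sketch only, with an arbitrary helper name and arbitrary default sizes, not the course's reference solution — is to expose the hidden-layer sizes and learning rate as arguments:

# Hypothetical helper, not part of the original notebook.
def build_model_params(hidden_units=(100, 10), learning_rate=0.01):
    tf.reset_default_graph()                      # clear any previously built graph
    net = tflearn.input_data([None, len(vocab)])  # one input unit per vocabulary word
    for n_units in hidden_units:                  # add each hidden layer in turn
        net = tflearn.fully_connected(net, n_units, activation='ReLU')
    net = tflearn.fully_connected(net, 2, activation='softmax')  # two output classes
    net = tflearn.regression(net, optimizer='sgd', learning_rate=learning_rate,
                             loss='categorical_crossentropy')
    return tflearn.DNN(net)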
End of explanation """ model = build_model() """ Explanation: Intializing the model Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want. Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon. End of explanation """ # Training model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10) """ Explanation: Training the network Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors. You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network. End of explanation """ predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_) test_accuracy = np.mean(predictions == testY[:,0], axis=0) print("Test accuracy: ", test_accuracy) """ Explanation: Testing After you're satisified with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters. End of explanation """ # Helper function that uses your model to predict sentiment def test_sentence(sentence): positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1] print('Sentence: {}'.format(sentence)) print('P(positive) = {:.3f} :'.format(positive_prob), 'Positive' if positive_prob > 0.5 else 'Negative') sentence = "Moonlight is by far the best movie of 2016." test_sentence(sentence) sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful" test_sentence(sentence) """ Explanation: Try out your own text! End of explanation """
deimagjas/qubits.cloud.AI
dlnd-your-first-neural-network.ipynb
mit
%matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt """ Explanation: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. End of explanation """ data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() """ Explanation: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! End of explanation """ rides[:24*10].plot(x='dteday', y='cnt') """ Explanation: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. End of explanation """ dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() """ Explanation: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). End of explanation """ quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std """ Explanation: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. 
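As a quick illustration of how the saved scalings are used later (the prediction code near the end of this notebook applies the same inversion), you can convert standardized values back to original units like this:

# Undo the standardization for the 'cnt' target using the stored mean and std.
mean, std = scaled_features['cnt']
standardized = data['cnt'][:5]               # a few standardized values
original_units = standardized * std + mean   # invert (x - mean) / std
print(original_units)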
End of explanation """ # Save the last 21 days test_data = data[-21*24:] data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] """ Explanation: Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. End of explanation """ # Hold out the last 60 days of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] """ Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). End of explanation """ class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.input_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, (self.output_nodes, self.hidden_nodes)) self.lr = learning_rate #### Set this to your implemented sigmoid function #### # Activation function is the sigmoid function self.activation_function = lambda x: 1./(1.+np.exp(-x)) def train(self, inputs_list, targets_list): # Convert inputs list to 2d array inputs = np.array(inputs_list, ndmin=2).T targets = np.array(targets_list, ndmin=2).T #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer hidden_inputs = np.dot(self.weights_input_to_hidden,inputs) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs)# signals into final output layer final_outputs = final_inputs #self.activation_function(final_inputs)# signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error # with help of: https://nd101.slack.com/archives/project-1/p1486732677020232?thread_ts=1486731562.020134&cid=C3QVC209L output_errors = targets-final_outputs del_err_output = output_errors # TODO: Backpropagated error del_err_hidden = np.dot(self.weights_hidden_to_output.T, del_err_output) * hidden_outputs * (1 - hidden_outputs) # TODO: Update the weights self.weights_hidden_to_output += self.lr * np.dot(del_err_output, hidden_outputs.T) self.weights_input_to_hidden += self.lr * np.dot(del_err_hidden, inputs.T) def run(self, inputs_list): # Run a forward pass through the network inputs = np.array(inputs_list, ndmin=2).T #### Implement the forward pass here #### # TODO: Hidden layer hidden_inputs = np.dot(self.weights_input_to_hidden,inputs)# signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer # TODO: Output layer final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs)# signals into final output layer final_outputs = 
final_inputs #self.activation_function(final_inputs)# signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2) """ Explanation: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method. End of explanation """ import sys ### Set the hyperparameters here ### epochs = 6000#2700 learning_rate = 0.01 #0.03 hidden_nodes = 18 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for e in range(epochs): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) for record, target in zip(train_features.ix[batch].values, train_targets.ix[batch]['cnt']): network.train(record, target) # Printing out the training progress train_loss = MSE(network.run(train_features), train_targets['cnt'].values) val_loss = MSE(network.run(val_features), val_targets['cnt'].values) sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() plt.ylim(ymax=0.5) """ Explanation: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. 
That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. End of explanation """ fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features)*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) """ Explanation: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. 
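Beyond eyeballing the plot, a quick numeric check — a sketch that reuses the MSE helper defined alongside the network above — is to compute the loss on the held-out test set and compare it with the training and validation losses recorded during training:

# Mean squared error on the (still standardized) test targets.
test_loss = MSE(network.run(test_features), test_targets['cnt'].values)
print("Test loss:", test_loss)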
End of explanation """ import unittest inputs = [0.5, -0.2, 0.1] targets = [0.4] test_w_i_h = np.array([[0.1, 0.4, -0.3], [-0.2, 0.5, 0.2]]) test_w_h_o = np.array([[0.3, -0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328, -0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, 0.39775194, -0.29887597], [-0.20185996, 0.50074398, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) """ Explanation: Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter Your answer below How well does the model predict the data? The shape of the curves are very similarly between predicted and real data. Nevertheless, for this model there are things that are unpredictable no matter the number of neurons or epocs. For example, the most remote predictions are near to holydays, in that days they arise some variables like the family arrives, the climate might change and other unpredictable situations that the model does not take in account. For that reason I think the model fails where it does. Unit tests Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. End of explanation """
mne-tools/mne-tools.github.io
0.23/_downloads/91078106f2c04f1e09c01a2fa07e9d27/10_raw_overview.ipynb
bsd-3-clause
import os import numpy as np import matplotlib.pyplot as plt import mne """ Explanation: The Raw data structure: continuous data This tutorial covers the basics of working with raw EEG/MEG data in Python. It introduces the :class:~mne.io.Raw data structure in detail, including how to load, query, subselect, export, and plot data from a :class:~mne.io.Raw object. For more info on visualization of :class:~mne.io.Raw objects, see tut-visualize-raw. For info on creating a :class:~mne.io.Raw object from simulated data in a :class:NumPy array &lt;numpy.ndarray&gt;, see tut_creating_data_structures. As usual we'll start by importing the modules we need: End of explanation """ sample_data_folder = mne.datasets.sample.data_path() sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis_raw.fif') raw = mne.io.read_raw_fif(sample_data_raw_file) """ Explanation: Loading continuous data .. sidebar:: Datasets in MNE-Python There are ``data_path`` functions for several example datasets in MNE-Python (e.g., :func:`mne.datasets.kiloword.data_path`, :func:`mne.datasets.spm_face.data_path`, etc). All of them will check the default download location first to see if the dataset is already on your computer, and only download it if necessary. The default download location is also configurable; see the documentation of any of the ``data_path`` functions for more information. As mentioned in the introductory tutorial &lt;tut-overview&gt;, MNE-Python data structures are based around the :file:.fif file format from Neuromag. This tutorial uses an example dataset &lt;sample-dataset&gt; in :file:.fif format, so here we'll use the function :func:mne.io.read_raw_fif to load the raw data; there are reader functions for a wide variety of other data formats &lt;data-formats&gt; as well. There are also several other example datasets &lt;datasets&gt; that can be downloaded with just a few lines of code. Functions for downloading example datasets are in the :mod:mne.datasets submodule; here we'll use :func:mne.datasets.sample.data_path to download the "sample-dataset" dataset, which contains EEG, MEG, and structural MRI data from one subject performing an audiovisual experiment. When it's done downloading, :func:~mne.datasets.sample.data_path will return the folder location where it put the files; you can navigate there with your file browser if you want to examine the files yourself. Once we have the file path, we can load the data with :func:~mne.io.read_raw_fif. This will return a :class:~mne.io.Raw object, which we'll store in a variable called raw. End of explanation """ print(raw) """ Explanation: As you can see above, :func:~mne.io.read_raw_fif automatically displays some information about the file it's loading. For example, here it tells us that there are three "projection items" in the file along with the recorded data; those are :term:SSP projectors &lt;projector&gt; calculated to remove environmental noise from the MEG signals, and are discussed in a the tutorial tut-projectors-background. In addition to the information displayed during loading, you can get a glimpse of the basic details of a :class:~mne.io.Raw object by printing it: End of explanation """ raw.crop(tmax=60) """ Explanation: By default, the :samp:mne.io.read_raw_{*} family of functions will not load the data into memory (instead the data on disk are memory-mapped_, meaning the data are only read from disk as-needed). 
Some operations (such as filtering) require that the data be copied into RAM; to do that we could have passed the preload=True parameter to :func:~mne.io.read_raw_fif, but we can also copy the data into RAM at any time using the :meth:~mne.io.Raw.load_data method. However, since this particular tutorial doesn't do any serious analysis of the data, we'll first :meth:~mne.io.Raw.crop the :class:~mne.io.Raw object to 60 seconds so it uses less memory and runs more smoothly on our documentation server. End of explanation """ n_time_samps = raw.n_times time_secs = raw.times ch_names = raw.ch_names n_chan = len(ch_names) # note: there is no raw.n_channels attribute print('the (cropped) sample data object has {} time samples and {} channels.' ''.format(n_time_samps, n_chan)) print('The last time sample is at {} seconds.'.format(time_secs[-1])) print('The first few channel names are {}.'.format(', '.join(ch_names[:3]))) print() # insert a blank line in the output # some examples of raw.info: print('bad channels:', raw.info['bads']) # chs marked "bad" during acquisition print(raw.info['sfreq'], 'Hz') # sampling frequency print(raw.info['description'], '\n') # miscellaneous acquisition info print(raw.info) """ Explanation: Querying the Raw object .. sidebar:: Attributes vs. Methods **Attributes** are usually static properties of Python objects — things that are pre-computed and stored as part of the object's representation in memory. Attributes are accessed with the ``.`` operator and do not require parentheses after the attribute name (example: ``raw.ch_names``). **Methods** are like specialized functions attached to an object. Usually they require additional user input and/or need some computation to yield a result. Methods always have parentheses at the end; additional arguments (if any) go inside those parentheses (examples: ``raw.estimate_rank()``, ``raw.drop_channels(['EEG 030', 'MEG 2242'])``). We saw above that printing the :class:~mne.io.Raw object displays some basic information like the total number of channels, the number of time points at which the data were sampled, total duration, and the approximate size in memory. Much more information is available through the various attributes and methods of the :class:~mne.io.Raw class. Some useful attributes of :class:~mne.io.Raw objects include a list of the channel names (:attr:~mne.io.Raw.ch_names), an array of the sample times in seconds (:attr:~mne.io.Raw.times), and the total number of samples (:attr:~mne.io.Raw.n_times); a list of all attributes and methods is given in the documentation of the :class:~mne.io.Raw class. The Raw.info attribute There is also quite a lot of information stored in the raw.info attribute, which stores an :class:~mne.Info object that is similar to a :class:Python dictionary &lt;dict&gt; (in that it has fields accessed via named keys). Like Python dictionaries, raw.info has a .keys() method that shows all the available field names; unlike Python dictionaries, printing raw.info will print a nicely-formatted glimpse of each field's data. See tut-info-class for more on what is stored in :class:~mne.Info objects, and how to interact with them. End of explanation """ print(raw.time_as_index(20)) print(raw.time_as_index([20, 30, 40]), '\n') print(np.diff(raw.time_as_index([1, 2, 3]))) """ Explanation: <div class="alert alert-info"><h4>Note</h4><p>Most of the fields of ``raw.info`` reflect metadata recorded at acquisition time, and should not be changed by the user. 
There are a few exceptions (such as ``raw.info['bads']`` and ``raw.info['projs']``), but in most cases there are dedicated MNE-Python functions or methods to update the :class:`~mne.Info` object safely (such as :meth:`~mne.io.Raw.add_proj` to update ``raw.info['projs']``).</p></div> Time, sample number, and sample index .. sidebar:: Sample numbering in VectorView data For data from VectorView systems, it is important to distinguish *sample number* from *sample index*. See :term:`first_samp` for more information. One method of :class:~mne.io.Raw objects that is frequently useful is :meth:~mne.io.Raw.time_as_index, which converts a time (in seconds) into the integer index of the sample occurring closest to that time. The method can also take a list or array of times, and will return an array of indices. It is important to remember that there may not be a data sample at exactly the time requested, so the number of samples between time = 1 second and time = 2 seconds may be different than the number of samples between time = 2 and time = 3: End of explanation """ eeg_and_eog = raw.copy().pick_types(meg=False, eeg=True, eog=True) print(len(raw.ch_names), '→', len(eeg_and_eog.ch_names)) """ Explanation: Modifying Raw objects .. sidebar:: len(raw) Although the :class:`~mne.io.Raw` object underlyingly stores data samples in a :class:`NumPy array &lt;numpy.ndarray&gt;` of shape (n_channels, n_timepoints), the :class:`~mne.io.Raw` object behaves differently from :class:`NumPy arrays &lt;numpy.ndarray&gt;` with respect to the :func:`len` function. ``len(raw)`` will return the number of timepoints (length along data axis 1), not the number of channels (length along data axis 0). Hence in this section you'll see ``len(raw.ch_names)`` to get the number of channels. :class:~mne.io.Raw objects have a number of methods that modify the :class:~mne.io.Raw instance in-place and return a reference to the modified instance. This can be useful for method chaining_ (e.g., raw.crop(...).pick_channels(...).filter(...).plot()) but it also poses a problem during interactive analysis: if you modify your :class:~mne.io.Raw object for an exploratory plot or analysis (say, by dropping some channels), you will then need to re-load the data (and repeat any earlier processing steps) to undo the channel-dropping and try something else. For that reason, the examples in this section frequently use the :meth:~mne.io.Raw.copy method before the other methods being demonstrated, so that the original :class:~mne.io.Raw object is still available in the variable raw for use in later examples. Selecting, dropping, and reordering channels Altering the channels of a :class:~mne.io.Raw object can be done in several ways. 
As a first example, we'll use the :meth:~mne.io.Raw.pick_types method to restrict the :class:~mne.io.Raw object to just the EEG and EOG channels: End of explanation """ raw_temp = raw.copy() print('Number of channels in raw_temp:') print(len(raw_temp.ch_names), end=' → drop two → ') raw_temp.drop_channels(['EEG 037', 'EEG 059']) print(len(raw_temp.ch_names), end=' → pick three → ') raw_temp.pick_channels(['MEG 1811', 'EEG 017', 'EOG 061']) print(len(raw_temp.ch_names)) """ Explanation: Similar to the :meth:~mne.io.Raw.pick_types method, there is also the :meth:~mne.io.Raw.pick_channels method to pick channels by name, and a corresponding :meth:~mne.io.Raw.drop_channels method to remove channels by name: End of explanation """ channel_names = ['EOG 061', 'EEG 003', 'EEG 002', 'EEG 001'] eog_and_frontal_eeg = raw.copy().reorder_channels(channel_names) print(eog_and_frontal_eeg.ch_names) """ Explanation: If you want the channels in a specific order (e.g., for plotting), :meth:~mne.io.Raw.reorder_channels works just like :meth:~mne.io.Raw.pick_channels but also reorders the channels; for example, here we pick the EOG and frontal EEG channels, putting the EOG first and the EEG in reverse order: End of explanation """ raw.rename_channels({'EOG 061': 'blink detector'}) """ Explanation: Changing channel name and type .. sidebar:: Long channel names Due to limitations in the :file:`.fif` file format (which MNE-Python uses to save :class:`~mne.io.Raw` objects), channel names are limited to a maximum of 15 characters. You may have noticed that the EEG channel names in the sample data are numbered rather than labelled according to a standard nomenclature such as the 10-20 &lt;ten_twenty_&gt; or 10-05 &lt;ten_oh_five_&gt; systems, or perhaps it bothers you that the channel names contain spaces. It is possible to rename channels using the :meth:~mne.io.Raw.rename_channels method, which takes a Python dictionary to map old names to new names. You need not rename all channels at once; provide only the dictionary entries for the channels you want to rename. Here's a frivolous example: End of explanation """ print(raw.ch_names[-3:]) channel_renaming_dict = {name: name.replace(' ', '_') for name in raw.ch_names} raw.rename_channels(channel_renaming_dict) print(raw.ch_names[-3:]) """ Explanation: This next example replaces spaces in the channel names with underscores, using a Python dict comprehension_: End of explanation """ raw.set_channel_types({'EEG_001': 'eog'}) print(raw.copy().pick_types(meg=False, eog=True).ch_names) """ Explanation: If for some reason the channel types in your :class:~mne.io.Raw object are inaccurate, you can change the type of any channel with the :meth:~mne.io.Raw.set_channel_types method. The method takes a :class:dictionary &lt;dict&gt; mapping channel names to types; allowed types are ecg, eeg, emg, eog, exci, ias, misc, resp, seeg, dbs, stim, syst, ecog, hbo, hbr. A common use case for changing channel type is when using frontal EEG electrodes as makeshift EOG channels: End of explanation """ raw_selection = raw.copy().crop(tmin=10, tmax=12.5) print(raw_selection) """ Explanation: Selection in the time domain If you want to limit the time domain of a :class:~mne.io.Raw object, you can use the :meth:~mne.io.Raw.crop method, which modifies the :class:~mne.io.Raw object in place (we've seen this already at the start of this tutorial, when we cropped the :class:~mne.io.Raw object to 60 seconds to reduce memory demands). 
:meth:~mne.io.Raw.crop takes parameters tmin and tmax, both in seconds (here we'll again use :meth:~mne.io.Raw.copy first to avoid changing the original :class:~mne.io.Raw object): End of explanation """ print(raw_selection.times.min(), raw_selection.times.max()) raw_selection.crop(tmin=1) print(raw_selection.times.min(), raw_selection.times.max()) """ Explanation: :meth:~mne.io.Raw.crop also modifies the :attr:~mne.io.Raw.first_samp and :attr:~mne.io.Raw.times attributes, so that the first sample of the cropped object now corresponds to time = 0. Accordingly, if you wanted to re-crop raw_selection from 11 to 12.5 seconds (instead of 10 to 12.5 as above) then the subsequent call to :meth:~mne.io.Raw.crop should get tmin=1 (not tmin=11), and leave tmax unspecified to keep everything from tmin up to the end of the object: End of explanation """ raw_selection1 = raw.copy().crop(tmin=30, tmax=30.1) # 0.1 seconds raw_selection2 = raw.copy().crop(tmin=40, tmax=41.1) # 1.1 seconds raw_selection3 = raw.copy().crop(tmin=50, tmax=51.3) # 1.3 seconds raw_selection1.append([raw_selection2, raw_selection3]) # 2.5 seconds total print(raw_selection1.times.min(), raw_selection1.times.max()) """ Explanation: Remember that sample times don't always align exactly with requested tmin or tmax values (due to sampling), which is why the max values of the cropped files don't exactly match the requested tmax (see time-as-index for further details). If you need to select discontinuous spans of a :class:~mne.io.Raw object — or combine two or more separate :class:~mne.io.Raw objects — you can use the :meth:~mne.io.Raw.append method: End of explanation """ sampling_freq = raw.info['sfreq'] start_stop_seconds = np.array([11, 13]) start_sample, stop_sample = (start_stop_seconds * sampling_freq).astype(int) channel_index = 0 raw_selection = raw[channel_index, start_sample:stop_sample] print(raw_selection) """ Explanation: <div class="alert alert-danger"><h4>Warning</h4><p>Be careful when concatenating :class:`~mne.io.Raw` objects from different recordings, especially when saving: :meth:`~mne.io.Raw.append` only preserves the ``info`` attribute of the initial :class:`~mne.io.Raw` object (the one outside the :meth:`~mne.io.Raw.append` method call).</p></div> Extracting data from Raw objects So far we've been looking at ways to modify a :class:~mne.io.Raw object. This section shows how to extract the data from a :class:~mne.io.Raw object into a :class:NumPy array &lt;numpy.ndarray&gt;, for analysis or plotting using functions outside of MNE-Python. To select portions of the data, :class:~mne.io.Raw objects can be indexed using square brackets. However, indexing :class:~mne.io.Raw works differently than indexing a :class:NumPy array &lt;numpy.ndarray&gt; in two ways: Along with the requested sample value(s) MNE-Python also returns an array of times (in seconds) corresponding to the requested samples. The data array and the times array are returned together as elements of a tuple. The data array will always be 2-dimensional even if you request only a single time sample or a single channel. Extracting data by index To illustrate the above two points, let's select a couple seconds of data from the first channel: End of explanation """ x = raw_selection[1] y = raw_selection[0].T plt.plot(x, y) """ Explanation: You can see that it contains 2 arrays. 
This combination of data and times makes it easy to plot selections of raw data (although note that we're transposing the data array so that each channel is a column instead of a row, to match what matplotlib expects when plotting 2-dimensional y against 1-dimensional x): End of explanation """ channel_names = ['MEG_0712', 'MEG_1022'] two_meg_chans = raw[channel_names, start_sample:stop_sample] y_offset = np.array([5e-11, 0]) # just enough to separate the channel traces x = two_meg_chans[1] y = two_meg_chans[0].T + y_offset lines = plt.plot(x, y) plt.legend(lines, channel_names) """ Explanation: Extracting channels by name The :class:~mne.io.Raw object can also be indexed with the names of channels instead of their index numbers. You can pass a single string to get just one channel, or a list of strings to select multiple channels. As with integer indexing, this will return a tuple of (data_array, times_array) that can be easily plotted. Since we're plotting 2 channels this time, we'll add a vertical offset to one channel so it's not plotted right on top of the other one: End of explanation """ eeg_channel_indices = mne.pick_types(raw.info, meg=False, eeg=True) eeg_data, times = raw[eeg_channel_indices] print(eeg_data.shape) """ Explanation: Extracting channels by type There are several ways to select all channels of a given type from a :class:~mne.io.Raw object. The safest method is to use :func:mne.pick_types to obtain the integer indices of the channels you want, then use those indices with the square-bracket indexing method shown above. The :func:~mne.pick_types function uses the :class:~mne.Info attribute of the :class:~mne.io.Raw object to determine channel types, and takes boolean or string parameters to indicate which type(s) to retain. The meg parameter defaults to True, and all others default to False, so to get just the EEG channels, we pass eeg=True and meg=False: End of explanation """ data = raw.get_data() print(data.shape) """ Explanation: Some of the parameters of :func:mne.pick_types accept string arguments as well as booleans. For example, the meg parameter can take values 'mag', 'grad', 'planar1', or 'planar2' to select only magnetometers, all gradiometers, or a specific type of gradiometer. See the docstring of :meth:mne.pick_types for full details. The Raw.get_data() method If you only want the data (not the corresponding array of times), :class:~mne.io.Raw objects have a :meth:~mne.io.Raw.get_data method. Used with no parameters specified, it will extract all data from all channels, in a (n_channels, n_timepoints) :class:NumPy array &lt;numpy.ndarray&gt;: End of explanation """ data, times = raw.get_data(return_times=True) print(data.shape) print(times.shape) """ Explanation: If you want the array of times, :meth:~mne.io.Raw.get_data has an optional return_times parameter: End of explanation """ first_channel_data = raw.get_data(picks=0) eeg_and_eog_data = raw.get_data(picks=['eeg', 'eog']) two_meg_chans_data = raw.get_data(picks=['MEG_0712', 'MEG_1022'], start=1000, stop=2000) print(first_channel_data.shape) print(eeg_and_eog_data.shape) print(two_meg_chans_data.shape) """ Explanation: The :meth:~mne.io.Raw.get_data method can also be used to extract specific channel(s) and sample ranges, via its picks, start, and stop parameters. The picks parameter accepts integer channel indices, channel names, or channel types, and preserves the requested channel order given as its picks parameter. 
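Because start and stop are sample indices rather than seconds, one convenient pattern (a sketch combining :meth:`~mne.io.Raw.time_as_index`, shown earlier in this tutorial, with :meth:`~mne.io.Raw.get_data`) is to convert times first:

# Convert a 10-13 second window to sample indices, then extract the EEG data.
start_idx, stop_idx = raw.time_as_index([10, 13])
eeg_window = raw.get_data(picks='eeg', start=start_idx, stop=stop_idx)
print(eeg_window.shape)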
End of explanation """ data = raw.get_data() np.save(file='my_data.npy', arr=data) """ Explanation: Summary of ways to extract data from Raw objects The following table summarizes the various ways of extracting data from a :class:~mne.io.Raw object. .. cssclass:: table-bordered .. rst-class:: midvalign +-------------------------------------+-------------------------+ | Python code | Result | | | | | | | +=====================================+=========================+ | raw.get_data() | :class:NumPy array | | | &lt;numpy.ndarray&gt; | | | (n_chans × n_samps) | +-------------------------------------+-------------------------+ | raw[:] | :class:tuple of (data | +-------------------------------------+ (n_chans × n_samps), | | raw.get_data(return_times=True) | times (1 × n_samps)) | +-------------------------------------+-------------------------+ | raw[0, 1000:2000] | | +-------------------------------------+ | | raw['MEG 0113', 1000:2000] | | +-------------------------------------+ | | raw.get_data(picks=0, | :class:`tuple` of | | start=1000, stop=2000, | (data (1 × 1000), | | return_times=True) | times (1 × 1000)) | +-------------------------------------+ | | raw.get_data(picks='MEG 0113', | | | start=1000, stop=2000, | | | return_times=True) | | +-------------------------------------+-------------------------+ | raw[7:9, 1000:2000] | | +-------------------------------------+ | | raw[[2, 5], 1000:2000] | :class:tuple of | +-------------------------------------+ (data (2 × 1000), | | raw[['EEG 030', 'EOG 061'], | times (1 × 1000)) | | 1000:2000] | | +-------------------------------------+-------------------------+ Exporting and saving Raw objects :class:~mne.io.Raw objects have a built-in :meth:~mne.io.Raw.save method, which can be used to write a partially processed :class:~mne.io.Raw object to disk as a :file:.fif file, such that it can be re-loaded later with its various attributes intact (but see precision for an important note about numerical precision when saving). There are a few other ways to export just the sensor data from a :class:~mne.io.Raw object. One is to use indexing or the :meth:~mne.io.Raw.get_data method to extract the data, and use :func:numpy.save to save the data array: End of explanation """ sampling_freq = raw.info['sfreq'] start_end_secs = np.array([10, 13]) start_sample, stop_sample = (start_end_secs * sampling_freq).astype(int) df = raw.to_data_frame(picks=['eeg'], start=start_sample, stop=stop_sample) # then save using df.to_csv(...), df.to_hdf(...), etc print(df.head()) """ Explanation: It is also possible to export the data to a :class:Pandas DataFrame &lt;pandas.DataFrame&gt; object, and use the saving methods that :mod:Pandas &lt;pandas&gt; affords. The :class:~mne.io.Raw object's :meth:~mne.io.Raw.to_data_frame method is similar to :meth:~mne.io.Raw.get_data in that it has a picks parameter for restricting which channels are exported, and start and stop parameters for restricting the time domain. Note that, by default, times will be converted to milliseconds, rounded to the nearest millisecond, and used as the DataFrame index; see the scaling_time parameter in the documentation of :meth:~mne.io.Raw.to_data_frame for more details. End of explanation """
enakai00/jupyter_ml4se_commentary
06-pandas DataFrame-02.ipynb
apache-2.0
import numpy as np import matplotlib.pyplot as plt import pandas as pd from pandas import Series, DataFrame """ Explanation: Extracting data from a DataFrame End of explanation """
from numpy.random import randint dices = randint(1,7,(5,2)) diceroll = DataFrame(dices, columns=['dice1','dice2']) diceroll """ Explanation: An example of extracting a specific column from a DataFrame as a Series. End of explanation """
diceroll['dice1'] """ Explanation: Extract the column by giving the column name as the array index. End of explanation """
diceroll.dice1 """ Explanation: Extract the column by accessing the column name as an attribute. End of explanation """
data = {'City': ['Tokyo','Osaka','Nagoya','Okinawa'], 'Temperature': [25.0,28.2,27.3,30.9], 'Humidity': [44,42,np.nan,62]} cities = DataFrame(data) cities cities[['City', 'Humidity']] """ Explanation: An example of extracting multiple columns as a DataFrame. End of explanation """
cities[['City']] """ Explanation: You can also extract a single column as a DataFrame, as follows. End of explanation """
cities['City'] """ Explanation: Next is the case of extracting it as a Series. End of explanation """
cities[0:2] cities[2:3] cities[1:] """ Explanation: An example of extracting rows from a DataFrame. The rows to extract are specified with array slice notation. End of explanation """
cities[cities['Temperature']>28] """ Explanation: You can also extract only the rows that satisfy a particular condition. End of explanation """
cities """ Explanation: An example of extracting data by specifying both rows and columns. End of explanation """
cities.ix[1:3, ['City','Humidity']] """ Explanation: Rows are specified with slice notation, and columns with a list of column names. End of explanation """
cities """ Explanation: An example of processing a DataFrame row by row. End of explanation """
for index, line in cities.iterrows(): print 'Index:', index print line, '\n' """ Explanation: The iterrows method returns, in order, each row's index together with a Series object representing that row. End of explanation """
humidity = cities['Humidity'].copy() humidity[2] = 50 humidity """ Explanation: An example of modifying a DataFrame. When modifying an object extracted from a DataFrame, create an explicit copy. End of explanation """
cities """ Explanation: Modifying the copy does not change the original DataFrame. End of explanation """
cities.loc[2,'Humidity'] = 50 cities """ Explanation: To change a specific element of a DataFrame, specify the element with the loc method. End of explanation """
for index, line in cities.iterrows(): if line['Temperature'] > 30: cities.loc[index, 'Temperature'] = 30 cities """ Explanation: An example of clipping Temperature values greater than 30 down to 30. End of explanation """
cities.loc[(cities['Temperature']>27)&(cities['Temperature']<29), 'Temperature'] = 28 cities """ Explanation: This can also be combined with selecting rows by condition. End of explanation """
cities.loc[2,'Humidity'] = np.nan cities cities = cities.dropna() cities """ Explanation: An example of using the dropna method to remove rows that contain missing values. End of explanation """
from numpy.random import normal def create_dataset(num): data_x = np.linspace(0,1,num) data_y = np.sin(2*np.pi*data_x) + normal(loc=0, scale=0.3, size=num) return DataFrame({'x': data_x, 'y': data_y}) """ Explanation: Exercises (1) Using the following function create_dataset(), create a DataFrame data consisting of num=10 data points. Then, using the iterrows method, compute the root mean squared error between the y values of the data points (x, y) and the function sin(2πx), i.e. √( Σ(sin(2πx) − y)² / num ). Hint: in this example the root mean squared error comes out to about 0.3. End of explanation """
from PIL import Image Image.open("figure01.png") """ Explanation: (2) From the DataFrame in (1), store a Series object containing only the column 'x' in the variable x; its name property should be 'x'. Next, create a Series object whose elements are x**2 (each element squared) and store it in the variable x2, with name property 'x2'. Likewise, store Series objects with elements x**3 and x**4 in the variables x3 and x4. (3) Combine the x, x2, x3, x4 created in (2) into a DataFrame dataset whose columns are x, x2, x3, x4. Hint: the result is a DataFrame like the one shown in the figure (figure01.png). End of explanation """
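One possible way to work through these exercises — a sketch, not an official solution; it only uses functions and names defined above:

# (1) RMSE between the noisy y values and sin(2*pi*x), using iterrows
num = 10
data = create_dataset(num)
squared_error = 0.0
for index, line in data.iterrows():
    squared_error += (np.sin(2 * np.pi * line['x']) - line['y']) ** 2
rmse = np.sqrt(squared_error / num)
print(rmse)   # roughly 0.3 for this noise level

# (2) Series for x, x**2, x**3, x**4 with the requested name properties
x = data['x']
x.name = 'x'
x2 = x ** 2
x2.name = 'x2'
x3 = x ** 3
x3.name = 'x3'
x4 = x ** 4
x4.name = 'x4'

# (3) Combine the four Series column-wise into one DataFrame
dataset = pd.concat([x, x2, x3, x4], axis=1)
dataset.head()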
acmiyaguchi/data-pipeline
reports/android-clients/android-clients.ipynb
mpl-2.0
def dedupe_pings(rdd): return rdd.filter(lambda p: p["meta/clientId"] is not None)\ .map(lambda p: (p["meta/documentId"], p))\ .reduceByKey(lambda x, y: x)\ .map(lambda x: x[1]) """ Explanation: Take the set of pings, make sure we have actual clientIds and remove duplicate pings. We collect each unique ping. End of explanation """ def transform(ping): # Should not be None since we filter those out. clientId = ping["meta/clientId"] profileDate = None profileDaynum = ping["environment/profile/creationDate"] if profileDaynum is not None: try: # Bad data could push profileDaynum > 32767 (size of a C int) and throw exception profileDate = dt.datetime(1970, 1, 1) + dt.timedelta(int(profileDaynum)) except: profileDate = None # Create date should already be in ISO format creationDate = ping["creationDate"] if creationDate is not None: # This is only accurate because we know the creation date is always in 'Z' (zulu) time. creationDate = dt.datetime.strptime(ping["creationDate"], "%Y-%m-%dT%H:%M:%S.%fZ") # Added via the ingestion process so should not be None. submissionDate = dt.datetime.strptime(ping["meta/submissionDate"], "%Y%m%d") appVersion = ping["application/version"] osVersion = ping["environment/system/os/version"] if osVersion is not None: osVersion = int(osVersion) locale = ping["environment/settings/locale"] # Truncate to 32 characters defaultSearch = ping["environment/settings/defaultSearchEngine"] if defaultSearch is not None: defaultSearch = defaultSearch[0:32] # Build up the device string, truncating like we do in 'core' ping. device = ping["environment/system/device/manufacturer"] model = ping["environment/system/device/model"] if device is not None and model is not None: device = device[0:12] + "-" + model[0:19] xpcomABI = ping["application/xpcomAbi"] arch = "arm" if xpcomABI is not None and "x86" in xpcomABI: arch = "x86" return [clientId, profileDate, submissionDate, creationDate, appVersion, osVersion, locale, defaultSearch, device, arch] """ Explanation: Transform and sanitize the pings into arrays. 
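As a quick sanity check of the field handling — a sketch using a hand-built, hypothetical ping dict; real pings come from get_pings in the next cell — you can call transform directly:

# Hypothetical flattened ping with the same keys the pipeline extracts.
fake_ping = {
    "meta/clientId": "abc-123",
    "meta/documentId": "doc-1",
    "meta/submissionDate": "20160101",
    "creationDate": "2016-01-01T12:00:00.000Z",
    "application/version": "44.0",
    "environment/system/os/version": "23",
    "environment/profile/creationDate": 16000,
    "environment/settings/locale": "en-US",
    "environment/settings/defaultSearchEngine": "google",
    "environment/system/device/model": "Nexus 5",
    "environment/system/device/manufacturer": "LGE",
    "application/xpcomAbi": "arm-eabi-gcc3",
}
print transform(fake_ping)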
End of explanation """ channels = ["nightly", "aurora", "beta", "release"] batch_date = os.environ.get('date') if batch_date: start = end = dt.datetime.strptime(batch_date, '%Y%m%d') else: start = start = dt.datetime.now() - dt.timedelta(1) day = start while day <= end: for channel in channels: print "\nchannel: " + channel + ", date: " + day.strftime("%Y%m%d") pings = get_pings(sc, app="Fennec", channel=channel, submission_date=(day.strftime("%Y%m%d"), day.strftime("%Y%m%d")), build_id=("20100101000000", "99999999999999"), fraction=1) subset = get_pings_properties(pings, ["meta/clientId", "meta/documentId", "meta/submissionDate", "creationDate", "application/version", "environment/system/os/version", "environment/profile/creationDate", "environment/settings/locale", "environment/settings/defaultSearchEngine", "environment/system/device/model", "environment/system/device/manufacturer", "application/xpcomAbi"]) subset = dedupe_pings(subset) print "\nDe-duped pings:" print subset.first() transformed = subset.map(transform) print "\nTransformed pings:" print transformed.first() s3_output = "s3n://net-mozaws-prod-us-west-2-pipeline-analysis/mobile/android_clients" s3_output += "/v1/channel=" + channel + "/submission=" + day.strftime("%Y%m%d") schema = StructType([ StructField("clientid", StringType(), False), StructField("profiledate", TimestampType(), True), StructField("submissiondate", TimestampType(), False), StructField("creationdate", TimestampType(), True), StructField("appversion", StringType(), True), StructField("osversion", IntegerType(), True), StructField("locale", StringType(), True), StructField("defaultsearch", StringType(), True), StructField("device", StringType(), True), StructField("arch", StringType(), True) ]) grouped = sqlContext.createDataFrame(transformed, schema) grouped.coalesce(1).write.parquet(s3_output, mode="overwrite") day += dt.timedelta(1) """ Explanation: Create a set of pings from "saved-session" to build a set of core client data. Output the data to CSV or Parquet. This script is designed to loop over a range of days and output a single day for the given channels. Use explicit date ranges for backfilling, or now() - '1day' for automated runs. End of explanation """
yongtang/tensorflow
tensorflow/lite/g3doc/tutorials/model_maker_speech_recognition.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2022 The TensorFlow Authors. End of explanation """ !sudo apt -y install libportaudio2 !pip install tflite-model-maker import os import glob import random import shutil import librosa import soundfile as sf from IPython.display import Audio import numpy as np import matplotlib.pyplot as plt import seaborn as sns import tensorflow as tf import tflite_model_maker as mm from tflite_model_maker import audio_classifier from tflite_model_maker.config import ExportFormat print(f"TensorFlow Version: {tf.__version__}") print(f"Model Maker Version: {mm.__version__}") """ Explanation: Retrain a speech recognition model with TensorFlow Lite Model Maker <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_speech_recognition"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_speech_recognition.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_speech_recognition.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_speech_recognition.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> In this colab notebook, you'll learn how to use the TensorFlow Lite Model Maker to train a speech recognition model that can classify spoken words or short phrases using one-second sound samples. The Model Maker library uses transfer learning to retrain an existing TensorFlow model with a new dataset, which reduces the amount of sample data and time required for training. By default, this notebook retrains the model (BrowserFft, from the TFJS Speech Command Recognizer) using a subset of words from the speech commands dataset (such as "up," "down," "left," and "right"). Then it exports a TFLite model that you can run on a mobile device or embedded system (such as a Raspberry Pi). It also exports the trained model as a TensorFlow SavedModel. This notebook is also designed to accept a custom dataset of WAV files, uploaded to Colab in a ZIP file. The more samples you have for each class, the better your accuracy will be, but because the transfer learning process uses feature embeddings from the pre-trained model, you can still get a fairly accurate model with only a few dozen samples in each of your classes. Note: The model we'll be training is optimized for speech recognition with one-second samples. 
If you want to perform more generic audio classification (such as detecting different types of music), we suggest you instead follow this Colab to retrain an audio classifier. If you want to run the notebook with the default speech dataset, you can run the whole thing now by clicking Runtime > Run all in the Colab toolbar. However, if you want to use your own dataset, then continue down to Prepare the dataset and follow the instructions there. Import the required packages You'll need TensorFlow, TFLite Model Maker, and some modules for audio manipulation, playback, and visualizations. End of explanation """ use_custom_dataset = False #@param ["False", "True"] {type:"raw"} """ Explanation: Prepare the dataset To train with the default speech dataset, just run all the code below as-is. But if you want to train with your own speech dataset, follow these steps: Note: The model you'll retrain expects input data to be roughly one second of audio at 44.1 kHz. Model Maker perfoms automatic resampling for the training dataset, so there's no need to resample your dataset if it has a sample rate other than 44.1 kHz. But beware that audio samples longer than one second will be split into multiple one-second chunks, and the final chunk will be discarded if it's shorter than one second. Be sure each sample in your dataset is in WAV file format, about one second long. Then create a ZIP file with all your WAV files, organized into separate subfolders for each classification. For example, each sample for a speech command "yes" should be in a subfolder named "yes". Even if you have only one class, the samples must be saved in a subdirectory with the class name as the directory name. (This script assumes your dataset is not split into train/validation/test sets and performs that split for you.) Click the Files tab in the left panel and just drag-drop your ZIP file there to upload it. Use the following drop-down option to set use_custom_dataset to True. Then skip to Prepare a custom audio dataset to specify your ZIP filename and dataset directory name. End of explanation """ tf.keras.utils.get_file('speech_commands_v0.01.tar.gz', 'http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz', cache_dir='./', cache_subdir='dataset-speech', extract=True) tf.keras.utils.get_file('background_audio.zip', 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/sound_classification/background_audio.zip', cache_dir='./', cache_subdir='dataset-background', extract=True) """ Explanation: Generate a background noise dataset Whether you're using the default speech dataset or a custom dataset, you should have a good set of background noises so your model can distinguish speech from other noises (including silence). Because the following background samples are provided in WAV files that are a minute long or longer, we need to split them up into smaller one-second samples so we can reserve some for our test dataset. 
We'll also combine a couple different sample sources to build a comprehensive set of background noises and silence: End of explanation """ # Create a list of all the background wav files files = glob.glob(os.path.join('./dataset-speech/_background_noise_', '*.wav')) files = files + glob.glob(os.path.join('./dataset-background', '*.wav')) background_dir = './background' os.makedirs(background_dir, exist_ok=True) # Loop through all files and split each into several one-second wav files for file in files: filename = os.path.basename(os.path.normpath(file)) print('Splitting', filename) name = os.path.splitext(filename)[0] rate = librosa.get_samplerate(file) length = round(librosa.get_duration(filename=file)) for i in range(length - 1): start = i * rate stop = (i * rate) + rate data, _ = sf.read(file, start=start, stop=stop) sf.write(os.path.join(background_dir, name + str(i) + '.wav'), data, rate) """ Explanation: Note: Although there is a newer version available, we're using v0.01 of the speech commands dataset because it's a smaller download. v0.01 includes 30 commands, while v0.02 adds five more ("backward", "forward", "follow", "learn", and "visual"). End of explanation """ if not use_custom_dataset: commands = [ "up", "down", "left", "right", "go", "stop", "on", "off", "background"] dataset_dir = './dataset-speech' test_dir = './dataset-test' # Move the processed background samples shutil.move(background_dir, os.path.join(dataset_dir, 'background')) # Delete all directories that are not in our commands list dirs = glob.glob(os.path.join(dataset_dir, '*/')) for dir in dirs: name = os.path.basename(os.path.normpath(dir)) if name not in commands: shutil.rmtree(dir) # Count is per class sample_count = 150 test_data_ratio = 0.2 test_count = round(sample_count * test_data_ratio) # Loop through child directories (each class of wav files) dirs = glob.glob(os.path.join(dataset_dir, '*/')) for dir in dirs: files = glob.glob(os.path.join(dir, '*.wav')) random.seed(42) random.shuffle(files) # Move test samples: for file in files[sample_count:sample_count + test_count]: class_dir = os.path.basename(os.path.normpath(dir)) os.makedirs(os.path.join(test_dir, class_dir), exist_ok=True) os.rename(file, os.path.join(test_dir, class_dir, os.path.basename(file))) # Delete remaining samples for file in files[sample_count + test_count:]: os.remove(file) """ Explanation: Prepare the speech commands dataset We already downloaded the speech commands dataset, so now we just need to prune the number of classes for our model. This dataset includes over 30 speech command classifications, and most of them have over 2,000 samples. But because we're using transfer learning, we don't need that many samples. So the following code does a few things: Specify which classifications we want to use, and delete the rest. Keep only 150 samples of each class for training (to prove that transfer learning works well with smaller datasets and simply to reduce the training time). Create a separate directory for a test dataset so we can easily run inference with them later. 
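Once the cell above has finished, a quick sanity check is to count how many WAV files are left in each class directory. This is a minimal sketch that assumes the default-dataset branch above was executed (so dataset_dir and test_dir exist):

for split_dir in (dataset_dir, test_dir):
  print(split_dir)
  for class_dir in sorted(glob.glob(os.path.join(split_dir, '*/'))):
    # Count the one-second WAV samples remaining in this class
    n_wavs = len(glob.glob(os.path.join(class_dir, '*.wav')))
    print(' ', os.path.basename(os.path.normpath(class_dir)), n_wavs)

You should see on the order of 150 training files and 30 test files per class, matching the sample_count and test_data_ratio values above.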
End of explanation
"""

if use_custom_dataset:
  # Specify the ZIP file you uploaded:
  !unzip YOUR-FILENAME.zip
  # Specify the unzipped path to your custom dataset
  # (this path contains all the subfolders with classification names):
  dataset_dir = './YOUR-DIRNAME'

""" Explanation: Prepare a custom dataset
If you want to train the model with your own speech dataset, you need to upload your samples as WAV files in a ZIP (as described above) and modify the following variables to specify your dataset:
End of explanation
"""

def move_background_dataset(dataset_dir):
  dest_dir = os.path.join(dataset_dir, 'background')
  if os.path.exists(dest_dir):
    files = glob.glob(os.path.join(background_dir, '*.wav'))
    for file in files:
      shutil.move(file, dest_dir)
  else:
    shutil.move(background_dir, dest_dir)

if use_custom_dataset:
  # Move background samples into custom dataset
  move_background_dataset(dataset_dir)

  # Now we separate some of the files that we'll use for testing:
  test_dir = './dataset-test'
  test_data_ratio = 0.2
  dirs = glob.glob(os.path.join(dataset_dir, '*/'))
  for dir in dirs:
    files = glob.glob(os.path.join(dir, '*.wav'))
    test_count = round(len(files) * test_data_ratio)
    random.seed(42)
    random.shuffle(files)
    # Move test samples:
    for file in files[:test_count]:
      class_dir = os.path.basename(os.path.normpath(dir))
      os.makedirs(os.path.join(test_dir, class_dir), exist_ok=True)
      os.rename(file, os.path.join(test_dir, class_dir, os.path.basename(file)))
    print('Moved', test_count, 'files from', class_dir)

""" Explanation: After changing the filename and path name above, you're ready to train the model with your custom dataset. In the Colab toolbar, select Runtime > Run all to run the whole notebook.
The following code integrates our new background noise samples into your dataset and then separates a portion of all samples to create a test set.
End of explanation
"""

def get_random_audio_file(samples_dir):
  files = os.path.abspath(os.path.join(samples_dir, '*/*.wav'))
  files_list = glob.glob(files)
  random_audio_path = random.choice(files_list)
  return random_audio_path

def show_sample(audio_path):
  audio_data, sample_rate = sf.read(audio_path)
  class_name = os.path.basename(os.path.dirname(audio_path))
  print(f'Class: {class_name}')
  print(f'File: {audio_path}')
  print(f'Sample rate: {sample_rate}')
  print(f'Sample length: {len(audio_data)}')

  plt.title(class_name)
  plt.plot(audio_data)
  display(Audio(audio_data, rate=sample_rate))

random_audio = get_random_audio_file(test_dir)
show_sample(random_audio)

""" Explanation: Play a sample
To be sure the dataset looks correct, let's play a random sample from the test set:
End of explanation
"""

spec = audio_classifier.BrowserFftSpec()

""" Explanation: Define the model
When using Model Maker to retrain any model, you have to start by defining a model spec. The spec defines the base model from which your new model will extract feature embeddings to begin learning new classes. The spec for this speech recognizer is based on the pre-trained BrowserFft model from TFJS.
The model expects input as an audio sample that's 44.1 kHz, and just under a second long: the exact sample length must be 44034 frames.
You don't need to do any resampling with your training dataset. Model Maker takes care of that for you. But when you later run inference, you must be sure that your input matches that expected format.
All you need to do here is instantiate the BrowserFftSpec: End of explanation """ if not use_custom_dataset: train_data_ratio = 0.8 train_data = audio_classifier.DataLoader.from_folder( spec, dataset_dir, cache=True) train_data, validation_data = train_data.split(train_data_ratio) test_data = audio_classifier.DataLoader.from_folder( spec, test_dir, cache=True) """ Explanation: Load your dataset Now you need to load your dataset according to the model specifications. Model Maker includes the DataLoader API, which will load your dataset from a folder and ensure it's in the expected format for the model spec. We already reserved some test files by moving them to a separate directory, which makes it easier to run inference with them later. Now we'll create a DataLoader for each split: the training set, the validation set, and the test set. Load the speech commands dataset End of explanation """ if use_custom_dataset: train_data_ratio = 0.8 train_data = audio_classifier.DataLoader.from_folder( spec, dataset_dir, cache=True) train_data, validation_data = train_data.split(train_data_ratio) test_data = audio_classifier.DataLoader.from_folder( spec, test_dir, cache=True) """ Explanation: Load a custom dataset Note: Setting cache=True is important to make training faster (especially when the dataset must be re-sampled) but it will also require more RAM to hold the data. If you use a very large custom dataset, caching might exceed your RAM capacity. End of explanation """ # If your dataset has fewer than 100 samples per class, # you might want to try a smaller batch size batch_size = 25 epochs = 25 model = audio_classifier.create(train_data, spec, validation_data, batch_size, epochs) """ Explanation: Train the model Now we'll use the Model Maker create() function to create a model based on our model spec and training dataset, and begin training. If you're using a custom dataset, you might want to change the batch size as appropriate for the number of samples in your train set. Note: The first epoch takes longer because it must create the cache. End of explanation """ model.evaluate(test_data) """ Explanation: Review the model performance Even if the accuracy/loss looks good from the training output above, it's important to also run the model using test data that the model has not seen yet, which is what the evaluate() method does here: End of explanation """ def show_confusion_matrix(confusion, test_labels): """Compute confusion matrix and normalize.""" confusion_normalized = confusion.astype("float") / confusion.sum(axis=1) sns.set(rc = {'figure.figsize':(6,6)}) sns.heatmap( confusion_normalized, xticklabels=test_labels, yticklabels=test_labels, cmap='Blues', annot=True, fmt='.2f', square=True, cbar=False) plt.title("Confusion matrix") plt.ylabel("True label") plt.xlabel("Predicted label") confusion_matrix = model.confusion_matrix(test_data) show_confusion_matrix(confusion_matrix.numpy(), test_data.index_to_label) """ Explanation: View the confusion matrix When training a classification model such as this one, it's also useful to inspect the confusion matrix. The confusion matrix gives you detailed visual representation of how well your classifier performs for each classification in your test data. 
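If you want the per-class numbers behind the heat map rather than a picture, the row-normalized diagonal of the same matrix gives each class's recall. A small sketch reusing the confusion_matrix tensor computed above:

# Per-class recall: correct predictions for a class divided by all true samples of that class
cm = confusion_matrix.numpy()
per_class_recall = cm.diagonal() / cm.sum(axis=1)
for name, recall in zip(test_data.index_to_label, per_class_recall):
  print(f'{name}: {recall:.2f}')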
End of explanation
"""

TFLITE_FILENAME = 'browserfft-speech.tflite'
SAVE_PATH = './models'

print(f'Exporting the model to {SAVE_PATH}')
model.export(SAVE_PATH, tflite_filename=TFLITE_FILENAME)
model.export(SAVE_PATH, export_format=[mm.ExportFormat.SAVED_MODEL, mm.ExportFormat.LABEL])

""" Explanation: Export the model
The last step is exporting your model into the TensorFlow Lite format for execution on mobile/embedded devices and into the SavedModel format for execution elsewhere.
When exporting a .tflite file from Model Maker, it includes model metadata that describes various details that can later help during inference. It even includes a copy of the classification labels file, so you don't need a separate labels.txt file. (In the next section, we show how to use this metadata to run an inference.)
End of explanation
"""

# This library provides the TFLite metadata API
! pip install -q tflite_support

from tflite_support import metadata
import json

def get_labels(model):
  """Returns a list of labels, extracted from the model metadata."""
  displayer = metadata.MetadataDisplayer.with_model_file(model)
  labels_file = displayer.get_packed_associated_file_list()[0]
  labels = displayer.get_associated_file_buffer(labels_file).decode()
  return [line for line in labels.split('\n')]

def get_input_sample_rate(model):
  """Returns the model's expected sample rate, from the model metadata."""
  displayer = metadata.MetadataDisplayer.with_model_file(model)
  metadata_json = json.loads(displayer.get_metadata_json())
  input_tensor_metadata = metadata_json['subgraph_metadata'][0][
          'input_tensor_metadata'][0]
  input_content_props = input_tensor_metadata['content']['content_properties']
  return input_content_props['sample_rate']

""" Explanation: Run inference with TF Lite model
Now your TFLite model can be deployed and run using any of the supported inferencing libraries or with the new TFLite AudioClassifier Task API. The following code shows how you can run inference with the .tflite model in Python.
End of explanation
"""

# Get a WAV file for inference and list of labels from the model
tflite_file = os.path.join(SAVE_PATH, TFLITE_FILENAME)
labels = get_labels(tflite_file)
random_audio = get_random_audio_file(test_dir)

# Ensure the audio sample fits the model input
interpreter = tf.lite.Interpreter(tflite_file)
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_size = input_details[0]['shape'][1]
sample_rate = get_input_sample_rate(tflite_file)
audio_data, _ = librosa.load(random_audio, sr=sample_rate)
if len(audio_data) < input_size:
  audio_data.resize(input_size)
audio_data = np.expand_dims(audio_data[:input_size], axis=0)

# Run inference
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], audio_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])

# Display prediction and ground truth
top_index = np.argmax(output_data[0])
label = labels[top_index]
score = output_data[0][top_index]
print('---prediction---')
print(f'Class: {label}\nScore: {score}')
print('----truth----')
show_sample(random_audio)

""" Explanation: To observe how well the model performs with real samples, run the following code block over and over. Each time, it will fetch a new test sample and run inference with it, and you can listen to the audio sample below.
End of explanation
"""

try:
  from google.colab import files
except ImportError:
  pass
else:
  files.download(tflite_file)

""" Explanation: Download the TF Lite model
Now you can deploy the TF Lite model to your mobile or embedded device. You don't need to download the labels file because you can instead retrieve the labels from the .tflite file metadata, as shown in the previous inferencing example.
End of explanation
"""
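Before you deploy, it can also be reassuring to score the exported .tflite file against the whole test directory rather than one random clip at a time. The following is a minimal sketch that simply loops the same interpreter calls used above over every test WAV (it assumes each file is a single one-second sample, as prepared earlier):

# Evaluate the exported TFLite model over the entire test directory
interpreter = tf.lite.Interpreter(tflite_file)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_size = input_details[0]['shape'][1]
sample_rate = get_input_sample_rate(tflite_file)

correct = total = 0
for wav_path in glob.glob(os.path.join(test_dir, '*/*.wav')):
  truth = os.path.basename(os.path.dirname(wav_path))
  audio_data, _ = librosa.load(wav_path, sr=sample_rate)
  if len(audio_data) < input_size:
    audio_data.resize(input_size)
  audio_data = np.expand_dims(audio_data[:input_size], axis=0)
  interpreter.set_tensor(input_details[0]['index'], audio_data)
  interpreter.invoke()
  prediction = labels[np.argmax(interpreter.get_tensor(output_details[0]['index'])[0])]
  correct += int(prediction == truth)
  total += 1

print(f'TFLite accuracy on the test set: {correct / total:.2%}')

The number should be close to the model.evaluate() result from earlier; a large gap would suggest something went wrong in the export or in the input preprocessing.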
karthikrangarajan/intro-to-sklearn
06.Model Evaluation.ipynb
bsd-3-clause
import pandas as pd # import model algorithm and data from sklearn import svm, datasets # import splitter from sklearn.cross_validation import train_test_split # import metrics from sklearn.metrics import confusion_matrix # feature data (X) and labels (y) iris = datasets.load_iris() X, y = iris.data, iris.target # split data into training and test sets X_train, X_test, y_train, y_test = \ train_test_split(X, y, train_size = 0.70, random_state = 42) # perform the classification step and run a prediction on test set from above clf = svm.SVC(kernel = 'linear', C = 0.01) y_pred = clf.fit(X_train, y_train).predict(X_test) pd.DataFrame({'Prediction': iris.target_names[y_pred], 'Actual': iris.target_names[y_test]}) # accuracy score clf.score(X_test, y_test) # Define a plotting function confusion matrices # (from http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html) import matplotlib.pyplot as plt def plot_confusion_matrix(cm, target_names, title = 'The Confusion Matrix', cmap = plt.cm.YlOrRd): plt.imshow(cm, interpolation = 'nearest', cmap = cmap) plt.tight_layout() # Add feature labels to x and y axes tick_marks = np.arange(len(target_names)) plt.xticks(tick_marks, target_names, rotation=45) plt.yticks(tick_marks, target_names) plt.ylabel('True Label') plt.xlabel('Predicted Label') plt.colorbar() """ Explanation: Evaluating Models Evaluating using metrics <b>Confusion matrix</b> - visually inspect quality of a classifier's predictions (more here) - very useful to see if a particular class is problematic <b>Here, we will process some data, classify it with SVM (see here for more info), and view the quality of the classification with a confusion matrix.</b> End of explanation """ %matplotlib inline cm = confusion_matrix(y_test, y_pred) # see the actual counts print(cm) # visually inpsect how the classifier did of matching predictions to true labels plot_confusion_matrix(cm, iris.target_names) """ Explanation: Numbers in confusion matrix: * on-diagonal - counts of points for which the predicted label is equal to the true label * off-diagonal - counts of mislabeled points End of explanation """ from sklearn.metrics import classification_report # Using the test and prediction sets from above print(classification_report(y_test, y_pred, target_names = iris.target_names)) # Another example with some toy data y_test = ['cat', 'dog', 'mouse', 'mouse', 'cat', 'cat'] y_pred = ['mouse', 'dog', 'cat', 'mouse', 'cat', 'mouse'] # How did our predictor do? print(classification_report(y_test, ___, target_names = ___)) # <-- fill in the blanks """ Explanation: <b>Classification reports</b> - a text report with important classification metrics (e.g. precision, recall) End of explanation """
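To close the loop on the toy example, you can feed the same cat/dog/mouse labels through the confusion-matrix helper defined earlier. This is a small sketch, assuming the toy y_test and y_pred lists from the previous cell are still in scope (scikit-learn orders string labels alphabetically, so we pass the class names explicitly):

import numpy as np
from sklearn.metrics import confusion_matrix

toy_names = ['cat', 'dog', 'mouse']  # alphabetical, matching scikit-learn's default ordering
toy_cm = confusion_matrix(y_test, y_pred, labels=toy_names)
print(toy_cm)
plot_confusion_matrix(toy_cm, toy_names)

Reading the rows: each row is a true class, so the off-diagonal counts show exactly which animals the toy predictor confuses.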
smrjan/seldon-server
python/examples/doc_similarity_reuters.ipynb
apache-2.0
import json import codecs import os docs = [] for filename in os.listdir("reuters-21578-json/data/full"): f = open("reuters-21578-json/data/full/"+filename) js = json.load(f) for j in js: if 'topics' in j and 'body' in j: d = {} d["id"] = j['id'] d["text"] = j['body'].replace("\n","") d["title"] = j['title'] d["tags"] = ",".join(j['topics']) docs.append(d) print "loaded ",len(docs)," documents" """ Explanation: Creating a document similarity microservice for the Reuters-21578 dataset. First download the Reuters-21578 dataset in JSON format into the local folder: bash git clone https://github.com/fergiemcdowall/reuters-21578-json The first step will be to convert this into the default corpus format we use: End of explanation """ from seldon.text import DocumentSimilarity,DefaultJsonCorpus import logging logger = logging.getLogger() logger.setLevel(logging.INFO) corpus = DefaultJsonCorpus(docs) ds = DocumentSimilarity(model_type='gensim_lsi') ds.fit(corpus) print "done" """ Explanation: Create a gensim LSI document similarity model End of explanation """ ds.score() """ Explanation: Run accuracy tests Run a test over the document to compute average jaccard similarity to the 1-nearest neighbour for each document using the "tags" field of the meta data as the ground truth. End of explanation """ ds.score(approx=True) """ Explanation: Run a test again but use the Annoy approximate nearest neighbour index that would have been built. Should be much faster. End of explanation """ query_doc=6023 print "Query doc: ",ds.get_meta(query_doc)['title'],"Tagged:",ds.get_meta(query_doc)['tags'] neighbours = ds.nn(query_doc,k=5,translate_id=True,approx=True) print neighbours for (doc_id,_) in neighbours: j = ds.get_meta(doc_id) print "Doc id",doc_id,j['title'],"Tagged:",j['tags'] """ Explanation: Run single nearest neighbour query Run a nearest neighbour query on a single document and print the title and tag meta data End of explanation """ import seldon rw = seldon.Recommender_wrapper() rw.save_recommender(ds,"reuters_recommender") print "done" """ Explanation: Save recommender Save the recommender to the filesystem in reuters_recommender folder End of explanation """ from seldon.microservice import Microservices m = Microservices() app = m.create_recommendation_microservice("reuters_recommender") app.run(host="0.0.0.0",port=5000,debug=False) """ Explanation: Start a microservice to serve the recommender End of explanation """
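If you are curious what the DocumentSimilarity wrapper is doing under the hood, the same LSI-plus-nearest-neighbour idea can be sketched directly in gensim. The class names below come from gensim's public API, not from the seldon wrapper above, and the tokenization is deliberately naive; treat this as an illustration only:

# Stand-alone LSI similarity sketch in plain gensim, using the `docs` list built above
from gensim import corpora, models, similarities

texts = [d["text"].lower().split() for d in docs[:1000]]  # small subset for a quick test
dictionary = corpora.Dictionary(texts)
bow_corpus = [dictionary.doc2bow(t) for t in texts]
lsi = models.LsiModel(bow_corpus, id2word=dictionary, num_topics=100)
index = similarities.MatrixSimilarity(lsi[bow_corpus])

query = lsi[dictionary.doc2bow(texts[0])]
top5 = sorted(enumerate(index[query]), key=lambda pair: -pair[1])[:5]
print(top5)  # (position in `texts`, cosine similarity) for the five closest documents

The wrapper adds the pieces this sketch leaves out, such as the Annoy approximate index and the Jaccard scoring against the topic tags described above.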
pauliacomi/pyGAPS
docs/examples/inspection.ipynb
mit
# import isotherms %run import.ipynb """ Explanation: General isotherm info Before we start the characterisation, let's have a cursory look at the isotherms. First, make sure the data is imported by running the previous notebook. End of explanation """ isotherms_n2_77k """ Explanation: We know that some of the isotherms are measured with nitrogen at 77 kelvin, but don't know what the samples are. If we evaluate an isotherm, its key details are automatically displayed. End of explanation """ print(isotherms_isosteric[0]) print([isotherm.temperature for isotherm in isotherms_isosteric]) """ Explanation: So we have a mesoporous templated silica, a zeolite, some amorphous silica, a microporous carbon and a common MOF. What about the isotherms which we'll use for isosteric calculations? Let's see what is the sample and what temperatures they are recorded at. We can use the standard print method on an isotherm for some detailed info. End of explanation """ # The second axis range limits are given to the print function isotherms_calorimetry[1].print_info(y2_range=(0,60)) """ Explanation: Let's look at the isotherms that were measured in a combination with microcalorimetry. Besides the loading and pressure points, these isotherms also have a differential enthalpy of adsorption measured for each point. We can use the isotherm.print_info function, which also outputs a graph of the isotherm besides its properties. End of explanation """ # import the characterisation module import pygaps.graphing as pgg pgg.plot_iso( isotherms_iast, # the isotherms branch='ads', # only the adsorption branch lgd_keys=['material','adsorbate'], # the isotherm properties making up the legend ) """ Explanation: For the isotherms which are to be used for IAST calculations, we'd like to plot them on the same graph, with the name of the adsorbate in the legend. For this we can use the more general pygaps.graphing.plot_iso function. End of explanation """
Centre-Alt-Rendiment-Esportiu/att
notebooks/Train_Points_Importer.ipynb
gpl-3.0
!head -10 train_points_import_data/arduino_raw_data.txt """ Explanation: <h1>Train Points Importer</h1> <hr style="border: 1px solid #000;"> <span> <h2> Import Tool for transforming collected hits from Arduino serial port, to ATT readable hit format. </h2> <span> <br> </span> <i>Import points from arduino format</i><br> <br> SOURCE FORMAT:<br> "hit: { [tstamp]:[level] [tstamp]:[level] ... [tstamp]:[level] [side]}"<br> from file: src/arduino/data/[file]<br> <br> <i>To internal format</i><br> <br> TARGET FORMAT:<br> "[x_coord],[y_coord],[tstamp],[tstamp], ... ,[tstamp]"<br> to file: src/python/data/[file]<br> </span> <hr> <h2>Abstract</h2> <br> <span> Let's have a look at the raw data, </span> End of explanation """ !head -10 train_points_import_data/processed_data.csv """ Explanation: <span> We want to have this data in a more standard format, a CSV file for instance. More like this way: </span> End of explanation """ # Import points from arduino format: # # "hit: { [tstamp]:[level] [tstamp]:[level] ... [tstamp]:[level] [side]}" # from file: src/arduino/data/[file] # # To internal format: # "[x_coord],[y_coord],[tstamp],[tstamp], ... ,[tstamp]" # to file: src/python/data/[file] import sys #sys.path.insert(0, '/home/asanso/workspace/att-spyder/att/src/python/') sys.path.insert(0, 'i:/dev/workspaces/python/att-workspace/att/src/python/') import hit.importer.train_points_importer as imp """ Explanation: <span> <h2>Data cleaning and preparation</h2> </span> <br> <span> We are going to create an importer and import the data from the Arduino raw serial into a more readable format, CSV. </span> <br><br> <span> First, set libraries path and import the Importer module </span> <br> End of explanation """ importer = imp.TrainPointsImporter() """ Explanation: <span> Create an Importer instance now: </span> End of explanation """ str_left_input_file = "../src/arduino/data/train_20160129_left.txt" str_left_output_file = "../src/python/data/train_points_20160129_left.txt" """ Explanation: <span> Now it is time to set some paths, in order to point the source raw data files and the target CSV files: </span> End of explanation """ str_right_input_file = "../src/arduino/data/train_20160129_right.txt" str_right_output_file = "../src/python/data/train_points_20160129_right.txt" """ Explanation: <span> That was for the left-side collected hit info. </span> End of explanation """ importer.from_file_to_file(str_left_input_file, str_left_output_file) """ Explanation: <span> That was for the right-side collected hit info. </span> <br> <span> Let's do the left-side import now: </span> End of explanation """ importer.from_file_to_file(str_right_input_file, str_right_output_file) """ Explanation: And the right-side import now: End of explanation """
mdpiper/topoflow-notebooks
initial_snow_depth.ipynb
mit
from cmt.components import SnowEnergyBalance, SnowDegreeDay seb, sdd = SnowEnergyBalance(), SnowDegreeDay() """ Explanation: Initial snow depth in SnowDegreeDay and SnowEnergyBalance Problem: Show that setting initial snow depth h0_snow has no effect in SnowDegreeDay and SnowEnergyBalance. Import the Babel-wrapped SnowEnergyBalance and SnowDegreeDay components and create instances: End of explanation """ seb.initialize('./input/snow_energy_balance-1.cfg') sdd.initialize('./input/snow_degree_day-1.cfg') """ Explanation: Initialize the components with cfg files that, for simplicity, use the same time step and run duration: End of explanation """ time = [sdd.get_current_time()] sdd_snow_depth = [sdd.get_value('snowpack__depth').max()] seb_snow_depth = [seb.get_value('snowpack__depth').max()] print time, sdd_snow_depth, seb_snow_depth """ Explanation: Store initial values of model time and maximum snowpack depth from the two components: End of explanation """ while sdd.get_current_time() < sdd.get_end_time(): seb.update() sdd.update() time.append(sdd.get_current_time()) seb_snow_depth.append(seb.get_value('snowpack__depth').max()) sdd_snow_depth.append(sdd.get_value('snowpack__depth').max()) """ Explanation: Advance both models to the end, saving the model time and maximum snowpack depth values at each step: End of explanation """ print time print seb_snow_depth print sdd_snow_depth """ Explanation: Check the values: End of explanation """ rho_H2O = 1000.0 h_swe = sdd.get_value('snowpack__liquid-equivalent_depth').max() rho_snow = sdd.get_value('snowpack__z_mean_of_mass-per-volume_density').max() print h_swe * (rho_H2O / rho_snow) # sdd h_swe = seb.get_value('snowpack__liquid-equivalent_depth').max() rho_snow = seb.get_value('snowpack__z_mean_of_mass-per-volume_density').max() print h_swe * (rho_H2O / rho_snow) # seb """ Explanation: Here's the key point: the snow depth in each model after the first update is set by the equation on line 506 of snow_base.py. After the first update, the snow depth evolves according to the physics of the component. See: End of explanation """ %matplotlib inline import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.plot(time, seb_snow_depth, 'r', time, sdd_snow_depth, 'b') ax.set_xlabel('Time (s)') ax.set_ylabel('Snowpack depth (m)') ax.set_title('Snowpack depth vs. time') ax.set_xlim((time[0], time[-1])) ax.set_ylim((0.49, 0.51)) """ Explanation: Plot the snow depth time series output from each component: End of explanation """ seb.finalize(), sdd.finalize() """ Explanation: Finalize the components: End of explanation """
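The snowpack__depth expression above is easy to check by hand. With illustrative numbers (not taken from the model run), a liquid-equivalent depth of 0.15 m and a snow density of 300 kg m^-3 give a depth of 0.5 m, which is the kind of SWE-derived value the plot settles on regardless of the h0_snow setting in the cfg files:

# Hand check of depth = h_swe * (rho_H2O / rho_snow) with illustrative values
rho_H2O = 1000.0           # kg m^-3
h_swe_example = 0.15       # m of snow water equivalent (assumed for illustration)
rho_snow_example = 300.0   # kg m^-3 (assumed for illustration)
print(h_swe_example * rho_H2O / rho_snow_example)   # -> 0.5 m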
ecell/ecell4-notebooks
en/tutorials/tutorial07.ipynb
gpl-2.0
%matplotlib inline from ecell4.prelude import * """ Explanation: 7. Introduction of Rule-based Modeling E-Cell4 provides the rule-based modeling environment. End of explanation """ sp1 = Species("A(b^1).B(b^1)") sp2 = Species("A(b^1).A(b^1)") pttrn1 = Species("A") print(count_species_matches(pttrn1, sp1)) # => 1 print(count_species_matches(pttrn1, sp2)) # => 2 """ Explanation: 7.1. count_species_matches First, count_species_matches counts the number of matches between Species. End of explanation """ pttrn1 = Species("A(b)") pttrn2 = Species("A(b^_)") print(count_species_matches(pttrn1, sp1)) # => 0 print(count_species_matches(pttrn2, sp1)) # => 1 """ Explanation: In the above case, Species.count just returns the number of UnitSpecies named A in Species regardless of its sites. To specify the occupancy of a bond: End of explanation """ sp1 = Species("A(b=u)") pttrn1 = Species("A(b)") pttrn2 = Species("A(b=u)") print(count_species_matches(pttrn1, sp1)) # => 1 print(count_species_matches(pttrn2, sp1)) # => 1 """ Explanation: where A(b) suggests that bond b is empty, and A(b^_) does that bond b is occupied. Underscore _ means wildcard here. Similarly, you can also specify the state of sites. End of explanation """ sp1 = Species("A(b=u^1).B(b=p^1)") pttrn1 = Species("A(b=_^_)") # This is equivalent to `A(b^_)` here pttrn2 = Species("_(b^_)") print(count_species_matches(pttrn1, sp1)) # => 1 print(count_species_matches(pttrn2, sp1)) # => 2 """ Explanation: A(b) says nothing about the state, but A(b=u) specifies both state and bond. A(b=u) means that UnitSpecies named A has a site named b which state is u and the bond is empty. Wildcard _ is acceptable even in a state and name. End of explanation """ sp1 = Species("A(b^1).B(b^1)") pttrn1 = Species("_._") pttrn2 = Species("_1._1") print(count_species_matches(pttrn1, sp1)) # => 1 print(count_species_matches(pttrn2, sp1)) # => 0 """ Explanation: Wildcard _ matches anything, and the pattern matched is not consistent between wildcards even in the Species. On the other hand, named wildcards, _1, _2 and so on, confer the consistency within the match. End of explanation """ sp1 = Species("A(b^1).A(b^1)") pttrn1 = Species("_1._1") print(count_species_matches(pttrn1, sp1)) # => 1 """ Explanation: where the first pattern matches in two ways (A.B and B.A), but the second matches nothing. End of explanation """ rr1 = create_unimolecular_reaction_rule(Species("A(p=u)"), Species("A(p=p)"), 1.0) sp1 = Species("A(b^1,p=u).B(b^1)") print(rr1.count([sp1])) # => 1 """ Explanation: 7.2. ReactionRule.count and generate ReactionRule also has a function to count matches agaist the given list of reactants. End of explanation """ print([rr.as_string() for rr in rr1.generate([sp1])]) """ Explanation: ReactionRule.generate returns a list of ReactionRules generated based on the matches. End of explanation """ rr1 = create_binding_reaction_rule(Species("A(b)"), Species("B(b)"), Species("A(b^1).B(b^1)"), 1.0) sp1 = Species("A(b)") sp2 = Species("B(b)") print([rr.as_string() for rr in rr1.generate([sp1, sp2])]) print([rr.as_string() for rr in rr1.generate([sp2, sp1])]) """ Explanation: ReactionRule.generate matters the order of Species in the given list: End of explanation """ sp1 = Species("A(b,c^1).A(b,c^1)") sp2 = Species("B(b,c^1).B(b,c^1)") print(rr1.count([sp1, sp2])) # => 4 print([rr.as_string() for rr in rr1.generate([sp1, sp2])]) """ Explanation: On the current implementation, ReactionRule.generate does not always return a list of unique ReactionRules. 
End of explanation """ print(set([format_species(rr.products()[0]).serial() for rr in rr1.generate([sp1, sp2])])) """ Explanation: ReactionRules listed above look different, but all the products suggest the same. End of explanation """ rr1 = create_unimolecular_reaction_rule(Species("A(p=u^_)"), Species("A(p=p^_)"), 1.0) print([rr.as_string() for rr in rr1.generate([Species("A(p=u^1).B(p^1)")])]) """ Explanation: This is because these ReactionRules are generated based on the diffent matches though they produces the same Species. For details, See the section below. Wildcard is also available in ReactionRule. End of explanation """ rr1 = create_unimolecular_reaction_rule(Species("_(p=u)"), Species("_(p=p)"), 1.0) print([rr.as_string() for rr in rr1.generate([Species("A(p=u)")])]) print([rr.as_string() for rr in rr1.generate([Species("B(b^1,p=u).C(b^1,p=u)")])]) """ Explanation: Of course, a wildcard can work as a name of UnitSpecies. End of explanation """ rr1 = create_unbinding_reaction_rule(Species("AB(a=_1, b=_2)"), Species("B(b=_2)"), Species("A(a=_1)"), 1.0) print([rr.as_string() for rr in rr1.generate([Species("AB(a=x, b=y)")])]) print([rr.as_string() for rr in rr1.generate([Species("AB(a=y, b=x)")])]) """ Explanation: Named wildcards in a state is useful to specify the correspondence between sites. End of explanation """ rr1 = create_binding_reaction_rule(Species("_(b)"), Species("_(b)"), Species("_(b^1)._(b^1)"), 1.0) print(rr1.as_string()) print([rr.as_string() for rr in rr1.generate([Species("A(b)"), Species("A(b)")])]) print([rr.as_string() for rr in rr1.generate([Species("A(b)"), Species("B(b)")])]) """ Explanation: Nameless wildcard _ does not care about equality between matches. Products are generated in order. End of explanation """ rr1 = create_binding_reaction_rule(Species("_1(b)"), Species("_1(b)"), Species("_1(b^1)._1(b^1)"), 1.0) print(rr1.as_string()) print([rr.as_string() for rr in rr1.generate([Species("A(b)"), Species("A(b)")])]) print([rr.as_string() for rr in rr1.generate([Species("A(b)"), Species("B(b)")])]) # => [] """ Explanation: For its symmetry, the former case above is sometimes preffered to have a half of the original kinetic rate. This is because the number of combinations of molecules in the former is $n(n-1)/2$ even though that in the later is $n^2$, where both numbers of A and B molecules are $n$. This is true for gillespie and ode. However, in egfrd and spatiocyte, a kinetic rate is intrinsic one, and not affected by the number of combinations. Thus, in E-Cell4, no modification in the rate is done even for the case. See Homodimerization and Annihilation for the difference between algorithms. In constrast to nameless wildcard, named wildcard keeps its consistency, and always suggests the same value in the ReactionRule. End of explanation """ rr1 = create_binding_reaction_rule(Species("A(b=_1)"), Species("_1(b)"), Species("A(b=_1^1)._1(b^1)"), 1.0) print(rr1.as_string()) print([rr.as_string() for rr in rr1.generate([Species("A(b=B)"), Species("A(b)")])]) # => [] print([rr.as_string() for rr in rr1.generate([Species("A(b=B)"), Species("B(b)")])]) """ Explanation: Named wildcard is consistent even between UnitSpecies' and site's names, technically. End of explanation """ rr1 = create_binding_reaction_rule(Species("A(r)"), Species("A(l)"), Species("A(r^1).A(l^1)"), 1.0) m1 = NetfreeModel() m1.add_reaction_rule(rr1) print(m1.num_reaction_rules()) m2 = NetworkModel() m2.add_reaction_rule(rr1) print(m2.num_reaction_rules()) """ Explanation: 7.3. 
NetfreeModel NetfreeModel is a Model class for the rule-based modeling. The interface of NetfreeModel is almost same with NetworkModel, but takes into account rules and matches. End of explanation """ with reaction_rules(): A(r) + A(l) > A(r^1).A(l^1) | 1.0 m1 = get_model(is_netfree=True) print(repr(m1)) """ Explanation: Python notation explained in 2. How to Build a Model is available too. To get a model as NetfreeModel, set is_netfree True in get_model: End of explanation """ print(len(m2.query_reaction_rules(Species("A(r)"), Species("A(l)")))) # => 1 print(len(m2.query_reaction_rules(Species("A(l,r)"), Species("A(l,r)")))) # => 0 """ Explanation: Model.query_reaction_rules returns a list of ReactionRules agaist the given reactants. NetworkModel just returns ReactionRules based on the equality of Species. End of explanation """ print(len(m1.query_reaction_rules(Species("A(r)"), Species("A(l)")))) # => 1 print(len(m1.query_reaction_rules(Species("A(l,r)"), Species("A(l,r)")))) # => 1 """ Explanation: On the other hand, NetfreeModel genarates the list by applying the stored ReactionRules in the rule-based way. End of explanation """ print(m1.query_reaction_rules(Species("A(l,r)"), Species("A(l,r)"))[0].as_string()) print(m1.query_reaction_rules(Species("A(l,r^1).A(l^1,r)"), Species("A(l,r)"))[0].as_string()) print(m1.query_reaction_rules(Species("A(l,r^1).A(l^1,r)"), Species("A(l,r^1).A(l^1,r)"))[0].as_string()) """ Explanation: NetfreeModel does not cache generated objects. Thus, NetfreeModel.query_reaction_rules is slow, but needs less memory. End of explanation """ with reaction_rules(): _(b) + _(b) == _(b^1)._(b^1) | (1.0, 1.0) m3 = get_model(True) print(m3.num_reaction_rules()) m4 = m3.expand([Species("A(b)"), Species("B(b)")]) print(m4.num_reaction_rules()) print([rr.as_string() for rr in m4.reaction_rules()]) """ Explanation: NetfreeModel.expand expands NetfreeModel to NetworkModel by iteratively applying ReactionRules agaist the given set of Species (called seed). End of explanation """ m2 = m1.expand([Species("A(l, r)")], 100, {Species("A"): 4}) print(m2.num_reaction_rules()) # => 4 print([rr.as_string() for rr in m2.reaction_rules()]) """ Explanation: To avoid the infinite iteration for expansion, you can limit the maximum number of iterations and of UnitSpecies in a Species. End of explanation """ sp1 = Species("A(b^1).A(b^1)") sp2 = Species("A(b)") rr1 = create_unbinding_reaction_rule(sp1, sp2, sp2, 1.0) print(count_species_matches(sp1, sp1)) print([rr.as_string() for rr in rr1.generate([sp1])]) """ Explanation: 7.4. Differences between Species, ReactionRule and NetfreeModel The functions explained above is similar, but there are some differences in the criteria. End of explanation """ sp1 = Species("A(b^1).B(b^1)") rr1 = create_unbinding_reaction_rule( sp1, Species("A(b)"), Species("B(b)"), 1.0) sp2 = Species("A(b^1,c^2).A(b^3,c^2).B(b^1).B(b^3)") print(count_species_matches(sp1, sp2)) print([rr.as_string() for rr in rr1.generate([sp2])]) """ Explanation: Though count_species_matches suggests two different ways for matching (left-right and right-left), ReactionRule.generate returns only one ReactionRule because the order doesn't affect the product. End of explanation """ m1 = NetfreeModel() m1.add_reaction_rule(rr1) print([rr.as_string() for rr in m1.query_reaction_rules(sp2)]) """ Explanation: In this case, ReactionRule.generate works similarly with count_species_matches. 
However, NetfreeModel.query_reaction_rules returns only one ReationRule with higher kinetic rate: End of explanation """ sp1 = Species("A(b)") sp2 = Species("B(b)") rr1 = create_binding_reaction_rule(sp1, sp2, Species("A(b^1).B(b^1)"), 1.0) m1 = NetfreeModel() m1.add_reaction_rule(rr1) print([rr.as_string() for rr in rr1.generate([sp1, sp2])]) print([rr.as_string() for rr in m1.query_reaction_rules(sp1, sp2)]) print([rr.as_string() for rr in rr1.generate([sp2, sp1])]) # => [] print([rr.as_string() for rr in m1.query_reaction_rules(sp2, sp1)]) """ Explanation: NetfreeModel.query_reaction_rules checks if each ReactionRule generated is the same with others, and summalizes it if possible. As explaned above, ReactionRule.generate matters the order of Species, but Netfree.query_reaction_rules does not. End of explanation """ sp1 = Species("_(b)") sp2 = Species("_1(b)") sp3 = Species("A(b)") sp4 = Species("B(b)") rr1 = create_binding_reaction_rule(sp1, sp1, Species("_(b^1)._(b^1)"), 1) rr2 = create_binding_reaction_rule(sp2, sp2, Species("_1(b^1)._1(b^1)"), 1) print(count_species_matches(sp1, sp2)) # => 1 print(count_species_matches(sp1, sp3)) # => 1 print(count_species_matches(sp2, sp2)) # => 1 print(count_species_matches(sp2, sp3)) # => 1 print([rr.as_string() for rr in rr1.generate([sp3, sp3])]) print([rr.as_string() for rr in rr1.generate([sp3, sp4])]) print([rr.as_string() for rr in rr2.generate([sp3, sp3])]) print([rr.as_string() for rr in rr2.generate([sp3, sp4])]) # => [] """ Explanation: Named wildcards must be consistent in the context while nameless wildcards are not necessarily consistent. End of explanation """
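As one more illustration of how seed species shape rule generation, you can reuse the reversible wildcard rule stored in m3 (from section 7.3) and expand it with different seeds. This is a small sketch built only from functions already introduced above; expanding with A(b) alone cannot produce any B-containing species, so the resulting network is smaller:

# Seed-dependence of NetfreeModel.expand, reusing m3 from section 7.3
m_single = m3.expand([Species("A(b)")])
m_pair = m3.expand([Species("A(b)"), Species("B(b)")])
print(m_single.num_reaction_rules())
print(m_pair.num_reaction_rules())
print([rr.as_string() for rr in m_single.reaction_rules()])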
linsalrob/PyFBA
iPythonNotebooks/From_functional_roles_to_gap-filling.ipynb
mit
import sys import copy import PyFBA modeldata = PyFBA.parse.model_seed.parse_model_seed_data('gramnegative', verbose=True) """ Explanation: How to gap-fill a genome scale metabolic model Getting started Installing libraries Before you start, you will need to install a couple of libraries: The PyFBA library has detailed installation instructions. Don't be scared, its mostly just pip install. (Optional) Also, get the SEED Servers as you can get a lot of information from them. You can install the git python repo from github. Make sure that the SEED_Servers_Python is in your PYTHONPATH. We start with importing some modules that we are going to use. We import sys so that we can use standard out and standard error if we have some error messages.<br> We import copy so that we can make a deep copy of data structures for later comparisons.<br> Then we import the PyFBA module to get started. End of explanation """ assigned_functions = PyFBA.parse.read_assigned_functions("citrobacter.assigned_functions") roles = set([i[0] for i in [list(j) for j in assigned_functions.values()]]) print("There are {} unique roles in this genome".format(len(roles))) """ Explanation: Running a basic model The SBML model is great if you have built a model elsewhere, but what if you want to build a model from a genome? We typically start with an assigned_functions file from RAST. The easiest way to find that is in the RAST directory by choosing Genome Directory from the Downloads menu on the job details page. For this example, here is an assigned_functions file from our Citrobacter model that you can download to the same directory as this iPython notebook. Notice that it has two columns: the first column is the protein ID (using SEED standard IDs that start with fig|, and then have the taxonomy ID and version number of the genome, and then peg to indicate protein encoding gene, rna to indicate RNA, crispr_spacer to indicate crispr spacers or other acronym, followed by the feature number. After the tab is the functional role of that feature. Download that file to use in this test. We start by converting this assigned_functions file to a list of reactions. End of explanation """ roles_to_reactions = PyFBA.filters.roles_to_reactions(roles, organism_type="Gram_Negative", verbose=True) reactions_to_run = set() for role in roles_to_reactions: reactions_to_run.update(roles_to_reactions[role]) print(f"There are {len(reactions_to_run)} unique reactions associated with this genome") with open("evaluated_reactions.txt", "w") as out: for r in reactions_to_run: out.write(f"{r}\n") print(f"roles: {len(roles)} r2r: {len(roles_to_reactions)} r2run: {len(reactions_to_run)}") """ Explanation: Convert those roles to reactions. We start with a dict of roles and reactions, but we only need a list of unique reactions, so we convert the keys to a set. End of explanation """ modeldata = PyFBA.parse.model_seed.parse_model_seed_data('gramnegative', verbose=True) print(f"There are {len(modeldata.compounds):,} compounds, {len(modeldata.reactions):,} reactions, and {len(modeldata.enzymes):,} enzymes in total") # for known in ['rxn05514', 'rxn05541', 'rxn01103', 'rxn05533', 'rxn09137', 'rxn00837']: # modeldata.reactions[known].lower_bound = -1000 # modeldata.reactions[known].upper_bound = 1000 """ Explanation: Read all the reactions and compounds in our database We read all the reactions, compounds, and enzymes in the ModelSEEDDatabase into three data structures. 
Each one is a dictionary with a string representation of the object as the key and the PyFBA object as the value. We modify the reactions specifically for Gram negative models (there are also options for Gram positive models, Mycobacterial models, general microbial models, and plant models). End of explanation """ tempset = set() for r in reactions_to_run: if r in modeldata.reactions: tempset.add(r) else: print(f"Reaction ID {r} is not in our reactions list. Skipped", file=sys.stderr) reactions_to_run = tempset """ Explanation: Update reactions to run, making sure that all reactions are in the list! There are some reactions that come from functional roles that do not appear in the reactions list. We're working on tracking these down, but for now we just check that all reaction IDs in reactions_to_run are in reactions, too. End of explanation """ media = PyFBA.parse.pyfba_media("ArgonneLB") media = PyFBA.parse.correct_media_names(media, modeldata.compounds) print(f"Our media has {len(media)} components") """ Explanation: Test whether these reactions grow on ArgonneLB media We can test whether this set of reactions grows on ArgonneLB media. The media is the same one we used above, and you can download the ArgonneLB.txt and text file and put it in the same directory as this iPython notebook to run it. (Note: we don't need to convert the media components, because the media and compounds come from the same source.) End of explanation """ biomass_equation = PyFBA.metabolism.biomass_equation('gramnegative') """ Explanation: Define a biomass equation The biomass equation is the part that says whether the model will grow! This is a metabolism.reaction.Reaction object. End of explanation """ status, value, growth = PyFBA.fba.run_fba(modeldata, reactions_to_run, media, biomass_equation) print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth)) """ Explanation: Run the FBA With the reactions, compounds, reactions_to_run, media, and biomass model, we can test whether the model grows on this media. End of explanation """ sbml_reactions = set() with open('sbml_reactions.txt', 'r') as f: for l in f: if l.startswith('rxn'): sbml_reactions.add(l.strip()) status, value, growth = PyFBA.fba.run_fba(modeldata, reactions_to_run.union(sbml_reactions), media, biomass_equation) print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth)) # these are the reactions we are looking for missing_reactions = sbml_reactions.difference(reactions_to_run) print(f"There are {len(missing_reactions)} missing reactions yet to find") """ Explanation: Can we gap fill it to success?? This is the set of reactions that we need to add. End of explanation """ added_reactions = [] original_reactions_to_run = copy.copy(reactions_to_run) """ Explanation: Gap-fill the model Since the model does not grow on ArgonneLB we need to gap-fill it to ensure growth. There are several ways that we can gap-fill, and we will work through them until we get growth. As you will see, we update the reactions_to_run list each time, and keep the media and everything else consistent. Then we just need to run the FBA like we have done above and see if we get growth. We also keep a copy of the original reactions_to_run, and a list with all the reactions that we are adding, so once we are done we can go back and bisect the reactions that are added. 
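To make the later trimming step less mysterious, here is a toy version of the idea in plain Python. The grows callable is a hypothetical stand-in for an FBA growth test; PyFBA's own minimize_additional_reactions does the same job far more efficiently with a bisection strategy, so this sketch is only for intuition:

def keep_only_required(base_reactions, candidates, grows):
    """Drop every candidate reaction that the model can grow without (toy version)."""
    required = set(candidates)
    for r in sorted(required):
        if grows(base_reactions | (required - {r})):
            required.discard(r)   # growth survives without r, so r is not needed
    return required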
End of explanation """ essential_reactions = PyFBA.gapfill.suggest_essential_reactions() for r in essential_reactions: modeldata.reactions[r].reset_bounds() added_reactions.append(("essential", essential_reactions)) print(f"Before updating we have {len(reactions_to_run)} reactions, and after updating we have ", end="") reactions_to_run.update(essential_reactions) print(f"{len(reactions_to_run)} reactions") status, value, growth = PyFBA.fba.run_fba(modeldata, reactions_to_run, media, biomass_equation) print(f"The biomass reaction has a flux of {value} --> Growth: {growth}") print(f"There are still {len(missing_reactions.difference(reactions_to_run))} reactions we have not found") """ Explanation: Essential reactions There are ~100 reactions that are in every model we have tested, and we construe these to be essential for all models, so we typically add these next! End of explanation """ linked_reactions = PyFBA.gapfill.suggest_linked_reactions(modeldata, reactions_to_run, verbose=True) for r in linked_reactions: modeldata.reactions[r].reset_bounds() added_reactions.append(("linked_reactions", linked_reactions)) print(f"Before updating we have {len(reactions_to_run)} reactions, and after updating we have ", end="") reactions_to_run.update(linked_reactions) print(f"{len(reactions_to_run)} reactions") status, value, growth = PyFBA.fba.run_fba(modeldata, reactions_to_run, media, biomass_equation) print(f"The biomass reaction has a flux of {value} --> Growth: {growth}") print(f"There are still {len(missing_reactions.difference(reactions_to_run))} reactions we have not found") """ Explanation: Linked Reactions The ModelSEED has a notion of linked reactions that do things together. Here we add all of the linked reactions. End of explanation """ ecnumber_reactions = PyFBA.gapfill.suggest_reactions_using_ec(roles, modeldata, reactions_to_run, verbose=True) for r in ecnumber_reactions: modeldata.reactions[r].reset_bounds() added_reactions.append(("ec_numbers_brief", ecnumber_reactions)) print(f"Before updating we have {len(reactions_to_run)} reactions, and after updating we have ", end="") reactions_to_run.update(ecnumber_reactions) print(f"{len(reactions_to_run)} reactions") status, value, growth = PyFBA.fba.run_fba(modeldata, reactions_to_run, media, biomass_equation) print(f"The biomass reaction has a flux of {value} --> Growth: {growth}") print(f"There are still {len(missing_reactions.difference(reactions_to_run))} reactions we have not found") """ Explanation: EC Numbers Make sure we have added all the EC numbers that we know about from our roles! End of explanation """ media_reactions = PyFBA.gapfill.suggest_from_media(modeldata, reactions_to_run, media, verbose=True) for r in media_reactions: modeldata.reactions[r].reset_bounds() added_reactions.append(("media", media_reactions)) print(f"Before updating we have {len(reactions_to_run)} reactions, and after updating we have ", end="") reactions_to_run.update(media_reactions) print(f"{len(reactions_to_run)} reactions") status, value, growth = PyFBA.fba.run_fba(modeldata, reactions_to_run, media, biomass_equation) print(f"The biomass reaction has a flux of {value} --> Growth: {growth}") print(f"There are still {len(missing_reactions.difference(reactions_to_run))} reactions we have not found") """ Explanation: Media import reactions We need to make sure that the cell can import everything that is in the media... otherwise it won't be able to grow. Be sure to only do this step if you are certain that the cell can grow on the media you are testing. 
End of explanation """ reactions_from_other_orgs = PyFBA.gapfill.suggest_from_roles("closest.genomes.roles", modeldata.reactions) for r in reactions_from_other_orgs: modeldata.reactions[r].reset_bounds() added_reactions.append(("close genomes", reactions_from_other_orgs)) print(f"Before updating we have {len(reactions_to_run)} reactions, and after updating we have ", end="") reactions_to_run.update(reactions_from_other_orgs) print(f"{len(reactions_to_run)} reactions") status, value, growth = PyFBA.fba.run_fba(modeldata, reactions_to_run, media, biomass_equation) print(f"The biomass reaction has a flux of {value} --> Growth: {growth}") print(f"There are still {len(missing_reactions.difference(reactions_to_run))} reactions we have not found") """ Explanation: Reactions from closely related organisms We also gap-fill on closely related organisms. We assume that an organism is most likely to have reactions in its genome that are similar to those in closely related organisms. You will need to download the closest.genomes.roles file End of explanation """ if not growth: subsystem_reactions = \ PyFBA.gapfill.suggest_reactions_from_subsystems(modeldata.reactions, reactions_to_run, threshold=0.5) for r in subsystem_reactions: modeldata.reactions[r].reset_bounds() added_reactions.append(("subsystems", subsystem_reactions)) print(f"Before updating we have {len(reactions_to_run)} reactions, and after updating we have ", end="") reactions_to_run.update(subsystem_reactions) print(f"{len(reactions_to_run)} reactions") status, value, growth = PyFBA.fba.run_fba(modeldata, reactions_to_run, media, biomass_equation) print(f"The biomass reaction has a flux of {value} --> Growth: {growth}") print(f"There are still {len(missing_reactions.difference(reactions_to_run))} reactions we have not found") """ Explanation: Subsystems The reactions connect us to subsystems (see Overbeek et al. 2014), and this test ensures that all the subsystems are complete. We add reactions required to complete the subsystem. End of explanation """ if not growth: ecnumber_reactions = PyFBA.gapfill.suggest_reactions_using_ec(roles, modeldata, reactions_to_run, maxnumrx=0, verbose=True) for r in ecnumber_reactions: modeldata.reactions[r].reset_bounds() added_reactions.append(("ec_numbers_full", ecnumber_reactions)) # lets try limiting these so we don't add everything lec = PyFBA.gapfill.limit_reactions_by_compound(modeldata.reactions, reactions_to_run, ecnumber_reactions, 50) print(f"Before limiting we wanted to add {len(ecnumber_reactions)}, and after, we wanted to add {len(lec)} reactions") print(f"Before updating we have {len(reactions_to_run)} reactions, and after updating we have ", end="") reactions_to_run.update(ecnumber_reactions) print(f"{len(reactions_to_run)} reactions") status, value, growth = PyFBA.fba.run_fba(modeldata, reactions_to_run, media, biomass_equation) print(f"The biomass reaction has a flux of {value} --> Growth: {growth}") print(f"There are still {len(missing_reactions.difference(reactions_to_run))} reactions we have not found") """ Explanation: Revisit EC Numbers When we added the EC numbers before, we were a little conservative, only adding those EC numbers that appeared in two or less (by default) reactions. If we get here, lets be aggressive and add any EC number regardless of how many reactions we add. 
We set the maxnumrx variable to 0 End of explanation """ if not growth: linked_reactions = PyFBA.gapfill.suggest_linked_reactions(modeldata, reactions_to_run, verbose=True) for r in linked_reactions: modeldata.reactions[r].reset_bounds() added_reactions.append(("linked_reactions_addition", linked_reactions)) print(f"Before updating we have {len(reactions_to_run)} reactions, and after updating we have ", end="") reactions_to_run.update(linked_reactions) print(f"{len(reactions_to_run)} reactions") status, value, growth = PyFBA.fba.run_fba(modeldata, reactions_to_run, media, biomass_equation) print(f"The biomass reaction has a flux of {value} --> Growth: {growth}") print(f"There are still {len(missing_reactions.difference(reactions_to_run))} reactions we have not found") """ Explanation: Linked Reactions We revist linked reactions once more, because now we have many more reactions in our set to run! End of explanation """ if not growth: orphan_reactions = PyFBA.gapfill.suggest_by_compound(modeldata, reactions_to_run, max_reactions=1) # lets try limiting these so we don't add everything lor = PyFBA.gapfill.limit_reactions_by_compound(modeldata.reactions, reactions_to_run, orphan_reactions, 50) print(f"Before limiting we wanted to add {len(orphan_reactions)}, and after, we wanted to add {len(lor)} reactions") for r in orphan_reactions: modeldata.reactions[r].reset_bounds() added_reactions.append(("orphans", orphan_reactions)) print(f"Before updating we have {len(reactions_to_run)} reactions, and after updating we have ", end="") reactions_to_run.update(orphan_reactions) print(f"{len(reactions_to_run)} reactions") status, value, growth = PyFBA.fba.run_fba(modeldata, reactions_to_run, media, biomass_equation) print(f"The biomass reaction has a flux of {value} --> Growth: {growth}") print(f"There are still {len(missing_reactions.difference(reactions_to_run))} reactions we have not found") """ Explanation: Orphan compounds Orphan compounds are those compounds which are only associated with one reaction. They are either produced, or trying to be consumed. We need to add reaction(s) that complete the network of those compounds. You can change the maximum number of reactions that a compound is in to be considered an orphan (try increasing it to 2 or 3). 
End of explanation """ reqd_additional = set() print(f"Before we begin, our original reactions were {len(original_reactions_to_run)}") # Begin loop through all gap-filled reactions while added_reactions: ori = copy.copy(original_reactions_to_run) ori.update(reqd_additional) # Test next set of gap-filled reactions # Each set is based on a method described above how, new = added_reactions.pop() sys.stderr.write(f"Testing reactions from {how}\n") # Get all the other gap-filled reactions we need to add for tple in added_reactions: ori.update(tple[1]) # Use minimization function to determine the minimal # set of gap-filled reactions from the current method new_essential = PyFBA.gapfill.minimize_additional_reactions(ori, new, modeldata, media, biomass_equation, verbose=True) sys.stderr.write(f"Saved {len(new_essential)} reactions from {how}\n") # Record the method used to determine # how the reaction was gap-filled for new_r in new_essential: modeldata.reactions[new_r].is_gapfilled = True modeldata.reactions[new_r].gapfill_method = how reqd_additional.update(new_essential) # Combine old and new reactions all_reactions = original_reactions_to_run.union(reqd_additional) with open('gapfilled_reactions.txt', 'w') as out: for a in all_reactions: out.write(f"{a}\n") print(f"After completing reaction trimming, we have {len(all_reactions)} reactions") status, value, growth = PyFBA.fba.run_fba(modeldata, all_reactions, media, biomass_equation) print(f"The biomass reaction has a flux of {value} --> Growth: {growth}") """ Explanation: Trimming the model Now that the model has been shown to grow on ArgonneLB media after several gap-fill iterations, we should trim down the reactions to only the required reactions necessary to observe growth. In this example, we start removing the additional reactions from the last added (orphans) and bisect each set, trying to find the miniumum number of reactions that will ensure growth. We strongly recommend enabling verbose output on the PyFBA.gapfill.minimize_additional_reactions function, as it will demonstrate the power of O(log n) functions End of explanation """ probable_reactions = PyFBA.gapfill.compound_probability(reactions, reactions_to_run, cutoff=0, rxn_with_proteins=True) """ Explanation: Other gap-filling techniques Besides those methods we have described above, listed here are other methods that can be used to gap-fill your model. This list will continue to grow over time as we create new techniques to identify reactions and compounds that should be added to your model. Probable reactions Probable reactions are those reactions whose probability is based on whether there is a protein associated with the reaction and if the reaction's compounds are currently present in the model. Above a certain probability threshold, those reactions will be added to the model. End of explanation """
NirantK/deep-learning-practice
03-InitRNN/dlnd_tv_script_generation.ipynb
apache-2.0
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper data_dir = './data/simpsons/moes_tavern_lines.txt' text = helper.load_data(data_dir) # Ignore notice, since we don't use it for analysing the data text = text[81:] """ Explanation: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. Feedback Here is the feedback for this submission from the Udacity Reviewer: https://review.udacity.com/#!/reviews/468101/shared End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation """ import numpy as np import problem_unittests as tests def create_lookup_tables(text): """ Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) """ vocab = set(text) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32) # print(int_to_vocab) return vocab_to_int, int_to_vocab """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_create_lookup_tables(create_lookup_tables) """ Explanation: Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab) End of explanation """ def token_lookup(): """ Generate a dict to turn punctuation into a token. 
:return: Tokenize dictionary where the key is the punctuation and the value is the token """ return { '!': '||Exclamation||', ',': '||Comma||', '.': '||Period||', ';': '||Semicolon||', '(': '||L-Parenthesis||', ')': '||R-Parenthesis||', '"': '||Quotation||', '?': '||Question||', '\n':'||NewLine||', '--':'||Dash||' } """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_tokenize(token_lookup) """ Explanation: Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||". End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import numpy as np import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Build the Neural Network You'll build the components necessary to build a RNN by implementing the following functions below: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches Check the Version of TensorFlow and Access to GPU End of explanation """ def get_inputs(): """ Create TF Placeholders for input, targets, and learning rate. 
:return: Tuple (input, targets, learning rate) """ inputs = tf.placeholder(tf.int32, shape=[None, None], name='input') targets = tf.placeholder(tf.int32, shape=[None, None], name='targets') learning_rate = tf.placeholder(tf.float32, shape = None, name='learning_rate') return inputs, targets, learning_rate """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_inputs(get_inputs) """ Explanation: Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate) End of explanation """ def get_init_cell(batch_size, rnn_size, num_layers = 2): """ Create an RNN Cell and initialize it. :param batch_size: Size of batches :param rnn_size: Size of RNNs :param num_layers: Number of Layers, added by me at reviewer's suggestion :return: Tuple (cell, initialize state) """ rnn = tf.contrib.rnn.BasicLSTMCell(rnn_size) cell = tf.contrib.rnn.MultiRNNCell([rnn] * num_layers) initial_state = cell.zero_state(batch_size, tf.float32) initial_state = tf.identity(initial_state, name = 'initial_state') return cell, initial_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_init_cell(get_init_cell) """ Explanation: Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState) End of explanation """ # Question 1. What difference does it make if I use truncated_normal v/s random_uniform? # Question 2. What difference does it make if I use the embed_sequence from tf.contrib.layers vs above approach? # I could not observe difference with respect to the model and hence the questions def get_embed(input_data, vocab_size, embed_dim): """ Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. """ # embedding = tf.Variable(tf.truncated_normal((vocab_size, embed_dim), -1, 1)) # embedding = tf.nn.embedding_lookup(embedding, input_data) embedding = tf.contrib.layers.embed_sequence(ids=input_data, vocab_size=vocab_size, embed_dim=embed_dim) return embedding """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_embed(get_embed) """ Explanation: Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence. End of explanation """ def build_rnn(cell, inputs): """ Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) final_state = tf.identity(final_state, name='final_state') return outputs, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_rnn(build_rnn) """ Explanation: Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. 
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()

Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    embedding = get_embed(input_data, vocab_size, embed_dim)
    outputs, final_state = build_rnn(cell, embedding)
    weights = tf.truncated_normal_initializer(dtype=tf.float32, stddev=0.1)
    biases = tf.zeros_initializer()
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, None, None,
                                               weights_initializer=weights,
                                               biases_initializer=biases)
    return logits, final_state

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.

Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # num_batches = int(len(int_text) / (batch_size * seq_length))
    num_batches = len(int_text) // (batch_size * seq_length)  # more pythonic, enforces integer result instead of float
    num_words = num_batches * batch_size * seq_length

    input_data = np.array(int_text[:num_words])
    target_data = np.array(int_text[1:num_words+1])

    input_data = input_data.reshape(batch_size, -1)
    target_data = target_data.reshape(batch_size, -1)

    input_batches = np.split(input_data, num_batches, 1)
    target_batches = np.split(target_data, num_batches, 1)

    return np.array(list(zip(input_batches, target_batches)))

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]

If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
  # First Batch
  [
    # Batch of Input
    [[ 1  2  3], [ 7  8  9]],
    # Batch of targets
    [[ 2  3  4], [ 8  9 10]]
  ],

  # Second Batch
  [
    # Batch of Input
    [[ 4  5  6], [10 11 12]],
    # Batch of targets
    [[ 5  6  7], [11 12 13]]
  ]
]
```
End of explanation
"""
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Sequence Length
seq_length = 11
# https://stats.stackexchange.com/questions/158834/what-is-a-feasible-sequence-length-for-an-rnn-to-model
# reviewer suggested seq_length be set to the average sentence length, i.e. 11
# Embedding Dimension Size
embed_dim = rnn_size // 2
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:

Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of the sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches after which the neural network should print progress.

How to select batch size v/s number of epochs?
"When you put m examples in a minibatch, you need to do O(m) computation and use O(m) memory, but you reduce the amount of uncertainty in the gradient by a factor of only O(sqrt(m)). In other words, there are diminishing marginal returns to putting more examples in the minibatch." - Ian Goodfellow, from https://stats.stackexchange.com/questions/164876/tradeoff-batch-size-vs-number-of-iterations-to-train-a-neural-network and Chapter 8 (Optimization) of the Deep Learning book.
End of explanation
"""
""" DON'T MODIFY ANYTHING IN THIS CELL """
from tensorflow.contrib import seq2seq

train_graph = tf.Graph()
with train_graph.as_default():
    vocab_size = len(int_to_vocab)
    input_text, targets, lr = get_inputs()
    input_data_shape = tf.shape(input_text)
    cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
    logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)

    # Probabilities for generating words
    probs = tf.nn.softmax(logits, name='probs')

    # Loss function
    cost = seq2seq.sequence_loss(
        logits,
        targets,
        tf.ones([input_data_shape[0], input_data_shape[1]]))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(lr)

    # Gradient Clipping
    gradients = optimizer.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
    train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved') """ Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) """ Explanation: Save Parameters Save seq_length and save_dir for generating a new TV script. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() """ Explanation: Checkpoint End of explanation """ def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ input_tensor = loaded_graph.get_tensor_by_name('input:0') initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0') final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0') probs_tensor = loaded_graph.get_tensor_by_name('probs:0') return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_tensors(get_tensors) """ Explanation: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) End of explanation """ def pick_word(probabilities, int_to_vocab): """ Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word """ # strategy one: random choice from all idx = np.random.choice(len(probabilities), 30, p=probabilities)[0] picked_word = int_to_vocab[idx] # # strategy two: max probability every time # idx_argmax = np.argmax(probabilities) # picked_word = int_to_vocab[idx_argmax] return picked_word """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_pick_word(pick_word) """ Explanation: Choose Word Implement the pick_word() function to select the next word using probabilities. 
End of explanation """ gen_length = 250 # homer_simpson, moe_szyslak, or Barney_Gumble prime_word = 'homer_simpson' """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = [prime_word + ':'] prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script) """ Explanation: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. End of explanation """
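As a hedged aside (not part of the original submission), a third sampling strategy that sits between the two strategies shown in pick_word above is temperature sampling, which sharpens or flattens the predicted distribution before drawing a word:

```python
# Illustrative sketch: temperature-scaled sampling as an alternative to the
# pure weighted-random and argmax strategies shown in pick_word above.
def pick_word_with_temperature(probabilities, int_to_vocab, temperature=0.5):
    scaled = np.log(probabilities + 1e-10) / temperature  # avoid log(0)
    scaled = np.exp(scaled) / np.sum(np.exp(scaled))      # renormalize
    idx = np.random.choice(len(scaled), p=scaled)
    return int_to_vocab[idx]
```

Lower temperatures push the choice toward the argmax behaviour, higher temperatures toward a flatter, more random choice, and a temperature of 1 reproduces the original distribution.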
gregmedlock/Medusa
docs/io.ipynb
mit
import medusa from pickle import load with open("../medusa/test/data/Staphylococcus_aureus_ensemble.pickle", 'rb') as infile: ensemble = load(infile) """ Explanation: Input and output Currently, the only supported approach for loading and saving ensembles in medusa is via pickle. pickle is the Python module that serializes and de-serializes Python objects (i.e. converts to/from a binary representation). This is an intentional design choice--as medusa matures, we will identify a feasible route for standardization through an extension to the Systems Biology Markup Language (SBML), which is the de facto standard for sharing genome-scale metabolic network reconstructions. To load an ensemble, use the load function from the pickle module: End of explanation """ save_dir = ("../medusa/test/data/Staphylococcus_aureus_repickled.pickle") ensemble.to_pickle(save_dir) """ Explanation: To save an ensemble, you can pickle it with: End of explanation """
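As a quick sanity check (an editorial addition, not part of the original page), the file written by to_pickle can be read back with the same pickle pattern used at the top:

```python
# Round-trip check: reload the ensemble that to_pickle just wrote to disk.
with open(save_dir, 'rb') as infile:
    reloaded_ensemble = load(infile)
print(type(reloaded_ensemble))
```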
gfeiden/Notebook
Projects/ngc2516_spots/possible_binaries.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt from scipy.interpolate import interp1d import numpy as np ngc2516 = np.genfromtxt('data/ngc2516_Christophe_v3.dat') # data for this study from J&J (2012) irwin01 = np.genfromtxt('data/irwin2007.phot') # data from Irwin+ (2007) """ Explanation: Possible Binaries among Low-Mass Stars in NGC 2516 Exploring how the data lines up against the study of Irwin et al. (2007), who identified a binary sequence in various CMDs. End of explanation """ fig, ax = plt.subplots(1, 2, figsize=(16.0, 8.0)) for axis in ax: axis.set_xlabel('$(V - I_C)$', fontsize=20.) axis.tick_params(which='major', axis='both', length=10., labelsize=16.) ax[0].set_ylabel('$V$', fontsize=20.) ax[1].set_ylabel('$I_C$', fontsize=20.) ax[0].set_ylim(23., 15.) ax[1].set_ylim(19.5, 14.) # approximate "binary" sequence borders from Irwin+ (2007) binary_domain = np.array([ 1.4, 1.8, 2.1, 2.4, 2.6, 3.0, 3.1, 3.3, 3.6]) binary_range = np.array([15.8, 16.4, 17.1, 17.9, 18.5, 19.8, 20.2, 20.9, 21.9]) # testing curve smoothing icurve = interp1d(binary_domain, binary_range, kind='slinear') binary_domain = np.arange(1.4, 3.6, 0.02) binary_range = icurve(binary_domain) # zero point corrections DeltaI = 0.080 - 7.6e-3*irwin01[:,8] DeltaVI = 0.300 - 0.153*(irwin01[:,7] - irwin01[:,8]) binary_domain = binary_domain + (0.300 - 0.153*binary_domain) ax[0].plot((irwin01[:,7] - irwin01[:,8] + DeltaVI), irwin01[:,7], 'o', lw=2, ms=5.0, c='#999999') ax[0].plot((ngc2516[:,1] - ngc2516[:,2]), ngc2516[:,1], 'o', lw=2, ms=8.0, c='#008B8B') ax[0].plot(binary_domain, binary_range, '-', lw=3, dashes=(20., 20.), c='#800000') ax[1].plot((irwin01[:,7] - irwin01[:,8] + DeltaVI), irwin01[:,8] + DeltaI, 'o', lw=2, ms=5.0, c='#999999') ax[1].plot((ngc2516[:,1] - ngc2516[:,2]), ngc2516[:,2], 'o', lw=2, ms=8.0, c='#008B8B') """ Explanation: Plot $(V - I_C)$ CMDs for both data sets. Jackson, Jeffries, & Maxted (2009) apply a correction to the Irwin I-band data to homogenize zero points between the two studies. This correction takes the form $\Delta I = 0.080-0.0075\cdot I$. It was also noted in Jackson & Jeffries (2012) that a separate correction was made to colors, $\Delta(V-I_C) = 0.300 - 0.153\cdot(V-I_C)$. End of explanation """ # redefine binary sequence icurve = interp1d(binary_domain, binary_range, kind='slinear') # trim the dataset to reasonable limits irwin01 = np.column_stack((irwin01, DeltaVI)) ngc2516_new = np.array([star for star in ngc2516 if 1.4 < (star[1]-star[2]) < 3.33]) irwin01_new = np.array([star for star in irwin01 if 1.4 < (star[7]-star[8]+star[-1]) < 3.33]) # interpolate data points onto the binary curve ngc2516_binary_expected = icurve(ngc2516_new[:,1] - ngc2516_new[:,2]) irwin01_binary_expected = icurve(irwin01_new[:,7] - irwin01_new[:,8] + irwin01_new[:,-1]) # calculate differences ngc2516_diff = ngc2516_binary_expected - ngc2516_new[:,1] irwin01_diff = irwin01_binary_expected - irwin01_new[:,7] # plot differences fig, ax = plt.subplots(2, 1, figsize=(8.0, 8.0), sharex=True) ax[1].set_xlabel('$(V - I_C)$', fontsize=20.) for axis in ax: axis.set_xlim(1.4, 3.5) axis.set_ylabel('$\\Delta V$', fontsize=20.) axis.tick_params(which='major', axis='both', length=10., labelsize=16.) 
# showing only the Irwin data ax[0].plot([1.4, 3.6], [0.0, 0.0], '-', lw=3, dashes=(20., 20.), c='#800000') ax[0].plot((irwin01_new[:,7] - irwin01_new[:,8] + irwin01_new[:,-1]), irwin01_diff, 'o', ms=5.0, c='#999999') # now both data sets ax[1].plot([1.4, 3.6], [0.0, 0.0], '-', lw=3, dashes=(20., 20.), c='#800000') ax[1].plot((irwin01_new[:,7] - irwin01_new[:,8] + irwin01_new[:,-1]), irwin01_diff, 'o', ms=5.0, c='#999999') ax[1].plot((ngc2516_new[:,1] - ngc2516_new[:,2]), ngc2516_diff, 'o', ms=8.0, c='#008B8B') fig.tight_layout() """ Explanation: While the binary sequence definition is tenuous, it does indicate that several of the stars in our sample are potentially unrecognized binaries. Plotting the above CMD without our data and only the Irwin et al. data, our approximate binary sequence provide a very clear divide between what is clear the single star sequence and a population of more luminous (or redder) stars. The idea that they are simply redder is explored with the addition of spots. Here, we are only concerned with possible affects of binarity. If we now plot the differences between the individual data points and the approximate binary sequence, we get End of explanation """ # flag binaries in J&J data ngc2516_new = np.column_stack((ngc2516_new, ngc2516_diff)) ngc2516_bin = np.array([star for star in ngc2516_new if star[-1] > 0.0]) ngc2516_new = np.array([star for star in ngc2516_new if star[-1] <= 0.0]) # plot differences fig, ax = plt.subplots(1, 1, figsize=(8.0, 4.0)) ax.set_xlabel('$(V - I_C)$', fontsize=20.) ax.set_xlim(1.4, 3.5) ax.set_ylabel('$\\Delta V$', fontsize=20.) ax.tick_params(which='major', axis='both', length=10., labelsize=16.) # now both data sets ax.plot([1.4, 3.6], [0.0, 0.0], '-', lw=3, dashes=(20., 20.), c='#800000') ax.plot((irwin01_new[:,7] - irwin01_new[:,8] + irwin01_new[:,-1]), irwin01_diff, 'o', ms=5.0, c='#999999') ax.plot((ngc2516_new[:,1] - ngc2516_new[:,2]), ngc2516_new[:,-1], 'o', ms=7.0, c='#008B8B') ax.plot((ngc2516_bin[:,1] - ngc2516_bin[:,2]), ngc2516_bin[:,-1], '*', ms=14.0, c='#800000') """ Explanation: There appear to be small offsets in the adopted $V$ magnitudes between the two studies, despite Jackson & Jeffries explicitly stating that they adopt the photometry of Irwin et al. (2007). We can, instead, simply adjust the cut-off criterion if we wish to remove potential binaries by incorporating the small offset uncertainty into the total ambiguity surrounding the binary sequence. If we select out only the stars that lie +0.1 mag above the zero point in the above figure, End of explanation """ fig, ax = plt.subplots(1, 2, figsize=(16.0, 8.0)) for axis in ax: axis.set_xlabel('$(V - I_C)$', fontsize=20.) axis.tick_params(which='major', axis='both', length=10., labelsize=16.) ax[0].set_ylabel('$V$', fontsize=20.) ax[1].set_ylabel('$I_C$', fontsize=20.) ax[0].set_ylim(23., 15.) ax[1].set_ylim(19.5, 14.) ax[0].plot((ngc2516_bin[:,1] - ngc2516_bin[:,2]), ngc2516_bin[:,1], '*', lw=2, ms=12.0, c='#800000') ax[1].plot((ngc2516_bin[:,1] - ngc2516_bin[:,2]), ngc2516_bin[:,2], '*', lw=2, ms=12.0, c='#800000') ax[0].plot((ngc2516_new[:,1] - ngc2516_new[:,2]), ngc2516_new[:,1], 'o', lw=2, ms=8.0, c='#008B8B') ax[1].plot((ngc2516_new[:,1] - ngc2516_new[:,2]), ngc2516_new[:,2], 'o', lw=2, ms=8.0, c='#008B8B') """ Explanation: A selection criterion of 0.2 mag above the zero point corresponds to, approximately, a $3\sigma$ threshold. We could be conservative (in the sense of testing models) and remove stars beginning below 0.1 mag. 
The impact on the resulting mean radii can be probed later. ~~For now, we'll stick with our initial 0.2 mag criterion.~~ There may be a better reason to adopt 0.0 as the threshold; we still need to review the binary sequence definition in Irwin+ (2007). How does this affect the appearance of the color-magnitude diagrams?
End of explanation
"""
fig, ax = plt.subplots(1, 2, figsize=(16.0, 8.0))

for axis in ax:
    axis.set_xlabel('$(V - I_C)$', fontsize=20.)
    axis.tick_params(which='major', axis='both', length=10., labelsize=16.)

ax[0].set_ylabel('$V$', fontsize=20.)
ax[1].set_ylabel('$I_C$', fontsize=20.)
ax[0].set_ylim(23., 15.)
ax[1].set_ylim(19.5, 14.)

ax[0].plot((ngc2516_bin[:,1] - ngc2516_bin[:,2]), ngc2516_bin[:,1], '*', lw=2, ms=12.0, c='#800000')
ax[1].plot((ngc2516_bin[:,1] - ngc2516_bin[:,2]), ngc2516_bin[:,2], '*', lw=2, ms=12.0, c='#800000')
ax[0].plot((ngc2516_new[:,1] - ngc2516_new[:,2]), ngc2516_new[:,1], 'o', lw=2, ms=8.0, c='#008B8B')
ax[1].plot((ngc2516_new[:,1] - ngc2516_new[:,2]), ngc2516_new[:,2], 'o', lw=2, ms=8.0, c='#008B8B')
"""
Explanation: This has a strong effect on the CMDs of NGC 2516. Both display tighter sequences, as expected. Notably, removing binaries based on $V$-band magnitude removes suspected binaries flagged from visual inspection of the $I_C$-band magnitude. How, then, are the radii (and mean radii) affected by the removal of binaries? First, radius-color figures.
End of explanation
"""
# load data with *OLD* mean values
mean_radii_VI = np.genfromtxt('data/ngc2516_avg_radii_V-I_v2.dat')
mean_radii_IK = np.genfromtxt('data/ngc2516_avg_radii_I-K_v2.dat')

fig, ax = plt.subplots(1, 2, figsize=(16.0, 8.0))

for axis in ax:
    axis.set_ylabel('Radius ($R_{\\odot}$)', fontsize=20.)
    axis.tick_params(which='major', axis='both', length=10., labelsize=16.)

ax[0].set_xlabel('$(V - I_C)$', fontsize=20.)
ax[1].set_xlabel('$(I_C - K)$', fontsize=20.)

ax[0].plot((ngc2516_bin[:,1] - ngc2516_bin[:,2]), ngc2516_bin[:,10], '*', lw=2, ms=12.0, c='#800000')
ax[0].plot((ngc2516_new[:,1] - ngc2516_new[:,2]), ngc2516_new[:,10], 'o', lw=2, ms=8.0, c='#008B8B')
ax[0].plot(mean_radii_VI[:,0], mean_radii_VI[:,1], '-', lw=3, c='#444444')
ax[0].fill_between(mean_radii_VI[:,0], mean_radii_VI[:,2], mean_radii_VI[:,3], edgecolor='#eeeeee', facecolor='#eeeeee')

ax[1].plot((ngc2516_bin[:,2] - ngc2516_bin[:,3]), ngc2516_bin[:,10], '*', lw=2, ms=12.0, c='#800000')
ax[1].plot((ngc2516_new[:,2] - ngc2516_new[:,3]), ngc2516_new[:,10], 'o', lw=2, ms=8.0, c='#008B8B')
ax[1].plot(mean_radii_IK[:,0], mean_radii_IK[:,1], '-', lw=3, c='#444444')
ax[1].fill_between(mean_radii_IK[:,0], mean_radii_IK[:,2], mean_radii_IK[:,3], edgecolor='#eeeeee', facecolor='#eeeeee')
"""
Explanation: Looking at the radius-color diagrams, there is a fairly even division of the binary stars about the previously determined mean values. This suggests that removing binaries would have only a minor impact on the mean radii as a function of photometric color.
And radius-magnitude, End of explanation """ fig, ax = plt.subplots(1, 2, figsize=(16.0, 8.0)) for axis in ax: axis.tick_params(which='major', axis='both', length=10., labelsize=16.) ax[0].set_xlabel('$I_C$ (mag)', fontsize=20.) ax[1].set_xlabel('$I_C$ (mag)', fontsize=20.) ax[0].set_ylabel('Measured Period (day)', fontsize=20.) ax[1].set_ylabel('Measured $v \\sin i$ (km s$^{-1}$)', fontsize=20.) ax[0].plot(ngc2516_bin[:,2], ngc2516_bin[:,0], '*', lw=2, ms=12.0, c='#800000') ax[0].plot(ngc2516_new[:,2], ngc2516_new[:,0], 'o', lw=2, ms=8.0, c='#008B8B') ax[1].plot(ngc2516_bin[:,2], ngc2516_bin[:,7], '*', lw=2, ms=12.0, c='#800000') ax[1].plot(ngc2516_new[:,2], ngc2516_new[:,7], 'o', lw=2, ms=8.0, c='#008B8B') fig, ax = plt.subplots(1, 2, figsize=(16.0, 8.0)) for axis in ax: axis.tick_params(which='major', axis='both', length=10., labelsize=16.) ax[0].set_xlabel('$K$ (mag)', fontsize=20.) ax[1].set_xlabel('$K$ (mag)', fontsize=20.) ax[0].set_ylabel('Measured Period (day)', fontsize=20.) ax[1].set_ylabel('Measured $v \\sin i$ (km s$^{-1}$)', fontsize=20.) ax[0].plot(ngc2516_bin[:,3], ngc2516_bin[:,0], '*', lw=2, ms=12.0, c='#800000') ax[0].plot(ngc2516_new[:,3], ngc2516_new[:,0], 'o', lw=2, ms=8.0, c='#008B8B') ax[1].plot(ngc2516_bin[:,3], ngc2516_bin[:,7], '*', lw=2, ms=12.0, c='#800000') ax[1].plot(ngc2516_new[:,3], ngc2516_new[:,7], 'o', lw=2, ms=8.0, c='#008B8B') """ Explanation: This is quite interesting. Binaries appear to have estimated radii consistent with those from the single star population. It's therefore possible that removing binaries will increase the mean radii as a function of both magnitude, since the stars with the largest radii are considered to be among the single star population. Does it make sense that binaries lie on the smaller side? Recall, \begin{equation} \frac{R\sin i}{R_{\odot}} = 0.124 \frac{P}{2\pi} v\sin i, \end{equation} so the radius is proportional to both the rotational velocity and the rotational period. At a given magnitude, one would expect the average measured $v\sin i$ of a binary to be greater than for single stars. That would tend to increase the measurement of $R\sin i$ for a fixed orbital period. But, if the star has a brighter magnitude than it would if the two stars are single, then the adopted template spectrum for the binary system may be too early of a spectral type, leading to intrinsically broader spectral lines. This would tend to produce lower $v\sin i$ values. End of explanation """
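To make the relation quoted above concrete, here is a small worked illustration; the period and $v\sin i$ values below are hypothetical and are not taken from the tables above:

```python
# Worked example of R sin(i) / R_sun = 0.124 * (P / (2 pi)) * v sin(i),
# with P in days and v sin(i) in km/s. The inputs here are hypothetical.
P_example = 1.2       # rotation period (days)
vsini_example = 15.0  # projected rotation velocity (km/s)
Rsini = 0.124 * (P_example / (2.0 * np.pi)) * vsini_example
print("R sin(i) = {0:.3f} Rsun".format(Rsini))  # ~0.355 Rsun
```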
sassoftware/sas-viya-programming
communities/Loading Data from Python into CAS.ipynb
apache-2.0
import swat conn = swat.CAS(host, port, username, password) """ Explanation: Loading Data from Python into CAS There are many ways of loading data into CAS. Some methods simply invoke actions in CAS that load data from files on the server. Other methods of loading data involve connections to data sources such as databases in CASLibs. But the last method is to load data from the client side using a language such as Python. There are multiple methods of loading data from Python into CAS. We will be introducing the different ways of getting your data into CAS from the Python CAS client here. As usual, the first thing we need to do is to import SWAT and create a connection to CAS. End of explanation """ tbl = conn.read_csv('https://raw.githubusercontent.com/sassoftware/sas-viya-programming/master/data/cars.csv') tbl tbl.head() """ Explanation: Now that we have our connection, let's start loading some data. Loading Data using Pandas-like Parser Methods It's possible to load data directly from the connection object that we just created. CAS connection objects support the same data readers that the Pandas package does. This includes read_csv, read_excel, read_json, read_html, etc. The CAS connection versions of these methods are merely thin wrappers around the Pandas versions. The data is read into a DataFrame and then uploaded to a CAS table behind the scenes. The result is a Python CASTable object. Let's look at reading a CSV file first. Keep in mind, that using a URL to specifying the file will download the URL to a local file then upload it to CAS. This won't matter much with a small data set such as this, but if you have larger data sets, you may want to use a different method of loading data. End of explanation """ htmltbl = conn.read_html('https://www.fdic.gov/bank/individual/failed/banklist.html')[0] htmltbl.head() """ Explanation: As you can see, getting CSV data into CAS from Python is pretty simple. Of course, if you have a large amount of data in CSV format, you may want to load it from a file on the server, but this method works well for small to medium size data. To read data from an HTML file, we'll use the read_html method. We also have to add the [0] to the end because the read_html will return a CASTable object for each table in the HTML file. We just want the first one. End of explanation """ htmltbl = conn.read_html('https://www.fdic.gov/bank/individual/failed/banklist.html', parse_dates=[5, 6])[0] htmltbl.head() """ Explanation: You'll notice that the 6th and 7th columns contain dates. We can use the parse_dates= parameter of read_html to convert those columns into CAS datetime values. End of explanation """ htmltbl.columninfo() """ Explanation: Here is the result of the table.columninfo action showing the data types of each column. End of explanation """ out = conn.upload('https://raw.githubusercontent.com/sassoftware/sas-viya-programming/master/data/cars.csv') out """ Explanation: All of the other read_XXX methods in Pandas work from CAS connection objects and support the same options. See the Pandas documentation for the other data reader methods and options. Let's move on to the next method of loading data: the upload method. Loading Data Using the upload Method The next way of loading data is by sending a data file to the server using the upload CAS object method. In this style of loading data, the parsing of the file is done on the server-side; the file is simply uploaded as binary data and the table.loadtable action is run in the background. 
This has the benefit that the parsing can be done with much faster parsers than what can be done in Python. Some parsers can even execute in parallel so the parsing load is split across nodes in the CAS grid. The downside is that there aren't as many parsing options as what the Pandas data readers offer, so both methods still have their place. Let's look at code that uploads data files into CAS tables. We'll start off with CSV files since they are ubiquitous. End of explanation """ tbl = out['casTable'] tbl.head() """ Explanation: When using the upload method on the CAS object, you'll get the result of the underlying table.upload action call. To get a reference to the CASTable object in that result, you just grab the casTable key. End of explanation """ import pandas as pd import numpy as np df = pd.DataFrame(np.random.randn(50, 4), columns=list('ABCD')) df.head() dftbl = conn.upload(df)['casTable'] dftbl dftbl.head() dftbl.columninfo() """ Explanation: The upload method works with any file type that the table.loadtable action supports, but you can also upload a Pandas DataFrame. One thing to note about doing it this way is that the DataFrame data is exported to CSV and then uploaded to CAS, so the resulting table will follow the same rules and limitations as uploading a CSV file. End of explanation """ from swat.cas import datamsghandlers as dmh print(dmh.CASDataMsgHandler.__subclasses__()) print(dmh.PandasDataFrame.__subclasses__()) """ Explanation: Using Data Message Handlers The last method of loading data into CAS from Python is using something called "data message handlers" and the addtable action. The addtable action is a bit different than other actions in that it sets up a two-way communication from the server back to the client. Once the action is invoked, the server asks for chunks of data until there isn't any more data to upload. Doing this communication can be a bit tricky, so the SWAT package has some pre-defined data message handlers that can be used directly or subclassed to create your own. You may not have realized it, but you have already used data message handlers in the first example. The read_XXX methods all use data message handlers to load the data into CAS. You can see all of the supplied data messsage handlers by looking at the subclasses of swat.cas.datamsghandlers.CASDataMsgHandler. End of explanation """ cvs_dmh = dmh.CSV('https://raw.githubusercontent.com/sassoftware/sas-viya-programming/master/data/cars.csv') out = conn.addtable(table='cars', replace=True, **cvs_dmh.args.addtable) """ Explanation: You may notice that most of the data message handlers are just subclasses of the PandasDataFrame message handler. This is because they all convert their data into a Pandas DataFrame before uploading the data. This also means that the data must fit into memory to use them. There is one data message handler that doesn't work that way: swat.cas.datamsghandlers.DBAPI. The DBAPI message handler uses a Python DB-API 2.0 compliant database connection to retrieve data from a database and loads that into CAS. It only loads as much data as is needed for a single chunk. We'll just be looking at the CSV data message handler here. The rest follow the same pattern of usage. To use the CSV data message handler, you just pass it the path to a CSV file. This creates an instance of the data message handler that can be used to generate the arguments to the addtable action and also handles the request from the server. 
End of explanation """ cvs_dmh.args.addtable """ Explanation: The action call arguments may look a little odd if you haven't used Python a lot. The args.addtable property of cvs_dmh contains a dictionary of parameter arguments. When you use the double-asterisk before it, it tells Python to apply the dictionary as keyword arguments to the function. The arguments generated by the data message handler are shown below. End of explanation """ tbl = out['casTable'] tbl.head() """ Explanation: The output of the addtable action includes a casTable key just like the upload method. End of explanation """ html_dmh = dmh.HTML('https://www.fdic.gov/bank/individual/failed/banklist.html', index=0, parse_dates=[5, 6]) out = conn.addtable(table='banklist', replace=True, **html_dmh.args.addtable) tbl = out.casTable tbl.head() """ Explanation: Just as with the read_XXX methods on the connection, the PandasDataFrame-based data message handlers take the same arguments as the read_XXX methods to customize their behavior. Using Pandas' HTML reader as an example again, we can upload data with dates in them. Note that in this case, we can specify the index of the HTML table that we want in the index= parameter. End of explanation """ conn.close() """ Explanation: Conclusion In this article, we have demonstrated three ways of getting data from Python into CAS. The first was using the read_XXX methods that emulate Pandas' data readers. The second was using the upload method of the CAS connection to upload a file and have the server read it. The third was using data message handlers to add data using the addtable action. Each method has its strengths and weaknesses. The upload method is generally the fastest, but has the most limited parsing capabilities. The read_XXX methods are quick and convenient and give you the power of Pandas data readers. While we didn't get deep into data message handlers here, they allow the most power by allowing you to subclass them and write custom data loaders from any data source. End of explanation """
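One last hedged footnote (not in the original article): because the connection's read_XXX methods accept the same keyword arguments as their Pandas counterparts, ordinary Pandas parsing options pass straight through. The snippet below would be run while the connection is still open (i.e., before the conn.close() call above), and the column names are assumptions for illustration only:

```python
# Illustrative only: standard Pandas read_csv options such as usecols and
# nrows are forwarded by the CAS connection's reader. Column names are assumed.
subset = conn.read_csv(
    'https://raw.githubusercontent.com/sassoftware/sas-viya-programming/master/data/cars.csv',
    usecols=['Make', 'Model', 'MSRP'],  # hypothetical column names
    nrows=100)
subset.head()
```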
dietmarw/EK5312_ElectricalMachines
Chapman/Ch1-Example_1-10.ipynb
unlicense
%pylab notebook """ Explanation: Electric Machinery Fundamentals 5th edition Chapter 1 (Code examples) Example 1-10 Calculate and plot the velocity of a linear motor as a function of load. Import the PyLab namespace (provides set of useful commands and constants like $\pi$) End of explanation """ VB = 120.0 # Battery voltage (V) r = 0.3 # Resistance (ohms) l = 1.0 # Bar length (m) B = 0.6 # Flux density (T) """ Explanation: Define all the parameters: End of explanation """ F = arange(0,51,10) # Force (N) F # Lets print the variable to check. # Can you exaplain why "arange(0,50,10)" gives not the array below? """ Explanation: Select the forces to apply to the bar: End of explanation """ i = F / (l * B) # Current (A) """ Explanation: Calculate the currents flowing in the motor: End of explanation """ eind = VB - i * r # Induced voltage (V) """ Explanation: Calculate the induced voltages on the bar: End of explanation """ v_bar = eind / (l * B); # Velocity (m/s) """ Explanation: Calculate the velocities of the bar: End of explanation """ plot(F, v_bar); rc('text', usetex=True) # enable LaTeX commands for plot title(r'\textbf{Plot of velocity versus applied force}') xlabel(r'\textbf{Force (N)}') ylabel(r'\textbf{Velocity (m/s)}') axis([0, 50, 0, 200]) grid() """ Explanation: Plot the velocity of the bar versus force: End of explanation """
QuantEcon/QuantEcon.notebooks
solving_initial_value_problems.ipynb
bsd-3-clause
import matplotlib.pyplot as plt import numpy as np import sympy as sp # comment out if you don't want plots rendered in notebook %matplotlib inline """ Explanation: <center> Solving initial value problems (IVPs) in quantecon David R. Pugh End of explanation """ from quantecon import ivp """ Explanation: 1. Introduction This notebook demonstrates how to solve initial value problems (IVPs) using the quantecon Python library. Before demonstrating how one might solve an IVP using quantecon, I provide formal definitions for ordinary differential equations (ODEs) and initial value problems (IVPs), as well as a short discussion of finite-difference methods that will be used to solve IVPs. Ordinary differential equations (ODE) An ordinary differential equation (ODE) is in equation of the form \begin{equation} \textbf{y}'= \textbf{f}(t ,\textbf{y}) \tag{1.1} \end{equation} where $\textbf{f}:[t_0,\infty) \times \mathbb{R}^n\rightarrow\mathbb{R}^n$. In the case where $n=1$, then equation 1.1 reduces to a single ODE; when $n>1$, equation 1.1 defines a system of ODEs. ODEs are one of the most basic examples of functional equations: a solution to equation 1.1 is a function $\textbf{y}(t): D \subset \mathbb{R}\rightarrow\mathbb{R}^n$. There are potentially an infinite number of solutions to the ODE defined in equation 1.1. In order to reduce the number of potentially solutions, we need to impose a bit more structure on the problem. Initial value problems (IVPs) An initial value problem (IVP) has the form \begin{equation} \textbf{y}'= \textbf{f}(t ,\textbf{y}),\ t \ge t_0,\ \textbf{y}(t_0) = \textbf{y}_0 \tag{1.2} \end{equation} where $\textbf{f}:[t_0,\infty) \times \mathbb{R}^n\rightarrow\mathbb{R}^n$ and the initial condition $\textbf{y}_0 \in \mathbb{R}^n$ is a given vector. Alternatively, I could also specify an initial value problem by imposing a terminal condition of the form $\textbf{y}(T) = \textbf{y}_T$. The key point is that the solution $\textbf{y}(t)$ is pinned down at one $t\in[t_0, T]$. The solution to the IVP defined by equation 1.2 is the function $\textbf{y}(t): [t_0,T] \subset \mathbb{R}\rightarrow\mathbb{R}^n$ that satisfies the initial condition $\textbf{y}(t_0) = \textbf{y}_0$. So long as the function $\textbf{f}$ is reasonably well-behaved, the solution $\textbf{y}(t)$ exists and is unique. Finite-difference methods Finite-difference methods are perhaps the most commonly used class of numerical methods for approximating solutions to IVPs. The basic idea behind all finite-difference methods is to construct a difference equation \begin{equation} \frac{\textbf{y}(t_i + h) - \textbf{y}_i}{h} \approx \textbf{y}'(t_i) = \textbf{f}(t_i ,\textbf{y}(t_i)) \tag{1.3} \end{equation} which is "similar" to the differential equation at some grid of values $t_0 < \dots < t_N$. Finite-difference methods then "solve" the original differential equation by finding for each $n=0,\dots,N$ a value $\textbf{y}_n$ that approximates the value of the solution $\textbf{y}(t_n)$. It is important to note that finite-difference methods only approximate the solution $\textbf{y}$ at the $N$ grid points. In order to approximate $\textbf{y}$ between grid points one must resort to some form of interpolation. The literature on finite-difference methods for solving IVPs is vast and there are many excellent reference texts. 
Those interested in a more in-depth treatment of these topics, including formal proofs of convergence, order, and stability of the numerical methods used in this notebook, should consult <cite data-cite="hairer1993solving">(Hairer, 1993)</cite>, <cite data-cite="butcher2008numerical">(Butcher, 2008)</cite>, <cite data-cite="iserles2009first">(Iserles, 2009)</cite>. Chapter 10 of <cite data-cite="judd1998numerical">(Judd, 1998)</cite> covers a subset of these more formal texts with a specific focus on economic applications. 2. Examples The remainder of this notebook demonstrates the usage and functionality of the quantecon.ivp module by way of example. To get started, we need to import the quantecon.ivp module... End of explanation """ ivp.IVP? """ Explanation: 2.1 Lotka-Volterra "Predator-Prey" model We begin with the Lotka-Volterra model, also known as the predator-prey model, which is a pair of first order, non-linear, differential equations frequently used to describe the dynamics of biological systems in which two species interact, one a predator and the other its prey. The model was proposed independently by Alfred J. Lotka in 1925 and Vito Volterra in 1926. \begin{align} \frac{du}{dt} =& au - buv \tag{2.1.1} \ \frac{dv}{dt} =& -cv + dbuv \tag{2.1.2} \end{align} where $u$ is the number of preys (for example, rabbits), $v$ is the number of predators (for example, foxes) and $a, b, c, d$ are constant parameters defining the behavior of the population. Parameter definitions are as follows: * $a$: the natural growing rate of prey in the absence of predators. * $b$: the natural dying rate of prey due to predation. * $c$: the natural dying rate of predators, in teh absence of prey. * $d$: the factor describing how many caught prey is necessary to create a new predator. I will use $\textbf{y}=[u, v]$ to describe the state of both populations. 2.1.1 Defining an instance of the IVP class First, we need to create an instance of the IVP class representing the Lotka-Volterra "Predator-Prey" model. To initialize an instance of the IVP class we need to define the following... f : Callable of the form f(t, y, *f_args). The function f is the right hand side of the system of equations defining the model. The independent variable, t, should be a scalar; y is an ndarray of dependent variables with y.shape == (n,). The function f should return a scalar, ndarray or list (but not a tuple). jac : Callable of the form jac(t, y, *jac_args), optional(default=None). The Jacobian of the right hand side of the system of equations defining the ODE. $$ \mathcal{J}_{i,j} = \bigg[\frac{\partial f_i}{\partial y_j}\bigg] \tag {2.1.3}$$ Most all of this information can be found in the docstring for the ivp.IVP class. End of explanation """ def lotka_volterra_system(t, y, a, b, c, d): """ Return the Lotka-Voltera system. Parameters ---------- t : float Time y : ndarray (float, shape=(2,)) Endogenous variables of the Lotka-Volterra system. Ordering is `y = [u, v]` where `u` is the number of prey and `v` is the number of predators. a : float Natural growth rate of prey in the absence of predators. b : float Natural death rate of prey due to predation. c : float Natural death rate of predators, due to absence of prey. d : float Factor describing how many caught prey is necessary to create a new predator. Returns ------- jac : ndarray (float, shape=(2,2)) Jacobian of the Lotka-Volterra system of ODEs. 
""" f = np.array([ a * y[0] - b * y[0] * y[1] , -c * y[1] + d * b * y[0] * y[1] ]) return f def lotka_volterra_jacobian(t, y, a, b, c, d): """ Return the Lotka-Voltera Jacobian matrix. Parameters ---------- t : float Time y : ndarray (float, shape=(2,)) Endogenous variables of the Lotka-Volterra system. Ordering is `y = [u, v]` where `u` is the number of prey and `v` is the number of predators. a : float Natural growth rate of prey in the absence of predators. b : float Natural death rate of prey due to predation. c : float Natural death rate of predators, due to absence of prey. d : float Factor describing how many caught prey is necessary to create a new predator. Returns ------- jac : ndarray (float, shape=(2,2)) Jacobian of the Lotka-Volterra system of ODEs. """ jac = np.array([[a - b * y[1], -b * y[0]], [b * d * y[1], -c + b * d * y[0]]]) return jac """ Explanation: From the docstring we see that we are required to define a function describing the right-hand side of the system of differential equations that we wish to solve. While, optional, it is always a good idea to also define a function describing the Jacobian matrix of partial derivatives. For the Lotka-Volterra model, these two functions would look as follows... End of explanation """ lotka_volterra_ivp = ivp.IVP(f=lotka_volterra_system, jac=lotka_volterra_jacobian) """ Explanation: We can go ahead and create our instance of the ivp.IVP class representing the Lotka-Volterra model using the above defined functions as follows... End of explanation """ # ordering is (a, b, c, d) lotka_volterra_params = (1.0, 0.1, 1.5, 0.75) """ Explanation: 2.1.2 Defining model parameters In order to simulate the model, however, we will need to supply values for the model parameters $a,b,c,d$. First, let's define some "reasonable" values. End of explanation """ lotka_volterra_ivp.set_f_params? lotka_volterra_ivp.set_jac_params? """ Explanation: In order to add these parameter values to our model we need to pass them as arguments to the set_f_params and set_jac_params methods of the newly created instance of the ivp.IVP class. Check the doctrings of the methods for information on the appropriate syntax... End of explanation """ lotka_volterra_ivp.set_f_params(*lotka_volterra_params) lotka_volterra_ivp.set_jac_params(*lotka_volterra_params) """ Explanation: From the docstring we see that both the set_f_params and the set_jac_params methods take an arbitrary number of positional arguments. End of explanation """ lotka_volterra_ivp.f_params lotka_volterra_ivp.jac_params """ Explanation: ...and we can inspect that values of these attributes and see that the return results are the same. End of explanation """ # I generally prefer to set attributes directly... lotka_volterra_ivp.f_params = lotka_volterra_params # ...result is the same lotka_volterra_ivp.f_params """ Explanation: Alternatively, we could just directly set the f_params and jac_params attributes without needing to explicitly call either the set_f_params and set_jac_params methods! End of explanation """ # remember...always read the docs! ivp.IVP.solve? """ Explanation: 2.1.3 Using ivp.IVP.solve to integrate the ODE In order to solve a system of ODEs, the ivp.IVP.solve method provides an interface into the ODE integration routines provided in the scipy.integrate.ode module. The method takes the following parameters... t0 : float. Initial condition for the independent variable. y0 : array_like (float, shape=(n,)). Initial condition for the dependent variables. h : float, optional(default=1.0). 
Step-size for computing the solution. Can be positive or negative depending on the desired direction of integration. T : int, optional(default=None). Terminal value for the independent variable. One of either T or g must be specified. g : Callable of the form g(t, y, args), optional(default=None). Provides a stopping condition for the integration. If specified user must also specify a stopping tolerance, tol. tol : float, optional (default=None). Stopping tolerance for the integration. Only required if g is also specifed. integrator : str, optional(default='dopri5') Must be one of 'vode', 'lsoda', 'dopri5', or 'dop853' step : bool, optional(default=False) Allows access to internal steps for those solvers that use adaptive step size routines. Currently only 'vode', 'zvode', and 'lsoda' support step=True. relax : bool, optional(default=False) Currently only 'vode', 'zvode', and 'lsoda' support relax=True. **kwargs : dict, optional(default=None). Dictionary of integrator specific keyword arguments. See the Notes section of the docstring for scipy.integrate.ode for a complete description of solver specific keyword arguments. ... and returns: solution: array_like (float). Simulated solution trajectory. The above information can be found in the doctring of the ivp.IVP.solve method. End of explanation """ # define the initial condition... t0, y0 = 0, np.array([10, 5]) # ...and integrate! solution = lotka_volterra_ivp.solve(t0, y0, h=1e-1, T=15, integrator='dopri5', atol=1e-12, rtol=1e-9) """ Explanation: Example usage Using dopri5, an embedded Runge-Kutta method of order 4(5) with adaptive step size control due to Dormand and Prince, integrate the model forward from an initial condition of 10 rabbits and 5 foxes for 15 years. End of explanation """ # extract the components of the solution trajectory t = solution[:, 0] rabbits = solution[:, 1] foxes = solution[:, 2] # create the plot fig = plt.figure(figsize=(8, 6)) plt.plot(t, rabbits, 'r.', label='Rabbits') plt.plot(t, foxes , 'b.', label='Foxes') plt.grid() plt.legend(loc=0, frameon=False, bbox_to_anchor=(1, 1)) plt.xlabel('Time', fontsize=15) plt.ylabel('Population', fontsize=15) plt.title('Evolution of fox and rabbit populations', fontsize=20) plt.show() """ Explanation: Plotting the solution Once we have computed the solution, we can plot it using the excellent matplotlib Python library. End of explanation """ # define the desired interpolation points... ti = np.linspace(t0, solution[-1, 0], 1000) # ...and interpolate! interp_solution = lotka_volterra_ivp.interpolate(solution, ti, k=5, ext=2) """ Explanation: Note that we have plotted the time paths of rabbit and fox populations as sequences of points rather than smooth curves. This is done to visually emphasize the fact that finite-difference methods used to approximate the solution return a discrete approximation to the true continuous solution. 2.1.4 Using ivp.IVP.interpolate to interpolate the solution The IVP.interpolate method provides an interface to the parametric B-spline interpolation routines in scipy.interpolate in order to constuct a continuous approximation to the true solution. For more details on B-spline interpolation, including some additional economic applications, see chapter 6 of <cite data-cite="judd1998numerical">(Judd, 1998)</cite>. The ivp.IVP.interpolate method takes the following parameters... traj : array_like (float) Solution trajectory providing the data points for constructing the B-spline representation. 
ti : array_like (float) Array of values for the independent variable at which to interpolate the value of the B-spline. k : int, optional(default=3) Degree of the desired B-spline. Degree must satisfy :math:1 \le k \le 5. der : int, optional(default=0) The order of derivative of the spline to compute (must be less than or equal to k). ext : int, optional(default=2) Controls the value of returned elements for outside the original knot sequence provided by traj. For extrapolation, set ext=0; ext=1 returns zero; ext=2 raises a ValueError. ... and returns: interp_traj: ndarray (float). The interpolated trajectory. Example usage Approximate the solution to the Lotka-Volterra model at 1000 evenly spaced points using a 5th order B-spline interpolation and no extrapolation. End of explanation """ # extract the components of the solution ti = interp_solution[:, 0] rabbits = interp_solution[:, 1] foxes = interp_solution[:, 2] # make the plot fig = plt.figure(figsize=(8, 6)) plt.plot(ti, rabbits, 'r-', label='Rabbits') plt.plot(ti, foxes , 'b-', label='Foxes') plt.xlabel('Time', fontsize=15) plt.ylabel('Population', fontsize=15) plt.title('Evolution of fox and rabbit populations', fontsize=20) plt.legend(loc='best', frameon=False, bbox_to_anchor=(1,1)) plt.grid() plt.show() """ Explanation: Plotting the interpolated solution We can now plot the interpolated solution using matplotlib as follows... End of explanation """ # life will be easier if you read the docs! lotka_volterra_ivp.compute_residual? """ Explanation: Note that we have plotted the time paths of rabbit and fox populations as smooth curves. This is done to visually emphasize the fact that the B-spline interpolation methods used to approximate the solution return a continuous approximation to the true continuous solution. 2.1.5 Assessing accuracy using ivp.IVP.compute_residual After computing a continuous approximation to the solution of our IVP, it is important to verify that the computed approximation is actually a "good" approximation. To assess the accuracy of our numerical solution we first define a residual function, $R(t)$, as the difference between the derivative of the B-spline approximation of the solution trajectory, $\hat{\textbf{y}}'(t)$, and the right-hand side of the original ODE evaluated along the approximated solution trajectory. \begin{equation} \textbf{R}(t) = \hat{\textbf{y}}'(t) - f(t, \hat{\textbf{y}}(t)) \tag{2.1.4} \end{equation} The idea is that if our numerical approximation of the true solution is "good", then this residual function should be roughly zero everywhere within the interval of approximation. The ivp.IVP.compute_residual method takes the following parameters... traj : array_like (float). Solution trajectory providing the data points for constructing the B-spline representation. ti : array_like (float). Array of values for the independent variable at which to interpolate the value of the B-spline. k : int, optional(default=3). Degree of the desired B-spline. Degree must satisfy $1 \le k \le 5$. ext : int, optional(default=2). Controls the value of returned elements for outside the original knot sequence provided by traj. For extrapolation, set ext=0; ext=1 returns zero; ext=2 raises a ValueError. ... and returns: residual : array (float) Difference between the derivative of the B-spline approximation of the solution trajectory and the right-hand side of the ODE evaluated along the approximated solution trajectory. Remember to check the docstring for more information! 
End of explanation
"""
# reset original parameters
lotka_volterra_ivp.f_params = lotka_volterra_params
lotka_volterra_ivp.jac_params = lotka_volterra_params

# compute the residual
residual = lotka_volterra_ivp.compute_residual(solution, ti, k=1)
"""
Explanation: Example usage
Compute the residual to the Lotka-Volterra model at 1000 evenly spaced points using a 1st order B-spline interpolation (which is equivalent to linear interpolation!).
End of explanation
"""
# extract the raw residuals
rabbits_residual = residual[:, 1]
foxes_residual = residual[:, 2]

# typically, normalize residual by the level of the variable
norm_rabbits_residual = np.abs(rabbits_residual) / rabbits
norm_foxes_residual = np.abs(foxes_residual) / foxes

# create the plot
fig = plt.figure(figsize=(8, 6))
plt.plot(ti, norm_rabbits_residual, 'r-', label='Rabbits')
plt.plot(ti, norm_foxes_residual, 'b-', label='Foxes')
plt.axhline(np.finfo('float').eps, linestyle='dashed', color='k', label='Machine eps')
plt.xlabel('Time', fontsize=15)
plt.ylim(1e-16, 1)
plt.ylabel('Residuals (normalized)', fontsize=15)
plt.yscale('log')
plt.title('Lotka-Volterra residuals', fontsize=20)
plt.grid()
plt.legend(loc='best', frameon=False, bbox_to_anchor=(1,1))
plt.show()
"""
Explanation: Plotting the residual
In your introductory econometrics/statistics course, your professor likely implored you to "always plot your residuals!" This maxim of data analysis is no less true in numerical analysis. However, while patterns in residuals are generally a "bad" thing in econometrics/statistics (as they suggest model misspecification, or other related problems), patterns in a residual function, $\textbf{R}(t)$, in numerical analysis are generally OK (and in certain cases actually desirable!). In this context, what is important is that the overall size of the residuals is "small".
End of explanation
"""
from ipywidgets import interact
from ipywidgets.widgets import FloatText, FloatSlider, IntSlider, Text

# reset parameters
lotka_volterra_ivp.f_params = (1.0, 0.1, 2.0, 0.75)
lotka_volterra_ivp.jac_params = lotka_volterra_ivp.f_params
"""
Explanation: Understanding determinants of accuracy
We can use IPython widgets to investigate the determinants of accuracy of our approximated solution. Good candidates for exploration are...

h: the step size used in computing the initial finite difference solution.
atol: the absolute tolerance for the solver.
rtol: the relative tolerance for the solver.
k: the degree of the B-spline used in the interpolation of that finite difference solution.
End of explanation
"""
@interact(h=FloatText(value=1e0), atol=FloatText(value=1e-3), rtol=FloatText(value=1e-3), k=IntSlider(min=1, value=3, max=5), integrator=Text(value='lsoda'))
def plot_lotka_volterra_residuals(h, atol, rtol, k, integrator):
    """Plots residuals of the Lotka-Volterra system."""
    # re-compute the solution
    tmp_solution = lotka_volterra_ivp.solve(t0, y0, h=h, T=15, integrator=integrator, atol=atol, rtol=rtol)

    # re-compute the interpolated solution and residual
    tmp_ti = np.linspace(t0, tmp_solution[-1, 0], 1000)
    tmp_interp_solution = lotka_volterra_ivp.interpolate(tmp_solution, tmp_ti, k=k)
    tmp_residual = lotka_volterra_ivp.compute_residual(tmp_solution, tmp_ti, k=k)

    # extract the components of the solution
    tmp_rabbits = tmp_interp_solution[:, 1]
    tmp_foxes = tmp_interp_solution[:, 2]

    # extract the raw residuals
    tmp_rabbits_residual = tmp_residual[:, 1]
    tmp_foxes_residual = tmp_residual[:, 2]

    # typically, normalize residual by the level of the variable
    tmp_norm_rabbits_residual = np.abs(tmp_rabbits_residual) / tmp_rabbits
    tmp_norm_foxes_residual = np.abs(tmp_foxes_residual) / tmp_foxes

    # create the plot
    fig = plt.figure(figsize=(8, 6))
    plt.plot(tmp_ti, tmp_norm_rabbits_residual, 'r-', label='Rabbits')
    plt.plot(tmp_ti, tmp_norm_foxes_residual, 'b-', label='Foxes')
    plt.axhline(np.finfo('float').eps, linestyle='dashed', color='k', label='Machine eps')
    plt.xlabel('Time', fontsize=15)
    plt.ylim(1e-16, 1)
    plt.ylabel('Residuals (normalized)', fontsize=15)
    plt.yscale('log')
    plt.title('Lotka-Volterra residuals', fontsize=20)
    plt.grid()
    plt.legend(loc='best', frameon=False, bbox_to_anchor=(1,1))
"""
Explanation: Now we can make use of the @interact decorator and the various IPython widgets to create an interactive visualization of the residual plot for the Lotka-Volterra "Predator-Prey" model.
End of explanation
"""
@interact(a=FloatSlider(min=0.0, max=5.0, step=0.5, value=1.5), b=FloatSlider(min=0.0, max=1.0, step=0.01, value=0.5), c=FloatSlider(min=0.0, max=5.0, step=0.5, value=3.5), d=FloatSlider(min=0.0, max=1.0, step=0.01, value=0.5))
def plot_lotka_volterra(a, b, c, d):
    """Plots trajectories of the Lotka-Volterra system."""
    # update the parameters and re-compute the solution
    lotka_volterra_ivp.f_params = (a, b, c, d)
    lotka_volterra_ivp.jac_params = (a, b, c, d)
    tmp_solution = lotka_volterra_ivp.solve(t0, y0, h=1e-1, T=15, integrator='dopri5', atol=1e-12, rtol=1e-9)

    # extract the components of the solution
    tmp_t = tmp_solution[:, 0]
    tmp_rabbits = tmp_solution[:, 1]
    tmp_foxes = tmp_solution[:, 2]

    # create the plot!
    fig = plt.figure(figsize=(8, 6))
    plt.plot(tmp_t, tmp_rabbits, 'r.', label='Rabbits')
    plt.plot(tmp_t, tmp_foxes , 'b.', label='Foxes')
    plt.xlabel('Time', fontsize=15)
    plt.ylabel('Population', fontsize=15)
    plt.title('Evolution of fox and rabbit populations', fontsize=20)
    plt.legend(loc='best', frameon=False, bbox_to_anchor=(1,1))
    plt.grid()
"""
Explanation: Sensitivity to parameters
Once we have computed and plotted an approximate solution (and verified that the approximation is a good one by plotting the residual function!), we can try and learn something about the dependence of the solution on model parameters.
End of explanation
"""
# enables sympy LaTeX printing...
sp.init_printing() # declare endogenous variables t, x, y, z = sp.var('t, x, y, z') # declare model parameters beta, rho, sigma = sp.var('beta, rho, sigma') # define symbolic model equations _x_dot = sigma * (y - x) _y_dot = x * (rho - z) - y _z_dot = x * y - beta * z # define symbolic system and compute the jacobian _lorenz_system = sp.Matrix([[_x_dot], [_y_dot], [_z_dot]]) """ Explanation: 2.2 The Lorenz equations The Lorenz equations are a system of three coupled, first-order, non-linear differential equations which describe the trajectory of a particle through time. The system was originally derived by as a model of atmospheric convection, but the deceptive simplicity of the equations have made them an often-used example in fields beyond atmospheric physics. The equations describe the evolution of the spatial variables x, y, and z, given the governing parameters $\sigma, \beta, \rho$, through the specification of the time-derivatives of the spatial variables: \begin{align} \frac{dx}{dt} =& \sigma(y − x) \tag{2.2.1} \ \frac{dy}{dt} =& x(\rho − z) − y \tag{2.2.2} \ \frac{dz}{dt} =& xy − \beta z \tag{2.2.3} \end{align} The resulting dynamics are entirely deterministic giving a starting point $(x_0,y_0,z_0)$. Though it looks straightforward, for certain choices of the parameters, the trajectories become chaotic, and the resulting trajectories display some surprising properties. 2.2.1 Incorporating SymPy While deriving the Jacobian matrix by hand is trivial for most simple 2D or 3D systems, it can quickly become tedious and error prone for larger systems (or even some highly non-linear 2D systems!). In addition to being an important input to most ODE integrators/solvers, Jacobians are also useful for assessing the stability properties of equilibria. An alternative approach for solving IVPs using the ivp module, which leverages the SymPy Python library to do the tedious computations involved in deriving the Jacobian, is as follows. Define the IVP using SymPy. Use SymPy routines for computing the Jacobian. Wrap the symbolic expressions as callable NumPy functions. Use these functions to create an instance of the ivp.IVP class. The remainder of this notebook implements each of these steps to solve and analyze the Lorenz equations defined above. Step 1: Defining the Lorenz equations using SymPy We begin by defining a sp.Matrix instance containing the three Lorenz equations... End of explanation """ _lorenz_system """ Explanation: Let's take a check out our newly defined _lorenz_system and make sure it looks as expected... End of explanation """ _lorenz_jacobian = _lorenz_system.jacobian([x, y, z]) _lorenz_jacobian """ Explanation: Step 2: Computing the Jacobian using SymPy Once we have defined our model as a SymPy matrix, computing the Jacobian is trivial... 
End of explanation """ # in order to pass an array as an argument, we need to apply a change of variables X = sp.DeferredVector('X') change_of_vars = {'x': X[0], 'y': X[1], 'z': X[2]} _transformed_lorenz_system = _lorenz_system.subs(change_of_vars) _transformed_lorenz_jacobian = _transformed_lorenz_system.jacobian([X[0], X[1], X[2]]) # wrap the symbolic expressions as callable numpy funcs _args = (t, X, beta, rho, sigma) _f = sp.lambdify(_args, _transformed_lorenz_system, modules=[{'ImmutableMatrix': np.array}, "numpy"]) _jac = sp.lambdify(_args, _transformed_lorenz_jacobian, modules=[{'ImmutableMatrix': np.array}, "numpy"]) """ Explanation: Step 3: Wrap the SymPy expression to create vectorized NumPy functions Now we wrap the SymPy matrices defining the model and the Jacobian to create vectorized NumPy functions. It is crucial that the interface for our wrapped functions matches the interface required by the f and jac parameters which we will pass to the ivp.IVP constructor to create an instance of the ivp.IVP class representing the Lorenz equations. Recall from the ivp.IVP docstring that... f : callable `f(t, y, *f_args)` Right hand side of the system of equations defining the ODE. The independent variable, `t`, is a `scalar`; `y` is an `ndarray` of dependent variables with `y.shape == (n,)`. The function `f` should return a `scalar`, `ndarray` or `list` (but not a `tuple`). jac : callable `jac(t, y, *jac_args)`, optional(default=None) Jacobian of the right hand side of the system of equations defining the ODE. .. :math: \mathcal{J}_{i,j} = \bigg[\frac{\partial f_i}{\partial y_j}\bigg] Thus our wrapped functions need to take a float t as the first argument, and array y as a second argument, followed by some arbitrary number of model parameters. We can handle all of this as follows... End of explanation """ def lorenz_system(t, X, beta, rho, sigma): """ Return the Lorenz system. Parameters ---------- t : float Time X : ndarray (float, shape=(3,)) Endogenous variables of the Lorenz system. beta : float Model parameter. Should satisfy :math:`0 < \beta`. rho : float Model parameter. Should satisfy :math:`0 < \rho`. sigma : float Model parameter. Should satisfy :math:`0 < \sigma`. Returns ------- rhs_ode : ndarray (float, shape=(3,)) Right hand side of the Lorenz system of ODEs. """ rhs_ode = _f(t, X, beta, rho, sigma).ravel() return rhs_ode def lorenz_jacobian(t, X, beta, rho, sigma): """ Return the Jacobian of the Lorenz system. Parameters ---------- t : float Time X : ndarray (float, shape=(3,)) Endogenous variables of the Lorenz system. beta : float Model parameter. Should satisfy :math:`0 < \beta`. rho : float Model parameter. Should satisfy :math:`0 < \rho`. sigma : float Model parameter. Should satisfy :math:`0 < \sigma`. Returns ------- jac : ndarray (float, shape=(3,3)) Jacobian of the Lorenz system of ODEs. """ jac = _jac(t, X, beta, rho, sigma) return jac """ Explanation: Step 4: Use these functions to create an instance of the IVP class First we define functions describing the right-hand side of the ODE and the Jacobian which we need to initialize the ivp.IVP class... End of explanation """ # parameters with ordering (beta, rho, sigma) lorenz_params = (2.66, 28.0, 10.0) """ Explanation: ... next we define a tuple of model parameters... End of explanation """ # create the instance lorenz_ivp = ivp.IVP(f=lorenz_system, jac=lorenz_jacobian) # specify the params lorenz_ivp.f_params = lorenz_params lorenz_ivp.jac_params = lorenz_params """ Explanation: ... 
finally, we are ready to create the instance of the ivp.IVP class representing the Lorenz equations.
End of explanation
"""
# declare an initial condition
t0, X0 = 0.0, np.array([1.0, 1.0, 1.0])

# solve!
solution = lorenz_ivp.solve(t0, X0, h=1e-2, T=100, integrator='dop853', atol=1e-12, rtol=1e-9)
"""
Explanation: 2.2.2 Solving the Lorenz equations
At this point I proceed in exactly the same fashion as in the previous Lotka-Volterra equations example:

Solve the model using a discretized, finite-difference approximation.
Use the discretized approximation in conjunction with parametric B-spline interpolation to construct a continuous approximation of the true solution.
Compute and analyze the residual of the approximate solution.

Step 1. Solve the model using a discretized, finite-difference approximation
Using dop853, an embedded Runge-Kutta method of order 7(8) with adaptive step size control due to Dormand and Prince, integrate the Lorenz equations forward from an initial condition of $X_0 = (1.0, 1.0, 1.0)$ from $t=0$ to $T=100$.
End of explanation
"""
@interact(T=IntSlider(min=0, value=0, max=solution.shape[0], step=5))
def plot_lorenz(T):
    """Plots the first T points in the solution trajectory of the Lorenz equations."""
    # extract the components of the solution trajectory
    t = solution[:T, 0]
    x_vals = solution[:T, 1]
    y_vals = solution[:T, 2]
    z_vals = solution[:T, 3]

    # create the plot
    fig = plt.figure(figsize=(8, 6))
    plt.plot(t, x_vals, 'r.', label='$x_t$', alpha=0.5)
    plt.plot(t, y_vals , 'b.', label='$y_t$', alpha=0.5)
    plt.plot(t, z_vals , 'g.', label='$z_t$', alpha=0.5)
    plt.grid()
    plt.xlabel('Time', fontsize=20)
    plt.ylabel('$x_t, y_t, z_t$', fontsize=20, rotation='horizontal')
    plt.title('Time paths of the $x,y,z$ coordinates', fontsize=25)
    plt.legend(frameon=False, bbox_to_anchor=(1.15,1))
    plt.show()
"""
Explanation: Plotting the solution time paths
We can use IPython widgets to construct a "poor man's" animation of the evolution of the Lorenz equations.
End of explanation
"""
# define the desired interpolation points (the number of points must be an integer)...
ti = np.linspace(t0, solution[-1, 0], 10000)

# ...and interpolate!
interp_solution = lorenz_ivp.interpolate(solution, ti, k=5, ext=2)
"""
Explanation: Step 2: Construct a continuous approximation to the solution.
Let's construct a continuous approximation to the solution of the Lorenz equations at 10000 evenly spaced points using a 5th order B-spline interpolation.
End of explanation
"""
# extract the components of the solution trajectory
t = solution[:, 0]
x_vals = interp_solution[:, 1]
y_vals = interp_solution[:, 2]
z_vals = interp_solution[:, 3]

# xy phase space projection
fig, axes = plt.subplots(1, 3, figsize=(12, 6), sharex=True, sharey=True, squeeze=False)
axes[0,0].plot(x_vals, y_vals, 'r', alpha=0.5)
axes[0,0].set_xlabel('$x$', fontsize=20, rotation='horizontal')
axes[0,0].set_ylabel('$y$', fontsize=20, rotation='horizontal')
axes[0,0].set_title('$x,y$-plane', fontsize=20)
axes[0,0].grid()

# xz phase space projection
axes[0,1].plot(x_vals, z_vals , 'b', alpha=0.5)
axes[0,1].set_xlabel('$x$', fontsize=20, rotation='horizontal')
axes[0,1].set_ylabel('$z$', fontsize=20, rotation='horizontal')
axes[0,1].set_title('$x,z$-plane', fontsize=20)
axes[0,1].grid()

# yz phase space projection
axes[0,2].plot(y_vals, z_vals , 'g', alpha=0.5)
axes[0,2].set_xlabel('$y$', fontsize=20, rotation='horizontal')
axes[0,2].set_ylabel('$z$', fontsize=20, rotation='horizontal')
axes[0,2].set_title('$y,z$-plane', fontsize=20)
axes[0,2].grid()

plt.suptitle('Phase space projections', x=0.5, y=1.05, fontsize=25)
plt.tight_layout()
plt.show()
"""
Explanation: Plotting 2D projections of the solution in phase space
The underlying structure of the Lorenz system becomes more apparent when I plot the interpolated solution trajectories in phase space. We can plot 2D projections of the time paths of the solution in phase space...
End of explanation
"""
# compute the residual
ti = np.linspace(0, solution[-1,0], 10000)
residual = lorenz_ivp.compute_residual(solution, ti, k=5)
"""
Explanation: Step 3: Compute the residuals to assess the accuracy of our solution
Finally, to assess the accuracy of our solution we need to compute and plot the solution residuals at 10000 evenly spaced points using a 5th order B-spline interpolation.
End of explanation
"""
# extract the raw residuals
x_residuals = residual[:, 1]
y_residuals = residual[:, 2]
z_residuals = residual[:, 3]

# normalize each residual by the magnitude of the corresponding variable
# (the Lorenz coordinates change sign, so divide by the absolute value)
norm_x_residuals = np.abs(x_residuals) / np.abs(x_vals)
norm_y_residuals = np.abs(y_residuals) / np.abs(y_vals)
norm_z_residuals = np.abs(z_residuals) / np.abs(z_vals)

# create the plot
fig = plt.figure(figsize=(8, 6))
plt.plot(ti, norm_x_residuals, 'r-', label='$x(t)$', alpha=0.5)
plt.plot(ti, norm_y_residuals, 'b-', label='$y(t)$', alpha=0.5)
plt.plot(ti, norm_z_residuals, 'g-', label='$z(t)$', alpha=0.5)
plt.axhline(np.finfo('float').eps, linestyle='dashed', color='k', label='Machine eps')
plt.xlabel('Time', fontsize=15)
plt.ylim(1e-16, 1)
plt.ylabel('Residuals', fontsize=15)
plt.yscale('log')
plt.title('Lorenz equations residuals', fontsize=20)
plt.grid()
plt.legend(loc='best', frameon=False)
plt.show()
"""
Explanation: Again, we want to confirm that the residuals are "small" everywhere. Patterns, if they exist, are not cause for concern.
End of explanation
"""
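An independent integration can complement the residual check above. The short sketch below re-integrates the Lorenz system with scipy.integrate.solve_ivp and compares it against the B-spline interpolation of the ivp.IVP solution over a short horizon (the system is chaotic, so agreement over long horizons should not be expected). It assumes the lorenz_system function, lorenz_params, lorenz_ivp and solution defined earlier, and that interpolate returns columns ordered as (t, x, y, z).

from scipy.integrate import solve_ivp

# short comparison horizon: chaotic trajectories diverge quickly,
# so long-horizon agreement is not a meaningful test
t_check = np.linspace(0.0, 10.0, 1001)

# independent re-integration with an explicit Runge-Kutta method
check = solve_ivp(lambda t, X: lorenz_system(t, X, *lorenz_params),
                  (0.0, 10.0), [1.0, 1.0, 1.0],
                  method='RK45', t_eval=t_check, rtol=1e-9, atol=1e-12)

# interpolate the ivp.IVP solution at the same points and compare component-wise
check_interp = lorenz_ivp.interpolate(solution, t_check, k=5)
print('max abs difference over [0, 10]:', np.max(np.abs(check.y.T - check_interp[:, 1:])))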
tensorflow/docs-l10n
site/zh-cn/tutorials/estimator/premade.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 The TensorFlow Authors. End of explanation """ import tensorflow as tf import pandas as pd """ Explanation: 预创建的 Estimators <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/estimator/premade"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />在 tensorFlow.google.cn 上查看</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/estimator/premade.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />在 Google Colab 中运行</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/estimator/premade.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />在 GitHub 上查看源代码</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/estimator/premade.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />下载 notebook</a> </td> </table> Note: 我们的 TensorFlow 社区翻译了这些文档。因为社区翻译是尽力而为, 所以无法保证它们是最准确的,并且反映了最新的 官方英文文档。如果您有改进此翻译的建议, 请提交 pull request 到 tensorflow/docs GitHub 仓库。要志愿地撰写或者审核译文,请加入 docs-zh-cn@tensorflow.org Google Group。 本教程将向您展示如何使用 Estimators 解决 Tensorflow 中的鸢尾花(Iris)分类问题。Estimator 是 Tensorflow 完整模型的高级表示,它被设计用于轻松扩展和异步训练。更多细节请参阅 Estimators。 请注意,在 Tensorflow 2.0 中,Keras API 可以完成许多相同的任务,而且被认为是一个更易学习的API。如果您刚刚开始入门,我们建议您从 Keras 开始。有关 Tensorflow 2.0 中可用高级API的更多信息,请参阅 Keras标准化。 首先要做的事 为了开始,您将首先导入 Tensorflow 和一系列您需要的库。 End of explanation """ CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species'] SPECIES = ['Setosa', 'Versicolor', 'Virginica'] """ Explanation: 数据集 本文档中的示例程序构建并测试了一个模型,该模型根据花萼和花瓣的大小将鸢尾花分成三种物种。 您将使用鸢尾花数据集训练模型。该数据集包括四个特征和一个标签。这四个特征确定了单个鸢尾花的以下植物学特征: 花萼长度 花萼宽度 花瓣长度 花瓣宽度 根据这些信息,您可以定义一些有用的常量来解析数据: End of explanation """ train_path = tf.keras.utils.get_file( "iris_training.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv") test_path = tf.keras.utils.get_file( "iris_test.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv") train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0) test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0) """ Explanation: 接下来,使用 Keras 与 Pandas 下载并解析鸢尾花数据集。注意为训练和测试保留不同的数据集。 End of explanation """ train.head() """ Explanation: 通过检查数据您可以发现有四列浮点型特征和一列 int32 型标签。 End of explanation """ train_y = train.pop('Species') test_y = test.pop('Species') # 标签列现已从数据中删除 train.head() """ Explanation: 对于每个数据集都分割出标签,模型将被训练来预测这些标签。 End of explanation """ def input_evaluation_set(): features = {'SepalLength': np.array([6.4, 5.0]), 'SepalWidth': np.array([2.8, 2.3]), 'PetalLength': np.array([5.6, 3.3]), 'PetalWidth': np.array([2.2, 1.0])} labels = np.array([2, 1]) return features, labels """ Explanation: Estimator 编程概述 现在您已经设定好了数据,您可以使用 Tensorflow Estimator 定义模型。Estimator 是从 
tf.estimator.Estimator 中派生的任何类。Tensorflow提供了一组tf.estimator(例如,LinearRegressor)来实现常见的机器学习算法。此外,您可以编写您自己的自定义 Estimator。入门阶段我们建议使用预创建的 Estimator。 为了编写基于预创建的 Estimator 的 Tensorflow 项目,您必须完成以下工作: 创建一个或多个输入函数 定义模型的特征列 实例化一个 Estimator,指定特征列和各种超参数。 在 Estimator 对象上调用一个或多个方法,传递合适的输入函数以作为数据源。 我们来看看这些任务是如何在鸢尾花分类中实现的。 创建输入函数 您必须创建输入函数来提供用于训练、评估和预测的数据。 输入函数是一个返回 tf.data.Dataset 对象的函数,此对象会输出下列含两个元素的元组: features——Python字典,其中: 每个键都是特征名称 每个值都是包含此特征所有值的数组 label 包含每个样本的标签的值的数组。 为了向您展示输入函数的格式,请查看下面这个简单的实现: End of explanation """ def input_fn(features, labels, training=True, batch_size=256): """An input function for training or evaluating""" # 将输入转换为数据集。 dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels)) # 如果在训练模式下混淆并重复数据。 if training: dataset = dataset.shuffle(1000).repeat() return dataset.batch(batch_size) """ Explanation: 您的输入函数可以以您喜欢的方式生成 features 字典与 label 列表。但是,我们建议使用 Tensorflow 的 Dataset API,该 API 可以用来解析各种类型的数据。 Dataset API 可以为您处理很多常见情况。例如,使用 Dataset API,您可以轻松地从大量文件中并行读取记录,并将它们合并为单个数据流。 为了简化此示例,我们将使用 pandas 加载数据,并利用此内存数据构建输入管道。 End of explanation """ # 特征列描述了如何使用输入。 my_feature_columns = [] for key in train.keys(): my_feature_columns.append(tf.feature_column.numeric_column(key=key)) """ Explanation: 定义特征列(feature columns) 特征列(feature columns)是一个对象,用于描述模型应该如何使用特征字典中的原始输入数据。当您构建一个 Estimator 模型的时候,您会向其传递一个特征列的列表,其中包含您希望模型使用的每个特征。tf.feature_column 模块提供了许多为模型表示数据的选项。 对于鸢尾花问题,4 个原始特征是数值,因此我们将构建一个特征列的列表,以告知 Estimator 模型将 4 个特征都表示为 32 位浮点值。故创建特征列的代码如下所示: End of explanation """ # 构建一个拥有两个隐层,隐藏节点分别为 30 和 10 的深度神经网络。 classifier = tf.estimator.DNNClassifier( feature_columns=my_feature_columns, # 隐层所含结点数量分别为 30 和 10. hidden_units=[30, 10], # 模型必须从三个类别中做出选择。 n_classes=3) """ Explanation: 特征列可能比上述示例复杂得多。您可以从指南获取更多关于特征列的信息。 我们已经介绍了如何使模型表示原始特征,现在您可以构建 Estimator 了。 实例化 Estimator 鸢尾花为题是一个经典的分类问题。幸运的是,Tensorflow 提供了几个预创建的 Estimator 分类器,其中包括: tf.estimator.DNNClassifier 用于多类别分类的深度模型 tf.estimator.DNNLinearCombinedClassifier 用于广度与深度模型 tf.estimator.LinearClassifier 用于基于线性模型的分类器 对于鸢尾花问题,tf.estimator.DNNClassifier 似乎是最好的选择。您可以这样实例化该 Estimator: End of explanation """ # 训练模型。 classifier.train( input_fn=lambda: input_fn(train, train_y, training=True), steps=5000) """ Explanation: ## 训练、评估和预测 我们已经有一个 Estimator 对象,现在可以调用方法来执行下列操作: 训练模型。 评估经过训练的模型。 使用经过训练的模型进行预测。 训练模型 通过调用 Estimator 的 Train 方法来训练模型,如下所示: End of explanation """ eval_result = classifier.evaluate( input_fn=lambda: input_fn(test, test_y, training=False)) print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result)) """ Explanation: 注意将 input_fn 调用封装在 lambda 中以获取参数,同时提供不带参数的输入函数,如 Estimator 所预期的那样。step 参数告知该方法在训练多少步后停止训练。 评估经过训练的模型 现在模型已经经过训练,您可以获取一些关于模型性能的统计信息。代码块将在测试数据上对经过训练的模型的准确率(accuracy)进行评估: End of explanation """ # 由模型生成预测 expected = ['Setosa', 'Versicolor', 'Virginica'] predict_x = { 'SepalLength': [5.1, 5.9, 6.9], 'SepalWidth': [3.3, 3.0, 3.1], 'PetalLength': [1.7, 4.2, 5.4], 'PetalWidth': [0.5, 1.5, 2.1], } def input_fn(features, batch_size=256): """An input function for prediction.""" # 将输入转换为无标签数据集。 return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size) predictions = classifier.predict( input_fn=lambda: input_fn(predict_x)) """ Explanation: 与对 train 方法的调用不同,我们没有传递 steps 参数来进行评估。用于评估的 input_fn 只生成一个 epoch 的数据。 eval_result 字典亦包含 average_loss(每个样本的平均误差),loss(每个 mini-batch 的平均误差)与 Estimator 的 global_step(经历的训练迭代次数)值。 利用经过训练的模型进行预测(推理) 我们已经有一个经过训练的模型,可以生成准确的评估结果。我们现在可以使用经过训练的模型,根据一些无标签测量结果预测鸢尾花的品种。与训练和评估一样,我们使用单个函数调用进行预测: End of explanation """ for pred_dict, expec in zip(predictions, expected): class_id = 
pred_dict['class_ids'][0] probability = pred_dict['probabilities'][class_id] print('Prediction is "{}" ({:.1f}%), expected "{}"'.format( SPECIES[class_id], 100 * probability, expec)) """ Explanation: predict 方法返回一个 Python 可迭代对象,为每个样本生成一个预测结果字典。以下代码输出了一些预测及其概率: End of explanation """
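The introduction above notes that, in TensorFlow 2, the Keras API can accomplish much of the same task and is generally easier to learn. As a point of comparison, the sketch below is a rough Keras counterpart to the DNNClassifier used here; the layer sizes mirror hidden_units=[30, 10], while the optimizer, epoch count and batch size are arbitrary illustrative choices.

# A rough Keras counterpart to the Estimator above, trained on the same DataFrames
# (train/test with the Species label already popped into train_y/test_y).
keras_model = tf.keras.Sequential([
    tf.keras.layers.Dense(30, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(3)  # unnormalized logits for the three iris species
])

keras_model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])

keras_model.fit(train.values.astype('float32'), train_y.values, epochs=100, batch_size=32, verbose=0)
print(keras_model.evaluate(test.values.astype('float32'), test_y.values, verbose=0))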
scotthuang1989/Python-3-Module-of-the-Week
text/difflib.ipynb
apache-2.0
d = difflib.Differ() diff = d.compare(text1_lines,text2_lines) print('\n'.join(diff)) """ Explanation: Comparing Bodies of Text The Differ class works on sequences of text lines and produces human-readable deltas, or change instructions, including differences within individual lines. The default output produced by Differ is similar to the diff command-line tool under Unix. It includes the original input values from both lists, including common values, and markup data to indicate which changes were made. Lines prefixed with - were in the first sequence, but not the second. ines prefixed with + were in the second sequence, but not the first. If a line has an incremental difference between versions, an extra line prefixed with ? is used to highlight the change within the new version. If a line has not changed, it is printed with an extra blank space on the left column so that it is aligned with the other output that may have differences. End of explanation """ diff = difflib.unified_diff( text1_lines, text2_lines, lineterm='', ) print('\n'.join(list(diff))) """ Explanation: Other Output Formats While the Differ class shows all of the input lines, a unified diff includes only the modified lines and a bit of context. The unified_diff() function produces this sort of output. End of explanation """ from difflib import SequenceMatcher def show_results(match): print(' a = {}'.format(match.a)) print(' b = {}'.format(match.b)) print(' size = {}'.format(match.size)) i, j, k = match print(' A[a:a+size] = {!r}'.format(A[i:i + k])) print(' B[b:b+size] = {!r}'.format(B[j:j + k])) A = " abcd" B = "abcd abcd" print('A = {!r}'.format(A)) print('B = {!r}'.format(B)) print('\nWithout junk detection:') s1 = SequenceMatcher(None, A, B) match1 = s1.find_longest_match(0, len(A), 0, len(B)) show_results(match1) print('\nTreat spaces as junk:') s2 = SequenceMatcher(lambda x: x == " ", A, B) match2 = s2.find_longest_match(0, len(A), 0, len(B)) show_results(match2) match1 """ Explanation: SequenceMathcer End of explanation """ modify_instruction = s2.get_opcodes() modify_instruction s1 = [1, 2, 3, 5, 6, 4] s2 = [2, 3, 5, 4, 6, 1] print('Initial data:') print('s1 =', s1) print('s2 =', s2) print('s1 == s2:', s1 == s2) print() matcher = difflib.SequenceMatcher(None, s1, s2) for tag, i1, i2, j1, j2 in reversed(matcher.get_opcodes()): if tag == 'delete': print('Remove {} from positions [{}:{}]'.format( s1[i1:i2], i1, i2)) print(' before =', s1) del s1[i1:i2] elif tag == 'equal': print('s1[{}:{}] and s2[{}:{}] are the same'.format( i1, i2, j1, j2)) elif tag == 'insert': print('Insert {} from s2[{}:{}] into s1 at {}'.format( s2[j1:j2], j1, j2, i1)) print(' before =', s1) s1[i1:i2] = s2[j1:j2] elif tag == 'replace': print(('Replace {} from s1[{}:{}] ' 'with {} from s2[{}:{}]').format( s1[i1:i2], i1, i2, s2[j1:j2], j1, j2)) print(' before =', s1) s1[i1:i2] = s2[j1:j2] print(' after =', s1, '\n') print('s1 == s2:', s1 == s2) """ Explanation: Modify first text to second End of explanation """
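The module also provides a couple of small helpers built on top of SequenceMatcher for fuzzy matching. The snippet below is a brief illustration; the candidate word list is made up for the example.

import difflib

candidates = ['apple', 'ample', 'april', 'maple', 'appeal']

# get_close_matches() returns the best "good enough" matches, best first
print(difflib.get_close_matches('appel', candidates, n=3, cutoff=0.6))

# SequenceMatcher.ratio() gives a similarity score between 0.0 and 1.0
print(difflib.SequenceMatcher(None, 'appel', 'apple').ratio())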
deepmind/deepmind-research
rl_unplugged/bsuite.ipynb
apache-2.0
# @title Installation !pip install dm-acme !pip install dm-acme[reverb] !pip install dm-acme[tf] !pip install dm-sonnet !pip install dopamine-rl==3.1.2 !pip install atari-py !pip install dm_env !git clone https://github.com/deepmind/deepmind-research.git %cd deepmind-research !git clone https://github.com/deepmind/bsuite.git !pip install -q bsuite/ # @title Imports import copy import functools from typing import Dict, Tuple import acme from acme.agents.tf import actors from acme.agents.tf.dqn import learning as dqn from acme.tf import utils as acme_utils from acme.utils import loggers import sonnet as snt import tensorflow as tf import numpy as np import tree import dm_env import reverb from acme.wrappers import base as wrapper_base from acme.wrappers import single_precision import bsuite # @title Data Loading Utilities def _parse_seq_tf_example(example, shapes, dtypes): """Parse tf.Example containing one or two episode steps.""" def to_feature(shape, dtype): if np.issubdtype(dtype, np.floating): return tf.io.FixedLenSequenceFeature( shape=shape, dtype=tf.float32, allow_missing=True) elif dtype == np.bool or np.issubdtype(dtype, np.integer): return tf.io.FixedLenSequenceFeature( shape=shape, dtype=tf.int64, allow_missing=True) else: raise ValueError(f'Unsupported type {dtype} to ' f'convert from TF Example.') feature_map = {} for k, v in shapes.items(): feature_map[k] = to_feature(v, dtypes[k]) parsed = tf.io.parse_single_example(example, features=feature_map) restructured = {} for k, v in parsed.items(): dtype = tf.as_dtype(dtypes[k]) if v.dtype == dtype: restructured[k] = parsed[k] else: restructured[k] = tf.cast(parsed[k], dtype) return restructured def _build_sars_example(sequences): """Convert raw sequences into a Reverb SARS' sample.""" o_tm1 = tree.map_structure(lambda t: t[0], sequences['observation']) o_t = tree.map_structure(lambda t: t[1], sequences['observation']) a_tm1 = tree.map_structure(lambda t: t[0], sequences['action']) r_t = tree.map_structure(lambda t: t[0], sequences['reward']) p_t = tree.map_structure( lambda d, st: d[0] * tf.cast(st[1] != dm_env.StepType.LAST, d.dtype), sequences['discount'], sequences['step_type']) info = reverb.SampleInfo(key=tf.constant(0, tf.uint64), probability=tf.constant(1.0, tf.float64), table_size=tf.constant(0, tf.int64), priority=tf.constant(1.0, tf.float64)) return reverb.ReplaySample(info=info, data=( o_tm1, a_tm1, r_t, p_t, o_t)) def bsuite_dataset_params(env): """Return shapes and dtypes parameters for bsuite offline dataset.""" shapes = { 'observation': env.observation_spec().shape, 'action': env.action_spec().shape, 'discount': env.discount_spec().shape, 'reward': env.reward_spec().shape, 'episodic_reward': env.reward_spec().shape, 'step_type': (), } dtypes = { 'observation': env.observation_spec().dtype, 'action': env.action_spec().dtype, 'discount': env.discount_spec().dtype, 'reward': env.reward_spec().dtype, 'episodic_reward': env.reward_spec().dtype, 'step_type': np.int64, } return {'shapes': shapes, 'dtypes': dtypes} def bsuite_dataset(path: str, shapes: Dict[str, Tuple[int]], dtypes: Dict[str, type], # pylint:disable=g-bare-generic num_threads: int, batch_size: int, num_shards: int, shuffle_buffer_size: int = 100000, shuffle: bool = True) -> tf.data.Dataset: """Create tf dataset for training.""" filenames = [f'{path}-{i:05d}-of-{num_shards:05d}' for i in range( num_shards)] file_ds = tf.data.Dataset.from_tensor_slices(filenames) if shuffle: file_ds = file_ds.repeat().shuffle(num_shards) example_ds = file_ds.interleave( 
functools.partial(tf.data.TFRecordDataset, compression_type='GZIP'), cycle_length=tf.data.experimental.AUTOTUNE, block_length=5) if shuffle: example_ds = example_ds.shuffle(shuffle_buffer_size) def map_func(example): example = _parse_seq_tf_example(example, shapes, dtypes) return example example_ds = example_ds.map(map_func, num_parallel_calls=num_threads) if shuffle: example_ds = example_ds.repeat().shuffle(batch_size * 10) example_ds = example_ds.map( _build_sars_example, num_parallel_calls=tf.data.experimental.AUTOTUNE) example_ds = example_ds.batch(batch_size, drop_remainder=True) example_ds = example_ds.prefetch(tf.data.experimental.AUTOTUNE) return example_ds def load_offline_bsuite_dataset( bsuite_id: str, path: str, batch_size: int, num_shards: int = 1, num_threads: int = 1, single_precision_wrapper: bool = True, shuffle: bool = True) -> Tuple[tf.data.Dataset, dm_env.Environment]: """Load bsuite offline dataset.""" # Data file path format: {path}-?????-of-{num_shards:05d} # The dataset is not deterministic and not repeated if shuffle = False. environment = bsuite.load_from_id(bsuite_id) if single_precision_wrapper: environment = single_precision.SinglePrecisionWrapper(environment) params = bsuite_dataset_params(environment) dataset = bsuite_dataset(path=path, num_threads=num_threads, batch_size=batch_size, num_shards=num_shards, shuffle_buffer_size=2, shuffle=shuffle, **params) return dataset, environment """ Explanation: Copyright 2021 DeepMind Technologies Limited. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. RL Unplugged: Offline DQN - Bsuite Guide to training an Acme DQN agent on Bsuite data. <a href="https://colab.research.google.com/github/deepmind/deepmind_research/blob/master/rl_unplugged/atari_dqn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> End of explanation """ tmp_path = 'gs://rl_unplugged/bsuite' level = 'catch' dir = '0_0.0' filename = '0_full' path = f'{tmp_path}/{level}/{dir}/{filename}' batch_size = 2 #@param bsuite_id = level + '/0' dataset, environment = load_offline_bsuite_dataset(bsuite_id=bsuite_id, path=path, batch_size=batch_size) dataset = dataset.prefetch(1) """ Explanation: Dataset and environment End of explanation """ # Get total number of actions. num_actions = environment.action_spec().num_values obs_spec = environment.observation_spec() print(environment.observation_spec()) # Create the Q network. network = snt.Sequential([ snt.flatten, snt.nets.MLP([56, 56]), snt.nets.MLP([num_actions]) ]) acme_utils.create_variables(network, [environment.observation_spec()]) # Create a logger. logger = loggers.TerminalLogger(label='learner', time_delta=1.) # Create the DQN learner. 
learner = dqn.DQNLearner( network=network, target_network=copy.deepcopy(network), discount=0.99, learning_rate=3e-4, importance_sampling_exponent=0.2, target_update_period=2500, dataset=dataset, logger=logger) """ Explanation: DQN learner End of explanation """ for _ in range(10000): learner.step() """ Explanation: Training loop End of explanation """ # Create a logger. logger = loggers.TerminalLogger(label='evaluation', time_delta=1.) # Create an environment loop. policy_network = snt.Sequential([ network, lambda q: tf.argmax(q, axis=-1), ]) loop = acme.EnvironmentLoop( environment=environment, actor=actors.DeprecatedFeedForwardActor(policy_network=policy_network), logger=logger) loop.run(400) """ Explanation: Evaluation End of explanation """
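Beyond the environment-loop statistics, it can be useful to inspect the learned Q-values directly. The minimal sketch below queries the trained network for a single reset observation; it assumes the bsuite catch environment and the network defined earlier, and simply adds a batch dimension before the forward pass.

# Inspect the greedy action for a single observation from the environment.
timestep = environment.reset()
obs = tf.expand_dims(tf.convert_to_tensor(timestep.observation), axis=0)  # add batch dim

q_values = network(obs)
print('Q-values:', q_values.numpy())
print('Greedy action:', tf.argmax(q_values, axis=-1).numpy())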
daniestevez/jupyter_notebooks
CE5/CE-5 frame analysis ATA 2021-01-23.ipynb
gpl-3.0
def load_frames(path):
    frame_size = 220
    frames = np.fromfile(path, dtype = 'uint8')
    frames = frames[:frames.size//frame_size*frame_size].reshape((-1, frame_size))
    return frames

frames = load_frames('ATA_2021-01-23/ce5_frames_1.u8')
"""
Explanation: Here we look at some Chang'e 5 low data rate telemetry received with the Allen Telescope Array on 2021-01-23, during its transfer to the Sun-Earth L1 point. The recorded data corresponds to the frequency 8471.2 MHz, which corresponds to the lander. The frames are CCSDS concatenated frames with a frame size of 220 bytes.
End of explanation
"""
aos = [CE5_AOSFrame.parse(f) for f in frames]
collections.Counter([a.primary_header.transfer_frame_version_number for a in aos])
collections.Counter([a.primary_header.spacecraft_id for a in aos if a.primary_header.transfer_frame_version_number == 1])
collections.Counter([a.primary_header.virtual_channel_id for a in aos if a.primary_header.transfer_frame_version_number == 1 and a.primary_header.spacecraft_id == 108])
"""
Explanation: AOS frames
AOS frames come from spacecraft 91 and virtual channels 1 and 2. Other combinations are most likely corrupted frames, despite the fact that the Reed-Solomon decoder was successful.
End of explanation
"""
[a.primary_header for a in aos if a.primary_header.virtual_channel_id == 1][:10]
vc1 = [a for a in aos if a.primary_header.virtual_channel_id == 1]
fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc1])
[a.insert_zone for a in aos[:10]]
t_vc1 = np.datetime64('2012-08-01') + np.timedelta64(1, 's') * np.array([a.insert_zone.timestamp for a in vc1])
plt.figure(figsize = (10,6), facecolor = 'w')
plt.plot(t_vc1, fc, '.')
plt.title("Chang'e 5 virtual channel 1 timestamps")
plt.xlabel('AOS frame timestamp')
plt.ylabel('AOS virtual channel frame counter');
plt.figure(figsize = (10,6), facecolor = 'w')
plt.plot(t_vc1[1:], np.diff(fc)-1, '.')
plt.title("Chang'e 5 spacecraft 91 virtual channel 1 frame loss")
plt.xlabel('AOS frame timestamp')
plt.ylabel('Frame loss')
plt.ylim((-1,50));
"""
Explanation: Virtual channel 1
The vast majority of frames belong to virtual channel 1, which seems to send real-time telemetry.
End of explanation
"""
vc1_packets = list(ccsds.extract_space_packets(vc1, 108, 1, get_timestamps = True))
vc1_sp_headers = [ccsds.SpacePacketPrimaryHeader.parse(p[0]) for p in vc1_packets]
"""
Explanation: We need to sort the data, since the different files we've loaded up are not in chronological order.
End of explanation
"""
vc1_apids = collections.Counter([p.APID for p in vc1_sp_headers])
vc1_apids
vc1_by_apid = {apid : [p for h,p in zip(vc1_sp_headers, vc1_packets) if h.APID == apid] for apid in vc1_apids}
plot_apids(vc1_by_apid, 108, 1)
"""
Explanation: There are space packets in many APIDs. The contents of each APID are shown below in plot form, but it's not easy to guess what any of the values mean.
End of explanation
"""
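To get a rough sense of how much of the downlink each APID accounts for, the packets grouped above can be summarized by count and total size. This sketch assumes, as suggested by the parse call above, that the first element of each entry in vc1_by_apid is the raw packet bytes.

# Summarize packet count and total bytes per APID, largest contributors first.
apid_bytes = {apid: sum(len(p[0]) for p in pkts) for apid, pkts in vc1_by_apid.items()}
for apid in sorted(apid_bytes, key=apid_bytes.get, reverse=True):
    print(f'APID {apid}: {len(vc1_by_apid[apid])} packets, {apid_bytes[apid]} bytes')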
tkurfurst/deep-learning
embeddings/Skip-Grams-Solution.ipynb
mit
import time import numpy as np import tensorflow as tf import utils """ Explanation: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
End of explanation """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(dataset_filename): with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar: urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read() """ Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. End of explanation """ words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words)))) """ Explanation: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. End of explanation """ vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] """ Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. End of explanation """ from collections import Counter import random threshold = 1e-5 word_counts = Counter(int_words) total_count = len(int_words) freqs = {word: count/total_count for word, count in word_counts.items()} p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts} train_words = [word for word in int_words if random.random() < (1 - p_drop[word])] """ Explanation: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. Check out my solution to see how I did it. Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is that probability that a word is discarded. Assign the subsampled data to train_words. 
End of explanation """ def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. ''' R = np.random.randint(1, window_size+1) start = idx - R if (idx - R) > 0 else 0 stop = idx + R target_words = set(words[start:idx] + words[idx+1:stop+1]) return list(target_words) idx = len(train_words)-1 train_words[idx] get_target(train_words, idx) t = train_words[idx-5:idx]+[train_words[idx]]+train_words[idx+1:idx+5+1] t """ Explanation: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you chose a random number of words to from the window. End of explanation """ def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y """ Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. End of explanation """ train_graph = tf.Graph() with train_graph.as_default(): inputs = tf.placeholder(tf.int32, [None], name='inputs') labels = tf.placeholder(tf.int32, [None, None], name='labels') """ Explanation: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. 
To make things work later, you'll need to set the second dimension of labels to None or 1. End of explanation """ n_vocab = len(int_to_vocab) n_embedding = 200 # Number of embedding features with train_graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs) """ Explanation: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. End of explanation """ # Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(n_vocab)) # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab) cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost) """ Explanation: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works. End of explanation """ with train_graph.as_default(): ## From Thushan Ganegedara's implementation valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # pick 8 samples from (0,100) and (1000,1100) each ranges. 
lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: !mkdir checkpoints epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. Training loss: {:.4f}".format(loss/100), "{:.4f} sec/batch".format((end-start)/100)) loss = 0 start = time.time() if iteration % 1000 == 0: # note that this is expensive (~20% slowdown if computed every 500 steps) sim = similarity.eval() for i in range(valid_size): valid_word = int_to_vocab[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = int_to_vocab[nearest[k]] log = '%s %s,' % (log, close_word) print(log) iteration += 1 save_path = saver.save(sess, "checkpoints/text8.ckpt") embed_mat = sess.run(normalized_embedding) """ Explanation: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. End of explanation """ with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding) """ Explanation: Restore the trained network if you need to: End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne[idx, :], color='steelblue') plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7) """ Explanation: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. End of explanation """
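Once embed_mat has been pulled out of the session, the embedding matrix can also be queried directly with NumPy, without TensorFlow. A small sketch using cosine similarity follows; the query word is arbitrary and assumed to be in the vocabulary.

def nearest_words(word, embed_mat, vocab_to_int, int_to_vocab, k=8):
    """Return the k words with the highest cosine similarity to `word`."""
    vec = embed_mat[vocab_to_int[word]]
    sims = embed_mat @ vec / (np.linalg.norm(embed_mat, axis=1) * np.linalg.norm(vec))
    closest = np.argsort(-sims)[1:k + 1]  # index 0 is the query word itself
    return [int_to_vocab[i] for i in closest]

print(nearest_words('january', embed_mat, vocab_to_int, int_to_vocab))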
harishkrao/DSE200x
Mini Project/Analysis on the Movie Lens dataset.ipynb
mit
# The first step is to import the dataset into a pandas dataframe.
import pandas as pd
#path = 'C:/Users/hrao/Documents/Personal/HK/Python/ml-20m/ml-20m/'
path = '/Users/Harish/Documents/HK_Work/Python/ml-20m/'
movies = pd.read_csv(path+'movies.csv')
movies.shape
tags = pd.read_csv(path+'tags.csv')
tags.shape
ratings = pd.read_csv(path+'ratings.csv')
ratings.shape
links = pd.read_csv(path+'links.csv')
links.shape
"""
Explanation: Analysis on the Movie Lens dataset using pandas
I am creating this notebook for the mini project of the course DSE200x - Python for Data Science on edX. The project requires each participant to complete the following steps:
- Selecting a dataset
- Exploring the dataset to identify what kinds of questions can be answered using it
- Identifying one research question
- Using pandas methods to explore the dataset - this also involves using visualization techniques with matplotlib
- Reporting findings/analyses
- Presenting the work in the given presentation template
Selecting a dataset
The mini project requires us to choose from among three datasets that have been explored through the course previously. I have selected the MovieLens dataset, also known as the IMDB Movie Dataset. The dataset is available for download here - https://grouplens.org/datasets/movielens/20m/
A description of the dataset, as shown on the website, is below:
This dataset (ml-20m) describes 5-star rating and free-text tagging activity from MovieLens, a movie recommendation service. It contains 20000263 ratings and 465564 tag applications across 27278 movies. These data were created by 138493 users between January 09, 1995 and March 31, 2015. This dataset was generated on October 17, 2016. Users were selected at random for inclusion. All selected users had rated at least 20 movies. No demographic information is included. Each user is represented by an id, and no other information is provided.
The data are contained in six files, genome-scores.csv, genome-tags.csv, links.csv, movies.csv, ratings.csv and tags.csv. More details about the contents and use of all these files follows.
This and other GroupLens data sets are publicly available for download at http://grouplens.org/datasets/.
End of explanation """ movies.head() tags.head() ratings.head() links.head() """ Explanation: Exploring the dataset Identifying the questions that can be answered using the dataset End of explanation """ # List of genres as a Python list genres = ['Action','Adventure','Animation','Children','Comedy','Crime','Documentary','Drama','Fantasy','Film-Noir','Horror','Musical','Mystery','Romance','Sci-Fi','Thriller','War','Western'] genres_rating_list = [] # The loop reads each element of the above list # For each iteration, one genre is selected from the movies data frame # This selection of the data frame is then merged with the rating data frame to get the rating for that genre # Once the new merged data frame is created, we use the mean function to get the mean rating for the genre # The genre and the corresponding mean rating are then appended to the genres_rating Data Frame # The entire looping takes long - can certainly be optimized for performance for i in range(len(genres)): fil = genres[i]+'_filter' mov = genres[i]+'_movies' rat = genres[i]+'_ratings' rat_mean = rat+'_mean' fil = movies['genres'].str.contains(genres[i]) mov = movies[fil] rat = mov.merge(ratings, on='movieId', how='inner') rat_mean = round(rat['rating'].mean(), 2) #print(genres[i], round(rat_mean,2)) genres_rating_list.append(rat_mean) df = {'Genre':genres, 'Genres Mean Rating':genres_rating_list} genres_rating = pd.DataFrame(df) genres_rating genres_rating['Genres Standard Deviation'] = genres_rating['Genres Mean Rating'].std() genres_rating['Mean'] = genres_rating['Genres Mean Rating'].mean() genres_rating['Zero'] = 0 genres_rating overall_mean = round(genres_rating['Genres Mean Rating'].mean(), 2) overall_std = round(genres_rating['Genres Mean Rating'].std(),2) scifi_rating = genres_rating[genres_rating['Genre'] == 'Sci-Fi']['Genres Mean Rating'] print(overall_mean) print(overall_std) print(scifi_rating) genres_rating['Diff from Mean'] = genres_rating['Genres Mean Rating'] - overall_mean genres_rating """ Explanation: Based on the above exploratory commands, I believe that the following questions can be answered using the dataset: Is there a correlation or a trend between the year of release of a movie and the genre? Which genres were more dominant in each decade of the range available in the dataset? Do science fiction movies tend to be rated more highly than other movie genres? For the mini-project, I have chosen question 3 for further analysis. 
Using pandas methods to explore the dataset Includes matplotlib visualization End of explanation """ genre_list = list(genres_rating['Genre']) genres_rating_list = list(genres_rating['Genres Mean Rating']) genres_diff_list = list(genres_rating['Diff from Mean']) %matplotlib inline import matplotlib.pyplot as plt plt.figure(figsize=(20, 10)) ax1 = plt.subplot(2,1,1) x = [x for x in range(0, 18)] xticks_genre_list = genre_list y = genres_rating_list plt.xticks(range(len(x)), xticks_genre_list) plt.scatter(x,y, color='g') plt.plot(x, genres_rating['Mean'], color="red") plt.autoscale(tight=True) #plt.rcParams["figure.figsize"] = (10,2) plt.title('Movie ratings by genre') plt.xlabel('Genre') plt.ylabel('Rating') plt.ylim(ymax = 4, ymin = 3) plt.grid(True) plt.savefig(r'movie-ratings-by-genre.png') plt.annotate("Sci-Fi Rating", xy=(14.25,3.5), xycoords='data', xytext=(14.20, 3.7), textcoords='data', arrowprops=dict(arrowstyle="->", connectionstyle="arc3"), ) for i,j in enumerate( y ): ax1.annotate( j, ( x[i] + 0.03, y[i] + 0.02)) ax2 = plt.subplot(2,1,2) x = [x for x in range(0, 18)] xticks_genre_list = genre_list y = genres_rating['Diff from Mean'] plt.xticks(range(len(x)), xticks_genre_list) plt.plot(x,y) plt.plot(x, genres_rating['Zero']) plt.autoscale(tight=True) #plt.rcParams["figure.figsize"] = (10,2) plt.title('Deviation of each genre\'s rating from the overall mean rating') plt.xlabel('Genre') plt.ylabel('Deviation from mean rating') plt.grid(True) plt.savefig(r'deviation-from-mean-rating.png') plt.annotate("Sci-Fi Rating", xy=(14,-0.13), xycoords='data', xytext=(14.00, 0.0), textcoords='data', arrowprops=dict(arrowstyle="->", connectionstyle="arc3"), ) plt.show() """ Explanation: Now that we have a data frame of information about each genre and the corresponding mean rating, we will visualize the data using matplotlib End of explanation """ # extract year of release of each movie from the title column # convert the data type of the movie_year column to numeric (from str) import numpy as np import re movies['movie_year'] = movies['title'] movies['movie_year'] = movies['movie_year'].str.extract(r"\(([0-9]+)\)", expand=False) # creating a new column with just the movie titles movies['title_only'] = movies['title'] movies['title_only'] = movies['title_only'].str.extract('(.*?)\s*\(', expand=False) movies['movie_year'].fillna(0, inplace=True) #Drop all rows containing incorrect year values - such as 0, 6, 69, 500 and -2147483648 movies.drop(movies[movies.movie_year == '0'].index, inplace=True) movies.drop(movies[movies.movie_year == '6'].index, inplace=True) movies.drop(movies[movies.movie_year == '06'].index, inplace=True) movies.drop(movies[movies.movie_year == '69'].index, inplace=True) movies.drop(movies[movies.movie_year == '500'].index, inplace=True) movies.drop(movies[movies.movie_year == '-2147483648'].index, inplace=True) movies.drop(movies[movies.movie_year == 0].index, inplace=True) movies.drop(movies[movies.movie_year == 6].index, inplace=True) movies.drop(movies[movies.movie_year == 69].index, inplace=True) movies.drop(movies[movies.movie_year == 500].index, inplace=True) movies.drop(movies[movies.movie_year == -2147483648].index, inplace=True) #convert the string values to numeric movies['movie_year'] = pd.to_datetime(movies['movie_year'], format='%Y') """ Explanation: Reporting findings/analyses Now that we have a couple plots, let us revisit the question we want to answer using the dataset. 
Again, the question is - Do science fiction movies tend to be rated more highly than other movie genres? The scatter plot shows the mean rating value for each genre. Each genre has a value on the scatter plot for the mean rating value for that genre. Let us now see if the plot is able to help us answer the question above. The mean rating for Sci-Fi genre is about 3.45. When looking at the plot, we see that there are only three other genres out of 18 genres in total, that have lesser mean ratings than Sci-Fi - Horror, Children and Comedy. The remaining 10 genres have mean ratings higher than Science Fiction. This gives us enough information to answer the question. Sci-Fi movies do not tend to be rated higher than other genres. The second plot, a bar plot, shows how much each genre's ratings deviate from the overall mean of ratings. Science Fiction is around -0.13 lower than the mean rating of 3.58, showing lesser deviation than Horror at the lower end and Film-Noir at the higher end. To conclude - no, science fiction movies are not rated higher than other movie genres. The ratings for science fiction movies hover around the mean ratings for all movies. I have submitted my work to the mini project section of the course. Now, we will explore the dataset further and try to answer the remaining questions I have listed at the beginning of the notebook. - Is there a correlation or a trend between the year of release of a movie and the genre? - Which genres were more dominant in each decade of the range available in the dataset? End of explanation """ movie_year = pd.DataFrame(movies['title_only'].groupby(movies['movie_year']).count()) movie_year.reset_index(inplace=True) X=movie_year['movie_year'] Y=movie_year['title_only'] plt.plot_date(X,Y,'bo-') plt.grid(True) plt.rcParams["figure.figsize"] = (15,5) plt.title('Number of movies per year') plt.xlabel('Years') plt.ylabel('Number of movies') plt.xlim('1885-01-01','2020-01-01') plt.show() """ Explanation: Now that we have a move year column, let us list the data types of the columns in the movies data frame. movie_year is of float64 datat type. We must convert the data type of the movie_year column to int64. Before we go ahead and do that, we must replace all NULL and inifinite entries in the column with zero. If we do not perform this step, we will get the following errror message. End of explanation """ movies.head() list(movies) a = pd.Series(movies.iloc[0]) a def flat(str1): c = pd.DataFrame(columns=list(movies)) for i in range(len(str1)): #print(str1[i]) if i == 2: a = str1[i].split('|') for j in range(len(a)): c.loc[j] = [str1[0], str1[1], a[j], str1[3], str1[4]] return c c = flat(a) c """ Explanation: The above plot provides some interesting insight: * There was a steady increase in the number of movies after 1930 and till 2008. * In this dataset, 2009 was the year when the highest number of movies were produced - 1112 in all. * The decades between 1970 and 2000 saw the highest year-on-year increase in the number of movies produced. * 2014 saw a sharp drop in the nimber of movies produced, from 1011 in 2013 to only 740 movies. * The movie count of 2015 is only 120. This could possibly be due to the lack of information available for the entire year of 2015. End of explanation """
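"""
Explanation: As a starting point for the remaining questions listed above (which genres dominate each decade), here is a rough sketch of one possible approach, not part of the submitted mini project. It assumes the movies dataframe from the cells above, with its pipe-separated 'genres' column and the datetime 'movie_year' column created earlier.
End of explanation
"""
# One indicator column per genre, then count titles per decade of release
genre_dummies = movies['genres'].str.get_dummies(sep='|')
decade = (movies['movie_year'].dt.year // 10) * 10
genre_by_decade = genre_dummies.groupby(decade).sum()
genre_by_decade.tail()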
macks22/gensim
docs/notebooks/keras_wrapper.ipynb
lgpl-2.1
from gensim.models import word2vec """ Explanation: Using wrappers for Gensim models for working with Keras This tutorial is about using gensim models as a part of your Keras models. The wrappers available (as of now) are : * Word2Vec (uses the function get_embedding_layer defined in gensim.models.keyedvectors) Word2Vec To use Word2Vec, we import the corresponding module. End of explanation """ sentences = [ ['human', 'interface', 'computer'], ['survey', 'user', 'computer', 'system', 'response', 'time'], ['eps', 'user', 'interface', 'system'], ['system', 'human', 'system', 'eps'], ['user', 'response', 'time'], ['trees'], ['graph', 'trees'], ['graph', 'minors', 'trees'], ['graph', 'minors', 'survey'] ] """ Explanation: Next we create a dummy set of sentences to train our Word2Vec model. End of explanation """ model = word2vec.Word2Vec(sentences, size=100, min_count=1, hs=1) """ Explanation: Then, we create the Word2Vec model by passing appropriate parameters. End of explanation """ import numpy as np from keras.engine import Input from keras.models import Model from keras.layers.merge import dot """ Explanation: Integration with Keras : Cosine Similarity Task As an example of integration of Gensim's Word2Vec model with Keras, we consider a word similarity task where we compute the cosine distance as a measure of similarity between the two words. End of explanation """ wv = model.wv embedding_layer = wv.get_embedding_layer() """ Explanation: We would use the layer returned by the function get_embedding_layer in the Keras model. End of explanation """ input_a = Input(shape=(1,), dtype='int32', name='input_a') input_b = Input(shape=(1,), dtype='int32', name='input_b') embedding_a = embedding_layer(input_a) embedding_b = embedding_layer(input_b) similarity = dot([embedding_a, embedding_b], axes=2, normalize=True) keras_model = Model(input=[input_a, input_b], output=similarity) keras_model.compile(optimizer='sgd', loss='mse') """ Explanation: Next, we construct the Keras model. End of explanation """ word_a = 'graph' word_b = 'trees' # output is the cosine distance between the two words (as a similarity measure) output = keras_model.predict([np.asarray([model.wv.vocab[word_a].index]), np.asarray([model.wv.vocab[word_b].index])]) print output """ Explanation: Now, we input the two words which we wish to compare and retrieve the value predicted by the model as the similarity score of the two words. End of explanation """ import os import sys import keras import numpy as np from gensim.models import word2vec from keras.models import Model from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.utils.np_utils import to_categorical from keras.layers import Input, Dense, Flatten from keras.layers import Conv1D, MaxPooling1D from sklearn.datasets import fetch_20newsgroups """ Explanation: Integration with Keras : 20NewsGroups Task To see how Gensim's Word2Vec model could be integrated with Keras while dealing with a real supervised (classification) task, we consider the 20NewsGroups task. Here, we take a smaller version of this data by taking a subset of the documents to be classified. First, we import the necessary modules. 
End of explanation """ texts = [] # list of text samples texts_w2v = [] # used to train the word embeddings labels = [] # list of label ids #using 3 categories for training the classifier data = fetch_20newsgroups(subset='train', categories=['alt.atheism', 'comp.graphics', 'sci.space']) for index in range(len(data)): label_id = data.target[index] file_data = data.data[index] i = file_data.find('\n\n') # skip header if i > 0: file_data = file_data[i:] try: curr_str = str(file_data) sentence_list = curr_str.split('\n') for sentence in sentence_list: sentence = (sentence.strip()).lower() texts.append(sentence) texts_w2v.append(sentence.split(' ')) labels.append(label_id) except: None """ Explanation: As the first step of the task, we iterate over the folder in which our text samples are stored, and format them into a list of samples. Also, we prepare at the same time a list of class indices matching the samples. End of explanation """ MAX_SEQUENCE_LENGTH = 1000 # Vectorize the text samples into a 2D integer tensor tokenizer = Tokenizer() tokenizer.fit_on_texts(texts) sequences = tokenizer.texts_to_sequences(texts) # word_index = tokenizer.word_index data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH) labels = to_categorical(np.asarray(labels)) x_train = data y_train = labels """ Explanation: Then, we format our text samples and labels into tensors that can be fed into a neural network. To do this, we rely on Keras utilities keras.preprocessing.text.Tokenizer and keras.preprocessing.sequence.pad_sequences. End of explanation """ Keras_w2v = word2vec.Word2Vec(min_count=1) Keras_w2v.build_vocab(texts_w2v) Keras_w2v.train(texts, total_examples=Keras_w2v.corpus_count, epochs=Keras_w2v.iter) Keras_w2v_wv = Keras_w2v.wv embedding_layer = Keras_w2v_wv.get_embedding_layer() """ Explanation: As the next step, we prepare the embedding layer to be used in our actual Keras model. End of explanation """ sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32') embedded_sequences = embedding_layer(sequence_input) x = Conv1D(128, 5, activation='relu')(embedded_sequences) x = MaxPooling1D(5)(x) x = Conv1D(128, 5, activation='relu')(x) x = MaxPooling1D(5)(x) x = Conv1D(128, 5, activation='relu')(x) x = MaxPooling1D(35)(x) # global max pooling x = Flatten()(x) x = Dense(128, activation='relu')(x) preds = Dense(y_train.shape[1], activation='softmax')(x) model = Model(sequence_input, preds) model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['acc']) model.fit(x_train, y_train, epochs=5) """ Explanation: Finally, we create a small 1D convnet to solve our classification problem. End of explanation """ from keras.models import Sequential from keras.layers import Dropout from keras.regularizers import l2 from keras.models import Model from keras.engine import Input from keras.preprocessing.sequence import pad_sequences from keras.preprocessing.text import Tokenizer from gensim.models import keyedvectors from collections import defaultdict import pandas as pd """ Explanation: As can be seen from the results above, the accuracy obtained is not that high. This is because of the small size of training data used and we could expect to obtain better accuracy for training data of larger size. Integration with Keras : Another classification task In this task, we train our model to predict the category of the input text. 
We start by importing the relevant modules and libraries : End of explanation """ # global variables nb_filters = 1200 # number of filters n_gram = 2 # n-gram, or window size of CNN/ConvNet maxlen = 15 # maximum number of words in a sentence vecsize = 300 # length of the embedded vectors in the model cnn_dropout = 0.0 # dropout rate for CNN/ConvNet final_activation = 'softmax' # activation function. Options: softplus, softsign, relu, tanh, sigmoid, hard_sigmoid, linear. dense_wl2reg = 0.0 # dense_wl2reg: L2 regularization coefficient dense_bl2reg = 0.0 # dense_bl2reg: L2 regularization coefficient for bias optimizer = 'adam' # optimizer for gradient descent. Options: sgd, rmsprop, adagrad, adadelta, adam, adamax, nadam # utility functions def retrieve_csvdata_as_dict(filepath): """ Retrieve the training data in a CSV file, with the first column being the class labels, and second column the text data. It returns a dictionary with the class labels as keys, and a list of short texts as the value for each key. """ df = pd.read_csv(filepath) category_col, descp_col = df.columns.values.tolist() shorttextdict = dict() for category, descp in zip(df[category_col], df[descp_col]): if type(descp) == str: shorttextdict.setdefault(category, []).append(descp) return shorttextdict def subjectkeywords(): """ Return an example data set, with three subjects and corresponding keywords. This is in the format of the training input. """ data_path = os.path.join(os.getcwd(), 'datasets/keras_classifier_training_data.csv') return retrieve_csvdata_as_dict(data_path) def convert_trainingdata(classdict): """ Convert the training data into format put into the neural networks. """ classlabels = classdict.keys() lblidx_dict = dict(zip(classlabels, range(len(classlabels)))) # tokenize the words, and determine the word length phrases = [] indices = [] for label in classlabels: for shorttext in classdict[label]: shorttext = shorttext if type(shorttext) == str else '' category_bucket = [0]*len(classlabels) category_bucket[lblidx_dict[label]] = 1 indices.append(category_bucket) phrases.append(shorttext) return classlabels, phrases, indices def process_text(text): """ Process the input text by tokenizing and padding it. """ tokenizer = Tokenizer() tokenizer.fit_on_texts(text) x_train = tokenizer.texts_to_sequences(text) x_train = pad_sequences(x_train, maxlen=maxlen) return x_train """ Explanation: We now define some global variables and utility functions which would be used in the code further : End of explanation """ # we are training our Word2Vec model here w2v_training_data_path = os.path.join(os.getcwd(), 'datasets/word_vectors_training_data.txt') input_data = word2vec.LineSentence(w2v_training_data_path) w2v_model = word2vec.Word2Vec(input_data, size=300) w2v_model_wv = w2v_model.wv # Alternatively we could have imported pre-trained word-vectors like : # w2v_model_wv = keyedvectors.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True) # The dataset 'GoogleNews-vectors-negative300.bin.gz' can be downloaded from https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit """ Explanation: We create our word2vec model first. We could either train our model or user pre-trained vectors. End of explanation """ trainclassdict = subjectkeywords() nb_labels = len(trainclassdict) # number of class labels """ Explanation: We load the training data for the Keras model. 
End of explanation
"""
# get embedding layer corresponding to our trained Word2Vec model
embedding_layer = w2v_model_wv.get_embedding_layer()

# create a convnet to solve our classification task
sequence_input = Input(shape=(maxlen,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(filters=nb_filters, kernel_size=n_gram, padding='valid', activation='relu',
           input_shape=(maxlen, vecsize))(embedded_sequences)
x = MaxPooling1D(pool_size=maxlen - n_gram + 1)(x)
x = Flatten()(x)
preds = Dense(nb_labels, activation=final_activation,
              kernel_regularizer=l2(dense_wl2reg), bias_regularizer=l2(dense_bl2reg))(x)
"""
Explanation: Next, we create our Keras model.
End of explanation
"""
classlabels, x_train, y_train = convert_trainingdata(trainclassdict)

tokenizer = Tokenizer()
tokenizer.fit_on_texts(x_train)
x_train = tokenizer.texts_to_sequences(x_train)
x_train = pad_sequences(x_train, maxlen=maxlen)

model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['acc'])
fit_ret_val = model.fit(x_train, y_train, epochs=10)
"""
Explanation: Next, we train the classifier.
End of explanation
"""
input_text = 'artificial intelligence'

matrix = process_text(input_text)

predictions = model.predict(matrix)

# get the actual categories from output
scoredict = {}
for idx, classlabel in zip(range(len(classlabels)), classlabels):
    scoredict[classlabel] = predictions[0][idx]

print scoredict
"""
Explanation: Our classifier is now ready to predict classes for input data.
End of explanation
"""
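"""
Explanation: A small usage sketch built on the objects defined above (model, process_text and classlabels are assumed to still be in scope): report the top predicted category for a few more input phrases, following the same predictions[0] convention as the scoredict cell. The phrases themselves are made up for illustration.
End of explanation
"""
import numpy as np

def top_category(text):
    matrix = process_text(text)            # same tokenize + pad helper as above
    predictions = model.predict(matrix)
    scores = predictions[0]                # same convention as the scoredict cell
    return list(classlabels)[int(np.argmax(scores))]

for phrase in ['neural networks', 'quantum physics', 'term life insurance']:
    print(phrase + ' -> ' + top_category(phrase))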
mne-tools/mne-tools.github.io
0.16/_downloads/plot_movement_compensation.ipynb
bsd-3-clause
# Authors: Eric Larson <larson.eric.d@gmail.com> # # License: BSD (3-clause) from os import path as op import mne from mne.preprocessing import maxwell_filter print(__doc__) data_path = op.join(mne.datasets.misc.data_path(verbose=True), 'movement') head_pos = mne.chpi.read_head_pos(op.join(data_path, 'simulated_quats.pos')) raw = mne.io.read_raw_fif(op.join(data_path, 'simulated_movement_raw.fif')) raw_stat = mne.io.read_raw_fif(op.join(data_path, 'simulated_stationary_raw.fif')) """ Explanation: Maxwell filter data with movement compensation Demonstrate movement compensation on simulated data. The simulated data contains bilateral activation of auditory cortices, repeated over 14 different head rotations (head center held fixed). See the following for details: https://github.com/mne-tools/mne-misc-data/blob/master/movement/simulate.py End of explanation """ mne.viz.plot_head_positions( head_pos, mode='traces', destination=raw.info['dev_head_t'], info=raw.info) """ Explanation: Visualize the "subject" head movements. By providing the measurement information, the distance to the nearest sensor in each direction (e.g., left/right for the X direction, forward/backward for Y) can be shown in blue, and the destination (if given) shown in red. End of explanation """ mne.viz.plot_head_positions( head_pos, mode='field', destination=raw.info['dev_head_t'], info=raw.info) """ Explanation: This can also be visualized using a quiver. End of explanation """ # extract our resulting events events = mne.find_events(raw, stim_channel='STI 014') events[:, 2] = 1 raw.plot(events=events) topo_kwargs = dict(times=[0, 0.1, 0.2], ch_type='mag', vmin=-500, vmax=500, time_unit='s') """ Explanation: Process our simulated raw data (taking into account head movements). End of explanation """ evoked_stat = mne.Epochs(raw_stat, events, 1, -0.2, 0.8).average() evoked_stat.plot_topomap(title='Stationary', **topo_kwargs) """ Explanation: First, take the average of stationary data (bilateral auditory patterns). End of explanation """ evoked = mne.Epochs(raw, events, 1, -0.2, 0.8).average() evoked.plot_topomap(title='Moving: naive average', **topo_kwargs) """ Explanation: Second, take a naive average, which averages across epochs that have been simulated to have different head positions and orientations, thereby spatially smearing the activity. End of explanation """ raw_sss = maxwell_filter(raw, head_pos=head_pos) evoked_raw_mc = mne.Epochs(raw_sss, events, 1, -0.2, 0.8).average() evoked_raw_mc.plot_topomap(title='Moving: movement compensated', **topo_kwargs) """ Explanation: Third, use raw movement compensation (restores pattern). End of explanation """
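"""
Explanation: As a rough, quantitative companion to the topographies above (not part of the original example), we can correlate the naive and movement-compensated averages against the stationary reference. The evoked_stat, evoked and evoked_raw_mc objects from the previous cells are assumed here, and are expected to share the same channels and time window.
End of explanation
"""
import numpy as np

ref = evoked_stat.data.ravel()
for name, ev in [('naive average', evoked), ('movement compensated', evoked_raw_mc)]:
    # Flatten the (channels x times) evoked data and compare against the stationary average
    r = np.corrcoef(ref, ev.data.ravel())[0, 1]
    print('Correlation with stationary average (%s): %.3f' % (name, r))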
hainm/scikit-xray-examples
demos/1_time_correlation/XPCS_fitting_with_lmfit.ipynb
bsd-3-clause
# analysis tools from scikit-xray (https://github.com/scikit-xray/scikit-xray/tree/master/skxray/core)
import skxray.core.roi as roi
import skxray.core.correlation as corr
import skxray.core.utils as utils

from lmfit import minimize, Parameters, Model

# plotting tools from xray_vision (https://github.com/Nikea/xray-vision/blob/master/xray_vision/mpl_plotting/roi.py)
import xray_vision.mpl_plotting as mpl_plot

import numpy as np
import os, sys
import zipfile

import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from matplotlib.colors import LogNorm
"""
Explanation: XPCS fitting with lmfit
The experimental X-ray Photon Correlation Spectroscopy (XPCS) data are fitted with the Intermediate Scattering Factor (ISF) using the lmfit Model (http://lmfit.github.io/lmfit-py/model.html)
End of explanation
"""
interactive_mode = False

if interactive_mode:
    %matplotlib notebook
else:
    %matplotlib inline

backend = mpl.get_backend()
"""
Explanation: Easily switch between interactive and static matplotlib plots
End of explanation
"""
#folder = "/Volumes/Data/BeamLines/CHX/Luxi_description_files_for_duke/Duke_data"
folder = os.path.join(*__file__.split(os.sep)[:-1])

# Get the data and the mask
try:
    duke_data = np.load(os.path.join(folder, "duke_data", "duke_data.npy"))
    N_mask = np.load(os.path.join(folder, "duke_data", "N_mask.npy"))
except IOError:
    zipfile.ZipFile(os.path.join(folder, "duke_data.zip")).extractall()
    duke_data = np.load(os.path.join(folder, "duke_data", "duke_data.npy"))
    N_mask = np.load(os.path.join(folder, "duke_data", "N_mask.npy"))

# get the average image
avg_img = np.average(duke_data, axis=0)

# plot the average image data after masking
plt.figure()
plt.imshow(N_mask*avg_img, vmax=1e0, cmap="Dark2")
plt.title("Averaged masked data for Duke Silica Gel")
plt.colorbar()
plt.show()
"""
Explanation: This data was provided by Dr. Andrei Fluerasu
L. Li, P. Kwasniewski, D. Oris, L. Wiegart, L. Cristofolini, C. Carona and A. Fluerasu,
"Photon statistics and speckle visibility spectroscopy with partially coherent x-rays",
J. Synchrotron Rad., vol 21, p 1288-1295, 2014.
End of explanation
"""
inner_radius = 24 # radius of the first ring
width = 1 # width of each ring
spacing = 0 # no spacing between rings
num_rings = 5 # number of rings
center = (133, 143) # center of the speckle pattern

# find the edges of the required rings
edges = roi.ring_edges(inner_radius, width, spacing, num_rings)
edges
"""
Explanation: Create the Rings Mask
Use the skxray.core.roi module to create Ring ROIs (ROI Mask)
(https://github.com/scikit-xray/scikit-xray/blob/master/skxray/core/roi.py)
End of explanation
"""
dpix = 0.055 # The physical size of the pixels
lambda_ = 1.5498 # wavelength of the X-rays
Ldet = 2200.
# # detector to sample distance two_theta = utils.radius_to_twotheta(Ldet, edges*dpix) q_val = utils.twotheta_to_q(two_theta, lambda_) q_val q_ring = np.mean(q_val, axis=1) q_ring """ Explanation: Convert the edge values of the rings to q ( reciprocal space) End of explanation """ rings = roi.rings(edges, center, avg_img.shape) mask_data2 = N_mask*duke_data[0:4999] ring_mask = rings*N_mask # plot the figure fig, axes = plt.subplots() axes.set_title("Ring Mask") im = mpl_plot.show_label_array(axes, ring_mask, cmap="Dark2") plt.show() """ Explanation: Create a labeled array using roi.rings End of explanation """ num_levels = 7 num_bufs = 8 g2, lag_steps = corr.multi_tau_auto_corr(num_levels, num_bufs, ring_mask, mask_data2) exposuretime=0.001; deadtime=60e-6; timeperframe = exposuretime+deadtime lags = lag_steps*timeperframe roi_names = ['gray', 'orange', 'brown', 'red', 'green'] fig, axes = plt.subplots(num_rings, sharex=True, figsize=(5, 14)) axes[num_rings-1].set_xlabel("lags") for i, roi_color in zip(range(num_rings), roi_names): axes[i].set_ylabel("g2") axes[i].set_title(" Q ring value " + str(q_ring[i])) axes[i].semilogx(lags, g2[:, i], 'o', markerfacecolor=roi_color, markersize=6) axes[i].set_ylim(bottom=1, top=np.max(g2[1:, i])) plt.show() """ Explanation: Find the experimental auto correlation functions Use the skxray.core.correlation module (https://github.com/scikit-xray/scikit-xray/blob/master/skxray/core/correlation.py) End of explanation """ mod = Model(corr.auto_corr_scat_factor) out1 = mod.eval(lags=lags, beta=0.2234, relaxation_rate = 6.2567, baseline=1.0) result1 = mod.fit(out1, lags=lags, beta=0.2230, relaxation_rate = 6.2500, baseline=1.0) plt.figure() plt.semilogx(lags, g2[:, 0], 'ro') plt.semilogx(lags, result1.best_fit, '-b') plt.ylim(1.0, 1.3) plt.title("Q ring value "+str(q_ring[0])) plt.show() out2 = mod.eval(lags=lags, beta=0.2234, relaxation_rate=6.9879, baseline=1.000) result2 = mod.fit(out2, lags=lags, beta=0.22456, relaxation_rate=6.98789, beseline=1.00) plt.figure() plt.semilogx(lags, g2[:, 2], 'ro') plt.semilogx(lags, result2.best_fit, '-b') plt.ylim(1., 1.3) plt.title("Q ring value "+ str(q_ring[2])) plt.show() import skxray print(skxray.__version__) """ Explanation: Do the fitting One time correlation data is fitted using the model in skxray.core.correlation module (auto_corr_scat_factor) (https://github.com/scikit-xray/scikit-xray/blob/master/skxray/core/correlation.py) End of explanation """
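"""
Explanation: A short illustrative extension of the fitting step (not in the original demo): fit the measured g2 of every ring with the same auto_corr_scat_factor model and print the fitted relaxation rate per q value. It reuses g2, lags, num_rings and q_ring from the cells above; the starting values for beta, relaxation_rate and baseline are guesses and may need adjusting for the fits to converge.
End of explanation
"""
mod = Model(corr.auto_corr_scat_factor)
for i in range(num_rings):
    # skip the zero-lag point before fitting
    result = mod.fit(g2[1:, i], lags=lags[1:], beta=0.2, relaxation_rate=6.0, baseline=1.0)
    print("q = %.5f  relaxation rate = %.4f" % (q_ring[i], result.best_values['relaxation_rate']))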
ealogar/curso-python
advanced/5_decorators.ipynb
apache-2.0
real_fibonacci = fibonacci def fibonacci(n): res = simcache.get_key(n) if not res: res = real_fibonacci(n) simcache.set_key(n, res) return res t1_start = time.time() print fibonacci(30) t1_elapsed = time.time() - t1_start print "fibonacci time {}".format(t1_elapsed) t1_start = time.time() print real_fibonacci(30) t1_elapsed = time.time() - t1_start print "fibonacci_real time {}".format(t1_elapsed) """ Explanation: Remember DRY: Don't Repeat Yourself! Let's try to apply memoization in a generic way to not modified functions Let's do a bit of magic to apply memoization easily End of explanation """ simcache.clear_keys() # Let's clean the cache # Let's define the real fibonacci computation function def fibonacci(n): if n < 2: return n print "Real fibonacci func, calling recursively to", fibonacci, n # Once the trick is done globals will contain a different function binded to 'fibonacci' return fibonacci(n - 1) + fibonacci(n - 2) print fibonacci print fibonacci(5) # Call graph of fibonacci for n=5 # # __ 4 ---- 3 ----------- 2 ---- 1 # 5 __/ \__ 2 ---- 1 \__ 1 \__ 0 # | \__ 0 # \__ 3 ---- 2 ---- 1 # \__ 1 \__ 0 # # Let's save a reference to the real function real_fibonacci = fibonacci print real_fibonacci # Points to real fibonacci calculation function # Let's create a new function which will use memoization def memoized_fibonacci(n): # Try to retrieve value from cache res = simcache.get_key(n) if not res: # If failed, call real fibonacci func print "Memoized fibonacci func, proceeding to call real func",\ real_fibonacci, n res = real_fibonacci(n) # Store real result simcache.set_key(n, res) return res print memoized_fibonacci # This is the new function with memoization # Let's replace the real function by the memoized version in module globals fibonacci = memoized_fibonacci print fibonacci(5) # Let's see what happens now print fibonacci(5) # Let's try again print fibonacci(10) # Let's try with a bigger number """ Explanation: Let's explain the trick in slow motion End of explanation """ def fibonacci(n): if n < 2: return n return fibonacci(n - 1) + fibonacci(n - 2) def memoize_any_function(func_to_memoize): """Function to return a wrapped version of input function using memoization """ print "Called memoize_any_function" def memoized_version_of_func(n): """Wrapper using memoization """ res = simcache.get_key(n) if not res: res = func_to_memoize(n) # Call the real function simcache.set_key(n, res) return res return memoized_version_of_func fibonacci = memoize_any_function(fibonacci) print fibonacci(35) # Much nice if we do: @memoize_any_function # This is the simplest decorator syntax def fibonacci(n): if n < 2: return n return fibonacci(n - 1) + fibonacci(n - 2) print fibonacci(150) """ Explanation: We have applied our first hand-crafted decorator How would you memoize any function, not just fibonacci? Do you remember functions are first class objects? They can be used as arguments or return values... Do you remember we can declare functions inside other functions? 
Let's apply these concepts to find a generic method to use memoization End of explanation """ def timing_decorator(decorated_func): print "Called timing_decorator" def wrapper(*args): # Use variable arguments to be compatible with any function """Wrapper for time executions """ start = time.time() res = decorated_func(*args) # Call the real function elapsed = time.time() - start print "Execution of '{0}{1}' took {2} seconds".format(decorated_func.__name__, args, elapsed) return res return wrapper @timing_decorator @memoize_any_function # We can accumulate decorators def fibonacci(n): if n < 2: return n return fibonacci(n - 1) + fibonacci(n - 2) simcache.clear_keys() print fibonacci(5) """ Explanation: Python decorators: A callable which receives a funtion as only argument and returns another function. Typically the resulting function wrapps the first function executing some code before and/or after the first is called. Used with the at @ symbol before a function or method Don't forget to deal with 'self' as first argument of methods The decoration is done at import / evaluation time End of explanation """ print fibonacci # Why is the wrapper? Can we maintain the original name ? import functools def memoize_any_function(decorated_func): """Function to return a wrapped version of input function using memoization """ @functools.wraps(decorated_func) # Use functools.wraps to smooth the decoration def memoized_version_of_f(*args): """Wrapper using memoization """ res = simcache.get_key(args) if not res: res = decorated_func(*args) # Call the real function simcache.set_key(args, res) return res return memoized_version_of_f def timing_decorator(decorated_func): @functools.wraps(decorated_func) def wrapper(*args): # Use variable arguments to be compatible with any function """Wrapper for time executions """ start = time.time() res = decorated_func(*args) # Call the real function elapsed = time.time() - start print "Execution of '{0}{1}' took {2} seconds".format(decorated_func.__name__, args, elapsed) return res return wrapper @timing_decorator @memoize_any_function # We can accumulate decorators, and they are run in strict top-down order def fibonacci(n): if n < 2: return n return fibonacci(n - 1) + fibonacci(n - 2) print fibonacci(100) """ Explanation: It is possible to accumulate decorators Order matters, they are run in strict top - down order End of explanation """
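"""
Explanation: A tiny, self-contained sketch (independent of the fibonacci example) that makes the evaluation order of stacked decorators explicit: the decorator closest to the function is applied first, so the one on top ends up outermost, and its wrapper is the first one entered when the decorated function is called.
End of explanation
"""
def shout(func):
    def wrapper(*args):
        return func(*args).upper()
    return wrapper

def exclaim(func):
    def wrapper(*args):
        return func(*args) + '!'
    return wrapper

@shout      # applied second, wraps the function returned by exclaim
@exclaim    # applied first, wraps greet directly
def greet(name):
    return 'hello ' + name

print(greet('world'))   # -> HELLO WORLD!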
DaveBackus/Data_Bootcamp
Code/IPython/bootcamp_test.ipynb
mit
import datetime print('Welcome to Data Bootcamp!') print('Today is: ', datetime.date.today()) """ Explanation: Data Bootcamp test program (IPython notebook) First, welcome. Now that we're friends, click on "Cell" above and choose "Run all." You should see [*] (an asterisk in square brackets), which means the program is running. When it's done, a number will replace the asterisk. Below the next code cell you should see a welcome message and today's date. If it isn't today's date, you're probably looking at old output. End of explanation """ import sys print('What version of Python are we running? \n', sys.version, '\n', sep='') if float(sys.version_info[0]) < 3.0 : raise Exception('Program halted, old version of Python. \n', 'Sorry, you need to install Anaconda again.') else: print('Congratulations, Python is up to date!') """ Explanation: Now the test: If you get the message below "Program halted, old version of Python, etc," you need to go back and install Anaconda again, this time following the instructions EXACTLY! Yes, we know that's discouraging, but better to know now than run into problems later. If you get the message "Congratulations etc," you're all set. Pat yourself on the back. You can close out the program and go back to whatever you were doing. End of explanation """
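"""
Explanation: An optional extra check in the same spirit (not part of the original test): confirm that a couple of the packages used later in the course import cleanly and print their versions. The package list here is only an assumption and can be adjusted.
End of explanation
"""
import pandas as pd
import matplotlib

print('pandas version: ', pd.__version__)
print('matplotlib version: ', matplotlib.__version__)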
hetaodie/hetaodie.github.io
assets/media/uda-ml/qinghua/dongtaiguihua/迷你项目:动态规划(第 2 部分)/Dynamic_Programming_Solution.ipynb
mit
from frozenlake import FrozenLakeEnv env = FrozenLakeEnv() """ Explanation: Mini Project: Dynamic Programming In this notebook, you will write your own implementations of many classical dynamic programming algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore FrozenLakeEnv Use the code cell below to create an instance of the FrozenLake environment. End of explanation """ # print the state space and action space print(env.observation_space) print(env.action_space) # print the total number of states and actions print(env.nS) print(env.nA) """ Explanation: The agent moves through a $4 \times 4$ gridworld, with states numbered as follows: [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]] and the agent has 4 potential actions: LEFT = 0 DOWN = 1 RIGHT = 2 UP = 3 Thus, $\mathcal{S}^+ = {0, 1, \ldots, 15}$, and $\mathcal{A} = {0, 1, 2, 3}$. Verify this by running the code cell below. End of explanation """ env.P[1][0] """ Explanation: Dynamic programming assumes that the agent has full knowledge of the MDP. We have already amended the frozenlake.py file to make the one-step dynamics accessible to the agent. Execute the code cell below to return the one-step dynamics corresponding to a particular state and action. In particular, env.P[1][0] returns the the probability of each possible reward and next state, if the agent is in state 1 of the gridworld and decides to go left. End of explanation """ import numpy as np def policy_evaluation(env, policy, gamma=1, theta=1e-8): V = np.zeros(env.nS) while True: delta = 0 for s in range(env.nS): Vs = 0 for a, action_prob in enumerate(policy[s]): for prob, next_state, reward, done in env.P[s][a]: Vs += action_prob * prob * (reward + gamma * V[next_state]) delta = max(delta, np.abs(V[s]-Vs)) V[s] = Vs if delta < theta: break return V """ Explanation: Each entry takes the form prob, next_state, reward, done where: - prob details the conditional probability of the corresponding (next_state, reward) pair, and - done is True if the next_state is a terminal state, and otherwise False. Thus, we can interpret env.P[1][0] as follows: $$ \mathbb{P}(S_{t+1}=s',R_{t+1}=r|S_t=1,A_t=0) = \begin{cases} \frac{1}{3} \text{ if } s'=1, r=0\ \frac{1}{3} \text{ if } s'=0, r=0\ \frac{1}{3} \text{ if } s'=5, r=0\ 0 \text{ else} \end{cases} $$ Feel free to change the code cell above to explore how the environment behaves in response to other (state, action) pairs. Part 1: Iterative Policy Evaluation In this section, you will write your own implementation of iterative policy evaluation. Your algorithm should accept four arguments as input: - env: This is an instance of an OpenAI Gym environment, where env.P returns the one-step dynamics. - policy: This is a 2D numpy array with policy.shape[0] equal to the number of states (env.nS), and policy.shape[1] equal to the number of actions (env.nA). policy[s][a] returns the probability that the agent takes action a while in state s under the policy. - gamma: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: 1). - theta: This is a very small positive number that is used to decide if the estimate has sufficiently converged to the true value function (default value: 1e-8). The algorithm returns as output: - V: This is a 1D numpy array with V.shape[0] equal to the number of states (env.nS). V[s] contains the estimated value of state s under the input policy. Please complete the function in the code cell below. 
End of explanation """ random_policy = np.ones([env.nS, env.nA]) / env.nA """ Explanation: We will evaluate the equiprobable random policy $\pi$, where $\pi(a|s) = \frac{1}{|\mathcal{A}(s)|}$ for all $s\in\mathcal{S}$ and $a\in\mathcal{A}(s)$. Use the code cell below to specify this policy in the variable random_policy. End of explanation """ from plot_utils import plot_values # evaluate the policy V = policy_evaluation(env, random_policy) plot_values(V) """ Explanation: Run the next code cell to evaluate the equiprobable random policy and visualize the output. The state-value function has been reshaped to match the shape of the gridworld. End of explanation """ import check_test check_test.run_check('policy_evaluation_check', policy_evaluation) """ Explanation: Run the code cell below to test your function. If the code cell returns PASSED, then you have implemented the function correctly! Note: In order to ensure accurate results, make sure that your policy_evaluation function satisfies the requirements outlined above (with four inputs, a single output, and with the default values of the input arguments unchanged). End of explanation """ def q_from_v(env, V, s, gamma=1): q = np.zeros(env.nA) for a in range(env.nA): for prob, next_state, reward, done in env.P[s][a]: q[a] += prob * (reward + gamma * V[next_state]) return q """ Explanation: Part 2: Obtain $q_\pi$ from $v_\pi$ In this section, you will write a function that takes the state-value function estimate as input, along with some state $s\in\mathcal{S}$. It returns the row in the action-value function corresponding to the input state $s\in\mathcal{S}$. That is, your function should accept as input both $v_\pi$ and $s$, and return $q_\pi(s,a)$ for all $a\in\mathcal{A}(s)$. Your algorithm should accept four arguments as input: - env: This is an instance of an OpenAI Gym environment, where env.P returns the one-step dynamics. - V: This is a 1D numpy array with V.shape[0] equal to the number of states (env.nS). V[s] contains the estimated value of state s. - s: This is an integer corresponding to a state in the environment. It should be a value between 0 and (env.nS)-1, inclusive. - gamma: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: 1). The algorithm returns as output: - q: This is a 1D numpy array with q.shape[0] equal to the number of actions (env.nA). q[a] contains the (estimated) value of state s and action a. Please complete the function in the code cell below. End of explanation """ Q = np.zeros([env.nS, env.nA]) for s in range(env.nS): Q[s] = q_from_v(env, V, s) print("Action-Value Function:") print(Q) """ Explanation: Run the code cell below to print the action-value function corresponding to the above state-value function. End of explanation """ check_test.run_check('q_from_v_check', q_from_v) """ Explanation: Run the code cell below to test your function. If the code cell returns PASSED, then you have implemented the function correctly! Note: In order to ensure accurate results, make sure that the q_from_v function satisfies the requirements outlined above (with four inputs, a single output, and with the default values of the input arguments unchanged). 
End of explanation """ def policy_improvement(env, V, gamma=1): policy = np.zeros([env.nS, env.nA]) / env.nA for s in range(env.nS): q = q_from_v(env, V, s, gamma) # OPTION 1: construct a deterministic policy # policy[s][np.argmax(q)] = 1 # OPTION 2: construct a stochastic policy that puts equal probability on maximizing actions best_a = np.argwhere(q==np.max(q)).flatten() policy[s] = np.sum([np.eye(env.nA)[i] for i in best_a], axis=0)/len(best_a) return policy """ Explanation: Part 3: Policy Improvement In this section, you will write your own implementation of policy improvement. Your algorithm should accept three arguments as input: - env: This is an instance of an OpenAI Gym environment, where env.P returns the one-step dynamics. - V: This is a 1D numpy array with V.shape[0] equal to the number of states (env.nS). V[s] contains the estimated value of state s. - gamma: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: 1). The algorithm returns as output: - policy: This is a 2D numpy array with policy.shape[0] equal to the number of states (env.nS), and policy.shape[1] equal to the number of actions (env.nA). policy[s][a] returns the probability that the agent takes action a while in state s under the policy. Please complete the function in the code cell below. You are encouraged to use the q_from_v function you implemented above. End of explanation """ check_test.run_check('policy_improvement_check', policy_improvement) """ Explanation: Run the code cell below to test your function. If the code cell returns PASSED, then you have implemented the function correctly! Note: In order to ensure accurate results, make sure that the policy_improvement function satisfies the requirements outlined above (with three inputs, a single output, and with the default values of the input arguments unchanged). Before moving on to the next part of the notebook, you are strongly encouraged to check out the solution in Dynamic_Programming_Solution.ipynb. There are many correct ways to approach this function! End of explanation """ import copy def policy_iteration(env, gamma=1, theta=1e-8): policy = np.ones([env.nS, env.nA]) / env.nA while True: V = policy_evaluation(env, policy, gamma, theta) new_policy = policy_improvement(env, V) # OPTION 1: stop if the policy is unchanged after an improvement step if (new_policy == policy).all(): break; # OPTION 2: stop if the value function estimates for successive policies has converged # if np.max(abs(policy_evaluation(env, policy) - policy_evaluation(env, new_policy))) < theta*1e2: # break; policy = copy.copy(new_policy) return policy, V """ Explanation: Part 4: Policy Iteration In this section, you will write your own implementation of policy iteration. The algorithm returns the optimal policy, along with its corresponding state-value function. Your algorithm should accept three arguments as input: - env: This is an instance of an OpenAI Gym environment, where env.P returns the one-step dynamics. - gamma: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: 1). - theta: This is a very small positive number that is used to decide if the policy evaluation step has sufficiently converged to the true value function (default value: 1e-8). The algorithm returns as output: - policy: This is a 2D numpy array with policy.shape[0] equal to the number of states (env.nS), and policy.shape[1] equal to the number of actions (env.nA). 
policy[s][a] returns the probability that the agent takes action a while in state s under the policy. - V: This is a 1D numpy array with V.shape[0] equal to the number of states (env.nS). V[s] contains the estimated value of state s. Please complete the function in the code cell below. You are strongly encouraged to use the policy_evaluation and policy_improvement functions you implemented above. End of explanation """ # obtain the optimal policy and optimal state-value function policy_pi, V_pi = policy_iteration(env) # print the optimal policy print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):") print(policy_pi,"\n") plot_values(V_pi) """ Explanation: Run the next code cell to solve the MDP and visualize the output. The optimal state-value function has been reshaped to match the shape of the gridworld. Compare the optimal state-value function to the state-value function from Part 1 of this notebook. Is the optimal state-value function consistently greater than or equal to the state-value function for the equiprobable random policy? End of explanation """ check_test.run_check('policy_iteration_check', policy_iteration) """ Explanation: Run the code cell below to test your function. If the code cell returns PASSED, then you have implemented the function correctly! Note: In order to ensure accurate results, make sure that the policy_iteration function satisfies the requirements outlined above (with three inputs, two outputs, and with the default values of the input arguments unchanged). End of explanation """ def truncated_policy_evaluation(env, policy, V, max_it=1, gamma=1): num_it=0 while num_it < max_it: for s in range(env.nS): v = 0 q = q_from_v(env, V, s, gamma) for a, action_prob in enumerate(policy[s]): v += action_prob * q[a] V[s] = v num_it += 1 return V """ Explanation: Part 5: Truncated Policy Iteration In this section, you will write your own implementation of truncated policy iteration. You will begin by implementing truncated policy evaluation. Your algorithm should accept five arguments as input: - env: This is an instance of an OpenAI Gym environment, where env.P returns the one-step dynamics. - policy: This is a 2D numpy array with policy.shape[0] equal to the number of states (env.nS), and policy.shape[1] equal to the number of actions (env.nA). policy[s][a] returns the probability that the agent takes action a while in state s under the policy. - V: This is a 1D numpy array with V.shape[0] equal to the number of states (env.nS). V[s] contains the estimated value of state s. - max_it: This is a positive integer that corresponds to the number of sweeps through the state space (default value: 1). - gamma: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: 1). The algorithm returns as output: - V: This is a 1D numpy array with V.shape[0] equal to the number of states (env.nS). V[s] contains the estimated value of state s. Please complete the function in the code cell below. End of explanation """ def truncated_policy_iteration(env, max_it=1, gamma=1, theta=1e-8): V = np.zeros(env.nS) policy = np.zeros([env.nS, env.nA]) / env.nA while True: policy = policy_improvement(env, V) old_V = copy.copy(V) V = truncated_policy_evaluation(env, policy, V, max_it, gamma) if max(abs(V-old_V)) < theta: break; return policy, V """ Explanation: Next, you will implement truncated policy iteration. 
Your algorithm should accept five arguments as input: - env: This is an instance of an OpenAI Gym environment, where env.P returns the one-step dynamics. - max_it: This is a positive integer that corresponds to the number of sweeps through the state space (default value: 1). - gamma: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: 1). - theta: This is a very small positive number that is used for the stopping criterion (default value: 1e-8). The algorithm returns as output: - policy: This is a 2D numpy array with policy.shape[0] equal to the number of states (env.nS), and policy.shape[1] equal to the number of actions (env.nA). policy[s][a] returns the probability that the agent takes action a while in state s under the policy. - V: This is a 1D numpy array with V.shape[0] equal to the number of states (env.nS). V[s] contains the estimated value of state s. Please complete the function in the code cell below. End of explanation """ policy_tpi, V_tpi = truncated_policy_iteration(env, max_it=2) # print the optimal policy print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):") print(policy_tpi,"\n") # plot the optimal state-value function plot_values(V_tpi) """ Explanation: Run the next code cell to solve the MDP and visualize the output. The state-value function has been reshaped to match the shape of the gridworld. Play with the value of the max_it argument. Do you always end with the optimal state-value function? End of explanation """ check_test.run_check('truncated_policy_iteration_check', truncated_policy_iteration) """ Explanation: Run the code cell below to test your function. If the code cell returns PASSED, then you have implemented the function correctly! Note: In order to ensure accurate results, make sure that the truncated_policy_iteration function satisfies the requirements outlined above (with four inputs, two outputs, and with the default values of the input arguments unchanged). End of explanation """ def value_iteration(env, gamma=1, theta=1e-8): V = np.zeros(env.nS) while True: delta = 0 for s in range(env.nS): v = V[s] V[s] = max(q_from_v(env, V, s, gamma)) delta = max(delta,abs(V[s]-v)) if delta < theta: break policy = policy_improvement(env, V, gamma) return policy, V """ Explanation: Part 6: Value Iteration In this section, you will write your own implementation of value iteration. Your algorithm should accept three arguments as input: - env: This is an instance of an OpenAI Gym environment, where env.P returns the one-step dynamics. - gamma: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: 1). - theta: This is a very small positive number that is used for the stopping criterion (default value: 1e-8). The algorithm returns as output: - policy: This is a 2D numpy array with policy.shape[0] equal to the number of states (env.nS), and policy.shape[1] equal to the number of actions (env.nA). policy[s][a] returns the probability that the agent takes action a while in state s under the policy. - V: This is a 1D numpy array with V.shape[0] equal to the number of states (env.nS). V[s] contains the estimated value of state s. End of explanation """ policy_vi, V_vi = value_iteration(env) # print the optimal policy print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):") print(policy_vi,"\n") # plot the optimal state-value function plot_values(V_vi) """ Explanation: Use the next code cell to solve the MDP and visualize the output. 
The state-value function has been reshaped to match the shape of the gridworld. End of explanation """ check_test.run_check('value_iteration_check', value_iteration) """ Explanation: Run the code cell below to test your function. If the code cell returns PASSED, then you have implemented the function correctly! Note: In order to ensure accurate results, make sure that the value_iteration function satisfies the requirements outlined above (with three inputs, two outputs, and with the default values of the input arguments unchanged). End of explanation """
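"""
Explanation: A small convenience sketch, not part of the required notebook: display the greedy action in each state of the 4 x 4 gridworld implied by the optimal policy from value iteration. It assumes policy_vi from the cell above; where the stochastic policy spreads probability over several maximizing actions, np.argmax simply reports the first of them.
End of explanation
"""
action_names = ['L', 'D', 'R', 'U']   # LEFT, DOWN, RIGHT, UP, matching the encoding above
greedy_actions = np.array([action_names[a] for a in np.argmax(policy_vi, axis=1)])
print(greedy_actions.reshape(4, 4))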
EBIvariation/eva-cttv-pipeline
data-exploration/complex-events/notebooks/complex-events-stats.ipynb
apache-2.0
total_count, variant_type_hist, other_counts, exclusive_counts = counts(no_consequences_path, PROJECT_ROOT) print(total_count) plt.figure(figsize=(15,7)) plt.xticks(rotation='vertical') plt.title('Variant Types (no functional consequences and incomplete coordinates)') plt.bar(variant_type_hist.keys(), variant_type_hist.values()) variant_type_hist plt.figure(figsize=(15,7)) plt.xticks(rotation='vertical') plt.title('Variant Descriptors (no functional consequences and incomplete coordinates)') plt.bar(other_counts.keys(), other_counts.values()) other_counts def print_link_for_type(variant_type, min_score=-1): for record in dataset: if record.measure: m = record.measure if m.has_complete_coordinates: continue if m.variant_type == variant_type and record.score >= min_score: print(f'https://www.ncbi.nlm.nih.gov/clinvar/{record.accession}/') print_link_for_type('Microsatellite', min_score=1) """ Explanation: Gather counts Among records with no functional consequences * how many of each variant type * how many have hgvs, sequence location w/ start/stop position at least, cytogenic location * of those with hgvs, how many can the library parse? * how many can our code parse? End of explanation """ plt.figure(figsize=(10,7)) plt.title('Variant Descriptors (no functional consequences and incomplete coordinates)') plt.bar(exclusive_counts.keys(), exclusive_counts.values()) exclusive_counts """ Explanation: Examples Some hand-picked examples of complex variants from ClinVar. For each type I tried to choose at least one that seemed "typical" and one that was relatively high quality to get an idea of the variability, but no guarantees for how representative these are. Duplication https://www.ncbi.nlm.nih.gov/clinvar/variation/1062574/ https://www.ncbi.nlm.nih.gov/clinvar/variation/89496/ Deletion https://www.ncbi.nlm.nih.gov/clinvar/variation/1011851/ Inversion https://www.ncbi.nlm.nih.gov/clinvar/variation/268016/ https://www.ncbi.nlm.nih.gov/clinvar/variation/90611/ Translocation https://www.ncbi.nlm.nih.gov/clinvar/variation/267959/ https://www.ncbi.nlm.nih.gov/clinvar/variation/267873/ https://www.ncbi.nlm.nih.gov/clinvar/variation/1012364/ copy number gain https://www.ncbi.nlm.nih.gov/clinvar/variation/523250/ https://www.ncbi.nlm.nih.gov/clinvar/variation/870516/ copy number loss https://www.ncbi.nlm.nih.gov/clinvar/variation/1047901/ https://www.ncbi.nlm.nih.gov/clinvar/variation/625801/ Complex https://www.ncbi.nlm.nih.gov/clinvar/variation/267835/ https://www.ncbi.nlm.nih.gov/clinvar/variation/585332/ Appendix A: Marcos' questions What do the HGVS parser numbers mean? This is the number of records which had at least one HGVS descriptor for which the specified parser was able to extract some information. For the official parser this means not throwing an exception; for our parser this means returning some non-None properties (though note our parser was originally written for the repeat expansion pipeline). What's the total number of HGVS we can parse with either parser? added to the above chart. From the variants with cytogenetic location, how many did not have any of the other descriptors, if any? 
see below End of explanation """ def try_to_parse(hgvs): try: parser.parse_hgvs_variant(hgvs) print(hgvs, 'SUCCESS') except: print(hgvs, 'FAILED') try_to_parse('NC_000011.10:g.(?_17605796)_(17612832_?)del') try_to_parse('NC_000011.10:g.(17605790_17605796)_(17612832_1761283)del') try_to_parse('NC_000011.10:g.17605796_17612832del') try_to_parse('NC_000011.10:g.?_17612832del') def try_to_vep(hgvs): safe_hgvs = urllib.parse.quote(hgvs) vep_url = f'https://rest.ensembl.org/vep/human/hgvs/{safe_hgvs}?content-type=application/json' resp = requests.get(vep_url) print(resp.json()) try_to_vep('NC_000011.10:g.(?_17605796)_(17612832_?)del') try_to_vep('NC_000011.10:g.(17605790_17605796)_(17612832_1761283)del') try_to_vep('NC_000011.10:g.17605796_17612832del') try_to_vep('NC_000011.10:g.?_17612832del') """ Explanation: Appendix B: More HGVS parsing exploration HGVS python library doesn't support ranges. VEP API has some limited support for HGVS. End of explanation """
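# Hedged follow-up sketch (added here, not an EVA pipeline function): the official
# hgvs parser rejects the uncertain "(?_start)_(end_?)" ranges tried above, while the
# fully specified form parses. One lossy workaround is to collapse the uncertainty
# brackets to their inner coordinates before parsing; the regexes below are only an
# illustration of that idea.
import re

def collapse_uncertain_range(hgvs_string):
    # "(?_17605796)" -> "17605796" and "(17612832_?)" -> "17612832"
    hgvs_string = re.sub(r'\(\?_(\d+)\)', r'\1', hgvs_string)
    hgvs_string = re.sub(r'\((\d+)_\?\)', r'\1', hgvs_string)
    return hgvs_string

collapsed = collapse_uncertain_range('NC_000011.10:g.(?_17605796)_(17612832_?)del')
print(collapsed)          # NC_000011.10:g.17605796_17612832del
try_to_parse(collapsed)   # the explicit form was shown to parse successfully above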
Neurosim-lab/netpyne
netpyne/tutorials/rxd_movie_tut/rxd_movie_tut.ipynb
mit
plotArgs = { 'speciesLabel': 'ca', 'regionLabel' : 'ecs', 'saveFig' : 'movie', 'showFig' : False, 'clim' : [1.9997, 2.000], } """ Explanation: Making a movie of reaction-diffusion concentrations We recommend creating and using a virtual environment for NetPyNE tutorials. To do so, enter the following commands into your terminal: mkdir netpyne_tuts cd netpyne_tuts python3 -m venv env source env/bin/activate python3 -m pip install --upgrade pip setuptools wheel python3 -m pip install --upgrade ipython ipykernel jupyter python3 -m pip install --upgrade neuron git clone https://github.com/Neurosim-lab/netpyne.git python3 -m pip install -e netpyne ipython kernel install --user --name=env For this tutorial, you will also need to install natsort and imageio. python3 -m pip install natsort imageio Then you can copy the example directory we will be using into netpyne_tuts, copy this notebook tutorial into it, and compile the mod files. cp -r netpyne/examples/rxd_net . cp netpyne/netpyne/tutorials/rxd_movie_tut/rxd_movie_tut.ipynb rxd_net cd rxd_net nrnivmodl mod Finally, you can launch this tutorial in a Jupyter notebook. jupyter notebook rxd_movie_tut.ipynb Note that the network parameters are defined in netParams.py, the simulation configuration is specified in cfg.py and the steps to actually run the simulation are in init.py. From the terminal, you could run this simulation with the command python3 init.py. Or, if you have MPI properly installed, you could run the sim on four cores with the command mpiexec -np 4 nrniv -python -mpi init.py. To run the simulation from this notebook, you would execute %run init.py or !mpiexec -np 4 nrniv -python -mpi init.py. However, we need to modify the simulation run so that a movie frame (figure) is generated at specified times. You can modify init.py to do this, but here we will do it interactively. First, lets look at what's in init.py: from netpyne import sim from netParams import netParams from cfg import cfg sim.initialize(netParams, cfg) sim.net.createPops() sim.net.createCells() sim.net.connectCells() sim.net.addStims() sim.net.addRxD() sim.setupRecording() sim.simulate() sim.analyze() We want to replace sim.simulate() with sim.runSimWithIntervalFunc(), which pauses at a set interval and executes the specified function. See more details on runSimWithIntervalFunc here: http://netpyne.org/netpyne.sim.run.html#netpyne.sim.run.runSimWithIntervalFunc. The function runSimWithIntervalFunc requires two arguments: the time interval at which to execute the function (interval) and the function to be executed (func). It also has two optional arguments: a limited time range over which to execute the function (timeRange) and a dictionary of arguments to feed into the function to be executed (funcArgs). For this example, which runs for 1000 ms, we will make a short movie with 10 frames by setting interval=100. We will use the function sim.analysis.plotRxDConcentration and we want to plot the calcium concentration in the extracellular space. We also need to set saveFig to 'movie', and set the colorbar limits (so they stay the same in each movie frame). 
In order to feed these arguments into the plotting function at each time step, we will create a dictionary: End of explanation """ from netpyne import sim from netParams import netParams from cfg import cfg sim.initialize(netParams, cfg) sim.net.createPops() sim.net.createCells() sim.net.connectCells() sim.net.addStims() sim.net.addRxD() sim.setupRecording() #sim.simulate() sim.runSimWithIntervalFunc(100.0, sim.analysis.plotRxDConcentration, timeRange=None, funcArgs=plotArgs) sim.analyze() """ Explanation: At this point, we can replace sim.simulate() in our init.py file with sim.runSimWithIntervalFunc(100.0, sim.analysis.plotRxDConcentration, timeRange=None, funcArgs=plotArgs). Then we can run the simulation. End of explanation """ import os import natsort import imageio images = [] filenames = natsort.natsorted([file for file in os.listdir() if 'movie' in file and file.endswith('.png')]) for filename in filenames: images.append(imageio.imread(filename)) imageio.mimsave('rxd_conc_movie.gif', images) """ Explanation: This should run the simulation, pausing every 100 ms to create a reaction-diffusion concentration plot. At this point, we can create a movie (an animated gif) from our frames. End of explanation """
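# Optional tweak (a sketch, not part of the original tutorial): imageio can control
# the playback speed of the animated gif. With imageio 2.x the gif writer accepts a
# duration argument (seconds per frame); the output filename here is just an
# illustrative choice so the original movie is not overwritten.
imageio.mimsave('rxd_conc_movie_slow.gif', images, duration=0.5)
print('Wrote', len(images), 'frames to rxd_conc_movie_slow.gif')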
serge-sans-paille/pythran
docs/examples/Third Party Libraries.ipynb
bsd-3-clause
import pythran
%load_ext pythran.magic
%%pythran
#pythran export pythran_cbrt(float64(float64), float64)
def pythran_cbrt(libm_cbrt, val):
    return libm_cbrt(val)
"""
Explanation: Using third-party Native Libraries
Sometimes, the functionality you need is only available in third-party native libraries. These libraries can still be used from within Pythran, using Pythran's support for capsules.
Pythran Code
The Pythran code requires function pointers to the third-party functions, passed as parameters to your Pythran routine, as in the following:
End of explanation
"""
import ctypes
# capsulefactory
PyCapsule_New = ctypes.pythonapi.PyCapsule_New
PyCapsule_New.restype = ctypes.py_object
PyCapsule_New.argtypes = ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p
# load libm
libm = ctypes.CDLL('libm.so.6')
# extract the proper symbol
cbrt = libm.cbrt
# wrap it
cbrt_capsule = PyCapsule_New(cbrt, "double(double)".encode(), None)
"""
Explanation: In that case libm_cbrt is expected to be a capsule containing the function pointer to libm's cbrt (cube root) function.
This capsule can be created using ctypes:
End of explanation
"""
pythran_cbrt(cbrt_capsule, 8.)
"""
Explanation: The capsule is not usable from the Python context (it's some kind of opaque box), but Pythran knows how to use it. Beware: it does not try to do any kind of type verification. It trusts your #pythran export line.
End of explanation
"""
%%pythran
#pythran export pythran_sincos(None(float64, float64*, float64*), float64)
def pythran_sincos(libm_sincos, val):
    import numpy as np
    val_sin, val_cos = np.empty(1), np.empty(1)
    libm_sincos(val, val_sin, val_cos)
    return val_sin[0], val_cos[0]
"""
Explanation: With Pointers
Now, let's try to use the sincos function. Its C signature is void sincos(double, double*, double*). How do we pass that to Pythran?
End of explanation
"""
sincos_capsule = PyCapsule_New(libm.sincos, "unchecked anyway".encode(), None)
pythran_sincos(sincos_capsule, 0.)
"""
Explanation: There is some magic happening here:
None is used to state that the function pointer does not return anything.
In order to create pointers, we actually create empty one-dimensional arrays and let Pythran handle them as pointers.
Beware that you're in charge of all the memory checking stuff!
Apart from that, we can now call our function with the proper capsule parameter.
End of explanation
"""
%%pythran
## This is the capsule.
#pythran export capsule corp((int, str), str set)
def corp(param, lookup):
    res, key = param
    return res if key in lookup else -1
## This is some dummy callsite
#pythran export brief(int, int((int, str), str set)):
def brief(val, capsule):
    return capsule((val, "doctor"), {"some"})
"""
Explanation: With Pythran
It is naturally also possible to use capsules generated by Pythran. In that case, no type shenanigans are required; we're in our own small world.
One just needs to use the capsule keyword to indicate we want to generate a capsule.
End of explanation
"""
try:
    corp((1,"some"),set())
except TypeError as e:
    print(e)
"""
Explanation: It's not possible to call the capsule directly; it's an opaque structure.
End of explanation
"""
End of explanation """ !find -name 'cube*' -delete %%file cube.pyx #cython: language_level=3 cdef api double cube(double x) nogil: return x * x * x from setuptools import setup from Cython.Build import cythonize _ = setup( name='cube', ext_modules=cythonize("cube.pyx"), zip_safe=False, # fake CLI call script_name='setup.py', script_args=['--quiet', 'build_ext', '--inplace'] ) """ Explanation: With Cython The capsule pythran uses may come from Cython-generated code. This uses a little-known feature from cython: api and __pyx_capi__. nogil is of importance here: Pythran releases the GIL, so better not call a cythonized function that uses it. End of explanation """ import sys sys.path.insert(0, '.') import cube print(type(cube.__pyx_capi__['cube'])) cython_cube = cube.__pyx_capi__['cube'] pythran_cbrt(cython_cube, 2.) """ Explanation: The cythonized module has a special dictionary that holds the capsule we're looking for. End of explanation """
nudomarinero/mltier1
explore/Match_LOFAR_combined_final.ipynb
gpl-3.0
import numpy as np from astropy.table import Table, join from astropy import units as u from astropy.coordinates import SkyCoord, search_around_sky from IPython.display import clear_output import pickle import os import sys sys.path.append("..") from mltier1 import (get_center, Field, MultiMLEstimator, MultiMLEstimatorOld, parallel_process, get_sigma_all, get_sigma_all_old, describe) %load_ext autoreload %autoreload from IPython.display import clear_output %pylab inline """ Explanation: ML match for LOFAR and the combined PanSTARRS WISE catalogue: Generic matching code applied to sources In this notebook the maximum likelihood cross-match between the LOFAR HETDEX catalogue and the combined PansSTARRS WISE catalogue is computed. Configuration Load libraries and setup End of explanation """ save_intermediate = True plot_intermediate = True idp = "../idata/final_pdf_v0.9" if not os.path.isdir(idp): os.makedirs(idp) """ Explanation: General configuration End of explanation """ # Busy week Edinburgh 2017 ra_down = 172.09 ra_up = 187.5833 dec_down = 46.106 dec_up = 56.1611 # Busy week Hatfield 2017 ra_down = 170. ra_up = 190. dec_down = 46.8 dec_up = 55.9 # Full field July 2017 ra_down = 160. ra_up = 232. dec_down = 42. dec_up = 62. field = Field(170.0, 190.0, 46.8, 55.9) field_full = Field(160.0, 232.0, 42.0, 62.0) """ Explanation: Area limits End of explanation """ combined_all = Table.read("../pw.fits") lofar_all = Table.read("../data/LOFAR_HBA_T1_DR1_catalog_v0.9.srl.fixed.fits") #lofar_all = Table.read("data/LOFAR_HBA_T1_DR1_merge_ID_optical_v0.8.fits") np.array(combined_all.colnames) np.array(lofar_all.colnames) """ Explanation: Load data End of explanation """ lofar = field_full.filter_catalogue(lofar_all, colnames=("RA", "DEC")) combined = field_full.filter_catalogue(combined_all, colnames=("ra", "dec")) """ Explanation: Filter catalogues The following line has been corrected in the latest versions to use all the sources, including the extended. Hence the running of the "-extended" version of this notebook is no longer necessary. End of explanation """ combined["colour"] = combined["i"] - combined["W1mag"] combined_aux_index = np.arange(len(combined)) """ Explanation: Additional data End of explanation """ coords_combined = SkyCoord(combined['ra'], combined['dec'], unit=(u.deg, u.deg), frame='icrs') coords_lofar = SkyCoord(lofar['RA'], lofar['DEC'], unit=(u.deg, u.deg), frame='icrs') """ Explanation: Sky coordinates End of explanation """ combined_matched = (~np.isnan(combined["i"]) & ~np.isnan(combined["W1mag"])) # Matched i-W1 sources combined_panstarrs = (~np.isnan(combined["i"]) & np.isnan(combined["W1mag"])) # Sources with only i-band combined_wise =(np.isnan(combined["i"]) & ~np.isnan(combined["W1mag"])) # Sources with only W1-band combined_i = combined_matched | combined_panstarrs combined_w1 = combined_matched | combined_wise #combined_only_i = combined_panstarrs & ~combined_matched #combined_only_w1 = combined_wise & ~combined_matched print("Total - ", len(combined)) print("i and W1 - ", np.sum(combined_matched)) print("Only i - ", np.sum(combined_panstarrs)) print("With i - ", np.sum(combined_i)) print("Only W1 - ", np.sum(combined_wise)) print("With W1 - ", np.sum(combined_w1)) """ Explanation: Class of sources in the combined catalogue The sources are grouped depending on the available photometric data. 
End of explanation """ colour_limits = [0.0, 0.5, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.5, 4.0] # Start with the W1-only, i-only and "less than lower colour" bins colour_bin_def = [{"name":"only W1", "condition": combined_wise}, {"name":"only i", "condition": combined_panstarrs}, {"name":"-inf to {}".format(colour_limits[0]), "condition": (combined["colour"] < colour_limits[0])}] # Get the colour bins for i in range(len(colour_limits)-1): name = "{} to {}".format(colour_limits[i], colour_limits[i+1]) condition = ((combined["colour"] >= colour_limits[i]) & (combined["colour"] < colour_limits[i+1])) colour_bin_def.append({"name":name, "condition":condition}) # Add the "more than higher colour" bin colour_bin_def.append({"name":"{} to inf".format(colour_limits[-1]), "condition": (combined["colour"] >= colour_limits[-1])}) combined["category"] = np.nan for i in range(len(colour_bin_def)): combined["category"][colour_bin_def[i]["condition"]] = i np.sum(np.isnan(combined["category"])) """ Explanation: Colour categories The colour categories will be used after the first ML match End of explanation """ numbers_combined_bins = np.array([np.sum(a["condition"]) for a in colour_bin_def]) numbers_combined_bins """ Explanation: We get the number of sources of the combined catalogue in each colour category. It will be used at a later stage to compute the $Q_0$ values End of explanation """ bin_list, centers, Q_0_colour, n_m, q_m = pickle.load(open("../lofar_params.pckl", "rb")) likelihood_ratio_function = MultiMLEstimator(Q_0_colour, n_m, q_m, centers) likelihood_ratio_function_old = MultiMLEstimatorOld(Q_0_colour, n_m, q_m, centers) """ Explanation: Maximum Likelihood End of explanation """ radius = 15 selection = ~np.isnan(combined["category"]) # Avoid the dreaded sources with no actual data catalogue = combined[selection] def apply_ml(i, likelihood_ratio_function): idx_0 = idx_i[idx_lofar == i] d2d_0 = d2d[idx_lofar == i] category = catalogue["category"][idx_0].astype(int) mag = catalogue["i"][idx_0] mag[category == 0] = catalogue["W1mag"][idx_0][category == 0] lofar_ra = lofar[i]["RA"] lofar_dec = lofar[i]["DEC"] lofar_pa = lofar[i]["PA"] lofar_maj_err = lofar[i]["E_Maj"] lofar_min_err = lofar[i]["E_Min"] c_ra = catalogue["ra"][idx_0] c_dec = catalogue["dec"][idx_0] c_ra_err = catalogue["raErr"][idx_0] c_dec_err = catalogue["decErr"][idx_0] sigma, sigma_maj, sigma_min = get_sigma_all(lofar_maj_err, lofar_min_err, lofar_pa, lofar_ra, lofar_dec, c_ra, c_dec, c_ra_err, c_dec_err) lr_0 = likelihood_ratio_function(mag, d2d_0.arcsec, sigma, sigma_maj, sigma_min, category) chosen_index = np.argmax(lr_0) result = [combined_aux_index[selection][idx_0[chosen_index]], # Index (d2d_0.arcsec)[chosen_index], # distance lr_0[chosen_index]] # LR return result from mltier1 import fr_u, fr_u_old def check_ml(i, likelihood_ratio_function, likelihood_ratio_function_old, verbose=True): idx_0 = idx_i[idx_lofar == i] d2d_0 = d2d[idx_lofar == i] category = catalogue["category"][idx_0].astype(int) mag = catalogue["i"][idx_0] mag[category == 0] = catalogue["W1mag"][idx_0][category == 0] lofar_ra = lofar[i]["RA"] lofar_dec = lofar[i]["DEC"] lofar_pa = lofar[i]["PA"] lofar_maj_err = lofar[i]["E_Maj"] lofar_min_err = lofar[i]["E_Min"] c_ra = catalogue["ra"][idx_0] c_dec = catalogue["dec"][idx_0] c_ra_err = catalogue["raErr"][idx_0] c_dec_err = catalogue["decErr"][idx_0] sigma, sigma_maj, sigma_min = get_sigma_all_old(lofar_maj_err, lofar_min_err, lofar_pa, lofar_ra, lofar_dec, c_ra, c_dec, c_ra_err, c_dec_err) 
sigma_0_0, det_sigma = get_sigma_all(lofar_maj_err, lofar_min_err, lofar_pa, lofar_ra, lofar_dec, c_ra, c_dec, c_ra_err, c_dec_err) fr = fr_u(d2d_0.arcsec, sigma_0_0, det_sigma) fr_old = np.array(fr_u_old(d2d_0.arcsec, sigma, sigma_maj, sigma_min)) if verbose: print("NEW - s00: {}; sdet: {}; fr: {}".format(sigma_0_0, det_sigma, fr)) print("OLD - s: {}; smin: {}; smaj: {}; fr: {}".format( np.array(sigma), np.array(sigma_maj), np.array(sigma_min), fr_old)) lr_0 = likelihood_ratio_function(mag, d2d_0.arcsec, sigma_0_0, det_sigma, category) lr_0_old = likelihood_ratio_function_old(mag, d2d_0.arcsec, sigma, sigma_maj, sigma_min, category) chosen_index = np.argmax(lr_0) chosen_index_old = np.argmax(lr_0_old) ix, dist, lr = (combined_aux_index[selection][idx_0[chosen_index]], # Index (d2d_0.arcsec)[chosen_index], # distance lr_0[chosen_index]) ix_old, dist_old, lr_old = (combined_aux_index[selection][idx_0[chosen_index_old]], # Index (d2d_0.arcsec)[chosen_index_old], # distance lr_0[chosen_index_old] ) if verbose: print("NEW res - Ix: {}; dist: {}; LR: {}".format(ix, dist, lr)) # LR print("OLD res - Ix: {}; dist: {}; LR: {}".format(ix_old, dist_old, lr_old)) return (sigma_0_0, det_sigma, fr, np.array(sigma), np.array(sigma_maj), np.array(sigma_min), fr_old, ix, dist, lr, ix_old, dist_old, lr_old, (lofar_maj_err, lofar_min_err, lofar_pa, lofar_ra, lofar_dec, c_ra, c_dec, c_ra_err, c_dec_err)) """ Explanation: ML match End of explanation """ idx_lofar, idx_i, d2d, d3d = search_around_sky( coords_lofar, coords_combined[selection], radius*u.arcsec) idx_lofar_unique = np.unique(idx_lofar) """ Explanation: Run the cross-match End of explanation """ list_i = [141, 235, 396, 412, 418, 711, 858, 887, 932, 965, 1039, 1389, 1680, 1699, 1787, 1927, 2168, 2267, 2339, 2410, 2548, 2838, 2969, 3136, 3163, 3265, 3348, 3353, 3401] for i in range(100000): s00, det_s, fr, s, s_maj, s_min, fr_o, ix, dist, lr, ix_o, dist_o, lr_o, p = check_ml(idx_lofar_unique[i], likelihood_ratio_function, likelihood_ratio_function_old, verbose=False) if (ix != ix_o) and ((lr > 6) or (lr_o > 6)): print(i) #print(ix, dist, lr) #print(ix_o, dist_o, lr_o) #print(s00, det_s, fr, s, s_maj, s_min, fr_o) #print(p) list_i = [141, 235, 396, 412, 418, 711, 858, 887, 932, 965, 1039, 1389, 1680, 1699, 1787, 1927, 2168, 2267, 2339, 2410, 2548, 2838, 2969, 3136, 3163, 3265, 3348, 3353, 3401, 3654, 3687, 4022, 4074, 4083, 4164, 4263] for i in list_i: s00, det_s, fr, s, s_maj, s_min, fr_o, ix, dist, lr, ix_o, dist_o, lr_o, p = check_ml(idx_lofar_unique[i], likelihood_ratio_function, likelihood_ratio_function_old, verbose=False) if ix != ix_o: print(i) print(ix, dist, lr) print(ix_o, dist_o, lr_o) print(s00, det_s, fr, s, s_maj, s_min, fr_o) print(p) import multiprocessing n_cpus_total = multiprocessing.cpu_count() n_cpus = max(1, n_cpus_total-1) def ml(i): return apply_ml(i, likelihood_ratio_function) res = parallel_process(idx_lofar_unique, ml, n_jobs=n_cpus) lofar["lr"] = np.nan # Likelihood ratio lofar["lr_dist"] = np.nan # Distance to the selected source lofar["lr_index"] = np.nan # Index of the PanSTARRS source in combined (lofar["lr_index"][idx_lofar_unique], lofar["lr_dist"][idx_lofar_unique], lofar["lr"][idx_lofar_unique]) = list(map(list, zip(*res))) total_sources = len(idx_lofar_unique) combined_aux_index = np.arange(len(combined)) """ Explanation: Run the ML matching End of explanation """ lofar["lrt"] = lofar["lr"] lofar["lrt"][np.isnan(lofar["lr"])] = 0 q0 = np.sum(Q_0_colour) def completeness(lr, threshold, q0): n = len(lr) lrt = lr[lr 
< threshold] return 1. - np.sum((q0 * lrt)/(q0 * lrt + (1 - q0)))/float(n)/q0 def reliability(lr, threshold, q0): n = len(lr) lrt = lr[lr > threshold] return 1. - np.sum((1. - q0)/(q0 * lrt + (1 - q0)))/float(n)/q0 completeness_v = np.vectorize(completeness, excluded=[0]) reliability_v = np.vectorize(reliability, excluded=[0]) n_test = 100 threshold_mean = np.percentile(lofar["lrt"], 100*(1 - q0)) thresholds = np.arange(0., 10., 0.01) thresholds_fine = np.arange(0.1, 1., 0.001) completeness_t = completeness_v(lofar["lrt"], thresholds, q0) reliability_t = reliability_v(lofar["lrt"], thresholds, q0) average_t = (completeness_t + reliability_t)/2 completeness_t_fine = completeness_v(lofar["lrt"], thresholds_fine, q0) reliability_t_fine = reliability_v(lofar["lrt"], thresholds_fine, q0) average_t_fine = (completeness_t_fine + reliability_t_fine)/2 threshold_sel = thresholds_fine[np.argmax(average_t_fine)] plt.rcParams["figure.figsize"] = (15,6) subplot(1,2,1) plot(thresholds, completeness_t, "r-") plot(thresholds, reliability_t, "g-") plot(thresholds, average_t, "k-") vlines(threshold_sel, 0.9, 1., "k", linestyles="dashed") vlines(threshold_mean, 0.9, 1., "y", linestyles="dashed") ylim([0.9, 1.]) xlabel("Threshold") ylabel("Completeness/Reliability") subplot(1,2,2) plot(thresholds_fine, completeness_t_fine, "r-") plot(thresholds_fine, reliability_t_fine, "g-") plot(thresholds_fine, average_t_fine, "k-") vlines(threshold_sel, 0.9, 1., "k", linestyles="dashed") #vlines(threshold_mean, 0.9, 1., "y", linestyles="dashed") ylim([0.97, 1.]) xlabel("Threshold") ylabel("Completeness/Reliability") print(threshold_sel) plt.rcParams["figure.figsize"] = (15,6) subplot(1,2,1) hist(lofar[lofar["lrt"] != 0]["lrt"], bins=200) vlines([threshold_sel], 0, 5000) ylim([0,5000]) subplot(1,2,2) hist(np.log10(lofar[lofar["lrt"] != 0]["lrt"]+1), bins=200) vlines(np.log10(threshold_sel+1), 0, 5000) ticks, _ = xticks() xticks(ticks, ["{:.1f}".format(10**t-1) for t in ticks]) ylim([0,5000]); lofar["lr_index_sel"] = lofar["lr_index"] lofar["lr_index_sel"][lofar["lrt"] < threshold_sel] = np.nan """ Explanation: Threshold and selection End of explanation """ combined["lr_index_sel"] = combined_aux_index.astype(float) pwl = join(lofar, combined, join_type='left', keys='lr_index_sel', uniq_col_name='{col_name}{table_name}', table_names=['_input', '']) pwl_columns = pwl.colnames for col in pwl_columns: fv = pwl[col].fill_value if (isinstance(fv, np.float64) and (fv != 1e+20)): print(col, fv) pwl[col].fill_value = 1e+20 columns_save = ['Source_Name', 'RA', 'E_RA', 'DEC', 'E_DEC', 'Peak_flux', 'E_Peak_flux', 'Total_flux', 'E_Total_flux', 'Maj', 'E_Maj', 'Min', 'E_Min', 'PA', 'E_PA', 'Isl_rms', 'S_Code', 'Mosaic_ID', 'AllWISE', 'objID', 'ra', 'dec', 'raErr', 'decErr', 'W1mag', 'W1magErr', 'i', 'iErr', 'colour', 'category', 'lr', 'lr_dist'] pwl[columns_save].filled().write('lofar_pw_pdf.fits', format="fits") """ Explanation: Save combined catalogue End of explanation """
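# Quick summary of the selection (an illustrative sketch, not part of the original
# notebook): count how many LOFAR sources keep a counterpart above the chosen
# likelihood-ratio threshold, using the lrt column defined above (lr with NaNs set to 0).
n_total = len(lofar)
n_matched = np.sum(lofar["lrt"] >= threshold_sel)
print("{} of {} LOFAR sources ({:.1f}%) retain a counterpart with LR >= {:.3f}".format(
    n_matched, n_total, 100.0 * n_matched / n_total, threshold_sel))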
ramseylab/networkscompbio
class13_similarity_python3.ipynb
apache-2.0
import pandas
import igraph
import numpy
import matplotlib.pyplot as plt
import scipy.cluster.hierarchy
import scipy.spatial.distance
"""
Explanation: CS446/546 - Class Session 13 - similarity and hierarchical clustering
In this class session we are going to hierarchically cluster (based on Sorensen-Dice similarity) vertices in a directed graph from a landmark paper on human gene regulation (Neph et al., Cell, volume 150, pages 1274-1286, 2012; see PDF on Canvas).
Using Pandas read_csv, read in the file shared/neph_gene_network.txt, which has two columns of text (first column is the regulator gene, second column is the target gene), into a data frame. The file has no header and is tab-delimited. Assign the column names of the dataframe to be regulator and target, respectively.
Let's load the Python packages that we will need for this exercise
End of explanation
"""
edge_list_neph = pandas.read_csv("shared/neph_gene_network.txt", sep="\t", names=["regulator","target"])
"""
Explanation: Using pandas.read_csv, read the file shared/neph_gene_network.txt; name the two columns of the resulting data frame, regulator and target.
End of explanation
"""
neph_graph = igraph.Graph.TupleList(edge_list_neph.values.tolist(), directed=False)
neph_graph.summary()
"""
Explanation: Load the edge-list data into an undirected igraph.Graph object neph_graph, using igraph.Graph.TupleList.
End of explanation
"""
S = neph_graph.similarity_dice()
"""
Explanation: Using the igraph Graph.similarity_dice() method, compute a similarity matrix and assign it to name S
End of explanation
"""
D = 1 - numpy.matrix(S)
"""
Explanation: Using the numpy.matrix constructor, compute a distance matrix (1-S) and assign to object D
End of explanation
"""
vD = scipy.spatial.distance.squareform(D)
"""
Explanation: Use scipy.spatial.distance.squareform to make a vector-form distance vector from the square-form distance matrix D; call the resulting object vD
End of explanation
"""
hc = scipy.cluster.hierarchy.linkage(vD, method="average")
"""
Explanation: Using scipy.cluster.hierarchy.linkage on vD (with method="average"), perform hierarchical agglomerative clustering. Assign the resulting object to name hc
End of explanation
"""
plt.figure()
scipy.cluster.hierarchy.dendrogram(hc)
plt.show()
"""
Explanation: Plot a dendrogram using scipy.cluster.hierarchy.dendrogram
End of explanation
"""
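# Optional follow-up (a sketch that goes beyond the original exercise): cut the tree
# into flat clusters and inspect the sizes of the largest ones. The distance cutoff
# of 0.75 is an arbitrary illustrative choice, not a value from the class session.
import collections
cluster_ids = scipy.cluster.hierarchy.fcluster(hc, t=0.75, criterion="distance")
print("number of flat clusters:", len(set(cluster_ids)))
print("largest clusters (id, size):", collections.Counter(cluster_ids).most_common(5))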
gfeiden/Notebook
Daily/20150902_phoenix_cifist_bcs.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.interpolate as scint """ Explanation: Phoenix BT-Settl Bolometric Corrections Figuring out the best method of handling Phoenix bolometric correction files. End of explanation """ cd /Users/grefe950/Projects/starspot/starspot/color/tab/phx/CIFIST15/ """ Explanation: Change to directory containing bolometric correction files. End of explanation """ bc_table = np.genfromtxt('colmag.BT-Settl.server.JOHNSON.Vega', comments='!') """ Explanation: Load a bolometric correction table, say for the Cousins AB photometric system. End of explanation """ test_surface = scint.LinearNDInterpolator(bc_table[:, :2], bc_table[:, 4:]) """ Explanation: Now, the structure of the file is quite irregular. The grid is not rectangular, which is not an immediate problem. The table is strucutred such that column 0 contains Teff in increasing order, followed by logg in column 1 in increasing order. However, metallicities in column 2 appear to be in decreasing order, which may be a problem for simple interpolation routines. Alpha abundances follow and are in increasing order, but since this is a "standard" grid, whereby alpha enrichment is a function of metallicity, we can ignore it for the moment. Let's take a first swing at the problem by using the LinearND Interpolator from SciPy. End of explanation """ test_surface(np.array([1500., 5.0])) """ Explanation: The surface compiled, but that is not a guarantee that the interpolation will work successfully. Some tests are required to confirm this is the case. Let's try a few Teffs at logg = 5 with solar metallicity. End of explanation """ test_surface(np.array([3000., 5.0])) """ Explanation: This agrees with data in the bolometric correciton table. Teff logg [Fe/H] [a/Fe] B V R I 1500.00 5.00 0.00 0.00 -15.557 -16.084 -11.560 -9.291 Now, let's raise the temperature. End of explanation """ test_surface(np.array([3000., 5.0])) """ Explanation: Again, we have a good match to tabulated values, Teff logg [Fe/H] [a/Fe] B V R I 3000.00 5.00 0.00 0.00 -6.603 -5.641 -4.566 -3.273 However, since we are using a tabulated metallicity, the interpolation may proceed without too much trouble. If we select a metallicity between grid points, how do we fare? End of explanation """ test_surface(np.array([3000., 5.0])) """ Explanation: This appears consistent. What about progressing to lower metallicity values? End of explanation """ iso = np.genfromtxt('/Users/grefe950/evolve/dmestar/iso/dmestar_00120.0myr_z+0.00_a+0.00_marcs.iso') """ Explanation: For reference, at [Fe/H] = $-0.5$ dex, we have Teff logg [Fe/H] [a/Fe] B V R I 3000.00 5.00 -0.50 0.20 -6.533 -5.496 -4.424 -3.154 The interpolation routine has seemingly handled the non-monotonic nature of the metallicity column, as all interpolate values lie between values at the two respective nodes. Now let's import an isochrone and calcuate colors for stellar models for comparison against MARCS bolometric corrections. End of explanation """ iso.shape """ Explanation: Make sure there are magnitudes and colors associated with this isochrone. End of explanation """ test_bcs = test_surface(10**iso[:,1], iso[:, 2]) test_bcs.shape """ Explanation: A standard isochrone would only have 6 columns, so 11 indicates this isochrone does have photometric magnitudes computed, likely BV(Ic) (JK)2MASS. End of explanation """ bol_mags = 4.74 - 2.5*iso[:, 3] for i in range(test_bcs.shape[1]): bcs = -1.0*np.log10(10**iso[:, 1]/5777.) 
+ test_bcs[:, i] - 5.0*iso[:, 4] if i == 0: test_mags = bol_mags - bcs else: test_mags = np.column_stack((test_mags, bol_mags - bcs)) iso[50, 0:4], iso[50, 6:], test_mags[50] """ Explanation: For each Teff and logg combination we now have BCs for BV(RI)c from BT-Settl models. Now we need to convert the bolometric corrections to absolute magnitudes. End of explanation """ col_table = np.genfromtxt('colmag.BT-Settl.server.COUSINS.Vega', comments='!') """ Explanation: Let's try something different: using the color tables provided by the Phoenix group, from which the bolometric corrections are calculated. End of explanation """ col_surface = scint.LinearNDInterpolator(col_table[:, :2], col_table[:, 4:8]) """ Explanation: Create an interpolation surface from the magnitude table. End of explanation """ phx_mags = col_surface(10.0**iso[:, 1], iso[:, 2]) """ Explanation: Compute magnitudes for a Dartmouth isochrone. End of explanation """ for i in range(phx_mags.shape[1]): phx_mags[:, i] = phx_mags[:, i] - 5.0*np.log10(10**iso[:, 4]*6.956e10/3.086e18) + 5.0 """ Explanation: Convert surface magnitudes to absolute magnitudes using the distance modulus and the radius of the star. End of explanation """ iso[40, :5], iso[40, 6:], phx_mags[40] """ Explanation: Now compare against MARCS values. End of explanation """ phx_iso = np.genfromtxt('/Users/grefe950/Notebook/Projects/ngc2516_spots/data/phx_isochrone_120myr.txt') fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharey=True) ax[0].set_xlim(0.0, 2.0) ax[1].set_xlim(0.0, 4.0) ax[0].set_ylim(16, 2) ax[0].plot(iso[:, 6] - iso[:, 7], iso[:, 7], lw=3, c="#b22222") ax[0].plot(phx_mags[:, 0] - phx_mags[:, 1], phx_mags[:, 1], lw=3, c="#1e90ff") ax[0].plot(phx_iso[:, 7] - phx_iso[:, 8], phx_iso[:, 8], dashes=(20., 5.), lw=3, c="#555555") ax[1].plot(iso[:, 7] - iso[:, 8], iso[:, 7], lw=3, c="#b22222") ax[1].plot(phx_mags[:, 1] - phx_mags[:, 3], phx_mags[:, 1], lw=3, c="#1e90ff") ax[1].plot(phx_iso[:, 8] - phx_iso[:, 10], phx_iso[:, 8], dashes=(20., 5.), lw=3, c="#555555") """ Explanation: Load an isochrone from the Lyon-Phoenix series. End of explanation """ new_isochrone = np.column_stack((iso[:, :6], phx_mags)) np.savetxt('/Users/grefe950/Notebook/Projects/pleiades_colors/data/dmestar_00120.0myr_z+0.00_a+0.00_mixed.iso', new_isochrone, fmt='%16.8f') """ Explanation: Export a new isochrone with colors from AGSS09 (PHX) End of explanation """ tmp = -10.*np.log10(3681./5777.) + test_surface(3681., 4.78, 0.0) #+ 5.0*np.log10(0.477) tmp 4.74 - 2.5*(-1.44) - tmp """ Explanation: Separate Test Case These are clearly not correct and are between 1 and 2 magnitudes off from expected values. Need to reproduce the Phoenix group's results, first. End of explanation """
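# A quick way to quantify the discrepancy noted above (illustrative sketch only):
# compare V-band magnitudes from the bolometric-correction route (test_mags, column 1)
# against the colour-table route (phx_mags, column 1) along the isochrone.
dV = test_mags[:, 1] - phx_mags[:, 1]
print("V-band offset: min = {:.2f}, median = {:.2f}, max = {:.2f}".format(
    np.min(dV), np.median(dV), np.max(dV)))
fig, ax = plt.subplots(1, 1, figsize=(8., 5.))
ax.plot(10**iso[:, 1], dV, lw=2, c="#555555")
ax.set_xlabel("Teff (K)")
ax.set_ylabel("V (BC route) - V (colour-table route)")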
PytLab/gaft
examples/1D_optimization_example.ipynb
gpl-3.0
from gaft.components import BinaryIndividual
indv = BinaryIndividual(ranges=[(0, 10)], eps=0.001)
"""
Explanation: Find the global minimum of the function $f(x) = x + 10sin(5x) + 7cos(4x)$
Create individual (use binary encoding)
End of explanation
"""
from gaft.components import Population
population = Population(indv_template=indv, size=50).init()
"""
Explanation: Create a population with 50 individuals
End of explanation
"""
from gaft.operators import TournamentSelection
selection = TournamentSelection()
"""
Explanation: Create genetic operators: selection, crossover, mutation
1. Tournament selection
End of explanation
"""
from gaft.operators import UniformCrossover
crossover = UniformCrossover(pc=0.8, pe=0.5)
"""
Explanation: 2. Uniform crossover
pc is the probability of the crossover operation
pe is the exchange probability for each possible gene bit in a chromosome
End of explanation
"""
from gaft.operators import FlipBitMutation
mutation = FlipBitMutation(pm=0.1)
"""
Explanation: 3. Flip bit mutation
pm is the probability of mutation
End of explanation
"""
from gaft.analysis import ConsoleOutput
"""
Explanation: Import an on-the-fly analysis plugin to output info to the console
End of explanation
"""
from gaft import GAEngine
engine = GAEngine(population=population, selection=selection, crossover=crossover, mutation=mutation, analysis=[ConsoleOutput])
"""
Explanation: Create an engine to run
End of explanation
"""
from math import sin, cos
@engine.fitness_register
@engine.minimize
def fitness(indv):
    x, = indv.solution
    return x + 10*sin(5*x) + 7*cos(4*x)
"""
Explanation: Define the target function to optimize
Here we try to find the global minimum of $f(x) = x + 10sin(5x) + 7cos(4x)$.
GAFT finds the maximum of the fitness function, so here we use the engine.minimize decorator to tell GAFT to find the minimum instead.
End of explanation
"""
engine.run(ng=50)
"""
Explanation: Run the engine
End of explanation
"""
best_indv = engine.population.best_indv(engine.fitness)
"""
Explanation: After the engine has run, we can do something more...
Get the best individual
End of explanation
"""
best_indv.solution
"""
Explanation: Get the solution
End of explanation
"""
engine.fitness(best_indv)
"""
Explanation: And the fitness value
End of explanation
"""
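# Cross-check of the GA result (an illustrative sketch, not part of the original
# example): scan f(x) on a dense grid over the same range [0, 10] and compare the
# brute-force minimum with the solution found by the engine.
import numpy as np
xs = np.linspace(0, 10, 100001)
fs = xs + 10*np.sin(5*xs) + 7*np.cos(4*xs)
x_grid = xs[np.argmin(fs)]
x_ga, = best_indv.solution
f_ga = x_ga + 10*np.sin(5*x_ga) + 7*np.cos(4*x_ga)
print("grid-search minimum: x = {:.4f}, f(x) = {:.4f}".format(x_grid, fs.min()))
print("GA solution:         x = {:.4f}, f(x) = {:.4f}".format(x_ga, f_ga))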
GoogleCloudPlatform/ai-platform-samples
notebooks/samples/tensorflow/sentiment_analysis/ai_platform_sentiment_analysis.ipynb
apache-2.0
import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. if 'google.colab' in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. else: %env GOOGLE_APPLICATION_CREDENTIALS '' """ Explanation: <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/samples/tensorflow/sentiment_analysis/ai_platform_sentiment_analysis.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/samples/tensorflow/sentiment_analysis/ai_platform_sentiment_analysis.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> Overview AI Platform Online Prediction now supports custom python code in to apply custom prediction routines, in this blog post we will perform sentiment analysis using Twitter data and Transfer learning using Pretrained Glove embeddings. This tutorial also uses the new AI Platform Pipelines product. Dataset We use the Twitter data which is called sentiment140 dataset. It contains 1,600,000 tweets extracted using the Twitter AI. The tweets have been annotated (0 = negative, 4 = positive) and they can be used to detect sentiment. It contains the following 6 fields: target: the polarity of the tweet (0 = negative, 2 = neutral, 4 = positive) ids: The id of the tweet ( 2087) date: the date of the tweet (Sat May 16 23:58:44 UTC 2009) flag: The query (lyx). If there is no query, then this value is NO_QUERY. user: the user that tweeted (robotickilldozr) text: the text of the tweet (Lyx is cool) The official link regarding the dataset with resources about how it was generated is here Objective In this notebook, we show how to deploy a TensorFlow model using AI Platform Custom Prediction Code using sentiment140 for sentiment analysis. Costs This tutorial uses billable components of Google Cloud Platform (GCP): Cloud AI Platform Cloud Storage Learn about Cloud AI Platform pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or AI Platform Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Google Cloud SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Google Cloud guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the Cloud SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate that environment and run pip install jupyter in a shell to install Jupyter. Run jupyter notebook in a shell to launch Jupyter. 
Open this notebook in the Jupyter Notebook Dashboard. Set up your GCP project The following steps are required, regardless of your notebook environment. Select or create a GCP project.. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the AI Platform APIs and Compute Engine APIs. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Authenticate your GCP account If you are using AI Platform Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the GCP Console, go to the Create service account key page. From the Service account drop-down list, select New service account. In the Service account name field, enter a name. From the Role drop-down list, select Machine Learning Engine > AI Platform Admin and Storage > Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. If you are running this notebook in Colab, run the following cell to authenticate your Google Cloud Platform user account End of explanation """ !pip install -U tensorflow==1.15.* --user import tensorflow as tf print(tf.__version__) import pandas as pd import numpy as np import os """ Explanation: PIP Install Packages and dependencies End of explanation """ PROJECT_ID = '[your-project-id]' # TODO (Set to your GCP Project name) !gcloud config set project {PROJECT_ID} BUCKET_NAME = '[your-bucket-name]' # TODO (Set to your GCS Bucket name) REGION = 'us-central1' #@param {type:"string"} # Model information. ROOT = 'ml_pipeline' MODEL_DIR = os.path.join(ROOT,'models').replace("\\","/") PACKAGES_DIR = os.path.join(ROOT,'packages').replace("\\","/") !gsutil rm -r gs://{BUCKET_NAME}/{ROOT} """ Explanation: 1. Project Configuration End of explanation """ !gsutil cp gs://cloud-samples-data/ai-platform/sentiment_analysis/training.csv . """ Explanation: 2. Get training data In this step, we are going to: 1. Download Twitter data 2. Load the data to Pandas Dataframe. 3. Convert the class feature (sentiment) from string to a numeric indicator. Data can be downloaded directly from here (https://www.kaggle.com/kazanova/sentiment140) It is also located here: gs://cloud-samples-data/ai-platform/sentiment_analysis/training.csv You can copy it by using the following command: gsutil cp gs://cloud-samples-data/ai-platform/sentiment_analysis/training.csv . End of explanation """ sentiment_mapping = { 0: 'negative', 2: 'neutral', 4: 'positive' } df_twitter = pd.read_csv('training.csv', encoding='latin1', header=None)\ .rename(columns={ 0: 'sentiment', 1: 'id', 2: 'posted_at', 3: 'query', 4: 'username', 5: 'text' })[['sentiment', 'text']] df_twitter['sentiment_label'] = df_twitter['sentiment'].map(sentiment_mapping) """ Explanation: 2.1. Input data Create a dictionary with a mapping for each label. 
End of explanation """ df_twitter['sentiment_label'].count() """ Explanation: Verify number of records End of explanation """ %%writefile preprocess.py from tensorflow.python.keras.preprocessing import sequence from tensorflow.keras.preprocessing import text import re class TextPreprocessor(object): def __init__(self, vocab_size, max_sequence_length): self._vocab_size = vocab_size self._max_sequence_length = max_sequence_length self._tokenizer = None def _clean_line(self, text): text = re.sub(r"http\S+", "", text) text = re.sub(r"@[A-Za-z0-9]+", "", text) text = re.sub(r"#[A-Za-z0-9]+", "", text) text = text.replace("RT","") text = text.lower() text = text.strip() return text def fit(self, text_list): # Create vocabulary from input corpus. text_list_cleaned = [self._clean_line(txt) for txt in text_list] tokenizer = text.Tokenizer(num_words=self._vocab_size) tokenizer.fit_on_texts(text_list) self._tokenizer = tokenizer def transform(self, text_list): # Transform text to sequence of integers text_list = [self._clean_line(txt) for txt in text_list] text_sequence = self._tokenizer.texts_to_sequences(text_list) # Fix sequence length to max value. Sequences shorter than the length are # padded in the beginning and sequences longer are truncated # at the beginning. padded_text_sequence = sequence.pad_sequences( text_sequence, maxlen=self._max_sequence_length) return padded_text_sequence """ Explanation: 2.2. Data processing fn End of explanation """ from preprocess import TextPreprocessor processor = TextPreprocessor(5, 5) processor.fit(['hello Google Cloud AI Platform','test']) processor.transform(['hello Google Cloud AI Platform',"lol"]) """ Explanation: Some small test: End of explanation """ CLASSES = {'negative': 0, 'positive': 1} # label-to-int mapping VOCAB_SIZE = 25000 # Limit on the number vocabulary size used for tokenization MAX_SEQUENCE_LENGTH = 50 # Sentences will be truncated/padded to this length from preprocess import TextPreprocessor from sklearn.model_selection import train_test_split sents = df_twitter.text labels = np.array(df_twitter.sentiment_label.map(CLASSES)) # Train and test split X, _, y, _ = train_test_split(sents, labels, test_size=0.1) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25) # Create vocabulary from training corpus. processor = TextPreprocessor(VOCAB_SIZE, MAX_SEQUENCE_LENGTH) processor.fit(X_train) # Preprocess the data train_texts_vectorized = processor.transform(X_train) eval_texts_vectorized = processor.transform(X_test) import pickle with open('./processor_state.pkl', 'wb') as f: pickle.dump(processor, f) """ Explanation: 2.3.Data preparation Text preprocessor End of explanation """ # Hyperparameters LEARNING_RATE = .001 EMBEDDING_DIM = 50 FILTERS = 64 DROPOUT_RATE = 0.5 POOL_SIZE = 3 NUM_EPOCH = 25 BATCH_SIZE = 128 KERNEL_SIZES = [2, 5, 8] """ Explanation: 3. 
Model Create a TensorFlow model End of explanation """ def create_model(vocab_size, embedding_dim, filters, kernel_sizes, dropout_rate, pool_size, embedding_matrix): # Input layer model_input = tf.keras.layers.Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32') # Embedding layer z = tf.keras.layers.Embedding( input_dim=vocab_size + 1, output_dim=embedding_dim, input_length=MAX_SEQUENCE_LENGTH, weights=[embedding_matrix] )(model_input) z = tf.keras.layers.Dropout(dropout_rate)(z) # Convolutional block conv_blocks = [] for kernel_size in kernel_sizes: conv = tf.keras.layers.Convolution1D( filters=filters, kernel_size=kernel_size, padding='valid', activation='relu', bias_initializer='random_uniform', strides=1)(z) conv = tf.keras.layers.MaxPooling1D(pool_size=2)(conv) conv = tf.keras.layers.Flatten()(conv) conv_blocks.append(conv) z = tf.keras.layers.Concatenate()(conv_blocks) if len(conv_blocks) > 1 else conv_blocks[0] z = tf.keras.layers.Dropout(dropout_rate)(z) z = tf.keras.layers.Dense(100, activation='relu')(z) model_output = tf.keras.layers.Dense(1, activation='sigmoid')(z) model = tf.keras.models.Model(model_input, model_output) return model """ Explanation: 3.1. Basic model End of explanation """ !gsutil cp gs://cloud-samples-data/ai-platform/sentiment_analysis/glove.twitter.27B.50d.txt . """ Explanation: 3.2. Pretrained Glove embeddings Embeddings can be downloaded from Stanford Glove project: https://nlp.stanford.edu/projects/glove/ - Download file here (http://nlp.stanford.edu/data/glove.twitter.27B.zip) - Twitter (2B tweets, 27B tokens, 1.2M vocab, uncased, 25d, 50d, 100d, & 200d vectors, 1.42 GB download) It is also located here: gs://cloud-samples-data/ai-platform/sentiment_analysis/glove.twitter.27B.50d.txt You can copy it by using the following command: gsutil cp gs://cloud-samples-data/ai-platform/sentiment_analysis/glove.twitter.27B.50d.txt . End of explanation """ def get_coefs(word, *arr): return word, np.asarray(arr, dtype='float32') embeddings_index = dict(get_coefs(*o.strip().split()) for o in open('glove.twitter.27B.50d.txt','r', encoding='utf8')) word_index = processor._tokenizer.word_index nb_words = min(VOCAB_SIZE, len(word_index)) embedding_matrix = np.zeros((nb_words + 1, EMBEDDING_DIM)) for word, i in word_index.items(): if i >= VOCAB_SIZE: continue embedding_vector = embeddings_index.get(word) if embedding_vector is not None: embedding_matrix[i] = embedding_vector """ Explanation: Create an embedding index End of explanation """ model = create_model(VOCAB_SIZE, EMBEDDING_DIM, FILTERS, KERNEL_SIZES, DROPOUT_RATE,POOL_SIZE, embedding_matrix) # Compile model with learning parameters. optimizer = tf.keras.optimizers.Nadam(lr=0.001) model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['acc']) #Keras train history = model.fit( train_texts_vectorized, y_train, epochs=NUM_EPOCH, batch_size=BATCH_SIZE, validation_data=(eval_texts_vectorized, y_test), verbose=2, callbacks=[ tf.keras.callbacks.ReduceLROnPlateau( monitor='val_acc', min_delta=0.005, patience=3, factor=0.5), tf.keras.callbacks.EarlyStopping( monitor='val_loss', min_delta=0.005, patience=5, verbose=0, mode='auto' ), tf.keras.callbacks.History() ] ) with open('history.pkl','wb') as file: pickle.dump(history.history,file) model.save('keras_saved_model.h5') """ Explanation: 3.3. 
Create, compile and train TensorFlow model End of explanation """ %%writefile custom_prediction.py import os import pickle import numpy as np from datetime import date from google.cloud import logging import tensorflow.keras as keras class CustomModelPrediction(object): def __init__(self, model, processor): self._model = model self._processor = processor def _postprocess(self, predictions): labels = ['negative', 'positive'] return [ { "label":labels[int(np.round(prediction))], "score":float(np.round(prediction, 4)) } for prediction in predictions] def predict(self, instances, **kwargs): preprocessed_data = self._processor.transform(instances) predictions = self._model.predict(preprocessed_data) labels = self._postprocess(predictions) return labels @classmethod def from_path(cls, model_dir): model = keras.models.load_model( os.path.join(model_dir,'keras_saved_model.h5')) with open(os.path.join(model_dir, 'processor_state.pkl'), 'rb') as f: processor = pickle.load(f) return cls(model, processor) """ Explanation: 4. Deployment 4.1. Prepare custom model prediction End of explanation """ from custom_prediction import CustomModelPrediction classifier = CustomModelPrediction.from_path('.') requests = (['God I hate the north', 'god I love this']) response = classifier.predict(requests) response """ Explanation: Testing custom prediction locally End of explanation """ %%writefile setup.py from setuptools import setup setup( name='tweet_sentiment_classifier', version='0.1', include_package_data=True, scripts=['preprocess.py', 'custom_prediction.py'] ) """ Explanation: 4.2. Package it End of explanation """ !python setup.py sdist !gsutil cp ./dist/tweet_sentiment_classifier-0.1.tar.gz gs://{BUCKET_NAME}/{PACKAGES_DIR}/tweet_sentiment_classifier-0.1.tar.gz !gsutil cp keras_saved_model.h5 gs://{BUCKET_NAME}/{MODEL_DIR}/ !gsutil cp processor_state.pkl gs://{BUCKET_NAME}/{MODEL_DIR}/ """ Explanation: Wrap it up and copy to GCP End of explanation """ MODEL_NAME='twitter_model_custom_prediction' MODEL_VERSION='v1' RUNTIME_VERSION='1.15' PYTHON_VERSION='3.7' !gcloud beta ai-platform models create {MODEL_NAME} --regions {REGION} --enable-logging --enable-console-logging !gcloud ai-platform versions delete {MODEL_VERSION} --model {MODEL_NAME} --quiet !gcloud beta ai-platform versions create {MODEL_VERSION} \ --model {MODEL_NAME} \ --origin gs://{BUCKET_NAME}/{MODEL_DIR} \ --python-version {PYTHON_VERSION} \ --runtime-version {RUNTIME_VERSION} \ --package-uris gs://{BUCKET_NAME}/{PACKAGES_DIR}/tweet_sentiment_classifier-0.1.tar.gz \ --prediction-class=custom_prediction.CustomModelPrediction """ Explanation: 5. Create model and version End of explanation """ from googleapiclient import discovery from oauth2client.client import GoogleCredentials import json requests = [ 'god this episode is bad', 'meh, I kinda like it', 'what were the writer thinking, omg!', 'omg! what a twist, who would\'ve though :o!', 'woohoow, sansa for the win!' ] # JSON format the requests request_data = {'instances': requests} """ Explanation: 6. Testing End of explanation """ %%time api = discovery.build('ml', 'v1') parent = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, MODEL_VERSION) parent = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL_NAME) response = api.projects().predict(body=request_data, name=parent).execute() response['predictions'] # Delete model version resource ! gcloud ai-platform versions delete {MODEL_VERSION} --model {MODEL_NAME} --quiet # Delete model resource ! 
gcloud ai-platform models delete {MODEL_NAME} --quiet """ Explanation: Authenticate and call AI Plaform prediction API End of explanation """ !pip install 'kfp>=0.1.31' --user """ Explanation: 7. Deploy using AI Platform Pipelines With AI Platform Pipelines, you can orchestrate your machine learning (ML) workflows as reusable and reproducible pipelines. AI Platform Pipelines saves you the difficulty of setting up Kubeflow Pipelines with TensorFlow Extended on Google Kubernetes Engine. Install the KubeFlow Pipelines SDK End of explanation """ import json import kfp import kfp.components as comp import kfp.dsl as dsl import pandas as pd import time """ Explanation: Import dependencies End of explanation """ # Project parameters. CLUSTER='' # TODO Change to your GKE cluster ZONE='us-central1-a' # Pipeline Parameters MODEL_NAME = 'sentiment_classifier' + str(int(time.time())) MODEL_VERSION = 'v1' + str(int(time.time())) RUNTIME_VERSION = '1.15' PYTHON_VERSION='3.7' PACKAGE_TRAINER_URI = 'gs://cloud-samples-data/ai-platform/sentiment_analysis/trainer-0.1.tar.gz' PACKAGE_CUSTOM_PREDICTION_URI = 'gs://cloud-samples-data/ai-platform/sentiment_analysis/custom_prediction-0.1.tar.gz' PACKAGE_URIS = json.dumps([PACKAGE_TRAINER_URI]) PACKAGE_PATH='./trainer' PYTHON_MODULE = 'trainer.task' TRAINING_FILE='gs://cloud-samples-data/ai-platform/sentiment_analysis/training.csv'.format(BUCKET_NAME) GLOVE_FILE='gs://cloud-samples-data/ai-platform/sentiment_analysis/glove.twitter.27B.50d.txt'.format(BUCKET_NAME) MODEL_DIR='gs://{}/models'.format(BUCKET_NAME) SAVED_MODEL_NAME='keras_saved_model.h5' PROCESSOR_STATE_FILE='processor_state.pkl' PIPELINE_NAME = 'Text Prediction' PIPELINE_FILENAME_PREFIX = 'twitter' PIPELINE_DESCRIPTION = 'Text Prediction' # Note, numeric parameters should be pass as string. TRAINER_ARGS = json.dumps(['--train-file', TRAINING_FILE, '--glove-file', GLOVE_FILE, '--learning-rate', '0.001', '--embedding-dim', '50', '--num-epochs', '25', '--filter-size', '64', '--batch-size', '128', '--vocab-size', '25000', '--pool-size', '3', '--max-sequence-length', '50', '--saved-model', SAVED_MODEL_NAME, '--preprocessor-state-file', PROCESSOR_STATE_FILE, '--gcs-bucket', BUCKET_NAME, '--deploy-gcp'] ) """ Explanation: Create a Hosted AI Platform Pipeline Create a new Hosted KubeFlow pipeline under AI Platform -> Pipelines. Set up you AI Platform Pipeline as indicated here Note: Verify you are using version 0.2.5 and above. More information here Seting up credentials If you run pipelines that requires calling any GCP services, such as Cloud Storage, Cloud ML Engine, Dataflow, or Dataproc, you need to set the application default credential to a pipeline step by mounting the proper GCP service account token as a Kubernetes secret. 
Documentation here Train and deploy the model End of explanation """ aiplatform_train_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/master/components/gcp/ml_engine/train/component.yaml') def train(project_id, trainer_args, package_uris, job_dir, region, python_module, python_version, runtime_version): return aiplatform_train_op( project_id=project_id, python_module=python_module, python_version=python_version, package_uris=package_uris, region=region, args=trainer_args, job_dir=job_dir, runtime_version=runtime_version ) """ Explanation: Train the model End of explanation """ aiplatform_deploy_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/master/components/gcp/ml_engine/deploy/component.yaml') def deploy(project_id, model_uri, model_id, model_version, runtime_version, python_version, version): return aiplatform_deploy_op( model_uri=model_uri, project_id=project_id, model_id=model_id, version_id=model_version, runtime_version=runtime_version, python_version=python_version, version=version, replace_existing_version=True, set_default=True) @dsl.pipeline( name=PIPELINE_NAME, description=PIPELINE_DESCRIPTION ) def pipeline(project_id=PROJECT_ID, python_module=PYTHON_MODULE, region=REGION, runtime_version=RUNTIME_VERSION, package_uris=PACKAGE_URIS, python_version=PYTHON_VERSION, job_dir=MODEL_DIR,): train_task = train(project_id, TRAINER_ARGS, package_uris, job_dir, region, python_module, python_version, runtime_version) deploy_task = deploy(project_id, train_task.outputs['job_dir'], MODEL_NAME, MODEL_VERSION, runtime_version, python_version, { "deploymentUri": 'gs://news-ml/models', "packageUris": [PACKAGE_CUSTOM_PREDICTION_URI, PACKAGE_TRAINER_URI], "predictionClass": 'model_prediction.CustomModelPrediction' } ) return True # Reference for invocation later pipeline_func = pipeline """ Explanation: Deploy the model End of explanation """ !kubectl get secrets !gcloud container clusters get-credentials "$CLUSTER" --zone "$ZONE" --project "$PROJECT_ID" """ Explanation: Run the KFP pipeline End of explanation """ KFP_HOST = '' pipeline = kfp.Client(host=KFP_HOST).create_run_from_pipeline_func(pipeline, arguments={}) pipeline.wait_for_run_completion(timeout=1800) """ Explanation: Obtain the KFP_HOST variable from the AI Platform Managed pipelines screen in Google Cloud Console. End of explanation """
European-XFEL/h5tools-py
docs/Demo.ipynb
bsd-3-clause
!python3 -m karabo_data.tests.make_examples """ Explanation: Reading data with karabo_data This command creates the sample data files used in the rest of this example. These files contain no real data, but they have the same structure as European XFEL's HDF5 data files. End of explanation """ !h5ls fxe_control_example.h5 from karabo_data import H5File f = H5File('fxe_control_example.h5') f.control_sources f.instrument_sources """ Explanation: Single files End of explanation """ for tid, data in f.trains(): print("Processing train", tid) print("beam iyPos:", data['SA1_XTD2_XGM/DOOCS/MAIN']['beamPosition.iyPos.value']) break tid, data = f.train_from_id(10005) data['FXE_XAD_GEC/CAM/CAMERA:daqOutput']['data.image.dims'] """ Explanation: Get data by train End of explanation """ !ls fxe_example_run/ from karabo_data import RunDirectory run = RunDirectory('fxe_example_run/') run.files[:3] # The objects for the individual files (see above) """ Explanation: These are just a few of the ways to access data. The attributes and methods described below for run directories also work with individual files. We expect that it will normally make sense to access a run directory as a single object, rather than working with the files separately. Run directories An experimental run is recorded as a collection of files in a directory. Another dummy example: End of explanation """ run.control_sources run.instrument_sources """ Explanation: What devices were recording in this run? Control devices are slow data, recording once per train. Instrument devices includes detector data, but also some other data sources such as cameras. They can have more than one reading per train. End of explanation """ print(run.train_ids[:10]) """ Explanation: Which trains are in this run? End of explanation """ run.keys_for_source('SPB_XTD9_XGM/DOOCS/MAIN:output') """ Explanation: See the available keys for a given source: End of explanation """ for tid, data in run.trains(): print("Processing train", tid) print("Detctor data module 0 shape:", data['FXE_DET_LPD1M-1/DET/0CH0:xtdf']['image.data'].shape) break # Stop after the first train to keep the demo short """ Explanation: This collects data from across files, including detector data: End of explanation """ tid, data = run.train_from_id(10005) tid, data = run.train_from_index(5) """ Explanation: Train IDs are meant to be globally unique (although there were some glitches with this in the past). A train index is only within this run. End of explanation """ ixPos = run.get_series('SA1_XTD2_XGM/DOOCS/MAIN', 'beamPosition.ixPos.value') ixPos.tail(10) """ Explanation: Series data to pandas Data which holds a single number per train (or per pulse) can be extracted to as series (individual columns) and dataframes (tables) for pandas, a widely-used tool for data manipulation. karabo_data chains sequence files, which contain successive data from the same source. In this example, trains 10000–10399 are in one sequence file (...DA01-S00000.h5), and 10400–10479 are in another (...DA01-S00001.h5). They are concatenated into one series: End of explanation """ run.get_dataframe(fields=[("*_XGM/*", "*.i[xy]Pos")]) """ Explanation: To extract a dataframe, you can select interesting data fields with glob syntax, as often used for selecting files on Unix platforms. 
[abc]: one character, a/b/c ?: any one character *: any sequence of characters End of explanation """ xtd2_intensity = run.get_array('SA1_XTD2_XGM/DOOCS/MAIN:output', 'data.intensityTD', extra_dims=['pulseID']) xtd2_intensity """ Explanation: Labelled arrays Data with extra dimensions can be handled as xarray labelled arrays. These are a wrapper around Numpy arrays with indexes which can be used to align them and select data. End of explanation """ import xarray as xr xtd9_intensity = run.get_array('SPB_XTD9_XGM/DOOCS/MAIN:output', 'data.intensityTD', extra_dims=['pulseID']) # Align two arrays, keep only trains which they both have data for: xtd2_intensity, xtd9_intensity = xr.align(xtd2_intensity, xtd9_intensity, join='inner') # Select data for a single train by train ID: xtd2_intensity.sel(trainId=10004) # Select data from a range of train IDs. # This includes the end value, unlike normal Python indexing xtd2_intensity.loc[10004:10006] """ Explanation: Here's a brief example of using xarray to align the data and select by train ID. See the examples in the xarray docs for more on what it can do. In this example data, all the data sources have the same range of train IDs, so aligning them doesn't change anything. In real data, devices may miss some trains that other devices did record. End of explanation """ from karabo_data import by_index # Select the first 5 trains in this run: sel = run.select_trains(by_index[:5]) # Get the whole of this array: arr = sel.get_array('FXE_XAD_GEC/CAM/CAMERA:daqOutput', 'data.image.pixels') print("Whole array shape:", arr.shape) # Get a region of interest arr2 = sel.get_array('FXE_XAD_GEC/CAM/CAMERA:daqOutput', 'data.image.pixels', roi=by_index[100:200, :512]) print("ROI array shape:", arr2.shape) """ Explanation: You can also specify a region of interest from an array to load only part of the data: End of explanation """ run.info() run.detector_info('FXE_DET_LPD1M-1/DET/0CH0:xtdf') """ Explanation: General information karabo_data provides a few ways to get general information about what's in data files. First, from Python code: End of explanation """ !lsxfel fxe_example_run/RAW-R0450-LPD00-S00000.h5 !lsxfel fxe_example_run/RAW-R0450-DA01-S00000.h5 !lsxfel fxe_example_run """ Explanation: The lsxfel command provides similar information at the command line: End of explanation """
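# A short follow-up sketch combining the APIs shown above: pull one slow (control)
# value for the whole run as a pandas Series and summarise it. Only methods already
# used in this demo (RunDirectory.get_series) plus standard pandas calls are assumed.
ixPos_series = run.get_series('SA1_XTD2_XGM/DOOCS/MAIN', 'beamPosition.ixPos.value')
print("Trains with a beam-position reading:", ixPos_series.size)
print(ixPos_series.describe())  # min / max / mean across the run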
5hubh4m/CS231n
Assignment1/knn.ipynb
mit
# Run some setup code for this notebook. import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Some more magic so that the notebook will reload external python modules; # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 # Load the raw CIFAR-10 data. cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # As a sanity check, we print out the size of the training and test data. print 'Training data shape: ', X_train.shape print 'Training labels shape: ', y_train.shape print 'Test data shape: ', X_test.shape print 'Test labels shape: ', y_test.shape # Visualize some examples from the dataset. # We show a few examples of training images from each class. classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] num_classes = len(classes) samples_per_class = 7 for y, cls in enumerate(classes): idxs = np.flatnonzero(y_train == y) idxs = np.random.choice(idxs, samples_per_class, replace=False) for i, idx in enumerate(idxs): plt_idx = i * num_classes + y + 1 plt.subplot(samples_per_class, num_classes, plt_idx) plt.imshow(X_train[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls) plt.show() # Subsample the data for more efficient code execution in this exercise num_training = 5000 mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] num_test = 500 mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] # Reshape the image data into rows X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) print X_train.shape, X_test.shape from cs231n.classifiers import KNearestNeighbor # Create a kNN classifier instance. # Remember that training a kNN classifier is a noop: # the Classifier simply remembers the data and does no further processing classifier = KNearestNeighbor() classifier.train(X_train, y_train) """ Explanation: k-Nearest Neighbor (kNN) exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. The kNN classifier consists of two stages: During training, the classifier takes the training data and simply remembers it During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples The value of k is cross-validated In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code. End of explanation """ # Open cs231n/classifiers/k_nearest_neighbor.py and implement # compute_distances_two_loops. # Test your implementation: dists = classifier.compute_distances_two_loops(X_test) print dists.shape # We can visualize the distance matrix: each row is a single test example and # its distances to training examples plt.imshow(dists, interpolation='none') plt.show() """ Explanation: We would now like to classify the test data with the kNN classifier. 
Recall that we can break down this process into two steps: First we must compute the distances between all test examples and all train examples. Given these distances, for each test example we find the k nearest examples and have them vote for the label Lets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example. First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time. End of explanation """ # Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) """ Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.) What in the data is the cause behind the distinctly bright rows? What causes the columns? Your Answer: The intensity of a pixel is higher for images that are seperated by a large L2 Distance. For some images in the Test Set which are vastly different from any image in the Train Set, we observe bright rows. Likewise for images in Train Set that are not near any image in the Test Set, we see bright columns. End of explanation """ y_test_pred = classifier.predict_labels(dists, k=5) num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) """ Explanation: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5: End of explanation """ # Now lets speed up distance matrix computation by using partial vectorization # with one loop. Implement the function compute_distances_one_loop and run the # code below: dists_one = classifier.compute_distances_one_loop(X_test) # To ensure that our vectorized implementation is correct, we make sure that it # agrees with the naive implementation. There are many ways to decide whether # two matrices are similar; one of the simplest is the Frobenius norm. In case # you haven't seen it before, the Frobenius norm of two matrices is the square # root of the squared sum of differences of all elements; in other words, reshape # the matrices into vectors and compute the Euclidean distance between them. difference = np.linalg.norm(dists - dists_one, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Now implement the fully vectorized version inside compute_distances_no_loops # and run the code dists_two = classifier.compute_distances_no_loops(X_test) # check that the distance matrix agrees with the one we computed before: difference = np.linalg.norm(dists - dists_two, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! 
The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Let's compare how fast the implementations are def time_function(f, *args): """ Call a function f with args and return the time (in seconds) that it took to execute. """ import time tic = time.time() f(*args) toc = time.time() return toc - tic two_loop_time = time_function(classifier.compute_distances_two_loops, X_test) print 'Two loop version took %f seconds' % two_loop_time one_loop_time = time_function(classifier.compute_distances_one_loop, X_test) print 'One loop version took %f seconds' % one_loop_time no_loop_time = time_function(classifier.compute_distances_no_loops, X_test) print 'No loop version took %f seconds' % no_loop_time # you should see significantly faster performance with the fully vectorized implementation """ Explanation: You should expect to see a slightly better performance than with k = 1. End of explanation """ num_folds = 5 k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100] X_train_folds = np.array_split(X_train, num_folds) y_train_folds = np.array_split(y_train, num_folds) # A dictionary holding the accuracies for different values of k that we find # when running cross-validation. After running cross-validation, # k_to_accuracies[k] should be a list of length num_folds giving the different # accuracy values that we found when using that value of k. k_to_accuracies = {} for k in k_choices: k_to_accuracies[k] = [] for i in xrange(num_folds): X_cross_test = X_train_folds[i] y_cross_test = y_train_folds[i] idx = [j for j in xrange(num_folds) if j != i] X_cross_train = X_train_folds[0] y_cross_train = y_train_folds[0] num_cross_test = y_cross_test.shape[0] for j in xrange(1, len(idx)): X_cross_train = np.concatenate((X_cross_train, X_train_folds[idx[j]])) y_cross_train = np.concatenate((y_cross_train, y_train_folds[idx[j]])) classifier.train(X_cross_train, y_cross_train) dists = classifier.compute_distances_no_loops(X_cross_test) y_test_pred = classifier.predict_labels(dists, k=k) num_correct = np.sum(y_test_pred == y_cross_test) accuracy = float(num_correct) / num_cross_test k_to_accuracies[k].append(accuracy) # Print out the computed accuracies for k in sorted(k_to_accuracies): for accuracy in k_to_accuracies[k]: print 'k = %d, accuracy = %f' % (k, accuracy) # plot the raw observations for k in k_choices: accuracies = k_to_accuracies[k] plt.scatter([k] * len(accuracies), accuracies) # plot the trend line with error bars that correspond to standard deviation accuracies_mean = np.array( [np.mean(v) for k, v in sorted(k_to_accuracies.items())]) accuracies_std = np.array( [np.std(v) for k, v in sorted(k_to_accuracies.items())]) plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std) plt.title('Cross-validation on k') plt.xlabel('k') plt.ylabel('Cross-validation accuracy') plt.show() # Based on the cross-validation results above, choose the best value for k, # retrain the classifier using all the training data, and test it on the test # data. You should be able to get above 28% accuracy on the test data. best_k = 10 classifier = KNearestNeighbor() classifier.train(X_train, y_train) y_test_pred = classifier.predict(X_test, k=best_k) # Compute and display the accuracy num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) """ Explanation: Cross-validation We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. 
We will now determine the best value of this hyperparameter with cross-validation. End of explanation """
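# Follow-up sketch: instead of reading best_k off the plot, it can be picked
# programmatically from the cross-validation results computed above. This is only an
# illustration; it reuses the k_to_accuracies dictionary and should be consistent
# with the choice of best_k made above.
mean_accuracies = {k: np.mean(v) for k, v in k_to_accuracies.items()}
best_k_auto = max(mean_accuracies, key=mean_accuracies.get)
print 'Best k by mean cross-validation accuracy: %d' % best_k_auto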
gidden/gidden.github.io
presentations/pyam-iamc2017/pyam-iamc2017.ipynb
cc0-1.0
import pyam_analysis as iam data = '/home/gidden/work/iiasa/message/pyam-analysis/tutorial/tutorial_AR5_data.csv' df = iam.IamDataFrame(data=data) """ Explanation: Follow along at: mattgidden.com/presentations/pyam-iamc2017 Find us on github: github.com/IAMConsortium/pyam-analysis Diagnostics, analysis and visualization tools <br /> for Integrated Assessment timeseries data <img style="float: right; height: 100px; margin-top: 10px;" src="_static/IIASA_logo.png"> <img style="float: right; height: 80px;" src="_static/IAMC_logo.jpg"> First steps with the pyam_analysis package The pyam-analysis package provides a range of diagnostic tools and functions for analyzing and working with IAMC-style timeseries data. The package can be used with data that follows the data template convention of the Integrated Assessment Modeling Consortium (IAMC). An illustrative example is shown below; see data.ene.iiasa.ac.at/database for more information. | model | scenario | region | variable | unit | 2005 | 2010 | 2015 | |---------------------|---------------|------------|----------------|----------|----------|----------|----------| | MESSAGE V.4 | AMPERE3-Base | World | Primary Energy | EJ/y | 454.5 | 479.6 | ... | | ... | ... | ... | ... | ... | ... | ... | ... | This notebook illustrates some basic functionality of the pyam-analsysis package and the IamDataFrame class: Importing timeseries data from a csv file. Listing models, scenarios and variables included in the data. Display of timeseries data as dataframe and visualization using simple plotting functions. Evaluating the model data and executing a range of diagnostic checks to identify data outliers. Categorization of scenarios according to timeseries data. Tutorial data The timeseries data used in this tutorial is a partial snapshot of the scenario database compiled for the IPCC's Fifth Assessment Report (AR5): Krey V., O. Masera, G. Blanford, T. Bruckner, R. Cooke, K. Fisher-Vanden, H. Haberl, E. Hertwich, E. Kriegler, D. Mueller, S. Paltsev, L. Price, S. Schlömer, D. Ürge-Vorsatz, D. van Vuuren, and T. Zwickel, 2014: Annex II: Metrics & Methodology. In: Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Edenhofer, O., R. Pichs-Madruga, Y. Sokona, E. Farahani, S. Kadner, K. Seyboth, A. Adler, I. Baum, S. Brunner, P. Eickemeier, B. Kriemann, J. Savolainen, S. Schlömer, C. von Stechow, T. Zwickel and J.C. Minx (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. Link The complete database is publicly available at tntcat.iiasa.ac.at/AR5DB/. <img style="float: right; height: 100px;" src="_static/AMPERE-Logo.png"> <img style="float: right; height: 40px; margin-top: 50px; margin-right: 20px;" src="_static/EMF-Logo_v2.1.png"> The data snapshot used for this tutorial consists of selected data from two model intercomparison projects: - Energy Modeling Forum Round 27 (EMF27), see the Special Issue in Climatic Change 3-4, 2014. - EU FP7 project AMPERE, see the following scientific publications: Riahi, K., et al. (2015). "Locked into Copenhagen pledges — Implications of short-term emission targets for the cost and feasibility of long-term climate goals." Technological Forecasting and Social Change 90(Part A): 8-23. DOI: 10.1016/j.techfore.2013.09.016 Kriegler, E., et al. (2015). "Making or breaking climate targets: The AMPERE study on staged accession scenarios for climate policy." 
Technological Forecasting and Social Change 90(Part A): 24-44. DOI: 10.1016/j.techfore.2013.09.021 <div style="text-align: center; padding: 10px; border: 2px solid red; width: 700px"> *The data used in this tutorial is ONLY a partial snapshot of the IPCC AR5 scenario database!* *This tutorial is only intended for an illustration of the ``pyam-analysis`` package.* </div> Import package and load data from the AR5 tutorial csv snapshot file First, we import the snapshot timeseries data from the file tutorial_AR5_data.csv in the tutorial folder. As a first step, we show lists of all models, scenarios, regions, and variables (with units) included in the snapshot. End of explanation """ df.models() df.scenarios() df.regions() df.variables(include_units=True) """ Explanation: What's in our dataset? End of explanation """ df.scenarios({'model': 'MESSAGE'}) df.scenarios({'model': 'ESSAGE'}) """ Explanation: Filtering Data Most functions of the IamDataFrame class take an (optional) argument filters, i.e., a dictionary of filter criteria. Filtering by model names, scenarios and regions The feature for filtering by model, scenario or region is implemented using regular expressions (regex, re) and the re.match() function. This implies that the filtering is done from the beginning of the text string. Applying the filter 'model': 'MESSAGE' to the function scenarios() will return all MESSAGE V.4 scenarios included in the snapshot. Filtering for ESSAGE will return an empty set. End of explanation """ df.variables(filters={'variable': 'Emissions|*'}) df.variables(filters={'variable': 'Emissions|*', 'level': 2}) df.variables(filters={'level': 1}) """ Explanation: Filtering by variables and hierarchy levels Filtering for variable strings using regex is problematic due to the frequent use of the "|" character in the IAMC template to specify a hierarchical. Therefore, this package implements a pseudo-regex syntax, where | is escaped, * is used as a wildcard and exact matching at the end of the string is enforced. (in regex lingo, * is replaced by .* and $ is appended to the filter string). Filtering for Primary Energy will return only exactly those data. Filtering for Primary Energy|* will return all sub-categories of primary-energy level (and only the sub-categories). In additon, IAM variables can be filtered by the level, i.e., the "depth" of the variable in a hierarchical reading of the string separated by "|". That is, the variable Primary Energy has level 0, while Primary Energy|Coal has level 1. Filtering by both variables and level will search for the hierarchical depth following the <variable> string, so filter arguments _Primary Energy| and _level = 0 will return all variables immediately below Primary Energy. Filtering by level* only will return all variables up to that depth. To illustrate the functionality of the filters, we first show all sub-categories of the Emissions variable. Then, we reduce variables to only two hierarchical levels below "Emissions|"; the list returned by the function call will not include Emissions|CO2|Fossil Fuels and Industry|Energy Supply|Electricity, because this variable is three hierarchical levels below "Emissions|". The third example shows how to filter only by hierarchical level. The function returns all variables that are at the top hierarchical level (i.e., Primary Energy) and those at the first sub-category level. Keep in mind that there are no variables Emissions or Price (no top level). End of explanation """ df.models? 
""" Explanation: Filtering by year Filtering for years can be done by integer number, a list of integers, or the Python class range. Note that the last year of a range is not included, so range(2010,2015) is interpreted as [2010, 2011, 2012, 2013, 2014]. Getting help When in doubt, you can look at the help for any function by appending it with a ?. End of explanation """ df.timeseries(filters={ 'scenario': 'AMPERE3-450', 'variable': 'Primary Energy|Coal', 'region': 'World' }).head() df.pivot_table( index=['year'], columns=['scenario'], values='value', aggfunc='sum', filters={'variable': 'Primary Energy', 'region': 'World'} ).head() """ Explanation: Working with Timeseries As a next step, we want to view a selection of the data in the tutorial snapshot using the IAMC standard. The filtered data can exported as a csv file by appending .to_csv('selected_data.csv') to the next command. For displaying data in a different format, the class IamDataFrame has a wrapper of the pandas.DataFrame.pivot_table() function. It allows to flexibly specify the columns and rows. The function automatically aggregates by summation or counting (specified by the parameter aggfunc) over all timeseries data identifiers ('model', 'scenario', 'variable', 'region', 'unit', 'year') which are not used as index or columns. In the example below, the filter of the timeseries data is set for all subcategories of 'Primary Energy', which are then summed up in the displayed table. End of explanation """ df.data.head() """ Explanation: If you are familiar with the python package pandas, you can access the pd.DataFrame directly. End of explanation """ df.plot_lines({'variable': 'Emissions|CO2', 'region': 'World'}) """ Explanation: Plotting Timeseries As a next step, we want to visualize timeseries data. In the plot below, we show CO2 emissions over time for all scenarios provided in the tutorial snapshot data. End of explanation """ df.validate? df.validate('Primary Energy') df.validate({'Primary Energy': {'up': 515, 'year': 2010}}) df.validate( {'Primary Energy|Coal': {'up': 400, 'year': 2050}}, filters={'region': 'World'}, exclude=False ) """ Explanation: Validating and querying timeseries data When analyzing scenario results, it is often useful to check whether certain timeseries exist or the values are within a specific range. For example, it may make sense to ensure that reported data for historical periods are close to established reference data. The following section provides three illustrations: 1. Check whether a timeseries 'Primary Energy' exists in each scenario (in at least one year). 2. Check for every scenario whether the value for 'Primary Energy' at the global level exceeds 515 EJ/y in the reference year 2010 (the value must satisfy an upper bound of 515 EJ/y in this notation). 3. Check for every scenario whether the value for 'Primary Energy|Coal' exceeds 400 EJ/y in mid-century. The validate() function takes a filters dictionary to perform the checks on a selection of models/scenarios similar to the functions introduced above. The criteria argument can specify a valid range by an upper and lower bound (up, lo) for a variable and a subset of years to which the validation is applied - all scenarios with a value in at least one year outside that range are considered to not satisfy the validation. By setting the argument exclude=True, all scenarios failing the validation will be categorized as exclude. These scenarios will not be shown by default in any subsequent data tables or plots. 
End of explanation """ df.plot_lines({'variable': 'Temperature*'}) """ Explanation: Categorization of scenarios by timeseries characteristics It is often useful to apply categorization to classes of scenarios according to specific characteristics of the timeseries data. In the following example, we use the temperature change assessment by MAGICC 6 to group scenarios by the median global warming by the end of the century (year 2100). We proceed in the following steps: Plot the timeseries data of the variable that we want to use. This provides some insights on useful thresholds for the categorization. Use the function category() to apply a categorization (and colour code for later use) to all scenarios that satisfy a number of specific criteria. Use the categorization of scenarios for analysis of other timeseries data. End of explanation """ df.reset_category() df.category( 'Below 1.6C', {'Temperature|Global Mean|MAGICC6|MED': {'up': 1.6, 'year': 2100}}, color='cornflowerblue', display='list' ) df.category( 'Below 2.0C', {'Temperature|Global Mean|MAGICC6|MED': {'up': 2.0, 'year': 2100}}, filters={'category': 'uncategorized'}, color='forestgreen' ) df.category( 'Below 2.5C', {'Temperature|Global Mean|MAGICC6|MED': {'up': 2.5, 'year': 2100}}, filters={'category': 'uncategorized'}, color='gold' ) df.category( 'Below 3.5C', {'Temperature|Global Mean|MAGICC6|MED': {'up': 3.5, 'year': 2100}}, filters={'category': 'uncategorized'}, color='firebrick' ) df.category( 'Above 3.5C', {'Temperature|Global Mean|MAGICC6|MED': {}}, filters={'category': 'uncategorized'}, color='magenta' ) """ Explanation: We now use the categorization feature of the pyam-analysis package. By default, each model/scenario is assigned as "uncategorized". The next function resets all scenarios back to "uncategorized". This may be helpful in this tutorial if you are going back and forth between cells. End of explanation """ df.category('uncategorized', display='list') """ Explanation: Two models included in the snapshot have not been assessed by MAGICC6 regarding their long-term climate and warming impact. Therefore, the timeseries 'Temperature|Global Mean|MAGICC6|MED' does not exist, and they have not been categorized. Below, we display all scenarios that are uncategorized at this point. End of explanation """ df.plot_lines({'variable': 'Temperature*'}, color_by_cat=True) """ Explanation: Now, we again display the median global temperature increase for all scenarios, but we use the colouring by category to illustrate the common charateristics across scenarios. End of explanation """ df.plot_lines( {'variable': 'Emissions|CO2', 'region': 'World'}, color_by_cat=True ) """ Explanation: As a last step, we display the aggregate CO2 emissions by category. This allows to highlight alternative pathways within the same category. End of explanation """
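# Follow-up sketch: export the timeseries data of one temperature category to csv.
# This assumes that the 'category' filter is accepted by timeseries() in the same way
# it is used by the other functions above; the output filename is illustrative.
df.timeseries(filters={'category': 'Below 2.0C',
                       'variable': 'Emissions|CO2',
                       'region': 'World'}).to_csv('below_2C_co2_emissions.csv')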
BrownDwarf/ApJdataFrames
notebooks/Scholz2009.ipynb
mit
%pylab inline import seaborn as sns sns.set_context("notebook", font_scale=1.5) #import warnings #warnings.filterwarnings("ignore") import pandas as pd """ Explanation: ApJdataFrames Scholz2009 Title: SUBSTELLAR OBJECTS IN NEARBY YOUNG CLUSTERS (SONYC): THE BOTTOM OF THE INITIAL MASS FUNCTION IN NGC 1333 Authors: Alexander Scholz, Vincent Geers, Ray Jayawardhana, Laura Fissel, Eve Lee, David Lafrenière, and Motohide Tamura Data is from this paper: http://iopscience.iop.org/0004-637X/702/1/805/ End of explanation """ tbl1 = pd.read_clipboard(sep='\t', skiprows=3, skipfooter=6, engine='python', usecols=range(11)) tbl1 ! mkdir ../data/Scholz2009 tbl1.to_csv("../data/Scholz2009/tbl1.csv", index=False) """ Explanation: Table 1 - Probable Substellar Members of NGC 1333 ! curl http://iopscience.iop.org/0004-637X/702/1/805/suppdata/apj317606t1_ascii.txt Internet is not working at the moment, wtf. End of explanation """
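# Quick sanity check (illustrative): read the csv we just wrote back in and confirm
# that the number of rows matches the table above.
tbl1_check = pd.read_csv("../data/Scholz2009/tbl1.csv")
print(len(tbl1_check) == len(tbl1))
tbl1_check.head()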
JamesSample/icpw
check_core_icpw.ipynb
mit
# Connect to db
eng = nivapy.da.connect()
"""
Explanation: Explore "core" ICPW data
Prior to updating the "core" ICPW datasets in RESA, I need to get an overview of what's already in the database and what isn't.
End of explanation
"""
# Query projects
prj_grid = nivapy.da.select_resa_projects(eng)
prj_grid
prj_df = prj_grid.get_selected_df()
print(len(prj_df))
prj_df
"""
Explanation: 1. Query ICPW projects
There are 18 projects (one for each country) currently in RESA. We also have data for some countries that do not yet have a project defined (e.g. the Netherlands).
End of explanation
"""
# Get stations
stn_df = nivapy.da.select_resa_project_stations(prj_df, eng)
print(len(stn_df))
stn_df.head()
# Map
nivapy.spatial.quickmap(stn_df, popup='station_code')
"""
Explanation: 2. Get station list
There are 262 stations currently associated with the projects in RESA.
End of explanation
"""
# Select parameters
par_grid = nivapy.da.select_resa_station_parameters(stn_df, '1970-01-01', '2019-01-01', eng)
par_grid
# Get selected pars
par_df = par_grid.get_selected_df()
par_df
"""
Explanation: 3. Get parameters
Get a list of parameters available at these stations. I assume that all data submissions to ICPW will report pH, so extracting pH data should be a good way to get an indication of which stations actually have data.
End of explanation
"""
# Get data
wc_df, dup_df = nivapy.da.select_resa_water_chemistry(stn_df, par_df,
                                                      '1970-01-01', '2019-01-01',
                                                      eng, lod_flags=False,
                                                      drop_dups=True)
wc_df.head()
# How many stations have pH data
len(wc_df['station_code'].unique())
# Which stations do not have pH data?
all_stns = set(stn_df['station_code'].unique())
no_ph = list(all_stns - set(wc_df['station_code'].unique()))
no_ph_stns = stn_df.query('station_code in @no_ph').reset_index()
print(len(no_ph_stns))
no_ph_stns
# What data do these stations have?
par_grid2 = nivapy.da.select_resa_station_parameters(no_ph_stns, '1970-01-01', '2019-01-01', eng)
par_grid2
"""
Explanation: 4. Get chemistry data
End of explanation
"""
# Most recent data
for idx, row in prj_df.iterrows():
    # Get stations
    cnt_stns = nivapy.da.select_resa_project_stations([row['project_id'],], eng)
    # Get pH data
    wc, dups = nivapy.da.select_resa_water_chemistry(cnt_stns, [1,],  # pH
                                                     '1970-01-01', '2019-01-01',
                                                     eng, lod_flags=False,
                                                     drop_dups=True)
    # Print results
    print(row['project_name'], '\t', len(cnt_stns), '\t', wc['sample_date'].max())
"""
Explanation: So, there are 262 stations within the "core" ICPW projects, but 24 of these have no data whatsoever associated with them (listed above).
5. Date for last sample by country
The code below gets the most recent pH sample in the database for each country.
End of explanation
"""
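# Illustrative variant of the loop above: collect the per-country results into a
# dataframe instead of printing them, which makes the summary easier to sort and
# export. Only the nivapy calls already used above plus pandas are assumed.
import pandas as pd

summary = []
for idx, row in prj_df.iterrows():
    cnt_stns = nivapy.da.select_resa_project_stations([row['project_id'],], eng)
    wc, dups = nivapy.da.select_resa_water_chemistry(cnt_stns, [1,],  # pH
                                                     '1970-01-01', '2019-01-01',
                                                     eng, lod_flags=False,
                                                     drop_dups=True)
    summary.append({'project': row['project_name'],
                    'n_stations': len(cnt_stns),
                    'last_ph_sample': wc['sample_date'].max()})

summary_df = pd.DataFrame(summary)
summary_df.sort_values('last_ph_sample')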
apryor6/apryor6.github.io
visualizations/seaborn/notebooks/barplot.ipynb
mit
%matplotlib inline import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np plt.rcParams['figure.figsize'] = (20.0, 10.0) plt.rcParams['font.family'] = "serif" df = pd.read_csv('../../../datasets/movie_metadata.csv') df.head() """ Explanation: seaborn.barplot Bar graphs are useful for displaying relationships between categorical data and at least one numerical variable. seaborn.countplot is a barplot where the dependent variable is the number of instances of each instance of the independent variable. dataset: IMDB 5000 Movie Dataset End of explanation """ # split each movie's genre list, then form a set from the unwrapped list of all genres categories = set([s for genre_list in df.genres.unique() for s in genre_list.split("|")]) # one-hot encode each movie's classification for cat in categories: df[cat] = df.genres.transform(lambda s: int(cat in s)) # drop other columns df = df[['director_name','genres','duration'] + list(categories)] df.head() # convert from wide to long format and remove null classificaitons df = pd.melt(df, id_vars=['duration'], value_vars = list(categories), var_name = 'Category', value_name = 'Count') df = df.loc[df.Count>0] top_categories = df.groupby('Category').aggregate(sum).sort_values('Count', ascending=False).index howmany=10 # add an indicator whether a movie is short or long, split at 100 minutes runtime df['islong'] = df.duration.transform(lambda x: int(x > 100)) df = df.loc[df.Category.isin(top_categories[:howmany])] # sort in descending order #df = df.loc[df.groupby('Category').transform(sum).sort_values('Count', ascending=False).index] df.head() """ Explanation: For the bar plot, let's look at the number of movies in each category, allowing each movie to be counted more than once. 
End of explanation """ p = sns.countplot(data=df, x = 'Category') """ Explanation: Basic plot End of explanation """ p = sns.countplot(data=df, x = 'Category', hue = 'islong') """ Explanation: color by a category End of explanation """ p = sns.countplot(data=df, y = 'Category', hue = 'islong') """ Explanation: make plot horizontal End of explanation """ p = sns.countplot(data=df, y = 'Category', hue = 'islong', saturation=1) """ Explanation: Saturation End of explanation """ import matplotlib.pyplot as plt fig, ax = plt.subplots(2) sns.countplot(data=df, y = 'Category', hue = 'islong', saturation=1, ax=ax[1]) """ Explanation: Targeting a non-default axes End of explanation """ import numpy as np num_categories = df.Category.unique().size p = sns.countplot(data=df, y = 'Category', hue = 'islong', saturation=1, xerr=7*np.arange(num_categories)) """ Explanation: Add error bars End of explanation """ import numpy as np num_categories = df.Category.unique().size p = sns.countplot(data=df, y = 'Category', hue = 'islong', saturation=1, xerr=7*np.arange(num_categories), edgecolor=(0,0,0), linewidth=2) """ Explanation: add black bounding lines End of explanation """ import numpy as np num_categories = df.Category.unique().size p = sns.countplot(data=df, y = 'Category', hue = 'islong', saturation=1, xerr=7*np.arange(num_categories), edgecolor=(0,0,0), linewidth=2, fill=False) import numpy as np num_categories = df.Category.unique().size p = sns.countplot(data=df, y = 'Category', hue = 'islong', saturation=1, xerr=7*np.arange(num_categories), edgecolor=(0,0,0), linewidth=2) sns.set(font_scale=1.25) num_categories = df.Category.unique().size p = sns.countplot(data=df, y = 'Category', hue = 'islong', saturation=1, xerr=3*np.arange(num_categories), edgecolor=(0,0,0), linewidth=2) plt.rcParams['font.family'] = "cursive" #sns.set(style="white",font_scale=1.25) num_categories = df.Category.unique().size p = sns.countplot(data=df, y = 'Category', hue = 'islong', saturation=1, xerr=3*np.arange(num_categories), edgecolor=(0,0,0), linewidth=2) plt.rcParams['font.family'] = 'Times New Roman' #sns.set_style({'font.family': 'Helvetica'}) sns.set(style="white",font_scale=1.25) num_categories = df.Category.unique().size p = sns.countplot(data=df, y = 'Category', hue = 'islong', saturation=1, xerr=3*np.arange(num_categories), edgecolor=(0,0,0), linewidth=2) bg_color = 'white' sns.set(rc={"font.style":"normal", "axes.facecolor":bg_color, "figure.facecolor":bg_color, "text.color":"black", "xtick.color":"black", "ytick.color":"black", "axes.labelcolor":"black", "axes.grid":False, 'axes.labelsize':30, 'figure.figsize':(20.0, 10.0), 'xtick.labelsize':25, 'font.size':20, 'ytick.labelsize':20}) #sns.set_style({'font.family': 'Helvetica'}) #sns.set(style="white",font_scale=1.25) num_categories = df.Category.unique().size p = sns.countplot(data=df, y = 'Category', hue = 'islong', saturation=1, xerr=3*np.arange(num_categories), edgecolor=(0,0,0), linewidth=2) leg = p.get_legend() leg.set_title("") labs = leg.texts labs[0].set_text("Short") labs[0].set_fontsize(25) labs[0].set_size(30) labs[1].set_text("Long") leg.get_title().set_color('black') p.axes.xaxis.label.set_text("Counts") plt.text(900,2, "Bar Plot", fontsize = 95, color='Black', fontstyle='italic') p.get_figure().savefig('../../figures/barplot.png') """ Explanation: Remove color fill End of explanation """
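# One more illustrative variant: order the bars by frequency using countplot's
# `order` argument so that the most common genres appear first. The ordering index
# comes from a simple pandas value_counts on the long-format dataframe built above.
p = sns.countplot(data=df, y='Category', hue='islong',
                  order=df['Category'].value_counts().index)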
JonasHarnau/apc
apc/vignettes/vignette_ln_vs_odp.ipynb
gpl-3.0
import apc # Turn off FutureWarnings import warnings warnings.simplefilter('ignore', FutureWarning) """ Explanation: Log-Normal or Over-Dispersed Poisson? We replicate the empirical applications in Harnau (2018a) in Section 2 and Section 6. The work on this vignette was supported by the European Research Council, grant AdG 694262. First, we import the package End of explanation """ for family in ('log_normal_response', 'od_poisson_response'): model_VNJ = apc.Model() model_VNJ.data_from_df(apc.loss_VNJ(), data_format='CL') model_VNJ.fit(family, 'AC') sub_models_VNJ = [model_VNJ.sub_model(coh_from_to=(1,5)), model_VNJ.sub_model(coh_from_to=(6,10))] bartlett_VNJ = apc.bartlett_test(sub_models_VNJ) f_VNJ = apc.f_test(model_VNJ, sub_models_VNJ) print(family) print('='*len(family)) print('Bartlett test p-value: {:.2f}'.format( bartlett_VNJ['p_value'])) print('F-test p-value: {:.2f} \n'.format( f_VNJ['p_value'])) """ Explanation: 2. Empirical illustration of the problem This section motivates the problem. Based on the Verrall et al. (2010) it applies the misspecification tests from Harnau (2018b). We split the data into two sub-samples after the fifth accident year. Then we test for breaks in dispersion parameters with a Bartlett test and linear predictors with an F-test. Remark: we replicated the empirical applications in Harnau (2018b) here. End of explanation """ r_VNJ = apc.r_test(apc.loss_VNJ(), # specify the data set family_null='gen_log_normal_response', # declare null model predictor='AC', # AC = age-cohort matching the chain-ladder R_stat='wls_ls', # R-stat: wls_ls -> R^{star}_{ls} R_dist='wls_ls') # Pi est in limiting dist: wls_ls -> Pi^{star}_{ls} print('R-statistic: {:.2f}'.format(r_VNJ['R_stat'])) print('p_value: {:.4f}'.format(r_VNJ['p_value'])) """ Explanation: Neither in a log-normal nor in an over-dispersed Poisson model can we convincingly reject the model specification based on these tests. This illustrates a situation in which it is not clear what model to use so that the new R-test can prove its usefulness. 6.1 Empirical illustration revisited In this section, we return to the empirical illustration from above and test whether an R-test can help us to decide between (generalized) log-normal and over-dispersed Poisson model. The package comes with built-in functionality for the R-test. Say we want to test $$ H_0: \text{generalized log-normal} \quad \text{vs} \quad \text{over-dispersed Poisson} $$ based on the statistic $R^_{ls}$ and compare it to $\widehat{\mathrm{R}}^_{ls}$. 
End of explanation """ import pandas as pd def r_test_all_combs(model): # create an empty series to be filled with R-statistics R_stats = pd.Series(None, index=('$R_{ls}$', '$R_{ql}$', '$R^*_{ls}$', '$R^*_{ql}$')) # create and empty df to be filled with p-values base_df = pd.DataFrame( None, index = ('$\widehat{\mathrm{R}}_{ls}$', '$\widehat{\mathrm{R}}_{ql}$', '$\widehat{\mathrm{R}}^*_{ls}$', '$\widehat{\mathrm{R}}^*_{ql}$'), columns = pd.MultiIndex.from_product( [ ('$H_0: $ generalized log-normal', '$H_0: $ over-dispersed Poisson'), ('$R_{ls}$', '$R_{ql}$', '$R^*_{ls}$', '$R^*_{ql}$') ]) ) # iterate over ways to compute the R-statistic for i, R_stat in enumerate(['ls', 'ql', 'wls_ls', 'wls_ql']): # iterate over ways to estimate Pi in the limiting dist for j, R_dist in enumerate(['ls', 'ql', 'wls_ls', 'wls_ql']): # compute R-test r_test = apc.r_test(apc.loss_VNJ(), family_null='gen_log_normal_response', predictor='AC', R_stat=R_stat, R_dist=R_dist, data_format='CL') base_df.iloc[j, i] = r_test['p_value'] base_df.iloc[j, i+4] = 1 - r_test['power_at_R'] R_stats.iloc[i] = r_test['R_stat'] return base_df, R_stats table3, R_stats = r_test_all_combs(model_VNJ) """ Explanation: This matches the value for the statistic in the paper and the p-value in Table 3 (which is given in %). Remark: besides the value of the test statistic and the p-value under the null, apc.r_test also return the power at the value oder the R-statistic (power_at_R). The power corresponds to one minus the p-value under the alternative. To replicate the remaining test statistics and the entire Table 3 we employ a small function that iterates over all possible combinations. End of explanation """ pd.DataFrame(R_stats.rename('R-Statistic')).T """ Explanation: The R-statistics are as follows. End of explanation """ table3*100 """ Explanation: And Table 3 is given by this: End of explanation """ from quad_form_ratio import saddlepoint_cdf_R, saddlepoint_inv_cdf_R import numpy as np model_VNJ.fit('log_normal_response', 'AC') X, Z = model_VNJ.design, np.log(model_VNJ.data_vector['response']) tau_ls = model_VNJ.fitted_values.sum() sqrt_Pi_ls = np.diag(np.sqrt(model_VNJ.fitted_values/tau_ls)) rss = model_VNJ.rss X_star_ls, Z_star_ls = sqrt_Pi_ls.dot(X), sqrt_Pi_ls.dot(Z) # fit the weighted least squares model, we set rcond=0. since # we know that X_star has full column rank. wls_ls_fit = np.linalg.lstsq(X_star_ls, Z_star_ls, rcond=0.) 
xi_star_ls, RSS_star_ls = wls_ls_fit[0], wls_ls_fit[1][0] fitted_wls_ls = np.exp(X.dot(xi_star_ls)) sqrt_Pi_star_ls = np.diag(np.sqrt(fitted_wls_ls/fitted_wls_ls.sum())) # Use the QR-decomposition to compute the orthogonal projection M Q, _ = np.linalg.qr(X) M = np.identity(model_VNJ.n) - Q.dot(Q.T) # do the same for the weighted least squares orthogonal projection X_star_ls = sqrt_Pi_star_ls.dot(X) Q_star_ls, _ = np.linalg.qr(X_star_ls) M_star_ls = np.identity(model_VNJ.n) - Q_star_ls.dot(Q_star_ls.T) # A refers to the sandwiched matrix in the numerator # B refers to the sandwiched matrix in the denominator # _gln and _odp refer to the sandwiches under the respective nulls A_gln = M B_gln = sqrt_Pi_star_ls.dot(M_star_ls).dot(sqrt_Pi_star_ls) A_odp = np.linalg.inv(sqrt_Pi_star_ls).dot(M).dot(np.linalg.inv(sqrt_Pi_star_ls)) B_odp = M_star_ls # We compute the 5% critical value under ODP (lower quantile) # The function iterates to find the critical value up to a precision of 0.0001 cv = saddlepoint_inv_cdf_R(A_odp, B_odp, probabilities=[0.05]) print('5% critical value for over-dispersed Poisson: {:.1f}'.format(cv[0.05])) # Given the critical value, we compute the power pwr_at_cv5 = saddlepoint_cdf_R(A_gln, B_gln, cv) print('Power at 5% critical value: {:.2f}'.format(pwr_at_cv5.iloc[0])) """ Explanation: In the paper we now move on to find the 5% critical value under the over-dispersed Poisson model as well as the power at that value. This functionality is not directly implemented in the package; however we can easily replicate it with the package quad_form_ratio. End of explanation """ r_BZ_GLNe = apc.r_test( apc.loss_BZ(), family_null='gen_log_normal_response', predictor='APC', # APC = age-period-cohort, incl. calendar effect data_format='CL' # optional, the package can infer the data_format ) # the default for R_stat and R_dist are our preferrred 'wls_ls' print('R-statistic: {:.2f}'.format(r_BZ_GLNe['R_stat'])) print('p_value: {:.2f}'.format(r_BZ_GLNe['p_value'])) """ Explanation: 6.2 Sensitivity to invalid model reductions In this section, we use the data from Barnett and Zehnwirth (2000, Table 3.5). These data are known to require a calendar effect for modeling. We show that the test results may be misleading when the baseline model is already misspecified. $H_0:$ Generalized log-normal First, we test in a model with calendar effect so the linear predictor is $\mu_{ij} = \alpha_i + \beta_j + \gamma_k + \delta$. The first hypothesis we consider is $$ H_0: \text{extended generalized log-normal} \quad \text{vs} \quad H_A: \text{extended over-dispersed Poisson}.$$ This is easily tested with an $R$-test. End of explanation """ model_BZ = apc.Model() model_BZ.data_from_df(apc.loss_BZ(), data_format='CL') model_BZ.fit_table('log_normal_response', attach_to_self=False).loc[ ['AC'],: ] """ Explanation: Thus, we reject the extended generalized log-normal model. Despite the rejection of the model, we move on to test whether we can drop the calendar effect: $$ H_0: \text{generalized log-normal} \quad \text{vs} \quad H_A: \text{extended generalized log-normal}.$$ We can do this with a simple $F$-test, both in a log-normal and a generalized log-normal model (Kuang and Nielsen, 2018). 
End of explanation """ r_BZ_GLN = apc.r_test( apc.loss_BZ(), family_null='gen_log_normal_response', predictor='AC', data_format='CL' ) print('R-statistic: {:.2f}'.format(r_BZ_GLN['R_stat'])) print('p_value: {:.2f}'.format(r_BZ_GLN['p_value'])) """ Explanation: As expected for the data at hand, we reject this reduction; the calendar effect is needed. For illustrative purposes, we nonetheless move on to test $$ H_0: \text{generalized log-normal} \quad \text{vs} \quad H_A: \text{over-dispersed Poisson}, $$ thus a scenario in which neither model has a calendar effect. End of explanation """ r_BZ_ODPe = apc.r_test( apc.loss_BZ(), family_null='od_poisson_response', data_format='CL' ) # the default for predictor is APC, thus includes a calendar effect print('R-statistic: {:.2f}'.format(r_BZ_ODPe['R_stat'])) print('p_value: {:.2f}'.format(r_BZ_ODPe['p_value'])) print('Power at R: {:.2f}'.format(r_BZ_ODPe['power_at_R'])) """ Explanation: Perhaps surprisingly, the generalized log-normal model looks better now - we cannot convincingly reject it against the over-dispersed Poisson model. $H_0:$ Over-dispersed Poisson Now we start the other way around and take the over-dispersed Poisson model as a baseline. First, we again include a calendar effect and test $$ H_0: \text{extended over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{extended generalized log-normal}.$$ End of explanation """ model_BZ = apc.Model() model_BZ.data_from_df(apc.loss_BZ(), data_format='CL') model_BZ.fit_table('od_poisson_response', attach_to_self=False).loc[ ['AC'],: ] """ Explanation: In this case, we cannot reject the over-dispersed Poisson model. The power at the value of $R^*_{ls}$ is $0.98$; this corresponds to one minus the p-value under the extended generalized log-normal null hypothesis. Just as before, we can test whether we can reasonably drop the calendar effect, just now from the extended over-dispersed Poisson model: $$ H_0: \text{over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{extended over-dispersed Poisson}.$$ This is easily done with an $F$-test (Harnau and Nielsen 2017). End of explanation """ r_BZ_ODP = apc.r_test( apc.loss_BZ(), family_null='od_poisson_response', predictor='AC', data_format='CL' ) print('R-statistic: {:.2f}'.format(r_BZ_ODP['R_stat'])) print('p_value: {:.2f}'.format(r_BZ_ODP['p_value'])) """ Explanation: One again, this reduction is rejected. Neglecting this result, we investigate what happens if we test the models without calendar effect in $$ H_0: \text{over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{generalized log-normal}.$$ End of explanation """ r_TA_GLNe = apc.r_test( apc.loss_TA(), family_null='gen_log_normal_response', predictor='APC', data_format='CL' ) print('R-statistic: {:.2f}'.format(r_TA_GLNe['R_stat'])) print('p_value: {:.4f}'.format(r_TA_GLNe['p_value'])) """ Explanation: This time, we reject the over-dispersed Poisson model. Thus, the results completely flipped by dropping the calendar effect. With calendar effect, we cannot reject the over-dispersed Poisson model but can reject the generalized log-normal model. By dropping the much needed calendar effect, we turn this on its head and reject the over-dispersed Poisson but not the generalized log-normal model. Thus, we should be careful what we use as a baseline model before testing. 6.3 A general to specific testing procedure Taking into account the insights from above, we now consider a general to specific testing procedure. 
That is, we start with "the most general" model and test for possible reductions, stopping once we run into a rejection. For this application, we consider the data by Taylor and Ashe (1983), which has become something of a benchmark data set.
First, we consider an extended generalized log-normal model and test it against its over-dispersed Poisson counterpart:
$$ H_0: \text{extended generalized log-normal} \quad \text{vs} \quad H_A: \text{extended over-dispersed Poisson}.$$
End of explanation
"""
r_TA_GLNe = apc.r_test(
    apc.loss_TA(),
    family_null='gen_log_normal_response',
    predictor='APC',
    data_format='CL'
)

print('R-statistic: {:.2f}'.format(r_TA_GLNe['R_stat']))
print('p_value: {:.4f}'.format(r_TA_GLNe['p_value']))
"""
Explanation: The $R$-test rejects the extended generalized log-normal model. Thus, we do not proceed with this model. Instead, we now consider the reverse test:
$$ H_0: \text{extended over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{extended generalized log-normal}. $$
End of explanation
"""
r_TA_ODPe = apc.r_test(
    apc.loss_TA(),
    family_null='od_poisson_response',
    predictor='APC',
    data_format='CL'
)

print('R-statistic: {:.2f}'.format(r_TA_ODPe['R_stat']))
print('p_value: {:.4f}'.format(r_TA_ODPe['p_value']))
print('Power at R: {:.2f}'.format(r_TA_ODPe['power_at_R']))
"""
Explanation: We cannot reject the hypothesis. Thus, we go on to hunt for further evidence against the extended over-dispersed Poisson model. We consider the misspecification tests from Harnau (2018b). In the extended over-dispersed Poisson model with calendar effect, we split the data into four sub-samples and then test
$$ H_{\sigma^2}: \sigma^2_\ell = \sigma^2 $$
with a Bartlett test.
End of explanation
"""
model_TAe = apc.Model()
model_TAe.data_from_df(apc.loss_TA(), data_format='CL')
model_TAe.fit('od_poisson_response', 'APC')

sub_models_TAe = [model_TAe.sub_model(per_from_to=(1,5)),
                  model_TAe.sub_model(
                      coh_from_to=(1,5), age_from_to=(1,5), per_from_to=(6,10)
                  ),
                  model_TAe.sub_model(age_from_to=(6,10)),
                  model_TAe.sub_model(coh_from_to=(6,10))]

bartlett_TA_ODPe = apc.bartlett_test(sub_models_TAe)
print('Bartlett test p-value: {:.2f}'.format(bartlett_TA_ODPe['p_value']))
"""
Explanation: This is a somewhat close call. In light of the fact that simpler models tend to perform better in forecasting, we interpret the test result as the absence of strong evidence against the hypothesis.
Then, taking $H_{\sigma^2}$ as given, we can move on to test for breaks in linear predictors across sub-samples
$$ H_{\mu, \sigma^2}: \alpha_{i, \ell} + \beta_{j, \ell} + \gamma_{k, \ell} + \delta_\ell = \alpha_i + \beta_j + \gamma_k + \delta. $$
We do so using an $F$-test.
End of explanation
"""
f_TA_ODPe = apc.f_test(model_TAe, sub_models_TAe)
print('F-test p-value: {:.2f} \n'.format(f_TA_ODPe['p_value']))
"""
Explanation: Similar to before, we take the result of the $F$-test as a lack of convincing evidence against the model. Thus, we now consider whether we can reduce the model by dropping the calendar effect by means of an $F$-test for
$$ H_0: \text{over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{extended over-dispersed Poisson}. $$
That is, we consider a reduction from $\mu_{ij} = \alpha_i + \beta_j + \gamma_k + \delta$ to $\mu_{ij} = \alpha_i + \beta_j + \delta$.
End of explanation
"""
model_TAe.fit_table(attach_to_self=False).loc[['AC'],:]
"""
Explanation: With a p-value of $0.30$, we cannot reject this reduction.
In the model without calendar effect, point forecasts match the chain-ladder technique forecasts. We can now consider whether the model without calendar effect still survives the same tests it did before. First, we test it against a generalized log-normal model: $$ H_0: \text{over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{generalized log-normal}. $$ End of explanation """ model_TA = apc.Model() model_TA.data_from_df(apc.loss_TA(), data_format='CL') model_TA.fit('od_poisson_response', 'AC') sub_models_TA = [model_TA.sub_model(per_from_to=(1,5)), model_TA.sub_model( coh_from_to=(1,5), age_from_to=(1,5), per_from_to=(6,10) ), model_TA.sub_model(age_from_to=(6,10)), model_TA.sub_model(coh_from_to=(6,10))] bartlett_TA_ODP = apc.bartlett_test(sub_models_TA) f_TA_ODP = apc.f_test(model_TA, sub_models_TA) print('Bartlett test p-value: {:.2f}'.format(bartlett_TA_ODP['p_value'])) print('F-test p-value: {:.2f} \n'.format(f_TA_ODP['p_value'])) """ Explanation: The model passes this test easily. Now we can repeat the misspecification tests, testing $$ H_{\sigma^2}: \sigma^2_\ell = \sigma^2 $$ with a Bartlett test and $$ H_{\mu, \sigma^2}: \alpha_{i, \ell} + \beta_{j, \ell} + \delta_\ell = \alpha_i + \beta_j + \delta $$ with an $F$-test. End of explanation """
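# Summary sketch: gather the p-values of the tests run on the reduced (age-cohort)
# over-dispersed Poisson model into one small table for reporting. Uses only objects
# computed above and pandas, which is already imported in this vignette.
summary_TA = pd.Series({'R-test (ODP vs GLN)': r_TA_ODP['p_value'],
                        'Bartlett test': bartlett_TA_ODP['p_value'],
                        'F-test': f_TA_ODP['p_value']},
                       name='p-value')
summary_TA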
vinhqdang/my_mooc
coursera/machine_learning_specialization/1_foundation/Document retrieval.ipynb
mit
import graphlab """ Explanation: Document retrieval from wikipedia data Fire up GraphLab Create End of explanation """ people = graphlab.SFrame('people_wiki.gl/') """ Explanation: Load some text data - from wikipedia, pages on people End of explanation """ people.head() len(people) """ Explanation: Data contains: link to wikipedia article, name of person, text of article. End of explanation """ obama = people[people['name'] == 'Barack Obama'] obama obama['text'] """ Explanation: Explore the dataset and checkout the text it contains Exploring the entry for president Obama End of explanation """ clooney = people[people['name'] == 'George Clooney'] clooney['text'] """ Explanation: Exploring the entry for actor George Clooney End of explanation """ obama['word_count'] = graphlab.text_analytics.count_words(obama['text']) print obama['word_count'] """ Explanation: Get the word counts for Obama article End of explanation """ obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count']) """ Explanation: Sort the word counts for the Obama article Turning dictonary of word counts into a table End of explanation """ obama_word_count_table.head() obama_word_count_table.sort('count',ascending=False) """ Explanation: Sorting the word counts to show most common words at the top End of explanation """ people['word_count'] = graphlab.text_analytics.count_words(people['text']) people.head() tfidf = graphlab.text_analytics.tf_idf(people['word_count']) tfidf people['tfidf'] = tfidf['docs'] """ Explanation: Most common words include uninformative words like "the", "in", "and",... Compute TF-IDF for the corpus To give more weight to informative words, we weigh them by their TF-IDF scores. End of explanation """ obama = people[people['name'] == 'Barack Obama'] obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False) """ Explanation: Examine the TF-IDF for the Obama article End of explanation """ clinton = people[people['name'] == 'Bill Clinton'] beckham = people[people['name'] == 'David Beckham'] """ Explanation: Words with highest TF-IDF are much more informative. Manually compute distances between a few people Let's manually compare the distances between the articles for a few famous people. End of explanation """ graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0]) graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0]) """ Explanation: Is Obama closer to Clinton than to Beckham? We will use cosine distance, which is given by (1-cosine_similarity) and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham. End of explanation """ knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name') """ Explanation: Build a nearest neighbor model for document retrieval We now create a nearest-neighbors model and apply it to document retrieval. End of explanation """ knn_model.query(obama) """ Explanation: Applying the nearest-neighbors model for retrieval Who is closest to Obama? End of explanation """ swift = people[people['name'] == 'Taylor Swift'] knn_model.query(swift) jolie = people[people['name'] == 'Angelina Jolie'] knn_model.query(jolie) arnold = people[people['name'] == 'Arnold Schwarzenegger'] knn_model.query(arnold) """ Explanation: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians. 
Other examples of document retrieval End of explanation """
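# Small convenience wrapper (illustrative): look up the nearest articles for any
# person by name, reusing the people SFrame and knn_model built above. The name
# passed in is only an example and must exist in the dataset.
def closest_articles(name, k=5):
    person = people[people['name'] == name]
    return knn_model.query(person, k=k)

closest_articles('Elton John')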
JackDi/phys202-2015-work
assignments/assignment04/MatplotlibEx01.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np """ Explanation: Matplotlib Exercise 1 Imports End of explanation """ import os assert os.path.isfile('yearssn.dat') """ Explanation: Line plot of sunspot data Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook. End of explanation """ data=np.loadtxt('yearssn.dat') ssc=data[:,1] year=data[:,0] assert len(year)==315 assert year.dtype==np.dtype(float) assert len(ssc)==315 assert ssc.dtype==np.dtype(float) """ Explanation: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts. End of explanation """ f=plt.figure(figsize=(25,4)) plt.plot(year,ssc) plt.title("Sun Spots Seen Per Year Since 1700") plt.xlabel("Year") plt.ylabel("Number of Sun Spots Seen") plt.xlim(1700,2015) plt.ylim(0,180) assert True # leave for grading """ Explanation: Make a line plot showing the sunspot count as a function of year. Customize your plot to follow Tufte's principles of visualizations. Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data. End of explanation """ # YOUR CODE HERE f=plt.figure(figsize=(25,4)) seventeen=data[:100,:] eighteen=data[100:200,:] nineteen=data[200:300,:] two=data[300:,:] plt.subplot(2,2,1) plt.plot(seventeen[:,0],seventeen[:,1]) plt.title("Sun Spots seen per Year During the 1700's") plt.subplot(2,2,2) plt.plot(eighteen[:,0],eighteen[:,1]) plt.title("Sun Spots seen per Year During the 1800's") plt.subplot(2,2,3) plt.plot(nineteen[:,0],nineteen[:,1]) plt.title("Sun Spots seen per Year During the 1900's") plt.subplot(2,2,4) plt.plot(two[:,0],two[:,1]) plt.title("Sun Spots seen per Year During the 2000's") plt.tight_layout() assert True # leave for grading """ Explanation: Describe the choices you have made in building this visualization and how they make it effective. I made the figure extra long, relative to its height, in order to accomodate the long range of data in the x direction. I also chose the x and y limits so as to show all of the data. The axis labels and titles are concise and show all of the information needed. Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above: Customize your plot to follow Tufte's principles of visualizations. Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data. End of explanation """
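# Illustrative extra: overlay a simple 11-year moving average on the full series to
# bring out the solar cycle envelope. np.convolve with a flat window is one basic
# way to smooth the yearly counts; the window length is an assumption.
window = 11
smooth = np.convolve(ssc, np.ones(window)/window, mode='same')
f = plt.figure(figsize=(25,4))
plt.plot(year, ssc, label='yearly mean')
plt.plot(year, smooth, label='11-year moving average')
plt.xlabel('Year')
plt.ylabel('Number of Sun Spots Seen')
plt.legend()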
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/10_recommend/content_based_using_neural_networks.ipynb
apache-2.0
%%bash pip freeze | grep tensor """ Explanation: Content-Based Filtering Using Neural Networks This notebook relies on files created in the content_based_preproc.ipynb notebook. Be sure to run the code in there before completing this notebook. Also, we'll be using the python3 kernel from here on out so don't forget to change the kernel if it's still Python2. This lab illustrates: 1. how to build feature columns for a model using tf.feature_column 2. how to create custom evaluation metrics and add them to Tensorboard 3. how to train a model and make predictions with the saved model Tensorflow Hub should already be installed. You can check that it is by using "pip freeze". End of explanation """ !pip3 install tensorflow-hub==0.7.0 !pip3 install --upgrade tensorflow==1.15.3 !pip3 install google-cloud-bigquery==1.10 """ Explanation: Let's make sure we install the necessary version of tensorflow-hub. After doing the pip install below, click "Restart the kernel" on the notebook so that the Python environment picks up the new packages. End of explanation """ import os import tensorflow as tf import numpy as np import tensorflow_hub as hub import shutil PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 # do not change these os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCKET os.environ['REGION'] = REGION os.environ['TFVERSION'] = '1.15.3' %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION """ Explanation: Note: Please ignore any incompatibility warnings and errors and re-run the cell to view the installed tensorflow version. End of explanation """ categories_list = open("categories.txt").read().splitlines() authors_list = open("authors.txt").read().splitlines() content_ids_list = open("content_ids.txt").read().splitlines() mean_months_since_epoch = 523 """ Explanation: Build the feature columns for the model. To start, we'll load the list of categories, authors and article ids we created in the previous Create Datasets notebook. 
End of explanation """ embedded_title_column = hub.text_embedding_column( key="title", module_spec="https://tfhub.dev/google/nnlm-de-dim50/1", trainable=False) content_id_column = tf.feature_column.categorical_column_with_hash_bucket( key="content_id", hash_bucket_size= len(content_ids_list) + 1) embedded_content_column = tf.feature_column.embedding_column( categorical_column=content_id_column, dimension=10) author_column = tf.feature_column.categorical_column_with_hash_bucket(key="author", hash_bucket_size=len(authors_list) + 1) embedded_author_column = tf.feature_column.embedding_column( categorical_column=author_column, dimension=3) category_column_categorical = tf.feature_column.categorical_column_with_vocabulary_list( key="category", vocabulary_list=categories_list, num_oov_buckets=1) category_column = tf.feature_column.indicator_column(category_column_categorical) months_since_epoch_boundaries = list(range(400,700,20)) months_since_epoch_column = tf.feature_column.numeric_column( key="months_since_epoch") months_since_epoch_bucketized = tf.feature_column.bucketized_column( source_column = months_since_epoch_column, boundaries = months_since_epoch_boundaries) crossed_months_since_category_column = tf.feature_column.indicator_column(tf.feature_column.crossed_column( keys = [category_column_categorical, months_since_epoch_bucketized], hash_bucket_size = len(months_since_epoch_boundaries) * (len(categories_list) + 1))) feature_columns = [embedded_content_column, embedded_author_column, category_column, embedded_title_column, crossed_months_since_category_column] """ Explanation: In the cell below we'll define the feature columns to use in our model. If necessary, remind yourself the various feature columns to use. For the embedded_title_column feature column, use a Tensorflow Hub Module to create an embedding of the article title. Since the articles and titles are in German, you'll want to use a German language embedding module. Explore the text embedding Tensorflow Hub modules available here. Filter by setting the language to 'German'. The 50 dimensional embedding should be sufficient for our purposes. End of explanation """ record_defaults = [["Unknown"], ["Unknown"],["Unknown"],["Unknown"],["Unknown"],[mean_months_since_epoch],["Unknown"]] column_keys = ["visitor_id", "content_id", "category", "title", "author", "months_since_epoch", "next_content_id"] label_key = "next_content_id" def read_dataset(filename, mode, batch_size = 512): def _input_fn(): def decode_csv(value_column): columns = tf.decode_csv(value_column,record_defaults=record_defaults) features = dict(zip(column_keys, columns)) label = features.pop(label_key) return features, label # Create list of files that match pattern file_list = tf.io.gfile.glob(filename) # Create dataset from file list dataset = tf.data.TextLineDataset(file_list).map(decode_csv) if mode == tf.estimator.ModeKeys.TRAIN: num_epochs = None # indefinitely dataset = dataset.shuffle(buffer_size = 10 * batch_size) else: num_epochs = 1 # end-of-input after this dataset = dataset.repeat(num_epochs).batch(batch_size) return dataset.make_one_shot_iterator().get_next() return _input_fn """ Explanation: Create the input function. Next we'll create the input function for our model. This input function reads the data from the csv files we created in the previous labs. 
End of explanation """ def model_fn(features, labels, mode, params): net = tf.feature_column.input_layer(features, params['feature_columns']) for units in params['hidden_units']: net = tf.layers.dense(net, units=units, activation=tf.nn.relu) # Compute logits (1 per class). logits = tf.layers.dense(net, params['n_classes'], activation=None) predicted_classes = tf.argmax(logits, 1) from tensorflow.python.lib.io import file_io with file_io.FileIO('content_ids.txt', mode='r') as ifp: content = tf.constant([x.rstrip() for x in ifp]) predicted_class_names = tf.gather(content, predicted_classes) if mode == tf.estimator.ModeKeys.PREDICT: predictions = { 'class_ids': predicted_classes[:, tf.newaxis], 'class_names' : predicted_class_names[:, tf.newaxis], 'probabilities': tf.nn.softmax(logits), 'logits': logits, } return tf.estimator.EstimatorSpec(mode, predictions=predictions) table = tf.contrib.lookup.index_table_from_file(vocabulary_file="content_ids.txt") labels = table.lookup(labels) # Compute loss. loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits) # Compute evaluation metrics. accuracy = tf.metrics.accuracy(labels=labels, predictions=predicted_classes, name='acc_op') top_10_accuracy = tf.metrics.mean(tf.nn.in_top_k(predictions=logits, targets=labels, k=10)) metrics = { 'accuracy': accuracy, 'top_10_accuracy' : top_10_accuracy} tf.summary.scalar('accuracy', accuracy[1]) tf.summary.scalar('top_10_accuracy', top_10_accuracy[1]) if mode == tf.estimator.ModeKeys.EVAL: return tf.estimator.EstimatorSpec( mode, loss=loss, eval_metric_ops=metrics) # Create training op. assert mode == tf.estimator.ModeKeys.TRAIN optimizer = tf.train.AdagradOptimizer(learning_rate=0.1) train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step()) return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op) """ Explanation: Create the model and train/evaluate Next, we'll build our model which recommends an article for a visitor to the Kurier.at website. Look through the code below. We use the input_layer feature column to create the dense input layer to our network. This is just a single layer network where we can adjust the number of hidden units as a parameter. Currently, we compute the accuracy between our predicted 'next article' and the actual 'next article' read next by the visitor. We'll also add an additional performance metric of top 10 accuracy to assess our model. To accomplish this, we compute the top 10 accuracy metric, add it to the metrics dictionary below and add it to the tf.summary so that this value is reported to Tensorboard as well. 
End of explanation """ outdir = 'content_based_model_trained' shutil.rmtree(outdir, ignore_errors = True) # start fresh each time #tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file estimator = tf.estimator.Estimator( model_fn=model_fn, model_dir = outdir, params={ 'feature_columns': feature_columns, 'hidden_units': [200, 100, 50], 'n_classes': len(content_ids_list) }) train_spec = tf.estimator.TrainSpec( input_fn = read_dataset("training_set.csv", tf.estimator.ModeKeys.TRAIN), max_steps = 2000) eval_spec = tf.estimator.EvalSpec( input_fn = read_dataset("test_set.csv", tf.estimator.ModeKeys.EVAL), steps = None, start_delay_secs = 30, throttle_secs = 60) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) """ Explanation: Train and Evaluate End of explanation """ %%bash head -5 training_set.csv > first_5.csv head first_5.csv awk -F "\"*,\"*" '{print $2}' first_5.csv > first_5_content_ids """ Explanation: This takes a while to complete but in the end, I get about 30% top 10 accuracy. Make predictions with the trained model. With the model now trained, we can make predictions by calling the predict method on the estimator. Let's look at how our model predicts on the first five examples of the training set. To start, we'll create a new file 'first_5.csv' which contains the first five elements of our training set. We'll also save the target values to a file 'first_5_content_ids' so we can compare our results. End of explanation """ output = list(estimator.predict(input_fn=read_dataset("first_5.csv", tf.estimator.ModeKeys.PREDICT))) import numpy as np recommended_content_ids = [np.asscalar(d["class_names"]).decode('UTF-8') for d in output] content_ids = open("first_5_content_ids").read().splitlines() """ Explanation: Recall, to make predictions on the trained model we pass a list of examples through the input function. Complete the code below to make predictions on the examples contained in the "first_5.csv" file we created above. End of explanation """ from google.cloud import bigquery recommended_title_sql=""" #standardSQL SELECT (SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title FROM `cloud-training-demos.GA360_test.ga_sessions_sample`, UNNEST(hits) AS hits WHERE # only include hits on pages hits.type = "PAGE" AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = \"{}\" LIMIT 1""".format(recommended_content_ids[0]) current_title_sql=""" #standardSQL SELECT (SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title FROM `cloud-training-demos.GA360_test.ga_sessions_sample`, UNNEST(hits) AS hits WHERE # only include hits on pages hits.type = "PAGE" AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = \"{}\" LIMIT 1""".format(content_ids[0]) recommended_title = bigquery.Client().query(recommended_title_sql).to_dataframe()['title'].tolist()[0].encode('utf-8').strip() current_title = bigquery.Client().query(current_title_sql).to_dataframe()['title'].tolist()[0].encode('utf-8').strip() print("Current title: {} ".format(current_title)) print("Recommended title: {}".format(recommended_title)) """ Explanation: Finally, we map the content id back to the article title. Let's compare our model's recommendation for the first example. This can be done in BigQuery. Look through the query below and make sure it is clear what is being returned. End of explanation """
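The prediction cells above keep only the single most likely next article (`class_names`), but each prediction dictionary also exposes the full softmax vector under the `probabilities` key. The sketch below (not part of the original lab) shows one way to list the top five candidate content ids for the first example, assuming `output` and `content_ids_list` as defined earlier:

```python
# Illustrative sketch: top-5 candidate articles for the first test example.
# Assumes `output` (list of prediction dicts) and `content_ids_list` exist as above.
import numpy as np

probs = output[0]['probabilities']      # softmax over all known content ids
top5 = np.argsort(probs)[::-1][:5]      # indices of the 5 largest probabilities
for rank, idx in enumerate(top5, start=1):
    print("{}. {} (p={:.3f})".format(rank, content_ids_list[idx], probs[idx]))
```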
fifabsas/talleresfifabsas
python/Extras/Big_Data/analisis.ipynb
mit
# Imports needed by the cells below (pandas for the data, seaborn for the plots)
import pandas as pd
import seaborn as sns

# Get the data directly from the web page. There is no need to download it!
educa = pd.read_csv(r"https://recursos-data.buenosaires.gob.ar/ckan2/estadistica-educativa/estadistica-educativa.csv", delimiter=";")
print(educa.shape) # Prints the number of rows first, then the number of columns
educa.head() # Prints the first 5 rows

# Print the columns so we know what data is available
educa.columns
"""
Explanation: Education statistics
Data from the GCBA (Buenos Aires city government)
http://data.buenosaires.gob.ar/dataset/estadistica-educativa
End of explanation
"""

features = ["nivel_educ_madre","iecep","tasa_repeticion_2012","domiciliados_pba","inversion_alumnos_2013"]

# Now, to analyse the data, we use seaborn's pairplot,
# which lets us draw 2D histograms and add a linear regression
sns.pairplot(educa[educa.tipo_gestion == "Estatal"], vars=features, kind="reg")
"""
Explanation: The file https://recursos-data.buenosaires.gob.ar/ckan2/estadistica-educativa/documentacion-estadistica-educativa.pdf describes the meaning of each column.
We will take the mother's education level, the 2012 repetition rate, students domiciled in PBA (Buenos Aires province) and investment per student as the relevant variables.
End of explanation
"""
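As a small numeric complement to the pairplot, the pairwise correlations for the same feature list can be printed directly. This is an illustrative addition rather than a cell from the original notebook; it assumes `educa` and `features` as defined above and that the selected columns are numeric:

```python
# Illustrative sketch (assumes `educa` and `features` as defined above, and numeric columns).
state_schools = educa[educa.tipo_gestion == "Estatal"]   # same subset used in the pairplot
print(state_schools[features].corr().round(2))           # pairwise Pearson correlations
```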
tensorflow/docs-l10n
site/en-snapshot/hub/tutorials/tf2_text_classification.ipynb
apache-2.0
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. """ Explanation: Copyright 2019 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """ import numpy as np import tensorflow as tf import tensorflow_hub as hub import tensorflow_datasets as tfds import matplotlib.pyplot as plt print("Version: ", tf.__version__) print("Eager mode: ", tf.executing_eagerly()) print("Hub version: ", hub.__version__) print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE") """ Explanation: Text Classification with Movie Reviews <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a href="https://tfhub.dev/google/collections/nnlm/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a> </td> </table> This notebook classifies movie reviews as positive or negative using the text of the review. 
This is an example of binary—or two-class—classification, an important and widely applicable kind of machine learning problem. We'll use the IMDB dataset that contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews. This notebook uses tf.keras, a high-level API to build and train models in TensorFlow, and TensorFlow Hub, a library and platform for transfer learning. For a more advanced text classification tutorial using tf.keras, see the MLCC Text Classification Guide. More models Here you can find more expressive or performant models that you could use to generate the text embedding. Setup End of explanation """ train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"], batch_size=-1, as_supervised=True) train_examples, train_labels = tfds.as_numpy(train_data) test_examples, test_labels = tfds.as_numpy(test_data) """ Explanation: Download the IMDB dataset The IMDB dataset is available on TensorFlow datasets. The following code downloads the IMDB dataset to your machine (or the colab runtime): End of explanation """ print("Training entries: {}, test entries: {}".format(len(train_examples), len(test_examples))) """ Explanation: Explore the data Let's take a moment to understand the format of the data. Each example is a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. The label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review. End of explanation """ train_examples[:10] """ Explanation: Let's print first 10 examples. End of explanation """ train_labels[:10] """ Explanation: Let's also print the first 10 labels. End of explanation """ model = "https://tfhub.dev/google/nnlm-en-dim50/2" hub_layer = hub.KerasLayer(model, input_shape=[], dtype=tf.string, trainable=True) hub_layer(train_examples[:3]) """ Explanation: Build the model The neural network is created by stacking layers—this requires three main architectural decisions: How to represent the text? How many layers to use in the model? How many hidden units to use for each layer? In this example, the input data consists of sentences. The labels to predict are either 0 or 1. One way to represent the text is to convert sentences into embeddings vectors. We can use a pre-trained text embedding as the first layer, which will have two advantages: * we don't have to worry about text preprocessing, * we can benefit from transfer learning. For this example we will use a model from TensorFlow Hub called google/nnlm-en-dim50/2. There are two other models to test for the sake of this tutorial: * google/nnlm-en-dim50-with-normalization/2 - same as google/nnlm-en-dim50/2, but with additional text normalization to remove punctuation. This can help to get better coverage of in-vocabulary embeddings for tokens on your input text. * google/nnlm-en-dim128-with-normalization/2 - A larger model with an embedding dimension of 128 instead of the smaller 50. Let's first create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a couple of input examples. Note that the output shape of the produced embeddings is a expected: (num_examples, embedding_dimension). 
End of explanation """ model = tf.keras.Sequential() model.add(hub_layer) model.add(tf.keras.layers.Dense(16, activation='relu')) model.add(tf.keras.layers.Dense(1)) model.summary() """ Explanation: Let's now build the full model: End of explanation """ model.compile(optimizer='adam', loss=tf.losses.BinaryCrossentropy(from_logits=True), metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name='accuracy')]) """ Explanation: The layers are stacked sequentially to build the classifier: The first layer is a TensorFlow Hub layer. This layer uses a pre-trained Saved Model to map a sentence into its embedding vector. The model that we are using (google/nnlm-en-dim50/2) splits the sentence into tokens, embeds each token and then combines the embedding. The resulting dimensions are: (num_examples, embedding_dimension). This fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units. The last layer is densely connected with a single output node. This outputs logits: the log-odds of the true class, according to the model. Hidden units The above model has two intermediate or "hidden" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation. If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns—patterns that improve performance on training data but not on the test data. This is called overfitting, and we'll explore it later. Loss function and optimizer A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the binary_crossentropy loss function. This isn't the only choice for a loss function, you could, for instance, choose mean_squared_error. But, generally, binary_crossentropy is better for dealing with probabilities—it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions. Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error. Now, configure the model to use an optimizer and a loss function: End of explanation """ x_val = train_examples[:10000] partial_x_train = train_examples[10000:] y_val = train_labels[:10000] partial_y_train = train_labels[10000:] """ Explanation: Create a validation set When training, we want to check the accuracy of the model on data it hasn't seen before. Create a validation set by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy). End of explanation """ history = model.fit(partial_x_train, partial_y_train, epochs=40, batch_size=512, validation_data=(x_val, y_val), verbose=1) """ Explanation: Train the model Train the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the x_train and y_train tensors. 
While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set: End of explanation """ results = model.evaluate(test_examples, test_labels) print(results) """ Explanation: Evaluate the model And let's see how the model performs. Two values will be returned. Loss (a number which represents our error, lower values are better), and accuracy. End of explanation """ history_dict = history.history history_dict.keys() """ Explanation: This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%. Create a graph of accuracy and loss over time model.fit() returns a History object that contains a dictionary with everything that happened during training: End of explanation """ acc = history_dict['accuracy'] val_acc = history_dict['val_accuracy'] loss = history_dict['loss'] val_loss = history_dict['val_loss'] epochs = range(1, len(acc) + 1) # "bo" is for "blue dot" plt.plot(epochs, loss, 'bo', label='Training loss') # b is for "solid blue line" plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() plt.clf() # clear figure plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.show() """ Explanation: There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy: End of explanation """
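The notebook stops after evaluating the model and plotting the training history, but because the TF Hub layer consumes raw strings, the trained classifier can also be applied to new text directly. The snippet below is an illustrative sketch rather than part of the original tutorial; the two reviews are made up, and the sigmoid converts the model's logits into probabilities of a positive review:

```python
# Illustrative sketch: score a couple of hand-written reviews with the trained model.
sample_reviews = np.array([
    "An absolutely wonderful film with great acting.",
    "This was a boring, badly written waste of two hours.",
])
logits = model.predict(sample_reviews)
probs = tf.sigmoid(logits).numpy().flatten()   # model outputs logits, so apply a sigmoid
for review, p in zip(sample_reviews, probs):
    print("{:.2f}  {}".format(p, review))
```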
yashdeeph709/Algorithms
PythonBootCamp/Complete-Python-Bootcamp-master/Object Oriented Programming.ipynb
apache-2.0
l = [1,2,3] """ Explanation: Object Oriented Programming Object Oriented Programming (OOP) tends to be one of the major obstacles for beginners when they are first starting to learn Python. There are many,many tutorials and lessons covering OOP so feel free to Google search other lessons, and I have also put some links to other useful tutorials online at the bottom of this Notebook. For this lesson we will construct our knowledge of OOP in Python by building on the following topics: Objects Using the class keyword Creating class attributes Creating methods in a class Learning about Inheritance Learning about Special Methods for classes Lets start the lesson by remembering about the Basic Python Objects. For example: End of explanation """ l.count(2) """ Explanation: Remember how we could call methods on a list? End of explanation """ print type(1) print type([]) print type(()) print type({}) """ Explanation: What we will basically be doing in this lecture is exploring how we could create an Object type like a list. We've already learned about how to create functions. So lets explore Objects in general: Objects In Python, everything is an object. Remember from previous lectures we can use type() to check the type of object something is: End of explanation """ # Create a new object type called Sample class Sample(object): pass # Instance of Sample x = Sample() print type(x) """ Explanation: So we know all these things are objects, so how can we create our own Object types? That is where the class keyword comes in. class The user defined objects are created using the class keyword. The class is a blueprint that defines a nature of a future object. From classes we can construct instances. An instance is a specific object created from a particular class. For example, above we created the object 'l' which was an instance of a list object. Let see how we can use class: End of explanation """ class Dog(object): def __init__(self,breed): self.breed = breed sam = Dog(breed='Lab') frank = Dog(breed='Huskie') """ Explanation: By convention we give classes a name that starts with a capital letter. Note how x is now the reference to our new instance of a Sample class. In other words, we instantiate the Sample class. Inside of the class we currently just have pass. But we can define class attributes and methods. An attribute is a characteristic of an object. A method is an operation we can perform with the object. For example we can create a class called Dog. An attribute of a dog may be its breed or its name, while a method of a dog may be defined by a .bark() method which returns a sound. Let's get a better understanding of attributes through an example. Attributes The syntax for creating an attribute is: self.attribute = something There is a special method called: __init__() This method is used to initialize the attributes of an object. For example: End of explanation """ sam.breed frank.breed """ Explanation: Lets break down what we have above.The special method __init__() is called automatically right after the object has been created: def __init__(self, breed): Each attribute in a class definition begins with a reference to the instance object. It is by convention named self. The breed is the argument. The value is passed during the class instantiation. self.breed = breed Now we have created two instances of the Dog class. 
With two breed types, we can then access these attributes like this: End of explanation """ class Dog(object): # Class Object Attribute species = 'mammal' def __init__(self,breed,name): self.breed = breed self.name = name sam = Dog('Lab','Sam') sam.name """ Explanation: Note how we don't have any parenthesis after breed, this is because it is an attribute and doesn't take any arguments. In Python there are also class object attributes. These Class Object Attributes are the same for any instance of the class. For example, we could create the attribute species for the Dog class. Dogs (regardless of their breed,name, or other attributes will always be mammals. We apply this logic in the following manner: End of explanation """ sam.species """ Explanation: Note that the Class Object Attribute is defined outside of any methods in the class. Also by convention, we place them first before the init. End of explanation """ class Circle(object): pi = 3.14 # Circle get instantiated with a radius (default is 1) def __init__(self, radius=1): self.radius = radius # Area method calculates the area. Note the use of self. def area(self): return self.radius * self.radius * Circle.pi # Method for resetting Radius def setRadius(self, radius): self.radius = radius # Method for getting radius (Same as just calling .radius) def getRadius(self): return self.radius c = Circle() c.setRadius(2) print 'Radius is: ',c.getRadius() print 'Area is: ',c.area() """ Explanation: Methods Methods are functions defined inside the body of a class. They are used to perform operations with the attributes of our objects. Methods are essential in encapsulation concept of the OOP paradigm. This is essential in dividing responsibilities in programming, especially in large applications. You can basically think of methods as functions acting on an Object that take the Object itself into account through its self argument. Lets go through an example of creating a Circle class: End of explanation """ class Animal(object): def __init__(self): print "Animal created" def whoAmI(self): print "Animal" def eat(self): print "Eating" class Dog(Animal): def __init__(self): Animal.__init__(self) print "Dog created" def whoAmI(self): print "Dog" def bark(self): print "Woof!" d = Dog() d.whoAmI() d.eat() d.bark() """ Explanation: Great! Notice how we used self. notation to reference attributes of the class within the method calls. Review how the code above works and try creating your own method Inheritance Inheritance is a way to form new classes using classes that have already been defined. The newly formed classes are called derived classes, the classes that we derive from are called base classes. Important benefits of inheritance are code reuse and reduction of complexity of a program. The derived classes (descendants) override or extend the functionality of base classes (ancestors). Lets see an example by incorporating our previous work on the Dog class: End of explanation """ class Book(object): def __init__(self, title, author, pages): print "A book is created" self.title = title self.author = author self.pages = pages def __str__(self): return "Title:%s , author:%s, pages:%s " %(self.title, self.author, self.pages) def __len__(self): return self.pages def __del__(self): print "A book is destroyed" book = Book("Python Rocks!", "Jose Portilla", 159) #Special Methods print book print len(book) del book """ Explanation: In this example, we have two classes: Animal and Dog. The Animal is the base class, the Dog is the derived class. 
The derived class inherits the functionality of the base class. It is shown by the eat() method. The derived class modifies existing behavior of the base class. shown by the whoAmI() method. Finally, the derived class extends the functionality of the base class, by defining a new bark() method. Special Methods Finally lets go over special methods. Classes in Python can implement certain operations with special method names. These methods are not actually called directly but by Python specific language syntax. For example Lets create a Book class: End of explanation """
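One more special method that fits the discussion above is __eq__, which defines what == means for instances; without it, two Book objects with identical attributes still compare unequal because Python falls back to comparing identities. The example below is an illustration added for completeness, not part of the original lesson:

```python
# Not part of the original lesson: one more special method, __eq__,
# shown on a pared-down version of the Book class above.
class Book(object):
    def __init__(self, title, author, pages):
        self.title = title
        self.author = author
        self.pages = pages

    def __eq__(self, other):
        # Two books are "equal" if all their attributes match.
        if not isinstance(other, Book):
            return NotImplemented
        return (self.title, self.author, self.pages) == \
               (other.title, other.author, other.pages)

a = Book("Python Rocks!", "Jose Portilla", 159)
b = Book("Python Rocks!", "Jose Portilla", 159)
print(a == b)   # True, thanks to __eq__; without it this would be False
```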
cgivre/oreilly-sec-ds-fundamentals
Notebooks/Intro/One Dimensional Data Worksheet-Python Answers.ipynb
apache-2.0
import pandas as pd import numpy as np """ Explanation: One Dimensional Data Worksheet This worksheet reviews the concepts discussed about 1 dimensional data. The goal for these exercises is getting you to think in terms of vectorized computing. This worksheet should take 20-30 minutes to complete. End of explanation """ #First initialize the series by calling the pd.Series() function randomNumbers = pd.Series( np.random.randint(1, 100, 100) ) #Display the first 5 random numbers print( randomNumbers.head() ) #Next filter out the odd numbers by using the mod operator and reset the index evenRandomNumbers = randomNumbers[ randomNumbers % 2 == 0].reset_index( drop=True ) #Display the first 5 evenRandomNumbers.head() """ Explanation: Exercise 1 Create a Series object with 100 random integers, then filter out odd integers and reindex the Series. Hint: you can use python np.random.random_integers(1, 100, 100) to create the random numbers. Print out the first 20 numbers. End of explanation """ numbers = ['(342)123-2345', '410-342-3421', '(234 434-2121', '(301)822-3423', '123-234-3423', '(410)555-4443', 'AAAAHHH', '(XXX)XXX-XXXX', '(602)123-4535', '(234)127-4534'] #Predefined list of numbers numbers = ['(342)123-2345', '410-342-3421', '(234 434-2121', '(301)822-3423', '123-234-3423', '(410)555-4443', 'AAAAHHH', '(XXX)XXX-XXXX', '(602)123-4535', '(234)127-4534'] #Create the phone numbers series phoneNumbers = pd.Series( numbers ) #Next filter the phone numbers by using the str.match function validPhoneNumbers = phoneNumbers[ phoneNumbers.str.match( r'\(\d{3}\)\d{3}-\d{4}') ].reset_index( drop=True ) """ Explanation: Exercise 2 You will be given a list containing 10 strings. Create a new Series called validPhoneNumbers that only contains data in the format (XXX)XXX-XXXX. Don't forget to reindex the series after you've filtered it. End of explanation """ #This function converts a number from Farenheit to Celsius toCelsius = lambda x: (float(5)/9)*(x-32) #Creates a series with numbers that represent temperatures in Farenheit tempsInFarenheit = pd.Series( [92,33,-5,17,122,87 ]) #Your code here... tempsinCelsius = tempsInFarenheit.apply( toCelsius ) print( tempsinCelsius) """ Explanation: Exercise 3 The code below contains a lambda function which converts a temperature from Farenheit to Celsius. You are given a Series called temperatures in Farhenheit. Using the .apply() function, convert the data into degrees Celsius. End of explanation """ numList = [1,1,1,1,1,2,4,5,7,5,4,5,6,4,3,5,5,5,6,9,0,7,6,7,5,4,4,7] #Your code here... numSeries = pd.Series( numList) numSeries.value_counts() """ Explanation: Exercise 4 You are given a list of numbers called numList. Without using a loop, write a script to count occurances of each value in the list. End of explanation """ import ipaddress hosts = [ '192.168.1.2', '10.10.10.2', '172.143.23.34', '34.34.35.34', '172.15.0.1', '172.17.0.1'] from ipaddress import ip_address IPData = pd.Series( hosts ) privateIPs = IPData[IPData.apply( lambda x : ip_address(x).is_private ) ] print( privateIPs ) """ Explanation: Exercise 5 You are given a Series of IP Addresses and the goal is to limit this data to private IP addresses. Python has an ipaddress module which provides the capability to create, manipulate and operate on IPv4 and IPv6 addresses and networks. Complete documentation is available here: https://docs.python.org/3/library/ipaddress.html. 
Here are some examples of how you might use this module:
```python
import ipaddress
myIP = ipaddress.ip_address( '192.168.0.1' )
myNetwork = ipaddress.ip_network( '192.168.0.0/28' )

# Check membership in network
if myIP in myNetwork:  # This works
    print( "Yay!" )

# Loop through CIDR blocks
for ip in myNetwork:
    print( ip )

# output:
# 192.168.0.0
# 192.168.0.1
# …
# …
# 192.168.0.13
# 192.168.0.14
# 192.168.0.15

# Testing to see if an IP is private
if myIP.is_private:
    print( "This IP is private" )
else:
    print( "Routable IP" )
```
First, write a function which takes an IP address and returns True if the IP is private, False if it is public. HINT: use the ipaddress module.
Next, use this to create a Series of True/False values in the same sequence as your original Series.
Finally, use this to filter the original Series so that it contains only private IP addresses.
End of explanation
"""
tvaught/compintro
02_intro_to_python.ipynb
bsd-3-clause
2+2 """ Explanation: Introduction to Python Simple Expressions / Variable Assignment The Python interpreter, which is being used to parse and execute each of these lines, can do math like a calculator: End of explanation """ print 2*3 print (4+6)*(2+9) # should calculate to 110 print 12.0/11.0 """ Explanation: Another several examples: I'll use the "print" statement to print out the result for each calculation (if I didn't do this, it would just output the result of the last expression): End of explanation """ print(5/3) # Integer division gives a 'floor' value (rounding down, basically). print(5.0/3.0) # Dividing floats (usually) gives the expected answer. print(5.0/3) # The interpreter uses the more complex type to infer the type for the result. print(5/3.0) # The order for type "upcasting" doesn't matter """ Explanation: One major difference between using a calculator and doing calculations on the computer is that there are a couple of types of numbers -- integers and floating point values. You can think of integers as whole numbers and floats (as floating point values are called) as supporting a decimal or fractional part of the value. This shows up sometimes is odd behavior with division. End of explanation """ 0.1 + 0.2 """ Explanation: Let's look at another example of float math: End of explanation """ a = 5 print a """ Explanation: Wat ?!?! There are occasionally precision issues because of the way floating point values work. It's actually an interesting abstraction (feel free to study a more detailed explanation of how IEEE Floats work). This is a good example of how abstractions can 'leak' (more on this later). A good explanation of how this affects Python is here. For our purposes, this really doesn't matter. Just a curiosity, so, moving right along... End of explanation """ y = x**2 - 3*x + 12 # just like in algebra, right? """ Explanation: This may not seem very exciting at first, but variables are an important part of programming. It's good to know that you can use them in Python. End of explanation """ x = 10 y = x**2 - 3*x + 12 print y """ Explanation: I know x isn't defined. Isn't that what a variable is? Not exactly... More on this error stuff later. Try this: End of explanation """ import numpy as np # the python array-math library x = np.arange(0.0, 2*np.pi, 0.01) # make an array of numbers from 0 to 2π with a number every 0.01. y = np.sin(x) print "The length of x is: %s" % (len(x)) print "The length of y is: %s" % (len(y)) print "The first 5 values in the x array are:\n%s" % x[0:5] print "The first 5 values in the y array are:\n%s" % y[0:5] # this imports some plotting stuff we'll use from bokeh.plotting import output_notebook output_notebook() from bokeh.plotting import figure, show p = figure(title="Sine Example") p.line(x, y) show(p) """ Explanation: So, the variables on the right side of the equals sign have to already be assigned a value. Otherwise, the interpreter tries to evaluate the right side and assign it to the left side. This might not seem particularly useful until you see how to use looping. Also, math on arrays of values really shows how cool this is. Check this out... End of explanation """ def f(x): return x**2 - 3*x + 12 """ Explanation: That's cool. Now let's get back to something we tried earlier. Remember that expression y = x**2 - 3*x + 12 ? In algebra, that's actually a function. In Python (and just about every other language) we have the concept of functions as well. 
The keyword def is used to define a function in Python:
End of explanation
"""

f(3)
"""
Explanation: So, now you can evaluate the function by handing it a value, or parameter (in this case, we've called it x).
End of explanation
"""

print x[:5]    # just to be clear that in the scope of this notebook, the symbol x is defined
print f(x)[:5] # only show the first 5 entries
"""
Explanation: %% boom %% Mind. Blown. So, I'm dying to see what this looks like:
End of explanation
"""

p2 = figure()
p2.line(x, f(x))
p2_nbh = show(p2)
"""
Explanation: This is pretty math-y, what else can functions do? Well, they can do anything you tell them...
End of explanation
"""

import random

def headache(name, number_of_repeats=5):
    """ a pretty useless function """
    namelist = list(name)
    for i in range(0, number_of_repeats):
        random.shuffle(namelist)
        for letter in namelist:
            print letter,

headache("travis", 40)
"""
Explanation: End of explanation
"""
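The headache function above quietly uses a default argument (number_of_repeats=5) that the lesson does not explain. The small sketch below, an illustrative addition rather than a cell from the original notebook, shows how default and keyword arguments behave:

```python
# Illustrative sketch: default values and keyword arguments.
# (print with a single argument behaves the same in Python 2 and 3)
def greet(name, greeting="Hello", repeats=1):
    for _ in range(repeats):
        print("%s, %s!" % (greeting, name))

greet("Ada")              # both defaults used -> Hello, Ada!
greet("Ada", "Hi")        # greeting overridden positionally
greet("Ada", repeats=2)   # a single default overridden by keyword
```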
Neuroglycerin/neukrill-net-work
notebooks/model_run_and_result_analyses/Interactive Pylearn2.ipynb
mit
!cat yaml_templates/replicate_8aug_online.yaml """ Explanation: Building the train object The job of the YAML parser is to instantiate the train object and everything inside of it. Looking at an example YAML file: End of explanation """ import pylearn2.space final_shape = (48,48) input_space = pylearn2.space.CompositeSpace([ pylearn2.space.Conv2DSpace(shape=final_shape,num_channels=1,axes=['b',0,1,'c']), pylearn2.space.Conv2DSpace(shape=final_shape,num_channels=1,axes=['b',0,1,'c']) ]) """ Explanation: We want to know how to build a model with parallel channels. So, we're going to look at interactively building just the model part of this specification and how it deals with different inputs. It should be possible to put the convolutional layers in parallel using a CompositeSpace as described in this post on the pylearn-users. It could be troublesome, however, supplying these layers with two data streams. Building the model Using the specification from above we can see how to instantiate an MLP class interactively. The obvious part we need to deal with first is the input_space. We have to define this to be a CompositeSpace (documentation for spaces). Seems like this will involve modifying the dataset class, but as long as the tuple is in the right format it shouldn't be a problem. This post might also be useful, as they seem to be trying to do the same thing, and contains an example of how to defined the CompositeSpace. So, we should start by instantiating the CompositeSpace. End of explanation """ import pylearn2.models.mlp """ Explanation: Composite Layers Up until we reach the fully connected layers we want to have different convolutional pipelines. To do this, we have to define two of these pipelines inside a CompositeLayer. End of explanation """ convlayers = {} for i in range(2): convlayers[i] = pylearn2.models.mlp.MLP( layer_name="convlayer_{0}".format(i), batch_size=128, layers=[pylearn2.models.mlp.ConvRectifiedLinear( layer_name='h1', output_channels=48, irange=0.025, init_bias=0, kernel_shape=[8,8], pool_shape=[2,2], pool_stride=[2,2], max_kernel_norm=1.9365 ), pylearn2.models.mlp.ConvRectifiedLinear( layer_name='h2', output_channels=96, irange=0.025, init_bias=0, kernel_shape=[5,5], pool_shape=[2,2], pool_stride=[2,2], max_kernel_norm=1.9365 ), pylearn2.models.mlp.ConvRectifiedLinear( layer_name='h3', output_channels=128, irange=0.025, init_bias=0, kernel_shape=[3,3], pool_shape=[2,2], pool_stride=[2,2], max_kernel_norm=1.9365 ), pylearn2.models.mlp.ConvRectifiedLinear( layer_name='h4', output_channels=128, irange=0.025, init_bias=0, kernel_shape=[3,3], pool_shape=[2,2], pool_stride=[2,2], max_kernel_norm=1.9365 ) ] ) """ Explanation: First, we have to instantiate two copies of the above convolutional layers as their own MLP objects. Originally, I thought these should have an input_source to specify the inputs they take, turns out nested MLPs do not have input or target sources. Might as well store these in a dictionary: End of explanation """ inputs_to_layers = {0:[0],1:[1]} compositelayer = pylearn2.models.mlp.CompositeLayer( layer_name="parallel_conv", layers=[convlayers[i] for i in range(2)], inputs_to_layers=inputs_to_layers) """ Explanation: Then we can initialise our CompositeLayer with these two stacks of convolutional layers. Have to define dictionary mapping which of the inputs in the composite space supplied goes to which component of the space. 
End of explanation """ flattened = pylearn2.models.mlp.FlattenerLayer(raw_layer=compositelayer) """ Explanation: Unfortunately, it turns out we also have to put a FlattenerLayer around this so that the output of this layer will play nicely with the fully connected layer following this: End of explanation """ n_classes=121 main_mlp =None main_mlp = pylearn2.models.mlp.MLP( batch_size=128, input_space=input_space, input_source=['img_1','img_2'], layers=[ flattened, pylearn2.models.mlp.RectifiedLinear( dim=1024, max_col_norm=1.9, layer_name='h5', istdev=0.05, W_lr_scale=0.25, b_lr_scale=0.25), pylearn2.models.mlp.Softmax( n_classes=121, max_col_norm=1.9365, layer_name='y', istdev=0.05, W_lr_scale=0.25, b_lr_scale=0.25 ) ] ) """ Explanation: Now we need to connect this composite layer to the rest of the network, which is a single fully connected layer and the softmax output layer. To do this, we instantiate another MLP object, in which the first layer is this composite layer. This also when we use the composite input space we defined above. End of explanation """ import neukrill_net.image_directory_dataset import copy reload(neukrill_net.image_directory_dataset) class ParallelIterator(object): def __init__(self, *args, **keyargs): keyargs['rng'] = np.random.RandomState(42) self.iterator_1 = neukrill_net.image_directory_dataset.FlyIterator(*args,**keyargs) keyargs = copy.deepcopy(keyargs) keyargs['rng'] = np.random.RandomState(42) self.iterator_2 = neukrill_net.image_directory_dataset.FlyIterator(*args,**keyargs) self.stochastic=False self.num_examples = self.iterator_1.num_examples def __iter__(self): return self def next(self): # get a batch from both iterators: Xbatch1,ybatch1 = self.iterator_1.next() Xbatch2,ybatch2 = self.iterator_2.next() assert np.allclose(ybatch1,ybatch2) return Xbatch1,Xbatch2,ybatch1 class ParallelDataset(neukrill_net.image_directory_dataset.ListDataset): def iterator(self, mode=None, batch_size=None, num_batches=None, rng=None, data_specs=None, return_tuple=False): if not num_batches: num_batches = int(len(self.X)/batch_size) iterator = ParallelIterator(dataset=self, batch_size=batch_size, num_batches=num_batches, final_shape=self.run_settings["final_shape"], rng=None,mode=mode) return iterator import neukrill_net.augment import os dataset = ParallelDataset( transformer=neukrill_net.augment.RandomAugment( units='float', rotate=[0,90,180,270], rotate_is_resizable=0, flip=1, resize=final_shape, normalise={'global_or_pixel':'global', 'mu': 0.957, 'sigma': 0.142} ), settings_path=os.path.abspath("settings.json"), run_settings_path=os.path.abspath("run_settings/replicate_8aug.json"), force=True ) """ Explanation: Creating the dataset To test this model we need a dataset that's going to supply the input data in the correct format. This should be a tuple of 4D arrays returns by the iterator in the tuple containing the input and target batches. We can create this pretty easily by just making a Dataset that inherits our old ListDataset and creates an iterator that contains two FlyIterators. 
End of explanation """ iterator = dataset.iterator(mode='even_shuffled_sequential',batch_size=128) X1,X2,y = iterator.next() """ Explanation: Testing this new dataset iterator: End of explanation """ channels = None for i in range(20): if not channels: channels = hl.Image(X1[i,:].squeeze(),group="Iterator 1") channels = hl.Image(X2[i,:].squeeze(),group="Iterator 2") else: channels += hl.Image(X1[i,:].squeeze(),group="Iterator 1") channels += hl.Image(X2[i,:].squeeze(),group="Iterator 2") channels """ Explanation: Plotting some of the images it produces side by side to make sure they're the same: End of explanation """ import pylearn2.training_algorithms.sgd import pylearn2.costs.mlp.dropout import pylearn2.costs.cost import pylearn2.termination_criteria algorithm = pylearn2.training_algorithms.sgd.SGD( train_iteration_mode='even_shuffled_sequential', monitor_iteration_mode='even_sequential', batch_size=128, learning_rate=0.1, learning_rule= pylearn2.training_algorithms.learning_rule.Momentum( init_momentum=0.5 ), monitoring_dataset={ 'train':dataset, 'valid':ParallelDataset( transformer=neukrill_net.augment.RandomAugment( units='float', rotate=[0,90,180,270], rotate_is_resizable=0, flip=1, resize=final_shape, normalise={'global_or_pixel':'global', 'mu': 0.957, 'sigma': 0.142} ), settings_path=os.path.abspath("settings.json"), run_settings_path=os.path.abspath("run_settings/replicate_8aug.json"), force=True, training_set_mode='validation' ) }, cost=pylearn2.costs.cost.SumOfCosts( costs=[ pylearn2.costs.mlp.dropout.Dropout( input_include_probs={'h5':0.5}, input_scales={'h5':2.0}), pylearn2.costs.mlp.WeightDecay(coeffs={'parallel_conv':0.00005, 'h5':0.00005}) ] ), termination_criterion=pylearn2.termination_criteria.EpochCounter(max_epochs=500) ) import pylearn2.train_extensions import pylearn2.train_extensions.best_params extensions = [ pylearn2.training_algorithms.learning_rule.MomentumAdjustor( start=1, saturate=200, final_momentum=0.95 ), pylearn2.training_algorithms.sgd.LinearDecayOverEpoch( start=1, saturate=200, decay_factor=0.025 ), pylearn2.train_extensions.best_params.MonitorBasedSaveBest( channel_name='valid_y_nll', save_path='/disk/scratch/neuroglycerin/models/parallel_interactive.pkl' ), pylearn2.training_algorithms.sgd.MonitorBasedLRAdjuster( high_trigger=1.0, low_trigger=0.999, grow_amt=1.012, shrink_amt=0.986, max_lr=0.4, min_lr=0.00005, channel_name='valid_y_nll' ) ] """ Explanation: Don't know why there's a single one from Iterator 2 at the start, but otherwise seems to have worked. Creating the rest The rest of the train object stays the same, apart from the save path and that the algorithm will have to load one of these new ParallelDataset objects for its validation set. So, we're missing: algorithm - contains validation set, which must be set up as a parallel dataset. extensions - keeping these the same but changing save paths It's worth noting that when we define the cost and the weight decay we have to address the new convolutional layers inside the composite layer. End of explanation """ import pylearn2.train train = pylearn2.train.Train( dataset=dataset, model=main_mlp, algorithm=algorithm, extensions=extensions, save_path='/disk/scratch/neuroglycerin/models/parallel_interactive_recent.pkl', save_freq=1 ) """ Explanation: Assembling the full train object We now have everything we need to make up our train object, so we can put it together and see how well it runs. End of explanation """ train.main_loop() """ Explanation: We can live with that warning. 
Now, attempting to run the model: End of explanation """
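pylearn2 has not been maintained for years, so the code above is mostly of historical interest. Purely as a conceptual aside (not part of the original notebook, and not a layer-for-layer translation), the same idea of two parallel convolutional branches merged before the fully connected layers can be sketched with the tf.keras functional API; the layer sizes below are placeholders:

```python
# Conceptual sketch only (not pylearn2): two parallel convolutional pipelines
# merged before the fully connected layers, in the tf.keras functional API.
import tensorflow as tf

def conv_branch(inputs):
    x = tf.keras.layers.Conv2D(48, 8, activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D(2)(x)
    x = tf.keras.layers.Conv2D(96, 5, activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D(2)(x)
    return tf.keras.layers.Flatten()(x)

img_1 = tf.keras.Input(shape=(48, 48, 1), name="img_1")
img_2 = tf.keras.Input(shape=(48, 48, 1), name="img_2")

# Concatenation plays the role of the CompositeLayer + FlattenerLayer above.
merged = tf.keras.layers.concatenate([conv_branch(img_1), conv_branch(img_2)])
x = tf.keras.layers.Dense(1024, activation="relu")(merged)
outputs = tf.keras.layers.Dense(121, activation="softmax")(x)

model = tf.keras.Model(inputs=[img_1, img_2], outputs=outputs)
model.summary()
```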
GoogleCloudPlatform/tensorflow-without-a-phd
tensorflow-rnn-tutorial/00_Keras_RNN_predictions_playground.ipynb
apache-2.0
# using Tensorflow 2 %tensorflow_version 2.x import numpy as np from matplotlib import pyplot as plt import tensorflow as tf print("Tensorflow version: " + tf.__version__) #@title Display utilities [RUN ME] from enum import IntEnum import numpy as np class Waveforms(IntEnum): SINE1 = 0 SINE2 = 1 SINE3 = 2 SINE4 = 3 def create_time_series(waveform, datalen): # Generates a sequence of length datalen # There are three available waveforms in the Waveforms enum # good waveforms frequencies = [(0.2, 0.15), (0.35, 0.3), (0.6, 0.55), (0.4, 0.25)] freq1, freq2 = frequencies[waveform] noise = [np.random.random()*0.2 for i in range(datalen)] x1 = np.sin(np.arange(0,datalen) * freq1) + noise x2 = np.sin(np.arange(0,datalen) * freq2) + noise x = x1 + x2 return x.astype(np.float32) from matplotlib import transforms as plttrans plt.rcParams['figure.figsize']=(16.8,6.0) plt.rcParams['axes.grid']=True plt.rcParams['axes.linewidth']=0 plt.rcParams['grid.color']='#DDDDDD' plt.rcParams['axes.facecolor']='white' plt.rcParams['xtick.major.size']=0 plt.rcParams['ytick.major.size']=0 def picture_this_1(data, datalen): plt.subplot(211) plt.plot(data[datalen-512:datalen+512]) plt.axvspan(0, 512, color='black', alpha=0.06) plt.axvspan(512, 1024, color='grey', alpha=0.04) plt.subplot(212) plt.plot(data[3*datalen-512:3*datalen+512]) plt.axvspan(0, 512, color='grey', alpha=0.04) plt.axvspan(512, 1024, color='black', alpha=0.06) plt.show() def picture_this_2(data, batchsize, seqlen): samples = np.reshape(data, [-1, batchsize, seqlen]) rndsample = samples[np.random.choice(samples.shape[0], 8, replace=False)] print("Tensor shape of a batch of training sequences: " + str(rndsample[0].shape)) print("Random excerpt:") subplot = 241 for i in range(8): plt.subplot(subplot) plt.plot(rndsample[i, 0]) # first sequence in random batch subplot += 1 plt.show() def picture_this_3(predictions, evaldata, evallabels, seqlen): subplot = 241 for i in range(8): plt.subplot(subplot) #k = int(np.random.rand() * evaldata.shape[0]) l0, = plt.plot(evaldata[i, 1:], label="data") plt.plot([seqlen-2, seqlen-1], evallabels[i, -2:], ":") l1, = plt.plot([seqlen-1], [predictions[i]], "o", label='Predicted') l2, = plt.plot([seqlen-1], [evallabels[i][-1]], "o", label='Ground Truth') if i==0: plt.legend(handles=[l0, l1, l2]) subplot += 1 plt.show() def histogram_helper(data, title, last_label=None): labels = ['RND', 'LAST', 'LAST2', 'LINEAR', 'DNN', 'CNN', 'RNN', 'RNN_N'] colors = ['#4285f4', '#34a853', '#fbbc05', '#ea4334', '#4285f4', '#34a853', '#fbbc05', '#ea4334', '#4285f4', '#34a853', '#fbbc05', '#ea4334'] fig = plt.figure(figsize=(7,4)) plt.xticks(rotation='40') ymax = data[1]*1.3 plt.ylim(0, ymax) plt.title(title, pad="20") # remove data points where data is None filtered = filter(lambda tup: tup[1] is not None, zip(labels, data, colors)) # split back into lists labels, data, colors = map(list, zip(*filtered)) # replace last label is appropriate if last_label is not None: labels[-1] = last_label # histogram plot plt.bar(labels, data, color=colors) # add values on histogram bars for i, (_, v, color) in enumerate(zip(labels, data, colors)): plt.gca().text(i-0.3, min(v, ymax)+0.02, "{0:.4f}".format(v), color=color, fontweight="bold") plt.show() def picture_this_hist_yours(data): histogram_helper(data, 'RMSE: your model vs. 
other approaches', last_label='Yours') def picture_this_hist_all(data): histogram_helper(data, 'RMSE: final comparison') """ Explanation: An RNN for short-term predictions This model will try to predict the next value in a short sequence based on historical data. This can be used for example to forecast demand based on a couple of weeks of sales data. End of explanation """ DATA_SEQ_LEN = 1024*128 data = np.concatenate([create_time_series(waveform, DATA_SEQ_LEN) for waveform in Waveforms]) # 4 different wave forms picture_this_1(data, DATA_SEQ_LEN) DATA_LEN = DATA_SEQ_LEN * 4 # since we concatenated 4 sequences """ Explanation: Generate fake dataset End of explanation """ RNN_CELLSIZE = 32 # size of the RNN cells SEQLEN = 16 # unrolled sequence length BATCHSIZE = 32 # mini-batch size LAST_N = SEQLEN//2 # loss computed on last N element of sequence in advanced RNN model """ Explanation: Hyperparameters End of explanation """ picture_this_2(data, BATCHSIZE, SEQLEN) # execute multiple times to see different sample sequences """ Explanation: Visualize training sequences This is what the neural network will see during training. End of explanation """ # training to predict the same sequence shifted by one (next value) labeldata = np.roll(data, -1) # cut data into sequences traindata = np.reshape(data, [-1, SEQLEN]) labeldata = np.reshape(labeldata, [-1, SEQLEN]) # make an evaluation dataset by cutting the sequences differently evaldata = np.roll(data, -SEQLEN//2) evallabels = np.roll(evaldata, -1) evaldata = np.reshape(evaldata, [-1, SEQLEN]) evallabels = np.reshape(evallabels, [-1, SEQLEN]) def get_training_dataset(last_n=1): dataset = tf.data.Dataset.from_tensor_slices( ( traindata, # features labeldata[:,-last_n:SEQLEN] # targets: the last element or last n elements in the shifted sequence ) ) # Dataset API used here to put the dataset into shape dataset = dataset.repeat() dataset = dataset.shuffle(DATA_LEN//SEQLEN) # shuffling is important ! (Number of sequences in shuffle buffer: all of them) dataset = dataset.batch(BATCHSIZE, drop_remainder = True) return dataset def get_evaluation_dataset(last_n=1): dataset = tf.data.Dataset.from_tensor_slices( ( evaldata, # features evallabels[:,-last_n:SEQLEN] # targets: the last element or last n elements in the shifted sequence ) ) # Dataset API used here to put the dataset into shape dataset = dataset.batch(evaldata.shape[0], drop_remainder = True) # just one batch with everything return dataset """ Explanation: Prepare datasets End of explanation """ train_ds = get_training_dataset() for features, labels in train_ds.take(10): print("input_shape:", features.numpy().shape, ", shape of labels:", labels.numpy().shape) """ Explanation: Peek at the data End of explanation """ # this is how to create a Keras model from neural network layers def compile_keras_sequential_model(list_of_layers, model_name): # a tf.keras.Sequential model is a sequence of layers model = tf.keras.Sequential(list_of_layers, name=model_name) # to finalize the model, specify the loss, the optimizer and metrics model.compile( loss = 'mean_squared_error', optimizer = 'rmsprop', metrics = ['RootMeanSquaredError']) # this prints a description of the model model.summary() return model # # three very simplistic "models" that require no training. Can you beat them ? 
# # SIMPLISTIC BENCHMARK MODEL 1 predict_same_as_last_value = lambda x: x[:,-1] # shape of x is [BATCHSIZE,SEQLEN] # SIMPLISTIC BENCHMARK MODEL 2 predict_trend_from_last_two_values = lambda x: x[:,-1] + (x[:,-1] - x[:,-2]) # SIMPLISTIC BENCHMARK MODEL 3 predict_random_value = lambda x: tf.random.uniform(tf.shape(x)[0:1], -2.0, 2.0) def model_layers_from_lambda(lambda_fn, input_shape, output_shape): return [tf.keras.layers.Lambda(lambda_fn, input_shape=input_shape), tf.keras.layers.Reshape(output_shape)] model_layers_RAND = model_layers_from_lambda(predict_random_value, input_shape=[SEQLEN,], output_shape=[1,]) model_layers_LAST = model_layers_from_lambda(predict_same_as_last_value, input_shape=[SEQLEN,], output_shape=[1,]) model_layers_LAST2 = model_layers_from_lambda(predict_trend_from_last_two_values, input_shape=[SEQLEN,], output_shape=[1,]) # # three neural network models for comparison, in increasing order of complexity # # BENCHMARK MODEL 4: linear model (RMSE: 0.215 after 10 epochs) model_layers_LINEAR = [tf.keras.layers.Dense(1, input_shape=[SEQLEN,])] # output shape [BATCHSIZE, 1] # BENCHMARK MODEL 5: 2-layer dense model (RMSE: 0.197 after 10 epochs) model_layers_DNN = [tf.keras.layers.Dense(SEQLEN//2, activation='relu', input_shape=[SEQLEN,]), # input shape [BATCHSIZE, SEQLEN] tf.keras.layers.Dense(1)] # output shape [BATCHSIZE, 1] # BENCHMARK MODEL 6: convolutional (RMSE: 0.186 after 10 epochs) model_layers_CNN = [ tf.keras.layers.Reshape([SEQLEN, 1], input_shape=[SEQLEN,]), # [BATCHSIZE, SEQLEN, 1] is necessary for conv model tf.keras.layers.Conv1D(filters=8, kernel_size=4, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN, 8] tf.keras.layers.Conv1D(filters=16, kernel_size=3, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN, 8] tf.keras.layers.Conv1D(filters=8, kernel_size=1, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN, 8] tf.keras.layers.MaxPooling1D(pool_size=2, strides=2), # [BATCHSIZE, SEQLEN//2, 8] tf.keras.layers.Conv1D(filters=8, kernel_size=3, activation='relu', padding="same"), # [BATCHSIZE, SEQLEN//2, 8] tf.keras.layers.MaxPooling1D(pool_size=2, strides=2), # [BATCHSIZE, SEQLEN//4, 8] # mis-using a conv layer as linear regression :-) tf.keras.layers.Conv1D(filters=1, kernel_size=SEQLEN//4, activation=None, padding="valid"), # output shape [BATCHSIZE, 1, 1] tf.keras.layers.Reshape([1,]) ] # output shape [BATCHSIZE, 1] # instantiate the benchmark models and train those that need training steps_per_epoch = steps_per_epoch = DATA_LEN // SEQLEN // BATCHSIZE NB_BENCHMARK_EPOCHS = 10 model_RAND = compile_keras_sequential_model(model_layers_RAND, "RAND") # Simplistic model without parameters. It needs no training. model_LAST = compile_keras_sequential_model(model_layers_LAST, "LAST") # Simplistic model without parameters. It needs no training. model_LAST2 = compile_keras_sequential_model(model_layers_LAST2, "LAST2") # Simplistic model without parameters. It needs no training. 
model_LINEAR = compile_keras_sequential_model(model_layers_LINEAR, "LINEAR") model_LINEAR.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=NB_BENCHMARK_EPOCHS) model_DNN = compile_keras_sequential_model(model_layers_DNN, "DNN") model_DNN.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=NB_BENCHMARK_EPOCHS) model_CNN = compile_keras_sequential_model(model_layers_CNN, "CNN") model_CNN.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=NB_BENCHMARK_EPOCHS) # evaluate the benchmark models benchmark_models = [model_RAND, model_LAST, model_LAST2, model_LINEAR, model_DNN, model_CNN] benchmark_rmses = [] for model in benchmark_models: _, rmse = model.evaluate(get_evaluation_dataset(), steps=1) benchmark_rmses.append(rmse) """ Explanation: Benchmark models We will compare the RNNs against these models. For the time being you can regard them as black boxes. End of explanation """ model_layers_RNN = [ tf.keras.layers.Dense(1, input_shape=[SEQLEN,]) # input shape needed on first layer only # # TODO: Replace the dummy Dense layer with your own layers # ] model_RNN = compile_keras_sequential_model(model_layers_RNN, "RNN") """ Explanation: RNN model [WORK REQUIRED] Train the model as it is, with a single dense layer (i.e. a linear model). Execute the Evaluation and Prediction cells and compare to benchmark models. How good is it ? Implement an RNN model with a single layer GRU cell: tf.keras.layers.GRU(RNN_CELLSIZE)<br/> followed by a linear regression layer (readout): tf.keras.layers.Dense(1)<br/> Note down the shapes at each layer and adjust as necessary with tf.keras.layers.Reshape([…], input_shape=[…])<br/> Hints: The first layer must have an input_shape=[…] parameter tf.keras.layers.GRU implements an unrolled sequence of GRU cells. Its expected input shape is therefore [BATCHSIZE, SEQLEN, 1] By default, tf.keras.layers.GRU only returns the last element in the output sequence. Its default output shape is [BATCHSIZE, RNN_CELLSIZE] A dense layer with a single node has an output shape of [BATCHSIZE, 1] Compare this with the shape of input and output data in the "Peek at the data" cell above. In Keras, tensor shapes have an implicit first dimension of BATCHSIZE. When you write input_shape=[SEQLEN,], the real shape is [BATCHSIZE, SEQLEN] Pen, paper and fresh brain cells <font size="+2">🤯</font> strongly recommened. Have fun ! Now add a second layer of GRU cells. The first layer must feed it the entire output sequence. The syntax for that is tf.keras.layers.GRU(RNN_CELLSIZE, return_sequences=True) You can try three layers too but it will not perform significantly better. Another option for RNNs is to compute the loss on the last N elements of the predicted sequence instead of the last 1. Copy your RNN model and rename it model_RNN_N Modify the last GRU layer to output the full sequence with tf.keras.layers.GRU(RNN_CELLSIZE, return_sequences=True) The readout regression must now be replicated on all outputs in the sequence. The syntax for that is tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)) Keep only the last N values in the sequence. This can be done with a lambda layer: tf.keras.layers.Lambda(lambda x: x[:,-LAST_N:SEQLEN,0]) This model requires different target values during training and evaluation. Use get_training_dataset(last_n=LAST_N) and get_evaluation_dataset(last_n=LAST_N) to get them. Check shapes in the "Peek at the data" cell above. A good value to try is LAST_N=SEQLEN//2. You can now train, evaluate, predict. 
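If you get stuck, here is one possible shape-consistent sketch of the two layer lists described above. This is only an illustration, not the single correct answer; the _example names are placeholders, and it assumes the RNN_CELLSIZE, SEQLEN and LAST_N hyperparameters defined earlier in this notebook.

```python
# One possible RNN solution sketch (shapes noted per layer).
model_layers_RNN_example = [
    tf.keras.layers.Reshape([SEQLEN, 1], input_shape=[SEQLEN,]),  # [BATCHSIZE, SEQLEN, 1]
    tf.keras.layers.GRU(RNN_CELLSIZE, return_sequences=True),     # [BATCHSIZE, SEQLEN, RNN_CELLSIZE]
    tf.keras.layers.GRU(RNN_CELLSIZE),                            # [BATCHSIZE, RNN_CELLSIZE] (last output only)
    tf.keras.layers.Dense(1)]                                     # [BATCHSIZE, 1]

# One possible RNN_N solution sketch: the loss is computed on the last LAST_N outputs.
model_layers_RNN_N_example = [
    tf.keras.layers.Reshape([SEQLEN, 1], input_shape=[SEQLEN,]),  # [BATCHSIZE, SEQLEN, 1]
    tf.keras.layers.GRU(RNN_CELLSIZE, return_sequences=True),     # [BATCHSIZE, SEQLEN, RNN_CELLSIZE]
    tf.keras.layers.GRU(RNN_CELLSIZE, return_sequences=True),     # [BATCHSIZE, SEQLEN, RNN_CELLSIZE]
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),    # [BATCHSIZE, SEQLEN, 1]
    tf.keras.layers.Lambda(lambda x: x[:, -LAST_N:SEQLEN, 0])]    # [BATCHSIZE, LAST_N]
```

Remember that the RNN_N variant must be trained and evaluated with get_training_dataset(last_n=LAST_N) and get_evaluation_dataset(last_n=LAST_N), as noted above.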
Was it worth the extra effort <font size="+2">🤯</font>? (Optional) In the "Benchmark" cell at the end of the notebook, copy-paste the layers of your RNN and RNN_N implementations and run the benchmark. <div style="text-align: right; font-family: monospace"> X shape [BATCHSIZE, SEQLEN, 1]<br/> Y shape [BATCHSIZE, SEQLEN, 1]<br/> H shape [BATCHSIZE, RNN_CELLSIZE*N_LAYERS] </div> End of explanation """ # You can re-execute this cell to continue training NB_EPOCHS = 3 # number of times the data is repeated during training steps_per_epoch = DATA_LEN // SEQLEN // BATCHSIZE model = model_RNN # model to train: model_LINEAR, model_DNN, model_CNN, model_RNN, model_RNN_N train_ds = get_training_dataset() # use last_n=LAST_N for model_RNN_N history = model.fit(train_ds, steps_per_epoch=steps_per_epoch, epochs=NB_EPOCHS) plt.plot(history.history['loss']) plt.show() """ Explanation: Training loop End of explanation """ # Here "evaluating" using the training dataset eval_ds = get_evaluation_dataset() # use last_n=LAST_N for model_RNN_N loss, your_rmse = model.evaluate(eval_ds, steps=1) picture_this_hist_yours(benchmark_rmses + [your_rmse]) """ Explanation: Evaluation End of explanation """ # execute multiple times to see different sample sequences subset = np.random.choice(DATA_LEN//SEQLEN, 8) # pick 8 eval sequences at random predictions = model.predict(evaldata[subset], steps=1) # prediction directly from numpy array picture_this_3(predictions[:,-1], evaldata[subset], evallabels[subset], SEQLEN) """ Explanation: Predictions End of explanation """ your_RNN_layers = [ # # copy-paste your RNN implementation here (layers only) # ] assert len(your_RNN_layers)>0, "the model has no layers" your_RNN_model = compile_keras_sequential_model(your_RNN_layers, 'RNN') your_RNN_N_layers = [ # # copy-paste your RNN_N implementation here (layers only) # ] assert len(your_RNN_N_layers)>0, "the model has no layers" your_RNN_N_model = compile_keras_sequential_model(your_RNN_N_layers, 'RNN_N') # train your models from scratch your_RNN_model.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=NB_BENCHMARK_EPOCHS) your_RNN_N_model.fit(get_training_dataset(last_n=LAST_N), steps_per_epoch=steps_per_epoch, epochs=NB_BENCHMARK_EPOCHS) # evaluate all models rmses = [] benchmark_models = [model_RAND, model_LAST, model_LAST2, model_LINEAR, model_DNN, model_CNN] for model in benchmark_models: _, rmse = model.evaluate(get_evaluation_dataset(), steps=1) rmses.append(rmse) _, rmse = your_RNN_model.evaluate(get_evaluation_dataset(), steps=1) rmses.append(rmse) _, rmse = your_RNN_N_model.evaluate(get_evaluation_dataset(last_n=LAST_N), steps=1) rmses.append(rmse) picture_this_hist_all(rmses) """ Explanation: <a name="benchmark"></a> Benchmark Benchmark all the algorithms. End of explanation """
TwistedHardware/mltutorial
notebooks/tf/.ipynb_checkpoints/2. Tensors-checkpoint.ipynb
gpl-2.0
import tensorflow as tf import sys print("Python Version:",sys.version.split(" ")[0]) print("TensorFlow Version:",tf.VERSION) """ Explanation: <table> <tr> <td style="text-align:left;"><div style="font-family: monospace; font-size: 2em; display: inline-block; width:60%">2. Tensors</div><img src="images/roshan.png" style="width:30%; display: inline; text-align: left; float:right;"></td> <td></td> </tr> </table> Before we go into tensors and programming in TensorFlow, let's take a look at how does it work. TensorFlow Basics TensorFlow has a few concepts that you should be familiar with. TensorFlow executes your code inside a execution engine that you communicate with using an API. In TensorFlow your data is called a Tensor that you can apply operations (OP) to. Your code is converted into a Graph that is executed in an execution engine called a Session. So your python code is just a representation of your graph that can be executed in a session. You can see how your data (or tensors) flow from one operation to the next, hens the name TensorFlow. Since version 1.5 and as of version 1.8 there are two methods to execute code in TensorFlow: Graph Execution Eager Execution The main difference is in graph execution is a type of declarative programming and eager execution is a type of imperative programming. In plain English, the difference is graph execution defines your code as a graph and executes in a session. Your objects in python are not the actual objects inside the session, they are only a reference to them. Eager execution executes your code as you run it giving you a better control of your program while it is running so you are not stuck with a predefined graph. So why do we even bother with graph execution? Performance is a big issue in machine learning and a small difference in execution time can save you in a long project weeks of your time and 1000s of hours of GPU time. Support is another issue for now where some features of TensorFlow do not work in Eager Execution like high level estimators. We will focus in the following tutorials only on graph execution and we will cover eager execution later in this series. Tensors Tensors can be though of a scalar variable or an array of any dimension. Tensors are the main object to store pass data, sore variables and constants and all operations that take in data take it in tensor format and all operations that output data uses the tensor format for that too. Tensor Shape and Rank Tensors have a rank and a shape so for scalar values, we use rank-0 tensors of a shape () which is an empty shape Assuming we need a variable or a constant number to use in our software, we can represent it as a tensor of rank-0. A rank-1 tensor can be though of as an vector or a one dimensional array. To create a rank-1 tensor with shape (3) this will create a tensor that can hold three values in a single dimensional array. A rank-2 is a matrix or a two dimensional array. This can be used to hold two dimensional data like a black and white image. The shape of the tensor can match the shape of the image so to hold a 256x256 pixel image in a tensor, you can create a rank-2 tensor of shape (256,256). A rank-3 tensor is a three dimensional array. To can be used to hold three dimensional data like a color image represented in (RGB). To create a tensor to hold an color image of size 256x256, you can create a rank-3 tensor of shape (256,256,3). 
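As a quick illustration, here is a minimal sketch of tensors at these ranks, using the tf.zeros() generator introduced later in this notebook (the variable names are just for illustration):

```python
scalar    = tf.zeros(())             # rank-0, shape ()
vector    = tf.zeros((3,))           # rank-1, shape (3)
bw_image  = tf.zeros((256, 256))     # rank-2, e.g. a 256x256 black and white image
rgb_image = tf.zeros((256, 256, 3))  # rank-3, e.g. a 256x256 RGB color image
```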
TensorFlow allows tensors in higher dimensions but you will very rarely see tensors of a rank exceeding 5 of shape (batch size, width, height, RGB, frames) for representing a batch of video clips. Importing Tensor Flow Let's import TensorFlow and start working with some tensors. End of explanation """ sess = tf.InteractiveSession() """ Explanation: Graph Execution TensorFlow executes your code inside a C++ program and returns the results through the TensorFlow API. Since we are using Python, we will be using TensorFlow Python API which is the most documented and most used API of TensorFlow. Sice we are using graph execution, there are two ways to create a session: Session Interactive Session Sessions and interactive sessions, use your code to build a "Graph" which is a representation of your code inside TensorFlow's execution engine. The main difference between them is interactive session makes itself the default session so since we are using only one session for our code, we will use that. For now let's start an interactive session and start flowing some tensors! End of explanation """ a = tf.zeros(()) """ Explanation: Generating new Tensors Since we have an interactive session, let's create a tensor. There are two common ways to create a tensor tf.zeros() and tf.ones(). Each one of them takes a python tuple or an array as the shape of the tensor. Let's start be creating a rank-0 tensor. End of explanation """ a """ Explanation: We create a tensor and assigned it to a local variable named a. When we check the value of a this is what we get. End of explanation """ a.eval() """ Explanation: Notice there is no value. You need to call eval() method of the tensor to get the actual value. This method takes an optional parameter where you can pass your session. Since we are using interactive session, we have a default one so we don't need to pass a session. End of explanation """ a.shape """ Explanation: You should know that eval() method returns a numpy.float32 (or what ever the type of the tensor is) if the rank of the tensor is 0 and numpy.ndarray if the tensor of rank 1 or higher. Numpy is a multi-dimensional array library for python that runs the operations in a C program and interfaces back with python to ensure fast array operations. We can also check the rank and shape of the tensor. End of explanation """ a.shape.ndims """ Explanation: the rank would be the number of dimensions. End of explanation """ a.name """ Explanation: Notice the name inside the TensorFlow execution engine is not a. It is zeros:0 which is an auto generated name for the variable. The auto generated name is the name of the operation that generated the tensor and then an the index of the of the tensor in the output of the operation. End of explanation """ tf.zeros(()) """ Explanation: If you created another variable using the name operation, it will be named zeros_1:0. End of explanation """ b = tf.zeros((3), name="b") b """ Explanation: Now let's create a second tensor of shape (3) which is going to be a rank-1 tensor. This time we will name it b and store it in a local variable named b. End of explanation """ type(b.eval()) """ Explanation: Notice the name of the variable now is b:0 which is the name that we gave it and an auto incrementing index. We can also get the value in the same way using eval() method. End of explanation """ sess.run(b) """ Explanation: You can also get the value of a tensor by executing the tensor using your interactive session. 
End of explanation """ tf.fill((2,2), 5).eval() """ Explanation: You can also fill the tensor with any other value you want other than 0 and 1 using fill() function. End of explanation """ tf.zeros((10,3)).eval() """ Explanation: Notice that the data type of this tensor is int32 and not float32 because you initialized the tensor with an integer 5 and not 5.0. Tensor Shape For multi dimensional tensors, the shape is passed as a tuple or an array. The way this array is arranged is from the outer most dimension to the inner dimensions. So for a tensor that should represents 10 items and each item has three numbers, the shape would be (10,3). End of explanation """ tf.zeros((2,3,4)).eval() """ Explanation: For higher dimensions the same rules applies. Let say we have 2 items and each item has 3 parts and each on of these parts consists of 4 numbers the shape would be (2,3,4) End of explanation """ arr1 = tf.random_normal((1000,)) arr1 """ Explanation: Note: This is the opposite of the how matrix shape notation is written in mathematics. A matrix of shape $A_{(3,10)}$ can be represented in TensorFlow as (10,3). The reason for that is in mathematics the shape is $(Columns,Rows)$ and TensorFlow uses (Outer,Inner) which translates in 2-D tensor as (Rows,Columns). Generating Tensors with Random Values In many cases, you want to generate a new tensor but we want to start with random values stored in the tensor. The way we do that is using one the random generator of TensorFlow. Normal Distribution tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) End of explanation """ import matplotlib.pyplot as plt %matplotlib inline plt.hist(arr1.eval(), bins=15); """ Explanation: This function returns random values using normal distribution which is also known as Gaussian distribution or informally called a "Bell Curve". To better understand it, let's first look at a graph showing this distribution. Normal distribution of of mean $\mu$ and standard deviation $\sigma$ is denoted $N(\mu, \sigma)$ (More about that in "The Math Behind It"). To do that we will import Matplotlib which is a very common charting library for Python. End of explanation """ arr1 = tf.random_normal((1000,), mean=20.0) plt.hist(arr1.eval(), bins=15); """ Explanation: Notice the bill shape of the curve where you get more values values around your mean a few an values as you move away from the mean. You can also change the mean. End of explanation """ arr1 = tf.random_normal((1000,), stddev=2, name="arr1") arr2 = tf.random_normal((1000,), stddev=1, name="arr2") plt.hist([arr1.eval(), arr2.eval()], bins=15); """ Explanation: You can also control how much of concentrated your random numbers will be around the mean by controlling the standard deviation. Higher standard deviation means less values around the mean and wider distribution. End of explanation """ plt.hist(tf.truncated_normal((1000,)).eval(), bins=15); """ Explanation: One more note on normal distribution, if you created a large tensor with millions or tens of millions of random values, some of these values will fall really far from the normal. With some machine learning algorithms this might create instability. You can avoid that by using truncated_normal() function instead of random_normal(). This will re-sample any values that falls more than 2 standard deviations from the mean. 
End of explanation """ arr1 = tf.random_uniform((1000,)) plt.hist(arr1.eval(), bins=15); """ Explanation: Uniform Distribution The other common distribution will the uniform one. This will generate values with equal probability of falling anywhere between two numbers. tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None) End of explanation """ tf.range(5).eval() """ Explanation: Generating Tensors with Sequence Values You can generate tensors with sequence values using the following function: tf.range(start, limit=None, delta=1, dtype=None, name='range') End of explanation """ tf.range(0, 5).eval() """ Explanation: Which is equivalent to: End of explanation """ tf.range(0, 5, 2).eval() """ Explanation: Notice that the output of range() function will never reach the limit parameter. You can also control the delta which is the spacing between the tensor elements. End of explanation """ a = tf.range(6) tf.reshape(a, (3,2)).eval() """ Explanation: Reshaping Tensors You can reshape tensors using this function: tf.reshape(tensor, shape, name=None) End of explanation """ a = tf.ones((2,2)) b = tf.fill((2,2), 10.0) # Notice we used 10.0 and not 10 to ensure the data type will be float32 c = a + b c.eval() """ Explanation: Tensor Arithmetics You can use standard python arithmetics on tensors and get results in a new tensor. Addition End of explanation """ d = c * 2.0 d.eval() (d + 3).eval() """ Explanation: Element-wise Operations End of explanation """ i = tf.eye(3,3) i.eval() """ Explanation: Matrix Operations TensorFlow support a variety of matrix operations. Identity Matrix An identity matrix is a 2 dimensional matrix where all the values are zeros exception diagonally where it has values of 1. This is an example of an identity matrix of shape (3,3). $$\begin{bmatrix} {1} & {0} & {0} \ {0} & {1} & {0} \ {0} & {0} & {1} \ \end{bmatrix}$$ To do that in TensorFlow use the tf.eye() function. End of explanation """ a = tf.range(1,9) i = tf.reshape(a, (2,4)) i.eval() it = tf.matrix_transpose(i) it.eval() """ Explanation: Transpose Transpose is another operation that is commonly used in matrix calculations. Transpose converts rows to columns. Assume you have a matrix $\mathbf{A}$, a transpose operation over this matrix produces $\mathbf{A^T}$ pronounced transpose of $\mathbf{A}$. End of explanation """ a = tf.ones((2,3)) b = tf.ones((3,4)) c = tf.matmul(a ,b) print("c has the shape of:", c.shape) c.eval() """ Explanation: Matrix Multiplication One of the most common operations for matrices in deep learning in matrix multiplication. Matrix multiplication is not an element-wise operation. The exact math will be discussed in the last section of this tutorial "The Math Behind It". But for now to give you the basics you should know the following: Assume we have two matrices $\mathbf{A}$ and $\mathbf{B}$. The shape of $\mathbf{A}$ is (m,n) and the shape of $\mathbf{B}$ is (o,p) we can write these two matrices with their shape as $\mathbf{A}{(m,n)}$ and $\mathbf{B}{(o,p)}$. Multiplying these two matrices produces a matrix of the shape (m,p) IF $n=o$ like this: $\mathbf{A}{(m,n)} \mathbf{B}{(o,p)}=\mathbf{C}_{(m,p)} \leftarrow n=o$ Notice the inner shape of these two matrices is the same and the output matrix has the shape of the outer shape of it these two matrices. If the inner shape of the matrices does not match the product doesn’t exist. We can use tf.matmul() function to do that. 
End of explanation """ g = [88, 94, 71, 97, 84, 82, 80, 98, 91, 93] total = sum(g) count = len(g) mean = total/count mean """ Explanation: The Math Behind It Standard Deviation $\sigma$ or $s$ Standard deviation is the measure of how elements in a set vary from the mean. So a sample with most data points close to the mean has low standard deviation and it gets higher as the data points start moving away from the mean. In statistics, standard deviation is denoted as the small letter sigma $\sigma$. The formula to calculate standard deviation for a whole population is: $$\sigma={\sqrt {\frac {\sum_{i=1}^N(x_{i}-{\overline {x}})^{2}}{N}}}$$ Let's break it down and see how to calculate standard deviation. Assume we have exam grades of 10 students and the grades are so follow: | ID | Grade | | ----- |:------:| | 1 | 88 | | 2 | 94 | | 3 | 71 | | 4 | 97 | | 5 | 84 | | 6 | 82 | | 7 | 80 | | 8 | 98 | | 9 | 91 | | 10 | 93 | First thing we need to do is calculate the mean. The mean is denoted as $\overline {x}$ (pronouned "x bar"). To calculate the mean (AKA average) get the sum all numbers and divide it by their count. It is also commonly denoted as a small letter mu $\mu$. Assume you have $N$ values, this will be the formula to calculate the mean: $$\overline {x} = \frac{x_1 + x_2 + ... + x_N}{N} = \frac{\sum_{i=1}^N x_i}{N}$$ So let's calculate that. End of explanation """ from math import sqrt σ = sqrt(sum([(x-mean)**2 for x in g]) / count) σ """ Explanation: Now that we know the mean, we can go back to the original equation and calculate the standard deviation. For that we will do it with loop comprehension (One of the greatest features of python). So looking at the equation one more: $$\sigma={\sqrt {\frac {\sum_{i=1}^N(x_{i}-{\overline {x}})^{2}}{N}}}$$ First we need to get each element in our grades $x_i$ and subtract it from the mean $\overline {x}$ then square it and take the sum of that. python a = [(x-mean)**2 for x in g] b = sum(a) Divide that by the number of elements $N$ then take the square root python variance = b / count σ = sqrt(variance) We can write the whole thing in one like this: End of explanation """ import numpy as np np.std(g) """ Explanation: Note that standard deviation is a build in function in NumPy, TensorFlow and many other languages and libraries. End of explanation """ t = tf.constant(g, dtype=tf.float64) mean_t, var_t = tf.nn.moments(t, axes=0) sqrt(var_t.eval()) """ Explanation: In TensorFlow End of explanation """ variance = sum([(x-mean)**2 for x in g]) / count variance """ Explanation: Variance $\sigma^2$, $s^2$ or $Var(X)$ It is just the square of the standard deviation. End of explanation """ a = [[1,0], [3,2], [1,4], ] b = [[2,1,2], [1,2,3], ] """ Explanation: Matrix Multiplication Arguably, the most common matrix operation you will perform in deep learning is multiplying matrices. Understanding this operation is a good start to understanding the math behind neural networks. This operation is also known as "dot product". Assume we have two matrices $\mathbf{A}{(2,3)}$ and $\mathbf{B}{(3,2)}$. The dot product of these two matrices $\mathbf{A}{(2,3)} . \mathbf{B}{(3,2)}$ is calculated as follows: $\mathbf{A}_{(2,3)} = \begin{bmatrix} {1} & {0} \ {3} & {2} \ {1} & {4} \ \end{bmatrix}$ $\mathbf{B}_{(3,2)} = \begin{bmatrix} {2} & {1} & {2} \ {1} & {2} & {3} \ \end{bmatrix}$ $\mathbf{C}{(2,2)} = \mathbf{A}{(2,3)} . 
\mathbf{B}_{(3,2)}$ $\mathbf{C}_{(2,2)} = \begin{bmatrix} {2\times1 + 3\times1 + 1\times2} & {0\times2 + 2\times1 + 4\times2} \ {1\times1 + 3\times2 + 1\times3} & {0\times1 + 2\times2 + 4\times3} \ \end{bmatrix} = \begin{bmatrix} {2 + 3 + 2} & {0 + 2 + 8} \ {1 + 6 + 3} & {0 + 4 + 12} \ \end{bmatrix} = \begin{bmatrix} {7} & {10} \ {10} & {16} \ \end{bmatrix}$ This is an animation that shows how it is done step by step. Now let's confirm it with TensorFlow. End of explanation """ a = tf.constant(a) b = tf.constant(b) c = tf.matmul(tf.matrix_transpose(a), tf.matrix_transpose(b)) c.eval() """ Explanation: Remember from before that the mathematical shape of a matrix is the opposite of the TensorFlow shape of a tensor. So instead of rewriting our arrays, we will just use transpose to make rows into columns and columns into rows. End of explanation """ c = tf.matmul(a,b, transpose_a=True, transpose_b=True) c.eval() """ Explanation: Luckily there is also an easier way to do that. End of explanation """
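As an optional cross-check (a sketch using NumPy, which is already imported above; the _np variable names are just for illustration), the same product can be reproduced outside TensorFlow:

```python
a_np = np.array([[1, 0], [3, 2], [1, 4]])  # same values as the tensor a, shape (3, 2)
b_np = np.array([[2, 1, 2], [1, 2, 3]])    # same values as the tensor b, shape (2, 3)
print(np.matmul(a_np.T, b_np.T))           # [[ 7 10] [10 16]], matching c.eval() above
```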
wanderer2/pymc3
docs/source/notebooks/getting_started.ipynb
apache-2.0
import numpy as np import matplotlib.pyplot as plt # Initialize random number generator np.random.seed(123) # True parameter values alpha, sigma = 1, 1 beta = [1, 2.5] # Size of dataset size = 100 # Predictor variable X1 = np.random.randn(size) X2 = np.random.randn(size) * 0.2 # Simulate outcome variable Y = alpha + beta[0]*X1 + beta[1]*X2 + np.random.randn(size)*sigma """ Explanation: Getting started with PyMC3 Authors: John Salvatier, Thomas V. Wiecki, Christopher Fonnesbeck Note: This text is taken from the PeerJ CS publication on PyMC3. Abstract Probabilistic Programming allows for automatic Bayesian inference on user-defined probabilistic models. Recent advances in Markov chain Monte Carlo (MCMC) sampling allow inference on increasingly complex models. This class of MCMC, known as Hamliltonian Monte Carlo, requires gradient information which is often not readily available. PyMC3 is a new open source Probabilistic Programming framework written in Python that uses Theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to C for increased speed. Contrary to other Probabilistic Programming languages, PyMC3 allows model specification directly in Python code. The lack of a domain specific language allows for great flexibility and direct interaction with the model. This paper is a tutorial-style introduction to this software package. Introduction Probabilistic programming (PP) allows flexible specification of Bayesian statistical models in code. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features next-generation Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS; Hoffman, 2014), a self-tuning variant of Hamiltonian Monte Carlo (HMC; Duane, 1987). This class of samplers works well on high dimensional and complex posterior distributions and allows many complex models to be fit without specialized knowledge about fitting algorithms. HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo, which means you usually don't need to have specialized knowledge about how the algorithms work. PyMC3, Stan (Stan Development Team, 2014), and the LaplacesDemon package for R are currently the only PP packages to offer HMC. Probabilistic programming in Python confers a number of advantages including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via C, C++, Fortran or Cython. These features make it relatively straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by Bayesian analysis. While most of PyMC3's user-facing features are written in pure Python, it leverages Theano (Bergstra et al., 2010) to transparently transcode models to C and compile them to machine code, thereby boosting performance. Theano is a library that allows expressions to be defined using generalized vector data structures called tensors, which are tightly integrated with the popular NumPy ndarray data structure, and similarly allow for broadcasting and advanced indexing, just as NumPy arrays do. 
Theano also automatically optimizes the likelihood's computational graph for speed and provides simple GPU integration. Here, we present a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. We will first see the basics of how to use PyMC3, motivated by a simple example: installation, data creation, model definition, model fitting and posterior analysis. Then we will cover two case studies and use them to show how to define and fit more sophisticated models. Finally we will show how to extend PyMC3 and discuss other useful features: the Generalized Linear Models subpackage, custom distributions, custom transformations and alternative storage backends. Installation Running PyMC3 requires a working Python interpreter, either version 2.7 (or more recent) or 3.4 (or more recent); we recommend that new users install version 3.4. A complete Python installation for Mac OSX, Linux and Windows can most easily be obtained by downloading and installing the free Anaconda Python Distribution by ContinuumIO. PyMC3 can be installed using pip (https://pip.pypa.io/en/latest/installing.html): pip install git+https://github.com/pymc-devs/pymc3 PyMC3 depends on several third-party Python packages which will be automatically installed when installing via pip. The four required dependencies are: Theano, NumPy, SciPy, and Matplotlib. To take full advantage of PyMC3, the optional dependencies Pandas and Patsy should also be installed. These are not automatically installed, but can be installed by: pip install patsy pandas The source code for PyMC3 is hosted on GitHub at https://github.com/pymc-devs/pymc3 and is distributed under the liberal Apache License 2.0. On the GitHub site, users may also report bugs and other issues, as well as contribute code to the project, which we actively encourage. A Motivating Example: Linear Regression To introduce model definition, fitting and posterior analysis, we first consider a simple Bayesian linear regression model with normal priors for the parameters. We are interested in predicting outcomes $Y$ as normally-distributed observations with an expected value $\mu$ that is a linear function of two predictor variables, $X_1$ and $X_2$. $$\begin{aligned} Y &\sim \mathcal{N}(\mu, \sigma^2) \ \mu &= \alpha + \beta_1 X_1 + \beta_2 X_2 \end{aligned}$$ where $\alpha$ is the intercept, and $\beta_i$ is the coefficient for covariate $X_i$, while $\sigma$ represents the observation error. Since we are constructing a Bayesian model, the unknown variables in the model must be assigned a prior distribution. We choose zero-mean normal priors with variance of 100 for both regression coefficients, which corresponds to weak information regarding the true parameter values. We choose a half-normal distribution (normal distribution bounded at zero) as the prior for $\sigma$. $$\begin{aligned} \alpha &\sim \mathcal{N}(0, 100) \ \beta_i &\sim \mathcal{N}(0, 100) \ \sigma &\sim \lvert\mathcal{N}(0, 1){\rvert} \end{aligned}$$ Generating data We can simulate some artificial data from this model using only NumPy's random module, and then use PyMC3 to try to recover the corresponding parameters. We are intentionally generating the data to closely correspond the PyMC3 model structure. 
End of explanation """ %matplotlib inline fig, axes = plt.subplots(1, 2, sharex=True, figsize=(10,4)) axes[0].scatter(X1, Y) axes[1].scatter(X2, Y) axes[0].set_ylabel('Y'); axes[0].set_xlabel('X1'); axes[1].set_xlabel('X2'); """ Explanation: Here is what the simulated data look like. We use the pylab module from the plotting library matplotlib. End of explanation """ from pymc3 import Model, Normal, HalfNormal """ Explanation: Model Specification Specifying this model in PyMC3 is straightforward because the syntax is as close to the statistical notation. For the most part, each line of Python code corresponds to a line in the model notation above. First, we import the components we will need from PyMC. End of explanation """ basic_model = Model() with basic_model: # Priors for unknown model parameters alpha = Normal('alpha', mu=0, sd=10) beta = Normal('beta', mu=0, sd=10, shape=2) sigma = HalfNormal('sigma', sd=1) # Expected value of outcome mu = alpha + beta[0]*X1 + beta[1]*X2 # Likelihood (sampling distribution) of observations Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y) """ Explanation: Now we build our model, which we will present in full first, then explain each part line-by-line. End of explanation """ help(Normal) #try help(Model), help(Uniform) or help(basic_model) """ Explanation: The first line, python basic_model = Model() creates a new Model object which is a container for the model random variables. Following instantiation of the model, the subsequent specification of the model components is performed inside a with statement: python with basic_model: This creates a context manager, with our basic_model as the context, that includes all statements until the indented block ends. This means all PyMC3 objects introduced in the indented code block below the with statement are added to the model behind the scenes. Absent this context manager idiom, we would be forced to manually associate each of the variables with basic_model right after we create them. If you try to create a new random variable without a with model: statement, it will raise an error since there is no obvious model for the variable to be added to. The first three statements in the context manager: python alpha = Normal('alpha', mu=0, sd=10) beta = Normal('beta', mu=0, sd=10, shape=2) sigma = HalfNormal('sigma', sd=1) create a stochastic random variables with a Normal prior distributions for the regression coefficients with a mean of 0 and standard deviation of 10 for the regression coefficients, and a half-normal distribution for the standard deviation of the observations, $\sigma$. These are stochastic because their values are partly determined by its parents in the dependency graph of random variables, which for priors are simple constants, and partly random (or stochastic). We call the Normal constructor to create a random variable to use as a normal prior. The first argument is always the name of the random variable, which should almost always match the name of the Python variable being assigned to, since it sometimes used to retrieve the variable from the model for summarizing output. The remaining required arguments for a stochastic object are the parameters, in this case mu, the mean, and sd, the standard deviation, which we assign hyperparameter values for the model. In general, a distribution's parameters are values that determine the location, shape or scale of the random variable, depending on the parameterization of the distribution. 
Most commonly used distributions, such as Beta, Exponential, Categorical, Gamma, Binomial and many others, are available in PyMC3. The beta variable has an additional shape argument to denote it as a vector-valued parameter of size 2. The shape argument is available for all distributions and specifies the length or shape of the random variable, but is optional for scalar variables, since it defaults to a value of one. It can be an integer, to specify an array, or a tuple, to specify a multidimensional array (e.g. shape=(5,7) makes random variable that takes on 5 by 7 matrix values). Detailed notes about distributions, sampling methods and other PyMC3 functions are available via the help function. End of explanation """ from pymc3 import find_MAP map_estimate = find_MAP(model=basic_model) print(map_estimate) """ Explanation: Having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship: python mu = alpha + beta[0]*X1 + beta[1]*X2 This creates a deterministic random variable, which implies that its value is completely determined by its parents' values. That is, there is no uncertainty beyond that which is inherent in the parents' values. Here, mu is just the sum of the intercept alpha and the two products of the coefficients in beta and the predictor variables, whatever their values may be. PyMC3 random variables and data can be arbitrarily added, subtracted, divided, multiplied together and indexed-into to create new random variables. This allows for great model expressivity. Many common mathematical functions like sum, sin, exp and linear algebra functions like dot (for inner product) and inv (for inverse) are also provided. The final line of the model, defines Y_obs, the sampling distribution of the outcomes in the dataset. python Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y) This is a special case of a stochastic variable that we call an observed stochastic, and represents the data likelihood of the model. It is identical to a standard stochastic, except that its observed argument, which passes the data to the variable, indicates that the values for this variable were observed, and should not be changed by any fitting algorithm applied to the model. The data can be passed in the form of either a numpy.ndarray or pandas.DataFrame object. Notice that, unlike for the priors of the model, the parameters for the normal distribution of Y_obs are not fixed values, but rather are the deterministic object mu and the stochastic sigma. This creates parent-child relationships between the likelihood and these two variables. Model fitting Having completely specified our model, the next step is to obtain posterior estimates for the unknown variables in the model. Ideally, we could calculate the posterior estimates analytically, but for most non-trivial models, this is not feasible. We will consider two approaches, whose appropriateness depends on the structure of the model and the goals of the analysis: finding the maximum a posteriori (MAP) point using optimization methods, and computing summaries based on samples drawn from the posterior distribution using Markov Chain Monte Carlo (MCMC) sampling methods. Maximum a posteriori methods The maximum a posteriori (MAP) estimate for a model, is the mode of the posterior distribution and is generally found using numerical optimization methods. 
This is often fast and easy to do, but only gives a point estimate for the parameters and can be biased if the mode isn't representative of the distribution. PyMC3 provides this functionality with the find_MAP function. Below we find the MAP for our original model. The MAP is returned as a parameter point, which is always represented by a Python dictionary of variable names to NumPy arrays of parameter values. End of explanation """ from scipy import optimize map_estimate = find_MAP(model=basic_model, fmin=optimize.fmin_powell) print(map_estimate) """ Explanation: By default, find_MAP uses the Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimization algorithm to find the maximum of the log-posterior but also allows selection of other optimization algorithms from the scipy.optimize module. For example, below we use Powell's method to find the MAP. End of explanation """ from pymc3 import NUTS, sample from scipy import optimize with basic_model: # obtain starting values via MAP start = find_MAP(fmin=optimize.fmin_powell) # draw 2000 posterior samples trace = sample(2000, start=start) """ Explanation: It is important to note that the MAP estimate is not always reasonable, especially if the mode is at an extreme. This can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. This will often occur in hierarchical models with the variance parameter for the random effect. If the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together. Most techniques for finding the MAP estimate also only find a local optimum (which is often good enough), but can fail badly for multimodal posteriors if the different modes are meaningfully different. Sampling methods Though finding the MAP is a fast and easy way of obtaining estimates of the unknown model parameters, it is limited because there is no associated estimate of uncertainty produced with the MAP estimates. Instead, a simulation-based approach such as Markov chain Monte Carlo (MCMC) can be used to obtain a Markov chain of values that, given the satisfaction of certain conditions, are indistinguishable from samples from the posterior distribution. To conduct MCMC sampling to generate posterior samples in PyMC3, we specify a step method object that corresponds to a particular MCMC algorithm, such as Metropolis, Slice sampling, or the No-U-Turn Sampler (NUTS). PyMC3's step_methods submodule contains the following samplers: NUTS, Metropolis, Slice, HamiltonianMC, and BinaryMetropolis. These step methods can be assigned manually, or assigned automatically by PyMC3. Auto-assignment is based on the attributes of each variable in the model. In general: Binary variables will be assigned to BinaryMetropolis Discrete variables will be assigned to Metropolis Continuous variables will be assigned to NUTS Auto-assignment can be overriden for any subset of variables by specifying them manually prior to sampling. Gradient-based sampling methods PyMC3 has the standard sampling algorithms like adaptive Metropolis-Hastings and adaptive slice sampling, but PyMC3's most capable step method is the No-U-Turn Sampler. NUTS is especially useful on models that have many continuous parameters, a situation where other MCMC algorithms work very slowly. 
It takes advantage of information about where regions of higher probability are, based on the gradient of the log posterior-density. This helps it achieve dramatically faster convergence on large problems than traditional sampling methods achieve. PyMC3 relies on Theano to analytically compute model gradients via automatic differentiation of the posterior density. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo. For random variables that are undifferentiable (namely, discrete variables) NUTS cannot be used, but it may still be used on the differentiable variables in a model that contains undifferentiable variables. NUTS requires a scaling matrix parameter, which is analogous to the variance parameter for the jump proposal distribution in Metropolis-Hastings, although NUTS uses it somewhat differently. The matrix gives the rough shape of the distribution so that NUTS does not make jumps that are too large in some directions and too small in other directions. It is important to set this scaling parameter to a reasonable value to facilitate efficient sampling. This is especially true for models that have many unobserved stochastic random variables or models with highly non-normal posterior distributions. Poor scaling parameters will slow down NUTS significantly, sometimes almost stopping it completely. A reasonable starting point for sampling can also be important for efficient sampling, but not as often. Fortunately NUTS can often make good guesses for the scaling parameters. If you pass a point in parameter space (as a dictionary of variable names to parameter values, the same format as returned by find_MAP) to NUTS, it will look at the local curvature of the log posterior-density (the diagonal of the Hessian matrix) at that point to make a guess for a good scaling vector, which often results in a good value. The MAP estimate is often a good point to use to initiate sampling. It is also possible to supply your own vector or scaling matrix to NUTS, though this is a more advanced use. If you wish to modify a Hessian at a specific point to use as your scaling matrix or vector, you can use find_hessian or find_hessian_diag. For our basic linear regression example in basic_model, we will use NUTS to sample 2000 draws from the posterior using the MAP as the starting point and scaling point. This must also be performed inside the context of the model. End of explanation """ trace['alpha'][-5:] """ Explanation: The sample function runs the step method(s) assigned (or passed) to it for the given number of iterations and returns a Trace object containing the samples collected, in the order they were collected. The trace object can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. The first dimension of the array is the sampling index and the later dimensions match the shape of the variable. We can see the last 5 values for the alpha variable as follows: End of explanation """ from pymc3 import Slice with basic_model: # obtain starting values via MAP start = find_MAP(fmin=optimize.fmin_powell) # instantiate sampler step = Slice(vars=[sigma]) # draw 5000 posterior samples trace = sample(5000, step=step, start=start) """ Explanation: If we wanted to use the slice sampling algorithm to sigma instead of NUTS (which was assigned automatically), we could have specified this as the step argument for sample. 
End of explanation """ from pymc3 import traceplot traceplot(trace); """ Explanation: Posterior analysis PyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot. End of explanation """ from pymc3 import summary summary(trace) """ Explanation: The left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable while the right column contains the samples of the Markov chain plotted in sequential order. The beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients. In addition, the summary function provides a text-based output of common posterior statistics: End of explanation """ try: from pandas_datareader import data except ImportError: !pip install pandas-datareader from pandas_datareader import data import pandas as pd returns = data.get_data_yahoo('SPY', start='2008-5-1', end='2009-12-1')['Adj Close'].pct_change() print(len(returns)) returns.plot(figsize=(10, 6)) plt.ylabel('daily returns in %'); """ Explanation: Case study 1: Stochastic volatility We present a case study of stochastic volatility, time varying stock market volatility, to illustrate PyMC3's use in addressing a more realistic problem. The distribution of market returns is highly non-normal, which makes sampling the volatilities significantly more difficult. This example has 400+ parameters so using common sampling algorithms like Metropolis-Hastings would get bogged down, generating highly autocorrelated samples. Instead, we use NUTS, which is dramatically more efficient. The Model Asset prices have time-varying volatility (variance of day over day returns). In some periods, returns are highly variable, while in others they are very stable. Stochastic volatility models address this with a latent volatility variable, which changes over time. The following model is similar to the one described in the NUTS paper (Hoffman 2014, p. 21). $$\begin{aligned} \sigma &\sim exp(50) \ \nu &\sim exp(.1) \ s_i &\sim \mathcal{N}(s_{i-1}, \sigma^{-2}) \ log(y_i) &\sim t(\nu, 0, exp(-2 s_i)) \end{aligned}$$ Here, $y$ is the daily return series which is modeled with a Student-t distribution with an unknown degrees of freedom parameter, and a scale parameter determined by a latent process $s$. The individual $s_i$ are the individual daily log volatilities in the latent log volatility process. The Data Our data consist of daily returns of the S&P 500 during the 2008 financial crisis. Here, we use pandas-datareader to obtain the price data from Yahoo!-Finance; it can be installed with pip install pandas-datareader. End of explanation """ from pymc3 import Exponential, StudentT, Deterministic from pymc3.math import exp from pymc3.distributions.timeseries import GaussianRandomWalk with Model() as sp500_model: nu = Exponential('nu', 1./10, testval=5.) sigma = Exponential('sigma', 1./.02, testval=.1) s = GaussianRandomWalk('s', sigma**-2, shape=len(returns)) volatility_process = Deterministic('volatility_process', exp(-2*s)) r = StudentT('r', nu, lam=1/volatility_process, observed=returns) """ Explanation: Model Specification As with the linear regression example, specifying the model in PyMC3 mirrors its statistical specification. 
This model employs several new distributions: the Exponential distribution for the $ \nu $ and $\sigma$ priors, the Student-T (StudentT) distribution for distribution of returns, and the GaussianRandomWalk for the prior for the latent volatilities. In PyMC3, variables with purely positive priors like Exponential are transformed with a log transform. This makes sampling more robust. Behind the scenes, a variable in the unconstrained space (named "variableName_log") is added to the model for sampling. In this model this happens behind the scenes for both the degrees of freedom, nu, and the scale parameter for the volatility process, sigma, since they both have exponential priors. Variables with priors that constrain them on two sides, like Beta or Uniform, are also transformed to be unconstrained but with a log odds transform. Although, unlike model specification in PyMC2, we do not typically provide starting points for variables at the model specification stage, we can also provide an initial value for any distribution (called a "test value") using the testval argument. This overrides the default test value for the distribution (usually the mean, median or mode of the distribution), and is most often useful if some values are illegal and we want to ensure we select a legal one. The test values for the distributions are also used as a starting point for sampling and optimization by default, though this is easily overriden. The vector of latent volatilities s is given a prior distribution by GaussianRandomWalk. As its name suggests GaussianRandomWalk is a vector valued distribution where the values of the vector form a random normal walk of length n, as specified by the shape argument. The scale of the innovations of the random walk, sigma, is specified in terms of the precision of the normally distributed innovations and can be a scalar or vector. End of explanation """ from pymc3 import variational import scipy with sp500_model: mu, sds, elbo = variational.advi(n=100000) step = NUTS(scaling=sp500_model.dict_to_array(sds)**2, is_cov=True) trace = sample(2000, step, start=mu, progressbar=True) """ Explanation: Notice that we transform the log volatility process s into the volatility process by exp(-2*s). Here, exp is a Theano function, rather than the corresponding function in NumPy; Theano provides a large subset of the mathematical functions that NumPy does. Also note that we have declared the Model name sp500_model in the first occurrence of the context manager, rather than splitting it into two lines, as we did for the first example. Fitting Before we draw samples from the posterior, it is prudent to find a decent starting value by finding a point of relatively high probability. For this model, the full maximum a posteriori (MAP) point over all variables is degenerate and has infinite density. But, if we fix log_sigma and nu it is no longer degenerate, so we find the MAP with respect only to the volatility process s keeping log_sigma and nu constant at their default values (remember that we set testval=.1 for sigma). We use the Limited-memory BFGS (L-BFGS) optimizer, which is provided by the scipy.optimize package, as it is more efficient for high dimensional functions and we have 400 stochastic random variables (mostly from s). To achieve good convergence with NUTS, it is critical to find a good scaling. The MAP as found by find_MAP() is in our experience a poor choice. 
Instead, we run automatic differentiation variational inference (ADVI) to get a variational estimate of the posterior mean as well as the posterior standard deviations. We can use these estimates to initialize NUTS to achieve faster sampling due to better scaling. End of explanation """ traceplot(trace[200:], [nu, sigma]); """ Explanation: We can check our samples by looking at the traceplot for nu and sigma. End of explanation """ fig, ax = plt.subplots(figsize=(15, 8)) returns.plot(ax=ax) ax.plot(returns.index, 1/np.exp(trace['s',::5].T), 'r', alpha=.03); ax.set(title='volatility_process', xlabel='time', ylabel='volatility'); ax.legend(['S&P500', 'stochastic volatility process']) """ Explanation: Finally we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph. Each is rendered partially transparent (via the alpha argument in Matplotlib's plot function) so the regions where many paths overlap are shaded more darkly. End of explanation """ disaster_data = np.ma.masked_values([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6, 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5, 2, 2, 3, 4, 2, 1, 3, -999, 2, 1, 1, 1, 1, 3, 0, 0, 1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2, 3, 3, 1, -999, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1], value=-999) year = np.arange(1851, 1962) plt.plot(year, disaster_data, 'o', markersize=8); plt.ylabel("Disaster count") plt.xlabel("Year") """ Explanation: As you can see, the model correctly infers the increase in volatility during the 2008 financial crash. Moreover, note that this model is quite complex because of its high dimensionality and dependency-structure in the random walk distribution. NUTS as implemented in PyMC3, however, correctly infers the posterior distribution with ease. Case study 2: Coal mining disasters Consider the following time series of recorded coal mining disasters in the UK from 1851 to 1962 (Jarrett, 1979). The number of disasters is thought to have been affected by changes in safety regulations during this period. Unfortunately, we also have pair of years with missing data, identified as missing by a NumPy MaskedArray using -999 as the marker value. Next we will build a model for this series and attempt to estimate when the change occurred. At the same time, we will see how to handle missing data, use multiple samplers and sample from discrete random variables. End of explanation """ from pymc3 import DiscreteUniform, Poisson from pymc3.math import switch with Model() as disaster_model: switchpoint = DiscreteUniform('switchpoint', lower=year.min(), upper=year.max(), testval=1900) # Priors for pre- and post-switch rates number of disasters early_rate = Exponential('early_rate', 1) late_rate = Exponential('late_rate', 1) # Allocate appropriate Poisson rates to years before and after current rate = switch(switchpoint >= year, early_rate, late_rate) disasters = Poisson('disasters', rate, observed=disaster_data) """ Explanation: Occurrences of disasters in the time series is thought to follow a Poisson process with a large rate parameter in the early part of the time series, and from one with a smaller rate in the later part. We are interested in locating the change point in the series, which perhaps is related to changes in mining safety regulations. 
In our model,
$$
\begin{aligned}
D_t &\sim \text{Pois}(r_t), \quad r_t =
\begin{cases}
e, & \text{if } t \leq s \\
l, & \text{if } t \gt s
\end{cases} \\
s &\sim \text{Unif}(t_l, t_h) \\
e &\sim \text{exp}(1) \\
l &\sim \text{exp}(1)
\end{aligned}
$$
the parameters are defined as follows:
* $D_t$: The number of disasters in year $t$
* $r_t$: The rate parameter of the Poisson distribution of disasters in year $t$.
* $s$: The year in which the rate parameter changes (the switchpoint).
* $e$: The rate parameter before the switchpoint $s$.
* $l$: The rate parameter after the switchpoint $s$.
* $t_l$, $t_h$: The lower and upper boundaries of year $t$.
This model is built much like our previous models. The major differences are the introduction of discrete variables with the Poisson and discrete-uniform priors and the novel form of the deterministic random variable rate.
End of explanation
"""

from pymc3 import Metropolis

with disaster_model:
    step1 = NUTS([early_rate, late_rate])

    # Use Metropolis for switchpoint, and missing values since it accommodates discrete variables
    step2 = Metropolis([switchpoint, disasters.missing_values[0]])

    trace = sample(10000, step=[step1, step2])
"""
Explanation: The logic for the rate random variable, rate = switch(switchpoint >= year, early_rate, late_rate), is implemented using switch, a Theano function that works like an if statement. It uses the first argument to switch between the next two arguments.
Missing values are handled transparently by passing a MaskedArray or a pandas.DataFrame with NaN values to the observed argument when creating an observed stochastic random variable. Behind the scenes, another random variable, disasters.missing_values, is created to model the missing values. All we need to do to handle the missing values is ensure we sample this random variable as well.
Unfortunately, because they are discrete variables and thus have no meaningful gradient, we cannot use NUTS for sampling switchpoint or the missing disaster observations. Instead, we will sample using a Metropolis step method, which implements adaptive Metropolis-Hastings, because it is designed to handle discrete values.
We sample with both samplers at once by passing them to the sample function in a list. Each new sample is generated by first applying step1 then step2.
End of explanation
"""

traceplot(trace);
"""
Explanation: In the trace plot below we can see that there is about a 10-year span that is plausible for a significant change in safety, but a 5-year span that contains most of the probability mass. The distribution is jagged because of the jumpy relationship between the year switchpoint and the likelihood, and not due to sampling error.
End of explanation
"""

import theano.tensor as T
from theano.compile.ops import as_op

@as_op(itypes=[T.lscalar], otypes=[T.lscalar])
def crazy_modulo3(value):
    if value > 0:
        return value % 3
    else:
        return (-value + 1) % 3

with Model() as model_deterministic:
    a = Poisson('a', 1)
    b = crazy_modulo3(a)
"""
Explanation: Arbitrary deterministics
Due to its reliance on Theano, PyMC3 provides many mathematical functions and operators for transforming random variables into new random variables. However, the library of functions in Theano is not exhaustive; therefore, Theano and PyMC3 provide functionality for creating arbitrary Theano functions in pure Python and including these functions in PyMC models. This is supported with the as_op function decorator. 
Theano needs to know the types of the inputs and outputs of a function, which are specified for as_op by itypes for inputs and otypes for outputs. The Theano documentation includes an overview of the available types. End of explanation """ from pymc3.distributions import Continuous class Beta(Continuous): def __init__(self, mu, *args, **kwargs): super(Beta, self).__init__(*args, **kwargs) self.mu = mu self.mode = mu def logp(self, value): mu = self.mu return beta_logp(value - mu) @as_op(itypes=[T.dscalar], otypes=[T.dscalar]) def beta_logp(value): return -1.5 * np.log(1 + (value)**2) with Model() as model: beta = Beta('slope', mu=0, testval=0) """ Explanation: An important drawback of this approach is that it is not possible for theano to inspect these functions in order to compute the gradient required for the Hamiltonian-based samplers. Therefore, it is not possible to use the HMC or NUTS samplers for a model that uses such an operator. However, it is possible to add a gradient if we inherit from theano.Op instead of using as_op. The PyMC example set includes a more elaborate example of the usage of as_op. Arbitrary distributions Similarly, the library of statistical distributions in PyMC3 is not exhaustive, but PyMC allows for the creation of user-defined functions for an arbitrary probability distribution. For simple statistical distributions, the DensityDist function takes as an argument any function that calculates a log-probability $log(p(x))$. This function may employ other random variables in its calculation. Here is an example inspired by a blog post by Jake Vanderplas on which priors to use for a linear regression (Vanderplas, 2014). ```python import theano.tensor as T from pymc3 import DensityDist, Uniform with Model() as model: alpha = Uniform('intercept', -100, 100) # Create custom densities beta = DensityDist('beta', lambda value: -1.5 * T.log(1 + value**2), testval=0) eps = DensityDist('eps', lambda value: -T.log(T.abs_(value)), testval=1) # Create likelihood like = Normal('y_est', mu=alpha + beta * X, sd=eps, observed=Y) ``` For more complex distributions, one can create a subclass of Continuous or Discrete and provide the custom logp function, as required. This is how the built-in distributions in PyMC are specified. As an example, fields like psychology and astrophysics have complex likelihood functions for a particular process that may require numerical approximation. In these cases, it is impossible to write the function in terms of predefined theano operators and we must use a custom theano operator using as_op or inheriting from theano.Op. Implementing the beta variable above as a Continuous subclass is shown below, along with a sub-function using the as_op decorator, though this is not strictly necessary. End of explanation """ # Convert X and Y to a pandas DataFrame import pandas df = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y}) """ Explanation: Generalized Linear Models Generalized Linear Models (GLMs) are a class of flexible models that are widely used to estimate regression relationships between a single outcome variable and one or multiple predictors. Because these models are so common, PyMC3 offers a glm submodule that allows flexible creation of various GLMs with an intuitive R-like syntax that is implemented via the patsy module. The glm submodule requires data to be included as a pandas DataFrame. 
Hence, for our linear regression example: End of explanation """ from pymc3.glm import glm with Model() as model_glm: glm('y ~ x1 + x2', df) trace = sample(5000) """ Explanation: The model can then be very concisely specified in one line of code. End of explanation """ from pymc3.glm.families import Binomial df_logistic = pandas.DataFrame({'x1': X1, 'y': Y > np.median(Y)}) with Model() as model_glm_logistic: glm('y ~ x1', df_logistic, family=Binomial()) """ Explanation: The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object. End of explanation """ from pymc3.backends import SQLite with Model() as model_glm_logistic: glm('y ~ x1', df_logistic, family=Binomial()) backend = SQLite('trace.sqlite') start = find_MAP() step = NUTS(scaling=start) trace = sample(5000, step=step, start=start, trace=backend) summary(trace, varnames=['x1']) """ Explanation: Backends PyMC3 has support for different ways to store samples during and after sampling, called backends, including in-memory (default), text file, and SQLite. These can be found in pymc.backends: By default, an in-memory ndarray is used but if the samples would get too large to be held in memory we could use the sqlite backend: End of explanation """ from pymc3.backends.sqlite import load with basic_model: trace_loaded = load('trace.sqlite') """ Explanation: The stored trace can then later be loaded using the load command: End of explanation """
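"""
Explanation: A small optional sketch (not part of the original notebook): once loaded, the trace behaves like an ordinary trace object, so it can be inspected directly. This assumes the cells above have been run, so that trace.sqlite exists and trace_loaded is defined.
End of explanation
"""

# Optional sketch: inspect the reloaded trace. len() and the .varnames
# attribute are standard for PyMC3 trace objects; the values shown depend
# on the model that produced trace.sqlite.
len(trace_loaded), trace_loaded.varnames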
vdelia/vdelia.github.io
assets/kanren/ukanren.ipynb
cc0-1.0
import collections logic_variable = collections.namedtuple("logic_variable", ["index"]) def is_logic_var(x): return isinstance(x, logic_variable) """ Explanation: A python implementation of $\mu$Kanren [$\mu$Kanren][micro] (microKanren) is a minimalistic relational programming language, introduced as a stripped-down implementation of [minikanren][minikanren]. As stated by its creators, it is micro as in microkernel. What is a relational programming language? In other programming paradigms, a program is a set of operations (functions, statements, etc.) which consume inputs to produce outputs. A relational program is a set of predicates on (logic) variables. The program is executed by runnint a query, which means searching the values that can be assigned to logic variables so that those predicates hold. A relational query does not return a value, but it enumerates solutions. Moreover, there is no distinction between inputs and outputs of relations. To me relational programming is a way to extract a usable and pure logical subset of prolog. Minikanren brings it to the masses, by embedding a relational subsystem in other host languages. [Other people][lp-overrated] see it as DSLs for brute-force search, but everybody agrees that [this presentation][prez-byrd] is mind-blowing. Where does it come from? In [The Reasoned Schemer][reasoned-schemer], the authors introduced relational programming as a natural extension of functional programming. They show how to embed a logic interpreter into [Scheme][racket]. While Scheme is the reference host for all the *kanrens, currently there are many implementations, in many different host languages. The most succesful is probably [clojure/core.logic][core.logic]. This post is actually a ipython notebook where I implement $\mu$Kanren and some syntactic sugar in python. You can download the original notebook here. It is meant to be really interactive: I redefine multiple times several functions to get a more and more friendly API. The first section contains an implementation of $\mu$kanren in python; in the second section I introduce some syntactic sugar to make it more similar to miniKanren; the last sections contain some examples of what kind of programs can be written with it. $\mu$Kanren core A $\mu$Kanren program can be interpreted as a query. Given a set of relations among items and variables, we ask to the interpreter to find the variable substitutions so that those relations are valid. From the [paper][micro-paper] A $\mu$Kanren program proceeds through the application of a goal to a state. Goals are often understood by analogy to predicates. Whereas the application of a predicate to an element of its domain can be either true or false, a goal pursued in a given state can either succeed or fail. A state is a pair of a substitution (represented as a dictionary) and a non-negative integer representing a fresh variable counter. Logic variables A logic_variable is an object. It is identified by an index: two logic_variables are the same if they share the same index. End of explanation """ class SubstitutionMap(dict): pass """ Explanation: To create new logic_variables, we use the fresh variable counter to generate those indexes. Substitutions Substitutions are represented by python dictionaries whose keys $k$ are logic_variables, and whose values $v$ are items that can replace that variable. 
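For example (a small illustration added here, not a cell from the original notebook), using the logic_variable type defined above, the mapping
{logic_variable(0): logic_variable(1), logic_variable(1): 42}
says that the first variable is bound to the second one, which in turn is bound to the item 42.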
End of explanation
"""

class SubstitutionMap(dict):

    def walk(self, var):
        while is_logic_var(var) and var in self:
            var = self[var]
        return var

    def ext_s(self, var, value):
        s = SubstitutionMap(self)
        s[var] = value
        return s

# test SubstitutionMap
x, y = logic_variable(0), logic_variable(1)

assert x == SubstitutionMap().walk(x), "x is free"
assert 1 == SubstitutionMap({x: 1}).walk(x), "x is bound to 1"
assert "something else" == SubstitutionMap({x: y})\
                            .ext_s(y, "something else")\
                            .walk(x), "walk must traverse the chain of substitutions"
"""
Explanation: A logic_variable $X$ can be
free, i.e. $X$ not in SubstitutionMap, and the relations are valid predicates for all $X$
bound to another logic_variable $Y$: $X$ is equivalent to $Y$, and SubstitutionMap[$X$] = $Y$
bound to an item $i$: predicates hold when $X$ is replaced by $i$
We will see later what the terms of the language are, and so what the items $i$ are.
The walk method searches for a term's value. If the term is a value (i.e. not a logic variable), then it returns the value itself. Otherwise, it traverses the chain of substitutions until it finds a value.
ext_s is a factory method that adds a new binding to an existing SubstitutionMap.
End of explanation
"""

empty_state = (SubstitutionMap(), 0)
"""
Explanation: The fresh variable counter is an int starting from 0, and it is used throughout the evaluation to get new unique indexes for logic variables.
Initially, there are no substitutions, and the counter is 0. That state is called empty_state.
End of explanation
"""

def _concrete_unify(x, y, substitutions):
    if substitutions is None:
        return None
    x = substitutions.walk(x)
    y = substitutions.walk(y)
    if x == y:
        return substitutions
    elif is_logic_var(x):
        return substitutions.ext_s(x, y)
    elif is_logic_var(y):
        return substitutions.ext_s(y, x)
    elif is_sequence(x) and is_sequence(y):
        return unify_sequences(x, y, substitutions)
    return None

def unify(x, y, substitutions):
    # This is because I will modify it later
    return _concrete_unify(x, y, substitutions)
"""
Explanation: The terms of the language: unify
The function unify defines the terms of the language. It takes as arguments two objects, $x$ and $y$, and a SubstitutionMap substitutions. Its objective is to add new bindings to substitutions, so that $x$ and $y$ become equivalent, i.e.
unifying = unify(x, y, substitution)
unifying.walk(x) == unifying.walk(y)

If there is no way to make $x$ equivalent to $y$, for example because they are both already bound to terms that are not equivalent, then unify returns None. In this case we say that $x$ and $y$ do not unify under substitution, i.e.
unify(x, y, substitution) == None

In this implementation, valid terms are python objects. At the beginning unify walks the two arguments by using the SubstitutionMap. Then it checks the resulting terms. The two objects unify if
they are equal according to the == operator; then we don't need to add anything to substitution
one is a logic_variable $v$, and in that case $v$ must be substituted with the other term to get the unification working
they are sequences, and in that case the unification is recursively applied term by term. 
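As a quick sketch of the expected behaviour (the test cell below exercises the same cases), with x a fresh logic_variable:
unify(x, 5, SubstitutionMap())   returns SubstitutionMap({x: 5})
unify(5, 5, SubstitutionMap())   returns the map unchanged, since nothing needs to be added
unify(5, 6, SubstitutionMap())   returns None, since 5 and 6 cannot be made equivalent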
End of explanation """ import itertools as it def is_sequence(o): # I want it to fail on strings return not is_logic_var(o) and hasattr(o, '__iter__') def unify_sequences(xs, ys, substitutions): if len(xs) != len(ys): return None for a, b in it.izip(xs, ys): substitutions = unify(a, b, substitutions) return substitutions # Test unify x, y = logic_variable(0), logic_variable(1) assert unify(x, y, SubstitutionMap()) ==\ SubstitutionMap({x: y}), "to unify x and y, substitute y to x" assert unify(x, y, SubstitutionMap({x: "value x", y: "value y"}) ) is None, "they cannot be equivalent" assert unify(x, 5, SubstitutionMap()) == SubstitutionMap({x: 5}) assert unify((x, 1, y), (1, 2, 3), SubstitutionMap()) is None, "sequences unify term-by-term" assert unify((x, 1, y), (1, 1, 3), SubstitutionMap()) ==\ SubstitutionMap({x: 1, y: 3}), "sequences unify term-by-term" """ Explanation: Sequences are objects with the __iter__ method. This is orthogonal to the rest of $\mu$Kanren. To handle new kind of terms, we must update the unify function. End of explanation """ def equiv(x, y): def _goal((substitutions, fresh_var_counter)): unifying = unify(x, y, substitutions) if unifying is not None: yield (unifying, fresh_var_counter) return _goal # Test equiv x, y = logic_variable(0), logic_variable(1) assert list(equiv(x, 5)(empty_state)) == [(SubstitutionMap({x: 5}), 0)] assert list(equiv(x, y)(empty_state)) == [(SubstitutionMap({x: y}), 0)] """ Explanation: Goals/Predicates builder A goal is a function which takes as argument a state, i.e. a pair (SubstitutionMap, fresh variable counter), and returns a list of states which satisfy the goal. In this implementation, goals are generators yielding the valid states. If the generator is empty, then that goal does not succeed. $\mu$Kanren has four primitive goals builder: equiv, call_fresh, disj and conj. equiv equiv builds a goal which succeeds if its two arguments unify, i.e. it yields the substitutions which make its arguments unify End of explanation """ def call_fresh(f): def _new_goal((substitutions, fresh_var_counter)): return f(logic_variable(fresh_var_counter))((substitutions, fresh_var_counter+1)) return _new_goal """ Explanation: call_fresh The call/fresh goal constructor creates a new logic_variable. It takes as argument a unary function $f$, which must return a goal, and it returns a new goal which runs $f$ by binding its argument to a new logic_variable. It leaves unchanged the substitutions, but it increments the fresh variable counter, to ensure unicity of variables indexes. End of explanation """ # test call_fresh x, y = logic_variable(0), logic_variable(1) def is_five(new_var): return equiv(new_var, 5) goal = call_fresh(is_five) assert list(goal(empty_state)) ==\ [(SubstitutionMap({x: 5}), 1)], "x must be 5" # for every new variable, I need a new call_fresh def f(var): def _g(other_var): return equiv(var, other_var) return call_fresh(_g) goal = call_fresh(f) assert list(goal(empty_state)) ==\ [(SubstitutionMap({x: y}), 2)], "x and y are equivalent but not bound to a term" """ Explanation: As shown in the following tests, if I want to introduce a new logic variable in a goal $G$: I wrap the goal $G$ in a unary function $f$ I use the sole argument of $f$ as logic_variable in $G$ I pass $f$ to call_fresh and I use the resulting goal This is quiet verbose: the API will be simplified with the operator fresh. Since this is not part of the core of $\mu$Kanren, we will see that later. 
End of explanation """ def disj(*goals): def _new_goal(state): return it.chain.from_iterable(g(state) for g in goals) return _new_goal ANY = disj # test disj x, y = logic_variable(0), logic_variable(1) def g_any_ok(var_x, var_y): """ It succeeds if x is 2, or y is 3 or x is y """ return ANY(equiv(var_x, 2), equiv(var_y, 3), equiv(var_x, var_y)) ## 2 variables, so 2 call_fresh(unary function) r = list(call_fresh(lambda x: call_fresh(lambda y: g_any_ok(x, y)))(empty_state)) assert len(r) == 3 \ and (SubstitutionMap({x: 2}), 2) in r \ and (SubstitutionMap({y: 3}), 2) in r\ and ({x: y}, 2) in r, "Three solutions must be returned" """ Explanation: disj disj takes as arguments some goals, and it returns a new goal which succeeds for a given state $s$ if either succeeds for that state $s$. I use ANY as synonim of disj, since it is equivalent to the built-in function any. End of explanation """ def make_stream(goal, s): return it.chain.from_iterable(it.imap(goal, s)) def conj(*goals): """ It returns a goal which succeeds if all the goals passed as argument succeed for that state. """ def _new_goal(state): stream = goals[0](state) for g in goals[1:]: stream = make_stream(g, stream) return stream return _new_goal ALL = conj # test disj x, y = logic_variable(0), logic_variable(1) def g_all_ko(var_x, var_y): """ It succeeds if x is 2, or y is 3 or x is y """ return ALL(equiv(var_x, 2), equiv(var_y, 3), equiv(var_x, var_y)) # fresh will fix this mess r = list(call_fresh(lambda x: call_fresh(lambda y: g_all_ko(x, y)))(empty_state)) assert len(r) == 0, "x is not equivalent to y" def g_all_ok(var_x, var_y): """ It succeeds if x is 2 or 3, and y is 3 and y is x """ return ALL(ANY(equiv(var_x, 2), equiv(var_x, 3)), equiv(var_y, 3), equiv(var_x, var_y)) r = list(call_fresh(lambda x: call_fresh(lambda y: g_all_ok(x, y)))(empty_state)) assert r == [(SubstitutionMap({x: 3, y: 3}), 2)], "x cannot be 2" """ Explanation: conj conj takes as arguments some goals, and it returns a new goal which succeeds for a given state $s$ if all of them succeed for that state $s$. I use ALL as synonim of conj, since it is equivalent to the built-in function all. End of explanation """ import inspect from functools import partial def fresh_helper(curried_goal, nargs): if nargs <= 1: return call_fresh(curried_goal) else: return call_fresh(lambda x: fresh_helper(partial(curried_goal, x), nargs-1)) def fresh(variadic_goal): # Discover now many arguments it has, and apply the pattern # wrap in a function -> pass it to call_fresh positional_args = inspect.getargspec(variadic_goal).args return fresh_helper(variadic_goal, len(positional_args)) # test fresh # I use here the same functions used to test ALL. The API here is soooo much better assert list(fresh(g_all_ko)(empty_state)) == [] assert list(fresh(g_all_ok)(empty_state)) == \ [(SubstitutionMap({x: 3, y: 3}), 2)], "x cannot be 2" """ Explanation: That's it. This is the core of $\mu$Kanren. In the next session we will add some syntactic sugar implemented in minikanren, which is very convenient to use what we implemented as an actual language. Minikanren sugar Now that we have the core of the system, it is time to add some sugar to make it pleasant to use. fresh fresh is a call_fresh without the single-variable limitation. Technically, I use the modules inspect and functools to apply the same pattern above. End of explanation """ def run(goal_builder): for s, _ in fresh(goal_builder)(empty_state): yield s list(run(g_any_ok)) """ Explanation: run run calls fresh for us. 
It takes a goal as arguments and it yields the substitutions that make the goal succeed. It hides fresh and the fresh variable counter. In the next section we will redefine it to hide also the SubstitionMap. End of explanation """ free_value = collections.namedtuple("free_value", ["id"]) def reify_sequence(subs, objects): return [reify(subs, o) for o in objects] def reify(substitutions, o): o = substitutions.walk(o) if is_logic_var(o): return free_value(o.index) elif is_sequence(o): return reify_sequence(substitutions, o) else: return o # test reify x, y, z = logic_variable(0), logic_variable(1), logic_variable(2) empty_sm = SubstitutionMap() test_sm = SubstitutionMap({x: 2, y: z, z: 3}) assert free_value(id=0) == reify(empty_sm, x), "x is a free variable" assert 3 == reify(test_sm, z) == reify(test_sm, y), "y and z walk to 3" assert reify(test_sm, logic_variable(12)) != \ reify(test_sm, logic_variable(logic_variable(2))), "free variables not equivalent" """ Explanation: reify reify translates the internal representation of variables to a human readable form. The reification starts by walking a variable in the SubstitutionMap. If the result is a logic_variable, then that variable could be free, or not bound to an item but equivalent to another logic_variable. To represent this, I use the objects free_values. They contain an id, and two free_values are equivalent if that id is the same. Sequences are reified to lists, where each element is the reified version of the item of original sequence. Scalar items are reified to their value. End of explanation """ def run(goal_builder, stop=None): args = inspect.getargspec(goal_builder).args var_indexes = range(len(args)) substitutions = fresh(goal_builder)(empty_state) for s, _ in it.islice(substitutions, stop): # it reifies the variables in the order of corresponding # arguments of goal yield tuple(reify(s, logic_variable(idx)) for idx in var_indexes) # test run solutions = list(sorted(run(g_any_ok))) assert solutions == [(2, free_value(id=1)), # g_any_ok(2, whatever) (free_value(id=0), 3), # g_any_ok(whatever, 3) (free_value(id=1), free_value(id=1))] # g_any_ok(whatever, the same) """ Explanation: run, reworked run is the function we use to run queries. I let run handle the reification, and hide completely the SubstitutionMap. The new run takes as argument a variadic function returning a goal. It uses fresh to create logic_variables, and then it yields all the solutions which make the goal succeed. A solution is a tuple containing the reified variables in the same order of the goal builder arguments. I exploit the fact that variable indexes match the order of the arguments they are bound to. End of explanation """ def conde(*disjs): """ conde is a form of if-then-else construct conde( [if-clause1 then], [if-clause2 then], ) """ return ANY( *[ALL(*conjs) for conjs in disjs] ) """ Explanation: conde The last predicate I add is conde. It is a if-then-else construct. The first example shows how it works. 
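As a sketch of what the implementation above expands to (not an executed cell), a call such as
conde([g1, g2], [g3])
behaves exactly like ANY(ALL(g1, g2), ALL(g3)): each bracketed clause is a conjunction, and the clauses are tried as alternatives.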
End of explanation """ got_characters = {0: ('catelyn', 'tully'), 1: ('eddard', 'stark'), 2: ('sansa', 'stark'), 3: ('benjen', 'stark'), 4: ('robb', 'stark'), 5: ('joffrey', 'baratheon'), 6: ('stannis', 'baratheon'), 7: ('cersei', 'lannister'), 8: ('tyrion', 'lannister'), 9: ('tommen', 'baratheon'), 10: ('jon', 'snow'), 11: ('myrcella', 'baratheon'), 12: ('tywin', 'lannister'), 13: ('jaime', 'lannister'), 14: ('rickon', 'stark'), 15: ('arya', 'stark'), 16: ('brandon', 'stark'), 17: ('renly', 'baratheon'), 18: ('robert', 'baratheon')} """ Explanation: The language is sweet enough to run some examples! Example: querying Game of Thrones In this first example, I use what we developed to run some query over a db of [Game of Thrones][got] characters. How does the language work? Combine equiv and conde (or ALL, ANY) to create new predicate. If you need new logic_variables, wrap the predicate in a python function, and use fresh or run to build them Use run to iterate over the solutions End of explanation """ def charactero(characterid, name, surname): # A simplified version return conde( # if characterid is 16, then name is 'brandon', and surname is 'stark' [equiv(characterid, 16), equiv(name, 'brandon'), equiv(surname, 'stark')], # or if characterid is 17, then name is 'renly' and surname is 'baratheon' [equiv(characterid, 17), equiv(name, 'renly'), equiv(surname, 'baratheon')]) # I can define it by exploiting the got_characters db def charactero(characterid, name, surname): return conde(*[[equiv(characterid, k), equiv(name, n), equiv(surname, s)] for (k, (n, s)) in got_characters.iteritems()]) """ Explanation: We follow this naming convention: predicates are suffixed with the o character. I write the relation charactero(characterid, name, surname), which succeeds when that triple identifies a character of Game of Thrones. End of explanation """ # logic variables for name, surname in run(lambda name, surname: charactero(16, name, surname)): print name, surname, "has id 16" """ Explanation: Now I can run some queries on it. To do that, I need to pass some logic variables to the predicate, and let $\mu$Kanren find those values. I use python functions and run to get new logic variables and perform the search. What are name and surname of the character whose id is 16? Find $X$ and $Y$ s.t. charactero(16, X, Y). End of explanation """ for _id, name in run(lambda _id, name: charactero(_id, name, 'baratheon')): print _id, name, "is a 'baratheon'" """ Explanation: What are id and name of the characters whose surname is 'baratheon'? Find $X$ and $Y$ s.t. charactero(X, Y, 'baratheon'). End of explanation """ got_houses = {'stark': [0, 1, 2, 3, 4, 10, 15, 16], 'tully': [0], 'lannister': [5, 7, 8, 9, 11, 12, 13], 'baratheon': [5, 7, 9, 11, 6, 18]} def id_houseo(house_name, characterid): # house_name is k and characterid is v for all k,v return conde(*[[equiv(house_name, k), equiv(characterid, v)] for (k, vs) in got_houses.iteritems() for v in vs]) """ Explanation: In GoT there are several houses. The predicate id_houseo(house_name, characterid), is relation among character ids and houses. A character can belong to more houses. 
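For instance (an illustrative sketch of what the comprehension above builds), the 'tully' entry of got_houses expands to the clause
[equiv(house_name, 'tully'), equiv(characterid, 0)]
and id_houseo is simply the disjunction of one such clause per (house, character id) pair.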
End of explanation """ print 'house of baratheon: ', for _id, in run(lambda _id: id_houseo('baratheon', _id)): print _id, """ Explanation: The ids of the characters who belong to the house baratheon End of explanation """ for x in run(lambda x: id_houseo('baratheon', 20)): print x """ Explanation: $X$ such that house name is 'baratheon' and character id is 20 End of explanation """ for x, in run(lambda x: id_houseo('baratheon', 5)): print x """ Explanation: No value of $X$ can satisfy the predicate $X$ such that house name is 'baratheon' and character id is 5 End of explanation """ def houseo(house_name, name, surname): # I need a new logic variable, characterid. Thus, I create a function and use fresh to get it def _f(characterid): # if, for any characterid, its house name is house_name, then name and surname are the one of charactero return conde([id_houseo(house_name, characterid), charactero(characterid, name, surname)]) return fresh(_f) """ Explanation: Every value of $X$ satisfies the predicate. I want to hide the character id, and write the predicate houseo, which creates the relation among house names, and name and surname of characters. To do so, I need to find the character ids in relation with a house, and then find name and surname of that character id. End of explanation """ for house, surname in run(lambda house, surname: houseo(house, 'joffrey', surname)): print 'joffrey', surname, 'belongs to house', house """ Explanation: Find all house names and surnames of the character called 'joffrey' End of explanation """ for house, name in run(lambda house, name: houseo(house, name, house)): print name, house, '=> house ', house # no jon snow """ Explanation: Find all house names and first name of characters whose surname is the name of the house they belong to End of explanation """ cons = collections.namedtuple("cons", ["car", "cdr"]) nil = () def emptyo(d): """ d is empty if it is nil """ return equiv(d, nil) def conso(a, b, acons): """ a and b forms a cons, when acons is equivalent to cons(a, b) """ return equiv(cons(a, b), acons) def firsto(acons, elt): """ if elt if the first element of acons then conso(elt, tail, acons) holds for all tails """ return fresh(lambda tail: conso(elt, tail, acons)) def tailo(acons, tail): """ if tail is the tail of acons then conso(first, tail, acons) holds for all firsts """ return fresh(lambda first: conso(first, tail, acons)) """ Explanation: Example: Data structures In this example I add [conses][cons] to $\mu$Kanren. Then we will play with the very interesting membero and appendo relations. 
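As a quick sketch of the representation (not an executed cell), the python list [1, 2, 3] will correspond to the nested pairs
cons(1, cons(2, cons(3, nil)))
which is exactly what the consify helper introduced below produces.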
cons predicates End of explanation """ for C, in run(lambda C: conso(1, 2, C)): print 'cons 1 and 2 to get', C """ Explanation: the $C$ such that $C$ is cons of 1 and 2 End of explanation """ for C, in run(lambda x: firsto(x, 1)): print 1, 'is the first element of', C """ Explanation: the cons such that 1 is its first element End of explanation """ def consify(lst): newcons = nil for i in reversed(lst): newcons = cons(i, newcons) return newcons def unify(x, y, substitutions): if isinstance(x, list): x = consify(x) if isinstance(y, list): y = consify(y) return _concrete_unify(x, y, substitutions) def equiv(x, y): def _goal((substitutions, fresh_var_counter)): unifying = unify(x, y, substitutions) if unifying is not None: yield (unifying, fresh_var_counter) return _goal assert unify([1, 2, 3], cons(1, cons(2, cons(3, nil))), SubstitutionMap()) == SubstitutionMap() """ Explanation: We can write basic operations, but to make it fully usable, we need to improve the syntax. What I want to do is to treat python lists as if they were conses, and traslate back to list when showing the result. The former goal can be achieved by modifying unify. Remember? unify is where items are defined. The latter by reify-ing correctly the conses unify python lists to conses I modify (as little as possibile) unify, and as a consequence I need to evaluate again equiv. The objective is to transform lists to conses when we try to unify them to other items. End of explanation """ def reify_cons(substitutions, acons): lst = [] while isinstance(acons, cons): car, cdr = acons lst.append(reify(substitutions, car)) acons = substitutions.walk(cdr) if acons: lst.append(reify(substitutions, acons)) return lst def reify(substitutions, o): o = substitutions.walk(o) if is_logic_var(o): return free_value(o.index) elif isinstance(o, cons): return reify_cons(substitutions, o) elif is_sequence(o): return reify_sequence(substitutions, o) else: return o def run(goal_builder, stop=None): args = inspect.getargspec(goal_builder).args var_indexes = range(len(args)) substitutions = fresh(goal_builder)(empty_state) for s, _ in it.islice(substitutions, stop): # it reifies the variables in the order of corresponding # arguments of goal yield tuple(reify(s, logic_variable(idx)) for idx in var_indexes) # test the new reify empty_sm = SubstitutionMap() #assert range(10) == print reify(empty_sm, consify(range(10))), "test consify" """ Explanation: reify conses to python lists I modify reify so that conses are reified to usual python lists. I need to evaluate again run also. End of explanation """ ten = range(10) """ Explanation: Playing with python lists In this section I will run all examples on the python list ten End of explanation """ for f, t in run(lambda f, t: ALL( firsto(ten, f), tailo(ten, t))): print f, 'is the first of', ten print t, 'is the tail of', ten """ Explanation: Find $f$, $t$, s.t. $f$ is the first of ten and $t$ is the tail of ten End of explanation """ for x, L in run(lambda x, L: ALL( firsto(L, x), tailo(L, x))): print x, 'is first and tail of', L """ Explanation: Find all $x$ and $acons$, s.t. $acons$ is the cons(x, x) End of explanation """ def membero(collection, elt): def _f(tail): # for every tail return conde( [firsto(collection, elt)], # if elt is first of collection it's fine [tailo(collection, tail), membero(tail, elt)]) # if tail is tail of collection, then elt must be member of tail return fresh(_f) """ Explanation: membero membero is the relational version of the python in operator. 
If membero(collection, elt) succeeds, then elt is in collection. End of explanation """ for x in run(lambda x: membero(ten, 30)): print x """ Explanation: find all $x$ s.t. 30 is member of ten. Such an $x$ does not exist. End of explanation """ for x in run(lambda x: membero(ten, 2)): print x """ Explanation: find all $x$ s.t. 2 is member of ten. All $x$s are fine here, thus we get a free variable as output. End of explanation """ for elt, in run(lambda elt: membero(ten, elt)): print elt, "is in ", ten """ Explanation: find all $x$ s.t. $x$ is member of ten. We enumerate the values in ten. End of explanation """ for (solution,) in run(lambda x: membero([1, 2, x], [4, 65, 9]), stop=5): print [4, 65, 9], 'is member of', [1, 2, solution] """ Explanation: We can inspect the structure itself of the list. Find $x$s s.t. [4, 65, 9] is member of [1, 2, $x$]. End of explanation """ for x, y in run(lambda x, y: membero([1, 2, x], [1, y, 3])): print '[1, y, 3] member of [1, 2, x] iff x=%s and y=%s' % (x, y) """ Explanation: we can look for more variables End of explanation """ for (L, ) in run(lambda L: membero(L, 1), stop=3): print 1, 'member of', L """ Explanation: and let it generate the whole list. There are infinite lists containing the item 1, thus I use the keyword argument stop to get just the first three solutions End of explanation """ def appendo(begin, end, collection): """ begin + end = collection """ def _f(x, y, z): return conde( # if begin is empty, then collection is end [emptyo(begin), equiv(end, collection)], # otherwise [conso(x, y, begin), # begin is [x] + y conso(x, z, collection), # collection is [x] + z appendo(y, end, z)]) # z is end appended to y return fresh(_f) """ Explanation: appendo appendo(begin, end, collection) is the workhorse to build lists. It holds when collection is the concatenation of the list begin and the list end. Note that conso is a relation among a scalar and two lists, while appendo among three lists. End of explanation """ for (L, ) in run(lambda L: appendo([1, 2], [3,4], L)): print '[1, 2] + [3, 4] =', L """ Explanation: It can be used to concatenate lists End of explanation """ for (head, ) in run(lambda head: appendo(head, [3,4], [1, 2, 3, 4])): print '{head} + [3, 4] = [1, 2, 3, 4]'.format(head=head) """ Explanation: but also to go backwards. Here we let it determine which is the head lists for a given tail, result pair. End of explanation """ for (head, tail) in run(lambda head, tail: appendo(head, tail, ten)): print '{head} + {tail} = {ten}'.format(head=head, tail=tail, ten=ten) """ Explanation: We can let it generate all the possible ways to split a list in two. End of explanation """ for (head, L) in run(lambda head, L: appendo(head, [3,4], L), stop=5): print '{head} + [3, 4] = {L}'.format(head=head, L=L) """ Explanation: or let it generate generic lists whose tail is fixed End of explanation """
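# An extra sketch, not from the original notebook: further relations can be
# built on top of appendo. lasto(collection, elt) holds when elt is the last
# element of collection, because some prefix must satisfy prefix + [elt] = collection.
def lasto(collection, elt):
    return fresh(lambda prefix: appendo(prefix, [elt], collection))

for (x, ) in run(lambda x: lasto(ten, x)):
    print x, 'is the last element of', ten
"""
Explanation: This final cell is an optional addition to the original notebook: it sketches how a new relation (here lasto) can be defined purely in terms of appendo, fresh and run, reusing the list ten from the examples above.
End of explanation
"""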
google-research/google-research
kws_streaming/colab/02_inference.ipynb
apache-2.0
!git clone https://github.com/google-research/google-research.git import sys import os import tarfile import urllib import zipfile sys.path.append('./google-research') """ Explanation: Copyright 2019 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. End of explanation """ # TF streaming from kws_streaming.models import models from kws_streaming.models import utils from kws_streaming.models import model_utils from kws_streaming.layers.modes import Modes import tensorflow as tf import numpy as np import tensorflow.compat.v1 as tf1 import logging from kws_streaming.models import model_flags from kws_streaming.models import model_params from kws_streaming.train import inference from kws_streaming.train import test from kws_streaming.data import input_data from kws_streaming.data import input_data_utils as du tf1.disable_eager_execution() config = tf1.ConfigProto() config.gpu_options.allow_growth = True sess = tf1.Session(config=config) # general imports import matplotlib.pyplot as plt import os import json import numpy as np import scipy as scipy import scipy.io.wavfile as wav import scipy.signal tf.__version__ tf1.reset_default_graph() sess = tf1.Session() tf1.keras.backend.set_session(sess) tf1.keras.backend.set_learning_phase(0) """ Explanation: Examples of streaming and non streaming inference with TF/TFlite Imports End of explanation """ def waveread_as_pcm16(filename): """Read in audio data from a wav file. Return d, sr.""" samplerate, wave_data = wav.read(filename) # Read in wav file. return wave_data, samplerate def wavread_as_float(filename, target_sample_rate=16000): """Read in audio data from a wav file. Return d, sr.""" wave_data, samplerate = waveread_as_pcm16(filename) desired_length = int( round(float(len(wave_data)) / samplerate * target_sample_rate)) wave_data = scipy.signal.resample(wave_data, desired_length) # Normalize short ints to floats in range [-1..1). data = np.array(wave_data, np.float32) / 32768.0 return data, target_sample_rate # set PATH to data sets (for example to speech commands V2): # it can be downloaded from # https://storage.googleapis.com/download.tensorflow.org/data/speech_commands_v0.02.tar.gz # if you run 00_check-data.ipynb then data2 should be located in the current folder current_dir = os.getcwd() DATA_PATH = os.path.join(current_dir, "data2/") # Set path to wav file for testing. wav_file = os.path.join(DATA_PATH, "left/012187a4_nohash_0.wav") # read audio file wav_data, samplerate = wavread_as_float(wav_file) assert samplerate == 16000 plt.plot(wav_data) """ Explanation: Load wav file End of explanation """ # This notebook is configured to work with 'ds_tc_resnet' and 'svdf'. 
MODEL_NAME = 'ds_tc_resnet' # MODEL_NAME = 'svdf' MODELS_PATH = os.path.join(current_dir, "models") MODEL_PATH = os.path.join(MODELS_PATH, MODEL_NAME + "/") MODEL_PATH train_dir = os.path.join(MODELS_PATH, MODEL_NAME) # below is another way of reading flags - through json with tf.compat.v1.gfile.Open(os.path.join(train_dir, 'flags.json'), 'r') as fd: flags_json = json.load(fd) class DictStruct(object): def __init__(self, **entries): self.__dict__.update(entries) flags = DictStruct(**flags_json) flags.data_dir = DATA_PATH # get total stride of the model total_stride = 1 if MODEL_NAME == 'ds_tc_resnet': # it can be automated by scanning layers of the model, but for now just use parameters of specific model pools = model_utils.parse(flags.ds_pool) strides = model_utils.parse(flags.ds_stride) time_stride = [1] for pool in pools: if pool > 1: time_stride.append(pool) for stride in strides: if stride > 1: time_stride.append(stride) total_stride = np.prod(time_stride) # overide input data shape for streaming model with stride/pool flags.data_stride = total_stride flags.data_shape = (total_stride * flags.window_stride_samples,) # prepare mapping of index to word audio_processor = input_data.AudioProcessor(flags) index_to_label = {} # labels used for training for word in audio_processor.word_to_index.keys(): if audio_processor.word_to_index[word] == du.SILENCE_INDEX: index_to_label[audio_processor.word_to_index[word]] = du.SILENCE_LABEL elif audio_processor.word_to_index[word] == du.UNKNOWN_WORD_INDEX: index_to_label[audio_processor.word_to_index[word]] = du.UNKNOWN_WORD_LABEL else: index_to_label[audio_processor.word_to_index[word]] = word # training labels index_to_label # pad input audio with zeros, so that audio len = flags.desired_samples padded_wav = np.pad(wav_data, (0, flags.desired_samples-len(wav_data)), 'constant') input_data = np.expand_dims(padded_wav, 0) input_data.shape # create model with flag's parameters model_non_stream_batch = models.MODELS[flags.model_name](flags) # load model's weights weights_name = 'best_weights' model_non_stream_batch.load_weights(os.path.join(train_dir, weights_name)) tf.keras.utils.plot_model( model_non_stream_batch, show_shapes=True, show_layer_names=True, expand_nested=True) """ Explanation: Prepare batched model End of explanation """ # convert model to inference mode with batch one inference_batch_size = 1 tf.keras.backend.set_learning_phase(0) flags.batch_size = inference_batch_size # set batch size model_non_stream = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.NON_STREAM_INFERENCE) #model_non_stream.summary() tf.keras.utils.plot_model( model_non_stream, show_shapes=True, show_layer_names=True, expand_nested=True) predictions = model_non_stream.predict(input_data) predicted_labels = np.argmax(predictions, axis=1) predicted_labels index_to_label[predicted_labels[0]] """ Explanation: Run inference with TF TF Run non streaming inference End of explanation """ # convert model to streaming mode flags.batch_size = inference_batch_size # set batch size model_stream = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.STREAM_INTERNAL_STATE_INFERENCE) #model_stream.summary() tf.keras.utils.plot_model( model_stream, show_shapes=True, show_layer_names=True, expand_nested=True) stream_output_prediction = inference.run_stream_inference_classification(flags, model_stream, input_data) stream_output_arg = np.argmax(stream_output_prediction) stream_output_arg index_to_label[stream_output_arg] """ Explanation: TF Run streaming 
inference with internal state End of explanation """ # convert model to streaming mode flags.batch_size = inference_batch_size # set batch size model_stream_external = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.STREAM_EXTERNAL_STATE_INFERENCE) #model_stream.summary() tf.keras.utils.plot_model( model_stream_external, show_shapes=True, show_layer_names=True, expand_nested=True) inputs = [] for s in range(len(model_stream_external.inputs)): inputs.append(np.zeros(model_stream_external.inputs[s].shape, dtype=np.float32)) window_stride = flags.data_shape[0] start = 0 end = window_stride while end <= input_data.shape[1]: # get new frame from stream of data stream_update = input_data[:, start:end] # update indexes of streamed updates start = end end = start + window_stride # set input audio data (by default input data at index 0) inputs[0] = stream_update # run inference outputs = model_stream_external.predict(inputs) # get output states and set it back to input states # which will be fed in the next inference cycle for s in range(1, len(model_stream_external.inputs)): inputs[s] = outputs[s] stream_output_arg = np.argmax(outputs[0]) stream_output_arg index_to_label[stream_output_arg] """ Explanation: TF Run streaming inference with external state End of explanation """ tflite_non_streaming_model = utils.model_to_tflite(sess, model_non_stream_batch, flags, Modes.NON_STREAM_INFERENCE) tflite_non_stream_fname = 'tflite_non_stream.tflite' with open(os.path.join(MODEL_PATH, tflite_non_stream_fname), 'wb') as fd: fd.write(tflite_non_streaming_model) interpreter = tf.lite.Interpreter(model_content=tflite_non_streaming_model) interpreter.allocate_tensors() input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() # set input audio data (by default input data at index 0) interpreter.set_tensor(input_details[0]['index'], input_data.astype(np.float32)) # run inference interpreter.invoke() # get output: classification out_tflite = interpreter.get_tensor(output_details[0]['index']) out_tflite_argmax = np.argmax(out_tflite) out_tflite_argmax index_to_label[out_tflite_argmax] """ Explanation: Run inference with TFlite Run non streaming inference with TFLite End of explanation """ tflite_streaming_model = utils.model_to_tflite(sess, model_non_stream_batch, flags, Modes.STREAM_EXTERNAL_STATE_INFERENCE) tflite_stream_fname = 'tflite_stream.tflite' with open(os.path.join(MODEL_PATH, tflite_stream_fname), 'wb') as fd: fd.write(tflite_streaming_model) interpreter = tf.lite.Interpreter(model_content=tflite_streaming_model) interpreter.allocate_tensors() input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() input_states = [] for s in range(len(input_details)): input_states.append(np.zeros(input_details[s]['shape'], dtype=np.float32)) out_tflite = inference.run_stream_inference_classification_tflite(flags, interpreter, input_data, input_states) out_tflite_argmax = np.argmax(out_tflite[0]) index_to_label[out_tflite_argmax] """ Explanation: Run streaming inference with TFLite End of explanation """ test.tflite_non_stream_model_accuracy( flags, MODEL_PATH, tflite_model_name=tflite_non_stream_fname, accuracy_name='tflite_non_stream_model_accuracy.txt') test.tflite_stream_state_external_model_accuracy( flags, MODEL_PATH, tflite_model_name=tflite_stream_fname, accuracy_name='tflite_stream_state_external_model_accuracy.txt', reset_state=True) test.tflite_stream_state_external_model_accuracy( flags, MODEL_PATH, 
tflite_model_name=tflite_stream_fname, accuracy_name='tflite_stream_state_external_model_accuracy.txt', reset_state=False) """ Explanation: Run evaluation on all testing data End of explanation """
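# Optional follow-up sketch (not part of the original tutorial): compare the
# sizes of the two TFLite files exported above. Assumes the conversion cells
# ran successfully and wrote both files into MODEL_PATH.
for fname in [tflite_non_stream_fname, tflite_stream_fname]:
  size_kb = os.path.getsize(os.path.join(MODEL_PATH, fname)) / 1024.0
  print(fname, 'is %.1f KB' % size_kb)
"""
Explanation: A small optional sketch appended to the tutorial: since both the non-streaming and the streaming TFLite models were written to MODEL_PATH above, their on-disk sizes can be compared directly.
End of explanation
"""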
astro4dev/OAD-Data-Science-Toolkit
Teaching Materials/Programming/Python/PythonISYA2018/01.BasicPythonI/03_dictionaries.ipynb
gpl-3.0
d = {'Angela': 23746, 'Sofia': 2514, 'Luis': 3747, 'Diego': 61562} """ Explanation: Dictionaries Dictionaries are a very useful data structure. They are similar to lists, but instead of being indexed by integeres they are indexed by keys which can be of any type as long as they are immutable. In a dictionary the keys are used to access values. A typical example would be the following: End of explanation """ d['Angela'] d['Diego'] d['Luis'] d['Sofia'] """ Explanation: In this example the keys are strings (corresponding to names) and the values are numbers. We can acess now the values: End of explanation """ d['Valeriano'] = 1234 print(d) """ Explanation: Adding a new element in the dictionary is very simple End of explanation """ d.pop('Angela') print(d) """ Explanation: deleting an item is also easy to do End of explanation """ list(d.keys()) """ Explanation: It is possible to gather all the keys End of explanation """ list(d.values()) """ Explanation: and also gather all the values End of explanation """ 'Miguel' in d.keys() 'Luis' in d.keys() """ Explanation: It is also easy to test whether a key (or value) is in the dictionary End of explanation """ activities = { 'Monday': {'study':4, 'sleep':8, 'party':0}, 'Tuesday': {'study':8, 'sleep':4, 'party':0}, 'Wednesday': {'study':8, 'sleep':4, 'party':0}, 'Thursday': {'study':4, 'sleep':4, 'party':4}, 'Friday': {'study':1, 'sleep':4, 'party':8}, } """ Explanation: Exercise 3.01 Create a dictionary with the integers 0 to 4 as keys and the vowels (a, e, i, o u) as values. Exercise 3.02 Use the following dictionary of dictionaries to create a new dictionary that has the same keys and the values correspond to the total number of hours used in all activities. End of explanation """
shareactorIO/pipeline
source.ml/jupyterhub.ml/notebooks/zz_old/Python/Basics/Sets.ipynb
apache-2.0
from urllib.request import urlopen url_response = urlopen('http://www.py4inf.com/code/romeo.txt') contents = str(url_response.read()) print(contents) """ Explanation: Read Contents from URL End of explanation """ lines = contents.split('\\n') print(lines) """ Explanation: Split Contents Into Lines Using New-line ('\n') End of explanation """ Jword_set = set() for line in lines: # Passing no args to split() will do what you want in this case: # split on all weird characters (aka whitespace characters) words = line.split() for word in words: # Lowercase the word or else alphabetical sort puts capitals ahead word = word.lower() # Adding to a set (vs list) will automatically de-duplicate word_set.add(word) print(word_set) sorted_word_set = sorted(word_set) print(sorted_word_set) """ Explanation: Extract Words from Each Line Note: You might have to strip out the ' character that seems to be slipping through. My guess is that you'll have to specify a RegEx expression to only accept. End of explanation """
ebonnassieux/fundamentals_of_interferometry
2_Mathematical_Groundwork/2_8_the_discrete_fourier_transform.ipynb
gpl-2.0
import numpy as np import matplotlib.pyplot as plt %matplotlib inline from IPython.display import HTML HTML('../style/course.css') #apply general CSS """ Explanation: Outline Glossary 2. Mathematical Groundwork Previous: 2.7 Fourier Theorems Next: 2.9 Sampling Theory Import standard modules: End of explanation """ from IPython.display import HTML from ipywidgets import interact HTML('../style/code_toggle.html') """ Explanation: Import section specific modules: End of explanation """ def loop_DFT(x): """ Implementing the DFT in a double loop Input: x = the vector we want to find the DFT of """ #Get the length of the vector (will only work for 1D arrays) N = x.size #Create vector to store result in X = np.zeros(N,dtype=complex) for k in range(N): for n in range(N): X[k] += np.exp(-1j*2.0*np.pi*k*n/N)*x[n] return X """ Explanation: 2.8. The Discrete Fourier Transform (DFT) and the Fast Fourier Transform (FFT)<a id='math:sec:the_discrete_fourier_transform_and_the_fast_fourier_transform'></a> The continuous version of the Fourier transform can only be computed when the integrals involved can be evaluated analytically, something which is not always possible in real life applications. This is true for a number of reasons, the most relevant of which are: We don't always have the parametrisation of the signal that we want to find the Fourier transform of. Signals are measured and recorded at a finite number of points. Measured signals are contaminated by noise. In such cases the discrete equivalent of the Fourier transform, called the discrete Fourier transform (DFT), is very useful. In fact, where the scale of the problem necessitates using a computer to perform calculations, the Fourier transform can only be implemented as the discrete equivalent. There are some subtleties we should be aware of when implementing the DFT. These mainly arise because it is very difficult to capture the full information present in a continuous signal with a finite number of samples. In this chapter we review the DFT and extend some of the most useful identities derived in the previous sections to the case where we only have acces to a finite number of samples. The subtleties that arise due to limited sampling will be discussed in the next section. 2.8.1 The discrete time Fourier transform (DTFT): definition<a id='math:sec:the_discrete_time_fourier_transform_definition'></a> We start by introducing the discrete time Fourier transform (DTFT). The DTFT of a set $\left{y_n \in \mathbb{C}\right}_{n ~ \in ~ \mathbb{Z}}$ results in a Fourier series (see $\S$ 2.3 &#10142;) of the form <a id='math:eq:8_001'></a><!--\label{math:eq:8_001}-->$$ Y_{2\pi}(\omega) = \sum_{n\,=\,-\infty}^{\infty} y_n\,e^{-\imath \omega n} \quad \mbox{where} \quad n \in \mathbb{Z}. $$ The resulting function is a periodic function of the frequency variable $\omega$. In the above definition we assume that $\omega$ is expressed in normalised units of radians/sample so that the periodicity is $2\pi$. In terms of the usual time frequency variable $f$, where $\omega = 2\pi f$, we would define it as <a id='math:eq:8_002'></a><!--\label{math:eq:8_002}-->$$ Y_{f_s}(f) = \sum_{n\,=\,-\infty}^{\infty} y_n\,e^{-2\pi\imath f t_n}, $$ where $t_n$ is a time coordinate and the subscript $f_s$ denotes the period of $Y_{f_s}(f)$. As we will see in $\S$ 2.9 &#10142; the DTFT (more correctly the DFT introduced below) arises naturally when we take the Fourier transform of a sampled continuous function. 
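As a quick numerical sketch (an addition for illustration, using the numpy import from the setup cell above and made-up sample values), a sequence that is zero outside a handful of samples lets us evaluate this series directly on a grid of frequencies:
```python
y = np.array([0.0, 1.0, 0.5, -0.3])                      # assumed example samples y_n
delta_t = 1e-3                                           # assumed sampling interval in seconds
t = np.arange(y.size) * delta_t                          # sampling instants t_n
f = np.linspace(0.0, 1.0/delta_t, 256, endpoint=False)   # one period, 0 <= f < f_s
Y = np.array([np.sum(y * np.exp(-2j * np.pi * fk * t)) for fk in f])
```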
As with the continuous Fourier transform, it is only possible to compute the DTFT analytically in a limited number of cases (e.g. when the limit of the infinite series is known analytically, or when the signal is band limited, i.e. it contains a finite number of frequency components). For what follows we will find it useful to review the concept of periodic summation and the Poisson summation formula. Note that the DTFT is defined over the entire field of complex numbers and that there are an infinite number of components involved in the definition.
2.8.1.1 Periodic summation and the DTFT <a id='math:sec:Periodic_summation'></a>
The idea behind periodic summation is to construct a periodic function, $g_{\tau}(t)$ say, from a continuous function $g(t)$. Consider the following construction
$$ g_\tau(t) = \sum_{n=-\infty}^{\infty} g(t + n\tau) = \sum_{n=-\infty}^{\infty} g(t - n\tau). $$
Clearly $g_\tau(t)$ has period $\tau$ and looks like an infinite number of copies of the function $g(t)$ for $t$ in the interval $0 \leq t \leq \tau$. We call $g_\tau(t)$ a periodic summation of $g(t)$. Note that we recover $g(t)$ when $n = 0$ and that a similar construction is obviously possible in the frequency domain.
Actually the DTFT naturally results in a periodic function of the form
$$Y_{f_s}(f) = \sum_{k = -\infty}^{\infty} Y(f - k f_s), $$
such that $Y_{f_s}(f)$ is the periodic summation of $Y(f)$. As we will see later, the period $f_s$ is set by the interval $\Delta t$ at which we sample the signal. In $\S$ 2.9 &#10142; we will find it useful to think of $Y(f)$ as the spectrum of a bandlimited signal, $y(t)$ say. When the maximum frequency present in the signal is below a certain threshold, the $Y_{f_s}(f)$ with $k \neq 0$ are exact copies of $Y(f)$, which we call aliases. This will become clearer after we have proved the Nyquist-Shannon sampling theorem.
2.8.1.2 Poisson summation formula <a id='math:sec:Poisson_summation'></a>
The Poisson summation formula is a result from analysis which is very important in Fourier theory. A general proof of this result will not add much to the current discussion. Instead we will simply point out its implications for Fourier theory, as this will result in a particularly transparent proof of the Nyquist-Shannon sampling theorem. Basically, the Poisson summation formula can be used to relate the Fourier series coefficients of a periodic summation of a function to values which are proportional to the function's continuous Fourier transform. Suppose $Y(f)$ is the Fourier transform of the (Schwartz) function $y(t)$. Then
<a id='math:eq:8_003'></a><!--\label{math:eq:8_003}-->$$ \sum_{n = -\infty}^{\infty} \Delta t ~ y(\Delta t n) e^{-2\pi\imath f \Delta t n} = \sum_{k = -\infty}^{\infty} Y(f - \frac{k}{\Delta t}) = \sum_{k = -\infty}^{\infty} Y(f - kf_s) = Y_{f_s}(f). $$
This shows that the series $y_n = \Delta t y(\Delta t n)$ is sufficient to construct a periodic summation of $Y(f)$. The utility of this construction will become apparent a bit later. For now simply note that it is possible to construct $Y_{f_s}(f)$ from the Fourier series of the function $y(t)$ (scaled by $\Delta t$).
The above discussion will mainly serve as a theoretical tool. It does not provide an obvious way to perform the Fourier transform in practice because it still requires an infinite number of components $y_n$. Before illustrating its utility we should construct a practical way to implement the Fourier transform.
2.8.2. 
The discrete Fourier transform: definition<a id='math:sec:the_discrete_fourier_transform_definition'></a> Let $y= \left\{y_n \in \mathbb{C}\right\}_{n = 0, \ldots, N-1}$ be a finite set of complex numbers. Then the discrete Fourier transform (DFT) of $y$, denoted $\mathscr{F}_{\rm D}\{y\}$, is defined as <a id='math:eq:8_004'></a><!--\label{math:eq:8_004}-->$$ \mathscr{F}_{\rm D}: \left\{y_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1} \rightarrow \left\{Y_k \in \mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1}\\ \mathscr{F}_{\rm D}\{y\} = \left\{Y_k\in\mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1} \quad \mbox{where} \quad Y_k = \sum_{n\,=\,0}^{N-1} y_n\,e^{-2\pi\imath f_k t_n} = \sum_{n\,=\,0}^{N-1} y_n\,e^{-\imath 2\pi \frac{nk}{N}}. $$ In the above definition $f_k$ is the $k$-th frequency sample and $t_n$ is the $n$-th sampling instant. When the samples are spaced at uniform intervals $\Delta t$ apart these are given by $$ t_n = t_0 + n\Delta t \quad \mbox{and} \quad f_k = \frac{kf_s}{N} \quad \mbox{where} \quad f_s = \frac{1}{\Delta t}. $$ Most of the proofs shown below are easiest to establish when thinking of the DFT in terms of the actual indices $k$ and $n$. This definition also has the advantage that the samples do not have to be uniformly spaced apart. In this section we use the notation $$ \mathscr{F}_{\rm D}\{y\}_k = Y_k = \sum_{n\,=\,0}^{N-1} y_n\,e^{-\imath 2\pi \frac{nk}{N}}, $$ where the subscript $k$ on the LHS denotes the index not involved in the summation. Variables such as $Y_k$ and $y_n$ which are related as in the above expression are sometimes referred to as Fourier pairs or Fourier duals. The number of Fourier transformed components $Y_k$ is the same as the number of samples of $y_n$. Denoting the set of Fourier transformed components by $Y = \left\{Y_k \in \mathbb{C}\right\}_{k = 0, \ldots, N-1}$, we can define the inverse discrete Fourier transform of $Y$, denoted $\mathscr{F}_{\rm D}^{-1}\{Y\}$, as <a id='math:eq:8_005'></a><!--\label{math:eq:8_005}-->$$ \mathscr{F}_{\rm D}^{-1}: \left\{Y_k \in \mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1} \rightarrow \left\{y_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1}\\ \mathscr{F}_{\rm D}^{-1}\{Y\} = \left\{y_n\in\mathbb{C}\right\}_{n = 0, \ldots, N-1} \quad \mbox{where} \quad y_n = \frac{1}{N} \sum_{k\,=\,0}^{N-1} Y_k e^{\imath 2\pi \frac{nk}{N}}, $$ or in the abbreviated notation $$ \mathscr{F}_{\rm D}^{-1}\{Y\}_n = y_n = \frac{1}{N} \sum_{k\,=\,0}^{N-1} Y_k\,e^{\imath 2\pi \frac{nk}{N}}. $$ The factor of $\frac{1}{N}$ appearing in the definition of the inverse DFT is a normalisation factor. We should mention that this normalisation is sometimes implemented differently by including a factor of $\sqrt{\frac{1}{N}}$ in the definition of both the forward and the inverse DFT. Some texts even omit it completely. We will follow the above convention throughout the course. The inverse DFT is the inverse operation with respect to the discrete Fourier transform (restricted to the original domain).
This can be shown as follows:<br><br> <a id='math:eq:8_006'></a><!--\label{math:eq:8_006}-->$$ \begin{align} \mathscr{F}_{\rm D}^{-1}\left\{\mathscr{F}_{\rm D}\left\{y\right\}\right\}_{n^\prime} \,&=\, \frac{1}{N}\sum_{k\,=\,0}^{N-1} \left(\sum_{n\,=\,0}^{N-1} y_n e^{-\imath 2\pi\frac{kn}{N}}\right)e^{\imath 2\pi\frac{kn^\prime}{N}}\\ &=\,\frac{1}{N}\sum_{k\,=\,0}^{N-1} \sum_{n\,=\,0}^{N-1} \left( y_n e^{-\imath 2\pi\frac{kn}{N}}e^{\imath 2\pi\frac{kn^\prime}{N}}\right)\\ &=\,\frac{1}{N}\left(\sum_{k\,=\,0}^{N-1} y_{n^\prime}+\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} \sum_{k\,=\,0}^{N-1} y_n e^{-\imath 2\pi\frac{kn}{N}}e^{\imath 2\pi\frac{kn^\prime}{N}}\right)\\ &=\,\frac{1}{N}\left(\sum_{k\,=\,0}^{N-1} y_{n^\prime}+\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} \sum_{k\,=\,0}^{N-1} y_n e^{\imath 2\pi\frac{k(n^\prime-n)}{N}}\right)\\ &=\,y_{n^\prime}+\frac{1}{N}\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} y_n \sum_{k\,=\,0}^{N-1} \left(e^{\imath 2\pi\frac{(n^\prime-n)}{N}}\right)^k\\ &=\,y_{n^\prime}+\frac{1}{N}\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} y_n \frac{1-\left(e^{\imath 2\pi\frac{(n^\prime-n)}{N}}\right)^N}{1-\left(e^{\imath 2\pi\frac{(n^\prime-n)}{N}}\right)}\\ &=\,y_{n^\prime}+\frac{1}{N}\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} y_n \frac{1-e^{\imath 2\pi(n^\prime-n)}}{1-e^{\imath 2\pi\frac{(n^\prime-n)}{N}}}\\ &\underset{n,n^\prime \in \mathbb{N}}{=}\,y_{n^\prime}, \end{align} $$ where we made use of the identity $\sum_{n\,=\,0}^{N-1}x^n \,=\, \frac{1-x^N}{1-x}$ and used the orthogonality of the sinusoids in the last step. Clearly both the DFT and its inverse are periodic with period $N$ <a id='math:eq:8_007'></a><!--\label{math:eq:8_007}-->$$ \begin{align} \mathscr{F}_{\rm D}\{y\}_k \,&=\,\mathscr{F}_{\rm D}\{y\}_{k \pm N} \\ \mathscr{F}_{\rm D}^{-1}\{Y\}_{n} \,&=\,\mathscr{F}_{\rm D}^{-1}\{Y\}_{n \pm N}. \end{align} $$ As is the case for the continuous Fourier transform, the inverse DFT can be expressed in terms of the forward DFT (without proof, but it's straightforward) <a id='math:eq:8_008'></a><!--\label{math:eq:8_008}-->$$ \begin{align} \mathscr{F}_{\rm D}^{-1}\{Y\}_n \,&=\, \frac{1}{N} \mathscr{F}_{\rm D}\{Y\}_{-n} \\ &=\,\frac{1}{N} \mathscr{F}_{\rm D}\{Y\}_{N-n}. \end{align} $$ The DFT of a real-valued set of numbers $y = \left\{y_n \in \mathbb{R}\right\}_{n\,=\,0, \ldots, \,N-1}$ is Hermitian (and vice versa) <a id='math:eq:8_009'></a><!--\label{math:eq:8_009}-->$$ \begin{split} \mathscr{F}_{\rm D}\{y\}_k\,&=\, \left(\mathscr{F}_{\rm D}\{y\}_{-k}\right)^*\\ &=\, \left(\mathscr{F}_{\rm D}\{y\}_{N-k}\right)^* \ . \end{split} $$ <span style="background-color:red">BVH:GC:Define this subscript convention in the glossary </span> 2.8.3. The Discrete convolution: definition and discrete convolution theorem<a id='math:sec:the_discrete_convolution_definition_and_discrete_convolution_theorem'></a> For two sets of complex numbers $y = \left\{y_n \in \mathbb{C}\right\}_{n = 0, \ldots, N-1}$ and $z = \left\{z_n \in \mathbb{C}\right\}_{n = 0, \ldots, N-1}$ the discrete convolution is, in analogy to the analytic convolution, defined as <a id='math:eq:8_010'></a><!--\label{math:eq:8_010}-->$$ \circ: \left\{y_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1}\times \left\{z_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1} \rightarrow \left\{r_k \in \mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1}\\ (y\circ z)_k = r_k = \sum_{n\,=\,0}^{N-1} y_n z_{k-n}. $$ However there is a bit of a subtlety in this definition.
We have to take into account that if $n > k$ the index $k-n$ will be negative. Since we have defined our indices as being strictly positive, this requires introducing what is sometimes referred to as the "wraparound" convention. Recall that complex numbers $r_k = e^{\frac{\imath 2\pi k}{N}}$ have the property that $r_{k \pm mN} = r_k$, where $m \in \mathbb{Z}$ is an integer. In the "wraparound" convention we map indices lying outside the range $0, \cdots , N-1$ into this range using the modulo operator. In other words we amend the definition as follows $$ (y\circ z)_k = r_k = \sum_{n\,=\,0}^{N-1} y_n z_{(k-n) \, \text{mod} \, N}, $$ where mod denotes the modulo operation. Just like the ordinary convolution, the discrete convolution is commutative. One important effect evident from this equation is that if the two series are "broad" enough, the convolution will be continued at the beginning of the series, an effect called aliasing. The convolution theorem (i.e. that convolution in one domain is the pointwise product in the other domain) is also valid for the DFT and the discrete convolution operator. We state the theorem here without proof (it is similar to the proof for the continuous case). Let $(y \odot z)_n \underset{def}{=} y_n ~ z_n$ (this is the Hadamard or component-wise product, we will encounter it again in $\S$ 2.10 &#10142;). Then, for Fourier pairs $Y_k$ and $y_n$, and $Z_k$ and $z_n$, we have <a id='math:eq:8_011'></a><!--\label{math:eq:8_011}-->$$ \forall N\,\in\, \mathbb{N}\\ \begin{align} y \,&=\, \left\{y_n \in \mathbb{C}\right\}_{n\,=\,0, \ldots, \,N-1}\\ z \,&=\, \left\{z_n \in \mathbb{C}\right\}_{n\,=\,0, \ldots, \,N-1}\\ Y \,&=\, \left\{Y_k \in \mathbb{C}\right\}_{k\,=\,0, \ldots, \,N-1}\\ Z \,&=\, \left\{Z_k \in \mathbb{C}\right\}_{k\,=\,0, \ldots, \,N-1}\\ \end{align}\\ \begin{split} \mathscr{F}_{\rm D}\{y\odot z\}\,&=\,\frac{1}{N}\mathscr{F}_{\rm D}\{y\}\circ \mathscr{F}_{\rm D}\{z\}\\ \mathscr{F}_{\rm D}^{-1}\{Y\odot Z\}\,&=\,\mathscr{F}_{\rm D}\{Y\}\circ \mathscr{F}_{\rm D}\{Z\}\\ \mathscr{F}_{\rm D}\{y\circ z\}\,&=\,\mathscr{F}_{\rm D}\{y\} \odot \mathscr{F}_{\rm D}\{z\}\\ \mathscr{F}_{\rm D}^{-1}\{Y\circ Z\}\,&=\,\frac{1}{N}\mathscr{F}_{\rm D}\{Y\} \odot \mathscr{F}_{\rm D}\{Z\}\\ \end{split} $$ 2.8.4. Numerically implementing the DFT <a id='math:sec:numerical_DFT'></a> We now turn to how the DFT is implemented numerically. The most direct way to do this is to sum the components in a double loop of the form End of explanation """ def matrix_DFT(x): """ Implementing the DFT in vectorised form Input: x = the vector we want to find the DFT of """ #Get the length of the vector (will only work for 1D arrays) N = x.size #Create index arrays for the rows and columns of the kernel matrix n = np.arange(N) k = n.reshape((N,1)) K = np.exp(-1j*2.0*np.pi*k*n/N) return K.dot(x) """ Explanation: Although this would produce the correct result, this way of implementing the DFT is going to be incredibly slow. The DFT can be implemented in matrix form. Convince yourself that a vectorised implementation of this operation can be achieved with $$ X = K x $$ where $K$ is the kernel matrix, which stores the values $K_{kn} = e^{\frac{-\imath 2 \pi k n}{N}}$. This is implemented numerically as follows End of explanation """ x = np.random.random(256) #create random vector to take the DFT of np.allclose(loop_DFT(x),matrix_DFT(x)) #check that the loop and matrix implementations agree """ Explanation: This function will be much faster than the previous implementation.
We should check that they both return the same result End of explanation """ x = np.random.random(256) #create random vector to take the DFT of np.allclose(np.fft.fft(x),matrix_DFT(x)) #compare the result using numpy's built in function """ Explanation: Just to be sure our DFT really works, let's also compare the output of our function to numpy's built in DFT function (note numpy automatically implements a faster version of the DFT called the FFT, see the discussion below) End of explanation """ #First we simulate a time series as the sum of a number of sinusoids each with a different frequency N = 512 #The number of samples of the time series tmin = -10 #The minimum value of the time coordinate tmax = 10 #The maximum value of the time coordinate t = np.linspace(tmin,tmax,N) #The time coordinate f1 = 1.0 #The frequency of the first sinusoid f2 = 2.0 #The frequency of the second sinusoid f3 = 3.0 #The frequency of the third sinusoid #Generate the signal y = np.sin(2.0*np.pi*f1*t) + np.sin(2.0*np.pi*f2*t) + np.sin(2.0*np.pi*f3*t) #Take the DFT Y = matrix_DFT(y) #Plot the absolute value, real and imaginary parts plt.figure(figsize=(15, 6)) plt.subplot(121) plt.stem(abs(Y)) plt.xlabel('$k$',fontsize=18) plt.ylabel(r'$|Y_k|$',fontsize=18) plt.subplot(122) plt.stem(np.angle(Y)) plt.xlabel('$k$',fontsize=18) plt.ylabel(r'phase$(Y_k)$',fontsize=18) """ Explanation: Great! Our function is returning the correct result. Next we do an example to demonstrate the duality between the spectral (frequency domain) and temporal (time domain) representations of a function. As the following example shows, the Fourier transform of a time series returns the frequencies contained in the signal. The following code simulates a signal of the form $$ y = \sin(2\pi f_1 t) + \sin(2\pi f_2 t) + \sin(2\pi f_3 t), $$ takes the DFT and plots the amplitude and phase of the resulting components $Y_k$. End of explanation """ #Get the sampling frequency delt = t[1] - t[0] fs = 1.0/delt k = np.arange(N) fk = k*fs/N plt.figure(figsize=(15, 6)) plt.subplot(121) plt.stem(fk,abs(Y)) plt.xlabel('$f_k$',fontsize=18) plt.ylabel(r'$|Y_k|$',fontsize=18) plt.subplot(122) plt.stem(fk,np.angle(Y)) plt.xlabel('$f_k$',fontsize=18) plt.ylabel(r'phase$(Y_k)$',fontsize=18) """ Explanation: Figure 2.8.1: Amplitude and phase plots of the fourier transform of a signal comprised of 3 different tones It is not immediately obvious that these are the frequencies contained in the signal. However, recall, from the definition given at the outset, that the frequencies are related to the index $k$ via $$ f_k = \frac{k f_s}{N}, $$ where $f_s$ is the sampling frequency (i.e. one divided by the sampling period). Let's see what happens if we plot the $X_k$ against the $f_k$ using the following bit of code End of explanation """ %timeit loop_DFT(x) %timeit matrix_DFT(x) """ Explanation: Figure 2.8.2: The fourier transformed signal labeled by frequency Here we see that the three main peaks correspond to the frequencies contained in the input signal viz. $f_1 = 1$Hz, $f_2 = 2$Hz and $f_3 = 3$Hz. But what do the other peaks mean? The additional frequency peaks are a consequence of the following facts: the DFT of a real valued signal is Hermitian (see Hermitian property of real valued signals &#10549;<!--\ref{math:eq:8_009}-->) so that $Y_{-k} = Y_k^*$, the DFT is periodic with period $N$ (see Periodicity of the DFT &#10549;<!--\ref{math:eq:8_007}-->) so that $Y_{k} = Y_{k+N}$. <br> When used together the above facts imply that $Y_{N-k} = Y_k^*$. 
This will be important in $\S$ 2.9 &#10142; when we discuss aliasing. Note that these additional frequency peaks contain no new information. We have not explained some of the features of the signal viz. Why are there non-zero components of $Y_k$ at frequencies that are not present in the input signal? Why do the three main peaks not contain the same amount of power? This is a bit unexpected since all three components of the input signal have the same amplitude. As we will see in $\S$ 2.9 &#10142;, these features result from the imperfect sampling of the signal. This is unavoidable in any practical application involving the DFT and will be a recurring theme throughout this course. You are encouraged to play with the parameters (e.g. the minimum $t_{min}$ and maximum $t_{max}$ values of the time coordinate, the number of samples $N$ (do not use $N > 10^5$ points or you might be here for a while), the frequencies of the input components etc.) to get a feel for what does and does not work. In particular try setting the number of samples to $N = 32$ and see if you can explain the output. It might also be a good exercise to implement the inverse DFT. We already mentioned that the vectorised version of the DFT above will be much faster than the loop version. We can see exactly how much faster with the following commands End of explanation """ %timeit loop_DFT(x) %timeit matrix_DFT(x) """ Explanation: That is almost a factor of ten difference. Let's compare this to numpy's built-in FFT End of explanation """ %timeit np.fft.fft(x) """ Explanation: That seems amazing! The numpy FFT is about 1000 times faster than our vectorised implementation. But how does numpy achieve this speed up? Well, by using the fast Fourier transform of course. <span style="background-color:red">BVH:AC:Point out that when taking a fourier transform of a real-valued signal it is only necessary to store the first N/2 + 1 samples. </span> 2.8.5. Fast Fourier transforms<a id='math:sec:fast_fourier_tranforms'></a> The DFT is a computationally expensive operation. As evidenced by the double loop required to implement the DFT, the computational complexity of a naive implementation such as ours scales like $\mathcal{O}(N^2)$ where $N$ is the number of data points. Even a vectorised version of the DFT will scale like $\mathcal{O}(N^2)$ since, in the end, there are still the same number of complex exponentiations and multiplications involved. By exploiting the symmetries of the DFT, it is not difficult to identify potential ways to save computing time. Looking at the definition of the discrete Fourier transform &#10549;<!--\ref{math:eq:8_004}-->, one can see that, under certain circumstances, the same summands occur multiple times. Recall that the DFT is periodic i.e. $Y_k = Y_{N+k}$, where $N$ is the number of data points. Now suppose that $N = 8$. In calculating the component $Y_2$ we would have to compute the quantity $y_2\,e^{-2{\pi}\imath\frac{2 \cdot 2}{8}}$ i.e. when $n = 2$.
However, using the periodicity of the kernel $e^{-2\pi\imath \frac{kn}{N}} = e^{-2\pi\imath \frac{k(n+N)}{N}}$, we can see that this same quantity will also have to be computed when calculating the component $Y_6$ since $y_2\,e^{-2{\pi}\imath\frac{2\cdot2}{8}}=y_2e^{-2{\pi}\imath\frac{6\cdot2}{8}} = y_2e^{-2{\pi}\imath\frac{12}{8}}$. If we were calculating the DFT by hand, it would be a waste of time to calculate this summand twice. To see how we can exploit this, let's first split the DFT into its odd and even $n$ indices as follows \begin{eqnarray} Y_{k} &=& \sum_{n = 0}^{N-1} y_n e^{-2\pi\imath \frac{kn}{N}}\\ &=& \sum_{m = 0}^{N/2-1} y_{2m} e^{-2\pi\imath \frac{k(2m)}{N}} + \sum_{m = 0}^{N/2-1} y_{2m+1} e^{-2\pi\imath \frac{k(2m+1)}{N}}\\ &=& \sum_{m = 0}^{N/2-1} y_{2m} e^{-2\pi\imath \frac{km}{N/2}} + e^{-2\pi\imath \frac{k}{N}}\sum_{m = 0}^{N/2-1} y_{2m+1} e^{-2\pi\imath \frac{km}{N/2}} \end{eqnarray} Notice that we have split the DFT into two terms which look very much like DFTs of length $N/2$, only with a slight adjustment on the indices. Importantly the form of the kernel (i.e. $e^{-2\pi\imath \frac{km}{N/2}}$) looks the same for both the odd and the even $n$ indices. Now, while $k$ is in the range $0, \cdots , N-1$, $m$ only ranges through $0,\cdots,N/2 - 1$. The DFT written in the above form will therefore be periodic with period $N/2$ and we can exploit this periodic property to compute the DFT with half the number of computations. See the code below for an explicit implementation. End of explanation """ def one_layer_FFT(x): """An implementation of the 1D Cooley-Tukey FFT using one layer""" N = x.size if N%2>0: print("Warning: length of x is not a power of two, returning DFT") return matrix_DFT(x) else: X_even = matrix_DFT(x[::2]) X_odd = matrix_DFT(x[1::2]) factor = np.exp(-2j * np.pi * np.arange(N) / N) return np.concatenate([X_even + factor[:N // 2] * X_odd, X_even + factor[N // 2:] * X_odd]) """ Explanation: Let's confirm that this function returns the correct result by comparing with numpy's FFT. End of explanation """ np.allclose(np.fft.fft(x),one_layer_FFT(x)) """ Explanation: End of explanation """
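Following up on the note above about real-valued signals (this snippet is an illustrative addition, not part of the original text): because the DFT of a real sequence is Hermitian, only the first N/2 + 1 components carry independent information, and numpy exposes this through np.fft.rfft.
import numpy as np

x = np.random.random(256)             # a real-valued test signal
X_full = np.fft.fft(x)                # all N complex components
X_half = np.fft.rfft(x)               # only the first N/2 + 1 components are stored

print(X_half.size)                                    # 129 = 256/2 + 1
print(np.allclose(X_full[:x.size // 2 + 1], X_half))  # the stored half agrees with the full DFT
print(np.allclose(X_full[-1], np.conj(X_full[1])))    # Hermitian symmetry: Y_{N-k} = Y_k^*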
metpy/MetPy
v0.5/_downloads/Inverse_Distance_Verification.ipynb
bsd-3-clause
import matplotlib.pyplot as plt import numpy as np from scipy.spatial import cKDTree from scipy.spatial.distance import cdist from metpy.gridding.gridding_functions import calc_kappa from metpy.gridding.interpolation import barnes_point, cressman_point from metpy.gridding.triangles import dist_2 plt.rcParams['figure.figsize'] = (15, 10) def draw_circle(x, y, r, m, label): nx = x + r * np.cos(np.deg2rad(list(range(360)))) ny = y + r * np.sin(np.deg2rad(list(range(360)))) plt.plot(nx, ny, m, label=label) """ Explanation: Inverse Distance Verification: Cressman and Barnes Compare inverse distance interpolation methods Two popular interpolation schemes that use inverse distance weighting of observations are the Barnes and Cressman analyses. The Cressman analysis is relatively straightforward and uses the ratio between distance of an observation from a grid cell and the maximum allowable distance to calculate the relative importance of an observation for calculating an interpolation value. Barnes uses the inverse exponential ratio of each distance between an observation and a grid cell and the average spacing of the observations over the domain. Algorithmically: A KDTree data structure is built using the locations of each observation. All observations within a maximum allowable distance of a particular grid cell are found in O(log n) time. Using the weighting rules for Cressman or Barnes analyses, the observations are given a proportional value, primarily based on their distance from the grid cell. The sum of these proportional values is calculated and this value is used as the interpolated value. Steps 2 through 4 are repeated for each grid cell. End of explanation """ np.random.seed(100) pts = np.random.randint(0, 100, (10, 2)) xp = pts[:, 0] yp = pts[:, 1] zp = xp * xp / 1000 sim_gridx = [30, 60] sim_gridy = [30, 60] """ Explanation: Generate random x and y coordinates, and observation values proportional to x * y. Set up two test grid locations at (30, 30) and (60, 60). End of explanation """ grid_points = np.array(list(zip(sim_gridx, sim_gridy))) radius = 40 obs_tree = cKDTree(list(zip(xp, yp))) indices = obs_tree.query_ball_point(grid_points, r=radius) """ Explanation: Set up a cKDTree object and query all of the observations within "radius" of each grid point. The variable indices represents the index of each matched coordinate within the cKDTree's data list. End of explanation """ x1, y1 = obs_tree.data[indices[0]].T cress_dist = dist_2(sim_gridx[0], sim_gridy[0], x1, y1) cress_obs = zp[indices[0]] cress_val = cressman_point(cress_dist, cress_obs, radius) """ Explanation: For grid 0, we will use Cressman to interpolate its value. End of explanation """ x2, y2 = obs_tree.data[indices[1]].T barnes_dist = dist_2(sim_gridx[1], sim_gridy[1], x2, y2) barnes_obs = zp[indices[1]] ave_spacing = np.mean((cdist(list(zip(xp, yp)), list(zip(xp, yp))))) kappa = calc_kappa(ave_spacing) barnes_val = barnes_point(barnes_dist, barnes_obs, kappa) """ Explanation: For grid 1, we will use barnes to interpolate its value. We need to calculate kappa--the average distance between observations over the domain. 
End of explanation """ for i, zval in enumerate(zp): plt.plot(pts[i, 0], pts[i, 1], '.') plt.annotate(str(zval) + ' F', xy=(pts[i, 0] + 2, pts[i, 1])) plt.plot(sim_gridx, sim_gridy, '+', markersize=10) plt.plot(x1, y1, 'ko', fillstyle='none', markersize=10, label='grid 0 matches') plt.plot(x2, y2, 'ks', fillstyle='none', markersize=10, label='grid 1 matches') draw_circle(sim_gridx[0], sim_gridy[0], m='k-', r=radius, label='grid 0 radius') draw_circle(sim_gridx[1], sim_gridy[1], m='b-', r=radius, label='grid 1 radius') plt.annotate('grid 0: cressman {:.3f}'.format(cress_val), xy=(sim_gridx[0] + 2, sim_gridy[0])) plt.annotate('grid 1: barnes {:.3f}'.format(barnes_val), xy=(sim_gridx[1] + 2, sim_gridy[1])) plt.axes().set_aspect('equal', 'datalim') plt.legend() """ Explanation: Plot all of the affiliated information and interpolation values. End of explanation """ plt.annotate('grid 0: ({}, {})'.format(sim_gridx[0], sim_gridy[0]), xy=(sim_gridx[0] + 2, sim_gridy[0])) plt.plot(sim_gridx[0], sim_gridy[0], '+', markersize=10) mx, my = obs_tree.data[indices[0]].T mz = zp[indices[0]] for x, y, z in zip(mx, my, mz): d = np.sqrt((sim_gridx[0] - x)**2 + (y - sim_gridy[0])**2) plt.plot([sim_gridx[0], x], [sim_gridy[0], y], '--') xave = np.mean([sim_gridx[0], x]) yave = np.mean([sim_gridy[0], y]) plt.annotate('distance: {}'.format(d), xy=(xave, yave)) plt.annotate('({}, {}) : {} F'.format(x, y, z), xy=(x, y)) plt.xlim(0, 80) plt.ylim(0, 80) plt.axes().set_aspect('equal', 'datalim') """ Explanation: For each point, we will do a manual check of the interpolation values by doing a step by step and visual breakdown. Plot the grid point, observations within radius of the grid point, their locations, and their distances from the grid point. End of explanation """ dists = np.array([22.803508502, 7.21110255093, 31.304951685, 33.5410196625]) values = np.array([0.064, 1.156, 3.364, 0.225]) cres_weights = (radius * radius - dists * dists) / (radius * radius + dists * dists) total_weights = np.sum(cres_weights) proportion = cres_weights / total_weights value = values * proportion val = cressman_point(cress_dist, cress_obs, radius) print('Manual cressman value for grid 1:\t', np.sum(value)) print('Metpy cressman value for grid 1:\t', val) """ Explanation: Step through the cressman calculations. End of explanation """ plt.annotate('grid 1: ({}, {})'.format(sim_gridx[1], sim_gridy[1]), xy=(sim_gridx[1] + 2, sim_gridy[1])) plt.plot(sim_gridx[1], sim_gridy[1], '+', markersize=10) mx, my = obs_tree.data[indices[1]].T mz = zp[indices[1]] for x, y, z in zip(mx, my, mz): d = np.sqrt((sim_gridx[1] - x)**2 + (y - sim_gridy[1])**2) plt.plot([sim_gridx[1], x], [sim_gridy[1], y], '--') xave = np.mean([sim_gridx[1], x]) yave = np.mean([sim_gridy[1], y]) plt.annotate('distance: {}'.format(d), xy=(xave, yave)) plt.annotate('({}, {}) : {} F'.format(x, y, z), xy=(x, y)) plt.xlim(40, 80) plt.ylim(40, 100) plt.axes().set_aspect('equal', 'datalim') """ Explanation: Now repeat for grid 1, except use barnes interpolation. End of explanation """ dists = np.array([9.21954445729, 22.4722050542, 27.892651362, 38.8329756779]) values = np.array([2.809, 6.241, 4.489, 2.704]) weights = np.exp(-dists**2 / kappa) total_weights = np.sum(weights) value = np.sum(values * (weights / total_weights)) print('Manual barnes value:\t', value) print('Metpy barnes value:\t', barnes_point(barnes_dist, barnes_obs, kappa)) """ Explanation: Step through barnes calculations. End of explanation """
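To round off the comparison (an optional sketch that is not part of the original notebook), the two weighting functions used in the manual checks above can be plotted against distance: the Cressman weight (R^2 - d^2)/(R^2 + d^2) and the Barnes weight exp(-d^2/kappa). The search radius matches the value used above, while the kappa value here is an arbitrary illustrative choice rather than the one computed from the station spacing.
import numpy as np
import matplotlib.pyplot as plt

radius = 40.0     # same maximum search radius as above
kappa = 500.0     # illustrative smoothing parameter for the Barnes weight (assumed value)
d = np.linspace(0, radius, 200)

cressman_weights = (radius**2 - d**2) / (radius**2 + d**2)
barnes_weights = np.exp(-d**2 / kappa)

plt.plot(d, cressman_weights, label='Cressman weight')
plt.plot(d, barnes_weights, label='Barnes weight')
plt.xlabel('distance from grid point')
plt.ylabel('relative weight')
plt.legend()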
rhiever/scipy_2015_sklearn_tutorial
notebooks/03.2 Methods - Unsupervised Preprocessing.ipynb
cc0-1.0
%matplotlib inline import matplotlib.pyplot as plt """ Explanation: Example from Image Processing End of explanation """ from sklearn import datasets lfw_people = datasets.fetch_lfw_people(min_faces_per_person=70, resize=0.4, data_home='datasets') lfw_people.data.shape """ Explanation: Using PCA to extract features Now we'll take a look at unsupervised learning on a facial recognition example. This uses a dataset available within scikit-learn consisting of a subset of the Labeled Faces in the Wild data. Note that this is a relatively large download (~200MB) so it may take a while to execute. End of explanation """ fig = plt.figure(figsize=(8, 6)) # plot several images for i in range(15): ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[]) ax.imshow(lfw_people.images[i], cmap=plt.cm.bone) """ Explanation: Let's visualize these faces to see what we're working with: End of explanation """ from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(lfw_people.data, lfw_people.target, random_state=0) print(X_train.shape, X_test.shape) """ Explanation: We'll do a typical train-test split on the images before performing unsupervised learning: End of explanation """ from sklearn import decomposition pca = decomposition.RandomizedPCA(n_components=150, whiten=True) pca.fit(X_train) """ Explanation: Feature Reduction Using Principal Component Analysis We can use PCA to reduce the original 1850 features of the face images to a manageable size, while maintaining most of the information in the dataset. Here it is useful to use a variant of PCA called RandomizedPCA, which is an approximation of PCA that can be much faster for large datasets. End of explanation """ plt.imshow(pca.mean_.reshape((50, 37)), cmap=plt.cm.bone) """ Explanation: One interesting part of PCA is that it computes the "mean" face, which can be interesting to examine: End of explanation """ print(pca.components_.shape) fig = plt.figure(figsize=(16, 6)) for i in range(30): ax = fig.add_subplot(3, 10, i + 1, xticks=[], yticks=[]) ax.imshow(pca.components_[i].reshape((50, 37)), cmap=plt.cm.bone) """ Explanation: The principal components measure deviations about this mean along orthogonal axes. It is also interesting to visualize these principal components: End of explanation """ X_train_pca = pca.transform(X_train) X_test_pca = pca.transform(X_test) print(X_train_pca.shape) print(X_test_pca.shape) """ Explanation: The components ("eigenfaces") are ordered by their importance from top-left to bottom-right. We see that the first few components seem to primarily take care of lighting conditions; the remaining components pull out certain identifying features: the nose, eyes, eyebrows, etc. With this projection computed, we can now project our original training and test data onto the PCA basis: End of explanation """
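As an optional check that is not part of the original notebook, one can ask how much of the total variance the 150 components actually capture by looking at the explained variance ratios of the fitted PCA object:
import numpy as np
import matplotlib.pyplot as plt

# 'pca' is the RandomizedPCA instance fitted above
cumulative = np.cumsum(pca.explained_variance_ratio_)
plt.plot(cumulative)
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
print('variance captured by 150 components: {:.2f}'.format(cumulative[-1]))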
sdss/marvin
docs/sphinx/tutorials/exercises/resolved_mass_metallicity_relation_SOLUTION.ipynb
bsd-3-clause
import numpy as np import matplotlib.pyplot as plt %matplotlib inline import os from os.path import join path_notebooks = os.path.abspath('.') path_data = join(path_notebooks, 'data') """ Explanation: Spatially-Resolved Mass-Metallicity Relation We're going to construct the spatially-resolved mass-metallicity relation (MZR) for a MaNGA galaxy, where mass refers to stellar mass and metallicity refers to gas-phase oxygen abundance. Roadmap Compute metallicity. Select spaxels that are star-forming, not flagged as "bad data," and above a signal-to-noise ratio threshold. Compute stellar mass surface density. Plot metallicity as a function of stellar mass surface density. End of explanation """ from marvin.tools.maps import Maps # REMOVE FROM NOTEBOOK filename = '/Users/andrews/hacks/galaxies-mzr/data/manga-8077-6104-MAPS-SPX-GAU-MILESHC.fits.gz' maps = Maps(filename=filename) # maps = Maps('8077-6104') """ Explanation: Load Maps for Galaxy Import the Marvin Maps class from marvin.tools.maps and initialize a Maps object for the galaxy 8077-6104. End of explanation """ nii = maps.emline_gflux_nii_6585 ha = maps.emline_gflux_ha_6564 """ Explanation: Measure Metallicity Pettini & Pagel (2004) N2 metallicity calibration We are going to use the N2 metallicity calibration (their Equation 1) from Pettini & Pagel (2004): 12 + log(O/H) = 8.90 + 0.57 $\times$ log( $\frac{F([NII])}{F(H\alpha)}$ ). One of the benefits of this calibration is that the required lines are very close in wavelength, so the reddening correction is negligible. Get [NII] 6585 and Halpha flux maps from the Marvin Maps object. Note: MaNGA (and Marvin) use the wavelengths of lines in vacuum, whereas they are usually reported in air, hence the slight offsets. End of explanation """ n2 = nii / ha logn2 = np.log10(n2) """ Explanation: Calculate the necessary line ratio. Marvin can do map arithmetic, which propagates the inverse variances and masks, so you can just do +, -, *, /, and ** operations as normal. (Note: taking the log of a Marvin Map will work for the values, but the inverse variances are not yet propagated correctly.) End of explanation """ oh = 8.90 + 0.57 * logn2 """ Explanation: Finally, calculate the metallicity. End of explanation """ masks_bpt, __, __ = maps.get_bpt() """ Explanation: Select Spaxels Using the BPT Diagram to select star-forming spaxels Metallicity indicators only work for star-forming spaxels, so we need a way to select only these spaxels. The classic diagnostic diagram for classifying the emission from galaxies (or galactic sub-regions) as star-forming or non-star-forming (i.e., from active galactic nuclei (AGN) or evolved stars) was originally proposed in Baldwin, Phillips, & Terlevich (1981) and is known as the BPT diagram. The BPT diagram uses ratios of emission lines to separate thermal and non-thermal emission. The classic BPT diagram uses [OIII]5007 / Hbeta vs. [NII]6583 / Halpha, but there are several versions of the BPT diagram that use different line ratios. BPT Diagrams with Marvin Let's use Marvin's maps.get_bpt() method to make BPT diagrams for this galaxy. red line: maximal starburst (Kewley et al. 2001) -- everything to the right is non-star-forming. dashed black line: conservative star-forming cut (Kauffmann et al. 2003) -- everything to the left is star-forming. Line ratios that fall in between these two lines are designated "Composite" with contributions from both star-forming and non-star-forming emission.
blue line: separates non-star-forming spaxels into Seyferts and LINERs. Seyferts are a type of AGN. LINERs (Low Ionization Nuclear Emission Regions) are not always nuclear (LIER is a better acronym) and not always AGN (often hot evolved stars). Sometimes these diagnostic diagrams disagree with each other, hence the "Ambiguous" designation. Try using maps.get_bpt? to read the documentation on how to use this function. End of explanation """ masks_bpt['sf']['global'] """ Explanation: The BPT masks are dictionaries of dictionaries of boolean (True/False) arrays. We are interested in the spaxels that are classified as star-forming in all three BPT diagrams; these are designated as True and stored under the global key. Print this mask. End of explanation """ n2.pixmask.schema """ Explanation: Masks MaNGA (and SDSS generally) use bitmasks to communicate data quality. Marvin has built-in methods to convert from the bitmask integer values to individual bits or labels and to create new masks by specifying a set of labels. Show the mask schema with n2.pixmask.schema. End of explanation """ mask_non_sf = ~masks_bpt['sf']['global'] * n2.pixmask.labels_to_value('DONOTUSE') """ Explanation: Select non-star-forming spaxels (from the BPT mask) and set their mask value to the DAP's DONOTUSE value with the n2.pixmask.labels_to_value() method. Note that we are selecting spaxels that we want from the BPT mask (i.e., True is a spaxel to keep), whereas we are using the pixmask to select spaxels that we want to exclude (i.e., True is a spaxel to ignore). End of explanation """ mask_bad_data = n2.pixmask.get_mask(['NOCOV', 'UNRELIABLE', 'DONOTUSE']) """ Explanation: Select spaxels classified by the DAP as bad data according to the masks for spaxels with no IFU coverage, with unreliable measurements, or otherwise unfit for science. Use the n2.pixmask.get_mask method. End of explanation """ min_snr = 3. mask_nii_low_snr = (np.abs(nii.value * np.sqrt(nii.ivar)) < min_snr) mask_ha_low_snr = (np.abs(ha.value * np.sqrt(ha.ivar)) < min_snr) """ Explanation: Select spaxels with signal-to-noise ratios (SNRs) > 3 on both [NII] 6585 and Halpha. ha.ivar = inverse variance = $\frac{1}{\sigma^2}$, where $\sigma$ is the error. End of explanation """ mask = mask_non_sf | mask_bad_data | mask_nii_low_snr | mask_ha_low_snr """ Explanation: Do a bitwise (binary) OR to create a master mask of spaxels to ignore. End of explanation """ fig, ax = oh.plot(mask=mask, cblabel='12+log(O/H)') """ Explanation: Plot the Metallicity Map Plot the map of metallicity using the plot() method from your Marvin Map metallicity object. Also, mask undesirable spaxels and label the colorbar. Note: solar metallicity is about 8.7. End of explanation """ import pandas as pd mstar = pd.read_csv(join(path_data, 'manga-{}_mstar.csv'.format(maps.plateifu))) """ Explanation: Compute Stellar Mass Surface Density Read in spaxel stellar mass measurements from the Firefly spectral fitting catalog (Goddard et al. 2017). The Firefly stellar population fitting results file is large (1.8 GB), so we have extracted only the measurements needed for our purposes DOWNLOAD CSV FILE. Summary of the Firefly Value Added Catalog Datamodel of the Firefly Value Added Catalog Convert spaxel angular size to a physical scale in pc. Divide stellar mass by area to get stellar mass surface density. Read in stellar masses Use pandas to read in the csv file with stellar masses.
End of explanation """ fig, ax = plt.subplots() p = ax.imshow(mstar, origin='lower') ax.set_xlabel('spaxel') ax.set_ylabel('spaxel') cb = fig.colorbar(p) cb.set_label('log(Mstar) [M$_\odot$]') """ Explanation: Plot stellar mass map using ax.imshow(). MaNGA maps are oriented such that you want to specify origin='lower'. Also include a labelled colorbar. End of explanation """ spaxel_size = 0.5 # [arcsec] # or programmatically: # spaxel_size = float(maps.getCube().header['CD2_2']) * 3600 """ Explanation: Calculate physical size of a spaxel MaNGA's maps (and data cubes) have a spaxel size of 0.5 arcsec. Let's convert that into a physical scale for our galaxy. End of explanation """ redshift = maps.nsa['z'] """ Explanation: Get the redshift of the galaxy from the maps.nsa attribute. End of explanation """ c = 299792 # speed of light [km/s] H0 = 70 # [km s^-1 Mpc^-1] D = c * redshift / H0 # approx. distance to galaxy [Mpc] """ Explanation: We'll use the small angle approximation to estimate the physical scale: $\theta = \mathrm{tan}^{-1}(\frac{d}{D}) \approx \frac{206,265 \, \mathrm{arcsec}}{1 \, \mathrm{radian}} \frac{d}{D}$, where $\theta$ is the angular size of the object (in our case spaxel) in arcsec, $d$ is the diameter of the object (spaxel), and $D$ is the angular diameter distance. The distance (via the Hubble Law --- which is fairly accurate for low redshift objects) is $D \approx \frac{cz}{H_0}$, where $c$ is the speed of light in km/s, $z$ is the redshift, and $H_0$ is the Hubble constant in km/s/Mpc. Calculate $D$. End of explanation """ scale = 1 / 206265 * D * 1e6 # 1 radian = 206265 arcsec [pc / arcsec] """ Explanation: Rearrange the small angle formula to solve for the scale ($\frac{d}{\theta}$) in pc / arcsec. End of explanation """ spaxel_area = (scale * spaxel_size)**2 # [pc^2] """ Explanation: Now convert the spaxel size from arcsec to parsecs and calculate the area of a spaxel. End of explanation """ sigma_star = np.log10(10**mstar / spaxel_area) # [Msun / pc^2] """ Explanation: Finally, we simply divide the stellar mass by the area to get the stellar mass surface density $\Sigma_\star$ in units of $\frac{M_\odot}{pc^2}$. End of explanation """ fig, ax = plt.subplots(figsize=(6, 6)) ax.scatter(sigma_star.values[mask == 0], oh.value[mask == 0], alpha=0.15) ax.set_xlabel('log(Mstar) [M$_\odot$]') ax.set_ylabel('12+log(O/H)') ax.axis([0, 4, 8.0, 8.8]) """ Explanation: Let's plot metallicity as a function of $\Sigma_\star$! Remember to apply the mask. Also set the axis range to be [0, 4, 8, 8.8]. End of explanation """ # fitting formula aa = 8.55 bb = 0.014 cc = 3.14 xx = np.linspace(1, 3, 1000) yy = aa + bb * (xx - cc) * np.exp(-(xx - cc)) """ Explanation: MaNGA Spatially-Resolved Mass-Metallicity Relation We have constructed the spatially-resolved MZR for one galaxy, but we are interested in understanding the evolution of galaxies in general, so we want to repeat this exercise for many galaxies. In Barrera-Ballesteros et al. (2016), Jorge Barrera-Ballesteros (who gave a talk at Pitt in November 2017) did just this, and here is the analogous figure for 653 disk galaxies. <img src="images/barrera-ballesteros_local_mzr.png" style="width: 400px;"/> The best fit line from Barrera-Ballesteros et al. (2016) is given in the next cell. 
End of explanation """ fig, ax = plt.subplots(figsize=(6, 6)) ax.scatter(sigma_star.values[mask == 0], oh.value[mask == 0], alpha=0.15) ax.plot(xx, yy) ax.set_xlabel('log(Mstar) [M$_\odot$]') ax.set_ylabel('12+log(O/H)') ax.axis([0, 4, 8.0, 8.8]) """ Explanation: Remake the spatially-resolved MZR plot for our galaxy showing the best fit line from Barrera-Ballesteros et al. (2016). End of explanation """
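As an optional extension that goes beyond the original exercise, the cloud of spaxels can be summarized by the median metallicity in bins of $\Sigma_\star$ and that running median overplotted together with the Barrera-Ballesteros et al. (2016) fit. The bin edges below are an arbitrary illustrative choice, and the arrays (sigma_star, oh, mask, xx, yy) are the ones defined earlier in this notebook.
import numpy as np
import matplotlib.pyplot as plt

good = (mask == 0)
logsig = sigma_star.values[good].ravel()
met = oh.value[good].ravel()

bins = np.linspace(1.0, 3.0, 11)   # arbitrary bin edges in log(Sigma_star)
centers = 0.5 * (bins[:-1] + bins[1:])
medians = [np.median(met[(logsig >= lo) & (logsig < hi)]) if np.any((logsig >= lo) & (logsig < hi)) else np.nan
           for lo, hi in zip(bins[:-1], bins[1:])]

fig, ax = plt.subplots(figsize=(6, 6))
ax.scatter(logsig, met, alpha=0.15)
ax.plot(xx, yy, label='Barrera-Ballesteros et al. (2016)')
ax.plot(centers, medians, 'o-', label='running median')
ax.set_xlabel('log($\Sigma_\star$) [M$_\odot$ pc$^{-2}$]')
ax.set_ylabel('12+log(O/H)')
ax.axis([0, 4, 8.0, 8.8])
ax.legend()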
prasants/pyds
10.Visualise_This.ipynb
mit
""" We begin by using an inbuilt iPython Magic function to display plots within the window. """ %matplotlib inline import matplotlib.pyplot as plt import matplotlib print(matplotlib.__version__) """ Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Introduction-to-Matplotlib" data-toc-modified-id="Introduction-to-Matplotlib-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Introduction to Matplotlib</a></div><div class="lev1 toc-item"><a href="#Reference-Section" data-toc-modified-id="Reference-Section-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Reference Section</a></div><div class="lev2 toc-item"><a href="#Colour/Color-Codes" data-toc-modified-id="Colour/Color-Codes-21"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Colour/Color Codes</a></div><div class="lev2 toc-item"><a href="#Linestyle-Codes" data-toc-modified-id="Linestyle-Codes-22"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>Linestyle Codes</a></div><div class="lev2 toc-item"><a href="#Marker-Codes" data-toc-modified-id="Marker-Codes-23"><span class="toc-item-num">2.3&nbsp;&nbsp;</span>Marker Codes</a></div><div class="lev2 toc-item"><a href="#British-v-American-Spellings" data-toc-modified-id="British-v-American-Spellings-24"><span class="toc-item-num">2.4&nbsp;&nbsp;</span>British v American Spellings</a></div><div class="lev2 toc-item"><a href="#Style-Guide" data-toc-modified-id="Style-Guide-25"><span class="toc-item-num">2.5&nbsp;&nbsp;</span>Style Guide</a></div><div class="lev1 toc-item"><a href="#Line-Plots" data-toc-modified-id="Line-Plots-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Line Plots</a></div><div class="lev2 toc-item"><a href="#Another-Line-Plot" data-toc-modified-id="Another-Line-Plot-31"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Another Line Plot</a></div><div class="lev2 toc-item"><a href="#More-Parameters-for-Line-Plots" data-toc-modified-id="More-Parameters-for-Line-Plots-32"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>More Parameters for Line Plots</a></div><div class="lev1 toc-item"><a href="#Bar-Plots" data-toc-modified-id="Bar-Plots-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Bar Plots</a></div><div class="lev1 toc-item"><a href="#Histograms" data-toc-modified-id="Histograms-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Histograms</a></div><div class="lev1 toc-item"><a href="#Scatterplots" data-toc-modified-id="Scatterplots-6"><span class="toc-item-num">6&nbsp;&nbsp;</span>Scatterplots</a></div><div class="lev3 toc-item"><a href="#Another-Scatterplot-Example" data-toc-modified-id="Another-Scatterplot-Example-601"><span class="toc-item-num">6.0.1&nbsp;&nbsp;</span>Another Scatterplot Example</a></div><div class="lev1 toc-item"><a href="#Grids" data-toc-modified-id="Grids-7"><span class="toc-item-num">7&nbsp;&nbsp;</span>Grids</a></div><div class="lev1 toc-item"><a href="#Saving-Plots" data-toc-modified-id="Saving-Plots-8"><span class="toc-item-num">8&nbsp;&nbsp;</span>Saving Plots</a></div> # Introduction to Matplotlib Visualisations are a very powerful way for humans to get inferences about data. It allows us to abstract huge amounts of information into easy digestible graphs.<br> Python has a wonderful tool called Matplotlib, which incidentally is inspired by Matlab's visualisation library. Let's begin with a few basic plots.<br> We will also start incorporating more and more data visualisations in the next two sections, so it's not restricted to just toy problems. 
End of explanation """ %matplotlib inline import matplotlib.pyplot as chuck_norris y = [1,2,3,4,5,4,3,2,1] x = [2,4,6,8,10,12,10,8,6] chuck_norris.plot(x, y, marker='D', linestyle='-.', color='m') chuck_norris.plot([1,2,3,4,5,4,3,2,1], marker='^', linestyle='-', color='r') chuck_norris.ylabel('Numbers') #chuck_norris.show() """ Explanation: import matplotlib.pyplot as plt is python convention. <br> If you want, you can potentially write import matplotlib.pyplot as chuck_norris as below. <br> 'as plt' is the accepted convention though, and helps you write code with speed. Reference Section Colour/Color Codes | Colour Code | Colour | |:-----------:|:-------:| | r | Red | | b | Blue | | g | Green | | c | Cyan | | m | Magenta | | y | Yellow | | k | Black | | w | White | Linestyle Codes | Linestyle Code | Displayed Line Style | |:--------------:|:--------------------:| | – | Solid Line | | — | Dashed Line | | : | Dotted Line | | -. | Dash-Dotted Line | | None | No Connecting Lines | Marker Codes | Marker Code | Marker Displayed | |:-----------:|:----------------:| | + | Plus Sign | | . | Dot | | o | Circle | | ^ | Triangle | | p | Pentagon | | s | Square | | x | X Character | | D | Diamond | | h | Hexagon | | * | Asterisk | British v American Spellings British spellings often give errors like these: AttributeError: Unknown property colour To be on the safer side, use color, unless if you're using R packages written by Hadley Wickham. Style Guide http://matplotlib.org/examples/style_sheets/style_sheets_reference.html Line Plots End of explanation """ %matplotlib inline import matplotlib.pyplot as plt x = [1, 2, 3, 4, 5] y = [1, 4, 9, 16, 25] # We have two lists, or more in mathematical terms, arrays, x and y plt.plot(x, y) """ Explanation: So as you see, the convention plt can save you from typing chuck_norris every single time. Back to business though. Let's reimport matplotlib. Another Line Plot End of explanation """ # Import libraries import matplotlib.pyplot as plt %matplotlib inline # Prepare the data x = [1, 2, 3, 4, 5] y = [1, 4, 9, 16, 25] # Plot the data plt.plot(x,y, label='Sales') # Add a legend plt.legend() # Add more information plt.xlabel('Adwords Spending (ZIM $)') plt.ylabel('Monthly Sales (Oranges)') plt.title('Effect of Adwords Spending on Monthly Sales') """ Explanation: Let's break down what's happening. End of explanation """ plt.rcParams["figure.figsize"] = (15,7) # Plot the data plt.plot(x, y, label='Sales') # Add a legend plt.legend() # Add more information plt.xlabel('Adwords Spending (ZIM $)') plt.ylabel('Monthly Sales (Oranges)') plt.title('Effect of Adwords Spending on Monthly Sales') """ Explanation: But this is too small. Let's specify the size of the plot. Note that you set it once at the very top, right after you import your libraries, or keep varying it every time you want to plot a graph. 
End of explanation """ %matplotlib inline import matplotlib.pyplot as plt y = [1,4,9,16,25,36,49,64,81,100] x1 = [5,10,15,20,25,30,35,40,45,47] x2 = [1,1,2,3,5,8,13,21,34,53] plt.rcParams["figure.figsize"] = (15,7) plt.plot(y,x1, marker='+', linestyle='--', color='b',label='Blue Shift') plt.plot(y,x2, marker='o', linestyle='-', color='r', label='Red Shift') plt.xlabel('Days to Election') plt.ylabel('Popularity') plt.title('Candidate Popularity') plt.legend(loc='lower right') """ Explanation: More Parameters for Line Plots End of explanation """ %matplotlib inline import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (15,7) # Declare Values vals = [10, 5, 3, 5, 7,6] xval = [1, 2, 3, 4, 5,6] # Bar Plot plt.bar(xval, vals) plt.title('Sales per Executive') plt.xlabel('ID Number') plt.ylabel('Weekly Sales') """ Explanation: Bar Plots End of explanation """ import numpy as np import matplotlib.pyplot as plt % matplotlib inline plt.rcParams["figure.figsize"] = (15,7) Y = [] for x in range(0,1000000): Y.append(np.random.randn()) # Here 50 is the bin size. Try playing around with 10,100,200 etc and see how it effects the shape of the graph plt.hist(Y, 500) plt.title('Distribution of Random Numbers') """ Explanation: Histograms End of explanation """ radius = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0] # We import the math library. # This can also be done as from math import pi # Then instead of math.pi, we simply use pi import math import matplotlib.pyplot as plt % matplotlib inline plt.rcParams["figure.figsize"] = (15,7) # How awesome is list comprehension!! area = [round((r**2)*math.pi,2) for r in radius] print(area) plt.xlabel('Radius') plt.ylabel('Area') plt.title('Radius of Circle v Area') plt.scatter(radius, area, color='g', s=30) """ Explanation: Scatterplots End of explanation """ %matplotlib inline import matplotlib.pyplot as plt import numpy as np plt.rcParams["figure.figsize"] = (15,7) x = np.random.randn(1, 500) y = np.random.randn(1,500) plt.scatter(x, y, color='b', s=50) # s = size of the point plt.xlabel('X axis') plt.ylabel('Y axis') plt.title('Scatter Plot') """ Explanation: Another Scatterplot Example End of explanation """ %matplotlib inline import matplotlib.pyplot as plt import numpy as np plt.rcParams["figure.figsize"] = (15,7) fig = plt.figure() # 121 = row,column,plot number # Plot for Left Hand Side - 121 means imgage1 = fig.add_subplot(121) N=500 x = np.random.randn(N) y = np.random.randn(N) colors = np.random.rand(N) size =(20 * np.random.rand(N))**2 plt.scatter(x, y, s=size, c=colors, alpha=0.4) # Plot for Right Hand Side imgage2 = fig.add_subplot(122) N=1000 x1 = np.random.randn(N) y1 = np.random.randn(N) area= (5 * np.random.rand(N))**3 colors = ['magenta', 'blue', 'black', 'yellow',] plt.scatter(x1, y1, s=area, c=colors, alpha=0.6) imgage2.grid(True) %matplotlib inline import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (15,7) y = [1,4,9,16,25,36,49,64,81,100] x1 = [5,10,15,20,25,30,35,40,45,47] x2 = [1,1,2,3,5,8,13,21,34,53] fig = plt.figure() fig.suptitle("Candidate Popularity", fontsize="x-large") # 121 = row,column,plot number # Plot for Left Hand Side - 121 means imgage011 = fig.add_subplot(121) plt.xlabel('Days to Election') plt.plot(y,x1, marker='+', linestyle='--', color='b') # Plot for Right Hand Side imgage2 = fig.add_subplot(122) plt.xlabel('Days to Election') plt.plot(y,x2, marker='o', linestyle='-', color='r') #imgage2.grid(True) ## Alternate Method %matplotlib inline import matplotlib.pyplot as plt 
plt.rcParams["figure.figsize"] = (15,7) fig = plt.figure() fig.suptitle("Candidate Popularity", fontsize="x-large") ax1 = fig.add_subplot(121) ax1.plot(y, x1, 'r-') ax1.set_title("Candidate 1") ax2 = fig.add_subplot(122) ax2.plot(y, x2, 'k-') ax2.set_title("Candidate 2") plt.tight_layout() fig = plt.gcf() """ Explanation: Grids End of explanation """ %matplotlib inline import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (15,7) y = [1,4,9,16,25,36,49,64,81,100] x1 = [5,10,15,20,25,30,35,40,45,47] x2 = [1,1,2,3,5,8,13,21,34,53] fig = plt.figure() fig.suptitle("Candidate Popularity", fontsize="x-large") # 121 = row,column,plot number # Plot for Left Hand Side - 121 means imgage011 = fig.add_subplot(121) plt.xlabel('Days to Election') plt.plot(y,x1, marker='+', linestyle='--', color='b') # Plot for Right Hand Side imgage2 = fig.add_subplot(122) plt.xlabel('Days to Election') plt.plot(y,x2, marker='o', linestyle='-', color='r') #imgage2.grid(True) # Save Figure plt.savefig("images/pop.png") # Save Transparent Figure plt.savefig("images/pop2.png", transparent=True) """ Explanation: Saving Plots End of explanation """
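Two optional savefig arguments that often come in handy (the values below are just an illustration): dpi controls the output resolution and bbox_inches='tight' crops away surrounding whitespace.
%matplotlib inline
import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4], [1, 4, 9, 16])
plt.title('Saving with extra options')

# Higher-resolution output, cropped tightly around the drawn content
plt.savefig('images/pop_hires.png', dpi=300, bbox_inches='tight')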
oliverlee/pydy
examples/npendulum/n-pendulum-control.ipynb
bsd-3-clause
from IPython.display import SVG SVG(filename='n-pendulum-with-cart.svg') """ Explanation: Introduction Several pieces of the puzzle have come together lately to really demonstrate the power of the scientific python software packages to handle complex dynamic and controls problems (i.e. IPython notebooks, matplotlib animations, python-control, and our software packages: sympy.physics.mechanics and PyDy). This blog post by Wolfram demonstrates Mathematica's ability to symbolically derive the equations of motion for the n-link pendulum and stabilize it with an LQR controller. This blog post inspired us to replicate the example with all free and open source software. In this example problem, we derive the equations of motion of an n-link pendulum on a laterally sliding cart and then develop a controller to stabilize it. Balancing a single inverted pendulum is a classic problem that is often a student's first experience with non-linear dynamics and control. The problem here is extended to a general n-link pendulum in which the equations of motion quickly get messy with greater than 2 links. The diagram below shows the general description of the problem. End of explanation """ from __future__ import division, print_function import sympy as sm import sympy.physics.mechanics as me """ Explanation: Setup This example depends on the following software: IPython NumPy SciPy SymPy >= 0.7.6 matplotlib The easiest way to install the Python packages is to use conda: $ conda install ipython-notebook numpy scipy sympy matplotlib To create animations you need a video encoder like ffmpeg installed. Equations of Motion We'll start by generating the equations of motion for the system with SymPy mechanics. The functionality that mechanics provides is much more in depth than Mathematica's functionality. In the Mathematica example, Lagrangian mechanics were implemented manually with Mathematica's symbolic functionality. mechanics provides an assortment of functions and classes to derive the equations of motion for arbitrarily complex (i.e. configuration constraints, nonholonomic motion constraints, etc) multibody systems in a very natural way. First we import the necessary functionality from SymPy. End of explanation """ me.init_vprinting() """ Explanation: We can enable mathematical rendering of the resulting equations in the notebook with the following command. End of explanation """ n = 5 """ Explanation: Now specify the number of links, $n$. I'll start with 5 since the Wolfram folks only showed four. End of explanation """ q = me.dynamicsymbols('q:{}'.format(n + 1)) # Generalized coordinates u = me.dynamicsymbols('u:{}'.format(n + 1)) # Generalized speeds f = me.dynamicsymbols('f') # Force applied to the cart m = sm.symbols('m:{}'.format(n + 1)) # Mass of each bob l = sm.symbols('l:{}'.format(n)) # Length of each link g, t = sm.symbols('g t') # Gravity and time """ Explanation: mechanics will need the generalized coordinates, generalized speeds, and the input force which are all time dependent variables and the bob masses, link lengths, and acceleration due to gravity which are all constants. Time, $t$, is also made available because we will need to differentiate with respect to time. End of explanation """ I = me.ReferenceFrame('I') # Inertial reference frame O = me.Point('O') # Origin point O.set_vel(I, 0) # Origin's velocity is zero """ Explanation: Now we can create an inertial reference frame $I$ and define the point, $O$, as the origin.
End of explanation """ P0 = me.Point('P0') # Hinge point of top link P0.set_pos(O, q[0] * I.x) # Set the position of P0 P0.set_vel(I, u[0] * I.x) # Set the velocity of P0 Pa0 = me.Particle('Pa0', P0, m[0]) # Define a particle at P0 """ Explanation: Secondly, we define the first point of the pendulum as a particle which has mass. This point can only move laterally and represents the motion of the "cart". End of explanation """ frames = [I] # List to hold the n + 1 frames points = [P0] # List to hold the n + 1 points particles = [Pa0] # List to hold the n + 1 particles forces = [(P0, f * I.x - m[0] * g * I.y)] # List to hold the n + 1 applied forces, including the input force, f kindiffs = [q[0].diff(t) - u[0]] # List to hold kinematic ODE's for i in range(n): Bi = I.orientnew('B' + str(i), 'Axis', [q[i + 1], I.z]) # Create a new frame Bi.set_ang_vel(I, u[i + 1] * I.z) # Set angular velocity frames.append(Bi) # Add it to the frames list Pi = points[-1].locatenew('P' + str(i + 1), l[i] * Bi.x) # Create a new point Pi.v2pt_theory(points[-1], I, Bi) # Set the velocity points.append(Pi) # Add it to the points list Pai = me.Particle('Pa' + str(i + 1), Pi, m[i + 1]) # Create a new particle particles.append(Pai) # Add it to the particles list forces.append((Pi, -m[i + 1] * g * I.y)) # Set the force applied at the point kindiffs.append(q[i + 1].diff(t) - u[i + 1]) # Define the kinematic ODE: dq_i / dt - u_i = 0 """ Explanation: Now we can define the $n$ reference frames, particles, gravitational forces, and kinematical differential equations for each of the pendulum links. This is easily done with a loop. End of explanation """ kane = me.KanesMethod(I, q_ind=q, u_ind=u, kd_eqs=kindiffs) # Initialize the object fr, frstar = kane.kanes_equations(forces, particles) # Generate EoM's fr + frstar = 0 """ Explanation: With all of the necessary point velocities and particle masses defined, the KanesMethod class can be used to derive the equations of motion of the system automatically. End of explanation """ sm.trigsimp(kane.mass_matrix) """ Explanation: The equations of motion are quite long as can be seen below. This is the general nature of most non-simple multibody problems. That is why SymPy is so useful; no more mistakes in algebra, differentiation, or copying hand-written equations. Note that trigsimp can take quite a while to complete for extremely large expressions. Below we print $\tilde{M}$ and $\tilde{f}$ from $\tilde{M}\dot{u}=\tilde{f}$ to show the size of the expressions. End of explanation """ me.find_dynamicsymbols(kane.mass_matrix) sm.trigsimp(kane.forcing) """ Explanation: $\tilde{M}$ is a function of the constant parameters and the configuration. End of explanation """ me.find_dynamicsymbols(kane.forcing) """ Explanation: $\tilde{f}$ is a function of the constant parameters, configuration, speeds, and the applied force. End of explanation """ import numpy as np from numpy.linalg import solve from scipy.integrate import odeint """ Explanation: Simulation Now that the symbolic equations of motion are available we can simulate the pendulum's motion. We will need some more SymPy functionality and several NumPy functions, and most importantly the integration function from SciPy, odeint. End of explanation """ arm_length = 1.
/ n # The maximum length of the pendulum is 1 meter bob_mass = 0.01 / n # The maximum mass of the bobs is 10 grams parameters = [g, m[0]] # Parameter definitions starting with gravity and the first bob parameter_vals = [9.81, 0.01 / n] # Numerical values for the first two for i in range(n): # Then each mass and length parameters += [l[i], m[i + 1]] parameter_vals += [arm_length, bob_mass] """ Explanation: First, define some numeric values for all of the constant parameters in the problem. End of explanation """ dynamic = q + u # Make a list of the states dynamic.append(f) # Add the input force M_func = sm.lambdify(dynamic + parameters, kane.mass_matrix_full) # Create a callable function to evaluate the mass matrix f_func = sm.lambdify(dynamic + parameters, kane.forcing_full) # Create a callable function to evaluate the forcing vector """ Explanation: Mathematica has a really nice NDSolve function for quickly integrating their symbolic differential equations. We make use of SymPy's lambdify function to do something similar, i.e. to create functions that will evaluate the "full" mass matrix, $M$, and "full" forcing vector, $f$ from $M\dot{x} = f(x, r, t)$ as a NumPy function. End of explanation """ def right_hand_side(x, t, args): """Returns the derivatives of the states. Parameters ---------- x : ndarray, shape(2 * (n + 1)) The current state vector. t : float The current time. args : ndarray The constants. Returns ------- dx : ndarray, shape(2 * (n + 1)) The derivative of the state. """ r = 0.0 # The input force is always zero arguments = np.hstack((x, r, args)) # States, input, and parameters dx = np.array(solve(M_func(*arguments), # Solving for the derivatives f_func(*arguments))).T[0] return dx """ Explanation: To integrate the ODE's we need to define a function that returns the derivatives of the states given the current state and time. End of explanation """ x0 = np.hstack((0.0, # q0 np.pi / 2 * np.ones(len(q) - 1), # q1...qn+1 1e-3 * np.ones(len(u)))) # u0...un+1 t = np.linspace(0.0, 10.0, num=500) # Time vector x = odeint(right_hand_side, x0, t, args=(parameter_vals,)) # Numerical integration """ Explanation: Now that we have the right hand side function, the initial conditions are set such that the pendulum is in the vertical equilibrium and a slight initial rate is set for each speed to ensure the pendulum falls. The equations can then be integrated with SciPy's odeint function given a time series. End of explanation """ import matplotlib.pyplot as plt %matplotlib inline from IPython.core.pylabtools import figsize figsize(8.0, 6.0) """ Explanation: Plotting The results of the simulation can be plotted with matplotlib. First, load the plotting functionality. End of explanation """ lines = plt.plot(t, x[:, :x.shape[1] // 2]) lab = plt.xlabel('Time [sec]') leg = plt.legend(dynamic[:x.shape[1] // 2]) """ Explanation: The coordinate trajectories are plotted below. End of explanation """ lines = plt.plot(t, x[:, x.shape[1] // 2:]) lab = plt.xlabel('Time [sec]') leg = plt.legend(dynamic[x.shape[1] // 2:]) """ Explanation: And the generalized speed trajectories. End of explanation """ from matplotlib import animation from matplotlib.patches import Rectangle """ Explanation: Animation matplotlib now includes very nice animation functions for animating matplotlib plots. First we import the necessary functions for creating the animation. End of explanation """ def animate_pendulum(t, states, length, filename=None): """Animates the n-pendulum and optionally saves it to file. 
Parameters ---------- t : ndarray, shape(m) Time array. states: ndarray, shape(m,p) State time history. length: float The length of the pendulum links. filename: string or None, optional If true a movie file will be saved of the animation. This may take some time. Returns ------- fig : matplotlib.Figure The figure. anim : matplotlib.FuncAnimation The animation. """ # the number of pendulum bobs numpoints = states.shape[1] // 2 # first set up the figure, the axis, and the plot elements we want to animate fig = plt.figure() # some dimesions cart_width = 0.4 cart_height = 0.2 # set the limits based on the motion xmin = np.around(states[:, 0].min() - cart_width / 2.0, 1) xmax = np.around(states[:, 0].max() + cart_width / 2.0, 1) # create the axes ax = plt.axes(xlim=(xmin, xmax), ylim=(-1.1, 1.1), aspect='equal') # display the current time time_text = ax.text(0.04, 0.9, '', transform=ax.transAxes) # create a rectangular cart rect = Rectangle([states[0, 0] - cart_width / 2.0, -cart_height / 2], cart_width, cart_height, fill=True, color='red', ec='black') ax.add_patch(rect) # blank line for the pendulum line, = ax.plot([], [], lw=2, marker='o', markersize=6) # initialization function: plot the background of each frame def init(): time_text.set_text('') rect.set_xy((0.0, 0.0)) line.set_data([], []) return time_text, rect, line, # animation function: update the objects def animate(i): time_text.set_text('time = {:2.2f}'.format(t[i])) rect.set_xy((states[i, 0] - cart_width / 2.0, -cart_height / 2)) x = np.hstack((states[i, 0], np.zeros((numpoints - 1)))) y = np.zeros((numpoints)) for j in np.arange(1, numpoints): x[j] = x[j - 1] + length * np.cos(states[i, j]) y[j] = y[j - 1] + length * np.sin(states[i, j]) line.set_data(x, y) return time_text, rect, line, # call the animator function anim = animation.FuncAnimation(fig, animate, frames=len(t), init_func=init, interval=t[-1] / len(t) * 1000, blit=True, repeat=False) # save the animation if a filename is given if filename is not None: anim.save(filename, fps=30, codec='libx264') """ Explanation: The following function was modeled from Jake Vanderplas's post on matplotlib animations. The default animation writer is used (typically ffmpeg), you can change it by adding writer argument to anim.save call. End of explanation """ animate_pendulum(t, x, arm_length, filename="open-loop.mp4") from IPython.display import HTML html = \ """ <video width="640" height="480" controls> <source src="open-loop.mp4" type="video/mp4"> Your browser does not support the video tag, check out the YouTube version instead: http://youtu.be/Nj3_npq7MZI. </video> """ HTML(html) """ Explanation: Now we can create the animation of the pendulum. This animation will show the open loop dynamics. End of explanation """ equilibrium_point = [sm.S(0)] + [sm.pi / 2] * (len(q) - 1) + [sm.S(0)] * len(u) equilibrium_dict = dict(zip(q + u, equilibrium_point)) equilibrium_dict """ Explanation: Controller Design The n-link pendulum can be balanced such that all of the links are inverted above the cart by applying the correct lateral force to the cart. We can design a full state feedback controller based from a linear model of the pendulum about its upright equilibrium point. We'll start by specifying the equilibrium point and parameters in dictionaries. We make sure to use SymPy types in the equilibrium point to ensure proper cancelations in the linearization. 
End of explanation """ M, F_A, F_B, r = kane.linearize(new_method=True, op_point=equilibrium_dict) sm.simplify(M) sm.simplify(F_A) sm.simplify(F_B) """ Explanation: The KanesMethod class has method that linearizes the forcing vector about generic state and input perturbation vectors. The equilibrium point and numerical constants can then be substituted in to give the linear system in this form: $M\dot{x}=F_Ax+F_Br$. The state and input matrices, $A$ and $B$, can then be computed by left side multiplication by the inverse of the mass matrix: $A=M^{-1}F_A$ and $B=M^{-1}F_B$. End of explanation """ parameter_dict = dict(zip(parameters, parameter_vals)) parameter_dict M_num = sm.matrix2numpy(M.subs(parameter_dict), dtype=float) F_A_num = sm.matrix2numpy(F_A.subs(parameter_dict), dtype=float) F_B_num = sm.matrix2numpy(F_B.subs(parameter_dict), dtype=float) A = np.linalg.solve(M_num, F_A_num) B = np.linalg.solve(M_num ,F_B_num) print(A) print(B) """ Explanation: Now the numerical $A$ and $B$ matrices can be formed. First substitute numerical parameter values into $M$, $F_A$, and $F_B$. End of explanation """ equilibrium_point = np.asarray([x.evalf() for x in equilibrium_point], dtype=float) """ Explanation: Also convert equilibrium_point to a numeric array: End of explanation """ from numpy.linalg import matrix_rank from scipy.linalg import solve_continuous_are """ Explanation: Now that we have a linear system, the SciPy package can be used to design an optimal controller for the system. End of explanation """ def controllable(a, b): """Returns true if the system is controllable and false if not. Parameters ---------- a : array_like, shape(n,n) The state matrix. b : array_like, shape(n,r) The input matrix. Returns ------- controllable : boolean """ a = np.matrix(a) b = np.matrix(b) n = a.shape[0] controllability_matrix = [] for i in range(n): controllability_matrix.append(a ** i * b) controllability_matrix = np.hstack(controllability_matrix) return np.linalg.matrix_rank(controllability_matrix) == n controllable(A, B) """ Explanation: First we can check to see if the system is, in fact, controllable. The rank of the controllability matrix must be equal to the number of rows in $A$, but the matrix_rank algorithm is numerically ill conditioned and for certain values of $n$ this will fail, as seen below for $n=5$. Nevertheless, the system is controllable, no matter the number of links. End of explanation """ Q = np.eye(A.shape[0]) R = np.eye(B.shape[1]) S = solve_continuous_are(A, B, Q, R); K = np.dot(np.dot(np.linalg.inv(R), B.T), S) K """ Explanation: So now we can compute the optimal gains with a linear quadratic regulator. I chose identity matrices for the weightings for simplicity. End of explanation """ def right_hand_side(x, t, args): """Returns the derivatives of the states. Parameters ---------- x : ndarray, shape(2 * (n + 1)) The current state vector. t : float The current time. args : ndarray The constants. Returns ------- dx : ndarray, shape(2 * (n + 1)) The derivative of the state. """ r = np.dot(K, equilibrium_point - x) # The controller arguments = np.hstack((x, r, args)) # States, input, and parameters dx = np.array(solve(M_func(*arguments), # Solving for the derivatives f_func(*arguments))).T[0] return dx """ Explanation: The gains can now be used to define the required input during simulation to stabilize the system. The input $r$ is simply the gain vector multiplied by the error in the state vector from the equilibrium point, $r(t)=K(x_{eq} - x(t))$. 
End of explanation """ x0 = np.hstack((0, np.pi / 2 * np.ones(len(q) - 1), 1 * np.ones(len(u)))) t = np.linspace(0.0, 10.0, num=500) x = odeint(right_hand_side, x0, t, args=(parameter_vals,)) """ Explanation: Now we can simulate and animate the system to see if the controller works. End of explanation """ lines = plt.plot(t, x[:, :x.shape[1] // 2]) lab = plt.xlabel('Time [sec]') leg = plt.legend(dynamic[:x.shape[1] // 2]) lines = plt.plot(t, x[:, x.shape[1] // 2:]) lab = plt.xlabel('Time [sec]') leg = plt.legend(dynamic[x.shape[1] // 2:]) animate_pendulum(t, x, arm_length, filename="closed-loop.mp4") from IPython.display import HTML html = \ """ <video width="640" height="480" controls> <source src="closed-loop.mp4" type="video/mp4"> Your browser does not support the video tag, check out the YouTube version instead: http://youtu.be/SpgBHqW9om0 </video> """ HTML(html) """ Explanation: The plots show that we seem to have a stable system. End of explanation """ # Install with pip install version_information %load_ext version_information %version_information numpy, sympy, scipy, matplotlib, control """ Explanation: The video clearly shows that the controller can balance all $n$ of the pendulum links. The weightings in the lqr design can be tweaked to give different performance if needed. This example shows that the free and open source scientific Python tools for dynamics are easily comparable in ability and quality to a commercial package such as Mathematica. The IPython notebook for this example can be downloaded from https://github.com/pydy/pydy/tree/master/examples/npendulum. You can try out different $n$ values. I've gotten the equations of motion to compute for an open loop simulation of 10 links. My computer ran out of memory when I tried to compute for $n=50$. The controller weightings and initial conditions will probably have to be adjusted for better performance for $n>5$, but it should work. End of explanation """
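As an optional sanity check on the LQR design described above, the following sketch (added here for illustration, using only the A, B, and K arrays already computed) verifies that the closed-loop matrix of the linearized system is stable; with the full state feedback law r = K(x_eq - x), the closed-loop dynamics matrix is A - B K and all of its eigenvalues should have negative real parts.
# Sketch: check closed-loop stability of the linearized model under the LQR gains.
# All eigenvalues of A - B*K should have strictly negative real parts.
closed_loop_eigenvalues = np.linalg.eigvals(A - np.dot(B, K))
print(np.all(closed_loop_eigenvalues.real < 0.0))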
intel-analytics/BigDL
docs/readthedocs/source/doc/Serving/Example/tf1-to-cluster-serving-example.ipynb
apache-2.0
import tensorflow as tf
tf.__version__
"""
Explanation: In this example, we will use TensorFlow v1 (version 1.15) to create a simple MLP model and transfer the application to Cluster Serving step by step. This tutorial is recommended for TensorFlow v1 users only. If you are not a TensorFlow v1 user, the Keras tutorial here is recommended instead.
Original TensorFlow v1 Application
End of explanation
"""
g = tf.Graph()

with g.as_default():
    # Graph Inputs
    features = tf.placeholder(dtype=tf.float32, shape=[None, 2], name='features')
    targets = tf.placeholder(dtype=tf.float32, shape=[None, 1], name='targets')

    # Model Parameters
    weights = tf.Variable(tf.zeros(shape=[2, 1], dtype=tf.float32), name='weights')
    bias = tf.Variable([[0.]], dtype=tf.float32, name='bias')

    # Forward Pass
    linear = tf.add(tf.matmul(features, weights), bias, name='linear')
    ones = tf.ones(shape=tf.shape(linear))
    zeros = tf.zeros(shape=tf.shape(linear))
    prediction = tf.where(condition=tf.less(linear, 0.), x=zeros, y=ones, name='prediction')

    # Backward Pass
    errors = targets - prediction
    weight_update = tf.assign_add(weights, tf.reshape(errors * features, (2, 1)), name='weight_update')
    bias_update = tf.assign_add(bias, errors, name='bias_update')
    train = tf.group(weight_update, bias_update, name='train')

    saver = tf.train.Saver(name='saver')

import numpy as np
x_train, y_train = np.array([[1,2],[3,4],[1,3]]), np.array([1,2,1])
x_train.shape, y_train.shape
"""
Explanation: We first define the TensorFlow graph and create some data.
End of explanation
"""
with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch in range(5):
        for example, target in zip(x_train, y_train):
            feed_dict = {'features:0': example.reshape(-1, 2), 'targets:0': target.reshape(-1, 1)}
            _ = sess.run(['train'], feed_dict=feed_dict)

    w, b = sess.run(['weights:0', 'bias:0'])
    print('Model parameters:\n')
    print('Weights:\n', w)
    print('Bias:', b)

    saver.save(sess, save_path='perceptron')

    pred = sess.run('prediction:0', feed_dict={features: x_train})
    print(pred)

    # in this session, save the model to SavedModel format
    inputs = dict([(features.name, features)])
    outputs = dict([(prediction.name, prediction)])
    inputs, outputs
    tf.saved_model.simple_save(sess, "/tmp/mlp_tf1", inputs, outputs)
"""
Explanation: Export TensorFlow SavedModel
Then we train the graph and, inside the tf.Session block, save it in the SavedModel format. The detailed code follows, and we can see that the prediction result is [1] for the input [1,2].
End of explanation
"""
! pip install bigdl-serving
import os
! mkdir cluster-serving
os.chdir('cluster-serving')
! cluster-serving-init
! tail wget-log
# if you encounter a slow download issue like above, you can use the following command to download instead
# ! wget https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-spark_2.4.3/0.9.0/bigdl-spark_2.4.3-0.9.0-serving.jar
# if you are using wget to download, or you see "bigdl-xxx-serving.jar" after "ls", please call mv *serving.jar bigdl.jar once the download has finished.
# After initialization finishes, check the directory
! ls
# Call mv *serving.jar bigdl.jar as mentioned above
! mv *serving.jar bigdl.jar
! ls
"""
Explanation: Deploy Cluster Serving
After the model is prepared, we deploy it on Cluster Serving. First, install Cluster Serving.
End of explanation
"""
## BigDL Cluster Serving
model:
  # model path must be provided
  path: /tmp/mlp_tf1
! head config.yaml
"""
Explanation: We set the model path in config.yaml to the SavedModel directory exported above, /tmp/mlp_tf1 (the details of the configuration are described in the Cluster Serving Configuration guide).
End of explanation
"""
! $FLINK_HOME/bin/start-cluster.sh
"""
Explanation: Start Cluster Serving
Cluster Serving requires Flink and Redis to be installed and the corresponding environment variables to be set; check the Cluster Serving Installation Guide for details. The Flink cluster should be started before Cluster Serving; if it is not running yet, call the following to start a local Flink cluster.
End of explanation
"""
! cluster-serving-start
"""
Explanation: After configuration, start Cluster Serving with cluster-serving-start (details are in the Cluster Serving Programming Guide).
End of explanation
"""
from bigdl.serving.client import InputQueue, OutputQueue
input_queue = InputQueue()

# Use the async API to put and get; you have to pass a name argument and use the same name to retrieve the result
arr = np.array([1,2])
input_queue.enqueue('my-input', t=arr)

output_queue = OutputQueue()
prediction = output_queue.query('my-input')

# Use the sync API to predict; this will block until the result is returned or a timeout occurs
prediction = input_queue.predict(arr)
prediction
"""
Explanation: Prediction using Cluster Serving
Next, we run the Cluster Serving client code from Python.
End of explanation
"""
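Building on the client calls above, here is a hedged sketch of how several inputs could be submitted asynchronously under distinct names and then collected by the same names. The sample names and values are invented for illustration, and only the enqueue/query calls shown above are used; depending on timing, a result may not be ready immediately after enqueueing.
# Sketch only: enqueue a few named inputs and collect each result by name.
samples = {'sample-0': np.array([1, 2]), 'sample-1': np.array([3, 4])}
for name, value in samples.items():
    input_queue.enqueue(name, t=value)
results = {name: output_queue.query(name) for name in samples}
print(results)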
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/production_ml/labs/census.ipynb
apache-2.0
!pip install tensorflow-transform """ Explanation: Preprocessing Data with Advanced Example using TensorFlow Transform Learning objectives Create a tf.Transform preprocessing_fn. Transform the data. Create an input function for training. Build the model. Train and Evaluate the model. Export the model. Introduction The Feature Engineering Component of TensorFlow Extended (TFX) This notebook provides a somewhat more advanced example of how <a target='_blank' href='https://www.tensorflow.org/tfx/transform/'>TensorFlow Transform</a> (tf.Transform) can be used to preprocess data using exactly the same code for both training a model and serving inferences in production. TensorFlow Transform is a library for preprocessing input data for TensorFlow, including creating features that require a full pass over the training dataset. For example, using TensorFlow Transform you could: Normalize an input value by using the mean and standard deviation Convert strings to integers by generating a vocabulary over all of the input values Convert floats to integers by assigning them to buckets, based on the observed data distribution TensorFlow has built-in support for manipulations on a single example or a batch of examples. tf.Transform extends these capabilities to support full passes over the entire training dataset. The output of tf.Transform is exported as a TensorFlow graph which you can use for both training and serving. Using the same graph for both training and serving can prevent skew, since the same transformations are applied in both stages. What you're doing in this notebook In this notebook you'll be processing a <a target='_blank' href='https://archive.ics.uci.edu/ml/machine-learning-databases/adult'>widely used dataset containing census data</a>, and training a model to do classification. Along the way you'll be transforming the data using tf.Transform. Install TensorFlow Transform End of explanation """ # This cell is only necessary because packages were installed while python was # running. It avoids the need to restart the runtime when running in Colab. import pkg_resources import importlib importlib.reload(pkg_resources) """ Explanation: Note: Restart the kernel before proceeding further. Select Kernel > Restart kernel > Restart from the menu. End of explanation """ import math import os import pprint import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf print('TF: {}'.format(tf.__version__)) import apache_beam as beam print('Beam: {}'.format(beam.__version__)) import tensorflow_transform as tft import tensorflow_transform.beam as tft_beam print('Transform: {}'.format(tft.__version__)) from tfx_bsl.public import tfxio from tfx_bsl.coders.example_coder import RecordBatchToExamples """ Explanation: Imports and globals First import the stuff you need. 
End of explanation """ !wget https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/census/adult.data !wget https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/census/adult.test train_path = './adult.data' test_path = './adult.test' """ Explanation: Next download the data files: End of explanation """ CATEGORICAL_FEATURE_KEYS = [ 'workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country', ] NUMERIC_FEATURE_KEYS = [ 'age', 'capital-gain', 'capital-loss', 'hours-per-week', 'education-num' ] ORDERED_CSV_COLUMNS = [ 'age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'label' ] LABEL_KEY = 'label' """ Explanation: Name our columns You'll create some handy lists for referencing the columns in our dataset. End of explanation """ pandas_train = pd.read_csv(train_path, header=None, names=ORDERED_CSV_COLUMNS) pandas_train.head(5) one_row = dict(pandas_train.loc[0]) COLUMN_DEFAULTS = [ '' if isinstance(v, str) else 0.0 for v in dict(pandas_train.loc[1]).values()] """ Explanation: Here's a quick preview of the data: End of explanation """ pandas_test = pd.read_csv(test_path, header=1, names=ORDERED_CSV_COLUMNS) pandas_test.head(5) testing = os.getenv("WEB_TEST_BROWSER", False) if testing: pandas_train = pandas_train.loc[:1] pandas_test = pandas_test.loc[:1] """ Explanation: The test data has 1 header line that needs to be skipped, and a trailing "." at the end of each line. End of explanation """ RAW_DATA_FEATURE_SPEC = dict( [(name, tf.io.FixedLenFeature([], tf.string)) for name in CATEGORICAL_FEATURE_KEYS] + [(name, tf.io.FixedLenFeature([], tf.float32)) for name in NUMERIC_FEATURE_KEYS] + [(LABEL_KEY, tf.io.FixedLenFeature([], tf.string))] ) SCHEMA = tft.tf_metadata.dataset_metadata.DatasetMetadata( tft.tf_metadata.schema_utils.schema_from_feature_spec(RAW_DATA_FEATURE_SPEC)).schema """ Explanation: Define our features and schema Let's define a schema based on what types the columns are in our input. Among other things this will help with importing them correctly. End of explanation """ #@title def encode_example(input_features): input_features = dict(input_features) output_features = {} for key in CATEGORICAL_FEATURE_KEYS: value = input_features[key] feature = tf.train.Feature( bytes_list=tf.train.BytesList(value=[value.strip().encode()])) output_features[key] = feature for key in NUMERIC_FEATURE_KEYS: value = input_features[key] feature = tf.train.Feature( float_list=tf.train.FloatList(value=[value])) output_features[key] = feature label_value = input_features.get(LABEL_KEY, None) if label_value is not None: output_features[LABEL_KEY] = tf.train.Feature( bytes_list = tf.train.BytesList(value=[label_value.strip().encode()])) example = tf.train.Example( features = tf.train.Features(feature=output_features) ) return example """ Explanation: [Optional] Encode and decode tf.train.Example protos This tutorial needs to convert examples from the dataset to and from tf.train.Example protos in a few places. The hidden encode_example function below converts a dictionary of features forom the dataset to a tf.train.Example. 
End of explanation """ tf_example = encode_example(pandas_train.loc[0]) tf_example.features.feature['age'] serialized_example_batch = tf.constant([ encode_example(pandas_train.loc[i]).SerializeToString() for i in range(3) ]) serialized_example_batch """ Explanation: Now you can convert dataset examples into Example protos: End of explanation """ decoded_tensors = tf.io.parse_example( serialized_example_batch, features=RAW_DATA_FEATURE_SPEC ) """ Explanation: You can also convert batches of serialized Example protos back into a dictionary of tensors: End of explanation """ features_dict = dict(pandas_train.loc[0]) features_dict.pop(LABEL_KEY) LABEL_KEY in features_dict """ Explanation: In some cases the label will not be passed in, so the encode function is written so that the label is optional: End of explanation """ no_label_example = encode_example(features_dict) LABEL_KEY in no_label_example.features.feature.keys() """ Explanation: When creating an Example proto it will simply not contain the label key. End of explanation """ NUM_OOV_BUCKETS = 1 EPOCH_SPLITS = 10 TRAIN_NUM_EPOCHS = 2*EPOCH_SPLITS NUM_TRAIN_INSTANCES = len(pandas_train) NUM_TEST_INSTANCES = len(pandas_test) BATCH_SIZE = 128 STEPS_PER_TRAIN_EPOCH = tf.math.ceil(NUM_TRAIN_INSTANCES/BATCH_SIZE/EPOCH_SPLITS) EVALUATION_STEPS = tf.math.ceil(NUM_TEST_INSTANCES/BATCH_SIZE) # Names of temp files TRANSFORMED_TRAIN_DATA_FILEBASE = 'train_transformed' TRANSFORMED_TEST_DATA_FILEBASE = 'test_transformed' EXPORTED_MODEL_DIR = 'exported_model_dir' if testing: TRAIN_NUM_EPOCHS = 1 """ Explanation: Setting hyperparameters and basic housekeeping Constants and hyperparameters used for training. End of explanation """ def preprocessing_fn(inputs): """Preprocess input columns into transformed columns.""" # Since you are modifying some features and leaving others unchanged, you # start by setting `outputs` to a copy of `inputs. outputs = inputs.copy() # Scale numeric columns to have range [0, 1]. for key in NUMERIC_FEATURE_KEYS: outputs[key] = tft.scale_to_0_1(inputs[key]) # For all categorical columns except the label column, you generate a # vocabulary but do not modify the feature. This vocabulary is instead # used in the trainer, by means of a feature column, to convert the feature # from a string to an integer id. for key in CATEGORICAL_FEATURE_KEYS: outputs[key] = tft.compute_and_apply_vocabulary( tf.strings.strip(inputs[key]), num_oov_buckets=NUM_OOV_BUCKETS, vocab_filename=key) # For the label column you provide the mapping from string to index. table_keys = ['>50K', '<=50K'] with tf.init_scope(): initializer = tf.lookup.KeyValueTensorInitializer( keys=table_keys, values=tf.cast(tf.range(len(table_keys)), tf.int64), key_dtype=tf.string, value_dtype=tf.int64) table = tf.lookup.StaticHashTable(initializer, default_value=-1) # Remove trailing periods for test data when the data is read with tf.data. # label_str = tf.sparse.to_dense(inputs[LABEL_KEY]) label_str = inputs[LABEL_KEY] label_str = tf.strings.regex_replace(label_str, r'\.$', '') label_str = tf.strings.strip(label_str) data_labels = table.lookup(label_str) transformed_label = tf.one_hot( indices=data_labels, depth=len(table_keys), on_value=1.0, off_value=0.0) outputs[LABEL_KEY] = tf.reshape(transformed_label, [-1, len(table_keys)]) return outputs """ Explanation: Preprocessing with tf.Transform Create a tf.Transform preprocessing_fn The preprocessing function is the most important concept of tf.Transform. 
A preprocessing function is where the transformation of the dataset really happens. It accepts and returns a dictionary of tensors, where a tensor means a Tensor or SparseTensor. There are two main groups of API calls that typically form the heart of a preprocessing function: TensorFlow Ops: Any function that accepts and returns tensors, which usually means TensorFlow ops. These add TensorFlow operations to the graph that transforms raw data into transformed data one feature vector at a time. These will run for every example, during both training and serving. Tensorflow Transform Analyzers/Mappers: Any of the analyzers/mappers provided by tf.Transform. These also accept and return tensors, and typically contain a combination of Tensorflow ops and Beam computation, but unlike TensorFlow ops they only run in the Beam pipeline during analysis requiring a full pass over the entire training dataset. The Beam computation runs only once, (prior to training, during analysis), and typically make a full pass over the entire training dataset. They create tf.constant tensors, which are added to your graph. For example, tft.min computes the minimum of a tensor over the training dataset. Here is a preprocessing_fn for this dataset. It does several things: Using tft.scale_to_0_1, it scales the numeric features to the [0,1] range. Using tft.compute_and_apply_vocabulary, it computes a vocabulary for each of the categorical features, and returns the integer IDs for each input as an tf.int64. This applies both to string and integer categorical-inputs. It applies some manual transformations to the data using standard TensorFlow operations. Here these operations are applied to the label but could transform the features as well. The TensorFlow operations do several things: They build a lookup table for the label (the tf.init_scope ensures that the table is only created the first time the function is called). They normalize the text of the label. They convert the label to a one-hot. End of explanation """ def transform_data(train_data_file, test_data_file, working_dir): """Transform the data and write out as a TFRecord of Example protos. Read in the data using the CSV reader, and transform it using a preprocessing pipeline that scales numeric data and converts categorical data from strings to int64 values indices, by creating a vocabulary for each category. Args: train_data_file: File containing training data test_data_file: File containing test data working_dir: Directory to write transformed data and metadata to """ # The "with" block will create a pipeline, and run that pipeline at the exit # of the block. with beam.Pipeline() as pipeline: with tft_beam.Context(temp_dir=tempfile.mkdtemp()): # Create a TFXIO to read the census data with the schema. To do this you # need to list all columns in order since the schema doesn't specify the # order of columns in the csv. # You first read CSV files and use BeamRecordCsvTFXIO whose .BeamSource() # accepts a PCollection[bytes] because you need to patch the records first # (see "FixCommasTrainData" below). Otherwise, tfxio.CsvTFXIO can be used # to both read the CSV files and parse them to TFT inputs: # csv_tfxio = tfxio.CsvTFXIO(...) # raw_data = (pipeline | 'ToRecordBatches' >> csv_tfxio.BeamSource()) train_csv_tfxio = tfxio.CsvTFXIO( file_pattern=train_data_file, telemetry_descriptors=[], column_names=ORDERED_CSV_COLUMNS, schema=SCHEMA) # Read in raw data and convert using CSV TFXIO. 
raw_data = ( pipeline | 'ReadTrainCsv' >> train_csv_tfxio.BeamSource()) # Combine data and schema into a dataset tuple. Note that you already used # the schema to read the CSV data, but you also need it to interpret # raw_data. cfg = train_csv_tfxio.TensorAdapterConfig() raw_dataset = (raw_data, cfg) # The TFXIO output format is chosen for improved performance. transformed_dataset, transform_fn = ( raw_dataset | tft_beam.AnalyzeAndTransformDataset( preprocessing_fn, output_record_batches=True)) # Transformed metadata is not necessary for encoding. transformed_data, _ = transformed_dataset # Extract transformed RecordBatches, encode and write them to the given # directory. # TODO(b/223384488): Switch to `RecordBatchToExamplesEncoder`. _ = ( transformed_data | 'EncodeTrainData' >> beam.FlatMapTuple(lambda batch, _: RecordBatchToExamples(batch)) | 'WriteTrainData' >> beam.io.WriteToTFRecord( os.path.join(working_dir, TRANSFORMED_TRAIN_DATA_FILEBASE))) # Now apply transform function to test data. In this case you remove the # trailing period at the end of each line, and also ignore the header line # that is present in the test data file. test_csv_tfxio = tfxio.CsvTFXIO( file_pattern=test_data_file, skip_header_lines=1, telemetry_descriptors=[], column_names=ORDERED_CSV_COLUMNS, schema=SCHEMA) raw_test_data = ( pipeline | 'ReadTestCsv' >> test_csv_tfxio.BeamSource()) raw_test_dataset = (raw_test_data, test_csv_tfxio.TensorAdapterConfig()) # The TFXIO output format is chosen for improved performance. transformed_test_dataset = ( (raw_test_dataset, transform_fn) | tft_beam.TransformDataset(output_record_batches=True)) # Transformed metadata is not necessary for encoding. transformed_test_data, _ = transformed_test_dataset # Extract transformed RecordBatches, encode and write them to the given # directory. _ = ( transformed_test_data | 'EncodeTestData' >> beam.FlatMapTuple(lambda batch, _: RecordBatchToExamples(batch)) | 'WriteTestData' >> beam.io.WriteToTFRecord( os.path.join(working_dir, TRANSFORMED_TEST_DATA_FILEBASE))) # Will write a SavedModel and metadata to working_dir, which can then # be read by the tft.TFTransformOutput class. _ = ( transform_fn | 'WriteTransformFn' >> tft_beam.WriteTransformFn(working_dir)) """ Explanation: Syntax You're almost ready to put everything together and use <a target='_blank' href='https://beam.apache.org/'>Apache Beam</a> to run it. Apache Beam uses a <a target='_blank' href='https://beam.apache.org/documentation/programming-guide/#applying-transforms'>special syntax to define and invoke transforms</a>. For example, in this line: result = pass_this | 'name this step' &gt;&gt; to_this_call The method to_this_call is being invoked and passed the object called pass_this, and <a target='_blank' href='https://stackoverflow.com/questions/50519662/what-does-the-redirection-mean-in-apache-beam-python'>this operation will be referred to as name this step in a stack trace</a>. The result of the call to to_this_call is returned in result. You will often see stages of a pipeline chained together like this: result = apache_beam.Pipeline() | 'first step' &gt;&gt; do_this_first() | 'second step' &gt;&gt; do_this_last() and since that started with a new pipeline, you can continue like this: next_result = result | 'doing more stuff' &gt;&gt; another_function() Transform the data Now you're ready to start transforming our data in an Apache Beam pipeline. 
Read in the data using the tfxio.CsvTFXIO CSV reader (to process lines of text in a pipeline use tfxio.BeamRecordCsvTFXIO instead). Analyse and transform the data using the preprocessing_fn defined above. Write out the result as a TFRecord of Example protos, which you will use for training a model later End of explanation """ import tempfile import pathlib output_dir = os.path.join(tempfile.mkdtemp(), 'keras') # Transform the data # TODO 1: Your code goes here """ Explanation: Run the pipeline: End of explanation """ tf_transform_output = tft.TFTransformOutput(output_dir) tf_transform_output.transformed_feature_spec() """ Explanation: Wrap up the output directory as a tft.TFTransformOutput: End of explanation """ !ls -l {output_dir} """ Explanation: If you look in the directory you'll see it contains three things: The train_transformed and test_transformed data files The transform_fn directory (a tf.saved_model) The transformed_metadata The followning sections show how to use these artifacts to train a model. End of explanation """ def _make_training_input_fn(tf_transform_output, train_file_pattern, batch_size): """An input function reading from transformed data, converting to model input. Args: tf_transform_output: Wrapper around output of tf.Transform. transformed_examples: Base filename of examples. batch_size: Batch size. Returns: The input data for training or eval, in the form of k. """ def input_fn(): return tf.data.experimental.make_batched_features_dataset( file_pattern=train_file_pattern, batch_size=batch_size, features=tf_transform_output.transformed_feature_spec(), reader=tf.data.TFRecordDataset, label_key=LABEL_KEY, shuffle=True) return input_fn train_file_pattern = pathlib.Path(output_dir)/f'{TRANSFORMED_TRAIN_DATA_FILEBASE}*' # Create the input function input_fn = # TODO 2: Your code goes here """ Explanation: Using our preprocessed data to train a model using tf.keras To show how tf.Transform enables us to use the same code for both training and serving, and thus prevent skew, you're going to train a model. To train our model and prepare our trained model for production you need to create input functions. The main difference between our training input function and our serving input function is that training data contains the labels, and production data does not. The arguments and returns are also somewhat different. Create an input function for training Running the pipeline in the previous section created TFRecord files containing the the transformed data. The following code uses tf.data.experimental.make_batched_features_dataset and tft.TFTransformOutput.transformed_feature_spec to read these data files as a tf.data.Dataset: End of explanation """ for example, label in input_fn().take(1): break pd.DataFrame(example) label """ Explanation: Below you can see a transformed sample of the data. 
Note how the numeric columns like education-num and hourd-per-week are converted to floats with a range of [0,1], and the string columns have been converted to IDs: End of explanation """ def build_keras_model(working_dir): inputs = build_keras_inputs(working_dir) encoded_inputs = encode_inputs(inputs) stacked_inputs = tf.concat(tf.nest.flatten(encoded_inputs), axis=1) output = tf.keras.layers.Dense(100, activation='relu')(stacked_inputs) output = tf.keras.layers.Dense(50, activation='relu')(output) output = tf.keras.layers.Dense(2)(output) model = tf.keras.Model(inputs=inputs, outputs=output) return model def build_keras_inputs(working_dir): tf_transform_output = tft.TFTransformOutput(working_dir) feature_spec = tf_transform_output.transformed_feature_spec().copy() feature_spec.pop(LABEL_KEY) # Build the `keras.Input` objects. inputs = {} for key, spec in feature_spec.items(): if isinstance(spec, tf.io.VarLenFeature): inputs[key] = tf.keras.layers.Input( shape=[None], name=key, dtype=spec.dtype, sparse=True) elif isinstance(spec, tf.io.FixedLenFeature): inputs[key] = tf.keras.layers.Input( shape=spec.shape, name=key, dtype=spec.dtype) else: raise ValueError('Spec type is not supported: ', key, spec) return inputs def encode_inputs(inputs): encoded_inputs = {} for key in inputs: feature = tf.expand_dims(inputs[key], -1) if key in CATEGORICAL_FEATURE_KEYS: num_buckets = tf_transform_output.num_buckets_for_transformed_feature(key) encoding_layer = ( tf.keras.layers.CategoryEncoding( num_tokens=num_buckets, output_mode='binary', sparse=False)) encoded_inputs[key] = encoding_layer(feature) else: encoded_inputs[key] = feature return encoded_inputs model = build_keras_model(output_dir) tf.keras.utils.plot_model(model,rankdir='LR', show_shapes=True) """ Explanation: Train, Evaluate the model Build the model End of explanation """ def get_dataset(working_dir, filebase): tf_transform_output = tft.TFTransformOutput(working_dir) data_path_pattern = os.path.join( working_dir, filebase + '*') input_fn = _make_training_input_fn( tf_transform_output, data_path_pattern, batch_size=BATCH_SIZE) dataset = input_fn() return dataset """ Explanation: Build the datasets End of explanation """ def train_and_evaluate( model, working_dir): """Train the model on training data and evaluate on test data. Args: working_dir: The location of the Transform output. 
num_train_instances: Number of instances in train set num_test_instances: Number of instances in test set Returns: The results from the estimator's 'evaluate' method """ train_dataset = get_dataset(working_dir, TRANSFORMED_TRAIN_DATA_FILEBASE) validation_dataset = get_dataset(working_dir, TRANSFORMED_TEST_DATA_FILEBASE) model = build_keras_model(working_dir) # Train the model # TODO 3: Your code goes here metric_values = model.evaluate(validation_dataset, steps=EVALUATION_STEPS, return_dict=True) return model, history, metric_values def train_model(model, train_dataset, validation_dataset): model.compile(optimizer='adam', loss=tf.losses.CategoricalCrossentropy(from_logits=True), metrics=['accuracy']) history = model.fit(train_dataset, validation_data=validation_dataset, epochs=TRAIN_NUM_EPOCHS, steps_per_epoch=STEPS_PER_TRAIN_EPOCH, validation_steps=EVALUATION_STEPS) return history model, history, metric_values = train_and_evaluate(model, output_dir) plt.plot(history.history['loss'], label='Train') plt.plot(history.history['val_loss'], label='Eval') plt.ylim(0,max(plt.ylim())) plt.legend() plt.title('Loss'); """ Explanation: Train and evaluate the model: End of explanation """ def read_csv(file_name, batch_size): return tf.data.experimental.make_csv_dataset( file_pattern=file_name, batch_size=batch_size, column_names=ORDERED_CSV_COLUMNS, column_defaults=COLUMN_DEFAULTS, prefetch_buffer_size=0, ignore_errors=True) for ex in read_csv(test_path, batch_size=5): break pd.DataFrame(ex) """ Explanation: Transform new data In the previous section the training process used the hard-copies of the transformed data that were generated by tft_beam.AnalyzeAndTransformDataset in the transform_dataset function. For operating on new data you'll need to load final version of the preprocessing_fn that was saved by tft_beam.WriteTransformFn. The TFTransformOutput.transform_features_layer method loads the preprocessing_fn SavedModel from the output directory. Here's a function to load new, unprocessed batches from a source file: End of explanation """ ex2 = ex.copy() ex2.pop('fnlwgt') tft_layer = tf_transform_output.transform_features_layer() t_ex = tft_layer(ex2) label = t_ex.pop(LABEL_KEY) pd.DataFrame(t_ex) """ Explanation: Load the tft.TransformFeaturesLayer to transform this data with the preprocessing_fn: End of explanation """ ex2 = pd.DataFrame(ex)[['education', 'hours-per-week']] ex2 pd.DataFrame(tft_layer(dict(ex2))) """ Explanation: The tft_layer is smart enough to still execute the transformation if only a subset of features are passed in. For example, if you only pass in two features, you'll get just the transformed versions of those features back: End of explanation """ class Transform(tf.Module): def __init__(self, working_dir): self.working_dir = working_dir self.tf_transform_output = tft.TFTransformOutput(working_dir) self.tft_layer = tf_transform_output.transform_features_layer() @tf.function def __call__(self, features): raw_features = {} for key, val in features.items(): # Skip unused keys if key not in RAW_DATA_FEATURE_SPEC: continue raw_features[key] = val # Apply the `preprocessing_fn`. transformed_features = tft_layer(raw_features) if LABEL_KEY in transformed_features: # Pop the label and return a (features, labels) pair. 
data_labels = transformed_features.pop(LABEL_KEY) return (transformed_features, data_labels) else: return transformed_features transform = Transform(output_dir) t_ex, t_label = transform(ex) pd.DataFrame(t_ex) """ Explanation: Here's a more robust version that drops features that are not in the feature-spec, and returns a (features, label) pair if the label is in the provided features: End of explanation """ # Evaluate the model # TODO 4: Your code goes here """ Explanation: Now you can use Dataset.map to apply that transformation, on the fly to new data: End of explanation """ class ServingModel(tf.Module): def __init__(self, model, working_dir): self.model = model self.working_dir = working_dir self.transform = Transform(working_dir) @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)]) def __call__(self, serialized_tf_examples): # parse the tf.train.Example feature_spec = RAW_DATA_FEATURE_SPEC.copy() feature_spec.pop(LABEL_KEY) parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec) # Apply the `preprocessing_fn` transformed_features = self.transform(parsed_features) # Run the model outputs = self.model(transformed_features) # Format the output classes_names = tf.constant([['0', '1']]) classes = tf.tile(classes_names, [tf.shape(outputs)[0], 1]) return {'classes': classes, 'scores': outputs} def export(self, output_dir): # Increment the directory number. This is required in order to make this # model servable with model_server. save_model_dir = pathlib.Path(output_dir)/'model' number_dirs = [int(p.name) for p in save_model_dir.glob('*') if p.name.isdigit()] id = max([0] + number_dirs)+1 save_model_dir = save_model_dir/str(id) # Set the signature to make it visible for serving. concrete_serving_fn = self.__call__.get_concrete_function() signatures = {'serving_default': concrete_serving_fn} # Export the model. tf.saved_model.save( self, str(save_model_dir), signatures=signatures) return save_model_dir """ Explanation: Export the model So you have a trained model, and a method to apply the preporcessing_fn to new data. Assemble them into a new model that accepts serialized tf.train.Example protos as input. End of explanation """ serving_model = ServingModel(model, output_dir) serving_model(serialized_example_batch) """ Explanation: Build the model and test-run it on the batch of serialized examples: End of explanation """ # Export the model saved_model_dir = # TODO 5: Your code goes here saved_model_dir """ Explanation: Export the model as a SavedModel: End of explanation """ reloaded = tf.saved_model.load(str(saved_model_dir)) run_model = reloaded.signatures['serving_default'] run_model(serialized_example_batch) """ Explanation: Reload the the model and test it on the same batch of examples: End of explanation """
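As an optional end-to-end check (a sketch, not part of the original lab), a single raw row from the test dataframe can be encoded with the encode_example helper defined earlier and pushed through the reloaded serving signature; extra features such as the label are simply ignored by the parsing spec inside the serving function.
# Sketch: encode one raw test row and run it through the reloaded serving signature.
raw_row = dict(pandas_test.loc[0])
serialized_row = tf.constant([encode_example(raw_row).SerializeToString()])
print(run_model(serialized_row))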
scottlittle/solar-sensors
IPnotebooks/important-IPNBs/all-datasets-together.ipynb
apache-2.0
from mpl_toolkits.basemap import Basemap #map stuff from datetime import datetime,timedelta, time import pandas as pd import numpy as np import matplotlib.pyplot as plt from data_helper_functions import * from IPython.display import display pd.options.display.max_columns = 999 %matplotlib inline desired_channel = 'BAND_01' desired_date = datetime(2014, 4, 1) desired_timedelta = timedelta(hours = 15) desired_datetime = desired_date + desired_timedelta satellite_filefolder = '../../data/satellite/colorado/summer6months/data/' sensor_filefolder = '../../data/sensor_data/colorado6months/' pvoutput_filefolder = '../../data/pvoutput/pvoutput6months/' #satellite data satellite_filename = find_filename(desired_datetime, desired_channel, satellite_filefolder) lons, lats, data = return_satellite_data(satellite_filename, satellite_filefolder) # plt.figure(figsize=(8, 8)) # imgplot = plt.imshow(data) # imgplot.set_interpolation('none') # plt.savefig('foo.png') # plt.show() #sensor data sensor_filename = find_file_from_date(desired_date, sensor_filefolder) df_sensor = return_sensor_data(sensor_filename, sensor_filefolder) df_sensor[df_sensor.index == desired_datetime] display(df_sensor[df_sensor.index == desired_datetime]) #pvoutput data pvoutput_filename = find_file_from_date(desired_date, pvoutput_filefolder) df_pvoutput = return_pvoutput_data(pvoutput_filename, pvoutput_filefolder) display(df_pvoutput[df_pvoutput.index == desired_datetime]) # Get some parameters for the Stereographic Projection m = Basemap(width=800000,height=800000, resolution='l',projection='stere',\ lat_ts=40,lat_0=39.5,lon_0=-104.5) xi, yi = m(lons, lats) #map onton x and y for plotting plt.figure(figsize=(10,10)) # Plot Data cs = m.pcolor(xi,yi,np.squeeze(data)) #data is 1 x 14 x 36, squeeze makes it 14 x 36 m.drawparallels(np.arange(-80., 81., 1.), labels=[1,0,0,0], fontsize=14) # Add Grid Lines m.drawmeridians(np.arange(-180., 181., 1.), labels=[0,0,0,1], fontsize=14) # Add Grid Lines m.drawstates(linewidth=3) # Add state boundaries cbar = m.colorbar(cs, location='bottom', pad="10%") # Add Colorbar plt.title('GOES 15 - Channel 1', fontsize=16) # Add Title plt.legend(prop={'size':32}) plt.show() """ Explanation: Summon any data I want to make a single query and have it return data across the datasets End of explanation """ from datetime import datetime,timedelta, time import pandas as pd import numpy as np import matplotlib.pyplot as plt from data_helper_functions import * from IPython.display import display pd.options.display.max_columns = 999 %matplotlib inline #iterate over datetimes: mytime = datetime(2014, 4, 1, 13) times = make_time(mytime) # Now that we can call data up over any datetime and we have a list of interested datetimes, # we can finally construct an X matrix and y vector for regression. 
sensor_filefolder = 'data/sensor_data/colorado6months/' pvoutput_filefolder = 'data/pvoutput/pvoutput6months/' X = [] #Sensor values y = [] #PVOutput for desired_datetime in times: try: #something wrong with y on last day desired_date = (desired_datetime - timedelta(hours=6)).date() #make sure correct date desired_date = datetime.combine(desired_date, time.min) #get into datetime format sensor_filename = find_file_from_date(desired_date, sensor_filefolder) df_sensor = return_sensor_data(sensor_filename, sensor_filefolder).ix[:,-15:-1] df_sensor[df_sensor.index == desired_datetime] pvoutput_filename = find_file_from_date(desired_date, pvoutput_filefolder) df_pvoutput = return_pvoutput_data(pvoutput_filename, pvoutput_filefolder) y.append(df_pvoutput[df_pvoutput.index == desired_datetime].values[0][0]) X.append(df_sensor[df_sensor.index == desired_datetime].values[0]) except: pass X = np.array(X) y = np.array(y) print X.shape print y.shape """ Explanation: Build up sensor to pvoutput model End of explanation """ from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=99) from sklearn.ensemble import RandomForestRegressor rfr = RandomForestRegressor(oob_score = True) rfr.fit(X_train,y_train) y_pred = rfr.predict(X_test) rfr.score(X_test,y_test) df_sensor.columns.values.shape sorted_mask = np.argsort(rfr.feature_importances_) for i in zip(df_sensor.columns.values,rfr.feature_importances_[sorted_mask])[::-1]: print i """ Explanation: ...finally ready to model! Random Forest End of explanation """ #now do a linear model and compare: from sklearn.linear_model import LinearRegression lr = LinearRegression() lr.fit(X_train,y_train) lr.score(X_test,y_test) sorted_mask = np.argsort(lr.coef_) for i in zip(df_sensor.columns.values,lr.coef_[sorted_mask])[::-1]: print i df_sensor.ix[:,-15:-1].head() #selects photometer and AOD, # useful in next iteration of using sensor data to fit """ Explanation: Linear model End of explanation """ import pandas as pd import numpy as np from sklearn.preprocessing import scale from lasagne import layers from lasagne.nonlinearities import softmax, rectify, sigmoid, linear, very_leaky_rectify, tanh from lasagne.updates import nesterov_momentum, adagrad, momentum from nolearn.lasagne import NeuralNet import theano from sklearn.cross_validation import train_test_split from sklearn.preprocessing import StandardScaler y = y.astype('float32') x = X.astype('float32') scaler = StandardScaler() scaled_x = scaler.fit_transform(x) x_train, x_test, y_train, y_test = train_test_split(scaled_x, y, test_size = 0.2, random_state = 12) nn_regression = NeuralNet(layers=[('input', layers.InputLayer), # ('hidden1', layers.DenseLayer), # ('hidden2', layers.DenseLayer), ('output', layers.DenseLayer) ], # Input Layer input_shape=(None, x.shape[1]), # hidden Layer # hidden1_num_units=512, # hidden1_nonlinearity=softmax, # hidden Layer # hidden2_num_units=128, # hidden2_nonlinearity=linear, # Output Layer output_num_units=1, output_nonlinearity=very_leaky_rectify, # Optimization update=nesterov_momentum, update_learning_rate=0.03,#0.02 update_momentum=0.8,#0.8 max_epochs=600, #was 100 # Others #eval_size=0.2, regression=True, verbose=0, ) nn_regression.fit(x_train, y_train) y_pred = nn_regression.predict(x_test) nn_regression.score(x_test, y_test) val = 11 print y_pred[val][0] print y_test[val] plt.plot(y_pred,'ro') plt.plot(y_test,'go') """ Explanation: When only keeping the photometer data, random forest and 
linear model do pretty similar. When I added all of the sensor instruments to the fit, rfr scored 0.87 and lr scored negative! Also, I threw away the mysterious "Research 2" sensor, that was probably just a solar panel! I asked NREL what it is, so we'll see. If it turns out to be a solar panel, then I can do some feature engineering with the sensor data by simulating a solar panel! Neural Net Exploration End of explanation """ from sklearn.ensemble import ExtraTreesRegressor etr = ExtraTreesRegressor(oob_score=True, bootstrap=True, n_jobs=-1, n_estimators=1000) #nj_obs uses all cores! X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=99) etr.fit(X_train, y_train) print etr.score(X_test,y_test) print etr.oob_score_ y_pred = etr.predict(X_test) from random import randint val = randint(0,y_test.shape[0]) print y_pred[val] print y_test[val] print X.shape print y.shape """ Explanation: Extra Trees! End of explanation """ from sklearn.externals import joblib joblib.dump(etr, 'data/sensor-to-power-model/sensor-to-power-model.pkl') np.savez_compressed('data/y.npz',y=y) #save y """ Explanation: Save this thing and try it out on the simulated sensors! End of explanation """
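As a quick follow-up sketch (not part of the original analysis), the pickled Extra Trees model can be loaded back in a later session and used for prediction; the file path and the sensor feature ordering are assumed to match the training run above.
# Sketch: reload the persisted model and predict power output for a few held-out rows.
from sklearn.externals import joblib
etr_loaded = joblib.load('data/sensor-to-power-model/sensor-to-power-model.pkl')
print etr_loaded.predict(X_test[:5])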
blue-yonder/tsfresh
notebooks/advanced/friedrich_coefficients.ipynb
mit
from matplotlib import pylab as plt
import numpy as np
import seaborn as sbn
import pandas as pd
from tsfresh.examples.driftbif_simulation import velocity
%matplotlib inline

from tsfresh.feature_extraction import ComprehensiveFCParameters
from tsfresh.feature_extraction.feature_calculators import max_langevin_fixed_point, friedrich_coefficients

settings = ComprehensiveFCParameters()
default_params = settings['max_langevin_fixed_point'][0]
default = settings['friedrich_coefficients']

def friedrich_method(v, param):
    df = pd.DataFrame({'velocity': v[:-1,0], 'acceleration': np.diff(v[:,0])})
    df['quantiles'] = pd.qcut(df.velocity.values, 30)
    groups = df.groupby('quantiles')
    result = pd.DataFrame({'a_mean': groups.acceleration.mean(),
                           'a_std': groups.acceleration.std(),
                           'v_mean': groups.velocity.mean(),
                           'v_std': groups.velocity.std()
                          })

    dynamics = friedrich_coefficients(v[:,0], param)
    dynamics = [d[1] for d in dynamics]
    v0 = max_langevin_fixed_point(v[:,0], **default_params)

    plt.subplot(2,1,1)
    plt.plot(v[:,0])
    plt.axhline(y=v0, color='r')
    plt.xlabel('time')
    plt.ylabel('velocity')

    # Active Brownian motion is given if the linear term of the dynamics is positive
    if dynamics[-2] > 0:
        active = 'Active'
    else:
        active = ''
    plt.title('{} Brownian Motion (largest equilibrium velocity in red)'.format(active))

    plt.subplot(2,1,2)
    ax = plt.errorbar(result.v_mean, result.a_mean, xerr=result.v_std, fmt='o')
    x = np.linspace(-0.004, 0.004, 201)
    print(dynamics)
    plt.plot(x, np.poly1d(dynamics)(x), label='estimated dynamics')
    plt.plot(v0, 0., 'ro')
    plt.axvline(x=v0, color='r')
    plt.xlabel('mean velocity')
    plt.ylabel('mean acceleration')
"""
Explanation: <h1><center> Estimating Friedrich's coefficients describing the deterministic dynamics of a Langevin model</center></h1>
<center>Andreas W. Kempa-Liehr (Department of Engineering Science, University of Auckland)</center>
This notebook explains the friedrich_coefficients feature, which has been inspired by the paper of Friedrich et al. (2000): Extracting model equations from experimental data. Physics Letters A 271, p. 217-222
The general idea is to assume a Langevin model for the dynamics of the time series $x(t)$
$$\dot{x}(t) = h(x(t)) + \mathcal{N}(0,R)$$
with $\dot{x}(t)$ denoting the temporal derivative, $h(x(t))$ the deterministic dynamics, and $\mathcal{N}(0,R)$ a Langevin force modelled as Gaussian white noise with standard deviation $R$.
Now, an estimate $\tilde{h}(x)$ of the deterministic dynamics can be computed by averaging $\dot{x}(t)$ for a specific interval $x(t)\in[x-\epsilon,x+\epsilon]$ with $|\epsilon|\ll 1$:
$$\left.\tilde{h}(x)\right|_{x\in[x-\epsilon,x+\epsilon]} \approx \frac{\sum\limits_{x(t)\in[x-\epsilon,x+\epsilon]} x(t+\Delta_t)-x(t)}{\Delta_t \sum\limits_{x(t)\in[x-\epsilon,x+\epsilon]} 1}.$$
Having a set of estimates $\{\tilde{h}(x_1),\tilde{h}(x_2),\ldots,\tilde{h}(x_n)\}$ with $x_1<x_2<\ldots<x_n$ at hand, Friedrich's coefficients are calculated by fitting a polynomial of order $m$ to these estimates.
In order to demonstrate this approach, the dynamics of a dissipative soliton before and after its drift-bifurcation is simulated (Liehr 2013: Dissipative Solitons in Reaction-Diffusion Systems. Springer, p. 164). By applying the approach of Friedrich et al. for estimating the deterministic dynamics, the equilibrium velocity of the dissipative soliton is recovered.
End of explanation """ ds = velocity(tau=3.8, delta_t=0.05, R=3e-4, seed=0) v = ds.simulate(1000000, v0=np.zeros(1)) friedrich_method(v, default) """ Explanation: Beyond drift-bifurcation End of explanation """ ds = velocity(tau=2./0.3-3.8, delta_t=0.05, R=3e-4, seed=0) v = ds.simulate(1000000, v0=np.zeros(1)) friedrich_method(v, default) """ Explanation: Before drift-bifurcation End of explanation """
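As a small extension (a sketch; the noise amplitude and seed are chosen arbitrarily), the same analysis can be repeated with a stronger Langevin force to see how robust the reconstructed deterministic dynamics are; only functions defined earlier in this notebook are used.
# Sketch: increase the noise amplitude R by an order of magnitude and rerun the estimation.
ds_noisy = velocity(tau=3.8, delta_t=0.05, R=3e-3, seed=1)
v_noisy = ds_noisy.simulate(1000000, v0=np.zeros(1))
friedrich_method(v_noisy, default)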
bspalding/research_public
lectures/Instability of parameter estimates.ipynb
apache-2.0
# We'll be doing some examples, so let's import the libraries we'll need
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
"""
Explanation: Instability of Parameter Estimates
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie. Algorithms by David Edwards.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
Parameters
A parameter is anything that a model uses to constrain its predictions. Commonly, any quantity that describes a data set or distribution is a parameter. For example, the mean of a normal distribution is a parameter; in fact, we say that a normal distribution is <i>parametrized</i> by its mean and variance. If we take the mean of a set of samples drawn from the normal distribution, we get an estimate of the mean of the distribution. Similarly, the mean of a set of observations is an estimate of the parameter of the underlying distribution (which is often assumed to be normal). Other parameters include the median, the correlation coefficient with another series, the standard deviation, and every other measurement of a data set.
You Never Know, You Only Estimate
When you take the mean of a data set, you do not know the true mean. You have estimated the mean as best you can from the data you have, and the estimate can be off. This is true of any parameter you estimate. To actually understand what is going on you need to determine how good your estimate is by looking at its stability/standard error/confidence intervals.
Instability of estimates
Whenever we consider a set of observations, our calculation of a parameter can only be an estimate. It will change as we take more measurements or as time passes and we get new observations. We can quantify the uncertainty in our estimate by looking at how the parameter changes as we look at different subsets of the data. For instance, standard deviation describes how different the mean of a set is from the mean of each observation, that is, from each observation itself. In financial applications, data often comes in time series. In this case, we can estimate a parameter at different points in time; say, for the previous 30 days. By looking at how much this moving estimate fluctuates as we change our time window, we can compute the instability of the estimated parameter.
End of explanation
"""
# Set a seed so we can play with the data without generating new random numbers every time
np.random.seed(123)
normal = np.random.randn(500)

print np.mean(normal[:10])
print np.mean(normal[:100])
print np.mean(normal[:250])
print np.mean(normal)

# Plot a stacked histogram of the data
plt.hist([normal[:10], normal[10:100], normal[100:250], normal], normed=1, histtype='bar', stacked=True);
plt.ylabel('Frequency')
plt.xlabel('Value');

print np.std(normal[:10])
print np.std(normal[:100])
print np.std(normal[:250])
print np.std(normal)
"""
Explanation: Example: mean and standard deviation
First, let's take a look at some samples from a normal distribution. We know that the mean of the distribution is 0 and the standard deviation is 1; but if we measure the parameters from our observations, we will get only approximately 0 and approximately 1.
We can see how these estimates change as we take more and more samples: End of explanation """ #Generate some data from a bi-modal distribution def bimodal(n): X = np.zeros((n)) for i in range(n): if np.random.binomial(1, 0.5) == 0: X[i] = np.random.normal(-5, 1) else: X[i] = np.random.normal(5, 1) return X X = bimodal(1000) #Let's see how it looks plt.hist(X, bins=50) plt.ylabel('Frequency') plt.xlabel('Value') print 'mean:', np.mean(X) print 'standard deviation:', np.std(X) """ Explanation: Notice that, although the probability of getting closer to 0 and 1 for the mean and standard deviation, respectively, increases with the number of samples, we do not always get better estimates by taking more data points. Whatever our expectation is, we can always get a different result, and our goal is often to compute the probability that the result is significantly different than expected. With time series data, we usually care only about contiguous subsets of the data. The moving average (also called running or rolling) assigns the mean of the previous $n$ data points to each point in time. Below, we compute the 90-day moving average of a stock price and plot it to see how it changes. There is no result in the beginning because we first have to accumulate at least 90 days of data. Example: Non-Normal Underlying Distribution What happens if the underlying data isn't normal? A mean will be very deceptive. Because of this it's important to test for normality of your data. We'll use a Jarque-Bera test as an example. End of explanation """ mu = np.mean(X) sigma = np.std(X) N = np.random.normal(mu, sigma, 1000) plt.hist(N, bins=50) plt.ylabel('Frequency') plt.xlabel('Value'); """ Explanation: Sure enough, the mean is increidbly non-informative about what is going on in the data. We have collapsed all of our data into a single estimate, and lost of a lot of information doing so. This is what the distribution should look like if our hypothesis that it is normally distributed is correct. End of explanation """ from statsmodels.stats.stattools import jarque_bera jarque_bera(X) """ Explanation: We'll test our data using the Jarque-Bera test to see if it's normal. A significant p-value indicates non-normality. End of explanation """ def sharpe_ratio(asset, riskfree): return np.mean(asset - riskfree)/np.std(asset - riskfree) start = '2012-01-01' end = '2015-01-01' # Use an ETF that tracks 3-month T-bills as our risk-free rate of return treasury_ret = get_pricing('BIL', fields='price', start_date=start, end_date=end).pct_change()[1:] pricing = get_pricing('AMZN', fields='price', start_date=start, end_date=end) returns = pricing.pct_change()[1:] # Get the returns on the asset # Compute the running Sharpe ratio running_sharpe = [sharpe_ratio(returns[i-90:i], treasury_ret[i-90:i]) for i in range(90, len(returns))] # Plot running Sharpe ratio up to 100 days before the end of the data set _, ax1 = plt.subplots() ax1.plot(range(90, len(returns)-100), running_sharpe[:-100]); ticks = ax1.get_xticks() ax1.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates plt.xlabel('Date') plt.ylabel('Sharpe Ratio'); """ Explanation: Sure enough the value is < 0.05 and we say that X is not normal. This saves us from accidentally making horrible predictions. 
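As a small programmatic aside (a sketch added here, not part of the original lecture): statsmodels' jarque_bera returns the test statistic, the p-value, the sample skewness, and the sample kurtosis, so the decision above can be automated by unpacking the p-value instead of reading it off the cell output:

    # Sketch, assuming the jarque_bera call on X from the cell above
    _, p_value, skew_val, kurt_val = jarque_bera(X)
    print 'Jarque-Bera p-value:', p_value
    print 'Reject normality at the 5% level:', p_value < 0.05
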
Example: Sharpe ratio One statistic often used to describe the performance of assets and portfolios is the Sharpe ratio, which measures the additional return per unit additional risk achieved by a portfolio, relative to a risk-free source of return such as Treasury bills: $$R = \frac{E[r_a - r_b]}{\sqrt{Var(r_a - r_b)}}$$ where $r_a$ is the returns on our asset and $r_b$ is the risk-free rate of return. As with mean and standard deviation, we can compute a rolling Sharpe ratio to see how our estimate changes through time. End of explanation """ # Compute the mean and std of the running Sharpe ratios up to 100 days before the end mean_rs = np.mean(running_sharpe[:-100]) std_rs = np.std(running_sharpe[:-100]) # Plot running Sharpe ratio _, ax2 = plt.subplots() ax2.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates ax2.plot(range(90, len(returns)), running_sharpe) # Plot its mean and the +/- 1 standard deviation lines ax2.axhline(mean_rs) ax2.axhline(mean_rs + std_rs, linestyle='--') ax2.axhline(mean_rs - std_rs, linestyle='--') # Indicate where we computed the mean and standard deviations # Everything after this is 'out of sample' which we are comparing with the estimated mean and std ax2.axvline(len(returns) - 100, color='pink'); plt.xlabel('Date') plt.ylabel('Sharpe Ratio') plt.legend(['Sharpe Ratio', 'Mean', '+/- 1 Standard Deviation']) print 'Mean of running Sharpe ratio:', mean_rs print 'std of running Sharpe ratio:', std_rs """ Explanation: The Sharpe ratio looks rather volatile, and it's clear that just reporting it as a single value will not be very helpful for predicting future values. Instead, we can compute the mean and standard deviation of the data above, and then see if it helps us predict the Sharpe ratio for the next 100 days. End of explanation """ # Load time series of prices start = '2012-01-01' end = '2015-01-01' pricing = get_pricing('AMZN', fields='price', start_date=start, end_date=end) # Compute the rolling mean for each day mu = pd.rolling_mean(pricing, window=90) # Plot pricing data _, ax1 = plt.subplots() ax1.plot(pricing) ticks = ax1.get_xticks() ax1.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates plt.ylabel('Price') plt.xlabel('Date') # Plot rolling mean ax1.plot(mu); plt.legend(['Price','Rolling Average']); """ Explanation: The standard deviation in this case is about a quarter of the range, so this data is extremely volatile. Taking this into account when looking ahead gave a better prediction than just using the mean, although we still observed data more than one standard deviation away. We could also compute the rolling mean of the Sharpe ratio to try and follow trends; but in that case, too, we should keep in mind the standard deviation. Example: Moving Average Let's say you take the average with a lookback window; how would you determine the standard error on that estimate? Let's start with an example showing a 90-day moving average. End of explanation """ print 'Mean of rolling mean:', np.mean(mu) print 'std of rolling mean:', np.std(mu) """ Explanation: This lets us see the instability/standard error of the mean, and helps anticipate future variability in the data. We can quantify this variability by computing the mean and standard deviation of the rolling mean. 
End of explanation """ # Compute rolling standard deviation std = pd.rolling_std(pricing, window=90) # Plot rolling std _, ax2 = plt.subplots() ax2.plot(std) ax2.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates plt.ylabel('Standard Deviation of Moving Average') plt.xlabel('Date') print 'Mean of rolling std:', np.mean(std) print 'std of rolling std:', np.std(std) """ Explanation: In fact, the standard deviation, which we use to quantify variability, is itself variable. Below we plot the rolling standard deviation (for a 90-day window), and compute <i>its</i> mean and standard deviation. End of explanation """ # Plot original data _, ax3 = plt.subplots() ax3.plot(pricing) ax3.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates # Plot Bollinger bands ax3.plot(mu) ax3.plot(mu + std) ax3.plot(mu - std); plt.ylabel('Price') plt.xlabel('Date') plt.legend(['Price', 'Moving Average', 'Moving Average +1 Std', 'Moving Average -1 Std']) """ Explanation: To see what this changing standard deviation means for our data set, let's plot the data again along with the Bollinger bands: the rolling mean, one rolling standard deviation (of the data) above the mean, and one standard deviation below. Note that although standard deviations give us more information about the spread of the data, we cannot assign precise probabilities to our expectations for future observations without assuming a particular distribution for the underlying process. End of explanation """
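A closing note added here, not part of the original lecture: the module-level helpers pd.rolling_mean and pd.rolling_std used above belong to an older pandas API and were later deprecated. On a newer pandas the same rolling estimates can be computed through the .rolling() accessor; a minimal sketch, assuming pricing is the price series loaded earlier:
# Sketch: modern pandas equivalents of the rolling estimates used above
mu = pricing.rolling(window=90).mean()    # 90-day moving average
std = pricing.rolling(window=90).std()    # 90-day moving standard deviation
The rest of the analysis (the mean and standard deviation of the rolling estimates, and the Bollinger band plot) should carry over unchanged, since these calls return the same kind of Series as the older helpers.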
sripaladugu/sripaladugu.github.io
ipynb/Pandas.ipynb
mit
import pandas as pd """ Explanation: Series End of explanation """ animals = ["Lion", "Tiger", "Monkey", None] s = pd.Series(animals) print(s) print("The name of this Series: ", s.name) numbers = [1, 2, 3, None] pd.Series(numbers) import numpy as np np.NaN == None np.NaN == np.NaN np.isnan(np.NaN) sports = {'Cricket': 'India', 'Football': 'America', 'Soccer': 'Brazil'} s = pd.Series(sports) s s.index s = pd.Series(['Cricket', 'Football', 'Soccer'], index = [ 'India', 'America', 'Brazil']) s """ Explanation: A Series is like a cross between a list and a dictionary. The items are stored in an order and there are labels with which you can retrieve them. A Series object also has a name attribute. End of explanation """ s.iloc[0] s.loc['America'] """ Explanation: Querying a Series A pandas Series can be queried either by the index position or the index label. As we saw if you don't give an index to the series, the position and the label are effectively the same values. To query by numeric location, starting at zero, use the iloc attribute. To query by the index label, you can use the loc attribute. End of explanation """ s = pd.Series(np.random.randint(0,1000,10000)) s.head() """ Explanation: iloc and loc are not methods, they are attributes. Okay, so now we know how to get data out of the series. Let's talk about working with the data. A common task is to want to consider all of the values inside of a series and want to do some sort of operation. This could be trying to find a certain number, summarizing data or transforming the data in some way. A typical programmatic approach to this would be to iterate over all the items in the series, and invoke the operation one is interested in. For instance, we could create a data frame of floating point values. Let's think of these as prices for different products. We could write a little routine which iterates over all of the items in the series and adds them together to get a total. This works, but it's slow. Modern computers can do many tasks simultaneously, especially, but not only, tasks involving mathematics. Pandas and the underlying NumPy libraries support a method of computation called vectorization. Vectorization works with most of the functions in the NumPy library, including the sum function. End of explanation """ %%timeit -n 100 summary = 0 for item in s: summary += item %%timeit -n 100 np.sum(s) """ Explanation: Magic functions begin with a percentage sign. If we type % sign and then hit the Tab key, we can see a list of the available magic functions. You could write your own magic functions too, but that's a little bit outside of the scope of this course. We're actually going to use what's called a cellular magic function. These start with two percentage signs and modify a raptor code in the current Jupyter cell. The function we're going to use is called timeit. And as you may have guessed from the name, this function will run our code a few times to determine, on average, how long it takes. End of explanation """ %%timeit -n 10 s = pd.Series(np.random.randint(0,1000,10000)) for label, value in s.iteritems(): s.set_value(label, value + 2) %%timeit -n 10 s = pd.Series(np.random.randint(0,1000,10000)) for label, value in s.iteritems(): s.loc[label] = value + 2 """ Explanation: Related feature in Pandas and NumPy is called broadcasting. With broadcasting, you can apply an operation to every value in the series, changing the series. 
For instance, if we wanted to increase every random variable by 2, we could do so quickly using the += operator directly on the series object. End of explanation """ %%timeit -n 10 s = pd.Series(np.random.randint(0,1000,10000)) s += 2 """ Explanation: But if you find yourself iterating through a series, you should question whether you're doing things in the best possible way. Here's how we would do this using the series set value method. End of explanation """ s = pd.Series([2,1,2]) s.loc['Animal'] = 'Bear' s original_sports = pd.Series({'Archery':'Bhutan', 'Golf': 'Scotland', 'Sumo': 'Japan'}) cricket_loving_countries = pd.Series(['Australia', 'India', 'England'], index=['Cricket','Cricket','Cricket']) all_countries = original_sports.append(cricket_loving_countries) all_countries original_sports """ Explanation: Amazing. Not only is it significantly faster, but it's more concise and maybe even easier to read too. The typical mathematical operations you would expect are vectorized, and the NumPy documentation outlines what it takes to create vectorized functions of your own. One last note on using the indexing operators to access series data. The .loc attribute lets you not only modify data in place, but also add new data as well. If the value you pass in as the index doesn't exist, then a new entry is added. And keep in mind, indices can have mixed types. While it's important to be aware of the typing going on underneath, Pandas will automatically change the underlying NumPy types as appropriate. Mixed types are also possible End of explanation """ all_countries['Cricket'] """ Explanation: There are a couple of important considerations when using append. First, Pandas is going to take your series and try to infer the best data types to use. In this example, everything is a string, so there's no problems here. Second, the append method doesn't actually change the underlying series. It instead returns a new series which is made up of the two appended together. We can see this by going back and printing the original series of values and seeing that they haven't changed. This is actually a significant issue for new Pandas users who are used to objects being changed in place. So watch out for it, not just with append but with other Pandas functions as well. End of explanation """ purchase_1 = pd.Series({'Name':'Kasi', 'Item purchased': 'Dog Food', 'Cost': 22.50}) purchase_2 = pd.Series({'Name':'Pradeep', 'Item purchased': 'Cat Food', 'Cost': 21.50}) purchase_3 = pd.Series({'Name':'Sri', 'Item purchased': 'Bird Food', 'Cost': 5.50}) df = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store1','Store1','Store2']) df print(df.loc['Store2']) type(df.loc['Store2']) print(df.loc['Store1']) type(df.loc['Store1']) """ Explanation: Finally, we see that when we query the appended series for those who have cricket as their national sport, we don't get a single value, but a series itself. This is actually very common, and if you have a relational database background, this is very similar to every table query resulting in a return set which itself is a table. The DataFrame Data Structure You can create a DataFrame in many different ways, some of which you might expect. For instance, you can use a group of series, where each series represents a row of data. Or you could use a group of dictionaries, where each dictionary represents a row of data. 
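To make the second option concrete, here is a minimal sketch (added for illustration, not from the original tutorial) that builds the same kind of purchase DataFrame from a list of dictionaries; the names and values simply mirror the purchase records used in this notebook:

    records = [{'Name': 'Kasi', 'Item purchased': 'Dog Food', 'Cost': 22.50},
               {'Name': 'Pradeep', 'Item purchased': 'Cat Food', 'Cost': 21.50},
               {'Name': 'Sri', 'Item purchased': 'Bird Food', 'Cost': 5.50}]
    df_from_dicts = pd.DataFrame(records, index=['Store1', 'Store1', 'Store2'])
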
End of explanation """ df.T # This essential turns your column names into indicies df.T.loc['Cost'] # We can then use the loc method """ Explanation: What if we want to do column, for example we want to get a list of all the costs? End of explanation """ print(df['Item purchased']) type(df['Item purchased']) """ Explanation: Since iloc and loc are used for row selection, the Panda's developers reserved indexing operator directly on the DataFrame for column selection. In a Panda's DataFrame, columns always have a name. So this selection is always label based, not as confusing as it was when using the square bracket operator on the series objects. For those familiar with relational databases, this operator is analogous to column projection. End of explanation """ df.loc['Store1']['Cost'] """ Explanation: Finally, since the result of using the indexing operator is the DataFrame or series, you can chain operations together. For instance, we could have rewritten the query for all Store 1 costs as End of explanation """ df.loc[:, ['Name','Cost']] """ Explanation: This looks pretty reasonable and gets us the result we wanted. But chaining can come with some costs and is best avoided if you can use another approach. In particular, chaining tends to cause Pandas to return a copy of the DataFrame instead of a view on the DataFrame. For selecting a data, this is not a big deal, though it might be slower than necessary. If you are changing data though, this is an important distinction and can be a source of error. Here's another method. As we saw, .loc does row selection, and it can take two parameters, the row index and the list of column names. .loc also supports slicing. If we wanted to select all rows, we can use a column to indicate a full slice from beginning to end. And then add the column name as the second parameter as a string. In fact, if we wanted to include multiply columns, we could do so in a list. And Pandas will bring back only the columns we have asked for. End of explanation """ df.drop('Store1') """ Explanation: So that's selecting and projecting data from a DataFrame based on row and column labels. The key concepts to remember are that the rows and columns are really just for our benefit. Underneath this is just a two axis labeled array, and transposing the columns is easy. Also, consider the issue of chaining carefully, and try to avoid it, it can cause unpredictable results. Where your intent was to obtain a view of the data, but instead Pandas returns to you a copy. In the Panda's world, friends don't let friends chain calls. So if you see it, point it out, and share a less ambiguous solution. End of explanation """ df.drop('Cost',axis=1) """ Explanation: It's easy to delete data in series and DataFrames, and we can use the drop function to do so. This function takes a single parameter, which is the index or roll label, to drop. This is another tricky place for new users to pandas. The drop function doesn't change the DataFrame by default. And instead, returns to you a copy of the DataFrame with the given rows removed. We can see that our original DataFrame is still intact. This is a very typical pattern in Pandas, where in place changes to a DataFrame are only done if need be, usually on changes involving indices. So it's important to be aware of. Drop has two interesting optional parameters. The first is called in place, and if it's set to true, the DataFrame will be updated in place, instead of a copy being returned. The second parameter is the axis, which should be dropped. 
By default, this value is 0, indicating the row axis. But you could change it to 1 if you want to drop a column. End of explanation """ del df['Item purchased'] df """ Explanation: There is a second way to drop a column, however. And that's directly through the use of the indexing operator, using the del keyword. This way of dropping data, however, takes immediate effect on the DataFrame and does not return a view. End of explanation """ df['Location'] = None df """ Explanation: Finally, adding a new column to the DataFrame is as easy as assigning it to some value. For instance, if we wanted to add a new location as a column with default value of none, we could do so by using the assignment operator after the square brackets. This broadcasts the default value to the new column immediately. End of explanation """ costs = df['Cost'] costs """ Explanation: The common work flow is to read your data into a DataFrame then reduce this DataFrame to the particular columns or rows that you're interested in working with. As you've seen, the Panda's toolkit tries to give you views on a DataFrame. This is much faster than copying data and much more memory efficient too. But it does mean that if you're manipulating the data you have to be aware that any changes to the DataFrame you're working on may have an impact on the base data frame you used originally. Here's an example using our same purchasing DataFrame from earlier. We can create a series based on just the cost category using the square brackets. End of explanation """ costs += 2 costs """ Explanation: Then we can increase the cost in this series using broadcasting. End of explanation """ df """ Explanation: Now if we look at our original DataFrame, we see those costs have risen as well. This is an important consideration to watch out for. If you want to explicitly use a copy, then you should consider calling the copy method on the DataFrame for it first. End of explanation """ !cat olympics.csv """ Explanation: A common workflow is to read the dataset in, usually from some external file. We saw previously how you can do this using Python, and lists, and dictionaries. You can imagine how you might use those dictionaries to create a Pandas DataFrame. Thankfully, Pandas has built-in support for delimited files such as CSV files as well as a variety of other data formats including relational databases, Excel, and HTML tables. I've saved a CSV file called olympics.csv, which has data from Wikipedia that contains a summary list of the medal various countries have won at the Olympics. We can take a look at this file using the shell command cat. Which we can invoke directly using the exclamation point. What happens here is that when the Jupyter notebook sees a line beginning with an exclamation mark, it sends the rest of the line to the operating system shell for evaluation. So cat works on Linux and Macs. End of explanation """ df = pd.read_csv('olympics.csv') df.head() """ Explanation: We see from the cat output that there seems to be a numeric list of columns followed by a bunch of column identifiers. The column identifiers have some odd looking characters in them. This is the unicode numero sign, which means number of. Then we have rows of data, all columns separated. We can read this into a DataFrame by calling the read_csv function of the module. When we look at the DataFrame we see that the first cell has an NaN in it since it's an empty value, and the rows have been automatically indexed for us. 
End of explanation """ df = pd.read_csv('olympics.csv', index_col=0, skiprows=1) df.head() """ Explanation: It seems pretty clear that the first row of data in the DataFrame is what we really want to see as the column names. It also seems like the first column in the data is the country name, which we would like to make an index. Read csv has a number of parameters that we can use to indicate to Pandas how rows and columns should be labeled. For instance, we can use the index call to indicate which column should be the index and we can also use the header parameter to indicate which row from the data file should be used as the header. End of explanation """ df.columns """ Explanation: Now this data came from the all time Olympic games medal table on Wikipedia. If we head to the page we could see that instead of running gold, silver and bronze in the pages, these nice little icons with a one, a two, and a three in them In our csv file these were represented with the strings 01 !, 02 !, and so on. We see that the column values are repeated which really isn't good practice. Panda's recognize this in a panda.1 and .2 to make things more unique. But this labeling isn't really as clear as it could be, so we should clean up the data file. We can of course do this just by going and editing the CSV file directly, but we can also set the column names using the Pandas name property. Panda stores a list of all of the columns in the .columns attribute. End of explanation """ df.rename? for col in df.columns: if col[:2]=='01': # if the first two letters are '01' df.rename(columns={col:'Gold'+col[4:]}, inplace=True) #mapping changes labels if col[:2]=='02': df.rename(columns={col:'Silver'+col[4:]}, inplace=True) if col[:2]=='03': df.rename(columns={col:'Bronze'+col[4:]}, inplace=True) if col[:1]=='№': df.rename(columns={col:'#'+col[1:]}, inplace=True) df.head() """ Explanation: We can change the values of the column names by iterating over this list and calling the rename method of the data frame. Here we just iterate through all of the columns looking to see if they start with a 01, 02, 03 or numeric character. If they do, we can call rename and set the column parameters to a dictionary with the keys being the column we want to replace and the value being the new value we want. Here we'll slice some of the old values in two, since we don't want to lose the unique appended values. We'll also set the ever-important in place parameter to true so Pandas knows to update this data frame directly. End of explanation """ df['Gold']>0 """ Explanation: Querying a DataFrame Boolean masking is the heart of fast and efficient querying in NumPy. It's analogous a bit to masking used in other computational areas. A Boolean mask is an array which can be of one dimension like a series, or two dimensions like a DataFrame, where each of the values in the array are either true or false. This array is essentially overlaid on top of the data structure that we're querying. And any cell aligned with the true value will be admitted into our final result, and any sign aligned with a false value will not. Boolean masking is powerful conceptually and is the cornerstone of efficient NumPy and pandas querying. This technique is well used in other areas of computer science, for instance, in graphics. But it doesn't really have an analogue in other traditional relational databases, so I think it's worth pointing out here. Boolean masks are created by applying operators directly to the pandas series or DataFrame objects. 
For instance, in our Olympics data set, you might be interested in seeing only those countries who have achieved a gold medal at the summer Olympics. To build a Boolean mask for this query, we project the gold column using the indexing operator and apply the greater than operator with a comparison value of zero. This is essentially broadcasting a comparison operator, greater than, with the results being returned as a Boolean series. End of explanation """ only_gold = df.where(df['Gold']>0) only_gold.head() """ Explanation: The resultant series is indexed where the value of each cell is either true or false depending on whether a country has won at least one gold medal, and the index is the country name. So this builds us the Boolean mask, which is half the battle. What we want to do next is overlay that mask on the DataFrame. We can do this using the where function. The where function takes a Boolean mask as a condition, applies it to the DataFrame or series, and returns a new DataFrame or series of the same shape. Let's apply this Boolean mask to our Olympics data and create a DataFrame of only those countries who have won a gold at a summer games. End of explanation """ df['Gold'].count() only_gold['Gold'].count() only_gold = only_gold.dropna() only_gold.head() """ Explanation: We see that the resulting DataFrame keeps the original indexed values, and only data from countries that met the condition are retained. All of the countries which did not meet the condition have NaN data instead. This is okay. Most statistical functions built into the DataFrame object ignore values of NaN. End of explanation """ only_gold = df[df['Gold']>0] only_gold.head() #To get the no of countries who recieved at least one gold in Summer or Winter Olympics len(df[(df['Gold']>0) | df['Gold.1']>0]) #Are there any countries which won a gold in winter olympics but never in summer olympics df[(df['Gold']==0) & (df['Gold.1']>0)] """ Explanation: Often we want to drop those rows which have no data. To do this, we can use the drop NA function. You can optionally provide drop NA the axis it should be considering. Remember that the axis is just an indicator for the columns or rows and that the default is zero, which means rows. When you find yourself talking about pandas and saying phrases like, often I want to, it's quite likely the developers have included a shortcut for this common operation. For instance, in this example, we don't actually have to use the where function explicitly. The pandas developers allow the indexing operator to take a Boolean mask as a value instead of just a list of column names. End of explanation """ df['country'] = df.index df = df.set_index('Gold') df.head() """ Explanation: Extremely important, and often an issue for new users, is to remember that each Boolean mask needs to be encased in parenthesis because of the order of operations. This can cause no end of frustration if you're not used to it, so be careful. Indexing DataFrames The index is essentially a row level label, and we know that rows correspond to axis zero. In our Olympics data, we indexed the data frame by the name of the country. Indices can either be inferred, such as when we create a new series without an index, in which case we get numeric values, or they can be set explicitly, like when we use the dictionary object to create the series, or when we loaded data from the CSV file and specified the header. Another option for setting an index is to use the set_index function. 
This function takes a list of columns and promotes those columns to an index. Set index is a destructive process, it doesn't keep the current index. If you want to keep the current index, you need to manually create a new column and copy into it values from the index attribute. Let's go back to our Olympics DataFrame. Let's say that we don't want to index the DataFrame by countries, but instead want to index by the number of gold medals that were won at summer games. First we need to preserve the country information into a new column. We can do this using the indexing operator or the string that has the column label. Then we can use the set_index to set index of the column to summer gold medal wins. End of explanation """ df = df.reset_index() df.head() """ Explanation: You'll see that when we create a new index from an existing column it appears that a new first row has been added with empty values. This isn't quite what's happening. And we know this in part because an empty value is actually rendered either as a none or an NaN if the data type of the column is numeric. What's actually happened is that the index has a name. Whatever the column name was in the Jupiter notebook has just provided this in the output. We can get rid of the index completely by calling the function reset_index. This promotes the index into a column and creates a default numbered index. End of explanation """ df = pd.read_csv('census.csv') df.head() """ Explanation: One nice feature of pandas is that it has the option to do multi-level indexing. This is similar to composite keys in relational database systems. To create a multi-level index, we simply call set index and give it a list of columns that we're interested in promoting to an index. Pandas will search through these in order, finding the distinct data and forming composite indices. A good example of this is often found when dealing with geographical data which is sorted by regions or demographics. Let's change data sets and look at some census data for a better example. This data is stored in the file census.csv and comes from the United States Census Bureau. In particular, this is a breakdown of the population level data at the US county level. It's a great example of how different kinds of data sets might be formatted when you're trying to clean them. For instance, in this data set there are two summarized levels, one that contains summary data for the whole country. And one that contains summary data for each state, and one that contains summary data for each county. End of explanation """ df['SUMLEV'].unique() #40 belongs to state level data and 50 belongs to county level data """ Explanation: I often find that I want to see a list of all the unique values in a given column. In this DataFrame, we see that the possible values for the sum level are using the unique function on the DataFrame. This is similar to the SQL distinct operator. Here we can run unique on the sum level of our current DataFrame and see that there are only two different values, 40 and 50. End of explanation """ df = df[df['SUMLEV']==50] df.head() """ Explanation: Let's get rid of all of the rows that are summaries at the state level and just keep the county data. 
End of explanation """ columns_to_keep = ['STNAME', 'CTYNAME', 'BIRTHS2010', 'BIRTHS2011', 'BIRTHS2012', 'BIRTHS2013', 'BIRTHS2014', 'BIRTHS2015', 'POPESTIMATE2010', 'POPESTIMATE2011', 'POPESTIMATE2012', 'POPESTIMATE2013', 'POPESTIMATE2014', 'POPESTIMATE2015' ] df = df[columns_to_keep] df.head() """ Explanation: Also while this data set is interesting for a number of different reasons, let's reduce the data that we're going to look at to just the total population estimates and the total number of births. We can do this by creating a list of column names that we want to keep then project those and assign the resulting DataFrame to our df variable. End of explanation """ df = df.set_index(['STNAME','CTYNAME']) df.head() """ Explanation: The US Census data breaks down estimates of population data by state and county. We can load the data and set the index to be a combination of the state and county values and see how pandas handles it in a DataFrame. We do this by creating a list of the column identifiers we want to have indexed. And then calling set index with this list and assigning the output as appropriate. We see here that we have a dual index, first the state name and then the county name. End of explanation """ df.loc['Michigan', 'Washtenaw County'] """ Explanation: An immediate question which comes up is how we can query this DataFrame. For instance, we saw previously that the loc attribute of the DataFrame can take multiple arguments. And it could query both the row and the columns. When you use a MultiIndex, you must provide the arguments in order by the level you wish to query. Inside of the index, each column is called a level and the outermost column is level zero. For instance, if we want to see the population results from Washtenaw County, you'd want to the first argument as the state of Michigan. End of explanation """ df.loc[[('Michigan','Washtenaw County'),('Michigan','Wayne County')]] """ Explanation: You might be interested in just comparing two counties. For instance, Washtenaw and Wayne County which covers Detroit. To do this, we can pass the loc method, a list of tuples which describe the indices we wish to query. Since we have a MultiIndex of two values, the state and the county, we need to provide two values as each element of our filtering list. End of explanation """ purchase_1 = pd.Series({'Name': 'Chris', 'Item Purchased': 'Dog Food', 'Cost': 22.50}) purchase_2 = pd.Series({'Name': 'Kevyn', 'Item Purchased': 'Kitty Litter', 'Cost': 2.50}) purchase_3 = pd.Series({'Name': 'Vinod', 'Item Purchased': 'Bird Seed', 'Cost': 5.00}) df = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store 1', 'Store 1', 'Store 2']) df """ Explanation: Okay so that's how hierarchical indices work in a nutshell. They're a special part of the pandas library which I think can make management and reasoning about data easier. Of course hierarchical labeling isn't just for rows. For example, you can transpose this matrix and now have hierarchical column labels. And projecting a single column which has these labels works exactly the way you would expect it to. Question End of explanation """ df = df.set_index([df.index, 'Name']) df.index.names = ['Location', 'Name'] df = df.append(pd.Series(data={'Cost': 3.00, 'Item Purchased': 'Kitty Food'}, name=('Store 2', 'Kevyn'))) df """ Explanation: Reindex the purchase records DataFrame to be indexed hierarchically, first by store, then by person. Name the indexes 'Location' and 'Name'. 
Then add a new entry to it with the value of: Name: 'Kevyn', Item Purchased:'Kitty Food', 'Cost':3.00, Location:'Store 2'. End of explanation """ df.reset_index() """ Explanation: If we want we can also reset the index as columns as follows: End of explanation """
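A closing aside added here, not part of the original tutorial: the append calls used on Series and DataFrames earlier in this notebook were deprecated in later pandas releases, with pd.concat covering the same use case. A minimal sketch, reusing the series defined above:
# Sketch: pd.concat as the non-deprecated way to stack the two series
all_countries = pd.concat([original_sports, cricket_loving_countries])
As with append, this returns a new Series and leaves the original inputs untouched.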
diegocavalca/Studies
programming/Python/tensorflow/exercises/Graph_Solutions.ipynb
cc0-1.0
# Q1. Create a graph g = tf.Graph() with g.as_default(): # Define inputs with tf.name_scope("inputs"): a = tf.constant(2, tf.int32, name="a") b = tf.constant(3, tf.int32, name="b") # Ops with tf.name_scope("ops"): c = tf.multiply(a, b, name="c") d = tf.add(a, b, name="d") e = tf.subtract(c, d, name="e") # Q2. Start a session sess = tf.Session(graph=g) # Q3. Fetch c, d, e _c, _d, _e = sess.run([c, d, e]) print("c =", _c) print("d =", _d) print("e =", _e) # Close the session sess.close() """ Explanation: Q1-3. You are to implement the graph below. Complete the code. <img src="figs/fig1.png",width=500> End of explanation """ tf.reset_default_graph() # Define inputs a = tf.Variable(tf.random_uniform([])) b_pl = tf.placeholder(tf.float32, [None]) # Ops c = a * b_pl d = a + b_pl e = tf.reduce_sum(c) f = tf.reduce_mean(d) g = e - f # initialize variable(s) init = tf.global_variables_initializer() # Update variable update_op = tf.assign(a, a + g) # Q4. Create a (summary) writer to `asset` writer = tf.summary.FileWriter('asset', tf.get_default_graph()) #Q5. Add `a` to summary.scalar tf.summary.scalar("a", a) #Q6. Add `c` and `d` to summary.histogram tf.summary.histogram("c", c) tf.summary.histogram("d", d) #Q7. Merge all summaries. summaries = tf.summary.merge_all() # Start a session sess = tf.Session() # Initialize Variable(s) sess.run(init) # Fetch the value of c, d, and e. for step in range(5): _b = np.arange(10, dtype=np.float32) _, summaries_proto = sess.run([update_op, summaries], {b_pl:_b}) # Q8. Attach summaries_proto to TensorBoard. writer.add_summary(summaries_proto, global_step=step) # Close the session sess.close() """ Explanation: Q4-8. You are to implement the graph below. Complete the code. <img src="figs/fig3.png",width=500> End of explanation """
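Two small follow-ups added here, outside the original exercise sheet. First, a quick sanity check of Q1-3: with a = 2 and b = 3 the fetched values should come out as c = 2 * 3 = 6, d = 2 + 3 = 5 and e = c - d = 1, so a print like the sketch below (reusing the _c, _d, _e variables fetched above) makes regressions easy to spot. Second, the event files the FileWriter writes into the asset directory can be browsed with TensorBoard pointed at that directory (for example, tensorboard --logdir asset from a shell).
# Sketch of a sanity check for Q1-3 (expected output: 6 5 1)
print("c, d, e =", _c, _d, _e)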
arturops/deep-learning
autoencoder/Simple_Autoencoder.ipynb
mit
%matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) """ Explanation: A Simple Autoencoder We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data. In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset. End of explanation """ img = mnist.train.images[2] plt.imshow(img.reshape((28, 28)), cmap='Greys_r') """ Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits. End of explanation """ # Size of the encoding layer (the hidden layer) encoding_dim = 32 # feel free to change this value img_size = mnist.train.images.shape[1] # Input and target placeholders inputs_ = tf.placeholder(tf.float32, shape=(None,img_size), name='inputs') targets_ = tf.placeholder(tf.float32, shape=(None,img_size), name='targets') # Output of hidden layer, single fully connected layer here with ReLU activation #encoded = tf.contrib.layers.fully_connected() encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu) # Output layer logits, fully connected layer with no activation logits = tf.layers.dense(encoded, img_size) # Sigmoid output from logits decoded = tf.nn.sigmoid(logits, name='output') # Sigmoid cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) # Mean of the loss cost = tf.reduce_mean(loss) # Adam optimizer opt = tf.train.AdamOptimizer().minimize(cost) """ Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input. Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function. 
End of explanation """ # Create the session sess = tf.Session() """ Explanation: Training End of explanation """ epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) feed = {inputs_: batch[0], targets_: batch[0]} batch_cost, _ = sess.run([cost, opt], feed_dict=feed) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) """ Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed). End of explanation """ fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs}) for images, row in zip([in_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) sess.close() """ Explanation: Checking out the results Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. End of explanation """
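A small follow-up added here, beyond the original notebook: the 32-dimensional codes fetched into compressed above are never actually displayed. Because sess.run already returned them as a plain NumPy array of shape (10, 32), they can still be inspected after the session is closed; a minimal sketch:
# Sketch: view the 10 test images' 32-unit codes as a small heatmap
plt.figure(figsize=(10, 2))
plt.imshow(compressed, cmap='Greys_r')
plt.xlabel('Code unit')
plt.ylabel('Test image')
Darker and lighter stripes give a rough sense of how differently the encoder represents each digit.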
Faris137/MachineLearningArabic
Pima Project/.ipynb_checkpoints/Pima Project 2.0-checkpoint.ipynb
mit
import numpy as np import pandas as pd import seaborn as sb from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn import preprocessing %matplotlib inline df = pd.read_csv('diabetes.csv') df.head(20) #لاستعراض ال20 السجلات الاولى من إطار البيانات """ Explanation: محاولة لإستكشاف افضل الطرق لتحسين اداء نموذج بيما End of explanation """ df.info() """ Explanation: هذه الدالة تعطينا توصيف كامل للبيانات و تكشف لنا في ما إذا كانت هناك قيم مفقودة End of explanation """ sb.countplot(x='Outcome',data=df, palette='hls') sb.countplot(x='Pregnancies',data=df, palette='hls') sb.countplot(x='Glucose',data=df, palette='hls') sb.heatmap(df.corr()) sb.pairplot(df, hue="Outcome") from scipy.stats import kendalltau sb.jointplot(df['Pregnancies'], df['Glucose'], kind="hex", stat_func=kendalltau, color="#4CB391") import matplotlib.pyplot as plt g = sb.FacetGrid(df, row="Pregnancies", col="Outcome", margin_titles=True) bins = np.linspace(0, 50, 13) g.map(plt.hist, "BMI", color="steelblue", bins=bins, lw=0) sb.pairplot(df, vars=["Pregnancies", "BMI"]) """ Explanation: سيبورن مكتبة جميلة للرسوميات سهلة في الكتابة لكن مفيدة جداً في المعلومات التي ممكن ان نقراءها عبر الهيستوقرام فائدها ممكن ان تكون في 1- تلخيص توزيع البينات في رسوميات 2- فهم او الإطلاع على القيم الفريدة 3- تحمل الرسوميات معنى اعمق من الكلمات End of explanation """ columns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'] labels = df['Outcome'].values features = df[list(columns)].values X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30) clf = RandomForestClassifier(n_estimators=1) clf = clf.fit(X_train, y_train) accuracy = clf.score(X_train, y_train) print ' اداء النموذج في عينة التدريب بدقة ', accuracy*100 accuracy = clf.score(X_test, y_test) print ' اداء النموذج في عينة الفحص بدقة ', accuracy*100 ypredict = clf.predict(X_train) print '\n Training classification report\n', classification_report(y_train, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_train, ypredict) ypredict = clf.predict(X_test) print '\n Training classification report\n', classification_report(y_test, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_test, ypredict) """ Explanation: تجربة استخدام تقييس و تدريج الخواص لتحسين اداء النموذج End of explanation """ #scaling scaler = StandardScaler() # Fit only on training data scaler.fit(X_train) X_train = scaler.transform(X_train) # apply same transformation to test data X_test = scaler.transform(X_test) clf = RandomForestClassifier(n_estimators=1) clf = clf.fit(X_train, y_train) accuracy = clf.score(X_train, y_train) print ' اداء النموذج في عينة التدريب بدقة ', accuracy*100 accuracy = clf.score(X_test, y_test) print ' اداء النموذج في عينة الفحص بدقة ', accuracy*100 ypredict = clf.predict(X_train) print '\n Training classification report\n', classification_report(y_train, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_train, ypredict) ypredict = clf.predict(X_test) print '\n Training classification report\n', classification_report(y_test, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_test, ypredict) """ Explanation: تجربة تحسين اداء النموذج باستخدام طريقة standard scaler End of explanation """ columns = ['Pregnancies', 'Glucose', 
'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'] labels = df['Outcome'].values features = df[list(columns)].values X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30) scaler = preprocessing.MinMaxScaler() scaler.fit(X_train) X_train = scaler.transform(X_train) # apply same transformation to test data X_test = scaler.transform(X_test) clf = RandomForestClassifier(n_estimators=1) clf = clf.fit(X_train, y_train) accuracy = clf.score(X_train, y_train) print ' اداء النموذج في عينة التدريب بدقة ', accuracy*100 accuracy = clf.score(X_test, y_test) print ' اداء النموذج في عينة الفحص بدقة ', accuracy*100 ypredict = clf.predict(X_train) print '\n Training classification report\n', classification_report(y_train, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_train, ypredict) ypredict = clf.predict(X_test) print '\n Training classification report\n', classification_report(y_test, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_test, ypredict) columns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'] labels = df['Outcome'].values features = df[list(columns)].values X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30) clf = RandomForestClassifier(n_estimators=5) clf = clf.fit(X_train, y_train) accuracy = clf.score(X_train, y_train) print ' اداء النموذج في عينة التدريب بدقة ', accuracy*100 accuracy = clf.score(X_test, y_test) print ' اداء النموذج في عينة الفحص بدقة ', accuracy*100 ypredict = clf.predict(X_train) print '\n Training classification report\n', classification_report(y_train, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_train, ypredict) ypredict = clf.predict(X_test) print '\n Testing classification report\n', classification_report(y_test, ypredict) print "\n Confusion matrix of training \n", confusion_matrix(y_test, ypredict) """ Explanation: تجربة تحسين اداء النموذج بطريقة min-max scaler End of explanation """
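A closing suggestion added here, not part of the original notebook: all of the accuracy figures above come from a single random train/test split and very small forests, so they can swing noticeably between runs. A k-fold cross-validation score gives a steadier basis for comparing the scaling strategies; a minimal sketch, reusing the features and labels arrays defined above (the larger n_estimators value is just an assumed setting for illustration):
# Sketch: 5-fold cross-validation for a more stable accuracy estimate
from sklearn.model_selection import cross_val_score
cv_model = RandomForestClassifier(n_estimators=100)
scores = cross_val_score(cv_model, features, labels, cv=5)
print('Mean CV accuracy: %.2f%% (+/- %.2f%%)' % (scores.mean() * 100, scores.std() * 100))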
ES-DOC/esdoc-jupyterhub
notebooks/uhh/cmip6/models/sandbox-3/ocean.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-3', 'ocean') """ Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: UHH Source ID: SANDBOX-3 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:41 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) """ Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) """ Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) """ Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. 
Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.2. 
Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Advection Ocean advection 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) """ Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3. 
Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.5. Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) """ Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. 
Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27.2. 
Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. 
Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.4. Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation """
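For readers unfamiliar with this fill-in pattern, the sketch below shows what a completed answer could look like. It assumes the DOC object initialised earlier in this notebook, and that list-valued (cardinality 1.N) properties take one DOC.set_value call per selected choice, as the VALUE(S) comments suggest; the particular values are invented for illustration and do not describe any real model configuration.
# Illustrative sketch only -- substitute your own model's details.
DOC.set_id('cmip6.ocean.key_properties.model_name')
DOC.set_value("NEMO 3.6")             # STRING, cardinality 1.1: a single value
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
DOC.set_value("Primitive equations")  # ENUM, cardinality 1.N: repeat set_value per selected choice
DOC.set_value("Boussinesq")
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
DOC.set_value(1.0)                    # FLOAT: pass a number, not a quoted string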
SteveDiamond/cvxpy
examples/notebooks/WWW/censored_data.ipynb
gpl-3.0
import numpy as np
n = 30 # number of variables
M = 50 # number of censored observations
K = 200 # total number of observations
np.random.seed(n*M*K)
X = np.random.randn(K*n).reshape(K, n)
c_true = np.random.rand(n)
# generating the y variable
y = X.dot(c_true) + .3*np.sqrt(n)*np.random.randn(K)
# ordering them based on y
order = np.argsort(y)
y_ordered = y[order]
X_ordered = X[order,:]
# finding boundary
D = (y_ordered[M-1] + y_ordered[M])/2.
# applying censoring
y_censored = np.concatenate((y_ordered[:M], np.ones(K-M)*D))
import matplotlib.pyplot as plt
# Show plot inline in ipython.
%matplotlib inline
def plot_fit(fit, fit_label):
    plt.figure(figsize=(10,6))
    plt.grid()
    plt.plot(y_censored, 'bo', label = 'censored data')
    plt.plot(y_ordered, 'co', label = 'uncensored data')
    plt.plot(fit, 'ro', label=fit_label)
    plt.ylabel('y')
    plt.legend(loc=0)
    plt.xlabel('observations');
"""
Explanation: Fitting censored data
Experimental measurements are sometimes censored such that we only know partial information about a particular data point. For example, in measuring the lifespan of mice, a portion of them might live through the duration of the study, in which case we only know the lower bound. One way to deal with this is Maximum Likelihood Estimation (MLE). However, censoring often makes analytical solutions difficult even for well-known distributions.
We can overcome this challenge by converting the MLE into a convex optimization problem and solving it using CVXPY.
This example is adapted from a homework problem from Boyd's CVX 101: Convex Optimization Course.
Setup
We will use similar notation here. Suppose we have a linear model:
$$ y^{(i)} = c^Tx^{(i)} +\epsilon^{(i)} $$
where $y^{(i)} \in \mathbf{R}$, $c \in \mathbf{R}^n$, $x^{(i)} \in \mathbf{R}^n$, and $\epsilon^{(i)}$ is the error, which has a normal distribution $N(0, \sigma^2)$ for $i = 1,\ldots,K$.
Then the MLE estimate of $c$ is the vector that minimizes the sum of squares of the errors $\epsilon^{(i)}$, namely:
$$
\begin{array}{ll}
  \underset{c}{\mbox{minimize}} & \sum_{i=1}^K (y^{(i)} - c^T x^{(i)})^2
\end{array}
$$
In the case of right-censored data, only $M$ observations are fully observed, and all that is known for the remaining observations is that $y^{(i)} \geq D$ for $i=\mbox{M+1},\ldots,K$ and some constant $D$.
Now let's see how this would work in practice.
Data Generation
End of explanation
"""
c_ols = np.linalg.lstsq(X_ordered, y_censored, rcond=None)[0]
fit_ols = X_ordered.dot(c_ols)
plot_fit(fit_ols, 'OLS fit')
"""
Explanation: Regular OLS
Let's see what the OLS result looks like. We'll use the np.linalg.lstsq function to solve for our coefficients.
End of explanation
"""
c_ols_uncensored = np.linalg.lstsq(X_ordered[:M], y_censored[:M], rcond=None)[0]
fit_ols_uncensored = X_ordered.dot(c_ols_uncensored)
plot_fit(fit_ols_uncensored, 'OLS fit with uncensored data only')
bad_predictions = (fit_ols_uncensored<=D) & (np.arange(K)>=M)
plt.plot(np.arange(K)[bad_predictions], fit_ols_uncensored[bad_predictions], color='orange', marker='o', lw=0);
"""
Explanation: We can see that we are systematically overestimating low values of $y$ and vice versa (red vs. cyan). This is caused by our use of censored (blue) observations, which exert a lot of leverage and pull the trendline down to reduce the error between the red and blue points.
OLS using uncensored data
A simple way to deal with this while maintaining analytical tractability is to simply ignore all censored observations.
$$
\begin{array}{ll}
  \underset{c}{\mbox{minimize}} & \sum_{i=1}^M (y^{(i)} - c^T x^{(i)})^2
\end{array}
$$
Given that our $M$ is much smaller than $K$, we are throwing away the majority of the dataset in order to accomplish this. Let's see how this new regression does.
End of explanation
"""
import cvxpy as cp

X_uncensored = X_ordered[:M, :]
c = cp.Variable(shape=n)
objective = cp.Minimize(cp.sum_squares(X_uncensored*c - y_ordered[:M]))
constraints = [ X_ordered[M:,:]*c >= D]
prob = cp.Problem(objective, constraints)
result = prob.solve()

c_cvx = np.array(c.value).flatten()
fit_cvx = X_ordered.dot(c_cvx)
plot_fit(fit_cvx, 'CVX fit')
"""
Explanation: We can see that the fit for the uncensored portion is now vastly improved. Even the fit for the censored data is now relatively unbiased, i.e. the fitted values (red points) are now centered around the uncensored observations (cyan points).
The one glaring issue with this arrangement is that we are now predicting many observations to be below $D$ (orange), even though we are well aware that this is not the case. Let's try to fix this.
Using constraints to account for censored data
Instead of throwing away all censored observations, let's leverage these observations to enforce the additional information that we know, namely that $y$ is bounded from below. We can do this by setting additional constraints:
$$
\begin{array}{ll}
  \underset{c}{\mbox{minimize}} & \sum_{i=1}^M (y^{(i)} - c^T x^{(i)})^2 \\
  \mbox{subject to} & c^T x^{(i)} \geq D \\
  & \mbox{for } i=\mbox{M+1},\ldots,K
\end{array}
$$
End of explanation
"""
print("norm(c_true - c_cvx): {:.2f}".format(np.linalg.norm((c_true - c_cvx))))
print("norm(c_true - c_ols_uncensored): {:.2f}".format(np.linalg.norm((c_true - c_ols_uncensored))))
"""
Explanation: Qualitatively, this already looks better than before, as it no longer predicts values that are inconsistent with the censored portion of the data. But does it do a good job of actually finding coefficients $c$ that are close to our original data? We'll use a simple Euclidean distance $\|c_\mbox{true} - c\|_2$ to compare:
End of explanation
"""
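The CVXPY cell above uses the older `*` operator for matrix–vector products, which recent CVXPY releases deprecate in favor of `@`. Below is a minimal, hedged sketch of the same constrained least-squares formulation written against the current CVXPY interface; the data is freshly simulated here rather than reusing the notebook's arrays, so treat it as an illustration of the call pattern, not a drop-in replacement.

```python
# Sketch: right-censored least squares with the modern CVXPY API (assumes cvxpy >= 1.1).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
K, M, n = 200, 50, 30                      # total obs, uncensored obs, features
X = rng.standard_normal((K, n))
c_true = rng.random(n)
y = X @ c_true + 0.3 * np.sqrt(n) * rng.standard_normal(K)

order = np.argsort(y)
X, y = X[order], y[order]
D = 0.5 * (y[M - 1] + y[M])                # censoring threshold between obs M-1 and M

c = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(X[:M] @ c - y[:M]))
constraints = [X[M:] @ c >= D]             # censored rows: predicted value must be at least D
cp.Problem(objective, constraints).solve()

print("distance to true coefficients:", np.linalg.norm(c_true - c.value))
```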
adelle207/pyladies.cz
original/v1/s011-dicts/requests.ipynb
mit
import requests """ Explanation: Requests Nejdřiv si nainstaluj Requests, knihovnu pro webové klienty: $ pip install requests A pak v Pythonu: End of explanation """ odpoved = requests.get('http://python.cz') """ Explanation: Knihovna Requests ti umožní stahovat webové stránky serverů na Internetu, podobně jako to dělá webový prohlížeč. (Prohlížeče umí pak stránky umí i zobrazis, Requests je jen stahuje.) Zkusíme si to napřed se stránkou http://python.cz: End of explanation """ print(odpoved.request.headers) """ Explanation: Requests za tebe vytvořil dotaz, který poslal serveru jménem "python.cz". V dotazu jsou "hlavičky", které určují co přesně stáhnout: End of explanation """ print(odpoved.headers) """ Explanation: Server pak vrátil odpověď, která má také nějaké hlavičky... End of explanation """ print(odpoved.text[:200]) # text odpovědi """ Explanation: ... a hlavně tělo odpovědi – v tomto případě HTML stránku: End of explanation """ odpoved = requests.get('https://api.twitter.com/1.1/search/tweets.json') odpoved.text """ Explanation: Když dáš v prohlížeči zobrazit zdrojový kód stránky http://python.cz, vypadat stejně. Requests toho umí spoustu; kompletní dokumentace je na http://docs.python-requests.org. Twitter API Normální internetové stránky jsou uzpůsobené pro lidské čtenáře. Existují ale i "stránky" dělané na to, aby je zpracovávaly programy. Vžilo se pro ně označení "API" (angl. Application Programming Interface, rozhraní pro programování aplikací). (Přesnější termín je "Webové API" nebo "REST API", protože "API" označovat jakékoli programátorské rozhraní – např. seznam metod pythoních slovníků je slovníkové API.) Jedna ze stránek, které mají API, je Twitter. Jejich API je na stránce https://dev.twitter.com/rest/public. Zkusme ho použít: na stránce https://api.twitter.com/1.1/search/tweets.json by mělo jít hledat tweety: End of explanation """ # Dosaď svoje údaje! api_key = "D4HJp6PKmpon9eya1b2c3d4e5" api_secret = "rhvasRMhvbuHJpu4MIuAb4WO50gnoQa1b2c3d4e5f6g7h8i9j0" """ Explanation: Aj, chyba! Twitter je zabezpečený. Musíme se přihlásit jako aplikace, což bohužel není vůbec jednoduché. První krok je klasické přihlášení (nebo vytvoření účtu) na twitter.com. Potom jdi na stránku https://apps.twitter.com/, a vytvoř si tam aplikaci. (Jako jméno doporučuju třeba "xyz-test" kde xyz je tvoje jméno. Jako webovou adresu můžeš použít neexistující "http://test.example". Je taky potřeba doplnit dostatečně dlouhý popisek.) Po vytvoření aplikace si otevři její záznam a jdi na záložku "Keys and Access Tokens". Tam najdeš dvě speciální hesla, kterým se můžeš přihlásit. Zkopíruj si je do Pythonu: End of explanation """ # Zakódování hesla import base64 secret = '{}:{}'.format(api_key, api_secret) secret64 = base64.b64encode(secret.encode('ascii')).decode('ascii') # Vytvoření speciální hlavičky pro požadavek headers = { 'Authorization': 'Basic {}'.format(secret64), 'Host': 'api.twitter.com', } # Odeslání požadavku. # Předtím jsme použily funkci "requests.get", která stáhne informace ze serveru. # Tady je metoda "requests.post", která serveru řekne aby provedl nějakou operaci. # GET, POST, a ostatní HTTP metody jsou popsané na http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html odpoved = requests.post('https://api.twitter.com/oauth2/token', headers=headers, data={'grant_type': 'client_credentials'}) odpoved.text """ Explanation: Tyto kódy je potřeba určitým způsobem spojit dohromady a poslat Twitteru, aby vytvořil další heslo, které pak půjde použít pro API. 
Ten způsob se naštěstí dá zapsat programem: End of explanation """ bearer_token = odpoved.json()['access_token'] bearer_token """ Explanation: Odpověď nám přišla jako řetězec, ve formátu JSON. JSON je ve světě vebových API častý, a tak má Requests přímo metodu, která JSON dekóduje. Můžeš si tak ušetřit psaní import json a json.loads(odpoved.text). Z odpovědi nás zajímá heslo jménem access_token. End of explanation """ def bearer_auth(req): req.headers['Authorization'] = 'Bearer ' + bearer_token return req session = requests.Session() session.auth = bearer_auth """ Explanation: Tohle heslo si můžeš zapsat přímo do programu, a jeho získávání příště přeskočit (místo všeho od api_key = ... napsat jen berarer_token = 'AAAA...'. Blížíme se ke konci! Teď si uděláme si objekt Session s nastavenými přihlašovacími údaji. Tím řekneme knihovně Requests, aby tohle nové heslo používala pro další dotazy. End of explanation """ odpoved = session.get( 'https://api.twitter.com/1.1/search/tweets.json', params={'q': '#vesmír'}, ) """ Explanation: A teď když místo requests.get použiješ session.get, budeš přihlášená. Zkus znovu stáhnout stránku https://api.twitter.com/1.1/search/tweets.json. Tentokrát s parametrem 'q', který říká co hledáš: End of explanation """ data = odpoved.json() for tweet in data['statuses']: print(tweet['text']) """ Explanation: Zkus si vypsat odpoved.json(). Zjistíš, že je to slovník plný spooousty informací. Nejzajímavější z nich je pod klíčem 'statuses': seznam tweetů. Každý tweet je zase slovník spousty informací; samotný text je pod klíčem 'text'. Tweety se tedy dají vypsat takhle: End of explanation """
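The notebook builds the Basic-auth header for the token request by hand with base64. A shorter route, sketched below, is to let Requests do that encoding via `requests.auth.HTTPBasicAuth` and to attach the resulting bearer token to a `Session` once. The endpoint and grant type mirror the cells above; the key/secret values are placeholders, and whether the endpoint still accepts app-only tokens depends on the current Twitter API terms.

```python
# Sketch: app-only bearer-token flow with Requests handling the Basic-auth encoding.
# api_key / api_secret are placeholders -- substitute your own application credentials.
import requests
from requests.auth import HTTPBasicAuth

def get_bearer_token(api_key, api_secret):
    response = requests.post(
        'https://api.twitter.com/oauth2/token',
        auth=HTTPBasicAuth(api_key, api_secret),
        data={'grant_type': 'client_credentials'},
    )
    response.raise_for_status()
    return response.json()['access_token']

def make_session(bearer_token):
    session = requests.Session()
    # Every request sent through this session carries the token header.
    session.headers['Authorization'] = 'Bearer ' + bearer_token
    return session
```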
ComputationalModeling/spring-2017-danielak
past-semesters/spring_2016/day-by-day/day19-exploratory-data-analysis-with-climate-data/Data_Exploration_Plotting.ipynb
agpl-3.0
# put your code here, and add additional cells as necessary. """ Explanation: Exploring data Names of group members // Put your names here! Goals of this assignment The purpose of this assignment is to explore data using visualization and statistics. Section 1 The file datafile_1.csv contains a three-dimensional dataset and associated uncertainty in the data. Read the data file into numpy arrays and visualize it using two new types of plots: 2D plots of the various combinations of dimensions (x-y, x-z, y-z), including error bars (using the pyplot errorbar() method). Try plotting using symbols instead of lines, and make the error bars a different color than the points themselves. 3D plots of all three dimensions at the same time using the mplot3d toolkit - in particular, look at the scatter() method. Hints: Look at the documentation for numpy's loadtxt() method - in particular, what do the parameters skiprows, comments, and unpack do? If you set up the 3D plot as described above, you can adjust the viewing angle with the command ax.view_init(elev=ANGLE1,azim=ANGLE2), where ANGLE1 and ANGLE2 are in degrees. End of explanation """ # put your code here, and add additional cells as necessary. """ Explanation: Section 2 Now, we're going to experiment with data exploration. You have two data files to examine: GLB.Ts.csv, which contains mean global air temperature from 1880 through the present day (retrieved from the NASA GISS surface temperature website, "Global-mean monthly, seasonal, and annual means, 1880-present"). Each row in the data file contains the year, monthly global average, yearly global average, and seasonal global average. See this file for clues as to what the columns mean. bintanja2008.txt, which is a reconstruction of the global surface temperature, deep-sea temperature, ice volume, and relative sea level for the last 3 million years. This data comes from the National Oceanic and Atmospheric Administration's National Climatic Data Center website, and can be found here. Some important notes: These data files are slightly modified versions of those on the website - they have been altered to remove some characters that don't play nicely with numpy (letters with accents), and symbols for missing data have been replaced with 'NaN', or "Not a Number", which numpy knows to ignore. No actual data has been changed. In the file GLB.Ts.csv, the temperature units are in 0.01 degrees Celsius difference from the reference period 1950-1980 - in other words, the number 40 corresponds to a difference of +0.4 degrees C compared to the average temperature between 1950 and 1980. (This means you'll have to renormalize your values by a factor of 100.) In the file bintanja2008.txt, column 9, "Global sea level relative to present," is in confusing units - more positive values actually correspond to lower sea levels than less positive values. You may want to multiply column 9 by -1 in order to get more sensible values. There are many possible ways to examine this data. First, read both data files into numpy arrays - it's fine to load them into a single combined multi-dimensional array if you want, or split the data into multiple arrays. We'll then try a few things: For both datasets, make some plots of the raw data, particularly as a function of time. What do you see? How is the data "shaped"? Is there periodicity? Do some simple data analysis. What are the minimum, maximum, and mean values of the various quantities? 
(You may have problems with NaN - see nanmin and similar methods) If you calculate some sort of average for annual temperature in GLB.Ts.csv (say, the average temperature smoothed over 10 years), how might you characterize the yearly variability? Try plotting the smoothed value along with the raw data and show how they differ. There are several variables in the file bintanja2008.txt - try plotting multiple variables as a function of time together using the pyplot subplot functionality (and some more complicated subplot examples for further help). Do they seem to be related in some way? (Hint: plot surface temperature, deep sea temperature, ice volume, and sea level, and zoom in from 3 Myr to ~100,000 years) What about plotting the non-time quantities in bintanja2008.txt versus each other (i.e., surface temperature vs. ice volume or sea level) - do you see correlations? End of explanation """
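Since the exercises above are described only in prose, here is one possible sketch of the 10-year smoothing step for the GISS file. The column names (`Year`, `J-D`) and the 0.01 °C scaling are assumptions based on the file description earlier in the assignment; check the actual header before relying on them. The point of the sketch is the `rolling` pattern for overlaying raw and smoothed annual values.

```python
# Sketch: 10-year smoothed annual mean temperature vs. the raw annual values.
# Assumes GLB.Ts.csv has a 'Year' column and an annual-mean column named 'J-D'.
import pandas as pd
import matplotlib.pyplot as plt

giss = pd.read_csv('GLB.Ts.csv', na_values=['NaN', '***'])
annual = giss.set_index('Year')['J-D'] / 100.0        # file stores 0.01 deg C units

smoothed = annual.rolling(window=10, center=True).mean()

fig, ax = plt.subplots()
annual.plot(ax=ax, label='annual mean', alpha=0.5)
smoothed.plot(ax=ax, label='10-year rolling mean', lw=2)
ax.set_ylabel('temperature anomaly (deg C)')
ax.legend()
plt.show()
```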
agrc/Presentations
UGIC/2022/SpatiallyEnabledDataFrames/EditingWithDataFrames.ipynb
mit
import pandas as pd medians_df = pd.read_csv('assets/median_age.csv') medians_df.head() """ Explanation: Ditch the Cursor Editing Feature Classes with Spatialy-Enabled DataFrames ArcPy Is Great, And... Problem one: row[0] ```python def update_year_built(layer, year_fields): with arcpy.da.UpdateCursor(layer, year_fields) as cursor: for row in cursor: if row[0] is None or row[0] < 1 or row[0] == '': row[0] = f'{row[1]}{row[2]}' cursor.updateRow(row) ``` ```python If new parcels' owner/owner addr changed, or if PID is new, add to appropriate lists with arcpy.da.SearchCursor(tville_parcels_fc, parcel_check_fields) as new_cursor: for row in new_cursor: if row[0] in old_parcels: old_name = old_parcels[row[0]][0] old_addr = old_parcels[row[0]][1] if row[1] != old_name or row[2] != old_addr: own_addr_changed.append(row[0]) else: new_parcels.append(row[0]) ``` Problem 2: Transfering data between feature classes and tables python with arcpy.da.SearchCursor(new_data_fc, fields) as new_data_cursor, \ arcpy.da.InsertCursor(current_data_fc, fields) as current_data_cursor: for row in new_data_cursor: current_data_cursor.insertRow(row) copied_records += 1 Problem 3: Renaming/reordering fields ```python fieldmappings = arcpy.FieldMappings() fieldmappings.addTable(energov_parcels_fc) fieldmappings.addTable(tville_parcels_fc) fields_list = [ ('PIN', 'parcel_id'), ('own_cityst', 'own_citystate'), ('own_zip_fo', 'own_zip_four'), ('prop_locat', 'prop_location'), ('property_t', 'property_type'), ('neighborho', 'neighborhood_code'), ('adjusted_p', 'adjusted_prcl_total'), #: ... ] for field_map in fields_list: field_to_map_index = fieldmappings.findFieldMapIndex(field_map[0]) field_to_map = fieldmappings.getFieldMap(field_to_map_index) field_to_map.addInputField(tville_parcels_fc, field_map[1]) fieldmappings.replaceFieldMap(field_to_map_index, field_to_map) ``` Problem 4: Intermediate feature classes python ssa_summarized_roads = fr'{output_gdb}\ssa_bike_lanes_roads' ssa_summarized_paths = fr'{output_gdb}\ssa_bike_lanes_paths' ssa_summarized_lengths = fr'{output_gdb}b\SmallStatisticalAreas_2018_bike_lane_lengths' tract_summarized_roads = fr'{output_gdb}\tract_bike_lanes_roads' tract_summarized_paths = fr'{output_gdb}\tract_bike_lanes_paths' tract_summarized_lengths = fr'{output_gdb}\census_tracts_2020_bike_lane_lengths' buffered_tracts = fr'{output_gdb}\census_tracts_2020_buffered_30ft' buffered_areas = fr'{output_gdb}\small_areas_buffered_200ft' bike_lanes = fr'{output_gdb}\bike_lanes_20220111' major_paths = fr'{output_gdb}\major_paths' Enter the Pandas! 
pandas gives you the tools to work with tables of data defined by rows and columns, called a DataFrame End of explanation """ medians_df.loc[[0, 1, 2, 5], 'County'] medians_df.iloc[10:15, :4] """ Explanation: We can access individual rows and columns using .loc (with index labels) or .iloc (with indices) python medians_df.loc[row labels, column labels] medians_df.iloc[row indices, column indices] End of explanation """ medians_df[['Median_age', 'Avg_MonthlyIncome']].head() """ Explanation: We can also get just a few columns from all rows End of explanation """ from arcgis.features import GeoAccessor, GeoSeriesAccessor counties_fc_path = r'C:\Users\jdadams\AppData\Roaming\Esri\ArcGISPro\Favorites\opensgid.agrc.utah.gov.sde\opensgid.boundaries.county_boundaries' counties_df = pd.DataFrame.spatial.from_featureclass(counties_fc_path) counties_df.head() """ Explanation: Extending pandas Spatially The ArcGIS API for Python provides Spatially Enabled DataFrames, which include geometry information. End of explanation """ counties_df.loc[counties_df['stateplane'] == 'Central', ['name', 'stateplane', 'fips_str']] """ Explanation: pandas lets you work on rows that meet a certain condition End of explanation """ counties_df['emperor'] = 'Jake' counties_df.head() """ Explanation: You can easily add new columns End of explanation """ counties_df.groupby('stateplane').count() counties_df['acres'] = counties_df['SHAPE'].apply(lambda shape: shape.area / 4046.8564) counties_df.groupby('stateplane')['acres'].sum() """ Explanation: pandas provides powerful built in grouping and aggregation tools, along with Spatially Enabled DataFrames' geometry operations End of explanation """ counties_df.loc[(counties_df['pop_lastcensus'] < 100000) & (counties_df['stateplane'] == 'North'), 'emperor'] = 'Erik' counties_df[['name', 'pop_lastcensus', 'stateplane', 'emperor']].sort_values('name').head() """ Explanation: pandas Solutions to our Arcpy Problems row[0] Solution: Field Names ```python def update_unit_count(parcels_df): """Update unit counts in-place for single family, duplex, and tri/quad Args: parcels_df (pd.DataFrame): The evaluated parcel dataset with UNIT_COUNT, HOUSE_CNT, SUBTYPE, and NOTE columns """ # fix single family (non-pud) zero_or_null_unit_counts = (parcels_df['UNIT_COUNT'] == 0) | (parcels_df['UNIT_COUNT'].isna()) parcels_df.loc[(zero_or_null_unit_counts) &amp; (parcels_df['SUBTYPE'] == 'single_family'), 'UNIT_COUNT'] = 1 # fix duplex parcels_df.loc[(parcels_df['SUBTYPE'] == 'duplex'), 'UNIT_COUNT'] = 2 # fix triplex-quadplex parcels_df.loc[(parcels_df['UNIT_COUNT'] &lt; parcels_df['HOUSE_CNT']) &amp; (parcels_df['NOTE'] == 'triplex-quadplex'), 'UNIT_COUNT'] = parcels_df['HOUSE_CNT'] ``` Let's make Erik the emperor of the small counties that use State Plane North End of explanation """ census_fc_path = r'C:\Users\jdadams\AppData\Roaming\Esri\ArcGISPro\Favorites\opensgid.agrc.utah.gov.sde\opensgid.demographic.census_counties_2020' census_df = pd.DataFrame.spatial.from_featureclass(census_fc_path) counties_with_census_df = counties_df.merge(census_df[['geoid20', 'aland20']], left_on='fips_str', right_on='geoid20') counties_with_census_df.head() """ Explanation: Joined Tables Solution: Merged DataFrames ```python def _get_current_attachment_info_by_oid(self, live_data_subset_df): #: Join live attachment table to feature layer info live_attachments_df = pd.DataFrame(self.feature_layer.attachments.search()) live_attachments_subset_df = live_attachments_df.reindex(columns=['PARENTOBJECTID', 'NAME', 'ID']) 
merged_df = live_data_subset_df.merge( live_attachments_subset_df, left_on='OBJECTID', right_on='PARENTOBJECTID', how='left' ) return merged_df ``` Let's add census data to our counties End of explanation """ renames = { 'name': 'County Name', 'pop_lastcensus': 'Last Census Population', 'emperor': 'Benevolent Dictator for Life', 'acres': 'Acres', 'aland20': 'Land Area', } counties_with_census_df.rename(columns=renames, inplace=True) counties_with_census_df.head() """ Explanation: Renaming/Reordering Fields Solution: df.rename() and df.reindex() python final_parcels_df.rename( columns={ 'name': 'CITY', #: from cities 'NewSA': 'SUBCOUNTY', #: From subcounties/regions 'BUILT_YR': 'APX_BLT_YR', 'BLDG_SQFT': 'TOT_BD_FT2', 'TOTAL_MKT_VALUE': 'TOT_VALUE', 'PARCEL_ACRES': 'ACRES', }, inplace=True ) ```python final_fields = [ 'SHAPE', 'UNIT_ID', 'TYPE', 'SUBTYPE', 'IS_OUG', 'UNIT_COUNT', 'DUA', 'ACRES', 'TOT_BD_FT2', 'TOT_VALUE', 'APX_BLT_YR', 'BLT_DECADE', 'CITY', 'COUNTY', 'SUBCOUNTY', 'PARCEL_ID' ] logging.info('Writing final data out to disk...') output_df = final_parcels_df.reindex(columns=final_fields) output_df.spatial.to_featureclass(output_fc, sanitize_columns=False) ``` "Emperor" is too bold; let's use "Benevolent Dictator for Life" instead. End of explanation """ field_order = [ 'County Name', 'Benevolent Dictator for Life', 'Acres', 'Land Area', 'Last Census Population', 'SHAPE' ] final_counties_df = counties_with_census_df.reindex(columns=field_order) final_counties_df.head() """ Explanation: Now that we've got it all looking good, let's reorder the fields and get rid of the ones we don't want End of explanation """ final_counties_df.spatial.to_featureclass(r'C:\gis\Projects\HousingInventory\HousingInventory.gdb\counties_ugic') """ Explanation: Intermediate Feature Classes: New DataFrame Variables With everything we've done, we've not written a single feature class to either disk or in_memory python counties_df counties_with_census_df final_counties_df Finally, Write It All To Disk End of explanation """
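The presentation's first ArcPy pain point, positional `row[0]` indexing inside an `UpdateCursor`, is replaced above with labelled boolean masks. A tiny, arcpy-free sketch of that idiom on a throwaway DataFrame (the column names and values here are invented) may make it easier to lift into other scripts:

```python
# Sketch: the .loc + boolean-mask idiom that replaces an UpdateCursor loop.
# The DataFrame and its column names are invented for illustration only.
import pandas as pd

parcels = pd.DataFrame({
    'parcel_id':  ['a1', 'a2', 'a3', 'a4'],
    'unit_count': [0, None, 3, 1],
    'subtype':    ['single_family', 'single_family', 'duplex', 'duplex'],
})

# "if row[0] is None or row[0] < 1" becomes a named, readable mask...
zero_or_null = (parcels['unit_count'] == 0) | (parcels['unit_count'].isna())

# ...and the cursor's updateRow() becomes a single vectorised assignment.
parcels.loc[zero_or_null & (parcels['subtype'] == 'single_family'), 'unit_count'] = 1
parcels.loc[parcels['subtype'] == 'duplex', 'unit_count'] = 2

print(parcels)
```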
cristhro/Machine-Learning
ejercicio 4/notebook.ipynb
gpl-3.0
data = pd.read_csv('train.csv', header=None ,delimiter=";") feature_names = ['usuario', 'palabra', 'palabraLeida', 'tiempoCaracter', 'hayErrPalabra', 'tiempoErrPalabra', 'numPalabra','tiempoPalabra', 'tamPalabra', 'caracter', 'falloCaracter', 'palabraCorrecta'] data.columns = feature_names """ Explanation: Importar datos de entreno End of explanation """ predict = pd.read_csv('predict.csv', header=None ,delimiter=";") feature_names = ['usuario', 'palabra', 'palabraLeida', 'tiempoCaracter', 'hayErrPalabra', 'tiempoErrPalabra', 'numPalabra','tiempoPalabra', 'tamPalabra', 'caracter', 'falloCaracter', 'palabraCorrecta'] predict.columns = feature_names data[data['caracter'] == 'Z'] """ Explanation: Importar datos para predecir End of explanation """ # Pasamos de boolean a un int, 1 para true y 0 para false data["hayErrPalabra"] = data['hayErrPalabra'].map({False: 0, True: 1}) data["falloCaracter"] = data['falloCaracter'].map({False: 0, True: 1}) data["palabraCorrecta"] = data['palabraCorrecta'].map({False: 0, True: 1}) predict["hayErrPalabra"] = predict['hayErrPalabra'].map({False: 0, True: 1}) predict["falloCaracter"] = predict['falloCaracter'].map({False: 0, True: 1}) predict["palabraCorrecta"] = predict['palabraCorrecta'].map({False: 0, True: 1}) """ Explanation: Mapear los valores verdadero y falso a 1 y 0 hayErrPalabra, falloCaracter, palabraCorrecta End of explanation """ data["usuario"] = data["usuario"].str.strip() predict["usuario"] = predict["usuario"].str.strip() """ Explanation: Quitarle los espacios en blanco al usuario End of explanation """ data["usuarioID"] = data['usuario'].map({"Cristhian": 0, "Jesus": 1}) predict["usuarioID"] = predict['usuario'].map({"Cristhian": 0, "Jesus": 1}) """ Explanation: Mapear el usuario en un campo usuarioID End of explanation """ data['caracter'] = data[data['caracter'].between('A', 'Z', inclusive=True)]['caracter'] predict['caracter'] = predict[predict['caracter'].between('A', 'Z', inclusive=True)]['caracter'] """ Explanation: Dejar solo los caracteres comprendidos entre A y Z Cuidado al hacer los tiempos de palabra, que se borran las filas que los contienen End of explanation """ d = {ni: indi for indi, ni in enumerate(set(data['palabra']))} data['palabra'] = [d[ni] for ni in data['palabra']] d = {ni: indi for indi, ni in enumerate(set(predict['palabra']))} predict['palabra'] = [d[ni] for ni in predict['palabra']] """ Explanation: (Mirar si interesa hacer o no) Mapear cada palabra a un numero para poder entrenar Primero se crea un diccionar almacenando cada valor unico y luego se recorre cambiado los valores End of explanation """ d = {ni: indi for indi, ni in enumerate(set(data['caracter']))} data['caracter'] = [d[ni] for ni in data['caracter']] d = {ni: indi for indi, ni in enumerate(set(predict['caracter']))} predict['caracter'] = [d[ni] for ni in predict['caracter']] """ Explanation: (Mirar si interesa hacer o no) Mapear cada caracter a un numero para poder entrenar Primero se crea un diccionar almacenando cada valor unico y luego se recorre cambiado los valores End of explanation """ caracter = data[~data['caracter'].isnull()][['usuario', 'caracter','tiempoCaracter','falloCaracter']] caracter['user'] = data['usuarioID'] caracter = caracter.groupby(['usuario','caracter']).mean() targerCaracter = caracter['user'] caracter = caracter.drop(['user'], axis=1) #caracter.iloc[0:3] caracter caracterPred = predict[~predict['caracter'].isnull()][['usuario', 'caracter','tiempoCaracter','falloCaracter']] caracterPred['user'] = predict['usuarioID'] 
caracterPred = caracterPred.groupby(['usuario','caracter']).mean() targerCaracterPred = caracterPred['user'] caracterPred = caracterPred.drop(['user'], axis=1) #caracterPred.iloc[0:3] caracterPred """ Explanation: Sacar tiempo medio de escritura del mismo caracter Hay que quitar los caracteres nulos End of explanation """ Enter = data[data['caracter'].isnull()][['usuario','tiempoCaracter']] Enter.columns = ['usuario', 'tiempoEnter'] Enter = Enter.groupby(['usuario']).mean() Enter """ Explanation: Sacar tiempo medio de pulsado de enter (caracteres nulos) End of explanation """ usPalTiempo = data[data['caracter'].isnull()][['usuario', 'palabra', 'tiempoPalabra', 'tiempoErrPalabra','tamPalabra']] usPalTiempo usPalTiempoPred = predict[predict['caracter'].isnull()][['usuario', 'palabra', 'tiempoPalabra', 'tiempoErrPalabra','tamPalabra']] usPalTiempoPred """ Explanation: Usuario, palabra, tiempo End of explanation """ falloCaracterPorPalabra = data.groupby(['usuario','palabra'])['falloCaracter'].sum() falloCaracterPorPalabra falloCaracterPorPalabraPred = predict.groupby(['usuario','palabra'])['falloCaracter'].sum() falloCaracterPorPalabraPred """ Explanation: Sacar la suma de fallos totales por palabra End of explanation """ tiempoCoreccionCaracter = data[data['falloCaracter'] > 0].groupby(['usuario','palabra'])['tiempoCaracter'].sum() tiempoCoreccionCaracter """ Explanation: Prueba tiempo correccion caracter End of explanation """ dataFallo = data[data['tiempoErrPalabra'] > 0] dataFallo[dataFallo['palabra'] == "PZKOFTLILILILI"] """ Explanation: Error en el entreno, hay un tiempo negativo, MIRAR End of explanation """ tiempoMedioPalabra = usPalTiempo.drop(['tamPalabra'], axis=1) tiempoMedioPalabra['user'] = data['usuarioID'] #usPalTiempo2['numPalabra'] = usPalTiempo['palabra'] tiempoMedioPalabra = tiempoMedioPalabra.groupby(['usuario','palabra']).mean() tiempoMedioPalabra['falloCaracterPorPalabra'] = falloCaracterPorPalabra targetTM = tiempoMedioPalabra['user'] tiempoMedioPalabra = tiempoMedioPalabra.drop(['user'], axis=1) tiempoMedioPalabra tiempoMedioPalabraPred = usPalTiempoPred.drop(['tamPalabra'], axis=1) tiempoMedioPalabraPred['user'] = predict['usuarioID'] #usPalTiempo2['numPalabra'] = usPalTiempo['palabra'] tiempoMedioPalabraPred = tiempoMedioPalabraPred.groupby(['usuario','palabra']).mean() tiempoMedioPalabraPred['falloCaracterPorPalabra'] = falloCaracterPorPalabraPred targetTM = tiempoMedioPalabraPred['user'] tiempoMedioPalabraPred = tiempoMedioPalabraPred.drop(['user'], axis=1) tiempoMedioPalabraPred """ Explanation: Sacar el tiempoPalabra medio de cada palabra del usuario para usarlo como modelo TiempoErrPalabra no se si es muy util End of explanation """ usPalTiempo3 = usPalTiempo.drop(['palabra'], axis=1) targetUS = usPalTiempo3['usuario'] usPalTiempo3 = usPalTiempo3.groupby(['usuario']).mean() #usPalTiempo3['tiempoMedioCaracter'] = usPalTiempo3['tiempoPalabra'] / usPalTiempo3['tamPalabra'] usPalTiempo3 usPalTiempo3['tiempoEnter'] = Enter usPalTiempo3 data """ Explanation: Sacar tiempo medio por caracter por tamaño de palabra End of explanation """ target = data['usuarioID'] target targetPred = predict['usuarioID'] targetPred """ Explanation: Sacar el target End of explanation """ data = data.drop(['usuario','palabraLeida','numPalabra', 'tamPalabra','usuarioID'], axis=1) predict = predict.drop(['usuario','palabraLeida','numPalabra', 'tamPalabra','usuarioID'], axis=1) #'palabra', (mirar estos) 'falloCaracter' 'palabraCorrecta', 'hayErrPalabra' data """ Explanation: Eliminar campos 
sobrantes (Usuario, palabra, palabraLeida, numPalabra, tamPalabra, caracter, usuarioID) End of explanation """ tiempoPorPalabra = data[data['tiempoErrPalabra'] > 0][['palabra','tiempoPalabra', 'tiempoErrPalabra', 'palabraCorrecta']] tiempoPorPalabra #data['tiempoPalabra'] = [tiempoPorPalabra['tiempoPalabra'] for tiempoPorPalabra['tiempoPalabra'] in data['tiempoPalabra']] data2 = data.copy() data2 = data2.drop(['tiempoPalabra', 'tiempoErrPalabra'], axis=1) #data2["tiempoPalabra"] = data2["palabra"].map(tiempoPorPalabra) data2 data """ Explanation: Cambiar datos malos por las mejoras End of explanation """ from sklearn.model_selection import cross_val_score from sklearn.ensemble import RandomForestClassifier random_forest = RandomForestClassifier(n_estimators=101) scores = cross_val_score(random_forest, data, target, cv=5) print(scores) print(scores.mean()) """ Explanation: Separar datos de entreno y datos de testeo Cross Validation Random Forest End of explanation """ from sklearn.model_selection import cross_val_score from sklearn import svm svm = svm.SVC(kernel='linear', C=1) scores = cross_val_score(svm, data, target, cv=5) print(scores) print(scores.mean()) """ Explanation: SVM End of explanation """ from sklearn.ensemble import AdaBoostClassifier from sklearn.model_selection import cross_val_score ada = AdaBoostClassifier(n_estimators=100) scores = cross_val_score(ada, data, target, cv=5) print(scores) print(scores.mean()) """ Explanation: AdaBoost datos originales End of explanation """ scores = cross_val_score(ada, tiempoMedioPalabra, targetTM, cv=5) print(scores) print(scores.mean()) """ Explanation: Pruebas con otro modelo End of explanation """ scores = cross_val_score(ada, caracter, targerCaracter, cv=5) print(scores) print(scores.mean()) """ Explanation: Pruebas con modelo tiempo medio por caracter End of explanation """ # no se si estaria bien asi ya que caracter tiene el usuario ada.fit(caracter,targerCaracter) """ Explanation: Entreno del modelo caracter, con los datos sin el target End of explanation """ pred = ada.predict(caracterPred) pred from sklearn.metrics import accuracy_score accuracy = accuracy_score(targerCaracterPred, pred) print(accuracy) score = ada.score(caracter, caracterPred) score caracter.describe() caracterPred caracter """ Explanation: Prediccion modelo sin Cross Validation End of explanation """
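One detail worth flagging in the cells above: `ada.score(caracter, caracterPred)` passes a second feature table where `score` expects the true labels, so that call will not measure accuracy as intended. A small sketch of the usual fit / predict / score pattern on synthetic data (the feature values below are invented stand-ins for the keystroke-timing features) shows the intended call signatures:

```python
# Sketch of the standard fit / predict / score pattern with AdaBoost.
# The data is synthetic; only the call signatures matter here.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))                                 # e.g. per-character timing features
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)    # e.g. user id 0/1

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(n_estimators=100)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(accuracy_score(y_test, pred))    # equivalent to clf.score(X_test, y_test)
```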
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
solved - 07 - Case study - air quality data.ipynb
bsd-2-clause
from IPython.display import HTML HTML('<iframe src=http://www.eea.europa.eu/data-and-maps/data/airbase-the-european-air-quality-database-8#tab-data-by-country width=900 height=350></iframe>') """ Explanation: <p><font size="6"><b> Case study: air quality data of European monitoring stations (AirBase)</b></font></p> <br> AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe. © 2016, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons AirBase is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. The air quality database consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants. End of explanation """ %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt pd.options.display.max_rows = 8 """ Explanation: Some of the data files that are available from AirBase were included in the data folder: the hourly concentrations of nitrogen dioxide (NO2) for 4 different measurement stations: FR04037 (PARIS 13eme): urban background site at Square de Choisy FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia BETR802: urban traffic site in Antwerp, Belgium BETN029: rural background site in Houtem, Belgium See http://www.eea.europa.eu/themes/air/interactive/no2 End of explanation """ with open("data/BETR8010000800100hour.1-1-1990.31-12-2012") as f: print(f.readline()) """ Explanation: Processing a single file We will start with processing one of the downloaded files (BETR8010000800100hour.1-1-1990.31-12-2012). Looking at the data, you will see it does not look like a nice csv file: End of explanation """ data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None) data.head() """ Explanation: So we will need to do some manual processing. Just reading the tab-delimited data: End of explanation """ # Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag' hours = ["{:02d}".format(i) for i in range(24)] column_names = ['date'] + [item for pair in zip(hours, ['flag']*24) for item in pair] data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t', header=None, names=column_names, na_values=[-999, -9999]) data.head() """ Explanation: The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names. 
<div class="alert alert-success"> <b>EXERCISE</b>: <br><br> Clean up this dataframe by using more options of `read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)) <ul> <li>specify the correct delimiter</li> <li>specify that the values of -999 and -9999 should be regarded as NaN</li> <li>specify are own column names (for how the column names are made up, see See http://stackoverflow.com/questions/6356041/python-intertwining-two-lists) </ul> </div> End of explanation """ flag_columns = [col for col in data.columns if 'flag' in col] # we can now use this list to drop these columns data = data.drop(flag_columns, axis=1) data.head() """ Explanation: For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data). <div class="alert alert-success"> <b>EXERCISE</b>: <br><br> Drop all 'flag' columns ('flag1', 'flag2', ...) End of explanation """ # we use stack to reshape the data to move the hours (the column labels) into a column. # But we don't want to move the 'date' column label, therefore we first set this as the index. # You can check the difference with "data.stack()" data2 = data.set_index('date') data_stacked = data2.stack() data_stacked.head() # We reset the index to have the date and hours available as columns data_stacked = data_stacked.reset_index() data_stacked.head() # Now we combine the dates and the hours into a datetime, and set this as the index data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['level_1'], format="%Y-%m-%d%H") # Drop the origal date and hour columns data_stacked = data_stacked.drop(['date', 'level_1'], axis=1) data_stacked.head() # rename the remaining column to the name of the measurement station data_stacked = data_stacked.rename(columns={0: 'BETR801'}) data_stacked.head() """ Explanation: Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries. <div class="alert alert-info"> <b>REMEMBER</b>: <ul> <li>Recap: reshaping your data with [`stack` and `unstack`](./pandas_07_reshaping_data.ipynb)</li> </ul> <img src="img/schema-stack.svg" width=70%> </div> <div class="alert alert-success"> <b>EXERCISE</b>: <br><br> Reshape the dataframe to a timeseries. 
The end result should look like:<br><br> <div class='center'> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>BETR801</th> </tr> </thead> <tbody> <tr> <th>1990-01-02 09:00:00</th> <td>48.0</td> </tr> <tr> <th>1990-01-02 12:00:00</th> <td>48.0</td> </tr> <tr> <th>1990-01-02 13:00:00</th> <td>50.0</td> </tr> <tr> <th>1990-01-02 14:00:00</th> <td>55.0</td> </tr> <tr> <th>...</th> <td>...</td> </tr> <tr> <th>2012-12-31 20:00:00</th> <td>16.5</td> </tr> <tr> <th>2012-12-31 21:00:00</th> <td>14.5</td> </tr> <tr> <th>2012-12-31 22:00:00</th> <td>16.5</td> </tr> <tr> <th>2012-12-31 23:00:00</th> <td>15.0</td> </tr> </tbody> </table> <p style="text-align:center">170794 rows × 1 columns</p> </div> <ul> <li>Reshape the dataframe so that each row consists of one observation for one date + hour combination</li> <li>When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns</li> <li>Set the new datetime values as the index, and remove the original columns with date and hour values</li> </ul> **NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions. </div> End of explanation """ data_stacked.index data_stacked.plot() """ Explanation: Our final data is now a time series. In pandas, this means that the index is a DatetimeIndex: End of explanation """ def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. """ ... return ... def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. """ # construct the column names hours = ["{:02d}".format(i) for i in range(24)] colnames = ['date'] + [item for pair in zip(hours, ['flag']*24) for item in pair] # read the actual data data = pd.read_csv(filename, sep='\t', header=None, na_values=[-999, -9999], names=colnames) # drop the 'flag' columns data = data.drop([col for col in data.columns if 'flag' in col], axis=1) # reshape data = data.set_index('date') data_stacked = data.stack() data_stacked = data_stacked.reset_index() # parse to datetime and remove redundant columns data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['level_1'], format="%Y-%m-%d%H") data_stacked = data_stacked.drop(['date', 'level_1'], axis=1) data_stacked = data_stacked.rename(columns={0: station}) return data_stacked """ Explanation: Processing a collection of files We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above. 
<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Write a function `read_airbase_file(filename, station)`, using the above steps the read in and process the data, and that returns a processed timeseries.</li> </ul> </div> End of explanation """ filename = "data/BETR8010000800100hour.1-1-1990.31-12-2012" station = filename.split("/")[-1][:7] station test = read_airbase_file(filename, station) test.head() """ Explanation: Test the function on the data file from above: End of explanation """ import glob data_files = glob.glob("data/*0008001*") data_files """ Explanation: We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe. <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Use the `glob.glob` function to list all 4 AirBase data files that are included in the 'data' directory, and call the result `data_files`.</li> </ul> </div> End of explanation """ dfs = [] for filename in data_files: station = filename.split("/")[-1][:7] df = read_airbase_file(filename, station) dfs.append(df) combined_data = pd.concat(dfs, axis=1) combined_data.head() """ Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Loop over the data files, read and process the file using our defined function, and append the dataframe to a list.</li> <li>Combine the the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result `combined_data`.</li> </ul> </div> End of explanation """ combined_data.to_csv("airbase_data.csv") """ Explanation: Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file. End of explanation """ alldata = pd.read_csv('airbase_data.csv', index_col=0, parse_dates=True) """ Explanation: Working with time series data We processed the individual data files above, and saved it to a csv file airbase_data.csv. Let's import the file here (if you didn't finish the above exercises, a version of the dataset is also available in data/airbase_data.csv): End of explanation """ data = alldata['1999':].copy() """ Explanation: We only use the data from 1999 onwards: End of explanation """ data.head() # tail() data.info() data.describe(percentiles=[0.1, 0.5, 0.9]) """ Explanation: Som first exploration with the typical functions: End of explanation """ data.plot(kind='box', ylim=[0,250]) data['BETR801'].plot(kind='hist', bins=50) data.plot(figsize=(12,6)) """ Explanation: Quickly visualizing the data End of explanation """ data[-500:].plot(figsize=(12,6)) """ Explanation: This does not say too much .. We can select part of the data (eg the latest 500 data points): End of explanation """ data.loc['2009':, 'FR04037'].resample('M').mean().plot() data.loc['2009':, 'FR04037'].resample('M').median().plot() data.loc['2009':, 'FR04037'].resample('M').agg(['mean', 'median']).plot() """ Explanation: Exercises <div class="alert alert-warning"> <b>REMINDER</b>: <br><br> Take a look at the [Timeseries notebook](05 - Time series data.ipynb) when you require more info about... 
<ul> <li>`resample`</li> <li>string indexing of DateTimeIndex</li> </ul><br><br> </div> <div class="alert alert-success"> <b>QUESTION</b>: plot the monthly mean and median concentration of the 'FR04037' station for the years 2009-2012 </div> End of explanation """ daily = data['FR04037'].resample('D').mean() daily.resample('M').agg(['min', 'max']).plot() """ Explanation: <div class="alert alert-success"> <b>QUESTION</b>: plot the monthly mininum and maximum daily concentration of the 'BETR801' station </div> End of explanation """ data['2012'].mean().plot(kind='bar') """ Explanation: <div class="alert alert-success"> <b>QUESTION</b>: make a bar plot of the mean of the stations in year of 2012 </div> End of explanation """ data.resample('A').mean().plot() data.mean(axis=1).resample('A').mean().plot(color='k', linestyle='--', linewidth=4) """ Explanation: <div class="alert alert-success"> <b>QUESTION</b>: The evolution of the yearly averages with, and the overall mean of all stations (indicate the overall mean with a thicker black line)? </div> End of explanation """ data.groupby(data.index.year).mean().plot() """ Explanation: Combination with groupby resample can actually be seen as a specific kind of groupby. E.g. taking annual means with data.resample('A', 'mean') is equivalent to data.groupby(data.index.year).mean() (only the result of resample still has a DatetimeIndex). End of explanation """ data['month'] = data.index.month """ Explanation: But, groupby is more flexible and can also do resamples that do not result in a new continuous time series, e.g. by grouping by the hour of the day to get the diurnal cycle. <div class="alert alert-success"> <b>QUESTION</b>: how does the *typical monthly profile* look like for the different stations? </div> 1. add a column to the dataframe that indicates the month (integer value of 1 to 12): End of explanation """ data.groupby('month').mean() """ Explanation: 2. Now, we can calculate the mean of each month over the different years: End of explanation """ data.groupby('month').mean().plot() data = data.drop('month', axis=1, errors='ignore') """ Explanation: 3. plot the typical monthly profile of the different stations: End of explanation """ df2011 = data['2011'].dropna() df2011 = data['2011'].dropna() df2011.groupby(df2011.index.week)[['BETN029', 'BETR801']].quantile(0.95).plot() df2011[['BETN029', 'BETR801']].resample('W').agg(lambda x: x.quantile(0.75)).plot() """ Explanation: <div class="alert alert-success"> <b>QUESTION</b>: plot the weekly 95% percentiles of the concentration in 'BETR801' and 'BETN029' for 2011 </div> End of explanation """ data.groupby(data.index.hour).mean().plot() """ Explanation: <div class="alert alert-success"> <b>QUESTION</b>: The typical diurnal profile for the different stations? </div> End of explanation """ exceedances = data > 200 # group by year and count exceedances (sum of boolean) exceedances = exceedances.groupby(exceedances.index.year).sum() exceedances ax = exceedances.loc[2005:].plot(kind='bar') ax.axhline(18, color='k', linestyle='--') """ Explanation: <div class="alert alert-success"> <b>QUESTION</b>: What are the number of exceedances of hourly values above the European limit 200 µg/m3 for each year/station? </div> End of explanation """ yearly = data['2000':].resample('A').mean() (yearly > 40).sum() yearly.plot() plt.axhline(40, linestyle='--', color='k') """ Explanation: <div class="alert alert-success"> <b>QUESTION</b>: And are there exceedances of the yearly limit value of 40 µg/m3 since 200 ? 
</div> End of explanation """ data.index.weekday? data['weekday'] = data.index.weekday """ Explanation: <div class="alert alert-success"> <b>QUESTION</b>: What is the difference in the typical diurnal profile between week and weekend days? (and visualise it) </div> End of explanation """ data['weekend'] = data['weekday'].isin([5, 6]) data_weekend = data.groupby(['weekend', data.index.hour]).mean() data_weekend.head() data_weekend_BETR801 = data_weekend['BETR801'].unstack(level=0) data_weekend_BETR801.head() data_weekend_BETR801.plot() data = data.drop(['weekday', 'weekend'], axis=1) """ Explanation: Add a column indicating week/weekend End of explanation """ # add a weekday and week column data['weekday'] = data.index.weekday data['week'] = data.index.week data.head() # pivot table so that the weekdays are the different columns data_pivoted = data['2012'].pivot_table(columns='weekday', index='week', values='BETR801') data_pivoted.head() box = data_pivoted.boxplot() """ Explanation: <div class="alert alert-success"> <b>QUESTION</b>: Visualize the typical week profile for the different stations as boxplots (where the values in one boxplot are the daily means for the different weeks for a certain weekday). </div> Tip: the boxplot method of a DataFrame expects the data for the different boxes in different columns). For this, you can either use pivot_table as a combination of groupby and unstack End of explanation """ data['2012'].groupby(['weekday', 'week'])['BETR801'].mean().unstack(level=0).boxplot(); """ Explanation: An alternative method using groupby and unstack: End of explanation """ exceedances = data.rolling(8).mean().resample('D').max() > 100 exceedances = exceedances.groupby(exceedances.index.year).sum() ax = exceedances.loc[2005:].plot(kind='bar') """ Explanation: <div class="alert alert-success"> <b>QUESTION</b>: The maximum daily 8 hour mean should be below 100 µg/m³. What are the number of exceedances of this limit for each year/station? </div> Tip: have a look at the rolling method to perform moving window operations. Note: this is not an actual limit for NO2, but a nice exercise to introduce the rolling method. Other pollutans, such as 03 have actually such kind of limit values. End of explanation """ data[['BETR801', 'BETN029', 'FR04037', 'FR04012']].corr() data[['BETR801', 'BETN029', 'FR04037', 'FR04012']].resample('D').mean().corr() """ Explanation: <div class="alert alert-success"> <b>QUESTION</b>: Calculate the correlation between the different stations </div> End of explanation """
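Several of the exercises above reduce to the same pattern: compare a (possibly resampled) series against a limit value and count exceedances per year. A small, self-contained sketch of that pattern on synthetic data (the station names, distributions, and limits are placeholders, not AirBase values) can serve as a template:

```python
# Sketch: count per-year exceedances of a limit value, per station.
# Synthetic hourly data stands in for the AirBase concentrations.
import numpy as np
import pandas as pd

idx = pd.date_range('2009-01-01', '2012-12-31 23:00', freq='H')
rng = np.random.default_rng(1)
data = pd.DataFrame(
    {'STATION_A': rng.gamma(4, 10, len(idx)),
     'STATION_B': rng.gamma(5, 10, len(idx))},
    index=idx,
)

def yearly_exceedances(df, limit, resample_rule=None):
    """Count how often each station exceeds `limit`, grouped by year."""
    values = df.resample(resample_rule).mean() if resample_rule else df
    return (values > limit).groupby(values.index.year).sum()

print(yearly_exceedances(data, limit=200))                     # hourly-style limit
print(yearly_exceedances(data, limit=40, resample_rule='A'))   # annual-style limit
```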
ltiao/notebooks
working-with-pandas-multiindex-dataframes-reading-and-writing-to-csv-and-hdf5.ipynb
mit
# create some noise
a = np.random.randn(50, 600, 100)
a.shape

# create some noise with higher variance and add bias.
b = 2. * np.random.randn(*a.shape) + 1.
b.shape

# manufacture some loss function
# there are n_epochs * n_batches * batch_size
# recorded values of the loss
loss = 10 / np.linspace(1, 100, a.size)
loss.shape
"""
Explanation: Rationale
For certain loss functions, such as the negative evidence lower bound (NELBO) in variational inference, the loss is generally analytically intractable and thus unavailable in closed form. As such, we might need to resort to taking stochastic estimates of the loss function. In these situations, it is very important to study and understand the robustness of the estimates we are making, particularly in terms of bias and variance.
When proposing a new estimator, we may be interested in evaluating the loss at a fine-grained level - not only per batch, but perhaps even per data-point.
This notebook explores storing the recorded losses in Pandas DataFrames. The recorded losses are 3d, with dimensions corresponding to epochs, batches, and data-points. Specifically, they are of shape (n_epochs, n_batches, batch_size). Instead of using the deprecated Panel functionality from Pandas, we explore the preferred MultiIndex DataFrame.
Lastly, we play around with various data serialization formats supported out-of-the-box by Pandas. This might be useful if the training is GPU-intensive, so the script runs and records the loss remotely on a supercomputer, and we must write the results to file, download them and finally analyze them locally. This is usually trivial, but it is unclear what the behaviour is for more complex MultiIndex dataframes. We restrict our attention to the CSV format, which is human-friendly but very slow and inefficient, and HDF5, which is diametrically opposed: it is essentially inscrutable, but it is very fast and takes up less space.
Synthetic Data
End of explanation
"""
# we will create the indices from the
# product of these iterators
list(map(range, a.shape))

# create the MultiIndex
index = pd.MultiIndex.from_product(
    list(map(range, a.shape)),
    names=['epoch', 'batch', 'datapoint']
)

# create the dataframe that records the two losses
df = pd.DataFrame(
    dict(loss1=loss+np.ravel(a), loss2=loss+np.ravel(b)),
    index=index
)

df
"""
Explanation: MultiIndex Dataframe
End of explanation
"""
# some basic plotting
fig, ax = plt.subplots()

df.groupby(['epoch', 'batch']).mean().plot(ax=ax)

plt.show()
"""
Explanation: Visualization
In this contrived scenario, loss2 is more biased and has higher variance.
End of explanation
"""
%%time
df.to_csv('losses.csv')

!ls -lh losses.csv

%%time
df_from_csv = pd.read_csv('losses.csv', index_col=['epoch', 'batch', 'datapoint'], float_precision='high')

# does not recover exactly due to insufficient floating point precision
df_from_csv.equals(df)

# but it has recovered it up to some tiny epsilon
((df-df_from_csv)**2 < 1e-25).all()
"""
Explanation: CSV Read/Write
End of explanation
"""
%%time
df.to_hdf('store.h5', key='losses')
"""
Explanation: HDF5 Read/Write
HDF5 writing is orders of magnitude faster.
End of explanation
"""
!ls -lh store.h5

%%time
df_from_hdf = pd.read_hdf('store.h5', key='losses')
"""
Explanation: Furthermore, the file sizes are significantly smaller.
End of explanation
"""
df.equals(df_from_hdf)
"""
Explanation: Lastly, it is far more numerically precise.
End of explanation
"""
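If the CSV path has to stay (for human inspection), the lossy round trip noted above can usually be avoided by writing floats with 17 significant digits, which is enough to represent any float64 uniquely, and reading them back with pandas' round-trip parser. A short sketch, independent of the notebook's arrays, assuming a reasonably recent pandas:

```python
# Sketch: a CSV round trip intended to preserve float64 by writing
# 17 significant digits and parsing with the round-trip converter.
import numpy as np
import pandas as pd

index = pd.MultiIndex.from_product([range(2), range(3), range(4)],
                                   names=['epoch', 'batch', 'datapoint'])
df = pd.DataFrame({'loss': np.random.randn(len(index))}, index=index)

df.to_csv('losses_exact.csv', float_format='%.17g')
back = pd.read_csv('losses_exact.csv',
                   index_col=['epoch', 'batch', 'datapoint'],
                   float_precision='round_trip')

print(df.equals(back))   # expected to be True: no precision lost this time
```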
NuGrid/NuPyCEE
regression_tests/temp/SYGMA_SSP_all_yields.ipynb
bsd-3-clause
#from imp import * #s=load_source('sygma','/home/nugrid/nugrid/SYGMA/SYGMA_online/SYGMA_dev/sygma.py') %pylab nbagg import sygma as s reload(s) print s.__file__ #import matplotlib #matplotlib.use('nbagg') #import matplotlib.pyplot as plt #matplotlib.use('nbagg') #import numpy as np from scipy.integrate import quad from scipy.interpolate import UnivariateSpline import os """ Explanation: Complex yield composition Test yield tables with standard (solar) composition. Results: $\odot$ The total production of H-1 and Fe-56 can be reproduced. $\odot$ Analysis of time steps and mass intervals with the t_m_bdys parameter. $\odot$ Check that non-default mode and default mode give the same results End of explanation """ s1=s.sygma(mgal=1e11,iniZ=0.02,yield_interp='None',imf_type='salpeter',table='yield_tables/isotope_yield_table.txt',sn1a_on=False) Yield_tot_sim_h1=s1.history.ism_iso_yield[-1][0] #get total final H-1 Yield_tot_sim_fe56=s1.history.ism_iso_yield[-1][60] #get total final H-1 print s1.history.isotopes[0],Yield_tot_sim_h1 print s1.history.isotopes[60],Yield_tot_sim_fe56 import read_yields as ry path = os.environ['SYGMADIR']+'/yield_tables/isotope_yield_table.txt' ytables = ry.read_nugrid_yields(path,excludemass=[32,60]) print 'total IMF range: ',s1.imf_bdys print 'yield IMF range: ',s1.imf_mass_ranges, masses=[1,1.65,2,3,4,5,6,7,15,20,25] #should be conform with imf_mass_ranges k_N=1e11*0.35/ (0.1**-0.35 - 100**-0.35) #(I) k=-1 ytot_h1=0 ytot_fe56=0 for mrange in s1.imf_mass_ranges: k=k+1 N_range=k_N/1.35 * (mrange[0]**-1.35 - mrange[1]**-1.35) #(II) y_h1=ytables.get(M=masses[k],Z=0.02,specie='H-1') y_fe56=ytables.get(M=masses[k],Z=0.02,specie='Fe-56') ytot_h1 = ytot_h1 + y_h1*N_range ytot_fe56 = ytot_fe56 + y_fe56*N_range print 'H-1, should be 1', ytot_h1/Yield_tot_sim_h1 print 'Fe-56, should be 1', ytot_fe56/Yield_tot_sim_fe56 """ Explanation: Pick two isotopes, H-1 and Fe-56 and check total production No interpolation of yields as described further below: yield_interp='None' End of explanation """ print len(s1.history.t_m_bdys) print len(s1.history.timesteps) print s1.history.t_m_bdys """ Explanation: Note: The yield interpolation to a finer grid via __inter_mm_planee and the scaling of the total ejecta via function func_total_ejecta (chem_evol.py) are skipped by introducing: yield_interp='None'. Timesteps & mass intervals The number of mass intervals can be larger than the number of time steps. This is when mass inverval boundary lies between two time steps. This occurs at the transition from one initial mass (which its mass interval) to another initial mass (interval). End of explanation """ s7=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1e9,imf_type='salpeter',imf_bdys=[1,30],special_timesteps=-1,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn',pop3_table='yield_tables/popIII_h1.txt') s8=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1e9,imf_type='salpeter',imf_bdys=[1,30],special_timesteps=-1,iniZ=0.0001) s7.plot_sn_distr(marker1='o',color1='b',marker2='s',markevery=1) s8.plot_sn_distr(marker1='d',marker2='x',color2='r',markevery=1) s8=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1e9,imf_type='salpeter',imf_bdys=[1,30],special_timesteps=200,iniZ=0.0001) """ Explanation: SNII and SNIa : Compare non-default with default mode: numbers should be the same Do we really need this comparison? End of explanation """
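The hand-written Salpeter IMF factors (I) and (II) in the yield check above are easy to get wrong by a sign or an exponent. A quick numerical cross-check with `scipy.integrate.quad` (already imported in the notebook) is sketched below; the 1e11 Msun total and the 0.1–100 Msun range mirror the cell above, and this is a sanity check only, not part of SYGMA itself.

```python
# Sketch: cross-check the analytic Salpeter IMF factors against numerical integration.
import numpy as np
from scipy.integrate import quad

mtot, m_low, m_high = 1e11, 0.1, 100.0

# (I) analytic normalisation, as in the cell above
k_N = mtot * 0.35 / (m_low**-0.35 - m_high**-0.35)

# numerical check: total mass locked in stars should equal mtot
mass_integral, _ = quad(lambda m: k_N * m**-2.35 * m, m_low, m_high)
print(mass_integral / mtot)          # expect ~1.0

# (II) analytic number of stars in a sub-range, e.g. 15-20 Msun
m1, m2 = 15.0, 20.0
n_analytic = k_N / 1.35 * (m1**-1.35 - m2**-1.35)
n_numeric, _ = quad(lambda m: k_N * m**-2.35, m1, m2)
print(n_analytic / n_numeric)        # expect ~1.0
```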
tensorflow/docs-l10n
site/en-snapshot/quantum/tutorials/mnist.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ !pip install tensorflow==2.7.0 """ Explanation: MNIST classification <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/mnist"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/mnist.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/mnist.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/mnist.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This tutorial builds a quantum neural network (QNN) to classify a simplified version of MNIST, similar to the approach used in <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al</a>. The performance of the quantum neural network on this classical data problem is compared with a classical neural network. Setup End of explanation """ !pip install tensorflow-quantum # Update package resources to account for version changes. import importlib, pkg_resources importlib.reload(pkg_resources) """ Explanation: Install TensorFlow Quantum: End of explanation """ import tensorflow as tf import tensorflow_quantum as tfq import cirq import sympy import numpy as np import seaborn as sns import collections # visualization tools %matplotlib inline import matplotlib.pyplot as plt from cirq.contrib.svg import SVGCircuit """ Explanation: Now import TensorFlow and the module dependencies: End of explanation """ (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() # Rescale the images from [0,255] to the [0.0,1.0] range. x_train, x_test = x_train[..., np.newaxis]/255.0, x_test[..., np.newaxis]/255.0 print("Number of original training examples:", len(x_train)) print("Number of original test examples:", len(x_test)) """ Explanation: 1. Load the data In this tutorial you will build a binary classifier to distinguish between the digits 3 and 6, following <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> This section covers the data handling that: Loads the raw data from Keras. Filters the dataset to only 3s and 6s. Downscales the images so they fit can fit in a quantum computer. Removes any contradictory examples. Converts the binary images to Cirq circuits. Converts the Cirq circuits to TensorFlow Quantum circuits. 1.1 Load the raw data Load the MNIST dataset distributed with Keras. 
End of explanation """ def filter_36(x, y): keep = (y == 3) | (y == 6) x, y = x[keep], y[keep] y = y == 3 return x,y x_train, y_train = filter_36(x_train, y_train) x_test, y_test = filter_36(x_test, y_test) print("Number of filtered training examples:", len(x_train)) print("Number of filtered test examples:", len(x_test)) """ Explanation: Filter the dataset to keep just the 3s and 6s, remove the other classes. At the same time convert the label, y, to boolean: True for 3 and False for 6. End of explanation """ print(y_train[0]) plt.imshow(x_train[0, :, :, 0]) plt.colorbar() """ Explanation: Show the first example: End of explanation """ x_train_small = tf.image.resize(x_train, (4,4)).numpy() x_test_small = tf.image.resize(x_test, (4,4)).numpy() """ Explanation: 1.2 Downscale the images An image size of 28x28 is much too large for current quantum computers. Resize the image down to 4x4: End of explanation """ print(y_train[0]) plt.imshow(x_train_small[0,:,:,0], vmin=0, vmax=1) plt.colorbar() """ Explanation: Again, display the first training example—after resize: End of explanation """ def remove_contradicting(xs, ys): mapping = collections.defaultdict(set) orig_x = {} # Determine the set of labels for each unique image: for x,y in zip(xs,ys): orig_x[tuple(x.flatten())] = x mapping[tuple(x.flatten())].add(y) new_x = [] new_y = [] for flatten_x in mapping: x = orig_x[flatten_x] labels = mapping[flatten_x] if len(labels) == 1: new_x.append(x) new_y.append(next(iter(labels))) else: # Throw out images that match more than one label. pass num_uniq_3 = sum(1 for value in mapping.values() if len(value) == 1 and True in value) num_uniq_6 = sum(1 for value in mapping.values() if len(value) == 1 and False in value) num_uniq_both = sum(1 for value in mapping.values() if len(value) == 2) print("Number of unique images:", len(mapping.values())) print("Number of unique 3s: ", num_uniq_3) print("Number of unique 6s: ", num_uniq_6) print("Number of unique contradicting labels (both 3 and 6): ", num_uniq_both) print() print("Initial number of images: ", len(xs)) print("Remaining non-contradicting unique images: ", len(new_x)) return np.array(new_x), np.array(new_y) """ Explanation: 1.3 Remove contradictory examples From section 3.3 Learning to Distinguish Digits of <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a>, filter the dataset to remove images that are labeled as belonging to both classes. This is not a standard machine-learning procedure, but is included in the interest of following the paper. End of explanation """ x_train_nocon, y_train_nocon = remove_contradicting(x_train_small, y_train) """ Explanation: The resulting counts do not closely match the reported values, but the exact procedure is not specified. It is also worth noting here that applying filtering contradictory examples at this point does not totally prevent the model from receiving contradictory training examples: the next step binarizes the data which will cause more collisions. End of explanation """ THRESHOLD = 0.5 x_train_bin = np.array(x_train_nocon > THRESHOLD, dtype=np.float32) x_test_bin = np.array(x_test_small > THRESHOLD, dtype=np.float32) """ Explanation: 1.4 Encode the data as quantum circuits To process images using a quantum computer, <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> proposed representing each pixel with a qubit, with the state depending on the value of the pixel. The first step is to convert to a binary encoding. 
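With THRESHOLD = 0.5, any rescaled pixel above the threshold becomes 1.0 and everything else becomes 0.0, so each 4x4 image is described by at most 16 "on" qubits. A quick look at how sparse this encoding is (an illustrative aside, not part of the original notebook):
on_pixels = x_train_bin.reshape(len(x_train_bin), -1).sum(axis=1)
print('Mean / max active pixels per image:', on_pixels.mean(), on_pixels.max())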
End of explanation """ _ = remove_contradicting(x_train_bin, y_train_nocon) """ Explanation: If you were to remove contradictory images at this point you would be left with only 193, likely not enough for effective training. End of explanation """ def convert_to_circuit(image): """Encode truncated classical image into quantum datapoint.""" values = np.ndarray.flatten(image) qubits = cirq.GridQubit.rect(4, 4) circuit = cirq.Circuit() for i, value in enumerate(values): if value: circuit.append(cirq.X(qubits[i])) return circuit x_train_circ = [convert_to_circuit(x) for x in x_train_bin] x_test_circ = [convert_to_circuit(x) for x in x_test_bin] """ Explanation: The qubits at pixel indices with values that exceed a threshold, are rotated through an $X$ gate. End of explanation """ SVGCircuit(x_train_circ[0]) """ Explanation: Here is the circuit created for the first example (circuit diagrams do not show qubits with zero gates): End of explanation """ bin_img = x_train_bin[0,:,:,0] indices = np.array(np.where(bin_img)).T indices """ Explanation: Compare this circuit to the indices where the image value exceeds the threshold: End of explanation """ x_train_tfcirc = tfq.convert_to_tensor(x_train_circ) x_test_tfcirc = tfq.convert_to_tensor(x_test_circ) """ Explanation: Convert these Cirq circuits to tensors for tfq: End of explanation """ class CircuitLayerBuilder(): def __init__(self, data_qubits, readout): self.data_qubits = data_qubits self.readout = readout def add_layer(self, circuit, gate, prefix): for i, qubit in enumerate(self.data_qubits): symbol = sympy.Symbol(prefix + '-' + str(i)) circuit.append(gate(qubit, self.readout)**symbol) """ Explanation: 2. Quantum neural network There is little guidance for a quantum circuit structure that classifies images. Since the classification is based on the expectation of the readout qubit, <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> propose using two qubit gates, with the readout qubit always acted upon. This is similar in some ways to running small a <a href="https://arxiv.org/abs/1511.06464" class="external">Unitary RNN</a> across the pixels. 2.1 Build the model circuit This following example shows this layered approach. Each layer uses n instances of the same gate, with each of the data qubits acting on the readout qubit. Start with a simple class that will add a layer of these gates to a circuit: End of explanation """ demo_builder = CircuitLayerBuilder(data_qubits = cirq.GridQubit.rect(4,1), readout=cirq.GridQubit(-1,-1)) circuit = cirq.Circuit() demo_builder.add_layer(circuit, gate = cirq.XX, prefix='xx') SVGCircuit(circuit) """ Explanation: Build an example circuit layer to see how it looks: End of explanation """ def create_quantum_model(): """Create a QNN model circuit and readout operation to go along with it.""" data_qubits = cirq.GridQubit.rect(4, 4) # a 4x4 grid. readout = cirq.GridQubit(-1, -1) # a single qubit at [-1,-1] circuit = cirq.Circuit() # Prepare the readout qubit. circuit.append(cirq.X(readout)) circuit.append(cirq.H(readout)) builder = CircuitLayerBuilder( data_qubits = data_qubits, readout=readout) # Then add layers (experiment by adding more). builder.add_layer(circuit, cirq.XX, "xx1") builder.add_layer(circuit, cirq.ZZ, "zz1") # Finally, prepare the readout qubit. 
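    # The X and H gates above prepare the readout qubit in the |-> state; this
    # closing Hadamard rotates it back from the X basis to the computational
    # basis, so the effect of the XX and ZZ layers appears in the <Z> expectation
    # measured through the cirq.Z(readout) operator returned below.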
circuit.append(cirq.H(readout)) return circuit, cirq.Z(readout) model_circuit, model_readout = create_quantum_model() """ Explanation: Now build a two-layered model, matching the data-circuit size, and include the preparation and readout operations. End of explanation """ # Build the Keras model. model = tf.keras.Sequential([ # The input is the data-circuit, encoded as a tf.string tf.keras.layers.Input(shape=(), dtype=tf.string), # The PQC layer returns the expected value of the readout gate, range [-1,1]. tfq.layers.PQC(model_circuit, model_readout), ]) """ Explanation: 2.2 Wrap the model-circuit in a tfq-keras model Build the Keras model with the quantum components. This model is fed the "quantum data", from x_train_circ, that encodes the classical data. It uses a Parametrized Quantum Circuit layer, tfq.layers.PQC, to train the model circuit, on the quantum data. To classify these images, <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> proposed taking the expectation of a readout qubit in a parameterized circuit. The expectation returns a value between 1 and -1. End of explanation """ y_train_hinge = 2.0*y_train_nocon-1.0 y_test_hinge = 2.0*y_test-1.0 """ Explanation: Next, describe the training procedure to the model, using the compile method. Since the the expected readout is in the range [-1,1], optimizing the hinge loss is a somewhat natural fit. Note: Another valid approach would be to shift the output range to [0,1], and treat it as the probability the model assigns to class 3. This could be used with a standard a tf.losses.BinaryCrossentropy loss. To use the hinge loss here you need to make two small adjustments. First convert the labels, y_train_nocon, from boolean to [-1,1], as expected by the hinge loss. End of explanation """ def hinge_accuracy(y_true, y_pred): y_true = tf.squeeze(y_true) > 0.0 y_pred = tf.squeeze(y_pred) > 0.0 result = tf.cast(y_true == y_pred, tf.float32) return tf.reduce_mean(result) model.compile( loss=tf.keras.losses.Hinge(), optimizer=tf.keras.optimizers.Adam(), metrics=[hinge_accuracy]) print(model.summary()) """ Explanation: Second, use a custiom hinge_accuracy metric that correctly handles [-1, 1] as the y_true labels argument. tf.losses.BinaryAccuracy(threshold=0.0) expects y_true to be a boolean, and so can't be used with hinge loss). End of explanation """ EPOCHS = 3 BATCH_SIZE = 32 NUM_EXAMPLES = len(x_train_tfcirc) x_train_tfcirc_sub = x_train_tfcirc[:NUM_EXAMPLES] y_train_hinge_sub = y_train_hinge[:NUM_EXAMPLES] """ Explanation: Train the quantum model Now train the model—this takes about 45 min. If you don't want to wait that long, use a small subset of the data (set NUM_EXAMPLES=500, below). This doesn't really affect the model's progress during training (it only has 32 parameters, and doesn't need much data to constrain these). Using fewer examples just ends training earlier (5min), but runs long enough to show that it is making progress in the validation logs. End of explanation """ qnn_history = model.fit( x_train_tfcirc_sub, y_train_hinge_sub, batch_size=32, epochs=EPOCHS, verbose=1, validation_data=(x_test_tfcirc, y_test_hinge)) qnn_results = model.evaluate(x_test_tfcirc, y_test) """ Explanation: Training this model to convergence should achieve >85% accuracy on the test set. 
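Note that the circuit being trained has only 32 free parameters: 16 data qubits times the two gate layers (xx1 and zz1). One quick way to confirm the count (an illustrative check, not part of the original notebook) is to ask Cirq for the free symbols in the model circuit:
print(len(cirq.parameter_names(model_circuit)))  # should print 32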
End of explanation """ def create_classical_model(): # A simple model based off LeNet from https://keras.io/examples/mnist_cnn/ model = tf.keras.Sequential() model.add(tf.keras.layers.Conv2D(32, [3, 3], activation='relu', input_shape=(28,28,1))) model.add(tf.keras.layers.Conv2D(64, [3, 3], activation='relu')) model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2))) model.add(tf.keras.layers.Dropout(0.25)) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(128, activation='relu')) model.add(tf.keras.layers.Dropout(0.5)) model.add(tf.keras.layers.Dense(1)) return model model = create_classical_model() model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy']) model.summary() model.fit(x_train, y_train, batch_size=128, epochs=1, verbose=1, validation_data=(x_test, y_test)) cnn_results = model.evaluate(x_test, y_test) """ Explanation: Note: The training accuracy reports the average over the epoch. The validation accuracy is evaluated at the end of each epoch. 3. Classical neural network While the quantum neural network works for this simplified MNIST problem, a basic classical neural network can easily outperform a QNN on this task. After a single epoch, a classical neural network can achieve >98% accuracy on the holdout set. In the following example, a classical neural network is used for for the 3-6 classification problem using the entire 28x28 image instead of subsampling the image. This easily converges to nearly 100% accuracy of the test set. End of explanation """ def create_fair_classical_model(): # A simple model based off LeNet from https://keras.io/examples/mnist_cnn/ model = tf.keras.Sequential() model.add(tf.keras.layers.Flatten(input_shape=(4,4,1))) model.add(tf.keras.layers.Dense(2, activation='relu')) model.add(tf.keras.layers.Dense(1)) return model model = create_fair_classical_model() model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy']) model.summary() model.fit(x_train_bin, y_train_nocon, batch_size=128, epochs=20, verbose=2, validation_data=(x_test_bin, y_test)) fair_nn_results = model.evaluate(x_test_bin, y_test) """ Explanation: The above model has nearly 1.2M parameters. For a more fair comparison, try a 37-parameter model, on the subsampled images: End of explanation """ qnn_accuracy = qnn_results[1] cnn_accuracy = cnn_results[1] fair_nn_accuracy = fair_nn_results[1] sns.barplot(["Quantum", "Classical, full", "Classical, fair"], [qnn_accuracy, cnn_accuracy, fair_nn_accuracy]) """ Explanation: 4. Comparison Higher resolution input and a more powerful model make this problem easy for the CNN. While a classical model of similar power (~32 parameters) trains to a similar accuracy in a fraction of the time. One way or the other, the classical neural network easily outperforms the quantum neural network. For classical data, it is difficult to beat a classical neural network. End of explanation """
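# A small follow-up to the bar plot above (a sketch, not part of the original
# notebook): print the exact hold-out accuracies behind the three bars.
print("QNN accuracy:                {:.3f}".format(qnn_accuracy))
print("Classical CNN accuracy:      {:.3f}".format(cnn_accuracy))
print("Fair classical NN accuracy:  {:.3f}".format(fair_nn_accuracy))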