| repo_name | path | license | content |
|---|---|---|---|
| harmsm/pythonic-science | chapters/00_inductive-python/06_numpy-arrays.ipynb | unlicense |
x = []
for i in range(1,11):
    if i > 2:
        x.append(i**2)
print(x[3])
"""
Explanation: Warm up
What will the following code spit out? (Don't just type it -- pencil and paper it).
End of explanation
"""
some_list = [1,2,3]
a_list_copy = some_list
some_list[1] = 273
"""
Explanation: How would you fix the following code so it does what you expect it to do?
End of explanation
"""
import numpy as np
an_array = np.array([1,2,3,4,5,6,7,8,9,10],dtype=np.int64)
print(an_array[3])
"""
Explanation: Numpy Arrays
Python lists are powerful
Can store any python object
Can be expanded or contracted at will (append, insert, extend, remove, pop)
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/48/My_swiss_army_knife.JPG/800px-My_swiss_army_knife.JPG" height="60%" width="60%" style="margin: auto;"/>
But would you want to cook with a swiss army knife?
Numpy arrays are the speciality knives of python programming
<img src="https://s3.amazonaws.com/cdn.metrokitchen.com/images/uploads/hk-34060-001-zoomed.jpg" height="90%" width="90%" style="margin: auto;"/>
numpy arrays are:
Less flexible than lists:
Each array can only store one type of value
Arrays cannot be resized after they're made
But numpy arrays are fast and set up to do math
If you've ever done matlab programming, numpy arrays will feel familiar: they use very similar syntax
Predict what the following code will print out (and be ready to explain each line)
End of explanation
"""
some_list = [1,2,3]
print(some_list)
print(some_list + 5)
"""
Explanation: numpy arrays are an extension of python, not built in.
A quick return to lists:
Predict what this code will do.
End of explanation
"""
import numpy as np
some_array = np.array([1,2,3])
some_array = some_array + 5
print(some_array)
"""
Explanation: Write a program to add 5 to every entry in the list [1,2,3]
Predict what this code will do
End of explanation
"""
import numpy as np
a_list = list(range(100000))
an_array = np.array(range(100000),dtype=int)
%timeit for i in range(100000): a_list[i] + 5
%timeit an_array + 5
"""
Explanation: You can do math on numpy arrays in an "element-wise" fashion.
numpy arrays are fast
End of explanation
"""
import numpy as np
x = np.array([1,2,3])
y = np.array([4,5])
print(x + y)
print(np.sin(x))
"""
Explanation: Can you explain what just happened?
for i in range(blah blah) is in pure python. This is convenient, but slow.
an_array + 5 actually runs the loop, but in compiled C. This is super fast.
moral: don't use loops when working with numpy arrays.
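As a quick side sketch of that point (not from the original notebook; an_array is the array defined in the cell above):

looped = np.array([v + 5 for v in an_array])   # the loop runs in pure python
vectorized = an_array + 5                      # the loop runs in compiled C
print(np.all(looped == vectorized))            # same answer, very different speed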
Predict what the following code will do
End of explanation
"""
an_array = np.array([[1,2],[3,4]],dtype=int)
print(an_array)
an_array = np.zeros((2,2),dtype=float)
print(an_array)
an_array[0,0] = 1
an_array[0,1] = 2
an_array[1,0] = 3
an_array[1,1] = 4
print(an_array)
"""
Explanation: Summarize
How does math work with numpy arrays?
Predict what the following code will do
End of explanation
"""
an_array = np.array([[1,2,3],[4,5,6],[7,8,9]],dtype=int)
print(an_array[-1])
print("")
print(an_array[0,0])
print("")
print(an_array[:,0].reshape(3,1))
print("")
print(an_array[:,:])
"""
Explanation: Summarize
How can you construct arrays?
Predict what the following code will do
End of explanation
"""
# Define two vector-like arrays
x = np.array([3,5])
y = np.array([4,6])
print("Element wise sum:" , x + y ) # Addition
print("Element wise difference:", x - y ) # Subtraction
print("Element wise product:", x * y) # Product
print("Element wise division:", x / y) # division
print("Dot product:", np.dot(x, y)) # Dot product
"""
Explanation: Summarize
How do you access elements inside multidimensional arrays?
Numpy arrays are real vectors and matrices
One major motivation for the authors who created numpy was to speed up mathematical operations on vector, matrix, and tensor-like data structures.
np.linalg : Linear algebra module.
np.fft : Fourier Transform module.
np.random : Random sampling (from various statistical distributions) module.
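A minimal sketch touching each of these submodules (the numbers are arbitrary and only for illustration):

samples = np.random.normal(loc=0.0, scale=1.0, size=8)   # random sampling
spectrum = np.fft.fft(samples)                           # discrete Fourier transform
inverse = np.linalg.inv(np.eye(2))                       # linear algebra
print(samples, spectrum, inverse, sep="\n")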
Math operations
Numpy vector math operations
End of explanation
"""
# Define a matrix
M = np.array([
[5, 3],
[2, 7]
])
print("Matrix transpose:\n", M.T) # Transpose
print("Vector-matrix dot product", np.dot(M, x)) # Dot product
print("Matrix determinant:", np.linalg.det(M)) # Determinant
print("Matrix inverse:\n", np.linalg.inv(M)) # inverse
"""
Explanation: Numpy matrix math operations
End of explanation
"""
A = np.array([
[ 3, 2, -1],
[ 2, -5, 4],
[-1, 0.5, -1]
])
b = np.array([1, -2, 0])
print("x :", np.linalg.solve(A, b))
"""
Explanation: Solve a system of equations
<small>
Solve a system of equations using the numpy.linalg.solve function.
Example:
$$
3x + 2y - z = 1 \\
2x - 5y + 4z = -2 \\
-x + \frac{1}{2} y - z = 0
$$
Written in matrix form:
$$
A \vec{x} = \vec{b}
$$
$$
\left[ \begin{array}{ccc}
3 & 2 & -1 \\
2 & -5 & 4 \\
-1 & \frac{1}{2} & -1 \\
\end{array} \right]
\left[ \begin{array}{c}
x \\
y \\
z
\end{array} \right]
=
\left[ \begin{array}{c}
1 \\
-2 \\
0
\end{array} \right]
$$
</small>
End of explanation
"""
M = np.array([
[5, 3],
[2, 7]
])
λs, eigvec = np.linalg.eig(M)
# np.linalg.eig returns the eigenvectors as the columns of eigvec
print("λ 1:", λs[0])
print("Eigenvector 1:", eigvec[:, 0])
print("\n")
print("λ 2:", λs[1])
print("Eigenvector 2:", eigvec[:, 1])
"""
Explanation: Numpy for eigensystems
<small>
Easily diagonalize a matrix using the numpy.linalg.eig function.
Example:
$$
M \vec{v} = \lambda \vec{v}
$$
</small>
End of explanation
"""
| DistrictDataLabs/PyCon2016 | notebooks/tutorial/Intro to NLTK.ipynb | mit |
import nltk

# Take a moment to explore what is in this directory
dir(nltk)
"""
Explanation: What is NLP?
Natural Language Processing (NLP) is often taught at the academic level from the perspective of computational linguists. However, as data scientists, we have a richer view of the natural language world - unstructured data that by its very nature has latent information that is important to humans. NLP practitioners have benefitted from machine learning techniques to unlock meaning from large corpora, and in this tutorial we’ll explore how to do that using Python, the Natural Language Toolkit (NLTK) and Gensim.
NLTK is an excellent library for machine-learning based NLP, written in Python by experts from both academia and industry. Python allows you to create rich data applications rapidly, iterating on hypotheses. The combination of Python + NLTK means that you can easily add language-aware data products to your larger analytical workflows and applications.
Quick Overview of NLTK
NLTK was written by two eminent computational linguists, Steven Bird (Senior Research Associate of the LDC and professor at the University of Melbourne) and Ewan Klein (Professor of Linguistics at Edinburgh University). The NLTK library provides a combination of natural language corpora, lexical resources, and example grammars with language processing algorithms, methodologies and demonstrations for a very Pythonic "batteries included" view of natural language processing.
As such, NLTK is perfect for research-driven (hypothesis-driven) workflows for agile data science.
Installing NLTK
This notebook has a few dependencies, most of which can be installed via the python package manager - pip.
Python 2.7+ or 3.5+ (Anaconda is ok)
NLTK
The NLTK corpora
The BeautifulSoup library
The gensim library
Once you have Python and pip installed you can install NLTK from the terminal as follows:
bash
~$ pip install nltk
~$ pip install matplotlib
~$ pip install beautifulsoup4
~$ pip install gensim
Note that these will also install Numpy and Scipy if they aren't already installed.
What NLTK Includes
tokenization, stemming, and tagging
chunking and parsing
language modeling
classification and clustering
logical semantics
NLTK is a useful pedagogical resource for learning NLP with Python and serves as a starting place for producing production grade code that requires natural language analysis. It is also important to understand what NLTK is not.
What NLTK is Not
Production ready out of the box
Lightweight
Generally applicable
Magic
NLTK provides a variety of tools that can be used to explore the linguistic domain but is not a lightweight dependency that can be easily included in other workflows, especially those that require unit and integration testing or other build processes. This stems from the fact that NLTK includes a lot of added code but also a rich and complete library of corpora that power the built-in algorithms.
The Good Parts of NLTK
Preprocessing
segmentation
tokenization
Part-of-Speech (PoS) tagging
Word level processing
WordNet
Lemmatization
Stemming
NGrams
Utilities
Tree
FreqDist
ConditionalFreqDist
Streaming CorpusReaders
Classification
Maximum Entropy
Naive Bayes
Decision Tree
Chunking
Named Entity Recognition
Parsers Galore!
The Bad parts of NLTK
Syntactic Parsing
No included grammar (not a black box)
No Feature/Dependency Parsing
No included feature grammar
The sem package
Toy only (lambda-calculus & first order logic)
Lots of extra stuff (heavyweight dependency)
papers, chat programs, alignments, etc.
Knowing the good and the bad parts will help you explore NLTK further - looking into the source code to extract the material you need, then moving that code to production. We will explore NLTK in more detail in the rest of this notebook.
Obtaining and Exploring the NLTK Corpora
NLTK ships with a variety of corpora; let's use a few of them to do some work. To download the NLTK corpora, open a Python interpreter:
python
import nltk
nltk.download()
This will open up a window with which you can download the various corpora and models to a specified location. For now, go ahead and download it all as we will be exploring as much of NLTK as we can. Also take note of the download_directory - you're going to want to know where that is so you can get a detailed look at the corpora that are included. I usually export an environment variable to track this. You can do this from your terminal:
~$ export NLTK_DATA=/path/to/nltk_data
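If you would rather not use the interactive window, a minimal scripted alternative is sketched below (the package ids listed are an assumed subset, not the full collection):

import nltk
# download a few specific packages instead of everything
for pkg in ['punkt', 'stopwords', 'wordnet', 'brown', 'gutenberg']:
    nltk.download(pkg)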
End of explanation
"""
# Lists the various corpora and CorpusReader classes in the nltk.corpus module
for name in dir(nltk.corpus):
    if name.islower() and not name.startswith('_'):
        print(name)
"""
Explanation: Methods for Working with Sample NLTK Corpora
To explore much of the built-in corpus, use the following methods:
End of explanation
"""
# You can explore the titles with:
print(nltk.corpus.gutenberg.fileids())
# For a specific corpus, list the fileids that are available:
print(nltk.corpus.shakespeare.fileids())
"""
Explanation: fileids()
End of explanation
"""
hamlet = nltk.text.Text(nltk.corpus.gutenberg.words('shakespeare-hamlet.txt'))
"""
Explanation: text.Text()
The nltk.text.Text class is a wrapper around a sequence of simple (string) tokens - intended only for the initial exploration of text usually via the Python REPL. It has the following methods:
common_contexts
concordance
collocations
count
plot
findall
index
You shouldn't use this class in production level systems, but it is useful to explore (small) snippets of text in a meaningful fashion.
For example, you can get access to the text from Hamlet as follows:
End of explanation
"""
hamlet.concordance("king", 55, lines=10)
"""
Explanation: concordance()
The concordance function performs a search for the given token and then also provides the surrounding context.
End of explanation
"""
print(hamlet.similar("marriage"))
austen = nltk.text.Text(nltk.corpus.gutenberg.words("austen-sense.txt"))
print()
print(austen.similar("marriage"))
"""
Explanation: similar()
Given some context surrounding a word, we can discover similar words, e.g. words that occur frequently in the same context and with a similar distribution (distributional similarity).
Note ContextIndex.similar_words(word) calculates the similarity score for each word as the sum of the products of frequencies in each context. Text.similar() simply counts the number of unique contexts the words share.
http://bit.ly/2a2udIr
End of explanation
"""
hamlet.common_contexts(["king", "father"])
"""
Explanation: As you can see, this takes a bit of time to build the index in memory, one of the reasons it's not suggested to use this class in production code.
common_contexts()
Now that we can do searching and similarity, we can find the common contexts of a set of words.
End of explanation
"""
inaugural = nltk.text.Text(nltk.corpus.inaugural.words())
inaugural.dispersion_plot(["citizens", "democracy", "freedom", "duty", "America"])
"""
Explanation: Your turn: go ahead and explore similar words and contexts - what does the common context mean?
dispersion_plot()
NLTK also uses matplotlib and pylab to display graphs and charts that can show dispersions and frequency. This is especially interesting for the corpus of inaugural addresses given by U.S. presidents.
End of explanation
"""
print(nltk.corpus.stopwords.fileids())
nltk.corpus.stopwords.words('english')
import string
print(string.punctuation)
"""
Explanation: Stopwords
End of explanation
"""
corpus = nltk.corpus.brown
print(corpus.paras())
"""
Explanation: These corpora export several vital methods:
paras (iterate through each paragraph)
sents (iterate through each sentence)
words (iterate through each word)
raw (get access to the raw text)
paras()
End of explanation
"""
print(corpus.sents())
"""
Explanation: sents()
End of explanation
"""
print(corpus.words())
"""
Explanation: words()
End of explanation
"""
print(corpus.raw()[:200]) # Be careful!
"""
Explanation: raw()
Be careful!
End of explanation
"""
reuters = nltk.corpus.reuters # Corpus of news articles
counts = nltk.FreqDist(reuters.words())
vocab = len(counts.keys())
words = sum(counts.values())
lexdiv = float(words) / float(vocab)
print("Corpus has %i types and %i tokens for a lexical diversity of %0.3f" % (vocab, words, lexdiv))
"""
Explanation: Your turn! Explore some of the text in the available corpora
<a id='freqdist'></a>
Frequency Analyses
In statistical machine learning approaches to NLP, the very first thing we need to do is count things - especially the unigrams that appear in the text and their relationships to each other. NLTK provides two excellent classes to enable these frequency analyses:
FreqDist
ConditionalFreqDist
And these two classes serve as the foundation for most of the probability and statistical analyses that we will conduct.
Zipf's Law
Zipf's law states that given some corpus of natural language utterances, the frequency of any word is inversely proportional to its rank in the frequency table. Thus the most frequent word will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc.: the rank-frequency distribution is an inverse relation. Read more on Wikipedia.
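As a rough sketch (an addition, not part of the original notebook) of how you might eyeball Zipf's law with the counts FreqDist computed above:

import matplotlib.pyplot as plt
freqs = sorted(counts.values(), reverse=True)   # token frequencies, largest first
ranks = range(1, len(freqs) + 1)
plt.loglog(ranks, freqs)                        # roughly a straight line under Zipf's law
plt.xlabel('rank')
plt.ylabel('frequency')
plt.show()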
First we will compute the following:
The count of words
The vocabulary (unique words)
The lexical diversity (the ratio of word count to vocabulary)
End of explanation
"""
counts.B()
"""
Explanation: counts.B()
End of explanation
"""
print(counts.most_common(40))
"""
Explanation: most_common()
The n most common tokens in the corpus
End of explanation
"""
print(counts.max())
"""
Explanation: counts.max()
The most frequent token in the corpus.
End of explanation
"""
print(counts.hapaxes()[0:10])
"""
Explanation: counts.hapaxes()
A list of all hapax legomena (words that only appear one time in the corpus).
End of explanation
"""
counts.freq('stipulate') * 100
"""
Explanation: counts.freq()
The percentage of the corpus for the given token.
End of explanation
"""
counts.plot(50, cumulative=False)
# By setting cumulative to True, we can visualize the cumulative counts of the _n_ most common words.
counts.plot(50, cumulative=True)
"""
Explanation: counts.plot()
Plot the frequencies of the n most commonly occurring words.
End of explanation
"""
from itertools import chain
brown = nltk.corpus.brown
categories = brown.categories()
counts = nltk.ConditionalFreqDist(chain(*[[(cat, word) for word in brown.words(categories=cat)] for cat in categories]))
for category, dist in counts.items():
    vocab = len(dist.keys())
    tokens = sum(dist.values())
    lexdiv = float(tokens) / float(vocab)
    print("%s: %i types with %i tokens and lexical diversity of %0.3f" % (category, vocab, tokens, lexdiv))
"""
Explanation: ConditionalFreqDist()
End of explanation
"""
for ngram in nltk.ngrams(["The", "bear", "walked", "in", "the", "woods", "at", "midnight"], 5):
    print(ngram)
"""
Explanation: Your turn: compute the conditional frequency distribution of bigrams in a corpus
Hint:
<a id='ngram'></a>
End of explanation
"""
import bs4
from readability.readability import Document
# Tags to extract as paragraphs from the HTML text
TAGS = [
'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'h7', 'p', 'li'
]
def read_html(path):
    with open(path, 'r') as f:
        # Transform the document into a readability paper summary
        html = Document(f.read()).summary()
        # Parse the HTML using BeautifulSoup
        soup = bs4.BeautifulSoup(html)
        # Extract the paragraph delimiting elements
        for tag in soup.find_all(TAGS):
            # Get the HTML node text
            yield tag.get_text()

for paragraph in read_html('fixtures/nrRB0.html'):
    print(paragraph + "\n")
text = u"Medical personnel returning to New York and New Jersey from the Ebola-riddled countries in West Africa will be automatically quarantined if they had direct contact with an infected person, officials announced Friday. New York Gov. Andrew Cuomo (D) and New Jersey Gov. Chris Christie (R) announced the decision at a joint news conference Friday at 7 World Trade Center. “We have to do more,” Cuomo said. “It’s too serious of a situation to leave it to the honor system of compliance.” They said that public-health officials at John F. Kennedy and Newark Liberty international airports, where enhanced screening for Ebola is taking place, would make the determination on who would be quarantined. Anyone who had direct contact with an Ebola patient in Liberia, Sierra Leone or Guinea will be quarantined. In addition, anyone who traveled there but had no such contact would be actively monitored and possibly quarantined, authorities said. This news came a day after a doctor who had treated Ebola patients in Guinea was diagnosed in Manhattan, becoming the fourth person diagnosed with the virus in the United States and the first outside of Dallas. And the decision came not long after a health-care worker who had treated Ebola patients arrived at Newark, one of five airports where people traveling from West Africa to the United States are encountering the stricter screening rules."
for sent in nltk.sent_tokenize(text):
    print(sent)
    print()

for sent in nltk.sent_tokenize(text):
    print(list(nltk.wordpunct_tokenize(sent)))
    print()

for sent in nltk.sent_tokenize(text):
    print(list(nltk.pos_tag(nltk.word_tokenize(sent))))
    print()
"""
Explanation: Preprocessing Text
NLTK is great at the preprocessing of raw text - it provides the following tools for dividing text into its constituent parts:
<a id='tokenize'></a>
<a id='segment'></a>
- sent_tokenize: a Punkt sentence tokenizer:
This tokenizer divides a text into a list of sentences, by using an unsupervised algorithm to build a model for abbreviation words, collocations, and words that start sentences. It must be trained on a large collection of plaintext in the target language before it can be used.
However, Punkt is designed to learn parameters (a list of abbreviations, etc.) unsupervised from a corpus similar to the target domain. The pre-packaged models may therefore be unsuitable: use PunktSentenceTokenizer(text) to learn parameters from the given text. A minimal sketch follows this list.
word_tokenize: a Treebank tokenizer
The Treebank tokenizer uses regular expressions to tokenize text as in Penn Treebank. This is the method that is invoked by word_tokenize(). It assumes that the text has already been segmented into sentences, e.g. using sent_tokenize().
<a id='pos'></a>
- pos_tag: a maximum entropy tagger trained on the Penn Treebank
There are several other taggers including (notably) the BrillTagger as well as the BrillTrainer to train your own tagger or tagset.
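Returning to the Punkt tokenizer above, here is a minimal sketch of learning parameters from your own text (the file path and sample sentence are hypothetical):

from nltk.tokenize.punkt import PunktSentenceTokenizer
with open('fixtures/domain_text.txt') as f:               # hypothetical plaintext corpus
    trained_tokenizer = PunktSentenceTokenizer(f.read())  # learns abbreviations, collocations, etc.
print(trained_tokenizer.tokenize("Dr. Smith arrived at 5 p.m. He left soon after."))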
End of explanation
"""
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.lancaster import LancasterStemmer
from nltk.stem.porter import PorterStemmer
text = list(nltk.word_tokenize("The women running in the fog passed bunnies working as computer scientists."))
snowball = SnowballStemmer('english')
lancaster = LancasterStemmer()
porter = PorterStemmer()
for stemmer in (snowball, lancaster, porter):
    stemmed_text = [stemmer.stem(t) for t in text]
    print(" ".join(stemmed_text))
from nltk.stem.wordnet import WordNetLemmatizer
# Note: use part of speech tag, we'll see this in machine learning!
lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(t) for t in text]
print(" ".join(lemmas))
"""
Explanation: All of these taggers work pretty well - but you can (and should) train them on your own corpora.
<a id='lemmatize'></a>
Stemming and Lemmatization
We have an immense number of word forms as you can see from our various counts in the FreqDist above - it is helpful for many applications to normalize these word forms (especially applications like search) into some canonical word for further exploration. In English (and many other languages), morphological context indicates gender, tense, quantity, etc., but these subtleties might not be necessary:
<a id='stemming'></a>
Stemming = chop off affixes to get the root stem of the word:
running --> run
flowers --> flower
geese --> geese
Lemmatization = look up word form in a lexicon to get canonical lemma
women --> woman
foxes --> fox
sheep --> sheep
There are several stemmers available:
- Lancaster (English, newer and aggressive)
- Porter (English, original stemmer)
- Snowball (Many languages, newest)
<a id='wordnet'></a>
The Lemmatizer uses the WordNet lexicon
End of explanation
"""
import string
from nltk.corpus import wordnet as wn
## Module constants
lemmatizer = WordNetLemmatizer()
stopwords = set(nltk.corpus.stopwords.words('english'))
punctuation = string.punctuation
def tagwn(tag):
    """
    Returns the WordNet tag from the Penn Treebank tag.
    """
    return {
        'N': wn.NOUN,
        'V': wn.VERB,
        'R': wn.ADV,
        'J': wn.ADJ
    }.get(tag[0], wn.NOUN)

def normalize(text):
    for token, tag in nltk.pos_tag(nltk.wordpunct_tokenize(text)):
        # if you're going to do part of speech tagging, do it here
        token = token.lower()
        if token in stopwords or token in punctuation:
            continue
        token = lemmatizer.lemmatize(token, tagwn(tag))
        yield token
print(list(normalize("The eagle flies at midnight.")))
"""
Explanation: Note that the lemmatizer has to load the WordNet corpus which takes a bit.
Typical normalization of text for use as features in machine learning models looks something like this:
End of explanation
"""
print(nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize("John Smith is from the United States of America and works at Microsoft Research Labs"))))
"""
Explanation: <a id='nerc'></a>
Named Entity Recognition
NLTK has an excellent MaxEnt-backed Named Entity Recognizer that is trained on the Penn Treebank. You can also retrain the chunker if you'd like - the code is very readable to extend it with a gazetteer or otherwise.
<a id='chunk'></a>
End of explanation
"""
import os
from nltk.tag import StanfordNERTagger
# change the paths below to point to wherever you unzipped the Stanford NER download file
stanford_root = '/Users/benjamin/Development/stanford-ner-2014-01-04'
stanford_data = os.path.join(stanford_root, 'classifiers/english.all.3class.distsim.crf.ser.gz')
stanford_jar = os.path.join(stanford_root, 'stanford-ner-2014-01-04.jar')
st = StanfordNERTagger(stanford_data, stanford_jar, 'utf-8')
for i in st.tag("John Smith is from the United States of America and works at Microsoft Research Labs".split()):
    print('[' + i[1] + '] ' + i[0])
"""
Explanation: You can also wrap the Stanford NER system, which many of you are also probably used to using.
End of explanation
"""
for name in dir(nltk.parse):
    if not name.startswith('_'):
        print(name)
"""
Explanation: Parsing
Parsing is a difficult NLP task due to structural ambiguities in text. As the length of sentences increases, so does the number of possible trees.
End of explanation
"""
grammar = nltk.grammar.CFG.fromstring("""
S -> NP PUNCT | NP
NP -> N N | ADJP NP | DET N | DET ADJP
ADJP -> ADJ NP | ADJ N
DET -> 'an' | 'the' | 'a' | 'that'
N -> 'airplane' | 'runway' | 'lawn' | 'chair' | 'person'
ADJ -> 'red' | 'slow' | 'tired' | 'long'
PUNCT -> '.'
""")
def parse(sent):
    sent = sent.lower()
    parser = nltk.parse.ChartParser(grammar)
    for p in parser.parse(nltk.word_tokenize(sent)):
        yield p

for tree in parse("the long runway"):
    tree.pprint()
    tree[0].draw()
"""
Explanation: Similar to how you might write a compiler or an interpreter, parsing starts with a grammar that defines the construction of phrases and terminal entities.
End of explanation
"""
from nltk.parse.stanford import StanfordParser
# change the paths below to point to wherever you unzipped the Stanford parser download file
stanford_root = '/Users/benjamin/Development/stanford-parser-full-2014-10-31'
stanford_model = os.path.join(stanford_root, 'stanford-parser-3.5.0-models.jar')
stanford_jar = os.path.join(stanford_root, 'stanford-parser.jar')
st = StanfordParser(stanford_model, stanford_jar)
sent = "The man hit the building with the baseball bat."
for tree in st.parse(nltk.wordpunct_tokenize(sent)):
    tree.pprint()
    tree.draw()
"""
Explanation: NLTK does come with some large grammars, but if constructing your own domain-specific grammar isn't your thing, then you can use the Stanford parser (so long as you're willing to pay for it).
End of explanation
"""
| ewulczyn/ewulczyn.github.io | ipython/what_if_ab_testing_is_like_science/what_if_ab_testing_is_like_science_copy.ipynb | mit |
import numpy as np
from statsmodels.stats.weightstats import ztest
from statsmodels.stats.power import tt_ind_solve_power
from scipy.stats import bernoulli
class Test():
    def __init__(self, significance, power, mde, optimistic):
        self.significance = significance
        self.power = power
        self.mde = mde
        self.optimistic = optimistic

    def compute_sample_size(self, u_hat):
        var_hat = u_hat*(1-u_hat)
        absolute_effect = u_hat - (u_hat*(1+self.mde))
        standardized_effect = absolute_effect / np.sqrt(var_hat)
        sample_size = tt_ind_solve_power(effect_size=standardized_effect,
                                         alpha=self.significance,
                                         power=self.power)
        return sample_size

    def run(self, control_cr, treatment_cr):
        # run null hypothesis test with a fixed sample size (rounded up to an integer)
        N = int(np.ceil(self.compute_sample_size(control_cr)))
        data_control = bernoulli.rvs(control_cr, size=N)
        data_treatment = bernoulli.rvs(treatment_cr, size=N)
        p = ztest(data_control, data_treatment)[1]
        # if p > alpha, no clear winner
        if p > self.significance:
            if self.optimistic:
                return treatment_cr
            else:
                return control_cr
        # otherwise pick the winner
        else:
            if data_control.sum() > data_treatment.sum():
                return control_cr
            else:
                return treatment_cr
"""
Explanation: In Why Most Published Research Findings Are False, John Ioannidis argues that if most hypotheses we test are false, we end up with more false research findings than true findings, even if we do rigorous hypothesis testing. The argument hinges on a vanilla application of Bayes' rule. Let's assume that science is "really hard" and that only 50 out of 1000 hypotheses we formulate are in fact true. Say we test our hypotheses at significance level alpha=0.05 and with power 0.80. Out of our 950 incorrect hypotheses, our hypothesis testing will lead to 950x0.05 = 47.5 false positives i.e. false research findings. Out of our 50 correct hypotheses, we will correctly identify 50x0.8 = 40 true research findings. To our horror, we find that most published findings are false!
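A quick arithmetic sketch of that argument (the numbers are the ones from the paragraph above):

n_true, n_false = 50, 950          # hypotheses that are actually true / false
alpha, power = 0.05, 0.80
false_findings = n_false * alpha   # expected false positives: 47.5
true_findings = n_true * power     # expected true positives: 40.0
print(false_findings, true_findings,
      false_findings / (false_findings + true_findings))  # ~0.54, so most findings are false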
Most applications of AB testing involve running multiple repeated experiments in order to optimize a metric. At each iteration, we test a hypothesis: Does the new design perform better than the control? If so, we adopt the new design as our control and test the next idea. After many iterations, we expect to have a design that is better than when we started. But Ioannidis' argument about how most research findings could be false should make us wonder:
Is it possible, that if the chances of generating a better new design are slim, that we adopt bad designs more often than we adopt good designs? What effect does this have on our performance in the long run?
How can we change our testing strategy in such a way that we still expect to increase performance over time? Conversely, how can we take advantage of a situation where the chances of generating a design that is better than the control is really high?
To investigate these questions, let's simulate the process of repeated AB testing for optimizing some conversion rate (CR) under different scenarios for how hard our optimization problem is. For example, our CR could be the fraction of users who donate to Wikipedia in response to being shown a particular fundraising banner. I will model the difficulty of the problem using a distribution over the percent lift in conversion rate (CR) that a new idea has over the control. In practice we might expect the mean of this distribution to change with time. As we work on a problem longer, the average idea probably gives a smaller performance increase. For our purposes, I will assume this distribution (call it $I$) is fixed and normally distributed.
We start with a control banner with some fixed conversion rate (CR). At each iteration, we test the control against a new banner whose percent lift over the control is drawn from $I$. If the new banner wins, it becomes the new control. We repeat this step several times to see what the final CR is after running a sequence of tests. I will refer to a single sequence of tests as a campaign. We can simulate several campaigns to characterize the distribution of outcomes we can expect at the end of a campaign.
Code
For those who are interested, this section describes the simulation code. The Test class simulates running a single AB test. The parameters significance, power and mde correspond to the significance, power and minimum effect size of the z-test used to test the hypothesis that the new design and the control have the same CR. The optimistic parameter determines which banner we choose if we fail to reject the null hypothesis that the two designs are the same.
End of explanation
"""
class Campaign():
    def __init__(self, base_rate, num_tests, test, mu, sigma):
        self.num_tests = num_tests
        self.test = test
        self.mu = mu
        self.sigma = sigma
        self.base_rate = base_rate

    def run(self):
        true_rates = [self.base_rate,]
        for i in range(self.num_tests):
            # the control of the current test is the winner of the last test
            control_cr = true_rates[-1]
            # create treatment banner with a lift drawn from the lift distribution
            lift = np.random.normal(self.mu, self.sigma)
            treatment_cr = min(0.9, control_cr*(1.0+lift/100.0))
            winning_cr = self.test.run(control_cr, treatment_cr)
            true_rates.append(winning_cr)
        return true_rates
"""
Explanation: The Campaign class simulates running num_tests AB tests, starting with a base_rate CR. The parameters mu and sigma characterize $I$, the distribution over the percent gain in performance of a new design compared to the control.
End of explanation
"""
import matplotlib.pyplot as plt
import pandas as pd
def expected_campaign_results(campaign, sim_runs):
    # note: base_rate and num_tests are globals, set along with the other simulation parameters
    fig = plt.figure(figsize=(10, 6), dpi=80)
    d = pd.DataFrame()
    for i in range(sim_runs):
        d[i] = campaign.run()
    d2 = pd.DataFrame()
    d2['mean'] = d.mean(axis=1)
    d2['lower'] = d2['mean'] - 2*d.std(axis=1)
    d2['upper'] = d2['mean'] + 2*d.std(axis=1)
    plt.plot(d2.index, d2['mean'], label='CR')
    plt.fill_between(d2.index, d2['lower'], d2['upper'], alpha=0.31,
                     edgecolor='#3F7F4C', facecolor='0.75', linewidth=0)
    plt.xlabel('num tests')
    plt.ylabel('CR')
    plt.plot(d2.index, [base_rate]*(num_tests+1), label='Start CR')
    plt.legend()
"""
Explanation: The expected_campaign_results function implements running many campaigns with the same starting conditions. It generates a plot depicting the expected CR as a function of the number of sequential AB tests.
End of explanation
"""
def plot_improvements(mu, sigma):
    plt.figure(figsize=(7, 3))
    x = np.arange(-45.0, 45.0, 0.5)
    plt.xticks(np.arange(-45.0, 45.0, 5))
    plt.plot(x, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - mu)**2 / (2 * sigma**2)))
    plt.xlabel('lift')
    plt.ylabel('probability density')
    plt.title('Distribution over lift in CR of a new design compared to the control')
#Distribution over % Improvements
mu = -5.0
sigma = 3
plot_improvements(mu, sigma)
"""
Explanation: Simulations
I will start out with a moderately pessimistic scenario and assume the average new design is 5% worse than the control and that standard deviation sigma is 3. The plot below shows the distribution over percent gains from new designs.
End of explanation
"""
# hypothesis test params
significance = 0.05
power = 0.8
mde = 0.10
# campaign params
num_tests = 30
base_rate = 0.2
#number of trials
sim_runs = 100
test = Test(significance, power, mde, optimistic = False)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
"""
Explanation: Let's start out with some standard values of alpha = 0.05, beta = 0.8 and mde = 0.10 for the hypothesis tests. The plot below shows the expected CR after simulating a sequence of 30 AB tests 100 times.
End of explanation
"""
test = Test(significance, power, mde, optimistic = True)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
"""
Explanation: Even though we went through all the work of running 30 AB tests, we cannot expect to improve our CR. The good news is that although most of our ideas were bad, doing the AB testing prevented us from losing performance. The plot below shows what would happen if we had used the new idea as the control when the hypothesis test could not discern a significant difference.
End of explanation
"""
mu = 0.0
sigma = 5
plot_improvements(mu, sigma)
"""
Explanation: Impressive. The CR starts tanking at a rapid pace. This is an extreme example but it spells out a clear warning: if your optimization problem is hard, stick to your control.
Now let's imagine a world in which most ideas are neutral but there is still the potential for big wins and big losses. The plot below shows our new distribution over the quality of new ideas.
End of explanation
"""
test = Test(significance, power, mde, optimistic = False)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
"""
Explanation: And here are the results of the new simulation:
End of explanation
"""
mde = 0.05
test = Test(significance, power, mde, optimistic = False)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
"""
Explanation: Now there is huge variance in how things could turn out. In expectation, we get a 2% absolute gain every 10 tests. As you might have guessed, in this scenario it does not matter which banner you choose when the hypothesis test does not detect a significant difference.
Let's see if we can reduce the variance in outcomes by decreasing the minimum detectable effect mde to 0.05. This will cost us in terms of runtime for each test, but it also should reduce the variance in the expected results.
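As a rough sketch of that runtime cost (reusing the Test class defined above with the base CR of 0.2; the exact numbers depend on the power solver):

for mde_value in (0.10, 0.05):
    t = Test(significance, power, mde_value, optimistic=False)
    print(mde_value, int(t.compute_sample_size(0.2)))  # required sample size per arm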
End of explanation
"""
mu = 5
sigma = 3
plot_improvements(mu, sigma)
"""
Explanation: Now we can expect 5% absolute gain every 15 tests. Furthermore, it is very unlikely that we have not improved our CR after 30 tests.
Finally, let's consider the rosy scenario in which most new ideas are winners.
End of explanation
"""
mde = 0.10
test = Test(significance, power, mde, optimistic = False)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
"""
Explanation: Again, here are the results of the new simulation:
End of explanation
"""
test = Test(significance, power, mde, optimistic = True)
campaign = Campaign(base_rate, num_tests, test, mu, sigma)
expected_campaign_results(campaign, sim_runs)
"""
Explanation: Having good ideas is a recipe for runaway success. You might even decide that it's foolish to choose the control banner when you don't have significance, since chances are that your new idea is better, even if you could not detect it. The plot below shows that choosing the new idea over the control leads to even faster growth in performance.
End of explanation
"""
| kit-cel/wt | ccgbc/ch4_LDPC_Analysis/RegularLDPC_BEC.ipynb | gpl-2.0 |
import numpy as np
import matplotlib.pyplot as plot
from ipywidgets import interactive
import ipywidgets as widgets
import math
%matplotlib inline
"""
Explanation: Regular LDPC Codes on the BEC
This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods.
This code illustrates
* Convergence analysis of regular LDPC codes on the binary erasure channel (BEC)
End of explanation
"""
def fixedpoint(epsilon, dv, dc):
    plot.figure(3)
    x = np.linspace(0, 1, num=1000)
    y = epsilon * (1 - (1-x)**(dc-1))**(dv-1) - x
    print('Rate of the code %1.2f' % (1-dv/dc))
    if any(e >= 0 for e in y[2:]):
        color = (1, 0, 0)
    else:
        color = (0, 0.59, 0.51)
    plot.rcParams.update({'font.size': 16})
    plot.plot(x, y, color=color)
    plot.xlabel(r'$\xi$')
    plot.ylabel(r'$f(\epsilon,\xi)-\xi$')
    plot.xlim(0,1)
    plot.grid()
    plot.show()
"""
Explanation: In this notebook, we look at the performance evaluation of regular $[d_{\mathtt{v}},d_{\mathtt{c}}]$ LDPC codes. We first consider the fixed-point equation before looking at the evolution of the message erasure probability as a function of the iterations
End of explanation
"""
interactive_plot = interactive(fixedpoint, \
epsilon=widgets.FloatSlider(min=0.0,max=1,step=0.001,value=0.5, continuous_update=True, description=r'\(\epsilon\)',layout=widgets.Layout(width='50%')), \
dv = widgets.IntSlider(min=2,max=10,step=1,value=3, continuous_update=False, description=r'\(d_{\mathtt{v}}\)'), \
dc = widgets.IntSlider(min=3, max=20, step=1, value=6, continuous_update=False, description=r'\(d_{\mathtt{c}}\)'))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
"""
Explanation: This code evaluates the fixed point equation for regular $[d_{\mathtt{v}},d_{\mathtt{c}}]$ LDPC codes. The fixed point equation in this case reads
$$f(\epsilon,\xi)-\xi \leq 0\quad \forall \xi \in (0,1]$$
with
$$f(\epsilon,\xi) = \epsilon\left(1-(1-\xi)^{d_{\mathtt{c}}-1}\right)^{d_{\mathtt{v}}-1}$$
The plot below shows the evaluation of $f(\epsilon,\xi)-\xi$. If $f(\epsilon,\xi)-\xi \leq 0$ for all $\xi \in (0,1]$, then decoding is possible and the curve is displayed in green. Otherwise, it is displayed in red.
You can use the sliders to control the values $[d_{\mathtt{v}},d_{\mathtt{c}}]$ of the code and the channel parameter $\epsilon$ (epsilon).
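As a small side sketch (not part of the original notebook), the largest ε for which the condition holds, i.e. the BEC threshold of the ensemble, can be estimated numerically by bisection:

def bec_threshold(dv, dc, tol=1e-4):
    xi = np.linspace(1e-6, 1, 1000)
    def converges(eps):
        return np.all(eps * (1 - (1 - xi)**(dc - 1))**(dv - 1) - xi < 0)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return lo

print('Threshold of the (3,6) ensemble: %1.4f' % bec_threshold(3, 6))  # roughly 0.43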
End of explanation
"""
def f_iter(epsilon, dv, dc):
    num_iter = 101
    plot.figure(4)
    xi = np.zeros(num_iter)
    xi[0] = epsilon
    for k in np.arange(1,num_iter):
        xi[k] = epsilon * (1 - (1-xi[k-1])**(dc-1))**(dv-1)
    print('Rate of the code %1.2f' % (1-dv/dc))
    if any(e == 0 for e in xi[:]):
        color = (0, 0.59, 0.51)
    else:
        color = (1,0,0)
    plot.rc('text', usetex=True)
    plot.rc('font', family='serif')
    plot.rcParams.update({'font.size': 16})
    plot.plot(np.arange(1,num_iter+1), xi, color=color)
    plot.xlabel(r'Iterations $\ell$')
    plot.ylabel(r'$\xi_\ell = f(\epsilon,\xi_{\ell-1})$')
    plot.ylim(0,max(epsilon+0.1,dv/dc))
    plot.xlim(0,num_iter)
    plot.grid()
    plot.show()
epsilon_values = np.arange(0,1,0.001)
interactive_update = interactive(f_iter, \
epsilon=widgets.SelectionSlider(options=[("%1.3f"%i,i) for i in epsilon_values], value=0.5, continuous_update=False, description=r'\(\epsilon\)',layout=widgets.Layout(width='50%')), \
dv = widgets.IntSlider(min=2,max=10,step=1,value=3, continuous_update=False, description=r'\(d_{\mathtt{v}}\)'), \
dc = widgets.IntSlider(min=3, max=20, step=1, value=6, continuous_update=False, description=r'\(d_{\mathtt{c}}\)'))
output = interactive_update.children[-1]
output.layout.height = '350px'
interactive_update
"""
Explanation: In the following, we show the update equation of the code, i.e., how the code behaves as a function of the iteration counter for the first 100 iterations.
End of explanation
"""
| JShadowMan/package | python/course/ch02-syntax-and-container/.ipynb_checkpoints/基本语法及常用容器-checkpoint.ipynb | mit |
year = 2019  # an assignment statement; a line may contain just one statement
month = 7; day = 23; hour = 22; minute = 11; second = 0  # or several statements separated by ";"
if 1900 < year < 2100 and 1 <= month <= 12 \
        and 1 <= day <= 31 and 0 <= hour < 24 \
        and 0 <= minute < 60 and 0 <= second < 60:  # several physical lines form one logical line
    print("The time is valid")
"""
Explanation: Basic syntax in Python
Python does not use {} to delimit code blocks; different blocks are indicated by indentation instead.
A statement does not need a trailing ; either, although several statements can be written on one line if you really want to, which is not particularly recommended.
Comments in code
In Python, # normally marks a comment; a large block of comment text can also be written with """ ... """
Advanced tip: we can also use """ ... """ in our code for doctests, as sketched below
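A minimal doctest sketch (the add function here is a made-up example):

def add(a, b):
    """Return the sum of a and b.

    >>> add(1, 2)
    3
    """
    return a + b

import doctest
doctest.testmod()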
Simple statements in Python
A simple statement is one made up of a single logical line. Several simple statements can share a line, separated by semicolons.
* Physical line: a sequence of characters ended by a line-termination sequence, i.e. a line of code in the everyday sense
* Logical line: ends where a statement ends; generally one expression or construct
End of explanation
"""
year += 2019 # "+=" | "-=" | "*=" | "@=" | "/=" | "//=" | "%=" | "**=" | ">>=" | "<<=" | "&=" | "^=" | "|="
month **= month; print(month)
"""
Explanation: Python also supports __augmented assignment__ statements
End of explanation
"""
assert id(1) == id(1)
assert not isinstance(None, object)
"""
Explanation: As in other languages, we can add __assert statements__ to inject test checks into the code during the testing phase.
If we run the code with the optimization flag -O, the interpreter automatically skips assert statements
End of explanation
"""
def implement_me():
    pass

implement_me()
"""
Explanation: pass does nothing; it is generally used as a placeholder for a code block
End of explanation
"""
def return_multi():
    return 1, 2, "abc"

return_multi()
"""
Explanation: In Python, a function can return multiple values at once
End of explanation
"""
def try_raise():
    try:
        raise Exception("error message")
    except Exception as e:
        print("exception message:", e)
        raise

try_raise()
"""
Explanation: In Python an exception is thrown with raise rather than throw. Also note that a bare raise re-raises the most recently active exception
End of explanation
"""
import random
random.randint(0, 10)
from random import randint
randint(0, 10)
"""
Explanation: The import statement imports a package; modules are looked up first among the built-in modules and then along sys.path, which typically starts with the script's (working) directory, followed by the standard library and installed third-party packages.
from ... import ... imports part of a package into the __current scope__
End of explanation
"""
def global_var1():
    v = 2
    print(v, id(v))

v = 1
print(v, id(v))
global_var1()
print(v, id(v))

def global_var2():
    global v
    v = 2
    print(v, id(v))

v = 1
print(v, id(v))
global_var2()
print(v, id(v))
"""
Explanation: The global ... statement declares that the identifiers that follow refer to global variables
End of explanation
"""
def nonlocal_var():
    v = 2
    print(v, id(v))

    def inner_func():
        nonlocal v
        # v = 3
        print(v, id(v))

    inner_func()
    v = 1
    print(v, id(v))

nonlocal_var()
print(v, id(v))
"""
Explanation: nonlocal differs from global in that:
* global can be used anywhere and marks the identifiers that follow as global variables
* nonlocal can only be used inside a nested function and marks the identifiers that follow as local variables of the enclosing function
End of explanation
"""
def get_level(score: int) -> str:
    if score < 60:
        return 'D'
    elif score < 70:
        return 'C'
    elif score < 80:
        return 'B'
    elif score < 90:
        return 'A'
    else:
        return 'A+'

print(get_level(59), get_level(61), get_level(81), get_level(100))
"""
Explanation: Compound statements in Python
A compound statement contains other statements (groups of statements) and affects or controls their execution in some way. In general this means the program's conditional structures, loop structures, and so on.
Conditional structures
Conditionals in Python are basically the same as in other languages; note that Python has no else if, only elif
"""
# a simple while loop
count = 3
while count > 0:
    value = randint(0, 5)
    print("random value:", value)
    if value > 3:
        break
    count -= 1
else:
    print("entered the else branch")
"""
Explanation: Loop structures
Python only supports while and for loops. One way Python's loops differ from most languages is the extra else clause
End of explanation
"""
for index in range(3):
    value = randint(0, 5)
    print("random value:", value)
    if value > 3:
        break
else:
    print("entered the else branch")

# a for loop can also iterate over dicts, lists and tuples
value = {
    "tuple": (1, 2, 3),
    "list": [4, 5, 6]
}
for k, v in value.items():
    print("value[{}] = {}".format(k, v))
    for el in v:
        print("\t{}".format(el))
"""
Explanation: In Python, for loops only take the form for var in iterable; they also support an else block.
range is a special function with the signature range(start, stop, step); it is typically used in for loops (a small sketch follows)
"""
for i in range(3):
    try:
        print(1)
        continue
    finally:
        print(3)
    print(4)
"""
Explanation: Think about what the following prints
End of explanation
"""
# define a function that does nothing
def function_name1():
    pass

function_name1()
#function_name1(1, 2, 3)
"""
Explanation: Function definitions in Python
In Python we use def to define a function; this keyword plays the same role as the function keyword in JavaScript or PHP.
A function definition can also be decorated with a decorator, a feature covered later on
End of explanation
"""
def function_name2(ival: int, sval: str) -> str:
    return 'ival = {}, sval = {}'.format(ival, sval)

function_name2(0, "argument")
"""
Explanation: A function can declare formal parameters; variable annotations can be used here to improve the readability and maintainability of the code
End of explanation
"""
def function_name3(i: int, s: str, *args, **kwargs):
    print(i, s, args, kwargs, sep=" | ")

function_name3(1024, "arg2", "arg3", 4, options=[1,2], size=1)
"""
Explanation: In a function declaration, *args accepts a variable number of __positional arguments__ and **kwargs accepts a variable number of __keyword arguments__
End of explanation
"""
def function_name4(i, *args, count, **kwargs):
    pass

#function_name4(1, 2, 3)
"""
Explanation: Note that every parameter declared after *args is keyword-only: it must be passed by name (or be given a default value)
End of explanation
"""
def test_return_in_try():
    try:
        return 1
    finally:
        return 2

test_return_in_try()
"""
Explanation: Think about what the following returns
End of explanation
"""
class Fruit(object):
    color: str  # variable annotation

    # constructor
    def __init__(self, color):
        self.color = color

print(Fruit('red'))

class Apple(Fruit):
    # constructor of the subclass
    def __init__(self):
        # call the parent-class constructor
        super(Apple, self).__init__('red')
"""
Explanation: Class definitions in Python
In Python we define a class with the class keyword, using the syntax class ClassName(ParentClass); if no parent class is declared, the default parent is object.
__Classes can also be decorated with decorators__
End of explanation
"""
import abc  # abstract base class

class Fruit(object, metaclass=abc.ABCMeta):
    @abc.abstractclassmethod
    def impl_me():
        pass

print(Fruit())  # raises TypeError: an abstract class cannot be instantiated
"""
Explanation: Of course, classes in Python also support static methods and interface definitions, as well as getters/setters
End of explanation
"""
class Fruit(object, metaclass=abc.ABCMeta):
    @classmethod
    @abc.abstractmethod
    def impl_me(cls):
        pass

print(Fruit())  # raises TypeError: impl_me is still abstract

class Apple(Fruit):
    pass

print(Apple())  # also raises TypeError: Apple does not implement impl_me

class FuShiApple(object):
    def __init__(self):
        pass

    @staticmethod
    def factory():
        return FuShiApple()

    @property
    def weight(self):
        return 1

    @weight.setter
    def weight(self, v):
        print(v)

f = FuShiApple()
print(f, f.weight)
f.weight = 10
"""
Explanation: In newer versions of Python, we can define the interface directly by combining the @classmethod decorator with @abc.abstractmethod
End of explanation
"""
def function_a():
    print("function_a")

print(function_a)
function_a, function_b = 1, function_a
print(function_a)
function_b()
"""
Explanation: Decorators in Python
What makes decorators possible:
* In Python, a function is just an ordinary variable (its name can be reassigned)
* Python supports higher-order functions, i.e. a parameter can be a function object and the return value can also be a function
Since Python supports higher-order functions natively, let's first look at a simple example
End of explanation
"""
def function_a():
    print("function_a")

def wrapper(func):
    print("wrapper before")
    func()
    print("wrapper after")

wrapper(function_a)
"""
Explanation: With that foundation in place, we can operate on a function object and implement a basic version of a decorator
End of explanation
"""
def function_a():
    print("function_a")

def decorator(func):
    def __wrapper():
        print("wrapper before")
        func()
        print("wrapper after")
    return __wrapper

function_a = decorator(function_a)
function_a()
"""
Explanation: The problem with this version is that we have to wrap the function by hand each time before the modified call can be used. Let's improve it by simply rebinding the original name to the wrapped function
End of explanation
"""
def function_a(value, size=1):
    print("function_a", value, size)

def decorator(func):
    def __wrapper(*args, **kwargs):
        print("wrapper before")
        func(*args, **kwargs)
        print("wrapper after")
    return __wrapper

function_a = decorator(function_a)
function_a("asd", 123)
"""
Explanation: Next, let's also support passing arguments through to the wrapped function
End of explanation
"""
def decorator(func):
    def __wrapper(*args, **kwargs):
        print("wrapper before")
        func(*args, **kwargs)
        print("wrapper after")
    return __wrapper

@decorator
def function_a(value, size=1):
    print("function_a", value, size)

function_a("qqq", 100)
"""
Explanation: With the above we have in fact already implemented a decorator; now we can simply apply it with the @ syntactic sugar
End of explanation
"""
| anandha2017/udacity | nd101 Deep Learning Nanodegree Foundation/DockerImages/12_tensorflow/notebooks/07 Mini-batch.ipynb | mit |
print("Train features size = ", train_features.size * 4)
print("Train labels size = ", train_labels.size * 4)
print("Weights size =", 784 * 10 * 4)
print("Bias size = ", 10 * 4)
"""
Explanation: Question 1
Calculate the memory size of train_features, train_labels, weights, and bias in bytes. Ignore memory for overhead, just calculate the memory required for the stored data.
You may have to look up how much memory a float32 requires. (Single-precision floating-point format is a computer number format that occupies 4 bytes (32 bits) in computer memory and represents a wide dynamic range of values by using a floating point.)
train_features Shape: (55000, 784) Type: float32
train_labels Shape: (55000, 10) Type: float32
weights Shape: (784, 10) Type: float32
bias Shape: (10,) Type: float32
End of explanation
"""
import math

print("How many batches are there? ", math.ceil(50000 / 128))
print("What is the last batch size? ", 50000 % 128)
"""
Explanation: Question 2
Use the parameters below, how many batches are there, and what is the last batch size?
features is (50000, 400)
labels is (50000, 10)
batch_size is 128
End of explanation
"""
| kunalj101/scipy2015-blaze-bokeh | 1.6 Layout.ipynb | mit |
# Import the functions from your file
# Create your plots with your new functions
# Test the visualizations in the notebook
from bokeh.plotting import show, output_notebook
# Show climate map
# Show legend
# Show timeseries
"""
Explanation: <img src=images/continuum_analytics_b&w.png align="left" width="15%" style="margin-right:15%">
<h1 align='center'>Bokeh Tutorial</h1>
1.6 Layout
Exercise: Wrap your visualizations in functions
Wrap each of the previous visualizations in a function in a python file (e.g. viz.py):
Climate + Map: climate_map()
Legend: legend()
Timeseries: timeseries()
End of explanation
"""
from bokeh.plotting import vplot, hplot
# Create your layout
# Show layout
"""
Explanation: Exercise: Layout your plots using hplot and vplot
End of explanation
"""
from bokeh.plotting import output_file
"""
Explanation: Exercise: Store your layout in an html page
End of explanation
"""
| saga-survey/saga-code | ipython_notebooks/DECALS low-SB_brick selection and data download.ipynb | gpl-2.0 |
bricks = Table.read('decals_dr3/survey-bricks.fits.gz')
bricksdr3 = Table.read('decals_dr3/survey-bricks-dr3.fits.gz')
fn_in_sdss = 'decals_dr3/in_sdss.npy'
try:
    bricksdr3['in_sdss'] = np.load(fn_in_sdss)
except:
    bricksdr3['in_sdss'] = ['unknown']*len(bricksdr3)
bricksdr3
goodbricks = (bricksdr3['in_sdss'] == 'unknown') & (bricksdr3['nexp_r']>=10)
if np.sum(goodbricks) > 0:
    for brick in ProgressBar(bricksdr3[goodbricks], ipython_widget=True):
        sc = SkyCoord(brick['ra']*u.deg, brick['dec']*u.deg)
        bricksdr3['in_sdss'][bricksdr3['brickname']==brick['brickname']] = 'yes' if in_sdss(sc) else 'no'
    np.save('decals_dr3/in_sdss', bricksdr3['in_sdss'])
plt.scatter(bricksdr3['ra'], bricksdr3['dec'],
c=bricksdr3['nexp_r'], lw=0, s=3, vmin=0)
plt.colorbar()
yeses = bricksdr3['in_sdss'] == 'yes'
nos = bricksdr3['in_sdss'] == 'no'
plt.scatter(bricksdr3['ra'][yeses], bricksdr3['dec'][yeses], c='r',lw=0, s=1)
plt.scatter(bricksdr3['ra'][nos], bricksdr3['dec'][nos], c='w',lw=0, s=1)
plt.xlim(0, 360)
plt.ylim(-30, 40)
sdssbricks = bricksdr3[bricksdr3['in_sdss']=='yes']
plt.scatter(sdssbricks['ra'], sdssbricks['dec'],
c=sdssbricks['nexp_r'], lw=0, s=3, vmin=0)
plt.colorbar()
plt.xlim(0, 360)
plt.ylim(-30, 40)
"""
Explanation: Load up the DECALS info tables
End of explanation
"""
maxn = np.max(sdssbricks['nexp_r'])
bins = np.linspace(-1, maxn+1, maxn*3)
plt.hist(sdssbricks['nexp_r'], bins=bins, histtype='step', ec='r',log=True)
plt.hist(sdssbricks['nexp_g'], bins=bins+.1, histtype='step', ec='g',log=True)
plt.hist(sdssbricks['nexp_z'], bins=bins-.1, histtype='step', ec='k',log=True)
plt.xlim(bins[0], bins[-1])
"""
Explanation: Alright, now let's just pick a few specific bricks that are both in SDSS and have fairly deep g and r data
End of explanation
"""
plt.hexbin(sdssbricks['nexp_g'], sdssbricks['nexp_r'],bins='log')
plt.xlabel('g')
plt.ylabel('r')
"""
Explanation: And the joint distribution?
End of explanation
"""
deep_r = np.random.choice(sdssbricks['brickname'][(sdssbricks['nexp_r']>20)&(sdssbricks['nexp_g']>2)])
ra = bricks[bricks['BRICKNAME']==deep_r]['RA'][0]
dec = bricks[bricks['BRICKNAME']==deep_r]['DEC'][0]
print('http://skyserver.sdss.org/dr13/en/tools/chart/navi.aspx?ra={}&dec={}&scale=3.0&opt=P'.format(ra, dec))
deep_r
deep_g = np.random.choice(sdssbricks['brickname'][(sdssbricks['nexp_r']>15)&(sdssbricks['nexp_g']>20)])
ra = bricks[bricks['BRICKNAME']==deep_g]['RA'][0]
dec = bricks[bricks['BRICKNAME']==deep_g]['DEC'][0]
print('http://skyserver.sdss.org/dr13/en/tools/chart/navi.aspx?ra={}&dec={}&scale=3.0'.format(ra, dec))
deep_g
#bricknames = [deep_r, deep_g]
# hard code this from the result above for repeatability
bricknames = ['1193p057', '2208m005']
sdssbricks[np.in1d(sdssbricks['brickname'], bricknames)]
base_url = 'http://portal.nersc.gov/project/cosmo/data/legacysurvey/dr3/'
catalog_fns = []
for nm in bricknames:
    url = base_url + 'tractor/{}/tractor-{}.fits'.format(nm[:3], nm)
    outfn = 'decals_dr3/catalogs/' + os.path.split(url)[-1]
    if os.path.isfile(outfn):
        print(outfn, 'already exists')
    else:
        tmpfn = data.download_file(url)
        shutil.move(tmpfn, outfn)
    catalog_fns.append(outfn)
catalog_fns
"""
Explanation: Looks like there isn't much with lots of r and lots of g... 🙁
So we pick one of each.
End of explanation
"""
import casjobs
jobs = casjobs.CasJobs(base_url='http://skyserver.sdss.org/CasJobs/services/jobs.asmx', request_type='POST')
# this query template comes from Marla's download_host_sqlfile w/ modifications
query_template = """
SELECT p.objId as OBJID,
p.ra as RA, p.dec as DEC,
p.type as PHOTPTYPE, dbo.fPhotoTypeN(p.type) as PHOT_SG,
p.flags as FLAGS,
flags & dbo.fPhotoFlags('SATURATED') as SATURATED,
flags & dbo.fPhotoFlags('BAD_COUNTS_ERROR') as BAD_COUNTS_ERROR,
flags & dbo.fPhotoFlags('BINNED1') as BINNED1,
p.modelMag_u as u, p.modelMag_g as g, p.modelMag_r as r,p.modelMag_i as i,p.modelMag_z as z,
p.modelMagErr_u as u_err, p.modelMagErr_g as g_err,
p.modelMagErr_r as r_err,p.modelMagErr_i as i_err,p.modelMagErr_z as z_err,
p.MODELMAGERR_U,p.MODELMAGERR_G,p.MODELMAGERR_R,p.MODELMAGERR_I,p.MODELMAGERR_Z,
p.EXTINCTION_U, p.EXTINCTION_G, p.EXTINCTION_R, p.EXTINCTION_I, p.EXTINCTION_Z,
p.DERED_U,p.DERED_G,p.DERED_R,p.DERED_I,p.DERED_Z,
p.PETRORAD_U,p.PETRORAD_G,p.PETRORAD_R,p.PETRORAD_I,p.PETRORAD_Z,
p.PETRORADERR_U,p.PETRORADERR_G,p.PETRORADERR_R,p.PETRORADERR_I,p.PETRORADERR_Z,
p.DEVRAD_U,p.DEVRADERR_U,p.DEVRAD_G,p.DEVRADERR_G,p.DEVRAD_R,p.DEVRADERR_R,
p.DEVRAD_I,p.DEVRADERR_I,p.DEVRAD_Z,p.DEVRADERR_Z,
p.DEVAB_U,p.DEVAB_G,p.DEVAB_R,p.DEVAB_I,p.DEVAB_Z,
p.CMODELMAG_U, p.CMODELMAGERR_U, p.CMODELMAG_G,p.CMODELMAGERR_G,
p.CMODELMAG_R, p.CMODELMAGERR_R, p.CMODELMAG_I,p.CMODELMAGERR_I,
p.CMODELMAG_Z, p.CMODELMAGERR_Z,
p.PSFMAG_U, p.PSFMAGERR_U, p.PSFMAG_G, p.PSFMAGERR_G,
p.PSFMAG_R, p.PSFMAGERR_R, p.PSFMAG_I, p.PSFMAGERR_I,
p.PSFMAG_Z, p.PSFMAGERR_Z,
p.FIBERMAG_U, p.FIBERMAGERR_U, p.FIBERMAG_G, p.FIBERMAGERR_G,
p.FIBERMAG_R, p.FIBERMAGERR_R, p.FIBERMAG_I, p.FIBERMAGERR_I,
p.FIBERMAG_Z, p.FIBERMAGERR_Z,
p.FRACDEV_U, p.FRACDEV_G, p.FRACDEV_R, p.FRACDEV_I, p.FRACDEV_Z,
p.Q_U,p.U_U, p.Q_G,p.U_G, p.Q_R,p.U_R, p.Q_I,p.U_I, p.Q_Z,p.U_Z,
p.EXPAB_U, p.EXPRAD_U, p.EXPPHI_U, p.EXPAB_G, p.EXPRAD_G, p.EXPPHI_G,
p.EXPAB_R, p.EXPRAD_R, p.EXPPHI_R, p.EXPAB_I, p.EXPRAD_I, p.EXPPHI_I,
p.EXPAB_Z, p.EXPRAD_Z, p.EXPPHI_Z,
p.FIBER2MAG_R, p.FIBER2MAGERR_R,
p.EXPMAG_R, p.EXPMAGERR_R,
p.PETROR50_R, p.PETROR90_R, p.PETROMAG_R,
p.expMag_r + 2.5*log10(2*PI()*p.expRad_r*p.expRad_r + 1e-20) as SB_EXP_R,
p.petroMag_r + 2.5*log10(2*PI()*p.petroR50_r*p.petroR50_r) as SB_PETRO_R,
ISNULL(w.j_m_2mass,9999) as J, ISNULL(w.j_msig_2mass,9999) as JERR,
ISNULL(w.H_m_2mass,9999) as H, ISNULL(w.h_msig_2mass,9999) as HERR,
ISNULL(w.k_m_2mass,9999) as K, ISNULL(w.k_msig_2mass,9999) as KERR,
ISNULL(s.z, -1) as SPEC_Z, ISNULL(s.zErr, -1) as SPEC_Z_ERR, ISNULL(s.zWarning, -1) as SPEC_Z_WARN,
ISNULL(pz.z,-1) as PHOTOZ, ISNULL(pz.zerr,-1) as PHOTOZ_ERR
FROM dbo.fGetObjFromRectEq({ra1}, {dec1}, {ra2}, {dec2}) n, PhotoPrimary p
{into}
LEFT JOIN SpecObj s ON p.specObjID = s.specObjID
LEFT JOIN PHOTOZ pz ON p.ObjID = pz.ObjID
LEFT join WISE_XMATCH as wx on p.objid = wx.sdss_objid
LEFT join wise_ALLSKY as w on wx.wise_cntr = w.cntr
WHERE n.objID = p.objID
"""
casjobs_tables = jobs.list_tables()
job_ids = []
for bricknm in bricknames:
thisbrick = bricks[bricks['BRICKNAME']==bricknm]
assert len(thisbrick) == 1
thisbrick = thisbrick[0]
intostr = 'INTO mydb.decals_brick_' + bricknm
qry = query_template.format(ra1=thisbrick['RA1'], ra2=thisbrick['RA2'],
dec1=thisbrick['DEC1'], dec2=thisbrick['DEC2'],
into=intostr)
if intostr.split('.')[1] in casjobs_tables:
print(bricknm, 'already present')
continue
job_ids.append(jobs.submit(qry, 'DR13', bricknm))
# wait for the jobs to finish
finished = False
while not finished:
for i in job_ids:
stat = jobs.status(i)[-1]
if stat == 'failed':
raise ValueError('Job {} failed'.format(i))
if stat != 'finished':
time.sleep(1)
break
else:
finished = True
print('Finished jobs', job_ids)
jids = []
for bnm in bricknames:
table_name = 'decals_brick_' + bnm
ofn = 'decals_dr3/catalogs/sdss_comparison_{}.csv'.format(bnm)
if os.path.isfile(ofn):
print(table_name, 'already downloaded')
else:
jids.append(jobs.request_output(table_name,'CSV'))
done_jids = []
while len(done_jids)<len(jids):
time.sleep(1)
for i, bnm in zip(jids, bricknames):
if i in done_jids:
continue
if jobs.status(i)[-1] != 'finished':
continue
ofn = 'decals_dr3/catalogs/sdss_comparison_{}.csv'.format(bnm)
jobs.get_output(i, ofn)
done_jids.append(i)
print(ofn)
"""
Explanation: Now get the matched SDSS catalogs
End of explanation
"""
|
a301-teaching/a301_code
|
notebooks/heating_rate_npz.ipynb
|
mit
|
import h5py
import numpy as np
import datetime as dt
from datetime import timezone as tz
import matplotlib
from matplotlib import pyplot as plt
import pyproj
from numpy import ma
from a301utils.a301_readfile import download
from a301lib.cloudsat import get_geo
from IPython.display import Image, display
from datetime import datetime,timezone
flx_file='2008082060027_10105_CS_2B-FLXHR_GRANULE_P2_R04_E02.npz'
download(flx_file)
with np.load(flx_file) as npz:
lons=npz['lons']
lats=npz['lats']
height=npz['height']
shortwave_hr=npz['shortwave_hr']
longwave_hr=npz['longwave_hr']
date_times=npz['date_times']
prof_times=npz['prof_times']
date_times=[datetime.fromtimestamp(item,timezone.utc) for item in date_times]
date_times=np.array(date_times)
meters2km=1.e3
"""
Explanation: Atmospheric heating rate (Cloudsat only)
This notebook plots vertical cross sections through a cyclone of $Q_R$ the longwave and
shortwave heating rate in K/hour. It uses the level 2B product FLXHR, which is described
at the cloudsat website as follows:
"This algorithm derives estimates of broadband fluxes and heating rates consistent with liquid and ice water content estimates from the CloudSat Profiling Radar (CPR). For each radar profile, a broadband radiative transfer model is used to calculate upwelling and downwelling longwave and shortwave fluxes at each CPR range gate from the surface to the lower stratosphere. Profiles of cloud ice and liquid water content and cloud particle effective radii are defined based on the CloudSat 2B-LWC and 2B-IWC products while precipitation properties are defined using the CloudSat 2C-PRECIP-COLUMN dataset. Ancillary atmospheric state variables are interpolated from ECMWF analyses and surface albedos are assigned based on seasonally-varying maps of surface reflectance properties in combination with daily snow and sea ice cover maps from passive microwave instruments. Equivalent clear sky radiative flux profiles are generated by removing all clouds and repeating the calculations. Corresponding profiles of atmospheric heating are inferred from the vertical derivative of these fluxes."
1. Reading in the shortwave and longwave radiative heating rates
Format is described on the Cloudsat web site
Units: K/hr
Variable name: Qr
Shape: Qr[2, 37082, 125], where Qr[0,:,:] is the shortwave heating rate
and Qr[1,:,:] is the longwave heating rate. The other two dimensions are the
same as for the radar reflectivity: there are 37082 radar measurements in an orbit, binned
into 125 vertical height bins
End of explanation
"""
long_wave_hr=np.ma.masked_invalid(longwave_hr)
short_wave_hr=np.ma.masked_invalid(shortwave_hr)
"""
Explanation: 2. Make masked arrays of the heating rates so that pcolormesh will plot them
End of explanation
"""
first_time=date_times[0]
print('orbit start: {}'.format(first_time))
start_hour=6
start_minute=45
storm_start=starttime=dt.datetime(first_time.year,first_time.month,first_time.day,
start_hour,start_minute,0,tzinfo=tz.utc)
#
# get 3 minutes of data from the storm_start
#
storm_stop=storm_start + dt.timedelta(minutes=3)
print('storm start: {}'.format(storm_start))
time_hit = np.logical_and(date_times > storm_start,date_times < storm_stop)
storm_lats = lats[time_hit]
storm_lons=lons[time_hit]
storm_prof_times=prof_times[time_hit]
storm_sw_hr=short_wave_hr[time_hit,:]
storm_lw_hr=long_wave_hr[time_hit,:]
storm_height=height[time_hit,:]
storm_date_times=date_times[time_hit]
len(date_times)
"""
Explanation: 3. Find the part of the orbit that corresponds to the 3 minutes containing the storm
You need to enter the start_hour and start_minute for the start time of your cyclone in the granule
End of explanation
"""
great_circle=pyproj.Geod(ellps='WGS84')
distance=[0]
start=(storm_lons[0],storm_lats[0])
for index in np.arange(1,len(storm_lons)):
azi12,azi21,step= great_circle.inv(storm_lons[index-1],storm_lats[index-1],
storm_lons[index],storm_lats[index])
distance.append(distance[index-1] + step)
distance=np.array(distance)/meters2km
"""
Explanation: 4. Convert time to distance by using pyproj to get the great-circle distance between shots
End of explanation
"""
%matplotlib inline
plt.close('all')
from matplotlib import cm
from matplotlib.colors import Normalize
vmin=-30
vmax=30
the_norm=Normalize(vmin=vmin,vmax=vmax,clip=False)
cmap_ref=cm.RdBu_r
cmap_ref.set_over('pink')
cmap_ref.set_under('k')
cmap_ref.set_bad('0.75') #75% grey
#
# Q-1: What is the difference between the distance,height,field,ax
# and cmap,norm arguments to this function? Why do I structure
# the function signature this way?
#
def plot_field(distance,height,field,ax,cmap=None,norm=None):
if cmap is None:
cmap=cm.jet
col=ax.pcolormesh(distance,height,field,cmap=cmap,
norm=the_norm)
cax=fig.colorbar(col,extend='both',ax=ax,pad= 0.01)
return ax,cax
fig,(ax1,ax2)=plt.subplots(2,1,figsize=(20,10))
cloud_height_km=height[0,:]/meters2km
ax1,cax1=plot_field(distance,cloud_height_km,storm_sw_hr.T,ax1,cmap=cmap_ref,
norm=the_norm)
ax2,cax2=plot_field(distance,cloud_height_km,storm_lw_hr.T,ax2,cmap=cmap_ref,
norm=the_norm)
for colorbar in [cax1,cax2]:
text=colorbar.set_label('heating rate (K/hr)',rotation=-90,verticalalignment='bottom')
for ax in [ax1,ax2]:
ax.set(ylim=[0,17],xlim=(0,1200))
ax.set_xlabel('distance (km)',fontsize=15)
ax.set_ylabel('height (km)',fontsize=15)
text=fig.suptitle('storm radiative heating rates: shortwave (top), longwave (bottom)',size=25)
fig.savefig('heating_rates.png',dpi=100)
"""
Explanation: 5. Make the plot assuming that height is the same for every shot
i.e. assume that height[0,:] = height[1,:] = ...
In reality, the bin heights depend on the details of the radar returns, so
we would need to histogram the heights into a uniform set of bins -- ignore that for this qualitative picture (a rough regridding sketch follows below)
End of explanation
"""
|
sdpython/pyquickhelper
|
_unittests/ut_helpgen/notebooks_python/td1a_cenonce_session1.ipynb
|
mit
|
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: TD 1: First steps in Python
End of explanation
"""
x = 5
y = 10
z = x + y
print (z) # display z
"""
Explanation: Part 1
A programming language lets you describe very simple operations on data with precision. Like any language, it has a grammar and keywords. The complexity of a program comes from the fact that many simple operations are needed to reach a goal. Let's see this through a few simple examples. Just run each small snippet by pressing the right-pointing triangle above. Feel free to modify the snippets to better understand what each program does.
The calculator
End of explanation
"""
x = 2
y = x + 1
print (y)
x += 5
print (x)
"""
Explanation: A program is often used to automate a computation, such as the monthly calculation of the unemployment rate, the inflation rate, or tomorrow's weather... To repeat the same computation on different values, you must be able to describe the computation without knowing what those values are. A simple way is to name them: we use variables. A variable designates data. x=5 means that the variable x contains 5. x+3 means that we add 3 to x without needing to know what x designates.
Addition and incrementation
End of explanation
"""
a = 0
for i in range (0, 10) :
    a = a + i # repeat this line ten times
print (a)
"""
Explanation: When programming, we spend our time writing computations from variables and storing the results in other variables, or even in the same ones. When we write y=x+5, it means we add 5 to x and store the result in y. When we write x += 5, it means we add 5 to x and no longer need the value x contained before the operation.
Repetition, or loops
End of explanation
"""
a = 10
if a > 0 :
    print(a) # only one of the two blocks is executed
else :
a -= 1
print (a)
"""
Explanation: The print keyword has no effect on the program itself. However, it displays the state of a variable at the moment the print instruction is executed.
Branching, or tests
End of explanation
"""
a = 10
print (a) # what is the difference
print ("a") # between these two lines
s = "texte"
s += "c"
print (s)
"""
Explanation: Character strings
End of explanation
"""
print("2" + "3")
print(2+3)
"""
Explanation: Every value has a type, and that determines which operations can be applied to it. 2 + 2 makes 4 for everyone. 2 + "2" makes four for a human, but is incomprehensible for the computer because we are adding two different kinds of things (apples and oranges).
End of explanation
"""
a = 5
a + 4
print (a) # we would like to see 9 but 5 appears
"""
Explanation: Part 2
In this second part, the goal is to explain why a program does not do what it is supposed to do, or why it raises an error, and, if possible, to fix that error.
An oversight
End of explanation
"""
a = 0
for i in range (0, 10)
a = a + i
print (a)
"""
Explanation: A syntax error
End of explanation
"""
a = 0
for i in range (0, 10):
a = a + i
print (a)
"""
Explanation: Another syntax error
End of explanation
"""
a = 0
s = "e"
print (a + s)
"""
Explanation: A forbidden operation
End of explanation
"""
a = 0
for i in range (0, 10) :
a = (a + (i+2)*3
print (a)
"""
Explanation: An odd number of...
End of explanation
"""
14%2, 233%2
"""
Explanation: Part 3
It is now time to write three programs:
Write a program that computes the sum of the first 10 integers squared.
Write a program that computes the sum of the first 5 odd integers squared.
Write a program that computes the sum of the first 10 factorials: $\sum_{i=1}^{10} i!$.
A possible solution sketch is added after the parity example below.
About parity:
End of explanation
"""
%load_ext tutormagic
%%tutor --lang python3
a = 0
for i in range (0, 10):
a = a + i
"""
Explanation: Tutor Magic
This tool lets you visualize the step-by-step execution of (not too large) programs; the original site is pythontutor.com.
End of explanation
"""
|
norsween/data-science
|
springboard-answers-to-exercises/Mini_Project_Linear_Regression-Answers.ipynb
|
gpl-3.0
|
# special IPython command to prepare the notebook for matplotlib and other libraries
%matplotlib inline
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import sklearn
import seaborn as sns
# special matplotlib argument for improved plots
from matplotlib import rcParams
sns.set_style("whitegrid")
sns.set_context("poster")
"""
Explanation: Regression in Python
This is a very quick run-through of some basic statistical concepts, adapted from Lab 4 in Harvard's CS109 course. Please feel free to try the original lab if you're feeling ambitious :-) The CS109 git repository also has the solutions if you're stuck.
Linear Regression Models
Prediction using linear regression
Linear regression is used to model and predict continuous outcomes with normal random errors. There are nearly an infinite number of different types of regression models, and each regression model is typically defined by the distribution of the prediction errors (called "residuals") and the type of data. Logistic regression is used to model binary outcomes whereas Poisson regression is used to predict counts. In this exercise, we'll see some examples of linear regression as well as train-test splits.
The packages we'll cover are: statsmodels, seaborn, and scikit-learn. While we don't explicitly teach statsmodels and seaborn in the Springboard workshop, those are great libraries to know.
End of explanation
"""
from sklearn.datasets import load_boston
import pandas as pd
boston = load_boston()
boston.keys()
boston.data.shape
# Print column names
print(boston.feature_names)
# Print description of Boston housing data set
print(boston.DESCR)
"""
Explanation: Part 2: Exploratory Data Analysis for Linear Relationships
The Boston Housing data set contains information about the housing values in suburbs of Boston. This dataset was originally taken from the StatLib library which is maintained at Carnegie Mellon University and is now available on the UCI Machine Learning Repository.
Load the Boston Housing data set from sklearn
This data set is available in the sklearn python module which is how we will access it today.
End of explanation
"""
bos = pd.DataFrame(boston.data)
bos.head()
"""
Explanation: Now let's explore the data set itself.
End of explanation
"""
bos.columns = boston.feature_names
bos.head()
"""
Explanation: There are no column names in the DataFrame. Let's add those.
End of explanation
"""
print(boston.target.shape)
bos['PRICE'] = boston.target
bos.head()
"""
Explanation: Now we have a pandas DataFrame called bos containing all the data we want to use to predict Boston Housing prices. Let's create a variable called PRICE which will contain the prices. This information is contained in the target data.
End of explanation
"""
bos.describe()
"""
Explanation: EDA and Summary Statistics
Let's explore this data set. First we use describe() to get basic summary statistics for each of the columns.
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.scatter(bos.CRIM, bos.PRICE)
plt.xlabel("Per capita crime rate by town (CRIM)")
plt.ylabel("Housing Price")
plt.title("Relationship between CRIM and Price")
# Answers to Part 2 Exercise Set 1
# Question 1) What kind of relationship do you see? e.g. positive, negative?
# linear? non-linear? Is there anything else strange or interesting about
# the data? What about outliers?
# I see a weak negative linear relationship. Yes, the data looks interesting in
# that its distribution appears to be positively skewed and has a few outliers.
# Part 2 Exercise Set 1
# Question 2: Create scatter plots between *RM* and *PRICE*, and PTRATIO and PRICE.
# Label your axes appropriately using human readable labels.
# Tell a story about what you see.
# Create scatter plots between *RM* and *PRICE*
plt.scatter(bos.RM, bos.PRICE)
plt.xlabel("Average Number of Rooms Per Dwelling (RM)")
plt.ylabel("Housing Price")
plt.title("Relationship between RM and Price")
# Part 2 Exercise Set 1:
# Create scatter plot between *PTRATIO* and *PRICE*
plt.scatter(bos.PTRATIO, bos.PRICE)
plt.xlabel("Pupil-Teacher Ratio by Town (PTRATIO)")
plt.ylabel("Housing Price")
plt.title("Relationship between PTRATIO and Price")
# Question 2 continuation: it appears that a positive linear
# relationship seemed to exist in the graph between average
# number of rooms per dwelling and housing price.
# your turn: create some other scatter plots
# scatter plot between *NOX* and *PRICE*
plt.scatter(bos.NOX, bos.PRICE)
plt.xlabel("Nitric Oxides Concentration (parts per 10 million) (NOX)")
plt.ylabel("Housing Price")
plt.title("Relationship between NOX and Price")
# Exercise 1: What are some other numeric variables of interest? Why do you think
# they are interesting? Plot scatterplots with these variables and
# PRICE (house price) and tell a story about what you see.
# In my opinion, other variables of interest would be nitric oxides
# concentration since it can describe pollutants in the area.
# Another is the column describing percent of black population
# that may describe neighborhood housing prices.
# your turn: create some other scatter plots
# Create a scatter plot between *B* and *PRICE*
plt.scatter(bos.B, bos.PRICE)
plt.xlabel("1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town (B)")
plt.ylabel("Housing Price")
plt.title("Relationship between B and Price")
# your turn: create some other scatter plots
# Create a scatter plot between *DIS* and *LSTAT*
plt.scatter(bos.DIS, bos.LSTAT)
plt.xlabel("weighted distances to five Boston employment centres (DIS)")
plt.ylabel("% lower status of the population")
plt.title("Relationship between DIS and LSTAT")
import seaborn as sns
sns.regplot(y="PRICE", x="RM", data=bos, fit_reg = True)
"""
Explanation: Scatterplots
Let's look at some scatter plots for three variables: 'CRIM' (per capita crime rate), 'RM' (number of rooms) and 'PTRATIO' (pupil-to-teacher ratio in schools).
End of explanation
"""
plt.hist(np.log(bos.CRIM))
plt.title("CRIM")
plt.xlabel("Crime rate per capita")
plt.ylabel("Frequencey")
plt.show()
# Part 2 Exercise 1: In the above histogram, we took the logarithm of the crime rate per
# capita. Repeat this histogram without taking the log.
plt.hist(bos.CRIM)
plt.title("CRIM")
plt.xlabel("Crime rate per capita")
plt.ylabel("Frequencey")
plt.show()
# Exercise 2 Question 1 continuation: What was the purpose of taking the log? What do we gain
# by making this transformation? What do you now notice about this variable that is not
# obvious without making the transformation?
# We usually take logarithms of variables that are multiplicatively related or in other
# words it's growing exponentially in time. By taking logarithms of variables before
# plotting the data, any exponential nature of variables is taken out of equation so
# that we can see the pattern in a linear model if that's the case. Logging in short,
# is similar to deflation, so that a trend can be straightened out and a linear model
# can be fitted.
# Before taking the logarithm of the variable, it's obvious that it's exponential in nature.
# Part 2 Exercise 2:
# Plot the histogram for RM and PTRATIO against each other, along
# with the two variables you picked in the previous section. We
# are looking for correlations in predictors here.
import seaborn as sns
sns.set(color_codes=True)
sns.jointplot(bos.RM, bos.PTRATIO)
# Part 2 Exercise 2 Continuation:
# Plot the histogram for the two variables you picked in
# the previous section.
import seaborn as sns
sns.set(color_codes=True)
sns.jointplot(bos.NOX, bos.PRICE)
"""
Explanation: Histograms
End of explanation
"""
# Import regression modules
import statsmodels.api as sm
from statsmodels.formula.api import ols
# statsmodels works nicely with pandas dataframes
# The thing inside the "quotes" is called a formula, a bit on that below
m = ols('PRICE ~ RM',bos).fit()
print(m.summary())
"""
Explanation: Part 3: Linear Regression with Boston Housing Data Example
Here,
$Y$ = boston housing prices (called "target" data in python, and referred to as the dependent variable or response variable)
and
$X$ = all the other features (or independent variables, predictors or explanatory variables)
which we will use to fit a linear regression model and predict Boston housing prices. We will use the least-squares method to estimate the coefficients.
We'll use two ways of fitting a linear regression. We recommend the first but the second is also powerful in its features.
End of explanation
"""
# Part 3 Exercise 1: Create a scatterplot between the predicted prices,
# available in m.fittedvalues (where m is the fitted model)
# and the original prices.
# Import regression modules
import statsmodels.api as sm
from statsmodels.formula.api import ols
# statsmodels works nicely with pandas dataframes
# The thing inside the "quotes" is called a formula, a bit on that below
m = ols('PRICE ~ RM',bos).fit()
# Create the scatter plot between predicted values and *PRICE*
plt.scatter(m.predict(), bos.PRICE)
plt.xlabel("Predicted Housing Price Based on Linear Regression")
plt.ylabel("Housing Price")
plt.title("Relationship between Predicted Price and Original Price")
"""
Explanation: Let's see how well our model actually fits our data. We can see below that there is a ceiling effect, which we should probably look into. Also, for large values of $Y$ we get underpredictions; most predictions are below the 45-degree line.
End of explanation
"""
from sklearn.linear_model import LinearRegression
X = bos.drop('PRICE', axis = 1)
# This creates a LinearRegression object
lm = LinearRegression()
lm
# Use all 13 predictors to fit linear regression model
lm.fit(X, bos.PRICE)
"""
Explanation: Fitting Linear Regression using sklearn
End of explanation
"""
# Part 3 Exercise 2 Question:
# How would you change the model to not fit an intercept term?
# Would you recommend not having an intercept? Why or why not?
# To change the model to not fit an intercept term then
# we need to fit a linear regression through the origin (RTO).
# Using sklearn's LinearRegression function, I will have to set
# the fit_intercept parameter to False.
# As far as recommending whether to have an intercept or not,
# this would depend on the data set. Hocking (1996) and Adelman
# et.al. (1994) have found that a careful change of data range
# and data size needs to be considered. For example, if the
# data is far from the origin then fitting through the origin
# might present a discontinuity from an otherwise linear
# function with a positive or negative intercept. If uncertain,
# then one might run a couple of diagnostics. Hahn (1977)
# suggested to run a fit with and without an intercept then
# compare the standard errors to decide whether OLS or RTO
# provides a superior fit.
print('Estimated intercept coefficient: {}'.format(lm.intercept_))
print('Number of coefficients: {}'.format(len(lm.coef_)))
# The coefficients
pd.DataFrame({'features': X.columns, 'estimatedCoefficients': lm.coef_})[['features', 'estimatedCoefficients']]
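# Added sketch (not part of the original exercise answer) of the change
# discussed above: scikit-learn fits a regression through the origin when the
# intercept term is disabled.
lm_no_intercept = LinearRegression(fit_intercept=False)
lm_no_intercept.fit(X, bos.PRICE)
print('Intercept with fit_intercept=False: {}'.format(lm_no_intercept.intercept_))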
"""
Explanation: <div class="span5 alert alert-info">
<h3>Part 3 Checkup Exercise Set II</h3>
<p><b>Exercise:</b> How would you change the model to not fit an intercept term? Would you recommend not having an intercept? Why or why not? For more information on why to include or exclude an intercept, look [here](https://online.stat.psu.edu/~ajw13/stat501/SpecialTopics/Reg_thru_origin.pdf).</p>
<p><b>Exercise:</b> One of the assumptions of the linear model is that the residuals must be i.i.d. (independently and identically distributed). To satisfy this, is it enough that the residuals are normally distributed? Explain your answer.</p>
<p><b>Exercise:</b> True or false. To use linear regression, $Y$ must be normally distributed. Explain your answer.</p>
</div>
End of explanation
"""
# first five predicted prices
lm.predict(X)[0:5]
# Part 3 Exercise Set III:
# Question 1: Histogram: Plot a histogram of all the predicted prices. Write a story
# about what you see. Describe the shape, center and spread of the distribution.
# Are there any outliers? What might be the reason for them? Should we do
# anything special with them?
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.hist(lm.predict(X))
plt.title("Linear Regression")
plt.xlabel("Predicted Prices")
plt.ylabel("Frequency")
plt.show()
# The graph appears to be symmetric and bell-shaped, showing a normal
# distribution. The center seems to be around 20 on the x-axis.
# The spread of the distribution is from -5 to 45. Yes, there
# are outliers in the form of negative-valued prices.
# Part 3 Exercise Set III
# Question 2: Scatterplot: Let's plot the true prices compared to
# the predicted prices to see how they disagree
# (we did this with statsmodels before).
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Create the scatter plot between predicted values and *PRICE*
plt.scatter(lm.predict(X), bos.PRICE)
plt.xlabel("Predicted Housing Price Based on Linear Regression")
plt.ylabel("Housing Price")
plt.title("Relationship between Predicted Price and Original Price")
# Question 3: We have looked at fitting a linear model in both
# statsmodels and scikit-learn. What are the advantages
# and disadvantages of each based on your exploration?
# Based on the information provided by both packages,
# what advantage does statsmodels provide?
print(np.sum((bos.PRICE - lm.predict(X)) ** 2))
print(np.sum((lm.predict(X) - np.mean(bos.PRICE)) ** 2))
# Part 3 Exercise Set IV:
# Question 1: Fit a linear regression model using only the Pupil-teacher
# ratio by town (PTRATIO) column and interpret the coefficients.
from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
import pandas as pd
lm = LinearRegression()
lm.fit(X[['PTRATIO']], bos.PRICE)
print('Estimated intercept coefficient: {}'.format(lm.intercept_))
print('Number of coefficients: {}'.format(len(lm.coef_)))
# Exercise 2: Calculate (or extract) the R2 value. What does it tell you?
lm.score(X[['PTRATIO']], bos.PRICE)
# Exercise 3: Compute the F-statistic. What does it tell you?
m = ols('PRICE ~ PTRATIO',bos).fit()
print(m.summary())
"""
Explanation: Predict Prices
We can calculate the predicted prices ($\hat{Y}_i$) using lm.predict.
$$ \hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_1 + \ldots + \hat{\beta}_{13} X_{13} $$
End of explanation
"""
# Part 3 Exercise Set V
# Fit a linear regression model using three independent variables
# 1) 'CRIM' (per capita crime rate by town)
# 2) 'RM' (average number of rooms per dwelling)
# 3) 'PTRATIO' (pupil-teacher ratio by town)
lm = LinearRegression()
lm.fit(X[['CRIM','RM','PTRATIO']], bos.PRICE)
# Calculate (or extract) the R2 value.
lm.score(X[['CRIM', 'RM', 'PTRATIO']], bos.PRICE)
# Compute the F-statistic.
m = ols('PRICE ~ CRIM + RM + PTRATIO',bos).fit()
print(m.summary())
"""
Explanation: <div class="span5 alert alert-info">
<h3>Part 3 Checkup Exercise Set V</h3>
<p>Fit a linear regression model using three independent variables</p>
<ol>
<li> 'CRIM' (per capita crime rate by town)
<li> 'RM' (average number of rooms per dwelling)
<li> 'PTRATIO' (pupil-teacher ratio by town)
</ol>
<p><b>Exercise:</b> Compute or extract the $F$-statistic. What does it tell you about the model?</p>
<p><b>Exercise:</b> Compute or extract the $R^2$ statistic. What does it tell you about the model?</p>
<p><b>Exercise:</b> Which variables in the model are significant in predicting house price? Write a story that interprets the coefficients.</p>
</div>
End of explanation
"""
# Part 4
# Find another variable (or two) to add to the model we built in Part 3.
# Compute the F-test comparing the two models as well as the AIC. Which model is better?
m = ols('PRICE ~ CRIM + RM + PTRATIO + NOX + TAX',bos).fit()
print(m.summary())
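# Added sketch of one way to make the comparison asked for above: an explicit
# F-test between the nested Part 3 and Part 4 models plus their AIC values
# (lower AIC is better). Both 'ols' and 'sm' were imported earlier.
m_part3 = ols('PRICE ~ CRIM + RM + PTRATIO', bos).fit()
m_part4 = ols('PRICE ~ CRIM + RM + PTRATIO + NOX + TAX', bos).fit()
print(sm.stats.anova_lm(m_part3, m_part4))   # F-test comparing the two models
print('AIC: {:.1f} vs {:.1f}'.format(m_part3.aic, m_part4.aic))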
# Part 5 Exercise 1:
# Create a scatter plot of fitted values versus residuals
plt.scatter(m.fittedvalues, m.resid)
plt.ylabel("Fitted Values")
plt.xlabel("Normalized residuals")
# Part 5 Exercise 2:
# Construct a quantile plot of the residuals.
from scipy import stats
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
res = stats.probplot(m.resid, plot=ax)  # quantile (probability) plot of the residuals
ax.set_title("Probability plot of the model residuals")
plt.show()
# Part 5 Exercise 3:
# What are some advantages and disadvantages of the fitted vs.
# residual and quantile plot compared to each other?
# Answer: The fitted vs. residual plot is the plot most frequently
# used in residual analysis. Its advantage is that it makes it easy
# to spot non-linearity, unequal error variances and outliers; the
# quantile plot is better suited to checking the normality of the residuals.
"""
Explanation: Part 4: Comparing Models
During modeling, there will be times when we want to compare models to see which one is more predictive or fits the data better. There are many ways to compare models, but we will focus on two.
<div class="span5 alert alert-info">
<h3>Part 4 Checkup Exercises</h3>
<p><b>Exercise:</b> Find another variable (or two) to add to the model we built in Part 3. Compute the $F$-test comparing the two models as well as the AIC. Which model is better?</p>
</div>
End of explanation
"""
|
GoogleCloudPlatform/mlops-on-gcp
|
environments_setup/mlops-composer-mlflow/environment-test.ipynb
|
apache-2.0
|
import os
import re
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression
import pymysql
from IPython.core.display import display, HTML
mlflow_tracking_uri = mlflow.get_tracking_uri()
MLFLOW_EXPERIMENTS_URI = os.environ['MLFLOW_EXPERIMENTS_URI']
print("MLflow tracking server URI: {}".format(mlflow_tracking_uri))
print("MLflow artifacts store root: {}".format(MLFLOW_EXPERIMENTS_URI))
print("MLflow SQL connction name: {}".format(os.environ['MLFLOW_SQL_CONNECTION_NAME']))
print("MLflow SQL connction string: {}".format(os.environ['MLFLOW_SQL_CONNECTION_STR']))
print("Cloud Composer name: {}".format(os.environ['MLOPS_COMPOSER_NAME']))
print("Cloud Composer instance region: {}".format(os.environ['MLOPS_REGION']))
display(HTML('<hr>You can check results of this test in MLflow and GCS folder:'))
display(HTML('<h4><a href="{}" rel="noopener noreferrer" target="_blank">Click to open MLflow UI</a></h4>'.format(os.environ['MLFLOW_TRACKING_EXTERNAL_URI'])))
display(HTML('<h4><a href="https://console.cloud.google.com/storage/browser/{}" rel="noopener noreferrer" target="_blank">Click to open GCS folder</a></h4>'.format(MLFLOW_EXPERIMENTS_URI.replace('gs://',''))))
"""
Explanation: Verifying the MLOps environment on GCP
This notebook verifies the MLOps environment provisioned on GCP
1. Test using the local MLflow server in the AI Notebooks instance to log entries to Cloud SQL
2. Test deploying and running an Airflow workflow on Composer that uses the MLflow server on GKE to log entries to Cloud SQL
1. Running a local MLflow experiment
We implement a simple Scikit-learn model training routine, and examine the logged entries in Cloud SQL and the produced artifacts in Cloud Storage through MLflow tracking.
End of explanation
"""
experiment_name = "notebooks-test"
mlflow.set_experiment(experiment_name)
with mlflow.start_run(nested=True):
X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
y = np.array([0, 0, 1, 1, 1, 0])
lr = LogisticRegression()
lr.fit(X, y)
score = lr.score(X, y)
print("Score: %s" % score)
mlflow.log_metric("score", score)
mlflow.sklearn.log_model(lr, "model")
print("Model saved in run %s" % mlflow.active_run().info.run_uuid)
current_model=mlflow.get_artifact_uri('model')
"""
Explanation: 1.1. Training a simple Scikit-learn model from Notebook environment
End of explanation
"""
sqlauth=re.search('mysql\\+pymysql://(?P<user>.*):(?P<psw>.*)@127.0.0.1:3306/mlflow', os.environ['MLFLOW_SQL_CONNECTION_STR'],re.DOTALL)
connection = pymysql.connect(
host='127.0.0.1',
port=3306,
database='mlflow',
user=sqlauth.group('user'),
passwd=sqlauth.group('psw')
)
"""
Explanation: 1.2. Query the MLflow entries from Cloud SQL
End of explanation
"""
cursor = connection.cursor()
cursor.execute("SHOW TABLES")
for entry in cursor:
print(entry[0])
"""
Explanation: List tables
You should see a list of table names like 'experiments','metrics','model_versions','runs'
End of explanation
"""
cursor.execute("SELECT * FROM experiments where name='{}' ORDER BY experiment_id desc LIMIT 1".format(experiment_name))
if cursor.rowcount == 0:
print("Experiment not found")
else:
experiment_id = list(cursor)[0][0]
print("'{}' experiment ID: {}".format(experiment_name, experiment_id))
"""
Explanation: Retrieve experiment
End of explanation
"""
cursor.execute("SELECT * FROM runs where experiment_id={} ORDER BY start_time desc LIMIT 1".format(experiment_id))
if cursor.rowcount == 0:
print("No runs found")
else:
entity=list(cursor)[0]
run_uuid = entity[0]
print("Last run id of '{}' experiment is: {}\n".format(experiment_name, run_uuid))
print(entity)
"""
Explanation: Query runs
End of explanation
"""
cursor.execute("SELECT * FROM metrics where run_uuid = '{}'".format(run_uuid))
for entry in cursor:
print(entry)
"""
Explanation: Query metrics
End of explanation
"""
!gsutil ls {current_model}
"""
Explanation: 1.3. List the artifacts in Cloud Storage
End of explanation
"""
COMPOSER_NAME=os.environ['MLOPS_COMPOSER_NAME']
REGION=os.environ['MLOPS_REGION']
"""
Explanation: 2. Submitting a workflow to Composer
We implement a one-step Airflow workflow that trains a Scikit-learn model, and examine the logged entries in Cloud SQL and the produced artifacts in Cloud Storage through MLflow tracking.
End of explanation
"""
%%writefile test-sklearn-mlflow.py
import airflow
import mlflow
import mlflow.sklearn
import numpy as np
from datetime import timedelta
from sklearn.linear_model import LogisticRegression
from airflow.operators import PythonOperator
def train_model(**kwargs):
print("Train lr model step started...")
print("MLflow tracking uri: {}".format(mlflow.get_tracking_uri()))
mlflow.set_experiment("airflow-test")
with mlflow.start_run(nested=True):
X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
y = np.array([0, 0, 1, 1, 1, 0])
lr = LogisticRegression()
lr.fit(X, y)
score = lr.score(X, y)
print("Score: %s" % score)
mlflow.log_metric("score", score)
mlflow.sklearn.log_model(lr, "model")
print("Model saved in run %s" % mlflow.active_run().info.run_uuid)
print("Train lr model step finished.")
default_args = {
'retries': 1,
'start_date': airflow.utils.dates.days_ago(0)
}
with airflow.DAG(
'test_sklearn_mlflow',
default_args=default_args,
schedule_interval=None,
dagrun_timeout=timedelta(minutes=20)) as dag:
train_model_op = PythonOperator(
task_id='train_sklearn_model',
provide_context=True,
python_callable=train_model
)
"""
Explanation: 2.1. Writing the Airflow workflow
End of explanation
"""
!gcloud composer environments storage dags import \
--environment {COMPOSER_NAME} --location {REGION} \
--source test-sklearn-mlflow.py
!gcloud composer environments storage dags list \
--environment {COMPOSER_NAME} --location {REGION}
"""
Explanation: 2.2. Uploading the Airflow workflow
End of explanation
"""
!gcloud composer environments run {COMPOSER_NAME} \
--location {REGION} unpause -- test_sklearn_mlflow
!gcloud composer environments run {COMPOSER_NAME} \
--location {REGION} trigger_dag -- test_sklearn_mlflow
"""
Explanation: 2.3. Triggering the workflow
Please wait for 30-60 seconds before triggering the workflow at the first Airflow Dag import
End of explanation
"""
cursor = connection.cursor()
"""
Explanation: 2.4. Query the MLflow entries from Cloud SQL
End of explanation
"""
experiment_name = "airflow-test"
cursor.execute("SELECT * FROM experiments where name='{}' ORDER BY experiment_id desc LIMIT 1".format(experiment_name))
if cursor.rowcount == 0:
print("Experiment not found")
else:
experiment_id = list(cursor)[0][0]
print("'{}' experiment ID: {}".format(experiment_name, experiment_id))
"""
Explanation: Retrieve experiment
End of explanation
"""
cursor.execute("SELECT * FROM runs where experiment_id={} ORDER BY start_time desc LIMIT 1".format(experiment_id))
if cursor.rowcount == 0:
print("No runs found")
else:
entity=list(cursor)[0]
run_uuid = entity[0]
print("Last run id of '{}' experiment is: {}\n".format(experiment_name, run_uuid))
print(entity)
"""
Explanation: Query runs
End of explanation
"""
cursor.execute("SELECT * FROM metrics where run_uuid = '{}'".format(run_uuid))
if cursor.rowcount == 0:
print("No metrics found")
else:
for entry in cursor:
print(entry)
"""
Explanation: Query metrics
End of explanation
"""
!gsutil ls {MLFLOW_EXPERIMENTS_URI}/{experiment_id}/{run_uuid}/artifacts/model
"""
Explanation: 2.5. List the artifacts in Cloud Storage
End of explanation
"""
|
WNoxchi/Kaukasos
|
FAI02_old/Lesson9/neural_sr_attempt2.ipynb
|
mit
|
%matplotlib inline
import importlib
import os, sys; sys.path.insert(1, os.path.join('../utils'))
import utils2; importlib.reload(utils2)
from utils2 import *
from scipy.optimize import fmin_l_bfgs_b
from scipy.misc import imsave
from keras import metrics
from vgg16_avg import VGG16_Avg
from bcolz_array_iterator import BcolzArrayIterator
limit_mem()
path = '../data/'
dpath = path
rn_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32)
preproc = lambda x: (x - rn_mean)[:, :, :, ::-1]
deproc = lambda x,s: np.clip(x.reshape(s)[:, :, :, ::-1] + rn_mean, 0, 255)
arr_lr = bcolz.open(dpath+'trn_resized_72.bc')
arr_hr = bcolz.open(path+'trn_resized_288.bc')
parms = {'verbose': 0, 'callbacks': [TQDMNotebookCallback(leave_inner=True)]}
def conv_block(x, filters, size, stride=(2,2), mode='same', act=True):
x = Convolution2D(filters, size, size, subsample=stride, border_mode=mode)(x)
x = BatchNormalization(mode=2)(x)
return Activation('relu')(x) if act else x
def res_block(ip, nf=64):
x = conv_block(ip, nf, 3, (1,1))
x = conv_block(x, nf, 3, (1,1), act=False)
return merge([x, ip], mode='sum')
def up_block(x, filters, size):
x = keras.layers.UpSampling2D()(x)
x = Convolution2D(filters, size, size, border_mode='same')(x)
x = BatchNormalization(mode=2)(x)
return Activation('relu')(x)
def get_model(arr):
inp=Input(arr.shape[1:])
x=conv_block(inp, 64, 9, (1,1))
for i in range(4): x=res_block(x)
x=up_block(x, 64, 3)
x=up_block(x, 64, 3)
x=Convolution2D(3, 9, 9, activation='tanh', border_mode='same')(x)
outp=Lambda(lambda x: (x+1)*127.5)(x)
return inp,outp
inp,outp=get_model(arr_lr)
shp = arr_hr.shape[1:]
vgg_inp=Input(shp)
vgg= VGG16(include_top=False, input_tensor=Lambda(preproc)(vgg_inp))
for l in vgg.layers: l.trainable=False
def get_outp(m, ln): return m.get_layer(f'block{ln}_conv2').output
vgg_content = Model(vgg_inp, [get_outp(vgg, o) for o in [1,2,3]])
vgg1 = vgg_content(vgg_inp)
vgg2 = vgg_content(outp)
def mean_sqr_b(diff):
dims = list(range(1,K.ndim(diff)))
return K.expand_dims(K.sqrt(K.mean(diff**2, dims)), 0)
w=[0.1, 0.8, 0.1]
def content_fn(x):
res = 0; n=len(w)
for i in range(n): res += mean_sqr_b(x[i]-x[i+n]) * w[i]
return res
m_sr = Model([inp, vgg_inp], Lambda(content_fn)(vgg1+vgg2))
m_sr.compile('adam', 'mae')
def train(bs, niter=10):
targ = np.zeros((bs, 1))
bc = BcolzArrayIterator(arr_hr, arr_lr, batch_size=bs)
for i in range(niter):
hr,lr = next(bc)
m_sr.train_on_batch([lr[:bs], hr[:bs]], targ)
its = len(arr_hr)//16; its
arr_lr.chunklen, arr_hr.chunklen
%time train(64, 18000)
"""
Explanation: 01 SEP 2017
End of explanation
"""
arr_lr_c8 = bcolz.carray(arr_lr, chunklen=8, rootdir=path+'trn_resized_72_c8.bc')
arr_lr_c8.flush()
arr_hr_c8 = bcolz.carray(arr_hr, chunklen=8, rootdir=path+'trn_resized_288_c8.bc')
arr_hr_c8.flush()
arr_lr_c8.chunklen, arr_hr_c8.chunklen
"""
Explanation: Finally starting to understand this problem. The ResourceExhaustedError isn't about system memory (or at least not only) but graphics memory: the card obviously cannot handle a batch size of 64. But the batch size must be a multiple of the chunk length, which here is 64, so I have to find a way to reduce the chunk length to something my system can handle: no more than 8.
End of explanation
"""
arr_lr_c8 = bcolz.open(path+'trn_resized_72_c8.bc')
arr_hr_c8 = bcolz.open(path+'trn_resized_288_c8.bc')
inp,outp=get_model(arr_lr_c8)
shp = arr_hr_c8.shape[1:]
vgg_inp=Input(shp)
vgg= VGG16(include_top=False, input_tensor=Lambda(preproc)(vgg_inp))
for l in vgg.layers: l.trainable=False
vgg_content = Model(vgg_inp, [get_outp(vgg, o) for o in [1,2,3]])
vgg1 = vgg_content(vgg_inp)
vgg2 = vgg_content(outp)
m_sr = Model([inp, vgg_inp], Lambda(content_fn)(vgg1+vgg2))
m_sr.compile('adam', 'mae')
def train(bs, niter=10):
targ = np.zeros((bs, 1))
bc = BcolzArrayIterator(arr_hr_c8, arr_lr_c8, batch_size=bs)
for i in range(niter):
hr,lr = next(bc)
m_sr.train_on_batch([lr[:bs], hr[:bs]], targ)
%time train(8, 18000) # not sure what exactly the '18000' is for
arr_lr.shape, arr_hr.shape, arr_lr_c8.shape, arr_hr_c8.shape
# 19439//8 = 2429
%time train(8, 2430)
"""
Explanation: That looks successful, now to redo the whole thing with the _c8 versions:
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
python/beautiful_soup_drill_down.ipynb
|
mit
|
# Import required modules
import requests
from bs4 import BeautifulSoup
import pandas as pd
"""
Explanation: Title: Drilling Down With Beautiful Soup
Slug: beautiful_soup_drill_down
Summary: Drilling Down With Beautiful Soup
Date: 2016-05-01 12:00
Category: Python
Tags: Web Scraping
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create a variable with the URL to this tutorial
url = 'http://en.wikipedia.org/wiki/List_of_A_Song_of_Ice_and_Fire_characters'
# Scrape the HTML at the url
r = requests.get(url)
# Turn the HTML into a Beautiful Soup object
soup = BeautifulSoup(r.text, "lxml")
"""
Explanation: Download the HTML and create a Beautiful Soup object
End of explanation
"""
# Create a variable to store the scraped data in
character_name = []
"""
Explanation: If we looked at the soup object, we'd see that the names we want are in a hierarchical list. In pseudo-code, it looks like:
class=toclevel-1 span=toctext
class=toclevel-2 span=toctext CHARACTER NAMES
class=toclevel-2 span=toctext CHARACTER NAMES
class=toclevel-2 span=toctext CHARACTER NAMES
class=toclevel-2 span=toctext CHARACTER NAMES
class=toclevel-2 span=toctext CHARACTER NAMES
To get the CHARACTER NAMES, we are going to need to drill down into toclevel-2 and grab the toctext
Setting up where to put the results
End of explanation
"""
# for each item in all the toclevel-2 li items
# (except the last three because they are not character names),
for item in soup.find_all('li',{'class':'toclevel-2'})[:-3]:
# find each span with class=toctext,
for post in item.find_all('span',{'class':'toctext'}):
# add the stripped string of each to character_name, one by one
character_name.append(post.string.strip())
"""
Explanation: Drilling down with a for loop
End of explanation
"""
# View all the character names
character_name
"""
Explanation: Results
End of explanation
"""
# Create a list object where to store the for loop results
houses = []
# For each element in the character_name list,
for name in character_name:
# split up the names by a blank space and select the last element
# this works because it is the last name if they are a house,
# but the first name if they only have one name,
# Then append each last name to the houses list
houses.append(name.split(' ')[-1])
# Convert houses into a pandas series (so we can use value_counts())
houses = pd.Series(houses)
# Count the number of times each name/house name appears
houses.value_counts()
"""
Explanation: Quick analysis: Which house has the most main characters?
End of explanation
"""
|
hannorein/rebound
|
ipython_examples/OrbitPlot.ipynb
|
gpl-3.0
|
import rebound
sim = rebound.Simulation()
sim.add(m=1)
sim.add(m=0.1, e=0.041, a=0.4, inc=0.2, f=0.43, Omega=0.82, omega=2.98)
sim.add(m=1e-3, e=0.24, a=1.0, pomega=2.14)
sim.add(m=1e-3, e=0.24, a=1.5, omega=1.14, l=2.1)
sim.add(a=-2.7, e=1.4, f=-1.5,omega=-0.7) # hyperbolic orbit
"""
Explanation: Orbit Plot
REBOUND comes with a simple way to plot instantaneous orbits of planetary systems. To show how this works, let's setup a test simulation with 4 planets.
End of explanation
"""
%matplotlib inline
fig, ax = rebound.OrbitPlot(sim)
"""
Explanation: To plot these initial orbits in the $xy$-plane, we can simply call the OrbitPlot function and give it the simulation as an argument.
End of explanation
"""
fig, ax = rebound.OrbitPlot(sim, unitlabel="[AU]", color=True, periastron=True, xlim=[-2,2], ylim=[-2.5,1.5])
fig, ax = rebound.OrbitPlot(sim, orbit_type="solid", lw=2)
fig, ax = rebound.OrbitPlot(sim, orbit_type=None)
fig, ax = rebound.OrbitPlot(sim, fancy=True, color=True, lw=2)
"""
Explanation: There are various ways to customize the plot. Have a look at the arguments used in the following examples, which are pretty much self-explanatory (if in doubt, check the documentation!).
End of explanation
"""
from IPython.display import display, clear_output
import matplotlib.pyplot as plt
sim.move_to_com()
for i in range(3):
sim.integrate(sim.t+0.31)
fig, ax = rebound.OrbitPlot(sim,color=True,unitlabel="[AU]",xlim=[-2,2.],ylim=[-2,2.])
display(fig)
plt.close(fig)
clear_output(wait=True)
"""
Explanation: Note that all orbits are drawn with respect to the center of mass of all interior particles. This coordinate system is known as Jacobi coordinates. It requires that the particles are sorted by ascending semi-major axis within the REBOUND simulation's particle array.
From within iPython/Jupyter one can also call the OrbitPlot routine in a loop, thus making an animation as one steps through a simulation. This is a one way of keeping track of what is going on in a simulation without having to wait until the end. To do that we need to import the display and clear_output function from iPython first. We'll also need access to the clear function of matplotlib. Then, we run a loop, updating the figure as we go along. (The following cell is not rendered in the documentation but you should be able to run it locally)
End of explanation
"""
fig = rebound.OrbitPlot(sim,slices=0.5,xlim=[-2.,2],ylim=[-2.,2])
"""
Explanation: To get an idea of the three-dimensional distribution of orbits, use the slices option. This will plot the orbits three times, from different perspectives. You can set the size of the z direction by changing the value of slices. For example, slices=0.5 corresponds to plots half the size of the main plot.
End of explanation
"""
sim = rebound.Simulation()
sim.add(m=1.) #Star A
sim.add(m=1., a=1.) #Star B
sim.add(a=2.) #Planet ABb
sim.add(a=0.2, primary=sim.particles[1]) #Bb,
sim.move_to_com()
fig = rebound.OrbitPlot(sim)
"""
Explanation: The axes on the plots are automatically aligned with each other. The aspect of all plots is equal (a circular orbit will be a circle).
Advanced Plotting
One important caveat to keep in mind is that OrbitPlot plots osculating Kepler orbits in Jacobi coordinates. This can lead to spurious plots in some general cases, e.g., when a particle is in orbit around a particle with non-zero index:
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(figsize=(5,5))
ax.set_aspect("equal")
ps = sim.particles
# manually set plot boundaries
lim = 2.3
ax.set_xlim([-lim, lim])
ax.set_ylim([-lim, lim])
# plot the stars and planets with separate symbols
for star in ps[:2]:
ax.scatter(star.x, star.y, s=35, marker='*', facecolor='black', zorder=3)
for planet in ps[2:]:
ax.scatter(planet.x, planet.y, s=10, facecolor='black', zorder=3)
# Now individually plot orbit trails with appropriate orbit
from rebound.plotting import fading_line
ABb = ps[2] # circumbinary planet, use default jacobi coordinates
o = np.array(ABb.sample_orbit())
lc = fading_line(o[:,0], o[:,1])
ax.add_collection(lc)
Bb = ps[3] # planet in orbit around B, assign it as primary
o = np.array(Bb.sample_orbit(primary=ps[1]))
lc = fading_line(o[:,0], o[:,1])
ax.add_collection(lc);
"""
Explanation: Circumbinary Planet ABb is plotted correctly in orbit around the center of mass of A and B, but Bb's Jacobi orbit is also around the center of mass of the interior particles, which corresponds to a hyperbolic orbit. It's important to note that while the plot looks incorrect, IAS15 would correctly integrate their motions.
There's no way to generically assign specific primaries to particular particles, since this concept becomes ill-defined near the boundaries of different bodies' Hill spheres, and particles could, for example, switch primaries in a given simulation. But it's straightforward to make custom plots since version 3.5.10:
End of explanation
"""
|
infilect/ml-course1
|
keras-notebooks/Frameworks/2.3.1 Keras Backend.ipynb
|
mit
|
import keras.backend as K
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from kaggle_data import load_data, preprocess_data, preprocess_labels
X_train, labels = load_data('../data/kaggle_ottogroup/train.csv', train=True)
X_train, scaler = preprocess_data(X_train)
Y_train, encoder = preprocess_labels(labels)
X_test, ids = load_data('../data/kaggle_ottogroup/test.csv', train=False)
X_test, _ = preprocess_data(X_test, scaler)
nb_classes = Y_train.shape[1]
print(nb_classes, 'classes')
dims = X_train.shape[1]
print(dims, 'dims')
feats = dims
training_steps = 25
x = K.placeholder(dtype="float", shape=X_train.shape)
target = K.placeholder(dtype="float", shape=Y_train.shape)
# Set model weights
W = K.variable(np.random.rand(dims, nb_classes))
b = K.variable(np.random.rand(nb_classes))
# Define model and loss
y = K.dot(x, W) + b
loss = K.categorical_crossentropy(y, target)
activation = K.softmax(y) # Softmax
lr = K.constant(0.01)
grads = K.gradients(loss, [W,b])
updates = [(W, W-lr*grads[0]), (b, b-lr*grads[1])]
train = K.function(inputs=[x, target], outputs=[loss], updates=updates)
# Training
loss_history = []
for epoch in range(training_steps):
current_loss = train([X_train, Y_train])[0]
loss_history.append(current_loss)
if epoch % 20 == 0:
print("Loss: {}".format(current_loss))
loss_history = [np.mean(lh) for lh in loss_history]
# plotting
plt.plot(range(len(loss_history)), loss_history, 'o', label='Logistic Regression Training phase')
plt.ylabel('cost')
plt.xlabel('epoch')
plt.legend()
plt.show()
"""
Explanation: Keras Backend
In this notebook we will be using the Keras backend module, which provides an abstraction over both Theano and Tensorflow.
Let's try to re-implement the Logistic Regression Model using the keras.backend APIs.
The following code will look like very similar to what we would write in Theano or Tensorflow (with the only difference that it may run on both the two backends).
End of explanation
"""
# Placeholders and variables
x = K.placeholder()
target = K.placeholder()
w = K.variable(np.random.rand())
b = K.variable(np.random.rand())
lr = K.variable(0.01)  # learning rate for SGD, the third variable mentioned below
"""
Explanation: Your Turn
Please switch to the Theano backend and restart the notebook.
You should see no difference in the execution!
Reminder: please keep in mind that you can execute shell commands from a notebook (pre-pending a ! sign).
Thus:
shell
!cat ~/.keras/keras.json
should show you the content of your keras configuration file.
Moreover
Try to play a bit with the learning rate parameter to see how the loss history changes...
Exercise: Linear Regression
To get familiar with automatic differentiation, we start by learning a simple linear regression model using Stochastic Gradient Descent (SGD).
Recall that given a dataset $\{(x_i, y_i)\}_{i=0}^N$, with $x_i, y_i \in \mathbb{R}$, the objective of linear regression is to find two scalars $w$ and $b$ such that $y = w\cdot x + b$ fits the dataset. In this tutorial we will learn $w$ and $b$ using SGD and a Mean Square Error (MSE) loss:
$$\mathcal{l} = \frac{1}{N} \sum_{i=0}^N (w\cdot x_i + b - y_i)^2$$
Starting from random values, parameters $w$ and $b$ will be updated at each iteration via the following rule:
$$w_t = w_{t-1} - \eta \frac{\partial \mathcal{l}}{\partial w}$$
<br>
$$b_t = b_{t-1} - \eta \frac{\partial \mathcal{l}}{\partial b}$$
where $\eta$ is the learning rate.
NOTE: Recall that linear regression is indeed a simple neuron with a linear activation function!!
Definition: Placeholders and Variables
First of all, we define the necessary variables and placeholders for our computational graph. Variables maintain state across executions of the computational graph, while placeholders are ways to feed the graph with external data.
For the linear regression example, we need three variables: w, b, and the learning rate for SGD, lr.
Two placeholders x and target are created to store $x_i$ and $y_i$ values.
End of explanation
"""
# Define model and loss
# %load ../solutions/sol_2311.py
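# The solution file itself is not included here; a minimal sketch of what it
# is expected to contain (the linear model plus the MSE loss, expressed with
# the Keras backend):
y = w * x + b
loss = K.mean(K.square(y - target))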
"""
Explanation: Notes:
In case you're wondering what's the difference between a placeholder and a variable, in short:
Use K.variable() for trainable variables such as weights (W) and biases (b) for your model.
Use K.placeholder() to feed actual data (e.g. training examples)
Model definition
Now we can define the $y = w\cdot x + b$ relation as well as the MSE loss in the computational graph.
End of explanation
"""
# %load ../solutions/sol_2312.py
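# Again the solution file is not included; a sketch of the expected content,
# using the learning-rate variable lr defined with the placeholders above:
grads = K.gradients(loss, [w, b])
updates = [(w, w - lr * grads[0]), (b, b - lr * grads[1])]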
"""
Explanation: Then, given the gradients of the MSE loss with respect to w and b, we can define how we update the parameters via SGD:
End of explanation
"""
train = K.function(inputs=[x, target], outputs=[loss], updates=updates)
"""
Explanation: The whole model can be encapsulated in a function, which takes as input x and target, returns the current loss value and updates its parameters according to updates.
End of explanation
"""
# Generate data
np_x = np.random.rand(1000)
np_target = 0.96*np_x + 0.24
# Training
loss_history = []
for epoch in range(200):
current_loss = train([np_x, np_target])[0]
loss_history.append(current_loss)
if epoch % 20 == 0:
print("Loss: %.03f, w, b: [%.02f, %.02f]" % (current_loss, K.eval(w), K.eval(b)))
"""
Explanation: Training
Training is now just a matter of calling the function we have just defined. Each time train is called, indeed, w and b will be updated using the SGD rule.
Having generated some random training data, we will feed the train function for several epochs and observe the values of w, b, and loss.
End of explanation
"""
# Plot loss history
# %load ../solutions/sol_2313.py
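# Sketch of the missing plotting solution: loss_history is a list of scalars,
# so a simple line plot over epochs is enough.
plt.plot(range(len(loss_history)), loss_history)
plt.xlabel('epoch')
plt.ylabel('MSE loss')
plt.show()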
"""
Explanation: We can also plot the loss history:
End of explanation
"""
|
SKA-ScienceDataProcessor/algorithm-reference-library
|
workflows/notebooks/imaging-wterm_arlexecute.ipynb
|
apache-2.0
|
%matplotlib inline
import os
import sys
sys.path.append(os.path.join('..', '..'))
from data_models.parameters import arl_path
results_dir = arl_path('test_results')
from matplotlib import pylab
import numpy
from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy.wcs.utils import pixel_to_skycoord
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from data_models.polarisation import PolarisationFrame
from wrappers.serial.image.iterators import image_raster_iter
from processing_library.image.operations import create_w_term_like
# Use serial wrappers by default
from wrappers.serial.visibility.base import create_visibility, create_visibility, create_visibility_from_rows
from wrappers.serial.skycomponent.operations import create_skycomponent
from wrappers.serial.image.operations import show_image, export_image_to_fits
from wrappers.serial.visibility.iterators import vis_timeslice_iter
from wrappers.serial.simulation.configurations import create_named_configuration
from wrappers.serial.imaging.base import invert_2d, create_image_from_visibility, \
predict_skycomponent_visibility, advise_wide_field
from wrappers.serial.visibility.iterators import vis_timeslice_iter
from wrappers.serial.imaging.weighting import weight_visibility
from wrappers.serial.visibility.iterators import vis_timeslices
from wrappers.arlexecute.griddata.kernels import create_awterm_convolutionfunction
from wrappers.arlexecute.griddata.convolution_functions import apply_bounding_box_convolutionfunction
# Use arlexecute for imaging
from wrappers.arlexecute.execution_support.arlexecute import arlexecute
from workflows.arlexecute.imaging.imaging_arlexecute import invert_list_arlexecute_workflow
import logging
log = logging.getLogger()
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler(sys.stdout))
doplot = True
pylab.rcParams['figure.figsize'] = (12.0, 12.0)
pylab.rcParams['image.cmap'] = 'rainbow'
"""
Explanation: Wide-field imaging demonstration
This script makes a fake data set, fills it with a number of point components, and then images it using a variety of algorithms. See imaging-fits for a similar notebook that checks for errors in the recovered properties of the images.
The measurement equation for a wide field of view interferometer is:
$$V(u,v,w) =\int \frac{I(l,m)}{\sqrt{1-l^2-m^2}} e^{-2 \pi j (ul+vm + w(\sqrt{1-l^2-m^2}-1))} dl dm$$
We will show various algorithms for computing approximations to this integral. Calculation of the visibility V from the sky brightness I is called predict, and the inverse is called invert.
End of explanation
"""
lowcore = create_named_configuration('LOWBD2-CORE')
"""
Explanation: Construct the SKA1-LOW core configuration
End of explanation
"""
times = numpy.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]) * (numpy.pi / 12.0)
frequency = numpy.array([1e8])
channel_bandwidth = numpy.array([1e7])
reffrequency = numpy.max(frequency)
phasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-45.0 * u.deg, frame='icrs', equinox='J2000')
vt = create_visibility(lowcore, times, frequency, channel_bandwidth=channel_bandwidth,
weight=1.0, phasecentre=phasecentre, polarisation_frame=PolarisationFrame("stokesI"))
"""
Explanation: We create the visibility.
This just makes the uvw, time, antenna1, antenna2, weight columns in a table
End of explanation
"""
advice = advise_wide_field(vt, wprojection_planes=1)
"""
Explanation: Advise on wide field parameters. This returns a dictionary with all the input and calculated variables.
End of explanation
"""
if doplot:
plt.clf()
plt.plot(vt.data['uvw'][:, 0], vt.data['uvw'][:, 1], '.', color='b')
plt.plot(-vt.data['uvw'][:, 0], -vt.data['uvw'][:, 1], '.', color='r')
plt.xlabel('U (wavelengths)')
plt.ylabel('V (wavelengths)')
plt.show()
plt.clf()
plt.plot(vt.data['uvw'][:, 0], vt.data['uvw'][:, 2], '.', color='b')
plt.xlabel('U (wavelengths)')
plt.ylabel('W (wavelengths)')
plt.show()
plt.clf()
plt.plot(vt.data['time'][vt.u>0.0], vt.data['uvw'][:, 2][vt.u>0.0], '.', color='b')
plt.plot(vt.data['time'][vt.u<=0.0], vt.data['uvw'][:, 2][vt.u<=0.0], '.', color='r')
plt.xlabel('Time (s)')
plt.ylabel('W (wavelengths)')
plt.show()
plt.clf()
n, bins, patches = plt.hist(vt.w, 50, density=True, facecolor='green', alpha=0.75)
plt.xlabel('W (wavelengths)')
plt.ylabel('Density')
plt.show()
"""
Explanation: Plot the synthesized UV coverage.
End of explanation
"""
npixel = 512
cellsize=0.001
facets = 4
flux = numpy.array([[100.0]])
vt.data['vis'] *= 0.0
model = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)
spacing_pixels = npixel // facets
log.info('Spacing in pixels = %s' % spacing_pixels)
spacing = 180.0 * cellsize * spacing_pixels / numpy.pi
centers = -1.5, -0.5, +0.5, +1.5
comps=list()
for iy in centers:
for ix in centers:
pra = int(round(npixel // 2 + ix * spacing_pixels - 1))
pdec = int(round(npixel // 2 + iy * spacing_pixels - 1))
sc = pixel_to_skycoord(pra, pdec, model.wcs)
log.info("Component at (%f, %f) %s" % (pra, pdec, str(sc)))
comp = create_skycomponent(flux=flux, frequency=frequency, direction=sc,
polarisation_frame=PolarisationFrame("stokesI"))
comps.append(comp)
predict_skycomponent_visibility(vt, comps)
"""
Explanation: Show the planar nature of the uvw sampling, rotating with hour angle
Create a grid of components and predict each in turn, using the full phase term including w.
End of explanation
"""
arlexecute.set_client(use_dask=True)
dirty = create_image_from_visibility(vt, npixel=512, cellsize=0.001,
polarisation_frame=PolarisationFrame("stokesI"))
vt = weight_visibility(vt, dirty)
future = invert_list_arlexecute_workflow([vt], [dirty], context='2d')
dirty, sumwt = arlexecute.compute(future, sync=True)[0]
if doplot:
show_image(dirty)
print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirty.data.max(), dirty.data.min(), sumwt))
export_image_to_fits(dirty, '%s/imaging-wterm_dirty.fits' % (results_dir))
"""
Explanation: Make the dirty image and point spread function using the two-dimensional approximation:
$$V(u,v,w) =\int I(l,m) e^{-2 \pi j (ul+vm)} dl\, dm$$
Note that the shape of the sources varies with position in the image. This space-variant property of the PSF arises from the w-term neglected in the two-dimensional invert.
End of explanation
"""
dirtyFacet = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)
future = invert_list_arlexecute_workflow([vt], [dirtyFacet], facets=4, context='facets')
dirtyFacet, sumwt = arlexecute.compute(future, sync=True)[0]
if doplot:
show_image(dirtyFacet)
print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirtyFacet.data.max(), dirtyFacet.data.min(), sumwt))
export_image_to_fits(dirtyFacet, '%s/imaging-wterm_dirtyFacet.fits' % (results_dir))
"""
Explanation: This occurs because the Fourier transform relationship between sky brightness and visibility is only accurate over small fields of view.
Hence we can make an accurate image by partitioning the image plane into small regions, treating each separately and then gluing the resulting partitions into one image. This image-plane partitioning is called image-plane faceting.
$$V(u,v,w) = \sum_{i,j} \frac{1}{\sqrt{1- l_{i,j}^2- m_{i,j}^2}} e^{-2 \pi j (ul_{i,j}+vm_{i,j} + w(\sqrt{1-l_{i,j}^2-m_{i,j}^2}-1))}
\int I(\Delta l, \Delta m) e^{-2 \pi j (u\Delta l+v \Delta m)} d\Delta l\, d\Delta m$$
End of explanation
"""
dirtyFacet2 = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)
future = invert_list_arlexecute_workflow([vt], [dirtyFacet2], facets=2, context='facets')
dirtyFacet2, sumwt = arlexecute.compute(future, sync=True)[0]
if doplot:
show_image(dirtyFacet2)
print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirtyFacet2.data.max(), dirtyFacet2.data.min(), sumwt))
export_image_to_fits(dirtyFacet2, '%s/imaging-wterm_dirtyFacet2.fits' % (results_dir))
"""
Explanation: That was the best case. This time, we will not arrange for the partitions to be centred on the sources.
End of explanation
"""
if doplot:
wterm = create_w_term_like(model, phasecentre=vt.phasecentre, w=numpy.max(vt.w))
show_image(wterm)
plt.show()
dirtywstack = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)
future = invert_list_arlexecute_workflow([vt], [dirtywstack], vis_slices=101, context='wstack')
dirtywstack, sumwt = arlexecute.compute(future, sync=True)[0]
show_image(dirtywstack)
plt.show()
print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" %
(dirtywstack.data.max(), dirtywstack.data.min(), sumwt))
export_image_to_fits(dirtywstack, '%s/imaging-wterm_dirty_wstack.fits' % (results_dir))
"""
Explanation: Another approach is to partition the visibility data by slices in w. The measurement equation is approximated as:
$$V(u,v,w) =\sum_i \int \frac{ I(l,m) e^{-2 \pi j w_i(\sqrt{1-l^2-m^2}-1)}}{\sqrt{1-l^2-m^2}} e^{-2 \pi j (ul+vm)} dl\, dm$$
If images constructed from slices in w are added after applying a w-dependent image plane correction, the w term will be corrected.
The w-dependent w-beam is:
End of explanation
"""
for rows in vis_timeslice_iter(vt):
visslice = create_visibility_from_rows(vt, rows)
dirtySnapshot = create_image_from_visibility(visslice, npixel=512, cellsize=0.001, npol=1, compress_factor=0.0)
future = invert_list_arlexecute_workflow([visslice], [dirtySnapshot], context='2d')
dirtySnapshot, sumwt = arlexecute.compute(future, sync=True)[0]
print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" %
(dirtySnapshot.data.max(), dirtySnapshot.data.min(), sumwt))
if doplot:
dirtySnapshot.data -= dirtyFacet.data
show_image(dirtySnapshot)
plt.title("Hour angle %.2f hours" % (numpy.average(visslice.time) * 12.0 / 43200.0))
plt.show()
"""
Explanation: The w-term can also be viewed as a time-variable distortion. Approximating the array as instantaneously co-planar, we have that w can be expressed in terms of $u,v$
$$w = a u + b v$$
Transforming to a new coordinate system:
$$ l' = l + a (\sqrt{1-l^2-m^2}-1)$$
$$ m' = m + b (\sqrt{1-l^2-m^2}-1)$$
Ignoring changes in the normalisation term, we have:
$$V(u,v,w) =\int \frac{I(l',m')}{\sqrt{1-l'^2-m'^2}} e^{-2 \pi j (ul'+vm')} dl'\, dm'$$
To illustrate this, we will construct images as a function of time. For comparison, we show the difference of each time slice from the best facet image. Instantaneously the sources are undistorted but do lie in the wrong location.
End of explanation
"""
dirtyTimeslice = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)
future = invert_list_arlexecute_workflow([vt], [dirtyTimeslice], vis_slices=vis_timeslices(vt, 'auto'),
padding=2, context='timeslice')
dirtyTimeslice, sumwt = arlexecute.compute(future, sync=True)[0]
show_image(dirtyTimeslice)
plt.show()
print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" %
(dirtyTimeslice.data.max(), dirtyTimeslice.data.min(), sumwt))
export_image_to_fits(dirtyTimeslice, '%s/imaging-wterm_dirty_Timeslice.fits' % (results_dir))
"""
Explanation: This timeslice imaging leads to a straightforward algorithm in which we correct each time slice and then sum the resulting timeslices.
End of explanation
"""
dirtyWProjection = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)
gcfcf = create_awterm_convolutionfunction(model, nw=101, wstep=800.0/101, oversampling=8,
support=60,
use_aaf=True)
future = invert_list_arlexecute_workflow([vt], [dirtyWProjection], context='2d', gcfcf=[gcfcf])
dirtyWProjection, sumwt = arlexecute.compute(future, sync=True)[0]
if doplot:
show_image(dirtyWProjection)
print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirtyWProjection.data.max(),
dirtyWProjection.data.min(), sumwt))
export_image_to_fits(dirtyWProjection, '%s/imaging-wterm_dirty_WProjection.fits' % (results_dir))
"""
Explanation: Finally we try w-projection. For a fixed w, the measurement equation can be stated as a convolution in Fourier space.
$$V(u,v,w) =G_w(u,v) \ast \int \frac{I(l,m)}{\sqrt{1-l^2-m^2}} e^{-2 \pi j (ul+vm)} dl\, dm$$
where the convolution function is:
$$G_w(u,v) = \int \frac{1}{\sqrt{1-l^2-m^2}} e^{-2 \pi j (ul+vm + w(\sqrt{1-l^2-m^2}-1))} dl\, dm$$
Hence when gridding, we can use the transform of the w beam to correct this effect while gridding.
End of explanation
"""
|
fantasycheng/udacity-deep-learning-project
|
tutorials/intro-to-tflearn/TFLearn_Sentiment_Analysis.ipynb
|
mit
|
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
"""
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
"""
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
"""
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
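For example, with raw text you might add something like the following (hypothetical; not needed for this preprocessed data):
reviews[0] = reviews[0].str.lower()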
End of explanation
"""
from collections import Counter
total_counts = # bag of words here
print("Total words in data set: ", len(total_counts))
"""
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
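One possible solution sketch (treat it as a hint rather than the reference answer; it assumes one review per row in column 0):
total_counts = Counter()
for idx, row in reviews.iterrows():
    total_counts.update(row[0].split(' '))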
End of explanation
"""
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
"""
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
"""
print(vocab[-1], ': ', total_counts[vocab[-1]])
"""
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
"""
word2idx = ## create the word-to-index dictionary here
"""
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
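A possible one-line sketch using a dictionary comprehension:
word2idx = {word: i for i, word in enumerate(vocab)}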
End of explanation
"""
def text_to_vector(text):
pass
"""
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
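Putting those steps together, one possible sketch (yours may differ) looks like this:
def text_to_vector(text):
    word_vector = np.zeros(len(vocab), dtype=np.int_)
    for word in text.split(' '):
        idx = word2idx.get(word, None)
        if idx is not None:
            word_vector[idx] += 1
    return word_vector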
End of explanation
"""
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
"""
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
"""
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
"""
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
"""
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
train_fraction = 0.9  # fraction of records used for training; the remainder is the test set
train_split, test_split = shuffle[:int(records*train_fraction)], shuffle[int(records*train_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
"""
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
"""
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
model = tflearn.DNN(net)
return model
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
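One possible sketch that simply combines the calls above (the hidden-layer sizes here are arbitrary choices, not the reference solution):
def build_model():
    tf.reset_default_graph()
    net = tflearn.input_data([None, 10000])                      # one input per vocabulary word
    net = tflearn.fully_connected(net, 200, activation='ReLU')   # hidden layer
    net = tflearn.fully_connected(net, 25, activation='ReLU')    # hidden layer
    net = tflearn.fully_connected(net, 2, activation='softmax')  # output layer
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1,
                             loss='categorical_crossentropy')
    return tflearn.DNN(net)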
End of explanation
"""
model = build_model()
"""
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
"""
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
"""
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
"""
Explanation: Try out your own text!
End of explanation
"""
|
tensorflow/docs-l10n
|
site/en-snapshot/probability/examples/STS_approximate_inference_for_models_with_non_Gaussian_observations.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
import time
import matplotlib.pyplot as plt
import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
from tensorflow_probability import bijectors as tfb
from tensorflow_probability import distributions as tfd
tf.enable_v2_behavior()
"""
Explanation: Approximate inference for STS models with non-Gaussian observations
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/STS_approximate_inference_for_models_with_non_Gaussian_observations"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/STS_approximate_inference_for_models_with_non_Gaussian_observations.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/STS_approximate_inference_for_models_with_non_Gaussian_observations.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/STS_approximate_inference_for_models_with_non_Gaussian_observations.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This notebook demonstrates the use of TFP approximate inference tools to incorporate a (non-Gaussian) observation model when fitting and forecasting with structural time series (STS) models. In this example, we'll use a Poisson observation model to work with discrete count data.
End of explanation
"""
num_timesteps = 30
observed_counts = np.round(3 + np.random.lognormal(np.log(np.linspace(
num_timesteps, 5, num=num_timesteps)), 0.20, size=num_timesteps))
observed_counts = observed_counts.astype(np.float32)
plt.plot(observed_counts)
"""
Explanation: Synthetic Data
First we'll generate some synthetic count data:
End of explanation
"""
def build_model(approximate_unconstrained_rates):
trend = tfp.sts.LocalLinearTrend(
observed_time_series=approximate_unconstrained_rates)
return tfp.sts.Sum([trend],
observed_time_series=approximate_unconstrained_rates)
"""
Explanation: Model
We'll specify a simple model with a randomly walking linear trend:
End of explanation
"""
positive_bijector = tfb.Softplus() # Or tfb.Exp()
# Approximate the unconstrained Poisson rate just to set heuristic priors.
# We could avoid this by passing explicit priors on all model params.
approximate_unconstrained_rates = positive_bijector.inverse(
tf.convert_to_tensor(observed_counts) + 0.01)
sts_model = build_model(approximate_unconstrained_rates)
"""
Explanation: Instead of operating on the observed time series, this model will operate on the series of Poisson rate parameters that govern the observations.
Since Poisson rates must be positive, we'll use a bijector to transform the
real-valued STS model into a distribution over positive values. The Softplus
transformation $y = \log(1 + \exp(x))$ is a natural choice, since it is nearly linear for positive values, but other choices such as Exp (which transforms the normal random walk into a lognormal random walk) are also possible.
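As a quick, minimal sanity check (values are approximate), the Softplus bijector defined above is close to the identity for inputs well above zero:
vals = tf.constant([0.5, 2., 5., 10.])
print(positive_bijector.forward(vals))  # roughly [0.97, 2.13, 5.01, 10.00]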
End of explanation
"""
def sts_with_poisson_likelihood_model():
# Encode the parameters of the STS model as random variables.
param_vals = []
for param in sts_model.parameters:
param_val = yield param.prior
param_vals.append(param_val)
# Use the STS model to encode the log- (or inverse-softplus)
# rate of a Poisson.
unconstrained_rate = yield sts_model.make_state_space_model(
num_timesteps, param_vals)
rate = positive_bijector.forward(unconstrained_rate[..., 0])
observed_counts = yield tfd.Poisson(rate, name='observed_counts')
model = tfd.JointDistributionCoroutineAutoBatched(sts_with_poisson_likelihood_model)
"""
Explanation: To use approximate inference for a non-Gaussian observation model,
we'll encode the STS model as a TFP JointDistribution. The random variables in this joint distribution are the parameters of the STS model, the time series of latent Poisson rates, and the observed counts.
End of explanation
"""
pinned_model = model.experimental_pin(observed_counts=observed_counts)
"""
Explanation: Preparation for inference
We want to infer the unobserved quantities in the model, given the observed counts. First, we condition the joint log density on the observed counts.
End of explanation
"""
constraining_bijector = pinned_model.experimental_default_event_space_bijector()
"""
Explanation: We'll also need a constraining bijector to ensure that inference respects the constraints on the STS model's parameters (for example, scales must be positive).
End of explanation
"""
#@title Sampler configuration
# Allow external control of sampling to reduce test runtimes.
num_results = 500 # @param { isTemplate: true}
num_results = int(num_results)
num_burnin_steps = 100 # @param { isTemplate: true}
num_burnin_steps = int(num_burnin_steps)
"""
Explanation: Inference with HMC
We'll use HMC (specifically, NUTS) to sample from the joint posterior over model parameters and latent rates.
This will be significantly slower than fitting a standard STS model with HMC, since in addition to the model's (relatively small number of) parameters we also have to infer the entire series of Poisson rates. So we'll run for a relatively small number of steps; for applications where inference quality is critical it might make sense to increase these values or to run multiple chains.
End of explanation
"""
sampler = tfp.mcmc.TransformedTransitionKernel(
tfp.mcmc.NoUTurnSampler(
target_log_prob_fn=pinned_model.unnormalized_log_prob,
step_size=0.1),
bijector=constraining_bijector)
adaptive_sampler = tfp.mcmc.DualAveragingStepSizeAdaptation(
inner_kernel=sampler,
num_adaptation_steps=int(0.8 * num_burnin_steps),
target_accept_prob=0.75)
initial_state = constraining_bijector.forward(
type(pinned_model.event_shape)(
*(tf.random.normal(part_shape)
for part_shape in constraining_bijector.inverse_event_shape(
pinned_model.event_shape))))
# Speed up sampling by tracing with `tf.function`.
@tf.function(autograph=False, jit_compile=True)
def do_sampling():
return tfp.mcmc.sample_chain(
kernel=adaptive_sampler,
current_state=initial_state,
num_results=num_results,
num_burnin_steps=num_burnin_steps,
trace_fn=None)
t0 = time.time()
samples = do_sampling()
t1 = time.time()
print("Inference ran in {:.2f}s.".format(t1-t0))
"""
Explanation: First we specify a sampler, and then use sample_chain to run that sampling
kernel to produce samples.
End of explanation
"""
f = plt.figure(figsize=(12, 4))
for i, param in enumerate(sts_model.parameters):
ax = f.add_subplot(1, len(sts_model.parameters), i + 1)
ax.plot(samples[i])
ax.set_title("{} samples".format(param.name))
"""
Explanation: We can sanity-check the inference by examining the parameter traces. In this case they appear to have explored multiple explanations for the data, which is good, although more samples would be helpful to judge how well the chain is mixing.
End of explanation
"""
param_samples = samples[:-1]
unconstrained_rate_samples = samples[-1][..., 0]
rate_samples = positive_bijector.forward(unconstrained_rate_samples)
plt.figure(figsize=(10, 4))
mean_lower, mean_upper = np.percentile(rate_samples, [10, 90], axis=0)
pred_lower, pred_upper = np.percentile(np.random.poisson(rate_samples),
[10, 90], axis=0)
_ = plt.plot(observed_counts, color="blue", ls='--', marker='o', label='observed', alpha=0.7)
_ = plt.plot(np.mean(rate_samples, axis=0), label='rate', color="green", ls='dashed', lw=2, alpha=0.7)
_ = plt.fill_between(np.arange(0, 30), mean_lower, mean_upper, color='green', alpha=0.2)
_ = plt.fill_between(np.arange(0, 30), pred_lower, pred_upper, color='grey', label='counts', alpha=0.2)
plt.xlabel("Day")
plt.ylabel("Daily Sample Size")
plt.title("Posterior Mean")
plt.legend()
"""
Explanation: Now for the payoff: let's see the posterior over Poisson rates! We'll also plot the 80% predictive interval over observed counts, and can check that this interval appears to contain about 80% of the counts we actually observed.
End of explanation
"""
def sample_forecasted_counts(sts_model, posterior_latent_rates,
posterior_params, num_steps_forecast,
num_sampled_forecasts):
# Forecast the future latent unconstrained rates, given the inferred latent
# unconstrained rates and parameters.
unconstrained_rates_forecast_dist = tfp.sts.forecast(sts_model,
observed_time_series=unconstrained_rate_samples,
parameter_samples=posterior_params,
num_steps_forecast=num_steps_forecast)
# Transform the forecast to positive-valued Poisson rates.
rates_forecast_dist = tfd.TransformedDistribution(
unconstrained_rates_forecast_dist,
positive_bijector)
# Sample from the forecast model following the chain rule:
# P(counts) = P(counts | latent_rates)P(latent_rates)
sampled_latent_rates = rates_forecast_dist.sample(num_sampled_forecasts)
sampled_forecast_counts = tfd.Poisson(rate=sampled_latent_rates).sample()
return sampled_forecast_counts, sampled_latent_rates
forecast_samples, rate_samples = sample_forecasted_counts(
sts_model,
posterior_latent_rates=unconstrained_rate_samples,
posterior_params=param_samples,
# Days to forecast:
num_steps_forecast=30,
num_sampled_forecasts=100)
forecast_samples = np.squeeze(forecast_samples)
def plot_forecast_helper(data, forecast_samples, CI=90):
"""Plot the observed time series alongside the forecast."""
plt.figure(figsize=(10, 4))
forecast_median = np.median(forecast_samples, axis=0)
num_steps = len(data)
num_steps_forecast = forecast_median.shape[-1]
plt.plot(np.arange(num_steps), data, lw=2, color='blue', linestyle='--', marker='o',
label='Observed Data', alpha=0.7)
forecast_steps = np.arange(num_steps, num_steps+num_steps_forecast)
CI_interval = [(100 - CI)/2, 100 - (100 - CI)/2]
lower, upper = np.percentile(forecast_samples, CI_interval, axis=0)
plt.plot(forecast_steps, forecast_median, lw=2, ls='--', marker='o', color='orange',
label=str(CI) + '% Forecast Interval', alpha=0.7)
plt.fill_between(forecast_steps,
lower,
upper, color='orange', alpha=0.2)
plt.xlim([0, num_steps+num_steps_forecast])
ymin, ymax = min(np.min(forecast_samples), np.min(data)), max(np.max(forecast_samples), np.max(data))
yrange = ymax-ymin
plt.title("{}".format('Observed time series with ' + str(num_steps_forecast) + ' Day Forecast'))
plt.xlabel('Day')
plt.ylabel('Daily Sample Size')
plt.legend()
plot_forecast_helper(observed_counts, forecast_samples, CI=80)
"""
Explanation: Forecasting
To forecast the observed counts, we'll use the standard STS tools to build a forecast distribution over the latent rates (in unconstrained space, again since STS is designed to model real-valued data), then pass the sampled forecasts through a Poisson observation model:
End of explanation
"""
surrogate_posterior = tfp.experimental.vi.build_factored_surrogate_posterior(
event_shape=pinned_model.event_shape,
bijector=constraining_bijector)
# Allow external control of optimization to reduce test runtimes.
num_variational_steps = 1000 # @param { isTemplate: true}
num_variational_steps = int(num_variational_steps)
t0 = time.time()
losses = tfp.vi.fit_surrogate_posterior(pinned_model.unnormalized_log_prob,
surrogate_posterior,
optimizer=tf.optimizers.Adam(0.1),
num_steps=num_variational_steps)
t1 = time.time()
print("Inference ran in {:.2f}s.".format(t1-t0))
plt.plot(losses)
plt.title("Variational loss")
_ = plt.xlabel("Steps")
posterior_samples = surrogate_posterior.sample(50)
param_samples = posterior_samples[:-1]
unconstrained_rate_samples = posterior_samples[-1][..., 0]
rate_samples = positive_bijector.forward(unconstrained_rate_samples)
plt.figure(figsize=(10, 4))
mean_lower, mean_upper = np.percentile(rate_samples, [10, 90], axis=0)
pred_lower, pred_upper = np.percentile(
np.random.poisson(rate_samples), [10, 90], axis=0)
_ = plt.plot(observed_counts, color='blue', ls='--', marker='o',
label='observed', alpha=0.7)
_ = plt.plot(np.mean(rate_samples, axis=0), label='rate', color='green',
ls='dashed', lw=2, alpha=0.7)
_ = plt.fill_between(
np.arange(0, 30), mean_lower, mean_upper, color='green', alpha=0.2)
_ = plt.fill_between(np.arange(0, 30), pred_lower, pred_upper, color='grey',
label='counts', alpha=0.2)
plt.xlabel('Day')
plt.ylabel('Daily Sample Size')
plt.title('Posterior Mean')
plt.legend()
forecast_samples, rate_samples = sample_forecasted_counts(
sts_model,
posterior_latent_rates=unconstrained_rate_samples,
posterior_params=param_samples,
# Days to forecast:
num_steps_forecast=30,
num_sampled_forecasts=100)
forecast_samples = np.squeeze(forecast_samples)
plot_forecast_helper(observed_counts, forecast_samples, CI=80)
"""
Explanation: VI inference
Variational inference can be problematic when inferring a full time series, like our approximate counts (as opposed to just
the parameters of a time series, as in standard STS models). The standard assumption that variables have independent posteriors is quite wrong, since each timestep is correlated with its neighbors, which can lead to underestimating uncertainty. For this reason, HMC may be a better choice for approximate inference over full time series. However, VI can be quite a bit faster, and may be useful for model prototyping or in cases where its performance can be empirically shown to be 'good enough'.
To fit our model with VI, we simply build and optimize a surrogate posterior:
End of explanation
"""
|
cfelton/myhdl_exercises
|
01_mex_shifty.ipynb
|
mit
|
def shifty(clock, reset, load, load_value, output_bit, initial_value=0):
"""
Ports:
load: input, load strobe, load the `load_value`
load_value: input, the value to be loaded
output_bit: output, the most significant bit of the internal shift register
initial_value: internal shift registers initial value (value after reset)
"""
assert isinstance(load_value.val, intbv)
# the internal shift register will be the same sizes as the `load_value`
shiftreg = Signal(intbv(initial_value,
min=load_value.min, max=load_value.max))
mask = shiftreg.max-1
# non-working template
@always_seq(clock.posedge, reset=reset)
def beh():
output_bit.next = shiftreg[0]
# for monitoring, access outside this function
shifty.shiftreg = shiftreg
return beh
"""
Explanation: MyHDL Function (module)
An introductory MyHDL tutorial presents a small example towards the beginning of the post. A MyHDL anatomy graphic (see below) is used to describe the parts of a MyHDL module. Note, the nomenclature is a little odd here: in Python a module is a file, while in MyHDL a module (sometimes called a component) is a Python function that describes a set of hardware behavior. Hardware module is commonly used to name an HDL component in a digital circuit - the usage has been propagated forward.
<center><figure>
<a href="https://www.flickr.com/photos/79765478@N08/14230879911" title="myhdl_module_anatomy by cfelton*, on Flickr"><img src="https://farm3.staticflickr.com/2932/14230879911_03ce54dcde_z.jpg" width="640" height="322" alt="myhdl_module_anatomy"></a>
<caption> MyHDL Module Anatomy </caption>
</figure></center>
A Shift Register
<!-- there is an assumption the user will know what a shift register is, these exercises are for people that know Verilog/VHDL. Not teaching digital logic from scratch !! -->
What exactly does a shift register do? In the exercise description section there is a link to a short video describing a shift register. Basically, to generate a shift register all we really need is a description of what the expected behavior is. In this case we have a parallel value, load_value, that will be serialized to a single bit, the following table shows the temporal behavior. If we have an constrained integer with a maximum value of 256, the following will be the behavior:
Time | load | ival (d) | shift (b) | obit
-----+------+----------+-----------+-----
T0 | 1 | 32 | 0000_0000 | 0
T1 | 0 | X | 0010_0000 | 0
T2 | 0 | X | 0100_0000 | 0
T3 | 0 | X | 1000_0000 | 1
T4 | 0 | X | 0000_0001 | 0
T5 | 0 | X | 0000_0010 | 0
In the above table abbreviations are used for the Signals listed in the module.
ival: initial_value
shift: shiftreg
obit: output_bit
Exercise Description
This exercise is to implement the shift register shown with the following additions:
Make the shift register circular
Add an initial condition parameter initial_value
To make the shift register (YouTube) circular, connect the most-significant-bit (msb) to the least-significant-bit (lsb).
Sections from the MyHDL manual that may be useful:
Bit indexing and slicing
Signals, Why Signal Assignments
The concat function
Fill in the body of the following and then run the test cell.
Hints
An internal signal will be used to represent the shift register. The width (max value) of the register is determined by the type of load_value.
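One possible behavioral sketch, reusing the template's shiftreg and mask signals (untested; the timing of output_bit relative to the table may need adjusting, and your solution may differ):
msb = len(shiftreg) - 1
@always_seq(clock.posedge, reset=reset)
def beh():
    if load:
        shiftreg.next = load_value
    else:
        # circular shift: the msb wraps around to the lsb
        shiftreg.next = ((shiftreg << 1) & mask) | shiftreg[msb]
    output_bit.next = shiftreg[msb]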
End of explanation
"""
stimulator(shifty)
# Note, the following waveform plotter is experimental. Using
# an external waveform viewer, like gtkwave, would be useful.
vcd.parse_and_plot('vcd/01_mex_stim.vcd')
"""
Explanation: The following function will stimulate the above MyHDL module. The stimulator always exercises the module in the same way, whereas the verification (test) uses random values and tests numerous cycles. The cell after the stimulator plots the stimulator's waveform. Warning: the embedded VCD waveform plotter is beta and very limited. It is useful for very simple waveforms. For full waveform viewing use an external tool such as gtkwave.
End of explanation
"""
test(shifty)
# View the generated VHDL
%less output/shifty.vhd
# View the generated Verilog
%less output/shifty.v
"""
Explanation: After the above shifty implementation has been coded, run the next cell to test and verify the behavior of the described digital circuit. If the test fails it will print out a number of simulation steps and some values. The VCD file can be displayed via the vcd.parse_and_plot('vcd/01_mex_test.vcd') function (same as above, with the same basic-waveform-plotter warning) for debugging, or use an external waveform viewer (e.g. gtkwave) to view the simulation waveform and debug.
End of explanation
"""
|
ini-python-course/ss15
|
notebooks/Fast Online Plotting with PyQtGraph.ipynb
|
mit
|
import pyqtgraph.examples
pyqtgraph.examples.run()
"""
Explanation: PyQtGraph
Fast Online Plotting in Python
"PyQtGraph is a pure-python graphics and GUI library built on PyQt4 / PySide and numpy. It is intended for use in mathematics / scientific / engineering applications. Despite being written entirely in python, the library is very fast due to its heavy leverage of numpy for number crunching and Qt's GraphicsView framework for fast display." - http://www.pyqtgraph.org/
PyQtGraph or Matplotlib?
If you just need to make neat publication-quality plots/figures, then Matplotlib should be your first choice. However, if you are interested in making fast plot updates (> 50 updates per sec), then PyQtGraph is probably the best library to use.
Prerequisites for this notebook:
Numpy
(optional) Basics of PyQt
This notebook covers a few basic features of the library that are sufficient to get you started.
The main topics covered here are:
Animate data stored in numpy arrays (~ a video).
How to style your plots.
How to setup a grid layout.
Refer to the examples provided in the package to learn different features of PyQtGraph. These examples can be accessed via a GUI by running the following in a python shell:
End of explanation
"""
import numpy as np # numpy is needed below to create the random data array
import pyqtgraph as pg # pg is often used as the shorthand notation
from pyqtgraph.Qt import QtCore # import QtCore from the Qt library
"""
Explanation: Animate Numpy Arrays
End of explanation
"""
app = pg.QtGui.QApplication([]) # init QApplication
"""
Explanation: pyqtgraph.Qt links to the PyQt library. We wish to use the timer() function of the pyqt library in our example. The timer function can be used if you want someething to happen “in a while” or “every once in a while”.
End of explanation
"""
x = np.random.rand(500,50,50) # create a random numpy array to display - 500 images of size 50x50
pg.setConfigOptions(antialias=True) # enable antialiasing
view = pg.GraphicsView() # create a main graphics window
view.show() # show the window
"""
Explanation: Here, app refers to an instance of Qt's QApplication class.
QApplication manages the GUI application's control flow, where all events from the window system and other sources are processed and dispatched. There can be only one QApplication object, shared by all the plots you create.
End of explanation
"""
p = pg.PlotItem() # add a plotItem
view.setCentralItem(p) # add the plotItem to the graphicsWindow and set it as central
"""
Explanation: When displaying images at a different resolution, setting antialias to True makes the graphics appear smooth without any artifacts. Antialiasing minimizes aliasing when representing a high-resolution image at a lower resolution. Other useful config options are 'background' and 'foreground' colors.
GraphicsView generates a main graphics window. The default size is (640,480). You can change this to the size of your choice by using the resize function, e.g, view.resize(50,50).
End of explanation
"""
img = pg.ImageItem(border='w', levels=(x.min(),x.max())) # create an image object
p.addItem(img) # add the imageItem to the plotItem
"""
Explanation: For a given graphics window, you can create multiple plots. Here, we created a single plot item and added it to the graphics window.
End of explanation
"""
# hide axis and set title
p.hideAxis('left'); p.hideAxis('bottom'); p.hideAxis('top'); p.hideAxis('right')
p.setTitle('Array Animation', size='25px', color='y')
# data update function
cnt=0
def animLoop():
global cnt
if cnt < x.shape[0]:
img.setImage(x[cnt])
cnt+=1
"""
Explanation: Within each plot, you can define multiple drawing items (or artists). Here, we added an image item. Examples of other items are: PlotCurveItem, ArrowItem, etc.
End of explanation
"""
# setup and start the timer
timer = QtCore.QTimer()
timer.timeout.connect(animLoop)
timer.start(0)
"""
Explanation: Here, we create a function to update the image item with new data. To this end, we use a counter to iterate over each image stored within x.
End of explanation
"""
app.exec_() # execute the app
"""
Explanation: The timer function is used to repeatedly call the animLoop with a delay of 0 between each call.
End of explanation
"""
# Animate a 3D numpy array
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore
app = pg.QtGui.QApplication([])
x = np.random.rand(500,50,50)
pg.setConfigOptions(antialias=True)
# main graphics window
view = pg.GraphicsView()
# show the window
view.show()
# add a plotItem
p = pg.PlotItem()
# add the plotItem to the graphicsWindow and set it as central
view.setCentralItem(p)
# create an image object
img = pg.ImageItem(border='w', levels=(x.min(),x.max()))
# add the imageItem to the plotItem
p.addItem(img)
# hide axis and set title
p.hideAxis('left'); p.hideAxis('bottom'); p.hideAxis('top'); p.hideAxis('right')
p.setTitle('Array Animation', size='25px', color='y')
# data generator
cnt=0
def animLoop():
global cnt
if cnt < x.shape[0]:
img.setImage(x[cnt])
cnt+=1
timer = QtCore.QTimer()
timer.timeout.connect(animLoop)
timer.start(0)
app.exec_()
"""
Explanation: Finally, you need to execute the QApplication. Any PyQtGraph code must be wrapped between the app initialization and the app execution. Here is the code all put together (execute and check):
End of explanation
"""
# imports
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore
# init qApp
app = pg.QtGui.QApplication([])
# setup the main window
view = pg.GraphicsView()
view.resize(900,500)
view.setWindowTitle('Notebook')
view.show()
# main layout
layout = pg.GraphicsLayout(border='r') # with a red bordercolor
# set the layout as a central item
view.setCentralItem(layout)
# create a text block
label = pg.LabelItem('PyQtGraph Grid Layout Example', size='25px', color='y')
# create a plot with two random curves
p1 = pg.PlotItem()
curve11 = pg.PlotCurveItem(pen=pg.mkPen(color='g', width=1))
curve12 = pg.PlotCurveItem(pen=pg.mkPen(color='b', width=1, style=QtCore.Qt.DashLine))
p1.addItem(curve11); p1.addItem(curve12)
curve11.setData(np.random.rand(100))
curve12.setData(np.random.rand(100))
# create another plot with two random curves
p2 = pg.PlotItem()
curve21 = pg.PlotCurveItem(pen=pg.mkPen(color='w', width=1, style=QtCore.Qt.DotLine))
curve22 = pg.PlotCurveItem(pen=pg.mkPen(color='c', width=1, style=QtCore.Qt.DashLine))
p2.addItem(curve21); p2.addItem(curve22)
curve21.setData(np.random.rand(100))
curve22.setData(np.random.rand(100))
# Finally organize the layout
layout.addItem(label, row=0, col=0, colspan=2)
layout.addItem(p1, row=1, col=0)
layout.addItem(p2, row=1, col=1)
app.exec_()
"""
Explanation: Exercise 1
Animate an RGB array.
Animate a 2D array (sequence of line plots). Use pg.PlotCurveItem instead of pg.ImageItem and setData instead of setImage to update the data.
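For the 2D-array case, only a few lines of the full animation script above need to change (a rough sketch; the variable names are arbitrary):
frames = np.random.rand(500, 100)                     # 500 frames, each a curve of 100 points
curve = pg.PlotCurveItem(pen=pg.mkPen('g', width=1))  # a curve item instead of an image item
p.addItem(curve)
def animLoop():
    global cnt
    if cnt < frames.shape[0]:
        curve.setData(frames[cnt])                     # setData instead of setImage
        cnt += 1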
Styling Plots
PyQtGraph provides a function called mkPen(args) to create a drawing pen that can be passed as an argument (pen = pg.mkPen()) to style while defining several plot items. A few examples of defining mkPen are:
pg.mkPen('y', width=3, style=QtCore.Qt.DashLine) # Make a dashed yellow line 3px wide
pg.mkPen(0.5) # Solid gray line 1px wide
pg.mkPen(color=(200,200,255), style=QtCore.Qt.DotLine) # Dotted pale-blue line
Exercise 2
Repeat Exercise 1 with a yellow dashed line plot animation.
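Only the pen needs to change relative to Exercise 1, e.g. something like:
curve = pg.PlotCurveItem(pen=pg.mkPen('y', width=2, style=QtCore.Qt.DashLine))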
Plots Grid Layout
You can create a grid layout for your plots using the GraphicsLayout function. The layout can then be used as a placeholder for all your plots within the main graphics window. Here is an example with two plots placed next to each other beneath a wide text block:
End of explanation
"""
|
llondon6/kerr_public
|
examples/plot_qnm_frequency.ipynb
|
mit
|
# Define which base QNM to use. Note that the same QNM with m --> -m may be used at some point.
l,m,n = 2,1,0
# Useful for development: turn on automatic module reloading
%load_ext autoreload
# Inline plotting
%matplotlib inline
# Force module recompile
%autoreload 2
# Import kerr and numpy
from kerr import leaver
from kerr.formula.zdqnm_frequencies import kappa
from numpy import linspace,array,sin,pi,zeros,arange,ones
from numpy.linalg import norm
from kerr.basics import rgb
# Setup plotting backend
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.pyplot import *
import matplotlib.pyplot as my
import matplotlib as mpl
mpl.rcParams['lines.linewidth'] = 1
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 12
mpl.rcParams['axes.labelsize'] = 20
#from kerr.formula.zdqnm_sepconstants import SC as scfit
"""
Explanation: Plot QNM Frequencies under different scenarios to demonstrate Conventions and Properties.
Summary: ...
End of explanation
"""
# Define a function to plot the real and imaginary parts of the complex frequency given l,m,n.
# Defining this function will save time/code later.
def plot_mode(l,m,n,linestyle='-',conj=None):
if conj is not None:
x_,wc_,sc_ = conj
# %%%%%%%%%%%%%%%%%%%%%%%%%%% #
# APPLY SYMMETRY RELATIONSHIP #
# FOR m --> -1*m #
# %%%%%%%%%%%%%%%%%%%%%%%%%%% #
wc_ = -wc_.conj()
sc_ = sc_.conj()
jf_range = 0.99* sin( 0.5*pi * linspace(-1,1,101) )
wc = zeros(jf_range.shape).astype(complex)
sc = zeros(jf_range.shape).astype(complex)
for k,jf in enumerate(jf_range):
wc[k],sc[k] = leaver(jf,l,m,n)
fig = figure( figsize=9*array([1,1]) )
grey = 0.8*array([1,1,1])
ax = [0,0,0,0]
#x = jf_range
x = kappa([jf_range,l,m])
jfzeroline = lambda : axvline( x[jf_range==min(abs(jf_range))], linestyle='--', color=grey )
ax[0]=subplot(2,2,1); jfzeroline()
if conj is not None:
plot( x_, wc_.real, color=grey, linewidth=4 )
plot( x, wc.real, linestyle )
# xlabel(r'$\kappa_{%i%i}(j_f)$'%(l,m))
ylabel(r'$\mathrm{Re}\,\tilde{\omega}_{%i%i%i}$' % (l,m,n) )
ax[1]=subplot(2,2,2); jfzeroline()
gca().yaxis.set_label_position("right"); gca().yaxis.tick_right()
if conj is not None:
plot( x_, wc_.imag, color=grey, linewidth=4 )
plot( x, wc.imag, linestyle )
# xlabel(r'$\kappa_{%i%i}(j_f)$'%(l,m))
ylabel(r'$\mathrm{Im}\,\tilde{\omega}_{%i%i%i}$' % (l,m,n) )
ax[2]=subplot(2,2,3); jfzeroline()
if conj is not None:
plot( x_, sc_.real, color=grey, linewidth=4 )
#plot( x, scfit[(l,m,n)](jf_range).real, color=0*grey, linestyle='--', alpha=0.1, linewidth=6 )
plot( x, sc.real, 'm'+linestyle )
xlabel(r'$\kappa_{%i%i}(j_f)$'%(l,m))
ylabel(r'$\mathrm{Re}\,\tilde{K}_{%i%i%i}$' % (l,m,n) )
ax[3]=subplot(2,2,4); jfzeroline()
gca().yaxis.set_label_position("right"); gca().yaxis.tick_right()
if conj is not None:
plot( x_, sc_.imag, color=grey, linewidth=4 )
#plot( x, -scfit[(l,m,n)](jf_range).imag, color=0*grey, linestyle='--', alpha=0.1, linewidth=6 )
plot( x, sc.imag, 'm'+linestyle )
xlabel(r'$\kappa_{%i%i}(j_f)$'%(l,m))
ylabel(r'$\mathrm{Im}\,\tilde{K}_{%i%i%i}$' % (l,m,n) )
show()
#
return x,wc,sc
"""
Explanation: Plot Single QNM Frequency on jf = [-1,1] using tabulated data
End of explanation
"""
# Plot the desired QNM
x,wc,sc = plot_mode(l,m,n)
"""
Explanation: Plot the single QNM
End of explanation
"""
plot_mode(l,-m,n);
"""
Explanation: Plot -m on separate figure for comparison
End of explanation
"""
#
plot_mode(l,-m,n,linestyle='--',conj=(x,wc,sc));
"""
Explanation: Demonstrate Symmetry Property by plotting two QNMs that differ only by the sign of m
End of explanation
"""
# Inline plotting
#%matplotlib inline
#%matplotlib notebook
mpl.rcParams['lines.linewidth'] = 1
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 16
mpl.rcParams['axes.labelsize'] = 20
#
jf = 0.68
n_range = arange(4)
#
wc = zeros(n_range.shape).astype(complex)
sc = zeros(n_range.shape).astype(complex)
wc_= zeros(n_range.shape).astype(complex)
sc_= zeros(n_range.shape).astype(complex)
for k,n in enumerate(n_range):
wc[k] ,sc[k] = leaver( jf,l, m,n )
wc_[k],sc_[k] = leaver(-jf,l,-m,n )
fig = figure( figsize=8*array([1,1]) )
ms = 8; clr = rgb(n_range.size,jet=True)
for k in range( len(n_range) ):
plot( wc[k].real, wc[k].imag, 'o', ms=ms, mec=0.3*clr[k], mfc=clr[k], alpha=0.4 )
#plot(-wc[k].real, wc[k].imag, 'ok', ms=ms, alpha=0.4, mfc='none' )
for k in range( len(n_range) ):
plot( wc_[k].real, wc_[k].imag, 's', ms=ms, mec=0.3*clr[k], mfc=clr[k], alpha=0.4 )
#plot(-wc_[k].real, wc_[k].imag, 'xk', ms=ms, alpha=0.4, mfc='none' )
# Label axes
xlabel(r'$\mathrm{Re}\,\tilde{w}_{%i%in}$' % (l,m) )
ylabel(r'$\mathrm{Im}\,\tilde{w}_{%i%in}$' % (l,m) )
print norm(wc+wc_.conj())
"""
Explanation: But note that coincident solutions correspond to pairs: (jf,l,m,n) and (-jf,l,-m,n)
End of explanation
"""
# First, let's interpolate the separation constants. This will help a lot with plotting.
# This requires interp2d
import scipy.interpolate as intpl
interp2d = intpl.interp2d
from numpy import hstack,vstack
from numpy import meshgrid
from matplotlib import cm
fig = figure( figsize=(15,10) )
ax = fig.add_subplot(111, projection='3d')
x = vstack( [wc_.real,wc.real] )
y = vstack( [wc_.imag,wc.imag] )
z = vstack( [sc_.real,sc.real] )
SCR = interp2d(x,y,z)
gca().scatter( x,y,z, c='r', marker='o',s=12)
print z
# Create grid
x_range = linspace( min(x.reshape(x.size,)), max(x.reshape(x.size,)) )
y_range = linspace( min(y.reshape(y.size,)), max(y.reshape(y.size,)) )
xx,yy = meshgrid(x_range,y_range)
zz = SCR(xx,yy)
gca().plot_surface(xx,yy,zz,cmap=cm.coolwarm,linewidth=0)
"""
Explanation: But what do I mean by coincident solutions? When solving Leaver's equations for a given l and m, there are both positive and negative frequency solutions. Let's try to visualize this.
End of explanation
"""
|
gboeing/urban-data-science
|
modules/10-spatial-models/lecture.ipynb
|
mit
|
import geopandas as gpd
import matplotlib.pyplot as plt
import pandas as pd
import pysal as ps
# load CA tracts
tracts_ca = gpd.read_file('../../data/tl_2017_06_tract/').set_index('GEOID')
# keep LA, ventura, orange counties only (and drop offshore island tracts)
to_drop = ['06037599100', '06037599000', '06111980000', '06111990100', '06111003612']
tracts_ca = tracts_ca[tracts_ca['COUNTYFP'].isin(['037', '059', '111'])].drop(index=to_drop)
# project tracts
crs = '+proj=utm +zone=11 +ellps=WGS84 +datum=WGS84 +units=m +no_defs'
tracts_ca = tracts_ca.to_crs(crs)
tracts_ca.shape
# load CA tract-level census variables
df_census = pd.read_csv('../../data/census_tracts_data_ca.csv', dtype={'GEOID10':str}).set_index('GEOID10')
df_census.shape
# merge tract geometries with census variables and create med home value 1000s
tracts = tracts_ca.merge(df_census, left_index=True, right_index=True, how='left')
tracts['med_home_value_k'] = tracts['med_home_value'] / 1000
tracts.shape
"""
Explanation: Spatial Models
Overview of today's topics:
quick refresher
spatial fixed effects
spatial regimes
spatial lag
spatial error
geographically-weighted regression
1. Quick refresher
1.1. Theory and models
Spatial models are models that include geographic information to account for spatial relationships and processes. They can take on many different forms:
Spatially-explicit regression models (with PySAL)
Agent-based models and/or cellular automata (with Mesa)
Bayesian spatial models using Markov chain Monte Carlo methods (with PyMC3)
We will focus on spatially-explicit regression models here. Spatially-explicit regression models are a type of statistical model: sets of assumptions plus mathematical relationships between variables, producing a formal representation of some theory. We are essentially trying to explain the process underlying the generation of our observed data. Spatial inference introduces explicit spatial relationships into the statistical modeling framework, as both theory-driven (e.g., spatial spillovers) and data-driven (e.g., MAUP) issues could otherwise violate modeling assumptions.
1.2. Statistical inference refresher
Statistical inference is the process of using a sample to infer the characteristics of an underlying population (from which this sample was drawn) through estimation and hypothesis testing. What is the probability distribution (the probabilities of occurrence of different possible outcome values of our response variable)? Contrast this with descriptive statistics, which focus simply on describing the characteristics of the sample itself.
Common goals of inferential statistics include:
parameter estimation and confidence intervals
hypothesis rejection
prediction and explanation
model selection
Schools of statistical inference:
frequentist
frequentists think of probability as proportion of time some outcome occurs (relative frequency)
given lots of repeated trials, how likely is the observed outcome?
concepts: statistical hypothesis testing, p-values, confidence intervals
bayesian
bayesians think of probability as amount of certainty observer has about an outcome occurring (subjective probability)
probability as a measure of how much info the observer has about the real world, updated as info changes
concepts: prior probability, likelihood, bayes' rule, posterior probability
1.3. Regression refresher
This course presumes you're already comfortable with multiple regression and OLS, as a prerequisite.
Regression assumptions:
an additive, linear relationship between response and predictors
uncorrelated predictors
uncorrelated, homoskedastic, normally-distributed errors
Regression topics:
specification: choosing variables to include and the functional form
transformation: pre-processing to improve linear fit (log, power, etc) and feature scaling
estimation: using an algorithm (such as OLS, WLS, MLE, etc) to estimate (aka, fit or train) your model's parameters
validation and diagnostics: model's goodness of fit ($R^2$), parameters' statistical significance ($t$-test and $p$-values), check errors and assumptions (diagnostic tests, residual plot, Q-Q plot, etc), outlier influence (leverage), robustness checks (alternative specifications)
resampling: cross-validation (out-of-sample prediction with train/test subsets) and bootstrapping (random subsampling to generate estimates' distribution)
model selection and regularization: bias-variance tradeoff (over/under-fitting), lasso (L1 regularization), ridge (L2 regularization), hyperparameters
2. Setup and data prep
End of explanation
"""
# choose which variables to use as predictors
predictors = ['pct_white', 'pct_built_before_1940', 'med_rooms_per_home', 'pct_bachelors_degree']
# choose a response variable and drop any rows in which it is null
response = 'med_home_value_k'
tracts = tracts.dropna(subset=[response])
tracts.shape
# inspect the descriptive stats for your model's variables
tracts[[response] + predictors].describe().T.round(2)
# create design matrix of predictors (drop nulls) and response matrix
X = tracts[predictors].dropna()
Y = tracts.loc[X.index][[response]]
# estimate linear regression model with OLS
ols = ps.model.spreg.OLS(y=Y.values,
x=X.values,
name_x=X.columns.tolist(),
name_y=response,
name_ds='tracts')
print(ols.summary)
"""
Explanation: Today we will explore a hedonic model of home prices, using a naively specified model that offers lots of opportunities for critique and enhancement. First let's get our data into the right format for estimating our model on them:
- the design matrix is a $n×k$ matrix of $n$ non-null observations on $k$ predictor variables
- the response vector is a $n×1$ vector of $n$ non-null observations on the response variable (note that PySAL wants its responses to be matrices)
End of explanation
"""
# create a new dummy variable for each county, with 1 if tract is in this county and 0 if not
for county in tracts['COUNTYFP'].unique():
new_col = f'dummy_county_{county}'
tracts[new_col] = (tracts['COUNTYFP'] == county).astype(int)
# leave out one dummy variable to prevent perfect collinearity
# i.e., a subset of predictors sums to 1 (which the full set of dummies would do)
county_dummies = [f'dummy_county_{county}' for county in tracts['COUNTYFP'].unique()]
county_dummies = county_dummies[:-1]
county_dummies
# create design matrix of predictors (drop nulls) and response matrix
X = tracts[predictors + county_dummies].dropna()
Y = tracts.loc[X.index][[response]]
# estimate linear regression model with spatial fixed effects
ols = ps.model.spreg.OLS(y=Y.values,
x=X.values,
name_x=X.columns.tolist(),
name_y=response,
name_ds='tracts')
print(ols.summary)
# now it's your turn
# what happens if you change which county dummy you excluded?
# how do the coefficients change? which ones do or do not?
"""
Explanation: That's our plain old OLS. Now let's explore different kinds of spatial models.
Types of spatially explicit models:
Spatial heterogeneity: account for systematic differences across space without explicitly modeling interdependency
spatial fixed effects: intercept varies for each spatial group
spatial regimes: intercept and coefficients vary for each spatial group
geographically weighted regression: model local relationships that vary across study area
Spatial dependence: model interdependencies between observations through space
spatial lag model: spatially-lagged endogenous variable added as predictor (because of endogeneity, cannot use OLS to estimate)
spatial error model: spatial effects in error term
spatial combo model: both lag and error
3. Spatial fixed effects
Intercept varies for each spatial group. Use dummy variables to represent the groups (counties) into which our observations (tracts) are nested. Uses OLS for estimation.
End of explanation
"""
# create design matrix of predictors (drop nulls), response matrix, and regimes vector
X = tracts[predictors].dropna()
Y = tracts.loc[X.index][[response]]
regimes = tracts.loc[X.index]['COUNTYFP']
regimes.sample(5)
# estimate spatial regimes model with OLS
olsr = ps.model.spreg.OLS_Regimes(y=Y.values,
x=X.values,
regimes=regimes.values,
name_regimes='county',
name_x=X.columns.tolist(),
name_y=response,
name_ds='tracts')
print(olsr.summary)
# now it's your turn
# read through the model output above: which county has the largest magnitude
# coefficient on pct_white? how would you interpret that in the real world?
"""
Explanation: 4. Spatial regimes
Intercept and coefficients vary for each spatial group (aka, regime). Here, the regimes are our 3 counties. In essence, this generates a separate regression model for each regime. We use OLS for estimation, but you can also combine spatial regimes with spatial autoregressive models (the latter is introduced later).
End of explanation
"""
fixed_kernel = False
spatial_kernel = 'gaussian'
search = 'golden_section'
criterion = 'AICc'
%%time
# select an adaptive (NN) bandwidth for our GWR model, given the data
centroids = tracts.loc[X.index].centroid
coords = list(zip(centroids.x, centroids.y))
sel = ps.model.mgwr.sel_bw.Sel_BW(coords=coords,
y=Y.values,
X_loc=X.values,
kernel=spatial_kernel,
fixed=fixed_kernel)
nn = sel.search(search_method=search, criterion=criterion)
# what is the selected adaptive bandwidth value?
# ie, number of NNs to use to determine locally-varying bandwidth distances
nn
%%time
# estimate the GWR model parameters
# pass fixed=False to treat bw as number of NNs (adaptive kernel)
model = ps.model.mgwr.gwr.GWR(coords=coords,
y=Y.values,
X=X.values,
bw=nn,
kernel=spatial_kernel,
fixed=fixed_kernel)
gwr = model.fit()
# inspect the results
gwr.summary()
"""
Explanation: 5. Geographically weighted regression
The problem with global regression models is that they are essentially spatial averages, obfuscating all the local variation in the process you're exploring. GWR allows us to investigate how model parameters and performance vary across the study area. It calibrates a regression model on each observation's local neighborhood then combines these into a global model for the study area. A user-defined bandwidth determines how these local models are calibrated: GWR estimates a model for each observation, using all the other observations weighted by their inverse-distance to that observation. The weighting is determined by fitting a spatial kernel to the data parameterized by the bandwidth distance.
Accordingly, the combination of bandwidth and kernel affects the smoothing (i.e., over-/under-fitting) of your model. Common kernels include the gaussian and bisquare. Bandwidth can be fixed or adaptive. If fixed, then the same distance is used for weighting across every observation's local neighborhood. However, this can introduce problems if your observations vary in density. Consider tracts: a tract in downtown LA may have 100 other tracts within 20km of it, but a tract in the Antelope Valley may have only 2 or 3 (too few for precise estimation). An adaptive bandwidth instead uses a fixed number of nearest neighbors to adjust the bandwidth distance accordingly: tracts in dense areas get a narrower bandwidth distance and tracts in sparse areas get a wider one. For more on GWR, this book offers a good gentle introduction.
We need to specify fixed vs adaptive bandwidth (adaptive), spatial kernel (gaussian), optimization technique (golden section), and a criterion to minimize (AICc).
End of explanation
"""
# a constant was added, so we'll add it to our predictors
cols = ['constant'] + predictors
cols
# turn GWR local parameter estimates into a GeoDataFrame with tract geometries
params = pd.DataFrame(gwr.params, columns=cols, index=X.index)
params = tracts[['geometry']].merge(params, left_index=True, right_index=True, how='right')
params.head()
"""
Explanation: Compare the summary statistics across the local models (at bottom of output) to the global model (above).
End of explanation
"""
# helper function to generate colormaps for GWR plots
def get_cmap(values, cmap_name='coolwarm', n=256):
import numpy as np
from matplotlib.colors import LinearSegmentedColormap as lsc
name = f'{cmap_name}_new'
cmap = plt.cm.get_cmap(cmap_name)
vmin = values.min()
vmax = values.max()
if vmax < 0:
# if all values are negative, use the negative half of the colormap
return lsc.from_list(name, cmap(np.linspace(0, 0.5, n)))
elif vmin > 0:
# if all values are positive use the positive half of the colormap
return lsc.from_list(name, cmap(np.linspace(0.5, 1, n)))
else:
# otherwise there are positive and negative values so use zero as midpoint
# and truncate the colormap such that if the original spans ± the greatest
# absolute value, we only use colors from it spanning vmin to vmax
abs_max = max(values.abs())
start = (vmin + abs_max) / (abs_max * 2)
stop = (vmax + abs_max) / (abs_max * 2)
return lsc.from_list(name, cmap(np.linspace(start, stop, n)))
# plot the spatial distribution of local parameter estimates
# set nrows, ncols to match your number of predictors!
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))
for col, ax in zip(predictors, axes.flat):
ax.set_aspect('equal')
ax.axis('off')
ax.set_title(f'Local {col} coefficients')
gdf = params.dropna(subset=[col], axis='rows')
ax = gdf.plot(ax=ax,
column=col,
cmap=get_cmap(gdf[col]),
legend=True,
legend_kwds={'shrink': 0.6})
fig.tight_layout()
"""
Explanation: A common way to report GWR results is to visualize their spatial distribution.
First, we'll create a helper function to generate (properly centered and truncated) colormaps for our subsequent visualizations.
End of explanation
"""
# turn GWR local t-values into a GeoDataFrame with tract geometries
# set t-values below significance threshold to zero then clip to ± 4
# p<.05 corresponds to |t|>1.96, and |t|>4 corresponds to p<.0001
tvals = pd.DataFrame(gwr.filter_tvals(alpha=0.05), columns=cols, index=X.index).clip(-4, 4)
tvals = tracts[['geometry']].merge(tvals, left_index=True, right_index=True, how='right')
# plot the spatial distribution of local t-values
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))
for col, ax in zip(predictors, axes.flat):
ax.set_aspect('equal')
ax.axis('off')
ax.set_title(f'Local {col} $t$ values')
gdf = tvals.dropna(subset=[col], axis='rows')
ax = gdf.plot(ax=ax,
column=col,
cmap=get_cmap(gdf[col]),
legend=True,
legend_kwds={'shrink': 0.6})
fig.tight_layout()
"""
Explanation: Above are our locally-varying parameter estimates. But they're not all statistically significantly different from zero. Where are they (in-)significant?
End of explanation
"""
# turn GWR local R-squared values into a GeoDataFrame with tract geometries
col = 'Local $R^2$ values'
r_squared = pd.DataFrame(gwr.localR2, index=X.index, columns=[col])
r_squared = tracts[['geometry']].merge(r_squared, left_index=True, right_index=True, how='right')
# plot the spatial distribution of local R-squared values
fig, ax = plt.subplots(figsize=(5, 5))
ax.set_aspect('equal')
ax.axis('off')
ax.set_title(col)
gdf = r_squared.dropna(subset=[col], axis='rows')
ax = gdf.plot(ax=ax,
column=col,
cmap='Reds',
legend=True,
legend_kwds={'shrink': 0.6})
fig.tight_layout()
# now it's your turn
# try increasing or decreasing the nearest neighbors bandwidth value above
# how does that change the model's results and visualizations?
"""
Explanation: How well does our model perform across the study area?
End of explanation
"""
# compute spatial weights for only those tracts that appear in design matrix
W = ps.lib.weights.Queen.from_dataframe(tracts.loc[X.index])
W.transform = 'r'
# compute OLS spatial diagnostics to check the nature of spatial dependence
ols = ps.model.spreg.OLS(y=Y.values,
x=X.values,
w=W,
spat_diag=True,
moran=True)
# calculate moran's I (for the response) and its significance
mi = ps.explore.esda.Moran(y=Y, w=W, two_tailed=True)
print(mi.I)
print(mi.p_sim)
# moran's I (for the residuals): moran's i, standardized i, p-value
ols.moran_res
"""
Explanation: 6. Spatial diagnostics
So far we've seen different spatial heterogeneity models. Now we'll explore spatial dependence (modeling interdependencies between observations over space), starting by using queen-contiguity spatial weights to model spatial relationships between observations and OLS to check diagnostics.
End of explanation
"""
# lagrange multiplier test for spatial lag model: stat, p
ols.lm_lag
# lagrange multiplier test for spatial error model: stat, p
ols.lm_error
"""
Explanation: Interpreting the results: a significant Moran's I suggests spatial autocorrelation, but doesn't tell us which alternative specification should be used. Lagrange Multiplier (LM) diagnostics can help with that. If one LM test is significant and the other isn't, then that tells us which model specification (spatial lag vs spatial error) to use.
End of explanation
"""
# robust lagrange multiplier test for spatial lag model: stat, p
ols.rlm_lag
# robust lagrange multiplier test for spatial error model: stat, p
ols.rlm_error
"""
Explanation: Interpreting the results: if (and only if) both the LM tests produce significant statistics, try the robust versions (the nonrobust LM tests are sensitive to each other).
End of explanation
"""
# maximum-likelihood estimation with full matrix expression
mll = ps.model.spreg.ML_Lag(y=Y.values,
x=X.values,
w=W,
method='full',
name_w='queen',
name_x=X.columns.tolist(),
name_y=response,
name_ds='tracts')
print(mll.summary)
# the spatial autoregressive parameter estimate, rho
mll.rho
"""
Explanation: So... which model specification to choose? Workflow:
If neither LM test is significant: use regular OLS.
If only one LM test is significant: use that model spec.
If both LM tests are significant: run robust versions.
If only one robust LM test is significant: use that model spec.
If both robust LM tests are significant (this can often happen with large sample sizes):
first consider if the initial model specification is actually a good fit
if so, use the spatial specification corresponding to the larger robust-LM statistic
or consider a combo model
A hint for our working example here: our model is not well-specified!
7. Spatial lag model
When the diagnostics indicate the presence of a spatial diffusion process. Uses the spatially-lagged endogenous variable as a predictor. Because of endogeneity, cannot use OLS to estimate.
Model specification:
$y = \rho W y + \beta X + u$
where $y$ is a $n \times 1$ vector of observations (response), $W$ is a $n \times n$ spatial weights matrix (thus $Wy$ is the spatially-lagged response), $\rho$ is the spatial autoregressive parameter to be estimated, $X$ is a $n \times k$ matrix of observations (exogenous predictors), $\beta$ is a $k \times 1$ vector of parameters (coefficients) to be estimated, and $u$ is a $n \times 1$ vector of errors.
End of explanation
"""
# maximum-likelihood estimation with full matrix expression
mle = ps.model.spreg.ML_Error(y=Y.values,
x=X.values,
w=W,
method='full',
name_w='queen',
name_x=X.columns.tolist(),
name_y=response,
name_ds='tracts')
print(mle.summary)
# the spatial autoregressive parameter estimate, lambda
mle.lam
# now it's your turn
# re-calculate the spatial weights matrix using distance bands and linear decay
# how does that change the diagnostics, lag model, and error model results?
"""
Explanation: Remember, from my assigned JAPA article, that the interpretation of spatial-lag models is tricky:
Due to spatial spillover, each coefficient alone does not represent the marginal effect on the response of a unit increase in the predictor. Instead, it represents the direct effect: what happens locally if you make a unit change in the predictor only in one tract. But also present are indirect effects: local spillovers in each tract from a unit predictor change in other tracts.
Refer to the article for details on how to calculate and interpret total effects.
8. Spatial error model
When the diagnostics indicate the presence of spatial error dependence (spatial effects in error term).
Model specification:
$y = \beta X + u$
where $X$ is a $n \times k$ matrix of observations (exogenous predictors), $\beta$ is a $k \times 1$ vector of parameters (coefficients) to be estimated, and $u$ is a $n \times 1$ vector of spatially autocorrelated errors. The errors $u$ follow a spatial autoregressive specification:
$u = \lambda Wu + \epsilon$
where $\lambda$ is a spatial autoregressive parameter to be estimated and $\epsilon$ is the vector of uncorrelated errors.
End of explanation
"""
gmc = ps.model.spreg.GM_Combo_Het(y=Y.values,
x=X.values,
w=W,
name_w='queen',
name_ds='tracts',
name_x=X.columns.tolist(),
name_y=response)
print(gmc.summary)
"""
Explanation: 9. Spatial lag+error combo model
Estimated with GMM (generalized method of moments). Essentially a spatial error model with endogenous explanatory variables.
Model specification:
$y = \rho W y + \beta X + u$
where $y$ is a $n \times 1$ vector of observations (response), $W$ is a $n \times n$ spatial weights matrix (thus $Wy$ is the spatially-lagged response), $\rho$ is the spatial autoregressive parameter to be estimated, $X$ is a $n \times k$ matrix of observations (exogenous predictors), $\beta$ is a $k \times 1$ vector of parameters (coefficients) to be estimated, and $u$ is a $n \times 1$ vector of spatially autocorrelated errors.
The errors $u$ follow a spatial autoregressive specification:
$u = \lambda Wu + \epsilon$
where $\lambda$ is a spatial autoregressive parameter to be estimated and $\epsilon$ is the vector of uncorrelated errors.
End of explanation
"""
|
pacoqueen/ginn
|
extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Script Magics.ipynb
|
gpl-2.0
|
import sys
"""
Explanation: Running Scripts from IPython
IPython has a %%script cell magic, which lets you run a cell in
a subprocess of any interpreter on your system, such as: bash, ruby, perl, zsh, R, etc.
It can even be a script of your own, which expects input on stdin.
End of explanation
"""
%%script python2
import sys
print 'hello from Python %s' % sys.version
%%script python3
import sys
print('hello from Python: %s' % sys.version)
"""
Explanation: Basic usage
To use it, simply pass a path or shell command to the program you want to run on the %%script line,
and the rest of the cell will be run by that script, and stdout/err from the subprocess are captured and displayed.
End of explanation
"""
%%ruby
puts "Hello from Ruby #{RUBY_VERSION}"
%%bash
echo "hello from $BASH"
"""
Explanation: IPython also creates aliases for a few common interpreters, such as bash, ruby, perl, etc.
These are all equivalent to %%script <name>
End of explanation
"""
%%bash
echo "hi, stdout"
echo "hello, stderr" >&2
%%bash --out output --err error
echo "hi, stdout"
echo "hello, stderr" >&2
print(error)
print(output)
"""
Explanation: Capturing output
You can also capture stdout/err from these subprocesses into Python variables, instead of letting them go directly to stdout/err
End of explanation
"""
%%ruby --bg --out ruby_lines
for n in 1...10
sleep 1
puts "line #{n}"
STDOUT.flush
end
"""
Explanation: Background Scripts
These scripts can be run in the background, by adding the --bg flag.
When you do this, output is discarded unless you use the --out/err
flags to store output as above.
End of explanation
"""
ruby_lines
print(ruby_lines.read())
"""
Explanation: When you do store the output of a background script, what you get are the stdout/err pipes,
rather than the text of the output.
End of explanation
"""
%%script python2 -Qnew
print 1/3
"""
Explanation: Arguments to subcommand
You can pass arguments to the subcommand as well,
such as this example instructing Python 2 to use true division as in Python 3:
End of explanation
"""
%%script --bg --out bashout bash -c "while read line; do echo $line; sleep 1; done"
line 1
line 2
line 3
line 4
line 5
"""
Explanation: You can really specify any program for %%script,
for instance here is a 'program' that echoes the lines of stdin, with delays between each line.
End of explanation
"""
import time
tic = time.time()
line = True
while True:
line = bashout.readline()
if not line:
break
sys.stdout.write("%.1fs: %s" %(time.time()-tic, line))
sys.stdout.flush()
"""
Explanation: Remember, since the output of a background script is just the stdout pipe,
you can read it as lines become available:
End of explanation
"""
|
chetan51/nupic.research
|
projects/dynamic_sparse/notebooks/replicateDense.ipynb
|
gpl-3.0
|
%load_ext autoreload
%autoreload 2
# general imports
import os
import numpy as np
# torch imports
import torch
import torch.optim as optim
import torch.optim.lr_scheduler as schedulers
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchsummary import summary
# nupic research imports
from nupic.research.frameworks.pytorch.image_transforms import RandomNoise
from nupic.torch.modules import KWinners
# local library
from networks_module.base_networks import *
from models_module.base_models import *
# local files
from utils import *
import math
# plotting
import matplotlib.pyplot as plt
from matplotlib import rcParams
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
rcParams['figure.figsize'] = (12,6)
PATH_TO_WHERE_DATASET_WILL_BE_SAVED = PATH = "~/nta/datasets"
"""
Explanation: Goal: Investigate how DSNN fares in a toy problem.
Compare following models:
- Large dense (same architecture as large sparse, but dense)
- Small dense (same number of params as large sparse, but dense)
- Large sparse
- Large sparse + dynamic sparse
End of explanation
"""
from models_module.base_models import BaseModel, SparseModel, DSNNMixedHeb
from networks_module.hebbian_networks import MLP, MLPHeb
# load dataset
config = (dict(
dataset_name="MNIST",
data_dir="~/nta/datasets",
test_noise=True
))
dataset = Dataset(config)
test_noise = True
use_kwinners = True
epochs = 15
on_perc = 0.1
# large dense
config = dict(hidden_sizes=[100,100,100], use_kwinners=use_kwinners)
network = MLP(config=config)
config = dict(debug_weights=True)
model = BaseModel(network=network, config=config)
model.setup()
print("\nLarge Dense")
large_dense = model.train(dataset, epochs, test_noise=test_noise);
"""
Explanation: Test with kwinners
End of explanation
"""
large_dense
results = large_dense
h, w = math.ceil(len(results)/4), 4
combinations = []
for i in range(h):
for j in range(w):
combinations.append((i,j))
fig, axs = plt.subplots(h, w, gridspec_kw={'hspace': 0.5, 'wspace': 0.5})
fig.set_size_inches(16,16)
for (i, j), k in zip(combinations[:len(results)], sorted(results.keys())):
axs[i, j].plot(range(len(results[k])), results[k])
axs[i, j].set_title(k)
"""
Explanation: Debugging the dense model
End of explanation
"""
|
wzxiong/DAVIS-Machine-Learning
|
labs/lab2.ipynb
|
mit
|
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
from sklearn.linear_model import LinearRegression, Ridge, RidgeCV, Lasso, LassoCV
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error
%matplotlib inline
plt.style.use('ggplot')
datafolder = "../data/"
# In R, I exported the dataset from package 'ISLR' to a csv file.
df = pd.read_csv(datafolder+'Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()
df.head()
dummies = pd.get_dummies(df[['League', 'Division', 'NewLeague']])
dummies.info()
print(dummies.head())
y = df.Salary
# Drop the column with the independent variable (Salary), and columns for which we created dummy variables
X_ = df.drop(['Salary', 'League', 'Division', 'NewLeague'], axis=1).astype('float64')
# Define the feature set X.
X = pd.concat([X_, dummies[['League_N', 'Division_W', 'NewLeague_N']]], axis=1)
X.info()
X.head(5)
"""
Explanation: Ridge regression and model selection
Modified from the github repo: https://github.com/JWarmenhoven/ISLR-python which is based on the book by James et al. Intro to Statistical Learning.
Loading data
End of explanation
"""
alphas = 10**np.linspace(10,-2,100)*0.5
ridge = Ridge()
coefs = []
for a in alphas:
ridge.set_params(alpha=a)
ridge.fit(scale(X), y)
coefs.append(ridge.coef_)
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
ax.set_xlim(ax.get_xlim()[::-1]) # reverse axis
plt.axis('tight')
plt.xlabel('lambda')
plt.ylabel('weights')
plt.title('Ridge coefficients as a function of the regularization');
"""
Explanation: Ridge Regression
End of explanation
"""
|
tensorflow/docs-l10n
|
site/ko/hub/tutorials/tf_hub_delf_module.ipynb
|
apache-2.0
|
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
!pip install scikit-image
from absl import logging
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image, ImageOps
from scipy.spatial import cKDTree
from skimage.feature import plot_matches
from skimage.measure import ransac
from skimage.transform import AffineTransform
from six import BytesIO
import tensorflow as tf
import tensorflow_hub as hub
from six.moves.urllib.request import urlopen
"""
Explanation: How to match images using DELF and TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf_hub_delf_module"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/hub/tutorials/tf_hub_delf_module.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/hub/tutorials/tf_hub_delf_module.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/hub/tutorials/tf_hub_delf_module.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
TensorFlow Hub (TF-Hub) is a platform for sharing machine learning expertise packaged in reusable resources, notably pre-trained modules.
In this colab, we use a module that packages the DELF neural network and image-processing logic to identify keypoints and their descriptors. The weights of the neural network were trained on images of landmarks, as described in this paper.
Setup
End of explanation
"""
#@title Choose images
images = "Bridge of Sighs" #@param ["Bridge of Sighs", "Golden Gate", "Acropolis", "Eiffel tower"]
if images == "Bridge of Sighs":
# from: https://commons.wikimedia.org/wiki/File:Bridge_of_Sighs,_Oxford.jpg
# by: N.H. Fischer
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/2/28/Bridge_of_Sighs%2C_Oxford.jpg'
# from https://commons.wikimedia.org/wiki/File:The_Bridge_of_Sighs_and_Sheldonian_Theatre,_Oxford.jpg
# by: Matthew Hoser
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/c/c3/The_Bridge_of_Sighs_and_Sheldonian_Theatre%2C_Oxford.jpg'
elif images == "Golden Gate":
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/1/1e/Golden_gate2.jpg'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/3/3e/GoldenGateBridge.jpg'
elif images == "Acropolis":
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/c/ce/2006_01_21_Ath%C3%A8nes_Parth%C3%A9non.JPG'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/5/5c/ACROPOLIS_1969_-_panoramio_-_jean_melis.jpg'
else:
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/d/d8/Eiffel_Tower%2C_November_15%2C_2011.jpg'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/a/a8/Eiffel_Tower_from_immediately_beside_it%2C_Paris_May_2008.jpg'
"""
Explanation: Data
In the next cell, we specify the URLs of the two images that DELF will process in order to match and compare them.
End of explanation
"""
def download_and_resize(name, url, new_width=256, new_height=256):
path = tf.keras.utils.get_file(url.split('/')[-1], url)
image = Image.open(path)
image = ImageOps.fit(image, (new_width, new_height), Image.ANTIALIAS)
return image
image1 = download_and_resize('image_1.jpg', IMAGE_1_URL)
image2 = download_and_resize('image_2.jpg', IMAGE_2_URL)
plt.subplot(1,2,1)
plt.imshow(image1)
plt.subplot(1,2,2)
plt.imshow(image2)
"""
Explanation: Download, resize, save, and display the images.
End of explanation
"""
delf = hub.load('https://tfhub.dev/google/delf/1').signatures['default']
def run_delf(image):
np_image = np.array(image)
float_image = tf.image.convert_image_dtype(np_image, tf.float32)
return delf(
image=float_image,
score_threshold=tf.constant(100.0),
image_scales=tf.constant([0.25, 0.3536, 0.5, 0.7071, 1.0, 1.4142, 2.0]),
max_feature_num=tf.constant(1000))
result1 = run_delf(image1)
result2 = run_delf(image2)
"""
Explanation: Applying the DELF module to the data
The DELF module takes an image as input and describes its noteworthy points with vectors. The following cell contains the core of this colab's logic.
End of explanation
"""
#@title TensorFlow is not needed for this post-processing and visualization
def match_images(image1, image2, result1, result2):
distance_threshold = 0.8
# Read features.
num_features_1 = result1['locations'].shape[0]
print("Loaded image 1's %d features" % num_features_1)
num_features_2 = result2['locations'].shape[0]
print("Loaded image 2's %d features" % num_features_2)
# Find nearest-neighbor matches using a KD tree.
d1_tree = cKDTree(result1['descriptors'])
_, indices = d1_tree.query(
result2['descriptors'],
distance_upper_bound=distance_threshold)
# Select feature locations for putative matches.
locations_2_to_use = np.array([
result2['locations'][i,]
for i in range(num_features_2)
if indices[i] != num_features_1
])
locations_1_to_use = np.array([
result1['locations'][indices[i],]
for i in range(num_features_2)
if indices[i] != num_features_1
])
# Perform geometric verification using RANSAC.
_, inliers = ransac(
(locations_1_to_use, locations_2_to_use),
AffineTransform,
min_samples=3,
residual_threshold=20,
max_trials=1000)
print('Found %d inliers' % sum(inliers))
# Visualize correspondences.
_, ax = plt.subplots()
inlier_idxs = np.nonzero(inliers)[0]
plot_matches(
ax,
image1,
image2,
locations_1_to_use,
locations_2_to_use,
np.column_stack((inlier_idxs, inlier_idxs)),
matches_color='b')
ax.axis('off')
ax.set_title('DELF correspondences')
match_images(image1, image2, result1, result2)
"""
Explanation: Matching the images using their locations and descriptor vectors
End of explanation
"""
|
peterwittek/qml-rg
|
Archiv_Session_Spring_2017/Exercises/05_aps_capcha.ipynb
|
gpl-3.0
|
import os
import numpy as np
import tools as im
from matplotlib import pyplot as plt
from skimage.transform import resize
%matplotlib inline
path=os.getcwd()+'/' # finds the path of the folder in which the notebook is
path_train=path+'images/train/'
path_test=path+'images/test/'
path_real=path+'images/real_world/'
"""
Explanation: Finding the right captcha with Keras
End of explanation
"""
def prep_datas(xset,xlabels):
X=list(xset)
for i in range(len(X)):
X[i]=resize(X[i],(32,32,1)) #reduce the size of the image from 100X100 to 32X32. Also flattens the color levels
X=np.reshape(X,(len(X),1,32,32)) # reshape the liste to have the form required by keras (theano), ie (1,32,32)
X=np.array(X) #transforms it into an array
Y = np.eye(2, dtype='uint8')[xlabels] # generates vectors, here of two elements as required by keras (number of classes)
return X,Y
"""
Explanation: We first define a function to prepare the data in the format Keras (Theano backend) expects. The function also reduces the size of the images from 100x100 to 32x32.
End of explanation
"""
training_set, training_labels = im.load_images(path_train)
test_set, test_labels = im.load_images(path_test)
X_train,Y_train=prep_datas(training_set,training_labels)
X_test,Y_test=prep_datas(test_set,test_labels)
"""
Explanation: We then load the training set and the test set and prepare them with the function prep_datas.
End of explanation
"""
i=11
plt.subplot(1,2,1)
plt.imshow(training_set[i],cmap='gray')
plt.subplot(1,2,2)
plt.imshow(X_train[i][0],cmap='gray')
"""
Explanation: Image before/after compression
End of explanation
"""
# import the necessary packages
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dense
from keras.optimizers import SGD
# this code comes from http://www.pyimagesearch.com/2016/08/01/lenet-convolutional-neural-network-in-python/
class LeNet:
@staticmethod
def build(width, height, depth, classes, weightsPath=None):
# initialize the model
model = Sequential()
# first set of CONV => RELU => POOL
model.add(Convolution2D(20, 5, 5, border_mode="same",input_shape=(depth, height, width)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# second set of CONV => RELU => POOL
model.add(Convolution2D(50, 5, 5, border_mode="same"))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# set of FC => RELU layers
model.add(Flatten())
model.add(Dense(500))
model.add(Activation("relu"))
# softmax classifier
model.add(Dense(classes))
model.add(Activation("softmax"))
# return the constructed network architecture
return model
"""
Explanation: Lenet neural network
End of explanation
"""
model = LeNet.build(width=32, height=32, depth=1, classes=2)
opt = SGD(lr=0.01)#Sochastic gradient descent with learning rate 0.01
model.compile(loss="categorical_crossentropy", optimizer=opt,metrics=["accuracy"])
model.fit(X_train, Y_train, batch_size=10, nb_epoch=300,verbose=1)
y_pred = model.predict_classes(X_test)
print(y_pred)
print(test_labels)
"""
Explanation: We build the neural network and fit it on the training set
End of explanation
"""
real_world_set=[]
for i in np.arange(1,73):
filename=path+'images/real_world/'+str(i)+'.png'
real_world_set.append(im.deshear(filename))
fake_label=np.ones(len(real_world_set),dtype='int32')
X_real,Y_real=prep_datas(real_world_set,fake_label)
y_pred = model.predict_classes(X_real)
"""
Explanation: We now compare with the real world images (with the deshear method)
End of explanation
"""
f=open(path+'images/real_world/labels.txt',"r")
lines=f.readlines()
result=[]
for x in lines:
result.append((x.split(' ')[1]).replace('\n',''))
f.close()
result=np.array([int(x) for x in result])
result[result>1]=1
plt.plot(y_pred,'o')
plt.plot(2*result,'o')
plt.ylim(-0.5,2.5);
"""
Explanation: with the labels of Peter
End of explanation
"""
|
Neuroglycerin/neukrill-net-work
|
notebooks/troubleshooting_and_sysadmin/Opening test.py pickles.ipynb
|
mit
|
import pickle
cd /disk/scratch/neuroglycerin/dump/
ls
with open("test.py.pkl","rb") as f:
p = pickle.load(f)
len(p)
p[0].shape[0]*80
"""
Explanation: Two important submission csvs were written wrong, but in anticipation of this problem we pickled the results. Opening them now.
End of explanation
"""
import numpy as np
y = np.vstack(p)
y.shape
"""
Explanation: Looks like everything should be there, just have to figure out why it didn't write these to the csv right. Next part was the stack:
End of explanation
"""
import neukrill_net.utils
cd ~/repos/neukrill-net-work/
settings = neukrill_net.utils.Settings("settings.json")
import os
names = [os.path.basename(n) for n in settings.image_fnames['test']]
len(names)
"""
Explanation: That worked, what about finding the name for the csv?
End of explanation
"""
cd /disk/scratch/neuroglycerin/submissions/
ls
!gzip -d alexnet_based_40aug.csv.gz
!wc -l alexnet_based_40aug.csv
"""
Explanation: That also seems to be fine...
Only explanation I can think of at this point is that it somehow redefined the image_fname dict to be over one of the splits. But that makes no sense because the image_fname dictionary that gets modified is a different instance to that in the test.py script.
Looking at the submission csvs:
End of explanation
"""
130400/80
"""
Explanation: The splits would have been equal to the full dataset divided by 80:
End of explanation
"""
neukrill_net.utils.write_predictions("alexnet_based_40aug.csv",y,names,settings.classes)
"""
Explanation: Including the header, that's exactly correct.
All we can do now is rewrite the submission csv with the full names and submit it to check it's valid.
End of explanation
"""
cd /disk/scratch/neuroglycerin/dump/
with open("test2.py.pkl","rb") as f:
p16aug = pickle.load(f)
y16aug = np.vstack(p16aug)
y16aug.shape
cd /disk/scratch/neuroglycerin/submissions/
neukrill_net.utils.write_predictions("alexnet_based_16aug.csv.gz",y16aug,names,settings.classes)
"""
Explanation: And we have to do the same for 16aug predictions.
End of explanation
"""
|
feststelltaste/software-analytics
|
courses/20191014_ML-Summit/Analyzing Java Dependencies with jdeps (Demo Notebook).ipynb
|
gpl-3.0
|
from ozapfdis import jdeps
deps = jdeps.read_jdeps_file(
"../datasets/jdeps_dropover.txt",
filter_regex="at.dropover")
deps.head()
"""
Explanation: Questions
Which types / classes have unwanted dependencies in our code?
Which group of types / classes is highly cohesive but lowly coupled?
Idea
Using JDK's jdeps command line utility, we can extract the existing dependencies between Java types:
bash
jdeps -v dropover-classes.jar > jdeps.txt
Data
Read data in with <b>O</b>pen <b>Z</b>ippy <b>A</b>nalysis <b>P</b>latform <b>F</b>or <b>D</b>ata <b>I</b>n <b>S</b>oftware
End of explanation
"""
deps = deps[['from', 'to']]
deps['group_from'] = deps['from'].str.split(".").str[2]
deps['group_to'] = deps['to'].str.split(".").str[2]
deps.head()
"""
Explanation: Modeling
Extract the information about existing modules based on path naming conventions
End of explanation
"""
from ausi import d3
d3.create_d3force(
deps,
"jdeps_demo_output/dropover_d3forced",
group_col_from="group_from",
group_col_to="group_to")
d3.create_semantic_substrate(
deps,
"jdeps_demo_output/dropover_semantic_substrate")
d3.create_hierarchical_edge_bundling(
deps,
"jdeps_demo_output/dropover_bundling")
"""
Explanation: Visualization
Output results with <b>A</b>n <b>U</b>nified <b>S</b>oftware <b>I</b>ntegrator
End of explanation
"""
|
rvm-segfault/edx
|
python_for_data_sci_dse200x/week3/Intro Notebook.ipynb
|
apache-2.0
|
365 * 24 * 60 * 60
print(str(_/1e6) + ' million')
x = 4 + 3
print (x)
"""
Explanation: Number of seconds in a year
End of explanation
"""
%matplotlib inline
from matplotlib.pyplot import plot
plot([0,1,0,1])
"""
Explanation: This is a markdown cell
This is heading 2
This is heading 3
Hi!
One Fish
Two Fish
Red Fish
Blue Fish
Example Bold Text here
example italic text here
http://google.com
This is a Latex equation
$\int_0^\infty x^{-\alpha}$
End of explanation
"""
|
HumanCompatibleAI/imitation
|
examples/1_train_bc.ipynb
|
mit
|
from stable_baselines3 import PPO
from stable_baselines3.ppo import MlpPolicy
import gym
env = gym.make("CartPole-v1")
expert = PPO(
policy=MlpPolicy,
env=env,
seed=0,
batch_size=64,
ent_coef=0.0,
learning_rate=0.0003,
n_epochs=10,
n_steps=64,
)
expert.learn(1000) # Note: set to 100000 to train a proficient expert
"""
Explanation: Train an Agent using Behavior Cloning
Behavior cloning is the most naive approach to imitation learning.
We take the transitions of trajectories taken by some expert and use them as training samples to train a new policy.
The method has many drawbacks and often does not work.
However in this example, where we train an agent for the CartPole-v1 environment, it is feasible.
First we need some kind of expert in CartPole-v1 so we can sample some expert trajectories.
For convenience we just train one using the stable-baselines3 library.
End of explanation
"""
from stable_baselines3.common.evaluation import evaluate_policy
reward, _ = evaluate_policy(expert, env, 10)
print(reward)
"""
Explanation: Let's quickly check if the expert is any good.
We usually should be able to reach a reward of 500, which is the maximum achievable value.
End of explanation
"""
from imitation.data import rollout
from imitation.data.wrappers import RolloutInfoWrapper
from stable_baselines3.common.vec_env import DummyVecEnv
rollouts = rollout.rollout(
expert,
DummyVecEnv([lambda: RolloutInfoWrapper(env)]),
rollout.make_sample_until(min_timesteps=None, min_episodes=50),
)
transitions = rollout.flatten_trajectories(rollouts)
"""
Explanation: Now we can use the expert to sample some trajectories.
We flatten them right away since we are only interested in the individual transitions for behavior cloning.
imitation comes with a number of helper functions that make collecting those transitions really easy. First we collect 50 episode rollouts, then we flatten them to just the transitions that we need for training.
Note that the rollout function requires a vectorized environment and needs the RolloutInfoWrapper around each of the environments.
End of explanation
"""
print(
f"""The `rollout` function generated a list of {len(rollouts)} {type(rollouts[0])}.
After flattening, this list is turned into a {type(transitions)} object containing {len(transitions)} transitions.
The transitions object contains arrays for: {', '.join(transitions.__dict__.keys())}.
"""
)
"""
Explanation: Lets have a quick look at what we just generated using those library functions:
End of explanation
"""
from imitation.algorithms import bc
bc_trainer = bc.BC(
observation_space=env.observation_space,
action_space=env.action_space,
demonstrations=transitions,
)
"""
Explanation: After collecting our transitions, it's time to set up our behavior cloning algorithm.
End of explanation
"""
reward_before_training, _ = evaluate_policy(bc_trainer.policy, env, 10)
print(f"Reward before training: {reward_before_training}")
"""
Explanation: As you can see the untrained policy only gets poor rewards:
End of explanation
"""
bc_trainer.train(n_epochs=1)
reward_after_training, _ = evaluate_policy(bc_trainer.policy, env, 10)
print(f"Reward after training: {reward_after_training}")
"""
Explanation: After training, we can match the rewards of the expert (500):
End of explanation
"""
|
danielfrg/pelican-ipynb
|
pelican_jupyter/tests/pelican/markup-nbdata/content/nbdata-file.ipynb
|
apache-2.0
|
a = 1
a
b = 'pew'
b
%matplotlib inline
import matplotlib.pyplot as plt
from pylab import *
x = linspace(0, 5, 10)
y = x ** 2
figure()
plot(x, y, 'r')
xlabel('x')
ylabel('y')
title('title')
show()
import numpy as np
num_points = 130
y = np.random.random(num_points)
plt.plot(y)
"""
Explanation: This Jupyter notebook uses an .nbdata file for metadata
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur purus mi, sollicitudin ac justo a, dapibus ultrices dolor. Curabitur id eros mattis, tincidunt ligula at, condimentum urna. Morbi accumsan, risus eget porta consequat, tortor nibh blandit dui, in sodales quam elit non erat. Aenean lorem dui, lacinia a metus eu, accumsan dictum urna. Sed a egestas mauris, non porta nisi. Suspendisse eu lacinia neque. Morbi gravida eros non augue pharetra, condimentum auctor purus porttitor.
Header 2
End of explanation
"""
%%latex
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
"""
Explanation: This is some text, here comes some latex
End of explanation
"""
import re
text = 'foo bar\t baz \tqux'
re.split('\s+', text)
"""
Explanation: Apos?
End of explanation
"""
|
walkon302/CDIPS_Recommender
|
notebook_versions/Exploring_Data_v2.ipynb
|
apache-2.0
|
import sys
import os
sys.path.append(os.getcwd()+'/../')
# other
import numpy as np
import glob
import pandas as pd
import ntpath
#keras
from keras.preprocessing import image
# plotting
import seaborn as sns
sns.set_style('white')
import matplotlib.pyplot as plt
%matplotlib inline
# debuggin
from IPython.core.debugger import Tracer
#stats
import scipy.stats as stats
import bqplot.pyplot as bqplt
"""
Explanation: Data Exploration
End of explanation
"""
user_profile = pd.read_csv('../data_user_view_buy/user_profile.csv',sep='\t',header=None)
user_profile.columns = ['user_id','buy_spu','buy_sn','buy_ct3','view_spu','view_sn','view_ct3','time_interval','view_cnt','view_seconds']
string =str(user_profile.buy_spu.as_matrix()[3002])
print(string)
print(string[0:7]+'-'+string[7::])
#print(str(user_profile.buy_spu.as_matrix()[0])[7::])
user_profile.head(10)
print('n rows: {0}').format(len(user_profile))
"""
Explanation: Data File
End of explanation
"""
def plot_trajectory_scatter(user_profile,scatter_color_col=None,samplesize=50,size=10,savedir=None):
plt.figure(figsize=(12,1*samplesize/10))
for ui,user_id in enumerate(np.random.choice(user_profile.user_id.unique(),samplesize)):
trajectory = user_profile.loc[user_profile.user_id==user_id,]
time = 0-trajectory.time_interval.as_matrix()/60.0/60.0/24.0
# add image or not
if scatter_color_col is not None:
c = trajectory[scatter_color_col].as_matrix()
else:
c = np.ones(len(trajectory))
plt.scatter(time,np.ones(len(time))*ui,s=size,c=c,edgecolors="none",cmap="jet")
plt.axvline(x=0,linewidth=1)
sns.despine()
plt.title('example user trajectories')
plt.xlabel('days to purchase')
if savedir is not None:
plt.savefig(savedir,dpi=100)
"""
Explanation: Plotting Functions
End of explanation
"""
user_profile.describe()
print('unique users:{0}').format(len(user_profile.user_id.unique()))
print('unique items viewed:{0}').format(len(user_profile.view_spu.unique()))
print('unique items bought:{0}').format(len(user_profile.buy_spu.unique()))
print('unique categories viewed:{0}').format(len(user_profile.view_ct3.unique()))
print('unique categories bought:{0}').format(len(user_profile.buy_ct3.unique()))
print('unique brands viewed:{0}').format(len(user_profile.view_sn.unique()))
print('unique brands bought:{0}').format(len(user_profile.buy_sn.unique()))
samplesize = 2000
plt.figure(figsize=(12,4))
plt.subplot(1,3,1)
plt.hist(np.random.choice(user_profile.time_interval.as_matrix()/60.0/60.0,samplesize))
sns.despine()
plt.title('sample histogram from "time interval"')
plt.xlabel('hours from view to buy')
plt.ylabel('counts of items')
plt.subplot(1,3,2)
plt.hist(np.random.choice(user_profile.view_cnt.as_matrix(),samplesize))
sns.despine()
plt.title('sample histogram from "view count"')
plt.xlabel('view counts')
plt.ylabel('counts of items')
plt.subplot(1,3,3)
plt.hist(np.random.choice(user_profile.view_seconds.as_matrix(),samplesize))
sns.despine()
plt.title('sample histogram from "view lengths"')
plt.xlabel('view lengths (seconds)')
plt.ylabel('counts of items')
"""
Explanation: Descriptions of Data
End of explanation
"""
print('shortest time interval')
print(user_profile.time_interval.min())
print('longest time interval')
print(user_profile.time_interval.max()/60.0/60.0/24)
"""
Explanation: there are many items that are viewed more than a day before buying
most items are viewed less than 10 times and for less than a couple minutes (though need to zoom in)
End of explanation
"""
mean_time_interval = np.array([])
samplesize =1000
for user_id in np.random.choice(user_profile.user_id.unique(),samplesize):
mean_time_interval = np.append(mean_time_interval, user_profile.loc[user_profile.user_id==user_id,'time_interval'].mean())
plt.figure(figsize=(12,3))
plt.hist(mean_time_interval/60.0,bins=200)
sns.despine()
plt.title('sample histogram of average length for user trajectories"')
plt.xlabel('minutes')
plt.ylabel('counts of items out of '+str(samplesize))
"""
Explanation: longest span from viewing to buying is 6 days
Average Time for Items Viewed before Being Bought
End of explanation
"""
plt.figure(figsize=(12,3))
plt.hist(mean_time_interval/60.0,bins=1000)
plt.xlim(0,100)
sns.despine()
plt.title('sample histogram of average length for user trajectories"')
plt.xlabel('minutes')
plt.ylabel('counts of items out of '+str(samplesize))
"""
Explanation: 5% look like they have relatively short sessions (maybe within one sitting)
End of explanation
"""
plt.figure(figsize=(8,3))
plt.hist(mean_time_interval/60.0,bins=200,cumulative=True,normed=True)
plt.xlim(0,2000)
sns.despine()
plt.title('sample cdf of average length for user trajectories"')
plt.xlabel('minutes')
plt.ylabel('counts of items out of '+str(samplesize))
"""
Explanation: zooming in to look at the shortest sessions.
about 7% have sessions <10 minutes
End of explanation
"""
user_id = 1606682799
trajectory = user_profile.loc[user_profile.user_id==user_id,]
trajectory= trajectory.sort_values(by='time_interval',ascending=False)
trajectory
"""
Explanation: 20% have sessions <100 minutes
Example Trajectories
End of explanation
"""
plot_trajectory_scatter(user_profile)
"""
Explanation: this is an example trajectory of someone who browsed a few items and then bought item 31, all within the same session.
End of explanation
"""
samplesize =1000
number_of_times_item_bought = np.empty(samplesize)
number_of_times_item_viewed = np.empty(samplesize)
for ii,item_id in enumerate(np.random.choice(user_profile.view_spu.unique(),samplesize)):
number_of_times_item_bought[ii] = len(user_profile.loc[user_profile.buy_spu==item_id,'user_id'].unique()) # assume the same user would not buy the same product
number_of_times_item_viewed[ii] = len(user_profile.loc[user_profile.view_spu==item_id]) # same user can view the same image more than once for this count
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.bar(np.arange(len(number_of_times_item_bought)),number_of_times_item_bought)
sns.despine()
plt.title('item popularity (purchases)')
plt.xlabel('item')
plt.ylabel('# of times items were bought')
plt.subplot(1,2,2)
plt.hist(number_of_times_item_bought,bins=100)
sns.despine()
plt.title('item popularity (purchases)')
plt.xlabel('# of times items were bought sample size='+str(samplesize))
plt.ylabel('# of items')
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.bar(np.arange(len(number_of_times_item_viewed)),number_of_times_item_viewed)
sns.despine()
plt.title('item popularity (views)')
plt.xlabel('item')
plt.ylabel('# of times items were viewed')
plt.subplot(1,2,2)
plt.hist(number_of_times_item_bought,bins=100)
sns.despine()
plt.title('item popularity (views) sample size='+str(samplesize))
plt.xlabel('# of times items were viewed')
plt.ylabel('# of items')
plt.figure(figsize=(6,4))
plt.subplot(1,1,1)
thresh =30
include = number_of_times_item_bought<thresh
plt.scatter(number_of_times_item_viewed[include],number_of_times_item_bought[include],)
(r,p) = stats.pearsonr(number_of_times_item_viewed[include],number_of_times_item_bought[include])
sns.despine()
plt.xlabel('number of times viewed')
plt.ylabel('number of times bought')
plt.title('r='+str(np.round(r,2))+' data truncated buys<'+str(thresh))
"""
Explanation: here are 50 random subjects and when they view items (could make into an interactive plot)
What's the distribution of items that are bought? Are there some items that are much more popular than others?
End of explanation
"""
samplesize =1000
items_bought_per_user = np.empty(samplesize)
items_viewed_per_user = np.empty(samplesize)
for ui,user_id in enumerate(np.random.choice(user_profile.user_id.unique(),samplesize)):
items_bought_per_user[ui] = len(user_profile.loc[user_profile.user_id==user_id,'buy_spu'].unique())
items_viewed_per_user[ui] = len(user_profile.loc[user_profile.user_id==user_id,'view_spu'].unique())
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.hist(items_bought_per_user)
sns.despine()
plt.title('number of items bought per user (sample of 1000)')
plt.xlabel('# items bought')
plt.ylabel('# users')
plt.subplot(1,2,2)
plt.hist(items_viewed_per_user)
sns.despine()
plt.title('number of items viewed per user (sample of 1000)')
plt.xlabel('# items viewed')
plt.ylabel('# users')
"""
Explanation: Items bought and viewed per user?
End of explanation
"""
urls = pd.read_csv('../../deep-learning-models-master/img/eval_img_url.csv',header=None)
urls.columns = ['spu','url']
print(len(urls))
urls.head(10)
urls[['spu','url']].groupby(['spu']).agg(['count']).head()
"""
Explanation: How many times did the user buy an item he/she already looked at?
Image URLs
How many of the SPUs in our dataset (smaller) have urls in our url.csv?
End of explanation
"""
urls.loc[urls.spu==357870273655002,'url'].as_matrix()
urls.loc[urls.spu==357889732772303,'url'].as_matrix()
"""
Explanation: items with more than one url?
End of explanation
"""
#urls.loc[urls.spu==1016200950427238422,'url']
tmp_urls = urls.loc[urls.spu==1016200950427238422,'url'].as_matrix()
tmp_urls
from urllib import urlretrieve
import time
# scrape images
for i,tmp_url in enumerate(tmp_urls):
urlretrieve(tmp_url, '../data_img_tmp/{}.jpg'.format(i))
#time.sleep(3)
# plot them.
print('two images from url with same spu (ugh)')
plt.figure(figsize=(8,3))
for i,tmp_url in enumerate(tmp_urls):
img_path= '../data_img_tmp/{}.jpg'.format(i)
img = image.load_img(img_path, target_size=(224, 224))
plt.subplot(1,len(tmp_urls),i+1)
plt.imshow(img)
plt.grid(b=False)
"""
Explanation: these are the same item, just different images.
End of explanation
"""
urls.spu[0]
urls.url[0]
"""
Explanation: These are different though!
End of explanation
"""
view_spus = user_profile.view_spu.unique()
contained = 0
spus_with_url = list(urls.spu.as_matrix())
for view_spu in view_spus:
if view_spu in spus_with_url:
contained+=1
print(contained/np.float(len(view_spus)))
buy_spus = user_profile.buy_spu.unique()
contained = 0
spus_with_url = list(urls.spu.as_matrix())
for buy_spu in buy_spus:
if buy_spu in spus_with_url:
contained+=1
print(contained/np.float(len(buy_spus)))
"""
Explanation: the url contains the spu, but I'm not sure what the other numbers are. The goods_num? The category etc?
End of explanation
"""
buy_spu in spus_with_url
len(urls.spu.unique())
len(user_profile.view_spu.unique())
"""
Explanation: we only have the url for 7% of the bought items and 9% of the viewed items
End of explanation
"""
spu_fea = pd.read_pickle("../data_nn_features/spu_fea.pkl") #takes forever to load
spu_fea['view_spu']=spu_fea['spu_id']
user_profile_w_features = user_profile.merge(spu_fea,on='view_spu',how='left')
print('before merge nrow: {0}').format(len(user_profile))
print('after merge nrows:{0}').format(len(user_profile_w_features))
print('number of items with features: {0}').format(len(spu_fea))
spu_fea.head()
# merge with userdata
spu_fea['view_spu']=spu_fea['spu_id']
user_profile_w_features = user_profile.merge(spu_fea,on='view_spu',how='left')
print('before merge nrow: {0}').format(len(user_profile))
print('after merge nrows:{0}').format(len(user_profile_w_features))
user_profile_w_features['has_features']=user_profile_w_features.groupby(['view_spu'])['spu_id'].apply(lambda x: np.isnan(x))
user_profile_w_features.has_features= user_profile_w_features.has_features.astype('int')
user_profile_w_features.head()
"""
Explanation: Are the images we have in this new dataset?
at the moment, I don't know how to find the spu of the images we have.
Viewing the Dataset with the Feature Data Merged In
End of explanation
"""
plot_trajectory_scatter(user_profile_w_features,scatter_color_col='has_features',samplesize=100,size=10,savedir='../../test.png')
"""
Explanation: Plotting Trajectories and Seeing How many features we have
End of explanation
"""
1-(user_profile_w_features['features'].isnull()).mean()
"""
Explanation: What percent of rows have features?
End of explanation
"""
1-user_profile_w_features.groupby(['view_spu'])['spu_id'].apply(lambda x: np.isnan(x)).mean()
buy_spus = user_profile.buy_spu.unique()
contained = 0
spus_with_features = list(spu_fea.spu_id.as_matrix())
for buy_spu in buy_spus:
if buy_spu in spus_with_features:
contained+=1
print(contained/np.float(len(buy_spus)))
contained
len(buy_spus)
view_spus = user_profile.view_spu.unique()
contained = 0
spus_with_features = list(spu_fea.spu_id.as_matrix())
for view_spu in view_spus:
if view_spu in spus_with_features:
contained+=1
print(contained/np.float(len(view_spus)))
len(view_spus)
"""
Explanation: What percent of bought items are in the feature list?
End of explanation
"""
user_profile = pd.read_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views.pkl')
len(user_profile)
print('unique users: {0}'.format(len(user_profile.user_id.unique())))
print('unique items viewed: {0}'.format(len(user_profile.view_spu.unique())))
print('unique items bought: {0}'.format(len(user_profile.buy_spu.unique())))
print('unique categories viewed: {0}'.format(len(user_profile.view_ct3.unique())))
print('unique categories bought: {0}'.format(len(user_profile.buy_ct3.unique())))
print('unique brands viewed: {0}'.format(len(user_profile.view_sn.unique())))
print('unique brands bought: {0}'.format(len(user_profile.buy_sn.unique())))
#user_profile.groupby(['user_id'])['buy_spu'].nunique()
# how many items bought per user in this dataset?
plt.figure(figsize=(8,3))
plt.hist(user_profile.groupby(['user_id'])['buy_spu'].nunique(),bins=20,normed=False)
sns.despine()
plt.xlabel('number of items bought per user')
plt.ylabel('number of user')
user_profile.loc[user_profile.user_id==4283991208,]
"""
Explanation: Evaluation Dataset
End of explanation
"""
user_profile.loc[user_profile.user_id==6539296,]
"""
Explanation: some people have longer viewing trajectories; the first item was viewed 28 hours ahead of time.
End of explanation
"""
plot_trajectory_scatter(user_profile,samplesize=100,size=10,savedir='../figures/trajectories_evaluation_dataset.png')
"""
Explanation: this person bought two items.
End of explanation
"""
%%bash
jupyter nbconvert --to slides Exploring_Data.ipynb && mv Exploring_Data.slides.html ../notebook_slides/Exploring_Data_v2.slides.html
jupyter nbconvert --to html Exploring_Data.ipynb && mv Exploring_Data.html ../notebook_htmls/Exploring_Data_v2.html
cp Exploring_Data.ipynb ../notebook_versions/Exploring_Data_v2.ipynb
# push to s3
import sys
import os
sys.path.append(os.getcwd()+'/../')
from src import s3_data_management
s3_data_management.push_results_to_s3('Exploring_Data_v1.html','../notebook_htmls/Exploring_Data_v1.html')
s3_data_management.push_results_to_s3('Exploring_Data_v1.slides.html','../notebook_slides/Exploring_Data_v1.slides.html')
"""
Explanation: I'd like to make this figure better - easier to tell which rows people are on
Save Notebook
End of explanation
"""
|
byque/programacion_en_python
|
b-variables_y_tipos_simples_de_datos/variables_y_tipos_de_datos.ipynb
|
gpl-3.0
|
# The following line prints ¡Hola! as output on the screen
print("¡Hola!")
"""
Explanation: Variables and Simple Data Types
Comments
Comments start with a '#' and are used to add notes to the program that describe the solution implemented in the code.
Everything after a '#' is ignored by the Python interpreter.
End of explanation
"""
mensaje = "¡Hola mundo Python!"
print(mensaje)
mensaje = "¡Hola mundo!"
print(mensaje)
mensaje = "¡Hola mundo del curso de Python!"
print(mensaje)
"""
Explanation: Variables
End of explanation
"""
# Variables of type string are sequences of characters.
mensaje1 = "Esto es un string"
mensaje2 = 'Esto también es un string'
# You can use double quotes or single quotes to define strings.
# The choice depends on the programmer and also on whether double
# or single quotes will be used inside the message.
mensaje3 = 'Esto es un string con "comillas"'
mensaje4 = "Esto es un string con 'apóstrofes'"
print(mensaje1)
print(mensaje2)
print(mensaje3)
print(mensaje4)
# Based on https://developers.google.com/edu/python/strings
cadena = 'hola'
numero = 57
print(cadena)
print(len(cadena))
print(cadena + ' ' + 'mundo')
print('\nvalor = ' + str(numero))
print('Longitud de la cadena de caracteres = ' + str(len(cadena)))
# First use of methods, see 'nombre.title()'
nombre = "julio cAsas"
# Prints the first letter of each word in uppercase.
print(nombre.title())
# Prints everything in lowercase.
print(nombre.lower())
# Prints everything in uppercase.
print(nombre.upper())
nombre = "pedro"
apellido = "restrepo"
# Concatenate strings
nombre_completo = nombre + " " + apellido
print(nombre_completo.title())
# Concatenate while printing
print("Hola " + nombre_completo.title())
# Store as a variable before printing
mensaje = "Hola " + nombre_completo.title() + ", es un placer."
print(mensaje)
# Add a tab space to the text
print("\tEspacio de tabulador")
# Add a new line
print("Esta es una\nLínea nueva")
# Add a tab and a new line
print("Línea nueva + tabulador:\n\tSe ve así.")
# Space at the end of the string
lenguaje = 'python '
print(lenguaje + "<-Aquí está el espacio")
# Temporarily remove the trailing space
print(lenguaje.rstrip() + "<-Espacio removido temporalmente")
# The original variable has not changed
print(lenguaje + "<-Aquí está de nuevo el espacio")
# Permanently remove the trailing space
lenguaje = lenguaje.rstrip()
print(lenguaje + "<-Espacio removido permanentemente")
# Space at the beginning and at the end of the string
lenguaje = ' python '
print("Espacio al inicio y al final:->" + lenguaje + "<-")
# Temporarily remove the leading space
print("Quitar espacio del incio:->" + lenguaje.lstrip() + "<-")
# Temporarily remove both spaces
print("Quitar ambos espacios:->" + lenguaje.strip() + "<-")
"""
Explanation: Character Strings
End of explanation
"""
# Based on https://developers.google.com/edu/python/strings
cadena = 'hola'
print(cadena[1])
print(cadena[1:3])
print(cadena[1:])
print(cadena[:3])
"""
Explanation: Substrings
End of explanation
"""
comando = 'tiempo=30'
respuesta = 'OK,tiempo=30'
print(respuesta.find(comando))
print('valor ' + comando[comando.find("="):])
# Create a numeric variable from a string
tiempo = int(comando[comando.find("=")+1:])
print(tiempo)
"""
Explanation: Method for Searching within a String
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/csir-csiro/cmip6/models/sandbox-1/atmoschem.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-1', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
abulbasar/machine-learning
|
SparkML - 07 Click Prediction (Outbrain dataset).ipynb
|
apache-2.0
|
from datetime import datetime
import matplotlib.pyplot as plt
import pyspark.sql.functions as F
from pyspark.sql.window import Window
import numpy as np
import pandas as pd
from sklearn import metrics
pd.options.display.max_columns = 1000
pd.options.display.max_rows = 10
fast_mode = True
%matplotlib inline
from IPython.core.magic import register_line_magic
@register_line_magic
def show(line, n = 5):
return eval(line).limit(n).toPandas()
@register_line_magic
def sql(line, n = 10):
return spark.sql(line)
base_path = "/data/kaggle/outbrain_ctr/parquet/"
from pyspark import StorageLevel  # missing import: needed for the default below and for persist() later
def cache_df(df, name, storage_level = StorageLevel.MEMORY_ONLY):
df.createOrReplaceTempView(name)
spark.catalog.cacheTable(name)
def load(name, rebase_timestamp = False, cache = True):
df = spark.read.load(base_path + name)
if rebase_timestamp and "timestamp" in df.columns:
df = df.withColumn("timestamp"
, F.expr("cast(from_unixtime(cast((timestamp + 1465876799998)/1000 as int)) as timestamp)"))
if cache:
cache_df(df, name)
df.alias(name)
print("Number of partitions for df %s: %d" % (name, df.rdd.getNumPartitions()))
return df
!ls -1 /data/kaggle/outbrain_ctr/parquet/
"""
Explanation: ERD
https://drive.google.com/open?id=1dHAdBT84rEDf3WiE7FSfFrpnGQVL0wUSsWAGEH7ywZg
End of explanation
"""
clicks_train = load("clicks_train")
clicks_train.show()
clicks_test = load("clicks_test")
clicks_test.show()
clicks_train.count(), clicks_test.count()
"""
Explanation: Clicks
End of explanation
"""
%time clicks_train.select("ad_id").distinct().count(), clicks_test.select("ad_id").distinct().count()
"""
Explanation: Distinct count of ad_id in training and test dataset
End of explanation
"""
%time clicks_train.select("ad_id").intersect(clicks_test.select("ad_id")).count()
1- 316035/381385
"""
Explanation: Common ad_id in training and test datasets
End of explanation
"""
ctrs = clicks_train.groupBy("ad_id")\
.agg(F.expr("sum(clicked)/count(*)").alias("ctr"), F.count("*").alias("view_count"))
ctrs.show()
%time ctrs.select("ctr").describe().show()
"""
Explanation: 17% of the ad_id values in the test dataset are unique to it (not seen in training).
Calculate CTR on the training dataset. Note, we cannot calculate the CTR on the test dataset since the clicked column is not provided there; in fact, the task is to predict the probability of a click.
End of explanation
"""
%time ctrs.selectExpr("percentile(ctr, 0.5)").show()
ctrs.filter("ad_id = 182320").show()
"""
Explanation: Median CTR
End of explanation
"""
view_counts = ctrs.select("view_count").toPandas()
np.percentile(view_counts["view_count"], [99, 95, 90])
"""
Explanation: Find 99, 95 and 90 percentile values of the view counts of the ads.
End of explanation
"""
%time ctrs.filter("view_count>100").select("ctr").toPandas()["ctr"].plot.hist(bins = 50, density = True)
plt.xlabel("CTR")
plt.ylabel("Frquency (normalized)")
"""
Explanation: To build confidence in the CTR, filter out the ads with fewer than 100 views (approximately the 99th percentile value)
End of explanation
"""
#y_pred = clicks_train.join(ctrs.select("ad_id", "ctr"), on = ["ad_id"], how="left").select("ctr").toPandas()["ctr"]
#y_true = clicks_train.select("clicked").toPandas()["clicked"]
#%time metrics.average_precision_score(y_true, y_pred)
clicks_test_baseline = clicks_test.join(ctrs.select("ad_id", "ctr"), on = ["ad_id"], how="left")
clicks_test_baseline.show()
clicks_test_baseline.groupBy("display_id").count()
clicks_test_baseline.groupBy("display_id").agg(F.sum("ctr").alias("ap")).selectExpr("avg(ap)").show()
"""
Explanation: Consider the CTR as a baseline for click prediction. Using CTR as the baseline, calculate the MAP (mean average precision).
End of explanation
"""
%time clicks_train.groupBy("display_id").count().select("count").distinct().show()
"""
Explanation: How many ads are there for each display_id?
End of explanation
"""
%time clicks_train.groupBy("display_id").agg(F.sum("clicked").alias("clicks")).filter("clicks=0").count()
"""
Explanation: Does each display_id in the training dataset have at least one click?
End of explanation
"""
%time clicks_train.groupBy("display_id").agg(F.sum("clicked").alias("clicks")).filter("clicks>1").count()
"""
Explanation: So, each display_id has at least one click. Does any display_id in the training dataset have more than one click?
End of explanation
"""
if fast_mode:
print("Loading page_view sample dataset")
page_views = load("page_views_sample", rebase_timestamp=True, cache=True)
#page_views = page_views.sample(False, 0.01, 1)
#cache_df(page_views, "page_views")
else:
print("Loading full page_view dataset")
page_views = load("page_views", rebase_timestamp=True, cache=False)
page_views.printSchema()
page_views.show()
"""
Explanation: So, each display_id in the clicks dataset has only one click.
Page Views
End of explanation
"""
page_views.count()
"""
Explanation: The page views table is nearly 100 GB as a decompressed CSV file. How many records are there?
End of explanation
"""
%time page_views.filter("isnull(timestamp)").count()
stats = page_views.selectExpr("count(distinct(uuid)) as users"
, "count(distinct(document_id)) as documents"
, "count(distinct(geo_location)) as locations")
%time stats.show()
"""
Explanation: Does each record in page_views have a timestamp?
End of explanation
"""
page_views_by_user = page_views.groupBy("uuid").count().groupBy("count").count()\
.toDF("view_count", "num_users").toPandas().sort_values("view_count")
page_views_by_user
page_views_by_user.iloc[:20, :].plot.bar("view_count", "num_users")
"""
Explanation: Some users are more frequent visitors than others. Find the number of users broken down by view count.
End of explanation
"""
users_distinct_count = page_views_by_user.num_users.sum()
"""
Explanation: Number of unique users
End of explanation
"""
(page_views_by_user.num_users * page_views_by_user.view_count).sum()/users_distinct_count
"""
Explanation: Average page views per user
End of explanation
"""
page_views_by_user["cum_percentual"] = page_views_by_user.num_users.cumsum()/users_distinct_count
page_views_by_user
page_views_by_user.iloc[:20, :].plot.line(x = "view_count", y = "cum_percentual")
page_views_by_user.tail(10)
"""
Explanation: Cumulative Percentual
End of explanation
"""
page_views_by_platform = page_views.groupBy("platform")\
.count().toPandas().set_index("platform").sort_index()
page_views_by_platform
page_views_by_platform.plot.pie(y = "count", labels = ["Desktop", "Mobile", "Tablet"]
, figsize = (8, 8), autopct = "%.2f", fontsize = 15)
plt.title("Page views by platform")
"""
Explanation: Page views by platform
End of explanation
"""
page_views_by_traffic_source = page_views.groupBy("traffic_source").count().toPandas()
page_views_by_traffic_source = page_views_by_traffic_source.set_index("traffic_source").sort_index()
page_views_by_traffic_source
page_views_by_traffic_source.plot.pie(y = "count", labels = ["Internal", "Search", "Social"]
, figsize = (8, 8), autopct = "%.2f", fontsize = 15)
plt.title("Page views by traffic source")
"""
Explanation: Page views by traffic source
End of explanation
"""
events = load("events")
events.show()
events = events.withColumn("timestamp", F.expr("from_unixtime(cast((timestamp + 1465876799998)/1000 as int))"))
events.show()
events = load("events", rebase_timestamp=True)
events.count()
"""
Explanation: Events
End of explanation
"""
%time events.selectExpr("count(distinct uuid)", "count(distinct document_id)", "count(distinct geo_location)").first()
"""
Explanation: Find the distinct counts of users, documents, and locations
End of explanation
"""
%time events.groupBy("display_id").count().filter("count>1").count()
"""
Explanation: Is the display_id unique in the events dataset?
End of explanation
"""
%time clicks_train.select("display_id").distinct().join(events, on = ["display_id"], how = "left_anti").count()
"""
Explanation: Are all the display_id values in clicks_train present in events?
End of explanation
"""
%time clicks_test.select("display_id").distinct().join(events, on = ["display_id"], how = "left_anti").count()
"""
Explanation: So, display_id for each record in clicks_train is present in events dataset.
Check the same for clicks_test dataset.
End of explanation
"""
events.selectExpr("count(*)/count(distinct uuid) avg_event_by_user").first()
"""
Explanation: Average events by user
End of explanation
"""
events.filter("isnull(timestamp)").count()
"""
Explanation: Does the timestamp exist for each record in events?
End of explanation
"""
%time clicks_train.join(events, on = ["display_id"]).select("timestamp").describe().show()
%time clicks_test.join(events, on = ["display_id"]).select("timestamp").describe().show()
"""
Explanation: Do the timestamp ranges in clicks_train and clicks_test overlap?
End of explanation
"""
def join_views_and_events(columns):
df1 = page_views.select(*columns).withColumn("page_views", F.lit(1)).withColumn("events", F.lit(0))
df2 = events.select(*columns).withColumn("page_views", F.lit(0)).withColumn("events", F.lit(1))
df3 = df1.union(df2)
df4 = df3.groupBy(columns).agg(
F.sum("page_views").alias("page_views_count"),
F.sum("events").alias("events_count"))
return df4
# Cache output to disk. The dataframe is too large to hold in the memory of the current machine
views_and_event = join_views_and_events(["uuid", "document_id"]).persist(StorageLevel.DISK_ONLY)
views_and_event.show()
"""
Explanation: Clearly, the date ranges of training and test data overlap.
Alignment between Events and Page Views
Number of page views without a matching event
How many events have matching page views by uuid and document_id (see the sketch directly below)
How many events have no page views
How many page-views records have no matching events
A given user might visit the same page more than once. Show sample events for which multiple page_views exist.
Show the distribution of the number of distinct users who view the same document multiple times
End of explanation
"""
%time views_and_event.filter("page_views_count > 0 and events_count = 0").count()
"""
Explanation: How many user-document combinations do not have any click on ads?
End of explanation
"""
%time views_and_event.filter("page_views_count = 0 and events_count > 0").count()
"""
Explanation: How many user-document combinations from events have no matching record in page views?
End of explanation
"""
events.count()/page_views.count()
"""
Explanation: Considering that events represent the page views that got clicks, what fraction of page views got clicks?
End of explanation
"""
%time page_views.filter("uuid = 'a34004004c3e50' and document_id = 140264").show()
%time events.filter("uuid = 'a34004004c3e50' and document_id = 140264").show()
"""
Explanation: Let's take a sample uuid and document_id to see whether the event records have a matching page_views record.
End of explanation
"""
repeated_page_views = page_views.groupBy(["uuid", "document_id"]).count()\
.filter("count > 1").orderBy(F.desc("count"))
repeated_page_views.show()
repeated_page_views.count()
"""
Explanation: Hypothesis: The page_views records come from web server logs, while events come from user-tracking tools such as Omniture or Google Analytics. A user may open a page and, after some time, choose to click an ad.
A user may view a page more than once. Find how many user-document pairs have more than one page view.
End of explanation
"""
if repeated_page_views.count()>0:
sample_record = repeated_page_views.sample(True, 0.1).take(1)[0]
page_views.filter(F.col("uuid") == sample_record.uuid)\
.filter(F.col("document_id") == sample_record.document_id).show()
"""
Explanation: Look at a sample uuid that has repeatedly visited a page to observe the pattern in the traffic source, location, and timestamp.
End of explanation
"""
promoted_contents = load("promoted_content")
promoted_contents.show()
promoted_contents.count()
"""
Explanation: Advertisement (Promoted Content)
End of explanation
"""
%time promoted_contents.groupBy("ad_id").count().filter("count>1").count()
"""
Explanation: promoted_content stores the metadata of the ads. Double-check that ad_id is unique in this dataset.
End of explanation
"""
%time promoted_contents\
.selectExpr("count(distinct document_id)", "count(distinct campaign_id)", "count(distinct advertiser_id)").first()
"""
Explanation: How many unique campaigns, documents and advertisers are there?
End of explanation
"""
%time (clicks_train.select("ad_id").union(clicks_test.select("ad_id"))\
.join(promoted_contents, on = ["ad_id"], how = "leftanti").count())
"""
Explanation: Does every ad_id in the clicks datasets have metadata in promoted_contents?
End of explanation
"""
avg_ctr_by_campaign = promoted_contents.join(ctrs, on = "ad_id").groupBy("campaign_id")\
.agg(F.avg("ctr").alias("avg_ctr"))
avg_ctr_by_campaign.show()
"""
Explanation: So, all the ad_id in clicks dataset exist in the promoted_contents.
Find average CTR for campaign
End of explanation
"""
avg_ctr_by_advertiser = promoted_contents.join(ctrs, on = "ad_id").groupBy("advertiser_id")\
.agg(F.avg("ctr").alias("avg_ctr"))
avg_ctr_by_advertiser.show()
"""
Explanation: Find avg ctr by advertiser.
End of explanation
"""
avg_ctr_by_document = promoted_contents.join(ctrs, on = "ad_id").groupBy("document_id")\
.agg(F.avg("ctr").alias("avg_ctr"))
avg_ctr_by_document.show()
"""
Explanation: Find avg ctr by document.
End of explanation
"""
documents_meta = load("documents_meta")
documents_meta.show(5, False)
documents_meta.count()
"""
Explanation: Document Attributes
Document Meta Data
End of explanation
"""
documents_meta.groupby("document_id").count().filter("count>1").count()
"""
Explanation: Verify whether the document_id is unique in document_meta.
End of explanation
"""
documents_meta.select("source_id").distinct().count()
"""
Explanation: How many source_ids are there?
End of explanation
"""
documents_meta.select("publisher_id").distinct().count()
"""
Explanation: How many publisher_ids are there?
End of explanation
"""
documents_categories = load("documents_categories").drop_duplicates(["document_id", "category_id"])
documents_categories.printSchema()
documents_categories.show(5, False)
documents_categories.count()
documents_categories.select("category_id").distinct().count()
"""
Explanation: Document Categories
End of explanation
"""
from pyspark.ml.feature import StringIndexer
#String indexer required string column
documents_categories = documents_categories.withColumn("category_id", F.expr("cast(category_id as string)"))
if "id" in documents_categories.columns:
documents_categories = documents_categories.drop("id")
category_indexer = StringIndexer(inputCol="category_id", outputCol="id")
documents_categories = category_indexer.fit(documents_categories).transform(documents_categories)
documents_categories.show()
"""
Explanation: category_id is not indexed. Use StringIndexer to index it. One-hot encoding is another option; I am not doing that here because I want to weigh each category by its confidence_level.
End of explanation
"""
documents_categories_n = documents_categories\
.withColumn("pair", F.struct("id", "confidence_level"))\
.groupBy("document_id")\
.agg(F.collect_list("pair").alias("categories"))
documents_categories_n.printSchema()
documents_categories_n.count()
documents_categories_n.show(3, False)
"""
Explanation: Group the data by document_id and pack the other information into an array field.
End of explanation
"""
documents_entities = load("documents_entities").drop_duplicates(["document_id", "entity_id"])
documents_entities.show(5, False)
"""
Explanation: Document entities
End of explanation
"""
documents_entities.groupBy("document_id").count().select("count").describe().show()
"""
Explanation: Find stats around the number of entities per document.
End of explanation
"""
documents_entities.select("entity_id").distinct().count()
"""
Explanation: Find count of unique entity_ids.
End of explanation
"""
if "id" in documents_entities.columns:
documents_entities = documents_entities.drop("id")
documents_entities = StringIndexer(inputCol="entity_id", outputCol="id")\
.fit(documents_entities)\
.transform(documents_entities)
documents_entities.show()
documents_entities_n = documents_entities\
.withColumn("pair", F.struct("id", "confidence_level"))\
.groupBy("document_id")\
.agg(F.collect_list("pair").alias("entities"))
documents_entities_n.printSchema()
"""
Explanation: Apply StringIndex to index the entity_id.
End of explanation
"""
documents_topics = load("documents_topics").drop_duplicates(["document_id","topic_id"])
documents_topics.show(5, False)
print(sorted([v.topic_id for v in documents_topics.select("topic_id").distinct().collect()]))
"""
Explanation: Document topics
End of explanation
"""
documents_topics.select("topic_id").distinct().count()
documents_topics_n = documents_topics\
.toDF("document_id", "id", "confidence_level")\
.withColumn("pair", F.struct("id", "confidence_level"))\
.groupBy("document_id")\
.agg(F.collect_list("pair").alias("topics"))
documents_topics_n.printSchema()
documents_topics_n.count()
"""
Explanation: topic_id seems to be already indexed, and there are 300 topics.
End of explanation
"""
docs = documents_meta.join(documents_categories_n, on = "document_id", how = "full")
docs = docs.join(documents_topics_n, on = "document_id", how = "full")
docs = docs.join(documents_entities_n, on = "document_id", how = "full")
docs.persist(StorageLevel.DISK_ONLY).count()
docs.printSchema()
%show docs
"""
Explanation: Create a new field for each dataset to indicate the original source file, and then join all 4 datasets - categories, entities, topics and meta.
End of explanation
"""
docs.selectExpr("sum(if(isnull(topics), 1, 0)) null_topics"
, "sum(if(isnull(categories), 1, 0)) null_categories"
, "sum(if(isnull(entities), 1, 0)) null_entities"
, "sum(if(isnull(publisher_id), 1, 0)) null_meta").first()
"""
Explanation: Find count of null values of each type of information - meta, category, entity and topic.
End of explanation
"""
docs.select(F.explode("topics")).select("col.id").distinct().count()
"""
Explanation: Vectorize and calculate weighted IDF scores
How many document topics are there?
End of explanation
"""
docs.select(F.explode("categories")).select("col.id").distinct().count()
"""
Explanation: How many document categories are there?
End of explanation
"""
docs.select(F.explode("entities")).select("col.id").distinct().count()
from pyspark.ml.linalg import SparseVector, VectorUDT
def to_vector(values, n):
if values is not None:
values = sorted(values, key=lambda v: v.id)
indices = [v.id for v in values]
values = [v.confidence_level for v in values]
return SparseVector(n, indices, values)
return SparseVector(n, [], [])
spark.udf.register("to_vector", to_vector, VectorUDT())
docs_vectorized = docs\
.withColumn("topics_vector", F.expr("to_vector(topics, 300)"))\
.withColumn("categories_vector", F.expr("to_vector(categories, 97)"))
#.withColumn("entities_vector", F.expr("to_vector(entities, 1326009)"))
docs_vectorized.printSchema()
docs_vectorized.cache().count()
docs_vectorized.select("topics_vector").first()
docs_vectorized.select("categories_vector").count()
from pyspark.ml.feature import IDF, Tokenizer
if "topics_idf" in docs.columns:
docs = docs.drop("topics_idf")
if "entities_idf" in docs.columns:
docs = docs.drop("entities_idf")
if "categories_idf" in docs.columns:
docs = docs.drop("categories_idf")
topics_idf = IDF(inputCol="topics_vector", outputCol="topics_idf")
entities_idf = IDF(inputCol="entities_vector", outputCol="entities_idf")
categories_idf = IDF(inputCol="categories_vector", outputCol="categories_idf")
df1 = docs_vectorized
df2 = topics_idf.fit(df1).transform(df1).cache()
df3 = categories_idf.fit(df2).transform(df2).cache()
#df4 = entities_idf.fit(df3).transform(df3).cache()
docs_idf = df3
docs_idf.printSchema()
docs_idf.select("document_id", "topics_idf", "categories_idf").first()
"""
Explanation: How many document entities are there?
End of explanation
"""
user_has_already_viewed_doc = (page_views
.withColumn("user_has_already_viewed_doc"
, F.expr("((ROW_NUMBER() OVER (PARTITION BY uuid, document_id ORDER BY timestamp))) > 1"))
.select("uuid", "document_id", "timestamp", "user_has_already_viewed_doc")
)
%show user_has_already_viewed_doc.filter("uuid = '6c4a7527da27d7' and document_id = 38922")
user_views_count = (page_views
.withColumn("user_views_count",
F.expr("COUNT(1) OVER (PARTITION BY uuid ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING)"))
.select("uuid", "timestamp", "user_views_count"))
%show user_views_count.filter("uuid = '6c4a7527da27d7' and document_id = 38922")
#page_views = page_views.withColumn("user_avg_views_of_distinct_docs", F.expr("COUNT(distinct document_id) " +
# "OVER (PARTITION BY uuid ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING)"))
#
#%show page_views.filter("uuid = '6c4a7527da27d7' and document_id = 38922")
"""
Explanation: Feature Generation
User profile
| Column | Description |
|---|---|
|user_has_already_viewed_doc| For each content recommended to the user, verify whether the user had previously visited that page.
|user_views_count | Do eager readers behave differently from other users? Let’s add this feature and let machine learning models guess that.
|user_views_categories, user_views_topics, user_views_entities | User profile vectors based on categories, topics and entities of documents that users have previously viewed (weighted by confidence and TF-IDF), to model users' preferences in a Content-Based Filtering approach
|user_avg_views_of_distinct_docs | Ratio between (#user_distinct_docs_views / #user_views), indicating how often users read previously visited pages again (see the sketch after this table).
End of explanation
"""
doc_event_days_since_published = (events
.join(documents_meta, on = "document_id")
.selectExpr("display_id"
, "document_id"
, "timestamp"
, "publish_time"
, "datediff(timestamp, publish_time) age")
)
doc_event_days_since_published.show()
page_view_count_by_document_id = page_views.groupBy("document_id")\
.count().withColumn("page_view_count_by_document_id", F.expr("log(count)"))\
.select("document_id", "page_view_count_by_document_id")
page_view_count_by_document_id.show()
page_view_count_by_uuid = (page_views
.groupBy("uuid")
.count()
.withColumn("page_view_count_by_uuid", F.expr("log(count)"))
.select("uuid", "page_view_count_by_uuid"))
page_view_count_by_uuid.show()
"""
Explanation: Ads and Documents
| Column | Description |
|---|---|
|doc_ad_days_since_published, doc_event_days_since_published | Days elapsed since the ad document was published, measured at a given user visit. The general assumption is that new content is more relevant to users. But if you are reading an old post, you might be interested in other old posts.
|doc_avg_views_by_distinct_users_cf | Average page views of the ad document by distinct users. Is this a webpage people usually return to?
|ad_views_count, doc_views_count|How popular is a document or ad?
End of explanation
"""
events.selectExpr("uuid", "geo_location"
, "split(geo_location, '>')[0] country"
, "split(geo_location, '>')[1] state"
).show()
events.selectExpr("split(geo_location, '>')[0] country").distinct().toPandas()
(events
.selectExpr("display_id", "timestamp", "hour(timestamp) hour")
.withColumn("day_session", F.expr("hour % 8"))).show()
"""
Explanation: Events
| Column | Description |
|---|---|
|event_local_hour (binned), event_weekend | Event timestamps were in UTC-4, so I processed event geolocation to get timezones and adjust for users' local time. They were binned in periods like morning, afternoon, midday, evening, night. A flag indicating whether it was a weekend was also included. The assumption here is that time influences the kind of content users will read.
|event_country, event_country_state | The field event_geolocation was parsed to extract the user’s country and state in a page visit.
|ad_id, doc_event_id, doc_ad_id, ad_advertiser, … | All of the original categorical fields were One-Hot Encoded to be used by the models, generating about 126,000 features.
End of explanation
"""
events.selectExpr("split(geo_location, '>')[0] country")\
.groupBy("country").count().orderBy(F.desc("count")).show()
"""
Explanation: Top countries by ad clicks
End of explanation
"""
events.cache()
clicks_train.cache()
documents_meta.cache()
promoted_contents.cache()
avg_ctrs_by_ad_id = clicks_train.groupBy("ad_id").agg(F.avg("clicked").alias("avg_ctr_by_ad_id"))
%show avg_ctrs_by_ad_id
avg_ctrs_by_campaign_id = (clicks_train
.join(promoted_contents, on = "ad_id")
.groupBy("campaign_id")
.agg(F.avg("clicked").alias("avg_ctr_by_campaign_id")))
%show avg_ctrs_by_campaign_id
avg_ctrs_by_advertiser_id = (clicks_train
.join(promoted_contents, on = "ad_id")
.groupBy("advertiser_id")
.agg(F.avg("clicked").alias("avg_ctr_by_advertiser_id"))
.cache()
)
%show avg_ctrs_by_advertiser_id
avg_ctrs_by_document_id = (clicks_train
.join(promoted_contents, on = "ad_id")
.groupBy("document_id")
.agg(F.avg("clicked").alias("avg_ctr_by_document_id"))
.cache()
)
%show avg_ctrs_by_document_id
avg_ctrs_by_time = (events
.join(documents_meta, on = "document_id", how = "left")
.join(clicks_train, on = "display_id", how = "left")
.join(promoted_contents, on = "ad_id", how = "left")
.withColumn("total_clicks_by_ad_id"
, F.expr("SUM(clicked) OVER (PARTITION BY ad_id ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING)"))
.withColumn("total_events_by_ad_id"
, F.expr("COUNT(*) OVER (PARTITION BY ad_id ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING)"))
.withColumn("total_clicks_by_advertiser_id"
, F.expr("SUM(clicked) OVER (PARTITION BY advertiser_id ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING)"))
.withColumn("total_events_by_advertiser_id"
, F.expr("COUNT(*) OVER (PARTITION BY advertiser_id ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING)"))
.withColumn("total_clicks_by_campaign_id"
, F.expr("SUM(clicked) OVER (PARTITION BY campaign_id ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING)"))
.withColumn("total_events_by_campaign_id"
, F.expr("COUNT(*) OVER (PARTITION BY campaign_id ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING)"))
.withColumn("total_clicks_by_publisher_id"
, F.expr("SUM(clicked) OVER (PARTITION BY publisher_id ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING)"))
.withColumn("total_events_by_publisher_id"
, F.expr("COUNT(*) OVER (PARTITION BY publisher_id ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING)"))
.selectExpr("display_id"
, "timestamp"
, "(total_clicks_by_advertiser_id/total_events_by_advertiser_id) avg_ctr_by_advertiser_id"
, "(total_clicks_by_campaign_id/total_events_by_campaign_id) avg_ctr_by_campaign_id"
, "(total_clicks_by_ad_id/total_events_by_ad_id) avg_ctr_by_ad_id"
, "(total_clicks_by_publisher_id/total_events_by_publisher_id) avg_ctr_by_publisher_id")
)
%show avg_ctrs_by_time
"""
Explanation: Average CTR
| Column | Description |
|---|---|
|avg_ctr_ad_id, avg_ctr_publisher_id, avg_ctr_advertiser_id, avg_ctr_campaign_id, avg_ctr_entity_id_country … | Average CTR (#clicks / #views) given some categorical combinations and the CTR confidence (details in the Part II post). E.g. P(click given category01, category02).
End of explanation
"""
docs.printSchema()
docs_idf.drop("document_id").printSchema()
clicks = (clicks_train.withColumn("is_train", F.lit(1))
.union(clicks_test
.withColumn("clicked", F.lit(0))
.withColumn("is_train", F.lit(0))))
df = (clicks
.join(events.alias("events"), on = ["display_id"], how = "left")
.join(docs_idf.alias("docs_idf"), on = ["document_id"], how = "left")
.join(promoted_contents.drop("document_id"), on = ["ad_id"], how = "left")
.join(page_view_count_by_uuid, on = ["uuid"], how = "left")
.join(page_view_count_by_document_id, on = ["document_id"], how = "left")
.withColumn("clicks_by_ad_id", F.expr("sum(clicked) over (partition by ad_id)"))
.withColumn("events_by_ad_id", F.expr("count(*) over (partition by ad_id)"))
.withColumn("avg_ctr_by_ad_id", F.expr("clicks_by_ad_id/events_by_ad_id"))
.withColumn("clicks_by_campaign_id", F.expr("sum(clicked) over (partition by campaign_id)"))
.withColumn("events_by_campaign_id", F.expr("count(*) over (partition by campaign_id)"))
.withColumn("avg_ctr_by_campaign_id", F.expr("clicks_by_campaign_id/events_by_campaign_id"))
.withColumn("clicks_by_document_id", F.expr("sum(clicked) over (partition by events.document_id)"))
.withColumn("events_by_document_id", F.expr("count(*) over (partition by events.document_id)"))
.withColumn("avg_ctr_by_document_id", F.expr("clicks_by_campaign_id/events_by_document_id"))
.withColumn("clicks_by_advertiser_id", F.expr("sum(clicked) over (partition by advertiser_id)"))
.withColumn("events_by_advertiser_id", F.expr("count(*) over (partition by advertiser_id)"))
.withColumn("avg_ctr_by_advertiser_id", F.expr("clicks_by_campaign_id/events_by_advertiser_id"))
.withColumn("country", F.expr("split(geo_location, '>')[0]"))
.withColumn("state", F.expr("split(geo_location, '>')[1]"))
.withColumn("doc_age", F.expr("datediff(timestamp, publish_time)"))
.withColumn("session", F.expr("cast((hour(timestamp) % 8) as string)"))
.withColumn("source_id", F.expr("cast(source_id as string)"))
.withColumn("publisher_id", F.expr("cast(publisher_id as string)"))
)
df.printSchema()
df.write.mode("overwrite").save(base_path + "merged_enriched")
df.printSchema()
"""
Explanation: Content-Based Similarities
| Column | Description |
|---|---|
|user_doc_ad_sim_categories, user_doc_ad_sim_topics, user_doc_ad_sim_entities | Cosine similarity between user profile and ad document profile vectors (TF-IDF).
|doc_event_doc_ad_sim_categories, doc_event_doc_ad_sim_topics, doc_event_doc_ad_sim_entities | Cosine similarity between event document (landing page context) and ad document profile vectors (TF-IDF).
Prepare training and test datasets
End of explanation
"""
df = load("merged_enriched", cache = False)
df.count()
features = [
'platform'
, 'source_id'
, 'publisher_id'
, 'topics_idf'
, 'categories_idf'
, 'avg_ctr_by_ad_id'
, 'avg_ctr_by_campaign_id'
, 'avg_ctr_by_document_id'
, 'avg_ctr_by_advertiser_id'
, "country"
, "state"
, "doc_age"
, "session"
, "ad_id"
, "display_id"
, "is_train"
, "clicked"
]
df.selectExpr(*features).printSchema()
%show df
df1 = df
"""
Explanation: Machine Learning
End of explanation
"""
def show_null_counts(df):
null_testers = ["sum(if(isnull(%s), 1, 0)) %s" % (f, f) for f in df.columns]
null_counts = df.selectExpr(*null_testers).toPandas().T
null_counts.columns = ["Count"]
null_counts["pct"] = null_counts.Count/df.count()
null_counts["dtype"] = [t[1] for t in df.dtypes]
print(null_counts.to_string())
show_null_counts(df.selectExpr(*features))
df_trunc = df.selectExpr(*features)
distinct_counts = df_trunc.selectExpr(*["approx_count_distinct(%s)"
% f for f in df.selectExpr(*features).columns]).toPandas()
print(distinct_counts.T.to_string())
fill_na_values = {"platform": "<null>"
, "source_id": "<null>"
, "publisher_id": "<null>"
, "avg_ctr_by_ad_id": 0.0
, "avg_ctr_by_campaign_id": 0.0
, "avg_ctr_by_document_id": 0.0
, "avg_ctr_by_advertiser_id": 0.0
, "country": "null"
, "state": "null"
, "doc_age": -1}
df_null_removed = df.selectExpr(*features).na.fill(fill_na_values)
show_null_counts(df_null_removed)
"""
Explanation: Detect and impute null values
End of explanation
"""
from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler
categorical_columns = [col for col, dtype in df_null_removed.dtypes if dtype == "string"]
df_string_indexed = df_null_removed
for col in categorical_columns:
indexer = StringIndexer(inputCol=col, outputCol="%s_index" % col)
df_string_indexed = indexer.fit(df_string_indexed).transform(df_string_indexed)
one_hot_estimator = OneHotEncoderEstimator(
inputCols = [col + "_index" for col in categorical_columns],
outputCols = [col + "_vec" for col in categorical_columns]
)
df_ohe = one_hot_estimator.fit(df_string_indexed).transform(df_string_indexed)
df_ohe.dtypes
to_be_vectorized = [('topics_idf', 'vector'),
('categories_idf', 'vector'),
('avg_ctr_by_ad_id', 'double'),
('avg_ctr_by_campaign_id', 'double'),
('avg_ctr_by_document_id', 'double'),
('avg_ctr_by_advertiser_id', 'double'),
('doc_age', 'int'),
('country_vec', 'vector'),
('session_vec', 'vector'),
('source_id_vec', 'vector'),
('state_vec', 'vector'),
('publisher_id_vec', 'vector'),
('platform_vec', 'vector')]
vector_assembler = VectorAssembler(inputCols = [c for c, _ in to_be_vectorized], outputCol="features")
df_vectorized = vector_assembler.transform(df_ohe)
df_vectorized.dtypes
df_train, df_test = df_vectorized.filter("is_train = 1").select("display_id", "ad_id", "clicked", "features")\
.randomSplit(weights=[0.7, 0.3], seed = 1)
cache_df(df_train, "df_train")
df_train.printSchema()
df_train.count()
from pyspark.ml.classification import LogisticRegression
lr = LogisticRegression(maxIter=10, regParam=0.1, elasticNetParam=0.8, featuresCol="features", labelCol="clicked")
lrModel = lr.fit(df_train)
print("Coefficients: " + str(lrModel.coefficients))
print("Intercept: " + str(lrModel.intercept))
trainingSummary = lrModel.summary
objectiveHistory = trainingSummary.objectiveHistory
print("objectiveHistory:")
for objective in objectiveHistory:
print(objective)
# Obtain the receiver-operating characteristic as a dataframe and areaUnderROC.
trainingSummary.roc.show()
print("areaUnderROC: " + str(trainingSummary.areaUnderROC))
# for multiclass, we can inspect metrics on a per-label basis
print("False positive rate by label:")
for i, rate in enumerate(trainingSummary.falsePositiveRateByLabel):
print("label %d: %s" % (i, rate))
print("True positive rate by label:")
for i, rate in enumerate(trainingSummary.truePositiveRateByLabel):
print("label %d: %s" % (i, rate))
print("Precision by label:")
for i, prec in enumerate(trainingSummary.precisionByLabel):
print("label %d: %s" % (i, prec))
print("Recall by label:")
for i, rec in enumerate(trainingSummary.recallByLabel):
print("label %d: %s" % (i, rec))
print("F-measure by label:")
for i, f in enumerate(trainingSummary.fMeasureByLabel()):
print("label %d: %s" % (i, f))
accuracy = trainingSummary.accuracy
falsePositiveRate = trainingSummary.weightedFalsePositiveRate
truePositiveRate = trainingSummary.weightedTruePositiveRate
fMeasure = trainingSummary.weightedFMeasure()
precision = trainingSummary.weightedPrecision
recall = trainingSummary.weightedRecall
print("Accuracy: %s\nFPR: %s\nTPR: %s\nF-measure: %s\nPrecision: %s\nRecall: %s"
% (accuracy, falsePositiveRate, truePositiveRate, fMeasure, precision, recall))
lrModel.write().overwrite().save(base_path + "lrModel")
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
# Make predictions.
predictions = lrModel.transform(df_test)
# Select example rows to display.
predictions.select("prediction", "clicked", "features").show(5)
# Select (prediction, true label) and compute test error
evaluator = MulticlassClassificationEvaluator(
labelCol="clicked", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predictions)
print("Accuracy = %g " % (accuracy))
"""
Explanation: Apply StringIndexer and OneHotEncoder: convert categorical values (string type) into index values and subsequently into one-hot encoded vectors.
End of explanation
"""
|
JelleAalbers/xeshape
|
S1_psd_mc_Erik.ipynb
|
mit
|
import numpy as np
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from scipy import stats
# import warnings
# warnings.filterwarnings('error')
from multihist import Hist1d, Histdd
"""
Explanation: Imports
End of explanation
"""
# Digitizer sample size
dt = 2
# Waveform time labels
spe_ts = np.linspace(0, 639*2, 640) - 340 * 2
# Valid time (because the waveform does not range the full time span)
valid_t_range = (-100, 300)
t_mask = (valid_t_range[0] <= spe_ts) & (spe_ts < valid_t_range[1])
spe_ts = spe_ts[t_mask]
spe_t_edges = np.concatenate([[spe_ts[0] - dt/2], spe_ts + dt/2])
default_params = dict(
t1 = 3.1, # Singlet lifetime, Nest 2014 p2
t3 = 24, # Triplet lifetime, Nest 2014 p2
fs = 0.2, # Singlet fraction
tts = 2., # Transit time spread.
s1_min=50,
s1_max=100,
dset='er',
pulse_model=1, # This is the CHANNEL that is used...
n_photons = int(2e5),
t_min = -15.,
t_max = 125.,
s1_sample = 'data', # 'uniform'
error_offset = 0. ,
error_pct = 0.
)
def get_params(params):
'''
Returns full set of parameters, setting the values given in `params` and setting the values in
    `default_params` if not set explicitly.
'''
for k, v in default_params.items(): # key, value
params.setdefault(k, v)
if params['tts'] < 0:
params['tts'] = 1e-6
return params
"""
Explanation: Default settings
End of explanation
"""
import pickle
from scipy.interpolate import interp1d
spe_pulses_cum = []
spe_ys = []
for ch, fn in enumerate(['170323_103732', '170323_104831']):
with open('../pulse_shape_single_pe/%s_ch%d.pickle' % (fn, ch) , 'rb') as infile:
ys = pickle.load(infile)[t_mask]
plt.plot(spe_ts, ys/ys.sum(), label='Channel %d' % ch)
spe_ys.append(ys/ys.sum())
# spe_pulses_cum: list of 2 elements: cumulative distribution for two channels
spe_pulses_cum.append(
interp1d(spe_ts, np.cumsum(ys)/ys.sum())
)
plt.ylim(-0.01, 0.3)
plt.xlabel('Time (ns)')
plt.ylabel('Area / (2 ns)')
plt.legend()
plt.title('Relative (normalized) amplitude of single p.e. pulses.')
plt.show()
for ch, p in enumerate(spe_pulses_cum):
plt.plot(spe_ts, p(spe_ts), label='Channel %d' % ch)
plt.grid(alpha=0.2, linestyle='-')
plt.xlabel('Time (ns)')
plt.ylabel('Cumulative fraction of area found')
plt.legend()
plt.show()
"""
Explanation: Load PMT pulses
Pulse shape
One of the elements of simulted S1s is the single p.e. pulse model. We extract this from the gain calibration dataset.
End of explanation
"""
# custom_pmt_pulse_current(pmt_pulse, offset, dt, samples_before, samples_after)
from pax.simulation import custom_pmt_pulse_current
for ch, c in zip([0, 1], ['blue', 'red']):
plt.plot(custom_pmt_pulse_current(spe_pulses_cum[ch], 0.1, 2, 10, 100), color=c)
plt.plot(spe_ts * 0.5 + 10 - 0.5, spe_ys[ch] * 0.5, color=c, ls='--')
plt.xlim(-10, 60)
plt.xlabel('Time sample number')
plt.ylabel('Relative amplitude')
plt.show()
"""
Explanation: What do we need the cumulative fraction for? Well, we input this into the custom_pmt_pulse_current in pax.simulation. Here is a quick check that all is well. There is just a little shift, but the alignment is quite arbitrary anyway.
End of explanation
"""
gain_params = []
for ch, fn in enumerate(['170323_103732', '170323_104831']):
with open('../pulse_shape_single_pe/%s_ch%d_function.pickle' % (fn, ch) , 'rb') as infile:
_norm, _popt, _perr = pickle.load(infile)
gain_params.append(np.concatenate([np.array([_norm]), _popt, _perr]))
gain_params = np.array(gain_params)
import scipy
def area_sample(n_values, gain_params, **params):
params = get_params(params)
channel = params['pulse_model']
norm, mu, sigma, _, _ = gain_params[channel]
lower, upper = (0., 3.)
X = stats.truncnorm((lower - mu) / sigma, (upper - mu) / sigma, loc=mu, scale=sigma)
return X.rvs(n_values)
def gaus_trunc(x, mu, sigma):
return (x > 0) * np.exp( - (x - mu)**2 / (2 * sigma**2))
nbins = 600
ran = (-0.5, 3.5)
for channel in (0, 1):
plt.hist(area_sample(200000, gain_params, pulse_model = channel), bins=nbins, histtype='step', normed=True, range=ran)
x_plot = np.linspace(*ran, num=nbins)
y_plot = gaus_trunc(x_plot,gain_params[channel][1], gain_params[channel][2])
norm = 1 / (np.sum(y_plot) * (ran[1] - ran[0])) * nbins
plt.plot(x_plot, norm * y_plot)
plt.title('Channel %d' % channel)
plt.show()
"""
Explanation: Gain variation
End of explanation
"""
import numba
# def split_s1_groups(x, n_x, s1_min, s1_max):
# """Splits x into groups with uniform(s1_min, s1_max) elements, then return matrix of histograms per group.
# Returns: integer array (n_x, n_groups)
# n_x: number of possible values in x. Assumed to be from 0 ... n_x - 1
# s1_min: minimum S1 number of hits
# s1_max: maximum S1 number of hits
# """
# # We want to exhaust the indices x. Simulate a generous amount of S1 sizes
# n_s1_est = int(1.5 * 2 * len(x) / (s1_min + s1_max))
# if
# hits_per_s1 = np.random.randint(s1_min, s1_max, size=n_s1_est)
# result = np.zeros((n_x, n_s1_est), dtype=np.int)
# s1_i = _split_s1_groups(x, hits_per_s1, result)
# return result[:,:s1_i - 1]
# @numba.jit(nopython=True)
# def _split_s1_groups(x, hits_per_s1, result):
# s1_i = 0
# for i in x:
# if hits_per_s1[s1_i] == 0:
# s1_i += 1
# continue
# result[i, s1_i] += 1
# hits_per_s1[s1_i] -= 1
# return s1_i
def split_s1_groups(x, n_x, areas, **params):
"""Splits x into groups with uniform (s1_min, s1_max) elements, then return matrix of histograms per group.
Returns: integer array (n_x, n_groups)
n_x: number of possible values in x. Assumed to be from 0 ... n_x - 1
s1_min: minimum S1 number of hits
s1_max: maximum S1 number of hits
"""
params = get_params(params)
# We want to exhaust the indices x. Simulate a generous amount of S1 sizes
n_s1_est = int(1.5 * 2 * len(x) / (params['s1_min'] + params['s1_max']))
if params['s1_sample'] == 'data' and 'xams_data' not in globals():
print('Warning: data-derived s1 area distribution not possible, reverting to uniform...')
params['s1_sample'] = 'uniform'
if params['s1_sample'] == 'uniform':
pe_per_s1 = (params['s1_max'] - params['s1_min']) * np.random.random(size=n_s1_est) + params['s1_min']
elif params['s1_sample'] == 'data':
# Take S1 from the data sample
s1s_data = xams_data[params['dset']]['s1']
s1s_data = s1s_data[(s1s_data >= params['s1_min']) & (s1s_data < params['s1_max'])]
pe_per_s1 = np.random.choice(s1s_data, size=n_s1_est)
else:
raise ValueError('Configuration not understood, got this: ', params['s1_sample'])
result = np.zeros((n_x, n_s1_est), dtype=float)
# s1_i = _split_s1_groups(x, pe_per_s1, result)
s1_i = _split_s1_groups(x, pe_per_s1, result, areas)
return result[:,:s1_i - 1]
@numba.jit(nopython=True)
def _split_s1_groups(x, hits_per_s1, result, areas):
s1_i = 0
for photon_i, i in enumerate(x):
if hits_per_s1[s1_i] < 0:
s1_i += 1
continue
result[i, s1_i] += areas[photon_i]
hits_per_s1[s1_i] -= areas[photon_i]
return s1_i
# %%timeit
# split_s1_groups(np.random.randint(0, 100, size=int(1e6)), 101, 10, 20)
def shift(x, n):
"""Shift the array x n samples to the right, adding zeros to the left."""
if n > 0:
return np.pad(x, (n, 0), mode='constant')[:len(x)]
else:
return np.pad(x, (0, -n), mode='constant')[-len(x):]
def simulate_s1_pulse(**params):
# n_photons=int(2e5),
"""Return (wv_matrix, time_matrix, t_shift vector) for simulated S1s, consisting of n_photons in total
"""
params = get_params(params)
n_photons = params['n_photons']
##
# Make matrix (n_samples, n_waveforms) of pulse waveforms with various shifts
##
i_noshift = np.searchsorted(spe_t_edges, [0])[0] # Index corresponding to no shift in the waveform
y = spe_ys[params['pulse_model']] # This is the CHANNEL
# This is a matrix filled with waveforms, ordered by their SHIFT.
# So, these are all just model waveforms and will be selected later
wv_matrix = np.vstack([shift(y, i - i_noshift)
for i in range(len(spe_ts))]).T
##
# Simulate S1 pulse times, convert to index
##
times = np.zeros(n_photons)
n_singlets = np.random.binomial(n=n_photons, p=params['fs']) # We randomly select if the photon came from a singlet
# or triplet decay
# Time is distributed according to exponential distribution
# This is the TRUE time of all the photons generated, assuming time=0 is the time of the interaction
times += np.concatenate([
np.random.exponential(params['t1'], n_singlets),
np.random.exponential(params['t3'], n_photons - n_singlets)
])
# Since `times` is now sorted in (singlet, triplet), shuffle them
np.random.shuffle(times)
# Here we start taking into account detector physics: the transit time spread (simulated as normal dist.)
times += np.random.normal(0, params['tts'], size=n_photons)
# Find the bin that the photon would be in if it were sampled.
indices = np.searchsorted(spe_t_edges, times)
# Now, we delete all the photons that are outside of the bin range and re-match to the bin centers
# (Check the searchsorted documentation)
indices = indices[~((indices == 0) | (indices == len(spe_t_edges)))] - 1
# This is the new amount of photons simulated
if len(indices) < n_photons:
# print('Warning: I just threw away %d photons...' % (n_photons - len(indices)))
n_photons = len(indices)
# TODO: gain variation simulation
areas = area_sample(n_photons, gain_params, **params)
    # NOTE: do we also want to take the difference between the two channels into account?
##
# Build instruction matrix, simulate waveforms
##
# So far, we've just been simulating a bunch of photons (very many).
# We are now going to split this into S1s: the split will be made at a random point between s1_min and s1_max.
# `index_matrix` is a matrix split into groups forming S1s.
# index_matrix = split_s1_groups(indices, len(spe_t_edges) - 1, params['s1_min'], params['s1_max'])
index_matrix = split_s1_groups(indices, len(spe_t_edges) - 1, areas, **params)
# Now, index_matrix[:, 0] contains a list of number of entries for the shift for each timestamp in bin
n_s1 = index_matrix.shape[1]
# return wv_matrix, index_matrix
# Remember that wv_matrix is a matrix of waveforms, each element at position i of which is shifted i samples
s1_waveforms = np.dot(wv_matrix, index_matrix)
# return s1_waveforms
##
# Alignment based on maximum sample, compute average pulse
##
time_matrix, t_shift = aligned_time_matrix(spe_ts, s1_waveforms)
return s1_waveforms, time_matrix, t_shift
def aligned_time_matrix(ts, wv_matrix, mode = '10p'):
"""Return time matrix that would align waveforms im wv_matrix"""
n_s1 = wv_matrix.shape[1]
if mode == 'max':
# Find the position of maximum sample and match its times
t_shift = ts[np.argmax(wv_matrix, axis=0)]
elif mode == '10p':
fraction_reached = np.cumsum(wv_matrix, axis=0) / np.sum(wv_matrix, axis=0)
# Get the sample where 10% is reached by taking the sample closest to the 10% point
# This is as good as you can get without introducing fractional samples (which may be an improvement)
# TODO get interpolation in here
distance_to_10p_point = np.abs(fraction_reached - 0.1)
t_shift = ts[np.argmin(distance_to_10p_point, axis=0)]
time_matrix = np.repeat(ts, n_s1).reshape(wv_matrix.shape)
time_matrix -= t_shift[np.newaxis,:]
return time_matrix, t_shift
def average_pulse(time_matrix, wv_matrix):
"""Return average pulse, given time and waveform matrices"""
h, _ = np.histogram(time_matrix, bins=spe_t_edges, weights=wv_matrix)
h /= h.sum()
return h
def s1_average_pulse_model(*args, **kwargs):
wv_matrix, time_matrix, _ = simulate_s1_pulse(*args, **kwargs)
return average_pulse(time_matrix, wv_matrix)
s1_wvs, tmat, _ = simulate_s1_pulse(n_photons=int(2e5), t3=1, t1=50, tts=1, fs=0.5, dset='nr')
for i in range(100):
plt.plot(tmat[:, i], s1_wvs[:, i], alpha=0.1, c='k')
plt.grid(alpha=0.2, linestyle='-')
"""
Explanation: S1 model
Simulation
End of explanation
"""
def s1_models_resample(*args, n_data_s1s=1000, bootstrap_trials=10, **kwargs):
"""Return bootstrap_trials waveform templates from sampling n_data_s1s s1s"""
wv_matrix, time_matrix, _ = simulate_s1_pulse(*args, **kwargs)
n_s1s = wv_matrix.shape[1]
waveform_templates = np.zeros((len(spe_ts), bootstrap_trials))
for i in range(bootstrap_trials):
new_indices = np.random.randint(n_s1s, size=n_data_s1s)
waveform_templates[:, i] = average_pulse(time_matrix[:, new_indices],
wv_matrix[:, new_indices])
return waveform_templates
def sigmas_plot(x, q, color='b', **kwargs):
for n_sigma, alpha in [(1,0.5), (2, 0.1)]:
plt.fill_between(x,
np.percentile(q, 100 * stats.norm.cdf(-n_sigma), axis=1),
np.percentile(q, 100 * stats.norm.cdf(n_sigma), axis=1),
alpha=alpha, linewidth=0, color=color, step='mid')
plt.plot(x,
np.percentile(q, 50, axis=1),
color=color, linestyle='-', alpha=0.5, linewidth=1, **kwargs)
waveform_templates = s1_models_resample(n_data_s1s=100, s1_min=50, s1_max=60, bootstrap_trials=100)
sigmas_plot(spe_ts, waveform_templates)
"""
Explanation: Here is what we get out.
wv_matrix is a matrix containing the y-coordinates of the waveforms. The columns are the individual waveforms, to get the first waveform, go wv_matrix[:, 0]. time_matrix is the same thing except for it contains the times. t_shift_vector contains the shift of the waveform in ns (based on pulse times).
Statistical errors
Here we estimate statistical errors by simulating n_data_s1s S1s and then performing bootstrap trials. The conclusion is stated below the next plot.
End of explanation
"""
import itertools
def s1_models_error(*args, shifts=None, **kwargs):
'''
Compute the error on the S1 waveform given errors on specific parameters.
This will compute the S1 model for parameter +error, +0, and -error.
    All combinations of parameters are tried.
    `shifts` is a dict containing the allowed shift (+/-) for each model parameter.
`*args` and `**kwargs` will be passed to `s1_average_pulse_model` to compute the base model.
This function can also be used for getting the difference in pulse model for channel 0 and 1.
'''
if shifts is None:
# Default uncertainty: in pulse model and in TTS
shifts = dict(tts=0.5, pulse_model=[0,1])
base_model = s1_average_pulse_model(*args, **kwargs)
# Allow specifying a single +- amplitude of variation
for p, shift_values in shifts.items():
if isinstance(shift_values, (float, int)):
shifts[p] = kwargs.get(p, default_params[p]) + np.array([-1, 0, 1]) * shift_values
shift_pars = sorted(shifts.keys())
shift_values = [shifts[k] for k in shift_pars]
    # shift_value_combs is a list of parameter combinations that will be tried to compute the average pulse.
    # It contains all combinations of (+, 0, -) for the parameters: 3^n combinations for n parameters.
shift_value_combs = list(itertools.product(*shift_values))
alt_models = []
for vs in shift_value_combs:
kw = dict()
kw.update(kwargs)
for i, p in enumerate(shift_pars):
kw[p] = vs[i]
alt_models.append(s1_average_pulse_model(*args, **kw))
alt_models = np.vstack(alt_models)
# Hmmm. this seems like an upper estimate of the error, no?
# ask jelle
minus = np.min(alt_models, axis=0)
plus = np.max(alt_models, axis=0)
return minus, base_model, plus
# return [s1_average_pulse_model(*args, **kwargs)
# for q in [-tts_sigma, 0, tts_sigma]]
minus, base, plus = s1_models_error()
plt.fill_between(spe_ts, minus, plus, alpha=0.5, linewidth=0, label='Uncertainty')
plt.plot(spe_ts, base, label='Base model')
plt.xlabel('Time (ns)')
plt.ylabel('Fraction of amplitude')
plt.legend()
plt.show()
"""
Explanation: Statistical errors are negligible if you have more than a few hundred waveforms.
Systematic errors
End of explanation
"""
xams_data = dict()
xams_data['nr'], xams_data['er'], xams_data['bg_nr'] = pickle.load(open('highfield_dataframes.pickle', 'rb'))
xams_s1s = dict()
# Get pulse waveforms to matrix rather than object column
for k, d in xams_data.items():
xams_s1s[k] = np.array([x for x in d['s1_pulse']])
del d['s1_pulse']
"""
Explanation: Real data waveforms
Here we read the S1 data for three (highfield) datasets: NR, ER and BG_NR. We store it in the form of a dict (keys: er, nr, bg_nr). Each dict item is an array containing the waveforms (per row).
End of explanation
"""
plt.plot(spe_ts, xams_s1s['nr'][0])
plt.xlabel('Time (ns)')
plt.ylabel('Amplitude')
plt.show()
def real_s1_wv(**params):
"""Return average S1 waveform, number of S1s it was constructed from"""
params = get_params(params)
areas = xams_data[params['dset']]['s1'].values
mask = (params['s1_min'] < areas) & (areas < params['s1_max'])
# Could now derive distribution, I'll just assume uniform for the moment.
# Hist1d(areas[mask],
# bins=np.linspace(params['s1_min'], params['s1_max'], 100)).plot()
n_data_s1s = mask.sum()
wvs = xams_s1s[params['dset']][mask].T
tmat, _ = aligned_time_matrix(spe_ts, wvs)
real_s1_avg = average_pulse(tmat, wvs)
return real_s1_avg, n_data_s1s
s1_range = (10, 20)
dset ='nr'
ydata, n_data_s1s = real_s1_wv(s1_min = s1_range[0], s1_max = s1_range[1])
plt.plot(spe_ts, ydata)
plt.title('Average waveform %.1f - %.1f p.e., %d events.' % (s1_range[0], s1_range[1], n_data_s1s))
s1_bins = np.linspace(0, 100, 11)
for left, right in zip(s1_bins[:-1], s1_bins[1:]):
ydata, n_data_s1s = real_s1_wv(s1_min = left, s1_max = right, dset = 'er')
plt.plot(spe_ts, ydata, label = '%d - %d p.e.' % (left, right))
#plt.title('Average waveform %.1f - %.1f p.e., %d events.' % (left, right, n_data_s1s))
#plt.show()
plt.xlim(-10, 100)
plt.title('ER')
plt.legend()
plt.show()
for left, right in zip(s1_bins[:-1], s1_bins[1:]):
ydata, n_data_s1s = real_s1_wv(s1_min = left, s1_max = right, dset='nr')
plt.plot(spe_ts, ydata, label = '%d - %d p.e.' % (left, right))
#plt.title('Average waveform %.1f - %.1f p.e., %d events.' % (left, right, n_data_s1s))
#plt.show()
plt.xlim(-10, 100)
plt.title('NR')
plt.legend()
plt.show()
"""
Explanation: Here's an example waveform
End of explanation
"""
def residuals(ydata, minus, base, plus, **params):
params = get_params(params)
# CHANGED BY ERIK check for zero
sigma = get_sigma(minus, base, plus, **params)
if 0. in sigma:
zero_positions = np.where(sigma == 0)
print('Warning: found zero in error array at positions: ', zero_positions)
print('Replacing with infinite error instead...')
for pos in zero_positions:
sigma[pos] = np.inf
return (ydata - base) / sigma
def get_sigma(minus, base, plus, **params):
params = get_params(params)
sigma = np.abs(plus - minus)/2 + params['error_offset'] + params['error_pct'] * np.abs(base)
return sigma
def comparison_plot(ydata, minus, base, plus, **params):
params = get_params(params)
sigmas = get_sigma(minus, base, plus, **params)
# large subplot
ax2 = plt.subplot2grid((3,1), (2,0))
ax1 = plt.subplot2grid((3,1), (0,0), rowspan=2, sharex=ax2)
#f, (ax1, ax2) = plt.subplots(2, sharex=True)
plt.sca(ax1)
# plt.fill_between(spe_ts, minus, plus, alpha=0.5, linewidth=0, step='mid')
plt.fill_between(spe_ts, base - sigmas, base + sigmas,
alpha=0.5, linewidth=0, step='mid')
plt.plot(spe_ts, base, linestyle='steps-mid', label='Model')
plt.plot(spe_ts, ydata, marker='.', linestyle='', markersize=3, c='k', label='Observed')
plt.grid(alpha=0.1, linestyle='-', which='both')
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel("Fraction of amplitude")
plt.axhline(0, c='k', alpha=0.5)
leg = plt.legend(loc='upper right', numpoints=1)
leg.get_frame().set_linewidth(0.0)
leg.get_frame().set_alpha(0.5)
plt.ylim(0, None)
#ax1.set_xticklabels([])
# Add residuals
plt.sca(ax2)
plt.subplot2grid((3,1), (2,0), sharex=ax1)
plt.xlim(params['t_min'], params['t_max'])
res = residuals(ydata, minus, base, plus)
plt.plot(spe_ts, res,
linestyle='', marker='x', c='k', markersize=3)
plt.ylim(-3, 3)
plt.grid(which='both', linestyle='-', alpha=0.1)
plt.axhline(0, c='k', alpha=0.5)
plt.ylabel("Residual")
plt.xlabel("Time since alignment point")
plt.text(#plt.xlim()[1] * 0.5, plt.ylim()[1] * 0.6,
60, 2,
'Mean abs. res.: %0.3f' % np.abs(res).mean())
plt.tight_layout()
plt.gcf().subplots_adjust(0,0,1,1,0,0)
def comparison_plot_2(ydata, minus, base, plus, **params):
params = get_params(params)
res = residuals(ydata, minus, base, plus, **params)
sigmas = get_sigma(minus, base, plus, **params)
# plt.fill_between(spe_ts, minus - params['error_offset'], plus + params['error_offset'],
# alpha=0.5, linewidth=0, step='mid')
plt.fill_between(spe_ts, base - sigmas, base + sigmas,
alpha=0.5, linewidth=0, step='mid')
plt.plot(spe_ts, base, linestyle='steps-mid', label='Model')
plt.plot(spe_ts, ydata, marker='.', linestyle='', markersize=3, c='k', label='Observed')
plt.yscale('log')
plt.ylim(2e-5, 1e-1)
plt.ylabel("Fraction of amplitude")
plt.xlabel('Time (ns)')
for _l in (params['t_min'], params['t_max']):
plt.axvline(_l, ls='dotted', color='black')
plt.twinx()
plt.plot(spe_ts, np.abs(res), color='red')
plt.ylabel('Residual / error')
plt.ylim(0)
plt.xlim(params['t_min'] - 20, params['t_max'] + 50)
res = res[(spe_ts >= params['t_min']) & (spe_ts < params['t_max'])]
chi2 = sum(res**2) / len(spe_ts[(spe_ts >= params['t_min']) & (spe_ts < params['t_max'])])
print('chi2 = %f' % chi2)
cust_params = {
's1_min' : 20,
's1_max' : 30,
'dset' : 'nr',
'tts' : .75,
'fs' : 0.2
}
ydata, n_data_s1s = real_s1_wv(**cust_params)
minus, base, plus = s1_models_error(**cust_params)
res = residuals(ydata, minus, base, plus)
comparison_plot(ydata, minus, base, plus)
print('Average waveform %.1f - %.1f p.e., %d events.' % (cust_params['s1_min'], cust_params['s1_max'], n_data_s1s))
comparison_plot_2(ydata, minus, base, plus, error_offset = 0.0002)
"""
Explanation: Model-data comparison
Plotting
End of explanation
"""
def gof(verbose=True, mode = 'chi2_ndf', **params):
'''
Get the mean residuals for given model parameters.
'''
params = get_params(params)
# Do not allow unphysical values
if params['t1'] < 0 or params['t3'] < 0 or not (0 <= params['fs'] <= 1):
result = float('inf')
else:
ydata, _ = real_s1_wv(**params)
# By default, the errors are set to: [0,1] for pulse model, 1.0 for tts
minus, base, plus = s1_models_error(**params)
res = residuals(ydata, minus, base, plus, **params)
assert len(res) == len(spe_ts)
res = res[(spe_ts >= params['t_min']) & (spe_ts < params['t_max'])]
if mode == 'mean':
result = np.abs(res).mean()
elif mode == 'median':
result = np.median(np.abs(res))
elif mode == 'chi2':
result = np.sum(res**2)
elif mode == 'chi2_ndf':
result = 1/len(res) *np.sum(res**2)
elif mode == 'res':
result = res
else:
            raise ValueError('Mode unknown, got this: %s' % mode)
if verbose and (mode != 'res'):
print('gof={gof}, fs={fs}, t1={t1}, t3={t3}, tts={tts}'.format(gof=result, **params))
return result
from copy import deepcopy
def gof_simultaneous(fs_er, fs_nr, verbose=True, mode='mean', **params):
params = get_params(params)
params_er = deepcopy(params)
params_nr = deepcopy(params)
params_er['dset'] = 'er'
params_nr['dset'] = 'nr'
params_er['fs'] = fs_er
params_nr['fs'] = fs_nr
gof_er = gof(verbose=False, mode=mode, **params_er)
gof_nr = gof(verbose=False, mode=mode, **params_nr)
if verbose:
print('gof_er={gof_er}, gof_nr={gof_nr}, fs_er={fs_er}, fs_nr={fs_nr} t1={t1}, t3={t3}, tts={tts}'.format(
gof_er=gof_er, gof_nr=gof_nr, fs_er = params_er['fs'], fs_nr = params_nr['fs'], **params))
return gof_er + gof_nr
gof_simultaneous(fs_er = 0.2, fs_nr = 0.16, mode='chi2', error_offset = 2e-4)
"""
Explanation: Fitting
Residuals function
End of explanation
"""
iterations = 100
n_photons_scan = [int(1e4), int(3e4), int(7e4), int(2e5)]
const_gofs = []
for n_photons in n_photons_scan:
print(n_photons)
const_gofs.append([gof(verbose = False, mode='chi2', n_photons = n_photons) for _ in range(iterations)])
for gofs, n_photons, c in zip(const_gofs, n_photons_scan, ['blue', 'orange', 'green', 'red', 'black']):
plt.hist(gofs, label="%d" % n_photons, histtype='step', range=(0, 500), bins=100, color = c)
plt.axvline(np.mean(gofs), color = c)
plt.legend()
plt.show()
"""
Explanation: Statistics of nphotons and stability of fit
End of explanation
"""
for i in range(10):
plt.plot(gof(mode='res', error_offset = 0.))
for i in range(10):
plt.plot((gof(mode='res', error_offset = 0., error_pct = 0.1))**2)
def sigma_from_params(**params):
params = get_params(params)
# ydata, _ = real_s1_wv(**params)
minus, base, plus = s1_models_error(**params)
sigma = get_sigma(minus, base, plus, **params)
sigma = sigma[(spe_ts >= params['t_min']) & (spe_ts < params['t_max'])]
return sigma
plt.plot(1/sigma_from_params(error_pct = 5e-2, error_offset = 1e-3))
plt.ylim(0)
iterations = 250
n_photons_scan = [int(1e4), int(3e4), int(7e4), int(2e5)]
const_gofs = []
for n_photons in n_photons_scan:
print(n_photons)
    const_gofs.append([gof(verbose = False, mode='chi2', n_photons = n_photons,
                           error_pct = 1e-2, error_offset = 1e-4) for _ in range(iterations)])
for gofs, n_photons, c in zip(const_gofs, n_photons_scan, ['blue', 'orange', 'green', 'red', 'black']):
plt.hist(gofs / np.average(gofs), label="%d" % n_photons, histtype='step', range=(0, 2), bins=200, color = c)
plt.axvline(color = c)
plt.legend()
plt.show()
ydata, n_data_s1s = real_s1_wv()
minus, base, plus = s1_models_error()
# res = residuals(ydata, minus, base, plus)
comparison_plot_2(ydata, minus, base, plus, error_pct = 1e-2, error_offset = 1e-4, t_max= 125)
# plt.ylim(0, 2)
"""
Explanation: Wait, what? The spread of the residuals gets larger with increasing statistics? That does not sound right.
End of explanation
"""
from scipy import optimize
optresult = optimize.minimize(
lambda x: gof_simultaneous(fs_er=x[0], fs_nr=x[1], t3=x[2], tts=x[3], s1_min=30, s1_max = 100,
mode='chi2', error_offset = 1e-4),
[0.2, 0.3, 25., 2.],
    bounds=[[.01, 1], [.01, 1], [20, 30], [.1, 5]],
options=dict(maxfev=10000),
method='Powell',
)
print('Done')
# mode = mean, s1_min =30, s1_max = 100: [ 0.20968042, 0.28464569, 24.8145522 , 2.42197182]
# array([ 0.17916349, 0.32752012, 24.00000003, 1.03864494])
# array([ 0.18086791, 0.24823393, 24.23984679, 2.3384889 ]) 462.62128366264312
# array([ 0.19454366, 0.3126068 , 25.57424767, 2.38196603]) 484.92280858647905
x = optresult.x
def check_params(plot_type = 0, **params):
params = get_params(params)
ydata, _ = real_s1_wv(**params)
minus, base, plus = s1_models_error(**params)
if plot_type == 1:
comparison_plot(ydata, minus, base, plus, **params)
elif plot_type == 2:
comparison_plot_2(ydata, minus, base, plus, **params)
elif plot_type == 0:
comparison_plot(ydata, minus, base, plus, **params)
plt.show()
comparison_plot_2(ydata, minus, base, plus, **params)
return
x
optresult
check_params(s1_min = 30, s1_max = 100, dset='er', fs=x[0], t3 = x[2], tts=x[3], plot_type=0, error_offset = 1e-4)
plt.title('ER')
plt.show()
check_params(s1_min = 30, s1_max = 100, dset='nr', fs=x[1], t3 = x[2], tts=x[3], plot_type=0, error_offset = 1e-4)
plt.title('NR')
plt.show()
gofs = [gof_simultaneous(fs_er=x[0], fs_nr=x[1], t3=x[2], tts=x[3], s1_min=30, s1_max = 100,
mode='chi2', error_offset = 1e-4)
for _ in range(20)]
plt.hist(gofs)
"""
Explanation: Fit fit fit
End of explanation
"""
from scipy import optimize
optresult = optimize.minimize(
lambda x: gof(fs=x[0], tts=x[1], s1_min=30, s1_max = 100, error_pct = 1e-2, error_offset = 1e-4, mode='chi2_ndf'),
[0.2, 2],
    bounds=[[.01, 1], [.1, 5]],
options=dict(maxfev=1000),
method='Powell',
)
print('Done')
optresult
fit = optresult.x
print(fit)
ydata, _ = real_s1_wv()
minus, base, plus = s1_models_error(fs=fit[0], tts=fit[1], s1_min = 30, s1_max = 100,
error_pct = 1e-2, error_offset = 1e-4)
comparison_plot(ydata, minus, base, plus, error_pct = 1e-2, error_offset = 1e-4)
plt.show()
comparison_plot_2(ydata, minus, base, plus, error_pct = 1e-2, error_offset = 1e-4)
plt.show()
"""
Explanation: Fit singlet fraction and TTS
End of explanation
"""
from scipy import optimize
optresult = optimize.minimize(
lambda x: gof(fs=x[0], t3=x[1], tts=x[2], s1_min = 30, s1_max = 100,
error_pct = 0.5e-2, error_offset = 1e-5),
[0.2, 24, 3],
bounds=[[.01, 1], [20, 30], [.1, 5]],
options=dict(maxfev=1000),
method='Powell',
)
fit = optresult.x
print(fit)
ydata, _ = real_s1_wv()
minus, base, plus = s1_models_error(fs=fit[0], t3=fit[1], tts=fit[2], error_pct = 1e-2, error_offset = 1e-4)
comparison_plot(ydata, minus, base, plus, error_pct = 0.5e-2, error_offset = 1e-5)
plt.show()
comparison_plot_2(ydata, minus, base, plus, error_pct = 0.5e-2, error_offset = 1e-5)
plt.show()
def gof_v_parameter(parameter, variation_range, num, **params):
params_to_try = np.linspace(*variation_range, num=num)
gofs = []
for param_value in params_to_try:
params[parameter] = param_value
gofs.append(gof(**params))
return params_to_try, np.array(gofs)
def gof_v_2_paramters(parameter1, parameter2, variation_range1, variation_range2, num1, num2, **params):
import time
start = time.time()
params_to_try1 = np.linspace(*variation_range1, num=num1)
params_to_try2 = np.linspace(*variation_range2, num=num2)
gvd = []
for par1 in params_to_try1:
for par2 in params_to_try2:
params[parameter1] = par1
params[parameter2] = par2
gof_value = gof(**params)
gvd.append([par1, par2, gof_value])
stop = time.time()
print('Computation took %d seconds (%.1f s/it)' % ((stop - start), (stop - start) / len(gvd)))
return np.array(gvd)
nx = 20
ny = 20
ding = gof_v_2_paramters('fs', 't3', (0.16, 0.24), (23., 27.), nx, ny, tts=fit[2],
error_pct = 1e-2, error_offset = 1e-4, verbose=False)
plt.scatter(ding[:,0], ding[:,1], c=ding[:, 2])
plt.colorbar()
x = np.reshape(ding[:, 0], (nx, ny))
y = np.reshape(ding[:, 1], (nx, ny))
z = np.reshape(ding[:, 2], (nx, ny))
plt.pcolormesh(x, y, z/ np.min(z))
plt.colorbar()
# Same GOF grid as above, now drawn with explicit bin edges for the (fs, t3) scan
edge_x = np.linspace(0.16, 0.24, nx + 1)
edge_y = np.linspace(23., 27., ny + 1)
plt.figure()
ax = plt.gca()
pc = ax.pcolormesh(edge_x, edge_y, (z / np.min(z)).T, cmap='RdBu')
plt.colorbar(pc)
fss, gofs = gof_v_parameter('fs', (0.14, 0.24), 20, fs=fit[0], t3=fit[1], tts=fit[2], error_pct = 1e-2, error_offset = 1e-4)
plt.plot(fss, gofs, marker='.', markersize=5)
optresult_nr = optimize.minimize(
lambda x: gof(fs=x[0], t3=x[1], tts=x[2], dset = 'nr', error_pct = 1e-2, error_offset = 1e-4),
[0.2, 24, 3],
bounds=[[.01, 1], [20, 30], [.1, 5]],
options=dict(maxfev=1000),
method='Powell',
)
fit = optresult_nr.x
print(fit)
ydata, _ = real_s1_wv(dset='nr')
minus, base, plus = s1_models_error(fs=fit[0], t3=fit[1], tts=fit[2], dset='nr', error_pct = 1e-2, error_offset = 1e-4)
comparison_plot(ydata, minus, base, plus, error_pct = 1e-2, error_offset = 1e-4)
plt.show()
comparison_plot_2(ydata, minus, base, plus, error_pct = 1e-2, error_offset = 1e-4)
for _l in (-15, 125):
plt.axvline(_l)
plt.xlim(-50, 200)
plt.show()
plt.hist(xams_data['er']['s1'], bins=100, histtype='step', range=(50,100))
plt.hist(xams_data['nr']['s1'], bins=100, histtype='step', range=(50,100))
plt.show()
"""
Explanation: GOF uncertainty
Need higher stats?
Fit three parameters
End of explanation
"""
from scipy import optimize
optresult = optimize.minimize(
    lambda x: gof(fs=x[0], t1=x[1], t3=x[2], tts=x[3], s1_min=30, s1_max = 100, dset='er'),
[0.2, 3.1, 24, 3],
bounds=[[.01, 1], [.1, 5], [20, 30], [.1, 5]],
options=dict(maxfev=1000),
method='Powell',
)
# fit = optresult.x
# ydata, _ = real_s1_wv()
# minus, base, plus = s1_models_error(fs=fit[0], t1=fit[1], t3=fit[2], tts=fit[3])
# comparison_plot(ydata, minus, base, plus)
fit = optresult.x
print(fit)
ydata, _ = real_s1_wv()
minus, base, plus = s1_models_error(fs=fit[0], t1=fit[1], t3=fit[2], tts=fit[3], s1_min=30, s1_max = 100)
comparison_plot(ydata, minus, base, plus)
plt.show()
comparison_plot_2(ydata, minus, base, plus)
for _l in (-20, 100):
plt.axvline(_l)
plt.xlim(-50, 200)
plt.show()
"""
Explanation: Fit four parameters
ER
End of explanation
"""
from scipy import optimize
optresult = optimize.minimize(
    lambda x: gof(fs=x[0], t1=x[1], t3=x[2], tts=x[3], s1_min=30, s1_max = 100, dset='nr'),
[0.2, 3.1, 24, 3],
bounds=[[.01, 1], [.1, 5], [20, 30], [.1, 5]],
options=dict(maxfev=1000),
method='Powell',
)
fit = optresult.x
print(fit)
ydata, _ = real_s1_wv(s1_min=30, s1_max=100, dset='nr')
minus, base, plus = s1_models_error(fs=fit[0], t1=fit[1], t3=fit[2], tts=fit[3], s1_min=30, s1_max = 100, dset='nr')
comparison_plot(ydata, minus, base, plus)
plt.show()
comparison_plot_2(ydata, minus, base, plus)
for _l in (-20, 100):
plt.axvline(_l)
plt.xlim(-50, 200)
plt.show()
"""
Explanation: The fit is pushing the singlet lifetime to very low values... There is some degeneracy here, and also some mis-modeling, it seems. The sample at 0 is always underestimated. Why? Maybe because the tts is actually quite low but is modeled here as large. The effects may not be symmetric: there are many things causing a delay, but nothing causing a negative delay (a sketch of such an asymmetric spread follows below).
NR
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
machine-learning/remove_backgrounds.ipynb
|
mit
|
# Load libraries
import cv2
import numpy as np
from matplotlib import pyplot as plt
"""
Explanation: Title: Remove Backgrounds
Slug: remove_backgrounds
Summary: How to remove the backgrounds in images using OpenCV in Python.
Date: 2017-09-11 12:00
Category: Machine Learning
Tags: Preprocessing Images
Authors: Chris Albon
<a alt="grabcut" href="https://machinelearningflashcards.com">
<img src="remove_backgrounds/Grabcut_print.png" class="flashcard center-block">
</a>
Preliminaries
End of explanation
"""
# Load image
image_bgr = cv2.imread('images/plane_256x256.jpg')
"""
Explanation: Load Image
End of explanation
"""
# Convert to RGB
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
"""
Explanation: Convert To RGB
End of explanation
"""
# Rectangle values: start x, start y, width, height
rectangle = (0, 56, 256, 150)
"""
Explanation: Draw Rectangle Around Foreground
End of explanation
"""
# Create initial mask
mask = np.zeros(image_rgb.shape[:2], np.uint8)
# Create temporary arrays used by grabCut
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)
# Run grabCut
cv2.grabCut(image_rgb, # Our image
mask, # The Mask
rectangle, # Our rectangle
bgdModel, # Temporary array for background
            fgdModel, # Temporary array for foreground
5, # Number of iterations
            cv2.GC_INIT_WITH_RECT) # Initialize using our rectangle
# Create mask where sure and likely backgrounds set to 0, otherwise 1
mask_2 = np.where((mask==2) | (mask==0), 0, 1).astype('uint8')
# Multiply image with new mask to subtract background
image_rgb_nobg = image_rgb * mask_2[:, :, np.newaxis]
"""
Explanation: Apply GrabCut
End of explanation
"""
# Show image
plt.imshow(image_rgb_nobg), plt.axis("off")
plt.show()
"""
Explanation: Show image
End of explanation
"""
|
feststelltaste/software-analytics
|
notebooks/Checking the modularization based on changes (3D Version).ipynb
|
gpl-3.0
|
import pandas as pd
from sklearn.metrics.pairwise import cosine_distances
from sklearn.manifold import MDS
import numpy as np
from matplotlib import cm
from matplotlib.colors import rgb2hex
import ipyvolume as ipv
# read, filter and prepare data
git_log = pd.read_csv("https://git.io/Jez2h")
prod_code = git_log.copy()
prod_code = prod_code[prod_code.file.str.endswith(".java")]
prod_code = prod_code[prod_code.file.str.startswith("backend/src/main")]
prod_code = prod_code[~prod_code.file.str.endswith("package-info.java")]
prod_code['hit'] = 1
# pivot table to get a change vector per file
commit_matrix = prod_code.reset_index().pivot_table(
index='file',
columns='sha',
values='hit',
fill_value=0)
commit_matrix.iloc[0:5,50:55]
# calculate distance between files based on changes
dissimilarity_matrix = cosine_distances(commit_matrix)
# break down matrix to 3D representation
model = MDS(dissimilarity='precomputed', random_state=0, n_components=3)
dissimilarity_3d = model.fit_transform(dissimilarity_matrix)
# extract module names
dissimilarity_3d_df = pd.DataFrame(
dissimilarity_3d,
index=commit_matrix.index,
columns=["x", "y", "z"])
dissimilarity_3d_df['module'] = dissimilarity_3d_df.index.str.split("/").str[6]
dissimilarity_3d_df.head()
"""
Explanation: Introduction
In my previous blog post, we looked at the similarity within and across modules using only the change data of each source code file.
In this analysis, we use the same data analysis approach, but visualize the result in a 3D scatter plot.
Data Wrangling
We just repeat the steps explained in the mentioned blog post. The only difference is that we are going from a 2D representation of the distance matrix to a 3D representation.
End of explanation
"""
modules = dissimilarity_3d_df[['module']].drop_duplicates()
rgb_colors = [x for x in cm.Spectral(np.linspace(0,1,len(modules)))]
modules['color'] = rgb_colors
modules = modules.set_index("module", drop=True)
dissimilarity_3d_df['color'] = dissimilarity_3d_df['module'].map(modules['color'].to_dict())
dissimilarity_3d_df.head()
"""
Explanation: Visualization
So this part is new: We brew a color for each module.
End of explanation
"""
x = dissimilarity_3d_df['x']
y = dissimilarity_3d_df['y']
z = dissimilarity_3d_df['z']
color = dissimilarity_3d_df['color'].values.tolist()
ipv.quickscatter(x, y, z, color=color, size=7, marker="sphere")
"""
Explanation: And then, we visualize this data with ipyvolume.
End of explanation
"""
|
psas/liquid-engine-analysis
|
archive/electric_pump_calcs/pump_sizing.ipynb
|
gpl-3.0
|
import math as m
# propellant properties and physical constants
rho = 789 # fuel (ethanol) density [kg/m^3]
p_v = 8.84E3 # fuel (ethanol) vapor pressure [Pa]
g_0 = 9.81 # gravitational acceleration [m/s/s]
# rocket model (assuming sea-level operation)
isp = 246 * 0.90 # specific impulse (sea-level, optimistic estimate) [s]
f = 4.5E3 # thrust [N]
mdot_t = f/(g_0 * isp) # total mass flowrate [kg/s]
chamber_p= 380 * 6.895 * 1000 # chamber pressure (assumed to be 380 psi currently) [Pa]
loss_factor = 1.15 # estimate of line and injector losses
OF = 1.3 # mixture ratio
mdot_o = mdot_t / (1 + (1/OF)) # oxidizer mass flowrate [kg/s]
mdot_f = mdot_t / (1 + OF) # fuel mass flowrate [kg/s]
p_i = 101.3E3 # inlet pressure (currently 1 atm) [Pa]
delta_p = chamber_p * loss_factor - p_i # required pump discharge pressure [Pa]
#barleycorn conversions
g_0 = 32.2
delta_p = delta_p * 0.000145038
mdot = mdot_f * 2.20462
rho = rho * 0.062428
h_i = p_i * 0.000145038 * 144 / rho
h_v = p_v * 0.000145038 * 144 / rho
# derived parameters
q = mdot / rho # volumetric flowrate [f^3/s]
h_p = 144 * delta_p / rho # required head rise [ft]
npsh_a = h_i - h_v # Net Positive Suction Head (available) [ft]
print("mass flow rate")
print("%.3f" % mdot, "lbm/s ", "\n")
print("volumetric flow rate")
print("%.4f" % q, "ft^3/s", "\n")
print("required pressure head")
print("%.3f" % h_p, "ft", "\n")
print("Net Positive Suction Head Available")
print("%.3f" % npsh_a, "ft")
"""
Explanation: Electric Feed System Pump Size Estimates
We're exploring the use of an all-electric pump for the propellant feed system. What follows is a preliminary pump sizing and requirements analysis to guide further inquiry.
Design Approach
Turbomachines, contrary to popular belief, are not powered by black magic (largely). The process for determining pump requirements is a straightforward one. First we need to establish design goals. High efficiency, high performance, and minimum mass are desirable from a vehicle mass fraction standpoint. But designs that increase efficiency and decrease mass can also increase the pump-inlet pressure required to suppress cavitation, which can in turn lead to increases in propellant tank mass. We also do not wish to compromise design simplicity, reliability, operational life, and cost. In some sense these are hard tradeoffs, so we should first define the relative importance and emphasis of these design goals.
Design Inputs
Engine and vehicle requirements dictate the types, flow rates and pressure levels of the propellants that the pumps deliver to the engine thrust chamber. These are assumed to be given from an earlier and more fundamental analysis of vehicle design. The pumps must deliver propellant without requiring an inlet pressure greater than that allowed by other parts of the feed system. A pump operating under a specified condition (rotational speed and fluid angles of attack on internal surfaces) produces a constant volume flow rate and a constant head rise. To summarize, the (given) inputs are propellant mass flow rate, pump pressure rise, pump inlet pressure, and physical and derived quantities such as propellant density and vapor pressure, volume flow rate and NPSH. Volumetric flow rate, Q, is given by
$$
Q = \frac{\dot{m}}{\rho}
$$
where $\dot{m}$ is the mass flow rate, and $\rho$ is the fluid density.
The pump head rise is given by
$$
H_p = \frac{\Delta p_p}{g_0 \rho}
$$
where $\Delta p_p$ is the required rise in pressure and $g_0$ is the gravitational acceleration.
Available Net Positive Suction Head is given by
$$
NPSH_{available} = \frac{p_i - p_v}{g_0 \rho}
$$
Note that this formulation of $NPSH$ assumes that the elevation head (including the additional head due to acceleration of the launch vehicle) is small relative to the vapor pressure and tank pressure.
Also note that due to the preponderance of barleycorn units used in hydraulic engineering, this document will use them as well, both by convention and because most vendor data sheets are provided in that unit system.
Useful Physical Properties of Propellants
| | density $\rho$ [kg/m^3] | vapor pressure $p_v$ [Pa] |
|-------- | ----------------------- | ---------------------------|
| ethanol | 789 | 8.84E3 |
| LOX | 1141 | 5E6 |
End of explanation
"""
# number of stages
delta_p_s = 47E6 * 0.000145038 # estimated allowable pressure rise per stage [Pa]
n = int(m.ceil(delta_p/delta_p_s)) # number of stages
print("number of stages")
print(n)
"""
Explanation: A brief vendor search shows that there are no COTS pump solutions that even remotely approach satisfying our requirements with reasonable weights and price points. One possibility is to accept a lower chamber pressure, or to jack up the pressure at the pump inlet by pressurizing the propellant tanks. This has the advantage of making the required pump head more reasonable, and should additionally help suppress cavitation within the pump. The disadvantage is that it goes against the original point of using pumps, especially to enable thin and lightweight propellant tanks.
Design Outputs
Given the requirements and the design inputs there are six basic steps needed to size a pump.
Determine the number of stages.
Determine the pump rotational speed.
Determine the pump impeller tip speeds.
Determine the pump impeller entrance and exit tip diameters
Determine pump efficiency
Determine the shaft power required to drive the pump.
Determine Number of Stages
The number of stages is generally determined either by the next largest integer greater than the ratio of the pump-pressure rise to the maximum allowable stage-pressure rise, or by $NPSH_{required}$. If a centrifugal pump is used, the approximate limit on stage-pressure rise is 47 MPa for LOX and ethanol (citation needed). At 47 MPa the impeller tips become so thick that they restrict flow passages and develop large k-losses. We determine the number of stages from a simple rule of thumb
$$ \label{eqn:1}
n \geq \frac{\Delta p_p}{\Delta p_s}
$$
where $\Delta p_p$ is the required pressure rise and we assume $\Delta p_s$ to be 47MPa.
End of explanation
"""
from IPython.display import Image
Image(filename='specific_speed.png')
# pump rotational speed
psi = 1 # pump stage head coefficient (estimated)
u_t = psi * m.sqrt(2 * g_0 * h_p) # impeller vane tip speed [ft/s]
u_ss = 10E3 # suction specific speed
npsh_r = npsh_a * 0.8
n_r = (u_ss * npsh_r**0.75)/(21.2 * m.sqrt(q)) # pump rotational speed [RPM]
n_r_rad = n_r * 2 * m.pi / 60
n_s = (21.2* n_r * m.sqrt(q))/(h_p / n)**0.75 # pump specific speed
print("impeller tip speed")
print("%.2f" % u_t, "ft/s", "\n")
print("pump rotational speed")
print("%.2f" % n_r, "RPM", "\n")
print("pump specific speed")
print("%.2f" % n_s)
"""
Explanation: Determine Pump Rotational Speed
The next step is to estimate the impeller-tip speeds, rotational speed, and finally impeller sizes. Impeller tip speed is given by
$$ \label{eqn:2}
u_t = \psi \sqrt{\frac{2 g_0 H_p}{n}}
$$
where $\psi$ is the pump-stage head coefficient, which we can assume to have a numerical value between 0.4 and 1.5 (citation needed). As mentioned previously, we want the highest rotational speed to get the lowest pump mass and highest performance; however, there are practical constraints on things such as bearings. If there is no boost stage and there is a definite NPSH limit, the pump rotational speed in $rad/s$ can be determined by
$$ \label{eqn:3}
N_r = \frac{u_{ss}NPSH^{0.75}}{\sqrt{Q}}
$$
Suction-specific speed, $u_{ss}$, indicates the minimum net positive suction head (NPSH, inlet pressurization above the vapor pressure expressed in meters) at which a pump operating at rotational speed $N$ and volume flow rate $Q$ can operate without cavitation effects. $u_{ss}$ values fall between 5000 and 50,000 depending on the efficiency of the pump, with 7000 to 12,000 being typical. Lower numbers are more conservative. A large suction-specific speed indicates an ability to operate at low inlet pressures. Large values are obtained by using large diameters for the pump-inlet tip and small diameters for the inlet hub such that the inlet axial velocity head is minimized. Additionally, thin, gradually curved blades with sharp leading edges produce the minimum static pressure gradients on the blade surfaces. Inducers have both features, and are therefore widely used in pumping applications on rocket engines. The important thing is that $NPSH_{available}$ must be higher than $NPSH_{required}$ to avoid cavitation. $NPSH_{required}$ is determined on a pump-by-pump basis. This places the upper limit on impeller specific speed. If additional head is required by the pump, the propellant needs to be pre-pressurized before reaching the pump impeller, such as with the aforementioned inducer, with additional pump stages (e.g. 'booster' stages), or by pressurizing the propellant tank. As a historical note, the German V-2 oxygen tank was pressurized to 2.3 atm (autogenously) largely to suppress pump cavitation.
$$ \label{eqn:4}
N_s = \frac{N_r \sqrt{Q}}{(\frac{H_p}{n})^{0.75}}
$$
$N_s$ is the stage-specific speed, where $N_r$ is the pump's rotational speed and $n$ is the number of stages. Stage-specific speed, $N_s$, is a parameter that characterizes geometrically and hydrodynamically similar pumps. Centrifugal pumps are generally low-capacity/high-head devices and thus sit at the low specific-speed side of the spectrum. Each class of pump (radial, Francis, mixed, axial) lends itself to a particular range of specific speeds.
End of explanation
"""
# impeller diameters
l = 0.3 # hub to tip diameter ratio (assume 0.3)
phi = 1 # inducer flow coefficicent (assume 0.1 with inducer, 1.0 otherwise)
d_o = u_t * 2 / n_r_rad * 12 # impeller outer diameter [in]
d_i = (4 * q/(m.pi * phi * n_r_rad * (1 - l**2)))**(1/3) * 12 # impeller inner diameter [in]
print("impeller discharge diameter")
print("%.3f" % d_o, "in", "\n")
print("impeller inlet diameter")
print("%.4f" % d_i, "in")
"""
Explanation: It should be noted that the combination of flow rate, pressure rise and fluid density, together with the allowable range of speeds, places the pump at the very low end of the specific-speed range, where the predicted maximum attainable efficiencies for several types of centrifugal pumps are below those of positive-displacement pumps. It is not likely that an efficiency better than 50% is attainable; as such, several stages may be required for the desired capacity. Positive-displacement pumps may offer improved efficiency at the cost of less steady flow, noise, and much poorer reliability.
Additionally, it's difficult to extrapolate pump efficiencies available in the literature to such small high-head pumps, as the implicit assumption of geometric and hydrodynamic similarity may well be broken.
Impeller Diameters
Knowing the rotational speed, we can get the impeller diameters using
$$ \label{eqn:5}
D_o = \frac{u_t}{N_r}
$$
$$ \label{eqn:6}
D_i = \left( \frac{4Q}{\pi \phi N_r(1 - L^2)} \right)^{1/3}
$$
Where $D_o$ is the diameter of the impeller outlet ($m$), also called the discharge, $D_i$ is the diameter of the impeller inlet ($m$), $L$ is the ratio of the hub diameter to the tip diameter, and $\phi$ is the inducer-inlet flow coefficient. Minimizing the factors $\phi$ and $L$ minimizes the NPSH required by the pump. The inducer-inlet flow coefficient, $\phi$, is the tangent of the flow angle approaching the inducer's blade tip. It is one of the main design parameters used to maximize suction performance. Generally, lower values are better. Assume 0.1 if using an inducer; otherwise use 1.
End of explanation
"""
# shaft power
from numpy import interp
#from a table of efficiencies for radial impellers found in Munson et. al., 2009
def effic(n_s):
eta_t = [0.55, 0.80, 0.83, 0.85, 0.84, 0.82]
n_s_t = [500, 1000, 1500, 2000, 2500, 3000]
eta = interp(n_s, n_s_t, eta_t)
return eta
eff = 0.4 # a more realistic estimated efficiency
#eta = input('given an pump specific speed of {} and flow rate of {} m^3/s what is the pump efficiency (from a lookup table in Munson) '.format("%.3f" % n_s, "%.3f" % q))
def power(mdot, h_p, eta): # required shaft power [W], given a pump efficiency eta
    p_req = mdot * h_p / (eta * 0.738)
return p_req
print("pump efficiency")
print("%.2f" % float(100*effic(n_s)), "%", "\n")
print("required shaft power (optimistic)")
print("%.2f" % power(mdot, h_p, n_s), "W", "\n")
print("required shaft power (realistic)")
print("%.2f" % power(mdot,h_p, eff), "W")
"""
Explanation: Shaft Power
The required shaft power can be determined from
$$ \label{eqn:7}
P_{req} = \frac{g_0 \dot{m} H_p}{\eta_p}
$$
where $\eta_p$ is the pump efficiency and can be determined from the stage-specific speed and tabulated efficiency data.
End of explanation
"""
f_t = 50 #engine run time [s]
print("%.2f" % float(power(mdot,h_p, eff)*f_t/1000), "kJ") #Total stored energy [kJ]
#changed for illustrative purposes
"""
Explanation: It should be noted that 62% is very optimistic for any homebrew pump volute/inducer/impeller combination. We should count ourselves lucky if we hit 40% efficiency.
End of explanation
"""
|
adrn/thejoker
|
docs/examples/5-Calibration-offsets.ipynb
|
mit
|
import astropy.table as at
import astropy.units as u
from astropy.visualization.units import quantity_support
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import corner
import pymc3 as pm
import pymc3_ext as pmx
import exoplanet as xo
import exoplanet.units as xu
import arviz as az
import thejoker as tj
# set up a random number generator to ensure reproducibility
rnd = np.random.default_rng(seed=42)
"""
Explanation: If you have not already read it, you may want to start with the first tutorial: Getting started with The Joker.
Inferring calibration offsets between instruments
In addition to the default linear parameters (see Tutorial 1, or the documentation for JokerSamples.default()), The Joker allows adding linear parameters to account for possible calibration offsets between instruments. For example, there may be an absolute velocity offset between two spectrographs. Below we will demonstrate how to simultaneously infer and marginalize over a constant velocity offset between two simulated surveys of the same "star".
First, some imports we will need later:
End of explanation
"""
data = []
for filename in ['data-survey1.ecsv', 'data-survey2.ecsv']:
tbl = at.QTable.read(filename)
_data = tj.RVData.guess_from_table(tbl, t_ref=tbl.meta['t_ref'])
data.append(_data)
"""
Explanation: The data for our two surveys are stored in two separate ECSV files included with the documentation. We will load separate RVData instances for the two data sets and append these objects to a list of datasets:
End of explanation
"""
for d, color in zip(data, ['tab:blue', 'tab:red']):
_ = d.plot(color=color)
"""
Explanation: In the plot below, the two data sets are shown in different colors:
End of explanation
"""
with pm.Model() as model:
dv0_1 = xu.with_unit(pm.Normal('dv0_1', 0, 10),
u.km/u.s)
prior = tj.JokerPrior.default(
P_min=2*u.day, P_max=256*u.day,
sigma_K0=30*u.km/u.s,
sigma_v=100*u.km/u.s,
v0_offsets=[dv0_1])
"""
Explanation: To tell The Joker to handle additional linear parameters to account for offsets in absolute velocity, we must define a new parameter for the offset between survey 1 and survey 2 and specify a prior. Here we will assume a Gaussian prior on the offset, centered on 0, but with a 10 km/s standard deviation. We then pass this in to JokerPrior.default() (all other parameters here use the default prior) through the v0_offsets argument:
End of explanation
"""
prior_samples = prior.sample(size=1_000_000,
random_state=rnd)
joker = tj.TheJoker(prior, random_state=rnd)
joker_samples = joker.rejection_sample(data, prior_samples,
max_posterior_samples=128)
joker_samples
"""
Explanation: The rest should look familiar: The code below is identical to previous tutorials, in which we generate prior samples and then rejection sample with The Joker:
End of explanation
"""
_ = tj.plot_rv_curves(joker_samples, data=data)
"""
Explanation: Note that the new parameter, dv0_1, now appears in the returned samples above.
If we pass these samples in to the plot_rv_curves function, the data from other surveys is, by default, shifted by the mean value of the offset before plotting:
End of explanation
"""
_ = tj.plot_rv_curves(joker_samples, data=data,
apply_mean_v0_offset=False)
"""
Explanation: However, the above behavior can be disabled by setting apply_mean_v0_offset=False. Note that with this set, the inferred orbit will not generally pass through data that suffer from a measurable offset:
End of explanation
"""
with prior.model:
mcmc_init = joker.setup_mcmc(data, joker_samples)
trace = pmx.sample(
tune=500, draws=500,
start=mcmc_init,
cores=1, chains=2)
az.summary(trace, var_names=prior.par_names)
"""
Explanation: As introduced in the previous tutorial, we can also continue generating samples by initializing and running standard MCMC:
End of explanation
"""
mcmc_samples = joker.trace_to_samples(trace, data)
mcmc_samples.wrap_K()
df = mcmc_samples.tbl.to_pandas()
colnames = mcmc_samples.par_names
colnames.pop(colnames.index('s'))
_ = corner.corner(df[colnames])
"""
Explanation: Here the true offset is 4.8 km/s, so it looks like we recover this value!
A full corner plot of the MCMC samples:
End of explanation
"""
|
sangheestyle/ml2015project
|
howto/model25_using_acc_cat_for_users.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from utils import load_buzz, select, write_result
from features import featurize, get_pos
from containers import Questions, Users, Categories
from nlp import extract_entities
"""
Explanation: Model25: using category accuracy per users
End of explanation
"""
import pickle
questions = pickle.load(open('questions01.pkl', 'rb'))
users = pickle.load(open('users01.pkl', 'rb'))
categories = pickle.load(open('categories01.pkl', 'rb'))
set(users[0].keys()) - set(['cat_uid'])
from sklearn.preprocessing import normalize
wanted_user_items = list(set(users[0].keys()) - set(['cat_uid']))
X_pos_uid = users.select(wanted_user_items)
X_pos_qid = questions.select(['ave_pos_qid', 'acc_ratio_qid', 'ne_nor_mean', 'ne_mean', 'ne_median'])
X_pos_uid = normalize(X_pos_uid, norm='l1')
X_pos_qid = normalize(X_pos_qid, norm='l1')
print(X_pos_qid[0])
print(X_pos_uid[0])
from sklearn.cluster import KMeans
# Question category
n_components = 27
est = KMeans(n_clusters=n_components)
est.fit(X_pos_qid)
pred_cat_qid = est.predict(X_pos_qid)
plt.hist(pred_cat_qid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("Question Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
# User category
n_components = 27
est = KMeans(n_clusters=n_components)
est.fit(X_pos_uid)
pred_cat_uid = est.predict(X_pos_uid)
plt.hist(pred_cat_uid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("User Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
from collections import Counter
users.sub_append('cat_uid', {key: str(pred_cat_uid[i]) for i, key in enumerate(users.keys())})
questions.sub_append('cat_qid', {key: str(pred_cat_qid[i]) for i, key in enumerate(questions.keys())})
# to get most frequent cat for some test data which do not have ids in train set
most_pred_cat_uid = Counter(pred_cat_uid).most_common(1)[0][0]
most_pred_cat_qid = Counter(pred_cat_qid).most_common(1)[0][0]
print(most_pred_cat_uid)
print(most_pred_cat_qid)
"""
Explanation: KMeans
End of explanation
"""
def add_features(X):
for item in X:
# category
for key in categories[item['category']].keys():
item[key] = categories[item['category']][key]
uid = int(item['uid'])
qid = int(item['qid'])
# uid
if int(uid) in users:
item.update(users[uid])
else:
acc = users.select(['acc_ratio_uid'])
item['acc_ratio_uid'] = sum(acc) / float(len(acc))
item['cat_uid'] = most_pred_cat_uid
# qid
if int(qid) in questions:
item.update(questions[qid])
import pickle
questions = pickle.load(open('questions01.pkl', 'rb'))
users = pickle.load(open('users01.pkl', 'rb'))
categories = pickle.load(open('categories01.pkl', 'rb'))
from utils import load_buzz, select, write_result
from features import featurize, get_pos
from containers import Questions, Users, Categories
from nlp import extract_entities
import math
from collections import Counter
from numpy import abs, sqrt
from sklearn.linear_model import ElasticNetCV
from sklearn.cross_validation import ShuffleSplit, cross_val_score
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC
from sklearn.cluster import KMeans
wanted_user_items = list(set(users[0].keys()) - set(['cat_uid']))
X_pos_uid = users.select(wanted_user_items)
X_pos_qid = questions.select(['ave_pos_qid', 'acc_ratio_qid', 'ne_nor_mean', 'ne_mean', 'ne_median'])
X_pos_uid = normalize(X_pos_uid, norm='l1')
X_pos_qid = normalize(X_pos_qid, norm='l1')
tu = ('l1', 'n_uid_clust', 'n_qid_clust', 'rmse')
print ('=== Bench with ElasticNetCV: {0}, {1}, {2}, {3}'.format(*tu))
for ii in [27]:
n_uid_clu = ii
n_qid_clu = ii
# clustering for uid
uid_est = KMeans(n_clusters=n_uid_clu)
uid_est.fit(X_pos_uid)
pred_cat_uid = uid_est.predict(X_pos_uid)
# clustering for qid
qid_est = KMeans(n_clusters=n_qid_clu)
qid_est.fit(X_pos_qid)
pred_cat_qid = qid_est.predict(X_pos_qid)
users.sub_append('cat_uid', {key: str(pred_cat_uid[i]) for i, key in enumerate(users.keys())})
questions.sub_append('cat_qid', {key: str(pred_cat_qid[i]) for i, key in enumerate(questions.keys())})
# to get most frequent cat for some test data which do not have ids in train set
most_pred_cat_uid = Counter(pred_cat_uid).most_common(1)[0][0]
most_pred_cat_qid = Counter(pred_cat_qid).most_common(1)[0][0]
X_train, y_train = featurize(load_buzz(), group='train',
sign_val=None, extra=['sign_val', 'avg_pos'])
add_features(X_train)
unwanted_features = ['ne_tags', 'pos_token', 'question', 'sign_val', 'group']
wanted_features = list(set(X_train[1].keys()) - set(unwanted_features))
X_train = select(X_train, wanted_features)
vec = DictVectorizer()
X_train_dict_vec = vec.fit_transform(X_train)
X_new = X_train_dict_vec
#X_new = LinearSVC(C=0.01, penalty="l1", dual=False, random_state=50).fit_transform(X_train_dict_vec, y_train)
n_samples = X_new.shape[0]
cv = ShuffleSplit(n_samples, n_iter=5, test_size=0.2, random_state=50)
print("L1-based feature selection:", X_train_dict_vec.shape, X_new.shape)
for l1 in [0.7]:
scores = cross_val_score(ElasticNetCV(n_jobs=3, normalize=True, l1_ratio = l1),
X_new, y_train,
cv=cv, scoring='mean_squared_error')
rmse = sqrt(abs(scores)).mean()
print ('{0}, {1}, {2}, {3}'.format(l1, n_uid_clu, n_qid_clu, rmse))
"""
Explanation: B. Modeling
End of explanation
"""
X_test = featurize(load_buzz(), group='test', sign_val=None, extra=['avg_pos'])
add_features(X_test)
X_test = select(X_test, wanted_features)
unwanted_features = ['ne_tags', 'pos_token', 'question', 'sign_val', 'group']
wanted_features = list(set(X_train[1].keys()) - set(unwanted_features))
X_train = select(X_train, wanted_features)
X_train[0]
users[131]
categories['astronomy']
X_test[1]
vec = DictVectorizer()
vec.fit(X_train + X_test)
X_train = vec.transform(X_train)
X_test = vec.transform(X_test)
for l1_ratio in [0.7]:
print('=== l1_ratio:', l1_ratio)
regressor = ElasticNetCV(n_jobs=3, normalize=True, l1_ratio=l1_ratio, random_state=50)
regressor.fit(X_train, y_train)
print(regressor.coef_)
print(regressor.alpha_)
predictions = regressor.predict(X_test)
write_result(load_buzz()['test'], predictions, file_name=str(l1_ratio)+'guess_adj.csv', adj=True)
"""
Explanation: Original
=== Bench with ElasticNetCV: l1, n_uid_clust, n_qid_clust, rmse
L1-based feature selection: (28494, 1112) (28494, 1112)
0.7, 27, 27, 74.88480204218828
Without users features for regression
=== Bench with ElasticNetCV: l1, n_uid_clust, n_qid_clust, rmse
L1-based feature selection: (28494, 1112) (28494, 1112)
0.7, 27, 27, 74.94733641570902
Training and testing model
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_text_binary_classification.ipynb
|
apache-2.0
|
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex client library: Hyperparameter tuning text binary classification model
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_text_binary_classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_hyperparmeter_tuning_text_binary_classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to do hyperparameter tuning for a custom text binary classification model.
Dataset
The dataset used for this tutorial is the IMDB Movie Reviews from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts whether a review is positive or negative in sentiment.
Objective
In this notebook, you learn how to create a hyperparameter tuning job for a custom text binary classification model from a Python script in a docker container using the Vertex client library. You can alternatively hyperparameter tune models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex hyperparameter tuning job for training a custom model.
Tune the custom model.
Evaluate the study results.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
"""
Explanation: Install the latest GA version of the google-cloud-storage library as well.
End of explanation
"""
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import time
from google.cloud.aiplatform import gapic as aip
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
"""
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
"""
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
"""
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
"""
Explanation: Container (Docker) image
Next, we will set the Docker container images for training.
Set the variable TF to the TensorFlow version of the container image. For example, 2-1 would be version 2.1, and 1-15 would be version 1.15. The following list shows some of the pre-built images available:
TensorFlow 1.15
gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest
TensorFlow 2.4
gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest
XGBoost
gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1
Scikit-learn
gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest
Pytorch
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest
For the latest list, see Pre-built containers for training.
End of explanation
"""
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
"""
Explanation: Machine Type
Next, set the machine type to use for training.
Set the variable TRAIN_COMPUTE to configure the compute resources for the VMs you will use for training.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
"""
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
for client in clients.items():
print(client)
"""
Explanation: Tutorial
Now you are ready to start creating your own hyperparameter tuning and training of a custom text binary classification.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Job Service for hyperparameter tuning.
End of explanation
"""
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
"""
Explanation: Tuning a model - Hello World
There are two ways you can hyperparameter tune and train a custom model using a container image:
Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for hyperparameter tuning and training a custom model.
Use your own custom container image. If you use your own container, the container needs to contain your code for hyperparameter tuning and training a custom model.
Prepare your hyperparameter tuning job specification
Now that your clients are ready, your first step is to create a Job Specification for your hyperparameter tuning job. The job specification will consist of the following:
trial_job_spec: The specification for the custom job.
worker_pool_spec : The specification of the type of machine(s) you will use for hyperparameter tuning and how many (single or distributed)
python_package_spec : The specification of the Python package to be installed with the pre-built container.
study_spec: The specification for what to tune.
parameters: This is the specification of the hyperparameters that you will tune for the custom training job. It will contain a list of the hyperparameters to tune, each with its search space.
metrics: This is the specification on how to evaluate the result of each tuning trial.
Prepare your machine specification
Now define the machine specification for your custom hyperparameter tuning job. This tells Vertex what type of machine instance to provision for the hyperparameter tuning.
- machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.
- accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU.
- accelerator_count: The number of accelerators.
End of explanation
"""
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
"""
Explanation: Prepare your disk specification
(optional) Now define the disk specification for your custom hyperparameter tuning job. This tells Vertex what type and size of disk to provision in each machine instance for the hyperparameter tuning.
boot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
boot_disk_size_gb: Size of disk in GB.
End of explanation
"""
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_imdb.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
"""
Explanation: Define the worker pool specification
Next, you define the worker pool specification for your custom hyperparameter tuning job. The worker pool specification will consist of the following:
replica_count: The number of instances to provision of this machine type.
machine_spec: The hardware specification.
disk_spec : (optional) The disk storage specification.
python_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.
Let's dive deeper now into the python package specification:
-executor_image_uri: This is the docker image which is configured for your custom hyperparameter tuning job.
-package_uris: This is a list of the locations (URIs) of your python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the docker image.
-python_module: The Python module (script) to invoke for running the custom hyperparameter tuning job. In this example, you will be invoking trainer.task.py -- note that it was not necessary to append the .py suffix.
-args: The command line arguments to pass to the corresponding Python module. In this example, you will be setting:
- "--model-dir=" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the hyperparameter tuning script where to save the model artifacts:
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
- indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY" : The hyperparameter tuning distribution strategy to use for single or distributed hyperparameter tuning.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
End of explanation
"""
study_spec = {
"metrics": [
{
"metric_id": "val_accuracy",
"goal": aip.StudySpec.MetricSpec.GoalType.MAXIMIZE,
}
],
"parameters": [
{
"parameter_id": "lr",
"discrete_value_spec": {"values": [0.001, 0.01, 0.1]},
"scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
}
],
"algorithm": aip.StudySpec.Algorithm.GRID_SEARCH,
}
"""
Explanation: Create a study specification
Let's start with a simple study. You will just use a single parameter -- the learning rate. Since it's just one parameter, it doesn't make much sense to do a random search. Instead, we will do a grid search over a range of values.
metrics:
metric_id: In this example, the objective metric to report back is 'val_accuracy'
goal: In this example, the hyperparameter tuning service will evaluate trials to maximize the value of the objective metric.
parameters: The specification for the hyperparameters to tune.
parameter_id: The name of the hyperparameter that will be passed to the Python package as a command line argument.
scale_type: The scale type determines the resolution the hyperparameter tuning service uses when searching over the search space.
UNIT_LINEAR_SCALE: Uses a resolution that is the same everywhere in the search space.
UNIT_LOG_SCALE: Values close to the bottom of the search space are further away.
UNIT_REVERSE_LOG_SCALE: Values close to the top of the search space are further away.
search space: This is where you will specify the search space of values for the hyperparameter to select for tuning.
integer_value_spec: Specifies an integer range of values between a min_value and max_value.
double_value_spec: Specifies a continuous range of values between a min_value and max_value.
discrete_value_spec: Specifies a list of values.
algorithm: The search method for selecting hyperparameter values per trial:
GRID_SEARCH: Combinatorial (grid) search -- which is used in this example.
RANDOM_SEARCH: Random search.
End of explanation
"""
hpt_job = {
"display_name": JOB_NAME,
"trial_job_spec": {"worker_pool_specs": worker_pool_spec},
"study_spec": study_spec,
"max_trial_count": 6,
"parallel_trial_count": 1,
}
"""
Explanation: Assemble a hyperparameter tuning job specification
Now assemble the complete description for the custom hyperparameter tuning specification:
display_name: The human readable name you assign to this custom hyperparameter tuning job.
trial_job_spec: The specification for the custom hyperparameter tuning job.
study_spec: The specification for what to tune.
max_trial_count: The maximum number of tuning trials.
parallel_trial_count: How many trials to try in parallel; otherwise, they are done sequentially.
End of explanation
"""
# Make folder for Python hyperparameter tuning script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: IMDB Movie Reviews text binary classification\n\nVersion: 0.0.0\n\nSummary: Demostration hyperparameter tuning script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
"""
Explanation: Examine the hyperparameter tuning package
Package layout
Before you start the hyperparameter tuning, you will look at how a Python package is assembled for a custom hyperparameter tuning job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom hyperparameter tuning job. Note that when we referred to it in the worker pool specification, we replaced the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
"""
%%writefile custom/trainer/task.py
# HP Tuning hello world example
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
from hypertune import HyperTune
import argparse
import os
import sys
import time
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--model-dir',
dest='model_dir',
default='/tmp/saved_model',
type=str,
help='Model dir.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print(device_lib.list_local_devices())
# Instantiate the HyperTune reporting object
hpt = HyperTune()
for epoch in range(1, args.epochs+1):
# mimic metric result at the end of an epoch
acc = args.lr * epoch
# save the metric result to communicate back to the HPT service
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_accuracy',
metric_value=acc,
global_step=epoch)
print('epoch: {}, accuracy: {}'.format(epoch, acc))
time.sleep(1)
"""
Explanation: Task.py contents
In the next cell, you write the contents of the hyperparameter tuning script task.py. I won't go into detail; it's just there for you to browse. In summary:
Passes the hyperparameter values for a trial as a command line argument (parser.add_argument('--lr',...))
Mimics a training loop, where on each loop (epoch) the variable accuracy is set to the loop iteration * the learning rate.
Reports back the objective metric accuracy back to the hyperparameter tuning service using report_hyperparameter_tuning_metric().
End of explanation
"""
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_imdb.tar.gz
"""
Explanation: Store hyperparameter tuning script on your Cloud Storage bucket
Next, you package the hyperparameter tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
"""
def create_hyperparameter_tuning_job(hpt_job):
response = clients["job"].create_hyperparameter_tuning_job(
parent=PARENT, hyperparameter_tuning_job=hpt_job
)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_hyperparameter_tuning_job(hpt_job)
"""
Explanation: Reporting back the result of the trial using hypertune
For each trial, your Python script needs to report back to the hyperparameter tuning service the objective metric for which you specified as the criteria for evaluating the trial.
For this example, you will specify in the study specification that the objective metric will be reported back as val_accuracy.
You report back the value of the objective metric using HyperTune. This Python module is used to communicate key/value pairs to the hyperparameter tuning service. To setup this reporting in your Python package, you will add code for the following three steps:
Import the HyperTune module: from hypertune import HyperTune().
At the end of every epoch, write the current value of the objective function to the log as a key/value pair using hpt.report_hyperparameter_tuning_metric(). In this example, the parameters are:
hyperparameter_metric_tag: The name of the objective metric to report back. The name must be identical to the name specified in the study specification.
metric_value: The value of the objective metric to report back to the hyperparameter service.
global_step: The epoch iteration, starting at 0.
Hyperparameter Tune the model
Now start the hyperparameter tuning of your custom model on Vertex. Use this helper function create_hyperparameter_tuning_job, which takes the following parameter:
-hpt_job: The specification for the hyperparameter tuning job.
The helper function calls job client service's create_hyperparameter_tuning_job method, with the following parameters:
-parent: The Vertex location path to Dataset, Model and Endpoint resources.
-hyperparameter_tuning_job: The specification for the hyperparameter tuning job.
You will display a handful of the fields returned in response object, with the two that are of most interest are:
response.name: The Vertex fully qualified identifier assigned to this custom hyperparameter tuning job. You save this identifier for using in subsequent steps.
response.state: The current state of the custom hyperparameter tuning job.
End of explanation
"""
# The full unique ID for the hyperparameter tuning job
hpt_job_id = response.name
# The short numeric ID for the hyperparameter tuning job
hpt_job_short_id = hpt_job_id.split("/")[-1]
print(hpt_job_id)
"""
Explanation: Now get the unique identifier for the hyperparameter tuning job you created.
End of explanation
"""
def get_hyperparameter_tuning_job(name, silent=False):
response = clients["job"].get_hyperparameter_tuning_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_hyperparameter_tuning_job(hpt_job_id)
"""
Explanation: Get information on a hyperparameter tuning job
Next, use this helper function get_hyperparameter_tuning_job, which takes the following parameter:
name: The Vertex fully qualified identifier for the hyperparameter tuning job.
The helper function calls the job client service's get_hyperparameter_tuning_job method, with the following parameter:
name: The Vertex fully qualified identifier for the hyperparameter tuning job.
If you recall, you got the Vertex fully qualified identifier for the hyperparameter tuning job in the response.name field when you called the create_hyperparameter_tuning_job method, and saved the identifier in the variable hpt_job_id.
End of explanation
"""
while True:
job_response = get_hyperparameter_tuning_job(hpt_job_id, True)
if job_response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Study trials have not completed:", job_response.state)
if job_response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
print("Study trials have completed")
break
time.sleep(60)
"""
Explanation: Wait for tuning to complete
Hyperparameter tuning the above model may take upwards of 20 minutes.
Once your model is done tuning, you can calculate the actual time it took to tune the model by subtracting end_time from start_time.
For your model, you will need to know the location of the saved models for each trial, which the Python script saved in your Cloud Storage bucket at MODEL_DIR + '/<trial_number>/saved_model.pb'.
End of explanation
"""
best = (None, None, None, 0.0)
for trial in job_response.trials:
print(trial)
# Keep track of the best outcome
if float(trial.final_measurement.metrics[0].value) > best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
"""
Explanation: Review the results of the study
Now review the results of trials.
End of explanation
"""
print("ID", best[0])
print("Learning Rate", best[1])
print("Decay", best[2])
print("Validation Accuracy", best[3])
"""
Explanation: Best trial
Now look at which trial was the best:
End of explanation
"""
BEST_MODEL_DIR = MODEL_DIR + "/" + best[0] + "/model"
"""
Explanation: Get the Best Model
If you used the method of having the service tell the tuning script where to save the model artifacts (DIRECT = False), then the model artifacts for the best model are saved at:
MODEL_DIR/<best_trial_id>/model
End of explanation
"""
study_spec = {
"metrics": [
{"metric_id": "loss", "goal": aip.StudySpec.MetricSpec.GoalType.MAXIMIZE}
],
"parameters": [
{
"parameter_id": "lr",
"discrete_value_spec": {"values": [0.001, 0.01, 0.1]},
"scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
},
{
"parameter_id": "decay",
"double_value_spec": {"min_value": 1e-6, "max_value": 1e-2},
"scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
},
],
"algorithm": aip.StudySpec.Algorithm.RANDOM_SEARCH,
}
"""
Explanation: Tuning a model - IMDB Movie Reviews
Now that you have seen the overall steps for hyperparameter tuning a custom training job using a Python package that mimics training a model, you will create a new hyperparameter tuning job for a custom training job for an IMDB Movie Reviews model.
For this example, you will change two parts:
Specify the IMDB Movie Reviews custom hyperparameter tuning Python package.
Specify a study specification specific to the hyperparameters used in the IMDB Movie Reviews custom hyperparameter tuning Python package.
Create a study specification
In this study, you will tune for two hyperparameters using the random search algorithm:
learning rate: The search space is a set of discrete values.
learning rate decay: The search space is a continuous range between 1e-6 and 1e-2.
The objective (goal) is to maximize the validation accuracy.
You will run a maximum of six trials.
End of explanation
"""
hpt_job = {
"display_name": JOB_NAME,
"trial_job_spec": {"worker_pool_specs": worker_pool_spec},
"study_spec": study_spec,
"max_trial_count": 6,
"parallel_trial_count": 1,
}
"""
Explanation: Assemble a hyperparameter tuning job specification
Now assemble the complete description for the custom hyperparameter tuning specification:
display_name: The human readable name you assign to this custom hyperparameter tuning job.
trial_job_spec: The specification of the custom job to run for each tuning trial.
study_spec: The specification for what to tune.
max_trial_count: The maximum number of tuning trials.
parallel_trial_count: How many trials to try in parallel; otherwise, they are done sequentially.
End of explanation
"""
%%writefile custom/trainer/task.py
# Custom Training for IMDB Movie Reviews
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
from hypertune import HyperTune
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=1e-4, type=float,
help='Learning rate.')
parser.add_argument('--decay', dest='decay',
default=0.98, type=float,
help='Decay rate')
parser.add_argument('--units', dest='units',
default=64, type=int,
help='Number of units.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print(device_lib.list_local_devices())
# Preparing dataset
BUFFER_SIZE = 1000
BATCH_SIZE = 64
def make_datasets():
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,
as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
encoder = info.features['text'].encoder
padded_shapes = ([None],())
return train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE, padded_shapes), encoder
train_dataset, encoder = make_datasets()
# Build the Keras model
def build_and_compile_rnn_model(encoder):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(encoder.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(args.units)),
tf.keras.layers.Dense(args.units, activation='relu'),
tf.keras.layers.Dense(1)
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(learning_rate=args.lr, decay=args.decay),
metrics=['accuracy'])
return model
model = build_and_compile_rnn_model(encoder)
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
global hpt
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='loss',
metric_value=logs['loss'],
global_step=epoch)
model.fit(train_dataset, epochs=args.epochs, callbacks=[HPTCallback()])
model.save(args.model_dir)
"""
Explanation: Task.py contents
In the next cell, you write the contents of the hyperparameter tuning script task.py. We won't go into detail here; it's just there for you to browse. In summary, the script:
Parses the command line arguments for the hyperparameter settings for the current trial.
Gets the directory in which to save the model artifacts from the command line (--model-dir), and if not specified, then from the environment variable AIP_MODEL_DIR.
Loads the IMDB Movie Reviews dataset from TensorFlow Datasets (tfds).
Builds a simple RNN model using the TF.Keras model API.
Uses the learning rate and decay hyperparameter values when compiling the model, and the number of units hyperparameter value when building the dense and LSTM layers.
Compiles the model (compile()).
Defines a callback, HPTCallback, which obtains the loss at the end of each epoch (on_epoch_end()) and reports it to the hyperparameter tuning service using hpt.report_hyperparameter_tuning_metric().
Trains the model with the fit() method, specifying the callback that reports the loss back to the hyperparameter tuning service.
Saves the trained model (save(args.model_dir)) to the specified model directory.
End of explanation
"""
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_imdb.tar.gz
"""
Explanation: Store hyperparameter tuning script on your Cloud Storage bucket
Next, you package the hyperparameter tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
"""
def create_hyperparameter_tuning_job(hpt_job):
response = clients["job"].create_hyperparameter_tuning_job(
parent=PARENT, hyperparameter_tuning_job=hpt_job
)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_hyperparameter_tuning_job(hpt_job)
"""
Explanation: Reporting back the result of the trial using hypertune
For each trial, your Python script needs to report the objective metric that you specified as the criterion for evaluating the trial back to the hyperparameter tuning service.
For this example, you will specify in the study specification that the objective metric will be reported back as loss.
You report back the value of the objective metric using HyperTune. This Python module is used to communicate key/value pairs to the hyperparameter tuning service. To set up this reporting in your Python package, you will add code for the following three steps:
Import the HyperTune module: from hypertune import HyperTune.
Instantiate a HyperTune reporting object: hpt = HyperTune().
At the end of every epoch, write the current value of the objective function to the log as a key/value pair using hpt.report_hyperparameter_tuning_metric(). In this example, the parameters are:
hyperparameter_metric_tag: The name of the objective metric to report back. The name must be identical to the name specified in the study specification.
metric_value: The value of the objective metric to report back to the hyperparameter tuning service.
global_step: The epoch iteration, starting at 0.
Hyperparameter Tune the model
Now start the hyperparameter tuning of your custom model on Vertex. Use this helper function create_hyperparameter_tuning_job, which takes the following parameter:
hpt_job: The specification for the hyperparameter tuning job.
The helper function calls the job client service's create_hyperparameter_tuning_job method, with the following parameters:
parent: The Vertex location path to Dataset, Model and Endpoint resources.
hyperparameter_tuning_job: The specification for the hyperparameter tuning job.
You will display a handful of the fields returned in the response object; the two of most interest are:
response.name: The Vertex fully qualified identifier assigned to this custom hyperparameter tuning job. You save this identifier for use in subsequent steps.
response.state: The current state of the custom hyperparameter tuning job.
End of explanation
"""
# The full unique ID for the hyperparameter tuning job
hpt_job_id = response.name
# The short numeric ID for the hyperparameter tuning job
hpt_job_short_id = hpt_job_id.split("/")[-1]
print(hpt_job_id)
"""
Explanation: Now get the unique identifier for the hyperparameter tuning job you created.
End of explanation
"""
def get_hyperparameter_tuning_job(name, silent=False):
response = clients["job"].get_hyperparameter_tuning_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_hyperparameter_tuning_job(hpt_job_id)
"""
Explanation: Get information on a hyperparameter tuning job
Next, use this helper function get_hyperparameter_tuning_job, which takes the following parameter:
name: The Vertex fully qualified identifier for the hyperparameter tuning job.
The helper function calls the job client service's get_hyperparameter_tuning_job method, with the following parameter:
name: The Vertex fully qualified identifier for the hyperparameter tuning job.
If you recall, you got the Vertex fully qualified identifier for the hyperparameter tuning job in the response.name field when you called the create_hyperparameter_tuning_job method, and saved the identifier in the variable hpt_job_id.
End of explanation
"""
while True:
job_response = get_hyperparameter_tuning_job(hpt_job_id, True)
if job_response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Study trials have not completed:", job_response.state)
if job_response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
print("Study trials have completed")
break
time.sleep(60)
"""
Explanation: Wait for tuning to complete
Hyperparameter tuning the above model may take upwards of 20 minutes.
Once your model is done tuning, you can calculate the actual time it took to tune the model by subtracting end_time from start_time.
For your model, you will need to know the location of the saved models for each trial, which the Python script saved in your Cloud Storage bucket at MODEL_DIR + '/<trial_number>/saved_model.pb'.
End of explanation
"""
best = (None, None, None, 0.0)
for trial in job_response.trials:
print(trial)
# Keep track of the best outcome
if float(trial.final_measurement.metrics[0].value) > best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
"""
Explanation: Review the results of the study
Now review the results of trials.
End of explanation
"""
print("ID", best[0])
print("Learning Rate", best[1])
print("Decay", best[2])
print("Validation Accuracy", best[3])
"""
Explanation: Best trial
Now look at which trial was the best:
End of explanation
"""
BEST_MODEL_DIR = MODEL_DIR + "/" + best[0] + "/model"
"""
Explanation: Get the Best Model
If you used the method of having the service tell the tuning script where to save the model artifacts (DIRECT = False), then the model artifacts for the best model are saved at:
MODEL_DIR/<best_trial_id>/model
End of explanation
"""
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
"""
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
"""
import tensorflow_datasets as tfds
dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
test_dataset = dataset["test"]
encoder = info.features["text"].encoder
BATCH_SIZE = 64
padded_shapes = ([None], ())
test_dataset = test_dataset.padded_batch(BATCH_SIZE, padded_shapes)
"""
Explanation: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the IMDB Movie Review test (holdout) data from tfds.datasets, using the method load(). This will return the dataset as a tuple of two elements. The first element is the dataset and the second is information on the dataset, which will contain the predefined vocabulary encoder. The encoder will convert words into a numerical embedding, which was pretrained and used in the custom training script.
When you trained the model, you needed to set a fixed input length for your text. For forward-feeding batches, the padded_batch() method of the corresponding tf.data.Dataset was used to pad each input sequence to the same shape within a batch.
For the test data, you also need to call the padded_batch() method accordingly.
End of explanation
"""
model.evaluate(test_dataset)
"""
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
"""
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
catalystcomputing/DSIoT-Python-sessions
|
Session4/code/01 Loading EPOS Category Data for modelling.ipynb
|
apache-2.0
|
# Imports
from sklearn import metrics
from sklearn.tree import DecisionTreeClassifier
import pandas as pd
# Training Data
training_raw = pd.read_table("../data/training_data.dat")
df_training = pd.DataFrame(training_raw)
df_training.head()
# test Data
test_raw = pd.read_table("../data/test_data.dat")
df_test = pd.DataFrame(test_raw)
df_test.head()
# target names
target_categories = ['Unclassified','Art','Aviation','Boating','Camping /Walking /Climbing','Collecting']
target_values = ['1','528','529','530','531','532']
# features
feature_names = ['Barcode','Description','UnitRRP']
# Extract features from panda
training_data = df_training[feature_names].values
training_data[:3]
# Extract target results from panda
target = df_training["CategoryID"].values
# Create classifier class
model_dtc = DecisionTreeClassifier()
# train model
model_dtc.fit(training_data, target)
"""
Explanation: EPOS Data Set composition
Terms used for columns in the data
Barcode: https://en.wikipedia.org/wiki/Barcode
Description: product description
UnitRRP: Products recommended retail price/selling price
CategoryID: Surrogate key for Category https://en.wikipedia.org/wiki/Surrogate_key
Category: Human readable product categorisation
Data files
training_data.dat
Training data 526 data items with 6 categories.
test_data.dat
Test data 191 data items with 6 categories.
End of explanation
"""
# features
feature_names_integers = ['Barcode','UnitRRP']
# Extra features from panda (without description)
training_data_integers = df_training[feature_names_integers].values
training_data_integers[:3]
# train model again
model_dtc.fit(training_data_integers, target)
# Extract test data and test the model
test_data_integers = df_test[feature_names_integers].values
test_target = df_test["CategoryID"].values
expected = test_target
predicted_dtc = model_dtc.predict(test_data_integers)
print(metrics.classification_report(expected, predicted_dtc, target_names=target_categories))
print(metrics.confusion_matrix(expected, predicted_dtc))
metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None)
predicted_dtc[:5]
"""
Explanation: We fail here because the description column is a string.
Let's try again without the description.
End of explanation
"""
from sklearn.linear_model import SGDClassifier
# Create classifier class
model_sgd = SGDClassifier()
# train model again
model_sgd.fit(training_data_integers, target)
predicted_sgd = model_sgd.predict(test_data_integers)
print(metrics.classification_report(expected, predicted_sgd, target_names=target_categories))
print(metrics.confusion_matrix(expected, predicted_sgd))
metrics.accuracy_score(expected, predicted_sgd, normalize=True, sample_weight=None)
"""
Explanation: Let's try a different classifier
Linear classifiers (SVM, logistic regression, among others) with SGD training.
End of explanation
"""
|
WNoxchi/Kaukasos
|
FADL1/keras_lesson1.ipynb
|
mit
|
%reload_ext autoreload
%autoreload 2
%matplotlib inline
PATH = "data/dogscats/"
sz = 224
batch_size=64
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras.layers import Dropout, Flatten, Dense
from keras.models import Model, Sequential
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K
train_data_dir = f'{PATH}train'
valid_data_dir = f'{PATH}valid'
"""
Explanation: keras_lesson1.ipynb -- CodeAlong of fastai/courses/dl1/keras_lesson1.ipynb
Wayne H Nixalo
Using TensorFlow backend
# pip install tensorflow-gpu keras
Introduction to our first task: 'Dogs vs Cats'
End of explanation
"""
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2,
zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(train_data_dir,
target_size=(sz,sz),
batch_size=batch_size,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(valid_data_dir,
shuffle=False,
target_size=(sz,sz),
batch_size=batch_size,
class_mode='binary')
"""
Explanation: Data Augmentation parameters copied from fastai --> copied from Keras Docs
Instead of creating a single data object, in Keras we have to define a data generator to specify how to generate the data. We have to tell it what kind of data augmentation, and what kind of normalization.
Generally, copy-pasting Keras code from the internet works.
Keras uses the same directory structure as FastAI.
2 possible outcomes: class_mode='binary'. Multiple: 'categorical'
In Keras you have to specify a data generator without augmentation for the testing set.
Important to NOT shuffle the validation set, or else accuracy tracking can't be done.
End of explanation
"""
from keras.applications.resnet50 import ResNet50
base_model = ResNet50(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
"""
Explanation: In Keras you have to manually specify the base model and construct the layers you want to add on top of it.
End of explanation
"""
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers: layer.trainable = False
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
"""
Explanation: Specify the model. There's no concept of automatically freezing layers in Keras, so you have to loop through the layers you want to freeze and set .trainable=False
Keras also has a concept of compiling a model, which doesn't exist in FastAI / PyTorch
End of explanation
"""
%%time
model.fit_generator(train_generator, train_generator.n // batch_size, epochs=3, workers=4,
validation_data=validation_generator, validation_steps=validation_generator.n // batch_size)
"""
Explanation: Keras expects to be told how many batches there are per epoch. num_batches = size of generator / batch_size
Keras also defaults to zero workers. For good speed: include num workers.
End of explanation
"""
split_at = 140
for layer in model.layers[:split_at]: layer.trainable = False
for layer in model.layers[split_at:]: layer.trainable = True
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
%%time
model.fit_generator(train_generator, train_generator.n // batch_size, epochs=1,
validation_data=validation_generator, validation_steps=validation_generator.n // batch_size)
"""
Explanation: There isn't a concept of differential learning rates, layer groups, or partial unfreezing in Keras, so you'll have to decide manually. In this case: print out the layers to take a look, and unfreeze from layer 140 onwards. You'll have to recompile the model after this.
End of explanation
"""
|
jasontlam/snorkel
|
test/learning/test_TF_notebook.ipynb
|
apache-2.0
|
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
os.environ['SNORKELDB'] = 'sqlite:///{0}{1}crowdsourcing.db'.format(os.getcwd(), os.sep)
from snorkel import SnorkelSession
session = SnorkelSession()
"""
Explanation: Testing TFNoiseAwareModel
We'll start by testing the textRNN model on a categorical problem from tutorials/crowdsourcing. In particular we'll test for (a) basic performance and (b) proper construction / re-construction of the TF computation graph both after (i) repeated notebook calls, and (ii) with GridSearch in particular.
End of explanation
"""
from snorkel.models import candidate_subclass
from snorkel.contrib.models.text import RawText
Tweet = candidate_subclass('Tweet', ['tweet'], cardinality=5)
train_tweets = session.query(Tweet).filter(Tweet.split == 0).order_by(Tweet.id).all()
len(train_tweets)
from snorkel.annotations import load_marginals
train_marginals = load_marginals(session, train_tweets, split=0)
train_marginals.shape
"""
Explanation: Load candidates and training marginals
End of explanation
"""
# Simple unigram featurizer
def get_unigram_tweet_features(c):
for w in c.tweet.text.split():
yield w, 1
# Construct feature matrix
from snorkel.annotations import FeatureAnnotator
featurizer = FeatureAnnotator(f=get_unigram_tweet_features)
%time F_train = featurizer.apply(split=0)
F_train
%time F_test = featurizer.apply_existing(split=1)
F_test
from snorkel.learning import LogisticRegression
model = LogisticRegression(cardinality=Tweet.cardinality)
model.train(F_train.todense(), train_marginals)
"""
Explanation: Train LogisticRegression
End of explanation
"""
from snorkel.learning import SparseLogisticRegression
model = SparseLogisticRegression(cardinality=Tweet.cardinality)
model.train(F_train, train_marginals, n_epochs=50, print_freq=10)
import numpy as np
test_labels = np.load('crowdsourcing_test_labels.npy')
acc = model.score(F_test, test_labels)
print(acc)
assert acc > 0.6
# Test with batch size s.t. N % batch_size == 1...
model.score(F_test, test_labels, batch_size=9)
"""
Explanation: Train SparseLogisticRegression
Note: Testing doesn't currently work with LogisticRegression above, but no real reason to use that over this...
End of explanation
"""
from snorkel.learning import TextRNN
test_tweets = session.query(Tweet).filter(Tweet.split == 1).order_by(Tweet.id).all()
train_kwargs = {
'dim': 100,
'lr': 0.001,
'n_epochs': 25,
'dropout': 0.2,
'print_freq': 5
}
lstm = TextRNN(seed=123, cardinality=Tweet.cardinality)
lstm.train(train_tweets, train_marginals, X_dev=test_tweets, Y_dev=test_labels, **train_kwargs)
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc > 0.60
# Test with batch size s.t. N % batch_size == 1...
lstm.score(test_tweets, test_labels, batch_size=9)
"""
Explanation: Train basic LSTM
With dev set scoring during execution (note we use test set here to be simple)
End of explanation
"""
from snorkel.learning.utils import GridSearch
# Searching over learning rate
param_ranges = {'lr': [1e-3, 1e-4], 'dim': [50, 100]}
model_class_params = {'seed' : 123, 'cardinality': Tweet.cardinality}
model_hyperparams = {
'dim': 100,
'n_epochs': 20,
'dropout': 0.2,
'print_freq': 10
}
searcher = GridSearch(TextRNN, param_ranges, train_tweets, train_marginals,
model_class_params=model_class_params,
model_hyperparams=model_hyperparams)
# Use test set here (just for testing)
lstm, run_stats = searcher.fit(test_tweets, test_labels)
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc > 0.60
"""
Explanation: Run GridSearch
End of explanation
"""
lstm = TextRNN(seed=123, cardinality=Tweet.cardinality)
lstm.load('TextRNN_best', save_dir='checkpoints/grid_search')
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc > 0.60
"""
Explanation: Reload saved model outside of GridSearch
End of explanation
"""
lstm.load('TextRNN_0', save_dir='checkpoints/grid_search')
acc = lstm.score(test_tweets, test_labels)
print(acc)
assert acc < 0.60
"""
Explanation: Reload a model with different structure
End of explanation
"""
from snorkel.annotations import load_label_matrix
import numpy as np
L_train = load_label_matrix(session, split=0)
train_labels = np.load('crowdsourcing_train_labels.npy')
from snorkel.learning import GenerativeModel
# Searching over number of epochs
searcher = GridSearch(GenerativeModel, {'epochs': [0, 10, 30]}, L_train)
# Use training set labels here (just for testing)
gen_model, run_stats = searcher.fit(L_train, train_labels)
acc = gen_model.score(L_train, train_labels)
print(acc)
assert acc > 0.97
"""
Explanation: Testing GenerativeModel
Testing GridSearch on crowdsourcing data
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/csiro-bom/cmip6/models/sandbox-1/atmoschem.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-1', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: CSIRO-BOM
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:55
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmopsheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
anoopsarkar/nlp-class-hw
|
chunker/default.ipynb
|
apache-2.0
|
from default import *
import os
"""
Explanation: chunker: default program
End of explanation
"""
chunker = LSTMTagger(os.path.join('data', 'train.txt.gz'), os.path.join('data', 'chunker'), '.tar')
decoder_output = chunker.decode('data/input/dev.txt')
"""
Explanation: Run the default solution on dev
End of explanation
"""
flat_output = [ output for sent in decoder_output for output in sent ]
import conlleval
true_seqs = []
with open(os.path.join('data','reference','dev.out')) as r:
for sent in conlleval.read_file(r):
true_seqs += sent.split()
conlleval.evaluate(true_seqs, flat_output)
"""
Explanation: Evaluate the default output
End of explanation
"""
|
missmoss/python-scraper
|
google_places_scraper.ipynb
|
mit
|
import json #for reading oauth info and save the results
import io
from googleplaces import GooglePlaces, types, lang
from pprint import pprint
with io.open('google_places_key.json') as cred:
creds = json.load(cred)
google_places = GooglePlaces(**creds)
"""
Explanation: Prepare the connection
Apply for a Google Places Web Service API key on the Google Developers Console
Save the key as a json file in the same folder as the python script
Import the modules we need and set up a credential connection to the Google Places API
(If you didn't install the 'googleplaces' module before, go to the terminal and type pip install https://github.com/slimkrazy/python-google-places/zipball/master to install the module)
End of explanation
"""
query_result = google_places.nearby_search(
lat_lng = {'lat': 42.3555885, 'lng': -71.0646816}, rankby = 'distance', types = [types.TYPE_FOOD])
"""
Explanation: Get data from API
Every account can excuete 1,000 API calls within 24 hours. The response limit of every search is 20. So again we have to narrow down the search criteria to get more data. Let's begin with the geometry of Boston Downtown Crossing.
For the parameters you can set up in the following query, go to Documentation
End of explanation
"""
if query_result.raw_response:
print 'status: ' + query_result.raw_response['status']
print 'next_page_token: ' + query_result.raw_response['next_page_token']
print 'number of results: ' + str(len(query_result.raw_response['results']))
"""
Explanation: Then we check whether we got any results from the API and print some information to the screen.
End of explanation
"""
for place in query_result.places:
pprint(vars(place)) #only get geo_location, icon, id, name, place_id, rating, types, vicinty
# The following method has to make a further API call.
place.get_details() #get more details including phone_number, opening_hours, photos, reviews ... etc
pprint(vars(place))
break #Here I break when we finish the first place since 20 reesults are too long.
"""
Explanation: The response from the API above contains a lot of information:
1. A 'next_page_token': Assign this token to the parameter 'pagetoken' in your next search and you'll get the following 20 results of your previous search.
2. Your 'results' of places: These are the data we want. Only a few basic fields are included here, so we have to get more details in the following steps.
Look into details of the data
We use vars(object) to expand all the information in the object:
To see all the response values and their definitions in places, go to Google Places Search Results
End of explanation
"""
results = []
#Put your lantitude and longtitude pairs in the list and run the search in turns
lat_lng_list = [{'lat': 2.356357, 'lng': -71.0623345}, #Park Street Station
{'lat': 42.356357, 'lng': -71.0623345}, #China Town Station
{'lat': 42.3555885, 'lng': -71.0646816}] #Downtown Crossing Station
for pair in lat_lng_list:
query_result = google_places.nearby_search(
lat_lng = pair, rankby = 'distance', types = [types.TYPE_FOOD])
for place in query_result.places:
place.get_details()
tmp = vars(place)
results.append(tmp)
with open('my_boston_restaurants_google_places.json', 'wb') as f:
results_json = json.dumps(results, indent=4, skipkeys=True, sort_keys=True)
f.write(results_json)
"""
Explanation: Scrape the data and save them to json files
Let's start to collect the data and save them to files for further usage. We list all the criteria we want to search for, go through them one by one, append the results to a list, and save the list as a json file.
End of explanation
"""
|
kubeflow/pipelines
|
samples/core/lightweight_component/lightweight_component.ipynb
|
apache-2.0
|
# Install the SDK
!pip3 install 'kfp>=0.1.31.2' --quiet
import kfp.deprecated as kfp
import kfp.deprecated.components as components
"""
Explanation: Lightweight python components
Lightweight python components do not require you to build a new container image for every code change.
They're intended to be used for fast iteration in a notebook environment.
Building a lightweight python component
To build a component just define a stand-alone python function and then call kfp.components.func_to_container_op(func) to convert it to a component that can be used in a pipeline.
There are several requirements for the function:
* The function should be stand-alone. It should not use any code declared outside of the function definition. Any imports should be added inside the main function. Any helper functions should also be defined inside the main function.
* The function can only import packages that are available in the base image. If you need to import a package that's not available you can try to find a container image that already includes the required packages. (As a workaround you can use the module subprocess to run pip install for the required package. There is an example below in my_divmod function.)
* If the function operates on numbers, the parameters need to have type hints. Supported types are [int, float, bool]. Everything else is passed as string.
* To build a component with multiple output values, use the typing.NamedTuple type hint syntax: NamedTuple('MyFunctionOutputs', [('output_name_1', type), ('output_name_2', float)])
End of explanation
"""
#Define a Python function
def add(a: float, b: float) -> float:
'''Calculates sum of two arguments'''
return a + b
"""
Explanation: Simple function that just add two numbers:
End of explanation
"""
add_op = components.create_component_from_func(add)
"""
Explanation: Convert the function to a pipeline operation
End of explanation
"""
#Advanced function
#Demonstrates imports, helper functions and multiple outputs
from typing import NamedTuple
def my_divmod(dividend: float, divisor:float) -> NamedTuple('MyDivmodOutput', [('quotient', float), ('remainder', float), ('mlpipeline_ui_metadata', 'UI_metadata'), ('mlpipeline_metrics', 'Metrics')]):
'''Divides two numbers and calculate the quotient and remainder'''
#Imports inside a component function:
import numpy as np
#This function demonstrates how to use nested functions inside a component function:
def divmod_helper(dividend, divisor):
return np.divmod(dividend, divisor)
(quotient, remainder) = divmod_helper(dividend, divisor)
from tensorflow.python.lib.io import file_io
import json
# Exports a sample tensorboard:
metadata = {
'outputs' : [{
'type': 'tensorboard',
'source': 'gs://ml-pipeline-dataset/tensorboard-train',
}]
}
# Exports two sample metrics:
metrics = {
'metrics': [{
'name': 'quotient',
'numberValue': float(quotient),
},{
'name': 'remainder',
'numberValue': float(remainder),
}]}
from collections import namedtuple
divmod_output = namedtuple('MyDivmodOutput', ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
return divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))
"""
Explanation: A slightly more advanced function that demonstrates how to use imports and helper functions, and how to produce multiple outputs.
End of explanation
"""
my_divmod(100, 7)
"""
Explanation: Test running the python function directly
End of explanation
"""
divmod_op = components.create_component_from_func(my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')
"""
Explanation: Convert the function to a pipeline operation
You can specify an alternative base container image (the image needs to have Python 3.5+ installed).
End of explanation
"""
import kfp.deprecated.dsl as dsl
@dsl.pipeline(
name='calculation-pipeline',
description='A toy pipeline that performs arithmetic calculations.'
)
def calc_pipeline(
a=7,
b=8,
c=17,
):
#Passing pipeline parameter and a constant value as operation arguments
add_task = add_op(a, 4) #Returns a dsl.ContainerOp class instance.
#Passing a task output reference as operation arguments
#For an operation with a single return value, the output reference can be accessed using `task.output` or `task.outputs['output_name']` syntax
divmod_task = divmod_op(add_task.output, b)
#For an operation with a multiple return values, the output references can be accessed using `task.outputs['output_name']` syntax
result_task = add_op(divmod_task.outputs['quotient'], c)
"""
Explanation: Define the pipeline
Pipeline function has to be decorated with the @dsl.pipeline decorator
End of explanation
"""
#Specify pipeline argument values
arguments = {'a': 7, 'b': 8}
#Submit a pipeline run
kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
# Run the pipeline on a separate Kubeflow Cluster instead
# (use if your notebook is not running in Kubeflow - e.x. if using AI Platform Notebooks)
# kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
#vvvvvvvvv This link leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)
"""
Explanation: Submit the pipeline for execution
End of explanation
"""
|
amitkaps/hackermath
|
Module_3b_principal_component_analysis.ipynb
|
mit
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (10, 6)
"""
Explanation: Principal Component Analysis (PCA)
Key Equation: $Ax = \lambda x ~~ \text{for a symmetric} ~~ n \times n ~~ \text{matrix} ~ A$
PCA is an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by some projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. This is an unsupervised learning technique, which means we don't have a target variable.
End of explanation
"""
np.random.seed(123)
a = np.arange(12, 56, 0.5)
e = np.random.normal(0, 100, a.size)
b = 500 + 20*a + e
X = np.c_[a,b]
def plot2var (m, xlabel, ylabel):
x = m[:,0]
y = m[:,1]
fig, ax = plt.subplots(figsize=(6, 6))
plt.scatter(x, y, s = 40, alpha = 0.8)
sns.rugplot(x, color="m", ax=ax)
sns.rugplot(y, color="m", vertical=True, ax=ax)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plot2var(X, 'a', 'b')
"""
Explanation: From 2 Dimensions to 1 Dimension
Let us generate a two-variable data set - $a,b$
$$ b = 500 + 20a + \epsilon$$
End of explanation
"""
X_mean = np.mean(X, axis=0)
X_mean
X_sd = np.std(X, axis=0)
X_sd
X_std = np.subtract(X, X_mean) / X_sd
def plot2var_std (m, xlabel, ylabel):
x = m[:,0]
y = m[:,1]
fig, ax = plt.subplots(figsize=(6, 6))
plt.scatter(x, y, s = 40, alpha = 0.8)
sns.rugplot(x, color="m", ax=ax)
sns.rugplot(y, color="m", vertical=True, ax=ax)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.xlim([-3,3])
plt.ylim([-3,3])
plot2var_std(X_std, "a", "b")
"""
Explanation: Standardizing the Variables
Centering and scaling the variables (remove the mean and divide by the standard deviation)
End of explanation
"""
cov_mat_2var = np.cov(X_std.T)
cov_mat_2var
"""
Explanation: Calculate the Covariance Matrix
End of explanation
"""
eigen_val_2var, eigen_vec_2var = np.linalg.eig(cov_mat_2var)
eigen_val_2var
eigen_vec_2var
eigen_vec_2var[1].dot(eigen_vec_2var[0])
"""
Explanation: So now this is the symmetric matrix $A$ whose eigenvalue problem we want to solve
$$ Ax = \lambda x $$
where
$$ A = \begin{bmatrix} 1.01 & -0.92 \\ -0.92 & 1.01 \end{bmatrix} $$
Get Eigenvectors and Eigenvalues
Let's get the eigenvectors for this matrix
End of explanation
"""
def plot2var_eigen (m, xlabel, ylabel):
x = m[:,0]
y = m[:,1]
fig, ax = plt.subplots(figsize=(6, 6))
plt.scatter(x, y, s = 40, alpha = 0.8)
sns.rugplot(x, color="m", ax=ax)
sns.rugplot(y, color="m", vertical=True, ax=ax)
cov_mat = np.cov(m.T)
eigen_val, eigen_vec = np.linalg.eig(cov_mat)
plt.quiver(eigen_vec[0, 0], eigen_vec[0, 1], angles='xy', scale_units='xy', scale=1, color='brown')
plt.quiver(eigen_vec[1, 0], eigen_vec[1, 1], angles='xy', scale_units='xy', scale=1, color='brown')
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.xlim(-3,3)
plt.ylim(-3,3)
plot2var_eigen(X_std, 'a' ,'b')
"""
Explanation: So our eigenvectors and eigenvalues are:
$$ \lambda_1 = 1.93, \lambda_2 = 0.09 $$
$$ \vec{v_1} = \begin{bmatrix} 0.707 \\ -0.707\end{bmatrix} $$
$$ \vec{v_2} = \begin{bmatrix} 0.707 \\ 0.707\end{bmatrix} $$
These are orthogonal to each other. Let us plot these eigenvectors to see them.
End of explanation
"""
eigen_vec_2var
X_std.T.shape
eigen_vec_2var.shape
X_proj = eigen_vec_2var.dot(X_std.T)
plot2var_eigen(X_proj.T, 'pca1' ,'pca2')
"""
Explanation: Projection Matrix
Let us project our original values to see the new results
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X_std)
X_pca_proj = pca.transform(X_std)
plot2var_std(X_pca_proj, 'pca1', 'pca2')
pca.explained_variance_
"""
Explanation: Using PCA from SKlearn
End of explanation
"""
pop = pd.read_csv('data/cars_small.csv')
pop.head()
"""
Explanation: From 4 Dimensions to 2 Dimensions
Run PCA with 2 dimensions on the cars dataset
End of explanation
"""
pop = pop.drop(['model'], axis = 1)
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
df = pop.apply(le.fit_transform)
df.head()
g = sns.PairGrid(df, hue = 'type')
g.map_diag(plt.hist)
g.map_offdiag(plt.scatter, alpha = 0.8)
"""
Explanation: Preprocessing - brand, price, kmpl, bhp
End of explanation
"""
X = df.iloc[:,:4]
from sklearn.preprocessing import StandardScaler
X_std = StandardScaler().fit_transform(X)
"""
Explanation: Standardizing
End of explanation
"""
mean_vec = np.mean(X_std, axis=0)
cov_mat = (X_std - mean_vec).T.dot((X_std - mean_vec)) / (X_std.shape[0]-1)
print('Covariance matrix \n%s' %cov_mat)
# Doing this directly using np.cov
print('NumPy covariance matrix: \n%s' %np.cov(X_std.T))
cov_mat = np.cov(X_std.T)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)
"""
Explanation: Eigendecomposition - Computing Eigenvectors and Eigenvalues
End of explanation
"""
# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs.sort(key=lambda x: x[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in descending order:')
for i in eig_pairs:
print(i[0])
"""
Explanation: How do you select which 2 axes to keep?
Sorting the Eigenvalues and Eigenvectors
In order to decide which eigenvector(s) can be dropped without losing too much information for the construction of the lower-dimensional subspace, we need to inspect the corresponding eigenvalues: the eigenvectors with the lowest eigenvalues bear the least information about the distribution of the data; those are the ones that can be dropped.
End of explanation
"""
tot = sum(eig_vals)
var_exp = [(i / tot)*100 for i in sorted(eig_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
plt.bar(range(4), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(4), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
"""
Explanation: Explained Variance
The explained variance tells us how much information (variance) can be attributed to each of the principal components.
End of explanation
"""
matrix_w = np.hstack((eig_pairs[0][1].reshape(4,1),
eig_pairs[1][1].reshape(4,1)))
print('Matrix W:\n', matrix_w)
X_proj = X_std.dot(matrix_w)
fig, ax = plt.subplots(figsize=(6, 6))
plt.scatter(X_proj[:,0], X_proj[:,1], c = df.type, s = 100, cmap = plt.cm.viridis)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
"""
Explanation: Projection Matrix
The “projection matrix” is just a matrix of our concatenated top k eigenvectors. Here, we are reducing the 4-dimensional feature space to a 2-dimensional feature subspace, by choosing the “top 2” eigenvectors with the highest eigenvalues to construct our $d×k$-dimensional eigenvector matrix $W$ (here $4×2$).
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X_std)
X_proj_sklearn = pca.transform(X_std)
fig, ax = plt.subplots(figsize=(6, 6))
plt.scatter(X_proj_sklearn[:,0], X_proj_sklearn[:,1], c = df.type,
s = 100, cmap = plt.cm.viridis)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
pca.explained_variance_
"""
Explanation: PCA using sklearn
End of explanation
"""
digits = pd.read_csv('data/digits.csv')
digits.head()
digits.shape
digitsX = digits.iloc[:,1:785]
digitsX.head()
pca = PCA(n_components=2)
pca.fit(digitsX)
digits_trans = pca.transform(digitsX)
digits_trans
plt.scatter(digits_trans[:,0], digits_trans[:,1], c = digits.num,
s = 20, alpha = 0.8, cmap = plt.cm.viridis)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
"""
Explanation: From 784 Dimensions to 2 Dimensions
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.12/_downloads/plot_artifacts_correction_rejection.ipynb
|
bsd-3-clause
|
import numpy as np
import mne
from mne.datasets import sample
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname)
"""
Explanation: .. _tut_artifacts_reject:
Rejecting bad data (channels and segments)
End of explanation
"""
raw.info['bads'] = ['MEG 2443']
"""
Explanation: .. _marking_bad_channels:
Marking bad channels
Sometimes some MEG or EEG channels are not functioning properly
for various reasons. These channels should be excluded from
analysis by marking them as bad. This is done by setting the 'bads'
in the measurement info of a data container object (e.g. Raw, Epochs,
Evoked). The info['bads'] value is a Python list. Here is an
example:
End of explanation
"""
# Reading data with a bad channel marked as bad:
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory',
baseline=(None, 0))
# restrict the evoked to EEG and MEG channels
evoked.pick_types(meg=True, eeg=True, exclude=[])
# plot with bads
evoked.plot(exclude=[])
print(evoked.info['bads'])
"""
Explanation: Why set a channel as bad?: If a channel does not show
a signal at all (flat) it is important to exclude it from the
analysis. If a channel has a noise level significantly higher than the
other channels it should be marked as bad. The presence of bad channels
can have terrible consequences on downstream analysis. For a flat channel
some noise estimate will be unrealistically low and
thus the current estimate calculations will give a strong weight
to the zero signal on the flat channels and will essentially vanish.
Noisy channels can also affect others when signal-space projections
or EEG average electrode reference is employed. Noisy bad channels can
also adversely affect averaging and noise-covariance matrix estimation by
causing unnecessary rejections of epochs.
Recommended ways to identify bad channels are:
Observe the quality of data during data
acquisition and make notes of observed malfunctioning channels to
your measurement protocol sheet.
View the on-line averages and check the condition of the channels.
Compute preliminary off-line averages with artifact rejection,
SSP/ICA, and EEG average electrode reference computation
off and check the condition of the channels.
View raw data with :func:mne.io.Raw.plot without SSP/ICA
enabled and identify bad channels.
.. note::
Setting the bad channels should be done as early as possible in the
analysis pipeline. That's why it's recommended to set bad channels in
the raw objects/files. If present in the raw data
files, the bad channel selections will be automatically transferred
to averaged files, noise-covariance matrices, forward solution
files, and inverse operator decompositions.
The actual removal happens using :func:pick_types <mne.pick_types> with
exclude='bads' option (see :ref:picking_channels).
Instead of removing the bad channels, you can also try to repair them.
This is done by interpolation of the data from other channels.
To illustrate how to use channel interpolation let us load some data.
End of explanation
"""
evoked.interpolate_bads(reset_bads=False)
"""
Explanation: Let's now interpolate the bad channels (displayed in red above)
End of explanation
"""
evoked.plot(exclude=[])
"""
Explanation: Let's plot the cleaned data
End of explanation
"""
eog_events = mne.preprocessing.find_eog_events(raw)
n_blinks = len(eog_events)
# Center to cover the whole blink with full duration of 0.5s:
onset = eog_events[:, 0] / raw.info['sfreq'] - 0.25
duration = np.repeat(0.5, n_blinks)
raw.annotations = mne.Annotations(onset, duration, ['bad blink'] * n_blinks)
raw.plot(events=eog_events) # To see the annotated segments.
"""
Explanation: .. note::
Interpolation is a linear operation that can be performed also on
Raw and Epochs objects.
For more details on interpolation see the page :ref:channel_interpolation.
.. _marking_bad_segments:
Marking bad raw segments with annotations
MNE provides an :class:mne.Annotations class that can be used to mark
segments of raw data and to reject epochs that overlap with bad segments
of data. The annotations are automatically synchronized with raw data as
long as the timestamps of raw data and annotations are in sync.
See :ref:sphx_glr_auto_tutorials_plot_brainstorm_auditory.py
for a long example exploiting the annotations for artifact removal.
The instances of annotations are created by providing a list of onsets and
durations with descriptions for each segment. The onsets and durations are given
in seconds. onset refers to time from the start of the data and duration is
the length of the annotation. The instance of :class:mne.Annotations
can be added as an attribute of :class:mne.io.Raw.
End of explanation
"""
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
"""
Explanation: As the data is epoched, all the epochs overlapping with segments whose
description starts with 'bad' are rejected by default. To turn rejection off,
use keyword argument reject_by_annotation=False when constructing
:class:mne.Epochs. When working with neuromag data, the first_samp
offset of raw acquisition is also taken into account the same way as with
event lists. For more see :class:mne.Epochs and :class:mne.Annotations.
.. _rejecting_bad_epochs:
Rejecting bad epochs
When working with segmented data (Epochs) MNE offers a quite simple approach
to automatically reject/ignore bad epochs. This is done by defining
thresholds for peak-to-peak amplitude and flat signal detection.
In the following code we build Epochs from Raw object. One of the provided
parameter is named reject. It is a dictionary where every key is a
channel type as a string and the corresponding values are peak-to-peak
rejection parameters (amplitude ranges as floats). Below we define
the peak-to-peak rejection values for gradiometers,
magnetometers and EOG:
End of explanation
"""
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {"auditory/left": 1}
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
baseline = (None, 0) # means from the first instant to t = 0
picks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks_meg, baseline=baseline, reject=reject,
reject_by_annotation=True)
"""
Explanation: .. note::
The rejection values can be highly data dependent. You should be careful
when adjusting these values. Make sure not too many epochs are rejected
and look into the cause of the rejections. Maybe it's just a matter
of marking a single channel as bad and you'll be able to save a lot
of data.
We then construct the epochs
End of explanation
"""
epochs.drop_bad()
"""
Explanation: We then drop/reject the bad epochs
End of explanation
"""
print(epochs.drop_log[40:45]) # only a subset
epochs.plot_drop_log()
"""
Explanation: And plot the so-called drop log that details the reason for which some
epochs have been dropped.
End of explanation
"""
|
edeno/Jadhav-2016-Data-Analysis
|
notebooks/2017_06_22_Repository_Data_Access.ipynb
|
gpl-3.0
|
from src.parameters import ANIMALS
ANIMALS
"""
Explanation: Repository and Data Access
Fork my github repository
Clone the forked repository to a local directory
Install miniconda (or anaconda) if it isn't already installed. Type into bash:
bash
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh;
bash miniconda.sh -b -p $HOME/miniconda
export PATH="$HOME/miniconda/bin:$PATH"
hash -r
Switch to the development branch. Type into bash:
bash
git checkout develop
Go to the local repository (.../Jadhav-2016-Data-Analysis) and install the anaconda environment for the repository. Type into bash:
bash
conda update -q conda
conda info -a
conda env create -f environment.yml
source activate Jadhav-2016-Data-Analysis
python setup.py develop
Make sure the environment is set up correctly by running the tests. Type into bash:
bash
pytest
Copy the data folders HPa_direct, HPb_direct, HPc_direct from the dropbox folder EastWestSideHippos Team Folder/HCPFCdata to Jadhav-2016-Data-Analysis/Raw-Data
Accessing the Data via Python
Parameters Module
The src.parameters module has some convenient constants for accessing the data. Most important are the ANIMALS dictionary, the N_DAYS, and the SAMPLING_FREQUENCY.
The ANIMALS dictionary maps the animal name to the data directory containing the animal's data.
End of explanation
"""
from src.parameters import N_DAYS, SAMPLING_FREQUENCY
print('Days: {0}'.format(N_DAYS))
print('Sampling Frequency: {0}'.format(SAMPLING_FREQUENCY))
"""
Explanation: N_DAYS corresponds to the number of days of recording and SAMPLING_FREQUENCY corresponds to the sampling rate of the tetrodes recording neural activity
End of explanation
"""
from src.data_processing import make_epochs_dataframe
days = range(1, N_DAYS + 1)
epoch_info = make_epochs_dataframe(ANIMALS, days)
epoch_info
"""
Explanation: Data Processing Module
The src.data_processing module has convenient functions for importing the data files into Pandas dataframes. You can use these functions in conjunction with the constants from the src.parameters module. The most useful functions are:
make_epochs_dataframe: returns descriptive information about each epoch
make_tetrode_dataframe: returns descriptive information about each tetrode
make_neuron_dataframe: returns descriptive information about each neuron
get_spike_indicator_dataframe: returns the spiking data for each neuron for each time point
get_interpolated_position_dataframe: returns data about the position of animal for each time point
Let's take each of these functions in turn.
make_epochs_dataframe
An epoch is a single recording session. There are typically multiple recording sessions (epochs) a day. The epoch dataframe gives information about what task the animal is performing during the epoch (sleeping, running a linear track, running a w-track, resting, etc). Each row of the epoch dataframe corresponds to an epoch.
End of explanation
"""
epoch_info.index.tolist()
"""
Explanation: Often we use the epoch as a key to access more information about that epoch (information about the tetrodes, neuron, etc). Epoch keys are tuples with the following format: (Animal, Day, Epoch).
The index of the epoch dataframe can be used to produce these keys.
End of explanation
"""
epoch_info.loc[epoch_info.type == 'run']
epoch_info.loc[epoch_info.type == 'run'].index.tolist()
"""
Explanation: This is useful if we want to filter the epoch dataframe by a particular attribute (Say we only want sessions where the animal is asleep) and use the keys to access the data for that epoch.
End of explanation
"""
from src.data_processing import make_tetrode_dataframe
tetrode_info = make_tetrode_dataframe(ANIMALS)
list(tetrode_info.keys())
"""
Explanation: make_tetrode_dataframe
make_tetrode_dataframe returns a dictionary of pandas dataframes, where the keys correspond to epochs and the values are pandas dataframes detailing information about the tetrodes in that epoch (brain area, number of cells recorded, etc.).
The dictionary keys for the epochs are tuples (Animal, Day, Epoch). For example, let's load the tetrode dataframe and display all the keys:
End of explanation
"""
epoch_key = ('HPa', 6, 2)
tetrode_info[epoch_key]
"""
Explanation: If we want to access a particular epoch we can just pass the corresponding epoch tuple. In this case, we want animal HPa, day 6, epoch 2. This returns a dataframe where each row corresponds to a tetrode for that epoch.
End of explanation
"""
[tetrode_info[epoch_key]
for epoch_key in epoch_info.loc[
(epoch_info.type == 'sleep') & (epoch_info.day == 8)].index]
"""
Explanation: Remember that the epoch dataframe index can be used as keys, which can be useful if we want to access a particular epoch.
End of explanation
"""
from src.data_processing import make_neuron_dataframe
neuron_info = make_neuron_dataframe(ANIMALS)
list(neuron_info.keys())
"""
Explanation: make_neuron_dataframe
The neuron dataframe is set up similarly to the tetrode dataframe. It is a dictionary of pandas dataframes, where the keys are the epoch tuples (Animal, Day, Epoch) and the values are the pandas dataframes. The dataframes give descriptive information about the number of spikes, the average spiking rate, which brain area, etc.
End of explanation
"""
epoch_key = ('HPa', 6, 2)
neuron_info[epoch_key]
"""
Explanation: We can access the neuron dataframe for a particular epoch in the same way as the tetrodes
End of explanation
"""
from src.data_processing import get_spike_indicator_dataframe
neuron_key = ('HPa', 6, 2, 1, 4)
get_spike_indicator_dataframe(neuron_key, ANIMALS)
"""
Explanation: get_spike_indicator_dataframe
The spike indicator dataframe is a dataframe where each row corresponds to the recording timestamp and is_spike is an indicator function where 1 indicates a spike has occurred at that timestamp and 0 indicates no spike has occurred at that timestamp. We can access the spike indicator dataframe for a particular neuron by using the neuron key (Animal, Day, Epoch, Tetrode, Neuron).
End of explanation
"""
neuron_info[epoch_key].index.tolist()
"""
Explanation: This information can be obtained from the neuron dataframe. The index of the neuron dataframe gives the key.
End of explanation
"""
neuron_info[epoch_key].query('area == "CA1"')
"""
Explanation: Like the epoch and tetrode dataframe, this allows us to filter for certain attributes (like if we want neurons in a certain brain area) and select only those neurons. For example, if we want only CA1 neurons:
End of explanation
"""
neuron_info[epoch_key].query('area == "CA1"').index.tolist()
"""
Explanation: We can get the keys for CA1 neurons only:
End of explanation
"""
import pandas as pd  # needed here to concatenate the per-neuron dataframes
pd.concat(
[get_spike_indicator_dataframe(neuron_key, ANIMALS)
for neuron_key in neuron_info[epoch_key].query('area == "CA1"').index], axis=1)
"""
Explanation: And then get the spike indicator data for those neurons:
End of explanation
"""
pd.concat(
[get_spike_indicator_dataframe(neuron_key, ANIMALS)
for neuron_key in neuron_info[epoch_key].query('area == "CA1"').index], axis=1).values
"""
Explanation: If we want the numpy array, use .values
End of explanation
"""
from src.data_processing import get_interpolated_position_dataframe
epoch_key = ('HPa', 6, 2)
get_interpolated_position_dataframe(epoch_key, ANIMALS)
"""
Explanation: get_interpolated_position_dataframe
This dataframe gives information about the animal's position during an epoch. Like the tetrode and neuron dataframes, it is a dictionary of dataframes where the keys correspond to the epoch (Animal, Day, Epoch).
End of explanation
"""
|
gtesei/DeepExperiments
|
Recurrent_Neural_Networks_1.1.0.ipynb
|
apache-2.0
|
# Common imports
import numpy as np
import numpy.random as rnd
import os
# to make this notebook's output stable across runs
rnd.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Tensorflow
import tensorflow as tf
#
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "b<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
"""
Explanation: Recurrent Neural Networks
For an introduction to RNN take a look at this great article.
Basic RNNs
End of explanation
"""
tf.reset_default_graph()
n_inputs = 3
n_neurons = 5
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
Wx = tf.Variable(tf.random_normal(shape=[n_inputs, n_neurons], dtype=tf.float32))
Wy = tf.Variable(tf.random_normal(shape=[n_neurons, n_neurons], dtype=tf.float32))
b = tf.Variable(tf.zeros([1, n_neurons], dtype=tf.float32))
Y0 = tf.tanh(tf.matmul(X0, Wx) + b)
Y1 = tf.tanh(tf.matmul(Y0, Wy) + tf.matmul(X1, Wx) + b)
init = tf.global_variables_initializer()
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]]) # t = 0
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]]) # t = 1
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
print(Y0_val)
print(Y1_val)
"""
Explanation: Manual RNN
End of explanation
"""
tf.reset_default_graph()
n_inputs = 3
n_neurons = 5
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.contrib.rnn.static_rnn(basic_cell, [X0, X1], dtype=tf.float32)
Y0, Y1 = output_seqs
init = tf.global_variables_initializer()
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]])
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]])
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
Y0_val
Y1_val
#show_graph(tf.get_default_graph())
"""
Explanation: Using rnn()
The static_rnn() function creates an unrolled RNN network by chaining cells.
End of explanation
"""
tf.reset_default_graph()
n_steps = 2
n_inputs = 3
n_neurons = 5
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.array([
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
with tf.Session() as sess:
init.run()
print("outputs =", outputs.eval(feed_dict={X: X_batch}))
#show_graph(tf.get_default_graph())
"""
Explanation: Using dynamic_rnn()
The dynamic_rnn() function uses a while_loop() operation to run over the cell the appropriate number of times, and you can set swap_memory = True if you want it to swap the GPU’s memory to the CPU’s memory during backpropagation to avoid OOM errors. Conveniently, it also accepts a single tensor for all inputs at every time step (shape [None, n_steps, n_inputs]) and it outputs a single tensor for all outputs at every time step (shape [None, n_steps, n_neurons]); there is no need to stack, unstack, or transpose.
End of explanation
"""
tf.reset_default_graph()
n_steps = 2
n_inputs = 3
n_neurons = 5
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
seq_length = tf.placeholder(tf.int32, [None]) ### <----------------------------------------
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, sequence_length=seq_length, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.array([
# step 0 step 1
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2 (padded with zero vectors)
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
seq_length_batch = np.array([2, 1, 2, 2]) ### <------------------------
with tf.Session() as sess:
init.run()
outputs_val, states_val = sess.run(
[outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
print(outputs_val)
print(states_val)
"""
Explanation: Packing sequences
End of explanation
"""
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
with tf.variable_scope("rnn", initializer=tf.contrib.layers.variance_scaling_initializer()):
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
logits = fully_connected(states, n_outputs, activation_fn=None)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_test = mnist.test.images.reshape((-1, n_steps, n_inputs))
y_test = mnist.test.labels
n_epochs = 100
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
"""
Explanation: Training a sequence classifier
We will treat each image as a sequence of 28 rows of 28 pixels each (since each MNIST image is 28 × 28 pixels). We will use cells of 150 recurrent neurons, plus a fully connected layer containing 10 neurons (one per class) connected to the output of the last time step, followed by a softmax layer.
End of explanation
"""
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import SimpleRNN
from keras import initializers
from keras.optimizers import RMSprop
from keras.models import Model
from keras.layers import Input, Dense
batch_size = 150
num_classes = 10
epochs = 100
hidden_units = 150
learning_rate = 0.001
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], 28, 28)
x_test = x_test.reshape(x_test.shape[0], 28, 28)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print('Evaluate IRNN...')
a = Input(shape=x_train.shape[1:])
b = SimpleRNN(hidden_units,
kernel_initializer=initializers.RandomNormal(stddev=0.001),
recurrent_initializer=initializers.Identity(),
activation='relu')(a)
b = Dense(num_classes)(b)
b = Activation('softmax')(b)
optimizer = keras.optimizers.Adamax(lr=learning_rate)
model = Model(inputs=[a], outputs=[b])
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
scores = model.evaluate(x_test, y_test, verbose=0)
print('IRNN test score:', scores[0])
print('IRNN test accuracy:', scores[1])
"""
Explanation: Training the same sequence classifier with Keras
End of explanation
"""
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_test = mnist.test.images.reshape((-1, n_steps, n_inputs))
y_test = mnist.test.labels
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_steps = 28
n_inputs = 28
n_neurons1 = 150
n_neurons2 = 100
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
hidden1 = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons1, activation=tf.nn.relu)
hidden2 = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons2, activation=tf.nn.relu)
multi_layer_cell = tf.contrib.rnn.MultiRNNCell([hidden1, hidden2])
outputs, states_tuple = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
states = tf.concat(axis=1, values=states_tuple)
logits = fully_connected(states, n_outputs, activation_fn=None)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
n_epochs = 100
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
"""
Explanation: Multi-layer RNN
It is quite common to stack multiple layers of cells. This gives you a deep RNN.
To implement a deep RNN in TensorFlow, you can create several cells and stack them into a MultiRNNCell.
End of explanation
"""
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import SimpleRNN
from keras import initializers
from keras.optimizers import RMSprop
from keras.models import Model
from keras.layers import Input, Dense
keras.backend.clear_session()
batch_size = 150
num_classes = 10
epochs = 50 # instead of 100 (too much time)
hidden_units_1 = 150
hidden_units_2 = 100
learning_rate = 0.001
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], 28, 28)
x_test = x_test.reshape(x_test.shape[0], 28, 28)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print('Evaluate IRNN...')
a = Input(shape=x_train.shape[1:])
b = SimpleRNN(hidden_units_1,
kernel_initializer=initializers.RandomNormal(stddev=0.001),
recurrent_initializer=initializers.Identity(),
activation='relu' , return_sequences=True)(a)
b = SimpleRNN(hidden_units_2,
kernel_initializer=initializers.RandomNormal(stddev=0.001),
recurrent_initializer=initializers.Identity(),
activation='relu')(b)
b = Dense(num_classes)(b)
b = Activation('softmax')(b)
optimizer = keras.optimizers.Adamax(lr=learning_rate)
model = Model(inputs=[a], outputs=[b])
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
scores = model.evaluate(x_test, y_test, verbose=0)
print('IRNN test score:', scores[0])
print('IRNN test accuracy:', scores[1])
"""
Explanation: Multi-layer RNN with Keras
When stacking RNNs with Keras remember to set return_sequences=True on hidden layers.
End of explanation
"""
t_min, t_max = 0, 30
n_steps = 20
def time_series(t):
return t * np.sin(t) / 3 + 2 * np.sin(t*5)
def next_batch(batch_size, n_steps,resolution = 0.1):
t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
Ts = t0 + np.arange(0., n_steps + 1) * resolution
ys = time_series(Ts)
return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
resolution = 0.1
t = np.linspace(t_min, t_max, int((t_max - t_min) // resolution))
t_instance = np.linspace(12.2, 12.2 + resolution * (n_steps + 1), n_steps + 1)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.title("A time series (generated)", fontsize=14)
plt.plot(t, time_series(t), label=r"$t . \sin(t) / 3 + 2 . \sin(5t)$")
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "b-", linewidth=3, label="A training instance")
plt.legend(loc="lower left", fontsize=14)
plt.axis([0, 30, -17, 13])
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.title("A training instance", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
X_batch, y_batch = next_batch(1, n_steps)
np.c_[X_batch[0], y_batch[0]]
"""
Explanation: Time series
Now let’s take a look at how to handle time series, such as stock prices, air temperature, brain wave patterns, and so on. In this section we will train an RNN to predict the next value in a generated time series. Each training instance is a randomly selected sequence of 20 consecutive values from the time series, and the target sequence is the same as the input sequence, except it is shifted by one time step into the future.
End of explanation
"""
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.contrib.rnn.OutputProjectionWrapper(
tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
output_size=n_outputs)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
n_outputs = 1
learning_rate = 0.001
loss = tf.reduce_sum(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
n_iterations = 1000
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
print(y_pred)
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
"""
Explanation: Using an OutputProjectionWrapper
End of explanation
"""
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
rnn_outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = fully_connected(stacked_rnn_outputs, n_outputs, activation_fn=None)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
n_iterations = 1000
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
print(y_pred)
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
"""
Explanation: Without using an OutputProjectionWrapper
End of explanation
"""
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import SimpleRNN
from keras import initializers
from keras.optimizers import RMSprop
from keras.models import Model
from keras.layers import Input, Dense
def ts_next_batch(batch_size, n_steps,resolution = 0.1):
t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
Ts = t0 + np.arange(0., n_steps + 1) * resolution
ys = time_series(Ts)
return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
keras.backend.clear_session()
batch_size = 50
hidden_units = 100
learning_rate = 0.001
n_inputs = 1
n_outputs = 1
n_steps = 20
print('Evaluate IRNN...')
a = Input(shape=(n_steps,n_inputs))
b = SimpleRNN(hidden_units,
kernel_initializer=initializers.RandomNormal(stddev=0.001),
recurrent_initializer=initializers.Identity(),
activation='relu' , return_sequences=True)(a)
b = keras.layers.core.Reshape((-1, hidden_units))(b)
b = Dense(1,activation=None)(b)
b = keras.layers.core.Reshape((n_steps, n_outputs))(b)
optimizer = keras.optimizers.Adamax(lr=learning_rate)
model = Model(inputs=[a], outputs=[b])
model.compile(loss='mean_squared_error',
optimizer=optimizer,
metrics=['mean_squared_error'])
X_batch, y_batch = ts_next_batch(batch_size*1000, n_steps)
x_test, y_test = ts_next_batch(batch_size, n_steps)
model.fit(X_batch, y_batch,
batch_size=batch_size,
epochs=1,
verbose=1,
validation_data=(x_test, y_test))
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = model.predict(X_new,verbose=0)
print(y_pred)
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
"""
Explanation: With Keras
End of explanation
"""
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_inputs = 1
n_neurons = 100
n_layers = 3
n_steps = 20
n_outputs = 1
keep_prob = 0.5
learning_rate = 0.001
is_training = True
def deep_rnn_with_dropout(X, y, is_training):
if is_training:
multi_layer_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicRNNCell(num_units=n_neurons), input_keep_prob=keep_prob) for _ in range(n_layers)],)
else:
multi_layer_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicRNNCell(num_units=n_neurons) for _ in range(n_layers)],)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = fully_connected(stacked_rnn_outputs, n_outputs, activation_fn=None)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
return outputs, loss, training_op
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
outputs, loss, training_op = deep_rnn_with_dropout(X, y, is_training)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 2000
batch_size = 50
with tf.Session() as sess:
if is_training:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
save_path = saver.save(sess, "/tmp/my_model.ckpt")
else:
saver.restore(sess, "/tmp/my_model.ckpt")
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
is_training = False
with tf.Session() as sess:
if is_training:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
save_path = saver.save(sess, "/tmp/my_model.ckpt")
else:
saver.restore(sess, "/tmp/my_model.ckpt")
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
"""
Explanation: Dropout
If you build a very deep RNN, it may end up overfitting the training set. To prevent that, a common technique is to apply dropout. You can simply add a dropout layer before or after the RNN as usual, but if you also want to apply dropout between the RNN layers, you need to use a DropoutWrapper.
End of explanation
"""
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import SimpleRNN
from keras import initializers
from keras.optimizers import RMSprop
from keras.models import Model
from keras.layers import Input, Dense
def ts_next_batch(batch_size, n_steps,resolution = 0.1):
t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
Ts = t0 + np.arange(0., n_steps + 1) * resolution
ys = time_series(Ts)
return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
keras.backend.clear_session()
batch_size = 50
hidden_units = 100
learning_rate = 0.001
n_inputs = 1
n_outputs = 1
n_steps = 20
n_layers = 3
keep_prob = 0.5
print('Evaluate IRNN...')
a = Input(shape=(n_steps,n_inputs))
b = SimpleRNN(hidden_units,
kernel_initializer=initializers.RandomNormal(stddev=0.001),
recurrent_initializer=initializers.Identity(),
activation='relu' , return_sequences=True)(a)
b = Dropout(keep_prob)(b)
for i in range(n_layers-1):
    b = SimpleRNN(hidden_units,
                  kernel_initializer=initializers.RandomNormal(stddev=0.001),
                  recurrent_initializer=initializers.Identity(),
                  activation='relu', return_sequences=True)(b)
b = Dropout(keep_prob)(b)
b = keras.layers.core.Reshape((-1, hidden_units))(b)
b = Dense(1,activation=None)(b)
b = keras.layers.core.Reshape((n_steps, n_outputs))(b)
optimizer = keras.optimizers.Adamax(lr=learning_rate)
model = Model(inputs=[a], outputs=[b])
model.compile(loss='mean_squared_error',
optimizer=optimizer,
metrics=['mean_squared_error'])
X_batch, y_batch = ts_next_batch(batch_size*2000, n_steps)
x_test, y_test = ts_next_batch(batch_size*2, n_steps)
model.fit(X_batch, y_batch,
batch_size=batch_size,
epochs=1,
verbose=1,
validation_data=(x_test, y_test))
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = model.predict(X_new,verbose=0)
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
"""
Explanation: Dropout with Keras
End of explanation
"""
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
n_layers = 3
learning_rate = 0.001
X_test = mnist.test.images.reshape((-1, n_steps, n_inputs))
y_test = mnist.test.labels
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
multi_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons) for _ in range(n_layers)])
outputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)
top_layer_h_state = states[-1][1]
logits = fully_connected(top_layer_h_state, n_outputs, activation_fn=None, scope="softmax")
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((batch_size, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print("Epoch", epoch, "Train accuracy =", acc_train, "Test accuracy =", acc_test)
"""
Explanation: LSTM
The Long Short-Term Memory (LSTM) cell was proposed in (Hochreiter & Schmidhuber, 1997), and it was gradually improved over the years by several researchers. If you consider the LSTM cell as a black box, it can be used very much like a basic cell, except it will perform much better; training will converge faster and it will detect long-term dependencies in the data.
End of explanation
"""
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras import initializers
from keras.optimizers import RMSprop
from keras.models import Model
from keras.layers import Input, Dense
keras.backend.clear_session()
batch_size = 150
num_classes = 10
epochs = 10
n_neurons = 150
n_layers = 3
learning_rate = 0.001
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], 28, 28)
x_test = x_test.reshape(x_test.shape[0], 28, 28)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print('Evaluate LSTM...')
a = Input(shape=x_train.shape[1:])
b = LSTM(n_neurons,return_sequences=True)(a)
for i in range(n_layers-2):
b = LSTM(n_neurons,return_sequences=True)(b)
b = LSTM(n_neurons,return_sequences=False)(b)
b = Dense(num_classes)(b)
b = Activation('softmax')(b)
optimizer = keras.optimizers.Adamax(lr=learning_rate)
model = Model(inputs=[a], outputs=[b])
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
scores = model.evaluate(x_test, y_test, verbose=0)
print('LSTM test score:', scores[0])
print('LSTM test accuracy:', scores[1])
"""
Explanation: LSTM with Keras
End of explanation
"""
with tf.device("/gpu:0"): # BAD! This is ignored.
layer1 = tf.contrib.rnn.BasicRNNCell( num_units = n_neurons)
with tf.device("/gpu:1"): # BAD! Ignored again.
layer2 = tf.contrib.rnn.BasicRNNCell( num_units = n_neurons)
"""
Explanation: Distributing layers across devices
If you try to create each cell in a different device() block, it will not work.
End of explanation
"""
import tensorflow as tf
class DeviceCellWrapper(tf.contrib.rnn.RNNCell):
def __init__(self, device, cell):
self._cell = cell
self._device = device
@property
def state_size(self):
return self._cell.state_size
@property
def output_size(self):
return self._cell.output_size
def __call__(self, inputs, state, scope=None):
with tf.device(self._device):
return self._cell(inputs, state, scope)
tf.reset_default_graph()
n_inputs = 5
n_neurons = 100
devices = ["/cpu:0"]*5
n_steps = 20
X = tf.placeholder(tf.float32, shape=[None, n_steps, n_inputs])
lstm_cells = [DeviceCellWrapper(device, tf.contrib.rnn.BasicRNNCell(num_units=n_neurons))
for device in devices]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
print(sess.run(outputs, feed_dict={X: rnd.rand(2, n_steps, n_inputs)}))
"""
Explanation: This fails because a BasicRNNCell is a cell factory, not a cell per se; no cells get created when you create the factory, and thus no variables do either. The device block is simply ignored. The cells actually get created later, when you call dynamic_rnn(): it calls the MultiRNNCell, which calls each individual BasicRNNCell, which creates the actual cells (including their variables). Unfortunately, none of these classes provides any way to control the devices on which the variables get created. If you try to put the dynamic_rnn() call within a device block, the whole RNN gets pinned to a single device.
The trick is to create your own cell wrapper.
End of explanation
"""
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional
from keras.datasets import imdb
del model
keras.backend.clear_session()
max_features = 20000
# cut texts after this number of words
# (among top max_features most common words)
maxlen = 100
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print("Pad sequences (samples x time)")
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
y_train = np.array(y_train)
y_test = np.array(y_test)
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(Bidirectional(LSTM(64)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=4,
validation_data=[x_test, y_test])
"""
Explanation: Bidirectional LSTM on the IMDB sentiment classification task on Keras
End of explanation
"""
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional
from keras.datasets import imdb
del model
keras.backend.clear_session()
max_features = 20000
# cut texts after this number of words
# (among top max_features most common words)
maxlen = 100
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print("Pad sequences (samples x time)")
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
y_train = np.array(y_train)
y_test = np.array(y_test)
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(LSTM(64))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=4,
validation_data=[x_test, y_test])
"""
Explanation: LSTM on the IMDB sentiment classification task on Keras
End of explanation
"""
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional
from keras.datasets import imdb
del model
keras.backend.clear_session()
max_features = 20000
# cut texts after this number of words
# (among top max_features most common words)
maxlen = 100
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print("Pad sequences (samples x time)")
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
y_train = np.array(y_train)
y_test = np.array(y_test)
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(LSTM(64))
model.add(Dropout(0.5))
model.add(Dense(100))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=4,
validation_data=[x_test, y_test])
"""
Explanation: LSTM+FC on the IMDB sentiment classification task on Keras
End of explanation
"""
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import LSTM
from keras.layers import Conv1D, MaxPooling1D
from keras.datasets import imdb
del model
keras.backend.clear_session()
# Embedding
max_features = 20000
maxlen = 100
embedding_size = 128
# Convolution
kernel_size = 5
filters = 64
pool_size = 4
# LSTM
lstm_output_size = 70
# Training
batch_size = 30
epochs = 4
'''
Note:
batch_size is highly sensitive.
Only a few epochs are needed as the dataset is very small.
'''
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, embedding_size, input_length=maxlen))
model.add(Dropout(0.25))
model.add(Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1))
model.add(MaxPooling1D(pool_size=pool_size))
model.add(LSTM(lstm_output_size))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
"""
Explanation: Recurrent convolutional network on the IMDB sentiment classification task
End of explanation
"""
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalMaxPooling1D
from keras.datasets import imdb
keras.backend.clear_session()
del model
# set parameters:
max_features = 5000
maxlen = 400
batch_size = 32
embedding_dims = 50
filters = 250
kernel_size = 3
hidden_dims = 250
epochs = 4
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
embedding_dims,
input_length=maxlen))
model.add(Dropout(0.2))
# we add a Convolution1D, which will learn filters
# word group filters of size filter_length:
model.add(Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1))
# we use max pooling:
model.add(GlobalMaxPooling1D())
# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
"""
Explanation: Convolutional network on the IMDB sentiment classification task
End of explanation
"""
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Embedding
from keras.layers import GlobalAveragePooling1D
from keras.datasets import imdb
keras.backend.clear_session()
del model
def create_ngram_set(input_list, ngram_value=2):
"""
Extract a set of n-grams from a list of integers.
>>> create_ngram_set([1, 4, 9, 4, 1, 4], ngram_value=2)
{(4, 9), (4, 1), (1, 4), (9, 4)}
>>> create_ngram_set([1, 4, 9, 4, 1, 4], ngram_value=3)
[(1, 4, 9), (4, 9, 4), (9, 4, 1), (4, 1, 4)]
"""
return set(zip(*[input_list[i:] for i in range(ngram_value)]))
def add_ngram(sequences, token_indice, ngram_range=2):
"""
Augment the input list of list (sequences) by appending n-grams values.
Example: adding bi-gram
>>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]]
>>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017}
>>> add_ngram(sequences, token_indice, ngram_range=2)
[[1, 3, 4, 5, 1337, 2017], [1, 3, 7, 9, 2, 1337, 42]]
Example: adding tri-gram
>>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]]
>>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017, (7, 9, 2): 2018}
>>> add_ngram(sequences, token_indice, ngram_range=3)
[[1, 3, 4, 5, 1337], [1, 3, 7, 9, 2, 1337, 2018]]
"""
new_sequences = []
for input_list in sequences:
new_list = input_list[:]
for i in range(len(new_list) - ngram_range + 1):
for ngram_value in range(2, ngram_range + 1):
ngram = tuple(new_list[i:i + ngram_value])
if ngram in token_indice:
new_list.append(token_indice[ngram])
new_sequences.append(new_list)
return new_sequences
# Set parameters:
# ngram_range = 2 will add bi-grams features
ngram_range = 2
max_features = 20000
maxlen = 400
batch_size = 32
embedding_dims = 50
epochs = 5
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Average train sequence length: {}'.format(np.mean(list(map(len, x_train)), dtype=int)))
print('Average test sequence length: {}'.format(np.mean(list(map(len, x_test)), dtype=int)))
if ngram_range > 1:
print('Adding {}-gram features'.format(ngram_range))
# Create set of unique n-gram from the training set.
ngram_set = set()
for input_list in x_train:
for i in range(2, ngram_range + 1):
set_of_ngram = create_ngram_set(input_list, ngram_value=i)
ngram_set.update(set_of_ngram)
# Dictionary mapping n-gram token to a unique integer.
# Integer values are greater than max_features in order
# to avoid collision with existing features.
start_index = max_features + 1
token_indice = {v: k + start_index for k, v in enumerate(ngram_set)}
indice_token = {token_indice[k]: k for k in token_indice}
# max_features is the highest integer that could be found in the dataset.
max_features = np.max(list(indice_token.keys())) + 1
# Augmenting x_train and x_test with n-grams features
x_train = add_ngram(x_train, token_indice, ngram_range)
x_test = add_ngram(x_test, token_indice, ngram_range)
print('Average train sequence length: {}'.format(np.mean(list(map(len, x_train)), dtype=int)))
print('Average test sequence length: {}'.format(np.mean(list(map(len, x_test)), dtype=int)))
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
embedding_dims,
input_length=maxlen))
# we add a GlobalAveragePooling1D, which will average the embeddings
# of all words in the document
model.add(GlobalAveragePooling1D())
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
"""
Explanation: IMDB datasets with bi-gram embeddings
End of explanation
"""
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalMaxPooling1D, GlobalAveragePooling1D
from keras.datasets import imdb
keras.backend.clear_session()
del model
def create_ngram_set(input_list, ngram_value=2):
"""
Extract a set of n-grams from a list of integers.
>>> create_ngram_set([1, 4, 9, 4, 1, 4], ngram_value=2)
{(4, 9), (4, 1), (1, 4), (9, 4)}
>>> create_ngram_set([1, 4, 9, 4, 1, 4], ngram_value=3)
[(1, 4, 9), (4, 9, 4), (9, 4, 1), (4, 1, 4)]
"""
return set(zip(*[input_list[i:] for i in range(ngram_value)]))
def add_ngram(sequences, token_indice, ngram_range=2):
"""
Augment the input list of list (sequences) by appending n-grams values.
Example: adding bi-gram
>>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]]
>>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017}
>>> add_ngram(sequences, token_indice, ngram_range=2)
[[1, 3, 4, 5, 1337, 2017], [1, 3, 7, 9, 2, 1337, 42]]
Example: adding tri-gram
>>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]]
>>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017, (7, 9, 2): 2018}
>>> add_ngram(sequences, token_indice, ngram_range=3)
[[1, 3, 4, 5, 1337], [1, 3, 7, 9, 2, 1337, 2018]]
"""
new_sequences = []
for input_list in sequences:
new_list = input_list[:]
for i in range(len(new_list) - ngram_range + 1):
for ngram_value in range(2, ngram_range + 1):
ngram = tuple(new_list[i:i + ngram_value])
if ngram in token_indice:
new_list.append(token_indice[ngram])
new_sequences.append(new_list)
return new_sequences
# Set parameters:
# ngram_range = 2 will add bi-grams features
ngram_range = 2
max_features = 20000
maxlen = 400
batch_size = 32
embedding_dims = 50
# convolution parameters (same values as in the ConvNet example above)
filters = 250
kernel_size = 3
hidden_dims = 250
epochs = 5
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Average train sequence length: {}'.format(np.mean(list(map(len, x_train)), dtype=int)))
print('Average test sequence length: {}'.format(np.mean(list(map(len, x_test)), dtype=int)))
if ngram_range > 1:
print('Adding {}-gram features'.format(ngram_range))
# Create set of unique n-gram from the training set.
ngram_set = set()
for input_list in x_train:
for i in range(2, ngram_range + 1):
set_of_ngram = create_ngram_set(input_list, ngram_value=i)
ngram_set.update(set_of_ngram)
# Dictionary mapping n-gram token to a unique integer.
# Integer values are greater than max_features in order
# to avoid collision with existing features.
start_index = max_features + 1
token_indice = {v: k + start_index for k, v in enumerate(ngram_set)}
indice_token = {token_indice[k]: k for k in token_indice}
# max_features is the highest integer that could be found in the dataset.
max_features = np.max(list(indice_token.keys())) + 1
# Augmenting x_train and x_test with n-grams features
x_train = add_ngram(x_train, token_indice, ngram_range)
x_test = add_ngram(x_test, token_indice, ngram_range)
print('Average train sequence length: {}'.format(np.mean(list(map(len, x_train)), dtype=int)))
print('Average test sequence length: {}'.format(np.mean(list(map(len, x_test)), dtype=int)))
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
embedding_dims,
input_length=maxlen))
model.add(Dropout(0.2))
# we add a Convolution1D, which will learn filters
# word group filters of size filter_length:
model.add(Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1))
# we use max pooling:
model.add(GlobalMaxPooling1D())
# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
"""
Explanation: IMDB datasets with bi-gram embeddings and Convolution1D
End of explanation
"""
|
cgrudz/cgrudz.github.io
|
teaching/stat_775_2021_fall/activities/activity-2021-09-01.ipynb
|
mit
|
import numpy as np
"""
Explanation: Introduction to Python part IV (And a discussion of linear transformations)
Activity 1: Discussion of linear transformations
Orthogonality also plays a key role in understanding linear transformations. How can we understand linear transformations in terms of a composition of rotations and diagonal matrices? There are two specific matrix factorizations that arise this way; can you name them and describe the conditions under which they are applicable?
What is a linear inverse problem? What conditions guarantee a solution?
What is a pseudo-inverse? How is this related to an orthogonal projection? How is this related to the linear inverse problem?
What is a weighted norm and what is a weighted pseudo-norm?
Activity 2: Basic data analysis and manipulation
End of explanation
"""
A = np.array([[1,2,3], [4,5,6], [7, 8, 9]])
print('A = ')
print(A)
B = np.hstack([A, A])
print('B = ')
print(B)
C = np.vstack([A, A])
print('C = ')
print(C)
"""
Explanation: Exercise 1:
Arrays can be concatenated and stacked on top of one another, using NumPy’s vstack and hstack functions for vertical and horizontal stacking, respectively.
End of explanation
"""
D = np.hstack((A[:, :1], A[:, -1:]))
print('D = ')
print(D)
"""
Explanation: Write some additional code that slices the first and last columns of A, and stacks them into a 3x2 array. Make sure to print the results to verify your solution.
Note a ‘gotcha’ with array indexing is that singleton dimensions are dropped by default. That means A[:, 0] is a one dimensional array, which won’t stack as desired. To preserve singleton dimensions, the index itself can be a slice or array. For example, A[:, :1] returns a two dimensional array with one singleton dimension (i.e. a column vector).
End of explanation
"""
patient3_week1 = data[3, :7]
print(patient3_week1)
"""
Explanation: An alternative way to achieve the same result is to use Numpy's delete function to remove the second column of A. Search the documentation for the np.delete function to find the syntax for constructing such an array.
Exercise 2:
The patient data is longitudinal in the sense that each row represents a series of observations relating to one individual. This means that the change in inflammation over time is a meaningful concept. Let’s find out how to calculate changes in the data contained in an array with NumPy.
The np.diff function takes an array and returns the differences between two successive values. Let’s use it to examine the changes each day across the first week of patient 3 from our inflammation dataset.
End of explanation
"""
np.diff(patient3_week1)
"""
Explanation: Calling np.diff(patient3_week1) would do the following calculations
[ 0 - 0, 2 - 0, 0 - 2, 4 - 0, 2 - 4, 2 - 2 ]
and return the 6 difference values in a new array.
End of explanation
"""
|
donovanr/letter_ladders
|
letter_ladders.ipynb
|
gpl-2.0
|
import networkx as nx
import letter_ladders as ll
"""
Explanation: Try out the letter ladder code on some different word corpora
End of explanation
"""
# get default dict into list, add all words as nodes to graph, group words by length
built_in_wordlist = [w.strip() for w in open('/usr/share/dict/words') if len(w.strip()) > 1 and w.strip().islower()]
built_in_wordlist.extend(['a','i'])
built_in_wordlist_graph = nx.DiGraph();
built_in_wordlist_graph.add_nodes_from(built_in_wordlist)
built_in_wordlist_grouped_by_len = ll.group_wordlist_by_len(built_in_wordlist)
# add all possible edges to the entire wordlist graph
for i in xrange(1,max([len(word) for word in built_in_wordlist])+1):
    ll.ersosion_filter_nx(i,built_in_wordlist_graph,built_in_wordlist_grouped_by_len)
# save the graph -- this took a while
nx.write_gpickle(built_in_wordlist_graph,"wordlist_graph.gpickle")
# use this cell if you've already built the graph
built_in_wordlist_graph = nx.read_gpickle("wordlist_graph.gpickle")
# find long letter ladders!
all_long_paths_built_in = ll.find_all_longest_and_next_longest_paths(built_in_wordlist_graph);
ll.print_paths(all_long_paths_built_in)
"""
Explanation: Built in dictionary
End of explanation
"""
# get default dict into list, add all words as nodes to graph, group words by length
with open("ospd4.txt") as word_file:
scrabble_wordlist = list(w.strip() for w in word_file if len(w.strip()) > 1 and w.strip().islower() and w.strip().isalpha())
scrabble_wordlist.extend(['a','i'])
scrabble_wordlist_graph = nx.DiGraph();
scrabble_wordlist_graph.add_nodes_from(scrabble_wordlist)
scrabble_wordlist_grouped_by_len = ll.group_wordlist_by_len(scrabble_wordlist)
# add all possible edges to the entire wordlist graph
for i in xrange(1,max([len(word) for word in scrabble_wordlist])+1):
    ll.ersosion_filter_nx(i,scrabble_wordlist_graph,scrabble_wordlist_grouped_by_len)
# save the graph -- this took a while
nx.write_gpickle(scrabble_wordlist_graph,"scrabble_wordlist_graph.gpickle")
scrabble_wordlist_graph = nx.read_gpickle("scrabble_wordlist_graph.gpickle")
# find long letter ladders!
all_long_paths_scrabble = ll.find_all_longest_and_next_longest_paths(scrabble_wordlist_graph);
ll.print_paths(all_long_paths_scrabble)
"""
Explanation: Scrabble dictionary
from https://raw.githubusercontent.com/bahmutov/prefix-dictionary/master/ospd4.txt
End of explanation
"""
# get default dict into list, add all words as nodes to graph, group words by length
with open("scowl_60.txt") as word_file:
scowl_60_wordlist = list(w.strip() for w in word_file if len(w.strip()) > 1 and w.strip().islower() and w.strip().isalpha())
scowl_60_wordlist.extend(['a','i'])
scowl_60_wordlist_graph = nx.DiGraph();
scowl_60_wordlist_graph.add_nodes_from(scowl_60_wordlist)
scowl_60_wordlist_grouped_by_len = ll.group_wordlist_by_len(scowl_60_wordlist)
# add all possible edges to the entire wordlist graph
for i in xrange(1,max([len(word) for word in scowl_60_wordlist])+1):
    ll.ersosion_filter_nx(i,scowl_60_wordlist_graph,scowl_60_wordlist_grouped_by_len)
# save the graph -- this took a while
nx.write_gpickle(scowl_60_wordlist_graph,"scowl_60_wordlist_graph.gpickle")
scowl_60_wordlist_graph = nx.read_gpickle("scowl_60_wordlist_graph.gpickle")
# find long letter ladders!
all_long_paths_scowl_60 = ll.find_all_longest_and_next_longest_paths(scowl_60_wordlist_graph);
ll.print_paths(all_long_paths_scowl_60)
"""
Explanation: Scowl 60 word list
from http://app.aspell.net/create; remove the header before processing below
End of explanation
"""
# get default dict into list, add all words as nodes to graph, group words by length
with open("scowl_70.txt") as word_file:
scowl_70_wordlist = list(w.strip() for w in word_file if len(w.strip()) > 1 and w.strip().islower() and w.strip().isalpha())
scowl_70_wordlist.extend(['a','i'])
scowl_70_wordlist_graph = nx.DiGraph();
scowl_70_wordlist_graph.add_nodes_from(scowl_70_wordlist)
scowl_70_wordlist_grouped_by_len = ll.group_wordlist_by_len(scowl_70_wordlist)
# add all possible edges to the entire wordlist graph
for i in xrange(1,max([len(word) for word in scowl_70_wordlist])+1):
    ll.ersosion_filter_nx(i,scowl_70_wordlist_graph,scowl_70_wordlist_grouped_by_len)
# save the graph -- this took a while
nx.write_gpickle(scowl_70_wordlist_graph,"scowl_70_wordlist_graph.gpickle")
scowl_70_wordlist_graph = nx.read_gpickle("scowl_70_wordlist_graph.gpickle")
# find long letter ladders!
all_long_paths_scowl_70 = ll.find_all_longest_and_next_longest_paths(scowl_70_wordlist_graph);
ll.print_paths(all_long_paths_scowl_70)
"""
Explanation: Scowl 70 word list
from http://app.aspell.net/create
End of explanation
"""
|
ioos/system-test
|
content/downloads/notebooks/2015-11-09-Scenario_1A_Model_Strings.ipynb
|
unlicense
|
known_csw_servers = ['http://data.nodc.noaa.gov/geoportal/csw',
'http://cwic.csiss.gmu.edu/cwicv1/discovery',
'http://geoport.whoi.edu/geoportal/csw',
'https://edg.epa.gov/metadata/csw',
'http://www.ngdc.noaa.gov/geoportal/csw',
'http://cmgds.marine.usgs.gov/geonetwork/srv/en/csw',
'http://www.nodc.noaa.gov/geoportal/csw',
'http://cida.usgs.gov/gdp/geonetwork/srv/en/csw',
'http://geodiscover.cgdi.ca/wes/serviceManagerCSW/csw',
'http://geoport.whoi.edu/gi-cat/services/cswiso',
'https://data.noaa.gov/csw']
"""
Explanation: A common task is to find out what information is available for further research later on.
We can programmatically build a list of strings to query common data catalogs and find out what services are available.
This post will show how to perform a query for numerical model strings and try to answer the question: how many services are available in each catalog?
To answer that question we will start by building a list of known catalog services.
(This post is part of Theme 1 - Scenario A.)
End of explanation
"""
known_model_strings = ['roms', 'selfe', 'adcirc', 'ncom',
'hycom', 'fvcom', 'pom', 'wrams', 'wrf']
from owslib import fes
model_name_filters = []
for model in known_model_strings:
kw = dict(literal='*%s*' % model, wildCard='*')
title_filter = fes.PropertyIsLike(propertyname='apiso:Title', **kw)
subject_filter = fes.PropertyIsLike(propertyname='apiso:Subject', **kw)
model_name_filters.append(fes.Or([title_filter, subject_filter]))
"""
Explanation: And a list of known model strings to query.
End of explanation
"""
from owslib.csw import CatalogueServiceWeb
model_results = []
for x in range(len(model_name_filters)):
model_name = known_model_strings[x]
single_model_filter = model_name_filters[x]
for url in known_csw_servers:
try:
csw = CatalogueServiceWeb(url, timeout=20)
csw.getrecords2(constraints=[single_model_filter],
maxrecords=1000, esn='full')
for record, item in csw.records.items():
for d in item.references:
result = dict(model=model_name,
scheme=d['scheme'],
url=d['url'],
server=url)
model_results.append(result)
except BaseException as e:
print("- FAILED: {} - {}".format(url, e))
"""
Explanation: The FES filter we build below is simpler than what we did before.
We are only looking for matches in Title or Subject that contain the model strings.
End of explanation
"""
from pandas import DataFrame
df = DataFrame(model_results)
df = df.drop_duplicates()
"""
Explanation: Note that some servers have a maximum number of records you can retrieve at once and are failing our query here.
(See https://github.com/ioos/system-test/issues/126.)
Let's get the data as a pandas.DataFrame.
End of explanation
"""
total_services = DataFrame(df.groupby("scheme").size(),
columns=(["Number of services"]))
ax = total_services.sort('Number of services',
ascending=False).plot(kind="barh", figsize=(10, 8))
"""
Explanation: And now that we have the results, what do they mean?
First let's plot the total number of services available.
End of explanation
"""
def normalize_service_urn(urn):
urns = urn.split(':')
if urns[-1].lower() == "url":
del urns[-1]
return urns[-1].lower()
urns = df.copy(deep=True)
urns["urn"] = urns["scheme"].map(normalize_service_urn)
urns_summary = DataFrame(urns.groupby("urn").size(),
columns=(["Number of services"]))
ax = urns_summary.sort('Number of services',
ascending=False).plot(kind="barh", figsize=(10, 6))
"""
Explanation: We can note that some identical service types (URNs) are being identified differently!
There should be a consistent way of representing each service,
or a mapping needs to be made available.
We can try to get around the issue of the same services being identified differently by relying on the normalized URN extracted from the "Scheme" metadata field.
End of explanation
"""
records_per_csw = DataFrame(urns.groupby(["model", "server"]).size(),
columns=(["Number of services"]))
model_csw_plotter = records_per_csw.unstack("model")
ax = model_csw_plotter['Number of services'].plot(kind='barh', figsize=(10, 8))
"""
Explanation: A little better, but still not ideal.
Let's move forward and plot the number of services available for the list of model strings we requested.
Models per CSW server:
End of explanation
"""
records_per_csw = DataFrame(urns.groupby(["scheme", "server"]).size(),
columns=(["Number of services"]))
model_csw_plotter = records_per_csw.unstack("server")
ax = model_csw_plotter.plot(kind='barh', subplots=True,
figsize=(12, 30), sharey=True)
"""
Explanation: Services available per CSW server:
End of explanation
"""
from IPython.display import HTML
# `html` is assumed to hold the notebook styling string loaded in an earlier
# cell of the original notebook.
HTML(html)
"""
Explanation: Querying several catalogs like we did in this notebook is very slow.
This approach should be used only to help to determine which catalog we can use after we know what type of data and service we need.
You can see the original IOOS System Test notebook here.
End of explanation
"""
|
cliburn/sta-663-2017
|
homework/09_Multivariate_Optimization_Solutions.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Multivariate Optimization
In this homework, we will implement the conjugate gradient descent algorithm. While you should nearly always use an optimization routine from a library for practical data analysis, this exercise is useful because it will make concepts from multivariable calculus and linear algebra covered in the lectures concrete for you. Also, it brings you up the learning curve for the implementation of more complex algorithms than the ones you have been exposed to so far.
Note: The exercise assumes that we can calculate the gradient and Hessian of the function we are trying to minimize. This can be computationally expensive or not even possible for some functions. Approximate methods can then be used; we do not go into such complexities here.
Conjugate gradient descent
We want to implement the line search method
$$
x_{k+1} = x_k + \alpha_k p_k
$$
where $\alpha_k$ is the step size and $p_k$ is the search direction.
In particular, we want the search directions $p_k$ to be conjugate, as this will allow us to find the minimum in $n$ steps for $x \in \mathbb{R}^n$ if $f(x)$ is a quadratic function.
The following exercises will unpack this:
What quadratic functions are
What conjugate vectors are
How to find conjugate vectors by Gram-Schmidt process
How to find the step size $\alpha_k$
and finally wrap them all into a conjugate gradient algorithm.
Quadratic function surfaces
Recall that our objective is to minimize a scalar-valued function which maps $\mathbb{R}^n \mapsto \mathbb{R}$, for example, a log likelihood function (for MLE) or an unnormalized posterior distribution (for MAP). Geometrically, we are trying to find the value of the lowest point of some surface. The conjugate gradient algorithm assumes that the surface can be approximated by the quadratic expression (say, by using a Taylor series expansion about $x$)
$$
f(x) = \frac{1}{2}x^TAx - b^Tx + c
$$
and that
$$
\nabla f = Ax - b = 0
$$
at the minimum (if A is positive definite). Note that $A$ is a matrix, $b$ is a vector, and $c$ is a scalar. Also, note that the matrix $A$ is the Hessian of the quadratic function. For simplicity, we'll work in $\mathbb{R}^2$ so we can visualize the surface, so that $x$ is a 2-vector.
Note: A form is a polynomial function where every term has the same degree - for example, $x^2 + 2xy + y^2$ is a quadratic form, which can be rewritten as
$$
\begin{pmatrix}
x & y
\end{pmatrix}
\begin{pmatrix}
1 & 1\
1 & 1 \\
1 & 1
\end{pmatrix}
\begin{pmatrix}
x \\
\end{pmatrix}
$$
That is, $x^TAx$ is a quadratic form.
End of explanation
"""
def f(x, A, b, c):
"""Surface of a quadratic function."""
    return 0.5*x.T@A@x - b.T@x + c
def plot_contour(bounds, n, A, b, c):
"""Contour plot of quadratic function."""
xmin, xmax, ymin, ymax = bounds
x = np.linspace(xmin, xmax, n)
y = np.linspace(ymin, ymax, n)
X, Y = np.meshgrid(x, y)
z = np.zeros((n, n))
for i in range(n):
for j in range(n):
v = np.array([X[i, j], Y[i, j]])
z[i, j] = f(v, A, b, c)
g = plt.contour(X, Y, z)
plt.clabel(g, inline=True, fontsize=10)
plt.axis('square')
def plot_vectors(vs):
"""Plot the vectors vs."""
for v in vs:
plt.arrow(0, 0, v[0], v[1], head_width=0.5, head_length=0.5)
A = np.eye(2)
b = np.zeros(2)
c = 0
n = 25
bounds = [-8, 8, -8, 8]
plot_contour(bounds, n, A, b, c)
u1 = np.array([3,3])
v1 = np.array([3,-3])
plot_vectors([u1, v1])
plt.axis(bounds)
u1 @ v1
"""
Explanation: Exercise 1 (20 points)
We will work with function $f_1$
$$
f1(x) = \frac{1}{2} x^T \pmatrix{1 & 0 \\ 0 & 1}x
$$
and function $f_2$
$$
f2(x) = \frac{1}{2} x^T \pmatrix{1 & 0 \\ 0 & 3}x
$$
Plot the labeled contours of the quadratic functions
Use a streamplot to show the gradient field of the above quadratic functions.
End of explanation
"""
Y, X = np.mgrid[bounds[0]:bounds[1]:n*1j, bounds[2]:bounds[3]:n*1j]
U = A[0,0]*X + A[0,1]*Y - b[0]
V = A[1,0]*X + A[1,1]*Y - b[1]
plt.streamplot(X, Y, U, V, color=U, linewidth=2, cmap=plt.cm.autumn)
plt.axis('square')
pass
"""
Explanation: The gradient vector field
End of explanation
"""
A = np.array([[1,0],[0,3]])
b = np.zeros(2)
c = 0
u2 = np.array([3, np.sqrt(3)])
v2 = np.array([3, -np.sqrt(3)])
plot_contour(bounds, n, A, b, c)
plot_vectors([u2, v2])
np.around(u2@A@v2, 6)
plt.axis(bounds)
"""
Explanation: Conjugate vectors
The vectors $u_2$ and $v_2$ are conjugate, i.e. $u_2^TAv_2 = 0$. The geometric intuition is that $u_2$ and $v_2$ would be orthogonal if we stretched the contour plots so that the surface became isotropic (the same in all directions, just like when $A = \mathbb{1}$).
End of explanation
"""
def inner(u, v, A):
"""Inner product with respect to matrix A."""
return u@A@v
def gram_schmidt(U, inner):
"""Find matrix of conjugate (under A) vectors V from the matrix of basiss vectors U."""
n = U.shape[1]
V = np.zeros_like(U).astype('float')
V[:, 0] = U[:, 0]
for i in range(1, n):
v = U[:, i]
for j in range(i):
u = V[:, j]
v = v - inner(u, v)/inner(u, u)*u
V[:, i] = v
return V
from functools import partial
inner_ = partial(inner, A=A)
U = np.array([[3,3], [3,-3]]).T
gram_schmidt(U, inner_)
A = np.array([[1,0],[0,3]])
b = np.zeros(2)
c = 0
u2 = np.array([3, 3])
v2 = np.array([4.5, -1.5])
plot_contour(bounds, n, A, b, c)
plot_vectors([u2, v2])
np.around(u2@A@v2, 6)
plt.axis(bounds)
"""
Explanation: Gram-Schmidt
The way to numerically find conjugate vectors is to use the Gram-Schmidt process. Here, instead of the usual projection
$$
\text{proj}_u(v) = \frac{u \cdot v}{u \cdot u} \, u
$$
we use the generalized projection
$$
\text{proj}_u(v) = \frac{u^TAv}{u^TAu} \, u
$$
Exercise 2 (30 points)
The vectors $u$ and $v$ are orthogonal, i.e. $u^Tv = 0$, and conjugate with respect to $A$ if $u^TAv = 0$. Write a Gram-Schmidt function to find orthogonal and conjugate vectors with the following signature
```python
def gram_schmidt(U, inner):
"""Return an orthogonal matrix.
    U is a matrix of (column) vectors.
inner is a function that calculates the inner product.
Returns an orthogonal matrix of the same shape as U.
"""
```
Use this function and the appropriate inner product to plot
An orthogonal set of basis vectors for $f_1$
A conjugate set of basis vectors for $f_2$
where the first basis vector is parallel to $\pmatrix{1 \\ 1}$.
End of explanation
"""
def cg(x, A, b, c, max_iter=100, tol=1e-3):
"""Conjugate gradient descent on a quadratic function surface."""
i = 0
r = b - A@x
p = r
delta = r@r
xs = [x]
while i < max_iter and delta > tol**2:
# find next position using optimal step size
alpha = (r @ p)/(p @ A @ p)
x = x + alpha*p
xs.append(x)
# find new direction using Gram-Schmidt
r = b - A@x
beta = (r@A@p ) / (p.T @ A @ p)
p = r - beta*p
# calculate distance moved
delta = r@r
# update count
i = i+1
return i, np.array(xs)
x = np.array([6,7])
A = np.array([[1, 0], [0, 3]])
b = np.zeros(2)
c = 0
i, xs = cg(x, A, b, c)
n = 25
bounds = [-8, 8, -8, 8]
plot_contour(bounds, n, A, b, c)
plt.scatter([xs[0,0]], [xs[0,1]], c='red')
plt.plot(xs[:,0], xs[:,1], c='red')
plt.axis(bounds)
pass
i
"""
Explanation: Exercise 3 (20 points)
We now need to find the "step size" $\alpha$ to take in the direction of the search vector $p$. We can get a quadratic approximation to a general nonlinear function $f$ by taking the Taylor series in the direction of $p$
$$
f(x + \alpha p) = f(x) + \alpha [f'(x)]^T p + \frac{\alpha^2}{2} p^T f''(x) p
$$
Find the derivative with respect to $\alpha$ and use this to find the optimal value of $\alpha$ with respect to the quadratic approximation. Write a function that returns $\alpha$ for a quadratic function with the following signature
```python
def step(x, p, A, b):
"""Returns the optimal step size to take in line search on a quadratic.
    A and b are the coefficients of the quadratic expression
$$
f(x) = \frac{1}{2}x^TAx - b^Tx + c
$$
p is the search direction
x is the current location
"""
```
Line search
We now know how to find a search direction $p_k$ - a vector that is conjugate to the previous search direction. The first search direction is usually taken to be the direction of steepest descent, i.e. the negative gradient (the residual $r = b - Ax$). Next we need to find out how far along $p_k$ we need to travel, i.e., we need to find $\alpha_k$. First we take a Taylor expansion in the direction of $p$
$$
f(x + \alpha p) = f(x) + \alpha [f'(x)]^T p + \frac{\alpha^2}{2} p^T f''(x) p
$$
followed by finding the derivative with respect to $\alpha$
$$
\frac{d}{d\alpha} f(x + \alpha p) = [f'(x)]^T p + \alpha p^T f''(x) p
$$
Solving for $\frac{d}{d\alpha} f(x + \alpha p) = 0$, we get
$$
\alpha = - \frac{[f'(x)]^T p}{p^T f''(x) p} \\
= - \frac{\nabla f^T p}{p^T A p} \\
= \frac{(b - Ax)^T p}{p^T A p}
$$
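A minimal sketch of such a step-size function, directly implementing the expression above (one possible realisation of the requested signature):
```python
import numpy as np

def step(x, p, A, b):
    """Optimal step size alpha = (b - Ax)^T p / (p^T A p) for the quadratic f (sketch)."""
    r = b - A @ x                  # residual = negative gradient of f at x
    return (r @ p) / (p @ A @ p)

A = np.array([[1, 0], [0, 3]])
b = np.zeros(2)
x = np.array([6.0, 7.0])
p = b - A @ x                      # e.g. start along the steepest-descent direction
print(step(x, p, A, b))
```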
Exercise 4 (30 points)
Implement the conjugate gradient descent algorithm with the following signature
```python
def cg(x, A, b, c, max_iter=100, tol=1e-3):
"""Conjugate gradient descent on a quadratic function surface.
x is the starting position
    A, b and c are the coefficients of the quadratic expression
$$
f(x) = \frac{1}{2}x^TAx - b^Tx + c
$$
max_iter is the maximum number of iterations to take
tol is the tolerance (stop if the length of the gradient is smaller than tol)
Returns the number of steps taken and the list of all positions visited.
"""
```
Use cg to find the minimum of the function $f_2$ from Exercise 1, starting from $\pmatrix{6 \\ 7}$.
Plot the contour of the function $f$ and the trajectory taken from the initial starting point $x$ to the final position, including all the intermediate steps.
We are not particularly concerned about efficiency here, so don't worry about JIT/AOT/C++ level optimization.
Conjugate gradient algorithm
For a more comprehensive discussion and efficient implementation, see An Introduction to
the Conjugate Gradient Method Without the Agonizing Pain
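As a final sanity check (a sketch reusing the `cg` function defined above), the end point of the trajectory can be compared with the exact minimiser obtained from a direct solve of $Ax = b$:
```python
import numpy as np

A = np.array([[1, 0], [0, 3]])
b = np.zeros(2)
n_steps, xs = cg(np.array([6.0, 7.0]), A, b, 0)
print(n_steps, xs[-1], np.linalg.solve(A, b))  # the cg end point should be close to the exact solution
```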
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/hammoz-consortium/cmip6/models/mpiesm-1-2-ham/atmos.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'mpiesm-1-2-ham', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: MPIESM-1-2-HAM
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
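For example (purely illustrative placeholder values, not the actual document authors):
```python
# Hypothetical placeholder values -- replace with the real author details
DOC.set_author("Jane Doe", "jane.doe@example.org")
```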
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
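# Illustrative example only (not part of the template): a boolean property takes an
# unquoted value, e.g.
#     DOC.set_value(True)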
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
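# Illustrative example only (not part of the template): a fixed solar constant would
# typically be a value close to the observed ~1361 W m-2, e.g.
#     DOC.set_value(1361.0)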
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
pligor/predicting-future-product-prices
|
04_time_series_prediction/06_price_history_varlen-no-outliers.ipynb
|
agpl-3.0
|
from __future__ import division
import tensorflow as tf
from os import path
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from mylibs.jupyter_notebook_helper import show_graph
from tensorflow.contrib import rnn
from tensorflow.contrib import learn
import shutil
from tensorflow.contrib.learn.python.learn import learn_runner
from IPython.display import Image
from IPython.core.display import HTML
from mylibs.tf_helper import getDefaultGPUconfig
from data_providers.binary_shifter_varlen_data_provider import \
BinaryShifterVarLenDataProvider
from data_providers.price_history_varlen_data_provider import PriceHistoryVarLenDataProvider
from models.model_05_price_history_rnn_varlen import PriceHistoryRnnVarlen
from sklearn.metrics import r2_score
from mylibs.py_helper import factors
from fastdtw import fastdtw
from scipy.spatial.distance import euclidean
from statsmodels.tsa.stattools import coint
dtype = tf.float32
seed = 16011984
random_state = np.random.RandomState(seed=seed)
config = getDefaultGPUconfig()
%matplotlib inline
from common import get_or_run_nn
"""
Explanation: https://r2rt.com/recurrent-neural-networks-in-tensorflow-iii-variable-length-sequences.html
End of explanation
"""
num_epochs = 10
series_max_len = 60
num_features = 1 #just one here, the function we are predicting is one-dimensional
state_size = 400
target_len = 30
batch_size = 47
"""
Explanation: Step 0 - hyperparams
End of explanation
"""
csv_in = '../price_history_03a_fixed_width.csv'
npz_path = '../price_history_03_dp_60to30_from_fixed_len.npz'
# XX, YY, sequence_lens, seq_mask = PriceHistoryVarLenDataProvider.createAndSaveDataset(
# csv_in=csv_in,
# npz_out=npz_path,
# input_seq_len=60, target_seq_len=30)
# XX.shape, YY.shape, sequence_lens.shape, seq_mask.shape
dp = PriceHistoryVarLenDataProvider(filteringSeqLens = lambda xx : xx >= target_len,
npz_path=npz_path)
dp.inputs.shape, dp.targets.shape, dp.sequence_lengths.shape, dp.sequence_masks.shape
"""
Explanation: Step 1 - collect data (and/or generate them)
End of explanation
"""
model = PriceHistoryRnnVarlen(rng=random_state, dtype=dtype, config=config)
graph = model.getGraph(batch_size=batch_size, state_size=state_size,
target_len=target_len, series_max_len=series_max_len)
show_graph(graph)
"""
Explanation: Step 2 - Build model
End of explanation
"""
num_epochs, state_size, batch_size
def experiment():
    dynStats, predictions_dict = model.run(epochs=num_epochs,
                                           state_size=state_size,
                                           series_max_len=series_max_len,
                                           target_len=target_len,
                                           npz_path=npz_path,
                                           batch_size=batch_size)
    return dynStats, predictions_dict
from os.path import isdir
data_folder = '../../../../Dropbox/data'
assert isdir(data_folder)
dyn_stats, preds_dict = get_or_run_nn(experiment,
filename='001_plain_rnn_60to30', nn_runs_folder= data_folder + '/nn_runs')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
sns.tsplot(data=dp.inputs[ind].flatten())
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
from cost_functions.huber_loss import huber_loss
average_huber_loss = np.mean([np.mean(huber_loss(dp.targets[ind], preds_dict[ind]))
for ind in range(len(dp.targets))])
average_huber_loss
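# For reference only: a sketch of a standard element-wise Huber loss (delta=1).
# The project's cost_functions.huber_loss implementation may differ in its details.
def huber_loss_reference(y_true, y_pred, delta=1.0):
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    return np.where(err <= delta, 0.5 * err ** 2, delta * (err - 0.5 * delta))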
%%time
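# fastdtw returns a (distance, warp_path) tuple; only the DTW distance is kept via [0].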
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
"""
Explanation: Step 3 - training the network
End of explanation
"""
num_epochs, state_size, batch_size
cost_func = PriceHistoryRnnVarlen.COST_FUNCS.MSE
def experiment():
    dynStats, predictions_dict = model.run(epochs=num_epochs,
                                           cost_func=cost_func,
                                           state_size=state_size,
                                           series_max_len=series_max_len,
                                           target_len=target_len,
                                           npz_path=npz_path,
                                           batch_size=batch_size)
    return dynStats, predictions_dict
dyn_stats, preds_dict = get_or_run_nn(experiment,
filename='001_plain_rnn_60to30_mse')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
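# Illustrative unpacking (variable names are ours) of the Engle-Granger cointegration test
# above: statsmodels' coint returns (t-statistic, p-value, critical values); a small p-value
# suggests the predicted and real series share a common stochastic trend.
coint_t, coint_pvalue, coint_crit = coint(preds, reals)
coint_t, coint_pvalue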
"""
Explanation: TODO Cointegration
https://en.wikipedia.org/wiki/Cointegration
https://www.quora.com/What-are-some-methods-to-check-similarities-between-two-time-series-data-sets
https://stackoverflow.com/questions/11362943/efficient-cointegration-test-in-python
Mean Squared Error (instead of huber loss)
End of explanation
"""
|
tata-antares/tagging_LHCb
|
experiments_MC_data_reweighting/not_simulated_tracks_removing.ipynb
|
apache-2.0
|
%pylab inline
figsize(8, 6)
import sys
sys.path.insert(0, "../")
"""
Explanation: Idea
Expected result
training is performed on the channel's own simulated data
the differences between simulation and data are taken into account (see the algorithm below)
the quality estimate (as well as the calibration) will be unbiased
the quality is better than the baseline
Algorithm
we use the inclusive tagging approach
using a classifier, we learn a reweighting of the differences between the samples (we remove tracks from the simulation that do not appear in the data, and remove from the data tracks that we cannot simulate):
the classifier predicts $p(MC)$ and $p(RD)$
for simulation, if $p(MC)>0.5$
$$w_{MC}=\frac{p(RD)}{p(MC)},$$
otherwise
$$w_{MC}=1$$
- for data, if $p(MC)<0.5$
$$w_{RD}=\frac{p(MC)}{p(RD)},$$
otherwise
$$w_{RD}=1$$
- normalise the weights within each event
- in the combination formula, raise to the power $w * sign$
End of explanation
"""
import pandas
import numpy
from folding_group import FoldingGroupClassifier
from rep.data import LabeledDataStorage
from rep.report import ClassificationReport
from rep.report.metrics import RocAuc
from sklearn.metrics import roc_curve, roc_auc_score
from decisiontrain import DecisionTrainClassifier
from rep.estimators import SklearnClassifier
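# A toy numerical illustration (added here for clarity, not part of the original notebook)
# of the reweighting rule described in the Idea section above, for three MC tracks and
# three data tracks with made-up classifier outputs.
p_mc_on_mc = numpy.array([0.7, 0.4, 0.9])  # classifier p(MC) evaluated on MC tracks
p_mc_on_rd = numpy.array([0.3, 0.6, 0.2])  # classifier p(MC) evaluated on data tracks
w_mc = numpy.where(p_mc_on_mc > 0.5, (1 - p_mc_on_mc) / p_mc_on_mc, 1.)  # p(RD) / p(MC)
w_rd = numpy.where(p_mc_on_rd < 0.5, p_mc_on_rd / (1 - p_mc_on_rd), 1.)  # p(MC) / p(RD)
w_mc, w_rd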
"""
Explanation: Import
End of explanation
"""
import root_numpy
MC = pandas.DataFrame(root_numpy.root2array('../datasets/MC/csv/WG/Bu_JPsiK/2012/Tracks.root', stop=5000000))
data = pandas.DataFrame(root_numpy.root2array('../datasets/data/csv/WG/Bu_JPsiK/2012/Tracks.root', stop=5000000))
data.head()
MC.head()
"""
Explanation: Reading initial data
End of explanation
"""
from utils import data_tracks_preprocessing
data = data_tracks_preprocessing(data, N_sig_sw=True)
MC = data_tracks_preprocessing(MC)
', '.join(data.columns)
print sum(data.signB == 1), sum(data.signB == -1)
print sum(MC.signB == 1), sum(MC.signB == -1)
"""
Explanation: Data preprocessing:
Add necessary features:
- define label = signB * signTrack
    * if > 0 (same sign) - label **1**
    * if < 0 (different sign) - label **0**
- diff pt, min/max PID
Apply selections:
- remove ghost tracks
- loose selection on PID
End of explanation
"""
mask_sw_positive = (data.N_sig_sw.values > 1) * 1
data.head()
data['group_column'] = numpy.unique(data.event_id, return_inverse=True)[1]
MC['group_column'] = numpy.unique(MC.event_id, return_inverse=True)[1]
data.index = numpy.arange(len(data))
MC.index = numpy.arange(len(MC))
"""
Explanation: Define mask for non-B events
End of explanation
"""
# features = ['cos_diff_phi', 'diff_pt', 'partPt', 'partP', 'nnkrec', 'diff_eta', 'EOverP',
# 'ptB', 'sum_PID_mu_k', 'proj', 'PIDNNe', 'sum_PID_k_e', 'PIDNNk', 'sum_PID_mu_e', 'PIDNNm',
# 'phi', 'IP', 'IPerr', 'IPs', 'veloch', 'max_PID_k_e', 'ghostProb',
# 'IPPU', 'eta', 'max_PID_mu_e', 'max_PID_mu_k', 'partlcs']
features = ['cos_diff_phi', 'partPt', 'partP', 'nnkrec', 'diff_eta', 'EOverP',
'ptB', 'sum_PID_mu_k', 'proj', 'PIDNNe', 'sum_PID_k_e', 'PIDNNk', 'sum_PID_mu_e', 'PIDNNm',
'phi', 'IP', 'IPerr', 'IPs', 'veloch', 'max_PID_k_e', 'ghostProb',
'IPPU', 'eta', 'max_PID_mu_e', 'max_PID_mu_k', 'partlcs']
"""
Explanation: Define features
End of explanation
"""
b_ids_data = numpy.unique(data.group_column.values, return_index=True)[1]
b_ids_MC = numpy.unique(MC.group_column.values, return_index=True)[1]
Bdata = data.iloc[b_ids_data].copy()
BMC = MC.iloc[b_ids_MC].copy()
Bdata['Beta'] = Bdata.diff_eta + Bdata.eta
BMC['Beta'] = BMC.diff_eta + BMC.eta
Bdata['Bphi'] = Bdata.diff_phi + Bdata.phi
BMC['Bphi'] = BMC.diff_phi + BMC.phi
Bfeatures = ['Beta', 'Bphi', 'ptB']
hist(Bdata['ptB'].values, normed=True, alpha=0.5, bins=60,
weights=Bdata['N_sig_sw'].values)
hist(BMC['ptB'].values, normed=True, alpha=0.5, bins=60);
hist(Bdata['Beta'].values, normed=True, alpha=0.5, bins=60,
weights=Bdata['N_sig_sw'].values)
hist(BMC['Beta'].values, normed=True, alpha=0.5, bins=60);
hist(Bdata['Bphi'].values, normed=True, alpha=0.5, bins=60,
weights=Bdata['N_sig_sw'].values)
hist(BMC['Bphi'].values, normed=True, alpha=0.5, bins=60);
tt_base = DecisionTrainClassifier(learning_rate=0.02, n_estimators=1000,
n_threads=16)
data_vs_MC_B = pandas.concat([Bdata, BMC])
label_data_vs_MC_B = [0] * len(Bdata) + [1] * len(BMC)
weights_data_vs_MC_B = numpy.concatenate([Bdata.N_sig_sw.values * (Bdata.N_sig_sw.values > 1) * 1,
numpy.ones(len(BMC))])
weights_data_vs_MC_B_all = numpy.concatenate([Bdata.N_sig_sw.values, numpy.ones(len(BMC))])
tt_B = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=Bfeatures, group_feature='group_column')
%time tt_B.fit(data_vs_MC_B, label_data_vs_MC_B, sample_weight=weights_data_vs_MC_B)
pass
roc_auc_score(label_data_vs_MC_B, tt_B.predict_proba(data_vs_MC_B)[:, 1], sample_weight=weights_data_vs_MC_B)
roc_auc_score(label_data_vs_MC_B, tt_B.predict_proba(data_vs_MC_B)[:, 1], sample_weight=weights_data_vs_MC_B_all)
from hep_ml.reweight import GBReweighter, FoldingReweighter
reweighterB = FoldingReweighter(GBReweighter(), random_state=3444)
reweighterB.fit(BMC[Bfeatures], Bdata[Bfeatures], target_weight=Bdata.N_sig_sw)
BMC_weights = reweighterB.predict_weights(BMC[Bfeatures])
hist(Bdata['ptB'].values, normed=True, alpha=0.5, bins=60,
weights=Bdata['N_sig_sw'].values)
hist(BMC['ptB'].values, normed=True, alpha=0.5, bins=60, weights=BMC_weights);
weights_data_vs_MC_B_w = numpy.concatenate([Bdata.N_sig_sw.values * (Bdata.N_sig_sw.values > 1) * 1,
BMC_weights])
weights_data_vs_MC_B_all_w = numpy.concatenate([Bdata.N_sig_sw.values, BMC_weights])
tt_B = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=Bfeatures, group_feature='group_column')
%time tt_B.fit(data_vs_MC_B, label_data_vs_MC_B, sample_weight=weights_data_vs_MC_B_w)
roc_auc_score(label_data_vs_MC_B, tt_B.predict_proba(data_vs_MC_B)[:, 1], sample_weight=weights_data_vs_MC_B_all_w)
MC['N_sig_sw'] = BMC_weights[numpy.unique(MC.group_column.values, return_inverse=True)[1]]
"""
Explanation: Test that B-events are similar in MC and data
End of explanation
"""
def compute_target_number_of_tracks(X):
    ids = numpy.unique(X.group_column, return_inverse=True)[1]
    number_of_tracks = numpy.bincount(X.group_column)
    target = number_of_tracks[ids]
    return target
from decisiontrain import DecisionTrainRegressor
from rep.estimators import SklearnRegressor
from rep.metaml import FoldingRegressor
tt_base_reg = DecisionTrainRegressor(learning_rate=0.02, n_estimators=1000,
n_threads=16)
%%time
tt_data_NT = FoldingRegressor(SklearnRegressor(tt_base_reg), n_folds=2, random_state=321,
features=features)
tt_data_NT.fit(data, compute_target_number_of_tracks(data), sample_weight=data.N_sig_sw.values * mask_sw_positive)
from sklearn.metrics import mean_squared_error
mean_squared_error(compute_target_number_of_tracks(data), tt_data_NT.predict(data),
sample_weight=data.N_sig_sw.values) ** 0.5
mean_squared_error(compute_target_number_of_tracks(data),
[numpy.mean(compute_target_number_of_tracks(data))] * len(data),
sample_weight=data.N_sig_sw.values) ** 0.5
%%time
tt_MC_NT = FoldingRegressor(SklearnRegressor(tt_base_reg), n_folds=2, random_state=321,
features=features)
tt_MC_NT.fit(MC, compute_target_number_of_tracks(MC), sample_weight=MC.N_sig_sw.values)
mean_squared_error(compute_target_number_of_tracks(MC),
tt_MC_NT.predict(MC), sample_weight=MC.N_sig_sw.values) ** 0.5
mean_squared_error(compute_target_number_of_tracks(MC),
[numpy.mean(compute_target_number_of_tracks(MC))] * len(MC),
sample_weight=MC.N_sig_sw.values) ** 0.5
tt_MC_NT.get_feature_importances().sort_values(by='effect')[-5:]
"""
Explanation: Test that the number of tracks is independent of the track description
End of explanation
"""
tt_base = DecisionTrainClassifier(learning_rate=0.02, n_estimators=1000,
n_threads=16)
B_signs = data['signB'].groupby(data['group_column']).aggregate(numpy.mean)
B_weights = data['N_sig_sw'].groupby(data['group_column']).aggregate(numpy.mean)
B_signs_MC = MC['signB'].groupby(MC['group_column']).aggregate(numpy.mean)
B_weights_MC = MC['N_sig_sw'].groupby(MC['group_column']).aggregate(numpy.mean)
"""
Explanation: Define base estimator and B weights, labels
End of explanation
"""
from scipy.special import logit, expit
def compute_Bprobs(X, track_proba, weights=None, normed_weights=False):
    if weights is None:
        weights = numpy.ones(len(X))
    _, data_ids = numpy.unique(X['group_column'], return_inverse=True)
    track_proba[~numpy.isfinite(track_proba)] = 0.5
    track_proba[numpy.isnan(track_proba)] = 0.5
    if normed_weights:
        weights_per_events = numpy.bincount(data_ids, weights=weights)
        weights /= weights_per_events[data_ids]
    # combine per-track votes: sum logit(p) * track sign * weight within each event,
    # then map the per-event sum back to a probability with the sigmoid
    predictions = numpy.bincount(data_ids, weights=logit(track_proba) * X['signTrack'] * weights)
    return expit(predictions)
"""
Explanation: B probability computation
End of explanation
"""
tt_data = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
%time tt_data.fit(data, data.label, sample_weight=data.N_sig_sw.values * mask_sw_positive)
pass
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC, compute_Bprobs(MC, tt_data.predict_proba(MC)[:, 1]), sample_weight=B_weights_MC),
roc_auc_score(
B_signs, compute_Bprobs(data, tt_data.predict_proba(data)[:, 1]), sample_weight=B_weights)]})
"""
Explanation: Inclusive tagging: training on data
End of explanation
"""
tt_MC = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
%time tt_MC.fit(MC, MC.label)
pass
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC, compute_Bprobs(MC, tt_MC.predict_proba(MC)[:, 1]), sample_weight=B_weights_MC),
roc_auc_score(
B_signs, compute_Bprobs(data, tt_MC.predict_proba(data)[:, 1]), sample_weight=B_weights)]})
"""
Explanation: Inclusive tagging: training on MC
End of explanation
"""
combined_data_MC = pandas.concat([data, MC])
combined_label = numpy.array([0] * len(data) + [1] * len(MC))
combined_weights_data = data.N_sig_sw.values #/ numpy.bincount(data.group_column)[data.group_column.values]
combined_weights_data_passed = combined_weights_data * mask_sw_positive
combined_weights_MC = MC.N_sig_sw.values# / numpy.bincount(MC.group_column)[MC.group_column.values]
combined_weights = numpy.concatenate([combined_weights_data_passed,
1. * combined_weights_MC / sum(combined_weights_MC) * sum(combined_weights_data_passed)])
combined_weights_all = numpy.concatenate([combined_weights_data,
1. * combined_weights_MC / sum(combined_weights_MC) * sum(combined_weights_data)])
"""
Explanation: New method
Reweighting with classifier
combine data and MC together to train a classifier
End of explanation
"""
%%time
tt_base_large = DecisionTrainClassifier(learning_rate=0.3, n_estimators=1000,
n_threads=20)
tt_data_vs_MC = FoldingGroupClassifier(SklearnClassifier(tt_base_large), n_folds=2, random_state=321,
train_features=features + ['label'], group_feature='group_column')
tt_data_vs_MC.fit(combined_data_MC, combined_label, sample_weight=combined_weights)
a = []
for n, p in enumerate(tt_data_vs_MC.staged_predict_proba(combined_data_MC)):
    a.append(roc_auc_score(combined_label, p[:, 1], sample_weight=combined_weights))
plot(a)
"""
Explanation: train classifier to distinguish data and MC
End of explanation
"""
combined_p = tt_data_vs_MC.predict_proba(combined_data_MC)[:, 1]
roc_auc_score(combined_label, combined_p, sample_weight=combined_weights)
roc_auc_score(combined_label, combined_p, sample_weight=combined_weights_all)
"""
Explanation: quality
End of explanation
"""
from utils import calibrate_probs, plot_calibration
combined_p_calib = calibrate_probs(combined_label, combined_weights, combined_p)[0]
plot_calibration(combined_p, combined_label, weight=combined_weights)
plot_calibration(combined_p_calib, combined_label, weight=combined_weights)
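# Illustrative sketch only: calibrate_probs comes from the project's utils module and its
# exact method is not shown here. One common choice is isotonic regression; the lines below
# are an assumption about what such a calibration could look like (and they skip the
# folding used elsewhere in this notebook), not the project's actual implementation.
from sklearn.isotonic import IsotonicRegression
iso_sketch = IsotonicRegression(y_min=0, y_max=1, out_of_bounds='clip')
iso_sketch.fit(combined_p, combined_label, sample_weight=combined_weights)
combined_p_iso_sketch = iso_sketch.predict(combined_p)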
"""
Explanation: calibrate probabilities (due to reweighting rule where probabilities are used)
End of explanation
"""
# reweight data predicted as data to MC
used_probs = combined_p_calib
data_probs_to_be_MC = used_probs[combined_label == 0]
MC_probs_to_be_MC = used_probs[combined_label == 1]
track_weights_data = numpy.ones(len(data))
# take data with probability to be data
mask_data = data_probs_to_be_MC < 0.5
track_weights_data[mask_data] = (data_probs_to_be_MC[mask_data]) / (1 - data_probs_to_be_MC[mask_data])
# reweight MC predicted as MC to data
track_weights_MC = numpy.ones(len(MC))
mask_MC = MC_probs_to_be_MC > 0.5
track_weights_MC[mask_MC] = (1 - MC_probs_to_be_MC[mask_MC]) / (MC_probs_to_be_MC[mask_MC])
# simple approach, reweight only MC
track_weights_only_MC = (1 - MC_probs_to_be_MC) / MC_probs_to_be_MC
# data_ids = numpy.unique(data['group_column'], return_inverse=True)[1]
# MC_ids = numpy.unique(MC['group_column'], return_inverse=True)[1]
# # event_weight_data = (numpy.bincount(data_ids, weights=data.N_sig_sw) / numpy.bincount(data_ids))[data_ids]
# # event_weight_MC = (numpy.bincount(MC_ids, weights=MC.N_sig_sw) / numpy.bincount(MC_ids))[MC_ids]
# # normalize weights for tracks in a way that sum w_track = 1 per event
# track_weights_data /= numpy.bincount(data_ids, weights=track_weights_data)[data_ids]
# track_weights_MC /= numpy.bincount(MC_ids, weights=track_weights_MC)[MC_ids]
"""
Explanation: compute MC and data track weights
End of explanation
"""
hist(combined_p_calib[combined_label == 1], label='MC', normed=True, alpha=0.4, bins=60,
weights=combined_weights_MC)
hist(combined_p_calib[combined_label == 0], label='data', normed=True, alpha=0.4, bins=60,
weights=combined_weights_data);
legend(loc='best')
hist(track_weights_MC, normed=True, alpha=0.4, bins=60, label='MC')
hist(track_weights_data, normed=True, alpha=0.4, bins=60, label='RD');
legend(loc='best')
numpy.mean(track_weights_data), numpy.mean(track_weights_MC)
hist(combined_p_calib[combined_label == 1], label='MC', normed=True, alpha=0.4, bins=60,
weights=track_weights_MC * MC.N_sig_sw.values)
hist(combined_p_calib[combined_label == 0], label='data', normed=True, alpha=0.4, bins=60,
weights=track_weights_data * data.N_sig_sw.values);
legend(loc='best')
roc_auc_score(combined_label, combined_p_calib,
sample_weight=numpy.concatenate([track_weights_data * data.N_sig_sw.values,
track_weights_MC * MC.N_sig_sw.values]))
"""
Explanation: reweighting plotting
End of explanation
"""
%%time
tt_check = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=433,
train_features=features + ['label'], group_feature='group_column')
tt_check.fit(combined_data_MC, combined_label,
sample_weight=numpy.concatenate([track_weights_data * data.N_sig_sw.values * mask_sw_positive,
track_weights_MC * MC.N_sig_sw.values]))
roc_auc_score(combined_label, tt_check.predict_proba(combined_data_MC)[:, 1],
sample_weight=numpy.concatenate([track_weights_data * data.N_sig_sw.values * mask_sw_positive,
track_weights_MC * MC.N_sig_sw.values]))
# * sum(track_weights_data * mask_sw_positive) / sum(track_weights_MC)
roc_auc_score(combined_label, tt_check.predict_proba(combined_data_MC)[:, 1],
sample_weight=numpy.concatenate([track_weights_data * data.N_sig_sw.values,
track_weights_MC * MC.N_sig_sw.values]))
# * sum(track_weights_data) / sum(track_weights_MC)
"""
Explanation: Check reweighting rule
train classifier to distinguish data vs MC with provided weights
End of explanation
"""
tt_reweighted_MC = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
%time tt_reweighted_MC.fit(MC, MC.label, sample_weight=track_weights_MC * MC.N_sig_sw.values)
pass
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC,
compute_Bprobs(MC, tt_reweighted_MC.predict_proba(MC)[:, 1],
weights=track_weights_MC, normed_weights=False),
sample_weight=B_weights_MC),
roc_auc_score(
B_signs,
compute_Bprobs(data, tt_reweighted_MC.predict_proba(data)[:, 1],
weights=track_weights_data, normed_weights=False),
sample_weight=B_weights)]})
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC,
compute_Bprobs(MC, tt_reweighted_MC.predict_proba(MC)[:, 1],
weights=track_weights_MC, normed_weights=False),
sample_weight=B_weights_MC),
roc_auc_score(
B_signs,
compute_Bprobs(data, tt_reweighted_MC.predict_proba(data)[:, 1],
weights=track_weights_data, normed_weights=False),
sample_weight=B_weights)]})
"""
Explanation: Classifier trained on MC
End of explanation
"""
%%time
tt_reweighted_data = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
tt_reweighted_data.fit(data, data.label,
sample_weight=track_weights_data * data.N_sig_sw.values * mask_sw_positive)
pass
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC,
compute_Bprobs(MC, tt_reweighted_data.predict_proba(MC)[:, 1],
weights=track_weights_MC, normed_weights=False),
sample_weight=B_weights_MC),
roc_auc_score(
B_signs,
compute_Bprobs(data, tt_reweighted_data.predict_proba(data)[:, 1],
weights=track_weights_data, normed_weights=False),
sample_weight=B_weights)]})
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC,
compute_Bprobs(MC, tt_reweighted_data.predict_proba(MC)[:, 1],
weights=track_weights_MC, normed_weights=False),
sample_weight=B_weights_MC),
roc_auc_score(
B_signs,
compute_Bprobs(data, tt_reweighted_data.predict_proba(data)[:, 1],
weights=track_weights_data, normed_weights=False),
sample_weight=B_weights)]})
"""
Explanation: Classifier trained on data
End of explanation
"""
_, data_ids = numpy.unique(data['group_column'], return_inverse=True)
mc_sum_weights_per_event = numpy.bincount(MC.group_column.values, weights=track_weights_MC)
data_sum_weights_per_event = numpy.bincount(data_ids, weights=track_weights_data)
numpy.mean(mc_sum_weights_per_event), numpy.mean(data_sum_weights_per_event)
hist(mc_sum_weights_per_event, bins=60, normed=True, alpha=0.5)
hist(data_sum_weights_per_event, bins=60, normed=True, alpha=0.5, weights=B_weights);
hist(numpy.bincount(MC.group_column), bins=81, normed=True, alpha=0.5, range=(0, 80))
hist(numpy.bincount(data.group_column), bins=81, normed=True, alpha=0.5, range=(0, 80));
hist(expit(p_tt_mc) - expit(p_data), bins=60, weights=B_weights, normed=True, label='standard approach',
alpha=0.5);
hist(expit(p_data_w_MC) - expit(p_data_w), bins=60, weights=B_weights, normed=True, label='compensate method',
alpha=0.5);
legend()
xlabel('$p_{MC}-p_{data}$')
"""
Explanation:
End of explanation
"""
from utils import compute_mistag
bins_perc = [10, 20, 30, 40, 50, 60, 70, 80, 90]
compute_mistag(expit(p_data), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_perc,
uniform=False, label='data')
compute_mistag(expit(p_tt_mc), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_perc,
uniform=False, label='MC')
compute_mistag(expit(p_data_w), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_perc,
uniform=False, label='new')
legend(loc='best')
xlim(0.3, 0.5)
ylim(0.2, 0.5)
bins_edg = numpy.linspace(0.3, 0.9, 10)
compute_mistag(expit(p_data), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_edg,
uniform=True, label='data')
compute_mistag(expit(p_tt_mc), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_edg,
uniform=True, label='MC')
compute_mistag(expit(p_data_w), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_edg,
uniform=True, label='new')
legend(loc='best')
"""
Explanation: Calibration
End of explanation
"""
|
pfctdayelise/aomp
|
How many Australian Open players need photos in Wikipedia?.ipynb
|
mit
|
import mwclient
site = mwclient.Site('en.wikipedia.org')
PLAYERSFILE = 'sampleplayers.txt'
def getPage(name):
    return site.Pages[name]
def hasImage(page):
    # TODO
    return False
hasimage = []
needsimage = []
with open(PLAYERSFILE) as players:
    for player in players:
        page = getPage(player)
        if hasImage(page):
            hasimage.append(player)
        else:
            needsimage.append(player)
print("Has image:", hasimage)
print("Needs image:", needsimage)
"""
Explanation: <a data-flickr-embed="true" href="https://www.flickr.com/photos/pfctdayelise/371603584" title="Sania Mirza"><img src="https://farm1.staticflickr.com/136/371603584_b2127a2671_n.jpg" width="198" height="320" alt="Sania Mirza" align="left" style="padding-right:30px;"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
In 2006 and 2007 I went to the Australian Open to watch the tennis, and source freely-licensed photographs of tennis players for Wikipedia. I took around 100 photos, and happily many of them survive in Wikipedia articles to this day.
Seeing your images on articles is a pretty feel-good way of contributing to Wikipedia. So I am thinking about going again this year (the Open started today), but the scattershot approach I used in 2007 isn't going to cut it any more. So I need to figure out: which players don't have any photos on their Wikipedia bio?
Who's playing?
Using the Wikipedia API
Redirects
Page doesn't exist
Disambiguation pages
hasImage
Full results
Also this photo of Sania Mirza is the third most popular image I have on Flickr - no idea why.
Who's playing?
The official website has a list of players. That's pretty quick to manually copy into a text file and delete a few stray lines.
There are 546 players, so I'm going to work with a shorter sample until I get things vaguely working, to speed up development and avoid hitting the API unnecessarily often.
Using the Wikipedia API
Actually there is no Wikipedia API. But there is a MediaWiki API. It's very powerful, too - all kinds of bots are powered by it. And there is a good Python client library, called mwclient. OK, so I just want something a bit like this *cracks knuckles* ...
# sampleplayers.txt
Baghdatis, Marcos
Bai, Yan
Baker, Brian
Barrere, Gregoire
Basic, Mirza
End of explanation
"""
def normaliseName(name):
    last, first = name.strip().split(', ')
    return ' '.join([first, last])
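# Quick sanity check of the helper on one of the sample names:
normaliseName('Baghdatis, Marcos')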
hasimage = []
needsimage = []
with open(PLAYERSFILE) as players:
    for player in players:
        forwardname = normaliseName(player)
        page = getPage(forwardname)
        if hasImage(page):
            hasimage.append(forwardname)
        else:
            needsimage.append(forwardname)
print("Has image:", hasimage)
print("Needs image:", needsimage)
"""
Explanation: I'll worry about hasImage in a minute. Right now there is a more pressing problem: fixing up the names. I need to get rid of those newlines and make them 'firstname lastname' to match the Wikipedia naming convention.
End of explanation
"""
with open(PLAYERSFILE) as players:
    for player in players:
        forwardname = normaliseName(player)
        page = getPage(forwardname)
        print(forwardname.upper())
        print(page.text()[:200])
"""
Explanation: OK. Maybe now I should verify the pages look like what I think they do...
End of explanation
"""
def getPage(name):
    return site.Pages[name].resolve_redirect()
for player in ['Yan Bai', 'Mirza Basic']:
    page = getPage(player)
    print(player.upper())
    print(page.text()[:200])
"""
Explanation: This reveals a few issues I need to deal with before I start looking at images. The Marcos Baghdatis article seems legit. Yan Bai and Mirza Basic are redirects. Brian Baker is a disambiguation page, and Gregoire Barrere maybe doesn't have a page yet. 😢 Can anyone fix that?
Redirects
If I type in "Yan Bai" at Wikipedia, I get whisked off to https://en.wikipedia.org/wiki/Bai_Yan . There is a visual hint that something happened:
<img src="blog/redirect.png" />
Happily, the API knows about redirects and can automatically resolve them for me.
End of explanation
"""
page.name
"""
Explanation: Looks better! In the second case, the correct name is Mirza Bašić. It's embarrassing that the official Australian Open website can't cope with diacritics tbh.
To record the correct name of the page, I can do the following:
End of explanation
"""
site.Pages['Gregoire Barrere'].exists
"""
Explanation: Page doesn't exist
Gregoire Barrere (or rather Grégoire Barrère) doesn't have a page yet. The API also copes with this pretty well:
End of explanation
"""
def getPage(name):
    page = site.Pages[name].resolve_redirect()
    if not page.exists:
        return
    return page
needspage = []
hasimage = []
needsimage = []
with open(PLAYERSFILE) as players:
    for player in players:
        forwardname = normaliseName(player)
        page = getPage(forwardname)
        if not page:
            needspage.append(forwardname)
            continue
        if hasImage(page):
            hasimage.append(page.name)
        else:
            needsimage.append(page.name)
print("Needs page:", needspage)
print("Has image:", hasimage)
print("Needs image:", needsimage)
"""
Explanation: So I update my getPage function:
End of explanation
"""
page = site.Pages['Brian Baker']
print(page.text())
"""
Explanation: Disambiguation pages
Disambiguation or "dab" pages are what I would call part of the Wikipedia API. They're built on editing community conventions rather than technical capabilities of MediaWiki. But I need to deal with them otherwise the results will be nonsense.
So let's look at the full content of the Brian Baker page and see what there is to play with:
End of explanation
"""
page = site.Pages['Brian Baker']
cats = page.categories()
for cat in cats:
    print(cat['title'])
"""
Explanation: Hmm ok...kind of not that useful. If I look at the page on Wikipedia, I can see there is a bit more structure that is not evident in the page wikitext:
<img src="blog/brianbakerdisambig.png" />
At the bottom there is a category which seems pretty definitive in terms of identifying a disambiguation page. Categories are part of the MediaWiki API:
End of explanation
"""
def isDisambiguation(page):
    cats = page.categories()
    disambigCat = 'Category:All disambiguation pages'
    return disambigCat in [cat['title'] for cat in cats]
page = site.Pages['Brian Baker']
print(isDisambiguation(page))
page = site.Pages['Marcos Baghdatis']
print(isDisambiguation(page))
"""
Explanation: There are some bonus categories, because MediaWiki supports hidden categories. This is one of those features that you don't need unless your wiki has millions of pages and a crowd of obsessive sorters. If you have your preferences arranged just-so you can actually get these categories to show up.
<img src="blog/disambighiddencategories.png" />
OK so... to detect a disambiguation page, I can probably just look for one of these categories. In the API there are two ways to do this - check if the page is in the category, or check if the category is attached to the page. Sounds much of a muchness, but the category All disambiguation pages has over 265,000 members. So I have a hunch let's not do it that way 😉
End of explanation
"""
needspage = []
disambigs = []
hasimage = []
needsimage = []
with open(PLAYERSFILE) as players:
    for player in players:
        forwardname = normaliseName(player)
        page = getPage(forwardname)
        if not page:
            needspage.append(forwardname)
            continue
        if isDisambiguation(page):
            disambigs.append(page.name)
        elif hasImage(page):
            hasimage.append(page.name)
        else:
            needsimage.append(page.name)
print("Needs page:", needspage)
print("Disambig:", disambigs)
print("Has image:", hasimage)
print("Needs image:", needsimage)
"""
Explanation: (Another task is to try and resolve the disambiguation page to the correct page, but I'll tackle that later.)
End of explanation
"""
page = site.Pages['Marcos Baghdatis']
images = page.images()
for image in images:
    print(image['title'])
"""
Explanation: hasImage
Now I have certainty I'm on a player's biography, I can check for images.
I could try and parse the wikitext and see if the tennis player infobox has an image value filled out, but it seems simpler to start with the images API.
End of explanation
"""
def isBoring(imagename):
    # Flags, Increase2.svg, Decrease2.svg
    return imagename.endswith('.svg')
def hasImage(page):
    images = page.images()
    imgnames = [image['title'] for image in images]
    interestingImages = [imgname for imgname in imgnames
                         if not isBoring(imgname)]
    return bool(interestingImages)
for player in ['Marcos Baghdatis', 'Bai Yan', 'Mirza Bašić']:
    page = getPage(player)
    print(player, hasImage(page))
"""
Explanation: That's... a lot of flags. It's because editors like to do this kind of thing:
<img src="blog/flags.png" />
So to filter them out, what do they have in common?
What jumps out at me is that they are SVGs. SVGs are not normally used for photographs (which is what we are trying to detect), so that will be a good start.
End of explanation
"""
needspage = []
disambigs = []
hasimage = []
needsimage = []
with open(PLAYERSFILE) as players:
for player in players:
forwardname = normaliseName(player)
page = getPage(forwardname)
if not page:
needspage.append(forwardname)
continue
if isDisambiguation(page):
disambigs.append(page.name)
elif hasImage(page):
hasimage.append(page.name)
else:
needsimage.append(page.name)
print("Has image:", hasimage)
print("Disambig:", disambigs)
print("Needs image:", needsimage)
print("No page:", needspage)
"""
Explanation: Now we can put it all together:
End of explanation
"""
|
robblack007/clase-metodos-numericos
|
Practicas/P1/Practica 1 - Introduccion a Jupyter.ipynb
|
mit
|
2 + 3
2*3
2**3
sin(pi)
"""
Explanation: Introduction to Jupyter
Arithmetic and algebraic expressions
We will start this practice session with some programming background. I know that many of you have not had the chance to use Python as a programming language, let alone Jupyter as a development environment for scientific computing, so the first goal of this practice is to get used to the language syntax and to the features that make Jupyter special.
First let's try to evaluate an arithmetic expression. To run the code in the following cell, just click anywhere inside it and press Shift + Return.
End of explanation
"""
from math import sin, pi
sin(pi)
"""
Explanation: However, there are no trigonometric functions loaded by default. For this we have to import them from the math library:
End of explanation
"""
a = 10
a
"""
Explanation: Variables
Variables can be used at any time, without having to declare them; just use them!
End of explanation
"""
c =
"""
Explanation: Exercise
Run the following calculation and store it in a variable:
$$
c = \pi \cdot 10^2
$$
Note: Once you have finished the calculation and stored the value in a variable, you can display the value of any variable by executing the variable's name in a cell
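One possible solution sketch, reusing the math import shown earlier in this practice:
from math import pi
c = pi * 10**2   # approximately 314.159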
End of explanation
"""
from pruebas_1 import prueba_1_1
prueba_1_1(_, c)
"""
Explanation: Run the test below to find out whether you wrote the code correctly
End of explanation
"""
A = [2, 4, 8, 10]
A
"""
Explanation: Lists
Lists are a way to store several pieces of data in a single array. For example, we can have:
End of explanation
"""
A*2
"""
Explanation: But if we try to multiply these data by a number, it will not behave as expected.
End of explanation
"""
f = lambda x: x**2 + 1
"""
Explanation: Functions
We can define our own functions as follows:
End of explanation
"""
f(2)
"""
Explanation: This line of code is equivalent to defining a mathematical function as follows:
$$
f(x) = x^2 + 1
$$
So if we evaluate it at $x = 2$, we obviously get $5$ as the result.
End of explanation
"""
def g(x):
y = x**2 + 1
return y
"""
Explanation: The notation we just introduced is very useful for mathematical functions, but it forces us to think about definitions in a functional way, which is not always the right approach (especially in a language with an object-oriented programming paradigm).
This function can also be written as follows:
End of explanation
"""
g(2)
"""
Explanation: With the same results:
End of explanation
"""
def cel_a_faren(grados_cel):
    grados_faren = # Write the code to do the calculation here
return grados_faren
"""
Explanation: Exercise
Define a function that converts degrees Celsius to degrees Fahrenheit, according to the following formula:
$$
F = \frac{9}{5} C + 32
$$
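One possible solution sketch for the cel_a_faren stub:
def cel_a_faren(grados_cel):
    grados_faren = 9.0/5.0 * grados_cel + 32   # F = (9/5) C + 32
    return grados_faren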
End of explanation
"""
cel_a_faren(10)
cel_a_faren(50)
from pruebas_1 import prueba_1_2
prueba_1_2(cel_a_faren)
"""
Explanation: And to test it, try converting some values:
End of explanation
"""
for dato in A:
    print(dato*2)
"""
Explanation: Control loops
When we want to execute code several times we have several options; let's quickly explore the for loop.
python
for paso in pasos:
    ...
    codigo_a_ejecutar(paso)
    ...
In this case the code will be executed as many times as needed to use every element in pasos.
For example, we can apply the multiplication by 2 to each of the data items:
End of explanation
"""
B = []
for dato in A:
B.append(dato*2)
B
"""
Explanation: or append it to a new list:
End of explanation
"""
C = [] # Write the code to declare the first array inside the brackets
C
D = []
# Write the code for your for loop here
D
"""
Explanation: and many more things, but for now it is time to start with the exercises.
Exercise
Create a list C with the one-digit integers from 0 to 9, that is: $\left\{ x \in \mathbb{Z} \mid 0 \leq x < 10\right\}$
Create a second list D with the squares of each element of C
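One possible solution sketch (the exercise expects lists named C and D; using range is just one way to build C):
C = list(range(10))          # [0, 1, ..., 9]
D = []
for x in C:
    D.append(x**2)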
End of explanation
"""
from pruebas_1 import prueba_1_3
prueba_1_3(C, D)
"""
Explanation: Run the tests below
End of explanation
"""
f = lambda x: x**3 + 2*x**2 + 10*x - 20
f(1.0)
f(2.0)
"""
Explanation: Bisection method
To obtain a real root of the polynomial $f(x) = x^3 + 2 x^2 + 10 x - 20$ with the bisection method, we first have to define two points, one that gives a positive value when evaluated in the polynomial and one that gives a negative value. We will propose $x_1 = 1$ and $x_2 = 2$, and we will evaluate them to make sure they satisfy what we just asked for.
End of explanation
"""
x_1, x_2 = 1.0, 2.0
xm1 = (x_1 + x_2)/2.0
f(xm1)
"""
Explanation: Once we have two points that we know define the interval where a root lies, we can start iterating to find the midpoint.
$$x_m = \frac{x_1 + x_2}{2}$$
If we do this naively and evaluate it in the function, we can iterate by hand:
End of explanation
"""
x_1, x_2 = x_1, xm1
xm2 = (x_1 + x_2)/2.0
f(xm2)
"""
Explanation: From this we can see that the result is positive, which means the root has to lie between $x_1$ and $x_M$. Therefore, for our next iteration we will use the new interval $x_1 = 1$ and $x_2 = 1.5$, that is, we now assign the value of $x_M$ to $x_2$.
End of explanation
"""
def biseccion(x1, x2):
return (x1 + x2)/2.0
"""
Explanation: We could keep doing this until we reach the accuracy we want, but that would not be a very smart way to do it (we have a machine that loves repetitive tasks and we are not taking advantage of it?).
Instead, note that the formula does not change at all, so we can turn it into a function and forget about it.
End of explanation
"""
x_1, x_2 = x_1, xm1
xm2 = biseccion(x_1, x_2)
f(xm2)
"""
Explanation: If we run the code we had again, substituting this function, we will get exactly the same result:
End of explanation
"""
x_1, x_2 = 1.0, 2.0
xm1 = biseccion(x_1, x_2)
f(xm1)
if f(x_2)*f(xm1) > 0:
x_2 = xm1
else:
x_1 = xm1
xm2 = biseccion(x_1, x_2)
f(xm2)
if f(x_2)*f(xm2) > 0:
x_2 = xm2
else:
x_1 = xm2
xm3 = biseccion(x_1, x_2)
f(xm3)
"""
Explanation: And now what we have to do is add a condition so that $x_M$ replaces either $x_1$ or $x_2$ depending on the sign.
End of explanation
"""
from math import log
n = (log(1) - log(0.001))/(log(2))
n
"""
Explanation: Yes, I know it looks odd, but if you go through it carefully you will see that it works.
We are almost there; we just have to keep storing each approximation in an array, and we will compute the number of approximations needed to reach the required precision. Let's take $\varepsilon = 0.001$. The formula for the number of approximations needed is:
$$n = \frac{\ln{a} - \ln{\varepsilon}}{\ln{2}}$$
where $a$ is the size of the original interval.
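For this case, $a = 2 - 1 = 1$, so $n = \dfrac{\ln 1 - \ln 0.001}{\ln 2} = \dfrac{0 - (-6.908)}{0.693} \approx 9.97$, which we round up to $10$ iterations.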
End of explanation
"""
def metodo_biseccion(funcion, x1, x2, n):
xs = []
for i in range(n):
xs.append(biseccion(x1, x2))
if funcion(x2)*funcion(xs[-1]) > 0:
x2 = xs[-1]
else:
x1 = xs[-1]
return xs[-1]
metodo_biseccion(f, 1.0, 2.0, 10)
"""
Explanation: That is, $n = 10$.
End of explanation
"""
|
cmawer/pycon-2017-eda-tutorial
|
notebooks/0-Intro/0-Introduction-to-Jupyter-Notebooks.ipynb
|
mit
|
# in select mode, shift j/k (to select multiple cells at once)
# split cell with ctrl shift -
# merge with shift M
first = 1
second = 2
third = 3
"""
Explanation: Keyboard shortcuts
For help, ESC + h
End of explanation
"""
import numpy as np
np.random.choice()
"""
Explanation: Different heading levels
With text and $\LaTeX$ support.
You can also get monospaced fonts by indenting 4 spaces:
mkdir toc
cd toc
Wrap with triple-backticks and language:
bash
mkdir toc
cd toc
wget https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
End of explanation
"""
mylist = !ls
[x.split('_')[-1] for x in mylist]
%%bash
pwd
for i in *.ipynb
do
echo $i | awk -F . '{print $1}'
done
echo
echo "break"
echo
for i in *.ipynb
do
echo $i | awk -F - '{print $2}'
done
"""
Explanation: SQL
SELECT first_name,
last_name,
year_of_birth
FROM presidents
WHERE year_of_birth > 1800;
End of explanation
"""
def silly_function(xval):
"""Takes a value and returns the absolute value."""
xval_sq = xval ** 2.0
1 + 4
xval_abs = np.sqrt(xval_sq)
return xval_abs
silly_function?
silly_function??
silly_function()
import numpy as np
#
np.linspace??
#
np.linspace?
ex_dict = {}
# Indent/dedent/comment
for _ in range(5):
ex_dict["one"] = 1
ex_dict["two"] = 2
ex_dict["three"] = 3
ex_dict["four"] = 4
ex_dict
"""
Explanation: Tab; shift-tab; shift-tab-tab; shift-tab-tab-tab-tab; and more!
End of explanation
"""
ex_dict["one_better_name"] = 1.
ex_dict["two_better_name"] = 2.
ex_dict["three_better_name"] = 3.
ex_dict["four_better_name"] = 4.
"""
Explanation: Multicursor magic
End of explanation
"""
|
privong/pythonclub
|
sessions/03-matplotlib_aplpy/01 Matplotlib tutorial 2.ipynb
|
gpl-3.0
|
import numpy as np
import matplotlib.pyplot as plt
import pickle
# This is my custom object which holds the structure for my grains
from GrainStructure import Grain_Structure
"""
Explanation: Advanced matplotlib (or Problems I faced with matplotlib)
Alejandro Sazo Gómez<br />
Ingeniero Civil Informático, UTFSM.<br />
Estudiante Magíster en Cs. Ing. Informática, UTFSM
I) 3D Histograms
Let's suppose you have a dynamical system modeling a phenomenon and we perform an iterative numerical simulation until we reach a steady state. At each step of the iteration, say a step (or time) $t$, we can get the distribution plot of some quantity of interest.
In this case I worked with grain growth simulations... What is grain growth? OK, briefly: we have a system of microscopic grains in ceramics and metals, and under certain conditions of temperature and pressure some grains grow at the expense of other grains, which shrink and even disappear.
<img src=images/fig4.gif></img>
Source: http://www.tms.org/pubs/journals/JOM/0109/Holm-0109.html
The distribution of grain areas defines some material properties (conductivity, resistance...).
So, at each step of the numerical simulation, we can get the (relative) distribution of grain areas. From theoretical results and from experimental data on some real materials, we expect that a steady state of the distribution is reached independently of the number of grains...
Let's take a look at how a distribution plot should look.
1) Save and load data
The data of each simulation step has been saved using the pickle package. This package helps to serialize and deserialize objects (convert our objects to a byte stream and vice versa).
End of explanation
"""
# Example path and object, path must be created!
path = "grains_data/test/all.pkl"
GS = Grain_Structure()
# Save object to a file
with open(path, 'wb') as output:
pickle.dump(GS, output)
"""
Explanation: In order to save our structure we call the function pickle.dump(), which receives as arguments the object that we want to save, the output file and, optionally, a serialization protocol. For example,
End of explanation
"""
# The class of the stored object must be loaded before!
GS = pickle.load(open("grains_data/10/all.pkl", "rb"))
"""
Explanation: In order to recover the saved object we can use the function pickle.load(), which receives a file object as its argument. This file must be open! The function returns the desired object.
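A minimal equivalent using a context manager, so the file is closed automatically (same path as in the cell above):
with open("grains_data/10/all.pkl", "rb") as fh:
    GS = pickle.load(fh)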
End of explanation
"""
%matplotlib inline
# This is the data of the histogram recovered from pickle object
areas = GS.areas[GS.grains_ids]
relareas = len(areas) * areas
# Let's plot
plt.figure(figsize=(15,6))
fs = 20
bins = np.linspace(-2, 7, 20)
plt.title(r"Distribution (linear) of relative area at 10%", fontsize=fs)
# Here is the histogram. We wanted a distribution, so normed helps us make sure
# that the integral of the distribution is 1
plt.hist(relareas, bins, align='right', normed=True)
# A nice latex label, if it's written in latex, then it must be true...
plt.xlabel(r"$A_i / \overline{A}$", fontsize=fs)
plt.xlim([0, 8])
plt.show()
"""
Explanation: 2) Building histograms from loaded data
Now that we have our data we can make some histograms...
End of explanation
"""
%matplotlib inline
GS = pickle.load(open("grains_data/40/all.pkl", "rb"))
areas = GS.areas[GS.grains_ids]
relareas = len(areas) * areas
plt.figure(figsize=(15,6))
plt.title(r"Distribution (linear) of relative area at 40%", fontsize=fs)
plt.hist(relareas, bins, align='right', normed=True, color='r')
plt.xlabel(r"$A_i / \overline{A}$", fontsize=fs)
plt.xlim([0, 8])
plt.show()
"""
Explanation: This distribution corresponds to a numerical simulation after 10% of the grains were removed. The histogram has been normed so we can take it as a distribution. A plot of what happens in an advanced state (40% of grains removed) is shown here:
End of explanation
"""
# Simple package for 3D plotting, not the fastest but lightweight
# http://matplotlib.org/1.4.3/mpl_toolkits/mplot3d/api.html#axes3d
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Create a figure
fig = plt.figure(figsize=(12,12))
# Add a 3D subplot in figure, in general its methods are the same as a plt.*
ax = fig.add_subplot(111,projection='3d')
# Our bins for plot log(distribution)
binslog = np.linspace(-3, 1, 10)
# The data files are labeled from 10 to 40
percentages = np.arange(0, 50, 10)
# For each file
for i in percentages:
# Load data
GS = pickle.load(open("grains_data/"+str(i)+"/all.pkl", "rb"))
areas = GS.areas[GS.grains_ids]
relareas = np.log10(len(areas) * areas) # or areas.shape[0] * areas
    # Generate the histogram data with numpy instead of plotting directly
    # n is actually the histogram; we use the bins to plot the bars "manually"
n, bins_edges = np.histogram(relareas, binslog, density=True)
ax.bar(binslog[:-1], n, width=0.4, zs=i, zdir='y', color=(i/40.0, 0.0, 1.0), alpha=0.8)
# Fancy axis labels...
ax.set_xlabel(r'$\log_{10} A_i / \overline{A}$', fontsize=fs)
ax.set_ylabel(r'grains removed (\%)', fontsize=fs)
# I wanted to show the history in the y axis from back to front
ax.set_ylim(ax.get_ylim()[::-1])
ax.set_xlim([-3, 1.0])
plt.show()
"""
Explanation: 3) 3D plots for histograms
How does the distribution of areas evolve along the simulation? We could plot the histograms over time as a 3D plot.
A lot of data has been generated and we can load it as shown above.
End of explanation
"""
# The core module!
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.animation as manimation
# Some fancy settings for use with LaTeX (Remember, LaTeX = True)
plt.rc('font', **{'family': 'serif', 'serif': ['Times']})
plt.rc('text', usetex=True)
plt.rcParams["figure.figsize"] = [12.5, 8.]
# Declare the video writer. For a simple setting I use ffmpeg; other formats are available
FFMpegWriter = manimation.writers['ffmpeg']
metadata = dict(title='Pythonclub Demo', artist='John Doe', comment='')
writer = FFMpegWriter(metadata=metadata)
# Simulation data
t = 0
dt = 0.01
MAX_TIME = 2.0
xx = np.linspace(0, 1, 100)
# The figure and video
f, ax = plt.subplots()
plt.show(block=False)
with writer.saving(f, "videos/myvideo.mp4", 100):
while t < MAX_TIME:
ax.clear()
ax.grid()
yy = np.sin(xx + t)
ax.plot(xx, yy)
t = t + dt
plt.draw()
writer.grab_frame()
print "Finished"
"""
Explanation: II) Videos and animations
1) A simple video recorder
What if I want to record the simulation to analyze its behavior or debug logical errors in my code? Any simple plot can be seen as a frame of a video; we just need to plot it and have a writer that creates a video for us.
End of explanation
"""
# Classic inline and modules
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as manimation
from IPython.display import HTML
# Set the backend for inline display using html5
plt.rc('animation', html='html5')
"""
Explanation: 2) Animations inline
If we want to see animations in our Jupyter notebook, we must take another approach. We can easily embed animations using HTML5, supported since matplotlib 1.5.1.
End of explanation
"""
# Declare the initial figure just like before
f = plt.figure(figsize=(5,5))
# This is useful so our animation remains with fixed axis
ax = plt.axes(xlim=(-10, 10), ylim=(0, 1))
# My functions
num_functions = 2
def fun1(xx, i):
return np.exp(-(xx + i/2)**2.0)
def fun2(xx, i):
return np.exp(-(xx - i/2)**2.0)
# Where to plot!
lines = [plt.plot([], [], lw=2)[0] for i in range(num_functions)]
xx = np.linspace(-10, 10, 100)
# Function for clear plot
def init():
for line in lines:
line.set_data([], [])
return lines
# Update function, the real animation
def animate(i):
#for j, line in enumerate(lines):
lines[0].set_data(xx, fun1(xx, i))
lines[1].set_data(xx, fun2(xx, i))
return lines
"""
Explanation: The following code sets the figure and the necessary functions for an animation
End of explanation
"""
myanim = manimation.FuncAnimation(f, animate, init_func=init,
frames=100, interval=50, blit=True)
HTML(myanim.to_html5_video())
"""
Explanation: The animation part. We use the FuncAnimation method, which needs a figure to draw on, the update (animate) function, a function that draws the background (clears the frame), and the number of frames.
The interval argument indicates the time interval between frames in milliseconds.
The blit argument indicates that func and init_func must return an iterable of artists to be re-drawn, so that only the parts that have changed are drawn.
End of explanation
"""
|
rrbb014/data_science
|
fastcampus_dss/2016_05_17/0517_02__SymPy를 사용한 함수 미분.ipynb
|
mit
|
def f(x):
return 2*x
x = 10
y = f(x)
print(x, y)
"""
Explanation: Differentiating functions with SymPy
Why data analysis needs differentiation
It may not seem closely related, but data analysis actually requires differentiation. One of the goals of data analysis is to estimate the parameters or state variables of a probabilistic model. This task is fundamentally an optimization problem, finding the minimum or maximum of a function, and it requires derivatives obtained through differentiation or partial differentiation. Knowledge of function differentiation is therefore essential for understanding the internals of data analysis and machine learning.
The good news is that the level of calculus a data analyst needs is not very high. Usually it is enough to understand the partial derivatives of linear polynomials or exponential functions, and in most cases an optimization library, or libraries such as theano and tensorflow, will compute the derivative or its value for you, so you rarely have to derive one by hand.
Functions and variables
For anyone who has learned programming, the concepts of a variable and a function are not unfamiliar. A variable is a symbol that stands for an actual value, and a function is an expression built from such variables; once the variables take numerical values, the function value is determined by the expression.
Variables are usually written as lowercase letters such as $x$, $y$, $z$, and functions are written with their input variables in parentheses, such as $f(x)$, $g(x,y)$. Sometimes the result of a function is stored in another variable and used again.
$$ y = f(x) $$
$$ z = g(y) = g(f(x)) $$
A Python function implements this concept of a function directly.
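A tiny sketch of the composition $z = g(f(x))$ in plain Python (g here is just an illustrative, made-up function; f is the doubling function defined above):
def g(y):
    return y + 1
z = g(f(10))   # f(10) = 20, so z = 21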
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-0.9, 2.9, 100)
y = x**3 - 3*x**2 + x
plt.plot(x, y);
"""
Explanation: An inverse function swaps the input and output of a function and is written with the following notation.
$$ y = f(x), \;\;\; \rightarrow \;\;\; x = f^{-1}(y) $$
Prediction problems and functions
A prediction problem can be stated as finding a function $f$ that takes the independent variable, or feature, $x$ as input and produces a value as close as possible to the desired dependent variable, or target, $y$.
$$ y \approx \hat{y} = f(x) $$
Functions frequently used in data analysis
The types of functions used most often in data analysis are polynomial functions, exponential functions and log functions.
Polynomial functions
A polynomial function is a linear combination of power terms: a constant term $c_0$, a linear term $c_1x$, a quadratic term $c_2x^2$, $\cdots$ and so on. The following is the typical form of a uni-variate polynomial function.
$$ f(x) = c_0 + c_1 x + c_2 x^2 + \cdots + c_n x^n $$
Exponential and log functions
The exponential function with base equal to Euler's number $e$ is written as follows. You can think of it as the number $e$ raised to the power $x$.
$$ y = e^x $$
or
$$ y = \exp x $$
The inverse of the exponential function is the natural log function.
$$ y = \log x $$
If the base is not $e$, it can be rewritten as follows.
$$ y = a^x = e^{\log a \cdot x} $$
Graphs and slopes of functions
Graphs are often used to grasp the shape of a function intuitively. In Python we can build a graph with a matplotlib line plot.
However, matplotlib can only plot at concrete positions, so we create a vector that divides the $x$ range of the graph into small intervals, evaluate the function at those vector values, and draw the graph from them. If the spacing is too coarse the graph becomes inaccurate; if it is too fine we draw needlessly detailed pictures, increasing computation time and wasting resources such as memory.
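A small numerical sketch of the function families above with NumPy (the coefficient values are arbitrary illustration values):
import numpy as np
c = [4, 3, 2, 1]                 # coefficients, highest degree first for np.polyval
x = np.linspace(0.1, 2, 5)
poly = np.polyval(c, x)          # 4x^3 + 3x^2 + 2x + 1 evaluated element-wise
expo = np.exp(x)                 # e**x
loga = np.log(x)                 # natural log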
End of explanation
"""
x = np.linspace(-0.9, 2.9, 100)
y = x**3-3*x**2+x
plt.plot(x, y)
plt.plot(0, 0, 'ro'); plt.plot(x, x, 'r:');
plt.plot(1, -1, 'go'); plt.plot(x, (3*1**2-6*1+1)*(x-1)-1, 'g:');
"""
Explanation: As drawn above, the graph of a function often appears as a smooth curve. For this curve we can draw a tangent line that shares exactly one point with it, and the inclination of this tangent with respect to the horizontal is its slope.
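A quick numerical check of that slope using a finite difference (h is an arbitrary small step; the function is the same x**3 - 3*x**2 + x plotted above):
h = 1e-6
x0 = 1.0
fd_slope = ((x0+h)**3 - 3*(x0+h)**2 + (x0+h) - (x0**3 - 3*x0**2 + x0)) / h
# fd_slope is approximately -2, matching the coefficient 3*1**2 - 6*1 + 1 used for the green tangent in the plot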
End of explanation
"""
import sympy
sympy.init_printing(use_latex='mathjax') # needed to render equations as LaTeX in the Jupyter notebook
x = sympy.symbols('x')
x
type(x)
"""
Explanation: Differentiation
Differentiation is a kind of transformation that derives a new function from a function like these. The new function produced by differentiation represents the slope of the original function, and it is called the derivative of the original function. Strictly speaking, differentiation is defined using the more involved concepts of limits and convergence, but for optimization it is enough to know that it simply means the slope.
A derivative is written by attaching a prime superscript after the function symbol, or by placing $\dfrac{d}{dx}$, $\dfrac{\partial}{\partial x}$, etc. in front of it. It can also be written like a fraction: the variable with respect to which we differentiate goes in the denominator, and the symbol of the function being differentiated, or the variable holding its result, goes in the numerator.
For example, differentiating the function $y = f(x)$ gives the following.
$$ f'(x) = \dfrac{d}{dx}(f) = \dfrac{df}{dx} = \dfrac{d}{dx}(y) = \dfrac{dy}{dx} $$
Differentiation formulas
In practice, differentiation is the process of deriving the derivative from the original function by combining a handful of formulas explained next. For complicated functions you would need a formula book several pages long, but here we introduce only the most essential formulas. If you want to learn more differentiation formulas, see the following websites.
https://en.wikipedia.org/wiki/Derivative#Rules_of_computation
https://en.wikipedia.org/wiki/Differentiation_rules
Basic differentiation formulas (memorize these)
Constants
$$ \dfrac{d}{dx}(c) = 0 $$
$$ \dfrac{d}{dx}(cf) = c \cdot \dfrac{df}{dx} $$
Powers
$$ \dfrac{d}{dx}(x^n) = n x^{n-1} $$
Log
$$ \dfrac{d}{dx}(\log x) = \dfrac{1}{x} $$
Exponential
$$ \dfrac{d}{dx}(e^x) = e^x $$
Linear combination
$$ \dfrac{d}{dx}\left(c_1 f_1 + c_2 f_2 \right) = c_1 \dfrac{df_1}{dx} + c_2 \dfrac{df_2}{dx}$$
Differentiating the following function using these basic formulas,
$$ y = 1 + 2x + 3x^2 + 4\exp(x) + 5\log(x) $$
the answer is as follows.
$$ \dfrac{dy}{dx} = 2 + 6x + 4\exp(x) + \dfrac{5}{x} $$
Product rule
When a function has the form of a product of two functions, its derivative is obtained from the derivatives of the individual functions as follows.
$$ \dfrac{d}{dx}\left( f \cdot g \right) = \dfrac{df}{dx} \cdot g + f \cdot \dfrac{dg}{dx} $$
Using the product rule, differentiating the function
$$ f = x \cdot \exp(x) $$
gives the following derivative.
$$ \dfrac{df}{dx} = \exp(x) + x \exp(x) $$
Chain rule
The chain rule applies when the function to be differentiated is a nested form of two functions.
$$ f(x) = h(g(x)) $$
In that case the derivative is obtained as follows.
$$ \dfrac{df}{dx} = \dfrac{df}{dg} \cdot \dfrac{dg}{dx} $$
For example, the probability density function of the normal distribution essentially has the following form.
$$ f = \exp \dfrac{(x-\mu)^2}{\sigma^2} $$
Its derivative can be obtained as follows.
$$ f = \exp(z) \;,\;\;\;\; z = \dfrac{y^2}{\sigma^2} \;,\;\;\;\; y = x-\mu $$
$$ \dfrac{df}{dx} = \dfrac{df}{dz} \cdot \dfrac{dz}{dy} \cdot \dfrac{dy}{dx} $$
$$ \dfrac{df}{dz} = \exp(z) = \exp \dfrac{(x-\mu)^2}{\sigma^2} $$
$$ \dfrac{dz}{dy} = \dfrac{2y}{\sigma^2} = \dfrac{2(x-\mu)}{\sigma^2} $$
$$ \dfrac{dy}{dx} = 1 $$
$$ \dfrac{df}{dx} = \dfrac{2(x-\mu)}{\sigma^2} \exp \dfrac{(x-\mu)^2}{\sigma^2}$$
Derivative of a log function
Applying the chain rule to a log function gives the following rule.
$$ \dfrac{d}{dx} \log f(x) = \dfrac{f'(x)}{f(x)} $$
Partial differentiation
Even when a function is a multivariate function with two or more independent variables, the derivative, i.e. the slope, can only be taken with respect to one variable at a time. This is called partial differentiation, and it can therefore produce several derivatives from a single function.
The following is a simple example of partial differentiation.
$$ f(x,y) = x^2 + xy + y^2 $$
$$ f_x(x,y) = \dfrac{\partial f}{\partial x} = 2x + y $$
$$ f_y(x,y) = \dfrac{\partial f}{\partial y} = x + 2y $$
SymPy
SymPy is a Python package that supports symbolic operations. A symbolic operation is the same kind of operation a person performs with pencil and paper when differentiating or integrating; that is, differentiating $x^2$ produces the result in the form $2x$.
The Python packages theano and tensorflow, widely used for deep learning, also include this kind of symbolic capability in order to compute the gradient functions needed when training neural networks.
To use it, we must tell SymPy, via the symbols command, that the symbol $x$ is a mathematical symbol rather than an ordinary number or vector variable.
End of explanation
"""
f = x * sympy.exp(x)
f
"""
Explanation: Once the symbolic variable is defined, we use it to define a function as follows. Note that mathematical functions must use the SymPy-specific versions.
End of explanation
"""
sympy.diff(f)
sympy.simplify(sympy.diff(f)) # factor the result
"""
Explanation: Once the function is defined, it can be differentiated with the diff command. The simplify command can also tidy up the expression, for example by factoring.
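As a further sketch, the linear-combination example from the basic rules above can be checked the same way (reusing the symbol x defined earlier):
y = 1 + 2*x + 3*x**2 + 4*sympy.exp(x) + 5*sympy.log(x)
sympy.diff(y, x)   # gives 2 + 6*x + 4*exp(x) + 5/x (up to term ordering)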
End of explanation
"""
x, y = sympy.symbols('x y')
f = x**2 + x*y + y**2
f
sympy.diff(f, x)
sympy.diff(f, y)
"""
Explanation: When taking a partial derivative, you must specify which variable to differentiate with respect to.
End of explanation
"""
x, mu, sigma = sympy.symbols('x mu sigma')
f = sympy.exp((x-mu)**2)/sigma**2
f
sympy.diff(f, x)
sympy.simplify(sympy.diff(f, x))
"""
Explanation: When multiple symbols are involved, you also have to use partial differentiation.
End of explanation
"""
|
geoscixyz/computation
|
docs/case-studies/TDEM/Kevitsa_VTEM.ipynb
|
mit
|
from SimPEG import Mesh, EM, Utils, Maps
from matplotlib.colors import LogNorm
%pylab inline
import numpy as np
from scipy.constants import mu_0
from ipywidgets import interact, IntSlider
import pickle  # cPickle is Python 2 only; plain pickle works in Python 3
url = "https://storage.googleapis.com/simpeg/kevitsa_synthetic/"
files = ['dc_mesh.txt', 'dc_sigma.txt']
keys = ['mesh', 'sigma']
downloads = Utils.download([url + f for f in files], folder='./KevitsaDC', overwrite=True)
downloads = dict(zip(keys, downloads))
"""
Explanation: Kevitsa VTEM
End of explanation
"""
mesh3D = Mesh.TensorMesh.readUBC(downloads["mesh"])
sigmadc = mesh3D.readModelUBC(downloads["sigma"])
actind = ~np.isnan(sigmadc)
figsize(8, 4)
indy = 6
temp = 1./sigmadc.copy()
temp[~actind] = np.nan
out = mesh3D.plotSlice(temp, normal="Y", ind=indy, pcolorOpts={"norm": LogNorm(), "cmap":"jet_r"}, clim=(1e0, 1e3))
plt.ylim(-800, 250)
plt.xlim(5000, 11000)
plt.gca().set_aspect(2.)
plt.title(("y= %d m")%(mesh3D.vectorCCy[indy]))
cb = plt.colorbar(out[0], orientation="horizontal")
cb.set_label("Resistivity (Ohm-m)")
"""
Explanation: Model
This model is a synthetic based on geologic surfaces interpreted from seismic data over the Kevitsa deposit in Finland. A synthetic 3D conductivity model is generated, and the figure below shows a conductivity section across the mineralized zone of interest. The near-surface conductor on the left-hand side corresponds to a sedimentary unit, and the embedded conductor on the right-hand side indicates the conductive mineralized zone. Our interest here is in the conductive mineralized zone at depth.
End of explanation
"""
sig_halfspace = 2e-3
sig_target = 0.1
sig_air = 1e-8
times = np.logspace(-4, -2, 21)
def diffusion_distance(sigma, time):
return 1.28*np.sqrt(time/(sigma * mu_0))
print(
'min diffusion distance: {:.2e} m'.format(diffusion_distance(sig_halfspace, times.min()))
)
print(
'max diffusion distance: {:.2e} m'.format(diffusion_distance(sig_halfspace, times.max()))
)
# x-direction
csx = 20 # core mesh cell width in the x-direction
ncx = 20
npadx = 15 # number of x padding cells
# z-direction
csz = 20 # core mesh cell width in the z-direction
ncz = 40
npadz = 15 # number of z padding cells
# padding factor (expand cells to infinity)
pf = 1.3
# cell spacings in the x and z directions
hx = Utils.meshTensor([(csx, ncx), (csx, npadx, pf)])
hz = Utils.meshTensor([(csz, npadz, -pf), (csz, ncz), (csz, npadz, pf)])
# define a SimPEG mesh
mesh = Mesh.CylMesh([hx, 1, hz], x0 ="00C")
# X and Z limits we want to plot to. Try
xlim = np.r_[0., mesh.vectorCCx.max()]
zlim = np.r_[mesh.vectorCCz.max(), mesh.vectorCCz.min()]
fig, ax = plt.subplots(1,1)
mesh.plotGrid(ax=ax)
ax.set_title('Simulation Mesh')
ax.set_xlim(xlim)
ax.set_ylim(zlim)
print(
'The maximum diffusion distance (in background) is: {:.2e} m. '
'Does the mesh go sufficiently past that?'.format(
diffusion_distance(sig_halfspace, times.max())
)
)
ax.set_aspect("equal")
"""
Explanation: Question:
Can we see the mineralized zone at depth (~200 m) using airborne EM?
To answer this question, we simplify our model to a) a conductive layer and b) a conductive cylinder embedded at depth.
Mesh
We use a cylindrically symmetric mesh to simulate airborne time-domain EM with this simplified model. The code below shows how to design the mesh.
End of explanation
"""
# create a vector that has one entry for every cell center
sigma = sig_air*np.ones(mesh.nC) # start by defining the conductivity of the air everwhere
sigma[mesh.gridCC[:,2] < 0.] = sig_halfspace # assign halfspace cells below the earth
sigma_background = sigma.copy()
sigma_layer = sigma.copy()
radius = 150.
# indices of the sphere (where (x-x0)**2 + (z-z0)**2 <= R**2)
layer_ind = np.logical_and(mesh.gridCC[:,2]>-300, mesh.gridCC[:,2]<-200)
blk_ind = (mesh.gridCC[:,0] < radius) & layer_ind
sigma[blk_ind] = sig_target # assign the conductivity of the sphere
sigma_layer[layer_ind] = sig_target # assign the conductivity of the sphere
plt.set_cmap(plt.get_cmap('jet_r'))
# Plot a cross section of the conductivity model
fig, ax = plt.subplots(1,1)
out = mesh.plotImage(np.log10(1./sigma_layer), ax=ax, mirror=True, clim=(0, 3), grid=False)
cb = plt.colorbar(out[0], ticks=np.linspace(0,3,4), format="10$^{%.1f}$")
# plot formatting and titles
cb.set_label('Resistivity (Ohm-m)', fontsize=13)
ax.axis('equal')
ax.set_xlim([-120., 120.])
ax.set_ylim([-500., 0.])
ax.set_title('Layer')
# Plot a cross section of the conductivity model
fig, ax = plt.subplots(1,1)
out = mesh.plotImage(np.log10(1./sigma), ax=ax, mirror=True, clim=(0, 3), grid=False)
# plot formatting and titles
cb = plt.colorbar(out[0], ticks=np.linspace(0,3,4), format="10$^{%.1f}$")
# plot formatting and titles
cb.set_label('Resistivity (Ohm-m)', fontsize=13)
ax.axis('equal')
ax.set_xlim([-120., 120.])
ax.set_ylim([-500., 0.])
ax.set_title('Cylinder')
"""
Explanation: Next, we put the model on the mesh
End of explanation
"""
rx_loc = np.array([[0., 0., 41.]])
src_loc = np.array([[0., 0., 41.]])
offTime = 0.007307
peakTime = 0.006
a = 3.
dbdt_z = EM.TDEM.Rx.Point_dbdt(locs=rx_loc, times=times+offTime, orientation='z') # vertical db_dt
rxList = [dbdt_z] # list of receivers
srcList = [
EM.TDEM.Src.CircularLoop(
rxList, loc=src_loc, radius=13., orientation='z', waveform=EM.TDEM.Src.VTEMWaveform(offTime=offTime, peakTime=peakTime, a=3.)
)
]
# solve the problem at these times
timeSteps = [(peakTime/5, 5), ((offTime-peakTime)/5, 5), (1e-5, 10), (5e-5, 10), (1e-4, 10), (5e-4, 19)]
prob = EM.TDEM.Problem3D_b(mesh, timeSteps = timeSteps, sigmaMap=Maps.IdentityMap(mesh))
survey = EM.TDEM.Survey(srcList)
prob.pair(survey)
src = srcList[0]
rx = src.rxList[0]
wave = []
for time in prob.times:
wave.append(src.waveform.eval(time))
wave = np.hstack(wave)
plt.plot(prob.times, wave, 'k.-')
plt.plot(rx.times, np.zeros_like(rx.times), 'r.')
plt.ylim(-0.2, 1.2)
plt.grid(True)
plt.title('Current Waveform')
plt.xlabel('time (s)')
"""
Explanation: Forward Simulation
Define the source and receiver loop locations, and set the waveform parameters. Here we use a current loop source with a 13 m radius and measure db/dt in the vertical direction inside the loop. Both loops are located 41 m above the surface.
End of explanation
"""
d_background = survey.dpred(sigma_background)
d_layer = survey.dpred(sigma_layer)
d = survey.dpred(sigma)
area = 13**2*np.pi
figsize(6, 3)
plt.loglog((rx.times-offTime)*1e6, -d_layer*1e12/area, 'k', lw=2)
plt.loglog((rx.times-offTime)*1e6, -d*1e12/area , 'b', lw=2)
plt.loglog((rx.times-offTime)*1e6, -d_background*1e12/area, 'k--', lw=1)
plt.xlabel("Time (micro-s)")
plt.ylabel("Voltage (pV/A-m$^4$)")
plt.legend(("Layer", "Cylinder","Half-space"), loc=1, fontsize = 10)
plt.ylim(1e-4, 1e1)
plt.grid(True)
"""
Explanation: Compute Predicted Data
We compute predicted data for three different models: a) background (half-space), b) layer, and c) cylinder.
End of explanation
"""
f_layer = prob.fields(sigma_layer)
plt.set_cmap(plt.get_cmap('viridis'))
def vizfield_layer(itime):
fig = plt.figure(figsize = (7*0.8,5*0.8))
ax = plt.subplot(111)
cb = plt.colorbar(mesh.plotImage(mesh.aveE2CC*f_layer[src, 'e', itime], ax=ax, mirror=True)[0])
# plot formatting and titles
cb.set_label('e$_{y}$ (V/m)', fontsize=13)
ax.axis('equal')
ax.set_xlim([-300., 300.])
ax.set_ylim([-500., 0.])
ax.set_title(('|e$_{y}$| at %d micro-s')%(prob.times[itime]*1e6))
plt.show()
interact(vizfield_layer, itime=IntSlider(min=0, max=len(prob.times)-1, step=1, value=11))
"""
Explanation: Question:
What were your thoughts on the plot above? Can we see the conductive mineralized zone?
The signals from the Layer and Cylinder models differ significantly; can you explain why?
The underlying physics of the measured voltage is governed by Faraday's law:
$$ \nabla \times \vec{e} = -\frac{\partial \vec{b}}{\partial t}$$
By showing how the electric field propagates in the subsurface, we illustrate why the layer and cylinder models show such a significant difference.
Electric field in the layer model
End of explanation
"""
f = prob.fields(sigma)
def vizfield_cylinder(itime):
fig = plt.figure(figsize = (7*0.8,5*0.8))
ax = plt.subplot(111)
cb = plt.colorbar(mesh.plotImage(mesh.aveE2CC*f[src, 'e', itime], ax=ax, mirror=True)[0])
# plot formatting and titles
cb.set_label('e$_{y}$ (V/m)', fontsize=13)
# ax.axis('equal')
ax.set_xlim([-300., 300.])
ax.set_ylim([-500., 0.])
ax.set_title(('|e$_{y}$| at %d micro-s')%(prob.times[itime]*1e6))
plt.tight_layout()
plt.show()
interact(vizfield_cylinder, itime=IntSlider(min=0, max=len(prob.times)-1, step=1, value=11))
"""
Explanation: Electric Field in the Cylinder model
End of explanation
"""
|
robertoalotufo/ia898
|
master/tutorial_pehist_1.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import sys,os
ia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
f = mpimg.imread('../data/cameraman.tif')
ia.adshow(f, 'f: imagem original')
plt.plot(ia.histogram(f)),plt.title('h: histograma original');
"""
Explanation: Table of Contents
1  Histogram equalization using the concept of a desired uniform distribution
1.1  Original image
1.2  Illustration with a numerical case
1.3  References
# Histogram equalization using the concept of a desired uniform distribution
Another way to formulate the problem of equalizing the distribution of the pixels
of an image is to suppose that you have a set of pixels, of the same size as the image,
whose pixel distribution is uniform.
Thinking of a simplified model, imagine that you want to reproduce
a photograph as a mosaic built from tiles, where the set of tiles to
be used is fixed. In this equalization case, there is the same number of tiles
for each gray level.
What is the procedure for assembling the mosaic and knowing exactly where
each tile goes? There is a simple and intuitive scheme:
1. Check the gray level of each point in the image and note its position
in the mosaic.
2. Sort all the gray levels of the photograph.
3. Sort all the gray levels of the tiles.
4. Take the first, darkest group of tiles and place them at the positions
of the darkest gray levels of the photograph.
5. Continue the previous step until all the tiles are used.
We can do the same procedure computationally, much more
efficiently, but using the same principle, as sketched right after this introduction.
We use array indexing here, always working with the array
in linearized form. ``f.ravel()`` is the linearized view of the image.
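A compact sketch of the whole procedure that the following cells walk through step by step (it assumes the numpy import and the image f loaded above):
def equalize_uniform(f):
    # hand out a uniform ramp of gray levels to the pixel positions sorted by intensity
    g = np.empty(f.size, np.uint8)
    g[np.argsort(f.ravel())] = np.linspace(0, 255, f.size).astype(np.uint8)
    return g.reshape(f.shape)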
## Original image
End of explanation
"""
fsi = np.argsort(f.ravel())
fs = (f.ravel()[fsi]).reshape(f.shape)
ia.adshow(fs, 'fs: imagem com pixels ordenados')
ia.adshow(ia.normalize(fsi.reshape(f.shape)),'fsi:endereço na imagem original')
"""
Explanation: Sort the pixels of the original image while keeping track of their addresses (positions, stored in fsi).
This is obtained with the argsort function. See how argsort is used: tutorial_numpy_argsort argsort.
End of explanation
"""
gs = np.linspace(0,255,f.size).astype(np.uint8)
ia.adshow(gs.reshape(f.shape), 'gs: distribuição uniforme, pixels ordenados')
"""
Explanation: Create an image with the same dimensions, but with the pixels sorted and with a
uniform distribution of gray levels. We use the linspace function and
then reshape the image to make it two-dimensional. These are the available
tiles:
End of explanation
"""
nb=ia.nbshow(3)
nb.nbshow(fs,'fs: imagem original pixels ordenados')
nb.nbshow(ia.normalize(fsi.reshape(f.shape)),'fsi:endereço na imagem original')
nb.nbshow(gs.reshape(f.shape),'gs: distribuição uniforme desejada, pixels ordenados')
nb.nbshow()
"""
Explanation: We now have the sorted original image and the uniformly distributed
gray levels, also sorted:
End of explanation
"""
g = np.empty( (f.size,), np.uint8)
g[fsi] = gs
"""
Explanation: Since we know the original address of these pixels, as indicated by the
argsort done above, we can now assign the sorted uniform gray levels
back to the original pixel positions.
End of explanation
"""
ia.adshow(g.reshape(f.shape),'g[fsi] = gs, imagem equalizada')
"""
Explanation: Done, the mosaic is assembled and the image g is equalized.
End of explanation
"""
h = ia.histogram(g)
plt.bar( np.arange(h.size), h)
plt.title('histograma de g');
"""
Explanation: To show its histogram and confirm the equalization:
End of explanation
"""
f = np.array([1, 7, 3, 0, 2, 2, 4, 3, 2, 0, 5, 3, 7, 7, 7, 5])
h = ia.histogram(f)
fsi = np.argsort(f)
fs = f[fsi]
print('imagem original f :',f)
print('indices para ordenar fsi:',fsi)
print('f c/pixels ordenados fs :',fs)
print('histogram h: h :',h)
"""
Explanation: Illustration with a numerical case
To make this easier to understand, the same procedure above is repeated below, but
with a one-dimensional numerical image of 16 pixels with values between 0 and 7.
Making the analogy with building the mosaic, there are 2 tiles of each gray level
between 0 and 7 to build the mosaic with 16 tiles.
Computing the sorted image of f and the pixel addresses fsi that sort f:
End of explanation
"""
gs = np.linspace(0,7,f.size).round(0).astype(int)
print('ladrilhos ordenados, gs :', gs)
"""
Explanation: Tiles available for the mosaic: 2 tiles of each gray level:
End of explanation
"""
print('ladrilhos disponíveis gs:',gs)
print('endereço para colocar cada ladrilho fsi:',fsi)
g = np.empty( (f.size,), np.uint8)
g[fsi] = gs
print('mosaico montado g[fsi] = gs:',g)
"""
Explanation: Mapping the tiles (gs) onto the final mosaic, using the pixel addresses fsi:
End of explanation
"""
print('g[fsi]= gs')
for i in np.arange(g.size):
print('g[%d] = %d' % (fsi[i],gs[i]))
"""
Explanation: To understand how g[fsi] = gs is computed, look at the element-by-element assignments:
End of explanation
"""
print('imagem usando os ladrilhos g:',g)
print('imagem original: f:',f)
"""
Explanation: Original image and the histogram-equalized image (mosaic):
End of explanation
"""
print('histograma de g:', ia.histogram(g))
"""
Explanation: Histogram of the equalized image (mosaic):
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/cnrm-cerfacs/cmip6/models/sandbox-2/toplevel.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-2', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
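For illustration only (a hypothetical entry, not a statement about any particular model), the cell for this property would be completed by passing one of the listed valid choices to DOC.set_value, e.g.:
```python
DOC.set_value("C")  # hypothetical: pick whichever of the listed valid choices applies
```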
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
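As a hypothetical illustration, this boolean property is recorded in its cell with a Python boolean:
```python
DOC.set_value(True)  # or False, depending on the model being documented
```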
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
QuantScientist/Deep-Learning-Boot-Camp
|
day03/1.1 Introduction - Deep Learning and ANN.ipynb
|
mit
|
# Import the required packages
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import scipy
# Display plots in notebook
%matplotlib inline
# Define plot's default figure size
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
#read the datasets
train = pd.read_csv("data/intro_to_ann.csv")
X, y = np.array(train.iloc[:,0:2]), np.array(train.iloc[:,2])
X.shape
y.shape
#Let's plot the dataset and see how it is
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.BuGn)
"""
Explanation: Introduction to Deep Learning
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.
These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics.
Deep learning is one of the leading tools in data analysis these days and one of the most common frameworks for deep learning is Keras.
This tutorial will provide an introduction to deep learning using Keras, with practical code examples.
This Section will cover:
Getting a conceptual understanding of multi-layer neural networks
Training neural networks for image classification
Implementing the powerful backpropagation algorithm
Debugging neural network implementations
Building Blocks: Artificial Neural Networks (ANN)
In machine learning and cognitive science, an artificial neural network (ANN) is a network inspired by biological neural networks. ANNs are used to estimate or approximate functions that can depend on a large number of inputs that are generally unknown.
An ANN is built from nodes (neurons) stacked in layers between the feature vector and the target vector.
A node in a neural network is built from weights and an activation function.
An early version of an ANN built from a single node was called the Perceptron.
<img src="imgs/Perceptron.png" width="45%">
The Perceptron is an algorithm for supervised learning of binary classifiers: functions that can decide whether an input (represented by a vector of numbers) belongs to one class or another.
Much like logistic regression, the weights in a neural net are multiplied by the input vector, summed up, and fed into the activation function.
A Perceptron Network can be designed to have multiple layers, leading to the Multi-Layer Perceptron (aka MLP)
<img src="imgs/MLP.png" width="45%">
Single Layer Neural Network
<img src="imgs/single_layer.png" width="65%" />
(Source: Python Machine Learning, S. Raschka)
Weights Update Rule
We use a gradient descent optimization algorithm to learn the Weights Coefficients of the model.
<br><br>
In every epoch (pass over the training set), we update the weight vector $w$ using the following update rule:
$$
w = w + \Delta w, \text{where } \Delta w = - \eta \nabla J(w)
$$
<br><br>
In other words, we computed the gradient based on the whole training set and updated the weights of the model by taking a step into the opposite direction of the gradient $ \nabla J(w)$.
In order to find the optimal weights of the model, we optimize an objective function, e.g. the Sum of Squared Errors (SSE) cost function $J(w)$.
Furthermore, we multiply the gradient by a factor, the learning rate $\eta$ , which we choose carefully to balance the speed of learning against the risk of overshooting the global minimum of the cost function.
Gradient Descent
In gradient descent optimization, we update all the weights simultaneously after each epoch, and we define the partial derivative for each weight $w_j$ in the weight vector $w$ as follows:
$$
\frac{\partial}{\partial w_j} J(w) = - \sum_{i} ( y^{(i)} - a^{(i)} ) x^{(i)}_j
$$
Note: The superscript $(i)$ refers to the ith sample. The subscript $j$ refers to the jth dimension/feature
Here $y^{(i)}$ is the target class label of a particular sample $x^{(i)}$ , and $a^{(i)}$ is the activation of the neuron
(which is a linear function in the special case of Perceptron).
We define the activation function $\phi(\cdot)$ as follows:
$$
\phi(z) = z = a = \sum_{j} w_j x_j = \mathbf{w}^T \mathbf{x}
$$
Binary Classification
While we used the activation $\phi(z)$ to compute the gradient update, we may use a threshold function (Heaviside function) to squash the continuous-valued output into binary class labels for prediction:
$$
\hat{y} =
\begin{cases}
1 & \text{if } \phi(z) \geq 0 \\
0 & \text{otherwise}
\end{cases}
$$
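As a minimal NumPy sketch of the update rule and threshold described above (the names X, y, w and eta are illustrative; the bias term is omitted for brevity):
```python
import numpy as np

def gd_step(w, X, y, eta=0.01):
    # batch gradient descent: Delta w = eta * sum_i (y_i - a_i) x_i
    a = X.dot(w)                      # linear activation a = w^T x for every sample
    return w + eta * X.T.dot(y - a)

def predict(w, X):
    # Heaviside threshold on phi(z) to obtain binary class labels
    return np.where(X.dot(w) >= 0.0, 1, 0)
```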
Building Neural Nets from scratch
Idea:
We will build the neural networks from first principles.
We will create a very simple model and understand how it works. We will also be implementing the backpropagation algorithm.
Please note that this code is not optimized and not to be used in production.
This is for instructive purposes - for us to understand how an ANN works.
Libraries like theano have highly optimized code.
Perceptron and Adaline Models
Take a look at this notebook : <a href="extra/1.1.1 Perceptron and Adaline.ipynb" target="_blank_"> Perceptron and Adaline </a>
If you want a sneak peek of alternate (production ready) implementation of Perceptron for instance try:
python
from sklearn.linear_model import Perceptron
Introducing the multi-layer neural network architecture
<img src="imgs/multi-layers-1.png" width="50%" />
(Source: Python Machine Learning, S. Raschka)
Now we will see how to connect multiple single neurons to a multi-layer feedforward neural network; this special type of network is also called a multi-layer perceptron (MLP).
The figure shows the concept of an MLP consisting of three layers: one input layer, one hidden layer, and one output layer.
The units in the hidden layer are fully connected to the input layer, and the output layer is fully connected to the hidden layer, respectively.
If such a network has more than one hidden layer, we also call it a deep artificial neural network.
Notation
We denote the ith activation unit in the lth layer as $a_i^{(l)}$, and the activation units $a_0^{(1)}$ and $a_0^{(2)}$ are the bias units, respectively, which we set equal to $1$.
<br><br>
The activation of the units in the input layer is just its input plus the bias unit:
$$
\mathbf{a}^{(1)} = [a_0^{(1)}, a_1^{(1)}, \ldots, a_m^{(1)}]^T = [1, x_1^{(i)}, \ldots, x_m^{(i)}]^T
$$
<br><br>
Note: $x_j^{(i)}$ refers to the jth feature/dimension of the ith sample
Notes on Notation (usually) Adopted
The terminology around the indices (subscripts and superscripts) may look a little bit confusing at first.
<br><br>
You may wonder why we wrote $w_{j,k}^{(l)}$ and not $w_{k,j}^{(l)}$ to refer to
the weight coefficient that connects the kth unit in layer $l$ to the jth unit in layer $l+1$.
<br><br>
What may seem a little bit quirky at first will make much more sense later when we vectorize the neural network representation.
<br><br>
For example, we will summarize the weights that connect the input and hidden layer by a matrix
$$ W^{(1)} \in \mathbb{R}^{h \times [m+1]}$$
where $h$ is the number of hidden units and $m + 1$ is the number of input units plus the bias unit.
<img src="imgs/multi-layers-2.png" width="50%" />
(Source: Python Machine Learning, S. Raschka)
Forward Propagation
Starting at the input layer, we forward propagate the patterns of the training data through the network to generate an output.
Based on the network's output, we calculate the error that we want to minimize using a cost function that we will describe later.
We backpropagate the error, find its derivative with respect to each weight in the network, and update the model.
Sigmoid Activation
<img src="imgs/logistic_function.png" width="50%" />
(Source: Python Machine Learning, S. Raschka)
<img src="imgs/fwd_step.png" width="50%" />
(Source: Python Machine Learning, S. Raschka)
<img src="imgs/fwd_step_net.png" width="50%" />
(Source: Python Machine Learning, S. Raschka)
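A compact, vectorized sketch of one forward-propagation step (shapes follow the $W^{(1)} \in \mathbb{R}^{h \times [m+1]}$ convention above; these names are illustrative and not the class we build below):
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    # x: (m,) feature vector, W1: (h, m+1), W2: (n_out, h+1)
    a1 = np.concatenate(([1.0], x))                     # input activations plus bias unit
    a2 = np.concatenate(([1.0], sigmoid(W1.dot(a1))))   # hidden activations plus bias unit
    return sigmoid(W2.dot(a2))                          # output activations
```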
Backward Propagation
The weights of each neuron are learned by gradient descent, where each neuron's error is derived with respect to its weights.
<img src="imgs/bkwd_step_net.png" width="50%" />
(Source: Python Machine Learning, S. Raschka)
Optimization is done for each layer with respect to the previous layer in a technique known as BackPropagation.
<img src="imgs/backprop.png" width="50%">
(The following code is inspired from these terrific notebooks)
End of explanation
"""
import random
random.seed(123)
# calculate a random number where: a <= rand < b
def rand(a, b):
return (b-a)*random.random() + a
"""
Explanation: Start Building our MLP building blocks
Note: This process will eventually result in our own Neural Networks class
A look at the details
<img src="imgs/mlp_details.png" width="65%" />
End of explanation
"""
# Make a matrix
def makeMatrix(I, J, fill=0.0):
    return np.full([I, J], fill)
"""
Explanation: Function to generate a random number, given two numbers
Where will it be used? When we initialize the neural network, the weights have to be randomly assigned.
End of explanation
"""
# our sigmoid function
def sigmoid(x):
#return math.tanh(x)
return 1/(1+np.exp(-x))
"""
Explanation: Define our activation function. Let's use sigmoid function
End of explanation
"""
# derivative of our sigmoid function, in terms of the output (i.e. y)
def dsigmoid(y):
return y - y**2
"""
Explanation: Derivative of our activation function.
Note: We need this when we run the backpropagation algorithm
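A quick numerical sanity check of this derivative (using the sigmoid and dsigmoid functions defined in this notebook):
```python
z = 0.3
y_out = sigmoid(z)
numeric = (sigmoid(z + 1e-5) - sigmoid(z - 1e-5)) / 2e-5
print(dsigmoid(y_out), numeric)  # the analytic and numeric values should agree closely
```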
End of explanation
"""
# Putting all together
class MLP:
def __init__(self, ni, nh, no):
# number of input, hidden, and output nodes
self.ni = ni + 1 # +1 for bias node
self.nh = nh
self.no = no
# activations for nodes
self.ai = [1.0]*self.ni
self.ah = [1.0]*self.nh
self.ao = [1.0]*self.no
# create weights
self.wi = makeMatrix(self.ni, self.nh)
self.wo = makeMatrix(self.nh, self.no)
        # set them to random values
for i in range(self.ni):
for j in range(self.nh):
self.wi[i][j] = rand(-0.2, 0.2)
for j in range(self.nh):
for k in range(self.no):
self.wo[j][k] = rand(-2.0, 2.0)
# last change in weights for momentum
self.ci = makeMatrix(self.ni, self.nh)
self.co = makeMatrix(self.nh, self.no)
def backPropagate(self, targets, N, M):
if len(targets) != self.no:
print(targets)
raise ValueError('wrong number of target values')
# calculate error terms for output
output_deltas = np.zeros(self.no)
for k in range(self.no):
error = targets[k]-self.ao[k]
output_deltas[k] = dsigmoid(self.ao[k]) * error
# calculate error terms for hidden
hidden_deltas = np.zeros(self.nh)
for j in range(self.nh):
error = 0.0
for k in range(self.no):
error += output_deltas[k]*self.wo[j][k]
hidden_deltas[j] = dsigmoid(self.ah[j]) * error
# update output weights
for j in range(self.nh):
for k in range(self.no):
change = output_deltas[k] * self.ah[j]
self.wo[j][k] += N*change + M*self.co[j][k]
self.co[j][k] = change
# update input weights
for i in range(self.ni):
for j in range(self.nh):
change = hidden_deltas[j]*self.ai[i]
self.wi[i][j] += N*change + M*self.ci[i][j]
self.ci[i][j] = change
# calculate error
error = 0.0
for k in range(len(targets)):
error += 0.5*(targets[k]-self.ao[k])**2
return error
def test(self, patterns):
self.predict = np.empty([len(patterns), self.no])
for i, p in enumerate(patterns):
self.predict[i] = self.activate(p)
#self.predict[i] = self.activate(p[0])
def activate(self, inputs):
if len(inputs) != self.ni-1:
print(inputs)
raise ValueError('wrong number of inputs')
# input activations
for i in range(self.ni-1):
self.ai[i] = inputs[i]
# hidden activations
for j in range(self.nh):
sum_h = 0.0
for i in range(self.ni):
sum_h += self.ai[i] * self.wi[i][j]
self.ah[j] = sigmoid(sum_h)
# output activations
for k in range(self.no):
sum_o = 0.0
for j in range(self.nh):
sum_o += self.ah[j] * self.wo[j][k]
self.ao[k] = sigmoid(sum_o)
return self.ao[:]
def train(self, patterns, iterations=1000, N=0.5, M=0.1):
# N: learning rate
# M: momentum factor
patterns = list(patterns)
for i in range(iterations):
error = 0.0
for p in patterns:
inputs = p[0]
targets = p[1]
self.activate(inputs)
error += self.backPropagate([targets], N, M)
if i % 5 == 0:
                print('error in iteration %d : %-.5f' % (i,error))
print('Final training error: %-.5f' % error)
"""
Explanation: Our neural networks class
When we first create a neural network architecture, we need to know the number of inputs, the number of hidden nodes, and the number of outputs.
The weights have to be randomly initialized.
```python
class MLP:
    def __init__(self, ni, nh, no):
# number of input, hidden, and output nodes
self.ni = ni + 1 # +1 for bias node
self.nh = nh
self.no = no
# activations for nodes
self.ai = [1.0]*self.ni
self.ah = [1.0]*self.nh
self.ao = [1.0]*self.no
# create weights
self.wi = makeMatrix(self.ni, self.nh)
self.wo = makeMatrix(self.nh, self.no)
        # set them to random values
self.wi = rand(-0.2, 0.2, size=self.wi.shape)
self.wo = rand(-2.0, 2.0, size=self.wo.shape)
# last change in weights for momentum
self.ci = makeMatrix(self.ni, self.nh)
self.co = makeMatrix(self.nh, self.no)
```
Activation Function
```python
def activate(self, inputs):
if len(inputs) != self.ni-1:
print(inputs)
raise ValueError('wrong number of inputs')
# input activations
for i in range(self.ni-1):
self.ai[i] = inputs[i]
# hidden activations
for j in range(self.nh):
sum_h = 0.0
for i in range(self.ni):
sum_h += self.ai[i] * self.wi[i][j]
self.ah[j] = sigmoid(sum_h)
# output activations
for k in range(self.no):
sum_o = 0.0
for j in range(self.nh):
sum_o += self.ah[j] * self.wo[j][k]
self.ao[k] = sigmoid(sum_o)
return self.ao[:]
```
BackPropagation
```python
def backPropagate(self, targets, N, M):
if len(targets) != self.no:
print(targets)
raise ValueError('wrong number of target values')
# calculate error terms for output
output_deltas = np.zeros(self.no)
for k in range(self.no):
error = targets[k]-self.ao[k]
output_deltas[k] = dsigmoid(self.ao[k]) * error
# calculate error terms for hidden
hidden_deltas = np.zeros(self.nh)
for j in range(self.nh):
error = 0.0
for k in range(self.no):
error += output_deltas[k]*self.wo[j][k]
hidden_deltas[j] = dsigmoid(self.ah[j]) * error
# update output weights
for j in range(self.nh):
for k in range(self.no):
change = output_deltas[k] * self.ah[j]
            self.wo[j][k] += N*change + M*self.co[j][k]
self.co[j][k] = change
# update input weights
for i in range(self.ni):
for j in range(self.nh):
change = hidden_deltas[j]*self.ai[i]
            self.wi[i][j] += N*change + M*self.ci[i][j]
self.ci[i][j] = change
# calculate error
error = 0.0
for k in range(len(targets)):
error += 0.5*(targets[k]-self.ao[k])**2
return error
```
End of explanation
"""
# create a network with two inputs, one hidden, and one output nodes
ann = MLP(2, 1, 1)
%timeit -n 1 -r 1 ann.train(zip(X,y), iterations=2)
"""
Explanation: Running the model on our dataset
End of explanation
"""
%timeit -n 1 -r 1 ann.test(X)
prediction = pd.DataFrame(data=np.array([y, np.ravel(ann.predict)]).T,
columns=["actual", "prediction"])
prediction.head()
np.min(prediction.prediction)
"""
Explanation: Predicting on training dataset and measuring in-sample accuracy
End of explanation
"""
# Helper function to plot a decision boundary.
# This generates the contour plot to show the decision boundary visually
def plot_decision_boundary(nn_model):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
nn_model.test(np.c_[xx.ravel(), yy.ravel()])
Z = nn_model.predict
Z[Z>=0.5] = 1
Z[Z<0.5] = 0
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], s=40, c=y, cmap=plt.cm.BuGn)
plot_decision_boundary(ann)
plt.title("Our initial model")
"""
Explanation: Let's visualize and observe the results
End of explanation
"""
# Put your code here
#(or load the solution if you wanna cheat :-)
# %load solutions/sol_111.py
"""
Explanation: Exercise:
Create a neural network with 10 hidden nodes using the code above.
What's the impact on accuracy?
End of explanation
"""
#Put your code here
# %load solutions/sol_112.py
"""
Explanation: Exercise:
Train the neural network with a larger number of epochs.
What's the impact on accuracy?
End of explanation
"""
|
machinelearningnanodegree/stanford-cs231
|
solutions/vijendra/assignment1/two_layer_net.ipynb
|
mit
|
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
"""
Explanation: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
"""
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
"""
Explanation: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
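For example, a quick (optional) way to see what the toy model contains is to print the parameter shapes:
```python
for name, value in sorted(net.params.items()):
    print name, value.shape  # e.g. W1, b1, W2, b2 and their numpy shapes
```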
End of explanation
"""
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
"""
Explanation: Forward pass: compute scores
Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
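As a rough sketch (assuming the standard affine - ReLU - affine architecture of this two-layer net, with W1, b1, W2 and b2 taken from self.params), the score computation looks like:
```python
hidden = np.maximum(0, X.dot(W1) + b1)  # first affine layer followed by a ReLU
scores = hidden.dot(W2) + b2            # second affine layer produces the class scores
```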
End of explanation
"""
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
"""
Explanation: Forward pass: compute loss
In the same function, implement the second part that computes the data and regularization loss.
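A hedged sketch of this second part, assuming a softmax data loss plus an L2 regularization penalty (the exact scaling of the regularization term may differ from the assignment code):
```python
num_train = X.shape[0]
shifted = scores - np.max(scores, axis=1, keepdims=True)            # numerical stability
probs = np.exp(shifted) / np.sum(np.exp(shifted), axis=1, keepdims=True)
data_loss = -np.mean(np.log(probs[np.arange(num_train), y]))        # softmax cross-entropy
reg_loss = 0.5 * reg * (np.sum(W1 * W1) + np.sum(W2 * W2))          # L2 penalty on the weights
loss = data_loss + reg_loss
```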
End of explanation
"""
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
"""
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
End of explanation
"""
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=100, verbose=False)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
"""
Explanation: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
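The heart of the training loop is a plain SGD step over the parameter dictionary (a sketch; params and grads are the dictionaries used by this class):
```python
for p in self.params:
    self.params[p] -= learning_rate * grads[p]  # step each parameter against its gradient
```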
End of explanation
"""
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
"""
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
End of explanation
"""
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
"""
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
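A sketch of the two ideas just mentioned (the names are illustrative; the actual solver keeps its own momentum state per parameter):
```python
# SGD with momentum: keep a running velocity for each parameter
v[p] = momentum * v[p] - learning_rate * grads[p]
params[p] += v[p]

# exponential schedule: decay the learning rate once per epoch
learning_rate *= learning_rate_decay
```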
End of explanation
"""
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
"""
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
"""
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
learning_rates = [1e-4, 2e-4]
regularization_strengths = [1,1e4]
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
net = TwoLayerNet(input_size,hidden_size,num_classes)
net.train(X_train, y_train,X_val,y_val, learning_rate= learning_rate, reg=regularization_strength,
num_iters=1500)
y_train_predict = net.predict(X_train)
y_val_predict = net.predict(X_val)
accuracy_train = np.mean(y_train_predict == y_train)
accuracy_validation = np.mean(y_val_predict == y_val)
results[(learning_rate,regularization_strength)] = (accuracy_train,accuracy_validation)
if accuracy_validation > best_val:
best_val = accuracy_validation
best_net = net
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# visualize the weights of the best network
show_net_weights(best_net)
"""
Explanation: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).
End of explanation
"""
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
"""
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%.
End of explanation
"""
|
cs207-project/TimeSeries
|
docs/vptree_demo.ipynb
|
mit
|
def find_similar_pt():
rn = lambda: random.randint(0, 10000)
aset = [(rn(), rn()) for i in range(40000)]
q = (rn(), rn())
rad = 9990
distance = lambda a, b: math.sqrt(sum([((x-y)**2) for x, y in zip(a, b)]))
s = time.time()
print("creating vptree...")
root = VpNode(aset, distance=distance)
print("vptree created", time.time() - s)
s = time.time()
print("searching...")
se = VpSearch(root, q, rad, 30)
#out = se.search()
out = se.knn()
for k, v in sorted(se.stat.items()):
print(k, v)
print("number of resultes: %s" % len(out))
print("vptree search done, searching time", time.time() - s)
projx = lambda x: map(lambda y: y[0], x)
projy = lambda x: map(lambda y: y[1], x)
fig, ax = plt.subplots(nrows=1, ncols=2)
ax[0].scatter(list(projx(aset)), list(projy(aset)), s = 20, alpha=0.1)
ax[0].scatter([q[0]], [q[1]], s = 40, color='g')
ax[0].scatter(list(projx(out)), list(projy(out)), s = 10, color='r')
ax[0].annotate("query", xy=q)
ax[1].scatter([q[0]], [q[1]], s = 40, color='g')
ax[1].scatter(list(projx(out)), list(projy(out)), s = 10, color='r')
plt.show()
"""
Explanation: 1. In this example we performed a knn search in a 2D set of random points
End of explanation
"""
find_similar_pt()
"""
Explanation: Here we find the top 30 closest points to the query point in a set of 40000 random tuples. The graph below shows the query point (green) and the retrieved neighbours (red) plotted over the full point set.
End of explanation
"""
def tsmaker(m, s, j):
"returns metadata and a time series in the shape of a jittered normal"
t = np.arange(0.0, 1.0, 0.01)
v = norm.pdf(t, m, s) + j*np.random.randn(100)
return (t, v)
mus = np.random.uniform(low=0.0, high=1.0, size=50)
sigs = np.random.uniform(low=0.05, high=0.4, size=50)
jits = np.random.uniform(low=0.05, high=0.2, size=50)
ts_set = [tsmaker(m, s, j) for i, m, s, j in zip(range(50), mus, sigs, jits)]
ts_set[0][1]
def find_similar_ts():
rn = lambda: random.randint(0, 10000)
aset = [tsmaker(m, s, j) for i, m, s, j in zip(range(50), mus, sigs, jits)]
q = tsmaker(mus[1], sigs[1], jits[1])
rad = 9990
distance = lambda a, b: math.sqrt(sum([((x-y)**2) for x, y in zip(a[1], b[1])]))
s = time.time()
print("creating vptree...")
root = VpNode(aset, distance=distance)
print("vptree created", time.time() - s)
s = time.time()
print("searching...")
se = VpSearch(root, q, rad, 5)
#out = se.search()
out = se.knn()
for k, v in sorted(se.stat.items()):
print(k, v)
print("number of resultes: %s s" % len(out))
print("vptree search done", time.time() - s)
plt.plot(q[1], label='original timeseries', linewidth=2)
plt.plot(out[0][1], label='similar_1')
plt.plot(out[1][1], label='similar_2')
plt.plot(out[2][1], label='similar_3')
plt.legend()
plt.show()
find_similar_ts()
find_similar_ts()
find_similar_ts()
"""
Explanation: 2. VPTREE on timeseries
End of explanation
"""
def levenshtein(a,b):
"Calculates the Levenshtein distance between a and b."
n, m = len(a), len(b)
if n > m:
# Make sure n <= m, to use O(min(n,m)) space
a,b = b,a
n,m = m,n
current = range(n+1)
for i in range(1,m+1):
previous, current = current, [i]+[0]*n
for j in range(1,n+1):
add, delete = previous[j]+1, current[j-1]+1
change = previous[j-1]
if a[j-1] != b[i-1]:
change = change + 1
current[j] = min(add, delete, change)
return current[n]
def find_similar_ts(file_name):
f = open(file_name)
next(f)
aset = [w[:-1] for w in f]
rad = 1
distance = levenshtein
s = time.time()
print("\ninput set %s points" % len(aset))
print("creating tree...")
root = VpNode(aset, distance=distance)
print("created: %s nodes" % VpNode.ids)
print("done in s: %s" % (time.time() - s))
print("searching...")
while True:
q = input(">>")
s = time.time()
se = VpSearch(root, q, rad, 10)
out = se.knn()
print(se.stat)
print("founded %s results:" % len(out))
count = 1
print("\n".join(out))
print("done in s: %s" % (time.time() - s))
"""
Explanation: 3. VPTREE on text corpus
Levenshtein Distance
In information theory and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (i.e. insertions, deletions or substitutions) required to change one word into the other. It is named after Vladimir Levenshtein, who considered this distance in 1965.[1]
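For example, using the levenshtein function defined above:
```python
levenshtein('kitten', 'sitting')   # 3: substitute k->s, substitute e->i, insert g
levenshtein('flaw', 'lawn')        # 2: delete f, insert n
```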
End of explanation
"""
find_similar_ts('wordsEn.txt')
"""
Explanation: Note:
Since the word dictionary is really large, the below function may take over 10 mins to run:
End of explanation
"""
|
UltronAI/Deep-Learning
|
CS231n/reference/CS231n-master/assignment3/ImageGradients.ipynb
|
mit
|
# As usual, a bit of setup
import time, os, json
import numpy as np
import skimage.io
import matplotlib.pyplot as plt
from cs231n.classifiers.pretrained_cnn import PretrainedCNN
from cs231n.data_utils import load_tiny_imagenet
from cs231n.image_utils import blur_image, deprocess_image
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
"""
Explanation: Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
End of explanation
"""
data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A', subtract_mean=True)
"""
Explanation: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE: The full TinyImageNet-100-A dataset will take up about 250MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory.
End of explanation
"""
for i, names in enumerate(data['class_names']):
print i, ' '.join('"%s"' % name for name in names)
"""
Explanation: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A:
End of explanation
"""
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(data['y_train'] == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = deprocess_image(data['X_train'][train_idx], data['mean_image'])
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(data['class_names'][class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
"""
Explanation: Visualize Examples
Run the following to visualize some example images from random classes in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
End of explanation
"""
model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5')
"""
Explanation: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
End of explanation
"""
batch_size = 100
# Test the model on training data
mask = np.random.randint(data['X_train'].shape[0], size=batch_size)
X, y = data['X_train'][mask], data['y_train'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Training accuracy: ', (y_pred == y).mean()
# Test the model on validation data
mask = np.random.randint(data['X_val'].shape[0], size=batch_size)
X, y = data['X_val'][mask], data['y_val'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Validation accuracy: ', (y_pred == y).mean()
"""
Explanation: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
End of explanation
"""
def compute_saliency_maps(X, y, model):
"""
Compute a class saliency map using the model for images X and labels y.
Input:
- X: Input images, of shape (N, 3, H, W)
- y: Labels for X, of shape (N,)
- model: A PretrainedCNN that will be used to compute the saliency map.
Returns:
- saliency: An array of shape (N, H, W) giving the saliency maps for the input
images.
"""
saliency = None
##############################################################################
# TODO: Implement this function. You should use the forward and backward #
# methods of the PretrainedCNN class, and compute gradients with respect to #
# the unnormalized class score of the ground-truth classes in y. #
##############################################################################
pass
##############################################################################
# END OF YOUR CODE #
##############################################################################
return saliency
"""
Explanation: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps", ICLR Workshop 2014.
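A minimal sketch of the idea (not the official solution). It assumes, based on the loss method, that model.forward(X, mode='test') returns (scores, cache) and model.backward(dout, cache) returns (dX, grads):
```python
import numpy as np

def saliency_sketch(X, y, model):
    # Forward pass in test mode to get the unnormalized class scores
    scores, cache = model.forward(X, mode='test')
    # Backpropagate a gradient of 1 only through each image's ground-truth class score
    dscores = np.zeros_like(scores)
    dscores[np.arange(X.shape[0]), y] = 1.0
    dX, _ = model.backward(dscores, cache)
    # Saliency map: max absolute gradient over the 3 color channels -> shape (N, H, W)
    return np.abs(dX).max(axis=1)
```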
End of explanation
"""
def show_saliency_maps(mask):
mask = np.asarray(mask)
X = data['X_val'][mask]
y = data['y_val'][mask]
saliency = compute_saliency_maps(X, y, model)
for i in xrange(mask.size):
plt.subplot(2, mask.size, i + 1)
plt.imshow(deprocess_image(X[i], data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y[i]][0])
plt.subplot(2, mask.size, mask.size + i + 1)
plt.title(mask[i])
plt.imshow(saliency[i])
plt.axis('off')
plt.gcf().set_size_inches(10, 4)
plt.show()
# Show some random images
mask = np.random.randint(data['X_val'].shape[0], size=5)
show_saliency_maps(mask)
# These are some cherry-picked images that should give good results
show_saliency_maps([128, 3225, 2417, 1640, 4619])
"""
Explanation: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
End of explanation
"""
def make_fooling_image(X, target_y, model):
"""
Generate a fooling image that is close to X, but that the model classifies
as target_y.
Inputs:
- X: Input image, of shape (1, 3, 64, 64)
- target_y: An integer in the range [0, 100)
- model: A PretrainedCNN
Returns:
- X_fooling: An image that is close to X, but that is classifed as target_y
by the model.
"""
X_fooling = X.copy()
##############################################################################
# TODO: Generate a fooling image X_fooling that the model will classify as #
# the class target_y. Use gradient ascent on the target class score, using #
# the model.forward method to compute scores and the model.backward method #
# to compute image gradients. #
# #
# HINT: For most examples, you should be able to generate a fooling image #
# in fewer than 100 iterations of gradient ascent. #
##############################################################################
pass
##############################################################################
# END OF YOUR CODE #
##############################################################################
return X_fooling
"""
Explanation: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
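A minimal sketch of the gradient-ascent loop (not the official solution; the same forward/backward API assumptions as in the saliency-map sketch above, and the step size here is only a guess):
```python
import numpy as np

def fooling_sketch(X, target_y, model, step_size=1000.0, max_iters=100):
    X_fooling = X.copy()
    for _ in range(max_iters):
        scores, cache = model.forward(X_fooling, mode='test')
        if scores[0].argmax() == target_y:
            break                          # the network is fooled; stop early
        # Ascend the unnormalized score of the target class
        dscores = np.zeros_like(scores)
        dscores[0, target_y] = 1.0
        dX, _ = model.backward(dscores, cache)
        X_fooling += step_size * dX
    return X_fooling
```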
End of explanation
"""
# Find a correctly classified validation image
while True:
i = np.random.randint(data['X_val'].shape[0])
X = data['X_val'][i:i+1]
y = data['y_val'][i:i+1]
y_pred = model.loss(X)[0].argmax()
if y_pred == y: break
target_y = 67
X_fooling = make_fooling_image(X, target_y, model)
# Make sure that X_fooling is classified as y_target
scores = model.loss(X_fooling)
assert scores[0].argmax() == target_y, 'The network is not fooled!'
# Show original image, fooling image, and difference
plt.subplot(1, 3, 1)
plt.imshow(deprocess_image(X, data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y][0])
plt.subplot(1, 3, 2)
plt.imshow(deprocess_image(X_fooling, data['mean_image'], renorm=True))
plt.title(data['class_names'][target_y][0])
plt.axis('off')
plt.subplot(1, 3, 3)
plt.title('Difference')
plt.imshow(deprocess_image(X - X_fooling, data['mean_image']))
plt.axis('off')
plt.show()
"""
Explanation: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/unstructured/Unstructured-ML.ipynb
|
apache-2.0
|
APIKEY="AIzaSyBQrrl4SZhE3QtxsnbjY2WTdgcBz0G0Rfs" # CHANGE
print APIKEY
PROJECT_ID = "qwiklabs-gcp-14067121d7b1d12c" # CHANGE
print PROJECT_ID
BUCKET = "qwiklabs-gcp-14067121d7b1d12c" # CHANGE
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT_ID
from googleapiclient.discovery import build
print("\n","Google Cloud API Client credentials established")
"""
Explanation: <h2>Establish environment variables and services for Google Cloud API access</h2>
In Console, go to <b> Products & services </b> > <b> APIs & services </b> > <b> Credentials </b>
Click on <b> Create Credentials </b> and select <b> API Key </b>
Copy the API Key and paste it in the APIKEY field below.
In Console, go to <b> Products & services </b> > <b> Home </b>
Select and copy the Project ID
Paste the Project ID into both the PROJECT_ID field and the BUCKET field below.
Run the next block
End of explanation
"""
def SentimentAnalysis(text):
from googleapiclient.discovery import build
lservice = build('language', 'v1beta1', developerKey=APIKEY)
response = lservice.documents().analyzeSentiment(
body={
'document': {
'type': 'PLAIN_TEXT',
'content': text
}
}).execute()
return response
print("\n","Sentiment Analysis function defined.")
"""
Explanation: <h2> Define an API calling function </h2>
Run the following block of code to define a language service API interface
When this is called it will pass the text to the service using a JSON formatted block
And it will receive a JSON response from the Google Cloud Language service
The response will be automatically represented by Python as a 'dict' object (a dictionary)
End of explanation
"""
sampleline = u'There are places I remember, all my life though some have changed.'
results = SentimentAnalysis(sampleline)
print("\n","This is the Python object that is returned; a dictionary.")
print("\n")
print("Function returns :",type(results))
print(results)
import json
print("\n","This is the JSON formatted version of the object")
print(json.dumps(results, sort_keys=True, indent=4))
"""
Explanation: <h2> Test the Sentiment Analysis </h2>
Use a simple string to test the function and verify all the API elements are working.
End of explanation
"""
# Working with the smaller sample file
#
lines = sc.textFile("/sampledata/road-not-taken.txt")
#
# The Spark map transformation will execute SentimentAnalysis on each element in lines and store the result in sentiment.
# Remember that due to lazy execution, this line just queues up the transformation, it does not run yet.
# So you will not see errors at this point.
#
sentiment = lines.map(SentimentAnalysis)
#
#
print (type(sentiment))
# sentiment is a pyspark.rdd.PipelinedRDD
#
# If it is properly formed then an action such as sentiment.collect() will run the job.
# If not properly formed, it will throw errors.
#
output = sentiment.collect()
#
# The sentiment rdd contains JSON returns. In python these are collected into a list of dictionaries.
#
print(type(output))
print("\n")
for line in output:
print(line)
"""
Explanation: <h2>Use the Dataproc cluster to run a Spark job that uses the Machine Learning API </h2>
End of explanation
"""
#
# Ouput is a list of dictionaries
# When the list is iterated, each line is one dictionary
# And the dictionary is double-subscripted
#
for line in output:
print("Score: ",line['documentSentiment']['score'], " Magnitude :",line['documentSentiment']['magnitude'])
"""
Explanation: <h2> Working with the results in Python </h2>
The collect() action is good for validating sample output, but don't use it on big data because all the results must fit in memory.
The following shows how the data that is passed back is formatted as a Python list of dictionary objects.
End of explanation
"""
def TailoredAnalysis(text):
from googleapiclient.discovery import build
lservice = build('language', 'v1beta1', developerKey=APIKEY)
response = lservice.documents().analyzeEntities(
body={
'document': {
'type': 'PLAIN_TEXT',
'content': text
}
}).execute()
return response
print("\n","Tailored Analysis function defined.")
"""
Explanation: <h2> Using another feature of the Natural Language API </h2>
In this code block there is an analyze Entities version of the Language Service
Analysis of entities identifies and recognizes items in text
End of explanation
"""
# [STEP 1] HDFS
#lines = sc.textFile("/sampledata/road-not-taken.txt")
#
#
# [STEP 2] Cloud Storage
#lines = sc.textFile("gs://<your-bucket>/time-machine-P1.txt")
#lines = sc.textFile("gs://<your-bucket>/time-machine-P2.txt")
#lines = sc.textFile("gs://<your-bucket>/time-machine-P3.txt")
#lines = sc.textFile("gs://<your-bucket>/time-machine-P4.txt")
lines = sc.textFile("gs://qwiklabs-gcp-14067121d7b1d12c/time-machine-P1.txt")
#
#
#
entities = lines.map(TailoredAnalysis)
from operator import add
rdd = entities.map(lambda x: x['entities'])
#
# results = rdd.flatMap(lambda x: x ).filter(lambda x: x['type']==u'PERSON').map(lambda x:(x['name'],1)).reduceByKey(add)
#
# It is common practice to use line continuation "\" to help make the chain more readable
results = rdd.flatMap(lambda x: x )\
.filter(lambda x: x['type']==u'PERSON')\
.map(lambda x:(x['name'],1))\
.reduceByKey(add)
print(sorted(results.take(25)))
# [STEP 3] Cloud Storage
#lines = sc.textFile("gs://<your-bucket>/time-machine-P1.txt")
#lines = sc.textFile("gs://<your-bucket>/time-machine-P2.txt")
#lines = sc.textFile("gs://<your-bucket>/time-machine-P3.txt")
#lines = sc.textFile("gs://<your-bucket>/time-machine-P4.txt")
#
lines = sc.textFile("gs://qwiklabs-gcp-14067121d7b1d12c/time-machine-P2.txt")
#
entities = lines.map(TailoredAnalysis)
from operator import add
rdd = entities.map(lambda x: x['entities'])
#
# results = rdd.flatMap(lambda x: x ).filter(lambda x: x['type']==u'PERSON').map(lambda x:(x['name'],1)).reduceByKey(add)
#
# It is common practice to use line continuation "\" to help make the chain more readable
results = rdd.flatMap(lambda x: x )\
.filter(lambda x: x['type']==u'LOCATION')\
.map(lambda x:(x['name'],1))\
.reduceByKey(add)
print(sorted(results.take(25)))
"""
Explanation: <h2> Working with the results in Spark </h2>
Working with results in Python can be useful. However, if the results are too big to fit in memory, you will want to perform transformations
on the data while still in RDDs using Spark. In this section you will explore transforming the results of the TailoredAnalysis function in Spark.
<h3> Load some sample files into Cloud Storage </h3>
Step 1: Locate some sample text such as a news article in your browser. Copy the text into a file called sample.txt
Step 2: SSH to the Master Node
Step 3: Upload the file to Cloud Storage
gsutil cp sample.txt gs://[your-bucket]
Step 4: Download some more prepared sample files:
curl https://storage.googleapis.com/cloud-training/archdp/sherlock2.txt > sherlock2.txt
curl https://storage.googleapis.com/cloud-training/archdp/sherlock3.txt > sherlock3.txt
curl https://storage.googleapis.com/cloud-training/archdp/sherlock4.txt > sherlock4.txt
Step 5: Upload the files to your Cloud Storage:
gsutil cp sample* gs://[your-bucket]
<h3> Run the analysis and variations </h3>
This code recognizes people and locations mentioned in the document, and returns a list of them with the number of mentions.
Step 6: In the following code block, replace the bucket name [your-bucket] with your bucket
After you run the block and see the results, change the word 'PERSON' to 'LOCATION' and run it again
End of explanation
"""
# Replace with your bucket
#
results.repartition(1).saveAsTextFile("gs://qwiklabs-gcp-14067121d7b1d12c/sampleoutput/")
print("Output to Cloud Storage is complete.")
"""
Explanation: <h2> Save as text to Cloud Storage </h2>
<h3>Write file to Cloud Storage</h3>
Replace the bucket in the example with your bucket.
Run the next block to save the RDD to cloud storage.
repartition(1) reorganizes the RDD internally into a single partition.
saveAsTextFile saved the partition in a folder called sampleoutput.
<h3>View Cloud Storage in Console</h3>
In Console, go to the Cloud Storage Browser, locate the sampleoutput folder, and look inside.
Inside the folder you will find part-xxxxx coresponding to the partitions in the RDD.
End of explanation
"""
|
kunalj101/scipy2015-blaze-bokeh
|
2. Blaze.ipynb
|
mit
|
import pandas as pd
df = pd.read_csv('data/iris.csv')
df.head()
df.groupby(df.Species).PetalLength.mean() # Average petal length per species
"""
Explanation: <img src=images/continuum_analytics_b&w.png align="left" width="15%" style="margin-right:15%">
<h1 align='center'>Introduction to Blaze</h1>
In this tutorial we'll learn how to use Blaze to discover, migrate, and query data living in other databases. Generally this tutorial will have the following format
odo - Move data to database
blaze - Query data in database
Goal: Accessible, Interactive, Analytic Queries
NumPy and Pandas provide accessible, interactive, analytic queries; this is valuable.
End of explanation
"""
from odo import odo
import numpy as np
import pandas as pd
odo("data/iris.csv", pd.DataFrame)
"""
Explanation: <hr/>
But as data grows and systems become more complex, moving data and querying data become more difficult. Python already has excellent tools for data that fits in memory, but we want to hook up to data that is inconvenient.
From now on, we're going to assume one of the following:
You have an inconvenient amount of data
That data should live someplace other than your computer
<hr/>
Databases and Python
When in-memory arrays/dataframes cease to be an option, we turn to databases. These live outside of the Python process and so might be less convenient. The open source Python ecosystem includes libraries to interact with these databases and with foreign data in general.
Examples:
SQL - sqlalchemy
Hive/Cassandra - pyhive
Impala - impyla
RedShift - redshift-sqlalchemy
...
MongoDB - pymongo
HBase - happybase
Spark - pyspark
SSH - paramiko
HDFS - pywebhdfs
Amazon S3 - boto
Today we're going to use some of these indirectly with odo (was into) and Blaze. We'll try to point out these libraries as we automate them so that, if you'd like, you can use them independently.
<hr />
<img src="images/continuum_analytics_logo.png"
alt="Continuum Logo",
align="right",
width="30%">,
odo (formerly into)
Odo migrates data between formats and locations.
Before we can use a database we need to move data into it. The odo project provides a single consistent interface to move data between formats and between locations.
We'll start with local data and eventually move out to remote data.
odo docs
<hr/>
Examples
Odo moves data into a target from a source
```python
odo(source, target)
```
The target and source can be either a Python object or a string URI. The following are all valid calls to into
```python
odo('iris.csv', pd.DataFrame) # Load CSV file into new DataFrame
odo(my_df, 'iris.json') # Write DataFrame into JSON file
odo('iris.csv', 'iris.json') # Migrate data from CSV to JSON
```
<hr/>
Exercise
Use odo to load the iris.csv file into a Python list, a np.ndarray, and a pd.DataFrame
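One possible solution (assuming the same data/iris.csv path used above):
```python
from odo import odo
import numpy as np
import pandas as pd

odo('data/iris.csv', list)          # a Python list of row tuples
odo('data/iris.csv', np.ndarray)    # a NumPy record array
odo('data/iris.csv', pd.DataFrame)  # a pandas DataFrame
```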
End of explanation
"""
odo("data/iris.csv", "sqlite:///my.db::iris")
"""
Explanation: <hr/>
URI Strings
Odo refers to foreign data either with a Python object like a sqlalchemy.Table object for a SQL table, or with a string URI, like postgresql://hostname::tablename.
URI's often take on the following form
protocol://path-to-resource::path-within-resource
Where path-to-resource might point to a file, a database hostname, etc. while path-within-resource might refer to a datapath or table name. Note the two main separators
:// separates the protocol on the left (sqlite, mongodb, ssh, hdfs, hive, ...)
:: separates the path within the database on the right (e.g. tablename)
odo docs on uri strings
<hr/>
Examples
Here are some example URIs
myfile.json
myfiles.*.csv'
postgresql://hostname::tablename
mongodb://hostname/db::collection
ssh://user@host:/path/to/myfile.csv
hdfs://user@host:/path/to/*.csv
<hr />
Exercise
Migrate your CSV file into a table named iris in a new SQLite database at sqlite:///my.db. Remember to use the :: separator and to separate your database name from your table name.
odo docs on SQL
End of explanation
"""
type(_)
"""
Explanation: What kind of object did you receive as output? Call type on your result.
End of explanation
"""
odo('s3://nyqpug/tips.csv', pd.DataFrame)
"""
Explanation: <hr/>
How it works
Odo is a network of fast pairwise conversions between pairs of formats. We when we migrate between two formats we traverse a path of pairwise conversions.
We visualize that network below:
Each node represents a data format. Each directed edge represents a function to transform data between two formats. A single call to into may traverse multiple edges and multiple intermediate formats. Red nodes support larger-than-memory data.
A single call to into may traverse several intermediate formats calling on several conversion functions. For example, we when migrate a CSV file to a Mongo database we might take the following route:
Load in to a DataFrame (pandas.read_csv)
Convert to np.recarray (DataFrame.to_records)
Then to a Python Iterator (np.ndarray.tolist)
Finally to Mongo (pymongo.Collection.insert)
Alternatively we could write a special function that uses MongoDB's native CSV
loader and shortcut this entire process with a direct edge CSV -> Mongo.
These functions are chosen because they are fast, often far faster than converting through a central serialization format.
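Spelled out with plain pandas, the first three hops of that route look roughly like this (a sketch; the final pymongo insert is omitted):
```python
import pandas as pd

df = pd.read_csv('data/iris.csv')       # CSV -> pandas DataFrame
records = df.to_records(index=False)    # DataFrame -> np.recarray
rows = records.tolist()                 # recarray -> plain Python rows
# odo would then hand `rows` to pymongo's insert to reach MongoDB
```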
This picture is actually from an older version of odo, when the graph was still small enough to visualize pleasantly. See odo docs for a more updated version.
<hr/>
Remote Data
We can interact with remote data in three locations
On Amazon's S3 (this will be quick)
On a remote machine via ssh
On the Hadoop File System (HDFS)
For most of this we'll wait until we've seen Blaze, briefly we'll use S3.
S3
For now, we quickly grab a file from Amazon's S3.
This example depends on boto to interact with S3.
conda install boto
odo docs on aws
End of explanation
"""
import pandas as pd
df = pd.read_csv('data/iris.csv')
df.head(5)
df.Species.unique()
df.Species.drop_duplicates()
"""
Explanation: <hr/>
<img src="images/continuum_analytics_logo.png"
alt="Continuum Logo",
align="right",
width="30%">,
Blaze
Blaze translates a subset of numpy/pandas syntax into database queries. It hides away the database.
On simple datasets, like CSV files, Blaze acts like Pandas with slightly different syntax. In this case Blaze is just using Pandas.
<hr/>
Pandas example
End of explanation
"""
import blaze as bz
d = bz.Data('data/iris.csv')
d.head(5)
d.Species.distinct()
"""
Explanation: <hr/>
Blaze example
End of explanation
"""
db = bz.Data('sqlite:///my.db')
#db.iris
#db.iris.head()
db.iris.Species.distinct()
db.iris[db.iris.Species == 'versicolor'][['Species', 'SepalLength']]
"""
Explanation: <hr/>
Foreign Data
Blaze does different things under-the-hood on different kinds of data
CSV files: Pandas DataFrames (or iterators of DataFrames)
SQL tables: SQLAlchemy.
Mongo collections: PyMongo
...
SQL
We'll play with SQL a lot during this tutorial. Blaze translates your query to SQLAlchemy. SQLAlchemy then translates to the SQL dialect of your database, your database then executes that query intelligently.
Blaze $\rightarrow$ SQLAlchemy $\rightarrow$ SQL $\rightarrow$ Database computation
This translation process lets analysts interact with a familiar interface while leveraging a potentially powerful database.
To keep things local we'll use SQLite, but this works with any database with a SQLAlchemy dialect. Examples in this section use the iris dataset. Exercises use the Lahman Baseball statistics database, year 2013.
If you have not downloaded this dataset you could do so here - https://github.com/jknecht/baseball-archive-sqlite/raw/master/lahman2013.sqlite.
<hr/>
Examples
Let's dive into Blaze syntax. For simple queries it looks and feels similar to Pandas
End of explanation
"""
# Inspect SQL query
query = db.iris[db.iris.Species == 'versicolor'][['Species', 'SepalLength']]
print bz.compute(query)
query = bz.by(db.iris.Species, longest=db.iris.PetalLength.max(),
shortest=db.iris.PetalLength.min())
print bz.compute(query)
"""
Explanation: <hr />
Work happens on the database
If we were using pandas we would read the table into pandas, then use pandas' fast in-memory algorithms for computation. Here we translate your query into SQL and then send that query to the database to do the work.
Pandas $\leftarrow_\textrm{data}$ SQL, then Pandas computes
Blaze $\rightarrow_\textrm{query}$ SQL, then database computes
If we want to dive into the internal API we can inspect the query that Blaze transmits.
<hr />
End of explanation
"""
# db = bz.Data('postgresql://postgres:postgres@ec2-54-159-160-163.compute-1.amazonaws.com') # Use Postgres if you don't have the sqlite file
db = bz.Data('sqlite:///data/lahman2013.sqlite')
db.dshape
# View the Salaries table
# What are the distinct teamIDs in the Salaries table?
# What is the minimum and maximum yearID in the Sarlaries table?
# For the Oakland Athletics (teamID OAK), pick out the playerID, salary, and yearID columns
# Sort that result by salary.
# Use the ascending=False keyword argument to the sort function to find the highest paid players
"""
Explanation: <hr />
Exercises
Now we load the Lahman baseball database and perform similar queries
End of explanation
"""
import pandas as pd
iris = pd.read_csv('data/iris.csv')
iris.groupby('Species').PetalLength.min()
iris = bz.Data('sqlite:///my.db::iris')
bz.by(iris.Species, largest=iris.PetalLength.max(),
smallest=iris.PetalLength.min())
print(_)
"""
Explanation: <hr />
Example: Split-apply-combine
In Pandas we perform computations on a per-group basis with the groupby operator. In Blaze our syntax is slightly different, using instead the by function.
End of explanation
"""
result = bz.by(db.Salaries.teamID, avg=db.Salaries.salary.mean(),
max=db.Salaries.salary.max(),
ratio=db.Salaries.salary.max() / db.Salaries.salary.min()
).sort('ratio', ascending=False)
odo(result, list)[:10]
"""
Explanation: <hr/>
Store Results
By default Blaze only shows us the first ten lines of a result. This provides a more interactive feel and stops us from accidentally crushing our system. Sometimes we do want to compute all of the results and store them someplace.
Blaze expressions are valid sources for odo. So we can store our results in any format.
<hr/>
Exercise: Storage
The solution to the first split-apply-combine problem is below. Store that result in a list, a CSV file, and in a new SQL table in our database (use a uri like sqlite://... to specify the SQL table.)
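A possible solution sketch (the CSV filename and the SQL table name below are just example names):
```python
from odo import odo

odo(result, list)                                               # into a Python list
odo(result, 'salary_ratios.csv')                                # into a CSV file
odo(result, 'sqlite:///data/lahman2013.sqlite::salary_ratios')  # into a new SQL table
```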
End of explanation
"""
|
ejolly/Python
|
forFun/echoPy.ipynb
|
mit
|
from pyechonest import config, artist, song
import pandas as pd
config.ECHO_NEST_API_KEY = 'XXXXXXXX' #retrieved from https://developer.echonest.com/account/profile
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
"""
Explanation: Some code playing with the Echonest API python wrapper
Pythondocs
Github
API Overview
Things you can do with the API
Remix part of the API
More examples with Remix
Code for examples
Resources for the Spotify web API:
Python wrapper
Meteor.js wrapper with links to Node.js and general client js wrappers
End of explanation
"""
songs = song.search(title='Elastic Heart',artist='Sia',buckets='id:spotify',limit=True,results=1)
elasticHeart = songs[0]
elasticHeartFeatures = pd.DataFrame.from_dict(elasticHeart.audio_summary,orient='index')
pd.DataFrame.from_dict([elasticHeart.audio_summary])
"""
Explanation: Query a single song, get its audio features and make a dataframe
End of explanation
"""
# Query the 20 hottest tracks available on Spotify for each artist
siaHottest = song.search(artist = 'Sia', sort = 'song_hotttnesss-desc', buckets = 'id:spotify', limit = True, results = 20)
floHottest = song.search(artist = 'Flo Rida', sort = 'song_hotttnesss-desc', buckets = 'id:spotify', limit = True, results = 20)
# Collect the audio summary features for each artist's tracks
# (use a loop variable other than `song` so the pyechonest song module is not shadowed)
ssongFeatures = [s.audio_summary for s in siaHottest]
fsongFeatures = [s.audio_summary for s in floHottest]
S = pd.DataFrame.from_dict(ssongFeatures)
S.index = [s.title for s in siaHottest]
S['hotness'] = [s.song_hotttnesss for s in siaHottest]
F = pd.DataFrame.from_dict(fsongFeatures)
F.index = [s.title for s in floHottest]
F['hotness'] = [s.song_hotttnesss for s in floHottest]
u,idx = np.unique(S.index,return_index=True)
S = S.ix[idx,:]
u,idx = np.unique(F.index,return_index=True)
F = F.ix[idx,:]
ax = pd.DataFrame({'Flo Rida':F.mean(), 'Sia': S.mean()}).plot(kind='bar',figsize=(18,6),rot=0, color = ['lightblue','salmon']);
ax.set_title("Average Song Features for Artist's Hottest 20 tracks",fontsize=14);
ax.tick_params(axis='x', labelsize=12)
Elastic_Heart = siaHottest[5].get_tracks('spotify')
Elastic_Heart[1]
%%html
<iframe src="https://embed.spotify.com/?uri=spotify:track:3yFdQkEQNzDwpPB1iIFtaM" width="300" height="380" frameborder="0" allowtransparency="true"></iframe>
"""
Explanation: Grab and compare the hottest tracks, available in Spotify, for 2 artists on a number of audio features
End of explanation
"""
|
GoogleCloudPlatform/cloudml-samples
|
notebooks/scikit-learn/HyperparameterTuningWithScikitLearnInCMLE.ipynb
|
apache-2.0
|
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 Google LLC
End of explanation
"""
%env PROJECT_ID PROJECT_ID
%env BUCKET_ID BUCKET_ID
%env JOB_DIR gs://BUCKET_ID/scikit_learn_job_dir
%env REGION us-central1
%env TRAINER_PACKAGE_PATH ./auto_mpg_hp_tuning
%env MAIN_TRAINER_MODULE auto_mpg_hp_tuning.train
%env RUNTIME_VERSION 1.9
%env PYTHON_VERSION 3.5
%env HPTUNING_CONFIG hptuning_config.yaml
! mkdir auto_mpg_hp_tuning
"""
Explanation: scikit-learn HP Tuning on AI Platform
This notebook trains a model on AI Platform using Hyperparameter Tuning to predict a car's Miles Per Gallon. It uses the Auto MPG Data Set from the UCI Machine Learning Repository.
Citation: Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
How to train your model on AI Platform with HP tuning.
Using HP Tuning for training can be done in a few steps:
1. Create your python model file
1. Add argument parsing for the hyperparameter values. (These values are chosen for you in this notebook)
1. Add code to download your data from Google Cloud Storage so that AI Platform can use it
1. Add code to track the performance of your hyperparameter values.
1. Add code to export and save the model to Google Cloud Storage once AI Platform finishes training the model
1. Prepare a package
1. Submit the training job
Prerequisites
Before you jump in, let’s cover some of the different tools you’ll be using to get HP tuning up and running on AI Platform.
Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.
AI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.
Google Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.
Cloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel.
Overview of Hyperparameter Tuning - Hyperparameter tuning takes advantage of the processing infrastructure of Google Cloud Platform to test different hyperparameter configurations when training your model.
Part 0: Setup
Create a project on GCP
Create a Google Cloud Storage Bucket
Enable AI Platform Training and Prediction and Compute Engine APIs
Install Cloud SDK
Install scikit-learn [Optional: used if running locally]
Install pandas [Optional: used if running locally]
Install cloudml-hypertune [Optional: used if running locally]
These variables will be needed for the following steps.
* TRAINER_PACKAGE_PATH <./auto_mpg_hp_tuning> - A packaged training application that will be staged in a Google Cloud Storage location. The model file created below is placed inside this package path.
* MAIN_TRAINER_MODULE <auto_mpg_hp_tuning.train> - Tells AI Platform which file to execute. This is formatted as follows <folder_name.python_file_name>
* JOB_DIR <gs://$BUCKET_ID/scikit_learn_job_dir> - The path to a Google Cloud Storage location to use for job output.
* RUNTIME_VERSION <1.9> - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
* PYTHON_VERSION <3.5> - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.
* HPTUNING_CONFIG <hptuning_config.yaml> - Path to the job configuration file.
Replace:
* PROJECT_ID <YOUR_PROJECT_ID> - with your project's id. Use the PROJECT_ID that matches your Google Cloud Platform project.
* BUCKET_ID <YOUR_BUCKET_ID> - with the bucket id you created above.
* JOB_DIR <gs://YOUR_BUCKET_ID/scikit_learn_job_dir> - with the bucket id you created above.
* REGION <REGION> - select a region from here or use the default 'us-central1'. The region is where the model will be deployed.
End of explanation
"""
%%writefile ./auto_mpg_hp_tuning/train.py
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import datetime
import os
import pandas as pd
import subprocess
from google.cloud import storage
import hypertune
from sklearn.externals import joblib
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
"""
Explanation: The data
The Auto MPG Data Set that this sample
uses for training is provided by the UC Irvine Machine Learning
Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/auto_mpg/. The data has been pre-processed to remove rows with incomplete data so as not to create additional steps for this notebook.
Training file is auto-mpg.data
Note: Your typical development process with your own data would require you to upload your data to GCS so that AI Platform can access that data. However, in this case, we have put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS.
Citation: Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
Disclaimer
This dataset is provided by a third party. Google provides no representation,
warranty, or other guarantees about the validity or any other aspects of this dataset.
Part 1: Create your python model file
First, we'll create the python model file (provided below) that we'll upload to AI Platform. This is similar to your normal process for creating a scikit-learn model. However, there are a few key differences:
1. Downloading the data from GCS at the start of your file, so that AI Platform can access the data.
1. Exporting/saving the model to GCS at the end of your file, so that you can use it for predictions.
1. Define a command-line argument in your main training module for each tuned hyperparameter.
1. Use the value passed in those arguments to set the corresponding hyperparameter in your application's scikit-learn code.
1. Use cloudml-hypertune to track your training jobs metrics.
The code in this file first handles the hyperparameters passed to the file from AI Platform. Then it loads the data into a pandas DataFrame that can be used by scikit-learn. Then the model is fit against the training data and the metrics for that data are shared with AI Platform. Lastly, sklearn's built in version of joblib is used to save the model to a file that can be uploaded to AI Platform's prediction service.
Note: In normal practice you would want to test your model locally on a small dataset to ensure that it works, before using it with your larger dataset on AI Platform. This avoids wasted time and costs.
Setup the imports
End of explanation
"""
%%writefile -a ./auto_mpg_hp_tuning/train.py
parser = argparse.ArgumentParser()
parser.add_argument(
'--job-dir', # handled automatically by AI Platform
help='GCS location to write checkpoints and export models',
required=True
)
parser.add_argument(
'--alpha', # Specified in the config file
help='Constant that multiplies the L1 term.',
default=1.0,
type=float
)
parser.add_argument(
'--max_iter', # Specified in the config file
help='The maximum number of iterations.',
default=1000,
type=int
)
parser.add_argument(
'--tol', # Specified in the config file
help='The tolerance for the optimization: if the updates are smaller than tol, '
'the optimization code checks the dual gap for optimality and continues '
'until it is smaller than tol.',
default=0.0001,
type=float
)
parser.add_argument(
'--selection', # Specified in the config file
help='Supported criteria are “cyclic” loop over features sequentially and '
'“random” a random coefficient is updated every iteration ',
default='cyclic'
)
args = parser.parse_args()
"""
Explanation: Load the hyperparameter values that are passed to the model during training.
In this tutorial, the Lasso regressor is used, because it has several parameters that can be used to help demonstrate how to choose HP tuning values. (The range of values are set below in the configuration file for the HP tuning values.)
End of explanation
"""
%%writefile -a ./auto_mpg_hp_tuning/train.py
# Public bucket holding the auto mpg data
bucket = storage.Client().bucket('cloud-samples-data')
# Path to the data inside the public bucket
blob = bucket.blob('ml-engine/auto_mpg/auto-mpg.data')
# Download the data
blob.download_to_filename('auto-mpg.data')
# ---------------------------------------
# This is where your model code would go. Below is an example model using the auto mpg dataset.
# ---------------------------------------
# Define the format of your input data including unused columns
# (These are the columns from the auto-mpg data files)
COLUMNS = (
'mpg',
'cylinders',
'displacement',
'horsepower',
'weight',
'acceleration',
'model-year',
'origin',
'car-name'
)
# Load the training auto mpg dataset
with open('./auto-mpg.data', 'r') as train_data:
raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS, delim_whitespace=True)
# Remove the column we are trying to predict ('mpg') from our features list
# Convert the Dataframe to a lists of lists
features = raw_training_data.drop('mpg', axis=1).drop('car-name', axis=1).values.tolist()
# Create our training labels list, convert the Dataframe to a lists of lists
labels = raw_training_data['mpg'].values.tolist()
train_features, test_features, train_labels, test_labels = train_test_split(features, labels, test_size=0.15)
"""
Explanation: Add code to download the data from GCS
In this case the data is publicly hosted, so AI Platform will be able to access it when training your model.
End of explanation
"""
%%writefile -a ./auto_mpg_hp_tuning/train.py
# Create the regressor, here we will use a Lasso Regressor to demonstrate the use of HP Tuning.
# Here is where we set the variables used during HP Tuning from
# the parameters passed into the python script
regressor = Lasso(
alpha=args.alpha,
max_iter=args.max_iter,
tol=args.tol,
selection=args.selection)
# Transform the features and fit them to the regressor
regressor.fit(train_features, train_labels)
"""
Explanation: Use the Hyperparameters
Use the Hyperparameter values passed in those arguments to set the corresponding hyperparameters in your application's scikit-learn code.
End of explanation
"""
%%writefile -a ./auto_mpg_hp_tuning/train.py
# Calculate the mean accuracy on the given test data and labels.
score = regressor.score(test_features, test_labels)
# The default name of the metric is training/hptuning/metric.
# We recommend that you assign a custom name. The only functional difference is that
# if you use a custom name, you must set the hyperparameterMetricTag value in the
# HyperparameterSpec object in your job request to match your chosen name.
# https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#HyperparameterSpec
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='my_metric_tag',
metric_value=score,
global_step=1000)
"""
Explanation: Report the mean accuracy as hyperparameter tuning objective metric.
End of explanation
"""
%%writefile -a ./auto_mpg_hp_tuning/train.py
# Export the model to a file
model_filename = 'model.joblib'
joblib.dump(regressor, model_filename)
# Example: job_dir = 'gs://BUCKET_ID/scikit_learn_job_dir/1'
job_dir = args.job_dir.replace('gs://', '') # Remove the 'gs://'
# Get the Bucket Id
bucket_id = job_dir.split('/')[0]
# Get the path
bucket_path = job_dir[len(bucket_id) + 1:] # Example: 'scikit_learn_job_dir/1' (slicing, since lstrip strips characters, not a prefix)
# Upload the model to GCS
bucket = storage.Client().bucket(bucket_id)
blob = bucket.blob('{}/{}'.format(
bucket_path,
model_filename))
blob.upload_from_filename(model_filename)
"""
Explanation: Export and save the model to GCS
End of explanation
"""
%%writefile ./auto_mpg_hp_tuning/__init__.py
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Note that __init__.py can be an empty file.
"""
Explanation: Part 2: Create Trainer Package with Hyperparameter Tuning
Next we need to build the Trainer Package, which holds all your code and the dependencies needed to train your model on AI Platform.
First, we create an empty __init__.py file.
End of explanation
"""
%%writefile ./hptuning_config.yaml
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# hyperparam.yaml
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 30
maxParallelTrials: 5
hyperparameterMetricTag: my_metric_tag
enableTrialEarlyStopping: TRUE
params:
- parameterName: alpha
type: DOUBLE
minValue: 0.0
maxValue: 10.0
scaleType: UNIT_LINEAR_SCALE
- parameterName: max_iter
type: INTEGER
minValue: 1000
maxValue: 5000
scaleType: UNIT_LINEAR_SCALE
- parameterName: tol
type: DOUBLE
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LINEAR_SCALE
- parameterName: selection
type: CATEGORICAL
categoricalValues: [
"cyclic",
"random"
]
"""
Explanation: Next, we need to set the hp tuning values used to train our model. Check HyperparameterSpec for more info.
In this config file several key things are set:
* maxTrials - How many training trials should be attempted to optimize the specified hyperparameters.
* maxParallelTrials: 5 - The number of training trials to run concurrently.
* params - The set of parameters to tune. These are the different parameters to pass into your model and the specified ranges you wish to try.
* parameterName - The parameter name must be unique amongst all ParameterConfigs
* type - The type of the parameter. [INTEGER, DOUBLE, ...]
* minValue & maxValue - The range of values that this parameter could be.
* scaleType - How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE).
End of explanation
"""
%%writefile ./setup.py
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = ['cloudml-hypertune']
setup(
name='auto_mpg_hp_tuning',
version='0.1',
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='Auto MPG sklearn HP tuning training application'
)
"""
Explanation: Lastly, we need to install the dependencies used in our model. Check adding_standard_pypi_dependencies for more info.
To do this, AI Platform uses a setup.py file to install your dependencies.
End of explanation
"""
! gcloud config set project $PROJECT_ID
"""
Explanation: Part 3: Submit Training Job
Next we need to submit the job for training on AI Platform. We'll use gcloud to submit the job which has the following flags:
job-name - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: auto_mpg_hp_tuning_$(date +"%Y%m%d_%H%M%S")
job-dir - The path to a Google Cloud Storage location to use for job output.
package-path - A packaged training application that is staged in a Google Cloud Storage location. If you are using the gcloud command-line tool, this step is largely automated.
module-name - The name of the main module in your trainer package. The main module is the Python file you call to start the application. If you use the gcloud command to submit your job, specify the main module name in the --module-name argument. Refer to Python Packages to figure out the module name.
region - The Google Cloud Compute region where you want your job to run. You should run your training job in the same region as the Cloud Storage bucket that stores your training data. Select a region from here or use the default 'us-central1'.
runtime-version - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
python-version - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.
scale-tier - A scale tier specifying the type of processing cluster to run your job on. This can be the CUSTOM scale tier, in which case you also explicitly specify the number and type of machines to use.
config - Path to the job configuration file. This file should be a YAML document (JSON also accepted) containing a Job resource as defined in the API
Note: Check to make sure gcloud is set to the current PROJECT_ID
End of explanation
"""
! gcloud ml-engine jobs submit training auto_mpg_hp_tuning_$(date +"%Y%m%d_%H%M%S") \
--job-dir $JOB_DIR \
--package-path $TRAINER_PACKAGE_PATH \
--module-name $MAIN_TRAINER_MODULE \
--region $REGION \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION \
--scale-tier BASIC \
--config $HPTUNING_CONFIG
"""
Explanation: Submit the training job.
End of explanation
"""
! gsutil ls $JOB_DIR/*
"""
Explanation: [Optional] StackDriver Logging
You can view the logs for your training job:
1. Go to https://console.cloud.google.com/
1. Select "Logging" in left-hand pane
1. In left-hand pane, go to "AI Platform" and select Jobs
1. In filter by prefix, use the value of $JOB_NAME to view the logs
On the logging page of your model, you can view the different results for each HP tuning job.
Example:
{
"trialId": "2",
"hyperparameters": {
"selection": "random",
"max_iter": "1892",
"tol": "0.0609819896050862",
"alpha": "4.3704164028167725"
},
"finalMetric": {
"trainingStep": "1000",
"objectiveValue": 0.8658283435394591
}
}
[Optional] Verify Model File in GCS
View the contents of the destination model folder to verify that all 30 model files have indeed been uploaded to GCS.
Note: The model can take a few minutes to train and show up in GCS.
End of explanation
"""
|
feststelltaste/software-analytics
|
notebooks/Reading a Git log file output with Pandas.ipynb
|
gpl-3.0
|
with open (r'data/gitlog_aim42.log') as log:
[print(line, end='') for line in log.readlines()[:8]]
"""
Explanation: Context
Reading data from a software version control system can be pretty useful if you want to answer some evolutionary questions like
* Who are our main committers to the software?
* Are there any areas in the code where only one developer knows of?
* Where were we working on the last months?
In my previous notebook, I showed you how to read a Git repository directly in Python with Pandas and GitPython. As much as I like that approach (because everything is in one place and therefore reproducible), it's (currently) very slow while reading all the statistics information (but I'll work on that!). What I want to have now is a really fast method to read in a complete Git repository.
I take this opportunity to show you how to read any kind of structured, linear data into Pandas' <tt>DataFrame</tt>. The general rule of thumb is: As long as you see a pattern in the raw data, Pandas can read and tame it, too!
The idea
We are taking a shortcut for retrieving the commit history by exporting it into a log file. You can use e. g.
<pre>
git log --all --numstat --pretty=format:'--%h--%ad--%aN' --no-renames > git.log
</pre>
to do this. This will output a file with all the log information of a repository.
In this notebook, we analyze the Git repository of aim42 (an open book project about how to improve legacy systems).
The first entries of that file look something like this:
End of explanation
"""
import pandas as pd
commits = pd.read_csv("data/gitlog_aim42.log",
sep="\u0012",
header=None,
names=['raw'])
commits.head()
"""
Explanation: For each commit, we choose to create a header line with the following commit info (by using <tt>--pretty=format:'--%h--%ad--%aN'</tt>):
<pre>
--fa1ca6f--Thu Dec 22 08:04:18 2016 +0100--feststelltaste
</pre>
It contains the SHA key, the timestamp as well as the author's name of the commit, separated by <tt>--</tt>.
For each other row, we got some statistics about the modified files:
<pre>
2 0 src/main/asciidoc/appendices/bibliography.adoc
</pre>
It contains the number of lines inserted, the number of lines deleted and the relative path of the file. With a little trick and a little bit of data wrangling, we can read that information into a nicely structured DataFrame.
Let's get started!
Import the data
First, I'll show you my approach on how to read nearly everything into a <tt>DataFrame</tt>. The key is to use Pandas' <tt>read_csv</tt> for reading "non-character separated values". How to do that? We simply choose a separator that doesn't occur in the file that we want to read. My favorite character for this is the "DEVICE CONTROL TWO" character U+0012. I haven't encountered a situation yet where this character was included in a data set.
We just read our <tt>git.log</tt> file without any headers (because there are none) and give the only column a nice name.
End of explanation
"""
commit_marker = commits[
commits['raw'].str.startswith("--")]
commit_marker.head()
"""
Explanation: Data Wrangling
OK, but now we have a <strike>problem</strike> data wrangling challenge. We have the commit info as well as the statistic for the modified file in one column, but they don't belong together. What we want is to have the commit info along with the file statistics in separate columns to get some serious analysis started.
Commit info
Let's treat the commit info first. Luckily, we set some kind of anchor or marker to identify the commit info: Each commit info starts with a <tt>--</tt>. So let's extract all the commit info from the original <tt>commits</tt> <tt>DataFrame</tt>.
End of explanation
"""
commit_info = commit_marker['raw'].str.extract(
r"^--(?P<sha>.*?)--(?P<date>.*?)--(?P<author>.*?)$",
expand=True)
commit_info['date'] = pd.to_datetime(commit_info['date'])
commit_info.head()
"""
Explanation: With this, we can focus on extracting the information of a commit info row. The next command could be looking a little frightening, but don't worry. We go through it step by step.
End of explanation
"""
file_stats_marker = commits[
~commits.index.isin(commit_info.index)]
file_stats_marker.head()
"""
Explanation: We want to extract some data from the <tt>raw</tt> column. For this, we use the <tt>extract</tt> method on the string representation (note the <tt>str</tt>) of all the rows. This method expects a regular expression. We provide our own regex
<pre>
^--(?P<sha>.*?)--(?P<date>.*?)--(?P<author>.*?)$
</pre>
that works as follows:
<tt>^</tt>: the beginning of the row
<tt>--</tt>: the two dashes that we choose and are used in the git log file as separator between the entries
<tt>(?P<sha>.*?)--</tt>: a named match group (marked by the <tt>(</tt> and <tt>)</tt>) with the name <tt>sha</tt> that matches any characters (<tt>.*</tt>) non-greedily (<tt>?</tt>), i.e. only up to the next <tt>--</tt> separator.
and so on until
<tt>$</tt>: the marker for the end of the row (actually, <tt>^</tt> and <tt>$</tt> aren't needed, but it looks nicer from a regex string's perspective in my eyes ;-) )
I use these ugly looking, named match groups because then the name of such a group will be used by Pandas for the name of the column (therefore we avoid renaming the columns later on).
The <tt>expand=True</tt> keyword delivers a <tt>DataFrame</tt> with columns for each detected regex group.
We simply store the result into a new <tt>DataFrame</tt> variable <tt>commit_info</tt>.
Because we've worked with the string representation of the row, Pandas didn't recognize the right data types for our newly created columns. That's why we need to cast the <tt>date</tt> column to the right type.
OK, this part is ready, let's have a look at the file statistics!
File statistics
Every row that is not a commit info row is a file statistics row. So we just reuse the index of our already prepared <tt>commit_info</tt> <tt>DataFrame</tt> to get all the other data by saying "give me all commits that are not in the index of the <tt>commit_info</tt>'s <tt>DataFrame</tt>".
End of explanation
"""
file_stats = file_stats_marker['raw'].str.split(
"\t", expand=True)
file_stats = file_stats.rename(
columns={ 0: "insertions", 1: "deletions", 2: "filename"})
file_stats['insertions'] = pd.to_numeric(
file_stats['insertions'], errors='coerce')
file_stats['deletions'] = pd.to_numeric(
file_stats['deletions'], errors='coerce')
file_stats.head()
"""
Explanation: Luckily, the row's data is just a tab-separated string that we can easily split with the <tt>split</tt> method. We expand the result to get a <tt>DataFrame</tt>, rename the default columns to something that makes more sense and adjust some data types. For the latter, we use the keyword <tt>coerce</tt>, which lets <tt>to_numeric</tt> return <tt>NaN</tt> for all entries that are not a number.
End of explanation
"""
commit_info.reindex(commits.index).head(3)
"""
Explanation: Putting it all together
Now we have three parts: all commits, the separated commit info and the file statistics.
We only need to glue the commit info and the file statistics together into a normalized <tt>DataFrame</tt>. For this, we have to make some adjustments to the indexes.
For the commit info, we want to have each info for each file statistics row. That means we reindex the commit info by using the index of the <tt>commits</tt> <tt>DataFrame</tt>...
End of explanation
"""
commit_data = commit_info.reindex(
commits.index).fillna(method="ffill")
commit_data.head()
"""
Explanation: ...and fill the missing values for the file statistics' rows to get the needed structure. Together, this is done like the following:
End of explanation
"""
commit_data = commit_data[~commit_data.index.isin(commit_info.index)]
commit_data.head()
"""
Explanation: After filling the file statistics rows, we can throw away the dedicated commit info rows by reusing the index from above (look at the index to see this clearly).
End of explanation
"""
commit_data = commit_data.join(file_stats)
commit_data.head()
"""
Explanation: The easy step afterward is to join the <tt>file_stats</tt> <tt>DataFrame</tt> with the <tt>commit_data</tt>.
End of explanation
"""
%%time
import pandas as pd
commits = pd.read_csv(r'C:\dev\repos\aim42\git.log', sep="\u0012", header=None, names=['raw'])
commit_marker = commits[commits['raw'].str.startswith("--",na=False)]
commit_info = commit_marker['raw'].str.extract(r"^--(?P<sha>.*?)--(?P<date>.*?)--(?P<author>.*?)$", expand=True)
commit_info['date'] = pd.to_datetime(commit_info['date'])
file_stats_marker = commits[~commits.index.isin(commit_info.index)]
file_stats = file_stats_marker['raw'].str.split("\t", expand=True)
file_stats = file_stats.rename(columns={0: "insertions", 1: "deletions", 2: "filename"})
file_stats['insertions'] = pd.to_numeric(file_stats['insertions'], errors='coerce')
file_stats['deletions'] = pd.to_numeric(file_stats['deletions'], errors='coerce')
commit_data = commit_info.reindex(commits.index).fillna(method="ffill")
commit_data = commit_data[~commit_data.index.isin(commit_info.index)]
commit_data = commit_data.join(file_stats)
"""
Explanation: We're done!
Complete code block
Too much code to look through? Here is everything from above in a condensed format.
End of explanation
"""
%matplotlib inline
timed_commits = commit_data.set_index(pd.DatetimeIndex(commit_data['date']))[['insertions', 'deletions']].resample('1D').sum()
(timed_commits['insertions'] - timed_commits['deletions']).cumsum().fillna(method='ffill').plot()
"""
Explanation: Just some milliseconds to run through, not bad!
Summary
In this notebook, I showed you how to read some imperfectly structured data via the non-character separator trick. I also showed you how to transform the rows that contain multiple kinds of data into one nicely structured <tt>DataFrame</tt>.
Now that we have the Git repository <tt>DataFrame</tt>, we can do some nice things with it e. g. visualizing the code churn of a project, but that's a story for another notebook! But to give you a short preview:
End of explanation
"""
|
roebius/deeplearning1_keras2
|
nbs/char-rnn.ipynb
|
apache-2.0
|
path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read().lower()
print('corpus length:', len(text))
!tail -n 25 {path}
chars = sorted(list(set(text)))
vocab_size = len(chars)+1
print('total chars:', vocab_size)
chars.insert(0, "\0")
''.join(chars[1:-6])
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
idx = [char_indices[c] for c in text]
idx[:10]
''.join(indices_char[i] for i in idx[:70])
"""
Explanation: Setup
We haven't really looked into the detail of how this works yet - so this is provided for self-study for those who are interested. We'll look at it closely next week.
End of explanation
"""
maxlen = 40
sentences = []
next_chars = []
for i in range(0, len(idx) - maxlen+1):
sentences.append(idx[i: i + maxlen])
next_chars.append(idx[i+1: i+maxlen+1])
print('nb sequences:', len(sentences))
sentences = np.concatenate([[np.array(o)] for o in sentences[:-2]])
next_chars = np.concatenate([[np.array(o)] for o in next_chars[:-2]])
sentences.shape, next_chars.shape
n_fac = 24
model=Sequential([
Embedding(vocab_size, n_fac, input_length=maxlen),
LSTM(units=512, input_shape=(n_fac,),return_sequences=True, dropout=0.2, recurrent_dropout=0.2,
implementation=2),
Dropout(0.2),
LSTM(512, return_sequences=True, dropout=0.2, recurrent_dropout=0.2,
implementation=2),
Dropout(0.2),
TimeDistributed(Dense(vocab_size)),
Activation('softmax')
])
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
"""
Explanation: Preprocess and create model
End of explanation
"""
def print_example():
seed_string="ethics is a basic foundation of all that"
for i in range(320):
x=np.array([char_indices[c] for c in seed_string[-40:]])[np.newaxis,:] # [-40:] picks up the last 40 chars
preds = model.predict(x, verbose=0)[0][-1] # [-1] picks up the last char
preds = preds/np.sum(preds)
next_char = choice(chars, p=preds)
seed_string = seed_string + next_char
print(seed_string)
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
model.optimizer.lr=0.001
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
model.optimizer.lr=0.0001
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
model.save_weights('data/char_rnn.h5')
model.optimizer.lr=0.00001
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
model.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, epochs=1)
print_example()
print_example()
model.save_weights('data/char_rnn.h5')
"""
Explanation: Train
End of explanation
"""
|
indiependente/Social-Networks-Structure
|
results/RandomGraph Results Analysis.ipynb
|
mit
|
#!/usr/bin/python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from stats import parse_results, get_percentage, get_avg_per_seed, draw_pie, draw_bars, draw_bars_comparison, draw_avgs
"""
Explanation: Random Graph Experiments Output Visualization
End of explanation
"""
pr, eigen, bet = parse_results('test_rdbg.txt')
"""
Explanation: Parse results
End of explanation
"""
draw_pie(get_percentage(pr))
"""
Explanation: PageRank Seeds Percentage
How many times the "Top X" nodes from PageRank have led to the max infection
End of explanation
"""
draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(pr)+[(0, np.mean(pr[:,1]))]))
"""
Explanation: Avg adopters per seed comparison
End of explanation
"""
draw_pie(get_percentage(eigen))
"""
Explanation: Eigenvector Seeds Percentage
How many times the "Top X" nodes from Eigenvector have led to the max infection
End of explanation
"""
draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(eigen)+[(0, np.mean(eigen[:,1]))]))
"""
Explanation: Avg adopters per seed comparison
End of explanation
"""
draw_pie(get_percentage(bet))
"""
Explanation: Betweenness Seeds Percentage
How many times the "Top X" nodes from Betweenness have led to the max infection
End of explanation
"""
draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(bet)+[(0, np.mean(bet[:,1]))]))
"""
Explanation: Avg adopters per seed comparison
End of explanation
"""
draw_bars(np.sort(pr.view('i8,i8'), order=['f0'], axis=0).view(np.int),
np.sort(eigen.view('i8,i8'), order=['f0'], axis=0).view(np.int),
np.sort(bet.view('i8,i8'), order=['f0'], axis=0).view(np.int))
"""
Explanation: 100 runs adopters comparison
End of explanation
"""
pr_mean = np.mean(pr[:,1])
pr_mean_seed = np.mean(pr[:,0])
print 'Avg Seed:',pr_mean_seed, 'Avg adopters:', pr_mean
"""
Explanation: Centrality Measures Averages
PageRank avg adopters and seed
End of explanation
"""
eigen_mean = np.mean(eigen[:,1])
eigen_mean_seed = np.mean(eigen[:,0])
print 'Avg Seed:',eigen_mean_seed, 'Avg adopters:',eigen_mean
"""
Explanation: Eigenvector avg adopters and seed
End of explanation
"""
bet_mean = np.mean(bet[:,1])
bet_mean_seed = np.mean(bet[:,0])
print 'Avg Seed:',bet_mean_seed, 'Avg adopters:',bet_mean
draw_avgs([pr_mean, eigen_mean, bet_mean])
"""
Explanation: Betweenness avg adopters and seed
End of explanation
"""
|
anhquan0412/deeplearning_fastai
|
deeplearning1/nbs/statefarm-sample.ipynb
|
apache-2.0
|
from __future__ import division, print_function
%matplotlib inline
# path = "data/state/"
path = "data/state/sample/"
from importlib import reload # Python 3
import utils; reload(utils)
from utils import *
from IPython.display import FileLink
batch_size=64
#batch_size=1
"""
Explanation: Enter State Farm
End of explanation
"""
%cd data/state
%cd train
%mkdir ../sample
%mkdir ../sample/train
%mkdir ../sample/valid
for d in glob('c?'):
os.mkdir('../sample/train/'+d)
os.mkdir('../sample/valid/'+d)
from shutil import copyfile
g = glob('c?/*.jpg')
shuf = np.random.permutation(g)
for i in range(1500): copyfile(shuf[i], '../sample/train/' + shuf[i])
%cd ../valid
g = glob('c?/*.jpg')
shuf = np.random.permutation(g)
for i in range(1000): copyfile(shuf[i], '../sample/valid/' + shuf[i])
%cd ../../../..
%mkdir data/state/results
%mkdir data/state/sample/test
"""
Explanation: Create sample
The following assumes you've already created your validation set - remember that the training and validation set should contain different drivers, as mentioned on the Kaggle competition page.
End of explanation
"""
batches = get_batches(path+'train', batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames,
test_filename) = get_classes(path)
steps_per_epoch = int(np.ceil(batches.samples/batch_size))
validation_steps = int(np.ceil(val_batches.samples/(batch_size*2)))
"""
Explanation: Create batches
End of explanation
"""
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Flatten(),
Dense(10, activation='softmax')
])
"""
Explanation: Basic models
Linear model
First, we try the simplest model and use default parameters. Note the trick of making the first layer a batchnorm layer - that way we don't have to worry about normalizing the input ourselves.
End of explanation
"""
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches,
validation_steps=validation_steps)
"""
Explanation: As you can see below, this training is going nowhere...
End of explanation
"""
model.summary()
"""
Explanation: Let's first check the number of parameters to see that there's enough parameters to find some useful relationships:
End of explanation
"""
10*3*224*224
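# 3*224*224 = 150,528 input values feeding 10 output units gives 1,505,280 weights; the Dense
# layer adds 10 biases and the initial BatchNorm layer a handful more parameters.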
"""
Explanation: Over 1.5 million parameters - that should be enough. Incidentally, it's worth checking you understand why this is the number of parameters in this layer:
End of explanation
"""
np.round(model.predict_generator(batches, int(np.ceil(batches.samples/batch_size)))[:10],2)
"""
Explanation: Since we have a simple model with no regularization and plenty of parameters, it seems most likely that our learning rate is too high. Perhaps it is jumping to a solution where it predicts one or two classes with high confidence, so that it can give a zero prediction to as many classes as possible - that's the best approach for a model that is no better than random, and that is likely where we would end up with a high learning rate. So let's check:
End of explanation
"""
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Flatten(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches,
validation_steps=validation_steps)
"""
Explanation: Our hypothesis was correct. It's nearly always predicting class 1 or 6, with very high confidence. So let's try a lower learning rate:
End of explanation
"""
model.optimizer.lr=0.001
model.fit_generator(batches, steps_per_epoch, epochs=4, validation_data=val_batches,
validation_steps=validation_steps)
"""
Explanation: Great - we found our way out of that hole... Now we can increase the learning rate and see where we can get to.
End of explanation
"""
rnd_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=True)
val_res = [model.evaluate_generator(rnd_batches, int(np.ceil(rnd_batches.samples/(batch_size*2)))) for i in range(10)]
np.round(val_res, 2)
"""
Explanation: We're stabilizing at validation accuracy of 0.39. Not great, but a lot better than random. Before moving on, let's check that our validation set on the sample is large enough that it gives consistent results:
End of explanation
"""
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Flatten(),
Dense(10, activation='softmax', kernel_regularizer=l2(0.01))
])
model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches,
validation_steps=validation_steps)
model.optimizer.lr=0.001
model.fit_generator(batches, steps_per_epoch, epochs=4, validation_data=val_batches,
validation_steps=validation_steps)
"""
Explanation: Yup, pretty consistent - if we see improvements of 3% or more, it's probably not random, based on the above samples.
L2 regularization
The previous model is over-fitting a lot, but we can't use dropout since we only have one layer. We can try to decrease overfitting in our model by adding l2 regularization (i.e. add the sum of squares of the weights to our loss function):
End of explanation
"""
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Flatten(),
Dense(100, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches,
validation_steps=validation_steps)
model.optimizer.lr = 0.01
model.fit_generator(batches, steps_per_epoch, epochs=5, validation_data=val_batches,
validation_steps=validation_steps)
"""
Explanation: Looks like we can get a bit over 50% accuracy this way. This will be a good benchmark for our future models - if we can't beat 50%, then we're not even beating a linear model trained on a sample, so we'll know that's not a good approach.
Single hidden layer
The next simplest model is to add a single hidden layer.
End of explanation
"""
def conv1(batches):
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Conv2D(32,(3,3), activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Conv2D(64,(3,3), activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches,
validation_steps=validation_steps)
model.optimizer.lr = 0.001
model.fit_generator(batches, steps_per_epoch, epochs=4, validation_data=val_batches,
validation_steps=validation_steps)
return model
conv1(batches)
"""
Explanation: Not looking very encouraging... which isn't surprising since we know that CNNs are a much better choice for computer vision problems. So we'll try one.
Single conv layer
2 conv layers with max pooling followed by a simple dense network is a good simple CNN to start with:
End of explanation
"""
gen_t = image.ImageDataGenerator(width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
"""
Explanation: The training set here is very rapidly reaching a very high accuracy. So if we could regularize this, perhaps we could get a reasonable result.
So, what kind of regularization should we try first? As we discussed in lesson 3, we should start with data augmentation.
Data augmentation
To find the best data augmentation parameters, we can try each type of data augmentation, one at a time. For each type, we can try four very different levels of augmentation, and see which is the best. In the steps below we've only kept the single best result we found. We're using the CNN we defined above, since we have already observed it can model the data quickly and accurately.
Width shift: move the image left and right -
End of explanation
"""
gen_t = image.ImageDataGenerator(height_shift_range=0.05)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
"""
Explanation: Height shift: move the image up and down -
End of explanation
"""
gen_t = image.ImageDataGenerator(shear_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
"""
Explanation: Random shear angles (max in radians) -
End of explanation
"""
gen_t = image.ImageDataGenerator(rotation_range=15)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
"""
Explanation: Rotation: max in degrees -
End of explanation
"""
gen_t = image.ImageDataGenerator(channel_shift_range=20)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
"""
Explanation: Channel shift: randomly changing the R,G,B colors -
End of explanation
"""
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
"""
Explanation: And finally, putting it all together!
End of explanation
"""
model.optimizer.lr = 0.0001
model.fit_generator(batches, steps_per_epoch, epochs=5, validation_data=val_batches,
validation_steps=validation_steps)
"""
Explanation: At first glance, this isn't looking encouraging, since the validation set is poor and getting worse. But the training set is getting better, and still has a long way to go in accuracy - so we should try annealing our learning rate and running more epochs, before we make a decision.
End of explanation
"""
model.fit_generator(batches, steps_per_epoch, epochs=25, validation_data=val_batches,
validation_steps=validation_steps)
"""
Explanation: Lucky we tried that - we're starting to make progress! Let's keep going.
End of explanation
"""
|
dataewan/deep-learning
|
autoencoder/Simple_Autoencoder_Solution.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
"""
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
# Size of the encoding layer (the hidden layer)
encoding_dim = 32
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from the logits
decoded = tf.nn.sigmoid(logits, name='output')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
"""
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
"""
# Create the session
sess = tf.Session()
"""
Explanation: Training
End of explanation
"""
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
|
jmhsi/justin_tinker
|
data_science/courses/temp/courses/ml1/lesson2-rf_interpretation.ipynb
|
apache-2.0
|
%load_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from IPython.display import display
from sklearn import metrics
set_plot_sizes(12,14,16)
"""
Explanation: Random Forest Model interpretation
End of explanation
"""
PATH = "data/bulldozers/"
df_raw = pd.read_feather('tmp/raw')
df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice')
def split_vals(a,n): return a[:n], a[n:]
n_valid = 12000
n_trn = len(df_trn)-n_valid
X_train, X_valid = split_vals(df_trn, n_trn)
y_train, y_valid = split_vals(y_trn, n_trn)
raw_train, raw_valid = split_vals(df_raw, n_trn)
def rmse(x,y): return math.sqrt(((x-y)**2).mean())
def print_score(m):
res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
m.score(X_train, y_train), m.score(X_valid, y_valid)]
if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
print(res)
df_raw
"""
Explanation: Load in our data from last lesson
End of explanation
"""
set_rf_samples(50000)
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
"""
Explanation: Confidence based on tree variance
For model interpretation, there's no need to use the full dataset on each tree - using a subset will be both faster, and also provide better interpretability (since an overfit model will not provide much variance across trees).
End of explanation
"""
%time preds = np.stack([t.predict(X_valid) for t in m.estimators_])
np.mean(preds[:,0]), np.std(preds[:,0])
"""
Explanation: We saw how the model averages predictions across the trees to get an estimate - but how can we know the confidence of the estimate? One simple way is to use the standard deviation of predictions, instead of just the mean. This tells us the relative confidence of predictions - that is, for rows where the trees give very different results, you would want to be more cautious of using those results, compared to cases where they are more consistent. Using the same example as in the last lesson when we looked at bagging:
End of explanation
"""
def get_preds(t): return t.predict(X_valid)
%time preds = np.stack(parallel_trees(m, get_preds))
np.mean(preds[:,0]), np.std(preds[:,0])
"""
Explanation: When we use python to loop through trees like this, we're calculating each in series, which is slow! We can use parallel processing to speed things up:
End of explanation
"""
x = raw_valid.copy()
x['pred_std'] = np.std(preds, axis=0)
x['pred'] = np.mean(preds, axis=0)
x.Enclosure.value_counts().plot.barh();
flds = ['Enclosure', 'SalePrice', 'pred', 'pred_std']
enc_summ = x[flds].groupby('Enclosure', as_index=False).mean()
enc_summ
enc_summ = enc_summ[~pd.isnull(enc_summ.SalePrice)]
enc_summ.plot('Enclosure', 'SalePrice', 'barh', xlim=(0,11));
enc_summ.plot('Enclosure', 'pred', 'barh', xerr='pred_std', alpha=0.6, xlim=(0,11));
"""
Explanation: We can see that different trees are giving different estimates for this auction. In order to see how prediction confidence varies, we can add this into our dataset.
End of explanation
"""
raw_valid.ProductSize.value_counts().plot.barh();
flds = ['ProductSize', 'SalePrice', 'pred', 'pred_std']
summ = x[flds].groupby(flds[0]).mean()
summ
(summ.pred_std/summ.pred).sort_values(ascending=False)
"""
Explanation: Question: Why are the predictions nearly exactly right, but the error bars are quite wide?
End of explanation
"""
fi = rf_feat_importance(m, df_trn); fi[:10]
fi.plot('cols', 'imp', figsize=(10,6), legend=False);
def plot_fi(fi): return fi.plot('cols', 'imp', 'barh', figsize=(12,7), legend=False)
plot_fi(fi[:30]);
to_keep = fi[fi.imp>0.005].cols; len(to_keep)
df_keep = df_trn[to_keep].copy()
X_train, X_valid = split_vals(df_keep, n_trn)
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5,
n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
fi = rf_feat_importance(m, df_keep)
plot_fi(fi);
"""
Explanation: Feature importance
It's not normally enough just to know that a model can make accurate predictions - we also want to know how it's making predictions. The most important way to see this is with feature importance.
End of explanation
"""
df_trn2, y_trn, nas = proc_df(df_raw, 'SalePrice', max_n_cat=7)
X_train, X_valid = split_vals(df_trn2, n_trn)
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.6, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
fi = rf_feat_importance(m, df_trn2)
plot_fi(fi[:25]);
"""
Explanation: One-hot encoding
End of explanation
"""
from scipy.cluster import hierarchy as hc
corr = np.round(scipy.stats.spearmanr(df_keep).correlation, 4)
corr_condensed = hc.distance.squareform(1-corr)
z = hc.linkage(corr_condensed, method='average')
fig = plt.figure(figsize=(16,10))
dendrogram = hc.dendrogram(z, labels=df_keep.columns, orientation='left', leaf_font_size=16)
plt.show()
"""
Explanation: Removing redundant features
One thing that makes this harder to interpret is that there seem to be some variables with very similar meanings. Let's try to remove redundant features.
End of explanation
"""
def get_oob(df):
m = RandomForestRegressor(n_estimators=30, min_samples_leaf=5, max_features=0.6, n_jobs=-1, oob_score=True)
x, _ = split_vals(df, n_trn)
m.fit(x, y_train)
return m.oob_score_
"""
Explanation: Let's try removing some of these related features to see if the model can be simplified without impacting the accuracy.
End of explanation
"""
get_oob(df_keep)
"""
Explanation: Here's our baseline.
End of explanation
"""
for c in ('saleYear', 'saleElapsed', 'fiModelDesc', 'fiBaseModel', 'Grouser_Tracks', 'Coupler_System'):
print(c, get_oob(df_keep.drop(c, axis=1)))
"""
Explanation: Now we try removing each variable one at a time.
End of explanation
"""
to_drop = ['saleYear', 'fiBaseModel', 'Grouser_Tracks']
get_oob(df_keep.drop(to_drop, axis=1))
"""
Explanation: It looks like we can try one from each group for removal. Let's see what that does.
End of explanation
"""
df_keep.drop(to_drop, axis=1, inplace=True)
X_train, X_valid = split_vals(df_keep, n_trn)
np.save('tmp/keep_cols.npy', np.array(df_keep.columns))
keep_cols = np.load('tmp/keep_cols.npy')
df_keep = df_trn[keep_cols]
"""
Explanation: Looking good! Let's use this dataframe from here. We'll save the list of columns so we can reuse it later.
End of explanation
"""
reset_rf_samples()
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
"""
Explanation: And let's see how this model looks on the full dataset.
End of explanation
"""
from pdpbox import pdp
from plotnine import *
set_rf_samples(50000)
"""
Explanation: Partial dependence
End of explanation
"""
df_trn2, y_trn, nas = proc_df(df_raw, 'SalePrice', max_n_cat=7)
X_train, X_valid = split_vals(df_trn2, n_trn)
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.6, n_jobs=-1)
m.fit(X_train, y_train);
plot_fi(rf_feat_importance(m, df_trn2)[:10]);
df_raw.plot('YearMade', 'saleElapsed', 'scatter', alpha=0.01, figsize=(10,8));
x_all = get_sample(df_raw[df_raw.YearMade>1930], 500)
ggplot(x_all, aes('YearMade', 'SalePrice'))+stat_smooth(se=True, method='loess')
x = get_sample(X_train[X_train.YearMade>1930], 500)
def plot_pdp(feat, clusters=None, feat_name=None):
feat_name = feat_name or feat
p = pdp.pdp_isolate(m, x, feat)
return pdp.pdp_plot(p, feat_name, plot_lines=True,
cluster=clusters is not None, n_cluster_centers=clusters)
plot_pdp('YearMade')
plot_pdp('YearMade', clusters=5)
feats = ['saleElapsed', 'YearMade']
p = pdp.pdp_interact(m, x, feats)
pdp.pdp_interact_plot(p, feats)
plot_pdp(['Enclosure_EROPS w AC', 'Enclosure_EROPS', 'Enclosure_OROPS'], 5, 'Enclosure')
df_raw.YearMade[df_raw.YearMade<1950] = 1950
df_keep['age'] = df_raw['age'] = df_raw.saleYear-df_raw.YearMade
X_train, X_valid = split_vals(df_keep, n_trn)
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.6, n_jobs=-1)
m.fit(X_train, y_train)
plot_fi(rf_feat_importance(m, df_keep));
"""
Explanation: This next analysis will be a little easier if we use the 1-hot encoded categorical variables, so let's load them up again.
End of explanation
"""
from treeinterpreter import treeinterpreter as ti
df_train, df_valid = split_vals(df_raw[df_keep.columns], n_trn)
row = X_valid.values[None,0]; row
prediction, bias, contributions = ti.predict(m, row)
prediction[0], bias[0]
idxs = np.argsort(contributions[0])
[o for o in zip(df_keep.columns[idxs], df_valid.iloc[0][idxs], contributions[0][idxs])]
contributions[0].sum()
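# treeinterpreter decomposes each prediction as prediction = bias + sum(contributions),
# so bias[0] + contributions[0].sum() should reproduce prediction[0] from above.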
"""
Explanation: Tree interpreter
End of explanation
"""
df_ext = df_keep.copy()
df_ext['is_valid'] = 1
df_ext.is_valid[:n_trn] = 0
x, y = proc_df(df_ext, 'is_valid')
m = RandomForestClassifier(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(x, y);
m.oob_score_
fi = rf_feat_importance(m, x); fi[:10]
feats=['SalesID', 'saleElapsed', 'MachineID']
(X_train[feats]/1000).describe()
(X_valid[feats]/1000).describe()
x.drop(feats, axis=1, inplace=True)
m = RandomForestClassifier(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(x, y);
m.oob_score_
fi = rf_feat_importance(m, x); fi[:10]
set_rf_samples(50000)
feats=['SalesID', 'saleElapsed', 'MachineID', 'age', 'YearMade', 'saleDayofyear']
X_train, X_valid = split_vals(df_keep, n_trn)
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
for f in feats:
df_subs = df_keep.drop(f, axis=1)
X_train, X_valid = split_vals(df_subs, n_trn)
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print(f)
print_score(m)
reset_rf_samples()
df_subs = df_keep.drop(['SalesID', 'MachineID', 'saleDayofyear'], axis=1)
X_train, X_valid = split_vals(df_subs, n_trn)
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, max_features=0.5, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
plot_fi(rf_feat_importance(m, X_train));
np.save('tmp/subs_cols.npy', np.array(df_subs.columns))
"""
Explanation: Extrapolation
End of explanation
"""
m = RandomForestRegressor(n_estimators=160, max_features=0.5, n_jobs=-1, oob_score=True)
%time m.fit(X_train, y_train)
print_score(m)
"""
Explanation: Our final model!
End of explanation
"""
|
grokkaine/biopycourse
|
day1/data.ipynb
|
cc0-1.0
|
#use like this: cat file.txt | python script.py
import sys
for line in sys.stdin:
# do stuff
print(line)
"""
Explanation: Python and the data
Text and binary: streaming, serialization, regular expression
The Web: XML parsing, html scraping, web frameworks, API calls
Data Storage: SQLite, SQL querrying, Chunking and HDF5, pytables
Python and other languages: C, R, Julia
Text and binary
Text manipulation is quite simplified in Python thanks to a wide variety of packages. It is generally advisable not to reinvent the wheel, so only perform quick-and-dirty regular-expression-based text parsing when it is really necessary. This is because complex RE parsing is hard to decode and test properly.
File streaming
In the tutorial we exercised raw text file opening in Python. What if we want to read text from the standard input? (Useful for pipelining, and generally for saving space.)
End of explanation
"""
import io
# Writing to a buffer
output = io.StringIO()
output.write('First stream into the buffer. ')
print('Second stream.', file=output)
# Retrieve the value written
print(output.getvalue())
output.close() # discard buffer memory
# Initialize a read buffer
input = io.StringIO('Initial value for read buffer')
# Read from the buffer
print(input.read())
print("Second read output:",input.read())
"""
Explanation: Text streaming involves using only the communication layer and the RAM, and not storing the data on disk immediately. When is this useful:
- You want to use input from a dozen super large archived FASTQ files.
- You want to asynchronously write to a couple of output files, for instance when you are using multithreading or multiprocessing.
- You want to pipe the result of a Python computation straight into some program running on another machine, another cluster node etc.
End of explanation
"""
d = {'first': [1,"two"], 'second': set([3, 4, 'five'])}
import pickle
with open('dumpfile.pkl','wb') as fout:
pickle.dump(d, fout)
with open('dumpfile.pkl','rb') as fin:
d2 = pickle.load(fin)
print(d2)
"""
Explanation: Task:
- create a text file and deposit random tab separated numbers in it, then archive it using the gzip module.
- Open the file as a byte stream input and decode it to ascii using the io.TextIOWrapper class.
- Make a second version where you open the stream outside python using the gzip program (or tar). Time the difference.
Pickling
This, in Python jargon, means object serialization: a very important feature allowing you to save the contents of a Python data structure directly to disk, in a compact binary format.
End of explanation
"""
import json
#json_string = json.dumps([1, 2, 3, "a", "b", "c"])
d = {'first': [1,"two"], 'second': [3, 4, 'five']}
json_string = json.dumps(d)
print(json_string)
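# ...and back into native Python data structures:
d2 = json.loads(json_string)
print(d2['second'])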
"""
Explanation: JSON
A short word for JavaScript Object Notation, .json became ubiquitous as a simple data interchange format mainly in remote Web API calls and microtransactions. Json is easily loaded into native Python datastructures. An example:
End of explanation
"""
%%R
library(feather)
path <- "my_data.feather"
write_feather(df, path)
df <- read_feather(path)
import feather
path = 'my_data.feather'
feather.write_dataframe(df, path)
df = feather.read_dataframe(path)
"""
Explanation: Feather
When it comes to fast serialization between R and Python, the current champion is Feather. However, since any disk operation is limited by the mechanics of the disk, for extreme performance it is recommended to keep the serialized objects in memory or use SSDs.
End of explanation
"""
import sys
f = open('data/Homo_sapiens.GRCh38.pep.all.fa','r')
peptides = {}
for l in f:
if l[0]=='>':
#print l.strip().split()
record = {}
r = l.strip('\n').split()
pepid = r[0][1:]
record['pep'] = 1 if r[1].split(':')[1]=='known' else 0
record['gene'] = r[3].split(':')[1]
record['transcript'] = r[4].split(':')[1]
peptides[pepid] = record
f.close()
## using regular expressions to count the unknown peptides (headers that do not contain 'known')
nupep2 = 0
import re
#pattern = re.compile('^>.*(known).*')
pattern = re.compile('^>((?!known).)*$')
with open('data/Homo_sapiens.GRCh38.pep.all.fa','rt') as f:
for l in f:
if pattern.search(l) is not None: nupep2 += 1
npep = len(peptides)
upep = set([pepid for pepid in peptides if peptides[pepid]['pep']==0]) #unknown peptides
nunknown = len(upep)
genes = set([peptides[pepid]['gene'] for pepid in upep])
trans = set([peptides[pepid]['transcript'] for pepid in upep])
print(npep, nupep2, nunknown, len(genes), len(trans))
with open('unknown_peptides.txt','w') as f:
for pepid in upep:
f.write('\t'.join([pepid, peptides[pepid]['gene'], peptides[pepid]['transcript']])+'\n')
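# Sketch for the chromosome task (not a full solution): in these headers the third field looks
# like 'chromosome:GRCh38:7:...', so its third ':'-separated part should be the chromosome name.
from collections import defaultdict
unknown_per_chrom = defaultdict(int)
total_per_chrom = defaultdict(int)
with open('data/Homo_sapiens.GRCh38.pep.all.fa') as fh:
    for l in fh:
        if l[0] == '>':
            r = l.strip('\n').split()
            chrom = r[2].split(':')[2]
            total_per_chrom[chrom] += 1
            if r[1].split(':')[1] != 'known':
                unknown_per_chrom[chrom] += 1
ratios = sorted(((unknown_per_chrom[c] / total_per_chrom[c], c) for c in total_per_chrom), reverse=True)
print(ratios[:5])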
"""
Explanation: Parsing and regular expressions
Used for any raw text format in biology, such as FASTA, FASTQ, PDB, VCF, GFF or SAM.
Example: FASTA parsing
Open the file containing all peptide sequences in the human body.
How many unknown peptides does it contain?
How many unique genes and transcripts are in there for the unknown peptides?
Output a tab separated file containing the gene id and transcript id for each unknown peptide.
Observation:
Usage of Biopython and pandas modules.
Task:
Order the chromosomes by the number of unknown peptides versus the total number of peptides they translate.
ENSP00000388523 pep:known chromosome:GRCh38:7:142300924:142301432:1 gene:ENSG00000226660 transcript:ENST00000455382 gene_biotype:TR_V_gene transcript_biotype:TR_V_gene
MDTWLVCWAIFSLLKAGLTEPEVTQTPSHQVTQMGQEVILRCVPISNHLYFYWYRQILGQ
KVEFLVSFYNNEISEKSEIFDDQFSVERPDGSNFTLKIRSTKLEDSAMYFCASSE
Task:
- Run the code below and figure out what I did. Python scripting is very often all about inheriting someone else's bloated abandonware and making it work for you!
End of explanation
"""
f = open('data/Homo_sapiens.GRCh38.pep.all.fa','r')
from Bio import SeqIO
fasta = SeqIO.parse(f, 'fasta')
i = 0
for record in fasta:
    name, sequence = record.id, str(record.seq)
    if 20 < len(sequence) < 100:
        i += 1
        print(i)
        print("Name", name)
        print("Sequence", sequence)
    if i > 5: break
f.close()
"""
Explanation: You have seen an example of how text processing is done in Python using the standard libraries. However, you should only do this when your task is extremely unusual. For most other cases it is preferable to use a dedicated library. Most biological formats have dedicated libraries in Python, and when only available in another tool or language it is always preferable to glue a call.
Example: the "@" character in FASTQ is also a valid confidence score. If you make a hasted script matching for "@" as the deliniation of a new record, you might also end up with a corrupted result.
Task:
Here is an example of how you should do the task above. Run this via Jupyter as well.
End of explanation
"""
import sys
import xml.etree.ElementTree as ET
tree = ET.ElementTree(file='data/curated_sbml.xml')
#tree = ET.parse(open('data/curated_sbml.xml'))
root = tree.getroot()
print(root.tag, root.attrib)
for child in root:
    print(child.tag, child.attrib)
    for child2 in child:
        print(child2.tag, child2.attrib)
#print(tree.write(sys.stdout))
for elem in root.iter('reaction'):
    print(elem.tag, elem.attrib)
for elem in root.iter('species'):
    print(elem.tag, elem.attrib)
    print(elem.get('id'))
print(tree.findall('.//reaction'))
"""
Explanation: The Web
A lot of the information today is web based, so Python has tools to help parsing the most popular web formats, web frameworks for client and server side processing, but also more mundane tasks such as web site scraping or making API calls.
XML parsing
XML is a general file format used for data interchange, especially among different applications. One of the most popular uses in biology is the SBML format, which aims to store a biological model specification, no matter how specific that model may be.
Task:
- Download a curated SBML file from the BioModels database:
http://www.ebi.ac.uk/biomodels-main/
- Find out how many reactions the file contains.
Extra task:
- Make a simplified XML file of the reactants and their k-values for each reaction.
End of explanation
"""
import pathlib
data_loc = r'D:\windata\work\biopycourse\data'
path = pathlib.Path(data_loc) / "curated_sbml.xml"
import xmltodict
with open(path,'r') as fd:
doc = xmltodict.parse(fd.read())
doc['sbml']['model']['notes']['body']['div'][1]
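# To actually list the reactions, they usually sit under listOfReactions in SBML; the exact
# nesting can differ between BioModels files, so treat this as a sketch.
reactions = doc['sbml']['model']['listOfReactions']['reaction']
print(len(reactions), 'reactions')
print(reactions[0]['@id'])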
"""
Explanation: xmltodict
The standard library option is recommended when making a program, but it is overkill when you just have to parse a file with a quick script.
Task:
- download and install xmltodict, then list the reactions using this library.
End of explanation
"""
from bs4 import BeautifulSoup
import urllib3
http = urllib3.PoolManager()
redditHtml = http.request('GET', "https://old.reddit.com/r/Python/", preload_content=False)
soup = BeautifulSoup(redditHtml, 'html.parser')
#get the HTML of the table called site Table where all the links are displayed
main_table = soup.find("div",attrs={'id':'siteTable'})
#Now we go into main_table and get every a element in it which has a class "title"
links = main_table.find_all("a",class_="title")
for_display = [(link.text, link['href']) for link in links[:5]]
for l in for_display:
print(l)
"""
Explanation: Web scraping
This is concerned with automatic information processing from the Internet.
Task:
- Mine an online KEGG pathway for its reaction elements.
BeautifulSoup is loved by hackers. Aside from html it can also parse xml.
I like to read Reddit occasionally but as a programmer I am too lazy to open the page! So I use this script to extract headlines...
End of explanation
"""
# Do not run this cell!
from flask import Flask
app = Flask("the_flask_module")
@app.route("/hello")
def hello_page():
return "I'm a hello page"
@app.route("/hello/details")
def hello_deeper():
return "I'm a details page"
app.run(host="0.0.0.0", port=5001)
"""
Explanation: Web Frameworks
As a general purpose language, Python is very popular for server side scripting. If Javascript rules as the scripting language of the web client, on the web server Python is ubiquitous due to it's fast prototyping. Only very recently Javascript started to also be popular, with frameworks like node.js.
Why would this matter for you?
- You can present your research interactively.
- Interactivity also helps you work with your own data.
- A web interface allows anyone to inspect your data or your findings.
- It allows you to link your data to public datasets and the opposite.
Flask
Flask is a very capable microframework widely used for web development.
http://flask.pocoo.org/
Task:
- Run the data/flasktest.py file and open the browser at :http://0.0.0.0:5001/hello
End of explanation
"""
from Bio import Entrez
Entrez.email = "your@mail.here" # Always tell NCBI who you are
handle = Entrez.einfo()
#result = handle.read()
record = Entrez.read(handle)
print(record.keys())
print(record["DbList"])
"""
Explanation: Django
It is worth mentioning that Django is a similarly popular yet more mature web framework; it was among the first to use a model-view-controller architecture, which simplifies reusability. One can write entire websites only from Python code and HTML templates, although in general Javascript is also used for complex websites, along with manual database configuration.
Using Jupyter for web interaction
While it is possible to turn Jupyter into an interactive web form with buttons and other standard widgets, we will not have time to do this, as it would require learning a lot of web development concepts.
However, we presented an interactive example in the Jupyter section and we will also learn how to use Python to create interactive web plots inside the plotting chapter.
Remote web API calls example
Getting information as fast as possible into our Python data structures is vital. Only as a last resort should one program one's own downloaders and parsers. When a ready-made Python option is not available, it is often possible to call libraries from other languages such as Perl, or to access web records with dedicated API calls. BioPython wraps a few API calls such as the Entrez resources. Entrez is a federated search engine over various NCBI and NIH resource databases.
End of explanation
"""
from Bio import Entrez
Entrez.email = "your@mail.here" # Always tell NCBI who you are
handle = Entrez.esearch(db="Taxonomy", term="Synechocystis")
record = Entrez.read(handle)
print(record["IdList"])
#assuming only one record is returned
handle = Entrez.efetch(db="Taxonomy", id=record["IdList"][0], retmode="xml")
records = Entrez.read(handle)
print(records[0].keys())
print(records[0]["Lineage"])
"""
Explanation: BioPython
So let us for example find the exact lineage for this amazing breed of bacteria that changed both plants and the atmosphere in the earlier days of our planet... As biologists trying to learn Python, I hope you will love BioPython at least as much as I do. A number of programmers created Bio::Perl, which to date contains a few more modules than BioPython; however, I get the feeling the Python version is more actively updated. It is unfortunate that we don't have time to explore it in great detail.
We will use it again over the course.
Aside from BioPython, a web API can be offered by virtually any website, and with a little effort one can either download a Python access package or program one's own. Functional annotation, for example, is weakly covered in Python, but DAVID offers another API independent from BioPython.
First, install with:
conda install -c https://conda.anaconda.org/anaconda biopython
End of explanation
"""
import sqlite3 as lite
import sys
snps = (
(1, 'Gene1', 52642),
(2, 'Gene2', 57127),
(3, 'Gene3', 9000),
(4, 'Gene4', 29000)
)
con = lite.connect('test.db')
with con:
cur = con.cursor()
cur.execute("DROP TABLE IF EXISTS snps")
cur.execute("CREATE TABLE snps(Id INT, GeneSYM TEXT, NucleodidePos INT)")
cur.executemany("INSERT INTO snps VALUES(?, ?, ?)", snps)
"""
Explanation: Python and the databases
Why would you ever need to know database interaction through Python?
Almost every piece of biological or even scientific data is stored in a database.
Relational databases can be interrogated with a very simple query language called SQL.
Most programs are mere interfaces to databases.
Stop pushing buttons, a bit of Python and a bit of SQL is all you need to bring you to the data!
SQLite
This is a very simple database. Most R annotation packages do not do anything but download a SQLite database onto your computer. It is faster to directly interrogate it through Python than to learn how to use a package-specific set of functions.
The code below creates a test database with a table of SNPs and inserts a few records.
End of explanation
"""
import sqlite3 as lite
import sys
con = lite.connect('test.db')
with con:
cur = con.cursor()
cur.execute("SELECT * FROM snps")
rows = cur.fetchall()
for row in rows:
print(row)
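# One more query, as a small illustration of SQL filtering with a bound parameter
# (the threshold is arbitrary):
with con:
    cur = con.cursor()
    cur.execute("SELECT GeneSYM, NucleodidePos FROM snps WHERE NucleodidePos > ?", (20000,))
    print(cur.fetchall())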
"""
Explanation: Now let us interrogate the database:
End of explanation
"""
from flask import Flask, request
from flask_restful import Resource, Api
from sqlalchemy import create_engine
from json import dumps
from flask import jsonify
db_connect = create_engine('sqlite:///test.db')
app = Flask(__name__)
api = Api(app)
class SNP(Resource):
def get(self):
conn = db_connect.connect() # connect to database
query = conn.execute("select * from snps") # run the query; the rows are converted to JSON below
return {'snips': [i[0] for i in query.cursor.fetchall()]} # fetch the first column, which is the SNP Id
class Gene_Name(Resource):
def get(self, snp_id):
conn = db_connect.connect()
query = conn.execute("select * from snps where Id =%d " %int(snp_id))
result = {'data': [dict(zip(tuple(query.keys()),i)) for i in query.cursor]}
return jsonify(result)
# adding two routes
api.add_resource(SNP, '/snps')
api.add_resource(Gene_Name, '/snps/<snp_id>')
if __name__ == '__main__':
app.run(port='6789')
"""
Explanation: SQL is an interrogation language that can get relatively complex, and it falls outside the scope of this course. However, in data science it is extremely useful to be able to operate databases, because relational databases allow for very fast data access and operations, together with data compression. Beyond relational databases there are many other database types, used predominantly in big data, such as document databases, graph databases and others, also known as NoSQL databases, and Python can bridge to them all.
Setting up a RESTful API via Python
$ pip install flask flask-jsonpify flask-sqlalchemy flask-restful
Task:
- test via browser:
- :6789/snps
- :6789/snps/snp_id
- use a third library to test in Jupyter. Many times you will not find someone to guide you precisely so you have to figure things out.
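As a hint for the last task, one third-party library you could reach for is requests. This is only a sketch: it assumes the Flask app above is already running locally on port 6789, and the URLs are built from the two routes registered above.
```
import requests

resp = requests.get("http://localhost:6789/snps")
print(resp.status_code)   # 200 if the service is up
print(resp.json())        # the 'snips' list of Ids

resp = requests.get("http://localhost:6789/snps/2")
print(resp.json())        # full record for the SNP with Id == 2
```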
End of explanation
"""
import multiprocessing as mp,os
def process(line):
    pass  # placeholder: replace with the real per-line work
def process_wrapper(chunkStart, chunkSize):
with open("input.txt") as f:
f.seek(chunkStart)
lines = f.read(chunkSize).splitlines()
for line in lines:
process(line)
def chunkify(fname,size=1024*1024):
fileEnd = os.path.getsize(fname)
with open(fname,'r') as f:
chunkEnd = f.tell()
while True:
chunkStart = chunkEnd
f.seek(size,1)
f.readline()
chunkEnd = f.tell()
yield chunkStart, chunkEnd - chunkStart
if chunkEnd > fileEnd:
break
#init objects
cores = mp.cpu_count()  # 'cores' was undefined in the original snippet
pool = mp.Pool(cores)
jobs = []
#create jobs
for chunkStart,chunkSize in chunkify("input.txt"):
jobs.append( pool.apply_async(process_wrapper,(chunkStart,chunkSize)) )
#wait for all jobs to finish
for job in jobs:
job.get()
#clean up
pool.close()
"""
Explanation: Text chunking
There is no general library for chunking that I can recommend. Text data is chunked differently than images, sounds, videos, etc. Go ahead and test this ad-hoc multiprocessing example of text chunking (note that process() above is only a placeholder):
End of explanation
"""
from tables import *
class Particle(IsDescription):
identity = StringCol(itemsize=22, dflt=" ", pos=0) # character String
idnumber = Int16Col(dflt=1, pos = 1) # short integer
speed = Float32Col(dflt=1, pos = 2) # single-precision
# Open a file in "w"rite mode
fileh = open_file("objecttree.h5", mode = "w")
# Get the HDF5 root group
root = fileh.root
# Create the groups
group1 = fileh.create_group(root, "group1")
group2 = fileh.create_group(root, "group2")
# Now, create an array in root group
array1 = fileh.create_array(root, "array1", ["string", "array"], "String array")
# Create 2 new tables in group1
table1 = fileh.create_table(group1, "table1", Particle)
table2 = fileh.create_table("/group2", "table2", Particle)
# Create the last table in group2
array2 = fileh.create_array("/group1", "array2", [1,2,3,4])
# Now, fill the tables
for table in (table1, table2):
# Get the record object associated with the table:
row = table.row
# Fill the table with 10 records
for i in range(10):
# First, assign the values to the Particle record
row['identity'] = 'This is particle: %2d' % (i)
row['idnumber'] = i
row['speed'] = i * 2.
# This injects the Record values
row.append()
# Flush the table buffers
table.flush()
# Finally, close the file (this also will flush all the remaining buffers!)
fileh.close()
"""
Explanation: Chunking numerical data: HDF5, pytables
For chunking numerical data, the most popular format on PCs is HDF5. However, on clouds there are specialized streaming libraries that are much more efficient; we will discuss the Map/Reduce paradigm in the data engineering chapters.
Task:
- Adapt the introductory code below to store single-cell expression data. How can you improve querying? What do you know about indexing? (A small querying/indexing sketch follows below.)
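As a starting point for the querying/indexing questions, here is a small sketch (not part of the original example) that reopens the file created above, runs an in-kernel query, and indexes a column:
```
from tables import open_file

fileh = open_file("objecttree.h5", mode="a")
table = fileh.root.group1.table1

# in-kernel query: the condition is evaluated inside the HDF5 layer
fast = [row['identity'] for row in table.where('speed > 10')]
print(fast)

# an index on a column can speed up repeated queries against it
table.cols.speed.create_index()

fileh.close()
```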
End of explanation
"""
import readline
import rpy2.robjects as robjects
robjects.r('''
source("http://www.bioconductor.org/biocLite.R")
biocLite("ALL")
library("ALL")
data("ALL")
#install.packages("gplots")
eset <- ALL[, ALL$mol.biol %in% c("BCR/ABL", "ALL1/AF4")]
library("limma")
f <- factor(as.character(eset$mol.biol))
design <- model.matrix(~f)
fit <- eBayes(lmFit(eset,design))
selected <- p.adjust(fit$p.value[, 2]) <0.05
esetSel <- eset [selected, ]
color.map <- function(mol.biol) { if (mol.biol=="ALL1/AF4") "#FF0000" else "#0000FF" }
patientcolors <- unlist(lapply(esetSel$mol.bio, color.map))
#heatmap(exprs(esetSel), col=topo.colors(100), ColSideColors=patientcolors)
library("gplots")
heatmap.2(exprs(esetSel), col=redgreen(75), scale="row", ColSideColors=patientcolors,
key=TRUE, symkey=FALSE, density.info="none", trace="none", cexRow=0.5)
''')
"""
Explanation: Python and other languages
The "rest" can be an external program, a remote program or a library made for a different language. To a certain degree all languages became good at accessing external resources but Python excels at it. We learned how to access remote APIs. We will only learn here how to deal with C and R.
Python and C
There are ways to extend Python with C and C++, but it is cumbersome. There are different interpreters for Python, the most popular being CPython, which is the standard one, and PyPy, a just-in-time compiler and interpreter with speeds that match JavaScript and Java. In principle the extension code needs to be re-written in order to run on different interpreters.
Here is an example of extension C code, written for the CPython interpreter. When compiled, the spam function is callable from Python, so Python has been extended with a new module.function():
```
static PyObject *
spam_system(PyObject *self, PyObject *args)
{
const char *command;
int sts;
if (!PyArg_ParseTuple(args, "s", &command))
return NULL;
sts = system(command);
return Py_BuildValue("i", sts);
}
```
Enter Cython.
Cython is a static compiler that makes it possible to combine C with Python. It is heavily promoted and used by the SciPy stack, and it can run on PyPy too. The following code is written in Cython, and as you can see it differs in one substantial way from Python: variables are statically declared. Another major difference is that this code does not run on an interpreter; instead it is compiled into C and assembled into machine code. A similar project exists for Java, called Jython.
def primes(int kmax): # The argument will be converted to int or raise a TypeError.
cdef int n, k, i # These variables are declared with C types.
cdef int p[1000] # Another C type
result = [] # A Python type
if kmax > 1000:
kmax = 1000
k = 0
n = 2
while k < kmax:
i = 0
while i < k and n % p[i] != 0:
i = i + 1
if i == k:
p[k] = n
k = k + 1
result.append(n)
n = n + 1
return result
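For completeness, here is roughly how such Cython code would be compiled and used; the file name primes.pyx and the setup script are assumptions, not part of the original example.
```
# setup.py -- a minimal sketch for compiling primes.pyx with Cython
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("primes.pyx"))

# After `python setup.py build_ext --inplace`, the module imports like any other:
#   import primes
#   print(primes.primes(10))
```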
SWIG
While Cython is cool, it does require you to write new code. If you have a C/C++ codebase and you want it in Python, perhaps the best option is SWIG. This is a multi-language tool: one can extend Tcl, Perl, Java and C# with it. Let's say you have the following pure C code containing a number of different functions:
```
/* File : example.c */
#include <time.h>
double My_variable = 3.0;
int fact(int n) {
if (n <= 1) return 1;
else return n*fact(n-1);
}
int my_mod(int x, int y) {
return (x%y);
}
char *get_time()
{
time_t ltime;
time(&ltime);
return ctime(&ltime);
}
```
All you have to do is write an interface of the code to SWIG:
```
/* example.i */
%module example
%{
/* Put header files here or function declarations like below */
extern double My_variable;
extern int fact(int n);
extern int my_mod(int x, int y);
extern char *get_time();
%}
extern double My_variable;
extern int fact(int n);
extern int my_mod(int x, int y);
extern char *get_time();
```
Run a sequence of commands that compiles and links the code together with the SWIG-generated wrapper. This is slightly different depending on the OS; what you see below is the Unix/Linux modus operandi.
swig -python example.i
gcc -c example.c example_wrap.c -I/usr/local/include/python2.7
ld -shared example.o example_wrap.o -o _example.so
On Python the result is a module like any other:
```
import example
example.fact(5)
120
example.my_mod(7,3)
1
example.get_time()
'Sun Feb 11 23:01:07 1996'
```
Python and R
While some Python and R programmers don't talk to each other, the languages do. It is possible to call Python from R (rPython) and R from Python (rpy2). It works better to call R from Python; in fact the library is much more developed in this direction.
It requires a special module called rpy2. We will make use of R again in the 'omics chapters. For now let us use this example slightly modified for rpy2.
You can see the whole output from R, and you can also interact with the R environment during execution.
But, how to get the required rpy2 module?
Google 'conda install rpy2' and feel lucky, the page at https://anaconda.org/r/rpy2 says:
conda install rpy2
This failed on my 64-bit Ubuntu system; it seems that Anaconda has problems maintaining it on the site. So I went to the rpy2 homepage:
http://rpy2.bitbucket.org/
.. and I installed rpy2 with pip (Anaconda installs the pip package manager)
pip install rpy2
.. Yeah, this took me an hour last night to fix, but the problem only affects Linux and Anaconda. Only use import readline if you are on Linux.
End of explanation
"""
import readline
import numpy as np
from rpy2.robjects import r
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
data = np.random.random((10,10))
r.heatmap(data)
"""
Explanation: However, in the example above there is no real communication between the two languages. Let us change that with another small example, in which we send a numpy matrix to R. Don't worry at this point about what numpy is; we will learn it in the scientific computing chapter.
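Values can also travel the other way. A tiny sketch (not part of the original example) of pulling a result computed in R back into Python:
```
import rpy2.robjects as robjects

robjects.r('m <- mean(c(1, 2, 3, 4))')   # compute something in R
m = robjects.r['m']                      # look it up from Python; an R vector of length 1
print(float(m[0]))                       # 2.5
```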
End of explanation
"""
#%load_ext rpy2.ipython
#from rpy2.robjects import r
#import rpy2.robjects.numpy2ri
#rpy2.robjects.numpy2ri.activate()
import numpy as np
data = np.random.random((10,10))
%Rpush data
%R heatmap(data)
"""
Explanation: All is well above, except the display happens in an external R GUI frame. It would be nice to have an inline plot, matplotlib style. Well, guess what: you are in luck, because IPython also has native support for R. This is the recommended way for Python and R to interact in the IPython notebook:
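Once the extension is loaded, values can also be pulled back from R into the notebook. A sketch, assuming the rpy2.ipython extension loads correctly in your environment:
```
%load_ext rpy2.ipython
import numpy as np
data = np.random.random((10,10))
%Rpush data
%R col_means <- colMeans(data)
%Rpull col_means
print(col_means)
```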
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.3/examples/distortion_method_none.ipynb
|
gpl-3.0
|
#!pip install -I "phoebe>=2.3,<2.4"
"""
Explanation: Black Hole Binary (distortion_method='none')
Attempting to set a very cool temperature for a star with a large mass to mimic a black hole will likely cause out-of-bounds errors in atmosphere tables. You can get around this slightly by using blackbody atmospheres for the compact object/black hole, but there is still significant added expense for computing the eclipse. In cases where you only need the distortion of one star caused by the gravity of a compact object, without accounting for the presence of eclipses, you can set the compact object such that it does not even generate a mesh (or therefore any light), but still influences the distortion and dynamics of the other component(s).
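For reference, the blackbody work-around mentioned above might look roughly like the sketch below. This is not part of this tutorial; the parameter values are placeholders, and the limb-darkening line is an assumption about what blackbody atmospheres require.
```
import phoebe
b_bb = phoebe.default_binary()
b_bb.set_value('q', value=5)                                      # heavy, dark companion (placeholder value)
b_bb.set_value('teff', component='secondary', value=3500)         # placeholder temperature
b_bb.set_value('atm', component='secondary', value='blackbody')   # sidestep atmosphere-table bounds
# b_bb.set_value_all('ld_mode', component='secondary', value='manual')  # likely needed per-dataset with blackbody (assumption)
```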
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
"""
b.add_dataset('lc', compute_times=phoebe.linspace(0,1,101), dataset='lc01')
b.add_dataset('mesh', compute_times=[0.25], dataset='mesh01')
b.run_compute(model='original_model')
"""
Explanation: Adding Datasets
Now we'll create a light curve dataset, expose the mesh at quarter-phase, and compute the original model for comparison.
End of explanation
"""
print(b.filter(qualifier='distortion_method', context='compute'))
print(b.get_parameter(qualifier='distortion_method', component='secondary', context='compute'))
b.set_value('distortion_method', component='secondary', value='none')
"""
Explanation: Distortion Method
Now we'll disable the meshing for the secondary component and see how that affects the resulting light curve.
End of explanation
"""
print(b.filter(qualifier='pblum*'))
b.run_compute(model='distortion_method_none')
"""
Explanation: IMPORTANT NOTE: this can affect passband-luminosity scaling. By default, PHOEBE will scale to the set pblum of the primary star, so if setting distortion_method of the 'primary' component to 'none', then everything will scale to a flux of zero. In this case, you will want to provide the pblum of the secondary star instead by switching pblum_component or using pblum_mode='absolute'.
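A hedged sketch of what that switch could look like; it is not needed (and not executed) in this tutorial, since here it is the secondary that gets distortion_method='none':
```
# Only relevant if the *primary* were the component with distortion_method='none'.
# Either couple the passband luminosity to the secondary instead:
#     b.set_value('pblum_component', value='secondary')
# or switch to absolute luminosities:
#     b.set_value('pblum_mode', value='absolute')
```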
End of explanation
"""
_ = b.plot(kind='lc', legend=True, show=True)
_ = b.plot(kind='lc', model='distortion_method_none', show=True)
"""
Explanation: Plotting
If we plot both models (with the secondary star meshed and without), we can see that we lost half of the flux (since the stars had the same luminosity) and are only left with the ellipsoidal variations of the primary component.
End of explanation
"""
_ = b.plot(kind='mesh', model='distortion_method_none', show=True)
_ = b.plot(kind='mesh', model='original_model', show=True)
"""
Explanation: And if we plot the exposed meshes, we'll see that no mesh was created for the secondary component when setting distortion_method to 'none'.
End of explanation
"""
|
marcinofulus/teaching
|
ML_SS2017/Numpy_cwiczenia.ipynb
|
gpl-3.0
|
import numpy as np
x = np.linspace(0,10,23)
f = np.sin(x)
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(x,f,'o-')
plt.plot(4,0,'ro')
# f1 = f[1:-1] * f[:]
print(np.shape(f[:-1]))
print(np.shape(f[1:]))
ff = f[:-1] * f[1:]
print(ff.shape)
x_zero = x[np.where(ff < 0)]
x_zero2 = x[np.where(ff < 0)[0] + 1]
f_zero = f[np.where(ff < 0)]
f_zero2 = f[np.where(ff < 0)[0] + 1]
print(x_zero)
print(f_zero)
Dx = x_zero2 - x_zero
df = np.abs(f_zero)
Df = np.abs(f_zero - f_zero2)
print(Dx)
print(df)
print(Df)
xz = x_zero + (df * Dx) / Df
xz
plt.plot(x,f,'o-')
plt.plot(x_zero,f_zero,'ro')
plt.plot(x_zero2,f_zero2,'go')
plt.plot(xz,np.zeros_like(xz),'yo-')
np.where(ff < 0)[0] + 1
"""
Explanation: 1. Create a vector of zeros of size 10
python
np.zeros
2. How much memory does the array occupy?
3. Create a vector of 10 zeros, except for the 5th element, which equals 4
4. Create a vector of consecutive numbers from 111 to 144.
np.arange
5. Reverse the order of the elements of the vector.
6. Create a 4x4 matrix with values from 0 to 15
reshape
7. Find the indices of the non-zero elements of a vector
np.nonzero
8. Find the zero crossings of a function using np.nonzero.
find the interval on which the function changes sign
perform a linear interpolation of the zero crossing
The algorithm should contain only vectorized operations.
The function is given as arrays of arguments and values. (A sketch of possible solutions to exercises 1-7 appears below.)
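One possible set of one-liners for exercises 1-7 (a sketch, not the official solutions):
```
import numpy as np

v = np.zeros(10)                                  # 1.
print(v.nbytes)                                   # 2. memory used, in bytes
v[4] = 4                                          # 3. fifth element set to 4
w = np.arange(111, 145)                           # 4.
w_reversed = w[::-1]                              # 5.
M = np.arange(16).reshape(4, 4)                   # 6.
idx = np.nonzero(np.array([1, 0, 0, 2, 0, 3]))    # 7.
print(idx)
```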
End of explanation
"""
Z = np.random.random(30)
"""
Explanation: 9. Create a 3x3 matrix:
the identity matrix, np.eye
a random matrix with values 0, 1, 2
10. Find the minimum value of a matrix and its index
11. Find the mean deviation from the mean value for a vector (a sketch of possible solutions appears below)
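A sketch of possible solutions for exercises 9-11 (not the official answers):
```
import numpy as np

# 9. 3x3 matrices
I = np.eye(3)                                  # identity
R = np.random.randint(0, 3, size=(3, 3))       # random values 0, 1, 2

# 10. minimum value of a matrix and its (row, column) index
i, j = np.unravel_index(np.argmin(R), R.shape)
print(R.min(), (i, j))

# 11. mean absolute deviation from the mean of a vector
Z = np.random.random(30)
print(np.abs(Z - Z.mean()).mean())
```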
End of explanation
"""
x = np.linspace(0,3,64)
y = np.linspace(0,3,64)
X,Y = np.meshgrid(x,y)
X
Y
np.sin(X**2+Y**2)
plt.contourf(X,Y,np.sin(X**2+Y**2))
"""
Explanation: 12. A 2D grid.
Create index arrays of the x and y coordinate values for the region $(-2,1)\times(-1,3)$.
* Evaluate the function $\sin(x^2+y^2)$ on it
* draw the result with imshow and contour (a sketch appears below)
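A possible sketch for exercise 12; note that it follows the region given in the exercise statement rather than the (0, 3) range used in the code above:
```
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 1, 200)
y = np.linspace(-1, 3, 200)
X, Y = np.meshgrid(x, y)
Z = np.sin(X**2 + Y**2)

plt.imshow(Z, extent=(x[0], x[-1], y[0], y[-1]), origin='lower')
plt.contour(X, Y, Z, colors='k')
plt.colorbar()
plt.show()
```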
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive/10_recommend/labs/composer_gcf_trigger/composertriggered.ipynb
|
apache-2.0
|
import os
PROJECT = 'your-project-id' # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1' # REPLACE WITH YOUR REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
"""
Explanation: Triggering a Cloud Composer Pipeline with a Google Cloud Function
In this advanced lab you will learn how to create and run an Apache Airflow workflow in Cloud Composer that completes the following tasks:
- Watches for new CSV data to be uploaded to a Cloud Storage bucket
- A Cloud Function call triggers the Cloud Composer Airflow DAG to run when a new file is detected
- The workflow finds the input file that triggered the workflow and executes a Cloud Dataflow job to transform and output the data to BigQuery
- Moves the original input file to a different Cloud Storage bucket for storing processed files
Part One: Create Cloud Composer environment and workflow
First, create a Cloud Composer environment if you don't have one already by doing the following:
1. In the Navigation menu under Big Data, select Composer
2. Select Create
3. Set the following parameters:
- Name: mlcomposer
- Location: us-central1
- Other values at defaults
4. Select Create
The environment creation process is completed when the green checkmark displays to the left of the environment name on the Environments page in the GCP Console.
It can take up to 20 minutes for the environment to complete the setup process. Move on to the next section - Create Cloud Storage buckets and BigQuery dataset.
Set environment variables
End of explanation
"""
%%bash
## create GCS buckets
exists=$(gsutil ls -d | grep -w gs://${PROJECT}_input/)
if [ -n "$exists" ]; then
echo "Skipping the creation of input bucket."
else
echo "Creating input bucket."
gsutil mb -l ${REGION} gs://${PROJECT}_input
echo "Loading sample data for later"
gsutil cp resources/usa_names.csv gs://${PROJECT}_input
fi
exists=$(gsutil ls -d | grep -w gs://${PROJECT}_output/)
if [ -n "$exists" ]; then
echo "Skipping the creation of output bucket."
else
echo "Creating output bucket."
gsutil mb -l ${REGION} gs://${PROJECT}_output
fi
"""
Explanation: Create Cloud Storage buckets
Create two Cloud Storage Multi-Regional buckets in your project.
- project-id_input
- project-id_output
Run the below to automatically create the buckets and load some sample data:
End of explanation
"""
%%writefile simple_load_dag.py
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A simple Airflow DAG that is triggered externally by a Cloud Function when a
file lands in a GCS bucket.
Once triggered the DAG performs the following steps:
1. Triggers a Google Cloud Dataflow job with the input file information received
from the Cloud Function trigger.
2. Upon completion of the Dataflow job, the input file is moved to a
gs://<target-bucket>/<success|failure>/YYYY-MM-DD/ location based on the
status of the previous step.
"""
import datetime
import logging
import os
from airflow import configuration
from airflow import models
from airflow.contrib.hooks import gcs_hook
from airflow.contrib.operators import dataflow_operator
from airflow.operators import python_operator
from airflow.utils.trigger_rule import TriggerRule
# We set the start_date of the DAG to the previous date. This will
# make the DAG immediately available for scheduling.
YESTERDAY = datetime.datetime.combine(
datetime.datetime.today() - datetime.timedelta(1),
datetime.datetime.min.time())
# We define some variables that we will use in the DAG tasks.
SUCCESS_TAG = 'success'
FAILURE_TAG = 'failure'
# An Airflow variable called gcp_completion_bucket is required.
# This variable will contain the name of the bucket to move the processed
# file to.
# '_names' must appear in CSV filename to be ingested (adjust as needed)
# we are only looking for files with the exact name usa_names.csv (you can specify wildcards if you like)
INPUT_BUCKET_CSV = 'gs://'+models.Variable.get('gcp_input_location')+'/usa_names.csv'
# TODO: Populate the models.Variable.get() with the actual variable name for your output bucket
COMPLETION_BUCKET = 'gs://'+models.Variable.get('gcp_completion_bu____')
DS_TAG = '{{ ds }}'
DATAFLOW_FILE = os.path.join(
configuration.get('core', 'dags_folder'), 'dataflow', 'process_delimited.py')
# The following additional Airflow variables should be set:
# gcp_project: Google Cloud Platform project id.
# gcp_temp_location: Google Cloud Storage location to use for Dataflow temp location.
DEFAULT_DAG_ARGS = {
'start_date': YESTERDAY,
'retries': 2,
# TODO: Populate the models.Variable.get() with the variable name for your GCP Project
'project_id': models.Variable.get('gcp_pro____'),
'dataflow_default_options': {
'project': models.Variable.get('gcp_pro____'),
# TODO: Populate the models.Variable.get() with the variable name for temp location
'temp_location': 'gs://'+models.Variable.get('gcp_temp_l_______'),
'runner': 'DataflowRunner'
}
}
def move_to_completion_bucket(target_bucket, target_infix, **kwargs):
"""A utility method to move an object to a target location in GCS."""
# Here we establish a connection hook to GoogleCloudStorage.
# Google Cloud Composer automatically provides a google_cloud_storage_default
# connection id that is used by this hook.
conn = gcs_hook.GoogleCloudStorageHook()
# The external trigger (Google Cloud Function) that initiates this DAG
# provides a dag_run.conf dictionary with event attributes that specify
# the information about the GCS object that triggered this DAG.
# We extract the bucket and object name from this dictionary.
source_bucket = models.Variable.get('gcp_input_location')
source_object = models.Variable.get('gcp_input_location')+'/usa_names.csv'
completion_ds = kwargs['ds']
target_object = os.path.join(target_infix, completion_ds, source_object)
logging.info('Copying %s to %s',
os.path.join(source_bucket, source_object),
os.path.join(target_bucket, target_object))
conn.copy(source_bucket, source_object, target_bucket, target_object)
logging.info('Deleting %s',
os.path.join(source_bucket, source_object))
conn.delete(source_bucket, source_object)
# Setting schedule_interval to None as this DAG is externally trigger by a Cloud Function.
# The following Airflow variables should be set for this DAG to function:
# bq_output_table: BigQuery table that should be used as the target for
# Dataflow in <dataset>.<tablename> format.
# e.g. lake.usa_names
# input_field_names: Comma separated field names for the delimited input file.
# e.g. state,gender,year,name,number,created_date
# TODO: Name the DAG id GcsToBigQueryTriggered
with models.DAG(dag_id='GcsToBigQueryTr_______',
description='A DAG triggered by an external Cloud Function',
schedule_interval=None, default_args=DEFAULT_DAG_ARGS) as dag:
# Args required for the Dataflow job.
job_args = {
'input': INPUT_BUCKET_CSV,
# TODO: Populate the models.Variable.get() with the variable name for BQ table
'output': models.Variable.get('bq_output_t____'),
# TODO: Populate the models.Variable.get() with the variable name for input field names
'fields': models.Variable.get('input_field_n____'),
'load_dt': DS_TAG
}
# Main Dataflow task that will process and load the input delimited file.
# TODO: Specify the type of operator we need to call to invoke DataFlow
dataflow_task = dataflow_operator.DataFlowPythonOp_______(
task_id="process-delimited-and-push",
py_file=DATAFLOW_FILE,
options=job_args)
# Here we create two conditional tasks, one of which will be executed
# based on whether the dataflow_task was a success or a failure.
success_move_task = python_operator.PythonOperator(task_id='success-move-to-completion',
python_callable=move_to_completion_bucket,
# A success_tag is used to move
# the input file to a success
# prefixed folder.
op_args=[models.Variable.get('gcp_completion_bucket'), SUCCESS_TAG],
provide_context=True,
trigger_rule=TriggerRule.ALL_SUCCESS)
failure_move_task = python_operator.PythonOperator(task_id='failure-move-to-completion',
python_callable=move_to_completion_bucket,
# A failure_tag is used to move
# the input file to a failure
# prefixed folder.
op_args=[models.Variable.get('gcp_completion_bucket'), FAILURE_TAG],
provide_context=True,
trigger_rule=TriggerRule.ALL_FAILED)
# The success_move_task and failure_move_task are both downstream from the
# dataflow_task.
dataflow_task >> success_move_task
dataflow_task >> failure_move_task
"""
Explanation: Create BigQuery Destination Dataset and Table
Next, we'll create a data sink to store the ingested data from GCS<br><br>
Create a new Dataset
In the Navigation menu, select BigQuery
Then click on your qwiklabs project ID
Click Create Dataset
Name your dataset ml_pipeline and leave other values at defaults
Click Create Dataset
Create a new empty table
Click on the newly created dataset
Click Create Table
For Destination Table name specify ingest_table
For schema click Edit as Text and paste in the below schema
state: STRING,<br>
gender: STRING,<br>
year: STRING,<br>
name: STRING,<br>
number: STRING,<br>
created_date: STRING,<br>
filename: STRING,<br>
load_dt: DATE<br><br>
Click Create Table
Review of Airflow concepts
While your Cloud Composer environment is building, let’s discuss the sample file you’ll be using in this lab.
<br><br>
Airflow is a platform to programmatically author, schedule and monitor workflows
<br><br>
Use airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The airflow scheduler executes your tasks on an array of workers while following the specified dependencies.
<br><br>
Core concepts
DAG - A Directed Acyclic Graph is a collection of tasks, organised to reflect their relationships and dependencies.
Operator - The description of a single task, it is usually atomic. For example, the BashOperator is used to execute bash command.
Task - A parameterised instance of an Operator; a node in the DAG.
Task Instance - A specific run of a task; characterised as: a DAG, a Task, and a point in time. It has an indicative state: running, success, failed, skipped, …<br><br>
The rest of the Airflow concepts can be found here.
Complete the DAG file
Cloud Composer workflows are comprised of DAGs (Directed Acyclic Graphs). The code shown in simple_load_dag.py is the workflow code, also referred to as the DAG.
<br><br>
Open the file now to see how it is built. Next will be a detailed look at some of the key components of the file.
<br><br>
To orchestrate all the workflow tasks, the DAG imports the following operators:
- DataFlowPythonOperator
- PythonOperator
<br><br>
Action: <span style="color:blue">Complete the # TODOs in the simple_load_dag.py DAG file below</span> while you wait for your Composer environment to be set up.
End of explanation
"""
## Run this to display which key value pairs to input
import pandas as pd
pd.DataFrame([
('gcp_project', PROJECT),
('gcp_input_location', PROJECT + '_input'),
('gcp_temp_location', PROJECT + '_output/tmp'),
('gcp_completion_bucket', PROJECT + '_output'),
('input_field_names', 'state,gender,year,name,number,created_date'),
('bq_output_table', 'ml_pipeline.ingest_table')
], columns = ['Key', 'Value'])
"""
Explanation: Viewing environment information
Now that you have a completed DAG, it's time to copy it to your Cloud Composer environment and finish the setup of your workflow.<br><br>
1. Go back to Composer to check on the status of your environment.
2. Once your environment has been created, click the name of the environment to see its details.
<br><br>
The Environment details page provides information, such as the Airflow web UI URL, Google Kubernetes Engine cluster ID, name of the Cloud Storage bucket connected to the DAGs folder.
<br><br>
Cloud Composer uses Cloud Storage to store Apache Airflow DAGs, also known as workflows. Each environment has an associated Cloud Storage bucket. Cloud Composer schedules only the DAGs in the Cloud Storage bucket.
Setting Airflow variables
Our DAG relies on variables to pass in values like the GCP Project. We can set these in the Admin UI.
Airflow variables are an Airflow-specific concept that is distinct from environment variables. In this step, you'll set the following six Airflow variables used by the DAG we will deploy.
End of explanation
"""
%%bash
gcloud composer environments run ENVIRONMENT_NAME \
--location ${REGION} variables -- \
--set gcp_project ${PROJECT}
"""
Explanation: Option 1: Set the variables using the Airflow webserver UI
In your Airflow environment, select Admin > Variables
Populate each key value in the table with the required variables from the above table
Option 2: Set the variables using the Airflow CLI
The next gcloud composer command executes the Airflow CLI sub-command variables. The sub-command passes the arguments to the gcloud command line tool.<br><br>
To set the six variables, run the gcloud composer command once for each row of the table above. Just as an example, to set the variable gcp_project you could do this:
End of explanation
"""
AIRFLOW_BUCKET = 'us-central1-composer-21587538-bucket' # REPLACE WITH AIRFLOW BUCKET NAME
os.environ['AIRFLOW_BUCKET'] = AIRFLOW_BUCKET
"""
Explanation: Copy your Airflow bucket name
Navigate to your Cloud Composer instance<br/><br/>
Select DAGs Folder<br/><br/>
You will be taken to the Google Cloud Storage bucket that Cloud Composer has created automatically for your Airflow instance<br/><br/>
Copy the bucket name into the variable below (example: us-central1-composer-08f6edeb-bucket)
End of explanation
"""
%%bash
gsutil cp simple_load_dag.py gs://${AIRFLOW_BUCKET}/dags # overwrite DAG file if it exists
gsutil cp -r dataflow/process_delimited.py gs://${AIRFLOW_BUCKET}/dags/dataflow/ # copy Dataflow job to be ran
"""
Explanation: Copy your Airflow files to your Airflow bucket
End of explanation
"""
import google.auth
import google.auth.transport.requests
import requests
import six.moves.urllib.parse
# Authenticate with Google Cloud.
# See: https://cloud.google.com/docs/authentication/getting-started
credentials, _ = google.auth.default(
scopes=['https://www.googleapis.com/auth/cloud-platform'])
authed_session = google.auth.transport.requests.AuthorizedSession(
credentials)
project_id = 'your-project-id'
location = 'us-central1'
composer_environment = 'composer'
environment_url = (
'https://composer.googleapis.com/v1beta1/projects/{}/locations/{}'
'/environments/{}').format(project_id, location, composer_environment)
composer_response = authed_session.request('GET', environment_url)
environment_data = composer_response.json()
airflow_uri = environment_data['config']['airflowUri']
# The Composer environment response does not include the IAP client ID.
# Make a second, unauthenticated HTTP request to the web server to get the
# redirect URI.
redirect_response = requests.get(airflow_uri, allow_redirects=False)
redirect_location = redirect_response.headers['location']
# Extract the client_id query parameter from the redirect.
parsed = six.moves.urllib.parse.urlparse(redirect_location)
query_string = six.moves.urllib.parse.parse_qs(parsed.query)
print(query_string['client_id'][0])
"""
Explanation: Navigating Using the Airflow UI
To access the Airflow web interface using the GCP Console:
1. Go back to the Composer Environments page.
2. In the Airflow webserver column for the environment, click the new window icon.
3. The Airflow web UI opens in a new browser window.
Trigger DAG run manually
Running your DAG manually ensures that it operates successfully even in the absence of triggered events.
1. To trigger the DAG manually, click the play button under Links
Part Two: Trigger DAG run automatically from a file upload to GCS
Now that your manual workflow runs successfully, you will now trigger it based on an external event.
Create a Cloud Function to trigger your workflow
We will be following this reference guide to setup our Cloud Function
1. In the code block below, populate the project_id, location, and composer_environment variables with your own values
2. Run the below code to get your CLIENT_ID (needed later)
End of explanation
"""
'use strict';
const fetch = require('node-fetch');
const FormData = require('form-data');
/**
* Triggered from a message on a Cloud Storage bucket.
*
* IAP authorization based on:
* https://stackoverflow.com/questions/45787676/how-to-authenticate-google-cloud-functions-for-access-to-secure-app-engine-endpo
* and
* https://cloud.google.com/iap/docs/authentication-howto
*
* @param {!Object} data The Cloud Functions event data.
* @returns {Promise}
*/
exports.triggerDag = async data => {
// Fill in your Composer environment information here.
// The project that holds your function
const PROJECT_ID = 'your-project-id';
// Navigate to your webserver's login page and get this from the URL
const CLIENT_ID = 'your-iap-client-id';
// This should be part of your webserver's URL:
// {tenant-project-id}.appspot.com
const WEBSERVER_ID = 'your-tenant-project-id';
// The name of the DAG you wish to trigger
const DAG_NAME = 'GcsToBigQueryTriggered';
// Other constants
const WEBSERVER_URL = `https://${WEBSERVER_ID}.appspot.com/api/experimental/dags/${DAG_NAME}/dag_runs`;
const USER_AGENT = 'gcf-event-trigger';
const BODY = {conf: JSON.stringify(data)};
// Make the request
try {
const iap = await authorizeIap(CLIENT_ID, PROJECT_ID, USER_AGENT);
return makeIapPostRequest(
WEBSERVER_URL,
BODY,
iap.idToken,
USER_AGENT,
iap.jwt
);
} catch (err) {
throw new Error(err);
}
};
/**
* @param {string} clientId The client id associated with the Composer webserver application.
* @param {string} projectId The id for the project containing the Cloud Function.
* @param {string} userAgent The user agent string which will be provided with the webserver request.
*/
const authorizeIap = async (clientId, projectId, userAgent) => {
const SERVICE_ACCOUNT = `${projectId}@appspot.gserviceaccount.com`;
const JWT_HEADER = Buffer.from(
JSON.stringify({alg: 'RS256', typ: 'JWT'})
).toString('base64');
let jwt = '';
let jwtClaimset = '';
// Obtain an Oauth2 access token for the appspot service account
const res = await fetch(
`http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/${SERVICE_ACCOUNT}/token`,
{
headers: {'User-Agent': userAgent, 'Metadata-Flavor': 'Google'},
}
);
const tokenResponse = await res.json();
if (tokenResponse.error) {
return Promise.reject(tokenResponse.error);
}
const accessToken = tokenResponse.access_token;
const iat = Math.floor(new Date().getTime() / 1000);
const claims = {
iss: SERVICE_ACCOUNT,
aud: 'https://www.googleapis.com/oauth2/v4/token',
iat: iat,
exp: iat + 60,
target_audience: clientId,
};
jwtClaimset = Buffer.from(JSON.stringify(claims)).toString('base64');
const toSign = [JWT_HEADER, jwtClaimset].join('.');
const blob = await fetch(
`https://iam.googleapis.com/v1/projects/${projectId}/serviceAccounts/${SERVICE_ACCOUNT}:signBlob`,
{
method: 'POST',
body: JSON.stringify({
bytesToSign: Buffer.from(toSign).toString('base64'),
}),
headers: {
'User-Agent': userAgent,
Authorization: `Bearer ${accessToken}`,
},
}
);
const blobJson = await blob.json();
if (blobJson.error) {
return Promise.reject(blobJson.error);
}
// Request service account signature on header and claimset
const jwtSignature = blobJson.signature;
jwt = [JWT_HEADER, jwtClaimset, jwtSignature].join('.');
const form = new FormData();
form.append('grant_type', 'urn:ietf:params:oauth:grant-type:jwt-bearer');
form.append('assertion', jwt);
const token = await fetch('https://www.googleapis.com/oauth2/v4/token', {
method: 'POST',
body: form,
});
const tokenJson = await token.json();
if (tokenJson.error) {
return Promise.reject(tokenJson.error);
}
return {
jwt: jwt,
idToken: tokenJson.id_token,
};
};
/**
* @param {string} url The url that the post request targets.
* @param {string} body The body of the post request.
* @param {string} idToken Bearer token used to authorize the iap request.
* @param {string} userAgent The user agent to identify the requester.
*/
const makeIapPostRequest = async (url, body, idToken, userAgent) => {
const res = await fetch(url, {
method: 'POST',
headers: {
'User-Agent': userAgent,
Authorization: `Bearer ${idToken}`,
},
body: JSON.stringify(body),
});
if (!res.ok) {
const err = await res.text();
throw new Error(err);
}
};
"""
Explanation: Create the Cloud Function
Navigate to Compute > Cloud Functions
Select Create function
For name specify 'gcs-dag-trigger-function'
For trigger type select 'Cloud Storage'
For event type select 'Finalize/Create'
For bucket, specify the input bucket you created earlier
(Important: be sure to select the input bucket and not the output bucket, to avoid an endless triggering loop)
populate index.js
Complete the four required constants defined in the index.js code below and paste it into the Cloud Function editor (the JS code will not run in this notebook). The constants are:
- PROJECT_ID
- CLIENT_ID (from earlier)
- WEBSERVER_ID (part of Airflow webserver URL)
- DAG_NAME (GcsToBigQueryTriggered)
End of explanation
"""
{
"name": "nodejs-docs-samples-functions-composer-storage-trigger",
"version": "0.0.1",
"dependencies": {
"form-data": "^2.3.2",
"node-fetch": "^2.2.0"
},
"engines": {
"node": ">=8.0.0"
},
"private": true,
"license": "Apache-2.0",
"author": "Google Inc.",
"repository": {
"type": "git",
"url": "https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git"
},
"devDependencies": {
"@google-cloud/nodejs-repo-tools": "^3.3.0",
"mocha": "^6.0.0",
"proxyquire": "^2.1.0",
"sinon": "^7.2.7"
},
"scripts": {
"test": "mocha test/*.test.js --timeout=20000"
}
}
"""
Explanation: populate package.json
Copy and paste the below into package.json
End of explanation
"""
|
neuro-data-mining/materials
|
Convolution/What's a Convolution?.ipynb
|
mit
|
from __future__ import division
import matplotlib
matplotlib.use("TkAgg")
%pylab inline
plt.xkcd();
from scipy.stats import multivariate_normal
from scipy.io import wavfile
from IPython.display import Audio
import matplotlib.animation as animation
import base64
import scipy.signal
from PIL import Image
import plotFunks as pF
"""
Explanation: Convolutions Aren't Convoluted!
Convolution is a basic mathematical operation -- in a very real way, it's only slightly less fundamental than addition and multiplication*! Because of its fundamental nature, convolution arises repeatedly in both theoretical and applied contexts.
This notebook focuses on the mathematical foundations of convolution. If you're interested in the applications of convolution, check out the other notebook in this folder.
Below, I assume pretty minimal math background. There is a very elegant view of convolutions that comes from the theory of Fourier transforms and linear algebra. If you're interested, check out this nice exposition of that point of view, from Kenneth Miller of Columbia, which does a good job developing the approach gently. There are even loads of neuroscience examples!
* For the mathematically inclined: convolutions are defined whenever you have a set with a binary operation -- that means groups, monoids, and even categories! Check out this blog post by Chris Olah for more on that front, or for a very nice introduction to groups.
Preliminaries
End of explanation
"""
def plotSignal(signal,signalName):
plt.figure(figsize=(16,2));
plt.plot(signal,'-o',color='k');
pF.cleanPlot(plt.gca());
plt.xlim(-len(signal)/10,len(signal)+len(signal)/10);
pF.addAxis(plt.gca(),'horizontal');
plt.title(signalName);
def plotSignalAsDelta(signal,signalName,color='r'):
plt.figure(figsize=(16,4)); plt.subplot(2,1,1)
plt.plot(signal,'-o',color='k');
pF.cleanPlot(plt.gca());
plt.xlim(-len(signal)/10,len(signal)+len(signal)/10);
pF.addAxis(plt.gca(),'horizontal');
plt.title(signalName);
plt.subplot(2,1,2)
deltaPlot(signal,color=color);
pF.cleanPlot(plt.gca());
plt.xlim(-len(signal)/10,len(signal)+len(signal)/10);
pF.addAxis(plt.gca(),'horizontal');
plt.title('also ' +signalName);
def deltaPlot(inp,color='r'):
plt.scatter(np.arange(0,len(inp)),inp,
linewidth=0,marker='o',s=36,color=color,
zorder=10);
plt.vlines(np.arange(0,len(inp)),0,inp,)
def plotKronecker():
pad=[0,0]; padLen = len(pad)
deltaPlot(pad+[1]+pad,color='b')
pF.cleanPlot(plt.gca());
plt.xlim(-padLen/10,2*padLen+1+padLen/10)
pF.addAxis(plt.gca(),'horizontal');
plt.title('the delta function')
def kernelsPlot(kernels,kernelNames):
numKernels = len(kernels)
plt.figure(figsize=(16,4));
for idx,(kernel,name) in enumerate(zip(kernels,kernelNames)):
plt.subplot(1,numKernels,idx+1)
deltaPlot(kernel)
pF.cleanPlot(plt.gca());
plt.xlim(-len(kernel)/10,len(kernel)+len(kernel)/10);
pF.addAxis(plt.gca(),'horizontal');
plt.title(name);
def convolutionPlot(signals,signalName,kernels,kernelNames):
for idx,(signal,kernel,kernelName) in enumerate(zip(signals[1:],kernels,kernelNames)):
plt.figure(figsize=(16,4));
plt.subplot(1,3,1)
#Plot the original signal for reference
deltaPlot(signals[0],color='blue');
plt.title(signalName); plt.ylim(-1,1.5);
plt.xlim(-len(signals[0])/10,len(signals[0])+len(signals[0])/10);
pF.cleanPlot(plt.gca()); pF.addAxis(plt.gca(),'horizontal')
plt.subplot(1,3,2)
#Plot the kernel for reference
deltaPlot(kernel)
plt.title(kernelName); plt.ylim(-1,1.5);
plt.xlim(-len(kernel)/10,len(kernel)+len(kernel)/10);
pF.cleanPlot(plt.gca()); pF.addAxis(plt.gca(),'horizontal')
plt.subplot(1,3,3)
#Plot convolved signal
outName = signalName+'*'+kernelName
sCf = deltaPlot(signal,color='purple'); plt.ylim(-1,1.5);
plt.xlim(-len(signal)/10,len(signal)+len(signal)/10);
pF.cleanPlot(plt.gca()); pF.addAxis(plt.gca(),'horizontal')
plt.title(outName);
def probabilityPlot(ax,locs,edge,labels):
ax.set_ylim([0,1]); ax.set_xlim([locs[0]-edge,locs[1]+edge]);
ax.xaxis.set_ticklabels('');
ax.xaxis.set_ticks(locs); ax.xaxis.set_ticklabels(labels)
ax.yaxis.set_ticks([0,0.5,1]);
ax.tick_params(axis='x',top='off')
ax.tick_params(axis='y',right='off')
plt.ylabel('Probability')
"""
Explanation: Plotting Functions
End of explanation
"""
# from http://jakevdp.github.io/blog/2013/05/12/embedding-matplotlib-animations/
from tempfile import NamedTemporaryFile
VIDEO_TAG = """<video controls>
<source src="data:video/x-m4v;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>"""
def anim_to_html(anim,fps=1):
if not hasattr(anim, '_encoded_video'):
with NamedTemporaryFile(suffix='.mp4') as f:
anim.save(f.name, fps=fps,
extra_args=['-vcodec', 'libx264', '-pix_fmt', 'yuv420p'])
video = open(f.name, "rb").read()
anim._encoded_video = base64.b64encode(video).decode('utf-8')
return VIDEO_TAG.format(anim._encoded_video)
from IPython.display import HTML
def display_animation(anim,fps=1):
plt.close(anim._fig)
return HTML(anim_to_html(anim,fps=fps))
"""
Explanation: HTML-based animation display
End of explanation
"""
def randomWalk(tMax=1,sigma=1,eps=0.1):
signal=[0]; scaleFactor = np.sqrt(eps)
for t in np.arange(0,tMax,eps):
signal.append(signal[-1]+np.random.normal(0,sigma*scaleFactor))
return np.asarray(signal[1:])
signals = [randomWalk(tMax=0.1,eps=0.01)]
signal = signals[0]
signalName = 's'
plotSignal(signal,signalName)
"""
Explanation: Understanding Signals
In the traditional view, convolutions arise when we want to understand how very simple systems respond when signals pass through them. In order to make this idea concrete, we need to define "simple systems" and "signals".
Discrete signals are, fundamentally, just indexed collections of numbers, also known as arrays. When we have a signal in time, a signal is an array of numbers. When that signal is distributed in space, like an image, it is a 2-D array of numbers.
Below, we generate a signal using a random walk. Random walks are used to model everything from the stock market to the motion of atoms. They represent the simplest form of an auto-correlated signal, so they're a nice modest step up from white noise. We generate a random walk by adding together samples from a Gaussian distribution -- the value at time t is just the running total of the sum of t samples.
Because this signal is random, it can be helpful to come back and re-run the cell below in order to get more examples later -- some examples will be better than others for illustrating points about convolution below.
End of explanation
"""
plotKronecker()
"""
Explanation: Since our signal is just an array, we can think of it as a collection of points. The zeroth element of the array is a point at $t = 0$, the next is a point at $t = 1$, etc. The height of the point is determined by the number in the array.
Put another way, we can break a signal of length $N$ down into $N$ components. If we were to represent the component at the timepoint $i$ as a function $e_i$, it would look like:
$$
e_i(t) = 1 \text{ if } t = i \\
e_i(t) = 0 \text{ otherwise }
$$
This function is also called a "Kronecker's Delta", "the delta function", or the "unit impulse". If we were to draw it, it might look like:
End of explanation
"""
plotSignalAsDelta(signals[0],'s',color='b')
"""
Explanation: We construct our signal from these components as follows: we multiply each $e_i$ by the signal $s$ at the timepoint $i$. The elements that are $0$ stay zero, while the element that is $1$ now has the value $s(i)$. We then add all $N$ of the $e_i$ together, and the result is our original signal.
If you've taken linear algebra, this process may sound familiar to you -- it's exactly the way that we construct a vector out of a set of basis vectors. This is a deep connection. Just as the canonical, or usual, basis set for a two-dimensional vector space is the set $\left\{[1, \ 0], [0,\ 1]\right\}$, the canonical basis set for signals is the set of all Kronecker delta functions, each at a different point in time or space.
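A tiny numerical check of this idea (a sketch; the variable names are not from the notebook): summing the scaled delta components reproduces the signal exactly.
```
import numpy as np

s = np.random.random(5)          # any signal
N = len(s)
deltas = np.eye(N)               # row i is the delta function e_i
reconstruction = sum(s[i] * deltas[i] for i in range(N))
print(np.allclose(reconstruction, s))   # True
```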
End of explanation
"""
# Define Our Kernels
pad = 15; padding = [0]*pad
delta = [1]+padding
delay = padding+delta
echo = [1]+padding+[1]+padding
kernels = [delta,delay,echo]
kernelNames = ['f',"f'","(f+f')"]
numKernels = len(kernels)
# Plot Our Kernels
kernelsPlot(kernels,kernelNames)
plt.suptitle('Kernels!',fontsize=20,weight='bold',y=1.1);
"""
Explanation: Understanding Simple Transformations
For us, a simple transformation is one whose response to a signal is just the sum of its responses to all of the components in the signal and which doesn't depend on time. In mathematical terms, it is linear and translation-invariant. Since the components of our signals are just single points at different times, this means we can learn everything we want to learn about a simple transformation by seeing what it does to a single point at a single time.
This response is called a kernel or an impulse response. If we put in two points at once, the output is just two kernels stacked on top of one another -- note that this is the same thing as multiplying by 2. If we put in one point, then wait one time step and put in a second point, we get two copies of the kernel, separated by one time step.
Let's take a look at some simple transformations. We start by defining and plotting their kernels.
End of explanation
"""
simpleSignal = [1]
simpleSignals = [simpleSignal]
for kernel in kernels:
simpleSignals.append(np.convolve(simpleSignal,kernel))
convolutionPlot(simpleSignals,'just one point',kernels,kernelNames)
"""
Explanation: Now, lets apply these kernels to our "unit impulse" -- one point. Notice what comes out: it's just the kernel!
End of explanation
"""
simpleSignal = [1/2]
simpleSignals = [simpleSignal]
for kernel in kernels:
simpleSignals.append(np.convolve(simpleSignal,kernel))
convolutionPlot(simpleSignals,'just one point',kernels,kernelNames)
"""
Explanation: The "unit" in "unit impulse" means "having height 1". What happens if we scale our unit impulse so that it has a different height?
The response is just a scaled version of the kernel, just as when we put in two points at the same time.
End of explanation
"""
twoPointSignal = [1,0,1,0,0,0]
twoPointSignals = [twoPointSignal]
for kernel in kernels:
twoPointSignals.append(np.convolve(twoPointSignal,kernel))
convolutionPlot(twoPointSignals,'two points',kernels,kernelNames)
"""
Explanation: Notice that the third filter's kernel is just the sum of the first two kernels. The response of the third filter is also a sum: it's the sum of the responses to the first two kernels.
This is also true if we put two signals in: the response to a sum of inputs is just the sum of the responses to the individual inputs.
End of explanation
"""
# Define our filter kernels
pad = 1; padding = [0]*pad
difference = padding+[1/2,0,-1/2]+padding;
average = padding+[1/2,1/2]+padding
kernels = [difference,average]
kernelNames = ['difference','average']
numKernels = len(kernels)
# Plot Our Kernels
kernelsPlot(kernels,kernelNames)
plt.suptitle('More Kernels!',fontsize=20,weight='bold',y=1.1);
"""
Explanation: So if we have a more complicated signal, what we need to do to get the response to that signal is combine scaled, time-shifted copies of the kernel.
Put more formally: any signal can be broken up into scaled delta functions at different times, and the response of our simple transformation to a scaled delta function at some time $t$ is just a scaled copy of the kernel starting at that time $t$.
Let's put that into mathematical terms.
$$
\begin{align}
response_{filter}(signal) =& \sum_t response_{filter}(signal(t)) \\
=& \sum_t response_{filter}(impulse(t))*signal(t)
\end{align}
$$
Let's simplify our notation: we'll call the signal $s$ and the response of the filter $r_f$. We'll call the response to an impulse $k$. That makes our statement above:
$$
r_{f}(s) = \sum_t k*s(t)
$$
This gives us the overall response, but what if we wanted to know the response at a particular time $t$? In order to do that, we have to keep track of all the possible contributions to the response at that time. That is, the response at $t$ will include the beginning of the response to the point at time $t$ plus the ongoing responses to the points at time $t-1$, at time $t-2$, and so on. We'll call that time difference $\Delta$.
And what is the value of the response at $t$ to a point at $t-\Delta$? If $\Delta$ is $0$, then it's the zeroth* element of the kernel, scaled by $s(t)$. If $\Delta$ is $1$, then it's the first element of the kernel, scaled by $s(t-1)$. If $\Delta$ is $2$, then it's the second element of the kernel, scaled by $s(t-2)$.
There's an obvious pattern here: the response at $t$ to a point at $t-\Delta$ is just the $\Delta$th element of the kernel times the value of the signal at $t-\Delta$.
We should give a name to $t-\Delta$. The traditional one is $\tau$.
To reiterate: the response to a point at time $\tau$ is going to be a $s(\tau)$-scaled copy of the kernel response. At any given time $t$ after $\tau$, there will usually be several copies stacked on top of each other, each of them at a different point in their response, depending on how long after $\tau$ the time $t$ is. This time difference is $\Delta$.
Let's write that out mathematically:
$$
r_{f}(s)(t) = \sum_{\tau+\Delta = t} s(\tau)*r(\Delta)
$$
where summing over $\tau+\Delta=t$ means summing over all pairs of $\tau$ and $\Delta$ that sum to $t$. This mathematical expression just says what we said above: the response of a filter to a signal is a group of time-shifted kernels scaled by the past values of the signal. This expression just narrows that down to a single point.
The mathematical expression above is exactly a convolution! It's not usually written that way, so for the benefit of anyone who has seen convolutions before, let's make some notation changes. First, we'll use the normal symbols: $f$ and $g$ instead of $r_f$ and $s$, and put a $*$ between them instead of putting parentheses. The result looks like
$$
(g*f)(t) = \sum_{\tau+\Delta = t} g(\tau)f(\Delta)
$$
It's also standard to use $\tau$ by itself, making use of the fact that $\Delta=t-\tau$:
$$
(g*f)(t) = \sum_{\tau} g(\tau)f(t-\tau)
$$
This is the more familiar expression of convolution, but I think it hides more than it shows. It's meant to evoke a dot product between the signal and a "flipped around" version of the kernel, but that ends up deeply confusing people. There's nothing "flipped around" about the kernel at all -- it's just that the later elements of the signal are contributing the earliest part of their responses. That is, as we look backwards in the signal, we find components that have made it further through their response.
* Using Python notation
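To make the sum concrete, here is a direct (and deliberately slow) implementation of the formula above, checked against np.convolve. This sketch is not part of the notebook's code.
```
import numpy as np

def convolve_by_hand(g, f):
    """Sum over all pairs (tau, delta) with tau + delta == t."""
    out = np.zeros(len(g) + len(f) - 1)
    for tau, g_tau in enumerate(g):
        for delta, f_delta in enumerate(f):
            out[tau + delta] += g_tau * f_delta
    return out

g = np.random.random(8)
f = np.array([0.5, 0.5])
print(np.allclose(convolve_by_hand(g, f), np.convolve(g, f)))   # True
```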
Convolution for Filtering
The kernels we used above were somewhat contrived: they mostly just repeated the signal. What if we wanted to do something more useful?
Below, we define two more useful kernels: one that takes the difference between two points, and another that takes their average.
End of explanation
"""
# Use Our Kernels
signals = [signals[0]]
for kernel in kernels:
signals.append(np.convolve(signals[0],kernel))
convolutionPlot(signals,signalName,kernels,kernelNames)
"""
Explanation: The average kernel has two points, both at height $1/2$. Let's plug that definition into our convolution expression:
$$
\begin{align}
r_{f}(s)(t) &= \sum_{\tau+\Delta = t} s(\tau)*r(\Delta) \\
&= s(t)*r(0) + s(t-1)*r(1) \\
&= s(t)*1/2 + s(t-1)*1/2 \\
&= \frac{s(t) + s(t-1)}{2}
\end{align}
$$
Notice that the final line is an expression for the average of two points. When we convolve this kernel with a signal, we get a moving average. How would we do a three-point moving average? What about an $N$-point moving average?
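In code, an N-point moving average is just a convolution with a kernel of N points, each of height 1/N (a sketch, not from the notebook):
```
import numpy as np

N = 3
kernel = np.ones(N) / N              # N points of height 1/N
x = np.random.random(20)             # stand-in for the signal above
smoothed = np.convolve(x, kernel)
print(smoothed[:5])
```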
The difference kernel also has two points, but one is at height $1/2$ and the other is at height $-1/2$, and there's one at height $0$ in between. Let's plug that definition into our convolution expression:
$$
\begin{align}
r_{f}(s)(t) &= \sum_{\tau+\Delta = t} s(\tau)*r(\Delta) \\
&= s(t)*r(0) + s(t-1)*r(1) + s(t-2)*r(2) \\
&= s(t)*1/2 + s(t-2)*(-1/2) \\
&= \frac{s(t) - s(t-2)}{2}
\end{align}
$$
Calculus aficionados will recognize that last line: it's the definition of a derivative! It is the change in the signal divided by the change in time -- "the rise over the run". Convolving with this kernel gives us the first derivative of our signal -- though usually, we call it a difference to remind ourselves that we aren't dealing with infinite numbers of points. What do you think might happen if we applied this kernel repeatedly?
Below, we use our kernels on the signal that we generated at the beginning of the section. You might be familiar with frequency-based analysis of signals. Can you see what our difference and average filters do to the signal in terms of frequency? Try generating new signals by rerunning the code block under Understanding Signals. It can be especially helpful to make longer ones by increasing the variable tMax.
End of explanation
"""
pmfCoin = np.asarray([1/2,1/2])
plt.figure(); locs = [0,2]; edge = 4
labels = ['0H','1H']
plt.bar(locs,pmfCoin,align='center');
probabilityPlot(plt.gca(),locs,edge,labels)
plt.suptitle("One Coin Flip",size=24,weight='bold',y=1.);
"""
Explanation: Convolution and Probability
But convolutions are much more than a simple way of expressing what simple transformations do to signals. In fact, they arise whenever we need to keep track of multiple possible contributions to a given value.
Say we want to know the probability of getting exactly two heads in three coin tosses.
First, we need to know the probability that the coin lands heads up. That plot appears below.
End of explanation
"""
pmfs = [pmfCoin]; #kernel = [1/2,1/2]
iters = 10; #change me to get different numbers of flips
mx = iters
locs = list(range(mx+2))
extendedPMF = np.hstack([pmfs[0],[0]*(mx+2-len(pmfs[0]))])
edge = 2
fig = plt.figure(figsize=(12,8)); pmfAx = plt.subplot(111);
pmfBars = pmfAx.bar(locs,extendedPMF,align='center')
labels = [str(n)+"H" for n in range(mx+1)]
probabilityPlot(plt.gca(),locs,edge,labels)
plt.suptitle("A Series of "+str(iters)+" Coin Flips",size=24,weight='bold',y=1.)
def init():
return
def animate(_,pmfs):
[pmfBars[idx].set_height(h)
for idx,h in enumerate(pmfs[-1])]
pmfs.append(np.convolve(pmfs[-1],pmfs[0])) #Convolution!
return
anim = animation.FuncAnimation(fig, animate, init_func=init,
fargs=[pmfs], frames=iters,
interval=2)
display_animation(anim,fps=10) #change the FramesPerSecond here, for longer or shorter videos
"""
Explanation: Where $1H$ refers to getting a head and $0H$ refers to getting zero heads (aka a tails).
What are the possible ways we can get two heads in three coin tosses? A table appears below.
| First toss | Second Toss | Third Toss |
|:----------:|:-----------:|:----------:|
| H | H | T |
| H | T | H |
| T | H | H |
How do we compute the probability of each of those outcomes? For an individual coin flip, the probability of a head or a tail is $1/2$. The probability of any pair $HH$, $HT$, etc. is $1/2*1/2 = 1/4$. When we look at combined events, all we have to do is multiply the probabilities.
And what is the overall probability that we get two heads? We just need to add up all the probabilities of the individual ways to get two heads.
To write that out mathematically, we split our result into two parts: the first two tosses, and the third toss. We'll call those components $A$ and $B$, and call our result $C$. In order to figure out the overall probability of our result $C$, we need to add up all the probabilities of combinations $A$ and $B$ that give us $C$. In our case, those combinations would be:
| A | B |
|:--:|:-:|
| HH | T |
| HT | H |
| TH | H |
We can find the probability of any given combination by multiplying together the probabilities of its components $A$ and $B$, but to find the probability of our outcome $C$, we need to take all combinations into account:
$$
p(C) = \sum_{A+B=C} p(A)*p(B)
$$
Now doesn't that formula look familiar! It's a convolution, where the two functions being convolved are the probability functions of $A$ and $B$!
Note that this applies for any number of repetitions. If we want to know how likely it is to get any particular number $k$ of heads in some number of coin tosses $n$, we just need to look at the $n$th convolution of the coin flip probability distribution with itself.
The code block below generates an animated version of this process, showing you the result of each convolution in turn. Feel free to change the pmfCoin above (I suggest [1/6]*6, which gives the probability distribution for a six-sided coin, also known as a "die") or change the number of iters below.
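As a quick sanity check, here is a minimal plain-NumPy sketch (separate from the animated version) that reproduces the hand-computed answer for three flips:
python
import numpy as np

pmf_coin = np.array([0.5, 0.5])        # [P(0 heads), P(1 head)] for a single flip
pmf = pmf_coin
for _ in range(2):                     # two more convolutions gives three flips in total
    pmf = np.convolve(pmf, pmf_coin)
print(pmf)                             # [0.125 0.375 0.375 0.125]
print(pmf[2])                          # P(exactly two heads in three flips) = 0.375 = 3/8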
End of explanation
"""
|
hvillanua/deep-learning
|
language-translation/dlnd_language_translation.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
source_split, target_split = source_text.split('\n'), target_text.split('\n')
source_to_int, target_to_int = [], []
for source, target in zip(source_split, target_split):
source_to_int.append([source_vocab_to_int[word] for word in source.split()])
targets = [target_vocab_to_int[word] for word in target.split()]
targets.append((target_vocab_to_int['<EOS>']))
target_to_int.append(targets)
#print(source_to_int, target_to_int)
return source_to_int, target_to_int
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
# TODO: Implement Function
#max_tar_seq_len = np.max([len(sentence) for sentence in target_int_text])
#max_sour_seq_len = np.max([len(sentence) for sentence in source_int_text])
#max_source_len = np.max([max_tar_seq_len, max_sour_seq_len])
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None])
learning_rate = tf.placeholder(tf.float32)
keep_probability = tf.placeholder(tf.float32, name='keep_prob')
target_seq_len = tf.placeholder(tf.int32, [None], name='target_sequence_length')
max_target_seq_len = tf.reduce_max(target_seq_len, name='max_target_len')
source_seq_len = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return inputs, targets, learning_rate, keep_probability, target_seq_len, max_target_seq_len, source_seq_len
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
"""
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
"""
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
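To see what this does, here is a small NumPy illustration on a toy batch (the word ids and the <GO> id of 7 are made up purely for this example; the real graph code uses tf.strided_slice and tf.concat as in the cell above):
python
import numpy as np

go_id = 7                                   # hypothetical <GO> id, for illustration only
batch = np.array([[5, 6, 8, 3],             # two made-up target sentences of word ids
                  [9, 4, 2, 3]])
ending = batch[:, :-1]                      # remove the last word id from each sentence
dec_input = np.hstack([np.full((batch.shape[0], 1), go_id), ending])
print(dec_input)                            # [[7 5 6 8]
                                            #  [7 9 4 2]]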
End of explanation
"""
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
# TODO: Implement Function
embed_seq = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
def lstm_cell():
return tf.contrib.rnn.LSTMCell(rnn_size)
rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)
output, state = tf.nn.dynamic_rnn(rnn, embed_seq, dtype=tf.float32)
return output, state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
# TODO: Implement Function
training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
train_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)
output, _ = tf.contrib.seq2seq.dynamic_decode(train_decoder, impute_finished=False, maximum_iterations=max_summary_length)
return output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
# TODO: Implement Function
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer)
output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True, maximum_iterations=max_target_sequence_length)
return output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
#embed_seq = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, decoding_embedding_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
def lstm_cell():
return tf.contrib.rnn.LSTMCell(rnn_size)
rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope("decode"):
training_output = decoding_layer_train(encoder_state, rnn, dec_embed_input,
target_sequence_length, max_target_sequence_length, output_layer, keep_prob)
with tf.variable_scope("decode", reuse=True):
inference_output = decoding_layer_infer(encoder_state, rnn, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size,
output_layer, batch_size, keep_prob)
return training_output, inference_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param max_target_sentence_length: Maximum target sequence length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
_, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_output, inference_output = decoding_layer(dec_input, enc_state, target_sequence_length,
max_target_sentence_length, rnn_size, num_layers,
target_vocab_to_int, target_vocab_size, batch_size,
keep_prob, dec_embedding_size)
return training_output, inference_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
"""
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 254
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.5
display_step = 10
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
"""
Explanation: Batch and pad the source and target sequences
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
sentence = sentence.lower()
sentence_to_id = [vocab_to_int[word] if word in vocab_to_int.keys() else vocab_to_int['<UNK>'] for word in sentence.split(' ')]
return sentence_to_id
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
andrew-lundgren/detchar
|
Notebooks/NoiseHunting/IncoherentSubtraction.ipynb
|
gpl-3.0
|
fftlen=32
overlap=24
coh=darm.coherence(aux,fftlen,overlap)
psd=darm.psd(fftlen,overlap)
"""
Explanation: Find the PSD of DARM, and the coherence with the aux channel.
End of explanation
"""
coh_long=zeros(len(psd),dtype=coh.dtype)
coh_long[:len(coh)]=coh.value
psd_sub=(1.-coh_long)*psd
p1=psd.plot()
p1.gca().plot(psd_sub,label='Subtracting DHARD\_Y')
p1.set_xlim(10,100)
p1.set_ylim(1e-47,1e-38)
p1=(psd**0.5).plot()
p1.gca().plot(psd_sub**0.5,label='Subtracting DHARD\_Y')
p1.set_xlim(10,100)
p1.set_ylim(1e-24,1e-19)
def eff_range(my_psd,f_low,f_high):
""" Calculate the sensemon range. This is just a quick
estimation. It's mostly useful for comparing ranges
with different PSDs or different frequency limits."""
norm=1.8e-21 # Normalizing factor, eyeballed
idx1=int(f_low/my_psd.df.value)
idx2=int(f_high/my_psd.df.value)
integrand=(my_psd.frequencies**(-7./3.)/my_psd).value
return norm*sqrt(sum(integrand[idx1:idx2]))
eff_range(psd_sub,20,300)/eff_range(psd,20,300)
"""
Explanation: Subtract the predicted effect of the aux channel on the PSD. Noises should add incoherently.
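In equation form, the cell above removes the coherent fraction of the DARM power at each frequency:
$$
P_{\mathrm{sub}}(f) = \left[\,1 - C(f)\,\right]\,P_{\mathrm{DARM}}(f)
$$
where $C(f)$ is the coherence computed earlier, zero-padded to match the length of the PSD.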
End of explanation
"""
|
chapman-phys227-2016s/hw-1-seama107
|
Homework1Notebook.ipynb
|
mit
|
def some_function(x):
return x**4 + x**2
print(p1.adaptive_trapezint(some_function, 0, 20))
"""
Explanation: Homework 2
Michael Seaman
2/12/16
Problem 3.8: Adaptive Trapezoid Approximation
Using the trapezoid approximation to find areas under the curve, we can get a good estimate of a bounded integral. We can choose how many trapezoids to use by specifying a maximum error $\epsilon$ and then using the following equation:
$n = (b-a)\sqrt{\frac{(b-a)\,\max\left|f''(x)\right|}{12\epsilon}}$
In this example, "some_function" is $f(x) = x^{4} + x^{2}$ evaluated from 0 to 20.
Actual result? $$\int_{0}^{20} (x^4 + x^2) dx = 642667 $$
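The p1 module is imported elsewhere in the notebook; the following is only a minimal sketch of what adaptive_trapezint might look like (the finite-difference estimate of $\max|f''(x)|$ is an assumption about the implementation, not the original code):
python
import numpy as np

def adaptive_trapezint(f, a, b, eps=1e-5):
    # trapezoid rule with n chosen from the error bound above
    h = 1e-4                                          # step for the numerical f''
    xs = np.linspace(a, b, 1001)
    f2 = (f(xs + h) - 2*f(xs) + f(xs - h)) / h**2     # approximate f'' on a grid
    n = max(1, int(np.ceil((b - a) * np.sqrt((b - a) * np.abs(f2).max() / (12*eps)))))
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (0.5*y[0] + y[1:-1].sum() + 0.5*y[-1])

print(adaptive_trapezint(lambda x: x**4 + x**2, 0, 20))  # close to 642666.67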
End of explanation
"""
p2.print_error_results()
"""
Explanation: Problem 3.15: Fourier series Approximation
The function we're trying to model is a piecewise function following:
$$
f(t) = \left\{
\begin{array}{ll}
1 & 0 < t < T/2 \\
0 & t = T/2 \\
-1 & T/2 < t < T
\end{array}
\right.
$$
We're trying to approximate it with the sinusoidal Fourier approximation:
$S(t;n) = \frac{4}{\pi}\sum_{i=1}^{n}\frac{1}{2i-1}\sin\left(\frac{2(2i-1)\pi t}{T}\right)$
We will see how accurate S is at approximating f by trying different values of n (the number of sine waves used in the approximation) and different values for $\alpha$, where $\alpha=\frac{t}{T}$.
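The p2 module is external to this notebook; a minimal sketch of the sum $S(t;n)$ and the point-wise error check described above (the function and variable names here are assumptions, not the original module's API) could look like:
python
import numpy as np

def S(t, n, T):
    # partial Fourier sum approximating the square wave
    i = np.arange(1, n + 1)
    return (4/np.pi) * np.sum(np.sin(2*(2*i - 1)*np.pi*t/T) / (2*i - 1))

def f_exact(t, T):
    # the piecewise square wave defined above (one period)
    if 0 < t < T/2:
        return 1.0
    elif t == T/2:
        return 0.0
    return -1.0

T = 2*np.pi
for alpha in (0.01, 0.25, 0.49):
    for n in (1, 3, 20, 200):
        t = alpha*T
        print(alpha, n, abs(f_exact(t, T) - S(t, n, T)))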
End of explanation
"""
p3.application()
"""
Explanation: Problem 3.18 Numerical differentiation
The "diff" function follows the formula:
$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}$
in order to approximate the derivative of f at x using a very small value for h.
Below we use the functions:
$f_{1}(x) = e^{x}$ at $x=0$
$f_{2}(x) = e^{-2x^{2}}$ at $x=0$
$f_{3}(x) = \cos{x}$ at $x=2\pi$
$f_{4}(x) = \ln{x}$ at $x=1$
and compare our differentiation function's output with h = 0.01 against the exact result.
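The p3 module is not shown here; a minimal sketch of the diff function and the comparison described above (the actual module may differ) is:
python
import numpy as np

def diff(f, x, h=0.01):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2*h)

# (function, point, exact derivative at that point)
cases = [(np.exp, 0.0, 1.0),                      # d/dx e^x at x = 0
         (lambda x: np.exp(-2*x**2), 0.0, 0.0),   # d/dx e^(-2x^2) at x = 0
         (np.cos, 2*np.pi, 0.0),                  # d/dx cos(x) at x = 2*pi
         (np.log, 1.0, 1.0)]                      # d/dx ln(x) at x = 1
for f, x, exact in cases:
    approx = diff(f, x)
    print(approx, abs(approx - exact))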
End of explanation
"""
p4.find_primes(10)
print(p4.find_primes(500))
"""
Explanation: 3.34 Finding Prime numbers
This application uses the sieve of Eratosthenes algorithm to find prime numbers less than or equal to the input. The algorithm crosses off the multiples of the smallest candidate that is still prime, removes them from the candidates, and then moves up to the next.
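The p4 module is external; a minimal sketch of the sieve (the real find_primes may be organized differently) is:
python
def find_primes(n):
    # return all primes <= n using the sieve of Eratosthenes
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False          # 0 and 1 are not prime
    for p in range(2, int(n**0.5) + 1):
        if is_prime[p]:
            for multiple in range(p*p, n + 1, p):
                is_prime[multiple] = False
    return [p for p, flag in enumerate(is_prime) if flag]

print(find_primes(10))   # [2, 3, 5, 7]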
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.0/tutorials/beaming_boosting.ipynb
|
gpl-3.0
|
!pip install -I "phoebe>=2.0,<2.1"
"""
Explanation: Beaming and Boosting
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b['rpole@primary'] = 1.8
b['rpole@secondary'] = 0.96
b['teff@primary'] = 10000
b['gravb_bol@primary'] = 1.0
b['teff@secondary'] = 5200
b['gravb_bol@secondary'] = 0.32
b['q@binary'] = 0.96/1.8
b['incl@binary'] = 88
b['period@binary'] = 1.0
b['sma@binary'] = 6.0
"""
Explanation: Let's make our system so that the boosting effects will be quite noticeable.
End of explanation
"""
times = np.linspace(0,1,101)
b.add_dataset('lc', times=times, dataset='lc01')
b.add_dataset('rv', times=times, dataset='rv01')
b.add_dataset('mesh', times=times[::10], dataset='mesh01')
"""
Explanation: We'll add lc, rv, and mesh datasets so that we can see how they're each affected by beaming and boosting.
End of explanation
"""
b.set_value('irrad_method', 'none')
print(b['boosting_method@compute'])
print(b['boosting_method@compute'].choices)
"""
Explanation: Relevant Parameters
End of explanation
"""
b.run_compute(boosting_method='none', model='boosting_none')
b.run_compute(boosting_method='linear', model='boosting_linear')
axs, artists = b['lc01'].plot()
leg = plt.legend()
axs, artists = b['lc01'].plot(ylim=(1.01,1.03))
leg = plt.legend()
"""
Explanation: Influence on Light Curves (fluxes)
End of explanation
"""
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['rv01@boosting_none'].plot(ax=ax1)
axs, artists = b['rv01@boosting_linear'].plot(ax=ax2)
"""
Explanation: Influence on Radial Velocities
End of explanation
"""
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['mesh@boosting_none'].plot(time=0.6, facecolor='boost_factors@lc01', edgecolor=None, ax=ax1)
axs, artists = b['mesh@boosting_linear'].plot(time=0.6, facecolor='boost_factors@lc01', edgecolor=None, ax=ax2)
"""
Explanation: Influence on Meshes
End of explanation
"""
|
tensorflow/docs-l10n
|
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
|
apache-2.0
|
#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
# This Colab requires TF 2.5.
!pip install -U "tensorflow>=2.5"
import os
import pathlib
import matplotlib
import matplotlib.pyplot as plt
import io
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from six.moves.urllib.request import urlopen
import tensorflow as tf
import tensorflow_hub as hub
tf.get_logger().setLevel('ERROR')
"""
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_object_detection"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_object_detection.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_object_detection.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf2_object_detection.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/tensorflow/collections/object_detection/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a>
</td>
</table>
TensorFlow Hub Object Detection Colab
Welcome to the TensorFlow Hub Object Detection Colab! This notebook will take you through the steps of running an "out-of-the-box" object detection model on images.
More models
This collection contains TF2 object detection models that have been trained on the COCO 2017 dataset. Here you can find all object detection models that are currently hosted on tfhub.dev.
Imports and Setup
Let's start with the base imports.
End of explanation
"""
# @title Run this!!
def load_image_into_numpy_array(path):
"""Load an image from file into a numpy array.
Puts image into numpy array to feed into tensorflow graph.
Note that by convention we put it into a numpy array with shape
(height, width, channels), where channels=3 for RGB.
Args:
path: the file path to the image
Returns:
uint8 numpy array with shape (img_height, img_width, 3)
"""
image = None
if(path.startswith('http')):
response = urlopen(path)
image_data = response.read()
image_data = BytesIO(image_data)
image = Image.open(image_data)
else:
image_data = tf.io.gfile.GFile(path, 'rb').read()
image = Image.open(BytesIO(image_data))
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(1, im_height, im_width, 3)).astype(np.uint8)
ALL_MODELS = {
'CenterNet HourGlass104 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512/1',
'CenterNet HourGlass104 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512_kpts/1',
'CenterNet HourGlass104 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024/1',
'CenterNet HourGlass104 Keypoints 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024_kpts/1',
'CenterNet Resnet50 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512/1',
'CenterNet Resnet50 V1 FPN Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512_kpts/1',
'CenterNet Resnet101 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet101v1_fpn_512x512/1',
'CenterNet Resnet50 V2 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512/1',
'CenterNet Resnet50 V2 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512_kpts/1',
'EfficientDet D0 512x512' : 'https://tfhub.dev/tensorflow/efficientdet/d0/1',
'EfficientDet D1 640x640' : 'https://tfhub.dev/tensorflow/efficientdet/d1/1',
'EfficientDet D2 768x768' : 'https://tfhub.dev/tensorflow/efficientdet/d2/1',
'EfficientDet D3 896x896' : 'https://tfhub.dev/tensorflow/efficientdet/d3/1',
'EfficientDet D4 1024x1024' : 'https://tfhub.dev/tensorflow/efficientdet/d4/1',
'EfficientDet D5 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d5/1',
'EfficientDet D6 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d6/1',
'EfficientDet D7 1536x1536' : 'https://tfhub.dev/tensorflow/efficientdet/d7/1',
'SSD MobileNet v2 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2',
'SSD MobileNet V1 FPN 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v1/fpn_640x640/1',
'SSD MobileNet V2 FPNLite 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_320x320/1',
'SSD MobileNet V2 FPNLite 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_640x640/1',
'SSD ResNet50 V1 FPN 640x640 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_640x640/1',
'SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_1024x1024/1',
'SSD ResNet101 V1 FPN 640x640 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_640x640/1',
'SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_1024x1024/1',
'SSD ResNet152 V1 FPN 640x640 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_640x640/1',
'SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1',
'Faster R-CNN ResNet50 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_640x640/1',
'Faster R-CNN ResNet50 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_1024x1024/1',
'Faster R-CNN ResNet50 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_800x1333/1',
'Faster R-CNN ResNet101 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_640x640/1',
'Faster R-CNN ResNet101 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_1024x1024/1',
'Faster R-CNN ResNet101 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_800x1333/1',
'Faster R-CNN ResNet152 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_640x640/1',
'Faster R-CNN ResNet152 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_1024x1024/1',
'Faster R-CNN ResNet152 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_800x1333/1',
'Faster R-CNN Inception ResNet V2 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_640x640/1',
'Faster R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_1024x1024/1',
'Mask R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1'
}
IMAGES_FOR_TEST = {
'Beach' : 'models/research/object_detection/test_images/image2.jpg',
'Dogs' : 'models/research/object_detection/test_images/image1.jpg',
# By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg
'Naxos Taverna' : 'https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg',
# Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg
'Beatles' : 'https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg',
# By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg
'Phones' : 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg',
# Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg
'Birds' : 'https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg',
}
COCO17_HUMAN_POSE_KEYPOINTS = [(0, 1),
(0, 2),
(1, 3),
(2, 4),
(0, 5),
(0, 6),
(5, 7),
(7, 9),
(6, 8),
(8, 10),
(5, 6),
(5, 11),
(6, 12),
(11, 12),
(11, 13),
(13, 15),
(12, 14),
(14, 16)]
"""
Explanation: Utilities
Run the following cell to create some utils that will be needed later:
Helper method to load an image
Map of Model Name to TF Hub handle
List of tuples with Human Keypoints for the COCO 2017 dataset. This is needed for models with keypoints.
End of explanation
"""
# Clone the tensorflow models repository
!git clone --depth 1 https://github.com/tensorflow/models
"""
Explanation: Visualization tools
To visualize the images with the proper detected boxes, keypoints and segmentation, we will use the TensorFlow Object Detection API. To install it we will clone the repo.
End of explanation
"""
%%bash
sudo apt install -y protobuf-compiler
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
"""
Explanation: Installing the Object Detection API
End of explanation
"""
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.utils import ops as utils_ops
%matplotlib inline
"""
Explanation: Now we can import the dependencies we will need later
End of explanation
"""
PATH_TO_LABELS = './models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
"""
Explanation: Load label map data (for plotting).
Label maps map index numbers to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
For simplicity, we are going to load the label map from the same repository from which we loaded the Object Detection API code.
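For example, after running the cell above you can inspect a single entry of the resulting dictionary (the exact names come from the COCO label map loaded above):
python
print(category_index[5])   # e.g. {'id': 5, 'name': 'airplane'}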
End of explanation
"""
#@title Model Selection { display-mode: "form", run: "auto" }
model_display_name = 'CenterNet HourGlass104 Keypoints 512x512' # @param ['CenterNet HourGlass104 512x512','CenterNet HourGlass104 Keypoints 512x512','CenterNet HourGlass104 1024x1024','CenterNet HourGlass104 Keypoints 1024x1024','CenterNet Resnet50 V1 FPN 512x512','CenterNet Resnet50 V1 FPN Keypoints 512x512','CenterNet Resnet101 V1 FPN 512x512','CenterNet Resnet50 V2 512x512','CenterNet Resnet50 V2 Keypoints 512x512','EfficientDet D0 512x512','EfficientDet D1 640x640','EfficientDet D2 768x768','EfficientDet D3 896x896','EfficientDet D4 1024x1024','EfficientDet D5 1280x1280','EfficientDet D6 1280x1280','EfficientDet D7 1536x1536','SSD MobileNet v2 320x320','SSD MobileNet V1 FPN 640x640','SSD MobileNet V2 FPNLite 320x320','SSD MobileNet V2 FPNLite 640x640','SSD ResNet50 V1 FPN 640x640 (RetinaNet50)','SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)','SSD ResNet101 V1 FPN 640x640 (RetinaNet101)','SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)','SSD ResNet152 V1 FPN 640x640 (RetinaNet152)','SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)','Faster R-CNN ResNet50 V1 640x640','Faster R-CNN ResNet50 V1 1024x1024','Faster R-CNN ResNet50 V1 800x1333','Faster R-CNN ResNet101 V1 640x640','Faster R-CNN ResNet101 V1 1024x1024','Faster R-CNN ResNet101 V1 800x1333','Faster R-CNN ResNet152 V1 640x640','Faster R-CNN ResNet152 V1 1024x1024','Faster R-CNN ResNet152 V1 800x1333','Faster R-CNN Inception ResNet V2 640x640','Faster R-CNN Inception ResNet V2 1024x1024','Mask R-CNN Inception ResNet V2 1024x1024']
model_handle = ALL_MODELS[model_display_name]
print('Selected model:'+ model_display_name)
print('Model Handle at TensorFlow Hub: {}'.format(model_handle))
"""
Explanation: Build a detection model and load pre-trained model weights
Here we will choose which Object Detection model we will use.
Select the architecture and it will be loaded automatically.
If you want to change the model to try other architectures later, just change the next cell and execute following ones.
Tip: if you want to read more details about the selected model, you can follow the link (model handle) and read additional documentation on TF Hub. After you select a model, we will print the handle to make it easier.
End of explanation
"""
print('loading model...')
hub_model = hub.load(model_handle)
print('model loaded!')
"""
Explanation: Loading the selected model from TensorFlow Hub
Here we just need the model handle that was selected, and we use the TensorFlow Hub library to load it into memory.
End of explanation
"""
#@title Image Selection (don't forget to execute the cell!) { display-mode: "form"}
selected_image = 'Beach' # @param ['Beach', 'Dogs', 'Naxos Taverna', 'Beatles', 'Phones', 'Birds']
flip_image_horizontally = False #@param {type:"boolean"}
convert_image_to_grayscale = False #@param {type:"boolean"}
image_path = IMAGES_FOR_TEST[selected_image]
image_np = load_image_into_numpy_array(image_path)
# Flip horizontally
if(flip_image_horizontally):
image_np[0] = np.fliplr(image_np[0]).copy()
# Convert image to grayscale
if(convert_image_to_grayscale):
image_np[0] = np.tile(
np.mean(image_np[0], 2, keepdims=True), (1, 1, 3)).astype(np.uint8)
plt.figure(figsize=(24,32))
plt.imshow(image_np[0])
plt.show()
"""
Explanation: Loading an image
Let's try the model on a simple image. To help with this, we provide a list of test images.
Here are some simple things to try out if you are curious:
* Try running inference on your own images: just upload them to Colab and load them the same way it's done in the cell below.
* Modify some of the input images and see if detection still works. Some simple things to try out here include flipping the image horizontally, or converting to grayscale (note that we still expect the input image to have 3 channels).
Be careful: when using images with an alpha channel, the model expects 3-channel images, and the alpha channel will count as a 4th.
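If one of your own images does have an alpha channel, a simple fix (a small PIL sketch with a hypothetical file name, applied before converting the image to a NumPy array) is:
python
from PIL import Image

img = Image.open('my_image.png')   # hypothetical path to an RGBA image
img = img.convert('RGB')           # drop the alpha channel so only 3 channels remain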
End of explanation
"""
# running inference
results = hub_model(image_np)
# different object detection models have additional results
# all of them are explained in the documentation
result = {key:value.numpy() for key,value in results.items()}
print(result.keys())
"""
Explanation: Doing the inference
To do the inference we just need to call our TF Hub loaded model.
Things you can try:
* Print out result['detection_boxes'] and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).
* Inspect other output keys present in the result. Full documentation can be found on the model's documentation page (point your browser to the model handle printed earlier).
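For example, a quick way to peek at the first few detections (all outputs are batched, hence the [0]) is:
python
print(result['detection_boxes'][0][:5])    # normalized [ymin, xmin, ymax, xmax]
print(result['detection_scores'][0][:5])   # confidence score of each detection
print(result['detection_classes'][0][:5])  # numeric class ids (see category_index)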
End of explanation
"""
label_id_offset = 0
image_np_with_detections = image_np.copy()
# Use keypoints if available in detections
keypoints, keypoint_scores = None, None
if 'detection_keypoints' in result:
keypoints = result['detection_keypoints'][0]
keypoint_scores = result['detection_keypoint_scores'][0]
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections[0],
result['detection_boxes'][0],
(result['detection_classes'][0] + label_id_offset).astype(int),
result['detection_scores'][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.30,
agnostic_mode=False,
keypoints=keypoints,
keypoint_scores=keypoint_scores,
keypoint_edges=COCO17_HUMAN_POSE_KEYPOINTS)
plt.figure(figsize=(24,32))
plt.imshow(image_np_with_detections[0])
plt.show()
"""
Explanation: Visualizing the results
Here is where we will need the TensorFlow Object Detection API to show the boxes from the inference step (and the keypoints when available).
The full documentation of this method can be seen here.
Here you can, for example, set min_score_thresh to other values (between 0 and 1) to allow more detections in or to filter out more detections.
End of explanation
"""
# Handle models with masks:
image_np_with_mask = image_np.copy()
if 'detection_masks' in result:
# we need to convert np.arrays to tensors
detection_masks = tf.convert_to_tensor(result['detection_masks'][0])
detection_boxes = tf.convert_to_tensor(result['detection_boxes'][0])
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes,
image_np.shape[1], image_np.shape[2])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
result['detection_masks_reframed'] = detection_masks_reframed.numpy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_mask[0],
result['detection_boxes'][0],
(result['detection_classes'][0] + label_id_offset).astype(int),
result['detection_scores'][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.30,
agnostic_mode=False,
instance_masks=result.get('detection_masks_reframed', None),
line_thickness=8)
plt.figure(figsize=(24,32))
plt.imshow(image_np_with_mask[0])
plt.show()
"""
Explanation: [Optional]
Among the available object detection models there's Mask R-CNN, and the output of this model allows instance segmentation.
To visualize it we will use the same method we did before, but adding an additional parameter: instance_masks=output_dict.get('detection_masks_reframed', None)
End of explanation
"""
|
NYUDataBootcamp/Projects
|
UG_F16/Kukoff-NYC.ipynb
|
mit
|
# import packages
import pandas as pd
import matplotlib.pyplot as plt
import sys
from itertools import cycle, islice
import math
import numpy as np
%matplotlib inline
"""
Explanation: Misdemeanor Amounts New York City
Data Bootcamp Final Project (Fall 2016)
by Zak Kukoff (kukoff@nyu.edu)
About this project
There's been much discussion in New York about the city's fluctuating amount of petty crime under Mayor Michael Bloomberg. Using publicly available data from the City of New York's website, I wanted to check and see if there really was a change in petty crime over the course of his three terms as Mayor of New York City. For the purposes of this project, I chose to consider only the rate of misdemeanors in New York City as representative of petty crime.
Importing packages and data
I began by importing a variety of Python packages that would allow me to properly plot and analyze the data found on the City's website. I then imported the data found on the City of New York's data website: https://data.cityofnewyork.us/Public-Safety/Historical-New-York-City-Crime-Data/hqhv-9zeg.
That dataset includes a variety of crime data on not only misdemeanors but also on felonies and violation offenses. For this project, I'll only be examining the file called Misdemeanor Offenses 2000-2011.xls
End of explanation
"""
path = '/Users/zak/Dropbox/*Classes Fall 2016/Data Bootcamp/Misdemeanor Data.csv'
data = pd.read_csv(path)
data.columns
"""
Explanation: After importing the required packages, I then imported the data I previously downloaded. For the below code, replace the path to the data file with its location on your computer. For the sake of convenience, I created a CSV file from the original Excel data so that it would be easier to work with directly.
End of explanation
"""
data.plot(kind ='bar')
path = '/Users/zak/Dropbox/*Classes Fall 2016/Data Bootcamp/Misdemeanor Data.csv'
newdata = pd.read_csv(path, skiprows = 0-17, usecols = [3,4,5,6,7,8,9,10])
newd1 = newdata.transpose()
newd1.plot(kind='bar')
"""
Explanation: This gives us a sense for the data we have to work with: at both the individual offense and total offenses levels, we have data on the number of offenses from 2000-2011. Because this doesn't fully cover Mayor Bloomberg's third term in office (which began in 2010 and ended in 2013), we'll only be examining his first two terms in office: 2002 through 2009.
Plotting the total number of offenses during Mayor Bloomberg's two terms
To begin with, I'll plot all of the petty crimes committed in New York over the full period of the dataset. The following labels apply to the below numbers on the graph:
0 MISDEMEANOR POSSESSION OF STOLEN PROPERTY
1 MISDEMEANOR SEX CRIMES (4)
2 MISDEMEANOR DANGEROUS DRUGS (1)
3 MISDEMEANOR DANGEROUS WEAPONS (5)
4 PETIT LARCENY
5 ASSAULT 3 & RELATED OFFENSES
6 INTOXICATED & IMPAIRED DRIVING
7 VEHICLE AND TRAFFIC LAWS
8 MISD. CRIMINAL MISCHIEF & RELATED OFFENSES
9 CRIMINAL TRESPASS
10 UNAUTHORIZED USE OF A VEHICLE
11 OFFENSES AGAINST THE PERSON (7)
12 OFFENSES AGAINST PUBLIC ADMINISTRATION (2)
13 ADMINISTRATIVE CODE (6)
14 FRAUDS (3)
15 AGGRAVATED HARASSMENT 2
16 OTHER MISDEMEANORS (8)
17 TOTAL MISDEMEANOR OFFENSES
Then, I'll plot the total number of crimes over the years 2002-2009. That will give us a baseline idea of the total amount of crime in the city of New York over those years.
End of explanation
"""
|
yashdeeph709/Algorithms
|
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Errors and Exceptions Handling-checkpoint.ipynb
|
apache-2.0
|
print 'Hello
"""
Explanation: Errors and Exception Handling
In this lecture we will learn about Errors and Exception Handling in Python. You've definitely already encountered errors by this point in the course. For example:
End of explanation
"""
try:
f = open('testfile','w')
f.write('Test write this')
except IOError:
# This will only check for an IOError exception and then execute this print statement
print "Error: Could not find file or read data"
else:
print "Content written successfully"
f.close()
"""
Explanation: Note how we get a SyntaxError, with the further description that it was an EOL (End of Line Error) while scanning the string literal. This is specific enough for us to see that we forgot a single quote at the end of the line. Understanding these various error types will help you debug your code much faster.
This type of error and description is known as an Exception. Even if a statement or expression is syntactically correct, it may cause an error when an attempt is made to execute it. Errors detected during execution are called exceptions and are not unconditionally fatal.
You can check out the full list of built-in exceptions here. Now let's learn how to handle errors and exceptions in our own code.
try and except
The basic terminology and syntax used to handle errors in Python is the try and except statements. The code which can cause an exception to occur is put in the try block, and the handling of the exception is implemented in the except block of code. The syntax form is:
try:
You do your operations here...
...
except ExceptionI:
If there is ExceptionI, then execute this block.
except ExceptionII:
If there is ExceptionII, then execute this block.
...
else:
If there is no exception then execute this block.
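Before looking at a concrete file example, here is a small standalone sketch (mine, not from the course) of the multi-except pattern above, written with parenthesized print calls so it runs under both Python 2 and 3:

```python
def safe_divide(a, b):
    try:
        result = a / b
    except ZeroDivisionError:
        print("Cannot divide by zero")
        return None
    except TypeError:
        print("Both arguments must be numbers")
        return None
    else:
        return result

print(safe_divide(10, 2))    # 5 (5.0 on Python 3)
print(safe_divide(10, 0))    # handled by the ZeroDivisionError block
print(safe_divide(10, "x"))  # handled by the TypeError block
```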
We can also check for any exception by just using except:. To get a better understanding of all this, let's check out an example: we will look at some code that opens and writes to a file:
End of explanation
"""
try:
f = open('testfile','r')
f.write('Test write this')
except IOError:
    # This will only check for an IOError exception and then execute this print statement
print "Error: Could not find file or read data"
else:
print "Content written succesfully"
f.close()
"""
Explanation: Now let's see what would happen if we did not have write permission (opening the file only with 'r'):
End of explanation
"""
try:
f = open('testfile','r')
f.write('Test write this')
except:
    # This will check for any exception and then execute this print statement
print "Error: Could not find file or read data"
else:
print "Content written succesfully"
f.close()
"""
Explanation: Great! Notice how we only printed a statement! The code still ran and we were able to continue doing actions and running code blocks. This is extremely useful when you have to account for possible input errors in your code. You can be prepared for the error and keep running code, instead of your code just breaking as we saw above.
We could have also just said except: if we weren't sure what exception would occur. For example:
End of explanation
"""
try:
f = open("testfile", "w")
f.write("Test write statement")
finally:
print "Always execute finally code blocks"
"""
Explanation: Great! Now we don't actually need to memorize that list of exception types! Now what if we wanted to keep running code after the exception occurred? This is where finally comes in.
finally
The finally: block of code will always be run regardless if there was an exception in the try code block. The syntax is:
try:
Code block here
...
Due to any exception, this code may be skipped!
finally:
This code block would always be executed.
For example:
End of explanation
"""
def askint():
try:
val = int(raw_input("Please enter an integer: "))
except:
print "Looks like you did not enter an integer!"
finally:
print "Finally, I executed!"
print val
askint()
askint()
"""
Explanation: We can use this in conjunction with except. Let's see a new example that will take into account a user putting in the wrong input:
End of explanation
"""
def askint():
try:
val = int(raw_input("Please enter an integer: "))
except:
print "Looks like you did not enter an integer!"
val = int(raw_input("Try again-Please enter an integer: "))
finally:
print "Finally, I executed!"
print val
askint()
"""
Explanation: Notice how we got an error when trying to print val (because it was never properly assigned). Let's remedy this by asking the user again and checking to make sure the input type is an integer:
End of explanation
"""
def askint():
while True:
try:
val = int(raw_input("Please enter an integer: "))
except:
print "Looks like you did not enter an integer!"
continue
else:
print 'Yep thats an integer!'
break
finally:
print "Finally, I executed!"
print val
askint()
"""
Explanation: Hmmm...that only did one check. How can we continually keep checking? We can use a while loop!
End of explanation
"""
|
rcrehuet/Python_for_Scientists_2017
|
notebooks/2_0_Loops.ipynb
|
gpl-3.0
|
for t in range(41):
if t % 5 == 0:
print(t+273.15)
for t in range(0,41,5):
print(t+273.15)
"""
Explanation: Introductory exercises: Loops
Celsius to Kelvin
Print the conversion from Celsius degrees to Kelvin, from 0ºC to 40ºC, with a step of 5. That is, 0, 5, 10, 15...
End of explanation
"""
for n in range(26):
#Finish
"""
Explanation: Multiples
Print all the multiples of 3 from 0 to 25 that are not multiples of 5 or 7.
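One possible solution sketch (mine, not the official answer); a list comprehension with the same condition answers the follow-up list question:

```python
for n in range(26):
    if n % 3 == 0 and n % 5 != 0 and n % 7 != 0:
        print(n)
```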
End of explanation
"""
for i in range(10):
print('before:', i)
if i==3 or i==7: i=i+2 #Tying to skip values 4 and 8...
print('after: ',i)
print('----------')
"""
Explanation: Now, instead of printing, generate a list of all the multiples of 3 from 0 to 25 that are not multiples of 5 or 7.
Messing with loops
What do you expect this loop to do? Check it.
End of explanation
"""
queue=['Mariona','Ramon', 'Joan', 'Quique', 'Laia']
while queue:
print("popping name : ",queue.pop(0), "remaining", queue)
queue  # check: the list is now empty (popping again here would raise an IndexError)
"""
Explanation: From the previous example you should deduce that it is better not to modify the loop variable. So now translate the previous incorrect loop into a while loop that really skips i==4 and i==8.
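One possible while-loop version (a sketch; many variants work):

```python
i = 0
while i < 10:
    print('before:', i)
    if i == 3 or i == 7:
        i = i + 2   # jump straight past 4 (and past 8)
    else:
        i = i + 1
    print('after: ', i)
    print('----------')
```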
Queuing system
You have a list that should act as a kind of queueing system:
queue=['Mariona','Ramon', 'Joan', 'Quique', 'Laia']
You want to do something (say print it) with each element of the list, and then remove it from the list (pop can be a useful method). Check that at the end, the list is empty.
End of explanation
"""
import math
math.factorial(100)
result = 1
for i in range(100):
result = result*(i+1)
result
result = 1
for i in range(1,101):
result = result*i
result = str(result)
suma = 0
for caracter in result:
suma = suma + int(caracter)
suma
"""
Explanation: Factorial
Find the sum of the digits in 100! (answer is 648)
End of explanation
"""
# Check different possibilities
keywords={'basis':'6-31+G', 'SCF':['XQC', 'Tight'], 'Opt':['TS', 'NoEigenTest']}
#keywords={'basis':'6-31G', 'SCF':['XQC', 'Tight'], 'Opt':['TS', 'NoEigenTest']}
#keywords={'basis':'6-31+G', 'SCF':['XQC',], 'Opt':['TS', 'NoEigenTest']}
#keywords={'basis':'6-31+G', 'Opt':['TS', 'NoEigenTest']}
if #Finish...
print('When using diffuse functions, "Tight" should be used in the SCF!')
"""
Explanation: Dictionaries
Checking for keys
Software often uses keywords to define the type of calculation to be performed. As an example, here we will use the quantum chemistry software Gaussian. Imagine we have stored Gaussian keywords in a dictionary as in:
keywords={'basis':'6-31+G', 'SCF':['XQC', 'Tight'], 'Opt':['TS', 'NoEigenTest']}
Check that if there is a diffuse function in the basis set, SCF has 'tight' as one of its keywords.
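One possible check (a sketch; the keyword spellings are taken from the dictionary above, and .get() keeps it from crashing when 'SCF' is absent):

```python
keywords = {'basis': '6-31+G', 'SCF': ['XQC', 'Tight'], 'Opt': ['TS', 'NoEigenTest']}

# A '+' in the basis name marks diffuse functions.
if '+' in keywords['basis'] and 'Tight' not in keywords.get('SCF', []):
    print('When using diffuse functions, "Tight" should be used in the SCF!')
```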
End of explanation
"""
#Finish
"""
Explanation: What happens if the 'SCF' keyword is not present as in here?
keywords={'basis':'6-31+G', 'Opt':['TS', 'NoEigenTest']}
End of explanation
"""
def common_keys(d1, d2):
"""
Return the keys shared by dictionaries d1 and d2
returns a set
"""
#Finish
#Test it
d1 = makedict(red=1, green=2, blue=3)
d2 = makedict(purple=3, green=5, blue=6, yellow=1)
print(common_keys(d1, d2))
"""
Explanation: Common keys
Given two dictionaries, find the keys that are present in both dictionaries. (Hint: you can use sets)
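One possible implementation using sets (note that the test cell relies on a makedict helper defined elsewhere in the course; plain dict(...) works just as well):

```python
def common_keys(d1, d2):
    # Return the keys shared by dictionaries d1 and d2, as a set.
    return set(d1) & set(d2)

print(common_keys(dict(red=1, green=2, blue=3),
                  dict(purple=3, green=5, blue=6, yellow=1)))  # {'green', 'blue'}
```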
End of explanation
"""
gencode = {
'ATA':'I', 'ATC':'I', 'ATT':'I', 'ATG':'M',
'ACA':'T', 'ACC':'T', 'ACG':'T', 'ACT':'T',
'AAC':'N', 'AAT':'N', 'AAA':'K', 'AAG':'K',
'AGC':'S', 'AGT':'S', 'AGA':'R', 'AGG':'R',
'CTA':'L', 'CTC':'L', 'CTG':'L', 'CTT':'L',
'CCA':'P', 'CCC':'P', 'CCG':'P', 'CCT':'P',
'CAC':'H', 'CAT':'H', 'CAA':'Q', 'CAG':'Q',
'CGA':'R', 'CGC':'R', 'CGG':'R', 'CGT':'R',
'GTA':'V', 'GTC':'V', 'GTG':'V', 'GTT':'V',
'GCA':'A', 'GCC':'A', 'GCG':'A', 'GCT':'A',
'GAC':'D', 'GAT':'D', 'GAA':'E', 'GAG':'E',
'GGA':'G', 'GGC':'G', 'GGG':'G', 'GGT':'G',
'TCA':'S', 'TCC':'S', 'TCG':'S', 'TCT':'S',
'TTC':'F', 'TTT':'F', 'TTA':'L', 'TTG':'L',
'TAC':'Y', 'TAT':'Y', 'TAA':'_', 'TAG':'_',
'TGC':'C', 'TGT':'C', 'TGA':'_', 'TGG':'W'}
"""
Explanation: Genetic code (difficult!)
Given the genetic code dictionary, calculate how many codons code for each amino acid. Which amino acid is coded by the most codons? The underscore means the STOP codon. (Answer: R, L and S, with 6 codons each)
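One way to count them (a sketch using collections.Counter on the gencode dictionary defined above):

```python
from collections import Counter

counts = Counter(gencode.values())   # amino acid -> number of codons
print(counts.most_common(3))         # R, L and S each appear 6 times; '_' marks STOP
```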
End of explanation
"""
#Finish
"""
Explanation: Remember that you can iterate over a dictionary's keys with: for k in gencode: and over its values with: for v in gencode.values(): Or access the values like this:
for k in d:
v =d[k]
This exercise has many possible solutions.
How many Leu (L) codons differ by only one point mutation from an Ile (I) codon?
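A brute-force sketch for the point-mutation question (one of many possible solutions):

```python
ile_codons = [c for c, aa in gencode.items() if aa == 'I']
leu_codons = [c for c, aa in gencode.items() if aa == 'L']

def one_mutation_away(c1, c2):
    # True if the two codons differ in exactly one position.
    return sum(a != b for a, b in zip(c1, c2)) == 1

close = [c for c in leu_codons
         if any(one_mutation_away(c, i) for i in ile_codons)]
print(len(close), sorted(close))
```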
End of explanation
"""
|
massimo-nocentini/simulation-methods
|
notes/matrices-functions/fibonacci-generation-matrix.ipynb
|
mit
|
from sympy import *
from sympy.abc import n, i, N, x, lamda, phi, z, j, r, k, a
from commons import *
from matrix_functions import *
from sequences import *
import functions_catalog
init_printing()
"""
Explanation: <p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg"
alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
<div align="right">
Massimo Nocentini<br>
<small>
<br>November 9, 2016: splitting from "big" notebook
</small>
</div>
</p>
<br>
<br>
<div align="center">
<b>Abstract</b><br>
Theory of matrix functions applied to the Fibonacci matrix and to a matrix with an eigenvalue of multiplicity greater than 1.
</div>
End of explanation
"""
F = define(Symbol(r'\mathcal{F}'), Matrix([[1, 1], [1, 0]]))
F
m = F.rhs.rows
eigendata = spectrum(F)
eigendata
data, eigenvals, multiplicities = eigendata.rhs
Phi_poly = Phi_poly_ctor(deg=m-1)
Phi_poly
Phi_polynomials = component_polynomials(eigendata)
Phi_polynomials
cmatrices = component_matrices(F, Phi_polynomials)
cmatrices
Z_11 = cmatrices[1,1].rhs
assert (F.rhs * Z_11 - Z_11 * F.rhs).simplify() == zeros(2)
assert (Z_11*Z_11 - Z_11).subs(eigenvals).applyfunc(simplify) == zeros(2)
Zi1 = list(cm.rhs.as_immutable() for (i, j), cm in cmatrices.items() if j == 1)
s = zeros(m)
for Z in Zi1:
s += Z
s, s.simplify(), s.subs(eigenvals).applyfunc(simplify)
v = IndexedBase('v')
v_vector = Matrix(m, 1, lambda i, _: v[i])
M_space_ctor = M_space(cmatrices)
M_space_v = M_space_ctor(v_vector)
M_space_v
i = 1
eq = Eq(F.rhs*M_space_v[i][1].rhs, M_space_v[i][1].rhs.applyfunc(lambda k: k * data[i][0]))
eq
assert (eq.lhs.applyfunc(lambda i: i.subs(eigenvals).ratsimp()) ==
eq.rhs.applyfunc(lambda i: i.subs(eigenvals).ratsimp()))
GEs = generalized_eigenvectors_matrices(M_space_v)
GEs # actually, not necessary for Jordan Normal Form computation
relations = generalized_eigenvectors_relations(eigendata)
eqs = relations(F.rhs, M_space_v,post=lambda i: i.subs(eigenvals).ratsimp())
eqs
miniblocks = Jordan_blocks(eigendata)
miniblocks
X, J = Jordan_normalform(eigendata, matrices=(F.rhs, M_space_v, miniblocks))
X
J
assert ((F.rhs*X.rhs).applyfunc(lambda i: i.subs(eigenvals).ratsimp()) ==
(X.rhs*J.rhs).applyfunc(lambda i: i.subs(eigenvals).ratsimp()))
fq = (X.rhs**(-1)*F.rhs*X.rhs).applyfunc(lambda i: i.subs(eigenvals).simplify())
fq
x = symbols('x', positive=True)
assert (fq[0, 0].subs({v[0]:1, v[1]:x}).radsimp() == eigenvals[data[1][0]].radsimp())
"""
Explanation: Fibonacci matrix
End of explanation
"""
f = Function('f')
f_power = define(let=f(z), be=z**r)
f_power
g_power = Hermite_interpolation_polynomial(f_power, eigendata, Phi_polynomials)
g_power
g_power = g_power.subs(eigenvals)
g_power
with lift_to_matrix_function(g_power) as G_power:
m_power = G_power(F)
m_power
m_power.rhs.subs(eigenvals).subs({r:8}).applyfunc(simplify)
F.rhs**r
_.subs({r:8}).applyfunc(simplify)
"""
Explanation: power function
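As a quick sanity check (not part of the original notebook), plain integer powers of the Fibonacci matrix already show that the entries of $\mathcal{F}^n$ are consecutive Fibonacci numbers, which is what the matrix-function construction above reproduces for symbolic $r$:

```python
from sympy import Matrix, fibonacci

Fib = Matrix([[1, 1], [1, 0]])   # plain copy, to avoid clobbering the notebook's F
for n in range(1, 9):
    assert Fib**n == Matrix([[fibonacci(n + 1), fibonacci(n)],
                             [fibonacci(n), fibonacci(n - 1)]])
print(Fib**8)   # Matrix([[34, 21], [21, 13]])
```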
End of explanation
"""
m = define(Symbol(r'\mathcal{A}'), Matrix([
[1, 0, 0],
[1, 1, -1],
[-2, 0, 3],
]))
m
eigendata = spectrum(m)
eigendata
data, eigenvals, multiplicities = eigendata.rhs
m_bar = 3 # degree of \Xi minimal polynomial
Phi_poly = Phi_poly_ctor(deg=m_bar-1)
Phi_poly
Phi_polynomials = component_polynomials(eigendata)
Phi_polynomials
cmatrices = component_matrices(m, Phi_polynomials)
cmatrices
Zi1 = list(cm.rhs.as_immutable() for (i, j), cm in cmatrices.items() if j == 1)
s = zeros(m.rhs.rows)
for Z in Zi1:
s += Z
s, s.simplify(), s.subs(eigenvals)
"""
Explanation: A multiplicity greater than 1
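A quick independent check (not in the original notebook) that this matrix really has a repeated eigenvalue, using only plain sympy:

```python
from sympy import Matrix

A = Matrix([[1, 0, 0],
            [1, 1, -1],
            [-2, 0, 3]])
print(A.eigenvals())   # {1: 2, 3: 1} -- the eigenvalue 1 has algebraic multiplicity 2
```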
End of explanation
"""
f, h = Function('f'), Function('h')
f_inverse = define(let=f(z), be=1/(z))
f_inverse
g_inverse = Hermite_interpolation_polynomial(f_inverse, eigendata, Phi_polynomials)
g_inverse
g_inverse = g_inverse.subs(eigenvals)
g_inverse
with lift_to_matrix_function(g_inverse) as G_inverse:
m_inverse = G_inverse(m)
m_inverse
m.rhs**(-1)
"""
Explanation: inverse function
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session11/Day1/InvestigatingDetectorsSolutions.ipynb
|
mit
|
from astropy.io import fits
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['figure.dpi'] = 120
"""
Explanation: Investigating Detectors
Version 0.1
Understanding the behavior of the CCDs in a camera requires digging deep into calibration exposures. That is where you can uncover effects that might not be noticeable in on-sky exposures, but may subtly contaminate the data if left uncorrected. It is also how camera engineering teams optimize and debug the performance of the camera when it's still in the lab.
We're going to look at two test exposures taken with one of the Rubin Observatory CCDs. They're both biases; each image has a zero second exposure time and the detector was not illuminated.
Please download a tarball of the images for this notebook: investigating_detectors.tar.gz. As a reminder, you can unpack these files via tar -zxvf investigating_detectors.tar.gz
By C Slater (University of Washington)
End of explanation
"""
def simulated_image(signal_level, read_noise, gain):
"""
Return a 1-D simulated "image" with the noise properties of
a CCD sensor. The image is always 1000 pixels long.
signal_level is the mean number of electrons in each pixel.
read_noise is the noise of the readout amplifier, in electrons.
gain is the number of electrons per ADU.
"""
return (1/gain) * (read_noise*np.random.randn(1000) + np.random.poisson(signal_level, size=1000))
"""
Explanation: Photon Transfer Curve
1) Simulated Images
The "Photon Transfer Curve" is the name given to the relationship between the signal level and the noise level in a sensor. We're going to do a few experiments to show how it works in principle, and then we'll look at some real images and make some diagnostic measurements.
First we need a model of the noise in a CCD image. I'm going to give this to you so we all start out on the same page.
End of explanation
"""
# Answer
noise_levels = []
measured_signal_levels = []
input_signal_levels = np.logspace(0, 4, 30)
for input_signal_level in input_signal_levels:
image = simulated_image(input_signal_level, 5, 0.8)
noise_levels.append(np.std(image))
measured_signal_levels.append(np.mean(image))
noise_levels = np.array(noise_levels)
measured_signal_levels = np.array(measured_signal_levels)
"""
Explanation: Before diving into programming, take a careful look at the components in the simulated image. What are the two noise sources, and why do they have that functional form? We're going to be looking a lot at the image "gain"; does it make sense how that is applied?
Let's make some simulations. What we want to do is loop over a set of input light levels, from zero to the "full well" capacity (on the order of 10,000 electrons). For each simulated image, we want to measure the mean signal level (because that's what we see as users of a CCD) and the standard deviation of that image. Save those in two lists, but at the end convert them back to numpy arrays to make downstream usage easier.
For right now, set the read noise to 5, and the gain to 0.8.
End of explanation
"""
# Answer
high_counts, = np.where(measured_signal_levels > 100)
fit = np.polyfit(np.log10(measured_signal_levels[high_counts]), np.log10(noise_levels[high_counts]), 1)
print(fit)
x = np.logspace(2, 4, 10)
plt.loglog(measured_signal_levels, noise_levels, 'ko')
plt.plot(x, 10**(fit[0]*np.log10(x) + fit[1]), '-')
plt.ylabel("Standard Deviation")
plt.xlabel("Measured Signal Level")
"""
Explanation: Plot the noise vs. the measured signal level, on a log-log plot.
What is the behavior you see? What are the two different noise regimes?
Fit a straight line to the "bright" portion of the data (high signal levels) and print the resulting coefficients. Remember that you're looking at a log-log plot, and so you want to fit the logs of the variables. You can add this to the plot in the cell above.
Why does the line have that value of the slope?
End of explanation
"""
# Answer
high_counts, = np.where(measured_signal_levels > 100)
plt.plot(measured_signal_levels[high_counts], (noise_levels[high_counts])**2, 'ko')
fit = np.polyfit(measured_signal_levels, noise_levels**2, 1)
print(fit)
print(1/fit[0])   # reciprocal of the slope
plt.plot(x, fit[0] * x + fit[1], '-')
plt.ylabel("Variance")
plt.xlabel("Measured Signal Level")
"""
Explanation: Now we're going to plot something slightly different. Plot the variance this time, and on a linear plot instead of log-log (again vs. measured signal level). Fit a straight line to the data (in linear space) and print the coefficients. Also print the reciprocal of the slope.
Where did this slope come from?
End of explanation
"""
bias1_file = fits.open("00258334360-S10-det003.fits")
bias1_data = bias1_file[1].data
plt.imshow(bias1_data, cmap='gray',
           vmin=(np.median(bias1_data) - 0.2*np.std(bias1_data)),
           vmax=(np.median(bias1_data) + 0.2*np.std(bias1_data)))
plt.colorbar()
"""
Explanation: The slope here is related to the gain (either proportionally or inversely, depending on how one chooses to define gain). This can be summarized as
$$ \frac{1}{\textrm{gain}} = \langle \frac{\textrm{Variance}}{\textrm{Mean Signal Level}} \rangle $$
It's a clever and useful trick, or at least it seems like a trick, because the slope of the standard deviation plot wasn't affected by the gain at all. Go back and try varying the gain and re-run the plots, and you'll see what does and doesn't change.
One way to think of it is that the measured signal level is affected linearly by the gain, but the variance is affected by the square of the gain. Dividing these two gives you a linear relation back, but when dividing the square root of the variance, the gain cancels out.
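To see this concretely, here is a small sketch (reusing the simulated_image function from above) that compares the variance-to-mean ratio at two different gains:

```python
# The variance-to-mean ratio recovers 1/gain, up to a small read-noise correction.
for gain in (0.5, 2.0):
    img = simulated_image(5000, 5, gain)
    print(gain, np.var(img) / np.mean(img), 1.0 / gain)
```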
2) Looking at real bias frames
Remember that a bias frame is an image that is exposed for zero seconds; it's just immediately read-out without being exposed to light. You might think that is a pretty boring image, particularly if you're at the telescope and getting ready for a night of observing. But to a camera engineer, bias frames hold lots of information about how the camera is operating.
We're going to look at images from one example LSST sensor; this was taken on a test stand and not the actual camera, so don't take it as representative of real camera performance.
Our first step is, as usual, to look at the image and make sure it seems reasonable.
End of explanation
"""
columns_summed = np.mean(bias1_data, axis=0)
plt.plot(columns_summed[10:])
rows_summed = np.mean(bias1_data, axis=1)
plt.plot(rows_summed)
for hdu in range(1, 17):  # HDUs 1-16 hold the 16 amplifiers
hdu_data = bias1_file[hdu].data
columns_summed = np.sum(hdu_data, axis=0)
plt.plot(columns_summed[10:] - np.median(columns_summed))
"""
Explanation: Notice that when we plotted bias1_file[1].data, the image we get is 2048 by 576 pixels. Because LSST sensors have 16 separate amplifiers, the data from each one of them is put in a different "header data unit" (HDU) in the FITS file. You can get to them by substituting n in bias1_file[n], where n is the amplifier number.
3) Looking for structure
The bias looks mostly like Gaussian noise, but if you look carefully some parts of the image look like they have some "structure".
Let's make a few plots: try plotting the mean of the data along columns in one plot, and along rows in another.
Start with just a single amplifier, but if you like you can learn more by plotting each amplifier as a different line. Hint: the amplifiers each have different mean levels that you probably want to subtract off.
End of explanation
"""
bias2_file = fits.open("00258334672-S10-det003.fits")
# Answer
measured_stddevs = {}
for hdu in range(1, 17):  # HDUs 1-16 hold the 16 amplifiers
hdu_difference = bias1_file[hdu].data - bias2_file[hdu].data
stddev = np.std(hdu_difference)
measured_stddevs[hdu] = stddev
measured_stddevs
"""
Explanation: These "simple" bias frames turn out to have a lot of structure in them, particularly at the start of columns. This isn't something we can dive much further into, because it's really an electronics problem (that was known about at the time). It's also worth noting that it's fractionally a small effect. We will have to make sure our subsequent analyses are not affected by the issue though.
4) Measuring the noise
Bias images usually have some repeatable structure to them, so a useful trick is to use the difference of two bias frames taken close in time. Let's measure the standard deviation for the differences between the biases, doing so separately for each amplifier. This isn't the final read noise value yet, because it's still in ADU and not in electrons. We will store the results in a dictionary for later use.
We load the second image:
End of explanation
"""
flat1_file = fits.open("00258342968-S10-det003.fits")
flat2_file = fits.open("00258343136-S10-det003.fits")
"""
Explanation: 5) Measuring the gain
We have just one more step before we can report the read noise. We need to measure the gains so we can convert the noise in ADU into electrons. To do that, we're going to use the trick we saw at the start of this notebook. We need to add two things though: we want to use pairs of images, to cancel out any fixed spatial patterns, and we need images with significant counts in them so that we're not just measuring read noise. The formula we want to implement is thus:
$$ \frac{1}{\textrm{gain}} = \langle \frac{(I_1 - I_2)^2}{I_1 + I_2} \rangle $$
where $I_1$ and $I_2$ are the pixel values from each image, and the $\langle$ $\rangle$ brackets denote taking the mean of this ratio over all pixels.
We have some flat field images from those same sensors that we can use:
End of explanation
"""
# Answer
for hdu in range(1, 17):  # HDUs 1-16 hold the 16 amplifiers
flat1_data = flat1_file[hdu].data
flat2_data = flat2_file[hdu].data
bias1 = bias1_file[hdu].data
bias2 = bias2_file[hdu].data
debiased_flat1 = flat1_data - bias1
debiased_flat2 = flat2_data - bias2
squared_noise = (debiased_flat1 - debiased_flat2 )**2
summed_intensity = ((debiased_flat1) + (debiased_flat2))
ok_values = (summed_intensity > 5000)
reciprocal_gain = np.mean(squared_noise[ok_values]/(summed_intensity[ok_values]) )
print(hdu, reciprocal_gain, 1/reciprocal_gain, measured_stddevs[hdu]/reciprocal_gain/np.sqrt(2))
"""
Explanation: Since each amplifier can have a slightly different gain, we want to run this per-HDU and output a table of values. Since we're looping over the HDUs, we can also print the finished read noise values at the same time. Note that those have a factor of $\sqrt{2}$ because we took the difference of two bias frames, so the noise is greater than a single image.
End of explanation
"""
|
xdnian/pyml
|
code/ch02/ch02.ipynb
|
mit
|
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib
"""
Explanation: Copyright (c) 2015, 2016
Sebastian Raschka
Li-Yi Wei
https://github.com/1iyiwei/pyml
MIT License
Python Machine Learning - Code Examples
Chapter 2 - Training Machine Learning Algorithms for Classification
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
"""
from IPython.display import Image
"""
Explanation: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark.
Linear perceptron
A very simple model to illustrate machine learning
* how to perform classification
* how to learn from data via optimization
* connection to biological brains
Overview
Artificial neurons - a brief glimpse into the early history
of machine learning
Implementing a perceptron learning algorithm in Python
Training a perceptron model on the Iris dataset
Adaptive linear neurons and the convergence of learning
Minimizing cost functions with gradient descent
Implementing an Adaptive Linear Neuron in Python
Large scale machine learning and stochastic gradient descent
Summary
End of explanation
"""
import numpy as np
class Perceptron(object):
"""Perceptron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
errors_ : list
Number of misclassifications in every epoch.
"""
def __init__(self, eta=0.01, n_iter=10):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
"""Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self.w_ = np.zeros(1 + X.shape[1])
self.errors_ = []
for _ in range(self.n_iter):
errors = 0
for xi, target in zip(X, y):
update = self.eta * (target - self.predict(xi))
self.w_[1:] += update * xi
self.w_[0] += update
errors += int(update != 0.0)
self.errors_.append(errors)
return self
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.net_input(X) >= 0.0, 1, -1)
"""
Explanation: Artificial neurons - a brief glimpse into the early history of machine learning
<img src="./images/02_01.png" width=100%>
Biological neurons were the early inspirations for machine learning.
Current neural networks implement neurons via programs, but people are starting to build electronic circuits to more efficiently emulate neurons.
<a href="http://www.economist.com/news/science-and-technology/21703301-narrowing-gap-between-biological-brains-and-electronic-ones-researchers-have">
<img src="http://cdn.static-economist.com/sites/default/files/imagecache/full-width/images/2016/08/articles/main/20160806_stp003.jpg" width=80% alt="Researchers have built an artificial neuron"></a>
Perceptron
Proposed by Rosenblatt in 1957 as a very simple model for artifical neurons.
Representation
A single artificial neuron can be used for binary classification via:
$$
\phi(z) =
\begin{cases}
1 \; z \geq \theta \\
-1 \; z < \theta
\end{cases}
$$
, where $\pm 1$ are the class labels, $\phi$ is the activation function, $\theta$ is some activation threshold, and $z$ is computed from the input $\mathbf{x}$ and neuron weights $\mathbf{w}$
$$
z = \mathbf{w}^T \mathbf{x}
$$
In terms of our general model $f\left(\mathbf{X}, \Theta \right)$, the parameter set are the weights $\Theta = \mathbf{w}$ and the input set $\mathbf{X} = \mathbf{x}$:
$$\phi(z) = f(\mathbf{x}, \mathbf{w}) = 2 \left( \mathbf{w}^T \mathbf{x} \geq \theta \right) - 1$$
Note that both $\mathbf{w}$ and $\mathbf{x}$ are vectors and the above equations is simply computing their inner products.
Specifically,
$$
\begin{align}
\mathbf{w} =
\begin{bmatrix}
w_0 \\
\vdots \\
w_m
\end{bmatrix}
,
\mathbf{x} =
\begin{bmatrix}
x_0 \\
\vdots \\
x_m
\end{bmatrix}
\end{align}
$$
For example $m = 3$:
$$
\begin{align}
\mathbf{w} =
\begin{bmatrix}
1 \\
2 \\
3
\end{bmatrix}
,
\mathbf{x} =
\begin{bmatrix}
4 \\
5 \\
6
\end{bmatrix}
\end{align}
$$
$$
\mathbf{w}^T \mathbf{x} =
\begin{bmatrix}
1 & 2 & 3
\end{bmatrix}
\begin{bmatrix}
4 \\
5 \\
6
\end{bmatrix}
=
1 \times 4 + 2 \times 5 + 3 \times 6
= 32
$$
The convention is to let $x_0 = 1$ and $w_0 = -\theta$ (the bias term), so that we can simplify the model to:
$$
\phi(z) =
\begin{cases}
1 \; z \geq 0 \\
-1 \; z < 0
\end{cases}
$$
Example
2D case, i.e. $m = 2$
The left visualizes $\phi(z)$, the right illustrates a binary classification:
<img src = "./images/02_02.png" width=100%>
Training
How to train a perceptron so that it can learn from new data?
Let the training data be $\left( \mathbf{X}, \mathbf{T} \right)$, where
* $\mathbf{X}$: input vectors
* $\mathbf{T}$: labels, $\pm 1$ for each input vector
The training steps are as follows:
* Initialize $\mathbf{w}$ to $0$ or random
* For each training pair $\left(\mathbf{x}^{(i)}, t^{(i)}\right)$
1. Compute the predicted output $y^{(i)} = \phi(\mathbf{w}^T \mathbf{x}^{(i)})$
2. Compute $\delta \mathbf{w} = \eta (t^{(i)} - y^{(i)}) \mathbf{x}^{(i)}$, and update $\mathbf{w} \leftarrow \mathbf{w} + \delta \mathbf{w}$
$\eta$ is called the learning rate.
<img src="./images/02_04.png" width=80%>
Why this works
If the prediction is correct, i.e. $y^{(i)} = t^{(i)}$, $\rightarrow$ $\delta \mathbf{w} = 0$, so no update.
If the prediction is incorrect
* $t^{(i)} = 1, y^{(i)} = -1$ $\rightarrow$ $\delta \mathbf{w} = \eta(2)\mathbf{x}^{(i)}$
* $t^{(i)} = -1, y^{(i)} = 1$ $\rightarrow$ $\delta \mathbf{w} = \eta(-2)\mathbf{x}^{(i)}$
$\delta \mathbf{w}$ pushes the weights in the right direction, making them more positively/negatively correlated with $\mathbf{x}^{(i)}$.
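A tiny numeric illustration of a single update (a sketch with made-up numbers, following the update rule above):

```python
import numpy as np

eta = 0.1
w = np.array([0.0, 0.0, 0.0])   # bias weight w0 plus two feature weights
x = np.array([1.0, 2.0, 3.0])   # x0 = 1 is the constant bias input
t = 1                           # true label

y = np.where(w.dot(x) >= 0.0, 1, -1)   # z = 0, so the prediction is +1: already correct
w = w + eta * (t - y) * x              # update is zero; the weights do not change
print(w)                               # [0. 0. 0.]

t = -1                                 # now pretend the true label is -1
y = np.where(w.dot(x) >= 0.0, 1, -1)   # still predicts +1, so this is a misclassification
w = w + eta * (t - y) * x              # delta w = 0.1 * (-2) * x
print(w)                               # [-0.2 -0.4 -0.6]
```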
Linear separability
A single perceptron is limited to linear classification.
We can handle more general cases via neural networks consisting of multiple perceptrons/neurons.
<img src="./images/02_03.png" width=80%>
The binary threshold activation function is not differentiable; we will fix this later via other activation functions.
<img src = "./images/02_02.png" width=80%>
Implementing a perceptron learning algorithm in Python
End of explanation
"""
import pandas as pd
data_src = '../datasets/iris/iris.data'
#data_src = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
df = pd.read_csv(data_src, header=None)
df.tail()
"""
Explanation: Additional Note (1)
Please note that the learning rate $\eta$ (eta) only has an effect on the classification outcome if the weights are initialized to non-zero values. If all the weights
are initialized to 0, the learning rate affects only the scale of the weight vector, not its direction. To have the learning rate influence the classification outcome, the weights need to be initialized to non-zero values. The respective lines in the code that need to be changed to accomplish that are highlighted below:
```python
def init(self, eta=0.01, n_iter=50, random_seed=1): # add random_seed=1
...
self.random_seed = random_seed # add this line
def fit(self, X, y):
...
# self.w_ = np.zeros(1 + X.shape[1]) ## remove this line
rgen = np.random.RandomState(self.random_seed) # add this line
self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1]) # add this line
```
Additional Note (2)
I received a note by a reader who asked about the net input function:
On page 27, you describe the code.
the net_input method simply calculates the vector product wTx
However, there is more than a simple vector product in the code:
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
In addition to the dot product, there is an addition. The text does not mention anything about what is this + self.w_[0]
Can you (or anyone) explain why that's there?
Sorry that I went over that so briefly. The self.w_[0] is basically the "threshold" or so-called "bias unit." I simply included the bias unit in the weight vector, which makes the math part easier, but on the other hand, it may make the code more confusing as you mentioned.
Let's say we have a 3x2 dimensional dataset X (3 training samples with 2 features). Also, let's just assume we have a weight 2 for feature 1 and a weight 3 for feature 2, and we set the bias unit to 4.
```
import numpy as np
bias = 4.
X = np.array([[2., 3.],
... [4., 5.],
... [6., 7.]])
w = np.array([bias, 2., 3.])
```
In order to match the mathematical notation, we would have to add a vector of 1s to compute the dot-product:
```
ones = np.ones((X.shape[0], 1))
X_with1 = np.hstack((ones, X))
X_with1
np.dot(X_with1, w)
array([ 17., 27., 37.])
```
However, I thought that adding a vector of 1s to the training array each time we want to make a prediction would be fairly inefficient. So, instead, we can just "add" the bias unit (w[0]) to the dot product (it's equivalent, since 1.0 * w_0 = w_0):
```
np.dot(X, w[1:]) + w[0]
array([ 17., 27., 37.])
```
Maybe it is helpful to walk through the matrix-vector multiplication by hand. E.g.,
| 1 2 3 | | 4 | | 1*4 + 2*2 + 3*3 | | 17 |
| 1 4 5 | x | 2 | = | 1*4 + 4*2 + 5*3 | = | 27 |
| 1 6 7 | | 3 | | 1*4 + 6*2 + 7*3 | | 37 |
which is the same as
| 2 3 | | 4 | | 2*2 + 3*3 | | 13 + bias | | 17 |
| 4 5 | x | 2 | + bias = | 4*2 + 5*3 | + bias = | 23 + bias | = | 27 |
| 6 7 | | 3 | | 6*2 + 7*3 | | 33 + bias | | 37 |
Additional Note (3)
For simplicity, we don't talk about shuffling at this point; I wanted to introduce concepts incrementally so that it's not too overwhelming all at once. Since a reader asked me about this, I wanted to add a note about shuffling, which you may want to use if you are using a Perceptron in practice. I borrowed the code from the AdalineSGD section below to modify the Perceptron algorithm accordingly (new lines are marked by a trailing "# new" inline comment):
```python
class Perceptron(object):
"""Perceptron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
shuffle : bool (default: True)
Shuffles training data every epoch if True to prevent cycles.
random_state : int (default: None)
Set random state for shuffling and initializing the weights.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
errors_ : list
Number of misclassifications in every epoch.
"""
def __init__(self, eta=0.01, n_iter=10,
shuffle=True, random_state=None): # new
self.eta = eta
self.n_iter = n_iter
self.shuffle = shuffle # new
if random_state: # new
np.random.seed(random_state) # new
def fit(self, X, y):
"""Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self.w_ = np.zeros(1 + X.shape[1])
self.errors_ = []
for _ in range(self.n_iter):
if self.shuffle: # new
X, y = self._shuffle(X, y) # new
errors = 0
for xi, target in zip(X, y):
update = self.eta * (target - self.predict(xi))
self.w_[1:] += update * xi
self.w_[0] += update
errors += int(update != 0.0)
self.errors_.append(errors)
return self
def _shuffle(self, X, y): # new
"""Shuffle training data""" # new
r = np.random.permutation(len(y)) # new
return X[r], y[r] # new
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.net_input(X) >= 0.0, 1, -1)
```
Training a perceptron model on the Iris dataset
Reading-in the Iris data
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# select setosa and versicolor
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
# extract sepal length and petal length
X = df.iloc[0:100, [0, 2]].values
# plot data
plt.scatter(X[:50, 0], X[:50, 1],
color='red', marker='o', label='setosa')
plt.scatter(X[50:100, 0], X[50:100, 1],
color='blue', marker='x', label='versicolor')
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
plt.tight_layout()
#plt.savefig('./images/02_06.png', dpi=300)
plt.show()
"""
Explanation: <hr>
Note:
If the link to the Iris dataset provided above does not work for you, you can find a local copy in this repository at ./../datasets/iris/iris.data.
Or you could fetch it via https://raw.githubusercontent.com/1iyiwei/pyml/master/code/datasets/iris/iris.data
Plotting the Iris data
End of explanation
"""
ppn = Perceptron(eta=0.1, n_iter=10)
ppn.fit(X, y)
plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Number of misclassifications')
plt.tight_layout()
# plt.savefig('./perceptron_1.png', dpi=300)
plt.show()
"""
Explanation: Training the perceptron model
End of explanation
"""
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
plot_decision_regions(X, y, classifier=ppn)
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./perceptron_2.png', dpi=300)
plt.show()
"""
Explanation: A function for plotting decision regions
End of explanation
"""
class AdalineGD(object):
"""ADAptive LInear NEuron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
errors_ : list
Number of misclassifications in every epoch.
"""
def __init__(self, eta=0.01, n_iter=50):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
""" Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self.w_ = np.zeros(1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
net_input = self.net_input(X)
# Please note that the "activation" method has no effect
# in the code since it is simply an identity function. We
# could write `output = self.net_input(X)` directly instead.
# The purpose of the activation is more conceptual, i.e.,
# in the case of logistic regression, we could change it to
# a sigmoid function to implement a logistic regression classifier.
output = self.activation(X)
errors = (y - output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
cost = (errors**2).sum() / 2.0
self.cost_.append(cost)
return self
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, X):
"""Compute linear activation"""
return self.net_input(X)
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.activation(X) >= 0.0, 1, -1)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
ada1 = AdalineGD(n_iter=10, eta=0.01).fit(X, y)
ax[0].plot(range(1, len(ada1.cost_) + 1), np.log10(ada1.cost_), marker='o')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('log(Sum-squared-error)')
ax[0].set_title('Adaline - Learning rate 0.01')
ada2 = AdalineGD(n_iter=10, eta=0.0001).fit(X, y)
ax[1].plot(range(1, len(ada2.cost_) + 1), ada2.cost_, marker='o')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('Sum-squared-error')
ax[1].set_title('Adaline - Learning rate 0.0001')
plt.tight_layout()
# plt.savefig('./adaline_1.png', dpi=300)
plt.show()
"""
Explanation: Additional Note (4)
The plt.scatter call inside plot_decision_regions may raise errors if you have matplotlib <= 1.5.0 installed and use the function to plot more than 4 classes, as a reader pointed out: "[...] if there are four items to be displayed as the RGBA tuple is mis-interpreted as a list of colours".
```python
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
            alpha=0.8, c=cmap(idx),
            marker=markers[idx], label=cl)
```
To address this problem in older matplotlib versions, you can replace c=cmap(idx) by c=colors[idx].
Adaptive linear neurons and the convergence of learning
Let's change the activation function of the perceptron from binary to linear, i.e.
$$
\begin{align}
\phi(z) &= z \\
\phi(\mathbf{w}^T \mathbf{x}) &= \mathbf{w}^T \mathbf{x}
\end{align}
$$
And separate out the quantizer as another unit, so that we can optimize via gradient descent.
This is called Adaline for <b>Ada</b>ptive <b>li</b>near <b>ne</b>uron.
Adaline:
<img src = "./images/02_09.png" width=80%>
Perceptron:
<img src="./images/02_04.png" width=80%>
Minimizing cost functions with gradient descent
Let's define the loss/objective function via SSE (sum of squared errors) as follows:
$$
\begin{align}
L(\mathbf{X}, \mathbf{T}, \Theta) &= J(\mathbf{w}) = \frac{1}{2} \sum_i \left(t^{(i)} - z^{(i)} \right)^2 \\
z^{(i)} &= \phi\left(\mathbf{w}^T \mathbf{x}^{(i)}\right) = \mathbf{w}^T \mathbf{x}^{(i)}
\end{align}
$$
Note that we are optimizing for $z$, the output of the linear activation $\phi$, instead of the final output $y$, which is:
$$
y =
\begin{cases}
+1 \; z \geq 0 \\
-1 \; z < 0
\end{cases}
$$
However, this still works: $y$ is just the sign of $z$, so pushing $z$ up or down also pushes the final prediction $y$ in the same direction.
Moreover, unlike the binary threshold activation function, the linear activation function is differentiable, so we can use calculus to optimize the above equation.
In particular, we can use gradient descent.
$$
\delta J = \frac{\partial J}{\partial \mathbf{w}} = \sum_i \mathbf{x}^{(i) }\left( \mathbf{w}^T \mathbf{x}^{(i)} - \mathbf{t}^{(i)} \right)
$$
$$
\begin{align}
\delta \mathbf{w} &= -\eta \delta J \\
\mathbf{w} & \leftarrow \mathbf{w} + \delta \mathbf{w}
\end{align}
$$
<img src="./images/02_10.png" width=80%>
Implementing an adaptive linear neuron in Python
End of explanation
"""
# standardize features
X_std = np.copy(X)
X_std[:, 0] = (X[:, 0] - X[:, 0].mean()) / X[:, 0].std()
X_std[:, 1] = (X[:, 1] - X[:, 1].mean()) / X[:, 1].std()
ada = AdalineGD(n_iter=15, eta=0.01)
ada.fit(X_std, y)
plot_decision_regions(X_std, y, classifier=ada)
plt.title('Adaline - Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./adaline_2.png', dpi=300)
plt.show()
plt.plot(range(1, len(ada.cost_) + 1), ada.cost_, marker='o')
plt.title('Adaline - Learning rate 0.01')
plt.xlabel('Epochs')
plt.ylabel('Sum-squared-error')
plt.tight_layout()
# plt.savefig('./adaline_3.png', dpi=300)
plt.show()
"""
Explanation: Learning rate
We need to choose the right learning rate $\eta$
* too small will take many steps to reach minimum
* too large will oscillate and miss the minimum
<img src="./images/02_12.png" width=80%>
Scaling
For better performance, it is often a good idea to normalize/scale the dimensions of the data so that they have similar orders of magnitude:
$$
\mathbf{x}_j = \frac{\mathbf{x}_j - \mu_j}{\sigma_j}
$$
, where $\mu$ and $\sigma$ are the mean and std.
End of explanation
"""
from numpy.random import seed
class AdalineSGD(object):
"""ADAptive LInear NEuron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
errors_ : list
Number of misclassifications in every epoch.
shuffle : bool (default: True)
Shuffles training data every epoch if True to prevent cycles.
random_state : int (default: None)
Set random state for shuffling and initializing the weights.
"""
def __init__(self, eta=0.01, n_iter=10, shuffle=True, random_state=None):
self.eta = eta
self.n_iter = n_iter
self.w_initialized = False
self.shuffle = shuffle
if random_state:
seed(random_state)
def fit(self, X, y):
""" Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self._initialize_weights(X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
if self.shuffle:
X, y = self._shuffle(X, y)
cost = []
for xi, target in zip(X, y):
cost.append(self._update_weights(xi, target))
avg_cost = sum(cost) / len(y)
self.cost_.append(avg_cost)
return self
def partial_fit(self, X, y):
"""Fit training data without reinitializing the weights"""
if not self.w_initialized:
self._initialize_weights(X.shape[1])
if y.ravel().shape[0] > 1:
for xi, target in zip(X, y):
self._update_weights(xi, target)
else:
self._update_weights(X, y)
return self
def _shuffle(self, X, y):
"""Shuffle training data"""
r = np.random.permutation(len(y))
return X[r], y[r]
def _initialize_weights(self, m):
"""Initialize weights to zeros"""
self.w_ = np.zeros(1 + m)
self.w_initialized = True
def _update_weights(self, xi, target):
"""Apply Adaline learning rule to update the weights"""
output = self.net_input(xi)
error = (target - output)
self.w_[1:] += self.eta * xi.dot(error)
self.w_[0] += self.eta * error
cost = 0.5 * error**2
return cost
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, X):
"""Compute linear activation"""
return self.net_input(X)
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.activation(X) >= 0.0, 1, -1)
ada = AdalineSGD(n_iter=15, eta=0.01, random_state=1)
ada.fit(X_std, y)
plot_decision_regions(X_std, y, classifier=ada)
plt.title('Adaline - Stochastic Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
#plt.savefig('./adaline_4.png', dpi=300)
plt.show()
plt.plot(range(1, len(ada.cost_) + 1), ada.cost_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Average Cost')
plt.tight_layout()
# plt.savefig('./adaline_5.png', dpi=300)
plt.show()
_ = ada.partial_fit(X_std[0, :], y[0])
"""
Explanation: Large scale machine learning and stochastic gradient descent
Recall for gradient descent, we compute $\delta J$ from all $N$ data samples:
$$
\begin{align}
\delta J &= \frac{\partial J}{\partial \mathbf{w}} = \sum_{i = 1}^{N} \mathbf{x}^{(i)}\left( \mathbf{w}^T \mathbf{x}^{(i)} - \mathbf{t}^{(i)} \right) \\
\delta \mathbf{w} &= -\eta \delta J \\
\mathbf{w} & \leftarrow \mathbf{w} + \delta \mathbf{w}
\end{align}
$$
This can be slow if $N$ is large.
One solution is to compute $\delta J$ from only a small subset of all data, and iterate over the subsets.
This is called stochastic gradient descent.
Each subset is often called a mini-batch.
For each $batch_j \subset{N}$:
$$
\begin{align}
\delta J &= \frac{\partial J}{\partial \mathbf{w}} = \sum_{i \in batch_j} \mathbf{x}^{(i)}\left( \mathbf{w}^T \mathbf{x}^{(i)} - \mathbf{t}^{(i)} \right) \\
\delta \mathbf{w} &= -\eta \delta J \\
\mathbf{w} & \leftarrow \mathbf{w} + \delta \mathbf{w}
\end{align}
$$
Machine learning libraries usually let you tune the subset, or mini-batch, size, between two extremes:
* just one sample
* entire data set $\rightarrow$ traditional gradient descent
Example
Input data: 10 samples total with indices
* $[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]$
Random shuffle them for better performance at the beginning of each iteration/epoch:
* $[7, 1, 5, 6, 0, 3, 8, 2, 9, 4]$
Say the mini-batch size is 2, then we train the network with the following mini-batches:
* $[7, 1]$
* $[5, 6]$
* $[0, 3]$
* $[8, 2]$
* $[9, 4]$
Continue for more iterations/epochs, over the entire dataset, until convergence
* i.e. go back to the random shuffle step above with a different permutation, e.g. $[9, 1, 4, 7, 2, 5, 6, 0, 3, 8]$
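A minimal numpy sketch of that batching scheme (the AdalineSGD class above effectively uses a mini-batch size of one):

```python
import numpy as np

n_samples, batch_size = 10, 2
indices = np.random.permutation(n_samples)              # e.g. [7 1 5 6 0 3 8 2 9 4]
for batch in np.array_split(indices, n_samples // batch_size):
    print(batch)                                        # train on X[batch], y[batch] here
```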
More background information
<a href="https://en.wikipedia.org/wiki/Lombard_Street_(San_Francisco)">
<img src="https://upload.wikimedia.org/wikipedia/commons/c/c6/Lombard_Street_San_Francisco.jpg" align=right width=40%>
</a>
https://en.wikipedia.org/wiki/Stochastic_gradient_descent
End of explanation
"""
class LogisticRegressionGD(object):
"""Logistic regression classifier via gradient descent.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
errors_ : list
Number of misclassifications in every epoch.
"""
def __init__(self, eta=0.01, n_iter=50):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
""" Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self.w_ = np.zeros(1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
net_input = self.net_input(X)
output = self.activation(X)
errors = (y - output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
# note that we compute the logistic `cost` now
# instead of the sum of squared errors cost
cost = -y.dot(np.log(output)) - ((1 - y).dot(np.log(1 - output)))
self.cost_.append(cost)
return self
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
"""Return class label after unit step"""
# We use the more common convention for logistic
# regression returning class labels 0 and 1
# instead of -1 and 1. Also, the threshold then
# changes from 0.0 to 0.5
return np.where(self.activation(X) >= 0.5, 1, 0)
# The Content of `activation` changed
# from linear (Adaline) to sigmoid.
# Note that this method is now returning the
# probability of the positive class
# also "predict_proba" in scikit-learn
def activation(self, X):
""" Compute sigmoid activation."""
z = self.net_input(X)
sigmoid = 1.0 / (1.0 + np.exp(-z))
return sigmoid
from sklearn.datasets import load_iris
iris = load_iris()
X, y = iris.data[:100, [0, 2]], iris.target[:100]
X_std = np.copy(X)
X_std[:, 0] = (X[:, 0] - X[:, 0].mean()) / X[:, 0].std()
X_std[:, 1] = (X[:, 1] - X[:, 1].mean()) / X[:, 1].std()
lr = LogisticRegressionGD(n_iter=25, eta=0.15)
lr.fit(X_std, y)
plot_decision_regions(X_std, y, classifier=lr)
plt.title('Logistic Regression - Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')  # column 2 of iris.data is petal length
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
plt.plot(range(1, len(lr.cost_) + 1), lr.cost_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Logistic Cost')
plt.tight_layout()
# plt.savefig('./adaline_3.png', dpi=300)
plt.show()
"""
Explanation: Reading
PML Chapter 2
IML Chapter 10.1-10.4, linear discrimination
Assignment
ex02
Appendix
The code below (not in the book) is a simplified example implementation of a logistic regression classifier trained via gradient descent. The AdalineGD classifier was used as a template, and I commented the respective lines that were changed to turn it into a logistic regression classifier (as briefly mentioned in the "logistic regression" sections of Chapter 3).
End of explanation
"""