Dataset columns: repo_name (string, 6 to 77 chars), path (string, 8 to 215 chars), license (15 classes), content (string, 335 to 154k chars)
seg/2016-ml-contest
geoLEARN/Submission_4_XtraTrees.ipynb
apache-2.0
###### Importing all used packages %matplotlib inline import warnings warnings.filterwarnings('ignore') import pandas as pd import numpy as np from pandas import set_option pd.options.mode.chained_assignment = None ###### Import packages needed for the make_vars functions import Feature_Engineering as FE ##### import stuff from scikit learn from sklearn.ensemble import ExtraTreesClassifier, RandomForestRegressor from sklearn.multiclass import OneVsRestClassifier from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict filename = '../facies_vectors.csv' df = pd.read_csv(filename) df.head() ####### create X_train and y_train X_train = df[['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']][df.PE.notnull()] y_train = df['PE'][df.PE.notnull()] groups_train = df['Well Name'][df.PE.notnull()] X_fit = df[['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']][df.PE.isnull()] Cl = RandomForestRegressor(n_estimators=100) Cl.fit(X_train, y_train) y_predict = Cl.predict(X_fit) df['PE'][df.PE.isnull()] = y_predict training_data = df """ Explanation: Facies classification using ExtraTrees, OneVsRest and Feature imputation Contest entry by: geoLEARN Martin Blouin, Lorenzo Perozzi, Antoine Caté <br> in collaboration with Erwan Gloaguen Original contest notebook by Brendon Hall, Enthought In this notebook we will train a machine learning algorithm to predict facies from well log data. The dataset comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007). The dataset consists of log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train an ExtraTrees model to classify facies types. Exploring the dataset First, we import and examine the dataset used to train the classifier.
End of explanation """ ##### cD From wavelet db1 dwt_db1_cD_df = FE.make_dwt_vars_cD(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db1') ##### cA From wavelet db1 dwt_db1_cA_df = FE.make_dwt_vars_cA(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db1') ##### cD From wavelet db3 dwt_db3_cD_df = FE.make_dwt_vars_cD(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db3') ##### cA From wavelet db3 dwt_db3_cA_df = FE.make_dwt_vars_cA(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db3') ##### From entropy entropy_df = FE.make_entropy_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], l_foots=[2, 3, 4, 5, 7, 10]) ###### From gradient gradient_df = FE.make_gradient_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], dx_list=[2, 3, 4, 5, 6, 10, 20]) ##### From rolling average moving_av_df = FE.make_moving_av_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[1, 2, 5, 10, 20]) ##### From rolling standard deviation moving_std_df = FE.make_moving_std_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3 , 4, 5, 7, 10, 15, 20]) ##### From rolling max moving_max_df = FE.make_moving_max_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3, 4, 5, 7, 10, 15, 20]) ##### From rolling min moving_min_df = FE.make_moving_min_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3 , 4, 5, 7, 10, 15, 20]) ###### From rolling NM/M ratio rolling_marine_ratio_df = FE.make_rolling_marine_ratio_vars(wells_df=training_data, windows=[5, 10, 15, 20, 30, 50, 75, 100, 200]) ###### From distance to NM and M, up and down dist_M_up_df = FE.make_distance_to_M_up_vars(wells_df=training_data) dist_M_down_df = FE.make_distance_to_M_down_vars(wells_df=training_data) dist_NM_up_df = FE.make_distance_to_NM_up_vars(wells_df=training_data) dist_NM_down_df = FE.make_distance_to_NM_down_vars(wells_df=training_data) list_df_var = [dwt_db1_cD_df, dwt_db1_cA_df, dwt_db3_cD_df, dwt_db3_cA_df, entropy_df, gradient_df, moving_av_df, moving_std_df, moving_max_df, moving_min_df, rolling_marine_ratio_df, dist_M_up_df, dist_M_down_df, dist_NM_up_df, dist_NM_down_df] combined_df = training_data for var_df in list_df_var: temp_df = var_df combined_df = pd.concat([combined_df,temp_df],axis=1) combined_df.replace(to_replace=np.nan, value='-1', inplace=True) print (combined_df.shape) combined_df.head(5) """ Explanation: Feature engineering As stated in our previous submission, we believe that feature engineering has a high potential for increasing classification success. A strategy for building new variables is explained below. The dataset is distributed along a series of drillholes intersecting a stratigraphic sequence. Sedimentary facies tend to be deposited in sequences that reflect the evolution of the paleo-environment (variations in water depth, water temperature, biological activity, currents strenght, detrital input, ...). Each facies represents a specific depositional environment and is in contact with facies that represent a progressive transition to an other environment. 
Thus, there is a relationship between neighbouring samples, and the distribution of the data along drillholes can be as important as data values for predicting facies. A series of new variables (features) are calculated and tested below to help represent the relationship of neighbouring samples and the overall texture of the data along drillholes. These variables are: detail and approximation coeficients at various levels of two wavelet transforms (using two types of Daubechies wavelets); measures of the local entropy with variable observation windows; measures of the local gradient with variable observation windows; rolling statistical calculations (i.e., mean, standard deviation, min and max) with variable observation windows; ratios between marine and non-marine lithofacies with different observation windows; distances from the nearest marine or non-marine occurence uphole and downhole. Functions used to build these variables are located in the Feature Engineering python script. All the data exploration work related to the conception and study of these variables is not presented here. End of explanation """ X = combined_df.iloc[:, 4:] y = combined_df['Facies'] groups = combined_df['Well Name'] Xtrees = ExtraTreesClassifier(n_estimators=500, min_samples_split=100, max_features = 0.5, class_weight='balanced', n_jobs=-1) Cl = OneVsRestClassifier(Xtrees, n_jobs=-1) """ Explanation: Building a prediction model from these variables End of explanation """ filename = '../validation_data_nofacies.csv' test_data = pd.read_csv(filename) test_data.head(5) ##### cD From wavelet db1 dwt_db1_cD_df = FE.make_dwt_vars_cD(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db1') ##### cA From wavelet db1 dwt_db1_cA_df = FE.make_dwt_vars_cA(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db1') ##### cD From wavelet db3 dwt_db3_cD_df = FE.make_dwt_vars_cD(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db3') ##### cA From wavelet db3 dwt_db3_cA_df = FE.make_dwt_vars_cA(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db3') ##### From entropy entropy_df = FE.make_entropy_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], l_foots=[2, 3, 4, 5, 7, 10]) ###### From gradient gradient_df = FE.make_gradient_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], dx_list=[2, 3, 4, 5, 6, 10, 20]) ##### From rolling average moving_av_df = FE.make_moving_av_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[1, 2, 5, 10, 20]) ##### From rolling standard deviation moving_std_df = FE.make_moving_std_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3 , 4, 5, 7, 10, 15, 20]) ##### From rolling max moving_max_df = FE.make_moving_max_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3, 4, 5, 7, 10, 15, 20]) ##### From rolling min moving_min_df = FE.make_moving_min_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3 , 4, 5, 7, 10, 15, 20]) ###### From rolling NM/M ratio rolling_marine_ratio_df = FE.make_rolling_marine_ratio_vars(wells_df=test_data, windows=[5, 10, 15, 20, 30, 50, 75, 100, 200]) ###### From distance to NM and M, up and down dist_M_up_df = FE.make_distance_to_M_up_vars(wells_df=test_data) dist_M_down_df = 
FE.make_distance_to_M_down_vars(wells_df=test_data) dist_NM_up_df = FE.make_distance_to_NM_up_vars(wells_df=test_data) dist_NM_down_df = FE.make_distance_to_NM_down_vars(wells_df=test_data) combined_test_df = test_data list_df_var = [dwt_db1_cD_df, dwt_db1_cA_df, dwt_db3_cD_df, dwt_db3_cA_df, entropy_df, gradient_df, moving_av_df, moving_std_df, moving_max_df, moving_min_df, rolling_marine_ratio_df, dist_M_up_df, dist_M_down_df, dist_NM_up_df, dist_NM_down_df] for var_df in list_df_var: temp_df = var_df combined_test_df = pd.concat([combined_test_df,temp_df],axis=1) combined_test_df.replace(to_replace=np.nan, value='-99999', inplace=True) X_test = combined_test_df.iloc[:, 3:] test_pred_df = combined_test_df[['Well Name', 'Depth']] for i in range(100): Cl.fit(X, y) y_test = Cl.predict(X_test) y_test = pd.DataFrame(y_test, columns=['Predicted Facies #' + str(i)]) test_pred_df = pd.concat([test_pred_df, y_test], axis=1) print (test_pred_df.shape) test_pred_df.head() """ Explanation: Applying the classification model to test data End of explanation """ test_pred_df.to_pickle('Prediction_submission4_XtraTrees.pkl') """ Explanation: Exporting results End of explanation """
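A possible follow-up, added here as an illustrative sketch and not part of the original submission: the LeaveOneGroupOut and cross_val_predict imports at the top of this notebook are not exercised in the cells shown, and they could be combined to estimate out-of-sample performance by holding out one well at a time before trusting the test-set predictions.

# Illustrative sketch (not from the original notebook): leave-one-well-out
# cross-validation of the OneVsRest/ExtraTrees classifier defined above.
from sklearn.metrics import f1_score

logo = LeaveOneGroupOut()
cv_pred = cross_val_predict(Cl, X, y, cv=logo.split(X, y, groups=groups), n_jobs=-1)
print('Leave-one-well-out micro F1:', f1_score(y, cv_pred, average='micro'))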
liganega/Gongsu-DataSci
previous/notes2017/W01/GongSu02_Anaconda_Installation.ipynb
gpl-3.0
a = 2 b = 3 a + b """ Explanation: Introduction to Anaconda The Anaconda package at a glance A development environment for the Python programming language Includes the essential data-analysis packages in addition to the basic Python packages The course is taught mainly with the Spyder editor Downloading the Anaconda package To download the Anaconda package, visit the site below https://www.anaconda.com/download/ and then download it by following the screenshots below. Note: this course uses Python version 2.7. <p> <table cellspacing="20"> <tr> <td> <img src="../../images/anaconda/anaconda01.PNG" style="height:60"> </td> </tr> <tr> <td> <img src="../../images/anaconda/anaconda02.PNG" style="height:60"> </td> </tr> <tr> <td> <img src="../../images/anaconda/anaconda03.PNG" style="height:60"> </td> </tr> </table> </p> Installing the Anaconda package Install it while paying attention to the parts marked in the screenshots below. Note: the path-setting option may be checked if you know what it means; if you are not sure, leaving it unchecked is recommended. <p> <table cellspacing="20"> <tr> <td> <img src="../../images/anaconda/anaconda04.PNG" style="height:60"> </td> </tr> <tr> <td> <img src="../../images/anaconda/anaconda05.PNG" style="height:60"> </td> </tr> <tr> <td> <img src="../../images/anaconda/anaconda06.PNG" style="height:60"> </td> </tr> <tr> <td> <img src="../../images/anaconda/anaconda07.PNG" style="height:60"> </td> </tr> <tr> <td> <img src="../../images/anaconda/anaconda08.PNG" style="height:60"> </td> </tr> </table> </p> Launching the Spyder Python editor Press the Windows key and select Spyder Keep the default firewall settings Uncheck the update-check option <p> <table cellspacing="20"> <tr> <td> <img src="../../images/anaconda/anaconda09.PNG" style="height:60"> </td> </tr> <tr> <td> <img src="../../images/anaconda/anaconda10.PNG" style="height:60"> </td> </tr> <tr> <td> <img src="../../images/anaconda/anaconda11.PNG" style="height:60"> </td> </tr> <tr> <td> <img src="../../images/anaconda/anaconda12.PNG" style="height:60"> </td> </tr> </table> </p> Using the Spyder Python editor Note: uncheck the upgrade-check option and do not upgrade on your own. Spyder provides an editor pane and a terminal (console) pane at the same time The editor pane is used for writing longer code The terminal pane is used for testing short pieces of code <p> <table cellspacing="20"> <tr> <td> <img src="../../images/anaconda/anaconda13.PNG" style="height:60"> </td> </tr> </table> </p> The first time the run button is pressed, a dialog with settings for the Python interpreter appears. There is no need to change the settings; just press the Run button. <p> <table cellspacing="20"> <tr> <td> <img src="../../images/anaconda/anaconda14.PNG" style="height:60"> </td> </tr> </table> </p> Example of using the Spyder Python editor The editor pane and the terminal pane share the same Python interpreter. After code entered in the editor pane has been run, the variables and functions defined there can also be used in the terminal pane, and variables and functions defined in the terminal pane can likewise be used from the editor. Note: this way of working is not recommended, because code defined in the terminal pane is not saved when the editor file is saved.
<p> <table cellspacing="20"> <tr> <td> <img src="../../images/anaconda/anaconda15.PNG" style="height:60"> </td> </tr> <tr> <td> <img src="../../images/anaconda/anaconda16.PNG" style="height:60"> </td> </tr> <tr> <td> <img src="../../images/anaconda/anaconda17.PNG" style="height:60"> </td> </tr> </table> </p> Running Python code in the Spyder terminal window Commands can be executed directly, as below, and their results checked immediately. End of explanation """
a_number * 2 a_number = 5 type(a_number) print(a_number) a_number * 2 """ Explanation: A variable must be declared (assigned) before it can be used. End of explanation """
if a_number > 2: print('Greater than 2!') else: print('Not greater than 2!') """ Explanation: Simple pieces of code can be written and run directly. End of explanation """
# Type only `a_` first, then press the Tab key and the rest is completed automatically. a_number """ Explanation: Tips for writing Python code in the Spyder editor window Tab completion and the many keyboard shortcuts make coding much more efficient. Tab completion Tab completion is available in both the editor and the terminal. End of explanation """
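One more small check, added as an illustration and not present in the original notes: since Anaconda is described above as bundling the essential data-analysis packages on top of base Python, you can confirm from the Spyder console which interpreter and package versions are actually in use.

# Illustrative check: print the interpreter and the versions of the bundled
# data-analysis packages.
import sys
import numpy
import pandas
import matplotlib

print(sys.version)
print('numpy', numpy.__version__)
print('pandas', pandas.__version__)
print('matplotlib', matplotlib.__version__)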
mjabri/holoviews
doc/Tutorials/Continuous_Coordinates.ipynb
bsd-3-clause
import numpy as np import holoviews as hv %reload_ext holoviews.ipython np.set_printoptions(precision=2, linewidth=80) %opts HeatMap (cmap="hot") """ Explanation: HoloViews is designed to work with scientific and engineering data, which is often in the form of discrete samples from an underlying continuous system. Imaging data is one clear example: measurements taken at a regular interval over a grid covering a two-dimensional area. Although the measurements are discrete, they approximate a continuous distribution, and HoloViews provides extensive support for working naturally with data of this type. 2D Continuous spaces In this tutorial we will show the support provided by HoloViews for working with two-dimensional regularly sampled grid data like images, and then in subsequent sections discuss how HoloViews supports one-dimensional, higher-dimensional, and irregularly sampled data with continuous coordinates. End of explanation """ def f(x,y): return x+y/3.1 region=(-0.5,-0.5,0.5,0.5) def coords(bounds,samples): l,b,r,t=bounds hc=0.5/samples return np.meshgrid(np.linspace(l+hc,r-hc,samples), np.linspace(b+hc,t-hc,samples)) """ Explanation: First, let's consider: <dl compact> <dt>``f(x,y)``</dt><dd>a simple function that accepts a location in a 2D plane specified in millimeters (mm)</dd> <dt>``region``</dt><dd>a 1mm&times;1mm square region of this 2D plane, centered at the origin, and</dd> <dt>``coords``</dt><dd>a function returning a square (s&times;s) grid of (x,y) coordinates regularly sampling the region in the given bounds, at the centers of each grid cell:</dd> </dl> End of explanation """ f5=f(*coords(region,5)) f5 """ Explanation: Now let's build a Numpy array regularly sampling this function at a density of 5 samples per mm: End of explanation """ r5 = hv.Raster(f5, label="R5") i5 = hv.Image( f5, label="I5", bounds=region) h5 = hv.HeatMap({(x, y): f5[4-y,x] for x in range(0,5) for y in range(0,5)}, label="H5") r5+i5+h5 """ Explanation: We can visualize this array (and thus the function f) either using a Raster, which uses the array's own integer-based coordinate system (which we will call "array" coordinates), or an Image, which uses a continuous coordinate system, or as a HeatMap labelling each value explicitly: End of explanation """ "r5[0,1]=%0.2f r5.data[0,1]=%0.2f i5[-0.2,0.4]=%0.2f i5[-0.24,0.37]=%0.2f i5.data[0,1]=%0.2f" % \ (r5[1,0], r5.data[0,1], i5[-0.2,0.4], i5[-0.24,0.37], i5.data[0,1]) """ Explanation: Both the Raster and Image Element types accept the same input data, but a visualization of the Raster type reveals the underlying raw array indexing, while the Image type has been labelled with the coordinate system from which we know the data has been sampled. All Image operations work with this continuous coordinate system instead, while the corresponding operations on a Raster use raw array indexing. For instance, all five of these indexing operations refer to the same element of the underlying Numpy array, i.e. the second item in the first row: End of explanation """ f10=f(*coords(region,10)) f10 r10 = hv.Raster(f10, label="R10") i10 = hv.Image(f10, label="I10", bounds=region) r10+i10 """ Explanation: You can see that the Raster and the underlying .data elements use Numpy's integer indexing, while the Image uses floating-point values that are then mapped onto the appropriate array element. 
This diagram should help show the relationships between the Raster coordinate system in the plot (which ranges from 0 at the top edge to 5 at the bottom), the underlying raw Numpy integer array indexes (labelling each dot in the Array coordinates figure), and the underlying Continuous coordinates: <TABLE style='border:5'> <TR> <TH><CENTER>Array coordinates</CENTER></TH> <TH><CENTER>Continuous coordinates</CENTER></TH> </TR> <TR> <TD><IMG src="http://ioam.github.io/topographica/_images/matrix_coords.png"></TD> <TD><IMG src="http://ioam.github.io/topographica/_images/sheet_coords_-0.2_0.4.png"></TD> </TR> </TABLE> Importantly, although we used a 5&times;5 array in this example, we could substitute a much larger array with the same continuous coordinate system if we wished, without having to change any of our continuous indexes -- they will still point to the correct location in the continuous space: End of explanation """ "r10[0,1]=%0.2f r10.data[0,1]=%0.2f i10[-0.2,0.4]=%0.2f i10[-0.24,0.37]=%0.2f i10.data[0,1]=%0.2f" % \ (r10[1,0], r10.data[0,1], i10[-0.2,0.4], i10[-0.24,0.37], i10.data[0,1]) """ Explanation: The image now has higher resolution, but still visualizes the same underlying continuous function, now evaluated at 100 grid positions instead of 25: <TABLE style='border:5'> <TR> <TH><CENTER>Array coordinates</CENTER></TH> <TH><CENTER>Continuous coordinates</CENTER></TH> </TR> <TR> <TD><IMG src="http://ioam.github.io/topographica/_images/matrix_coords_hidensity.png"></TD> <TD><IMG src="http://ioam.github.io/topographica/_images/sheet_coords_-0.2_0.4.png"></TD> </TR> </TABLE> Indexing the exact same coordinates as above now gets very different results: End of explanation """ sl10=i10[-0.275:0.025,-0.0125:0.2885] sl10.data sl10 """ Explanation: The array-based indexes used by Raster and the Numpy array in .data still return the second item in the first row of the array, but this array element now corresponds to location (-0.35,0.4) in the continuous function, and so the value is different. These indexes thus do not refer to the same location in continuous space as they did for the other array density, so this type of indexing is not independent of density or resolution. Luckily, the two continuous coordinates still return very similar values to what they did before, since they always return the value of the array element corresponding to the closest location in continuous space. They now return elements just above and to the right, or just below and to the left, of the earlier location, because the array now has a higher resolution with elements centered at different locations. Indexing in continuous coordinates always returns the value closest to the requested value, given the available resolution. Note that in the case of coordinates truly on the boundary between array elements (as for -0.2,0.4), the bounds of each array cell are taken as right exclusive and upper exclusive, and so (-0.2,0.4) returns array index (3,0). Slicing in 2D In addition to indexing (looking up a value), slicing (selecting a region) works as expected in continuous space. For instance, we can ask for a slice from (-0.275,-0.0125) to (0.025,0.2885) in continuous coordinates: End of explanation """ r5[0:3,1:3] + r5[0:3,1:2] """ Explanation: This slice has selected those array elements whose centers are contained within the specified continuous space. 
To do this, the continuous coordinates are first converted by HoloViews into the floating-point range (5.125,2.250) (2.125,5.250) of array coordinates, and all those elements whose centers are in that range are selected: <TABLE style='border:5'> <TR> <TH><CENTER>Array coordinates</CENTER></TH> <TH><CENTER>Continuous coordinates</CENTER></TH> </TR> <TR> <TD><IMG src="http://ioam.github.io/topographica/_images/connection_field.png"></TD> <TD><IMG src="http://ioam.github.io/topographica/_images/sheet_coords_-0.275_-0.0125_0.025_0.2885.png"></TD> </TR> </TABLE> Slicing also works for Raster elements, but it results in an object that always reflects the contents of the underlying Numpy array (i.e., always with the upper left corner labelled 0,0): End of explanation """ e10=i10.sample(x=-0.275, y=0.2885) e10 """ Explanation: Hopefully these examples make it clear that if you are using data that is sampled from some underlying continuous system, you should use the continuous coordinates offered by HoloViews objects like Image so that your programs can be independent of the resolution or sampling density of that data, and so that your axes and indexes can be expressed in the underlying continuous space. The data will still be stored in the same Numpy array, but now you can treat it consistently like the approximation to continuous values that it is. 1D and nD Continuous coordinates All of the above examples use the common case for visualizations of a two-dimensional regularly gridded continuous space, which is implemented in holoviews.core.sheetcoords.SheetCoordinateSystem. Similar continuous coordinates and slicing are also supported for Chart elements, such as Curves, but using a single index and allowing arbitrary irregular spacing, implemented in holoviews.elements.chart.Chart. They also work the same for the n-dimensional coordinates and slicing supported by the container types HoloMap, NdLayout, and NdOverlay, implemented in holoviews.core.dimension.Dimensioned and again allowing arbitrary irregular spacing. Together, these powerful continuous-coordinate indexing and slicing operations allow you to work naturally and simply in the full n-dimensional space that characterizes your data and parameter values. Sampling The above examples focus on indexing and slicing, but there is another related operation supported for continuous spaces, called sampling. Sampling is similar to indexing and slicing, in that all of them can reduce the dimensionality of your data, but sampling is implemented in a general way that applies for any of the 1D, 2D, or nD datatypes. For instance, if we take our 10&times;10 array from above, we can ask for the value at a given location, which will come back as a Table, i.e. a dictionary with one (key,value) pair: End of explanation """ r10=i10.sample(y=0.2885) r10 r10.data """ Explanation: Similarly, if we ask for the value of a given y location in continuous space, we will get a Curve with the array row closest to that y value in the Image 2D array returned as an array of $x$ values and the corresponding z value from the image: End of explanation """
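A quick, purely illustrative check that ties the sampling and indexing operations above together (this cell is an addition, not part of the original tutorial): sampling a single continuous location should agree with continuous indexing at that same location, since both resolve to the array element whose centre is closest.

# Compare sampling and continuous indexing at the same continuous coordinates.
sampled = i10.sample(x=-0.275, y=0.2885)
indexed = i10[-0.275, 0.2885]
print(sampled)
print(indexed)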
SECOORA/GUTILS
docs/notebooks/0001 - Converting Slocum data to a standard DataFrame.ipynb
mit
from IPython.lib.pretty import pprint import logging logger = logging.getLogger('gutils') logger.handlers = [logging.StreamHandler()] logger.setLevel(logging.DEBUG) import sys from pathlib import Path # Just a hack to be able to `import gutils` sys.path.append(str(Path('.').absolute().parent.parent)) binary_folder = Path('.').absolute().parent.parent / 'gutils' / 'tests' / 'resources' / 'slocum' / 'real' / 'binary' bass_binary = binary_folder / 'bass-20160909T1733' !ls $bass_binary """ Explanation: Converting Slocum data to a standard DataFrame End of explanation """ import tempfile from gutils.slocum import SlocumMerger ascii_output = tempfile.mkdtemp() merger = SlocumMerger( str(bass_binary), ascii_output, globs=[ 'usf-bass-2016-252-1-12.sbd', 'usf-bass-2016-252-1-12.tbd' ] ) # The merge results contain a reference to the new produced ASCII file # as well as which binary files were involved in its creation merge_results = merger.convert() """ Explanation: SlocumMerger Convert binary (*.bd) files into ASCII Merge a subset of binary files If you know the flight/science pair you wish to merge End of explanation """ merger = SlocumMerger( str(bass_binary), ascii_output, ) # The merge results contain a reference to the new produced ASCII file as well as what binary files went into it. merge_results = merger.convert() """ Explanation: Merge all files in a directory This matches science and flight files together End of explanation """ ascii_file = merge_results[0]['ascii'] !cat $ascii_file """ Explanation: What does the ASCII file look like? End of explanation """ import json from gutils.slocum import SlocumReader slocum_data = SlocumReader(ascii_file) print('Mode: ', slocum_data.mode) print('ASCII: ', slocum_data.ascii_file) print('Headers: ', json.dumps(slocum_data.metadata, indent=4)) slocum_data.data.columns.tolist() slocum_data.data.head(20)[[ 'sci_m_present_time', 'm_depth', 'm_gps_lat', 'm_gps_lon', 'sci_water_pressure', 'sci_water_temp' ]] """ Explanation: SlocumReader Load the ASCII file into a pandas DataFrame End of explanation """ standard = slocum_data.standardize() # Which columns were added? set(standard.columns).difference(slocum_data.data.columns) standard.head(20)[[ 't', 'z', 'y', 'x', 'pressure', 'temperature' ]] """ Explanation: Standardize into a glider-independent DataFrame Lossless (adds columns) Common axis names Common variable names used in computations of density, salinity, etc. Interpolates GPS coordinates Converts to decimal degrees Calcualtes depth from pressure if available Calculates pressure from depth if need be Calculates density and salinity End of explanation """
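Before converting anything, it can help to see which flight/science binary pairs are present. The short cell below is an illustrative addition (not part of the original notebook) that lists a few pairs with pathlib instead of the shell, so it also works where `ls` is unavailable; it assumes the usual Slocum naming in which a `.sbd` flight file sits next to a `.tbd` science file.

# Illustrative addition: peek at a few flight/science pairs in the test data.
for sbd in sorted(bass_binary.glob('*.sbd'))[:5]:
    tbd = sbd.with_suffix('.tbd')
    print(sbd.name, '<->', tbd.name if tbd.exists() else '(no matching .tbd)')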
kit-cel/wt
mloc/ch6_Unsupervised_Learning/KMeans_Illustration_Animated.ipynb
gpl-2.0
import numpy as np import matplotlib as mpl mpl.use('TkAgg') import matplotlib.pyplot as plt import sklearn.datasets as sk from matplotlib import animation from matplotlib.animation import PillowWriter # Disable if you don't want to save any GIFs. %matplotlib inline """ Explanation: Illustration of the K-Means Algorithm This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br> This code illustrates * The simple k-means algorithm for hard clustering * Convergence of the k-means algorithm for different datasets * Generating an animated GIF of the convergence process End of explanation """ data_selection = 1 # Choose your data set. if data_selection==1: m = 4 # 4 classes for QPSK demodulation N = m*100 # must be divisible by 4 phase_shift = -np.pi/8 # phase shift # index to symbol mapper number2symbol = { 0 : np.array([1.0*np.cos(phase_shift), 1.0*np.sin(phase_shift)]), 1 : np.array([1.0*np.sin(phase_shift), -1.0*np.cos(phase_shift)]), 2 : np.array([-1.0*np.cos(phase_shift), -1.0*np.sin(phase_shift)]), 3 : np.array([-1.0*np.sin(phase_shift), 1.0*np.cos(phase_shift)]) } training_set = np.array([]) for i in range(0, m): # Assign N/4 constellation points to each QPSK symbol. constellation_points = np.add(np.random.randn(N//m,2)/5, number2symbol[i]) training_set = np.concatenate((training_set, constellation_points)) \ if training_set.size else constellation_points elif data_selection==2: m = 3 # You can change m arbitrarily. N = 800 random_state = 7 transformation = [[1.0, 0.1], [-0.5, 0.3]] training_set = np.dot(sk.make_blobs(n_samples=N, centers=m, n_features=2, random_state=random_state)[0],transformation) elif data_selection==3: m = 3 # You can change m arbitrarily. N = 200 random_state = 170 transformation = [[0.60834549, -0.63667341], [-0.40887718, 0.85253229]] training_set = np.dot(sk.make_blobs(n_samples=N, centers=m, n_features=2, random_state=random_state)[0],transformation) elif data_selection==4: m = 6 # You can change m arbitrarily. N = m*100 training_set = np.random.rand(N,2) elif data_selection==5: m = 2 N = 200 training_set = sk.make_circles(n_samples=N, noise=0.05, random_state=None, factor=0.4)[0] elif data_selection==6: m = 2 N = 200 training_set = sk.make_moons(n_samples=N, noise=.05)[0] # Plot data set. plt.figure(num=None, figsize=(9, 8)) plt.scatter(training_set[:,0], training_set[:,1], marker='o', c='royalblue', s=50, alpha=0.5) plt.title('Dataset on which we want to apply clustering.', fontsize=14) plt.tick_params(axis='both', labelsize=14) plt.show() """ Explanation: The k-means algorithm is one of the most popular clustering algorithms and very simple with respect to the implementation. Clustering has the goal to categorize a set of objects into different clusters, such that the properties of the objects in one cluster are as close as possible. Objects of different clusters should have dissimilar properties. This following jupyter notebook implements and visualizes the $k$-means clustering algorithm. At first we choose a data set on which we want to apply the clustering. Choose one of the following data sets or define your own one: 1. Phase shifted QPSK constellation with AWGN. (as a communications related example) Due to the phase shift, we cannot use our common fixed axis decision borders. Instead of a phase synchronization, we apply the $k$-means algorithm to quantize each constellation point to 1 of 4 QPSK symbols. This is an easy data set, because we can visually separate the 4 clusters. 
By increasing the noise power, the clusters become more and more blurred. 2. Anisotropically distributed Gaussian blobs. Another example of visually separable blobs. 3. Anisotropically distributed Gaussian blobs. Datasets are closer together than in 2. 4. Uniform distribution. Visually not separable. 5. Concentric circles. This is an example where the downside of $k$-means is shown. Because the algorithm assumes that the elements of each cluster are located within a sphere around the center of the cluster, these 2 concentric circles cannot be separated despite the fact that one can separate them visually pretty well. 6. Two interleaving half circles. Another example on which the algorithm fails. End of explanation """ data_selection = 1 # Choose your data set. if data_selection==1: m = 4 # 4 classes for QPSK demodulation N = m*100 # must be divisible by 4 phase_shift = -np.pi/8 # phase shift # index to symbol mapper number2symbol = { 0 : np.array([1.0*np.cos(phase_shift), 1.0*np.sin(phase_shift)]), 1 : np.array([1.0*np.sin(phase_shift), -1.0*np.cos(phase_shift)]), 2 : np.array([-1.0*np.cos(phase_shift), -1.0*np.sin(phase_shift)]), 3 : np.array([-1.0*np.sin(phase_shift), 1.0*np.cos(phase_shift)]) } training_set = np.array([]) for i in range(0, m): # Assign N/4 constellation points to each QPSK symbol. constellation_points = np.add(np.random.randn(N//m,2)/5, number2symbol[i]) training_set = np.concatenate((training_set, constellation_points)) \ if training_set.size else constellation_points elif data_selection==2: m = 3 # You can change m arbitrarily. N = 800 random_state = 7 transformation = [[1.0, 0.1], [-0.5, 0.3]] training_set = np.dot(sk.make_blobs(n_samples=N, centers=m, n_features=2, random_state=random_state)[0],transformation) elif data_selection==3: m = 3 # You can change m arbitrarily. N = 200 random_state = 170 transformation = [[0.60834549, -0.63667341], [-0.40887718, 0.85253229]] training_set = np.dot(sk.make_blobs(n_samples=N, centers=m, n_features=2, random_state=random_state)[0],transformation) elif data_selection==4: m = 6 # You can change m arbitrarily. N = m*100 training_set = np.random.rand(N,2) elif data_selection==5: m = 2 N = 200 training_set = sk.make_circles(n_samples=N, noise=0.05, random_state=None, factor=0.4)[0] elif data_selection==6: m = 2 N = 200 training_set = sk.make_moons(n_samples=N, noise=.05)[0] # Plot data set. plt.figure(num=None, figsize=(9, 8)) plt.scatter(training_set[:,0], training_set[:,1], marker='o', c='royalblue', s=50, alpha=0.5) plt.title('Dataset on which we want to apply clustering.', fontsize=14) plt.tick_params(axis='both', labelsize=14) plt.show() """ Explanation: Now we apply the $k$-means algorithm. Therefore we: Set the maximum number of iterations. Set random initial center positions. Iteratively: 1. Assignment step: Calculate Euclidean distances between each data point and each center position and assign each data point to the nearest center. 2.
Plot the assigned data points (just for visualization). 3. Update step: Move each center point to the average position of the data points which are currently assigned to this center. If a center point has no assigned data points at all, define a new random position. (This is not part of the original algorithm. You can disable this feature at the beginning of the following code.) 4. Stop* if movement of the center positions is below a certain threshold or if the maximum number of iterations is reached. End of explanation """ # Plot trajectory of center positions. plt.figure(num=None, figsize=(10, 9)) plt.scatter(training_set[:,0], training_set[:,1], marker='o', s=100, c=argmin, cmap=cmap, norm=norm, alpha=0.5) plt.plot(center_history[:,:,0], center_history[:,:,1],marker='.',color='k',linewidth=2) plt.scatter(centers[:,0], centers[:,1], marker='X', c=np.arange(0,m,1), cmap=cmap, norm=norm, edgecolors='black', s=400) plt.tick_params(axis='both', labelsize=14) plt.savefig('k_means_gauss_trajectory.pdf',box_inches='tight') plt.show() """ Explanation: Visualize the trajectory of the center positions. End of explanation """ %matplotlib notebook %matplotlib notebook # First set up the figure, the axis, and the plot element we want to animate. fig = plt.figure(num=None, figsize=(12, 10)) ax = plt.axes() plt.tick_params(axis='both', labelsize=23) scat_train = ax.scatter(training_set[:,0], training_set[:,1], marker='o', s=100, c=argmin, cmap=cmap, norm=norm, alpha=0.5) scat_center = ax.scatter(centers[:,0], centers[:,1], marker='X', c=np.arange(0,m,1), cmap=cmap, norm=norm, edgecolors='black', s=400) lines = [] for n in range(0,m): line, = ax.plot([], [], marker='.',color='k',linewidth=2) lines.append(line) # Initialization function. def init(): return scat_center, # Animation function. This is called sequentially. def animate(i): scat_center.set_offsets(center_history[i,:,:]) scat_train.set_array(argmin_history[i]) for n in range(0,m): lines[n].set_data(center_history[0:i+1,n,0], center_history[0:i+1,n,1]) ax.set_title("Iteration {}".format(i), fontsize=20) return scat_center, # Call the animator. anim = animation.FuncAnimation(fig, animate, init_func=init, frames=len(argmin_history), interval=2000, blit=True) # If you want to save the animation, use the following line. fig.show() #anim.save('k_means.gif', writer=PillowWriter(fps=.5)) """ Explanation: Visualize with matplotlib animation. If the animation returns errors you can: * Disable fig.show() and save the gif externally (uncomment the corresponding line in the code). * Use the jupyter notebook k_means.ipynb (without animation). End of explanation """
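For comparison (this cell is an addition, not part of the original notebook), scikit-learn ships the same Lloyd-style iteration behind sklearn.cluster.KMeans with k-means++ initialisation, so its centres should land close to the ones produced by the loop above, up to a permutation of the cluster labels.

# Illustrative comparison with scikit-learn's KMeans on the same data.
from sklearn.cluster import KMeans

km = KMeans(n_clusters=m, n_init=10, random_state=0).fit(training_set)
print('scikit-learn centres:')
print(km.cluster_centers_)
print('centres from the loop above:')
print(centers)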
seanware/try_quantopian
mentorship.ipynb
mit
import datetime import numpy as np import pandas as pd import zipline %matplotlib inline STOCKS = ['AMD', 'CERN', 'COST', 'DELL', 'GPS', 'INTC', 'MMM'] """ Explanation: <img src="http://photos3.meetupstatic.com/photos/event/f/9/d/global_432903997.jpeg" style="display:inline;width:100px"></img> Mentorship Program Description of the Quantopian API <img src="http://sparkcapital.com/wp-content/uploads/2013/01/Quantopian_S1.png" style="display:inline;width:250px"></img> Quantopian provides a platform for you to build, test, and execute trading algorithms. End of explanation """ class BUY_APPLE(zipline.TradingAlgorithm): """ Copy the sample trading algorithm from Quantopian and see if we can run it in zipline (what needs to change to convert between their platform th) """ def initialize(self): pass def handle_data(self, data): self.order(zipline.api.symbol('AAPL'), 10) self.record(APPL=data[zipline.api.symbol('AAPL')].price) start = datetime.datetime(2001, 8, 1) end = datetime.datetime(2013, 2, 1) data = zipline.utils.factory.load_from_yahoo(stocks=['AAPL', 'AMD'], indexes={}, start=start, end=end) def run_buy_apple(): buy_apple = BUY_APPLE(); results = buy_apple.run(data) return results.portfolio_value results_buy_apple = run_buy_apple() results_buy_apple.tail() results_buy_apple.head() results_buy_apple.plot() from collections import deque as moving_window class DualMovingAverage(zipline.TradingAlgorithm): """ Implements the Dual Moving average """ def initialize(self, short_window=100, long_window=300): self.short_window = moving_window(maxlen=short_window) self.long_window = moving_window(maxlen=long_window) def handle_data(self, data): self.short_window.append(data[zipline.api.symbol('AAPL')].price) self.long_window.append(data[zipline.api.symbol('AAPL')].price) short_mavg = np.mean(self.short_window) long_mavg = np.mean(self.long_window) #Trading logic if short_mavg > long_mavg: self.order_target(zipline.api.symbol('AAPL'), 100) elif short_mavg < long_mavg: self.order_target(zipline.api.symbol('AAPL'), 0) self.record(APPL=data[zipline.api.symbol('AAPL')].price, short_mavg=short_mavg, long_mavg=long_mavg) def run_dual_moving_ave(): moving_ave = DualMovingAverage(); results = moving_ave.run(data) return results.portfolio_value results_DMA = run_dual_moving_ave() results_DMA.plot() """ Explanation: Example 1: Buy Apple Stock End of explanation """ # This is a module we wrote using pg8000 to access our Postgres database on Heroku from database import Database db = Database() #list of Chicago's fortune 500 companies' ticker symbols chicago_companies_lookup = dict( ABT = "Abbot", ADM = "Archer-Daniels Midland", ALL = "Allstate", BA = "Boeing", CF = "CF Industries (Fertilizer)", DFS = "Discover", DOV = "Dover Corporation (industrial products)", EXC = "Exelon", GWW = "Grainger", ITW = "Illinois Tool Works", MCD = "McDonalds", MDLZ = "Mondelez", MSI = "Motorola", NI = "Nicor", TEG = "Integrys (energy)") chicago_companies = chicago_companies_lookup.keys() returns = db.select( ('SELECT dt, "{}" FROM return ' 'WHERE dt BETWEEN \'2012-01-01\' AND \'2012-12-31\'' 'ORDER BY dt;').format( '", "'.join((c.lower() for c in chicago_companies))), columns=["Date"] + chicago_companies) sp_dates = [row.pop("Date") for row in returns] returns = pd.DataFrame(returns, index=sp_dates) #cluster to determine if sectors move similarly in the marketplace from scipy.cluster.vq import whiten from sklearn.cluster import KMeans import matplotlib.pyplot as plt %matplotlib inline normalize = 
whiten(returns.transpose().dropna()) steps = range(2,10) inertias = [KMeans(i).fit(normalize).inertia_ for i in steps] plt.plot(steps, inertias, 'go-') plt.title("Pick 5 clusters (but the dropoff looks linear)") nclust = 5 km = KMeans(n_clusters = nclust) km.fit(normalize) clustered_companies = [set() for i in range(nclust)] for i in range(len(normalize.index)): company = normalize.index[i] cluster_id = km.labels_[i] clustered_companies[cluster_id].add(company) print "Here are the clusters...." for c in clustered_companies: print len(c), " companies:\n ", ", ".join(chicago_companies_lookup[co] for co in c) import scipy.spatial.distance as dist import scipy.cluster.hierarchy as hclust chicago_dist = dist.pdist(normalize, 'euclidean') links = hclust.linkage(chicago_dist) plt.figure(figsize=(3,4)) den = hclust.dendrogram( links, labels=[chicago_companies_lookup[co] for co in normalize.index], orientation="left") plt.ylabel('Samples', fontsize=9) plt.xlabel('Distance') plt.suptitle('Stocks clustered by similarity', fontweight='bold', fontsize=14); """ Explanation: Machine Learning Scikit Learn's home page divides up the space of machine learning well, but the Mahout algorithms list has a more comprehensive list of algorithms. From both: - Collaborative filtering<br/> 'because you bought these, we recommend this' - Classification<br/> 'people with these characteristics, if sent a mailer, will buy something 30% of the time' - Clustering<br/> 'our customers naturally fall into these groups: urban singles, guys with dogs, women 25-35 who like rap' - Dimension reduction<br/> 'a preprocessing step before regression that can also identify the most significant contributors to variation' - Topics<br/> 'the posts in this user group are related to either local politics, music, or sports' The S&P 500 dataset is great for us to quickly explore regression, clustering, and principal component analysis. Example: K-means Clustering Goal is to cluster Chicago-area Fortune 500 stocks by similar day-to-day returns in 2012. Steps: Get and transform the data (one row per company, one column per day in the year) Iteratively try different 'K' values for k-means and pick one See what the clusters say about which stocks are similar (expect similarity within industrial group) End of explanation """
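One extra model-selection check, added for illustration and not part of the original notebook: when the inertia drop-off looks nearly linear, as noted above, the silhouette score often gives a clearer signal for choosing the number of clusters.

# Illustrative addition: silhouette scores for the same range of cluster counts.
from sklearn.metrics import silhouette_score

for k in steps:
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(normalize)
    print(k, round(silhouette_score(normalize, labels), 3))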
jdhp-docs/python-notebooks
python_sklearn_mlp_fr.ipynb
mit
import sklearn # version >= 0.18 is required version = [int(num) for num in sklearn.__version__.split('.')] assert (version[0] >= 1) or (version[1] >= 18) """ Explanation: The multilayer perceptron with scikit-learn Official documentation: http://scikit-learn.org/stable/modules/neural_networks_supervised.html Related notebooks: - http://www.jdhp.org/docs/notebooks/ai_multilayer_perceptron_fr.html Checking the version of the scikit-learn library Warning: the multilayer perceptron has only been available in scikit-learn since version 0.18 (September 2016). The source code of this implementation is available on GitHub. The long discussion thread that preceded the integration of this implementation is available on the following page: issue #3204. End of explanation """
from sklearn.neural_network import MLPClassifier X = [[0., 0.], [1., 1.]] y = [0, 1] clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1) clf.fit(X, y) """ Explanation: Classification See http://scikit-learn.org/stable/modules/neural_networks_supervised.html#classification First example End of explanation """
clf.predict([[2., 2.], [-1., -2.]]) """ Explanation: Once the neural network has been trained, new examples can be tested: End of explanation """
clf.coefs_ [coef.shape for coef in clf.coefs_] """ Explanation: clf.coefs_ contains the weights of the neural network (a list of arrays): End of explanation """
clf.predict_proba([[2., 2.], [-1., -2.]]) """ Explanation: Vector of probability estimates $P(y|x)$ per sample $x$: End of explanation """
from sklearn.neural_network import MLPRegressor X = [[0., 0.], [1., 1.]] y = [0, 1] reg = MLPRegressor(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1) reg.fit(X, y) """ Explanation: Regression See http://scikit-learn.org/stable/modules/neural_networks_supervised.html#regression First example End of explanation """
reg.predict([[2., 2.], [-1., -2.]]) """ Explanation: Once the neural network has been trained, new examples can be tested: End of explanation """
reg.coefs_ [coef.shape for coef in reg.coefs_] """ Explanation: reg.coefs_ contains the weights of the neural network (a list of arrays): End of explanation """
# TODO... """ Explanation: Regularization See http://scikit-learn.org/stable/modules/neural_networks_supervised.html#regularization End of explanation """
X = [[0., 0.], [1., 1.]] y = [0, 1] clf = MLPClassifier(hidden_layer_sizes=(15,), random_state=1, max_iter=1, # <- ! warm_start=True) # <- ! for i in range(10): clf.fit(X, y) print(clf.coefs_) """ Explanation: Normalizing the input data See http://scikit-learn.org/stable/modules/neural_networks_supervised.html#tips-on-practical-use Iterating manually See http://scikit-learn.org/stable/modules/neural_networks_supervised.html#more-control-with-warm-start Manually iterating the training loop can be handy for monitoring its progress or for steering it. Here is an example where we follow the evolution of the network weights over 10 iterations: End of explanation """
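A small variation on the warm-start loop above, added as an illustration and not part of the original notebook: instead of printing the raw weight matrices at every manual step, one can track the training loss exposed by the fitted estimator, which is easier to read when monitoring convergence.

# Illustrative sketch: track the training loss across manual warm-start steps.
clf_ws = MLPClassifier(hidden_layer_sizes=(15,), random_state=1,
                       max_iter=1, warm_start=True)
for i in range(10):
    clf_ws.fit(X, y)
    print(i, clf_ws.loss_)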
tensorflow/docs-l10n
site/zh-cn/hub/tutorials/bert_experts.ipynb
apache-2.0
#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """ Explanation: Copyright 2020 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """
!pip3 install --quiet tensorflow !pip3 install --quiet tensorflow_text import seaborn as sns from sklearn.metrics import pairwise import tensorflow as tf import tensorflow_hub as hub import tensorflow_text as text # Imports TF ops for preprocessing. #@title Configure the model { run: "auto" } BERT_MODEL = "https://tfhub.dev/google/experts/bert/wiki_books/2" # @param {type: "string"} ["https://tfhub.dev/google/experts/bert/wiki_books/2", "https://tfhub.dev/google/experts/bert/wiki_books/mnli/2", "https://tfhub.dev/google/experts/bert/wiki_books/qnli/2", "https://tfhub.dev/google/experts/bert/wiki_books/qqp/2", "https://tfhub.dev/google/experts/bert/wiki_books/squad2/2", "https://tfhub.dev/google/experts/bert/wiki_books/sst2/2", "https://tfhub.dev/google/experts/bert/pubmed/2", "https://tfhub.dev/google/experts/bert/pubmed/squad2/2"] # Preprocessing must match the model, but all the above use the same. PREPROCESS_MODEL = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3" """ Explanation: <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/bert_experts"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/bert_experts.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/bert_experts.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/bert_experts.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td> <td><a href="https://tfhub.dev/s?q=experts%2Fbert"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">See TF Hub models</a></td> </table> BERT Experts from TF-Hub This colab demonstrates how to: Load BERT models from TensorFlow Hub that have been trained on different tasks, including MNLI, SQuAD, and PubMed Use a matching preprocessing model to tokenize raw text and convert it to IDs Generate the pooled and sequence output from the token input IDs using the loaded model Look at the semantic similarity of the pooled outputs of different sentences Note: this colab should be run with a GPU runtime Set up and imports End of explanation """
sentences = [ "Here We Go Then, You And I is a 1999 album by Norwegian pop artist Morten Abel. It was Abel's second CD as a solo artist.", "The album went straight to number one on the Norwegian album chart, and sold to double platinum.", "Among the singles released from the album were the songs \"Be My Lover\" and \"Hard To Stay Awake\".", "Riccardo Zegna is an Italian jazz musician.", "Rajko Maksimović is a composer, writer, and music pedagogue.", "One of the most significant Serbian composers of our time, Maksimović has been and remains active in creating works for different ensembles.", "Ceylon spinach is a common name for several plants and may refer to: Basella alba Talinum fruticosum", "A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth.", "A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.", ] """ Explanation: Sentences Let's take some sentences from Wikipedia to run through the model End of explanation """
preprocess = hub.load(PREPROCESS_MODEL) bert = hub.load(BERT_MODEL) inputs = preprocess(sentences) outputs = bert(inputs) print("Sentences:") print(sentences) print("\nBERT inputs:") print(inputs) print("\nPooled embeddings:") print(outputs["pooled_output"]) print("\nPer token embeddings:") print(outputs["sequence_output"]) """ Explanation: Run the model We'll load the BERT model from TF-Hub, tokenize our sentences using the matching preprocessing model from TF-Hub, then feed the tokenized sentences into the model. To keep this colab fast and simple, we recommend running it on a GPU. Go to Runtime → Change runtime type to make sure GPU is selected End of explanation """
#@title Helper functions def plot_similarity(features, labels): """Plot a similarity matrix of the embeddings.""" cos_sim = pairwise.cosine_similarity(features) sns.set(font_scale=1.2) cbar_kws=dict(use_gridspec=False, location="left") g = sns.heatmap( cos_sim, xticklabels=labels, yticklabels=labels, vmin=0, vmax=1, cmap="Blues", cbar_kws=cbar_kws) g.tick_params(labelright=True, labelleft=False) g.set_yticklabels(labels, rotation=0) g.set_title("Semantic Textual Similarity") plot_similarity(outputs["pooled_output"], sentences) """ Explanation: Semantic similarity Now let's take a look at the pooled_output embeddings of our sentences and compare how similar they are across sentences. End of explanation """
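As a numeric companion to the similarity heatmap (this cell is an illustrative addition, not part of the original tutorial), the same pairwise cosine similarities can be printed directly, for example between the first sentence and every other sentence.

# Illustrative addition: cosine similarity of the first sentence to the others,
# computed from the same pooled_output embeddings used in the heatmap.
sim = pairwise.cosine_similarity(outputs["pooled_output"])
for sentence, score in zip(sentences, sim[0]):
    print(round(float(score), 3), sentence[:60])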
dolittle007/dolittle007.github.io
notebooks/survival_analysis.ipynb
gpl-3.0
%matplotlib inline from matplotlib import pyplot as plt import numpy as np import pymc3 as pm from pymc3.distributions.timeseries import GaussianRandomWalk import seaborn as sns from statsmodels import datasets from theano import tensor as T """ Explanation: Bayesian Survival Analysis Author: Austin Rochford Survival analysis studies the distribution of the time to an event. Its applications span many fields across medicine, biology, engineering, and social science. This tutorial shows how to fit and analyze a Bayesian survival model in Python using PyMC3. We illustrate these concepts by analyzing a mastectomy data set from R's HSAUR package. End of explanation """ df = datasets.get_rdataset('mastectomy', 'HSAUR', cache=True).data df.event = df.event.astype(np.int64) df.metastized = (df.metastized == 'yes').astype(np.int64) n_patients = df.shape[0] patients = np.arange(n_patients) df.head() n_patients """ Explanation: Fortunately, statsmodels.datasets makes it quite easy to load a number of data sets from R. End of explanation """ df.event.mean() """ Explanation: Each row represents observations from a woman diagnosed with breast cancer that underwent a mastectomy. The column time represents the time (in months) post-surgery that the woman was observed. The column event indicates whether or not the woman died during the observation period. The column metastized represents whether the cancer had metastized prior to surgery. This tutorial analyzes the relationship between survival time post-mastectomy and whether or not the cancer had metastized. A crash course in survival analysis First we introduce a (very little) bit of theory. If the random variable $T$ is the time to the event we are studying, survival analysis is primarily concerned with the survival function $$S(t) = P(T > t) = 1 - F(t),$$ where $F$ is the CDF of $T$. It is mathematically convenient to express the survival function in terms of the hazard rate, $\lambda(t)$. The hazard rate is the instantaneous probability that the event occurs at time $t$ given that it has not yet occured. That is, $$\begin{align} \lambda(t) & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t\ |\ T > t)}{\Delta t} \ & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t)}{\Delta t \cdot P(T > t)} \ & = \frac{1}{S(t)} \cdot \lim_{\Delta t \to 0} \frac{S(t + \Delta t) - S(t)}{\Delta t} = -\frac{S'(t)}{S(t)}. \end{align}$$ Solving this differential equation for the survival function shows that $$S(t) = \exp\left(-\int_0^s \lambda(s)\ ds\right).$$ This representation of the survival function shows that the cumulative hazard function $$\Lambda(t) = \int_0^t \lambda(s)\ ds$$ is an important quantity in survival analysis, since we may consicesly write $S(t) = \exp(-\Lambda(t)).$ An important, but subtle, point in survival analysis is censoring. Even though the quantity we are interested in estimating is the time between surgery and death, we do not observe the death of every subject. At the point in time that we perform our analysis, some of our subjects will thankfully still be alive. In the case of our mastectomy study, df.event is one if the subject's death was observed (the observation is not censored) and is zero if the death was not observed (the observation is censored). 
End of explanation """ fig, ax = plt.subplots(figsize=(8, 6)) blue, _, red = sns.color_palette()[:3] ax.hlines(patients[df.event.values == 0], 0, df[df.event.values == 0].time, color=blue, label='Censored') ax.hlines(patients[df.event.values == 1], 0, df[df.event.values == 1].time, color=red, label='Uncensored') ax.scatter(df[df.metastized.values == 1].time, patients[df.metastized.values == 1], color='k', zorder=10, label='Metastized') ax.set_xlim(left=0) ax.set_xlabel('Months since mastectomy') ax.set_yticks([]) ax.set_ylabel('Subject') ax.set_ylim(-0.25, n_patients + 0.25) ax.legend(loc='center right'); """ Explanation: Just over 40% of our observations are censored. We visualize the observed durations and indicate which observations are censored below. End of explanation """ interval_length = 3 interval_bounds = np.arange(0, df.time.max() + interval_length + 1, interval_length) n_intervals = interval_bounds.size - 1 intervals = np.arange(n_intervals) """ Explanation: When an observation is censored (df.event is zero), df.time is not the subject's survival time. All we can conclude from such a censored obsevation is that the subject's true survival time exceeds df.time. This is enough basic surival analysis theory for the purposes of this tutorial; for a more extensive introduction, consult Aalen et al.^[Aalen, Odd, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.] Bayesian proportional hazards model The two most basic estimators in survial analysis are the Kaplan-Meier estimator of the survival function and the Nelson-Aalen estimator of the cumulative hazard function. However, since we want to understand the impact of metastization on survival time, a risk regression model is more appropriate. Perhaps the most commonly used risk regression model is Cox's proportional hazards model. In this model, if we have covariates $\mathbf{x}$ and regression coefficients $\beta$, the hazard rate is modeled as $$\lambda(t) = \lambda_0(t) \exp(\mathbf{x} \beta).$$ Here $\lambda_0(t)$ is the baseline hazard, which is independent of the covariates $\mathbf{x}$. In this example, the covariates are the one-dimensonal vector df.metastized. Unlike in many regression situations, $\mathbf{x}$ should not include a constant term corresponding to an intercept. If $\mathbf{x}$ includes a constant term corresponding to an intercept, the model becomes unidentifiable. To illustrate this unidentifiability, suppose that $$\lambda(t) = \lambda_0(t) \exp(\beta_0 + \mathbf{x} \beta) = \lambda_0(t) \exp(\beta_0) \exp(\mathbf{x} \beta).$$ If $\tilde{\beta}_0 = \beta_0 + \delta$ and $\tilde{\lambda}_0(t) = \lambda_0(t) \exp(-\delta)$, then $\lambda(t) = \tilde{\lambda}_0(t) \exp(\tilde{\beta}_0 + \mathbf{x} \beta)$ as well, making the model with $\beta_0$ unidentifiable. In order to perform Bayesian inference with the Cox model, we must specify priors on $\beta$ and $\lambda_0(t)$. We place a normal prior on $\beta$, $\beta \sim N(\mu_{\beta}, \sigma_{\beta}^2),$ where $\mu_{\beta} \sim N(0, 10^2)$ and $\sigma_{\beta} \sim U(0, 10)$. A suitable prior on $\lambda_0(t)$ is less obvious. We choose a semiparametric prior, where $\lambda_0(t)$ is a piecewise constant function. This prior requires us to partition the time range in question into intervals with endpoints $0 \leq s_1 < s_2 < \cdots < s_N$. With this partition, $\lambda_0 (t) = \lambda_j$ if $s_j \leq t < s_{j + 1}$. 
With $\lambda_0(t)$ constrained to have this form, all we need to do is choose priors for the $N - 1$ values $\lambda_j$. We use independent vague priors $\lambda_j \sim \operatorname{Gamma}(10^{-2}, 10^{-2}).$ For our mastectomy example, we make each interval three months long. End of explanation """ fig, ax = plt.subplots(figsize=(8, 6)) ax.hist(df[df.event == 1].time.values, bins=interval_bounds, color=red, alpha=0.5, lw=0, label='Uncensored'); ax.hist(df[df.event == 0].time.values, bins=interval_bounds, color=blue, alpha=0.5, lw=0, label='Censored'); ax.set_xlim(0, interval_bounds[-1]); ax.set_xlabel('Months since mastectomy'); ax.set_yticks([0, 1, 2, 3]); ax.set_ylabel('Number of observations'); ax.legend(); """ Explanation: We see how deaths and censored observations are distributed in these intervals. End of explanation """ last_period = np.floor((df.time - 0.01) / interval_length).astype(int) death = np.zeros((n_patients, n_intervals)) death[patients, last_period] = df.event """ Explanation: With the prior distributions on $\beta$ and $\lambda_0(t)$ chosen, we now show how the model may be fit using MCMC simulation with pymc3. The key observation is that the piecewise-constant proportional hazard model is closely related to a Poisson regression model. (The models are not identical, but their likelihoods differ by a factor that depends only on the observed data and not the parameters $\beta$ and $\lambda_j$. For details, see Germรกn Rodrรญguez's WWS 509 course notes.) We define indicator variables based on whether or the $i$-th suject died in the $j$-th interval, $$d_{i, j} = \begin{cases} 1 & \textrm{if subject } i \textrm{ died in interval } j \ 0 & \textrm{otherwise} \end{cases}.$$ End of explanation """ exposure = np.greater_equal.outer(df.time, interval_bounds[:-1]) * interval_length exposure[patients, last_period] = df.time - interval_bounds[last_period] """ Explanation: We also define $t_{i, j}$ to be the amount of time the $i$-th subject was at risk in the $j$-th interval. End of explanation """ SEED = 5078864 # from random.org with pm.Model() as model: lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals) beta = pm.Normal('beta', 0, sd=1000) lambda_ = pm.Deterministic('lambda_', T.outer(T.exp(beta * df.metastized), lambda0)) mu = pm.Deterministic('mu', exposure * lambda_) obs = pm.Poisson('obs', mu, observed=death) """ Explanation: Finally, denote the risk incurred by the $i$-th subject in the $j$-th interval as $\lambda_{i, j} = \lambda_j \exp(\mathbf{x}_i \beta)$. We may approximate $d_{i, j}$ with a Possion random variable with mean $t_{i, j}\ \lambda_{i, j}$. This approximation leads to the following pymc3 model. End of explanation """ n_samples = 1000 with model: trace_ = pm.sample(n_samples,random_seed=SEED) trace = trace_[100:] """ Explanation: We now sample from the model. End of explanation """ np.exp(trace['beta'].mean()) pm.plot_posterior(trace, varnames=['beta'], color='#87ceeb'); pm.autocorrplot(trace, varnames=['beta']); """ Explanation: We see that the hazard rate for subjects whose cancer has metastized is about double the rate of those whose cancer has not metastized. 
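To put a number on the uncertainty of this ratio (assuming the same `trace` object sampled above), one option is to summarize the hazard ratio $\exp(\beta)$ directly:

```python
import numpy as np

# Posterior summary of the hazard ratio exp(beta), using the trace sampled above.
hazard_ratio = np.exp(trace['beta'])
print('posterior mean:', hazard_ratio.mean())
print('95% credible interval:', np.percentile(hazard_ratio, [2.5, 97.5]))
```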
End of explanation """ base_hazard = trace['lambda0'] met_hazard = trace['lambda0'] * np.exp(np.atleast_2d(trace['beta']).T) def cum_hazard(hazard): return (interval_length * hazard).cumsum(axis=-1) def survival(hazard): return np.exp(-cum_hazard(hazard)) def plot_with_hpd(x, hazard, f, ax, color=None, label=None, alpha=0.05): mean = f(hazard.mean(axis=0)) percentiles = 100 * np.array([alpha / 2., 1. - alpha / 2.]) hpd = np.percentile(f(hazard), percentiles, axis=0) ax.fill_between(x, hpd[0], hpd[1], color=color, alpha=0.25) ax.step(x, mean, color=color, label=label); fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6)) plot_with_hpd(interval_bounds[:-1], base_hazard, cum_hazard, hazard_ax, color=blue, label='Had not metastized') plot_with_hpd(interval_bounds[:-1], met_hazard, cum_hazard, hazard_ax, color=red, label='Metastized') hazard_ax.set_xlim(0, df.time.max()); hazard_ax.set_xlabel('Months since mastectomy'); hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$'); hazard_ax.legend(loc=2); plot_with_hpd(interval_bounds[:-1], base_hazard, survival, surv_ax, color=blue) plot_with_hpd(interval_bounds[:-1], met_hazard, survival, surv_ax, color=red) surv_ax.set_xlim(0, df.time.max()); surv_ax.set_xlabel('Months since mastectomy'); surv_ax.set_ylabel('Survival function $S(t)$'); fig.suptitle('Bayesian survival model'); """ Explanation: We now examine the effect of metastization on both the cumulative hazard and on the survival function. End of explanation """ with pm.Model() as time_varying_model: lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals) beta = GaussianRandomWalk('beta', tau=1., shape=n_intervals) lambda_ = pm.Deterministic('h', lambda0 * T.exp(T.outer(T.constant(df.metastized), beta))) mu = pm.Deterministic('mu', exposure * lambda_) obs = pm.Poisson('obs', mu, observed=death) """ Explanation: We see that the cumulative hazard for metastized subjects increases more rapidly initially (through about seventy months), after which it increases roughly in parallel with the baseline cumulative hazard. These plots also show the pointwise 95% high posterior density interval for each function. One of the distinct advantages of the Bayesian model fit with pymc3 is the inherent quantification of uncertainty in our estimates. Time varying effects Another of the advantages of the model we have built is its flexibility. From the plots above, we may reasonable believe that the additional hazard due to metastization varies over time; it seems plausible that cancer that has metastized increases the hazard rate immediately after the mastectomy, but that the risk due to metastization decreases over time. We can accomodate this mechanism in our model by allowing the regression coefficients to vary over time. In the time-varying coefficent model, if $s_j \leq t < s_{j + 1}$, we let $\lambda(t) = \lambda_j \exp(\mathbf{x} \beta_j).$ The sequence of regression coefficients $\beta_1, \beta_2, \ldots, \beta_{N - 1}$ form a normal random walk with $\beta_1 \sim N(0, 1)$, $\beta_j\ |\ \beta_{j - 1} \sim N(\beta_{j - 1}, 1)$. We implement this model in pymc3 as follows. End of explanation """ with time_varying_model: time_varying_trace_ = pm.sample(n_samples, random_seed=SEED) time_varying_trace = time_varying_trace_[100:] pm.forestplot(time_varying_trace, varnames=['beta']); """ Explanation: We proceed to sample from this model. 
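Before sampling, it can help to see what the random-walk prior on the $\beta_j$ looks like on its own; this is a standalone sketch with a made-up number of intervals, separate from the model above:

```python
import numpy as np

# One draw from the prior: beta_1 ~ N(0, 1), beta_j | beta_{j-1} ~ N(beta_{j-1}, 1).
np.random.seed(0)
n_intervals_demo = 20                      # made-up, for illustration only
increments = np.random.normal(0.0, 1.0, size=n_intervals_demo)
beta_path = np.cumsum(increments)          # cumulative N(0, 1) increments form the random walk
print(beta_path[:5])
```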
End of explanation """ fig, ax = plt.subplots(figsize=(8, 6)) beta_hpd = np.percentile(time_varying_trace['beta'], [2.5, 97.5], axis=0) beta_low = beta_hpd[0] beta_high = beta_hpd[1] ax.fill_between(interval_bounds[:-1], beta_low, beta_high, color=blue, alpha=0.25); beta_hat = time_varying_trace['beta'].mean(axis=0) ax.step(interval_bounds[:-1], beta_hat, color=blue); ax.scatter(interval_bounds[last_period[(df.event.values == 1) & (df.metastized == 1)]], beta_hat[last_period[(df.event.values == 1) & (df.metastized == 1)]], c=red, zorder=10, label='Died, cancer metastized'); ax.scatter(interval_bounds[last_period[(df.event.values == 0) & (df.metastized == 1)]], beta_hat[last_period[(df.event.values == 0) & (df.metastized == 1)]], c=blue, zorder=10, label='Censored, cancer metastized'); ax.set_xlim(0, df.time.max()); ax.set_xlabel('Months since mastectomy'); ax.set_ylabel(r'$\beta_j$'); ax.legend(); """ Explanation: We see from the plot of $\beta_j$ over time below that initially $\beta_j > 0$, indicating an elevated hazard rate due to metastization, but that this risk declines as $\beta_j < 0$ eventually. End of explanation """ tv_base_hazard = time_varying_trace['lambda0'] tv_met_hazard = time_varying_trace['lambda0'] * np.exp(np.atleast_2d(time_varying_trace['beta'])) fig, ax = plt.subplots(figsize=(8, 6)) ax.step(interval_bounds[:-1], cum_hazard(base_hazard.mean(axis=0)), color=blue, label='Had not metastized'); ax.step(interval_bounds[:-1], cum_hazard(met_hazard.mean(axis=0)), color=red, label='Metastized'); ax.step(interval_bounds[:-1], cum_hazard(tv_base_hazard.mean(axis=0)), color=blue, linestyle='--', label='Had not metastized (time varying effect)'); ax.step(interval_bounds[:-1], cum_hazard(tv_met_hazard.mean(axis=0)), color=red, linestyle='--', label='Metastized (time varying effect)'); ax.set_xlim(0, df.time.max() - 4); ax.set_xlabel('Months since mastectomy'); ax.set_ylim(0, 2); ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$'); ax.legend(loc=2); fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6)) plot_with_hpd(interval_bounds[:-1], tv_base_hazard, cum_hazard, hazard_ax, color=blue, label='Had not metastized') plot_with_hpd(interval_bounds[:-1], tv_met_hazard, cum_hazard, hazard_ax, color=red, label='Metastized') hazard_ax.set_xlim(0, df.time.max()); hazard_ax.set_xlabel('Months since mastectomy'); hazard_ax.set_ylim(0, 2); hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$'); hazard_ax.legend(loc=2); plot_with_hpd(interval_bounds[:-1], tv_base_hazard, survival, surv_ax, color=blue) plot_with_hpd(interval_bounds[:-1], tv_met_hazard, survival, surv_ax, color=red) surv_ax.set_xlim(0, df.time.max()); surv_ax.set_xlabel('Months since mastectomy'); surv_ax.set_ylabel('Survival function $S(t)$'); fig.suptitle('Bayesian survival model with time varying effects'); """ Explanation: The coefficients $\beta_j$ begin declining rapidly around one hundred months post-mastectomy, which seems reasonable, given that only three of twelve subjects whose cancer had metastized lived past this point died during the study. The change in our estimate of the cumulative hazard and survival functions due to time-varying effects is also quite apparent in the following plots. End of explanation """
hajicj/FEL-NLP-IR_2016
tutorial/tutorial.ipynb
apache-2.0
import npfl103 """ Explanation: Information Retrieval This is a tutorial for the npfl103 package for Information Retrieval assignments. Big picture In simple IR systems that we'll build in this lab session, two major things are happening more or less independently on each other. One: the similarity index of documents to retrieve has to be built. Two: the queries are processed and documents get returned. The first part is all about representing the documents in your collection as points in a vector space. In the second part, you then convert the queries into the same vector space, and return the documents according to how close to the query point they are represented. Your job will be mostly to deal with designing a clever vector space, so that the closest documents to a query happen to be the right ones to retrieve. There are some pesky technicalities that have to be taken care of: reading the documents, writing the outputs, evaluating, etc. This package does its best to help you not to have to deal with these parts, but you kind of have to be aware of them, so the tutorial does go through how they are handled. The plan The tutorial goes through the following steps: Loading documents and queries (topics) Processing documents and queries into a vector space (!!!) Transforming the document vectors from one space to another (!!!) Making queries Writing the outputs Evaluating Points 2 and 3 are where you're supposed to modify things and come up with ideas. The linguistic stuff (lemmatization/stemming, part-of-speech filtering, etc.) comes in step 2, the math (TF-IDF, pivot normalization, topic models...) come up in step 3. Note that the Python classes you're supposed to use have documentation inside, with quite detailed examples. We don't cover all of that here -- the tutorial focuses on how the library fits together. End of explanation """ import os dpath = os.path.join('.', 'tutorial-assignment') dlist = os.path.join(dpath, 'documents.list') qlist = os.path.join(dpath, 'topics.list') """ Explanation: Tutorial data The data for the tutorial lives in the tutorial_assignment subfolder. It mirrors the assignment folder in structure and file types. End of explanation """ from npfl103.io import Collection coll_docs = Collection(dlist) """ Explanation: Loading documents and queries For loading data, use the npfl103.io module. From the top down, there is a class for representing the entire collection of documents (Collection), a Document class for representing one document and an equivalent Topic class for representing one query topic, a VText class for representing a block of text in one zone of a document/query, and a VToken class for representing one word (or equivalent token) in a zone text. These classes are nothing interesting: they merely represent the data (both the queries and the documents that should be retrieved). Check out their documentation strings for details on how to operate them. The main entry point to data loading is the Collection class. Initialize it with the documents.list (or topics.list) file: End of explanation """ from npfl103.io import Topic coll_queries = Collection(qlist, document_cls=Topic) """ Explanation: Notice that creating the Collection was fast. This is because the whole pipeline in npfl103 is lazy: no document is read until it is actually requested. This helps keep time and especially memory requirements down; the library is designed to have a constant memory footprint. Collection item classes A Collection consists of individual documents. 
There are two implemented document types: the Document class from npfl103.io, and the Topic class. Collections are by default created as collections of Documents; however, for reading the queries, we use the same Collection mechanism and explicitly supply the Topic class. End of explanation """ from npfl103.io import BinaryVectorizer, TermFrequencyVectorizer """ Explanation: Caching The constant memory footprint is not exactly true: for speeding up repeated document reads, the Collection caches the documents it read. Eventually, you may run out of memory. In that case, try creating a Collection with nocache=True. Processing documents and queries into a vector space To run a vector space information retrieval experiment, we now have to convert the loaded representations of documents and queries into a vector space. For this, we provide the Vectorizer classes. End of explanation """ vectorizer = TermFrequencyVectorizer(field='lemma') """ Explanation: The purpose of a Vectorizer is to take a stream of a document's tokens and convert it into one vector representing this document in some vector space. Each token is used as a dimension of the vector space. If your tokens are just part of speech tags (noun, verb, etc.), then your space will have just some 10 dimensions; if your tokens are word forms, then there will be thousands of dimensions. (The vectors will be sparse vectors, only remembering the nonzero elements -- implemented simply as a dict.) When we initialize a Vectorizer, we need to specify two things: What the stream of tokens should contain (what the dimensions of the space will be), How the values of the vector items will be computed (binary? frequencies? etc.) Let's build a term frequency vectorizer that iterates over lemmas. End of explanation """ vdocs = (vectorizer.transform(d) for d in coll_docs) """ Explanation: The vectorizer provides a transform method that does the processing. End of explanation """ cw_vectorizer = TermFrequencyVectorizer(field='lemma', token_filter=lambda t: t.pos in 'NADV') """ Explanation: (Notice that we're still using generator expressions, so nothing really gets computed so far.) After running through a Vectorizer, there is no implementation difference between what a Document and a Topic look like. Which tokens? Not all tokens are relevant. For instance, you might only want to represent a document using its title and headings, not the texts themselves. Or you might want to only use tokens which are "content words" -- usually defined as nouns, adjectives, verbs, or adverbs. The Vectorizers accept some arguments to specify which tokens should be used: Zones (TITLE, HEADING, ...) Field (form, lemma, pos, full_pos, ... - see format specification in the assignment README) Start and end token (e.g. you might only want to read the beginnings of documents) Token filter We've already seen an example using field. The zones for documents are TITLE, HEADING, and TEXT. The zones for queries (topics) are title, narr, and desc. Take a moment to look into the data, to make sure you know what the roles of the zones are. Except for zones, make sure you use the same vectorization settings for both the Documents and the Topics! Token filter The token filter argument is a function that returns True or False when called with a VToken object. This enables filtering out tokens based on a different field than the one used to build the vector space. 
For instance, the aforementioned content word filtering would be iplemented as: End of explanation """ vdocs = [vectorizer.transform(d) for d in coll_docs] # This actually parses the documents. cw_docs = [cw_vectorizer.transform(d) for d in coll_docs] d = vdocs[0] cw_d = cw_docs[0] # Print the top 10 most frequent tokens import pprint, operator print('All words:') pprint.pprint({w: n for w, n in sorted(d.items(), key=operator.itemgetter(1), reverse=True)[:10]}) print('----------------------\nContent words:') pprint.pprint({w: n for w, n in sorted(cw_d.items(), key=operator.itemgetter(1), reverse=True)[:10]}) """ Explanation: We can compare the results of the "plain" vectorizer and the content word vectorizer: End of explanation """ from npfl103.transform import TransformCorpus """ Explanation: We can see that token filtering can make a pretty large difference. Transforming vectors While the Vectorizers got us from the raw data to a vector space, we might not be particularly happy with the immediate results. For instance, in the above example, we see very general words like "new" or "be", and we might wish to apply the Inverse Document Frequency transform. Or we want to normalize the frequencies to sum to 1, or use pivot normalization, or... whatever. To do operations on vector spaces, we use TransformCorpus objects as "pipeline sections" that operate on the flow of data. End of explanation """ # This is the transformation we want to apply. def normalize(vec): doc_length = sum(vec.values()) return {k: v / doc_length for k, v in vec.items()} normalized_docs = TransformCorpus(corpus=cw_docs, transform=normalize, name='normalized_docs') """ Explanation: These "pipeline" components get two parameters: the transformation they should be applying, and the source of the data to apply it on. Let's make an example transformation: normalizing the frequencies to 1. End of explanation """ cw_queries = (cw_vectorizer.transform(q) for q in coll_queries) # Generator, again normalized_queries = TransformCorpus(corpus=cw_queries, transform=normalize, name='normalized_queries') """ Explanation: The corpus is an iterable that contains dictionary-like objects as sparse document vectors. The transform parameter is a callable: either a function, or a class that implements a __call__ method. Notice also the name parameter: this is for yourself, to be able to keep track of what each pipeline component does. Let's do the same thing for queries: End of explanation """ cw_docs = TransformCorpus(corpus=coll_docs, transform=cw_vectorizer.transform, name='vectorized_docs') cw_queries = TransformCorpus(corpus=coll_queries, transform=cw_vectorizer.transform, name='vectorized_queries') """ Explanation: Vectorization as transformation It is maybe more elegant to implement vectorization also as a pipeline component instead of having lists or generators floating around. End of explanation """ normalized_docs = TransformCorpus(corpus=cw_docs, transform=normalize, name='normalized_docs') normalized_queries = TransformCorpus(corpus=cw_queries, transform=normalize, name='normalized_docs') """ Explanation: Chaining transformations The pipelines can, of course, build on top of each other. Using the previous pipeline stages cw_docs and cw_queries objects of TransformCorpus class as the corpus parameter, we can put the normalization on top of these: End of explanation """ from npfl103.similarity import Similarity # The similarity is initialized with the document corpus. 
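# Transforming a query through it yields that query's similarity scores against the indexed documents.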
similarity = Similarity(corpus=normalized_docs, k=10) # Returning the top 10 documents. Use None for all docs. similarity_corpus = TransformCorpus(corpus=normalized_queries, transform=similarity, name='sim') """ Explanation: How would you implement TF-IDF in this system? Vectorize with TermFrequencyVectorizer Implement IDF as a class with a __call__ method that can be used as a transformation. (Hint: it needs to see the training corpus of documents at initialization time, to initialize the inverse document frequencies for the individual terms.) Add a TransformCorpus that gets this IDF transformer as a transform method on top of the vectorized corpus (that was also used as input for the IDF transformer's initialization). Similarity queries Assuming we're happy with the vector space in which our documents now live, we want to find for a query the similarity scores for all documents. The same transformation mechanism is used. This time, we transform a query from the same space as the documents into a similarity space: the dimensions of this space are the documents which should be retrieved, and the values are the similarity scores for the query and that given document. End of explanation """ import io # The system io, not npfl103.io hdl = io.StringIO() # Technical workaround, so that the tutorial does not create files at this point. # This is what writes the output. In practice, you'll probably use "with open(...) as hdl:" to write to a file. Similarity.write_trec(similarity_corpus, similarity, hdl) """ Explanation: Recapitulation At this point, the retrieval pipeline is set up. We have: Vectorized and processed the documents which we want to retrieve, We can vectorize and process an incoming query in the same way, We can use the query to compute similarities and return the top K documents. Now, we only have to worry about recording our retrieval results and evaluating them against human judgments of relevant vs. non-relevant documents. Writing the outputs In order to record the outputs, use the Similarity.write_trec static method: End of explanation """ from npfl103.evaluation import do_eval, print_eval """ Explanation: Evaluation You should already have compiled trec_eval using the instructions in the README in npfl103/eval. The npfl103.evaluation package provides a do_eval() and print_eval() function to run evaluation from within the package. End of explanation """ results_file = 'tutorial-assignment/tutorial-output.dat' with open(results_file, 'w') as outstream: Similarity.write_trec(similarity_corpus, similarity, outstream) """ Explanation: Since trec_eval (which is called inside these functions) needs an input file, not a stream, we have to dump our results to a file. End of explanation """ qrels_file = 'tutorial-assignment/qrels.txt' print_eval(qrels_file, results_file) """ Explanation: The tutorial assignment has its ground truth file: End of explanation """ print_eval(qrels_file, results_file, results_by_query=True) """ Explanation: You can also break down the results by query, by setting results_by_query=True: End of explanation """ results = do_eval(qrels_file, results_file, results_by_query=True) pprint.pprint([q for q in results]) pprint.pprint(results['10.2452/401-AH']) """ Explanation: If you want to do further processing with the results, use do_eval(). Instead of printing results, it returns them as a dictionary. Again, you can request the results by query (it will come in an OrderedDict of OrderedDicts, see do_eval() docstring). End of explanation """
cbpygit/pypmj
examples/Setting up a configuration file.ipynb
gpl-3.0
import config_tools as ct """ Explanation: Getting a config parser The pypmj-module uses a configuration file in which all information about the JCMsuite-installation, data storage, servers and so on are set. This makes pypmj very flexible, as you can generate as many configuration files as you like. Here, we show how to easily set up your configuration using the config_tools shipped with pypmj. We first import the config_tools. End of explanation """ config = ct.get_config_parser() """ Explanation: We can get a suitable config parser for convenient setting of our preferences. End of explanation """ config.sections() """ Explanation: This parser already contains some default values and the standard sections: End of explanation """ # config.set('User', 'email', 'your_address@your_provider.com') """ Explanation: We will go through the different sections and show which values can to be set. Note: If a configuration option is not set, a default value will be used by pypmj. So you only need to uncomment and set the options that you like. Sections User Set your e-mail address here if you like to receive status e-mail. End of explanation """ # config.set('Storage', 'base', '/path/to/your/global/storage/folder') """ Explanation: Storage Set up a base folder into which all the simulation data should be stored. The SimulationSet class of pypmj offers a convenient way to organize your simulations inside this folder. You can also set the special value 'CWD', which will cause that current working directory will be used instead. End of explanation """ # config.set('Data', 'projects', 'project/collection/folder') """ Explanation: Data To keep your projects in one place, you can set a global projects folder. If you initialize a JCMProject unsing the JCMProject-class of pypmj, you can then give the path to your project relative to this directory. pypmj will leave the contents if these folders untouched and copy the contents to a working directory. If you don't like to use a global folder, you can also pass absolute paths to JCMProject. End of explanation """ # config.set('Data', 'refractiveIndexDatabase', '/path/to/your/RefractiveIndex/database') """ Explanation: Note: Be sure that this path is set to the project-folder shipped with pypmj to successfully run the Using pypmj - the mie2D-project notebook. If you are using the materials-extension of pypmj, a RefractiveIndex database is needed and the path is configured here. Please contact one of the maintainers of pypmj for info on such a database. End of explanation """ config.set('JCMsuite', 'root', '/path/to/your/parent/CJMsuite/install/dir') config.set('JCMsuite', 'dir', 'JCMsuite_X_Y_Z') # <- this is simply the folder name config.set('JCMsuite', 'kernel', 3) """ Explanation: JCMsuite It is assumed that your installation(s) of JCMsuite are in a fixed directory, which is configured using the root key. That way, you can change the version of JCMsuite to use easily by only changing the directory name with the key dir. Some versions of JCMsuite provide different kernels, which can be set using the kay kernel. 
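A quick sanity check is to confirm that the combined path actually exists; this sketch assumes pypmj simply joins `root` and `dir` to locate the installation, so adapt it if your setup differs:

```python
import os

# Verify that the configured JCMsuite installation directory exists on this machine.
jcm_path = os.path.join(config.get('JCMsuite', 'root'), config.get('JCMsuite', 'dir'))
print(jcm_path, '->', os.path.isdir(jcm_path))
```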
End of explanation """ # config.set('Logging', 'level', 'INFO') # config.set('Logging', 'write_logfile', True) # config.set('Logging', 'log_directory', 'logs') # <- can be a relative or an absolute path # config.set('Logging', 'log_filename', 'from_date') # config.set('Logging', 'send_mail', True) # config.set('Logging', 'mail_server', 'localhost') """ Explanation: Logging For the logging, you can specify the logging level ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', or 'NOTSET'), whether or not to write a log-file and if status mails should be send by the run_simusets_in_save_mode utility function. For the latter, you further need to configure the mail server used by smtplib.SMTP . End of explanation """ ct.add_server? """ Explanation: Adding servers Finally, you can add one or more servers which can be used by the JCMdaemon. Have a look at the doc string to see the possible configurations: End of explanation """ # ct.add_server(config, 'localhost', # multiplicity_default=1, # n_threads_default=1) """ Explanation: Minimally, the localhost needs to be added, because otherwise there will be no resources for the JCMdaemon. This is done by using 'localhost' as the hostname and your local username as the login: End of explanation """ # ct.add_server(config, 'myserver.something.com', 'YOUR_LOGIN', # JCM_root='/path/on/server/to/your/jcm_installations', # multiplicity_default=6, # n_threads_default=6, # nickname='myserver') """ Explanation: But you may have additional server power. Let's assume you have installed JCMsuite on a server called myserver which you can reach via ssh by typing ssh YOUR_LOGIN@myserver.something.com. The directory into which your JCMsuite version(s) is(are) installed may be /path/on/server/to/your/jcm_installations. The JCMsuite directory name needs to be the same as configured in the section JCMsuite under key dir! You may further want to set a nickname to manage all your servers later more easily, e.g. myserver. Finaly, you want to set 6 workers and 6 threads per worker as a default. Then just write: End of explanation """ ct.write_config_file(config, 'config.cfg') """ Explanation: Note: You will need a password free login to these servers. Saving the configuration file So you are done and all that is left is saving the configuration to a config file: End of explanation """ import os os.environ['PYPMJ_CONFIG_FILE'] = '/path/to/your/config.cfg' """ Explanation: Using the configuration file with pypmj Using a specific configuration file is easily done by setting the environment variable 'PYPMJ_CONFIG_FILE'. If this is not set, pypmj will look for a config.cfg in the current working directory. Setting the environment variable can be done using the os module: End of explanation """
NeuroDataDesign/fngs
docs/ebridge2/fngs_specs/week_0309/specs.ipynb
apache-2.0
%%script false ## disklog.sh #!/bin/bash -e # run this in the background with nohup ./disklog.sh > disk.txt & # while true; do echo "$(du -s $1 | awk '{print $1}')" sleep 30 done ##cpulog.sh import psutil import time import argparse def cpulog(outfile): with open(outfile, 'w') as outf: while(True): cores = psutil.cpu_percent(percpu=True) corestr = ",".join([str(core) for core in cores]) outf.write(corestr + '\n') outf.flush() time.sleep(1) # delay for 1 second def main(): parser = argparse.ArgumentParser() parser.add_argument('outfile', help='the file to write core usage to.') args = parser.parse_args() cpulog(args.outfile) if __name__ == "__main__": main() ## memlog.sh #!/bin/bash -e # run this in the background with nohup ./memlog.sh > mem.txt & # while true; do echo "$(free -m | grep buffers/cache | awk '{print $3}')" sleep 1 done ## runonesub.sh # A function for generating memory and cpu summaries for fngs pipeline. # # Usage: ./generate_statistics.sh /path/to/rest /path/to/anat /path/to/output rm -rf $3 mkdir $3 ./memlog.sh > ${3}/mem.txt & memkey=$! python cpulog.py ${3}/cpu.txt & cpukey=$! ./disklog.sh $3 > ${3}/disk.txt & diskkey=$! res=2mm atlas='/FNGS_server/atlases/atlas/MNI152_T1-${res}.nii.gz' atlas_brain='/FNGS_server/atlases/atlas/MNI152_T1-${res}_brain.nii.gz' atlas_mask='/FNGS_server/atlases/mask/MNI152_T1-${res}_brain_mask.nii.gz' lv_mask='/FNGS_server/atlases/mask/HarvOx_lv_thr25-${res}.nii.gz' label='/FNGS_server/atlases/label/desikan-${res}.nii.gz' exec 4<$1 exec 5<$2 fngs_pipeline $1 $2 1 $atlas $atlas_brain $atlas_mask $lv_mask $3 $label --fmt graphml kill $memkey $cpukey $diskkey %matplotlib inline import numpy as np import re import matplotlib.pyplot as plt def memory_function(infile, dataset): with open(infile, 'r') as mem: lines = mem.readlines() testar = np.asarray([line.strip() for line in lines]).astype(float)/1000 fig=plt.figure() ax = fig.add_subplot(111) ax.plot(range(0, testar.shape[0]), testar - min(testar)) ax.set_ylabel('memory usage in GB') ax.set_xlabel('Time (s)') ax.set_title(dataset + ' Memory Usage; max = %.2f GB; mean = %.2f GB' % (max(testar), np.mean(testar))) return fig def cpu_function(infile, dataset): with open(infile, 'r') as cpuf: lines = cpuf.readlines() testar = [re.split(',',line.strip()) for line in lines][0:-1] corear = np.zeros((len(testar), len(testar[0]))) for i in range(0, len(testar)): corear[i,:] = np.array([float(cpu) for cpu in testar[i]]) fig=plt.figure() ax = fig.add_subplot(111) lines = [ax.plot(corear[:,i], '--', label='cpu '+ str(i), alpha=0.5)[0] for i in range(0, corear.shape[1])] total = corear.sum(axis=1) lines.append(ax.plot(total, label='all cores')[0]) labels = [h.get_label() for h in lines] fig.legend(handles=lines, labels=labels, loc='lower right', prop={'size':6}) ax.set_ylabel('CPU usage (%)') ax.set_ylim([0, max(total)+10]) ax.set_xlabel('Time (s)') ax.set_title(dataset + ' Processor Usage; max = %.1f per; mean = %.1f per' % (max(total), np.mean(total))) return fig def disk_function(infile, dataset): with open(infile, 'r') as disk: lines = disk.readlines() testar = np.asarray([line.strip() for line in lines]).astype(float)/1000000 fig=plt.figure() ax = fig.add_subplot(111) ax.plot(range(0, testar.shape[0]), testar - min(testar)) ax.set_ylabel('Disk usage GB') ax.set_xlabel('Time (30 s)') ax.set_title(dataset + ' Disk Usage; max = %.2f GB; mean = %.2f GB' % (max(testar), np.mean(testar))) return fig memfig = memory_function('/data/BNU_sub/BNU_single/mem.txt', 'BNU 1 single') diskfig = 
disk_function('/data/BNU_sub/BNU_single/disk.txt', 'BNU 1 single') cpufig = cpu_function('/data/BNU_sub/BNU_single/cpu.txt', 'BNU 1 single') memfig.show() diskfig.show() cpufig.show() """ Explanation: Performance Overview Here, we will example the performance of FNGS as a function of time on several datasets. These investigations were performed on a 4 core machine (4 threads) with a 4.0 GhZ processor. BNU1 End of explanation """ memfig = memory_function('/data/HNU_sub/HNU_single/mem.txt', 'HNU 1 single') diskfig = disk_function('/data/HNU_sub/HNU_single/disk.txt', 'HNU 1 single') cpufig = cpu_function('/data/HNU_sub/HNU_single/cpu.txt', 'HNU 1 single') memfig.show() diskfig.show() cpufig.show() """ Explanation: HNU Dataset End of explanation """ memfig = memory_function('/data/DC_sub/DC_single/mem.txt', 'DC 1 single') diskfig = disk_function('/data/DC_sub/DC_single/disk.txt', 'DC 1 single') cpufig = cpu_function('/data/DC_sub/DC_single/cpu.txt', 'DC 1 single') memfig.show() diskfig.show() cpufig.show() memfig = memory_function('/data/NKI_sub/NKI_single/mem.txt', 'NKI 1 single') diskfig = disk_function('/data/NKI_sub/NKI_single/disk.txt', 'NKI 1 single') cpufig = cpu_function('/data/NKI_sub/NKI_single/cpu.txt', 'NKI 1 single') memfig.show() diskfig.show() cpufig.show() """ Explanation: DC1 Dataset End of explanation """ memfig = memory_function('/data/NKI_sub/NKI_parallel2/mem.txt', 'NKI 1 multi parallel') diskfig = disk_function('/data/NKI_sub/NKI_parallel2/disk.txt', 'NKI 1 multi parallel') cpufig = cpu_function('/data/NKI_sub/NKI_parallel2/cpu.txt', 'NKI 1 multi parallel') memfig.show() diskfig.show() cpufig.show() memfig = memory_function('/data/BNU_sub/BNU_parallel1/mem.txt', 'BNU 1 multi parallel') diskfig = disk_function('/data/BNU_sub/BNU_parallel1/disk.txt', 'BNU 1 multi parallel') cpufig = cpu_function('/data/BNU_sub/BNU_parallel1/cpu.txt', 'BNU 1 multi parallel') memfig.show() diskfig.show() cpufig.show() """ Explanation: Multi Subject Here, we look at two datasets to see how multi subject performance works. Note that statistics are shown for a single subject; ie, the disk usage in particular reflects a single subject's disk usage. Two subjects in parallel (run at same time) End of explanation """ memfig = memory_function('/data/NKI_sub/NKI_parallel3/mem.txt', 'NKI 1 multi offset') diskfig = disk_function('/data/NKI_sub/NKI_parallel3/disk.txt', 'NKI 1 multi offset') cpufig = cpu_function('/data/NKI_sub/NKI_parallel3/cpu.txt', 'NKI 1 multi offset') memfig.show() diskfig.show() cpufig.show() """ Explanation: Two subjects Offset End of explanation """
gautam1858/tensorflow
tensorflow/lite/g3doc/tutorials/pose_classification.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2021 The TensorFlow Authors. End of explanation """ !pip install -q opencv-python import csv import cv2 import itertools import numpy as np import pandas as pd import os import sys import tempfile import tqdm from matplotlib import pyplot as plt from matplotlib.collections import LineCollection import tensorflow as tf import tensorflow_hub as hub from tensorflow import keras from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score, classification_report, confusion_matrix """ Explanation: Human Pose Classification with MoveNet and TensorFlow Lite This notebook teaches you how to train a pose classification model using MoveNet and TensorFlow Lite. The result is a new TensorFlow Lite model that accepts the output from the MoveNet model as its input, and outputs a pose classification, such as the name of a yoga pose. The procedure in this notebook consists of 3 parts: * Part 1: Preprocess the pose classification training data into a CSV file that specifies the landmarks (body keypoints) detected by the MoveNet model, along with the ground truth pose labels. * Part 2: Build and train a pose classification model that takes the landmark coordinates from the CSV file as input, and outputs the predicted labels. * Part 3: Convert the pose classification model to TFLite. By default, this notebook uses an image dataset with labeled yoga poses, but we've also included a section in Part 1 where you can upload your own image dataset of poses. <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/tutorials/pose_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/pose_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/pose_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/pose_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a href="https://tfhub.dev/s?q=movenet"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a> </td> </table> Preparation In this section, you'll import the necessary libraries and define several functions to preprocess the training images into a CSV file that contains the landmark coordinates and ground truth labels. Nothing observable happens here, but you can expand the hidden code cells to see the implementation for some of the functions we'll be calling later on. 
If you only want to create the CSV file without knowing all the details, just run this section and proceed to Part 1. End of explanation """ #@title Functions to run pose estimation with MoveNet #@markdown You'll download the MoveNet Thunder model from [TensorFlow Hub](https://www.google.com/url?sa=D&q=https%3A%2F%2Ftfhub.dev%2Fs%3Fq%3Dmovenet), and reuse some inference and visualization logic from the [MoveNet Raspberry Pi (Python)](https://github.com/tensorflow/examples/tree/master/lite/examples/pose_estimation/raspberry_pi) sample app to detect landmarks (ear, nose, wrist etc.) from the input images. #@markdown *Note: You should use the most accurate pose estimation model (i.e. MoveNet Thunder) to detect the keypoints and use them to train the pose classification model to achieve the best accuracy. When running inference, you can use a pose estimation model of your choice (e.g. either MoveNet Lightning or Thunder).* # Download model from TF Hub and check out inference code from GitHub !wget -q -O movenet_thunder.tflite https://tfhub.dev/google/lite-model/movenet/singlepose/thunder/tflite/float16/4?lite-format=tflite !git clone https://github.com/tensorflow/examples.git pose_sample_rpi_path = os.path.join(os.getcwd(), 'examples/lite/examples/pose_estimation/raspberry_pi') sys.path.append(pose_sample_rpi_path) # Load MoveNet Thunder model import utils from data import BodyPart from ml import Movenet movenet = Movenet('movenet_thunder') # Define function to run pose estimation using MoveNet Thunder. # You'll apply MoveNet's cropping algorithm and run inference multiple times on # the input image to improve pose estimation accuracy. def detect(input_tensor, inference_count=3): """Runs detection on an input image. Args: input_tensor: A [height, width, 3] Tensor of type tf.float32. Note that height and width can be anything since the image will be immediately resized according to the needs of the model within this function. inference_count: Number of times the model should run repeatly on the same input image to improve detection accuracy. Returns: A Person entity detected by the MoveNet.SinglePose. """ image_height, image_width, channel = input_tensor.shape # Detect pose using the full input image movenet.detect(input_tensor.numpy(), reset_crop_region=True) # Repeatedly using previous detection result to identify the region of # interest and only croping that region to improve detection accuracy for _ in range(inference_count - 1): person = movenet.detect(input_tensor.numpy(), reset_crop_region=False) return person #@title Functions to visualize the pose estimation results. def draw_prediction_on_image( image, person, crop_region=None, close_figure=True, keep_input_size=False): """Draws the keypoint predictions on image. Args: image: An numpy array with shape [height, width, channel] representing the pixel values of the input image. person: A person entity returned from the MoveNet.SinglePose model. close_figure: Whether to close the plt figure after the function returns. keep_input_size: Whether to keep the size of the input image. Returns: An numpy array with shape [out_height, out_width, channel] representing the image overlaid with keypoint predictions. """ # Draw the detection result on top of the image. image_np = utils.visualize(image, [person]) # Plot the image with detection results. 
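  # Size the figure so the plotted overlay keeps the input image's aspect ratio.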
height, width, channel = image.shape aspect_ratio = float(width) / height fig, ax = plt.subplots(figsize=(12 * aspect_ratio, 12)) im = ax.imshow(image_np) if close_figure: plt.close(fig) if not keep_input_size: image_np = utils.keep_aspect_ratio_resizer(image_np, (512, 512)) return image_np #@title Code to load the images, detect pose landmarks and save them into a CSV file class MoveNetPreprocessor(object): """Helper class to preprocess pose sample images for classification.""" def __init__(self, images_in_folder, images_out_folder, csvs_out_path): """Creates a preprocessor to detection pose from images and save as CSV. Args: images_in_folder: Path to the folder with the input images. It should follow this structure: yoga_poses |__ downdog |______ 00000128.jpg |______ 00000181.bmp |______ ... |__ goddess |______ 00000243.jpg |______ 00000306.jpg |______ ... ... images_out_folder: Path to write the images overlay with detected landmarks. These images are useful when you need to debug accuracy issues. csvs_out_path: Path to write the CSV containing the detected landmark coordinates and label of each image that can be used to train a pose classification model. """ self._images_in_folder = images_in_folder self._images_out_folder = images_out_folder self._csvs_out_path = csvs_out_path self._messages = [] # Create a temp dir to store the pose CSVs per class self._csvs_out_folder_per_class = tempfile.mkdtemp() # Get list of pose classes and print image statistics self._pose_class_names = sorted( [n for n in os.listdir(self._images_in_folder) if not n.startswith('.')] ) def process(self, per_pose_class_limit=None, detection_threshold=0.1): """Preprocesses images in the given folder. Args: per_pose_class_limit: Number of images to load. As preprocessing usually takes time, this parameter can be specified to make the reduce of the dataset for testing. detection_threshold: Only keep images with all landmark confidence score above this threshold. """ # Loop through the classes and preprocess its images for pose_class_name in self._pose_class_names: print('Preprocessing', pose_class_name, file=sys.stderr) # Paths for the pose class. images_in_folder = os.path.join(self._images_in_folder, pose_class_name) images_out_folder = os.path.join(self._images_out_folder, pose_class_name) csv_out_path = os.path.join(self._csvs_out_folder_per_class, pose_class_name + '.csv') if not os.path.exists(images_out_folder): os.makedirs(images_out_folder) # Detect landmarks in each image and write it to a CSV file with open(csv_out_path, 'w') as csv_out_file: csv_out_writer = csv.writer(csv_out_file, delimiter=',', quoting=csv.QUOTE_MINIMAL) # Get list of images image_names = sorted( [n for n in os.listdir(images_in_folder) if not n.startswith('.')]) if per_pose_class_limit is not None: image_names = image_names[:per_pose_class_limit] valid_image_count = 0 # Detect pose landmarks from each image for image_name in tqdm.tqdm(image_names): image_path = os.path.join(images_in_folder, image_name) try: image = tf.io.read_file(image_path) image = tf.io.decode_jpeg(image) except: self._messages.append('Skipped ' + image_path + '. Invalid image.') continue else: image = tf.io.read_file(image_path) image = tf.io.decode_jpeg(image) image_height, image_width, channel = image.shape # Skip images that isn't RGB because Movenet requires RGB images if channel != 3: self._messages.append('Skipped ' + image_path + '. 
Image isn\'t in RGB format.') continue person = detect(image) # Save landmarks if all landmarks were detected min_landmark_score = min( [keypoint.score for keypoint in person.keypoints]) should_keep_image = min_landmark_score >= detection_threshold if not should_keep_image: self._messages.append('Skipped ' + image_path + '. No pose was confidentlly detected.') continue valid_image_count += 1 # Draw the prediction result on top of the image for debugging later output_overlay = draw_prediction_on_image( image.numpy().astype(np.uint8), person, close_figure=True, keep_input_size=True) # Write detection result into an image file output_frame = cv2.cvtColor(output_overlay, cv2.COLOR_RGB2BGR) cv2.imwrite(os.path.join(images_out_folder, image_name), output_frame) # Get landmarks and scale it to the same size as the input image pose_landmarks = np.array( [[keypoint.coordinate.x, keypoint.coordinate.y, keypoint.score] for keypoint in person.keypoints], dtype=np.float32) # Write the landmark coordinates to its per-class CSV file coordinates = pose_landmarks.flatten().astype(np.str).tolist() csv_out_writer.writerow([image_name] + coordinates) if not valid_image_count: raise RuntimeError( 'No valid images found for the "{}" class.' .format(pose_class_name)) # Print the error message collected during preprocessing. print('\n'.join(self._messages)) # Combine all per-class CSVs into a single output file all_landmarks_df = self._all_landmarks_as_dataframe() all_landmarks_df.to_csv(self._csvs_out_path, index=False) def class_names(self): """List of classes found in the training dataset.""" return self._pose_class_names def _all_landmarks_as_dataframe(self): """Merge all per-class CSVs into a single dataframe.""" total_df = None for class_index, class_name in enumerate(self._pose_class_names): csv_out_path = os.path.join(self._csvs_out_folder_per_class, class_name + '.csv') per_class_df = pd.read_csv(csv_out_path, header=None) # Add the labels per_class_df['class_no'] = [class_index]*len(per_class_df) per_class_df['class_name'] = [class_name]*len(per_class_df) # Append the folder name to the filename column (first column) per_class_df[per_class_df.columns[0]] = (os.path.join(class_name, '') + per_class_df[per_class_df.columns[0]].astype(str)) if total_df is None: # For the first class, assign its data to the total dataframe total_df = per_class_df else: # Concatenate each class's data into the total dataframe total_df = pd.concat([total_df, per_class_df], axis=0) list_name = [[bodypart.name + '_x', bodypart.name + '_y', bodypart.name + '_score'] for bodypart in BodyPart] header_name = [] for columns_name in list_name: header_name += columns_name header_name = ['file_name'] + header_name header_map = {total_df.columns[i]: header_name[i] for i in range(len(header_name))} total_df.rename(header_map, axis=1, inplace=True) return total_df #@title (Optional) Code snippet to try out the Movenet pose estimation logic #@markdown You can download an image from the internet, run the pose estimation logic on it and plot the detected landmarks on top of the input image. #@markdown *Note: This code snippet is also useful for debugging when you encounter an image with bad pose classification accuracy. 
You can run pose estimation on the image and see if the detected landmarks look correct or not before investigating the pose classification logic.* test_image_url = "https://cdn.pixabay.com/photo/2017/03/03/17/30/yoga-2114512_960_720.jpg" #@param {type:"string"} !wget -O /tmp/image.jpeg {test_image_url} if len(test_image_url): image = tf.io.read_file('/tmp/image.jpeg') image = tf.io.decode_jpeg(image) person = detect(image) _ = draw_prediction_on_image(image.numpy(), person, crop_region=None, close_figure=False, keep_input_size=True) """ Explanation: Code to run pose estimation using MoveNet End of explanation """ is_skip_step_1 = False #@param ["False", "True"] {type:"raw"} """ Explanation: Part 1: Preprocess the input images Because the input for our pose classifier is the output landmarks from the MoveNet model, we need to generate our training dataset by running labeled images through MoveNet and then capturing all the landmark data and ground truth labels into a CSV file. The dataset we've provided for this tutorial is a CG-generated yoga pose dataset. It contains images of multiple CG-generated models doing 5 different yoga poses. The directory is already split into a train dataset and a test dataset. So in this section, we'll download the yoga dataset and run it through MoveNet so we can capture all the landmarks into a CSV file... However, it takes about 15 minutes to feed our yoga dataset to MoveNet and generate this CSV file. So as an alternative, you can download a pre-existing CSV file for the yoga dataset by setting is_skip_step_1 parameter below to True. That way, you'll skip this step and instead download the same CSV file that will be created in this preprocessing step. On the other hand, if you want to train the pose classifier with your own image dataset, you need to upload your images and run this preprocessing step (leave is_skip_step_1 False)โ€”follow the instructions below to upload your own pose dataset. End of explanation """ use_custom_dataset = False #@param ["False", "True"] {type:"raw"} dataset_is_split = False #@param ["False", "True"] {type:"raw"} """ Explanation: (Optional) Upload your own pose dataset End of explanation """ #@markdown Be sure you run this cell. It's hiding the `split_into_train_test()` function that's called in the next code block. import os import random import shutil def split_into_train_test(images_origin, images_dest, test_split): """Splits a directory of sorted images into training and test sets. Args: images_origin: Path to the directory with your images. This directory must include subdirectories for each of your labeled classes. For example: yoga_poses/ |__ downdog/ |______ 00000128.jpg |______ 00000181.jpg |______ ... |__ goddess/ |______ 00000243.jpg |______ 00000306.jpg |______ ... ... images_dest: Path to a directory where you want the split dataset to be saved. The results looks like this: split_yoga_poses/ |__ train/ |__ downdog/ |______ 00000128.jpg |______ ... |__ test/ |__ downdog/ |______ 00000181.jpg |______ ... test_split: Fraction of data to reserve for test (float between 0 and 1). 
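    Note: the file list is shuffled with a fixed random seed (42), so repeated runs produce the same split.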
""" _, dirs, _ = next(os.walk(images_origin)) TRAIN_DIR = os.path.join(images_dest, 'train') TEST_DIR = os.path.join(images_dest, 'test') os.makedirs(TRAIN_DIR, exist_ok=True) os.makedirs(TEST_DIR, exist_ok=True) for dir in dirs: # Get all filenames for this dir, filtered by filetype filenames = os.listdir(os.path.join(images_origin, dir)) filenames = [os.path.join(images_origin, dir, f) for f in filenames if ( f.endswith('.png') or f.endswith('.jpg') or f.endswith('.jpeg') or f.endswith('.bmp'))] # Shuffle the files, deterministically filenames.sort() random.seed(42) random.shuffle(filenames) # Divide them into train/test dirs os.makedirs(os.path.join(TEST_DIR, dir), exist_ok=True) os.makedirs(os.path.join(TRAIN_DIR, dir), exist_ok=True) test_count = int(len(filenames) * test_split) for i, file in enumerate(filenames): if i < test_count: destination = os.path.join(TEST_DIR, dir, os.path.split(file)[1]) else: destination = os.path.join(TRAIN_DIR, dir, os.path.split(file)[1]) shutil.copyfile(file, destination) print(f'Moved {test_count} of {len(filenames)} from class "{dir}" into test.') print(f'Your split dataset is in "{images_dest}"') if use_custom_dataset: # ATTENTION: # You must edit these two lines to match your archive and images folder name: # !tar -xf YOUR_DATASET_ARCHIVE_NAME.tar !unzip -q YOUR_DATASET_ARCHIVE_NAME.zip dataset_in = 'YOUR_DATASET_DIR_NAME' # You can leave the rest alone: if not os.path.isdir(dataset_in): raise Exception("dataset_in is not a valid directory") if dataset_is_split: IMAGES_ROOT = dataset_in else: dataset_out = 'split_' + dataset_in split_into_train_test(dataset_in, dataset_out, test_split=0.2) IMAGES_ROOT = dataset_out """ Explanation: If you want to train the pose classifier with your own labeled poses (they can be any poses, not just yoga poses), follow these steps: Set the above use_custom_dataset option to True. Prepare an archive file (ZIP, TAR, or other) that includes a folder with your images dataset. The folder must include sorted images of your poses as follows. If you've already split your dataset into train and test sets, then set dataset_is_split to True. That is, your images folder must include "train" and "test" directories like this: ``` yoga_poses/ |__ train/ |__ downdog/ |______ 00000128.jpg |______ ... |__ test/ |__ downdog/ |______ 00000181.jpg |______ ... ``` Or, if your dataset is NOT split yet, then set `dataset_is_split` to **False** and we'll split it up based on a specified split fraction. That is, your uploaded images folder should look like this: ``` yoga_poses/ |__ downdog/ |______ 00000128.jpg |______ 00000181.jpg |______ ... |__ goddess/ |______ 00000243.jpg |______ 00000306.jpg |______ ... ``` Click the Files tab on the left (folder icon) and then click Upload to session storage (file icon). Select your archive file and wait until it finishes uploading before you proceed. Edit the following code block to specify the name of your archive file and images directory. (By default, we expect a ZIP file, so you'll need to also modify that part if your archive is another format.) Now run the rest of the notebook. End of explanation """ if not is_skip_step_1 and not use_custom_dataset: !wget -O yoga_poses.zip http://download.tensorflow.org/data/pose_classification/yoga_poses.zip !unzip -q yoga_poses.zip -d yoga_cg IMAGES_ROOT = "yoga_cg" """ Explanation: Note: If you're using split_into_train_test() to split the dataset, it expects all images to be PNG, JPEG, or BMPโ€”it ignores other file types. 
Download the yoga dataset End of explanation """ if not is_skip_step_1: images_in_train_folder = os.path.join(IMAGES_ROOT, 'train') images_out_train_folder = 'poses_images_out_train' csvs_out_train_path = 'train_data.csv' preprocessor = MoveNetPreprocessor( images_in_folder=images_in_train_folder, images_out_folder=images_out_train_folder, csvs_out_path=csvs_out_train_path, ) preprocessor.process(per_pose_class_limit=None) """ Explanation: Preprocess the TRAIN dataset End of explanation """ if not is_skip_step_1: images_in_test_folder = os.path.join(IMAGES_ROOT, 'test') images_out_test_folder = 'poses_images_out_test' csvs_out_test_path = 'test_data.csv' preprocessor = MoveNetPreprocessor( images_in_folder=images_in_test_folder, images_out_folder=images_out_test_folder, csvs_out_path=csvs_out_test_path, ) preprocessor.process(per_pose_class_limit=None) """ Explanation: Preprocess the TEST dataset End of explanation """ # Download the preprocessed CSV files which are the same as the output of step 1 if is_skip_step_1: !wget -O train_data.csv http://download.tensorflow.org/data/pose_classification/yoga_train_data.csv !wget -O test_data.csv http://download.tensorflow.org/data/pose_classification/yoga_test_data.csv csvs_out_train_path = 'train_data.csv' csvs_out_test_path = 'test_data.csv' is_skipped_step_1 = True """ Explanation: Part 2: Train a pose classification model that takes the landmark coordinates as input, and output the predicted labels. You'll build a TensorFlow model that takes the landmark coordinates and predicts the pose class that the person in the input image performs. The model consists of two submodels: Submodel 1 calculates a pose embedding (a.k.a feature vector) from the detected landmark coordinates. Submodel 2 feeds pose embedding through several Dense layer to predict the pose class. You'll then train the model based on the dataset that were preprocessed in part 1. (Optional) Download the preprocessed dataset if you didn't run part 1 End of explanation """ def load_pose_landmarks(csv_path): """Loads a CSV created by MoveNetPreprocessor. Returns: X: Detected landmark coordinates and scores of shape (N, 17 * 3) y: Ground truth labels of shape (N, label_count) classes: The list of all class names found in the dataset dataframe: The CSV loaded as a Pandas dataframe features (X) and ground truth labels (y) to use later to train a pose classification model. """ # Load the CSV file dataframe = pd.read_csv(csv_path) df_to_process = dataframe.copy() # Drop the file_name columns as you don't need it during training. df_to_process.drop(columns=['file_name'], inplace=True) # Extract the list of class names classes = df_to_process.pop('class_name').unique() # Extract the labels y = df_to_process.pop('class_no') # Convert the input features and labels into the correct format for training. X = df_to_process.astype('float64') y = keras.utils.to_categorical(y) return X, y, classes, dataframe """ Explanation: Load the preprocessed CSVs into TRAIN and TEST datasets. End of explanation """ # Load the train data X, y, class_names, _ = load_pose_landmarks(csvs_out_train_path) # Split training data (X, y) into (X_train, y_train) and (X_val, y_val) X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.15) # Load the test data X_test, y_test, _, df_test = load_pose_landmarks(csvs_out_test_path) """ Explanation: Load and split the original TRAIN dataset into TRAIN (85% of the data) and VALIDATE (the remaining 15%). 
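Once the split cell below has run, a small sanity check (using the `y_train` and `y_val` variables it creates) can confirm that every class is represented in both pieces:

```python
import numpy as np

# Per-split class counts; y_train / y_val are one-hot, so argmax recovers the class index.
for name, labels in [('train', y_train), ('val', y_val)]:
    classes, counts = np.unique(np.argmax(labels, axis=1), return_counts=True)
    print(name, dict(zip(classes.tolist(), counts.tolist())))
```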
End of explanation """ def get_center_point(landmarks, left_bodypart, right_bodypart): """Calculates the center point of the two given landmarks.""" left = tf.gather(landmarks, left_bodypart.value, axis=1) right = tf.gather(landmarks, right_bodypart.value, axis=1) center = left * 0.5 + right * 0.5 return center def get_pose_size(landmarks, torso_size_multiplier=2.5): """Calculates pose size. It is the maximum of two values: * Torso size multiplied by `torso_size_multiplier` * Maximum distance from pose center to any pose landmark """ # Hips center hips_center = get_center_point(landmarks, BodyPart.LEFT_HIP, BodyPart.RIGHT_HIP) # Shoulders center shoulders_center = get_center_point(landmarks, BodyPart.LEFT_SHOULDER, BodyPart.RIGHT_SHOULDER) # Torso size as the minimum body size torso_size = tf.linalg.norm(shoulders_center - hips_center) # Pose center pose_center_new = get_center_point(landmarks, BodyPart.LEFT_HIP, BodyPart.RIGHT_HIP) pose_center_new = tf.expand_dims(pose_center_new, axis=1) # Broadcast the pose center to the same size as the landmark vector to # perform substraction pose_center_new = tf.broadcast_to(pose_center_new, [tf.size(landmarks) // (17*2), 17, 2]) # Dist to pose center d = tf.gather(landmarks - pose_center_new, 0, axis=0, name="dist_to_pose_center") # Max dist to pose center max_dist = tf.reduce_max(tf.linalg.norm(d, axis=0)) # Normalize scale pose_size = tf.maximum(torso_size * torso_size_multiplier, max_dist) return pose_size def normalize_pose_landmarks(landmarks): """Normalizes the landmarks translation by moving the pose center to (0,0) and scaling it to a constant pose size. """ # Move landmarks so that the pose center becomes (0,0) pose_center = get_center_point(landmarks, BodyPart.LEFT_HIP, BodyPart.RIGHT_HIP) pose_center = tf.expand_dims(pose_center, axis=1) # Broadcast the pose center to the same size as the landmark vector to perform # substraction pose_center = tf.broadcast_to(pose_center, [tf.size(landmarks) // (17*2), 17, 2]) landmarks = landmarks - pose_center # Scale the landmarks to a constant pose size pose_size = get_pose_size(landmarks) landmarks /= pose_size return landmarks def landmarks_to_embedding(landmarks_and_scores): """Converts the input landmarks into a pose embedding.""" # Reshape the flat input into a matrix with shape=(17, 3) reshaped_inputs = keras.layers.Reshape((17, 3))(landmarks_and_scores) # Normalize landmarks 2D landmarks = normalize_pose_landmarks(reshaped_inputs[:, :, :2]) # Flatten the normalized landmark coordinates into a vector embedding = keras.layers.Flatten()(landmarks) return embedding """ Explanation: Define functions to convert the pose landmarks to a pose embedding (a.k.a. feature vector) for pose classification Next, convert the landmark coordinates to a feature vector by: 1. Moving the pose center to the origin. 2. Scaling the pose so that the pose size becomes 1 3. Flattening these coordinates into a feature vector Then use this feature vector to train a neural-network based pose classifier. 
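As a quick sanity check (a minimal sketch, assuming the notebook's earlier imports and the BodyPart enum are loaded and TF 2 eager execution is active), the embedding maps one 51-value input (17 keypoints, each with two coordinates plus a score) to a 34-value feature vector (17 keypoints x 2 normalized coordinates):
embedding = landmarks_to_embedding(tf.random.uniform((1, 51)))
print(embedding.shape)  # expected: (1, 34)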
End of explanation """ # Define the model inputs = tf.keras.Input(shape=(51)) embedding = landmarks_to_embedding(inputs) layer = keras.layers.Dense(128, activation=tf.nn.relu6)(embedding) layer = keras.layers.Dropout(0.5)(layer) layer = keras.layers.Dense(64, activation=tf.nn.relu6)(layer) layer = keras.layers.Dropout(0.5)(layer) outputs = keras.layers.Dense(len(class_names), activation="softmax")(layer) model = keras.Model(inputs, outputs) model.summary() model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'] ) # Add a checkpoint callback to store the checkpoint that has the highest # validation accuracy. checkpoint_path = "weights.best.hdf5" checkpoint = keras.callbacks.ModelCheckpoint(checkpoint_path, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max') earlystopping = keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=20) # Start training history = model.fit(X_train, y_train, epochs=200, batch_size=16, validation_data=(X_val, y_val), callbacks=[checkpoint, earlystopping]) # Visualize the training history to see whether you're overfitting. plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('Model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['TRAIN', 'VAL'], loc='lower right') plt.show() # Evaluate the model using the TEST dataset loss, accuracy = model.evaluate(X_test, y_test) """ Explanation: Define a Keras model for pose classification Our Keras model takes the detected pose landmarks, then calculates the pose embedding and predicts the pose class. End of explanation """ def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """Plots the confusion matrix.""" if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=55) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.ylabel('True label') plt.xlabel('Predicted label') plt.tight_layout() # Classify pose in the TEST dataset using the trained model y_pred = model.predict(X_test) # Convert the prediction result to class name y_pred_label = [class_names[i] for i in np.argmax(y_pred, axis=1)] y_true_label = [class_names[i] for i in np.argmax(y_test, axis=1)] # Plot the confusion matrix cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(y_pred, axis=1)) plot_confusion_matrix(cm, class_names, title ='Confusion Matrix of Pose Classification Model') # Print the classification report print('\nClassification Report:\n', classification_report(y_true_label, y_pred_label)) """ Explanation: Draw the confusion matrix to better understand the model performance End of explanation """ if is_skip_step_1: raise RuntimeError('You must have run step 1 to run this cell.') # If step 1 was skipped, skip this step. 
IMAGE_PER_ROW = 3 MAX_NO_OF_IMAGE_TO_PLOT = 30 # Extract the list of incorrectly predicted poses false_predict = [id_in_df for id_in_df in range(len(y_test)) \ if y_pred_label[id_in_df] != y_true_label[id_in_df]] if len(false_predict) > MAX_NO_OF_IMAGE_TO_PLOT: false_predict = false_predict[:MAX_NO_OF_IMAGE_TO_PLOT] # Plot the incorrectly predicted images row_count = len(false_predict) // IMAGE_PER_ROW + 1 fig = plt.figure(figsize=(10 * IMAGE_PER_ROW, 10 * row_count)) for i, id_in_df in enumerate(false_predict): ax = fig.add_subplot(row_count, IMAGE_PER_ROW, i + 1) image_path = os.path.join(images_out_test_folder, df_test.iloc[id_in_df]['file_name']) image = cv2.imread(image_path) plt.title("Predict: %s; Actual: %s" % (y_pred_label[id_in_df], y_true_label[id_in_df])) plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) plt.show() """ Explanation: (Optional) Investigate incorrect predictions You can look at the poses from the TEST dataset that were incorrectly predicted to see whether the model accuracy can be improved. Note: This only works if you have run step 1 because you need the pose image files on your local machine to display them. End of explanation """ converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.optimizations = [tf.lite.Optimize.DEFAULT] tflite_model = converter.convert() print('Model size: %dKB' % (len(tflite_model) / 1024)) with open('pose_classifier.tflite', 'wb') as f: f.write(tflite_model) """ Explanation: Part 3: Convert the pose classification model to TensorFlow Lite You'll convert the Keras pose classification model to the TensorFlow Lite format so that you can deploy it to mobile apps, web browsers and edge devices. When converting the model, you'll apply dynamic range quantization to reduce the pose classification TensorFlow Lite model size by about 4 times with insignificant accuracy loss. Note: TensorFlow Lite supports multiple quantization schemes. See the documentation if you are interested to learn more. End of explanation """ with open('pose_labels.txt', 'w') as f: f.write('\n'.join(class_names)) """ Explanation: Then you'll write the label file which contains mapping from the class indexes to the human readable class names. End of explanation """ def evaluate_model(interpreter, X, y_true): """Evaluates the given TFLite model and return its accuracy.""" input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] # Run predictions on all given poses. y_pred = [] for i in range(len(y_true)): # Pre-processing: add batch dimension and convert to float32 to match with # the model's input data format. test_image = X[i: i + 1].astype('float32') interpreter.set_tensor(input_index, test_image) # Run inference. interpreter.invoke() # Post-processing: remove batch dimension and find the class with highest # probability. output = interpreter.tensor(output_index) predicted_label = np.argmax(output()[0]) y_pred.append(predicted_label) # Compare prediction results with ground truth labels to calculate accuracy. 
y_pred = keras.utils.to_categorical(y_pred) return accuracy_score(y_true, y_pred) # Evaluate the accuracy of the converted TFLite model classifier_interpreter = tf.lite.Interpreter(model_content=tflite_model) classifier_interpreter.allocate_tensors() print('Accuracy of TFLite model: %s' % evaluate_model(classifier_interpreter, X_test, y_test)) """ Explanation: As you've applied quantization to reduce the model size, let's evaluate the quantized TFLite model to check whether the accuracy drop is acceptable. End of explanation """ !zip pose_classifier.zip pose_labels.txt pose_classifier.tflite # Download the zip archive if running on Colab. try: from google.colab import files files.download('pose_classifier.zip') except: pass """ Explanation: Now you can download the TFLite model (pose_classifier.tflite) and the label file (pose_labels.txt) to classify custom poses. See the Android and Python/Raspberry Pi sample app for an end-to-end example of how to use the TFLite pose classification model. End of explanation """
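# A minimal single-sample inference sketch with the quantized model (illustrative only),
# reusing the interpreter, test features and class names created above.
input_index = classifier_interpreter.get_input_details()[0]["index"]
output_index = classifier_interpreter.get_output_details()[0]["index"]
sample = np.array(X_test[0:1], dtype=np.float32)  # one 51-value landmark vector
classifier_interpreter.set_tensor(input_index, sample)
classifier_interpreter.invoke()
scores = classifier_interpreter.tensor(output_index)()[0]
print('Predicted pose:', class_names[np.argmax(scores)])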
robertoalotufo/ia898
src/sat.ipynb
mit
def sat(f): return f.cumsum(axis=1).cumsum(axis=0) def satarea(sat,r0_c0,r1_c1): a,b,c,d = 0,0,0,0 r0,c0 = r0_c0 r1,c1 = r1_c1 if ((r0 - 1 >= 0) and (c0 - 1 >= 0)): a = sat[r0-1,c0-1] if (r0 - 1 >= 0): b = sat[r0-1,c1] if (c0 - 1 >= 0): c = sat[r1,c0-1] d = sat[r1,c1] return a + d - c - b """ Explanation: sat - Summed Area Table (integral image) Synopse The sat function is used to calculate from a given grayscale image, the summed area table (integral image). g = iasat(f) Output g: ndarray with the summed area table. Input f: ndarray with a grayscale image. Description The Integral Image is used as a quick and effective way of calculating the sum of values (pixel values) in a given image or a rectangular subset of a grid (the given image). It can also, or is mainly, used for calculating the average intensity within a given image. End of explanation """ testing = (__name__ == "__main__") if testing: ! jupyter nbconvert --to python sat.ipynb %matplotlib inline import numpy as np import sys,os ia898path = os.path.abspath('../../') if ia898path not in sys.path: sys.path.append(ia898path) import ia898.src as ia import matplotlib.image as mpimg """ Explanation: Examples End of explanation """ if testing: f = np.array([[0,1,1,0,0,0,0,0,0], [1,0,0,0,0,0,0,1,0], [1,0,0,1,0,0,0,1,0], [0,0,0,0,0,1,1,0,0]], dtype=np.uint8) s = ia.sat(f) print('f (input):\n',f) print('s (output):\n',s) a = ia.satarea(s,(0,0),(3,8)) print('area:',a) """ Explanation: Numerical example: End of explanation """ if testing: f = mpimg.imread('../data/lenina.pgm')[::2,::2] nb = ia.nbshow(2) nb.nbshow(f, 'Original Image') nb.nbshow(ia.normalize(ia.sat(f)), 'Integral Image') nb.nbshow() """ Explanation: Image example End of explanation """ if testing: f = mpimg.imread('../data/lenina.pgm')[::2,::2] H,W = f.shape s = ia.sat(f) a0 = ia.satarea(s,(0,0),(H-1,W-1)) atopleft = ia.satarea(s,( 0,0 ),(H//2-1,W//2-1)) abotleft = ia.satarea(s,(H//2,0 ),(H-1, W//2-1)) atopright = ia.satarea(s,( 0,W//2),(H//2-1,W-1)) abotright = ia.satarea(s,(H//2,W//2),(H-1, W-1)) print('Area Total: ', a0) print('Area Top-left: ', atopleft) print('Area Bot-left: ', abotleft) print('Area Top-right: ', atopright) print('Area Bot-right: ', abotright) print('Area Total:', atopleft+abotleft+atopright+abotright) """ Explanation: Calculating a rectangle area with SAT (Summed Area Table) End of explanation """
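# Cross-check sketch: for a random array, the summed-area-table lookup should match
# a direct slice sum over the same (inclusive) rectangle.
if testing:
    g = np.random.randint(0, 255, (10, 12))
    sg = ia.sat(g)
    print(ia.satarea(sg, (2, 3), (7, 9)) == g[2:8, 3:10].sum())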
awhite40/pymks
notebooks/intro.ipynb
mit
%matplotlib inline %load_ext autoreload %autoreload 2 import numpy as np import matplotlib.pyplot as plt """ Explanation: Meet PyMKS In this short introduction, we will demonstrate the functionality of PyMKS to compute 2-point statistics in order to objectively quantify microstructures, predict effective properties using homogenization and predict local properties using localization. If you would like more technical details abount any of these methods please see the theory section. End of explanation """ from pymks.datasets import make_microstructure X_1 = make_microstructure(n_samples=1, grain_size=(25, 25)) X_2 = make_microstructure(n_samples=1, grain_size=(15, 95)) X = np.concatenate((X_1, X_2)) """ Explanation: Quantify Microstructures using 2-Point Statistics Lets make two dual-phase microstructures with different morphologies. End of explanation """ from pymks.tools import draw_microstructures draw_microstructures(X) """ Explanation: Throughout PyMKS X is used to represent microstructures. Now that we have made the two microstructures, lets take a look at them. End of explanation """ from pymks import PrimitiveBasis from pymks.stats import correlate prim_basis = PrimitiveBasis(n_states=2, domain=[0, 1]) X_ = prim_basis.discretize(X) X_corr = correlate(X_, periodic_axes=[0, 1]) """ Explanation: We can compute the 2-point statistics for these two periodic microstructures using the correlate function from pymks.stats. This function computes all of the autocorrelations and cross-correlation(s) for a microstructure. Before we compute the 2-point statistics, we will discretize them using the PrimitiveBasis function. End of explanation """ from pymks.tools import draw_correlations print X_corr[0].shape draw_correlations(X_corr[0]) draw_correlations(X_corr[1]) """ Explanation: Let's take a look at the two autocorrelations and the cross-correlation for these two microstructures. End of explanation """ from pymks.datasets import make_elastic_stress_random grain_size = [(47, 6), (4, 49), (14, 14)] n_samples = [200, 200, 200] X_train, y_train = make_elastic_stress_random(n_samples=n_samples, size=(51, 51), grain_size=grain_size, seed=0) """ Explanation: 2-Point statistics provide an object way to compare microstructures, and have been shown as an effective input to machine learning methods. Predict Homogenized Properties In this section of the intro, we are going to predict the effective stiffness for two-phase microstructures using the MKSHomogenizationModel, but we could have chosen any other effective material property. First we need to make some microstructures and their effective stress values to fit our model. Let's create 200 random instances 3 different types of microstructures, totaling to 600 microstructures. End of explanation """ draw_microstructures(X_train[::200]) """ Explanation: Once again, X_train is our microstructures. Throughout PyMKS y is used as either the property, or the field we would like to predict. In this case y_train is the effective stress values for X_train. Let's look at one of each of the three different types of microstructures. End of explanation """ from pymks import MKSHomogenizationModel prim_basis = PrimitiveBasis(n_states=2, domain=[0, 1]) homogenize_model = MKSHomogenizationModel(basis=prim_basis, correlations=[(0, 0), (1, 1), (0, 1)]) """ Explanation: The MKSHomogenizationModel uses 2-point statistics, so we need to provide a discretization method for the microstructures by providing a basis function. We will also specify which correlations we want. 
End of explanation """ homogenize_model.fit(X_train, y_train, periodic_axes=[0, 1]) """ Explanation: Let's fit our model with the data we created. End of explanation """ n_samples = [10, 10, 10] X_test, y_test = make_elastic_stress_random(n_samples=n_samples, size=(51, 51), grain_size=grain_size, seed=100) """ Explanation: Now let's make some new data to see how good our model is. End of explanation """ y_pred = homogenize_model.predict(X_test, periodic_axes=[0, 1]) """ Explanation: We will try and predict the effective stress of our X_test microstructures. End of explanation """ from pymks.tools import draw_components draw_components([homogenize_model.reduced_fit_data, homogenize_model.reduced_predict_data], ['Training Data', 'Test Data']) """ Explanation: The MKSHomogenizationModel generates low dimensional representations of microstructures and regression methods to predict effective properties. Take a look at the low-dimensional representations. End of explanation """ from pymks.tools import draw_goodness_of_fit fit_data = np.array([y_train, homogenize_model.predict(X_train, periodic_axes=[0, 1])]) pred_data = np.array([y_test, y_pred]) draw_goodness_of_fit(fit_data, pred_data, ['Training Data', 'Test Data']) """ Explanation: Now let's look at a goodness of fit plot for our MKSHomogenizationModel. End of explanation """ from pymks.datasets import make_elastic_FE_strain_delta X_delta, y_delta = make_elastic_FE_strain_delta() """ Explanation: Looks good. The MKSHomogenizationModel can be used to predict effective properties and processing-structure evolutions. Predict Local Properties In this section of the intro, we are going to predict the local strain field in a microstructure using MKSLocalizationModel, but we could have predicted another local property. First we need some data, so let's make some. End of explanation """ from pymks import MKSLocalizationModel prim_basis = PrimitiveBasis(n_states=2) localize_model = MKSLocalizationModel(basis=prim_basis) """ Explanation: Once again, X_delta is our microstructures and y_delta is our local strain fields. We need to discretize the microstructure again, so we will also use the same basis function. End of explanation """ localize_model.fit(X_delta, y_delta) """ Explanation: Let's use the data to fit our MKSLocalizationModel. End of explanation """ from pymks.datasets import make_elastic_FE_strain_random X_test, y_test = make_elastic_FE_strain_random() """ Explanation: Now that we have fit our model, we will create a random microstructure and compute its local strain field, using finite element analysis. We will then try and reproduce the same strain field with our model. End of explanation """ from pymks.tools import draw_microstructure_strain draw_microstructure_strain(X_test[0], y_test[0]) """ Explanation: Let's look at the microstructure and its local strain field. End of explanation """ from pymks.tools import draw_strains_compare y_pred = localize_model.predict(X_test) draw_strains_compare(y_test[0], y_pred[0]) """ Explanation: Now let's pass that same microstructure to our MKSLocalizationModel and compare the predicted and computed local strain field. End of explanation """
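# A small follow-up sketch: quantify the visual comparison above with a single number,
# e.g. the normalized mean absolute error between the FE strain field and the MKS
# prediction (the metric choice here is illustrative).
nmae = np.mean(np.abs(y_test[0] - y_pred[0])) / np.mean(np.abs(y_test[0]))
print('Normalized mean absolute error:', nmae)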
msampathkumar/kaggle-quora-tensorflow
references/sentiment-rnn/Sentiment RNN.ipynb
apache-2.0
import numpy as np
import tensorflow as tf

with open('../sentiment_network/reviews.txt', 'r') as f:
    reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
    labels = f.read()

reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function. We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
"""

from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')

all_text = ' '.join(reviews)
words = all_text.split()

all_text[:2000]

words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words. 
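For instance, the same punctuation filter applied to a toy string
''.join([c for c in "this movie was great! loved it." if c not in punctuation])
returns 'this movie was great loved it'.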
End of explanation """ reviews[:2] from collections import Counter words_dummy = ['qwe','ert','yui', 'fgh', 'dfg', 'kjg','fgh', 'dfg', 'kjg'] counts_dummy = Counter(words_dummy) print(counts_dummy) v = enumerate(counts_dummy,1) print(list(v)) print(counts_dummy.get('qwe')) vocab_dummy = sorted(counts_dummy, key=counts_dummy.get, reverse=True) vocab_to_int_dummy = {word: ii for ii, word in enumerate(vocab_dummy, 1)} print(vocab_dummy) print(vocab_to_int_dummy) counts = Counter(words) vocab = sorted(counts, key=counts.get, reverse=True) vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)} reviews_ints = [] for each in reviews: reviews_ints.append([vocab_to_int[word] for word in each.split()]) labels = labels.split('\n') labels = np.array([1 if each == 'positive' else 0 for each in labels]) """ Explanation: Encoding the words The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network. Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0. Also, convert the reviews to integers and store the reviews in a new list called reviews_ints. End of explanation """ review_lens = Counter([len(x) for x in reviews_ints]) print("Zero-length reviews: {}".format(review_lens[0])) print("Maximum review length: {}".format(max(review_lens))) print(max(review_lens)) print(min(review_lens)) # max? """ Explanation: Encoding the labels Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1. Exercise: Convert labels from positive and negative to 1 and 0, respectively. End of explanation """ # Filter out that review with 0 length reviews_ints = [each for each in reviews_ints if len(each) > 0] """ Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters. Exercise: First, remove the review with zero length from the reviews_ints list. End of explanation """ seq_len = 200 features = np.zeros((len(reviews), seq_len), dtype=int) for i, row in enumerate(reviews_ints): features[i, -len(row):] = np.array(row)[:seq_len] features[:10,:100] """ Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector. This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data. 
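As a minimal illustration of the same left-padding idea on a toy review:
row = [117, 18, 128]
padded = np.zeros(5, dtype=int)
padded[-len(row):] = np.array(row)[:5]
padded now holds [0, 0, 117, 18, 128].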
End of explanation """ split_frac = 0.8 split_idx = int(len(features)*0.8) train_x, val_x = features[:split_idx], features[split_idx:] train_y, val_y = labels[:split_idx], labels[split_idx:] test_idx = int(len(val_x)*0.5) val_x, test_x = val_x[:test_idx], val_x[test_idx:] val_y, test_y = val_y[:test_idx], val_y[test_idx:] print("\t\t\tFeature Shapes:") print("Train set: \t\t{}".format(train_x.shape), "\nValidation set: \t{}".format(val_x.shape), "\nTest set: \t\t{}".format(test_x.shape)) """ Explanation: Training, Validation, Test With our data in nice shape, we'll split it into training, validation, and test sets. Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data. End of explanation """ lstm_size = 256 lstm_layers = 1 batch_size = 250 learning_rate = 0.001 """ Explanation: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like: Feature Shapes: Train set: (20000, 200) Validation set: (2500, 200) Test set: (2501, 200) Build the graph Here, we'll build the graph. First up, defining the hyperparameters. lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc. lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting. batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory. learning_rate: Learning rate End of explanation """ n_words = len(vocab) # Create the graph object graph = tf.Graph() # Add nodes to the graph with graph.as_default(): inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs') labels_ = tf.placeholder(tf.int32, [None, None], name='labels') keep_prob = tf.placeholder(tf.float32, name='keep_prob') """ Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability. Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder. End of explanation """ # Size of the embedding vectors (number of units in the embedding layer) embed_size = 300 with graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs_) """ Explanation: Embedding Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights. Exercise: Create the embedding lookup matrix as a tf.Variable. 
Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer as 200 units, the function will return a tensor with size [batch_size, 200]. End of explanation """ with graph.as_default(): # Your basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) # Getting an initial state of all zeros initial_state = cell.zero_state(batch_size, tf.float32) """ Explanation: LSTM cell <img src="assets/network_diagram.png" width=400px> Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph. To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation: tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;) you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like lstm = tf.contrib.rnn.BasicLSTMCell(num_units) to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) Most of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell: cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list. So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell. Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell. Here is a tutorial on building RNNs that will help you out. End of explanation """ with graph.as_default(): outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state) """ Explanation: RNN forward pass <img src="assets/network_diagram.png" width=400px> Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network. 
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state) Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer. Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed. End of explanation """ with graph.as_default(): predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid) cost = tf.losses.mean_squared_error(labels_, predictions) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) """ Explanation: Output We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], the calculate the cost from that and labels_. End of explanation """ with graph.as_default(): correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) """ Explanation: Validation accuracy Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass. End of explanation """ def get_batches(x, y, batch_size=100): n_batches = len(x)//batch_size x, y = x[:n_batches*batch_size], y[:n_batches*batch_size] for ii in range(0, len(x), batch_size): yield x[ii:ii+batch_size], y[ii:ii+batch_size] """ Explanation: Batching This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size]. End of explanation """ epochs = 10 with graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=graph) as sess: sess.run(tf.global_variables_initializer()) iteration = 1 for e in range(epochs): state = sess.run(initial_state) for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 0.5, initial_state: state} loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed) if iteration%5==0: print("Epoch: {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Train loss: {:.3f}".format(loss)) if iteration%25==0: val_acc = [] val_state = sess.run(cell.zero_state(batch_size, tf.float32)) for x, y in get_batches(val_x, val_y, batch_size): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: val_state} batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed) val_acc.append(batch_acc) print("Val acc: {:.3f}".format(np.mean(val_acc))) iteration +=1 saver.save(sess, "checkpoints/sentiment.ckpt") """ Explanation: Training Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists. 
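For example, you can create it from the notebook before starting the loop:
import os
os.makedirs('checkpoints', exist_ok=True)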
End of explanation """ test_acc = [] with tf.Session(graph=graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) test_state = sess.run(cell.zero_state(batch_size, tf.float32)) for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: test_state} batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed) test_acc.append(batch_acc) print("Test accuracy: {:.3f}".format(np.mean(test_acc))) """ Explanation: Testing End of explanation """
bregmanstudio/SoundscapeEcology
SoundscapeComponentAnalysis.ipynb
mit
from pylab import * # numpy, matplotlib, plt from bregman.suite import * # Bregman audio feature extraction library from soundscapeecology import * # 2D time-frequency shift-invariant convolutive matrix factorization %matplotlib inline rcParams['figure.figsize'] = (15.0, 9.0) """ Explanation: <h1>Soundscape Analysis by Shift-Invariant Latent Components</h1> <h2>Michael Casey - Bregman Labs, Dartmouth College</h2> A toolkit for matrix factorization of soundscape spectrograms into independent streams of sound objects, possibly representing individual species or independent group behaviours. The method employs shift-invariant probabilistic latent component analysis (SIPLCA) for factoring a time-frequency matrix (2D array) into a convolution of 2D kernels (patches) with sparse activation functions. Methods are based on the following: Smaragdis, P, B. Raj, and M.V. Shashanka, 2008. Sparse and shift-invariant feature extraction from non-negative data. In proceedings IEEE International Conference on Audio and Speech Signal Processing, Las Vegas, Nevada, USA. Smaragdis, P. and Raj, B. 2007. Shift-Invariant Probabilistic Latent Component Analysis, tech report, MERL technical report, Camrbidge, MA. A. C. Eldridge, M. Casey, P. Moscoso, and M. Peck (2015) A New Method for Ecoacoustics? Toward the Extraction and Evaluation of Ecologically-Meaningful Sound Objects using Sparse Coding Methods. PeerJ PrePrints, 3(e1855) 1407v2 [In Review] End of explanation """ sound_path = 'sounds' sounds = os.listdir(sound_path) print "sounds:", sounds """ Explanation: <h2>Example audio</h2> Load an example audio file from the 'sounds' directory, 44.1kHz, stereo, 60 seconds duration. End of explanation """ N=4096; H=N/4 x,sr,fmt = wavread(os.path.join(sound_path,sounds[0])) print "sample_rate:", sr, "(Hz), fft size:", (1000*N)/sr, "(ms), hop size:", (1000*H)/sr, "(ms)" """ Explanation: <h2>Spectrum Analysis Parameters</h2> Inspect the soundfile by loading it (using wavread) and printing some useful parameters. A window size of 4096 with a hop of 1024 translates to 92ms and 23ms respectively for an audio samplerate of 44100Hz End of explanation """ # 1. Instantiate a new SoundscapeEcololgy object using the spectral analysis parameters defined above S = SoundscapeEcology(nfft=N, wfft=N/2, nhop=H) # Inspect the contents of this object print S.__dict__ # 2. load_audio() - sample segments of the soundfile without replacement, to speed up analysis # The computational complexity of the analysis is high, and the information in a soundscape is largely redundant # So, draw 25 random segments in time order, each consisting of 20 STFT frames (~500ms) of audio data S.load_audio(os.path.join(sound_path,sounds[0]), num_samples=25, frames_per_sample=20) # num_samples=None means analyze the whole sound file # 3. analyze() into shift-invariant kernels # The STFT spectrum will be converted to a constant-Q transform by averaging over logarithmically spaced bins # The shift-invariant kernels will have shift and time-extent dimensions # The default kernel shape yields 1-octave of shift (self.feature_params['nbpo']), # and its duration is frames_per_sample. Here, the num_components and win parameters are illustrated. S.analyze(num_components=7, win=(S.feature_params['nbpo'], S.frames_per_sample)) # 4. 
visualize() - visualize the spectrum reconstruction and the individual components # inputs: # plotXi - visualize individual reconstructed component spectra [True] # plotX - visualize original (pre-analysis) spectrum and reconstruction [False] # plotW - visualize component time-frequency kernels [False] # plotH - visualize component shift-time activation functions [False] # **pargs - plotting key word arguments [**self.plt_args] S.visualize(plotX=True, plotXi=True, plotW=True, plotH=True) # 5. resynthesize() - sonify the results # First, listen to the original (inverse STFT) and the full component reconstruction (inverse CQFT with random phases) x_orig = S.F.inverse(S.X) x_recon = S.F.inverse(S.X_hat, Phi_hat=(np.random.rand(*S.F.STFT.shape)*2-1)*np.pi) # random phase reconstruction play(balance_signal(x_orig)) play(balance_signal(x_recon)) # First, listen to the original (inverse CQFT with original phases in STFT reconstruction) # and the all-components reconstruction (inverse CQFT with random phases) # Second, listen to the individual component reconstructions # Use the notebook's "interrupt kernel" button (stop button) if this is too long (n_comps x audio sequence) # See above plots for the individual component spectrograms for k in range(S.n_components): x_hat = S.resynthesize(k) # resynthesize individual component play(balance_signal(x_hat)) # play it back """ Explanation: <h2>SoundscapeEcology Toolkit</h2> Analyze species-specific patterns in environmental recordings SoundscapeEcology methods: load_audio() - load sample of a soundscape recording sample_audio_dir() - load group sample from multiple recordings analyze() - extract per-species?? time-frequency partitioning from loaded audio visualize() - show component spectrograms resynthesize() - reconstruct audio for component spectrograms to sonify model_fit_resynhesize() - generative statistical model of time-shift kernels summarize() - show soundscape ecology entropy statistics SoundscapeEcology static methods: batch_analyze() - multiple analyses for a list of recordings entropy() - compute entropy (in nats) of an acoustic feature distribution gen_test_data() - generate an artificial soundscape for testing Workflows: [load_audio(), sample_audio_dir()] -&gt; analyze() -&gt; [visualize(), resynthesize(), summarize()] In the following example we will: 1. Instantiate a new SoundscapeEcololgy object 2. load_audio() and sample segments of it without replacement 3. analyze() extract Constant-Q Frequency Transform (CQFT) and extract shift-invariant kernels 4. visualize() - reconstruct individual component features (CQFT) and make subplots 5. resynthesize() - invert individual feature reconstructions back to audio for sonifying End of explanation """
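# Optional sketch: a coarse relative reconstruction error between the original CQFT (S.X)
# and the all-components reconstruction (S.X_hat) computed above. This assumes both are
# equally-shaped arrays, as used by the resynthesis step.
err = sqrt(((S.X - S.X_hat)**2).sum() / (S.X**2).sum())
print("relative reconstruction error: %0.3f" % err)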
kit-cel/wt
ccgbc/ch2_Codes_Basic_Concepts/BEC_FiniteLength_Upper_Lower_Bounds.ipynb
gpl-2.0
import numpy as np import matplotlib import matplotlib.pyplot as plt # plotting options font = {'size' : 20} plt.rc('font', **font) plt.rc('text', usetex=matplotlib.checkdep_usetex(True)) matplotlib.rc('figure', figsize=(18, 6) ) """ Explanation: Finite-Length Performance on the BEC Channel This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods. This code illustrates * Calculating an upper and lower bound on the error rate of codes on the BEC End of explanation """ # capacity of the BSC def C_BEC(epsilon): return 1 - epsilon """ Explanation: Binary Erasure Channel (BEC) For the BEC, we have the capacity \begin{equation} C_\text{BEC} = 1 - \epsilon \end{equation} End of explanation """ from scipy.special import comb def get_Pe_RCU_BEC(n, r, epsilon): return np.sum([comb(n,t,exact=True) * (epsilon**t) * ((1-epsilon)**(n-t)) * min(1, 2**(-n*(1-r)+t)) for t in range(n+1)]) """ Explanation: Random Coding Union Bound for the BEC We now additionally show the Random Coding Union (RCU) bound [2, Th. 16] for the BEC, as it is a fairly easy to calculate the bound in this case. The RCU bound is not part of the lecture and shown here for completeness. To get the RCU bound, we assume that we perform ML decoding of the random code with $\boldsymbol{x}^{[1]}$ transmitted. We assume that the channel introduces a total number of $t$ erasures. At the non-erased positions, the bits have been received correctly. Then let $E_m$ denote the event that codeword $\boldsymbol{x}^{[m]}$ has the same code bits at the non-erased positions as $\boldsymbol{x}^{[1]}$. In this case, the decoder cannot make a decision which codeword to select (they have the same likelihood). It can resolve this tie by randomly selecting a codeword, which may produce a decoding error. Hence, the error probability can be bounded as \begin{align} P(\text{decoding error} | \boldsymbol{Y}, t\text{ erasures}) &\leq P\left(\bigcup_{m=2}^M E_m | \boldsymbol{Y}, t\text{ erasures}\right) \ &\stackrel{(a)}{\leq} \sum_{m=2}^M P\left(E_m | \boldsymbol{Y}, t\text{ erasures}\right) \ &= (M-1)\cdot P\left(E_2 | \boldsymbol{Y}, t\text{ erasures}\right) \ &\leq M\cdot P\left(E_2 | \boldsymbol{Y}, t\text{ erasures}\right) \ &\stackrel{(b)}{=} M\left(\frac{1}{2}\right)^{n-t} \ &= 2^{-n(1-r)+t} \end{align} where $(a)$ is the union bound and $(b)$ is due to the fact that the probability of choosing $n-t$ positions that are identical to $\boldsymbol{x}^{[1]}$ in these positions is $(\frac12)^{n-t} = 2^{t-n}$. The main trick of [2] is now to observe that the union bound can be often loose and $2^{-n(1-r)+t}$ can become larger than 1. Hence, [2] introduced the tighter bound \begin{equation} P(\text{decoding error} | \boldsymbol{Y}, t\text{ erasures}) \leq \min\left(1, 2^{-n(1-r)+t}\right) \end{equation} The total probability of error is then obtained by noticing that the erasures in the BEC follow a binomial distribution, and we we can state that \begin{align} P_e &= \sum_{t=0}^n\binom{n}{t}\epsilon^t(1-\epsilon)^{n-t}P(\text{decoding error} | \boldsymbol{Y}, t\text{ erasures}) \ &\leq \sum_{t=0}^n\binom{n}{t}\epsilon^t(1-\epsilon)^{n-t} \min\left(1, 2^{-n(1-r)+t}\right) \end{align} The bound states that for the BEC with erasure probability $\epsilon$, there exists a code (the random code) with $M$ codewords of length $n$ (and rate $r = \frac{\log_2(M)}{n}$) that has an error probability upper bounded by the above bound under ML decoding. [2] Y. Polyanskiy, H. V. Poor and S. 
Verdú, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307-2359, May 2010
End of explanation
"""

def get_Pe_Singleton_BEC(n, r, epsilon):
    return 1.0 - np.sum([comb(n,t,exact=True) * (epsilon**t) * ((1-epsilon)**(n-t)) for t in range(int(np.ceil(n*(1-r)))+1)])
    #return np.sum([comb(n,t,exact=True) * (epsilon**t) * ((1-epsilon)**(n-t)) for t in range(int(np.ceil(n*(1-r)+1)),n+1)])

epsilon_range = np.linspace(0.2,0.6,100)

Pe_RCU_BEC_r12_n100 = [get_Pe_RCU_BEC(100, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n250 = [get_Pe_RCU_BEC(250, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n500 = [get_Pe_RCU_BEC(500, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n1000 = [get_Pe_RCU_BEC(1000, 0.5, epsilon) for epsilon in epsilon_range]

Pe_Singleton_BEC_r12_n100 = [get_Pe_Singleton_BEC(100, 0.5, epsilon) for epsilon in epsilon_range]
Pe_Singleton_BEC_r12_n250 = [get_Pe_Singleton_BEC(250, 0.5, epsilon) for epsilon in epsilon_range]
Pe_Singleton_BEC_r12_n500 = [get_Pe_Singleton_BEC(500, 0.5, epsilon) for epsilon in epsilon_range]
Pe_Singleton_BEC_r12_n1000 = [get_Pe_Singleton_BEC(1000, 0.5, epsilon) for epsilon in epsilon_range]

fig = plt.figure(1,figsize=(12,7))
plt.semilogy(epsilon_range, Pe_Singleton_BEC_r12_n100)
plt.semilogy(epsilon_range, Pe_Singleton_BEC_r12_n250)
plt.semilogy(epsilon_range, Pe_Singleton_BEC_r12_n500)
plt.semilogy(epsilon_range, Pe_Singleton_BEC_r12_n1000)
plt.axvline(x=0.5, color='k')
plt.gca().set_prop_cycle(None)
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n100, '--')
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n250, '--')
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n500, '--')
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n1000, '--')
plt.axvspan(0.5, 0.55, alpha=0.5, color='gray')
plt.axvline(x=0.5, color='k')
plt.ylim((1e-8,1))
plt.xlim((0.2,0.55))
plt.xlabel('BEC erasure probability $\epsilon$', fontsize=16)
plt.ylabel('$P_e$', fontsize=16)
plt.legend(['$n = 100$', '$n=250$','$n=500$', '$n=1000$', 'C'], fontsize=16)
plt.text(0.5, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
plt.savefig('BEC_Singleton_RCU_R12.pdf',bbox_inches='tight')
"""
Explanation: Singleton Bound for the BEC
In order to get a lower bound on the error rate (i.e., an upper bound on the achievable decoding performance), we ask what the error rate of the best possible code would be. The number of erasures that a code can correct is $d_{\min}-1$. We know from the Singleton bound that $d_{\min} \leq n-k+1$, i.e., the best possible code can correct at most $n-k = n(1-r)$ erasures. Hence, the probability of error of any code is always at least as high as the probability of error of this best code. We have
$$
P_e \geq \sum_{t=n(1-r)+1}^n\binom{n}{t}\epsilon^t(1-\epsilon)^{n-t} = 1-\sum_{t=0}^{n(1-r)}\binom{n}{t}\epsilon^t(1-\epsilon)^{n-t}
$$
End of explanation
"""
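# Quick consistency check (sketch): the Singleton-based lower bound on P_e can never
# exceed the RCU achievability (upper) bound, for any blocklength, rate and epsilon.
for eps in [0.3, 0.4, 0.45]:
    assert get_Pe_Singleton_BEC(100, 0.5, eps) <= get_Pe_RCU_BEC(100, 0.5, eps)
print('Singleton lower bound <= RCU upper bound for the tested points')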
NathanYee/ThinkBayes2
bayesianLinearRegression/nathanTest.ipynb
gpl-2.0
from __future__ import print_function, division % matplotlib inline import warnings warnings.filterwarnings('ignore') import math import numpy as np from thinkbayes2 import Pmf, Cdf, Suite, Joint, EvalNormalPdf import thinkplot import pandas as pd import matplotlib.pyplot as plt """ Explanation: Bayesian Linear Regression - A study of child height in Kalama, Egypt Computational bayes final project. Nathan Yee Uma Desai Description: Mean heights of a group of children in Kalama, an Egyptian village that is the site of a study of nutrition in developing countries. The data were obtained by measuring the heights of all 161 children in the village each month over several years. First example to gain understanding is taken from Cypress Frankenfeld. http://allendowney.blogspot.com/2015/04/two-hour-marathon-by-2041-probably.html End of explanation """ df = pd.read_csv('ageVsHeight.csv', skiprows=0, delimiter='\t') df """ Explanation: Load data from csv file End of explanation """ ages = np.array(df['age']) heights = np.array(df['height']) """ Explanation: Create x and y vectors. x is the ages, y is the heights End of explanation """ def leastSquares(x, y): """ leastSquares takes in two arrays of values. Then it returns the slope and intercept of the least squares of the two. Args: x (numpy array): numpy array of values. y (numpy array): numpy array of values. Returns: slope, intercept (tuple): returns a tuple of floats. """ A = np.vstack([x, np.ones(len(x))]).T slope, intercept = np.linalg.lstsq(A, y)[0] return slope, intercept """ Explanation: Abstract least squares function using a function End of explanation """ slope, intercept = leastSquares(ages, heights) print(slope, intercept) alpha_range = .005 * intercept beta_range = .005 * slope """ Explanation: Use the leastSquares function to get a slope and intercept. Then use the slope and intercept to calculate the size of our alpha and beta ranges End of explanation """ plt.plot(ages, heights, 'o', label='Original data', markersize=10) plt.plot(ages, slope*ages + intercept, 'r', label='Fitted line') plt.legend() plt.show() """ Explanation: Visualize the slope and intercept on the same plot as the data so make sure it is working correctly End of explanation """ alphas = np.linspace(intercept - alpha_range, intercept + alpha_range, 10) betas = np.linspace(slope - beta_range, slope + beta_range, 10) sigmas = np.linspace(2, 4, 10) # alphas = np.linspace(intercept * (1 - alpha_range), # intercept * (1 + alpha_range), # 5) # betas = np.linspace(slope * (1 - beta_range), # slope * (1 + beta_range), # 5) # sigmas = np.linspace(.1, .2, 5) """ Explanation: Make range of alphas (intercepts), betas (slopes), and sigmas (errors) End of explanation """ hypos = ((alpha, beta, sigma) for alpha in alphas for beta in betas for sigma in sigmas) """ Explanation: Turn those alphas, betas, and sigmas into our hypotheses End of explanation """ data = [(age, height) for age in ages for height in heights] """ Explanation: Make data End of explanation """ class leastSquaresHypos(Suite): def Likelihood(self, data, hypo): """ Likelihood calculates the probability of a particular line (hypo) based on data (ages Vs height) of our original dataset. This is done with a normal pmf as each hypo also contains a sigma. 
Args: data (tuple): tuple that contains ages (float), heights (float) hypo (tuple): intercept (float), slope (float), sigma (float) Returns: P(data|hypo) """ intercept, slope, sigma = hypo total_likelihood = 1 for age, measured_height in data: hypothesized_height = slope * age + intercept error = measured_height - hypothesized_height total_likelihood *= EvalNormalPdf(error, mu=0, sigma=sigma) return total_likelihood LeastSquaresHypos = leastSquaresHypos(hypos) for item in data: LeastSquaresHypos.Update([item]) LeastSquaresHypos[LeastSquaresHypos.MaximumLikelihood()] def getHeights(hypo_samples, random_months): random_heights = np.zeros(len(random_months)) for i in range(len(random_heights)): intercept = hypo_samples[i][0] slope = hypo_samples[i][1] sigma = hypo_samples[i][2] month = random_months[i] random_heights[i] = np.random.normal((slope * month + intercept), sigma, 1) return random_heights def getRandomData(start_month, end_month, n, LeastSquaresHypos): """ n - number of samples """ random_hypos = LeastSquaresHypos.Sample(n) random_months = np.random.uniform(start_month, end_month, n) random_heights = getHeights(random_hypos, random_months) return random_months, random_heights """ Explanation: Next make age class where likelihood is calculated based on error from data End of explanation """ num_samples = 10000 random_months, random_heights = getRandomData(18, 40, num_samples, LeastSquaresHypos) """ Explanation: Get random samples of pairs of months and heights. Here we want at least 10000 items to get very smooth sampling End of explanation """ num_buckets = 70 # num_buckets^2 is actual number # create horizontal and vertical linearly spaced ranges as buckets. hori_range, hori_step = np.linspace(18, 40 , num_buckets, retstep=True) vert_range, vert_step = np.linspace(65, 100, num_buckets, retstep=True) hori_step = hori_step / 2 vert_step = vert_step / 2 # store each bucket as a tuple in a the buckets dictionary. buckets = dict() keys = [(hori, vert) for hori in hori_range for vert in vert_range] # set each bucket as empty for key in keys: buckets[key] = 0 # loop through the randomly sampled data for month, height in zip(random_months, random_heights): # check each bucket and see if randomly sampled data for key in buckets: if month > key[0] - hori_step and month < key[0] + hori_step: if height > key[1] - vert_step and height < key[1] + vert_step: buckets[key] += 1 break # can only fit in a single bucket pcolor_months = [] pcolor_heights = [] pcolor_intensities = [] for key in buckets: pcolor_months.append(key[0]) pcolor_heights.append(key[1]) pcolor_intensities.append(buckets[key]) print(len(pcolor_months), len(pcolor_heights), len(pcolor_intensities)) plt.plot(random_months, random_heights, 'o', label='Random Sampling') plt.plot(ages, heights, 'o', label='Original data', markersize=10) plt.plot(ages, slope*ages + intercept, 'r', label='Fitted line') # plt.legend() plt.show() """ Explanation: Next, we want to get the intensity of the data at locations. We do adding the randomly sampled values to buckets. This gives us intensity values for a grid of pixels in our sample range. End of explanation """ def append_to_file(path, data): """ append_to_file appends a line of data to specified file. 
Then appends a newline character.

    Args:
        path (string): the file path
        data (string): the line to append

    Return:
        VOID
    """
    with open(path, 'a') as file:
        file.write(data + '\n')

def delete_file_contents(path):
    """
    delete_file_contents deletes the contents of a file

    Args:
        path (string): the file path

    Return:
        VOID
    """
    with open(path, 'w'):
        pass

def intensityCSV(x, y, z):
    file_name = 'intensityData.csv'
    delete_file_contents(file_name)
    for xi, yi, zi in zip(x, y, z):
        append_to_file(file_name, "{}, {}, {}".format(xi, yi, zi))

def monthHeightCSV(ages, heights):
    file_name = 'monthsHeights.csv'
    delete_file_contents(file_name)
    for month, height in zip(ages, heights):
        append_to_file(file_name, "{}, {}".format(month, height))

def fittedLineCSV(ages, slope, intercept):
    file_name = 'fittedLineCSV.csv'
    delete_file_contents(file_name)
    for age in ages:
        append_to_file(file_name, "{}, {}".format(age, slope*age + intercept))

def makeCSVData(pcolor_months, pcolor_heights, pcolor_intensities, ages, heights, slope, intercept):
    intensityCSV(pcolor_months, pcolor_heights, pcolor_intensities)
    monthHeightCSV(ages, heights)  # writes the original (age, height) pairs
    fittedLineCSV(ages, slope, intercept)

makeCSVData(pcolor_months, pcolor_heights, pcolor_intensities, ages, heights, slope, intercept)
"""
Explanation: Since density plotting is much easier in Mathematica, we are going to export all of our data to CSV files and plot them in Mathematica.
End of explanation
"""
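# Alternative sketch that stays in matplotlib instead of exporting to Mathematica:
# a 2D histogram of the sampled (month, height) pairs gives a similar density view.
plt.hist2d(random_months, random_heights, bins=70, cmap='Blues')
plt.xlabel('age (months)')
plt.ylabel('height (cm)')
plt.colorbar(label='sample count')
plt.show()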
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/08_image/mnist_models.ipynb
apache-2.0
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst from datetime import datetime import os PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 MODEL_TYPE = "dnn" # "linear", "dnn_dropout", "cnn", or "dnn" # Do not change these os.environ["PROJECT"] = PROJECT os.environ["BUCKET"] = BUCKET os.environ["REGION"] = REGION os.environ["MODEL_TYPE"] = MODEL_TYPE os.environ["TFVERSION"] = "2.1" # Tensorflow version os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnistmodel") %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION """ Explanation: MNIST Image Classification with TensorFlow on Cloud ML Engine This notebook demonstrates how to implement different image models on MNIST using Estimator. Note the MODEL_TYPE and change it to try out different models. End of explanation """ %%writefile mnistmodel/trainer/task.py import argparse import json import os import sys from . import model def _parse_arguments(argv): """Parses command-line arguments.""" parser = argparse.ArgumentParser() parser.add_argument( '--model_type', help='Which model type to use', type=str, default='linear') parser.add_argument( '--epochs', help='The number of epochs to train', type=int, default=10) parser.add_argument( '--steps_per_epoch', help='The number of steps per epoch to train', type=int, default=100) parser.add_argument( '--job-dir', help='Directory where to save the given model', type=str, default='mnistmodel/') return parser.parse_known_args(argv) def main(): """Parses command line arguments and kicks off model training.""" args = _parse_arguments(sys.argv[1:])[0] # Configure path for hyperparameter tuning. trial_id = json.loads( os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '') output_path = args.job_dir if not trial_id else args.job_dir + '/' model_layers = model.get_layers(args.model_type) image_model = model.build_model(model_layers, args.job_dir) model_history = model.train_and_evaluate( image_model, args.epochs, args.steps_per_epoch, args.job_dir) if __name__ == '__main__': main() """ Explanation: Building a dynamic model The boilerplate structure for this module has already been set up in the folder mnistmodel. The module lives in the sub-folder, trainer, and is designated as a python package with the empty __init__.py (mnistmodel/trainer/__init__.py) file. It still needs the model and a trainer to run it, so let's make them. Let's start with the trainer file first. This file parses command line arguments to feed into the model. 
End of explanation """ %%writefile mnistmodel/trainer/util.py import tensorflow as tf def scale(image, label): """Scales images from a 0-255 int range to a 0-1 float range""" image = tf.cast(image, tf.float32) image /= 255 image = tf.expand_dims(image, -1) return image, label def load_dataset( data, training=True, buffer_size=5000, batch_size=100, nclasses=10): """Loads MNIST dataset into a tf.data.Dataset""" (x_train, y_train), (x_test, y_test) = data x = x_train if training else x_test y = y_train if training else y_test # One-hot encode the classes y = tf.keras.utils.to_categorical(y, nclasses) dataset = tf.data.Dataset.from_tensor_slices((x, y)) dataset = dataset.map(scale).batch(batch_size) if training: dataset = dataset.shuffle(buffer_size).repeat() return dataset """ Explanation: Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the scale and load_dataset functions from the previous lab. End of explanation """ %%writefile mnistmodel/trainer/model.py import os import shutil import matplotlib.pyplot as plt import numpy as np import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.callbacks import TensorBoard from tensorflow.keras.layers import ( Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax) from . import util # Image Variables WIDTH = 28 HEIGHT = 28 def get_layers( model_type, nclasses=10, hidden_layer_1_neurons=400, hidden_layer_2_neurons=100, dropout_rate=0.25, num_filters_1=64, kernel_size_1=3, pooling_size_1=2, num_filters_2=32, kernel_size_2=3, pooling_size_2=2): """Constructs layers for a keras model based on a dict of model types.""" model_layers = { 'linear': [ Flatten(), Dense(nclasses), Softmax() ], 'dnn': [ Flatten(), Dense(hidden_layer_1_neurons, activation='relu'), Dense(hidden_layer_2_neurons, activation='relu'), Dense(nclasses), Softmax() ], 'dnn_dropout': [ Flatten(), Dense(hidden_layer_1_neurons, activation='relu'), Dense(hidden_layer_2_neurons, activation='relu'), Dropout(dropout_rate), Dense(nclasses), Softmax() ], 'dnn': [ Conv2D(num_filters_1, kernel_size=kernel_size_1, activation='relu', input_shape=(WIDTH, HEIGHT, 1)), MaxPooling2D(pooling_size_1), Conv2D(num_filters_2, kernel_size=kernel_size_2, activation='relu'), MaxPooling2D(pooling_size_2), Flatten(), Dense(hidden_layer_1_neurons, activation='relu'), Dense(hidden_layer_2_neurons, activation='relu'), Dropout(dropout_rate), Dense(nclasses), Softmax() ] } return model_layers[model_type] def build_model(layers, output_dir): """Compiles keras model for image classification.""" model = Sequential(layers) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) return model def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir): """Compiles keras model and loads data into it for training.""" mnist = tf.keras.datasets.mnist.load_data() train_data = util.load_dataset(mnist) validation_data = util.load_dataset(mnist, training=False) callbacks = [] if output_dir: tensorboard_callback = TensorBoard(log_dir=output_dir) callbacks = [tensorboard_callback] history = model.fit( train_data, validation_data=validation_data, epochs=num_epochs, steps_per_epoch=steps_per_epoch, verbose=2, callbacks=callbacks) if output_dir: export_path = os.path.join(output_dir, 'keras_export') model.save(export_path, save_format='tf') return history """ Explanation: Finally, let's code the models! 
The tf.keras API accepts an array of layers into a model object, so we can create a dictionary of layers based on the different model types we want to use. The below file has two functions: get_layers and create_and_train_model. We will build the structure of our model in get_layers. Last but not least, we'll copy over the training code from the previous lab into train_and_evaluate. These models progressively build on each other. Look at the imported tensorflow.keras.layers modules and the default values for the variables defined in get_layers for guidance. End of explanation """ current_time = datetime.now().strftime("%y%m%d_%H%M%S") model_type = 'dnn' os.environ["MODEL_TYPE"] = model_type os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format( BUCKET, model_type, current_time) os.environ["JOB_NAME"] = "mnist_{}_{}".format( model_type, current_time) """ Explanation: Local Training Now that we know that our models are working as expected, let's run it on the Google Cloud AI Platform. We can run it as a python module locally first using the command line. The below cell transfers some of our variables to the command line as well as create a job directory including a timestamp. You can change the model_type to try out different models. End of explanation """ %%bash python3 -m mnistmodel.trainer.task \ --job-dir=$JOB_DIR \ --epochs=5 \ --steps_per_epoch=50 \ --model_type=$MODEL_TYPE """ Explanation: The cell below runs the local version of the code. The epochs and steps_per_epoch flag can be changed to run for longer or shorther, as defined in our mnistmodel/trainer/task.py file. End of explanation """ %%writefile mnistmodel/Dockerfile FROM gcr.io/deeplearning-platform-release/tf2-cpu COPY mnistmodel/trainer /mnistmodel/trainer ENTRYPOINT ["python3", "-m", "mnistmodel.trainer.task"] """ Explanation: Training on the cloud Since we're using an unreleased version of TensorFlow on AI Platform, we can instead use a Deep Learning Container in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple Dockerlife which copies our code to be used in a TF2 environment. End of explanation """ !docker build -f mnistmodel/Dockerfile -t $IMAGE_URI ./ !docker push $IMAGE_URI """ Explanation: The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up here with the name mnistmodel. (Click here to enable Cloud Build) End of explanation """ %%bash echo $JOB_DIR $REGION $JOB_NAME gcloud ai-platform jobs submit training $JOB_NAME \ --staging-bucket=gs://$BUCKET \ --region=$REGION \ --master-image-uri=$IMAGE_URI \ --scale-tier=BASIC_GPU \ --job-dir=$JOB_DIR \ -- \ --model_type=$MODEL_TYPE """ Explanation: Finally, we can kickoff the AI Platform training job. We can pass in our docker image using the master-image-uri flag. End of explanation """ %%bash MODEL_NAME="mnist" MODEL_VERSION=${MODEL_TYPE} MODEL_LOCATION=${JOB_DIR}keras_export/ echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes" #yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME} #yes | gcloud ai-platform models delete ${MODEL_NAME} gcloud ai-platform models create ${MODEL_NAME} --regions $REGION gcloud ai-platform versions create ${MODEL_VERSION} \ --model ${MODEL_NAME} \ --origin ${MODEL_LOCATION} \ --framework tensorflow \ --runtime-version=2.1 """ Explanation: Can't wait to see the results? 
Run the code below and copy the output into the Google Cloud Shell to follow. Deploying and predicting with model Once you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the keras export path of the previous job, but ${JOB_DIR}keras_export/ can always be changed to a different path. Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model. End of explanation """
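# Added sketch, not part of the original lab: once the version is deployed, a
# natural next step is an online prediction request. The reshaping below (a
# 28x28x1 image scaled to [0, 1]) and the file name test.json are assumptions;
# the exact JSON instance format depends on the serving signature of the
# exported Keras model, so verify it against your deployment before relying
# on this.
import json
import numpy as np
import tensorflow as tf

(_, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
instance = (x_test[0] / 255.0).reshape(28, 28, 1).tolist()

with open("test.json", "w") as f:
    json.dump({"instances": [instance]}, f)

# Hypothetical invocation; the model and version names follow the deployment
# cell above (MODEL_NAME="mnist", MODEL_VERSION=$MODEL_TYPE):
# !gcloud ai-platform predict --model mnist --version $MODEL_TYPE --json-request test.json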
cosmicBboy/themis-ml
paper/Evaluating Themis-ml.ipynb
mit
from themis_ml import datasets from themis_ml.datasets.german_credit_data_map import \ preprocess_german_credit_data from themis_ml.metrics import mean_difference, normalized_mean_difference, \ mean_confidence_interval german_credit = datasets.german_credit() german_credit[ ["credit_risk", "purpose", "age_in_years", "foreign_worker"]].head() german_credit_preprocessed = ( preprocess_german_credit_data(german_credit) # the following binary variable indicates whether someone is female or # not since the unique values in `personal_status` are: # 'personal_status_and_sex_female_divorced/separated/married' # 'personal_status_and_sex_male_divorced/separated' # 'personal_status_and_sex_male_married/widowed' # 'personal_status_and_sex_male_single' .assign(female=lambda df: df["personal_status_and_sex_female_divorced/separated/married"]) # we're going to hypothesize here that young people, aged below 25, # might be considered to have bad credit risk moreso than other groups .assign(age_below_25=lambda df: df["age_in_years"] <= 25) ) """ Explanation: The Utility-Fairness Tradeoff In this post, I'll be taking a dive into the capabilities of themis_ml as a tool to measure and mitigate discriminatory patterns in training data and the predictions made by machine learning algorithms trained for the purposes of socially sensitive decision processes. The overall goal of this research is to come up with a reasonable way to think about how to make machine learning algorithms more fair. While the mathematical formalization of fairness is not sufficient to solve the problem of discrimination, our ability to understand and articulate what it means for an algorithm to be fair is a step in the right direction. Since the "discrimination" is an value-laden term in this context, I'll refer to the opposite of fairness as potential discrimination (PD) since the any socially biased patterns we'll be measuring in the training data did not necessarily arise from discriminatory processes. I'll be using the German Credit data, which consists of ~1000 loan application containing roughly 20 input variables (including foreign_worker, housing, and credit_history) and 1 binary target variable credit_risk, which is either good or bad. In the context of a good/bad credit_risk binary predict task and an explicit definition of fairness, our objectives will be to: Measure the degree of discrimination in the dataset with respect to some discrimination metric and protected class. Establish a baseline performance level with respect to utility and fairness metrics with models trained on a fairness-unaware machine learning pipeline. Measure and compare the baseline metrics with fairness aware models. 
Load Data End of explanation """ credit_risk = german_credit_preprocessed.credit_risk credit_risk.value_counts() """ Explanation: Measure Social Bias target variable: credit_risk 1 = low risk (good) 0 = high risk (bad) End of explanation """ is_female = german_credit_preprocessed.female is_female.value_counts() def report_metric(metric, mean_diff, lower, upper): print("{metric}: {md:0.02f} - 95% CI [{lower:0.02f}, {upper:0.02f}]" .format(metric=metric, md=mean_diff, lower=lower, upper=upper)) report_metric( "mean difference", *map(lambda x: x * 100, mean_difference(credit_risk, is_female))) report_metric( "normalized mean difference", *map(lambda x: x * 100, normalized_mean_difference(credit_risk, is_female))) """ Explanation: protected class: sex advantaged group: men disadvantaged group: women End of explanation """ is_foreign = german_credit_preprocessed.foreign_worker is_foreign.value_counts() report_metric( "mean difference", *map(lambda x: x * 100, mean_difference(credit_risk, is_foreign))) report_metric( "normalized mean difference", *map(lambda x: x * 100, normalized_mean_difference(credit_risk, is_foreign))) """ Explanation: protected class: immigration status advantaged group: citizen worker disadvantaged group: foreign worker End of explanation """ age_below_25 = german_credit_preprocessed.age_below_25 age_below_25.value_counts() report_metric( "mean difference", *map(lambda x: x * 100, mean_difference(credit_risk, age_below_25))) report_metric( "normalized mean difference", *map(lambda x: x * 100, normalized_mean_difference(credit_risk, age_below_25))) """ Explanation: protected class: age advantaged group: age above 25 disadvantaged group: age below 25 End of explanation """ import itertools import numpy as np import pandas as pd from sklearn.model_selection import StratifiedKFold, RepeatedStratifiedKFold from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.neural_network import MLPClassifier from sklearn.metrics import ( accuracy_score, roc_auc_score, f1_score) # specify feature set. Note that we're excluding the `is_female` # and `age_below_25` columns that we created above. 
feature_set_1 = [ 'duration_in_month', 'credit_amount', 'installment_rate_in_percentage_of_disposable_income', 'present_residence_since', 'age_in_years', 'number_of_existing_credits_at_this_bank', 'number_of_people_being_liable_to_provide_maintenance_for', 'status_of_existing_checking_account', 'savings_account/bonds', 'present_employment_since', 'job', 'telephone', 'foreign_worker', 'credit_history_all_credits_at_this_bank_paid_back_duly', 'credit_history_critical_account/other_credits_existing_not_at_this_bank', 'credit_history_delay_in_paying_off_in_the_past', 'credit_history_existing_credits_paid_back_duly_till_now', 'credit_history_no_credits_taken/all_credits_paid_back_duly', 'purpose_business', 'purpose_car_(new)', 'purpose_car_(used)', 'purpose_domestic_appliances', 'purpose_education', 'purpose_furniture/equipment', 'purpose_others', 'purpose_radio/television', 'purpose_repairs', 'purpose_retraining', 'personal_status_and_sex_female_divorced/separated/married', 'personal_status_and_sex_male_divorced/separated', 'personal_status_and_sex_male_married/widowed', 'personal_status_and_sex_male_single', 'other_debtors/guarantors_co-applicant', 'other_debtors/guarantors_guarantor', 'other_debtors/guarantors_none', 'property_building_society_savings_agreement/life_insurance', 'property_car_or_other', 'property_real_estate', 'property_unknown/no_property', 'other_installment_plans_bank', 'other_installment_plans_none', 'other_installment_plans_stores', 'housing_for free', 'housing_own', 'housing_rent', ] N_SPLITS = 10 N_REPEATS = 5 RANDOM_STATE = 1000 def get_estimator_name(e): return "".join([x for x in str(type(e)).split(".")[-1] if x.isalpha()]) def get_grid_params(grid_params_dict): """Get outer product of grid search parameters.""" return [ dict(params) for params in itertools.product( *[[(k, v_i) for v_i in v] for k, v in grid_params_dict.items()])] def fit_with_s(estimator): has_relabeller = getattr(estimator, "relabeller", None) is not None child_estimator = getattr(estimator, "estimator", None) estimator_fit_with_s = getattr(estimator, "S_ON_FIT", False) child_estimator_fit_with_s = getattr(child_estimator, "S_ON_FIT", False) return has_relabeller or estimator_fit_with_s or\ child_estimator_fit_with_s def predict_with_s(estimator): estimator_pred_with_s = getattr(estimator, "S_ON_PREDICT", False) child_estimator = getattr(estimator, "estimator", None) return estimator_pred_with_s or \ getattr(child_estimator, "S_ON_PREDICT", False) def cross_validation_experiment(estimators, X, y, s, s_name, verbose=True): msg = "Training models: protected_class = %s" % s_name if verbose: print(msg) print("-" * len(msg)) performance_scores = [] # stratified groups tries to balance out y and s groups = [i + j for i, j in zip(y.astype(str), s_female.astype(str))] cv = RepeatedStratifiedKFold( n_splits=N_SPLITS, n_repeats=N_REPEATS, random_state=RANDOM_STATE) for e_name, e in estimators: if verbose: print("%s, fold:" % e_name), for i, (train, test) in enumerate(cv.split(X, y, groups=groups)): if verbose: print(i), # create train and validation fold partitions X_train, X_test = X[train], X[test] y_train, y_test = y[train], y[test] s_train, s_test = s[train], s[test] # fit model and generate train and test predictions if fit_with_s(e): e.fit(X_train, y_train, s_train) else: e.fit(X_train, y_train) train_pred_args = (X_train, s_train) if predict_with_s(e) \ else (X_train, ) test_pred_args = (X_test, s_test) if predict_with_s(e) \ else (X_test, ) train_pred_prob = e.predict_proba(*train_pred_args)[:, 1] 
train_pred = e.predict(*train_pred_args) test_pred_prob = e.predict_proba(*test_pred_args)[:, 1] test_pred = e.predict(*test_pred_args) # train scores performance_scores.append([ s_name, e_name, i, "train", # regular metrics roc_auc_score(y_train, train_pred_prob), # fairness metrics mean_difference(train_pred, s_train)[0], ]) # test scores performance_scores.append([ s_name, e_name, i, "test", # regular metrics roc_auc_score(y_test, test_pred_prob), # fairness metrics mean_difference(test_pred, s_test)[0] ]) if verbose: print("") if verbose: print("") return pd.DataFrame( performance_scores, columns=[ "protected_class", "estimator", "cv_fold", "fold_type", "auc", "mean_diff"]) # training and target data X = german_credit_preprocessed[feature_set_1].values y = german_credit_preprocessed["credit_risk"].values s_female = german_credit_preprocessed["female"].values s_foreign = german_credit_preprocessed["foreign_worker"].values s_age_below_25 = german_credit_preprocessed["age_below_25"].values LOGISTIC_REGRESSION = LogisticRegression( penalty="l2", C=0.001, class_weight="balanced") DECISION_TREE_CLF = DecisionTreeClassifier( criterion="entropy", max_depth=10, min_samples_leaf=10, max_features=10, class_weight="balanced") RANDOM_FOREST_CLF = RandomForestClassifier( criterion="entropy", n_estimators=50, max_depth=10, max_features=10, min_samples_leaf=10, class_weight="balanced") estimators = [ ("LogisticRegression", LOGISTIC_REGRESSION), ("DecisionTree", DECISION_TREE_CLF), ("RandomForest", RANDOM_FOREST_CLF) ] experiment_baseline_female = cross_validation_experiment( estimators, X, y, s_female, "female") experiment_baseline_foreign = cross_validation_experiment( estimators, X, y, s_foreign, "foreign_worker") experiment_baseline_age_below_25 = cross_validation_experiment( estimators, X, y, s_age_below_25, "age_below_25") import seaborn as sns import matplotlib.pyplot as plt % matplotlib inline UTILITY_METRICS = ["auc"] FAIRNESS_METRICS = ["mean_diff"] def summarize_experiment_results(experiment_df): return ( experiment_df .drop("cv_fold", axis=1) .groupby(["protected_class", "estimator", "fold_type"]) .mean()) experiment_baseline = pd.concat([ experiment_baseline_female, experiment_baseline_foreign, experiment_baseline_age_below_25 ]) experiment_baseline_summary = summarize_experiment_results( experiment_baseline) experiment_baseline_summary.query("fold_type == 'test'") baseline_df = ( experiment_baseline .query("fold_type == 'test' and estimator == 'LogisticRegression'") ) sns.factorplot(y="protected_class", x="mean_diff", orient="h", data=baseline_df, size=4, aspect=2, join=False) protected_classes = ["female", "foreign_worker", "age_below_25"] for s in protected_classes: mean_ci = mean_confidence_interval( plot_df.query("protected_class == @s").mean_diff.dropna()) print( "grand_mean(mean_diff) for %s - mean: %0.03f, 95%% CI(%0.03f, %0.03f)" % (s, mean_ci[0], mean_ci[1], mean_ci[2])) def plot_experiment_results(experiment_results): return ( experiment_results .query("fold_type == 'test'") .drop(["fold_type", "cv_fold"], axis=1) .pipe(pd.melt, id_vars=["protected_class", "estimator"], var_name="metric", value_name="score") .pipe((sns.factorplot, "data"), y="metric", x="score", hue="estimator", col="protected_class", col_wrap=3, size=3.5, aspect=1.2, join=False, dodge=0.4)) plot_experiment_results(experiment_baseline); """ Explanation: These mean differences and confidence interval bounds suggest that on average: men have "good" credit risk at a 7.48% higher rate than women, with a lower bound 
of 1.35% and upper bound of 13.61%. citizen workers have "good" credit risk at a 19.93% higher rate than foreign workers, with a lower bound of 4.91% and upper bound of 34.94%. people above the age of 25 have "good" credit risk at a 14.94% higher rate than those below 25 with a lower bound of 8.97% and upper bound of 25.61%. Establish Baseline Metrics Suppose that Unjust Bank wants to use these data to train a machine learning algorithm to classify new observations into the "good credit risk"/"bad credit risk" buckets. In scenario 1, let's also suppose that the data scientists at Unjust Bank are using typical, fairness-unaware modeling techniques. Furthermore, they give absolutely no thought into what inputs go into the learning process. Using this kitchen sink approach, they plan on using variables like sex, age_below_25, and foreign_worker to learn the classifier. However, a rogue element in the data science team is interested in at least measuring the potentially discriminatory (PD) patterns in the learned algorithms, so in addition to measure performance with metrics like accuracy or ROC area under the curve, also measures the degree to which the algorithm generates PD predictions that favor one social group over another. Procedure Specify model hyperparameter settings for training models. Partition the training data into 10 validation folds. For each of the validation folds, train model on the rest of the data on each of the hyperparameter settings. Evaluate the performance of the model on the validation fold. Pick model with the best average performance to deploy to production. Below we use StratifiedKFold so that we can partition our data according to the protected class of interest and train the the following models: LogisticRegression DecisionTreeClassifier RandomForest End of explanation """ from IPython.display import Markdown, display def print_best_metrics(experiment_results, protected_classes): for pclass in protected_classes: msg = "#### protected class = %s:" % pclass display(Markdown(msg)) exp_df = experiment_results[ (experiment_results["protected_class"] == pclass) & (experiment_results["fold_type"] == "test")] msg = "" for m in UTILITY_METRICS: utility_msg = \ "- best utility measured by %s (higher is better)" % m best_model = ( exp_df .sort_values(m, ascending=False) .drop(["fold_type"], axis=1) .iloc[0][[m, "estimator"]]) msg += utility_msg + " = %0.03f: %s\n" % \ (best_model[0], best_model[1]) for m in FAIRNESS_METRICS: fairness_msg = \ "- best fairness measured by %s (lower is better)" % m best_model = ( exp_df # score closer to zero is better .assign(abs_measure=lambda df: df[m].abs()) .sort_values("abs_measure") .drop(["abs_measure", "fold_type"], axis=1) .iloc[0][[m, "estimator"]]) msg += fairness_msg + " = %0.03f: %s\n" % \ (best_model[0], best_model[1]) display(Markdown(msg)) print_best_metrics( experiment_baseline_summary.reset_index(), ["female", "foreign_worker", "age_below_25"]) """ Explanation: It appears that the variance of normalized_mean_difference across the 10 cross-validation folds is higher than mean_difference, likely because the normalization factor d_max depends on the rate of positive labels in the data. 
End of explanation """ # create feature sets that remove variables with protected class information feature_set_no_sex = [ f for f in feature_set_1 if f not in [ 'personal_status_and_sex_female_divorced/separated/married', 'personal_status_and_sex_male_divorced/separated', 'personal_status_and_sex_male_married/widowed', 'personal_status_and_sex_male_single']] feature_set_no_foreign = [f for f in feature_set_1 if f != "foreign_worker"] feature_set_no_age = [f for f in feature_set_1 if f != "age_in_years"] # training and target data X_no_sex = german_credit_preprocessed[feature_set_no_sex].values X_no_foreign = german_credit_preprocessed[feature_set_no_foreign].values X_no_age = german_credit_preprocessed[feature_set_no_age].values experiment_naive_female = cross_validation_experiment( estimators, X_no_sex, y, s_female, "female") experiment_naive_foreign = cross_validation_experiment( estimators, X_no_foreign, y, s_foreign, "foreign_worker") experiment_naive_age_below_25 = cross_validation_experiment( estimators, X_no_age, y, s_age_below_25, "age_below_25") experiment_naive = pd.concat([ experiment_naive_female, experiment_naive_foreign, experiment_naive_age_below_25 ]) experiment_naive_summary = summarize_experiment_results(experiment_naive) experiment_naive_summary.query("fold_type == 'test'") plot_experiment_results(experiment_naive); print_best_metrics( experiment_naive_summary.reset_index(), ["female", "foreign_worker", "age_below_25"]) """ Explanation: Naive Fairness-aware Approach: Remove Protected Class The naive approach to training fairness-aware models is to remove the protected class variables from the input data. While at face value this approach might seem like a good measure to prevent the model from learning the discriminatory patterns in the raw data, it doesn't preclude the possibility of other non-protected class variables highly correlate with protected class variables. An well-known example of this is how zipcode correlates with race, so zipcode essentially serves as a proxy for race in the training data even if race is excluded from the input data. End of explanation """ from sklearn.base import clone from themis_ml.preprocessing.relabelling import Relabeller from themis_ml.meta_estimators import FairnessAwareMetaEstimator # here we use the relabeller class to create new y vectors for each of the # protected class contexts. # we also use the FairnessAwareMetaEstimator as a convenience class to # compose together different fairness-aware methods. 
This wraps around the # estimators that we defined in the previous relabeller = Relabeller() relabelling_estimators = [ (name, FairnessAwareMetaEstimator(e, relabeller=relabeller)) for name, e in estimators] experiment_relabel_female = cross_validation_experiment( relabelling_estimators, X_no_sex, y, s_female, "female") experiment_relabel_foreign = cross_validation_experiment( relabelling_estimators, X_no_foreign, y, s_foreign, "foreign_worker") experiment_relabel_age_below_25 = cross_validation_experiment( relabelling_estimators, X_no_age, y, s_age_below_25, "age_below_25") experiment_relabel = pd.concat([ experiment_relabel_female, experiment_relabel_foreign, experiment_relabel_age_below_25 ]) experiment_relabel_summary = summarize_experiment_results(experiment_relabel) experiment_relabel_summary.query("fold_type == 'test'") plot_experiment_results(experiment_relabel); print_best_metrics( experiment_relabel_summary.reset_index(), ["female", "foreign_worker", "age_below_25"]) """ Explanation: Fairness-aware Method: Relabelling In this and the following fairness-aware modeling runs, we exclude the protected class variables as in the Naive Fairness-aware Approach section in addition to the explicit fairness-aware technique. End of explanation """ LOGREG_L2_PARAM = [ 3, 1, 3e-1, 1e-1, 3e-2, 1e-2, 3e-3, 1e-3, 3e-4, 1e-4, 3e-5, 1e-5, 3e-6, 1e-6, 3e-7, 1e-7, 3e-8, 1e-8] def validation_curve_experiment( estimator_name, estimator, param_name, param_list, update_func): validaton_curve_experiment = [] for param in param_list: e = clone(estimator) e = update_func(e, param_name, param) estimators = [(estimator_name, e)] experiment_relabel_female = cross_validation_experiment( estimators, X_no_sex, y, s_female, "female", verbose=False) experiment_relabel_foreign = cross_validation_experiment( estimators, X_no_foreign, y, s_foreign, "foreign_worker", verbose=False) experiment_relabel_age_below_25 = cross_validation_experiment( estimators, X_no_age, y, s_age_below_25, "age_below_25", verbose=False) validaton_curve_experiment.extend( [experiment_relabel_female.assign(**{param_name: param}), experiment_relabel_foreign.assign(**{param_name: param}), experiment_relabel_age_below_25.assign(**{param_name: param})]) return pd.concat(validaton_curve_experiment) def update_relabeller(e, param_name, param): e = clone(e) child_estimator = clone(e.estimator) child_estimator.set_params(**{param_name: param}) e.set_params(estimator=child_estimator) return e relabel_validaton_curve_experiment = validation_curve_experiment( "LogisticRegression", FairnessAwareMetaEstimator( LOGISTIC_REGRESSION, relabeller=Relabeller()), "C", LOGREG_L2_PARAM, update_relabeller) def validation_curve_plot(x, y, **kwargs): ax = plt.gca() lw = 2.5 data = kwargs.pop("data") train_data = data.query("fold_type == 'train'") test_data = data.query("fold_type == 'test'") grp_data_train = train_data.groupby(x) grp_data_test = test_data.groupby(x) mean_data_train = grp_data_train[y].mean() mean_data_test = grp_data_test[y].mean() std_data_train = grp_data_train[y].std() std_data_test = grp_data_test[y].std() ax.semilogx(mean_data_train.index, mean_data_train, label="train", color="#848484", lw=lw) ax.semilogx(mean_data_test.index, mean_data_test, label="test", color="#ae33bf", lw=lw) # # Add error region ax.fill_between(mean_data_train.index, mean_data_train - std_data_train, mean_data_train + std_data_train, alpha=0.2, color="darkorange", lw=lw) ax.fill_between(mean_data_test.index, mean_data_test - std_data_test, mean_data_test + std_data_test, 
alpha=0.1, color="navy", lw=lw) relabel_validaton_curve_experiment_df = ( relabel_validaton_curve_experiment .pipe(pd.melt, id_vars=["protected_class", "estimator", "cv_fold", "fold_type", "C"], value_vars=["auc", "mean_diff"], var_name="metric", value_name="score") .assign( protected_class=lambda df: df.protected_class.str.replace("_", " "), metric=lambda df: df.metric.str.replace("_", " ")) .rename(columns={"score": "mean score"}) ) # relabel_validaton_curve_experiment_df g = sns.FacetGrid( relabel_validaton_curve_experiment_df, row="protected_class", col="metric", size=2.5, aspect=1.1, sharey=False, margin_titles=False) g = g.map_dataframe(validation_curve_plot, "C", "mean score") g.set_titles(template="{row_name}, {col_name}") # g.add_legend() # g.add_legend(bbox_to_anchor=(0.275, 0.91)) g.add_legend(bbox_to_anchor=(0.28, 0.9)) g.fig.tight_layout() g.savefig("IMG/logistic_regression_validation_curve.png"); """ Explanation: Validation Curve: Logistic Regression End of explanation """ from sklearn.linear_model import LinearRegression from sklearn.ensemble import RandomForestRegressor from sklearn.tree import DecisionTreeRegressor from themis_ml.linear_model.counterfactually_fair_models import \ LinearACFClassifier LINEAR_REG = LinearRegression() DECISION_TREE_REG = DecisionTreeRegressor(max_depth=10, min_samples_leaf=10) RANDOM_FOREST_REG = RandomForestRegressor( n_estimators=50, max_depth=10, min_samples_leaf=10) # use the estimators defined above to define the linear additive # counterfactually fair models linear_acf_estimators = [ (name, LinearACFClassifier( target_estimator=e, binary_residual_type="absolute")) for name, e in estimators] experiment_acf_female = cross_validation_experiment( linear_acf_estimators, X_no_sex, y, s_female, "female") experiment_acf_foreign = cross_validation_experiment( linear_acf_estimators, X_no_foreign, y, s_foreign, "foreign_worker") experiment_acf_age_below_25 = cross_validation_experiment( linear_acf_estimators, X_no_age, y, s_age_below_25, "age_below_25") experiment_acf = pd.concat([ experiment_acf_female, experiment_acf_foreign, experiment_acf_age_below_25 ]) experiment_acf_summary = summarize_experiment_results(experiment_acf) experiment_acf_summary.query("fold_type == 'test'") experiment_acf = pd.concat([ experiment_acf_female, experiment_acf_foreign, experiment_acf_age_below_25 ]) experiment_acf_summary = summarize_experiment_results(experiment_acf) experiment_acf_summary.query("fold_type == 'test'") plot_experiment_results(experiment_acf); print_best_metrics( experiment_acf_summary.reset_index(), ["female", "foreign_worker", "age_below_25"]) """ Explanation: Fairness-aware Method: Additive Counterfactually Fair Model End of explanation """ from themis_ml.postprocessing.reject_option_classification import \ SingleROClassifier # use the estimators defined above to define the linear additive # counterfactually fair models single_roc_clf_estimators = [ (name, SingleROClassifier(estimator=e)) for name, e in estimators] experiment_single_roc_female = cross_validation_experiment( single_roc_clf_estimators, X_no_sex, y, s_female, "female") experiment_single_roc_foreign = cross_validation_experiment( single_roc_clf_estimators, X_no_foreign, y, s_foreign, "foreign_worker") experiment_single_roc_age_below_25 = cross_validation_experiment( single_roc_clf_estimators, X_no_age, y, s_age_below_25, "age_below_25") experiment_single_roc = pd.concat([ experiment_single_roc_female, experiment_single_roc_foreign, experiment_single_roc_age_below_25 ]) 
experiment_single_roc_summary = summarize_experiment_results( experiment_single_roc) experiment_single_roc_summary.query("fold_type == 'test'") plot_experiment_results(experiment_acf); print_best_metrics( experiment_acf.reset_index(), ["female", "foreign_worker", "age_below_25"]) """ Explanation: Fairness-aware Method: Reject-option Classification End of explanation """ compare_experiments = ( pd.concat([ experiment_baseline.assign(experiment="B"), experiment_naive.assign(experiment="RPA"), experiment_relabel.assign(experiment="RTV"), experiment_acf.assign(experiment="CFM"), experiment_single_roc.assign(experiment="ROC") ]) .assign( protected_class=lambda df: df.protected_class.str.replace("_", " "), ) ) compare_experiments.head() comparison_palette = sns.color_palette("Dark2", n_colors=8) def compare_experiment_results_multiple_model(experiment_results): g = ( experiment_results .query("fold_type == 'test'") .drop(["cv_fold"], axis=1) .pipe(pd.melt, id_vars=["experiment", "protected_class", "estimator", "fold_type"], var_name="metric", value_name="score") .assign( metric=lambda df: df.metric.str.replace("_", " ")) .pipe((sns.factorplot, "data"), y="experiment", x="score", hue="metric", col="protected_class", row="estimator", join=False, size=3, aspect=1.1, dodge=0.3, palette=comparison_palette, margin_titles=True, legend=False)) g.set_axis_labels("mean score (95% CI)") for ax in g.axes.ravel(): ax.set_ylabel("") plt.setp(ax.texts, text="") g.set_titles(row_template="{row_name}", col_template="{col_name}") plt.legend(title="metric", loc=9, bbox_to_anchor=(-0.65, -0.4)) g.fig.legend(loc=9, bbox_to_anchor=(0.5, -0.3)) g.fig.tight_layout() g.savefig("IMG/fairness_aware_comparison.png", dpi=500); compare_experiment_results_multiple_model( compare_experiments.query("estimator == 'LogisticRegression'")); """ Explanation: Comparison of Fairness-aware Techniques End of explanation """ from scipy import stats def compute_corr_pearson(x, y, ci=0.95): corr = stats.pearsonr(x, y) z = np.arctanh(corr[0]) sigma = (1 / ((len(x) - 3) ** 0.5)) cint = z + np.array([-1, 1]) * sigma * stats.norm.ppf((1 + ci ) / 2) return corr, np.tanh(cint) black_palette = sns.color_palette(["#222222"]) def plot_utility_fairness_tradeoff(x, y, **kwargs): ax = plt.gca() data = kwargs.pop("data") sns_ax = sns.regplot(x=x, y=y, data=data, scatter_kws={'alpha':0.5}, **kwargs) (corr, p_val), ci = compute_corr_pearson(data[x], data[y]) r_text = 'r = %0.02f (%0.02f, %0.02f)' % \ (corr, ci[0], ci[1]) sns_ax.annotate( r_text, xy=(0.7, 0), xytext=(0.07, 0.91), textcoords='axes fraction', fontweight="bold", fontsize=9, color="gray" ) bottom_padding = 0.05 top_padding = 0.5 ylim = (data[y].min() - bottom_padding, data[y].max() + top_padding) sns_ax.set_ylim(*ylim) g = sns.FacetGrid( ( compare_experiments .drop("cv_fold", axis=1) .reset_index() .query("fold_type == 'test'") .rename( columns={"mean_diff": "mean diff"}) ), col="protected_class", row="experiment", hue="experiment", size=2.0, aspect=1.3, sharey=True, palette=black_palette) g.map_dataframe(plot_utility_fairness_tradeoff, "auc", "mean diff") g.set_titles(template="{row_name}, {col_name}") g.fig.tight_layout() g.savefig("IMG/fairness_utility_tradeoff.png", dpi=500); g = sns.FacetGrid( ( compare_experiments .drop("cv_fold", axis=1) .reset_index() .query("fold_type == 'test'") .rename( columns={"mean_diff": "mean diff"}) ), col="protected_class", row="estimator", hue="estimator", size=3.5, aspect=1, sharey=True, sharex=False, palette=black_palette) 
g.map_dataframe(plot_utility_fairness_tradeoff, "auc", "mean diff") g.set_titles(template="{row_name}, {col_name}") g.fig.tight_layout() """ Explanation: We can make some interesting observations when comparing the results from different fairness-aware techniques. End of explanation """
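# Added sketch, not part of the original notebook: a minimal illustration of
# the fairness metric that drives all of the comparisons above. The helper
# name mean_difference_sketch is hypothetical; it is a simplified stand-in for
# themis_ml.metrics.mean_difference, which (as used earlier) also returns
# lower and upper confidence bounds.
import numpy as np

def mean_difference_sketch(y, s):
    """d = p(y=1 | advantaged) - p(y=1 | disadvantaged), where s=1 marks the
    disadvantaged group (e.g. `female`, `foreign_worker`, `age_below_25`)."""
    y, s = np.asarray(y), np.asarray(s)
    return y[s == 0].mean() - y[s == 1].mean()

# For example, applied to the raw labels this should roughly reproduce the
# ~0.07 disparity reported for sex earlier (before the confidence-interval
# machinery of the library version).
print(mean_difference_sketch(y, s_female))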
daviddesancho/mdtraj
examples/solvent-accessible-surface-area.ipynb
lgpl-2.1
%matplotlib inline
from __future__ import print_function

import numpy as np
import mdtraj as md

"""
Explanation: In this example, we'll compute the solvent accessible surface area of one of the residues in our protein across each frame in a MD trajectory. We're going to use our trusty alanine dipeptide trajectory for this calculation, but in a real-world situation you'll probably want to use a more interesting peptide or protein, especially one with a Trp residue.
End of explanation
"""

help(md.shrake_rupley)

trajectory = md.load('ala2.h5')
sasa = md.shrake_rupley(trajectory)

print(trajectory)
print('sasa data shape', sasa.shape)

"""
Explanation: We'll use the algorithm from Shrake and Rupley for computing the SASA. Here's the function in MDTraj:
End of explanation
"""

total_sasa = sasa.sum(axis=1)
print(total_sasa.shape)

from matplotlib.pylab import *

plot(trajectory.time, total_sasa)
xlabel('Time [ps]', size=16)
ylabel('Total SASA (nm)^2', size=16)
show()

"""
Explanation: The computed sasa array contains the solvent accessible surface area for each atom in each frame of the trajectory. Let's sum over all of the atoms to get the total SASA from all of the atoms in each frame.
End of explanation
"""

def autocorr(x):
    "Compute an autocorrelation with numpy"
    x = x - np.mean(x)
    result = np.correlate(x, x, mode='full')
    result = result[result.size//2:]
    return result / result[0]

semilogx(trajectory.time, autocorr(total_sasa))
xlabel('Time [ps]', size=16)
ylabel('SASA autocorrelation', size=16)
show()

"""
Explanation: We probably don't really have enough data to compute a meaningful autocorrelation, but for more realistic datasets, this might be something that you want to do.
End of explanation
"""
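# Added sketch, not part of the original example: the introduction mentions
# the SASA of a single residue. One way to get per-residue values is to let
# MDTraj aggregate atoms by residue via the `mode` keyword of shrake_rupley;
# if your MDTraj version lacks that keyword (an assumption to check), the same
# result can be obtained by summing the per-atom array over each residue's
# atom indices from trajectory.topology.
sasa_res = md.shrake_rupley(trajectory, mode='residue')
print('per-residue sasa shape', sasa_res.shape)  # (n_frames, n_residues)

# SASA time series for one residue, e.g. the alanine in the dipeptide
# (residue index 1 is assumed here; adjust for your topology).
plot(trajectory.time, sasa_res[:, 1])
xlabel('Time [ps]', size=16)
ylabel('Residue SASA (nm)^2', size=16)
show()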
H-E-L-P/XID_plus
docs/build/html/notebooks/examples/XID+example_run_script-PACS.ipynb
mit
import numpy as np
from astropy.io import fits
from astropy import wcs

import pickle
import dill
import sys
import os
import xidplus
import copy
from xidplus import moc_routines, catalogue
from xidplus import posterior_maps as postmaps
from builtins import input

"""
Explanation: XID+ Example Run Script
(This is based on a Jupyter notebook, available in the XID+ package and can be interactively run and edited)
XID+ is a probabilistic deblender for confusion-dominated maps. It is designed to:

- Use an MCMC-based approach to get the FULL posterior probability distribution on flux
- Provide a natural framework to introduce additional prior information
- Allow more representative estimation of source flux density uncertainties
- Provide a platform for doing science with the maps (e.g. XID+ hierarchical stacking, luminosity functions from the map, etc.)

Cross-identification tends to be done with catalogues, then science with the matched catalogues. XID+ takes a different philosophy. Catalogues are a form of data compression. That is OK in some cases, not so much in others, i.e. confused images: catalogue compression loses correlation information. Ideally, science should be done without compression. XID+ provides a framework to cross identify galaxies we know about in different maps, with the idea that it can be extended to do science with the maps!!
Philosophy:

- build a probabilistic generative model for the SPIRE maps
- infer the model on the SPIRE maps

Bayes Theorem
$p(\mathbf{f}|\mathbf{d}) \propto p(\mathbf{d}|\mathbf{f}) \times p(\mathbf{f})$
In order to carry out Bayesian inference, we need a model to carry out inference on. For the SPIRE maps, our model is quite simple, with likelihood defined as:
$L = p(\mathbf{d}|\mathbf{f}) \propto |\mathbf{N_d}|^{-1/2} \exp\big\{ -\frac{1}{2}(\mathbf{d}-\mathbf{Af})^T\mathbf{N_d}^{-1}(\mathbf{d}-\mathbf{Af})\big\}$
where:
$\mathbf{N_{d,ii}} = \sigma_{inst.,ii}^2 + \sigma_{conf.}^2$
The simplest model for XID+ assumes the following:

- All sources are known and have positive flux (fi)
- A global background (B) contributes to all pixels
- The PRF is fixed and known
- Confusion noise is constant and not correlated across pixels

Because we are getting the joint probability distribution, our model is generative, i.e. given parameters, we generate data and vice versa.
Compared to a discriminative model (e.g. a neural network), which only obtains the conditional probability distribution: give inputs, get output, but you can't go the other way.
A generative model is a full probabilistic model.
Allows more complex relationships between observed and target variables End of explanation """ from healpy import pixelfunc order_large=6 order_small=10 tile_large=21875 output_folder='../../../test_files/' outfile=output_folder+'Tile_'+str(tile_large)+'_'+str(order_large)+'.pkl' with open(outfile, 'rb') as f: obj=pickle.load(f) priors=obj['priors'] theta, phi =pixelfunc.pix2ang(2**order_large, tile_large, nest=True) tile_small = pixelfunc.ang2pix(2**order_small, theta, phi, nest=True) priors[0].moc.write(output_folder+'Tile_'+str(tile_large)+'_'+str(order_large)+'_moc_.fits') from astropy.table import Table, join, vstack,hstack Table([priors[0].sra,priors[0].sdec]).write(output_folder+'Tile_'+str(tile_large)+'_'+str(order_large)+'_table.fits') moc=moc_routines.get_fitting_region(order_small,tile_small) for p in priors: p.moc=moc p.cut_down_prior() p.prior_bkg(0.0,1) p.get_pointing_matrix() moc.write(output_folder+'Tile_'+str(tile_small)+'_'+str(order_small)+'_moc_.fits') print('fitting '+ str(priors[0].nsrc)+' sources \n') print('there are '+ str(priors[0].snpix)+' pixels') %%time from xidplus.stan_fit import PACS #priors[0].upper_lim_map() #priors[0].prior_flux_upper=(priors[0].prior_flux_upper-10.0+0.02)/np.max(priors[0].prf) fit=PACS.all_bands(priors[0],priors[1],iter=1000) Took 13205.7 seconds (3.6 hours) outfile=output_folder+'Tile_'+str(tile_small)+'_'+str(order_small) posterior=xidplus.posterior_stan(fit,priors) xidplus.save(priors,posterior,outfile) post_rep_map = postmaps.replicated_maps(priors, posterior, nrep=2000) band = ['PACS_100', 'PACS_160'] for i, p in enumerate(priors): Bayesian_Pval = postmaps.make_Bayesian_pval_maps(priors[i], post_rep_map[i]) wcs_temp = wcs.WCS(priors[i].imhdu) ra, dec = wcs_temp.wcs_pix2world(priors[i].sx_pix, priors[i].sy_pix, 0) kept_pixels = np.array(moc_routines.sources_in_tile([tile_small], order_small, ra, dec)) Bayesian_Pval[np.invert(kept_pixels)] = np.nan Bayes_map = postmaps.make_fits_image(priors[i], Bayesian_Pval) Bayes_map.writeto(outfile + '_' + band[i] + '_Bayes_Pval.fits', overwrite=True) cat = catalogue.create_PACS_cat(posterior, priors[0], priors[1]) kept_sources = moc_routines.sources_in_tile([tile_small], order_small, priors[0].sra, priors[0].sdec) kept_sources = np.array(kept_sources) cat[1].data = cat[1].data[kept_sources] cat.writeto(outfile + '_PACS_cat.fits', overwrite=True) """ Explanation: Work out what small tiles are in the test large tile file for PACS End of explanation """ %%time from xidplus.numpyro_fit import PACS fit_numpyro=PACS.all_bands(priors) outfile=output_folder+'Tile_'+str(tile_small)+'_'+str(order_small)+'_numpyro' posterior_numpyro=xidplus.posterior_numpyro(fit_numpyro,priors) xidplus.save(priors,posterior_numpyro,outfile) post_rep_map = postmaps.replicated_maps(priors, posterior_numpyro, nrep=2000) band = ['PACS_100', 'PACS_160'] for i, p in enumerate(priors): Bayesian_Pval = postmaps.make_Bayesian_pval_maps(priors[i], post_rep_map[i]) wcs_temp = wcs.WCS(priors[i].imhdu) ra, dec = wcs_temp.wcs_pix2world(priors[i].sx_pix, priors[i].sy_pix, 0) kept_pixels = np.array(moc_routines.sources_in_tile([tile_small], order_small, ra, dec)) Bayesian_Pval[np.invert(kept_pixels)] = np.nan Bayes_map = postmaps.make_fits_image(priors[i], Bayesian_Pval) Bayes_map.writeto(outfile + '_' + band[i] + '_Bayes_Pval_numpyro.fits', overwrite=True) cat = catalogue.create_PACS_cat(posterior_numpyro, priors[0], priors[1]) kept_sources = moc_routines.sources_in_tile([tile_small], order_small, priors[0].sra, priors[0].sdec) 
kept_sources = np.array(kept_sources) cat[1].data = cat[1].data[kept_sources] cat.writeto(outfile + '_PACS_cat_numpyro.fits', overwrite=True) moc.area_sq_deg 100.0*(20.0*np.pi*(1.0/3600.0)**2)/moc.area_sq_deg """ Explanation: You can fit with the numpyro backend. End of explanation """
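# Added sketch, not part of the original script: to make the Gaussian map
# likelihood quoted in the introduction concrete, here is a tiny NumPy
# evaluation of ln p(d|f) for a single band, assuming a diagonal noise
# covariance N_d built from per-pixel instrumental variance plus a constant
# confusion term (and ignoring the global background for simplicity). This is
# an illustration of the formula only, not how XID+/Stan/numpyro evaluate it.
import numpy as np

def gaussian_map_loglike(d, A, f, sigma_inst, sigma_conf):
    """d: observed pixel fluxes, A: pointing matrix (n_pix x n_src),
    f: source fluxes, sigma_inst: per-pixel noise, sigma_conf: scalar."""
    var = sigma_inst**2 + sigma_conf**2          # diagonal of N_d
    resid = d - A.dot(f)                         # d - A f
    return -0.5 * np.sum(resid**2 / var) - 0.5 * np.sum(np.log(var))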
desihub/desisim
doc/nb/simqso-templates.ipynb
bsd-3-clause
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

from desisim.templates import SIMQSO, QSO

import multiprocessing
nproc = multiprocessing.cpu_count() // 2

plt.style.use('seaborn-talk')
%matplotlib inline

"""
Explanation: Simulate QSO spectra.
The purpose of this notebook is to demonstrate how to simulate QSO spectra using simqso. We also compare the results with the default (PCA-based) QSO template-generating code.
End of explanation """ qso = QSO(minwave=simqso.wave.min(), maxwave=simqso.wave.max()) zin, magin = meta['REDSHIFT'].data, meta['MAG'].data %time qflux, qwave, qmeta = qso.make_templates(nmodel, seed=seed, redshift=zin, mag=magin, nocolorcuts=True) assert(np.all(qwave == wave)) assert(np.all(meta['REDSHIFT'].data == qmeta['REDSHIFT'].data)) assert(np.all(meta['MAG'].data == qmeta['MAG'].data)) def compare_templates(nplot=nmodel, ncol=3, xlim=None): """Plot a random sampling of the basis templates.""" if xlim is None: xlim = (3600, 8000) nspec, npix = flux.shape nrow = np.ceil(nplot / ncol).astype('int') these = np.argsort(meta['REDSHIFT'].data) fig, ax = plt.subplots(nrow, ncol, figsize=(4*ncol, 3*nrow), sharey=False, sharex=True) for ii, (thisax, indx) in enumerate(zip(ax.flat, these)): zz = meta['REDSHIFT'].data[indx] mm = meta['MAG'].data[indx] thisax.plot(wave, flux[indx, :], label='SIMQSO') thisax.plot(qwave, qflux[indx, :], alpha=0.7, label='QSO') thisax.set_xlim(xlim) ww = (wave > xlim[0]) * (wave < xlim[1]) ylim = (flux[indx, ww].min(), flux[indx, ww].max()) thisax.set_ylim((0.1, ylim[1])) #thisax.set_xscale('log') thisax.set_yscale('log') thisax.yaxis.set_major_locator(plt.NullLocator()) thisax.yaxis.set_minor_locator(plt.NullLocator()) #if ii < nspec-ncol-1: # thisax.xaxis.set_major_locator(plt.NullLocator()) # thisax.xaxis.set_minor_locator(plt.NullLocator()) thisax.text(0.88, 0.88, 'z={:.2f}\nr={:.2f}'.format(zz, mm), horizontalalignment='center', verticalalignment='center', transform=thisax.transAxes, fontsize=10) if ii == 0: thisax.legend(loc='upper left') #handles, labels = thisax.get_legend_handles_labels() #plt.figlegend(handles, labels, loc=(0.89, 0.88)) fig.subplots_adjust(wspace=0.02, hspace=0.05) compare_templates() """ Explanation: Make some templates using the PCA-based QSO() class and compare them. End of explanation """ nmoremodel = 500 redshift = rand.uniform(2, 4, nmoremodel) %time moreflux, morewave, moremeta, moreqsometa = simqso.make_templates(nmoremodel, seed=seed, redshift=redshift, \ return_qsometa=True, lyaforest=False) %time moreqflux, _, moreqmeta = qso.make_templates(nmoremodel, seed=seed, redshift=redshift, \ mag=moremeta['MAG'].data, lyaforest=False) def qaplot_props(): bins = nmoremodel//15 fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 4), sharey=True) _ = ax1.hist(moremeta['REDSHIFT'], bins=bins) ax1.set_xlabel('Redshift') ax1.set_ylabel('Number of Galaxies') _ = ax2.hist(moremeta['MAG'], bins=bins) ax2.set_xlabel(r'$r_{\rm DECaLS}$') _ = ax3.hist(moreqsometa.data['absMag'], bins=bins) ax3.set_xlabel(r'$M_{1450}$') ax3.set_xlim(ax3.get_xlim()[::-1]) plt.subplots_adjust(wspace=0.1) """ Explanation: Now make more spectra with the redshift priors specified. Turn off the Lyman-alpha forest for speed. End of explanation """ qaplot_props() def qaplot_vsz(): from matplotlib.ticker import MaxNLocator fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4)) ax1.scatter(moremeta['REDSHIFT'], moremeta['MAG'], edgecolor='k', alpha=0.9, s=50) ax1.set_xlabel('Redshift') ax1.set_ylabel(r'$r_{\rm DECaLS}$') ax1.yaxis.set_major_locator(MaxNLocator(integer=True)) ax2.scatter(moremeta['REDSHIFT'], moreqsometa.data['absMag'], edgecolor='k', alpha=0.9, s=50) ax2.set_ylabel(r'$M_{1450}$') ax2.set_xlabel('Redshift') ax2.set_ylim(ax2.get_ylim()[::-1]) plt.subplots_adjust(wspace=0.3) """ Explanation: Show the distribution of redshift, apparent magnitude, and absolute magnitude. 
End of explanation """ qaplot_vsz() def flux2colors(cat): """Convert DECam/WISE fluxes to magnitudes and colors.""" colors = dict() #with warnings.catch_warnings(): # ignore missing fluxes (e.g., for QSOs) # warnings.simplefilter('ignore') colors['g'] = 22.5 - 2.5 * np.log10(cat['FLUX_G']) colors['r'] = 22.5 - 2.5 * np.log10(cat['FLUX_R']) colors['z'] = 22.5 - 2.5 * np.log10(cat['FLUX_Z']) colors['gr'] = colors['g'] - colors['r'] colors['gz'] = colors['g'] - colors['z'] colors['rz'] = colors['r'] - colors['z'] colors['grz'] = 22.5-2.5*np.log10(cat['FLUX_G'] + 0.8 * cat['FLUX_R'] + 0.5 * cat['FLUX_G'] / 2.3) with np.errstate(invalid='ignore'): colors['W1'] = 22.5 - 2.5 * np.log10(cat['FLUX_W1']) colors['W2'] = 22.5 - 2.5 * np.log10(cat['FLUX_W2']) colors['W'] = 22.5 - 2.5 * np.log10(0.75 * cat['FLUX_W1'] + 0.25 * cat['FLUX_W2']) colors['rW'] = colors['r'] - colors['W'] colors['W1W2'] = colors['W1'] - colors['W2'] colors['grzW'] = colors['grz'] - colors['W'] return colors def qso_colorbox(ax, plottype='gr-rz', verts=None): """Draw the QSO selection boxes.""" xlim = ax.get_xlim() ylim = ax.get_ylim() if plottype == 'gr-rz': verts = [(-0.3, 1.3), (1.1, 1.3), (1.1, ylim[0]-0.05), (-0.3, ylim[0]-0.05) ] if plottype == 'r-W1W2': verts = None ax.axvline(x=22.7, ls='--', lw=2, color='k') ax.axhline(y=-0.4, ls='--', lw=2, color='k') if plottype == 'gz-grzW': gzaxis = np.linspace(-0.5, 2.0, 50) ax.plot(gzaxis, np.polyval([1.0, -1.0], gzaxis), ls='--', lw=2, color='k') if verts: ax.add_patch(Polygon(verts, fill=False, ls='--', lw=2, color='k')) def qaplot_colorcolor(old=False): fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(14, 4)) ax1.scatter(colors['rz'], colors['gr'], edgecolor='k', alpha=0.9, s=20, label='SIMQSO') if old: ax1.scatter(qcolors['rz'], qcolors['gr'], edgecolor='k', alpha=0.9, s=20, label='QSO') ax1.set_xlabel('$r - z$') ax1.set_ylabel('$g - r$') ax1.set_xlim(-0.5, 1.5) ax1.set_ylim(-1.0, 1.8) qso_colorbox(ax1, 'gr-rz') ax1.legend(loc='upper right', ncol=2) ax2.scatter(colors['r'], colors['W1W2'], edgecolor='k', alpha=0.9, s=20) ax2.set_xlabel('$r$') ax2.set_ylabel('$W_{1} - W_{2}$') ax2.set_xlim(18, 23.5) ax2.set_ylim(-0.5, 1) qso_colorbox(ax2, 'r-W1W2') ax3.scatter(colors['gz'], colors['grzW'], edgecolor='k', alpha=0.9, s=20) ax3.set_xlabel('$g - z$') ax3.set_ylabel('$grz - W$') ax3.set_xlim(-1, 1.5) ax3.set_ylim(-1, 1.5) qso_colorbox(ax3, 'gz-grzW') plt.subplots_adjust(wspace=0.35) colors = flux2colors(moremeta) qcolors = flux2colors(moreqmeta) """ Explanation: Show apparent and absolute magnitude vs redshift. End of explanation """ qaplot_colorcolor(old=True) """ Explanation: Plot various color-color diagrams with the DESI/QSO selection boundaries overlaid. End of explanation """ nmodel = 9 seed = 5 simqso = SIMQSO(maxwave=4e4) %time flux, wave, meta, qmeta = simqso.make_templates(5, seed=seed, return_qsometa=True) %time ff, ww, mm, qq = simqso.make_templates(input_qsometa=qmeta, return_qsometa=True) meta mm """ Explanation: Demonstrate how to regenerate templates from an input metadata table. End of explanation """
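# Added sketch, not part of the original notebook: a small consistency check
# in the spirit of the asserts used earlier. Regenerating templates from the
# returned QSO metadata should give back the same redshifts and, assuming the
# seed handling is fully deterministic (an assumption worth verifying), the
# same spectra. Using print rather than assert keeps it non-fatal.
print('redshifts match:', np.all(mm['REDSHIFT'].data == meta['REDSHIFT'].data))
print('fluxes match:   ', np.allclose(ff, flux))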
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
ex31-Harmonic Analysis - Monthly Mean Temperature at Orange, Australia.ipynb
mit
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from HA_helpers import *

%matplotlib inline

# Set some parameters to apply to all plots. These can be overridden
import matplotlib
# Plot size to 15" x 7"
matplotlib.rc('figure', figsize = (15, 7))
# Font size to 14
matplotlib.rc('font', size = 14)
# Do not display top and right frame lines
matplotlib.rc('axes.spines', top = False, right = False)
# Remove grid lines
matplotlib.rc('axes', grid = False)
# Set background color to white
matplotlib.rc('axes', facecolor = 'white')

"""
Explanation: Harmonic Analysis - Monthly Mean Temperature
Harmonic analysis is a branch of mathematics concerned with the representation of functions or signals as the superposition of basic waves, and with the study and generalization of the notions of Fourier series and Fourier transforms (i.e. an extended form of Fourier analysis). Many applications of harmonic analysis in science and engineering begin with the idea or hypothesis that complex problems can be reduced to manageable terms by breaking complicated mathematical curves into sums of comparatively simple components. Various researchers in climatology and meteorology have suggested that harmonic analysis is advantageous for studying the seasonal variation or oscillation of meteorological or climatological parameters. The development and mathematical formulation of the method of harmonic analysis have been discussed by a good number of researchers studying different climatological parameters, such as Azzali and Menenti (2001), Yuan and Li (2008), van Loon (1967), Meehl (2006), and Goswami (2000). In this study, the monthly mean temperature data downloaded from the BOM of Australia are used in the harmonic analysis.
End of explanation
"""

df = pd.read_csv('data\Orange.csv', usecols=[4], )
df.columns = ['Tmean']
df.index = pd.date_range('1976-01', '2019-01', freq='M')
df = df['1976':'2018']

"""
Explanation: 1. Read data
The site location information:
- Lat: 33.32° S
- Lon: 149.08° E
- Elevation: 922 m

Use pandas to import the data from the CSV file and add a datetime index.
End of explanation
"""

fft, freq = fourier_transform(df[['Tmean']].values[:,0], 1/12.)

"""
Explanation: 2. Carry out harmonic analysis
We now compute the Fourier transform and the spectral density of the signal.
2.1 Calculate the Fourier Transform
Use scipy.fftpack to get the coefficients and wave frequencies. Because we use monthly data, we set the sampling frequency to 1/12.
End of explanation
"""

spd, pos_freqs = spectrum(fft, freq)

f, ax = plt.subplots(1,1)
ax.plot(pos_freqs, spd)
_ = ax.set_xlabel('Frequency(units: cycles per length of domain)', fontsize=14)
_ = ax.set_title('Spectral density', fontsize=16)

"""
Explanation: 2.2 Calculate the spectral density
We now plot the power spectral density of our signal as a function of frequency. We observe a peak at f=44. Now, we cut out frequencies higher than the fundamental frequency.
End of explanation
"""

data_f = np.real(inverse_fourier_transform(fft, freq, max_freq=44))
df['Tmean_fft'] = data_f

"""
Explanation: 2.3 Inverse Fourier Transform
Perform an inverse FFT to convert the modified Fourier transform back to the temporal domain.
This way, we recover a signal that mainly contains the fundamental frequency (cutoff = 44).
End of explanation
"""

ax = df[['Tmean', 'Tmean_fft']].plot()
_ = ax.set_xlim(df.index.min(), df.index.max())
_ = ax.set_xlabel('Date')
_ = ax.set_title('Monthly Mean Temperature ($^oC$, 1976-2018)', fontsize=16)

"""
Explanation: 3. Visualize
Plot the original monthly mean temperature series together with the reconstructed (filtered) signal. Here, the Hilbert transform is applied, which excludes the negative part of the FFT spectrum; taking np.real(...) above then returns the real-valued, low-frequency reconstruction.
End of explanation
"""
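# A small follow-up check (an addition, not part of the original notebook): how much of the
# monthly variance is captured by the retained harmonics, and the size of the annual cycle
# estimated directly from the data with numpy's FFT. Only columns already defined above
# ('Tmean' and 'Tmean_fft') are used.
residual = df['Tmean'] - df['Tmean_fft']
explained = 1.0 - residual.var() / df['Tmean'].var()
print('Fraction of variance explained by retained harmonics: {:.3f}'.format(explained))

# Amplitude of the annual harmonic (period = 12 months) from a plain FFT of the series.
n = len(df['Tmean'])
spec = np.fft.rfft(df['Tmean'].values - df['Tmean'].values.mean())
annual_amp = 2.0 * np.abs(spec[n // 12]) / n  # index n/12 corresponds to a 12-month period
print('Approximate annual-cycle amplitude: {:.2f} degC'.format(annual_amp))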
Jackporter415/phys202-2015-work
assignments/assignment05/InteractEx04.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from IPython.html.widgets import interact, interactive, fixed from IPython.display import display """ Explanation: Interact Exercise 4 Imports End of explanation """ def random_line(m, b, sigma, size=10): """Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0] Parameters ---------- m : float The slope of the line. b : float The y-intercept of the line. sigma : float The standard deviation of the y direction normal distribution noise. size : int The number of points to create for the line. Returns ------- x : array of floats The array of x values for the line with `size` points. y : array of floats The array of y values for the lines with `size` points. """ x = np.linspace(-1.0,1.0,size) if sigma == 0.0: y = np.array([i*m+b for i in x]) else: y = np.array([i*m+b+np.random.normal(0,sigma**2) for i in x]) return x,y m = 0.0; b = 1.0; sigma=0.0; size=3 x, y = random_line(m, b, sigma, size) assert len(x)==len(y)==size assert list(x)==[-1.0,0.0,1.0] assert list(y)==[1.0,1.0,1.0] sigma = 1.0 m = 0.0; b = 0.0 size = 500 x, y = random_line(m, b, sigma, size) assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1) assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1) """ Explanation: Line with Gaussian noise Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$: $$ y = m x + b + N(0,\sigma^2) $$ Be careful about the sigma=0.0 case. End of explanation """ def ticks_out(ax): """Move the ticks to the outside of the box.""" ax.get_xaxis().set_tick_params(direction='out', width=1, which='both') ax.get_yaxis().set_tick_params(direction='out', width=1, which='both') def plot_random_line(m, b, sigma, size=10, color='red'): """Plot a random line with slope m, intercept b and size points.""" x,y = random_line(m,b,sigma,size = 10) plt.plot(x,y,color) ax = plt.gca() plt.title('Soliton Wave') plt.xlabel('X') plt.ylabel('Y') ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.get_xaxis().tick_bottom() ax.axes.get_yaxis().tick_left() ticks_out(ax) plt.xlim(-1.1,1.1) plt.ylim(-10.0,10.0) plot_random_line(5.0, -1.0, 2.0, 50) assert True # use this cell to grade the plot_random_line function """ Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function: Make the marker color settable through a color keyword argument with a default of red. Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$. Customize your plot to make it effective and beautiful. End of explanation """ interact(plot_random_line, m = (-10.0,10.0,0.1), b = (-5.0,5.0,0.1), sigma = (0.0,5.0,0.1),size = (10,100,10),color = {'red': 'r','green':'g','blue':'b'}) #### assert True # use this cell to grade the plot_random_line interact """ Explanation: Use interact to explore the plot_random_line function using: m: a float valued slider from -10.0 to 10.0 with steps of 0.1. b: a float valued slider from -5.0 to 5.0 with steps of 0.1. sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01. size: an int valued slider from 10 to 100 with steps of 10. color: a dropdown with options for red, green and blue. End of explanation """
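# Side note (an addition, not part of the original assignment): numpy's normal sampler can
# generate all the noise at once, and its second argument is the standard deviation. For
# y = m*x + b + N(0, sigma**2) the scale to pass is therefore sigma itself; the graded
# tests above only exercise sigma = 0.0 and sigma = 1.0, where sigma and sigma**2 coincide,
# so both variants pass them.
def random_line_vectorized(m, b, sigma, size=10):
    x = np.linspace(-1.0, 1.0, size)
    noise = np.random.normal(0.0, sigma, size)  # scale 0.0 simply returns zeros
    return x, m * x + b + noise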
wbarfuss/pymofa
tutorial/02_LocalParallelization.ipynb
mit
from ipyparallel import Client import os c = Client() view = c[:] print(c.ids) %%px def find(name, path): for root, dirs, files in os.walk(path): if name in files: return root path = find('02_LocalParallelization.ipynb', '/home/') print(path) os.chdir(path) """ Explanation: How to locally run parallel code with mpi4py in an IPython notebook: The prerequisite for this is a working installation of some MPI distribution. Using Ubuntu or some derivative, I recommend using OpenMPI which can be istalled from the repository by means of the following packages: libopenmpi-dev, openmpi-bin, openmpi-doc. Now, you can already run MPI enabled code from you shell by calling $mpirun -n [numbmer_of_threads] python [script_to_run.py] To use MPI with iPython, one has to install ipyparallel: via pip: $pip install ipyparallel via conda: $conda install ipyparallel and then enable the Clusters tab in ipython via $ ipcluster nbextension enable To make MPI acessable via mpi4py in an ipython notebook, one has to do the following: open one shell and start the ipcontroller: $ipcontroller open another shell and start a number of engines: $mpirun -n [number of engines] ipengine --mpi=mpi4py and then connect to the engines via the following fragment of code: End of explanation """ %%px from mpi4py import MPI com = MPI.COMM_WORLD print(com.Get_rank()) """ Explanation: Now, to make the code run on all of our engines (and not just on one), the following cells have to start with the parallel magic command %%px End of explanation """ %%px import numpy as np import matplotlib.pyplot as plt def predprey_model(prey_birth_rate=0.1, prey_mortality=0.1, predator_efficiency=0.1, predator_death_rate=0.01, initial_prey=1., initial_predators=1., time_length=1000): """Discrete predetor prey model.""" A = -1 * np.ones(time_length) B = -1 * np.ones(time_length) A[0] = initial_prey B[0] = initial_predators for t in range(1, time_length): A[t] = A[t-1] + prey_birth_rate * A[t-1] - prey_mortality * B[t-1]*A[t-1] B[t] = B[t-1] + predator_efficiency * B[t-1]*A[t-1] - predator_death_rate * B[t-1] +\ 0.02 * (0.5 - np.random.rand()) return A, B #preys, predators = predprey_model() #plt.plot(preys, label="preys") #plt.plot(predators, label="predators") #plt.legend() #plt.show() """ Explanation: Now, that we have MPI running, and mpi4py recognizing the nodes and their ranks, we can continue with the predator prey exercise, that we know from the first tutorial. First, define the model: End of explanation """ %%px # imports from pymofa.experiment_handling import experiment_handling as eh import itertools as it import pandas as pd # import cPickle #Definingh the experiment execution function # it gets paramater you want to investigate, plus `filename` as the last parameter def RUN_FUNC(prey_birth_rate=0.1, prey_mortality=0.1, predator_efficiency=0.1, predator_death_rate=0.01, initial_prey=1., initial_predators=1., time_length=1000, filename='./'): """Insightful docstring.""" print(prey_birth_rate, prey_mortality, predator_efficiency, predator_death_rate, initial_prey, initial_predators, time_length) # one could also do more complicated stuff here, e.g. 
    # drawing something from a random distribution

    # running the model
    # TODO: there seems to be a problem passing arguments to the function
    #preys, predators = predprey_model(prey_birth_rate, prey_mortality,
    #                                  predator_efficiency, predator_death_rate,
    #                                  initial_prey, initial_predators,
    #                                  time_length)
    preys, predators = predprey_model( )
    print(preys)

    # preparing the data
    res = pd.DataFrame({"preys": np.array(preys),
                        "predators": np.array(predators)})

    # Save Result
    res.to_pickle(filename)

    # determine exit status (if something went wrong)
    # if exit status > 0 == run passed
    # if exit status < 0 == run failed
    exit_status = 1

    # RUN_FUNC needs to return exit_status
    return exit_status

"""
Explanation: Then import the experiment_handling class from pymofa and define a run function:
End of explanation
"""

%%px
# Path where to store the simulated data
SAVE_PATH_RAW = "./dummy/pymofatutorial"

# Parameter combinations to investigate
prey_birth_rate = [0.1]
predator_death_rate = [0.1]
initial_pop = [1.]

PARAM_COMBS = list(it.product(prey_birth_rate, predator_death_rate, initial_pop))

# Sample Size
SAMPLE_SIZE = 5

# INDEX
INDEX = {0: 'prey_birth_rate', 1: 'predator_death_rate', 2: 'initial_prey'}

# initiate handle instance with experiment variables
handle = eh(SAMPLE_SIZE, PARAM_COMBS, INDEX, SAVE_PATH_RAW)

"""
Explanation: Specify the necessary parameters, generate their combinations and feed them to an experiment handle:
End of explanation
"""

%%px
# Compute experiments raw data
handle.compute(RUN_FUNC)

"""
Explanation: And finally run the model - now in parallel:
End of explanation
"""
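# Optional post-processing sketch (an addition, not part of the original tutorial): load one
# of the raw result files written by RUN_FUNC and plot it. The exact directory layout and
# file naming under SAVE_PATH_RAW are handled by pymofa's experiment_handling, so the glob
# pattern below is an assumption; adjust it to whatever files actually appear there.
import glob
import os

import pandas as pd
import matplotlib.pyplot as plt

SAVE_PATH_RAW = "./dummy/pymofatutorial"  # same prefix as used on the engines above

candidates = [f for f in glob.glob(SAVE_PATH_RAW + "*") if os.path.isfile(f)]
if candidates:
    res = pd.read_pickle(candidates[0])
    res[["preys", "predators"]].plot()
    plt.show()
else:
    print("No result files found under", SAVE_PATH_RAW)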
JJINDAHOUSE/deep-learning
autoencoder/Simple_Autoencoder.ipynb
mit
%matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) """ Explanation: A Simple Autoencoder We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data. In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset. End of explanation """ img = mnist.train.images[2] plt.imshow(img.reshape((28, 28)), cmap='Greys_r') """ Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits. End of explanation """ # Size of the encoding layer (the hidden layer) encoding_dim = 32 # feel free to change this value # Input and target placeholders inputs_ = targets_ = # Output of hidden layer, single fully connected layer here with ReLU activation encoded = # Output layer logits, fully connected layer with no activation logits = # Sigmoid output from logits decoded = # Sigmoid cross-entropy loss loss = # Mean of the loss cost = # Adam optimizer opt = """ Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input. Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function. End of explanation """ # Create the session sess = tf.Session() """ Explanation: Training End of explanation """ epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) feed = {inputs_: batch[0], targets_: batch[0]} batch_cost, _ = sess.run([cost, opt], feed_dict=feed) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) """ Explanation: Here I'll write a bit of code to train the network. 
I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""

fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})

for images, row in zip([in_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

fig.tight_layout(pad=0.1)

sess.close()

"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
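# For reference (an addition; the notebook leaves this as an exercise): one way the earlier
# graph-definition cell could be filled in, using tf.layers.dense as suggested there. The
# 0.001 learning rate for Adam is an assumption, not something specified by the exercise.
inputs_ = tf.placeholder(tf.float32, (None, 784), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 784), name='targets')

encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
logits = tf.layers.dense(encoded, 784, activation=None)
decoded = tf.nn.sigmoid(logits, name='output')

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)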
bccp/imaginglss-notebooks
BrickInvestigation.ipynb
artistic-2.0
from imaginglss.analysis import completeness from imaginglss.analysis import targetselection from imaginglss.utils.npyquery import Column as C b = dr.brickindex.get_brick(dr.brickindex.search_by_name('2445p072')) tractor = dr.catalogue.open(b) sigma = {'r':5, 'z':5, 'g':5} LRG = targetselection.LRG(tractor) QSO = targetselection.QSO(tractor) ELG = completeness.ELG(sigma)(targetselection.ELG(tractor)) BGS = completeness.BGS(sigma)(targetselection.BGS(tractor)) depth = dr.read_depths((tractor['RA'], tractor['DEC']), 'rgz') """ Explanation: Matching http://legacysurvey.org/viewer/?ra=244.6758&dec=7.3071&zoom=13&layer=decals-dr2 End of explanation """ _ = hist(tractor['DECAM_MW_TRANSMISSION'][:, 4] / depth['DECAM_MW_TRANSMISSION'][:, 4] - 1, range=(-.01, .01), bins=100, log=True) xlabel('Relative Discrepency') """ Explanation: MW_TRANSMISION in Tractor Catalogue and from Tractor Images End of explanation """ _ = hist(tractor['DECAM_DEPTH'][:, 4] / depth['DECAM_DEPTH'][:, 4] - 1, range=(-0.04, 0.04), bins=100, log=True) xlabel('Relative Discrepency') #loglog() #bad = abs(tractor['DECAM_DEPTH'][:, 4] / depth['DECAM_FLUX_IVAR'][:, 4] - 1) > 0.01 #print bad.sum(), len(tractor) """ Explanation: DECAM_DEPTH in Tractor Catalogue and from Tractor Images End of explanation """ figure(figsize=(8, 8)) rimg = dr.images['image']['r'] gimg = dr.images['image']['g'] zimg = dr.images['image']['z'] composite = array([ zimg.open(b).clip(0, 0.05), rimg.open(b).clip(0, 0.05), gimg.open(b).clip(0, 0.05), ]).transpose((1,2,0)) composite /= composite.max() #plot(tractor['RA'][bad], tractor['DEC'][bad], '+', mfc='none', mew=2, mec='yellow', label='bad') plot(LRG['RA'], LRG['DEC'], 'o', mfc='none', mew=2, mec='cyan', label='LRG') plot(QSO['RA'], QSO['DEC'], 'o', mfc='none', mew=2, mec='blue', label='QSO') plot(ELG['RA'], ELG['DEC'], 'o', mfc='none', mew=2, mec='white', label='ELG') plot(BGS['RA'], BGS['DEC'], 'o', mfc='none', mew=2, mec='green', label='BGS') plot(decals.tycho['RA'], decals.tycho['DEC'], 'x', markersize=10, mew=2, mfc='none', mec='gray') imshow(composite, extent=(b.ra2, b.ra1, b.dec2, b.dec1)) xlabel('RA') ylabel('DEC') legend() %%bash git stash git pull git add BrickInvestigation.ipynb git commit -m "update BrickInvestigation.ipynb" git push """ Explanation: Slightly Mislocated Tycho stars And what's going on with the objects? End of explanation """
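# Quick tally (an addition): how many objects in the brick examined above pass each target
# selection. The selections are indexed by column elsewhere in this notebook, so taking
# len() of a column gives the number of selected objects; nothing above is modified.
for name, sel in [('LRG', LRG), ('ELG', ELG), ('QSO', QSO), ('BGS', BGS)]:
    print('{}: {} of {} objects'.format(name, len(sel['RA']), len(tractor['RA'])))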
atlury/deep-opencl
DL0110EN/3.3.3practice_predicting_MNIST.ipynb
lgpl-3.0
!conda install -y torchvision import torch import torch.nn as nn import torchvision.transforms as transforms import torchvision.datasets as dsets import matplotlib.pylab as plt import numpy as np """ Explanation: <div class="alert alert-block alert-info" style="margin-top: 20px"> <a href="http://cocl.us/pytorch_link_top"><img src = "http://cocl.us/Pytorch_top" width = 950, align = "center"></a> <img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 200, align = "center"> <h1 align=center><font size = 5>Practice: Softmax Classifer Using Sequential </font></h1> # Table of Contents In this lab, you will use a single layer Softmax to classify handwritten digits from the MNIST database. <div class="alert alert-block alert-info" style="margin-top: 20px"> <li><a href="#ref0">Helper Functions</a></li> <li><a href="#ref1">Prepare Data</a></li> <li><a href="#ref2">Create a Softmax classifier Using Sequential</a></li> <li><a href="#ref3">Criterion function, Optimizer, and Train the Model</a></li> <li><a href="#ref4">Analyze Results</a></li> <br> <p></p> Estimated Time Needed: <strong>25 min</strong> </div> <hr> <a id="ref0"></a> <h2 align=center>Helper functions </h2> End of explanation """ def show_data(data_sample): plt.imshow(data_sample[0].numpy().reshape(28,28),cmap='gray') #print(data_sample[1].item()) plt.title('y= '+ str(data_sample[1].item())) """ Explanation: Use the following function to visualize data: End of explanation """ train_dataset=dsets.MNIST(root='./data', train=True, download=True, transform=transforms.ToTensor()) train_dataset """ Explanation: <a id="ref1"></a> <h2 align=center>Prepare Data </h2> Load the training dataset by setting the parameters <code>train</code> to <code>True</code> and convert it to a tensor by placing a transform object in the argument <code>transform</code>. End of explanation """ validation_dataset=dsets.MNIST(root='./data', train=False, download=True, transform=transforms.ToTensor()) validation_dataset """ Explanation: Load the testing dataset by setting the parameters train <code>False</code> and convert it to a tensor by placing a transform object in the argument <code>transform</code>. End of explanation """ train_dataset[0][1].type() """ Explanation: Note that the data type is long: End of explanation """ train_dataset[3][1] """ Explanation: Data Visualization Each element in the rectangular tensor corresponds to a number that represents a pixel intensity as demonstrated by the following image: <img src = "https://ibm.box.com/shared/static/7024mnculm8w2oh0080y71cpa48cib2k.png" width = 550, align = "center"></a> Print out the third label: End of explanation """ show_data(train_dataset[3]) """ Explanation: Plot the 3rd sample: End of explanation """ show_data(train_dataset[2]) """ Explanation: You see its a 1. Now, plot the second sample: End of explanation """ train_dataset[0][0].shape """ Explanation: The Softmax function requires vector inputs. If you see the vector shape, you'll note it's 28x28. End of explanation """ input_dim=28*28 output_dim=10 input_dim """ Explanation: Flatten the tensor as shown in this image: <img src = "https://ibm.box.com/shared/static/0cjl5inks3d8ay0sckgywowc3hw2j1sa.gif" width = 550, align = "center"></a> The size of the tensor is now 784. <img src = "https://ibm.box.com/shared/static/lhezcvgm82gtdewooueopxp98ztq2pbv.png" width = 550, align = "center"></a> Set the input size and output size. 
<a id="ref3"></a> Create a Softmax Classifier by Using Sequential End of explanation """ print('W:',list(model.parameters())[0].size()) print('b',list(model.parameters())[1].size()) """ Explanation: Double-click here for the solution. <!-- model=nn.Sequential(nn.Linear(input_dim,output_dim)) --> <a id="ref3"></a> <h2>Define the Softmax Classifier, Criterion function, Optimizer, and Train the Model</h2> View the size of the model parameters: End of explanation """ criterion=nn.CrossEntropyLoss() """ Explanation: Cover the model parameters for each class to a rectangular grid: <a> <img src = "https://ibm.box.com/shared/static/9cuuwsvhwygbgoogmg464oht1o8ubkg2.gif" width = 550, align = "center"></a> Plot the model parameters for each class: Loss function: End of explanation """ learning_rate=0.1 optimizer=torch.optim.SGD(model.parameters(), lr=learning_rate) """ Explanation: Optimizer class: End of explanation """ train_loader=torch.utils.data.DataLoader(dataset=train_dataset,batch_size=100) validation_loader=torch.utils.data.DataLoader(dataset=validation_dataset,batch_size=5000) """ Explanation: Define the dataset loader: End of explanation """ n_epochs=10 loss_list=[] accuracy_list=[] N_test=len(validation_dataset) #n_epochs for epoch in range(n_epochs): for x, y in train_loader: #clear gradient optimizer.zero_grad() #make a prediction z=model(x.view(-1,28*28)) # calculate loss loss=criterion(z,y) # calculate gradients of parameters loss.backward() # update parameters optimizer.step() correct=0 #perform a prediction on the validation data for x_test, y_test in validation_loader: z=model(x_test.view(-1,28*28)) _,yhat=torch.max(z.data,1) correct+=(yhat==y_test).sum().item() accuracy=correct/N_test accuracy_list.append(accuracy) loss_list.append(loss.data) accuracy_list.append(accuracy) """ Explanation: Train the model and determine validation accuracy: End of explanation """ fig, ax1 = plt.subplots() color = 'tab:red' ax1.plot(loss_list,color=color) ax1.set_xlabel('epoch',color=color) ax1.set_ylabel('total loss',color=color) ax1.tick_params(axis='y', color=color) ax2 = ax1.twinx() color = 'tab:blue' ax2.set_ylabel('accuracy', color=color) ax2.plot( accuracy_list, color=color) ax2.tick_params(axis='y', labelcolor=color) fig.tight_layout() """ Explanation: <a id="ref3"></a> <h2 align=center>Analyze Results</h2> Plot the loss and accuracy on the validation data: End of explanation """ count=0 for x,y in validation_dataset: z=model(x.reshape(-1,28*28)) _,yhat=torch.max(z,1) if yhat!=y: show_data((x,y)) plt.show() print("yhat:",yhat) count+=1 if count>=5: break """ Explanation: Plot the first five misclassified samples: End of explanation """
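# For reference (an addition; the notebook keeps this in its "Double-click here for the
# solution" comment): the single-layer softmax classifier used above can be built with
reference_model = nn.Sequential(nn.Linear(input_dim, output_dim))
# No explicit softmax layer is needed for training, because nn.CrossEntropyLoss applies
# log-softmax internally; softmax is only applied when class probabilities are required.
print(reference_model)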
tensorflow/docs-l10n
site/en-snapshot/io/tutorials/orc.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2021 The TensorFlow Authors. End of explanation """ !pip install tensorflow-io import tensorflow as tf import tensorflow_io as tfio """ Explanation: Apache ORC Reader <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/io/tutorials/orc"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/orc.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/io/blob/master/docs/tutorials/orc.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/orc.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview Apache ORC is a popular columnar storage format. tensorflow-io package provides a default implementation of reading Apache ORC files. Setup Install required packages, and restart runtime End of explanation """ !curl -OL https://github.com/tensorflow/io/raw/master/tests/test_orc/iris.orc !ls -l iris.orc """ Explanation: Download a sample dataset file in ORC The dataset you will use here is the Iris Data Set from UCI. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. It has 4 attributes: (1) sepal length, (2) sepal width, (3) petal length, (4) petal width, and the last column contains the class label. End of explanation """ dataset = tfio.IODataset.from_orc("iris.orc", capacity=15).batch(1) """ Explanation: Create a dataset from the file End of explanation """ for item in dataset.take(1): print(item) """ Explanation: Examine the dataset: End of explanation """ feature_cols = ["sepal_length", "sepal_width", "petal_length", "petal_width"] label_cols = ["species"] # select feature columns feature_dataset = tfio.IODataset.from_orc("iris.orc", columns=feature_cols) # select label columns label_dataset = tfio.IODataset.from_orc("iris.orc", columns=label_cols) """ Explanation: Let's walk through an end-to-end example of tf.keras model training with ORC dataset based on iris dataset. 
Data preprocessing Configure which columns are features, and which column is label: End of explanation """ vocab_init = tf.lookup.KeyValueTensorInitializer( keys=tf.constant(["virginica", "versicolor", "setosa"]), values=tf.constant([0, 1, 2], dtype=tf.int64)) vocab_table = tf.lookup.StaticVocabularyTable( vocab_init, num_oov_buckets=4) label_dataset = label_dataset.map(vocab_table.lookup) dataset = tf.data.Dataset.zip((feature_dataset, label_dataset)) dataset = dataset.batch(1) def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features), axis=1) return features, labels dataset = dataset.map(pack_features_vector) """ Explanation: A util function to map species to float numbers for model training: End of explanation """ model = tf.keras.Sequential( [ tf.keras.layers.Dense( 10, activation=tf.nn.relu, input_shape=(4,) ), tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3), ] ) model.compile(optimizer="adam", loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=["accuracy"]) model.fit(dataset, epochs=5) """ Explanation: Build, compile and train the model Finally, you are ready to build the model and train it! You will build a 3 layer keras model to predict the class of the iris plant from the dataset you just processed. End of explanation """
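# A quick sanity check (an addition, not part of the original tutorial): run the trained
# model on a single batch from the same ORC-backed dataset and compare the predicted class
# indices against the encoded labels. Only objects defined above are reused.
for features_batch, label_batch in dataset.take(1):
    predictions = tf.argmax(model(features_batch), axis=-1)
    print("predicted class index:", predictions.numpy())
    print("label batch:", label_batch)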
zhuanxuhit/deep-learning
intro-to-tensorflow/.ipynb_checkpoints/intro_to_tensorflow-checkpoint.ipynb
mit
import hashlib import os import pickle from urllib.request import urlretrieve import numpy as np from PIL import Image from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelBinarizer from sklearn.utils import resample from tqdm import tqdm from zipfile import ZipFile print('All modules imported.') """ Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1> <img src="image/notmnist.png"> In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in differents font. The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in! To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported". End of explanation """ def download(url, file): """ Download file from <url> :param url: URL to file :param file: Local file path """ if not os.path.isfile(file): print('Downloading ' + file + '...') urlretrieve(url, file) print('Download Finished') # Download the training and test dataset. download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip') download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip') # Make sure the files aren't corrupted assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\ 'notMNIST_train.zip file is corrupted. Remove the file and try again.' assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\ 'notMNIST_test.zip file is corrupted. Remove the file and try again.' # Wait until you see that all files have been downloaded. print('All files downloaded.') def uncompress_features_labels(file): """ Uncompress features and labels from a zip file :param file: The zip file to extract the data from """ features = [] labels = [] with ZipFile(file) as zipf: # Progress Bar filenames_pbar = tqdm(zipf.namelist(), unit='files') # Get features and labels from all files for filename in filenames_pbar: # Check if the file is a directory if not filename.endswith('/'): with zipf.open(filename) as image_file: image = Image.open(image_file) image.load() # Load image data as 1 dimensional array # We're using float32 to save on memory space feature = np.array(image, dtype=np.float32).flatten() # Get the the letter from the filename. This is the letter of the image. label = os.path.split(filename)[1][0] features.append(feature) labels.append(label) return np.array(features), np.array(labels) # Get the features and labels from the zip files train_features, train_labels = uncompress_features_labels('notMNIST_train.zip') test_features, test_labels = uncompress_features_labels('notMNIST_test.zip') # Limit the amount of data to work with a docker container docker_size_limit = 150000 train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit) # Set flags for feature engineering. This will prevent you from skipping an important step. 
is_features_normal = False is_labels_encod = False # Wait until you see that all features and labels have been uncompressed. print('All features and labels uncompressed.') """ Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J). End of explanation """ # Problem 1 - Implement Min-Max scaling for grayscale image data def normalize_grayscale(image_data): """ Normalize the image data with Min-Max scaling to a range of [0.1, 0.9] :param image_data: The image data to be normalized :return: Normalized image data """ # TODO: Implement Min-Max scaling for grayscale image data ### DON'T MODIFY ANYTHING BELOW ### # Test Cases np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])), [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314, 0.125098039216, 0.128235294118, 0.13137254902, 0.9], decimal=3) np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])), [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078, 0.896862745098, 0.9]) if not is_features_normal: train_features = normalize_grayscale(train_features) test_features = normalize_grayscale(test_features) is_features_normal = True print('Tests Passed!') if not is_labels_encod: # Turn labels into numbers and apply One-Hot Encoding encoder = LabelBinarizer() encoder.fit(train_labels) train_labels = encoder.transform(train_labels) test_labels = encoder.transform(test_labels) # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32 train_labels = train_labels.astype(np.float32) test_labels = test_labels.astype(np.float32) is_labels_encod = True print('Labels One-Hot Encoded') assert is_features_normal, 'You skipped the step to normalize the features' assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels' # Get randomized datasets for training and validation train_features, valid_features, train_labels, valid_labels = train_test_split( train_features, train_labels, test_size=0.05, random_state=832289) print('Training features and labels randomized and split.') # Save the data for easy access pickle_file = 'notMNIST.pickle' if not os.path.isfile(pickle_file): print('Saving data to pickle file...') try: with open('notMNIST.pickle', 'wb') as pfile: pickle.dump( { 'train_dataset': train_features, 'train_labels': train_labels, 'valid_dataset': valid_features, 'valid_labels': valid_labels, 'test_dataset': test_features, 'test_labels': test_labels, }, pfile, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise print('Data cached in pickle file.') """ Explanation: <img src="image/Mean Variance - Image.png" style="height: 75%;width: 75%; position: relative; right: 5%"> Problem 1 The first problem involves normalizing the features for your training and test data. Implement Min-Max scaling in the normalize() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9. Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255. 
Min-Max Scaling: $ X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}} $ If you're having trouble solving problem 1, you can view the solution here. End of explanation """ %matplotlib inline # Load the modules import pickle import math import numpy as np import tensorflow as tf from tqdm import tqdm import matplotlib.pyplot as plt # Reload the data pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: pickle_data = pickle.load(f) train_features = pickle_data['train_dataset'] train_labels = pickle_data['train_labels'] valid_features = pickle_data['valid_dataset'] valid_labels = pickle_data['valid_labels'] test_features = pickle_data['test_dataset'] test_labels = pickle_data['test_labels'] del pickle_data # Free up memory print('Data and modules loaded.') """ Explanation: Checkpoint All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed. End of explanation """ # All the pixels in the image (28 * 28 = 784) features_count = 784 # All the labels labels_count = 10 # TODO: Set the features and labels tensors # features = # labels = # TODO: Set the weights and biases tensors # weights = # biases = ### DON'T MODIFY ANYTHING BELOW ### #Test Cases from tensorflow.python.ops.variables import Variable assert features._op.name.startswith('Placeholder'), 'features must be a placeholder' assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder' assert isinstance(weights, Variable), 'weights must be a TensorFlow variable' assert isinstance(biases, Variable), 'biases must be a TensorFlow variable' assert features._shape == None or (\ features._shape.dims[0].value is None and\ features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect' assert labels._shape == None or (\ labels._shape.dims[0].value is None and\ labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect' assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect' assert biases._variable._shape == (10), 'The shape of biases is incorrect' assert features._dtype == tf.float32, 'features must be type float32' assert labels._dtype == tf.float32, 'labels must be type float32' # Feed dicts for training, validation, and test session train_feed_dict = {features: train_features, labels: train_labels} valid_feed_dict = {features: valid_features, labels: valid_labels} test_feed_dict = {features: test_features, labels: test_labels} # Linear Function WX + b logits = tf.matmul(features, weights) + biases prediction = tf.nn.softmax(logits) # Cross entropy cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1) # Training loss loss = tf.reduce_mean(cross_entropy) # Create an operation that initializes all variables init = tf.global_variables_initializer() # Test Cases with tf.Session() as session: session.run(init) session.run(loss, feed_dict=train_feed_dict) session.run(loss, feed_dict=valid_feed_dict) session.run(loss, feed_dict=test_feed_dict) biases_data = session.run(biases) assert not np.count_nonzero(biases_data), 'biases must be zeros' print('Tests Passed!') # Determine if the predictions are correct is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1)) # Calculate the accuracy of the predictions accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32)) print('Accuracy function 
created.') """ Explanation: Problem 2 Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer. <img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%"> For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network. For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors: - features - Placeholder tensor for feature data (train_features/valid_features/test_features) - labels - Placeholder tensor for label data (train_labels/valid_labels/test_labels) - weights - Variable Tensor with random numbers from a truncated normal distribution. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help. - biases - Variable Tensor with all zeros. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help. If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here. End of explanation """ # Change if you have memory restrictions batch_size = 128 # TODO: Find the best parameters for each configuration # epochs = # learning_rate = ### DON'T MODIFY ANYTHING BELOW ### # Gradient Descent optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # The accuracy measured against the validation set validation_accuracy = 0.0 # Measurements use for graphing loss and accuracy log_batch_step = 50 batches = [] loss_batch = [] train_acc_batch = [] valid_acc_batch = [] with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer and get loss _, l = session.run( [optimizer, loss], feed_dict={features: batch_features, labels: batch_labels}) # Log every 50 batches if not batch_i % log_batch_step: # Calculate Training and Validation accuracy training_accuracy = session.run(accuracy, feed_dict=train_feed_dict) validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict) # Log batches previous_batch = batches[-1] if batches else 0 batches.append(log_batch_step + previous_batch) loss_batch.append(l) train_acc_batch.append(training_accuracy) valid_acc_batch.append(validation_accuracy) # Check accuracy against Validation data validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict) loss_plot = plt.subplot(211) loss_plot.set_title('Loss') loss_plot.plot(batches, loss_batch, 'g') loss_plot.set_xlim([batches[0], batches[-1]]) acc_plot = plt.subplot(212) acc_plot.set_title('Accuracy') acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy') acc_plot.plot(batches, valid_acc_batch, 
'x', label='Validation Accuracy') acc_plot.set_ylim([0, 1.0]) acc_plot.set_xlim([batches[0], batches[-1]]) acc_plot.legend(loc=4) plt.tight_layout() plt.show() print('Validation accuracy at {}'.format(validation_accuracy)) """ Explanation: <img src="image/Learn Rate Tune - Image.png" style="height: 70%;width: 70%"> Problem 3 Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy. Parameter configurations: Configuration 1 * Epochs: 1 * Learning Rate: * 0.8 * 0.5 * 0.1 * 0.05 * 0.01 Configuration 2 * Epochs: * 1 * 2 * 3 * 4 * 5 * Learning Rate: 0.2 The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed. If you're having trouble solving problem 3, you can view the solution here. End of explanation """ ### DON'T MODIFY ANYTHING BELOW ### # The accuracy measured against the test set test_accuracy = 0.0 with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer _ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels}) # Check accuracy against Test data test_accuracy = session.run(accuracy, feed_dict=test_feed_dict) assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy) print('Nice Job! Test Accuracy is {}'.format(test_accuracy)) """ Explanation: Test You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. End of explanation """
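# Possible ways to fill in the earlier TODO cells (a sketch; the lab links its own
# solutions). The Min-Max scaling follows the formula given above with a = 0.1, b = 0.9 and
# raw pixel values in [0, 255]; the epoch/learning-rate pair is just one of the options
# listed in Problem 3, not the unique "right" answer.

def normalize_grayscale(image_data):
    a, b = 0.1, 0.9
    grayscale_min, grayscale_max = 0, 255
    return a + (image_data - grayscale_min) * (b - a) / (grayscale_max - grayscale_min)

# Problem 2 tensors
features = tf.placeholder(tf.float32, [None, features_count])
labels = tf.placeholder(tf.float32, [None, labels_count])
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))

# Problem 3 parameters (one reasonable choice from the listed options)
epochs = 5
learning_rate = 0.2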
GoogleCloudPlatform/asl-ml-immersion
notebooks/docker_and_kubernetes/solutions/2_intro_k8s.ipynb
apache-2.0
import os CLUSTER_NAME = "asl-cluster" ZONE = "us-central1-a" os.environ["CLUSTER_NAME"] = CLUSTER_NAME os.environ["ZONE"] = ZONE """ Explanation: Introduction to Kubernetes Learning Objectives * Create GKE cluster from command line * Deploy an application to your cluster * Cleanup, delete the cluster Overview Kubernetes is an open source project (available on kubernetes.io) which can run on many different environments, from laptops to high-availability multi-node clusters; from public clouds to on-premise deployments; from virtual machines to bare metal. The goal of this lab is to provide a short introduction to Kubernetes (k8s) and some basic functionality. Create a GKE cluster A cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster. Note: Cluster names must start with a letter and end with an alphanumeric, and cannot be longer than 40 characters. We'll call our cluster asl-cluster. End of explanation """ !gcloud container clusters list """ Explanation: We'll set our default compute zone to us-central1-a and use gcloud container clusters create ... to create the GKE cluster. Let's first look at all the clusters we currently have. End of explanation """ %%bash gcloud config set compute/zone ${ZONE} gcloud container clusters create ${CLUSTER_NAME} """ Explanation: Then we'll use gcloud container clusters create to create a new cluster using the CLUSTER_NAME we set above. This takes a few minutes... End of explanation """ !gcloud container clusters list """ Explanation: Now when we list our clusters again, we should see the cluster we created. End of explanation """ %%bash gcloud container clusters get-credentials ${CLUSTER_NAME} """ Explanation: Get authentication credentials and deploy and application After creating your cluster, you need authentication credentials to interact with it. Use get-credentials to authenticate the cluster. End of explanation """ !kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0 """ Explanation: You can now deploy a containerized application to the cluster. For this lab, you'll run hello-app in your cluster. GKE uses Kubernetes objects to create and manage your cluster's resources. Kubernetes provides the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancing for accessing your application from the internet. Use the kubectl create command to create a new Deployment hello-server from the hello-app container image. The --image flag to specify a container image to deploy. The kubectl create command pulls the example image from a Container Registry bucket. Here, use gcr.io/google-samples/hello-app:1.0 to indicate the specific image version to pull. If a version is not specified, the latest version is used. End of explanation """ !kubectl expose deployment hello-server --type=LoadBalancer --port 8080 """ Explanation: This Kubernetes command creates a Deployment object that represents hello-server. To create a Kubernetes Service, which is a Kubernetes resource that lets you expose your application to external traffic, run the kubectl expose command. In this command, * --port specifies the port that the container exposes. * type="LoadBalancer" creates a Compute Engine load balancer for your container. 
End of explanation
"""

!kubectl get service

"""
Explanation: Use the kubectl get service command to inspect the hello-server Service.
Note: It might take a minute for an external IP address to be generated. Run the previous command again if the EXTERNAL-IP column for hello-server is still pending.
End of explanation
"""

%%bash
gcloud container clusters delete ${CLUSTER_NAME} --quiet

"""
Explanation: You can now view the application from your web browser: open a new tab and enter the following address, replacing [EXTERNAL_IP] with the EXTERNAL-IP of hello-server:
bash
http://[EXTERNAL_IP]:8080
You should see a simple page which displays
bash
Hello, world!
Version: 1.0.0
Hostname: hello-server-5bfd595c65-7jqkn
Cleanup
Delete the cluster using gcloud to free up those resources. Use the --quiet flag if you are executing this in a notebook. Deleting the cluster can take a few minutes.
End of explanation
"""
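# Optional verification (an addition): once the delete command finishes, listing clusters
# again with the same command used at the start of this notebook should no longer show
# asl-cluster.
!gcloud container clusters list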
esa-as/2016-ml-contest
Kr1m/Kr1m_SEG_ML_Attempt1.ipynb
apache-2.0
import warnings warnings.filterwarnings("ignore") %matplotlib inline import sys sys.path.append("..") #Import standard pydata libs import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns filename = '../facies_vectors.csv' training_data = pd.read_csv(filename) training_data['Well Name'] = training_data['Well Name'].astype('category') training_data['Formation'] = training_data['Formation'].astype('category') training_data.describe() #Visualize the distribution of facies for each well wells = training_data['Well Name'].unique() fig, ax = plt.subplots(5,2, figsize=(20,20)) for i, well in enumerate(wells): row = i % ax.shape[0] column = i // ax.shape[0] counts = training_data[training_data['Well Name']==well].Facies.value_counts() data_for_well = [counts[j] if j in counts.index else 0 for j in range(1,10)] ax[row, column].bar(range(1,10), data_for_well, align='center') ax[row, column].set_title("{well}".format(well=well)) ax[row, column].set_ylabel("Counts") ax[row, column].set_xticks(range(1,10)) plt.show() plt.figure(figsize=(10,10)) sns.heatmap(training_data.drop(['Formation', 'Well Name'], axis=1).corr()) """ Explanation: Facies classification utilizing an Adaptive Boosted Random Forest Ryan Thielke In the following, we provide a possible solution to the facies classification problem described in https://github.com/seg/2016-ml-contest. Exploring the data End of explanation """ dfs = [] for well in training_data['Well Name'].unique(): df = training_data[training_data['Well Name']==well].copy(deep=True) df.sort_values('Depth', inplace=True) for col in ['PE', 'GR']: smooth_col = 'smooth_'+col df[smooth_col] = pd.rolling_mean(df[col], window=25) df[smooth_col].fillna(method='ffill', inplace=True) df[smooth_col].fillna(method='bfill', inplace=True) dfs.append(df) training_data = pd.concat(dfs) pe_mean = training_data.PE.mean() sm_pe_mean = training_data.smooth_PE.mean() training_data['PE'] = training_data.PE.replace({np.nan:pe_mean}) training_data['smooth_PE'] = training_data['smooth_PE'].replace({np.nan:sm_pe_mean}) formation_encoder = dict(zip(training_data.Formation.unique(), range(len(training_data.Formation.unique())))) training_data['enc_formation'] = training_data.Formation.map(formation_encoder) training_data.describe() """ Explanation: Feature Engineering Here we will do a couple things to clean the data and attempt to create new features for our model to consume. First, we will smooth the PE and GR features. 
Second, we replace missing PE values with the mean of the entire dataset (might want to investigate other methods) Last, we will encode the formations into integer values End of explanation """ #Let's build a model from sklearn import preprocessing from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier from sklearn.tree import DecisionTreeClassifier from sklearn import metrics, cross_validation from classification_utilities import display_cm #We will take a look at an F1 score for each well n_estimators=100 learning_rate=.01 random_state=0 facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D','PS', 'BS'] title_length = 20 wells = training_data['Well Name'].unique() for well in wells: blind = training_data[training_data['Well Name']==well] train = training_data[(training_data['Well Name']!=well)] train_X = train.drop(['Formation', 'Well Name', 'Depth', 'Facies'], axis=1) train_Y = train.Facies.values test_X = blind.drop(['Formation', 'Well Name', 'Facies', 'Depth'], axis=1) test_Y = blind.Facies.values clf = AdaBoostClassifier(RandomForestClassifier(), n_estimators=200, learning_rate=learning_rate, random_state=random_state, algorithm='SAMME.R') clf.fit(X=train_X, y=train_Y) pred_Y = clf.predict(test_X) f1 = metrics.f1_score(test_Y, pred_Y, average='micro') print("*"*title_length) print("{well}={f1:.4f}".format(well=well,f1=f1)) print("*"*title_length) train_X, test_X, train_Y, test_Y = cross_validation.train_test_split(training_data.drop(['Formation', 'Well Name','Facies', 'Depth'], axis=1), training_data.Facies.values, test_size=.2) print(train_X.shape) print(train_Y.shape) print(test_X.shape) print(test_Y.shape) clf = AdaBoostClassifier(RandomForestClassifier(), n_estimators=200, learning_rate=learning_rate, random_state=0, algorithm='SAMME.R') clf.fit(train_X, train_Y) pred_Y = clf.predict(test_X) cm = metrics.confusion_matrix(y_true=test_Y, y_pred=pred_Y) display_cm(cm, facies_labels, display_metrics=True) validation_data = pd.read_csv("../validation_data_nofacies.csv") dfs = [] for well in validation_data['Well Name'].unique(): df = validation_data[validation_data['Well Name']==well].copy(deep=True) df.sort_values('Depth', inplace=True) for col in ['PE', 'GR']: smooth_col = 'smooth_'+col df[smooth_col] = pd.rolling_mean(df[col], window=25) df[smooth_col].fillna(method='ffill', inplace=True) df[smooth_col].fillna(method='bfill', inplace=True) dfs.append(df) validation_data = pd.concat(dfs) validation_data['enc_formation'] = validation_data.Formation.map(formation_encoder) validation_data.describe() X = training_data.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1) Y = training_data.Facies.values test_X = validation_data.drop(['Formation', 'Well Name', 'Depth'], axis=1) clf = AdaBoostClassifier(RandomForestClassifier(), n_estimators=200, learning_rate=learning_rate, random_state=0) clf.fit(X,Y) predicted_facies = clf.predict(test_X) validation_data['Facies'] = predicted_facies validation_data.to_csv("Kr1m_SEG_ML_Attempt1.csv") """ Explanation: Building the model and parameter tuning In the section below we will create a Adaptive Boosted Random Forest Classifier from the Scikit-Learn ML Library End of explanation """
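# Optional inspection (an addition, not part of the original submission): scikit-learn's
# AdaBoostClassifier exposes aggregate feature importances, which gives a rough view of
# which raw and engineered logs the final model relies on most.
importances = pd.Series(clf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances.head(10))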
google/physics-math-tutorials
colabs/QNN_hands_on.ipynb
apache-2.0
# install published dev version # !pip install cirq~=0.4.0.dev # install directly from HEAD: !pip install git+https://github.com/quantumlib/Cirq.git@8c59dd97f8880ac5a70c39affa64d5024a2364d0 """ Explanation: Copyright 2021 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Quantum Neural Networks This notebook provides an introduction to Quantum Neural Networks (QNNs) using the Cirq. The presentation mostly follows Farhi and Neven. We will construct a simple network for classification to demonstrate its utility on some randomly generated toy data. First we need to install cirq, which has to be done each time this notebook is run. Executing the following cell will do that. End of explanation """ import cirq import numpy as np import matplotlib.pyplot as plt print(cirq.google.Foxtail) """ Explanation: To verify that Cirq is installed in your environment, try to import cirq and print out a diagram of the Foxtail device. It should produce a 2x11 grid of qubits. End of explanation """ a = cirq.NamedQubit("a") b = cirq.NamedQubit("b") w = .25 # Put your own weight here. angle = 2*np.pi*w circuit = cirq.Circuit.from_ops(cirq.ControlledGate(cirq.Rx(angle)).on(a,b)) print(circuit) circuit.to_unitary_matrix().round(2) """ Explanation: The QNN Idea We'll begin by describing here the QNN model we are pursuing. We'll discuss the quantum circuit describing a very simple neuron, and how it can be trained. As in an ordinary neural network, a QNN takes in data, processes that data, and then returns an answer. In the quantum case, the data will be encoded into the initial quantum state, and the processing step is the action of a quantum circuit on that quantum state. At the end we will measure one or more of the qubits, and the statistics of those measurements are the output of the net. Classical vs Quantum An ordinary neural network can only handle classical input. The input to a QNN, though, is a quantum state, which consists of $2^n$ complex amplitudes for $n$-qubits. If you attached your quantum computer directly to some physics experiment, for example, then you could have a QNN do some post-processing on the experimental wavefunction in lieu of a more traditional measurement. There are some very exciting possiblities there, but unfortunately we wil not be considering them in this Colab. It requires significantly more quantum background to understand what's going on, and it's harder to give examples because the input states themselves can be quite complicated. For recent examples of that kind of network, though, check out this paper and this paper. The basic ingredients are similar to what we'll cover here In this Colab we'll focus on classical inputs, by which I mean the specification of one of the computational basis states as the initial state. There are a total of $2^n$ of these states for $n$ qubits. Note the crucial difference between this case and the quantum case: in the quantum case the input is $2^n$-dimensional, while in the classical case there are $2^n$ possible inputs. 
The quantum neural network can process these inputs in a "quantum" way, meaning that it may be able to evaluate certain functions on these inputs more efficiently than a classical network. Whether the "quantum" processing is actually useful in practice remains to be seen, and in this Colab we will not have time to really get into that aspect of a QNN. Data Processing Given the classical input state, what will we do with it? At this stage it's helpful to be more specific and definite about the problem we are trying to solve. The problem we're going to focus on in this Colab is two-category classicfication. That means that after the quantum circuit has finished running, we measure one of the qubits, the readout qubit, and the value of that qubit will tell us which of the two categories our classical input state belonged to. Since this is quantum, the output that qubit is going to be random according to some probability distributuion. So really we're going to repeat the computation many times and take a majority vote. Our classical input data is a bitstring that is converted into a computational basis state. We want to influence the readout qubit in a way that depends on this state. Our main tool for this a gate we call the $ZX$-gate, which acts on two qubits as $$ \exp(i \pi w Z \otimes X) = \begin{bmatrix} \cos \pi w & i\sin\pi w &0&0\ i\sin\pi w & \cos \pi w &0&0\ 0&0& \cos \pi w & -i\sin\pi w \ 0&0 & -i\sin\pi w & \cos\pi w \end{bmatrix}, $$ where $w$ is a free parameter ($w$ stands for weight). This gate rotates the second qubit around the $X$-axis (on the Bloch sphere) either clockwise or counterclockwise depending on the state of the first qubit as seen in the computational basis. The amount of the rotation is determined by $w$. If we connect each of our input qubits to the readout qubit using one of these gates, then the result is that the readout qubit will be rotated in a way that depeonds the initial state in a straightforward way. This rotation is in the $YZ$ plane, so will change the statistics of measurements in either the $Z$ basis or the $Y$ basis for the readout qubit. We're going to choose to have the initial state of the readout qubit to be a standard computational basis state as usual, which is a $Z$ eigenstate but "neutral" with respect to $Y$ (i.e., 50/50 probabilty of $Y=+1$ or $Y=-1$). Then after all of the rotations are complete we'll measure the readout qubit in the $Y$ basis. If all goes well, then the net rotation induced by the $ZX$ gates will place the readout qubit near one of the two $Y$ eigenstates in a way that depends on the initial data. To summarize, here is our strategy for processing the two-category classification problem: 1) Prepare a computational basis state corresponding to the input that should be categorized. 2) Use $ZX$ gates to rotate the state of the readout qubit in a way that depends on the input. 3) Measure the readout qubit in the $Y$ basis to get the predicted label. Take a majority vote after many repetitions. This is the simplest possible kind of network, and really only corresponds to a single neuron. We'll talk about more complicated possibilities after understanding how to implement this one. 
Custom Two-Qubit Gate Our first task is to code up the $ZX$ gate described above, which is given by the matrix $$ \exp(i \pi w Z \otimes X) = \begin{bmatrix} \cos \pi w & i\sin\pi w &0&0\ i\sin\pi w & \cos \pi w &0&0\ 0&0& \cos \pi w & -i\sin\pi w \ 0&0 & -i\sin\pi w & \cos\pi w \end{bmatrix}, $$ Just from the form of the gate we can see that it performs a rotation by angle $\pm \pi w$ on the second qubit depending on the value of the first qubit. If we only had one or the other of these two blocks, then this gate would literally be a controlled rotation. For example, using the Cirq conventions, $$ CR_X(\theta) = \begin{bmatrix} 1 & 0 &0&0\ 0 & 1 &0&0\ 0&0& \cos \theta/2 & -i\sin \theta/2 \ 0&0 & -i\sin\theta/2 & \cos\theta/2 \end{bmatrix}, $$ which means that setting $\theta = 2\pi w$ should give us (part) of our desired transformation. End of explanation """ a = cirq.NamedQubit("a") b = cirq.NamedQubit("b") w = 0.25 # Put your own weight here. angle = 2*np.pi*w circuit = cirq.Circuit.from_ops([cirq.X(a), cirq.ControlledGate(cirq.Rx(-angle)).on(a,b), cirq.X(a)]) print(circuit) circuit.to_unitary_matrix().round(2) """ Explanation: Question: The rotation in the upper-left block is by the opposite angle. But how do we get the rotation to happen in the upper-left block of the $4\times 4$ matrix in the first place? What is the circuit? Solution Switching the upper-left and lower-right blocks of a controlled gate corresponds to activating when the control qubit is in the $|0\rangle$ state instead of the $|1\rangle$ state. We can arrange this to happen by taking the control gate we already have and conjugating the control qubit by $X$ gates (which implement the NOT operation). Don't forget to also rotate by the opposite angle. End of explanation """ class ZXGate(cirq.ops.gate_features.TwoQubitGate): """ZXGate with variable weight.""" def __init__(self, weight=1): """Initializes the ZX Gate up to phase. Args: weight: rotation angle, period 2 """ self.weight = weight def _decompose_(self, qubits): a, b = qubits ## YOUR CODE HERE # This lets the weight be a Symbol. Useful for paramterization. def _resolve_parameters_(self, param_resolver): return ZXGate(weight=param_resolver.value_of(self.weight)) # How should the gate look in ASCII diagrams? def _circuit_diagram_info_(self, args): return cirq.protocols.CircuitDiagramInfo( wire_symbols=('Z', 'X'), exponent=self.weight) """ Explanation: The Full $ZX$ Gate We can put together the two controlled rotations to make the full $ZX$ gate. Having discussed the decomposition already, we can make our own class and specify its action using the _decpompose_ method. Fill in the following code block to implement this gate. End of explanation """ class ZXGate(cirq.ops.gate_features.TwoQubitGate): """ZXGate with variable weight.""" def __init__(self, weight=1): """Initializes the ZX Gate up to phase. Args: weight: rotation angle, period 2 """ self.weight = weight def _decompose_(self, qubits): a, b = qubits yield cirq.ControlledGate(cirq.Rx(2*np.pi*self.weight)).on(a,b) yield cirq.X(a) yield cirq.ControlledGate(cirq.Rx(-2*np.pi*self.weight)).on(a,b) yield cirq.X(a) # This lets the weight be a Symbol. Useful for paramterization. def _resolve_parameters_(self, param_resolver): return ZXGate(weight=param_resolver.value_of(self.weight)) # How should the gate look in ASCII diagrams? 
def _circuit_diagram_info_(self, args): return cirq.protocols.CircuitDiagramInfo( wire_symbols=('Z', 'X'), exponent=self.weight) """ Explanation: Solution End of explanation """ class ZXGate(cirq.ops.eigen_gate.EigenGate, cirq.ops.gate_features.TwoQubitGate): """ZXGate with variable weight.""" def __init__(self, weight=1): """Initializes the ZX Gate up to phase. Args: weight: rotation angle, period 2 """ self.weight = weight super().__init__(exponent=weight) # Automatically handles weights other than 1 def _eigen_components(self): return [ (1, np.array([[0.5, 0.5, 0, 0], [ 0.5, 0.5, 0, 0], [0, 0, 0.5, -0.5], [0, 0, -0.5, 0.5]])), (??, ??) # YOUR CODE HERE: phase and projector for the other eigenvalue ] # This lets the weight be a Symbol. Useful for parameterization. def _resolve_parameters_(self, param_resolver): return ZXGate(weight=param_resolver.value_of(self.weight)) # How should the gate look in ASCII diagrams? def _circuit_diagram_info_(self, args): return cirq.protocols.CircuitDiagramInfo( wire_symbols=('Z', 'X'), exponent=self.weight) """ Explanation: EigenGate Implementation Another way to specify how a gate works is by an explicit eigen-action. In our case that is also easy, since we know that the gate acts as a phase (the eigenvalue) when the first qubit is in a $Z$ eigenstate (i.e., a computational basis state) and the second qubit is in an $X$ eigenstate. The way we specify eigen-actions in Cirq is through the _eigen_components method, where we need to specify the eigenvalue as a phase together with a projector onto the eigenspace of that phase. We also conventionally specify the gate at $w=1$ and set $w$ internally to be the exponent of the gate, which automatically implements other values of $w$ for us. In the case of the $ZX$ gate with $w=1$, one of our eigenvalues is $\exp(+i\pi)$, which is specified as $1$ in Cirq. (Because $1$ is the coefficeint of $i\pi$ in the exponential.) This is the phase when when the first qubit is in the $Z=+1$ state and the second qubit is in the $X=+1$ state, or when the first qubit is in the $Z=-1$ state and the second qubit is in the $X=-1$ state. The projector onto these states is $$ \begin{align} P &= |0+\rangle \langle 0{+}| + |1-\rangle \langle 1{-}|\ &= \frac{1}{2}\big(|00\rangle \langle 00| +|00\rangle \langle 01|+|01\rangle \langle 00|+|01\rangle \langle 01|+ |10\rangle \langle 10|-|10\rangle \langle 11|-|11\rangle \langle 10|+|11\rangle \langle 11|\big)\ &=\frac{1}{2}\begin{bmatrix} 1 & 1 &0&0\ 1 & 1 &0&0\ 0&0& 1 & -1 \ 0&0 & -1 & 1 \end{bmatrix} \end{align} $$ A similar formula holds for the eigenvalue $\exp(-i\pi)$ with the two blocks in the projector flipped. Exercise: Implement the $ZX$ gate as an EigenGate using this decomposition. The following codeblock will get you started. End of explanation """ class ZXGate(cirq.ops.eigen_gate.EigenGate, cirq.ops.gate_features.TwoQubitGate): """ZXGate with variable weight.""" def __init__(self, weight=1): """Initializes the ZX Gate up to phase. Args: weight: rotation angle, period 2 """ self.weight = weight super().__init__(exponent=weight) # Automatically handles weights other than 1 def _eigen_components(self): return [ (1, np.array([[0.5, 0.5, 0, 0], [ 0.5, 0.5, 0, 0], [0, 0, 0.5, -0.5], [0, 0, -0.5, 0.5]])), (-1, np.array([[0.5, -0.5, 0, 0], [ -0.5, 0.5, 0, 0], [0, 0, 0.5, 0.5], [0, 0, 0.5, 0.5]])) ] # This lets the weight be a Symbol. Useful for parameterization. 
def _resolve_parameters_(self, param_resolver): return ZXGate(weight=param_resolver.value_of(self.weight)) # How should the gate look in ASCII diagrams? def _circuit_diagram_info_(self, args): return cirq.protocols.CircuitDiagramInfo( wire_symbols=('Z', 'X'), exponent=self.weight) """ Explanation: Solution End of explanation """ a = cirq.NamedQubit("a") b = cirq.NamedQubit("b") w = .15 # Put your own weight here. Try using a cirq.Symbol. circuit = cirq.Circuit.from_ops(ZXGate(w).on(a,b)) print(circuit) """ Explanation: Testing the Gate BEFORE MOVING ON make sure you've executed the EigenGate solution of the $ZX$ gate implementation. That's the one assumed for the code below, though other implementations may work just as well. In general, the cells in this Colab may depend on previous cells. Let's test out our gate. First we'll make a simple test circuit to see that the ASCII diagrams are diplaying properly: End of explanation """ test_matrix = np.array([[np.cos(np.pi*w), 1j*np.sin(np.pi*w), 0, 0], [1j*np.sin(np.pi*w), np.cos(np.pi*w), 0, 0], [0, 0, np.cos(np.pi*w), -1j*np.sin(np.pi*w)], [0, 0, -1j*np.sin(np.pi*w),np.cos(np.pi*w)]]) # Test for five digits of accuracy. Won't work with cirq.Symbol assert (circuit.to_unitary_matrix().round(5) == test_matrix.round(5)).all() """ Explanation: We should also check that the matrix is what we expect: End of explanation """ # Total number of data qubits INPUT_SIZE = 9 data_qubits = cirq.LineQubit.range(INPUT_SIZE) readout = cirq.NamedQubit('r') # Initialize parameters of the circuit params = {'w': 0} def ZX_layer(): """Adds a ZX gate between each data qubit and the readout. All gates are given the same cirq.Symbol for a weight.""" for qubit in data_qubits: yield ZXGate(cirq.Symbol('w')).on(qubit, readout) """ Explanation: Create Circuit Now we have to create the QNN circuit. We are simply going to let a $ZX$ gate act between each data qubit and the readout qubit. For simplicity, let's share a single weight between all of the gates. You are invited to experiment with making the weights different, but in our example problem below we can set them all equal by symmetry. Question: What about the order of these actions? Which data qubits should interact with the readout qubit first? Remember that we also want to measure the readout qubit in the $Y$ basis. Fundamentally speaking, all measurements in Cirq are computational basis measurements, and so we have to implement the change of basis by hand. Question: What is the circuit for a basis transformation from the $Y$ basis to the computational basis? We want to choose our transformation so that an eigenstate with $Y=+1$ becomes an eigenstate with $Z=+1$ prior to measurement. Solutions The $ZX$ gates all commute with each other, so the order of implementation doesn't matter! We want a transformation that maps $\big(|0\rangle + i |1\rangle\big)/\sqrt{2}$ to $|0\rangle$ and $\big(|0\rangle - i |1\rangle\big)\sqrt{2}$ to $|1\rangle$. Recall that the phase gate $S$ is given in matrix form by $$ S = \begin{bmatrix} 1 & 0 \ 0 & i \end{bmatrix}, $$ and the Hadamard transform is given by $$ H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \ 1 & -1 \end{bmatrix}, $$ So acting with $S^{-1}$ and then $H$ gives what we want. We'll add these two gates to the end of the circuit on the readout qubit so that the final measurement effectively occurs in the $Y$ basis. Make Circuit A clean way of making circuits is to define generators for logically-related circuit elements, and then append those to the circuit you want to make. 
Here is a code snippet that initializes our qubits and defines a generator for a single layer of $ZX$ gates: End of explanation """ qnn = cirq.Circuit() qnn.append(???) # YOUR CODE HERE """ Explanation: Use this generator to create the QNN circuit. Don't forget to add the basis change for the readout qubit at the end! End of explanation """ qnn = cirq.Circuit() qnn.append(ZX_layer()) qnn.append([cirq.S(readout)**-1, cirq.H(readout)]) # Basis transformation """ Explanation: Solution End of explanation """ print(qnn) """ Explanation: View the Circuit It's usually a good idea to view the ASCII diagram of your circuit to make sure it's doing what you want. This can be displayed by printing the circuit. End of explanation """ def readout_expectation(state): """Takes in a specification of a state as an array of 0s and 1s and returns the expectation value of Z on ther readout qubit. Uses the XmonSimulator to calculate the wavefunction exactly.""" # A convenient representation of the state as an integer state_num = int(np.sum(state*2**np.arange(len(state)))) resolver = cirq.ParamResolver(params) simulator = cirq.Simulator() # Specify an explicit qubit order so that we know which qubit is the readout result = simulator.simulate(qnn, resolver, qubit_order=[readout]+data_qubits, initial_state=state_num) wf = result.final_state # Becase we specified qubit order, the Z value of the readout is the most # significant bit. Z_readout = np.append(np.ones(2**INPUT_SIZE), -np.ones(2**INPUT_SIZE)) return np.sum(np.abs(wf)**2 * Z_readout) """ Explanation: You can experiment with adding more layers of $ZX$ gates (or adding other kinds of transformations!) to your QNN, but we can use this simplest kind of circuit to analyze a simple toy problem, which is what we will do next. A Toy Problem: Biased Coin Flips As a toy problem, let's get our quantum neuron to decide whether a coin is biased toward heads or toward tails based on a sequence of coin flips. To be specific, let's try to train a QNN to distinguish between a coin that yields "heads" with probability $p$, and one that yields "heads" with probability $1-p$. Without loss of generality, let's say that $p\leq 0.5$. We don't need a fancy QNN to come up with a winning strategy: given a series of coin flips, you guess $p$ if the majority of flips are "tails" and $1-p$ if the majority are "heads." But for purposes of illustration, let's do it the fancy way. To translate this problem into our QNN language, we need to encode the sequence of coin flips into a computational basis state. Let's associate $0$ with tails and $1$ with heads. So the sequence of coin flips becomes a sequence of $0$s and $1$s, and these define a computational basis state. We also need to define a convention for our labeling of the two coins. We'll say that the $p$ coin (majority tails) gets the label $-1$ and the $1-p$ coin (majority heads) gets the label $+1$. So when we measure $Y$ at the end of the computation we can say that the majority-vote of the $Y$ outcome is our predicted label. To be a little more nuanced (and to aid the formulation of the problem), let's say that the expectation value $\langle Y \rangle$ for a given input state defines our estimator for the label of that state. We're going to use that to define a loss function for training next. Define Loss Function Suppose we have a collection of $N$ (bitstring, label) pairs. 
A useful loss function to characterize the effectiveness of our QNN on this collection is $$ \text{Loss} = \frac{1}{2N}\sum_{j=1}^n (1- \ell_j\langle Y \rangle_j), $$ where $\ell_j$ is the label of the $j$th pair and $\langle Y \rangle_j$ is the expectation value of $Y$ on the readout qubit using the $j$th bitstring as input. If the network is perfect, the loss is equal to zero. If the network is maximally unsure about the labels (so that $\langle Y \rangle_j = 0$ for all $j$) then the loss is equal to $1/2$. And if the network gets everything wrong, then the loss is equal to $1$. We're going to train our network using this loss function, so next we'll write some functions to compute the loss. Another useful function to have around is the average classification error. Recall that our prescription was to execute the quantum circuit many times and take a majority vote to compute the predicted label. The majority vote for the readout is the same as $\text{sign}(\langle Y \rangle)$, so we can write a formula for the error in this procedure as $$ \text{Error} = \frac{1}{2N}\sum_{j=1}^n \big(1- \ell_j\text{sign}\big(\langle Y \rangle_j\big)\big). $$ This is not so useful as a loss function because it is not smooth and does not provide an incentive to make $|\langle Y \rangle|$ large, but it can be an informative quantity to compute. Question: Why would we want $|\langle Y \rangle|$ to be large? Solution When we implement this algorithm on the actual hardware, $\langle Y \rangle$ can only be estimated by repeatedly executing the circuit and measuring the result. The more measurements we make, the better our estimate of $\langle Y \rangle$ will be. Even if we are only interested in $\text{sign}\big(\langle Y \rangle\big)$, we will need to meake enough measurements to be sure that our estimate has the correct sign, and if $|\langle Y \rangle|$ is large then fewer measurements will be required to have high confidence in the sign. Furthermore, if the machine is noisy (which it will be), then the noise will induce some errors in our estimate of $\langle Y \rangle$. If $|\langle Y \rangle|$ is small then it's likely that the noise will lead to the wrong sign. Expectation Value Our first function computes the expectation value of the readout qubit for our circuit given a specification of the initial state. Rather than a bitstring, we'll specify the initial state as an array of $0$s and $1$s. These are the outputs of the coin flips in our toy problem. We'll compute the expectation value exactly using the wavefunction for now. End of explanation """ def loss(states, labels): loss=0 for state, label in zip(states,labels): loss += 1 - label*readout_expectation(state) return loss/(2*len(states)) def classification_error(states, labels): error=0 for state,label in zip(states,labels): error += 1 - label*np.sign(readout_expectation(state)) return error/(2*len(states)) """ Explanation: Loss and Error The next functions take a list of states (each specified as an array of $0$s and $1$s as before) and a corresponding list of labels and computes the loss and error, respectively, of that list. End of explanation """ def make_batch(): """Generates a set of labels, then uses those labels to generate inputs. label = -1 corresponds to majority 0 in the sate, label = +1 corresponds to majority 1. 
""" np.random.seed(0) # For consistency in demo labels = (-1)**np.random.choice(2, size=100) # Smaller batch sizes will speed up computation states = [] for label in labels: states.append(np.random.choice(2, size=INPUT_SIZE, p=[0.5-label*0.2,0.5+label*0.2])) return states, labels states, labels = make_batch() """ Explanation: Generating Data For our toy problem we'll want to be able to generate a batch of data. Here is a helper function for that task: End of explanation """ # Using cirq.Simulator with the EigenGate implementation of ZZ, this takes # about 30s to run. Using the XmonSimulator took about 40 minutes the last # time I tried it! %%time linspace = np.linspace(start=-1, stop=1, num=80) train_losses = [] error_rates = [] for p in linspace: params = {'w': p} train_losses.append(loss(states, labels)) error_rates.append(classification_error(states, labels)) plt.plot(linspace, train_losses) plt.xlabel('Weight') plt.ylabel('Loss') plt.title('Loss as a Function of Weight') plt.show() plt.plot(linspace, error_rates) plt.xlabel('Weight') plt.ylabel('Error Rate') plt.title('Error Rate as a Function of Weight') plt.show() """ Explanation: Training Now we'll try to find the optimal weight to solve our toy problem. For illustration, we'll do both a brute force search of the paramter space as well as a stochastic gradient descent. Brute Force Search Let's compute both the loss and error rate on a batch of data as a function of the shared weight between all the gates. End of explanation """ def stochastic_grad_loss(): """Generates a new data point and computes the gradient of the loss using that data point.""" # Randomly generate the data point. label = (-1)**np.random.choice(2) state = np.random.choice(2, size=INPUT_SIZE, p=[0.5-label*0.2,0.5+label*0.2]) # Compute the gradient using finite difference eps = 10**-5 # Discretization of gradient. Try different values. params['w'] -= eps loss1 = loss([state],[label]) params['w'] += 2*eps grad = (loss([state],[label])-loss1)/(2*eps) params['w'] -= eps # Reset the parameter value return grad """ Explanation: Question: Why are the loss and error functions periodic with period $1$ when the $ZX$ gate is periodic with period $2$? Solution This kind of "halving" of the periodicity of $\langle Y \rangle$ compared to the period of the gates itself is typical of qubit systems. We can analyze how it works mathematically in a simpler setting. Instead of the $ZX$ Gate, let's just imagine that we rotate the readout qubit around the $X$ axis by some fixed amout. This is the effective calculation for a single fixed data input. $$ \begin{align} \langle Y \rangle &= \langle 0 |\exp(-i \pi w X) Y \exp(i \pi w X) |0 \rangle\ &= \langle 0 |\big(\cos \pi w - i X\sin \pi w \big) Y \big(\cos \pi w + i X \sin \pi w \big) |0 \rangle\ &= \langle 0 |\big(Y\cos 2\pi w +Z \sin 2\pi w \big) |0 \rangle\ &= \sin 2\pi w. \end{align} $$ Stochastic Gradient Descent To train the network we'll use stochastic gradient descent. Note that this isn't necessarily a good idea since the loss function is far from convex, and there's a good chance we'll get stuck in very inefficient local minimum if we initialize the paramters randomly. But as an exercise we'll do it anyway. In the next section we'll discuss other ways to train these sorts of networks. We'll compute the gradient of the loss function using a symmetric finite-difference approximation: $f'(x) \approx (f(x + \epsilon) - f(x-\epsilon))/2\epsilon$. This is the most straightforward way to do it using the quantum computer. 
We'll also generate a new instance of the problem each time. End of explanation """ eta = 10**-4 # Learning rate. Try different values. params = {'w': 0} # Initialize weight. Try different values. for i in range(201): if not i%25: print('Step: {} Loss: {}'.format(i, loss(states, labels))) grad = stochastic_grad_loss() params['w'] += -eta*grad print('Final Weight: {}'.format(params['w'])) """ Explanation: We can apply this function repeatedly to flow toward the minimum: End of explanation """ def readout_expectation_sample(state): """Takes in a specification of a state as an array of 0s and 1s and returns the expectation value of Z on ther readout qubit. Uses the XmonSimulator to sample the final wavefunction.""" # We still need to resolve the parameters in the circuit. resolver = cirq.ParamResolver(params) # Make a copy of the QNN to avoid making changes to the global variable. measurement_circuit = qnn.copy() # Modify the measurement circuit to account for the desired input state. # YOUR CODE HERE # Add appropriate measurement gate(s) to the circuit. # YOUR CODE HERE simulator = cirq.google.XmonSimulator() result = simulator.run(measurement_circuit, resolver, repetitions=10**6) # Try adjusting the repetitions # Return the Z expectation value return ((-1)**result.measurements['m']).mean() """ Explanation: Use Sampling Instead of Calculating from the Wavefunction On real hardware we will have to use sampling to find results instead of computing the exact wavefunction. Rewrite the readout_expectation function to compute the expectation value using sampling instead. Unlike with the wavefunction calculation, we also need to build our circuit in a way that accounts for the initial state (we are always assumed to start in the all $|0\rangle$ state) End of explanation """ def readout_expectation_sample(state): """Takes in a specification of a state as an array of 0s and 1s and returns the expectation value of Z on ther readout qubit. Uses the XmonSimulator to sample the final wavefunction.""" # We still need to resolve the parameters in the circuit. resolver = cirq.ParamResolver(params) # Make a copy of the QNN to avoid making changes to the global variable. measurement_circuit = qnn.copy() # Modify the measurement circuit to account for the desired input state. for i, qubit in enumerate(data_qubits): if state[i]: measurement_circuit.insert(0,cirq.X(qubit)) # Add appropriate measurement gate(s) to the circuit. measurement_circuit.append(cirq.measure(readout, key='m')) simulator = cirq.Simulator() result = simulator.run(measurement_circuit, resolver, repetitions=10**6) # Try adjusting the repetitions # Return the Z expectation value return ((-1)**result.measurements['m']).mean() """ Explanation: Solution End of explanation """ state = [0,0,0,1,0,1,1,0,1] # Try different initial states. params = {'w': 0.05} # Try different weights. print("Exact expectation value: {}".format(readout_expectation(state))) print("Estimates from sampling:") for _ in range(5): print(readout_expectation_sample(state)) """ Explanation: Comparison of Sampling with the Exact Wavefunction Just to illustrate the difference between sampling and using the wavefunction, try running the two methods several times on identical input: End of explanation """ print(cirq.google.Foxtail) """ Explanation: As an exercise, try repeating some of the above calculations (e.g., the SGD optimization) using readout_expectation_sample in place of readout_expectation. How many repetitions should you use? 
How should the hyperparameters eps and eta be adjusted in response to the number of repetitions? Optimizing For Hardware There are more issues to think about if you want to run your network on real hardware. First is the connectivity issue, and second is minimizing the number of two-qubit operations. Consider the Foxtail device: End of explanation """ qnn_fox = cirq.Circuit() w = 0.2 # Want an explicit numerical weight for later for i in range(10): qnn_fox.append([ZXGate(w).on(cirq.GridQubit(1,i), cirq.GridQubit(0,i)), ZXGate(w).on(cirq.GridQubit(0,i+1), cirq.GridQubit(0,i)), cirq.SWAP(cirq.GridQubit(0,i), cirq.GridQubit(0,i+1))]) qnn_fox.append(ZXGate(w).on(cirq.GridQubit(1,10), cirq.GridQubit(0,10))) qnn_fox.append([(cirq.S**-1)(cirq.GridQubit(0,10)),cirq.H(cirq.GridQubit(0,10)), cirq.measure(cirq.GridQubit(0,10))]) print(qnn_fox) """ Explanation: The qubits are arranged in two rows of eleven qubits each, and qubits can only communicate to their nearest neighbors along the horizontal and vertial connections. That does not mesh well with the QNN we designed, where all of the data qubits need to interact with the readout qubit. There is no in-principle restriction on the kinds of algorithms you are allowed to run. The solution to the connectivity problem is to make use of SWAP gates, which have the effect of exchanging the states of two (neighboring) qubits. It's equivalent to what you would get if you physically exchanged the positions of two of the qubits in the grid. The problem is that each SWAP operation is costly, so you want to avoid SWAPing as much as possible. We need to think carefully about our algorithm design to minimize the number of SWAPs performed as the circuit is executed. Question: How should we modify our QNN circuit so that it can runs efficiently on the Foxtail device? Solution One strategy is to move the readout qubit around as it talks to the other qubits. Suppose the readout qubit starts in the $(0,0)$ position. First it can interact with the qubits in the $(1,0)$ and $(0,1)$ positons like normal, then SWAP with the $(0,1)$ qubit. Now the readout qubit is in the $(0,1)$ position and can interact with the $(1,1)$ and $(0,2)$ qubits before SWAPing with the $(0,2)$ qubit. It continues down the line in this fashion. Let's code up this circuit: End of explanation """ cirq.google.optimized_for_xmon(qnn_fox, new_device=cirq.google.Foxtail, allow_partial_czs=True) """ Explanation: As coded, this circuit still won't run on the Foxtail device. That's because the gates we've defined are not native gates. Cirq has a built-in method that will convert our gates to Xmon gates (which are native for the Foxtail device) and attempt to optimze the circuit by reducing the total number of gates: End of explanation """
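# A possible follow-up sketch (not from the original notebook): one rough way to see what
# the optimizer buys us is to compare circuit sizes before and after conversion to native
# Xmon gates. This assumes the `qnn_fox` circuit defined above and reuses the same legacy
# `cirq.google.optimized_for_xmon` call from the previous cell.
optimized_qnn_fox = cirq.google.optimized_for_xmon(
    qnn_fox, new_device=cirq.google.Foxtail, allow_partial_czs=True)
print('Moments before optimization:', len(qnn_fox))
print('Moments after optimization: ', len(optimized_qnn_fox))
print('Operations before optimization:', len(list(qnn_fox.all_operations())))
print('Operations after optimization: ', len(list(optimized_qnn_fox.all_operations())))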
google-research/ott
docs/notebooks/fairness.ipynb
apache-2.0
fig, ax = plt.subplots(1, 1, figsize=(8, 5))
plot_quantiles(logits, groups, ax)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.set_title(f'Baseline Quantiles', fontsize=22)
ax.set_xlabel('Quantile Level', fontsize=18)
ax.set_ylabel('Prediction', fontsize=18)
"""
Explanation: Fairness regularizers In this tutorial, we cover how to use OTT to build a differentiable regularizer so that an algorithm can take fairness constraints into account during training. This tutorial will focus on group fairness. Group fairness problems arise when an ML system produces different distributions of outcomes for different sub-populations or groups. The attributes defining the groups are called the protected attributes, and can typically be the gender, the race, the age, etc. For instance, a face recognition system may have better precision or recall on young people than on old people, or a classifier for loan approval may favor men over women, reproducing biases. In the first case, it would be preferable that the distribution of errors made by the classifier be similar across groups, focusing on the equalized odds metric, while in the second case it would be preferable that the distributions of predictions match, focusing instead on the demographic parity metric. Some fairness methods focus on manipulating the dataset, while others try to come up with different criteria or thresholds for the different groups. In our case we leverage the ability of optimal transport to deal with distributions and directly use the Wasserstein distance between the per-group distributions as a loss. Adult dataset We apply our differentiable fairness regularizer on a common benchmark dataset, the adult dataset: a binary classification problem that aims at predicting whether a given individual earns more or less than $50k a year, based on features such as the age, the gender, the education level, the country of birth, the workclass, etc. The considered protected attribute is the gender. There are therefore two groups. Training a multi-layer perceptron (MLP) out of the box by minimizing the binary cross-entropy on the training data leads to a classifier that systematically predicts more often that men, rather than women, earn more than $50k. The following figure is key to understanding the fairness issue: it represents the quantiles of the predictions for men and for women for a trained classifier whose accuracy is about 80%. While overall quite good in terms of classification, it does poorly in terms of fairness, since this accuracy may be obtained by giving more weight to the data points that correspond to men. The goal of group fairness is to close the gap between those two curves. 
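Formally, if $F_g$ denotes the cumulative distribution function of the classifier's predictions for group $g$, each plotted curve is the quantile function $Q_g(u) = \inf\{t : F_g(t) \geq u\}$ for quantile levels $u \in [0, 1]$, and demographic parity amounts to the two groups having (approximately) the same quantile function at every level $u$.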
End of explanation """ N = 24 rng = jax.random.PRNGKey(1) rng, *rngs = jax.random.split(rng, 3) y_pred = 3 * jax.random.uniform(rngs[0], (N,)) groups = jax.random.uniform(rngs[1], (N,)) < 0.25 support_0 = jnp.linspace(0, 1, N - jnp.sum(groups)) support_1 = jnp.linspace(0, 1, jnp.sum(groups)) quantiles_0 = jnp.sort(y_pred[jnp.logical_not(groups)]) quantiles_1 = jnp.sort(y_pred[groups]) fig, ax = plt.subplots(1, 1, figsize=(8, 5)) ax.plot(support_0, quantiles_0, lw=3, marker='o', markersize=10, label='group 0', markeredgecolor='k') ax.plot(support_1, quantiles_1, lw=3, marker='o', markersize=10, label='group 1', markeredgecolor='k') ax.set_xlabel('Quantile level', fontsize=18) ax.tick_params(axis='both', which='major', labelsize=16) ax.legend(fontsize=16) """ Explanation: To obtain these curves, we sort the predictions made by the classifier from the smallest to the biggest for each group and put them on a $[0, 1]$ scale on the x-axis. The value corresponding to $x=0.5$ is the median of the distribution. Similarly for each quantile level in $[0,1]$ we obtain the corresponding quantile of the distribution. In this example we observe that the median prediction for women is about 5% while the median prediction for men is about 25%. If we were to set to 0.25 the threshold to consider a prediction positive, we would keep half of the men, but would reject about 90% of the women. Quantiles and Wasserstein distance. The gap between the two quantile functions is corresponds to the Wasserstein distance between the two distributions. Closing the gap between the two curves is equivalent to minimizing the Wasserstein distance between the two distributions. Note that in practice, we approximate the gap by the average of the distances between corresponding quantile levels. For this we need to interpolate the values over the union of the supports of the two discrete quantile maps. Let's do this on an example. We sample some points uniformly at random, and assign randomly groups to them and plot the quantiles functions. End of explanation """ import scipy kinds = ['linear', 'nearest'] fig, axes = plt.subplots(1, len(kinds), figsize=(8 * len(kinds), 5)) for ax, kind in zip(axes, kinds): q0 = scipy.interpolate.interp1d(support_0, quantiles_0, kind=kind) q1 = scipy.interpolate.interp1d(support_1, quantiles_1, kind=kind) support_01 = jnp.sort(jnp.concatenate([support_0, support_1])) ax.plot(support_01, q0(support_01), label='group 0', lw=3, marker='o', markersize=10, markeredgecolor='k') ax.plot(support_01, q1(support_01), label='group 1', lw=3, marker='o', markersize=10, markeredgecolor='k') ax.fill_between(support_01, q0(support_01), q1(support_01), color='y', hatch='|', fc='w') ax.set_xlabel('Quantile level', fontsize=18) ax.tick_params(axis='both', which='major', labelsize=16) ax.legend(fontsize=16) ax.set_title(f'Interpolation {kind}', fontsize=20) """ Explanation: We can see on this figure that the support of the two quantile function is different, since the number of points in the two groups is different. In order to compute the gap between the two curves, we first interpolate the two curves on the union of the supports. The Wasserstein distance corresponds to the gap between the two quantile functions. Here we show two interpolations schemes that make it easy to estimate the Wasserstein distance between two 1D measures. 
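In one dimension this correspondence is exact: for instance $W_1(\mu_0, \mu_1) = \int_0^1 |Q_0(u) - Q_1(u)| \, du$, so the area between the two interpolated quantile curves is precisely the 1-Wasserstein distance between the two distributions.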
End of explanation """ import functools @functools.partial(jax.jit, static_argnums=(2,)) def sort_group(inputs: jnp.ndarray, group: jnp.ndarray, target_size: int = 16): a = group / jnp.sum(group) b = jnp.ones(target_size) / target_size ot = ott.tools.soft_sort.transport_for_sort(inputs, a, b, dict(epsilon=1e-3)) return 1.0 / b * ot.apply(inputs, axis=0) """ Explanation: Soft Wasserstein Computing the Wasserstein distance involves complex operations such as sorting and interpolating. Fortunately, regularized optimal transport and its implementation with OTT provides accelerator-friendly differentiable approaches to sort according to a group (setting the weights of the outsiders to zero) while mapping onto a common support (sorting onto a fixed target of the same size, no matter what the group is). Here is an example of how to use OTT to obtain a sorted vector of a fixed size for each group. Note how simple this function is. End of explanation """ target_sizes = [4, 16, 64] _, axes = plt.subplots(1, len(target_sizes), figsize=(len(target_sizes * 8), 5)) for ax, target_size in zip(axes, target_sizes): ax.plot(sort_group(y_pred, jnp.logical_not(groups), target_size), lw=3, marker='o', markersize=10, markeredgecolor='k', label='group 0') ax.plot(sort_group(y_pred, groups, target_size), lw=3, marker='o', markersize=10, markeredgecolor='k', label='group 0') ax.legend(fontsize=16) ax.tick_params(axis='both', which='major', labelsize=16) ax.set_title(f'Group soft sorting on support of size {target_size}', fontsize=20) """ Explanation: It is noteworthy to see that the obtained interpolation corresponds to a smooth version of the 'nearest' interpolation. End of explanation """ import matplotlib.pyplot as plt fig, axes = plt.subplots(2, 2, figsize=(16, 10)) for weight, curves in result.items(): for ax_row, metric in zip(axes, ['loss', 'accuracy']): for ax, phase in zip(ax_row, ['train', 'eval']): arr = np.array(curves[f'{phase}_{metric}']) ax.plot(arr[:, 0], arr[:, 1], label=f'$\lambda={weight:.0f}$', lw=5, marker='o', markersize=12, markeredgecolor='k', markevery=10) ax.set_title(f'{metric} / {phase}', fontsize=20) ax.legend(fontsize=18) ax.set_xlabel('Epoch', fontsize=18) ax.tick_params(axis='both', which='major', labelsize=16) plt.tight_layout() """ Explanation: Training a network In order to train our classifier with a fairness regularizer, we first turn the categorical features $x$ of the adult dataset into dense ones (using 16 dimensions) and pass the obtained vector to an MLP $f_\theta$ with 2 hidden layers of 64 neurons. We optimize a loss which is the sum of the binary crossentropy and the Wasserstein distance between the distributions of predictions for the two classes. Since we want to work with minibatches and we do not want to change the common optimization scheme, we decide to use rather big batches of size $512$, in order to ensure that we have enough predictions across groups in a batch for the Wasserstein distance between them to make sense. We scale the Wasserstein distance by a factor $\lambda$ to control the balance between the fitness term (binary crossentropy) and the fairness regularization term (Wasserstein distance). We run the training procedure for 100 epochs with the Adam optimizer with learning rate $10^{-4}$, an entropic regularization factor $\epsilon=10^{-3}$ and a common interpolation support of size $12$. We compare the results for $\lambda \in {1, 10, 100, 1000}$ in terms of demographic parity as well as accuracy. 
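In equation form, the objective described above is $\mathcal{L}(\theta) = \mathrm{BCE}(\theta) + \lambda \, W\big(\hat{p}^{\,0}_\theta, \hat{p}^{\,1}_\theta\big)$, where $\hat{p}^{\,g}_\theta$ denotes the distribution of predictions the network assigns to the examples of group $g$ within the minibatch.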
Loss and Accuracy Let's first compare the performance of all those classifiers. End of explanation """ num_rows = 2 num_cols = len(weights[1:]) // 2 fig, axes = plt.subplots(num_rows, num_cols, figsize=(7 * num_cols, 5 * num_rows)) for ax, w in zip(axes.ravel(), weights[1:]): logits, groups = get_predictions(ds_test, config, states[w]) plot_quantiles(logits, groups, ax) ax.set_title(f'$\lambda = {w:.0f}$', fontsize=22) ax.set_ylabel('Prediction', fontsize=18) plt.tight_layout() """ Explanation: We can see that when we increase the fairness regularization factor $\lambda$, the training accuracy slightly decreases but it does not impact too much the eval accuracy. The fairness regularizer is a rather good regularizer. For $\lambda = 1000$ the training metrics are a bit more degraded as well as the eval ones, but we also note that after 100 epochs this classifier has not converged yet, so we could also imagine that it would catch up in terms of eval metrics. Demographic Parity Now that we have seen the effect of the fairness regularizer on the classification performance, we focus on the applicability of this regularizer on the distributions of predictions for the two groups. For this, we compute all the predictions, sort them and plot the quantile functions. The smaller the area between them, the more fair the classifier is. End of explanation """
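# A possible follow-up sketch (not from the original notebook): a numerical counterpart to
# the quantile plots above. For each regularization weight we estimate the 1D Wasserstein
# distance between the two groups' predictions by averaging the absolute gap between their
# empirical quantile functions on a common grid. This assumes the `weights`, `states`,
# `ds_test`, `config` and `get_predictions` objects defined earlier are in scope.
grid = np.linspace(0.0, 1.0, 101)
for w in weights:
    logits, groups = get_predictions(ds_test, config, states[w])
    preds = np.asarray(logits).ravel()
    is_group_1 = np.asarray(groups).ravel().astype(bool)
    q0 = np.quantile(preds[~is_group_1], grid)
    q1 = np.quantile(preds[is_group_1], grid)
    print(f'lambda = {w:.0f}: estimated quantile gap (W1) = {np.mean(np.abs(q0 - q1)):.4f}')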
hanhanwu/Hanhan_Data_Science_Practice
make_sense_dimension_reduction.ipynb
mit
import sklearn.datasets as ds from sklearn.decomposition import PCA import matplotlib.pyplot as plt from sklearn.preprocessing import StandardScaler import numpy as np %matplotlib inline data = ds.load_breast_cancer()['data'] data.shape # 30 features z_scaler = StandardScaler() z_data = z_scaler.fit_transform(data) pca_trafo = PCA().fit(z_data); fig, ax1 = plt.subplots(figsize = (10,6.5)) ax1.semilogy(pca_trafo.explained_variance_ratio_, '--o', label = 'explained variance ratio', color='purple'); ax1.set_xlabel('principal component', fontsize = 20); for tl in ax1.get_yticklabels(): tl.set_color('purple') plt.legend(loc=(0.01, 0.075) ,fontsize = 18); ax2 = ax1.twinx() ax2.semilogy(pca_trafo.explained_variance_ratio_.cumsum(), '--go', label = 'cumulative explained variance ratio'); for tl in ax2.get_yticklabels(): tl.set_color('g') ax1.tick_params(axis='both', which='major', labelsize=18); ax1.tick_params(axis='both', which='minor', labelsize=12); ax2.tick_params(axis='both', which='major', labelsize=18); ax2.tick_params(axis='both', which='minor', labelsize=12); plt.xlim([0, 29]); plt.legend(loc=(0.01, 0),fontsize = 18); """ Explanation: PCA is for linear dimensional reduction t-SNE is for non-linear dimension reduction. t-SNE is famous for visualization for 2D projection (project higher dimensional data into 2D). But it won't help you tell feature importance. For t-SNE visualization, just check my code: https://github.com/hanhanwu/Hanhan_Data_Science_Practice/blob/master/Outliers_and_Clustering/dimensional_reduction_visualization.ipynb I'm trying to see how to make their output or plot make sense. References * PCA - http://jotterbach.github.io/2016/03/24/Principal_Component_Analysis/ * code: https://github.com/jotterbach/Data-Exploration-and-Numerical-Experimentation/blob/master/Data-Analytics/PCA_Pitfalls.ipynb End of explanation """ n_comp =30 pca_data = pca_trafo.fit_transform(z_data) pca_inv_data = pca_trafo.inverse_transform(np.eye(n_comp)) fig = plt.figure(figsize=(10, 6.5)) plt.plot(pca_inv_data.mean(axis=0), '--o', label = 'mean') plt.plot(np.square(pca_inv_data.std(axis=0)), '--o', label = 'variance') plt.legend(loc='lower right') plt.ylabel('feature contribution', fontsize=20); plt.xlabel('feature index', fontsize=20); plt.tick_params(axis='both', which='major', labelsize=18); plt.tick_params(axis='both', which='minor', labelsize=12); plt.xlim([0, 29]) plt.legend(loc='lower left', fontsize=18) """ Explanation: From the above visualization, I personaly recommend just to check cumulative explained variance ratio plot with the marks on the right side. When principle component=6, we have 0.9 (90%), it means 6 principle components can explain 90% of the full variance. Then we can choose the features below based on their contribution. End of explanation """
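# A possible follow-up sketch (not from the original notebook): once the number of
# components is chosen from the cumulative explained-variance curve (about 6 for ~90%
# here), the loadings in `pca_trafo.components_` can be used to see which original
# features drive each principal component. This reuses `pca_trafo`, `ds` and `np`
# from the cells above.
feature_names = ds.load_breast_cancer()['feature_names']
n_keep = 6  # number of components suggested by the cumulative ratio plot
for i, component in enumerate(pca_trafo.components_[:n_keep]):
    # indices of the 3 features with the largest absolute loading on this component
    top = np.argsort(np.abs(component))[::-1][:3]
    names = ', '.join(feature_names[j] for j in top)
    print('PC%d top features: %s' % (i + 1, names))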
Kaggle/learntools
notebooks/intro_to_programming/raw/ex3.ipynb
apache-2.0
# Set up the exercise from learntools.core import binder binder.bind(globals()) from learntools.intro_to_programming.ex3 import * print('Setup complete.') """ Explanation: In the tutorial, you learned about four different data types: floats, integers, strings, and booleans. In this exercise, you'll experiment with them. Set up the notebook Run the next code cell without changes to set up the notebook. End of explanation """ # Define a float y = 1. print(y) print(type(y)) # Convert float to integer with the int function z = int(y) print(z) print(type(z)) """ Explanation: Question 1 You have seen how to convert a float to an integer with the int function. Try this out yourself by running the code cell below. End of explanation """ # Uncomment and run this code to get started! #print(int(1.2321)) #print(int(1.747)) #print(int(-3.94535)) #print(int(-2.19774)) """ Explanation: In this case, the float you are using has no numbers after the decimal. - But what happens when you try to convert a float with a fractional part to an integer? - How does the outcome of the int function change for positive and negative numbers? Use the next code cell to investigate and answer these questions. Feel free to add or remove any lines of code -- it is your workspace! End of explanation """ # Check your answer (Run this code cell to receive credit!) q1.check() """ Explanation: Once you have an answer, run the code cell below to see the solution. Viewing the solution will give you credit for answering the problem. End of explanation """ # Uncomment and run this code to get started! print(3 * True) print(-3.1 * True) print(type("abc" * False)) print(len("abc" * False)) """ Explanation: Question 2 In the tutorial, you learned about booleans (which can take a value of True or False), in addition to integers, floats, and strings. For this question, your goal is to determine what happens when you multiply a boolean by any of these data types. Specifically, - What happens when you multiply an integer or float by True? What happens when you multiply them by False? How does the answer change if the numbers are positive or negative? - What happens when you multiply a string by True? By False? Use the next code cell for your investigation. End of explanation """ # Check your answer (Run this code cell to receive credit!) q2.check() """ Explanation: Once you have an answer, run the code cell below to see the solution. Viewing the solution will give you credit for answering the problem. End of explanation """ # TODO: Complete the function def get_expected_cost(beds, baths, has_basement): value = ____ return value # Check your answer q3.check() #%%RM_IF(PROD)%% def get_expected_cost(beds, baths, has_basement): value = 80000 + 30000 * beds + 10000 * baths + 40000 * has_basement return value q3.assert_check_passed() # Uncomment to see a hint #_COMMENT_IF(PROD)_ q3.hint() # Uncomment to view the solution #_COMMENT_IF(PROD)_ q3.solution() """ Explanation: Question 3 In this question, you will build off your work from the previous exercise to write a function that estimates the value of a house. Use the next code cell to create a function get_expected_cost that takes as input three variables: - beds - number of bedrooms (data type float) - baths - number of bathrooms (data type float) - has_basement - whether or not the house has a basement (data type boolean) It should return the expected cost of a house with those characteristics. 
Assume that: - the expected cost for a house with 0 bedrooms and 0 bathrooms, and no basement is 80000, - each bedroom adds 30000 to the expected cost, - each bathroom adds 10000 to the expected cost, and - a basement adds 40000 to the expected cost. For instance, - a house with 1 bedroom, 1 bathroom, and no basement has an expected cost of 80000 + 30000 + 10000 = 120000. This value will be calculated with get_expected_cost(1, 1, False). - a house with 2 bedrooms, 1 bathroom, and a basement has an expected cost of 80000 + 2*30000 + 10000 + 40000 = 190000. This value will be calculated with get_expected_cost(2, 1, True). Remember you can always get a hint by uncommenting q3.hint() in the code cell following the next! End of explanation """ print(False + False) print(True + False) print(False + True) print(True + True) print(False + True + True + True) """ Explanation: Question 4 We'll continue our study of boolean arithmetic. For this question, your task is to provide a description of what happpens when you add booleans. Use the next code cell for your investigation. Feel free to add or remove any lines of code - use it as your workspace! End of explanation """ # Check your answer (Run this code cell to receive credit!) q4.check() """ Explanation: Once you have an answer, run the code cell below to see the solution. Viewing the solution will give you credit for answering the problem. End of explanation """ def cost_of_project(engraving, solid_gold): cost = ____ return cost # Check your answer q5.check() #%%RM_IF(PROD)%% def cost_of_project(engraving, solid_gold): cost = solid_gold * (100 + 10 * len(engraving)) + (not solid_gold) * (50 + 7 * len(engraving)) return cost q5.assert_check_passed() # Uncomment to see a hint #_COMMENT_IF(PROD)_ q5.hint() # Uncomment to view the solution #_COMMENT_IF(PROD)_ q5.solution() """ Explanation: ๐ŸŒถ๏ธ Question 5 You own an online shop where you sell rings with custom engravings. You offer both gold plated and solid gold rings. - Gold plated rings have a base cost of \$50, and you charge \$7 per engraved unit. - Solid gold rings have a base cost of \$100, and you charge \$10 per engraved unit. - Spaces and punctuation are counted as engraved units. Write a function cost_of_project() that takes two arguments: - engraving - a Python string with the text of the engraving - solid_gold - a Boolean that indicates whether the ring is solid gold It should return the cost of the project. This question should be fairly challenging, and you may need a hint. End of explanation """ project_one = cost_of_project("Charlie+Denver", True) print(project_one) """ Explanation: Run the next code cell to calculate the cost of engraving Charlie+Denver on a solid gold ring. End of explanation """ project_two = cost_of_project("08/10/2000", False) print(project_two) """ Explanation: Use the next code cell to calculate the cost of engraving 08/10/2000 on a gold plated ring. End of explanation """
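# A possible extra check (not part of the original exercise): verify `get_expected_cost`
# against the two worked examples given in the question; both lines should print True.
print(get_expected_cost(1, 1, False) == 120000)
print(get_expected_cost(2, 1, True) == 190000)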
joshnsolomon/phys202-2015-work
assignments/assignment05/InteractEx01.ipynb
mit
%matplotlib inline from matplotlib import pyplot as plt import numpy as np from IPython.html.widgets import interact, interactive, fixed from IPython.display import display """ Explanation: Interact Exercise 01 Import End of explanation """ def print_sum(a, b): """Print the sum of the arguments a and b.""" print(a+b) """ Explanation: Interact basics Write a print_sum function that prints the sum of its arguments a and b. End of explanation """ interact(print_sum,a=(-10.,10.,.1),b=(-8,8,2)); assert True # leave this for grading the print_sum exercise """ Explanation: Use the interact function to interact with the print_sum function. a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1 b should be an integer slider the interval [-8, 8] with step sizes of 2. End of explanation """ def print_string(s, length=False): """Print the string s and optionally its length.""" print(s) if length == True: print(len(s)) """ Explanation: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True. End of explanation """ # YOUR CODE HERE interact(print_string,s='Hello World!',length=True); assert True # leave this for grading the print_string exercise """ Explanation: Use the interact function to interact with the print_string function. s should be a textbox with the initial value "Hello World!". length should be a checkbox with an initial value of True. End of explanation """
susantabiswas/Natural-Language-Processing
Notebooks/Word_Prediction_Add-1_Smoothing_with_Interpolation.ipynb
mit
from nltk.util import ngrams from collections import defaultdict from collections import OrderedDict import string import time import gc from math import log10 start_time = time.time() """ Explanation: <u>Word prediction</u> Language Model based on n-gram Probabilistic Model Add-1 Smoothing Used with Interpolation Highest Order n-gram used is Quadgram <u>Import corpus</u> End of explanation """ #returns: string #arg: string #remove punctuations, change to lowercase ,retain the apostrophe mark def removePunctuations(sen): #split the string into word tokens temp_l = sen.split() #print(temp_l) i = 0 j = 0 #changes the word to lowercase and removes punctuations from it for word in temp_l : j = 0 #print(len(word)) for l in word : if l in string.punctuation: if l == "'": if j+1<len(word) and word[j+1] == 's': j = j + 1 continue word = word.replace(l," ") #print(j,word[j]) j += 1 temp_l[i] = word.lower() i=i+1 #spliting is being done here beacause in sentences line here---so after punctuation removal it should #become "here so" content = " ".join(temp_l) return content """ Explanation: <u>Do preprocessing</u>: Remove the punctuations and lowercase the tokens End of explanation """ #returns : void #arg: string,dict,dict,dict,dict #loads the corpus for the dataset and makes the frequency count of quadgram ,bigram and trigram strings def loadCorpus(file_path, bi_dict, tri_dict, quad_dict, vocab_dict): w1 = '' #for storing the 3rd last word to be used for next token set w2 = '' #for storing the 2nd last word to be used for next token set w3 = '' #for storing the last word to be used for next token set token = [] #total no. of words in the corpus word_len = 0 #open the corpus file and read it line by line with open(file_path,'r') as file: for line in file: #split the string into word tokens temp_l = line.split() i = 0 j = 0 #does the same as the removePunctuations() function,implicit declaration for performance reasons #changes the word to lowercase and removes punctuations from it for word in temp_l : j = 0 #print(len(word)) for l in word : if l in string.punctuation: if l == "'": if j+1<len(word) and word[j+1] == 's': j = j + 1 continue word = word.replace(l," ") #print(j,word[j]) j += 1 temp_l[i] = word.lower() i=i+1 #spliting is being done here beacause in sentences line here---so after punctuation removal it should #become "here so" content = " ".join(temp_l) token = content.split() word_len = word_len + len(token) if not token: continue #add the last word from previous line if w3!= '': token.insert(0,w3) temp0 = list(ngrams(token,2)) #since we are reading line by line some combinations of word might get missed for pairing #for trigram #first add the previous words if w2!= '': token.insert(0,w2) #tokens for trigrams temp1 = list(ngrams(token,3)) #insert the 3rd last word from previous line for quadgram pairing if w1!= '': token.insert(0,w1) #add new unique words to the vocaulary set if available for word in token: if word not in vocab_dict: vocab_dict[word] = 1 else: vocab_dict[word]+= 1 #tokens for quadgrams temp2 = list(ngrams(token,4)) #count the frequency of the bigram sentences for t in temp0: sen = ' '.join(t) bi_dict[sen] += 1 #count the frequency of the trigram sentences for t in temp1: sen = ' '.join(t) tri_dict[sen] += 1 #count the frequency of the quadgram sentences for t in temp2: sen = ' '.join(t) quad_dict[sen] += 1 #then take out the last 3 words n = len(token) #store the last few words for the next sentence pairing w1 = token[n -3] w2 = token[n -2] w3 = token[n -1] return word_len 
""" Explanation: Tokenize and load the corpus data End of explanation """ #creates dict for storing probable words with their probabilities for a trigram sentence # ADD 1 Smoothing used #returns: void #arg: dict,dict,dict,dict,dict def findQuadgramProbAdd1(vocab_dict, bi_dict, tri_dict, quad_dict, quad_prob_dict): i = 0 V = len(vocab_dict) #using the fourth word of the quadgram sentence as the probable word and calculate its #probability,here ADD 1 smoothing has been used during the probability calculation for quad_sen in quad_dict: quad_token = quad_sen.split() #trigram sentence for key tri_sen = ' '.join(quad_token[:3]) #find the probability #add 1 smoothing has been used prob = ( quad_dict[quad_sen] + 1 ) / ( tri_dict[tri_sen] + V) #if the trigram sentence is not present in the Dictionary then add it if tri_sen not in quad_prob_dict: quad_prob_dict[tri_sen] = [] quad_prob_dict[tri_sen].append([prob,quad_token[-1]]) #the trigram sentence is present but the probable word is missing,then add it else: quad_prob_dict[tri_sen].append([prob,quad_token[-1]]) prob = None quad_token = None tri_sen = None """ Explanation: Create a Hash Table for Probable words for Trigram sentences End of explanation """ #for creating prob dict for trigram probabilities #creates dict for storing probable words with their probabilities for a trigram sentence # ADD 1 Smoothing used #returns: void #arg: dict,dict,dict,dict def findTrigramProbAdd1(vocab_dict, bi_dict, tri_dict, tri_prob_dict): #vocabulary length V = len(vocab_dict) #create a dictionary of probable words with their probabilities for #trigram probabilites,key is a bigram and value is a list of prob and word for tri in tri_dict: tri_token = tri.split() #bigram sentence for key bi_sen = ' '.join(tri_token[:2]) #find the probability #add 1 smoothing has been used prob = ( tri_dict[tri] + 1 ) / ( bi_dict[bi_sen] + V) #tri_prob_dict is a dict of list #if the bigram sentence is not present in the Dictionary then add it if bi_sen not in tri_prob_dict: tri_prob_dict[bi_sen] = [] tri_prob_dict[bi_sen].append([prob,tri_token[-1]]) #the bigram sentence is present but the probable word is missing,then add it else: tri_prob_dict[bi_sen].append([prob,tri_token[-1]]) prob = None tri_token = None bi_sen = None """ Explanation: For creating Probability Dictionary for Trigram Probabilities End of explanation """ #for creating prob dict for bigram probabilities #creates dict for storing probable words with their probabilities for a trigram sentence # ADD 1 Smoothing used #returns: void #arg: dict,dict,dict,dict def findBigramProbAdd1(vocab_dict, bi_dict, bi_prob_dict): V = len(vocab_dict) #create a dictionary of probable words with their probabilities for bigram probabilites for bi in bi_dict: bi_token = bi.split() #unigram for key unigram = bi_token[0] #find the probability #add 1 smoothing has been used prob = ( bi_dict[bi] + 1 ) / ( vocab_dict[unigram] + V) #bi_prob_dict is a dict of list #if the unigram sentence is not present in the Dictionary then add it if unigram not in bi_prob_dict: bi_prob_dict[unigram] = [] bi_prob_dict[unigram].append([prob,bi_token[-1]]) #the unigram sentence is present but the probable word is missing,then add it else: bi_prob_dict[unigram].append([prob,bi_token[-1]]) prob = None bi_token = None unigram = None """ Explanation: For creating Probability Dictionary for Bigram Probabilities End of explanation """ #finds the lambda values required for doing Interpolation #arg: int, dict, dict, dict, dict #returns: list def 
estimateParameters(token_len, vocab_dict, bi_dict, tri_dict, quad_dict): max_prob = -9999999999999999999.0 curr_prob = 0.0 parameters = [0.0,0.0,0.0,0.0] i = 1 #load the held out data file = open('held_out_corpus.txt','r') held_out_data = file.read() file.close() #remove punctuations and other cleaning stuff held_out_data = removePunctuations(held_out_data) held_out_data = held_out_data.split() #make quad tokens for parameter estimation quad_token_heldout = list(ngrams(held_out_data,4)) #for storing the stats #f = open('interpolation_prob_stats.txt','w') #lambda values1 and 4 l1 = 0 l4 = 0 while l1 <= 1.0: l2 = 0 while l2 <= 1.0: l3 = 0 while l3 <= 1.0: #when the sum of lambdas is greater than 1 or when all 4 are zero we don't need to check so skip if l1 == 0 and l2 == 0 and l3 == 0 or ((l1+l2+l3)>1): l3 += 0.1 i += 1 continue #find lambda 4 l4 = 1- (l1 + l2 + l3) curr_prob = 0 qc = [0] bc = [0] tc = [0] #find the probability for the held out set using the current lambda values for quad in quad_token_heldout: #take log of prob to avoid underflow curr_prob += log10( interpolatedProbability(quad,token_len, vocab_dict, bi_dict, tri_dict, quad_dict,qc,bc,tc,l1, l2, l3, l4) ) if curr_prob > max_prob: max_prob = curr_prob parameters[0] = l1 parameters[1] = l2 parameters[2] = l3 parameters[3] = l4 l3 += 0.1 i += 1 l2 += 0.1 l1 += 0.1 #f.write('\n\n\nL1: '+str(parameters[0])+' L2: '+str(parameters[1])+' L3: '+str(parameters[2])+' L4: '+str(parameters[3])+' MAX PROB: '+str(max_prob)+'\n') #f.close() return parameters """ Explanation: <u>Parameter estimation for Interpolation </u> For estimating parameters we try to maximise the value of lambdas L1,L2,L3 and L4<br> We do that by try all possible combinations of lambdas with step size 0.1 and try to maximise the <br> probabilty of held out data End of explanation """ #returns: float #arg: list,list,dict,dict,dict,dict,float,float,float,float #for calculating the interpolated probablity given the Trigram sentence and the given word def interpolatedProbability(quad_token,token_len, vocab_dict, bi_dict, tri_dict, quad_dict, qc, tc, bc, l1 = 0.25, l2 = 0.25, l3 = 0.25 , l4 = 0.25): V = len(vocab_dict) sen = ' '.join(quad_token) prob = ( l1*((quad_dict[sen] + 1)/ (tri_dict[' '.join(quad_token[0:3])] + V)) + l2*((tri_dict[' '.join(quad_token[1:4])] + 1) / (bi_dict[' '.join(quad_token[1:3])] + V)) + l3*((bi_dict[' '.join(quad_token[2:4])] + 1) / (vocab_dict[quad_token[2]] + V)) + l4*((vocab_dict[quad_token[3]] + 1) / (token_len + V)) ) if sen in quad_dict: qc[0] += 1 if ' '.join(quad_token[1:4]) in tri_dict: tc[0] += 1 if ' '.join(quad_token[2:4]) in bi_dict: bc[0] += 1 #since log10(1) is zero so it doesn't add upto anything but log10(0) is undefined if prob <= 0: return 1 return prob """ Explanation: <u> For Computing Interpolated Probability</u> End of explanation """ #for sorting the probable word acc. 
#returns: void
#arg: dict, dict, dict
def sortProbWordDict(bi_prob_dict, tri_prob_dict, quad_prob_dict):
    #sort bigram dict
    for key in bi_prob_dict:
        if len(bi_prob_dict[key]) > 1:
            bi_prob_dict[key] = sorted(bi_prob_dict[key], reverse = True)

    #sort trigram dict
    for key in tri_prob_dict:
        if len(tri_prob_dict[key]) > 1:
            tri_prob_dict[key] = sorted(tri_prob_dict[key], reverse = True)

    #sort quadgram dict and keep only the top two candidates
    for key in quad_prob_dict:
        if len(quad_prob_dict[key]) > 1:
            quad_prob_dict[key] = sorted(quad_prob_dict[key], reverse = True)[:2]
"""
Explanation: Sort the probable words
End of explanation
"""
#pick the most probable words from the bi, tri and quad prob dicts as word prediction candidates
#returns: list[float,string]
#arg: string,dict,dict,dict
def chooseWords(sen, bi_prob_dict, tri_prob_dict, quad_prob_dict):
    word_choice = []
    token = sen.split()
    if token[-1] in bi_prob_dict:
        word_choice += bi_prob_dict[token[-1]][:1]
        #print('Word Choice bi dict')
    if ' '.join(token[1:]) in tri_prob_dict:
        word_choice += tri_prob_dict[' '.join(token[1:])][:1]
        #print('Word Choice tri_dict')
    if ' '.join(token) in quad_prob_dict:
        word_choice += quad_prob_dict[' '.join(token)][:1]
        #print('Word Choice quad_dict')

    return word_choice
"""
Explanation: <u>Word Prediction related Driver functions</u>
For choosing prediction word candidates for Word Prediction
End of explanation
"""
#does prediction for the sentence using Interpolation
#Uses Add-1 Smoothing
#returns: string
#arg: string,dict,dict,dict,dict,int,list,list
def doInterpolatedPredictionAdd1(sen, bi_dict, tri_dict, quad_dict, vocab_dict, token_len, word_choice, param):
    pred = ''
    max_prob = 0.0
    V = len(vocab_dict)

    #for each word choice find the interpolated probability and decide
    for word in word_choice:
        key = sen + ' ' + word[1]
        quad_token = key.split()

        prob = (  param[0]*((quad_dict[key] + 1) / (tri_dict[' '.join(quad_token[0:3])] + V))
                + param[1]*((tri_dict[' '.join(quad_token[1:4])] + 1) / (bi_dict[' '.join(quad_token[1:3])] + V))
                + param[2]*((bi_dict[' '.join(quad_token[2:4])] + 1) / (vocab_dict[quad_token[2]] + V))
                + param[3]*((vocab_dict[quad_token[3]] + 1) / (token_len + V))
              )

        if prob > max_prob:
            max_prob = prob
            pred = word

    #return pred instead of pred[1] if the probability is needed along with the word
    if pred:
        return pred[1]
    else:
        return ''
"""
Explanation: Finds the Predicted Word using Interpolation with Add-1 Smoothing
End of explanation
"""
#for taking input from the user
#returns: string
#arg: void
def takeInput():
    cond = False
    #take input until at least three words have been entered
    while(cond == False):
        sen = input('Enter the string\n')
        sen = removePunctuations(sen)
        temp = sen.split()
        if len(temp) < 3:
            print("Please enter at least 3 words!")
        else:
            cond = True
            temp = temp[-3:]
            sen = " ".join(temp)
    return sen
"""
Explanation: <u>For Taking input from the User</u>
End of explanation
"""
#computes the score for the test data, i.e. the number of right predictions against the number of wrong predictions
#return:int
#arg:list,dict,dict,dict,dict
def computeTestScore(test_token, bi_dict, tri_dict, quad_dict, vocab_dict,
                     bi_prob_dict, tri_prob_dict, quad_prob_dict, token_len, param):
    #increment the score for a correct prediction, otherwise increment the wrong counter
    score = 0
    wrong = 0
    total = 0
    with open('Test_Scores/add1_smoothing_score.txt','w') as w:
        for sent in test_token:
            sen_token = sent[:3]
            sen = " ".join(sen_token)
            correct_word = sent[3]

            #select probable word candidates for prediction
            word_choice = chooseWords(sen, bi_prob_dict, tri_prob_dict, quad_prob_dict)
            result = doInterpolatedPredictionAdd1(sen, bi_dict, tri_dict, quad_dict, vocab_dict, token_len, word_choice, param)

            if result == correct_word:
                score += 1
            else:
                wrong += 1
            total += 1

        w.write('Total Word Predictions: '+str(total) + '\n' +'Correct Predictions: '+str(score) + '\n'+'Wrong Predictions: '+str(wrong) + '\n'+'ACCURACY: '+str((score/total)*100)+'%' )
        #print stats
        print('Total Word Predictions: '+str(total) + '\n' +'Correct Predictions: '+str(score) + '\n'+'Wrong Predictions: '+str(wrong) + '\n'+'ACCURACY: '+str((score/total)*100) )
    return score
"""
Explanation: <u>Test Score, Perplexity Calculation</u>
For computing the Test Score
End of explanation
"""
#return:float
#arg:list,int,dict,dict,dict,dict
#computes the perplexity of the test data
def computePerplexity(test_tokens, token_len, tri_dict, quad_dict, vocab_dict, prob_dict):
    perplexity = float(1.0)
    V = len(vocab_dict)

    #build the quadgrams of the test data and iterate over them, not over the training counts
    test_quadgrams = list(ngrams(test_tokens, 4))
    #normalise by the number of test quadgrams rather than the training token count
    n = len(test_quadgrams)

    for item in test_quadgrams:
        quad_sen = ' '.join(item)
        tri_sen = ' '.join(item[0:3])
        prob = (quad_dict[quad_sen] + 1) / (tri_dict[tri_sen] + V)
        #perplexity is the geometric mean of the inverse probabilities
        perplexity = perplexity * ( prob**(-1./n))

    #append so the accuracy written by computeTestScore is not overwritten
    with open('Test_Scores/add1_smoothing_score.txt','a') as w:
        w.write('\nPerplexity: '+str(perplexity))
    return perplexity
"""
Explanation: For Computing the Perplexity
End of explanation
"""
#return: void
#arg:string,string,dict,dict,dict,dict,dict
#Used for testing the Language Model
def trainCorpus(train_file, test_file, bi_dict, tri_dict, quad_dict, vocab_dict, prob_dict):
    score = 0
    #load the training corpus for the dataset
    token_len = loadCorpus(train_file, bi_dict, tri_dict, quad_dict, vocab_dict)
    print("---Processing Time for Corpus Loading: %s seconds ---" % (time.time() - start_time))

    start_time1 = time.time()

    #lambdas for interpolation, found earlier using the estimateParameters function
    param = [0.7,0.1,0.1,0.1]
    #param = estimateParameters(token_len, vocab_dict, bi_dict, tri_dict, quad_dict)
    #print(param)

    #create trigram Probability Dictionary
    findTrigramProbAdd1(vocab_dict, bi_dict, tri_dict, tri_prob_dict)
    #create bigram Probability Dictionary
    findBigramProbAdd1(vocab_dict, bi_dict, bi_prob_dict)
    #create quadgram Probability Dictionary
    findQuadgramProbAdd1(vocab_dict, bi_dict, tri_dict, quad_dict, quad_prob_dict)
    #sort the probability dictionaries
    sortProbWordDict(bi_prob_dict, tri_prob_dict, quad_prob_dict)
    gc.collect()
    print("---Preprocessing Time for Creating Probable Word Dict: %s seconds ---" % (time.time() - start_time1))

    ### TESTING WITH TEST CORPUS
    test_data = ''
    #Now load the test corpus
    with open(test_file,'r') as file:
        test_data = file.read()

    #remove punctuations from the test data
    test_data = removePunctuations(test_data)
    #split the test data into tokens and then into 4-word tuples
    test_token = test_data.split()
    test_quadgrams = list(ngrams(test_token,4))

    #choose most probable words for prediction
    start_time2 = time.time()
    score = computeTestScore(test_quadgrams, bi_dict, tri_dict, quad_dict, vocab_dict, bi_prob_dict,
tri_prob_dict, quad_prob_dict, token_len,param) print('Score:',score) print("---Processing Time for computing score: %s seconds ---" % (time.time() - start_time2)) start_time3 = time.time() perplexity = computePerplexity(test_token,token_len,tri_dict,quad_dict,vocab_dict,prob_dict) print('Perplexity:',perplexity) print("---Processing Time for computing Perplexity: %s seconds ---" % (time.time() - start_time3)) """ Explanation: <u>Driver Function for Testing the Language Model</u> End of explanation """ def main(): #variable declaration vocab_dict = defaultdict(int) #for storing the different words with their frequencies bi_dict = defaultdict(int) #for keeping count of sentences of two words tri_dict = defaultdict(int) #for keeping count of sentences of three words quad_dict = defaultdict(int) #for keeping count of sentences of four words quad_prob_dict = defaultdict(list) #for storing the probable words for Quadgram sentences tri_prob_dict = defaultdict(list) #for storing the probable words for Trigram sentences bi_prob_dict = defaultdict(list) #for storing the probable words for Bigram sentences train_file = 'corpusfile.txt' #load the corpus for the dataset token_len = loadCorpus(train_file, bi_dict, tri_dict, quad_dict, vocab_dict) #estimate the lambdas for interpolation #param = estimateParameters(token_len, vocab_dict, bi_dict, tri_dict, quad_dict) param = [0.7,0.1,0.1,0.1] #create bigram Probability Dictionary findBigramProbAdd1(vocab_dict, bi_dict, bi_prob_dict) #create trigram Probability Dictionary findTrigramProbAdd1(vocab_dict, bi_dict, tri_dict, tri_prob_dict) #create quadgram Probability Dictionary findQuadgramProbAdd1(vocab_dict, bi_dict, tri_dict, quad_dict, quad_prob_dict) #sort the probability dictionaries sortProbWordDict(bi_prob_dict, tri_prob_dict, quad_prob_dict) #take user input input_sen = takeInput() ### PREDICTION #choose most probable words for prediction word_choice = chooseWords(input_sen, bi_prob_dict, tri_prob_dict, quad_prob_dict) prediction = doInterpolatedPredictionAdd1(input_sen, bi_dict, tri_dict, quad_dict, vocab_dict,token_len, word_choice, param) print('Word Prediction:',prediction) if __name__ == '__main__': main() """ Explanation: <u>main function</u> End of explanation """ #variable declaration vocab_dict = defaultdict(int) #for storing the different words with their frequencies bi_dict = defaultdict(int) #for keeping count of sentences of two words tri_dict = defaultdict(int) #for keeping count of sentences of three words quad_dict = defaultdict(int) #for keeping count of sentences of four words quad_prob_dict = defaultdict(list) #for storing the probable words for Quadgram sentences tri_prob_dict = defaultdict(list) #for storing the probable words for Trigram sentences bi_prob_dict = defaultdict(list) #for storing the probable words for Bigram sentences """ Explanation: <i><u>For Debugging Purpose Only</u></i> <i>Ignore running the cells below if not debugging</i> End of explanation """ train_file = 'training_corpus.txt' test_file = 'test_corpus.txt' #load the corpus for the dataset token_len = trainCorpus(train_file,test_file,bi_dict,tri_dict,quad_dict,vocab_dict,quad_prob_dict) train_file = 'corpusfile.txt' #load the corpus for the dataset token_len = loadCorpus(train_file, bi_dict, tri_dict, quad_dict, vocab_dict) #estimate the lambdas for interpolation #param = estimateParameters(token_len, vocab_dict, bi_dict, tri_dict, quad_dict) param = [0.8,0.2,0.0,0.0] #create bigram Probability Dictionary findBigramProbAdd1(vocab_dict, bi_dict, 
bi_prob_dict) #create trigram Probability Dictionary findTrigramProbAdd1(vocab_dict, bi_dict, tri_dict, tri_prob_dict) #create quadgram Probability Dictionary findQuadgramProbAdd1(vocab_dict, bi_dict, tri_dict, quad_dict, quad_prob_dict) #sort the probability dictionaries sortProbWordDict(bi_prob_dict, tri_prob_dict, quad_prob_dict) #FOR DEBUGGING ONLY writeProbDicts(bi_prob_dict, tri_prob_dict, quad_prob_dict) #take user input input_sen = takeInput() ### PREDICTION start_time2 = time.time() #choose most probable words for prediction word_choice = chooseWords(input_sen, bi_prob_dict, tri_prob_dict, quad_prob_dict) prediction = doInterpolatedPredictionAdd1(input_sen, bi_dict, tri_dict, quad_dict, vocab_dict,token_len, word_choice, param) #prediction = doPrediction(input_sen,prob_dict) print('Word Prediction:',prediction) print("---Time for Prediction Operation: %s seconds ---" % (time.time() - start_time2)) """ Explanation: For Testing the Language Model Calculates % Accuracy and Perplexity<br> NOTE : If this is run then no need to run the cells following it End of explanation """
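#a minimal worked example of the Add-1 interpolated quadgram probability
#the tiny counts below are made up purely for illustration; they are not taken from any of the corpus files used above
toy_vocab = {'i': 3, 'like': 2, 'green': 1, 'eggs': 1}
toy_bi = {'i like': 2, 'like green': 1, 'green eggs': 1}
toy_tri = {'i like green': 1, 'like green eggs': 1}
toy_quad = {'i like green eggs': 1}
toy_token_len = sum(toy_vocab.values())
toy_V = len(toy_vocab)
toy_param = [0.7, 0.1, 0.1, 0.1]

toy_quad_token = ('i', 'like', 'green', 'eggs')
toy_prob = (  toy_param[0]*((toy_quad[' '.join(toy_quad_token)] + 1) / (toy_tri[' '.join(toy_quad_token[0:3])] + toy_V))
            + toy_param[1]*((toy_tri[' '.join(toy_quad_token[1:4])] + 1) / (toy_bi[' '.join(toy_quad_token[1:3])] + toy_V))
            + toy_param[2]*((toy_bi[' '.join(toy_quad_token[2:4])] + 1) / (toy_vocab[toy_quad_token[2]] + toy_V))
            + toy_param[3]*((toy_vocab[toy_quad_token[3]] + 1) / (toy_token_len + toy_V))
          )
print('Interpolated Add-1 probability of "eggs" after "i like green":', toy_prob)
"""
Explanation: A small hand-computable sketch of the same Add-1 interpolation formula that interpolatedProbability and doInterpolatedPredictionAdd1 apply, using made-up counts so each of the four terms is easy to verify by hand.
End of explanation
"""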
ioshchepkov/SHTOOLS
examples/notebooks/tutorial_6.ipynb
bsd-3-clause
%matplotlib inline from __future__ import print_function # only necessary if using Python 2.x import numpy as np from pyshtools import SHCoeffs lmax = 30 coeffs = SHCoeffs.from_zeros(lmax) coeffs.set_coeffs(values=[1], ls=[10], ms=[0]) """ Explanation: 3D Spherical Harmonic Plots This example demonstrates how to generate a simple 3-dimensional plot of the data in an SHGrid class instance. We start by generating a set of spherical harmonic coefficients that is zero, whith the exception of a single harmonic: End of explanation """ grid = coeffs.expand() fig, ax = grid.plot3d(elevation=20, azimuth=30) """ Explanation: To plot the data, we first expand it on a grid, and then use the method plot3d(): End of explanation """ ldata = 30 degrees = np.arange(ldata+1, dtype=float) degrees[0] = np.inf power = degrees**(-2) coeffs2 = SHCoeffs.from_random(power) grid2 = coeffs2.expand() fig, ax = grid2.plot3d(elevation=20, azimuth=30) """ Explanation: Let's try a somewhat more complicated function. Here we will calculate a random realization of a process whose power spectrum follows a power law with exponent -2: End of explanation """
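# a quick sanity plot of the assumed l**(-2) power law that was fed to SHCoeffs.from_random
# this only reuses the degrees and power arrays defined above plus plain matplotlib; nothing here calls pyshtools
import matplotlib.pyplot as plt

fig2, ax2 = plt.subplots()
ax2.loglog(degrees[1:], power[1:], marker='o')  # degree 0 is skipped since its power was set to zero via 1/inf
ax2.set(xlabel='spherical harmonic degree l', ylabel='power', title='input power spectrum ~ l**(-2)')
plt.show()
"""
Explanation: A log-log plot of the power-law spectrum used for the random realization above; the individual coefficients are random, but their expected power per degree should roughly follow this straight line.
End of explanation
"""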
mattssilva/UW-Machine-Learning-Specialization
Week 1/.ipynb_checkpoints/Getting Started with SFrames-checkpoint.ipynb
mit
import graphlab # Set product key on this computer. After running this cell, you will not need to re-enter your product key. graphlab.product_key.set_product_key('your product key here') # Limit number of worker processes. This preserves system memory, which prevents hosted notebooks from crashing. graphlab.set_runtime_config('GRAPHLAB_DEFAULT_NUM_PYLAMBDA_WORKERS', 4) # Output active product key. graphlab.product_key.get_product_key() """ Explanation: Fire up GraphLab Create We always start with this line before using any part of GraphLab Create. It can take up to 30 seconds to load the GraphLab library - be patient! The first time you use GraphLab create, you must enter a product key to license the software for non-commerical academic use. To register for a free one-year academic license and obtain your key, go to dato.com. End of explanation """ sf = graphlab.SFrame('people-example.csv') """ Explanation: Load a tabular data set End of explanation """ sf # we can view first few lines of table sf.tail() # view end of the table """ Explanation: SFrame basics End of explanation """ # .show() visualizes any data structure in GraphLab Create # If you want Canvas visualization to show up on this notebook, # add this line: graphlab.canvas.set_target('ipynb') sf['age'].show(view='Categorical') """ Explanation: GraphLab Canvas End of explanation """ sf['Country'] sf['age'] """ Explanation: Inspect columns of dataset End of explanation """ sf['age'].mean() sf['age'].max() """ Explanation: Some simple columnar operations End of explanation """ sf sf['Full Name'] = sf['First Name'] + ' ' + sf['Last Name'] sf sf['age'] * sf['age'] """ Explanation: Create new columns in our SFrame End of explanation """ sf['Country'] sf['Country'].show() def transform_country(country): if country == 'USA': return 'United States' else: return country transform_country('Brazil') transform_country('Brasil') transform_country('USA') sf['Country'].apply(transform_country) sf['Country'] = sf['Country'].apply(transform_country) sf """ Explanation: Use the apply function to do a advance transformation of our data End of explanation """
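# the same apply() pattern also works with an inline lambda instead of a named function
# this assumes the sf loaded above is still in memory; the 'Country Upper' column name is arbitrary
sf['Country Upper'] = sf['Country'].apply(lambda country: country.upper())
sf
"""
Explanation: apply() accepts any Python callable, so quick one-off transformations can be written as lambdas; here the country names are simply upper-cased as a throwaway example.
End of explanation
"""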
jepegit/cellpy
dev_utils/lookup/cellpy_hdf5_tweaking.ipynb
mit
%load_ext autoreload %autoreload 2 from pathlib import Path from pprint import pprint import pandas as pd import cellpy """ Explanation: Tweaking the cellpy file format A cellpy file is a hdf5-type file. From v.5 it contains five top-level directories. ```python from cellreader.py raw_dir = prms._cellpyfile_raw step_dir = prms._cellpyfile_step summary_dir = prms._cellpyfile_summary meta_dir = "/info" # hard-coded fid_dir = prms._cellpyfile_fid from prms.py _cellpyfile_root = "CellpyData" _cellpyfile_raw = "/raw" _cellpyfile_step = "/steps" _cellpyfile_summary = "/summary" _cellpyfile_fid = "/fid" ``` End of explanation """ create_cellpyfile = False filename_full = Path( "/Users/jepe/cellpy_data/cellpyfiles/20181026_cen31_03_GITT_cc_01.h5" ) filename_first = Path( "/Users/jepe/cellpy_data/cellpyfiles/20181026_cen31_03_GITT_cc_01_a.h5" ) rawfile_full = Path("/Users/jepe/cellpy_data/raw/20181026_cen31_03_GITT_cc_01.res") rawfile_full2 = Path("/Users/jepe/cellpy_data/raw/20181026_cen31_03_GITT_cc_02.res") if create_cellpyfile: print("--loading raw-file".ljust(50, "-")) c = cellpy.get(rawfile_full, mass=0.23) print("--saving".ljust(50, "-")) c.save(filename_full) print("--splitting".ljust(50, "-")) c1, c2 = c.split(4) c1.save(filename_first) else: print("--loading cellpy-files".ljust(50, "-")) c1 = cellpy.get(filename_first) c = cellpy.get(filename_full) """ Explanation: Creating a fresh file from a raw-file End of explanation """ cellpy.log.setup_logging(default_level="INFO") raw_files = [rawfile_full, rawfile_full2] # raw_files = [rawfile_full2] cellpy_file = filename_full c = cellpy.cellreader.CellpyData().dev_update_loadcell(raw_files, cellpy_file) """ Explanation: Update with loadcell End of explanation """ c1 = cellpy.get(filename_first, logging_mode="INFO") c1.dev_update(rawfile_full) """ Explanation: Update with update End of explanation """ from cellpy import prms parent_level = prms._cellpyfile_root raw_dir = prms._cellpyfile_raw step_dir = prms._cellpyfile_step summary_dir = prms._cellpyfile_summary meta_dir = "/info" # hard-coded fid_dir = prms._cellpyfile_fid raw_dir parent_level + raw_dir """ Explanation: Looking at cellpyยดs internal parameter End of explanation """ print(f"name: {filename_full.name}") print(f"size: {filename_full.stat().st_size/1_048_576:0.2f} Mb") with pd.HDFStore(filename_full) as store: pprint(store.keys()) store = pd.HDFStore(filename_full) m = store.select(parent_level + meta_dir) s = store.select(parent_level + summary_dir) t = store.select(parent_level + step_dir) f = store.select(parent_level + fid_dir) store.close() f.T """ Explanation: Looking at a cellpy file using pandas End of explanation """ c = cellpy.get(filename_full) cc = c.cell fid = cc.raw_data_files[0] fid.last_data_point # This should be used when I will implement reading only new data """ Explanation: Looking at a cellpy file using cellpy End of explanation """
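# a one-screen overview of what each top-level group of the cellpy file contains
# this only repeats the pandas HDFStore.select calls used above for the group names defined in prms
with pd.HDFStore(filename_full) as store:
    for name, key in [("raw", raw_dir), ("steps", step_dir), ("summary", summary_dir), ("info", meta_dir), ("fid", fid_dir)]:
        frame = store.select(parent_level + key)
        print(f"{name:<8} {parent_level + key:<22} shape={frame.shape}")
"""
Explanation: Looping over the five top-level directories listed earlier gives a compact summary of the file layout (number of rows and columns stored under each key).
End of explanation
"""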
ramseylab/networkscompbio
class20_partialcorr_python3.ipynb
apache-2.0
import pandas ## data file loading import numpy import sklearn.covariance ## for covariance matrix calculation import matplotlib.pyplot import matplotlib import pylab import scipy.stats ## for calculating the CDF of normal distribution import igraph ## for network visualization and finding components import math """ Explanation: CS446/546 - Class Session 20 - Partial correlation network In this class session we will continue with analyzing the tumor gene expression dataset from the NIH human bladder cancer cohort (M=414 tumors), building on what we learned in Class Session 19 (Correlation Network). In order to keep the analysis simple&ast;, in this notebook will restrict our analysis to a set of N=164 genes that are very highly expressed in bladder cancer. Using the 164 x 414 matrix of transcript abundance measurements, we will construct a network based on gene-gene partial correlation coefficients. We will also compare the distribution of partial correlation coefficients to the distribution of Pearson correlation coefficients. Do you think they will be different? In what way would you expect them to be different? &ast; Here, "simple" means that the covariance matrix will be nonsingular, so that we can obtain the partial correlation matrix by inversion. We'll import all of the python modules that we will need for this exercise End of explanation """ gene_matrix_for_network_df = pandas.read_csv("shared/bladder_cancer_genes_tcga.txt", sep="\t") """ Explanation: Read the tab-deliminted text file of gene expression measurements (rows correspond to genes, columns correspond to bladder tumor samples). (use Pandas, pandas.read.csv, and as_matrix). As always, sanity check that the file that you loaded has the expected dimensions (4,473 x 414) using shape. End of explanation """ gene_matrix_for_network = gene_matrix_for_network_df.as_matrix() """ Explanation: Convert your data frame to a numpy matrix, using the pandas.DataFrame.as_matrix method. End of explanation """ print(gene_matrix_for_network.shape) """ Explanation: As always, sanity check that the file that you loaded has the expected dimensions (4,473 x 414) using shape. End of explanation """ genes_median_expression = numpy.median(gene_matrix_for_network, axis=1) """ Explanation: Compute the median expression level for each row of your matrix End of explanation """ gene_matrix_np = numpy.array(gene_matrix_for_network) genes_keep = numpy.where(genes_median_expression > 12) matrix_filt = gene_matrix_np[genes_keep, ][0] matrix_filt.shape N = matrix_filt.shape[0] """ Explanation: Filter the matrix to include only rows for which the gene's median expression is > 12; matrix should now be 164 x 414; this will enable us to easily compute the partial correlation matrix using the inverse of the covariance matrix. Print the size of the filtered matrix, as a sanity check. End of explanation """ matrix_filt.shape """ Explanation: Print the shape of your filtered matrix, as a sanity check. It should be 164x414. End of explanation """ matrix_cor = numpy.corrcoef(matrix_filt) """ Explanation: Compute the 164 x 164 matrix of gene-gene Pearson correlation coefficients, using numpy.corrcoef (this function treats each row as a random variable, so you don't have to do any transposing of the matrix, unlike the situation in R). 
End of explanation """ matrix_cov = sklearn.covariance.empirical_covariance(numpy.matrix.transpose(matrix_filt)) """ Explanation: Compute the covariance matrix using sklearn.covariance.empirical_covariance (from the sklearn.covariance package, . Make sure you take the transpose of the matrix_filt matrix before passing it to the empirical_covariance function! End of explanation """ matrix_cov_inv = numpy.linalg.inv(matrix_cov) """ Explanation: Use numpy.linalg.inv to get the inverse matrix. End of explanation """ matrix_pcor = -matrix_cov_inv for i in range(N): for j in range(N): matrix_pcor[i,j] /= numpy.sqrt(matrix_cov_inv[i,i]*matrix_cov_inv[j,j]) print(matrix_pcor.shape) """ Explanation: Use a double for loop to "scale" the negative of the precision matrix, which will give you the partial correlation. Print the dimensions of the matrix you get back, as a sanity check. End of explanation """ cor_values = matrix_cor[numpy.where(numpy.tri(*matrix_cor.shape, k=-1))] pcor_values = matrix_pcor[numpy.where(numpy.tri(*matrix_pcor.shape, k=-1))] print(len(cor_values)) print(len(pcor_values)) """ Explanation: Get the correlation coefficients and the partial correlation coefficients of the lower triangle of the matrix (not including the diagonal), as two vectors cor_values and pcor_values; your resulting vectors should each have length 13,366. You will want to use numpy.tri and numpy.where (see class session 19 exercise) End of explanation """ matplotlib.pyplot.hist(cor_values, normed=1, alpha=0.5, label="cor") matplotlib.pyplot.hist(pcor_values, normed=1, alpha=0.5, label="pcor") matplotlib.pyplot.legend(loc="upper left") matplotlib.pyplot.xlabel("R") matplotlib.pyplot.ylabel("frequency") matplotlib.pyplot.show() """ Explanation: plot the histograms of the correlation coefficients (upper triangle only) and the partial correlation coefficients, on the same plot using alpha blending (refer to class session 17 exercise) End of explanation """ z_scores = 0.5*numpy.log((1+pcor_values)/ (1-pcor_values)) """ Explanation: Fisher transform the partial correlation values, using numpy.log: End of explanation """ M = gene_matrix_for_network_df.shape[1] P_values = 2*scipy.stats.norm.cdf(-numpy.abs(z_scores)*(math.sqrt((M-N-5)))) """ Explanation: Compute a p-value for each gene pair (upper triangle only), using the fact that sqrt(M-N-5) times the fisher Z sore should be approximately univariate normal (with zero mean) under the null hypothesis that a given gene pair's measurements (conditioned on the measurements for all the other 162 genes) are independent. You will want to use scipy.stats.norm.cdf, numpy.abs, and math.sqrt function (see class session 19 exercise). End of explanation """ len(numpy.where(P_values < 0.01)[0]) """ Explanation: How many gene pairs have a P value less than 0.01? (use which and length) End of explanation """ inds_tri = numpy.where(numpy.tri(*matrix_pcor.shape, k=-1)) inds_sig = numpy.where(P_values < 0.01) graph_edge_list = list(zip(inds_tri[1][inds_sig].tolist(), inds_tri[0][inds_sig].tolist())) final_network = igraph.Graph.TupleList(graph_edge_list, directed=False) final_network.summary() """ Explanation: What are the sizes of the components in the undirected graph whose edges have P &lt; 0.05 in the statistical test that you did? 
You will need to use zip, tolist, list, and igraph.Graph.TupleList (see class session 19 exercise) End of explanation """ degree_dist = final_network.degree_distribution() xs, ys = zip(*[(left, count) for left, _, count in degree_dist.bins()]) matplotlib.pyplot.loglog(xs, ys, marker="o") pylab.xlabel("k") pylab.ylabel("N(k)") pylab.show() """ Explanation: Plot the graph degree distribution on log-log scale End of explanation """
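# cross-check of the double for loop used earlier: the same partial correlation matrix as one vectorized expression
# d holds the square roots of the diagonal of the precision matrix (matrix_cov_inv is unchanged by the loop above)
d = numpy.sqrt(numpy.diag(matrix_cov_inv))
matrix_pcor_vec = -matrix_cov_inv / numpy.outer(d, d)
print(numpy.allclose(matrix_pcor_vec, matrix_pcor))
"""
Explanation: The outer-product form reproduces the element-wise scaling done in the double for loop, so allclose should print True; it is also much faster for larger gene sets.
End of explanation
"""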
roebius/deeplearning_keras2
nbs2/pytorch-tut.ipynb
apache-2.0
x = torch.Tensor(5, 3); x x = torch.rand(5, 3); x x.size() y = torch.rand(5, 3) x + y torch.add(x, y) result = torch.Tensor(5, 3) torch.add(x, y, out=result) result1 = torch.Tensor(5, 3) result1 = x + y result1 # anything ending in '_' is an in-place operation y.add_(x) # adds x to y in-place # standard numpy-like indexing with all bells and whistles x[:,1] """ Explanation: Getting Started Tensors are similar to numpy's ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing. End of explanation """ a = torch.ones(5) a b = a.numpy() b a.add_(1) print(a) print(b) # see how the numpy array changed in value """ Explanation: Numpy Bridge The torch Tensor and numpy array will share their underlying memory locations, and changing one will change the other. Converting torch Tensor to numpy Array End of explanation """ a = np.ones(5) b = torch.from_numpy(a) np.add(a, 1, out=a) print(a) print(b) # see how changing the np array changed the torch Tensor automatically """ Explanation: Converting numpy Array to torch Tensor End of explanation """ x = x.cuda() y = y.cuda() x+y """ Explanation: CUDA Tensors Tensors can be moved onto GPU using the .cuda function. End of explanation """ x = Variable(torch.ones(2, 2), requires_grad = True); x y = x + 2; y # y.creator # - creator seems not to be available with current pytorch version z = y * y * 3; z out = z.mean(); out # You never have to look at these in practice - this is just showing how the # computation graph is stored # - creator seems not to be available with current pytorch version # print(out.creator.previous_functions[0][0]) # print(out.creator.previous_functions[0][0].previous_functions[0][0]) out.backward() # d(out)/dx x.grad """ Explanation: Autograd: automatic differentiation Central to all neural networks in PyTorch is the autograd package. The autograd package provides automatic differentiation for all operations on Tensors. It is a define-by-run framework, which means that your backprop is defined by how your code is run, and that every single iteration can be different. autograd.Variable is the central class of the package. It wraps a Tensor, and supports nearly all of operations defined on it. Once you finish your computation you can call .backward() and have all the gradients computed automatically. You can access the raw tensor through the .data attribute, while the gradient w.r.t. this variable is accumulated into .grad. If you want to compute the derivatives, you can call .backward() on a Variable. End of explanation """ x = torch.randn(3) x = Variable(x, requires_grad = True) y = x * 2 while y.data.norm() < 1000: y = y * 2 y gradients = torch.FloatTensor([0.1, 1.0, 0.0001]) y.backward(gradients) x.grad """ Explanation: You should have got a matrix of 4.5. Because PyTorch is a dynamic computation framework, we can take the gradients of all kinds of interesting computations, even loops! 
End of explanation """ class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 6, 5) # 1 input channel, 6 output channels, 5x5 kernel self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16*5*5, 120) # like keras' Dense() self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x def num_flat_features(self, x): return reduce(operator.mul, x.size()[1:]) net = Net(); net """ Explanation: Neural Networks Neural networks can be constructed using the torch.nn package. An nn.Module contains layers, and a method forward(input)that returns the output. End of explanation """ net.cuda(); params = list(net.parameters()) len(params), params[0].size() """ Explanation: You just have to define the forward function, and the backward function (where gradients are computed) is automatically defined for you using autograd. The learnable parameters of a model are returned by net.parameters() End of explanation """ input = Variable(torch.randn(1, 1, 32, 32)).cuda() out = net(input); out net.zero_grad() # zeroes the gradient buffers of all parameters out.backward(torch.randn(1, 10).cuda()) # backprops with random gradients """ Explanation: The input to the forward is a Variable, and so is the output. End of explanation """ output = net(input) target = Variable(torch.range(1, 10)).cuda() # a dummy target, for example loss = nn.MSELoss()(output, target); loss """ Explanation: A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target. There are several different loss functions under the nn package. A simple loss is: nn.MSELoss which computes the mean-squared error between the input and the target. End of explanation """ # now we shall call loss.backward(), and have a look at gradients before and after net.zero_grad() # zeroes the gradient buffers of all parameters print('conv1.bias.grad before backward') print(net.conv1.bias.grad) loss.backward() print('conv1.bias.grad after backward') print(net.conv1.bias.grad) optimizer = optim.SGD(net.parameters(), lr = 0.01) # in your training loop: optimizer.zero_grad() # zero the gradient buffers output = net(input) loss = nn.MSELoss()(output, target) loss.backward() optimizer.step() # Does the update """ Explanation: Now, if you follow loss in the backward direction, using it's .creator attribute, you will see a graph of computations that looks like this: input -&gt; conv2d -&gt; relu -&gt; maxpool2d -&gt; conv2d -&gt; relu -&gt; maxpool2d -&gt; view -&gt; linear -&gt; relu -&gt; linear -&gt; relu -&gt; linear -&gt; MSELoss -&gt; loss So, when we call loss.backward(), the whole graph is differentiated w.r.t. the loss, and all Variables in the graph will have their .grad Variable accumulated with the gradient. End of explanation """ import torchvision from torchvision import transforms, datasets # The output of torchvision datasets are PILImage images of range [0, 1]. 
# We transform them to Tensors of normalized range [-1, 1] transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ]) trainset = datasets.CIFAR10(root='./data/cifar10', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True, num_workers=2) testset = datasets.CIFAR10(root='./data/cifar10', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') def imshow(img): plt.imshow(np.transpose((img / 2 + 0.5).numpy(), (1,2,0))) # show some random training images dataiter = iter(trainloader) images, labels = dataiter.next() # print images imshow(torchvision.utils.make_grid(images)) # print labels print(' '.join('{}'.format([classes[labels[j]] for j in range(4)]))) """ Explanation: Example complete process For vision, there is a package called torch.vision, that has data loaders for common datasets such as Imagenet, CIFAR10, MNIST, etc. and data transformers for images. For this tutorial, we will use the CIFAR10 dataset. Training an image classifier We will do the following steps in order: Load and normalizing the CIFAR10 training and test datasets using torchvision Define a Convolution Neural Network Define a loss function Train the network on the training data Test the network on the test data 1. Loading and normalizing CIFAR10 Using torch.vision, it's extremely easy to load CIFAR10. End of explanation """ class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2,2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16*5*5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16*5*5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net().cuda() """ Explanation: 2. Define a Convolution Neural Network End of explanation """ criterion = nn.CrossEntropyLoss().cuda() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) """ Explanation: 2. Define a Loss function and optimizer End of explanation """ for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data # wrap them in Variable inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda()) # forward + backward + optimize optimizer.zero_grad() outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss += loss.data[0] if i % 2000 == 1999: # print every 2000 mini-batches print('[{}, {}] loss: {}'.format(epoch+1, i+1, running_loss / 2000)) running_loss = 0.0 """ Explanation: 3. Train the network This is when things start to get interesting. We simply have to loop over our data iterator, and feed the inputs to the network and optimize End of explanation """ dataiter = iter(testloader) images, labels = dataiter.next() # print images imshow(torchvision.utils.make_grid(images)) ' '.join('{}'.format(classes[labels[j]] for j in range(4))) """ Explanation: We will check what the model has learned by predicting the class label, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions. 
First, let's display an image from the test set to get familiar. End of explanation """ outputs = net(Variable(images).cuda()) _, predicted = torch.max(outputs.data, 1) # ' '.join('%5s'% classes[predicted[j][0]] for j in range(4)) # - "'int' object is not subscriptable" issue ' '.join('{}'.format([classes[predicted[j]] for j in range(4)])) """ Explanation: Okay, now let us see what the neural network thinks these examples above are: End of explanation """ correct,total = 0,0 for data in testloader: images, labels = data outputs = net(Variable(images).cuda()) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels.cuda()).sum() print('Accuracy of the network on the 10000 test images: {} %%'.format(100 * correct / total)) """ Explanation: The results seem pretty good. Let us look at how the network performs on the whole dataset. End of explanation """
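# a rough per-class breakdown of the same test-set predictions, written against the older
# PyTorch API used in this notebook (Variable wrappers, .data); the int()/float() casts keep it
# working if indexing returns 0-dim tensors on newer versions
class_correct = [0.0] * 10
class_total = [0.0] * 10
for data in testloader:
    images, labels = data
    outputs = net(Variable(images).cuda())
    _, predicted = torch.max(outputs.data, 1)
    correct_mask = (predicted == labels.cuda()).squeeze()
    for j in range(labels.size(0)):
        label = int(labels[j])
        class_correct[label] += float(correct_mask[j])
        class_total[label] += 1
for j in range(10):
    print('Accuracy of {} : {:.1f} %'.format(classes[j], 100 * class_correct[j] / class_total[j]))
"""
Explanation: The overall accuracy hides a lot of structure; splitting it per class usually shows that some CIFAR10 categories (e.g. cat, bird) are much harder for this small network than others (e.g. car, ship).
End of explanation
"""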
keras-team/keras-io
examples/generative/ipynb/adain.ipynb
apache-2.0
import os import glob import imageio import numpy as np from tqdm import tqdm import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt import tensorflow_datasets as tfds from tensorflow.keras import layers # Defining the global variables. IMAGE_SIZE = (224, 224) BATCH_SIZE = 64 # Training for single epoch for time constraint. # Please use atleast 30 epochs to see good results. EPOCHS = 1 AUTOTUNE = tf.data.AUTOTUNE """ Explanation: Neural Style Transfer with AdaIN Author: Aritra Roy Gosthipaty, Ritwik Raha<br> Date created: 2021/11/08<br> Last modified: 2021/11/08<br> Description: Neural Style Transfer with Adaptive Instance Normalization. Introduction Neural Style Transfer is the process of transferring the style of one image onto the content of another. This was first introduced in the seminal paper "A Neural Algorithm of Artistic Style" by Gatys et al. A major limitation of the technique proposed in this work is in its runtime, as the algorithm uses a slow iterative optimization process. Follow-up papers that introduced Batch Normalization, Instance Normalization and Conditional Instance Normalization allowed Style Transfer to be performed in new ways, no longer requiring a slow iterative process. Following these papers, the authors Xun Huang and Serge Belongie propose Adaptive Instance Normalization (AdaIN), which allows arbitrary style transfer in real time. In this example we implement Adapative Instance Normalization for Neural Style Transfer. We show in the below figure the output of our AdaIN model trained for only 30 epochs. You can also try out the model with your own images with this Hugging Face demo. Setup We begin with importing the necessary packages. We also set the seed for reproducibility. The global variables are hyperparameters which we can change as we like. End of explanation """ def decode_and_resize(image_path): """Decodes and resizes an image from the image file path. Args: image_path: The image file path. size: The size of the image to be resized to. Returns: A resized image. """ image = tf.io.read_file(image_path) image = tf.image.decode_jpeg(image, channels=3) image = tf.image.convert_image_dtype(image, dtype="float32") image = tf.image.resize(image, IMAGE_SIZE) return image def extract_image_from_voc(element): """Extracts image from the PascalVOC dataset. Args: element: A dictionary of data. size: The size of the image to be resized to. Returns: A resized image. """ image = element["image"] image = tf.image.convert_image_dtype(image, dtype="float32") image = tf.image.resize(image, IMAGE_SIZE) return image # Get the image file paths for the style images. style_images = os.listdir("/content/artwork/resized") style_images = [os.path.join("/content/artwork/resized", path) for path in style_images] # split the style images in train, val and test total_style_images = len(style_images) train_style = style_images[: int(0.8 * total_style_images)] val_style = style_images[int(0.8 * total_style_images) : int(0.9 * total_style_images)] test_style = style_images[int(0.9 * total_style_images) :] # Build the style and content tf.data datasets. 
train_style_ds = ( tf.data.Dataset.from_tensor_slices(train_style) .map(decode_and_resize, num_parallel_calls=AUTOTUNE) .repeat() ) train_content_ds = tfds.load("voc", split="train").map(extract_image_from_voc).repeat() val_style_ds = ( tf.data.Dataset.from_tensor_slices(val_style) .map(decode_and_resize, num_parallel_calls=AUTOTUNE) .repeat() ) val_content_ds = ( tfds.load("voc", split="validation").map(extract_image_from_voc).repeat() ) test_style_ds = ( tf.data.Dataset.from_tensor_slices(test_style) .map(decode_and_resize, num_parallel_calls=AUTOTUNE) .repeat() ) test_content_ds = ( tfds.load("voc", split="test") .map(extract_image_from_voc, num_parallel_calls=AUTOTUNE) .repeat() ) # Zipping the style and content datasets. train_ds = ( tf.data.Dataset.zip((train_style_ds, train_content_ds)) .shuffle(BATCH_SIZE * 2) .batch(BATCH_SIZE) .prefetch(AUTOTUNE) ) val_ds = ( tf.data.Dataset.zip((val_style_ds, val_content_ds)) .shuffle(BATCH_SIZE * 2) .batch(BATCH_SIZE) .prefetch(AUTOTUNE) ) test_ds = ( tf.data.Dataset.zip((test_style_ds, test_content_ds)) .shuffle(BATCH_SIZE * 2) .batch(BATCH_SIZE) .prefetch(AUTOTUNE) ) """ Explanation: Style transfer sample gallery For Neural Style Transfer we need style images and content images. In this example we will use the Best Artworks of All Time as our style dataset and Pascal VOC as our content dataset. This is a deviation from the original paper implementation by the authors, where they use WIKI-Art as style and MSCOCO as content datasets respectively. We do this to create a minimal yet reproducible example. Downloading the dataset from Kaggle The Best Artworks of All Time dataset is hosted on Kaggle and one can easily download it in Colab by following these steps: Follow the instructions here in order to obtain your Kaggle API keys in case you don't have them. Use the following command to upload the Kaggle API keys. python from google.colab import files files.upload() Use the following commands to move the API keys to the proper directory and download the dataset. shell $ mkdir ~/.kaggle $ cp kaggle.json ~/.kaggle/ $ chmod 600 ~/.kaggle/kaggle.json $ kaggle datasets download ikarus777/best-artworks-of-all-time $ unzip -qq best-artworks-of-all-time.zip $ rm -rf images $ mv resized artwork $ rm best-artworks-of-all-time.zip artists.csv tf.data pipeline In this section, we will build the tf.data pipeline for the project. For the style dataset, we decode, convert and resize the images from the folder. For the content images we are already presented with a tf.data dataset as we use the tfds module. After we have our style and content data pipeline ready, we zip the two together to obtain the data pipeline that our model will consume. End of explanation """ style, content = next(iter(train_ds)) fig, axes = plt.subplots(nrows=10, ncols=2, figsize=(5, 30)) [ax.axis("off") for ax in np.ravel(axes)] for (axis, style_image, content_image) in zip(axes, style[0:10], content[0:10]): (ax_style, ax_content) = axis ax_style.imshow(style_image) ax_style.set_title("Style Image") ax_content.imshow(content_image) ax_content.set_title("Content Image") """ Explanation: Visualizing the data It is always better to visualize the data before training. To ensure the correctness of our preprocessing pipeline, we visualize 10 samples from our dataset. 
End of explanation """ def get_encoder(): vgg19 = keras.applications.VGG19( include_top=False, weights="imagenet", input_shape=(*IMAGE_SIZE, 3), ) vgg19.trainable = False mini_vgg19 = keras.Model(vgg19.input, vgg19.get_layer("block4_conv1").output) inputs = layers.Input([*IMAGE_SIZE, 3]) mini_vgg19_out = mini_vgg19(inputs) return keras.Model(inputs, mini_vgg19_out, name="mini_vgg19") """ Explanation: Architecture The style transfer network takes a content image and a style image as inputs and outputs the style transfered image. The authors of AdaIN propose a simple encoder-decoder structure for achieving this. The content image (C) and the style image (S) are both fed to the encoder networks. The output from these encoder networks (feature maps) are then fed to the AdaIN layer. The AdaIN layer computes a combined feature map. This feature map is then fed into a randomly initialized decoder network that serves as the generator for the neural style transfered image. The style feature map (fs) and the content feature map (fc) are fed to the AdaIN layer. This layer produced the combined feature map t. The function g represents the decoder (generator) network. Encoder The encoder is a part of the pretrained (pretrained on imagenet) VGG19 model. We slice the model from the block4-conv1 layer. The output layer is as suggested by the authors in their paper. End of explanation """ def get_mean_std(x, epsilon=1e-5): axes = [1, 2] # Compute the mean and standard deviation of a tensor. mean, variance = tf.nn.moments(x, axes=axes, keepdims=True) standard_deviation = tf.sqrt(variance + epsilon) return mean, standard_deviation def ada_in(style, content): """Computes the AdaIn feature map. Args: style: The style feature map. content: The content feature map. Returns: The AdaIN feature map. """ content_mean, content_std = get_mean_std(content) style_mean, style_std = get_mean_std(style) t = style_std * (content - content_mean) / content_std + style_mean return t """ Explanation: Adaptive Instance Normalization The AdaIN layer takes in the features of the content and style image. The layer can be defined via the following equation: where sigma is the standard deviation and mu is the mean for the concerned variable. In the above equation the mean and variance of the content feature map fc is aligned with the mean and variance of the style feature maps fs. It is important to note that the AdaIN layer proposed by the authors uses no other parameters apart from mean and variance. The layer also does not have any trainable parameters. This is why we use a Python function instead of using a Keras layer. The function takes style and content feature maps, computes the mean and standard deviation of the images and returns the adaptive instance normalized feature map. 
End of explanation """ def get_decoder(): config = {"kernel_size": 3, "strides": 1, "padding": "same", "activation": "relu"} decoder = keras.Sequential( [ layers.InputLayer((None, None, 512)), layers.Conv2D(filters=512, **config), layers.UpSampling2D(), layers.Conv2D(filters=256, **config), layers.Conv2D(filters=256, **config), layers.Conv2D(filters=256, **config), layers.Conv2D(filters=256, **config), layers.UpSampling2D(), layers.Conv2D(filters=128, **config), layers.Conv2D(filters=128, **config), layers.UpSampling2D(), layers.Conv2D(filters=64, **config), layers.Conv2D( filters=3, kernel_size=3, strides=1, padding="same", activation="sigmoid", ), ] ) return decoder """ Explanation: Decoder The authors specify that the decoder network must mirror the encoder network. We have symmetrically inverted the encoder to build our decoder. We have used UpSampling2D layers to increase the spatial resolution of the feature maps. Note that the authors warn against using any normalization layer in the decoder network, and do indeed go on to show that including batch normalization or instance normalization hurts the performance of the overall network. This is the only portion of the entire architecture that is trainable. End of explanation """ def get_loss_net(): vgg19 = keras.applications.VGG19( include_top=False, weights="imagenet", input_shape=(*IMAGE_SIZE, 3) ) vgg19.trainable = False layer_names = ["block1_conv1", "block2_conv1", "block3_conv1", "block4_conv1"] outputs = [vgg19.get_layer(name).output for name in layer_names] mini_vgg19 = keras.Model(vgg19.input, outputs) inputs = layers.Input([*IMAGE_SIZE, 3]) mini_vgg19_out = mini_vgg19(inputs) return keras.Model(inputs, mini_vgg19_out, name="loss_net") """ Explanation: Loss functions Here we build the loss functions for the neural style transfer model. The authors propose to use a pretrained VGG-19 to compute the loss function of the network. It is important to keep in mind that this will be used for training only the decoder netwrok. The total loss (Lt) is a weighted combination of content loss (Lc) and style loss (Ls). The lambda term is used to vary the amount of style transfered. Content Loss This is the Euclidean distance between the content image features and the features of the neural style transferred image. Here the authors propose to use the output from the AdaIn layer t as the content target rather than using features of the original image as target. This is done to speed up convergence. Style Loss Rather than using the more commonly used Gram Matrix, the authors propose to compute the difference between the statistical features (mean and variance) which makes it conceptually cleaner. This can be easily visualized via the following equation: where theta denotes the layers in VGG-19 used to compute the loss. 
In this case this corresponds to: block1_conv1 block1_conv2 block1_conv3 block1_conv4 End of explanation """ class NeuralStyleTransfer(tf.keras.Model): def __init__(self, encoder, decoder, loss_net, style_weight, **kwargs): super().__init__(**kwargs) self.encoder = encoder self.decoder = decoder self.loss_net = loss_net self.style_weight = style_weight def compile(self, optimizer, loss_fn): super().compile() self.optimizer = optimizer self.loss_fn = loss_fn self.style_loss_tracker = keras.metrics.Mean(name="style_loss") self.content_loss_tracker = keras.metrics.Mean(name="content_loss") self.total_loss_tracker = keras.metrics.Mean(name="total_loss") def train_step(self, inputs): style, content = inputs # Initialize the content and style loss. loss_content = 0.0 loss_style = 0.0 with tf.GradientTape() as tape: # Encode the style and content image. style_encoded = self.encoder(style) content_encoded = self.encoder(content) # Compute the AdaIN target feature maps. t = ada_in(style=style_encoded, content=content_encoded) # Generate the neural style transferred image. reconstructed_image = self.decoder(t) # Compute the losses. reconstructed_vgg_features = self.loss_net(reconstructed_image) style_vgg_features = self.loss_net(style) loss_content = self.loss_fn(t, reconstructed_vgg_features[-1]) for inp, out in zip(style_vgg_features, reconstructed_vgg_features): mean_inp, std_inp = get_mean_std(inp) mean_out, std_out = get_mean_std(out) loss_style += self.loss_fn(mean_inp, mean_out) + self.loss_fn( std_inp, std_out ) loss_style = self.style_weight * loss_style total_loss = loss_content + loss_style # Compute gradients and optimize the decoder. trainable_vars = self.decoder.trainable_variables gradients = tape.gradient(total_loss, trainable_vars) self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Update the trackers. self.style_loss_tracker.update_state(loss_style) self.content_loss_tracker.update_state(loss_content) self.total_loss_tracker.update_state(total_loss) return { "style_loss": self.style_loss_tracker.result(), "content_loss": self.content_loss_tracker.result(), "total_loss": self.total_loss_tracker.result(), } def test_step(self, inputs): style, content = inputs # Initialize the content and style loss. loss_content = 0.0 loss_style = 0.0 # Encode the style and content image. style_encoded = self.encoder(style) content_encoded = self.encoder(content) # Compute the AdaIN target feature maps. t = ada_in(style=style_encoded, content=content_encoded) # Generate the neural style transferred image. reconstructed_image = self.decoder(t) # Compute the losses. recons_vgg_features = self.loss_net(reconstructed_image) style_vgg_features = self.loss_net(style) loss_content = self.loss_fn(t, recons_vgg_features[-1]) for inp, out in zip(style_vgg_features, recons_vgg_features): mean_inp, std_inp = get_mean_std(inp) mean_out, std_out = get_mean_std(out) loss_style += self.loss_fn(mean_inp, mean_out) + self.loss_fn( std_inp, std_out ) loss_style = self.style_weight * loss_style total_loss = loss_content + loss_style # Update the trackers. 
self.style_loss_tracker.update_state(loss_style) self.content_loss_tracker.update_state(loss_content) self.total_loss_tracker.update_state(total_loss) return { "style_loss": self.style_loss_tracker.result(), "content_loss": self.content_loss_tracker.result(), "total_loss": self.total_loss_tracker.result(), } @property def metrics(self): return [ self.style_loss_tracker, self.content_loss_tracker, self.total_loss_tracker, ] """ Explanation: Neural Style Transfer This is the trainer module. We wrap the encoder and decoder inside of a tf.keras.Model subclass. This allows us to customize what happens in the model.fit() loop. End of explanation """ test_style, test_content = next(iter(test_ds)) class TrainMonitor(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): # Encode the style and content image. test_style_encoded = self.model.encoder(test_style) test_content_encoded = self.model.encoder(test_content) # Compute the AdaIN features. test_t = ada_in(style=test_style_encoded, content=test_content_encoded) test_reconstructed_image = self.model.decoder(test_t) # Plot the Style, Content and the NST image. fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(20, 5)) ax[0].imshow(tf.keras.preprocessing.image.array_to_img(test_style[0])) ax[0].set_title(f"Style: {epoch:03d}") ax[1].imshow(tf.keras.preprocessing.image.array_to_img(test_content[0])) ax[1].set_title(f"Content: {epoch:03d}") ax[2].imshow( tf.keras.preprocessing.image.array_to_img(test_reconstructed_image[0]) ) ax[2].set_title(f"NST: {epoch:03d}") plt.show() plt.close() """ Explanation: Train Monitor callback This callback is used to visualize the style transfer output of the model at the end of each epoch. The objective of style transfer cannot be quantified properly, and is to be subjectively evaluated by an audience. For this reason, visualization is a key aspect of evaluating the model. End of explanation """ optimizer = keras.optimizers.Adam(learning_rate=1e-5) loss_fn = keras.losses.MeanSquaredError() encoder = get_encoder() loss_net = get_loss_net() decoder = get_decoder() model = NeuralStyleTransfer( encoder=encoder, decoder=decoder, loss_net=loss_net, style_weight=4.0 ) model.compile(optimizer=optimizer, loss_fn=loss_fn) history = model.fit( train_ds, epochs=EPOCHS, steps_per_epoch=50, validation_data=val_ds, validation_steps=50, callbacks=[TrainMonitor()], ) """ Explanation: Train the model In this section, we define the optimizer, the loss funtion, and the trainer module. We compile the trainer module with the optimizer and the loss function and then train it. Note: We train the model for a single epoch for time constranints, but we will need to train is for atleast 30 epochs to see good results. End of explanation """ for style, content in test_ds.take(1): style_encoded = model.encoder(style) content_encoded = model.encoder(content) t = ada_in(style=style_encoded, content=content_encoded) reconstructed_image = model.decoder(t) fig, axes = plt.subplots(nrows=10, ncols=3, figsize=(10, 30)) [ax.axis("off") for ax in np.ravel(axes)] for axis, style_image, content_image, reconstructed_image in zip( axes, style[0:10], content[0:10], reconstructed_image[0:10] ): (ax_style, ax_content, ax_reconstructed) = axis ax_style.imshow(style_image) ax_style.set_title("Style Image") ax_content.imshow(content_image) ax_content.set_title("Content Image") ax_reconstructed.imshow(reconstructed_image) ax_reconstructed.set_title("NST Image") """ Explanation: Inference After we train the model, we now need to run inference with it. 
We will pass arbitrary content and style images from the test dataset and take a look at the output images. NOTE: To try out the model on your own images, you can use this Hugging Face demo. End of explanation """
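# convenience wrapper for stylizing a single (style, content) pair with the trained model
# it only re-chains the encoder -> ada_in -> decoder calls already used in train_step and above
def stylize(style_image, content_image):
    style_encoded = model.encoder(style_image[tf.newaxis, ...])
    content_encoded = model.encoder(content_image[tf.newaxis, ...])
    t = ada_in(style=style_encoded, content=content_encoded)
    return model.decoder(t)[0]

stylized = stylize(test_style[0], test_content[0])
plt.imshow(stylized)
plt.axis("off")
plt.show()
"""
Explanation: A small helper that applies the trained encoder/AdaIN/decoder chain to one style-content pair at a time, which makes it easy to try the model on arbitrary resized images after training.
End of explanation
"""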
bhargavvader/gensim
docs/notebooks/gensim Quick Start.ipynb
lgpl-2.1
raw_corpus = ["Human machine interface for lab abc computer applications", "A survey of user opinion of computer system response time", "The EPS user interface management system", "System and human system engineering testing of EPS", "Relation of user perceived response time to error measurement", "The generation of random binary unordered trees", "The intersection graph of paths in trees", "Graph minors IV Widths of trees and well quasi ordering", "Graph minors A survey"] """ Explanation: # Getting Started with gensim This section introduces the basic concepts and terms needed to understand and use gensim and provides a simple usage example. Core Concepts and Simple Example At a very high-level, gensim is a tool for discovering the semantic structure of documents by examining the patterns of words (or higher-level structures such as entire sentences or documents). gensim accomplishes this by taking a corpus, a collection of text documents, and producing a vector representation of the text in the corpus. The vector representation can then be used to train a model, which is an algorithms to create different representations of the data, which are usually more semantic. These three concepts are key to understanding how gensim works so let's take a moment to explain what each of them means. At the same time, we'll work through a simple example that illustrates each of them. Corpus A corpus is a collection of digital documents. This collection is the input to gensim from which it will infer the structure of the documents, their topics, etc. The latent structure inferred from the corpus can later be used to assign topics to new documents which were not present in the training corpus. For this reason, we also refer to this collection as the training corpus. No human intervention (such as tagging the documents by hand) is required - the topic classification is unsupervised. For our corpus, we'll use a list of 9 strings, each consisting of only a single sentence. End of explanation """ # Create a set of frequent words stoplist = set('for a of the and to in'.split(' ')) # Lowercase each document, split it by white space and filter out stopwords texts = [[word for word in document.lower().split() if word not in stoplist] for document in raw_corpus] # Count word frequencies from collections import defaultdict frequency = defaultdict(int) for text in texts: for token in text: frequency[token] += 1 # Only keep words that appear more than once processed_corpus = [[token for token in text if frequency[token] > 1] for text in texts] processed_corpus """ Explanation: This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, list of all wikipedia articles, or all tweets by a particular person of interest. After collecting our corpus, there are typically a number of preprocessing steps we want to undertake. We'll keep it simple and just remove some commonly used English words (such as 'the') and words that occur only once in the corpus. In the process of doing so, we'll tokenise our data. Tokenization breaks up the documents into words (in this case using space as a delimiter). End of explanation """ from gensim import corpora dictionary = corpora.Dictionary(processed_corpus) print(dictionary) """ Explanation: Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. 
This dictionary defines the vocabulary of all words that our processing knows about. End of explanation """ print(dictionary.token2id) """ Explanation: Because our corpus is small, there are only 12 different tokens in this Dictionary. For larger corpuses, dictionaries that contains hundreds of thousands of tokens are quite common. Vector To infer the latent structure in our corpus we need a way to represent documents that we can manipulate mathematically. One approach is to represent each document as a vector. There are various approaches for creating a vector representation of a document but a simple example is the bag-of-words model. Under the bag-of-words model each document is represented by a vector containing the frequency counts of each word in the dictionary. For example, given a dictionary containing the words ['coffee', 'milk', 'sugar', 'spoon'] a document consisting of the string "coffee milk coffee" could be represented by the vector [2, 1, 0, 0] where the entries of the vector are (in order) the occurrences of "coffee", "milk", "sugar" and "spoon" in the document. The length of the vector is the number of entries in the dictionary. One of the main properties of the bag-of-words model is that it completely ignores the order of the tokens in the document that is encoded, which is where the name bag-of-words comes from. Our processed corpus has 12 unique words in it, which means that each document will be represented by a 12-dimensional vector under the bag-of-words model. We can use the dictionary to turn tokenized documents into these 12-dimensional vectors. We can see what these IDs correspond to: End of explanation """ new_doc = "Human computer interaction" new_vec = dictionary.doc2bow(new_doc.lower().split()) new_vec """ Explanation: For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts: End of explanation """ bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus] bow_corpus """ Explanation: The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token. Note that "interaction" did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually appeared in the document. Because any given document will only contain a few words out of the many words in the dictionary, words that do not appear in the vectorization are represented as implicitly zero as a space saving measure. We can convert our entire original corpus to a list of vectors: End of explanation """ from gensim import models # train the model tfidf = models.TfidfModel(bow_corpus) # transform the "system minors" string tfidf[dictionary.doc2bow("system minors".lower().split())] """ Explanation: Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details. Model Now that we have vectorized our corpus we can begin to transform it using models. We use model as an abstract term referring to a transformation from one document representation to another. 
In gensim, documents are represented as vectors, so a model can be thought of as a transformation between two vector spaces. The details of this transformation are learned from the training corpus. One simple example of a model is tf-idf. The tf-idf model transforms vectors from the bag-of-words representation to a vector space where the frequency counts are weighted according to the relative rarity of each word in the corpus. Here's a simple example. Let's initialize the tf-idf model, training it on our corpus and transforming the string "system minors": End of explanation """
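# Illustrative extension (not part of the original quick start): the trained
# tf-idf model can also be applied to the whole bag-of-words corpus at once,
# which yields the weighted vector for every document in turn.
corpus_tfidf = tfidf[bow_corpus]
for doc in corpus_tfidf:
    print(doc)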
unpingco/Python-for-Probability-Statistics-and-Machine-Learning
chapters/machine_learning/notebooks/regularization.ipynb
mit
from IPython.display import Image Image('../../../python_for_probability_statistics_and_machine_learning.jpg') """ Explanation: Regularization End of explanation """ import sympy as S S.var('x:2 l',real=True) J=S.Matrix([x0,x1]).norm()**2 + l*(1-x0-2*x1) sol=S.solve(map(J.diff,[x0,x1,l])) print(sol) """ Explanation: We have referred to regularization in earlier sections, but we want to develop this important idea more fully. Regularization is the mechanism by which we navigate the bias/variance trade-off. To get started, let's consider a classic constrained least squares problem, $$ \begin{aligned} & \underset{\mathbf{x}}{\text{minimize}} & & \Vert\mathbf{x}\Vert_2^2 \ & \text{subject to:} & & x_0 + 2 x_1 = 1 \end{aligned} $$ where $\Vert\mathbf{x}\Vert_2=\sqrt{x_0^2+x_1^2}$ is the $L_2$ norm. Without the constraint, it would be easy to minimize the objective function --- just take $\mathbf{x}=0$. Otherwise, suppose we somehow know that $\Vert\mathbf{x}\Vert_2<c$, then the locus of points defined by this inequality is the circle in Figure. The constraint is the line in the same figure. Because every value of $c$ defines a circle, the constraint is satisfied when the circle touches the line. The circle can touch the line at many different points, but we are only interested in the smallest such circle because this is a minimization problem. Intuitively, this means that we inflate a $L_2$ ball at the origin and stop when it just touches the contraint. The point of contact is our $L_2$ minimization solution. <!-- dom:FIGURE: [fig-machine_learning/regularization_001.png, width=500 frac=0.75] The solution of the constrained $L_2$ minimization problem is at the point where the constraint (dark line) intersects the $L_2$ ball (gray circle) centered at the origin. The point of intersection is indicated by the dark circle. The two neighboring squares indicate points on the line that are close to the solution. <div id="fig:regularization_001"></div> --> <!-- begin figure --> <div id="fig:regularization_001"></div> <p>The solution of the constrained $L_2$ minimization problem is at the point where the constraint (dark line) intersects the $L_2$ ball (gray circle) centered at the origin. The point of intersection is indicated by the dark circle. The two neighboring squares indicate points on the line that are close to the solution.</p> <img src="fig-machine_learning/regularization_001.png" width=500> <!-- end figure --> We can obtain the same result using the method of Lagrange multipliers. We can rewrite the entire $L_2$ minimization problem as one objective function using the Lagrange multiplier, $\lambda$, $$ J(x_0,x_1,\lambda) = x_0^2+x_1^2 + \lambda (1-x_0-x_1) $$ and solve this as an ordinary function using calculus. Let's do this using Sympy. End of explanation """ %matplotlib inline from __future__ import division import numpy as np from numpy import pi, linspace, sqrt from matplotlib.patches import Circle from matplotlib.pylab import subplots x1 = linspace(-1,1,10) dx=linspace(.7,1.3,3) fline = lambda x:(1-x)/2. fig,ax=subplots() _=ax.plot(dx*1/5,fline(dx*1/5),'s',ms=10,color='gray') _=ax.plot(x1,fline(x1),color='gray',lw=3) _=ax.add_patch(Circle((0,0),1/sqrt(5),alpha=0.3,color='gray')) _=ax.plot(1/5,2/5,'o',color='k',ms=15) _=ax.set_xlabel('$x_1$',fontsize=24) _=ax.set_ylabel('$x_2$',fontsize=24) _=ax.axis((-0.6,0.6,-0.6,0.6)) ax.set_aspect(1) fig.tight_layout() """ Explanation: Programming Tip. 
Using the Matrix object is overkill for this problem but it does demonstrate how Sympy's matrix machinery works. In this case, we are using the norm method to compute the $L_2$ norm of the given elements. Using S.var defines Sympy variables and injects them into the global namespace. It is more Pythonic to do something like x0 = S.symbols('x0',real=True) instead but the other way is quicker, especially for variables with many dimensions. The solution defines the exact point where the line is tangent to the circle in Figure. The Lagrange multiplier has incorporated the constraint into the objective function. End of explanation """ from cvxpy import Variable, Problem, Minimize, norm1, norm2 x=Variable(2,1,name='x') constr=[np.matrix([[1,2]])*x==1] obj=Minimize(norm1(x)) p= Problem(obj,constr) p.solve() print(x.value) """ Explanation: There is something subtle and very important about the nature of the solution, however. Notice that there are other points very close to the solution on the circle, indicated by the squares in Figure. This closeness could be a good thing, in case it helps us actually find a solution in the first place, but it may be unhelpful in so far as it creates ambiguity. Let's hold that thought and try the same problem using the $L_1$ norm instead of the $L_2$ norm. Recall that $$ \Vert \mathbf{x}\Vert_1 = \sum_{i=1}^d \vert x_i \vert $$ where $d$ is the dimension of the vector $\mathbf{x}$. Thus, we can reformulate the same problem in the $L_1$ norm as in the following, $$ \begin{aligned} & \underset{\mathbf{x}}{\text{minimize}} & & \Vert\mathbf{x}\Vert_1 \ & \text{subject to:} & & x_1 + 2 x_2 = 1 \end{aligned} $$ It turns out that this problem is somewhat harder to solve using Sympy, but we have convex optimization modules in Python that can help. End of explanation """ constr=[np.matrix([[1,2]])*x==1] obj=Minimize(norm2(x)) #L2 norm p= Problem(obj,constr) p.solve() print(x.value) """ Explanation: Programming Tip. The cvxy module provides a unified and accessible interface to the powerful cvxopt convex optimization package, as well as other open-source solver packages. As shown in Figure, the constant-norm contour in the $L_1$ norm is shaped like a diamond instead of a circle. Furthermore, the solutions found in each case are different. Geometrically, this is because inflating the circular $L_2$ reaches out in all directions whereas the $L_1$ ball creeps out along the principal axes. This effect is much more pronounced in higher dimensional spaces where $L_1$-balls get more spikey [^spikey]. Like the $L_2$ case, there are also neighboring points on the constraint line, but notice that these are not close to the boundary of the corresponding $L_1$ ball, as they were in the $L_2$ case. This means that these would be harder to confuse with the optimal solution because they correspond to a substantially different $L_1$ ball. [^spikey]: We discussed the geometry of high dimensional space when we covered the curse of dimensionality in the statistics chapter. To double-check our earlier $L_2$ result, we can also use the cvxpy module to find the $L_2$ solution as in the following code, End of explanation """ x=Variable(4,1,name='x') constr=[np.matrix([[1,2,3,4]])*x==1] obj=Minimize(norm1(x)) p= Problem(obj,constr) p.solve() print(x.value) """ Explanation: The only change to the code is the $L_2$ norm and we get the same solution as before. Let's see what happens in higher dimensions for both $L_2$ and $L_1$ as we move from two dimensions to four dimensions. 
End of explanation """ constr=[np.matrix([[1,2,3,4]])*x==1] obj=Minimize(norm2(x)) p= Problem(obj,constr) p.solve() print(x.value) """ Explanation: And also in the $L_2$ case with the following code, End of explanation """ from matplotlib.patches import Rectangle, RegularPolygon r=RegularPolygon((0,0),4,1/2,pi/2,alpha=0.5,color='gray') fig,ax=subplots() dx = np.array([-0.1,0.1]) _=ax.plot(dx,fline(dx),'s',ms=10,color='gray') _=ax.plot(x1,fline(x1),color='gray',lw=3) _=ax.plot(0,1/2,'o',color='k',ms=15) _=ax.add_patch(r) _=ax.set_xlabel('$x_1$',fontsize=24) _=ax.set_ylabel('$x_2$',fontsize=24) _=ax.axis((-0.6,0.6,-0.6,0.6)) _=ax.set_aspect(1) fig.tight_layout() """ Explanation: Note that the $L_1$ solution has selected out only one dimension for the solution, as the other components are effectively zero. This is not so with the $L_2$ solution, which has meaningful elements in multiple coordinates. This is because the $L_1$ problem has many pointy corners in the four dimensional space that poke at the hyperplane that is defined by the constraint. This essentially means the subsets (namely, the points at the corners) are found as solutions because these touch the hyperplane. This effect becomes more pronounced in higher dimensions, which is the main benefit of using the $L_1$ norm as we will see in the next section. End of explanation """ import sympy as S from sympy import Matrix X = Matrix([[1,2,3], [3,4,5]]) y = Matrix([[1,2]]).T """ Explanation: <!-- dom:FIGURE: [fig-machine_learning/regularization_002.png, width=500 frac=0.75] The diamond is the $L_1$ ball in two dimensions and the line is the constraint. The point of intersection is the solution to the optimization problem. Note that for $L_1$ optimization, the two nearby points on the constraint (squares) do not touch the $L_1$ ball. Compare this with [Figure](#fig:regularization_001). <div id="fig:regularization_002"></div> --> <!-- begin figure --> <div id="fig:regularization_002"></div> <p>The diamond is the $L_1$ ball in two dimensions and the line is the constraint. The point of intersection is the solution to the optimization problem. Note that for $L_1$ optimization, the two nearby points on the constraint (squares) do not touch the $L_1$ ball. Compare this with [Figure](#fig:regularization_001).</p> <img src="fig-machine_learning/regularization_002.png" width=500> <!-- end figure --> Ridge Regression Now that we have a sense of the geometry of the situation, let's revisit our classic linear regression probem. To recap, we want to solve the following problem, $$ \min_{\boldsymbol{\beta}\in \mathbb{R}^p} \Vert y - \mathbf{X}\boldsymbol{\beta}\Vert $$ where $\mathbf{X}=\left[ \mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_p \right]$ and $\mathbf{x}_i\in \mathbb{R}^n$. Furthermore, we assume that the $p$ column vectors are linearly independent (i.e., $\texttt{rank}(\mathbf{X})=p$). Linear regression produces the $\boldsymbol{\beta}$ that minimizes the mean squared error above. In the case where $p=n$, there is a unique solution to this problem. However, when $p<n$, then there are infinitely many solutions. To make this concrete, let's work this out using Sympy. 
First, let's define an example $\mathbf{X}$ and $\mathbf{y}$ matrix, End of explanation """ b0,b1,b2=S.symbols('b:3',real=True) beta = Matrix([[b0,b1,b2]]).T # transpose """ Explanation: Now, we can define our coefficient vector $\boldsymbol{\beta}$ using the following code, End of explanation """ obj=(X*beta -y).norm(ord=2)**2 """ Explanation: Next, we define the objective function we are trying to minimize End of explanation """ sol=S.solve([obj.diff(i) for i in beta]) beta.subs(sol) """ Explanation: Programming Tip. The Sympy Matrix class has useful methods like the norm function used above to define the objective function. The ord=2 means we want to use the $L_2$ norm. The expression in parenthesis evaluates to a Matrix object. Note that it is helpful to define real variables using the keyword argument whenever applicable because it relieves Sympy's internal machinery of dealing with complex numbers. Finally, we can use calculus to solve this by setting the derivatives of the objective function to zero. End of explanation """ obj.subs(sol) """ Explanation: Notice that the solution does not uniquely specify all the components of the beta variable. This is a consequence of the $p<n$ nature of this problem where $p=2$ and $n=3$. While the the existence of this ambiguity does not alter the solution, End of explanation """ beta.subs(sol).norm(2) """ Explanation: But it does change the length of the solution vector beta, End of explanation """ S.solve((beta.subs(sol).norm()**2).diff()) """ Explanation: If we want to minimize this length we can easily use the same calculus as before, End of explanation """ betaL2=beta.subs(sol).subs(b2,S.Rational(1,6)) betaL2 """ Explanation: This provides the solution of minimum length in the $L_2$ sense, End of explanation """ from sklearn.linear_model import Ridge clf = Ridge(alpha=100.0,fit_intercept=False) clf.fit(np.array(X).astype(float),np.array(y).astype(float)) """ Explanation: But what is so special about solutions of minimum length? For machine learning, driving the objective function to zero is symptomatic of overfitting the data. Usually, at the zero bound, the machine learning method has essentially memorized the training data, which is bad for generalization. Thus, we can effectively stall this problem by defining a region for the solution that is away from the zero-bound. $$ \begin{aligned} & \underset{\boldsymbol{\beta}}{\text{minimize}} & & \Vert y - \mathbf{X}\boldsymbol{\beta}\Vert_2^2 \ & \text{subject to:} & & \Vert\boldsymbol{\beta}\Vert_2 < c \end{aligned} $$ where $c$ is the tuning parameter. Using the same process as before, we can re-write this as the following, $$ \min_{\boldsymbol{\beta}\in\mathbb{R}^p}\Vert y-\mathbf{X}\boldsymbol{\beta}\Vert_2^2 +\alpha\Vert\boldsymbol{\beta}\Vert_2^2 $$ where $\alpha$ is the tuning parameter. These are the penalized or Lagrange forms of these problems derived from the constrained versions. The objective function is penalized by the $\Vert\boldsymbol{\beta}\Vert_2$ term. For $L_2$ penalization, this is called ridge regression. This is implemented in Scikit-learn as Ridge. The following code sets this up for our example, End of explanation """ print(clf.coef_) """ Explanation: Note that the alpha scales of the penalty for the $\Vert\boldsymbol{\beta}\Vert_2$. We set the fit_intercept=False argument to omit the extra offset term from our example. 
The corresponding solution is the following, End of explanation """ from scipy.optimize import minimize f = S.lambdify((b0,b1,b2),obj+beta.norm()**2*100.) g = lambda x:f(x[0],x[1],x[2]) out = minimize(g,[.1,.2,.3]) # initial guess out.x """ Explanation: To double-check the solution, we can use some optimization tools from Scipy and our previous Sympy analysis, as in the following, End of explanation """ betaLS=X.T*(X*X.T).inv()*y betaLS """ Explanation: Programming Tip. We had to define the additional g function from the lambda function we created from the Sympy expression in f because the minimize function expects a single object vector as input instead of a three separate arguments. which produces the same answer as the Ridge object. To better understand the meaning of this result, we can re-compute the mean squared error solution to this problem in one step using matrix algebra instead of calculus, End of explanation """ X*betaLS-y """ Explanation: Notice that this solves the posited problem exactly, End of explanation """ print(betaLS.norm().evalf(), np.linalg.norm(clf.coef_)) """ Explanation: This means that the first term in the objective function goes to zero, $$ \Vert y-\mathbf{X}\boldsymbol{\beta}_{LS}\Vert=0 $$ But, let's examine the $L_2$ length of this solution versus the ridge regression solution, End of explanation """ print((y-X*clf.coef_.T).norm()**2) """ Explanation: Thus, the ridge regression solution is shorter in the $L_2$ sense, but the first term in the objective function is not zero for ridge regression, End of explanation """ # create chirp signal xi = np.linspace(0,1,100)[:,None] # sample chirp randomly xin= np.sort(np.random.choice(xi.flatten(),20,replace=False))[:,None] # create sampled waveform y = cos(2*pi*(xin+xin**2)) # create full waveform for reference yi = cos(2*pi*(xi+xi**2)) # create polynomial features qfit = PolynomialFeatures(degree=8) # quadratic Xq = qfit.fit_transform(xin) # reformat input as polynomial Xiq = qfit.fit_transform(xi) lr=LinearRegression() # create linear model lr.fit(Xq,y) # fit linear model # create ridge regression model and fit clf = Ridge(alpha=1e-9,fit_intercept=False) clf.fit(Xq,y) from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression import numpy as np from numpy import cos, pi np.random.seed(1234567) xi = np.linspace(0,1,100)[:,None] xin = np.linspace(0,1,20)[:,None] xin= np.sort(np.random.choice(xi.flatten(),20,replace=False))[:,None] f0 = 1 # init frequency BW = 2 y = cos(2*pi*(f0*xin+(BW/2.0)*xin**2)) yi = cos(2*pi*(f0*xi+(BW/2.0)*xi**2)) qfit = PolynomialFeatures(degree=8) # quadratic Xq = qfit.fit_transform(xin) Xiq = qfit.fit_transform(xi) lr=LinearRegression() # create linear model _=lr.fit(Xq,y) fig,axs=subplots(2,1,sharex=True,sharey=True) fig.set_size_inches((6,6)) ax=axs[0] _=ax.plot(xi,yi,label='true',ls='--',color='k') _=ax.plot(xi,lr.predict(Xiq),label=r'$\beta_{LS}$',color='k') _=ax.legend(loc=0) _=ax.set_ylabel(r'$\hat{y}$ ',fontsize=22,rotation='horizontal') _=ax.fill_between(xi.flatten(),yi.flatten(),lr.predict(Xiq).flatten(),color='gray',alpha=.3) _=ax.set_title('Polynomial Regression of Chirp Signal') _=ax.plot(xin, -1.5+np.array([0.01]*len(xin)), '|', color='k',mew=3) clf = Ridge(alpha=1e-9,fit_intercept=False) _=clf.fit(Xq,y) ax=axs[1] _=ax.plot(xi,yi,label=r'true',ls='--',color='k') _=ax.plot(xi,clf.predict(Xiq),label=r'$\beta_{RR}$',color='k') _=ax.legend(loc=(0.25,0.70)) 
_=ax.fill_between(xi.flatten(),yi.flatten(),clf.predict(Xiq).flatten(),color='gray',alpha=.3) # add rug plot _=ax.plot(xin, -1.5+np.array([0.01]*len(xin)), '|', color='k',mew=3) _=ax.set_xlabel('$x$',fontsize=22) _=ax.set_ylabel(r'$\hat{y}$ ',fontsize=22,rotation='horizontal') _=ax.set_title('Ridge Regression of Chirp Signal') """ Explanation: Ridge regression solution trades fitting error ($\Vert y-\mathbf{X} \boldsymbol{\beta}\Vert_2$) for solution length ($\Vert\boldsymbol{\beta}\Vert_2$). Let's see this in action with a familiar example from ch:stats:sec:nnreg. Consider Figure. For this example, we created our usual chirp signal and attempted to fit it with a high-dimensional polynomial, as we did in the section ch:ml:sec:cv. The lower panel is the same except with ridge regression. The shaded gray area is the space between the true signal and the approximant in both cases. The horizontal hash marks indicate the subset of $x_i$ values that each regressor was trained on. Thus, the training set represents a non-uniform sample of the underlying chirp waveform. The top panel shows the usual polynomial regression. Note that the regressor fits the given points extremely well, but fails at the endpoint. The ridge regressor misses many of the points in the middle, as indicated by the gray area, but does not overshoot at the ends as much as the plain polynomial regression. This is the basic trade-off for ridge regression. The Jupyter/IPython notebook has the code for this graph, but the main steps are shown in the following, End of explanation """ X = np.matrix([[1,2,3], [3,4,5]]) y = np.matrix([[1,2]]).T from sklearn.linear_model import Lasso lr = Lasso(alpha=1.0,fit_intercept=False) _=lr.fit(X,y) print(lr.coef_) """ Explanation: <!-- dom:FIGURE: [fig-machine_learning/regularization_003.png, width=500 frac=0.85] The top figure shows polynomial regression and the lower panel shows polynomial ridge regression. The ridge regression does not match as well throughout most of the domain, but it does not flare as violently at the ends. This is because the ridge constraint holds the coefficient vector down at the expense of poorer performance along the middle of the domain. <div id="fig:regularization_003"></div> --> <!-- begin figure --> <div id="fig:regularization_003"></div> <p>The top figure shows polynomial regression and the lower panel shows polynomial ridge regression. The ridge regression does not match as well throughout most of the domain, but it does not flare as violently at the ends. This is because the ridge constraint holds the coefficient vector down at the expense of poorer performance along the middle of the domain.</p> <img src="fig-machine_learning/regularization_003.png" width=500> <!-- end figure --> Lasso Lasso regression follows the same basic pattern as ridge regression, except with the $L_1$ norm in the objective function. $$ \min_{\boldsymbol{\beta}\in\mathbb{R}^p}\Vert y-\mathbf{X}\boldsymbol{\beta}\Vert^2 +\alpha\Vert\boldsymbol{\beta}\Vert_1 $$ The interface in Scikit-learn is likewise the same. 
The following is the same problem as before using lasso instead of ridge regression, End of explanation """ from scipy.optimize import fmin obj = 1/4.*(X*beta-y).norm(2)**2 + beta.norm(1)*l f = S.lambdify((b0,b1,b2),obj.subs(l,1.0)) g = lambda x:f(x[0],x[1],x[2]) fmin(g,[0.1,0.2,0.3]) """ Explanation: As before, we can use the optimization tools in Scipy to solve this also, End of explanation """ o=[] alphas= np.logspace(-3,0,10) for a in alphas: clf = Lasso(alpha=a,fit_intercept=False) _=clf.fit(X,y) o.append(clf.coef_) fig,ax=subplots() fig.set_size_inches((8,5)) k=np.vstack(o) ls = ['-','--',':','-.'] for i in range(k.shape[1]): _=ax.semilogx(alphas,k[:,i],'o-', label='coef %d'%(i), color='k',ls=ls[i], alpha=.8,) _=ax.axis(ymin=-1e-1) _=ax.legend(loc=0) _=ax.set_xlabel(r'$\alpha$',fontsize=20) _=ax.set_ylabel(r'Lasso coefficients',fontsize=16) fig.tight_layout() """ Explanation: Programming Tip. The fmin function from Scipy's optimization module uses an algorithm that does not depend upon derivatives. This is useful because, unlike the $L_2$ norm, the $L_1$ norm has sharp corners that make it harder to estimate derivatives. This result matches the previous one from the Scikit-learn Lasso object. Solving it using Scipy is motivating and provides a good sanity check, but specialized algorithms are required in practice. The following code block re-runs the lasso with varying $\alpha$ and plots the coefficients in Figure. Notice that as $\alpha$ increases, all but one of the coefficients is driven to zero. Increasing $\alpha$ makes the trade-off between fitting the data in the $L_2$ sense and wanting to reduce the number of nonzero coefficients (equivalently, the number of features used) in the model. For a given problem, it may be more practical to focus on reducing the number of features in the model (i.e., large $\alpha$) than the quality of the data fit in the training data. The lasso provides a clean way to navigate this trade-off. The following code loops over a set of $\alpha$ values and collects the corresponding lasso coefficients to be plotted in Figure End of explanation """
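# A small numerical check, added here for illustration: counting the nonzero
# lasso coefficients for a few alpha values makes the sparsity effect explicit.
# X and y are the same small matrices used in the lasso example above.
for a in (0.001, 0.1, 1.0):
    clf = Lasso(alpha=a, fit_intercept=False)
    _ = clf.fit(X, y)
    print(a, int(np.count_nonzero(clf.coef_)))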
mit-eicu/eicu-code
notebooks/nursecharting.ipynb
mit
# Import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import psycopg2 import getpass # for configuring connection from configobj import ConfigObj import os %matplotlib inline # Create a database connection using settings from config file config='../db/config.ini' # connection info conn_info = dict() if os.path.isfile(config): config = ConfigObj(config) conn_info["sqluser"] = config['username'] conn_info["sqlpass"] = config['password'] conn_info["sqlhost"] = config['host'] conn_info["sqlport"] = config['port'] conn_info["dbname"] = config['dbname'] conn_info["schema_name"] = config['schema_name'] else: conn_info["sqluser"] = 'postgres' conn_info["sqlpass"] = '' conn_info["sqlhost"] = 'localhost' conn_info["sqlport"] = 5432 conn_info["dbname"] = 'eicu' conn_info["schema_name"] = 'public,eicu_crd' # Connect to the eICU database print('Database: {}'.format(conn_info['dbname'])) print('Username: {}'.format(conn_info["sqluser"])) if conn_info["sqlpass"] == '': # try connecting without password, i.e. peer or OS authentication try: if (conn_info["sqlhost"] == 'localhost') & (conn_info["sqlport"]=='5432'): con = psycopg2.connect(dbname=conn_info["dbname"], user=conn_info["sqluser"]) else: con = psycopg2.connect(dbname=conn_info["dbname"], host=conn_info["sqlhost"], port=conn_info["sqlport"], user=conn_info["sqluser"]) except: conn_info["sqlpass"] = getpass.getpass('Password: ') con = psycopg2.connect(dbname=conn_info["dbname"], host=conn_info["sqlhost"], port=conn_info["sqlport"], user=conn_info["sqluser"], password=conn_info["sqlpass"]) query_schema = 'set search_path to ' + conn_info['schema_name'] + ';' """ Explanation: nurseCharting The nurseCharting table is the largest table in eICU-CRD, and contains information entered in a semi-structured form by care staff. The three columns nursingchartcelltypecat, nursingchartcelltypevallabel and nursingchartcelltypevalname provide an organised structure for the data, but values entered in nursingchartvalue are free text entry and therefore fairly unstructured. Nurse charting data can be entered directy into the system or can represent interfaced data from charting in the bedside EMR. At the moment, publicly available data for this table has been chosen based upon whether the information in nursingchartvalue is structured: data where this is highly structured has been made available (including scores such as the Glasgow Coma Scale or vital signs such as heart rate), conversely data where this is highly unstructured (free-text comments, nursing assessments) are not currently publicly available. 
End of explanation """ patientunitstayid = 141168 query = query_schema + """ select * from nursecharting where patientunitstayid = {} order by nursingchartoffset """.format(patientunitstayid) df = pd.read_sql_query(query, con) df.head() df.columns # Look at a subset of columns cols = ['nursingchartid','patientunitstayid', 'nursingchartoffset','nursingchartentryoffset', 'nursingchartcelltypecat', 'nursingchartcelltypevallabel', 'nursingchartcelltypevalname', 'nursingchartvalue'] df[cols].head() """ Explanation: Examine a single patient End of explanation """ vitals = df['nursingchartcelltypevallabel'].value_counts() vitals # list of lists # for each element, the list is: # [nursingchartcelltypevallabel, nursingchartcelltypevalname] vitals = [['Heart Rate','Heart Rate'], ['O2 Saturation','O2 Saturation'], ['Temperature','Temperature (C)'], ['Non-Invasive BP','Non-Invasive BP Systolic'], ['Non-Invasive BP','Non-Invasive BP Diastolic'], ['Invasive BP','Invasive BP Systolic'], ['Invasive BP','Invasive BP Diastolic'], ['MAP (mmHg)','Value']] plt.figure(figsize=[12,8]) for v in vitals: idx = (df['nursingchartcelltypevallabel'] == v[0]) & (df['nursingchartcelltypevalname'] == v[1]) df_plot = df.loc[idx, :] if 'Systolic' in v[1]: marker = '^-' elif 'Diastolic' in v[1]: marker = 'v-' else: marker = 'o-' plt.plot(df_plot['nursingchartoffset'], pd.to_numeric(df_plot['nursingchartvalue'], errors='coerce'), marker, markersize=8, lw=2, label=v) plt.xlabel('Time since ICU admission (minutes)') plt.ylabel('Measurement value') plt.legend(loc='upper right') plt.show() """ Explanation: Plot patient vitals over time End of explanation """ query = query_schema + """ with t as ( select distinct patientunitstayid from nursecharting ) select pt.hospitalid , count(distinct pt.patientunitstayid) as number_of_patients , count(distinct t.patientunitstayid) as number_of_patients_with_tbl from patient pt left join t on pt.patientunitstayid = t.patientunitstayid group by pt.hospitalid """.format(patientunitstayid) df = pd.read_sql_query(query, con) df['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0 df.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True) df.head(n=10) plt.figure(figsize=[10,6]) plt.hist(df['data completion'], bins=np.linspace(0, 100, 10)) plt.xlabel('Percent of patients with data in nursecharting') plt.ylabel('Number of hospitals') plt.show() """ Explanation: Hospitals with data available End of explanation """
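# Optional summary, shown for illustration only: describe() gives quick
# distribution statistics for the per-hospital completion percentages
# plotted in the histogram above.
print(df['data completion'].describe())
print('Total ICU stays across hospitals: {}'.format(df['number_of_patients'].sum()))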
blevine37/pySpawn17
examples/spawn_analysis.ipynb
mit
print "Currently in directory:", os.getcwd() # THIS IS THE ONLY PART OF THE CODE THAT NEEDS TO BE CHANGED dir_name = "/Users/Dmitry/Documents/Research/MSU/4tce/cis/" h5filename = "sim.1.hdf5" os.chdir(dir_name) an = pyspawn.fafile(h5filename) an.fill_electronic_state_populations(column_filename="N.dat") an.fill_labels() an.fill_istates() an.get_numstates() times = an.datasets["quantum_times"] el_pop = an.datasets["electronic_state_populations"] istates = an.datasets["istates"] labels = an.datasets["labels"] ntraj = len(an.datasets["labels"]) nstates = an.datasets['numstates'] an.fill_nuclear_bf_populations() # write files with energy data for each trajectory an.fill_trajectory_energies(column_file_prefix="E") # write file with time derivative couplings for each trajectory an.fill_trajectory_tdcs(column_file_prefix="tdc") # compute Mulliken population of each trajectory an.fill_mulliken_populations(column_filename="mull.dat") mull_pop = an.datasets["electronic_state_populations"] # istates dict an.create_istate_dict() istates_dict = an.datasets['istates_dict'] """ Explanation: Single simulation analysis Here we create a fafile object that pulls the data from the sim.hdf5 file and outputs the arrays for plotting. End of explanation """ # writing xyz files an.write_xyzs() # list of bonds to keep track of bonds_list = np.array([[3, 11]]) # write datasets for bonds an.fill_trajectory_bonds(bonds_list, column_file_prefix="bonds") # dihedral angles list diheds_list = np.array([[2, 6, 9, 10]]) # write datasets for dihedral angles an.fill_trajectory_diheds(diheds_list, column_file_prefix="diheds") """ Explanation: This part takes care of xyz trajectory files, bonds, angles (need to have atoms array for this to work) End of explanation """ arrays = ("poten", "pop", "toten", "aven", "kinen", "time", "tdc", "bonds", "diheds") # creating dictionary for the datasets we want to plot # keys are trajectory labels for array in arrays: exec(array + "= dict()") for traj in an.datasets["labels"]: poten[traj] = an.datasets[traj + "_poten"] toten[traj] = an.datasets[traj + "_toten"] kinen[traj] = an.datasets[traj + "_kinen"] time[traj] = an.datasets[traj + "_time"] tdc[traj] = an.datasets[traj + "_tdc"] bonds[traj] = an.datasets[traj + "_bonds"] diheds[traj] = an.datasets[traj + "_diheds"] """ Explanation: Loading the arrays for plotting End of explanation """ colors = ("r", "g", "b", "m", "y", "k", "k") linestyles = ("-", "--", "-.", ":","-","-","-","-","-","-","-","-","-","-","-","-") markers=("None","None","None","None","d","o","v","^","s","p","d","o","v","^","s","p") large_size = 20 medium_size = 18 small_size = 16 """ Explanation: Setting plotting parameters (Perhaps there is a better way to do it, right now these hardcoded color and styles limit us to 7 electronic states and 16 trajectories. 
However, one could argue that more lines on a single plot would not be very informative anyway) End of explanation """ labels_to_plot_widget = widgets.SelectMultiple( options=labels, value=['00'], rows=10, description='Trajectories', disabled=False ) display(labels_to_plot_widget) """ Explanation: This widget picks the trajectories we want to plot in case there are many of them End of explanation """ %matplotlib notebook traj_plot.plot_total_energies(time, toten, labels_to_plot_widget.value, istates_dict, colors, markers, linestyles) """ Explanation: Plotting Total Energies End of explanation """ populated_states = np.amax(istates) + 1 traj_plot.plot_total_pop(times, mull_pop, populated_states, colors) """ Explanation: Total Population End of explanation """ display(labels_to_plot_widget) %matplotlib notebook traj_plot.plot_energies(labels_to_plot_widget.value, time, poten, nstates, colors, linestyles) """ Explanation: Plotting Potential Energies End of explanation """ display(labels_to_plot_widget) %matplotlib notebook # Gap between ground and first excited states state1 = 0 state2 = 1 traj_plot.plot_e_gap(time, poten, labels_to_plot_widget.value, state1, state2, istates_dict, colors, linestyles, markers) %matplotlib notebook spawnthresh = 0.00785 # plot_tdc(labels, time, tdc, nstates, spawnthresh) traj_plot.plot_tdc(time, tdc, labels_to_plot_widget.value, nstates, istates_dict, spawnthresh, colors, linestyles, markers) """ Explanation: Plotting Energy gaps End of explanation """ display(labels_to_plot_widget) %matplotlib notebook traj_plot.plot_bonds(time, labels_to_plot_widget.value, bonds_list, bonds, colors, linestyles) """ Explanation: Bonds End of explanation """ %matplotlib notebook traj_plot.plot_diheds(time, labels_to_plot_widget.value, diheds_list, diheds, colors, linestyles) plt.savefig('/Users/Dmitry/Documents/Research/MSU/4tce/cis/angles.png', dpi=300) """ Explanation: Dihedral Angles End of explanation """ xyz_widget = widgets.RadioButtons( options=labels, # value='pineapple', description='Trajectory:', disabled=False ) display(xyz_widget) print "Trajectory:", xyz_widget.value path_to_xyz = dir_name + "/traj_" + xyz_widget.value + ".xyz" print "Path to xyz file:", path_to_xyz traj = mda.Universe(path_to_xyz) w = nv.show_mdanalysis(traj) w """ Explanation: Trajectory visualization In this widget we pick the trajectory label to visualize End of explanation """
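# Quick sanity check (illustrative; assumes the Universe created above loaded
# the xyz file correctly): report how many atoms and frames MDAnalysis found.
print "Number of atoms:", traj.atoms.n_atoms
print "Number of frames:", len(traj.trajectory)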
nbokulich/short-read-tax-assignment
ipynb/mock-community/taxonomy-assignment-vsearch.ipynb
bsd-3-clause
from os.path import join, expandvars from joblib import Parallel, delayed from glob import glob from os import system from tax_credit.framework_functions import (parameter_sweep, generate_per_method_biom_tables, move_results_to_repository) project_dir = expandvars("$HOME/Desktop/projects/short-read-tax-assignment") analysis_name= "mock-community" data_dir = join(project_dir, "data", analysis_name) reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/") results_dir = expandvars("$HOME/Desktop/projects/mock-community/") """ Explanation: Data generation: using python to sweep over methods and parameters This notebook demonstrates taxonomy classification using vsearch followed by consensus assignment in QIIME2's q2-feature-classifier. Environment preparation End of explanation """ dataset_reference_combinations = [ ('mock-1', 'gg_13_8_otus'), # formerly S16S-1 ('mock-2', 'gg_13_8_otus'), # formerly S16S-2 ('mock-3', 'gg_13_8_otus'), # formerly Broad-1 ('mock-4', 'gg_13_8_otus'), # formerly Broad-2 ('mock-5', 'gg_13_8_otus'), # formerly Broad-3 ('mock-6', 'gg_13_8_otus'), # formerly Turnbaugh-1 ('mock-7', 'gg_13_8_otus'), # formerly Turnbaugh-2 ('mock-8', 'gg_13_8_otus'), # formerly Turnbaugh-3 ('mock-9', 'unite_20.11.2016_clean_fullITS'), # formerly ITS1 ('mock-10', 'unite_20.11.2016_clean_fullITS'), # formerly ITS2-SAG ('mock-12', 'gg_13_8_otus'), # Extreme ('mock-13', 'gg_13_8_otus_full16S_clean'), # kozich-1 ('mock-14', 'gg_13_8_otus_full16S_clean'), # kozich-2 ('mock-15', 'gg_13_8_otus_full16S_clean'), # kozich-3 ('mock-16', 'gg_13_8_otus'), # schirmer-1 ('mock-18', 'gg_13_8_otus'), ('mock-19', 'gg_13_8_otus'), ('mock-20', 'gg_13_8_otus'), ('mock-21', 'gg_13_8_otus'), ('mock-22', 'gg_13_8_otus'), ('mock-23', 'gg_13_8_otus'), ('mock-24', 'unite_20.11.2016_clean_fullITS'), ('mock-25', 'unite_20.11.2016_clean_fullITS'), ('mock-26-ITS1', 'unite_20.11.2016_clean_fullITS'), ('mock-26-ITS9', 'unite_20.11.2016_clean_fullITS'), ] reference_dbs = {'gg_13_8_otus_clean' : (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r.qza'), join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')), 'gg_13_8_otus' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus_515f-806r_trim250.qza'), join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')), 'gg_13_8_otus_full16S_clean' : (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean.qza'), join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')), 'gg_13_8_otus_full16S' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus.qza'), join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')), 'unite_20.11.2016_clean_fullITS' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean.qza'), join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')), 'unite_20.11.2016_clean' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r.qza'), join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev.qza')), 'unite_20.11.2016' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_ITS1Ff-ITS2r_trim250.qza'), join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev.qza'))} """ Explanation: Preparing data set sweep First, we're 
going to define the data sets that we'll sweep over. The following cell does not need to be modified unless if you wish to change the datasets or reference databases used in the sweep. End of explanation """ method_parameters_combinations = { 'vsearch' : {'p-maxaccepts': [1, 10, 100], 'p-perc-identity': [0.80, 0.90, 0.97, 0.99], 'p-min-consensus': [0.51, 0.75, 0.99]} } """ Explanation: Preparing the method/parameter combinations and generating commands Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below. End of explanation """ command_template = "mkdir -p {0}; qiime feature-classifier vsearch --i-query {1} --o-classification {0}/rep_seqs_tax_assignments.qza --i-reference-reads {2} --i-reference-taxonomy {3} {5}; qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}" commands = parameter_sweep(data_dir, results_dir, reference_dbs, dataset_reference_combinations, method_parameters_combinations, command_template, infile='rep_seqs.qza', output_name='rep_seqs_tax_assignments.qza') """ Explanation: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep(). Fields must adhere to following format: {0} = output directory {1} = input data {2} = reference sequences {3} = reference taxonomy {4} = method name {5} = other parameters End of explanation """ print(len(commands)) commands[0] """ Explanation: As a sanity check, we can look at the first command that was generated and the number of commands generated. End of explanation """ Parallel(n_jobs=4)(delayed(system)(command) for command in commands) """ Explanation: Finally, we run our commands. End of explanation """ taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'taxonomy.tsv') generate_per_method_biom_tables(taxonomy_glob, data_dir) """ Explanation: Generate per-method biom tables Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells. End of explanation """ precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name) method_dirs = glob(join(results_dir, '*', '*', '*', '*')) move_results_to_repository(method_dirs, precomputed_results_dir) """ Explanation: Move result files to repository Add results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless if substantial changes were made to filepaths in the preceding cells. End of explanation """
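# Optional sanity check (illustrative): confirm how many result directories the
# glob matched before they were moved into the precomputed-results directory.
print(len(method_dirs), 'method result directories moved to', precomputed_results_dir)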
chetnapriyadarshini/deep-learning
autoencoder/Convolutional_Autoencoder.ipynb
mit
%matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) img = mnist.train.images[2] plt.imshow(img.reshape((28, 28)), cmap='Greys_r') """ Explanation: Convolutional Autoencoder Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data. End of explanation """ learning_rate = 0.001 inputs_ = tf.placeholder(tf.float32,(None,28,28,1),name="inputs") targets_ = tf.placeholder(tf.float32,(None,28,28,1),name="targets") ### Encoder conv1 = tf.layers.conv2d(inputs_,16,(3,3),padding="same",activation = tf.nn.relu) # Now 28x28x16 maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=(2,2), strides=(2,2),padding="same") # Now 14x14x16 conv2 = tf.layers.conv2d(maxpool1,8,(3,3),padding="same",activation = tf.nn.relu) # Now 14x14x8 maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=(2,2), strides=(2,2),padding="same") # Now 7x7x8 conv3 = tf.layers.conv2d(maxpool2,8,(3,3),padding="same",activation = tf.nn.relu) # Now 7x7x8 encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=(2,2), strides=(2,2),padding="same") # Now 4x4x8 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded,(7,7)) # Now 7x7x8 conv4 = tf.layers.conv2d(upsample1,8,(3,3),padding="same",activation = tf.nn.relu) # Now 7x7x8 upsample2 = tf.image.resize_nearest_neighbor(conv4,(14,14)) # Now 14x14x8 conv5 = tf.layers.conv2d(upsample2,8,(3,3),padding="same",activation = tf.nn.relu) # Now 14x14x8 upsample3 = tf.image.resize_nearest_neighbor(conv5,(28,28)) # Now 28x28x8 conv6 = tf.layers.conv2d(upsample3,16,(3,3),padding="same",activation = tf.nn.relu) # Now 28x28x16 logits = tf.layers.conv2d(conv6,1,(3,3),padding="same",activation = None) #Now 28x28x1 print(logits.shape) # Pass logits through sigmoid to get reconstructed image decoded = tf.nn.sigmoid(logits,name='decoded') # Pass logits through sigmoid and calculate the cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) # Get cost and define the optimizer cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) """ Explanation: Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoder Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. 
Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but it reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose. However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling. Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor. End of explanation """ sess = tf.Session() epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) imgs = batch[0].reshape((-1, 28, 28, 1)) batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([in_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) sess.close() """ Explanation: Training As before, here wi'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays. 
End of explanation """ learning_rate = 0.001 inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets') ### Encoder conv1 = # Now 28x28x32 maxpool1 = # Now 14x14x32 conv2 = # Now 14x14x32 maxpool2 = # Now 7x7x32 conv3 = # Now 7x7x16 encoded = # Now 4x4x16 ### Decoder upsample1 = # Now 7x7x16 conv4 = # Now 7x7x16 upsample2 = # Now 14x14x16 conv5 = # Now 14x14x32 upsample3 = # Now 28x28x32 conv6 = # Now 28x28x32 logits = #Now 28x28x1 # Pass logits through sigmoid to get reconstructed image decoded = # Pass logits through sigmoid and calculate the cross-entropy loss loss = # Get cost and define the optimizer cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) sess = tf.Session() epochs = 100 batch_size = 200 # Set's how much noise we're adding to the MNIST images noise_factor = 0.5 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images from the batch imgs = batch[0].reshape((-1, 28, 28, 1)) # Add random noise to the input images noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape) # Clip the images to be between 0 and 1 noisy_imgs = np.clip(noisy_imgs, 0., 1.) # Noisy images as inputs, original images as targets batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) """ Explanation: Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before. Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers. End of explanation """ fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape) noisy_imgs = np.clip(noisy_imgs, 0., 1.) reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([noisy_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) """ Explanation: Checking out the performance Here I'm adding noise to the test images and passing them through the autoencoder. It does a suprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. End of explanation """
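# A rough quantitative check (illustrative): mean squared error between the
# clean test images and the denoised reconstructions plotted above.
mse = np.mean((reconstructed.reshape(len(in_imgs), -1) - in_imgs) ** 2)
print("Denoising MSE on the 10 test images: {:.4f}".format(mse))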
sujitpal/polydlot
src/mxnet/01-mnist-fcn.ipynb
apache-2.0
from __future__ import division, print_function from sklearn.metrics import accuracy_score, confusion_matrix from sklearn.preprocessing import OneHotEncoder import matplotlib.pyplot as plt import mxnet as mx import numpy as np import os %matplotlib inline DATA_DIR = "../../data" TRAIN_FILE = os.path.join(DATA_DIR, "mnist_train.csv") TEST_FILE = os.path.join(DATA_DIR, "mnist_test.csv") MODEL_FILE = os.path.join(DATA_DIR, "mxnet-mnist-fcn") LEARNING_RATE = 0.001 INPUT_SIZE = 28*28 BATCH_SIZE = 128 NUM_CLASSES = 10 NUM_EPOCHS = 10 """ Explanation: MNIST Digit Classification - FCN End of explanation """ def parse_file(filename): xdata, ydata = [], [] fin = open(filename, "rb") i = 0 for line in fin: if i % 10000 == 0: print("{:s}: {:d} lines read".format( os.path.basename(filename), i)) cols = line.strip().split(",") ydata.append(int(cols[0])) # xdata.append([float(x) / 255. for x in cols[1:]]) xdata.append([float(x) for x in cols[1:]]) i += 1 fin.close() print("{:s}: {:d} lines read".format(os.path.basename(filename), i)) X = np.array(xdata) ohe = OneHotEncoder(n_values=NUM_CLASSES) Y = ohe.fit_transform([ydata]).todense().reshape(len(ydata), -1) return X, Y Xtrain, Ytrain = parse_file(TRAIN_FILE) Xtest, Ytest = parse_file(TEST_FILE) print(Xtrain.shape, Ytrain.shape, Xtest.shape, Ytest.shape) train_gen = mx.io.NDArrayIter(Xtrain, label=Ytrain, batch_size=BATCH_SIZE, shuffle=True) val_gen = mx.io.NDArrayIter(Xtest, label=Ytest, batch_size=BATCH_SIZE) """ Explanation: Prepare Data End of explanation """ # Create a place holder variable for the input data data = mx.sym.Variable('data') # FC1: 784 => 128 fc1 = mx.sym.FullyConnected(data=data, name='fc1', num_hidden=128) fc1 = mx.sym.Activation(data=fc1, name='relu1', act_type="relu") fc1 = mx.sym.Dropout(data=fc1, name="drop1", p=0.2) # FC2: 128 => 64 fc2 = mx.sym.FullyConnected(data=fc1, name='fc2', num_hidden=64) fc2 = mx.sym.Activation(data=fc2, name='relu2', act_type="relu") fc2 = mx.sym.Dropout(data=fc2, name="drop2", p=0.2) # FC3: 64 => 10 fc3 = mx.sym.FullyConnected(data=fc2, name='fc3', num_hidden=NUM_CLASSES) # The softmax and loss layer net = mx.sym.SoftmaxOutput(data=fc3, name='softmax') """ Explanation: Define Network End of explanation """ import logging logging.getLogger().setLevel(logging.DEBUG) train_gen.reset() val_gen.reset() model = mx.mod.Module(symbol=net, data_names=["data"], label_names=["softmax_label"]) checkpoint = mx.callback.do_checkpoint(MODEL_FILE) num_batches_per_epoch = len(Xtrain) // BATCH_SIZE model.fit(train_gen, eval_data=val_gen, optimizer="adam", optimizer_params={"learning_rate": LEARNING_RATE}, eval_metric="acc", num_epoch=NUM_EPOCHS, epoch_end_callback=checkpoint) """ Explanation: Train Network No built-in method to capture loss and accuracy during training. One can register a custom callback to collect the training accuracy at the end of every epoch, but apparently the general approach in the MXNet community is to eyeball the numbers. End of explanation """ test_gen = mx.io.NDArrayIter(Xtest, label=Ytest, batch_size=BATCH_SIZE) test_accuracy = model.score(test_gen, eval_metric="acc") print(test_accuracy) """ Explanation: Evaluate Network End of explanation """
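# Illustrative only: the per-epoch checkpoints written by do_checkpoint above
# can be reloaded later with load_checkpoint; this assumes training completed
# all NUM_EPOCHS epochs so that a checkpoint with that epoch number exists.
sym, arg_params, aux_params = mx.model.load_checkpoint(MODEL_FILE, NUM_EPOCHS)
print(sym.list_outputs())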
Chipe1/aima-python
planning_hierarchical_search.ipynb
mit
from planning import * from notebook import psource psource(Problem.refinements) """ Explanation: Hierarchical Search Hierarchical search is a a planning algorithm in high level of abstraction. <br> Instead of actions as in classical planning (chapter 10) (primitive actions) we now use high level actions (HLAs) (see planning.ipynb) <br> Refinements Each HLA has one or more refinements into a sequence of actions, each of which may be an HLA or a primitive action (which has no refinements by definition).<br> For example: - (a) the high level action "Go to San Fransisco airport" (Go(Home, SFO)), might have two possible refinements, "Drive to San Fransisco airport" and "Taxi to San Fransisco airport". <br> - (b) A recursive refinement for navigation in the vacuum world would be: to get to a destination, take a step, and then go to the destination. <br> <br> - implementation: An HLA refinement that contains only primitive actions is called an implementation of the HLA - An implementation of a high-level plan (a sequence of HLAs) is the concatenation of implementations of each HLA in the sequence - A high-level plan achieves the goal from a given state if at least one of its implementations achieves the goal from that state <br> The refinements function input is: - hla: the HLA of which we want to compute its refinements - state: the knoweledge base of the current problem (Problem.init) - library: the hierarchy of the actions in the planning problem End of explanation """ psource(Problem.hierarchical_search) """ Explanation: Hierarchical search Hierarchical search is a breadth-first implementation of hierarchical forward planning search in the space of refinements. (i.e. repeatedly choose an HLA in the current plan and replace it with one of its refinements, until the plan achieves the goal.) <br> The algorithms input is: problem and hierarchy - problem: is of type Problem - hierarchy: is a dictionary consisting of all the actions and the order in which they are performed. <br> In top level call, initialPlan contains [act] (i.e. is the action to be performed) End of explanation """ library = { 'HLA': ['Go(Home,SFO)', 'Go(Home,SFO)', 'Drive(Home, SFOLongTermParking)', 'Shuttle(SFOLongTermParking, SFO)', 'Taxi(Home, SFO)'], 'steps': [['Drive(Home, SFOLongTermParking)', 'Shuttle(SFOLongTermParking, SFO)'], ['Taxi(Home, SFO)'], [], [], []], 'precond': [['At(Home) & Have(Car)'], ['At(Home)'], ['At(Home) & Have(Car)'], ['At(SFOLongTermParking)'], ['At(Home)']], 'effect': [['At(SFO) & ~At(Home)'], ['At(SFO) & ~At(Home) & ~Have(Cash)'], ['At(SFOLongTermParking) & ~At(Home)'], ['At(SFO) & ~At(LongTermParking)'], ['At(SFO) & ~At(Home) & ~Have(Cash)']] } """ Explanation: Example Suppose that somebody wants to get to the airport. The possible ways to do so is either get a taxi, or drive to the airport. <br> Those two actions have some preconditions and some effects. If you get the taxi, you need to have cash, whereas if you drive you need to have a car. <br> Thus we define the following hierarchy of possible actions. 
hierarchy End of explanation """ go_SFO = HLA('Go(Home,SFO)', precond='At(Home)', effect='At(SFO) & ~At(Home)') taxi_SFO = HLA('Taxi(Home,SFO)', precond='At(Home)', effect='At(SFO) & ~At(Home) & ~Have(Cash)') drive_SFOLongTermParking = HLA('Drive(Home, SFOLongTermParking)', 'At(Home) & Have(Car)','At(SFOLongTermParking) & ~At(Home)' ) shuttle_SFO = HLA('Shuttle(SFOLongTermParking, SFO)', 'At(SFOLongTermParking)', 'At(SFO) & ~At(LongTermParking)') """ Explanation: the possible actions are the following: End of explanation """ prob = Problem('At(Home) & Have(Cash) & Have(Car)', 'At(SFO) & Have(Cash)', [go_SFO]) """ Explanation: Suppose that (our preconditionds are that) we are Home and we have cash and car and our goal is to get to SFO and maintain our cash, and our possible actions are the above. <br> Then our problem is: End of explanation """ for sequence in Problem.refinements(go_SFO, prob, library): print (sequence) print([x.__dict__ for x in sequence ], '\n') """ Explanation: Refinements The refinements of the action Go(Home, SFO), are defined as: <br> ['Drive(Home,SFOLongTermParking)', 'Shuttle(SFOLongTermParking, SFO)'], ['Taxi(Home, SFO)'] End of explanation """ plan= Problem.hierarchical_search(prob, library) print (plan, '\n') print ([x.__dict__ for x in plan]) """ Explanation: Run the hierarchical search Top level call End of explanation """ library_2 = { 'HLA': ['Go(Home,SFO)', 'Go(Home,SFO)', 'Bus(Home, MetroStop)', 'Metro(MetroStop, SFO)' , 'Metro(MetroStop, SFO)', 'Metro1(MetroStop, SFO)', 'Metro2(MetroStop, SFO)' ,'Taxi(Home, SFO)'], 'steps': [['Bus(Home, MetroStop)', 'Metro(MetroStop, SFO)'], ['Taxi(Home, SFO)'], [], ['Metro1(MetroStop, SFO)'], ['Metro2(MetroStop, SFO)'],[],[],[]], 'precond': [['At(Home)'], ['At(Home)'], ['At(Home)'], ['At(MetroStop)'], ['At(MetroStop)'],['At(MetroStop)'], ['At(MetroStop)'] ,['At(Home) & Have(Cash)']], 'effect': [['At(SFO) & ~At(Home)'], ['At(SFO) & ~At(Home) & ~Have(Cash)'], ['At(MetroStop) & ~At(Home)'], ['At(SFO) & ~At(MetroStop)'], ['At(SFO) & ~At(MetroStop)'], ['At(SFO) & ~At(MetroStop)'] , ['At(SFO) & ~At(MetroStop)'] ,['At(SFO) & ~At(Home) & ~Have(Cash)']] } plan_2 = Problem.hierarchical_search(prob, library_2) print(plan_2, '\n') print([x.__dict__ for x in plan_2]) """ Explanation: Example 2 End of explanation """
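The hierarchical_search call above hides the refinement bookkeeping inside the aima-python Problem class. As a rough, self-contained illustration of the same idea (breadth-first search in the space of refinements, replacing one high-level action at a time until a fully primitive plan passes the goal test), here is a toy sketch that does not use the aima-python API; the toy_library, the action names and the is_solution test are all invented for the example.

from collections import deque

# toy hierarchy: each high-level action maps to its possible refinements,
# primitive actions map to an empty list
toy_library = {
    'Go(Home,SFO)': [['Drive', 'Shuttle'], ['Taxi']],
    'Drive': [], 'Shuttle': [], 'Taxi': [],
}

def bfs_refine(initial_plan, library, is_solution):
    frontier = deque([initial_plan])
    while frontier:
        plan = frontier.popleft()
        # index of the first non-primitive action in the plan, if any
        hla = next((i for i, a in enumerate(plan) if library[a]), None)
        if hla is None:                     # the plan is fully primitive
            if is_solution(plan):
                return plan
            continue
        for refinement in library[plan[hla]]:
            frontier.append(plan[:hla] + refinement + plan[hla + 1:])
    return None

# accept any primitive plan whose last step reaches the airport
print(bfs_refine(['Go(Home,SFO)'], toy_library,
                 lambda plan: plan[-1] in ('Shuttle', 'Taxi')))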
YzPaul3/h2o-3
h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb
apache-2.0
import h2o # Start an H2O Cluster on your local machine h2o.init() """ Explanation: H2O Tutorial: Breast Cancer Classification Author: Erin LeDell Contact: erin@h2o.ai This tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce through a complete example H2O's capabilities from Python. Also, to help those that are accustomed to Scikit Learn and Pandas, the demo will be specific call outs for differences between H2O and those packages; this is intended to help anyone that needs to do machine learning on really Big Data make the transition. It is not meant to be a tutorial on machine learning or algorithms. Detailed documentation about H2O's and the Python API is available at http://docs.h2o.ai. Install H2O in Python Prerequisites This tutorial assumes you have Python 2.7 installed. The h2o Python package has a few dependencies which can be installed using pip. The packages that are required are (which also have their own dependencies): bash pip install requests pip install tabulate pip install scikit-learn If you have any problems (for example, installing the scikit-learn package), check out this page for tips. Install h2o Once the dependencies are installed, you can install H2O. We will use the latest stable version of the h2o package, which is called "Tibshirani-3." The installation instructions are on the "Install in Python" tab on this page. ```bash The following command removes the H2O module for Python (if it already exists). pip uninstall h2o Next, use pip to install this version of the H2O Python module. pip install http://h2o-release.s3.amazonaws.com/h2o/rel-tibshirani/3/Python/h2o-3.6.0.3-py2.py3-none-any.whl ``` Start up an H2O cluster In a Python terminal, we can import the h2o package and start up an H2O cluster. End of explanation """ # This will not actually do anything since it's a fake IP address # h2o.init(ip="123.45.67.89", port=54321) """ Explanation: If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows: End of explanation """ csv_url = "http://www.stat.berkeley.edu/~ledell/data/wisc-diag-breast-cancer-shuffled.csv" data = h2o.import_file(csv_url) """ Explanation: Download Data The following code downloads a copy of the Wisconsin Diagnostic Breast Cancer dataset. We can import the data directly into H2O using the Python API. End of explanation """ data.shape """ Explanation: Explore Data Once we have loaded the data, let's take a quick look. First the dimension of the frame: End of explanation """ data.head() """ Explanation: Now let's take a look at the top of the frame: End of explanation """ data.columns """ Explanation: The first two columns contain an ID and the resposne. The "diagnosis" column is the response. Let's take a look at the column names. The data contains derived features from the medical images of the tumors. 
End of explanation """ columns = ["id", "diagnosis", "area_mean"] data[columns].head() """ Explanation: To select a subset of the columns to look at, typical Pandas indexing applies: End of explanation """ data['diagnosis'] """ Explanation: Now let's select a single column, for example -- the response column, and look at the data more closely: End of explanation """ data['diagnosis'].unique() data['diagnosis'].nlevels() """ Explanation: It looks like a binary response, but let's validate that assumption: End of explanation """ data['diagnosis'].levels() """ Explanation: We can query the categorical "levels" as well ('B' and 'M' stand for "Benign" and "Malignant" diagnosis): End of explanation """ data.isna() data['diagnosis'].isna() """ Explanation: Since "diagnosis" column is the response we would like to predict, we may want to check if there are any missing values, so let's look for NAs. To figure out which, if any, values are missing, we can use the isna method on the diagnosis column. The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to a Frame also apply to a single column. End of explanation """ data['diagnosis'].isna().sum() """ Explanation: The isna method doesn't directly answer the question, "Does the diagnosis column contain any NAs?", rather it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, then summing over the whole column should produce a summand equal to 0.0. Let's take a look: End of explanation """ data.isna().sum() """ Explanation: Great, no missing labels. Out of curiosity, let's see if there is any missing data in this frame: End of explanation """ # TO DO: Insert a bar chart or something showing the proportion of M to B in the response. data['diagnosis'].table() """ Explanation: The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalanace" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution, both visually and numerically. End of explanation """ n = data.shape[0] # Total number of training samples data['diagnosis'].table()['Count']/n """ Explanation: Ok, the data is not exactly evenly distributed between the two classes -- there are almost twice as many Benign samples as there are Malicious samples. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below). End of explanation """ y = 'diagnosis' x = data.columns del x[0:1] x """ Explanation: Machine Learning in H2O We will do a quick demo of the H2O software -- trying to predict malignant tumors using various machine learning algorithms. Specify the predictor set and response The response, y, is the 'diagnosis' column, and the predictors, x, are all the columns aside from the first two columns ('id' and 'diagnosis'). 
End of explanation """ train, test = data.split_frame(ratios=[0.75], seed=1) train.shape test.shape """ Explanation: Split H2O Frame into a train and test set End of explanation """ # Import H2O GBM: from h2o.estimators.gbm import H2OGradientBoostingEstimator """ Explanation: Train and Test a GBM model End of explanation """ model = H2OGradientBoostingEstimator(distribution='bernoulli', ntrees=100, max_depth=4, learn_rate=0.1) """ Explanation: We first create a model object of class, "H2OGradientBoostingEstimator". This does not actually do any training, it just sets the model up for training by specifying model parameters. End of explanation """ model.train(x=x, y=y, training_frame=train, validation_frame=test) """ Explanation: The model object, like all H2O estimator objects, has a train method, which will actually perform model training. At this step we specify the training and (optionally) a validation set, along with the response and predictor variables. End of explanation """ print(model) """ Explanation: Inspect Model The type of results shown when you print a model, are determined by the following: - Model class of the estimator (e.g. GBM, RF, GLM, DL) - The type of machine learning problem (e.g. binary classification, multiclass classification, regression) - The data you specify (e.g. training_frame only, training_frame and validation_frame, or training_frame and nfolds) Below, we see a GBM Model Summary, as well as training and validation metrics since we supplied a validation_frame. Since this a binary classification task, we are shown the relevant performance metrics, which inclues: MSE, R^2, LogLoss, AUC and Gini. Also, we are shown a Confusion Matrix, where the threshold for classification is chosen automatically (by H2O) as the threshold which maximizes the F1 score. The scoring history is also printed, which shows the performance metrics over some increment such as "number of trees" in the case of GBM and RF. Lastly, for tree-based methods (GBM and RF), we also print variable importance. End of explanation """ perf = model.model_performance(test) perf.r2() perf.auc() """ Explanation: Model Performance on a Test Set Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we passed the test set as the validation_frame in training, so we have technically already created test set predictions and performance. However, when performing model selection over a variety of model parameters, it is common for users to break their dataset into three pieces: Training, Validation and Test. After training a variety of models using different parameters (and evaluating them on a validation set), the user may choose a single model and then evaluate model performance on a separate test set. This is when the model_performance method, shown below, is most useful. End of explanation """ cvmodel = H2OGradientBoostingEstimator(distribution='bernoulli', ntrees=100, max_depth=4, learn_rate=0.1, nfolds=5) cvmodel.train(x=x, y=y, training_frame=data) """ Explanation: Cross-validated Performance To perform k-fold cross-validation, you use the same code as above, but you specify nfolds as an integer greater than 1, or add a "fold_column" to your H2O Frame which indicates a fold ID for each row. Unless you have a specific reason to manually assign the observations to folds, you will find it easiest to simply use the nfolds argument. 
When performing cross-validation, you can still pass a validation_frame, but you can also choose to use the original dataset that contains all the rows. We will cross-validate a model below using the original H2O Frame which we call data. End of explanation """ ntrees_opt = [5,50,100] max_depth_opt = [2,3,5] learn_rate_opt = [0.1,0.2] hyper_params = {'ntrees': ntrees_opt, 'max_depth': max_depth_opt, 'learn_rate': learn_rate_opt} """ Explanation: Grid Search One way of evaluting models with different parameters is to perform a grid search over a set of parameter values. For example, in GBM, here are three model parameters that may be useful to search over: - ntrees: Number of trees - max_depth: Maximum depth of a tree - learn_rate: Learning rate in the GBM We will define a grid as follows: End of explanation """ from h2o.grid.grid_search import H2OGridSearch gs = H2OGridSearch(H2OGradientBoostingEstimator, hyper_params = hyper_params) """ Explanation: Define an "H2OGridSearch" object by specifying the algorithm (GBM) and the hyper parameters: End of explanation """ gs.train(x=x, y=y, training_frame=train, validation_frame=test) """ Explanation: An "H2OGridSearch" object also has a train method, which is used to train all the models in the grid. End of explanation """ print(gs) # print out the auc for all of the models for g in gs: print(g.model_id + " auc: " + str(g.auc())) #TO DO: Compare grid search models """ Explanation: Compare Models End of explanation """
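A small follow-up to the model-comparison loop above: rather than eyeballing the printed AUCs, the same iteration over the grid can select the best model directly. This is only a sketch; it assumes the gs grid trained above is still in scope, the valid=True keyword follows the h2o model API (auc(train, valid, xval)), which can vary slightly between h2o releases, and best_gbm is just a local name.

# pick the grid model with the highest AUC on the validation frame given to gs.train
best_gbm = max(gs, key=lambda g: g.auc(valid=True))
print(best_gbm.model_id, best_gbm.auc(valid=True))

# its held-out performance can then be inspected the same way as for the single model:
# best_gbm.model_performance(test).auc()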
slundberg/shap
notebooks/tabular_examples/tree_based_models/Explaining a simple OR function.ipynb
mit
import numpy as np import xgboost import shap """ Explanation: Explaining a simple OR function This notebook examines what it looks like to explain an OR function using SHAP values. It is based on a simple example with two features is_young and is_female, roughly motivated by the Titanic survival dataset where women and children were given priority during the evacuation and so were more likely to survive. In this simulated example this effect is taken to the extreme, where all children and women survive and no adult men survive. End of explanation """ N = 40000 M = 2 # randomly create binary features for (is_young, and is_female) X = (np.random.randn(N,2) > 0) * 1 # force the first sample to be a young boy X[0,0] = 1 X[0,1] = 0 # you survive only if you are young or female y = ((X[:,0] + X[:,1]) > 0) * 1 """ Explanation: Create a dataset following an OR function End of explanation """ model = xgboost.XGBRegressor(n_estimators=100, learning_rate=0.1) model.fit(X, y) model.predict(X) """ Explanation: Train an XGBoost model to mimic this OR function End of explanation """ explainer = shap.TreeExplainer(model, X, feature_dependence="independent") shap_values = explainer.shap_values(X[:1,:]) print("explainer.expected_value:", explainer.expected_value.round(4)) print("SHAP values for (is_young = True, is_female = False):", shap_values[0].round(4)) print("model output:", (explainer.expected_value + shap_values[0].sum()).round(4)) """ Explanation: Explain the prediction for a young boy Using the training set for the background distribution Note that in the example explanation below is_young = True has a positive value (meaning it increases the model output, and hence the prediction of survival), while is_female = False has a negative value (meaning it decreases the model output). While one could argue that is_female = False should have no impact because we already know that the person is young, SHAP values account for the impact a feature has even when we don't nessecarily know the other features, which is why is_female = False still has a negative impact on the prediction. End of explanation """ explainer = shap.TreeExplainer(model, X[y == 0,:], feature_dependence="independent") shap_values = explainer.shap_values(X[:1,:]) print("explainer.expected_value:", explainer.expected_value.round(4)) print("SHAP values for (is_young = True, is_female = False):", shap_values[0].round(4)) print("model output:", (explainer.expected_value + shap_values[0].sum()).round(4)) """ Explanation: Using only negative examples for the background distribution The point of this second explanation example is to demonstrate how using a different background distribution can change the allocation of credit among the input features. This happens because we are now comparing the importance of a feature as compared to being someone who died (an adult man). The only thing different about the young boy from someone who died is that the boy is young, so all the credit goes to the is_young = True feature. This highlights that often explanations are clearer when a well defined background group is used. In this case it changes the explanation from how this sample is different than typical, to how this sample is different from those who died (in other words, why did you live?). 
End of explanation """ explainer = shap.TreeExplainer(model, X[y == 1,:], feature_dependence="independent") shap_values = explainer.shap_values(X[:1,:]) print("explainer.expected_value:", explainer.expected_value.round(4)) print("SHAP values for (is_young = True, is_female = False):", shap_values[0].round(4)) print("model output:", (explainer.expected_value + shap_values[0].sum()).round(4)) """ Explanation: Using only positive examples for the background distribution We could also use only positive examples for our background distribution, and since the difference between the expected output of the model (under our background distribution) and the current output for the young boy is zero, the sum of the SHAP values will also be zero. End of explanation """ explainer = shap.TreeExplainer(model, np.ones((1,M)), feature_dependence="independent") shap_values = explainer.shap_values(X[:10,:]) shap_values[0:3].round(4) """ Explanation: Using young women for the background distribution If we compare the samples to young women, then neither of the features matters, except for the adult men, for whom both features are given equal credit for their death (as one might intuitively expect). End of explanation """
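To make the background-distribution effect described in these cells concrete, the Shapley attributions of a two-feature function are small enough to check by brute force. The sketch below is not part of the shap API: it scores coalitions against the true OR target rather than the fitted XGBoost model, it assumes X and y from the earlier cells are still in scope, and exact_shap / coalition_value are invented names, so the numbers should come out close to (but not exactly equal to) the TreeExplainer outputs above.

import itertools
from math import factorial

import numpy as np

def coalition_value(f, x, background, kept):
    # expected output when the features in `kept` are fixed to x's values and the
    # remaining features are drawn from the background set
    z = background.astype(float)
    for j in kept:
        z[:, j] = x[j]
    return f(z).mean()

def exact_shap(f, x, background):
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for subset in itertools.combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (coalition_value(f, x, background, set(subset) | {i})
                               - coalition_value(f, x, background, set(subset)))
    return phi

or_function = lambda Z: ((Z[:, 0] + Z[:, 1]) > 0).astype(float)
young_boy = np.array([1, 0])                                      # is_young=True, is_female=False
print(exact_shap(or_function, young_boy, X).round(4))             # full training background
print(exact_shap(or_function, young_boy, X[y == 0, :]).round(4))  # adult-men-only background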
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-1/cmip6/models/sandbox-3/atmoschem.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-3', 'atmoschem') """ Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: TEST-INSTITUTE-1 Source ID: SANDBOX-3 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:43 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.4. 
Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4. 
Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.7. 
Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmopsheric chemistry grid End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 * Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric cehmistry? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric chemistry emissions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. 
Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview gas phase atmospheric chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. 
Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) """ Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) """ Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.5. 
Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) """ Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) """ Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation """
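As a purely illustrative sketch, the two photolysis properties above would be completed with the same DOC.set_id / DOC.set_value pattern used throughout this notebook; the values chosen below are hypothetical placeholders, not metadata for any real model:
# Illustrative example only -- the values are assumptions, not real model metadata.
# 17.1 Method (ENUM): pick exactly one of the valid choices listed above.
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
DOC.set_value("Online")
# 17.2 Environmental Conditions (STRING): free-text description.
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
DOC.set_value("Hypothetical example: cross-sections use modelled temperature and pressure.")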
caiyunapp/theano_lstm
Tutorial.ipynb
bsd-3-clause
## Fake dataset: class Sampler: def __init__(self, prob_table): total_prob = 0.0 if type(prob_table) is dict: for key, value in prob_table.items(): total_prob += value elif type(prob_table) is list: prob_table_gen = {} for key in prob_table: prob_table_gen[key] = 1.0 / (float(len(prob_table))) total_prob = 1.0 prob_table = prob_table_gen else: raise ArgumentError("__init__ takes either a dict or a list as its first argument") if total_prob <= 0.0: raise ValueError("Probability is not strictly positive.") self._keys = [] self._probs = [] for key in prob_table: self._keys.append(key) self._probs.append(prob_table[key] / total_prob) def __call__(self): sample = random.random() seen_prob = 0.0 for key, prob in zip(self._keys, self._probs): if (seen_prob + prob) >= sample: return key else: seen_prob += prob return key """ Explanation: A Nonsensical Language Model using Theano LSTM Today we will train a nonsensical language model ! We will first collect some language data, convert it to numbers, and then feed it to a recurrent neural network and ask it to predict upcoming words. When we are done we will have a machine that can generate sentences from our made-up language ad-infinitum ! Collect Language Data The first step here is to get some data. Since we are basing our language on nonsense, we need to generate good nonsense using a sampler. Our sampler will take a probability table as input, e.g. a language where people are equally likely to say "a" or "b" would be written as follows: nonsense = Sampler({"a": 0.5, "b": 0.5}) We get samples from this language like this: word = nonsense() We overloaded the __call__ method and got this syntactic sugar. End of explanation """ samplers = { "punctuation": Sampler({".": 0.49, ",": 0.5, ";": 0.03, "?": 0.05, "!": 0.05}), "stop": Sampler({"the": 10, "from": 5, "a": 9, "they": 3, "he": 3, "it" : 2.5, "she": 2.7, "in": 4.5}), "noun": Sampler(["cat", "broom", "boat", "dog", "car", "wrangler", "mexico", "lantern", "book", "paper", "joke","calendar", "ship", "event"]), "verb": Sampler(["ran", "stole", "carried", "could", "would", "do", "can", "carry", "catapult", "jump", "duck"]), "adverb": Sampler(["rapidly", "calmly", "cooly", "in jest", "fantastically", "angrily", "dazily"]) } """ Explanation: Parts of Speech Now that we have a Sampler we can create a couple different word groups that our language uses to distinguish between different probability distributions easily: End of explanation """ def generate_nonsense(word = ""): if word.endswith("."): return word else: if len(word) > 0: word += " " word += samplers["stop"]() word += " " + samplers["noun"]() if random.random() > 0.7: word += " " + samplers["adverb"]() if random.random() > 0.7: word += " " + samplers["adverb"]() word += " " + samplers["verb"]() if random.random() > 0.8: word += " " + samplers["noun"]() if random.random() > 0.9: word += "-" + samplers["noun"]() if len(word) > 500: word += "." else: word += " " + samplers["punctuation"]() return generate_nonsense(word) def generate_dataset(total_size, ): sentences = [] for i in range(total_size): sentences.append(generate_nonsense()) return sentences # generate dataset lines = generate_dataset(100) """ Explanation: Simple Grammar To create sentences from our language we create a simple recursion that goes as follows: If the sentence we have ends with a full stop, a question mark, or an exclamation point then end at once! 
Else our sentence should have: A stop word A noun An adverb (with prob 0.3), or 2 adverbs (with prob 0.3*0.3=0.09) A verb Another noun (with prob 0.2), or 2 more nouns connected by a dash (with prob 0.2*0.1=0.02) If our sentence is now over 500 characters, add a full stop and end at once! Else add some punctuation and go back to (1) End of explanation """ ### Utilities: class Vocab: __slots__ = ["word2index", "index2word", "unknown"] def __init__(self, index2word = None): self.word2index = {} self.index2word = [] # add unknown word: self.add_words(["**UNKNOWN**"]) self.unknown = 0 if index2word is not None: self.add_words(index2word) def add_words(self, words): for word in words: if word not in self.word2index: self.word2index[word] = len(self.word2index) self.index2word.append(word) def __call__(self, line): """ Convert from numerical representation to words and vice-versa. """ if type(line) is np.ndarray: return " ".join([self.index2word[word] for word in line]) if type(line) is list: if len(line) > 0: if line[0] is int: return " ".join([self.index2word[word] for word in line]) indices = np.zeros(len(line), dtype=np.int32) else: line = line.split(" ") indices = np.zeros(len(line), dtype=np.int32) for i, word in enumerate(line): indices[i] = self.word2index.get(word, self.unknown) return indices @property def size(self): return len(self.index2word) def __len__(self): return len(self.index2word) """ Explanation: Utilities Now that we have our training corpus for our language model (optionally you could gather an actual corpus from the web :), we can now create our first utility, Vocab, that will hold the mapping from words to an index, and perfom the conversions from words to indices and vice-versa: End of explanation """ vocab = Vocab() for line in lines: vocab.add_words(line.split(" ")) """ Explanation: Create a Mapping from numbers to words Now we can use the Vocab class to gather all the words and store an Index: End of explanation """ def pad_into_matrix(rows, padding = 0): if len(rows) == 0: return np.array([0, 0], dtype=np.int32) lengths = map(len, rows) width = max(lengths) height = len(rows) mat = np.empty([height, width], dtype=rows[0].dtype) mat.fill(padding) for i, row in enumerate(rows): mat[i, 0:len(row)] = row return mat, list(lengths) # transform into big numerical matrix of sentences: numerical_lines = [] for line in lines: numerical_lines.append(vocab(line)) numerical_lines, numerical_lengths = pad_into_matrix(numerical_lines) """ Explanation: To send our sentences in one big chunk to our neural network we transform each sentence into a row vector and place each of these rows into a bigger matrix that holds all these rows. Not all sentences have the same length, so we will pad those that are too short with 0s in pad_into_matrix: End of explanation """ from theano_lstm import Embedding, LSTM, RNN, StackedCells, Layer, create_optimization_updates, masked_loss def softmax(x): """ Wrapper for softmax, helps with pickling, and removing one extra dimension that Theano adds during its exponential normalization. """ return T.nnet.softmax(x.T) def has_hidden(layer): """ Whether a layer has a trainable initial hidden state. 
""" return hasattr(layer, 'initial_hidden_state') def matrixify(vector, n): return T.repeat(T.shape_padleft(vector), n, axis=0) def initial_state(layer, dimensions = None): """ Initalizes the recurrence relation with an initial hidden state if needed, else replaces with a "None" to tell Theano that the network **will** return something, but it does not need to send it to the next step of the recurrence """ if dimensions is None: return layer.initial_hidden_state if has_hidden(layer) else None else: return matrixify(layer.initial_hidden_state, dimensions) if has_hidden(layer) else None def initial_state_with_taps(layer, dimensions = None): """Optionally wrap tensor variable into a dict with taps=[-1]""" state = initial_state(layer, dimensions) if state is not None: return dict(initial=state, taps=[-1]) else: return None class Model: """ Simple predictive model for forecasting words from sequence using LSTMs. Choose how many LSTMs to stack what size their memory should be, and how many words can be predicted. """ def __init__(self, hidden_size, input_size, vocab_size, stack_size=1, celltype=LSTM): # declare model self.model = StackedCells(input_size, celltype=celltype, layers =[hidden_size] * stack_size) # add an embedding self.model.layers.insert(0, Embedding(vocab_size, input_size)) # add a classifier: self.model.layers.append(Layer(hidden_size, vocab_size, activation = softmax)) # inputs are matrices of indices, # each row is a sentence, each column a timestep self._stop_word = theano.shared(np.int32(999999999), name="stop word") self.for_how_long = T.ivector() self.input_mat = T.imatrix() self.priming_word = T.iscalar() self.srng = T.shared_randomstreams.RandomStreams(np.random.randint(0, 1024)) # create symbolic variables for prediction: self.predictions = self.create_prediction() # create symbolic variable for greedy search: self.greedy_predictions = self.create_prediction(greedy=True) # create gradient training functions: self.create_cost_fun() self.create_training_function() self.create_predict_function() def stop_on(self, idx): self._stop_word.set_value(idx) @property def params(self): return self.model.params def create_prediction(self, greedy=False): def step(idx, *states): # new hiddens are the states we need to pass to LSTMs # from past. Because the StackedCells also include # the embeddings, and those have no state, we pass # a "None" instead: new_hiddens = [None] + list(states) new_states = self.model.forward(idx, prev_hiddens = new_hiddens) if greedy: new_idxes = new_states[-1] new_idx = new_idxes.argmax() # provide a stopping condition for greedy search: return ([new_idx.astype(self.priming_word.dtype)] + new_states[1:-1]), theano.scan_module.until(T.eq(new_idx,self._stop_word)) else: return new_states[1:] # in sequence forecasting scenario we take everything # up to the before last step, and predict subsequent # steps ergo, 0 ... 
n - 1, hence: inputs = self.input_mat[:, 0:-1] num_examples = inputs.shape[0] # pass this to Theano's recurrence relation function: # choose what gets outputted at each timestep: if greedy: outputs_info = [dict(initial=self.priming_word, taps=[-1])] + [initial_state_with_taps(layer) for layer in self.model.layers[1:-1]] result, _ = theano.scan(fn=step, n_steps=200, outputs_info=outputs_info) else: outputs_info = [initial_state_with_taps(layer, num_examples) for layer in self.model.layers[1:]] result, _ = theano.scan(fn=step, sequences=[inputs.T], outputs_info=outputs_info) if greedy: return result[0] # softmaxes are the last layer of our network, # and are at the end of our results list: return result[-1].transpose((2,0,1)) # we reorder the predictions to be: # 1. what row / example # 2. what timestep # 3. softmax dimension def create_cost_fun (self): # create a cost function that # takes each prediction at every timestep # and guesses next timestep's value: what_to_predict = self.input_mat[:, 1:] # because some sentences are shorter, we # place masks where the sentences end: # (for how long is zero indexed, e.g. an example going from `[2,3)`) # has this value set 0 (here we substract by 1): for_how_long = self.for_how_long - 1 # all sentences start at T=0: starting_when = T.zeros_like(self.for_how_long) self.cost = masked_loss(self.predictions, what_to_predict, for_how_long, starting_when).sum() def create_predict_function(self): self.pred_fun = theano.function( inputs=[self.input_mat], outputs =self.predictions, allow_input_downcast=True ) self.greedy_fun = theano.function( inputs=[self.priming_word], outputs=T.concatenate([T.shape_padleft(self.priming_word), self.greedy_predictions]), allow_input_downcast=True ) def create_training_function(self): updates, _, _, _, _ = create_optimization_updates(self.cost, self.params, method="adadelta") self.update_fun = theano.function( inputs=[self.input_mat, self.for_how_long], outputs=self.cost, updates=updates, allow_input_downcast=True) def __call__(self, x): return self.pred_fun(x) """ Explanation: Build a Recurrent Neural Network Now the real work is upon us! Thank goodness we have our language data ready. We now create a recurrent neural network by connecting an Embedding $E$ for each word in our corpus, and stacking some special cells together to form a prediction function. Mathematically we want: $$\mathrm{argmax_{E, \Phi}} {\bf P}(w_{k+1}| w_{k}, \dots, w_{0}; E, \Phi) = f(x, h)$$ with $f(\cdot, \cdot)$ the function our recurrent neural network performs at each timestep that takes as inputs: an observation $x$, and a previous state $h$, and outputs a probability distribution $\hat{p}$ over the next word. We have $x = E[ w_{k}]$ our observation at time $k$, and $h$ the internal state of our neural network, and $\Phi$ is the set of parameters used by our classifier, and recurrent neural network, and $E$ is the embedding for our words. In practice we will obtain $E$ and $\Phi$ iteratively using gradient descent on the error our network is making in its prediction. To do this we define our error as the Kullback-Leibler divergence (a distance between probability distributions) between our estimate of $\hat{p} = {\bf P}(w_{k+1}| w_{k}, \dots, w_{0}; E, \Phi)$ and the actual value of ${\bf P}(w_{k+1}| w_{k}, \dots, w_{0})$ from the data (e.g. a probability distribution that is 1 for word $w_k$ and 0 elsewhere). 
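To make this objective concrete: when the target distribution is one-hot (all of its mass on the word that actually occurred), minimizing the KL divergence is the same as minimizing the negative log-probability the model assigns to that word. Below is a tiny numpy sketch with made-up numbers; it is purely illustrative and independent of the Theano code discussed next.
import numpy as np
# toy predicted distribution over a 4-word vocabulary at a single timestep
p_hat = np.array([0.1, 0.6, 0.2, 0.1])
observed = 1                       # index of the word that actually came next
target = np.zeros_like(p_hat)
target[observed] = 1.0             # one-hot "true" distribution
nz = target > 0                    # keep only non-zero entries to avoid 0*log(0)
kl = np.sum(target[nz] * np.log(target[nz] / p_hat[nz]))
nll = -np.log(p_hat[observed])
print(kl, nll)                     # both are ~0.51, i.e. -log(0.6)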
Theano LSTM StackedCells function To build this predictive model we make use of theano_lstm, a Python module for building recurrent neural networks using Theano. The first step we take is to declare what kind of cells we want to use by declaring a celltype. There are many different celltypes we can use, but the most common these days (and incidentally most effective) are RNN and LSTM. For a more in-depth discussion of how these work I suggest checking out Arxiv, or Alex Graves' website, or Wikipedia. Here we use celltype = LSTM. self.model = StackedCells(input_size, celltype=celltype, layers =[hidden_size] * stack_size) Once we've declared what kind of cells we want to use, we can now choose to add an Embedding to map integers (indices) to vectors (and in our case map words to their indices, then indices to word vectors we wish to train). Intuitively this lets the network separate and recognize what it is "seeing" or "receiving" at each timestep. To add an Embedding we create Embedding(vocabulary_size, size_of_embedding_vectors) and insert it at the beginning of the StackedCells's layers list (thereby telling StackedCells that this Embedding layer needs to be activated before the other ones): # add an embedding self.model.layers.insert(0, Embedding(vocab_size, input_size)) The final output of our network needs to be a probability distribution over the next words (but in different application areas this could be a sentiment classification, a decision, a topic, etc...) so we add another layer that maps the internal state of the LSTMs to a probability distribution over all the words in our language. To ensure that our prediction is indeed a probability distribution we "activate" our layer with a Softmax, meaning that we will exponentiate every value of the output, $q_i = e^{x_i}$, so that all values are positive, and then we will divide the output by its sum so that the output sums to 1: $$p_i = \frac{q_i}{\sum_j q_j}\text{, and }\sum_i p_i = 1.$$ # add a classifier: self.model.layers.append(Layer(hidden_size, vocab_size, activation = softmax)) For convenience we wrap this all in one class below. Prediction We have now defined our network. At each timestep we can produce a probability distribution for each input index: def create_prediction(self, greedy=False): def step(idx, *states): # new hiddens are the states we need to pass to LSTMs # from past. Because the StackedCells also include # the embeddings, and those have no state, we pass # a "None" instead: new_hiddens = [None] + list(states) new_states = self.model.forward(idx, prev_hiddens = new_hiddens) return new_states[1:] ... Our inputs are an integer matrix Theano symbolic variable: ... # in sequence forecasting scenario we take everything # up to the before last step, and predict subsequent # steps ergo, 0 ... n - 1, hence: inputs = self.input_mat[:, 0:-1] num_examples = inputs.shape[0] # pass this to Theano's recurrence relation function: .... Scan receives our recurrence relation step from above, and also needs to know what will be outputted at each step in outputs_info. We give outputs_info a set of variables corresponding to the hidden states of our StackedCells. Some of the layers have no hidden state, and thus we should simply pass a None to Theano, while others do require some initial state. In those cases we wrap their initial state inside a dictionary: def has_hidden(layer): """ Whether a layer has a trainable initial hidden state.
""" return hasattr(layer, 'initial_hidden_state') def matrixify(vector, n): return T.repeat(T.shape_padleft(vector), n, axis=0) def initial_state(layer, dimensions = None): """ Initalizes the recurrence relation with an initial hidden state if needed, else replaces with a "None" to tell Theano that the network **will** return something, but it does not need to send it to the next step of the recurrence """ if dimensions is None: return layer.initial_hidden_state if has_hidden(layer) else None else: return matrixify(layer.initial_hidden_state, dimensions) if has_hidden(layer) else None def initial_state_with_taps(layer, dimensions = None): """Optionally wrap tensor variable into a dict with taps=[-1]""" state = initial_state(layer, dimensions) if state is not None: return dict(initial=state, taps=[-1]) else: return None Let's now create these inital states (note how we skip layer 1, the embeddings by doing self.model.layers[1:] in the iteration, this is because there is no point in passing these embeddings around in our recurrence because word vectors are only seen at the timestep they are received in this network): # choose what gets outputted at each timestep: outputs_info = [initial_state_with_taps(layer, num_examples) for layer in self.model.layers[1:]] result, _ = theano.scan(fn=step, sequences=[inputs.T], outputs_info=outputs_info) if greedy: return result[0] # softmaxes are the last layer of our network, # and are at the end of our results list: return result[-1].transpose((2,0,1)) # we reorder the predictions to be: # 1. what row / example # 2. what timestep # 3. softmax dimension Error Function: Our error function uses theano_lstm's masked_loss method. This method allows us to define ranges over which a probability distribution should obey a particular target distribution. We control this method by setting start and end points for these ranges. In doing so we mask the areas where we do not care what the network predicted. In our case our network predicts words we care about during the sentence, but when we pad our short sentences with 0s to fill our matrix, we do not care what the network does there, because this is happening outside the sentence we collected: def create_cost_fun (self): # create a cost function that # takes each prediction at every timestep # and guesses next timestep's value: what_to_predict = self.input_mat[:, 1:] # because some sentences are shorter, we # place masks where the sentences end: # (for how long is zero indexed, e.g. an example going from `[2,3)`) # has this value set 0 (here we substract by 1): for_how_long = self.for_how_long - 1 # all sentences start at T=0: starting_when = T.zeros_like(self.for_how_long) self.cost = masked_loss(self.predictions, what_to_predict, for_how_long, starting_when).sum() Training Function We now have a cost function. To perform gradient descent we now need to tell Theano how each parameter must be updated at every training epoch. 
We use theano_lstm's create_optimization_updates method to generate a dictionary of updates and to apply special gradient descent rules that accelerate and facilitate training (for instance scaling the gradients when they are too large or too small, and preventing gradients from becoming too big and making our model numerically unstable) -- in this example we use Adadelta: def create_training_function(self): updates, _, _, _, _ = create_optimization_updates(self.cost, self.params, method="adadelta") self.update_fun = theano.function( inputs=[self.input_mat, self.for_how_long], outputs=self.cost, updates=updates, allow_input_downcast=True) PS: our parameters are obtained by calling self.model.params: @property def params(self): return self.model.params Final Code End of explanation """ # construct model & theano functions: model = Model( input_size=10, hidden_size=10, vocab_size=len(vocab), stack_size=1, # make this bigger, but makes compilation slow celltype=RNN # use RNN or LSTM ) model.stop_on(vocab.word2index["."]) """ Explanation: Construct model We now declare the model and parametrize it to use an RNN, and make predictions in the range provided by our vocabulary. We also tell the greedy reconstruction search that it can consider a sentence as being over when the symbol corresponding to a period appears: End of explanation """ # train: for i in range(10000): error = model.update_fun(numerical_lines, numerical_lengths) if i % 100 == 0: print("epoch %(epoch)d, error=%(error).2f" % ({"epoch": i, "error": error})) if i % 500 == 0: print(vocab(model.greedy_fun(vocab.word2index["the"]))) """ Explanation: Train Model We run 10,000 times through our data and every 500 epochs of training we output what the model considers to be a natural continuation to the sentence "the": End of explanation """
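As a small usage sketch (an assumption-laden example relying only on the model and vocab objects defined above, once training has finished), the greedy search can be primed with different words and the returned index sequences decoded back into text; the priming words chosen here are arbitrary:
# Hypothetical usage -- uses only model.greedy_fun and vocab from the cells above.
for priming_word in ["the", "a", "she"]:
    if priming_word in vocab.word2index:
        indices = model.greedy_fun(vocab.word2index[priming_word])
        print(vocab(indices))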
catedrasaes-umu/NoSQLDataEngineering
projects/es.um.nosql.streaminginference.json2dbschema/benchmark/Benchmark.ipynb
mit
%%bash java -version """ Explanation: Pruebas de rendimiento sobre Streaming Inference Es necesario tener instalada la versiรณn de java 1.8: End of explanation """ import pandas as pd import matplotlib.pyplot as plt from matplotlib.ticker import MaxNLocator import seaborn as sns from matplotlib import pylab import numpy as np pylab.rcParams['figure.figsize'] = (16.0, 8.0) sns.set(style="whitegrid") """ Explanation: Tambiรฉn es necesario tener aรฑadida al PATH la carpeta bin de spark 2.2.1 para hadoop 2.7 o posterior (descarga). End of explanation """ def createTestFileCollection(elements=120, entities=2, versions=2, depth=2, fields=2, batch=12): !rm -rf input !mkdir -p input out = !java -jar es.um.nosql.streaminginference.benchmark-0.0.1-SNAPSHOT-jar-with-dependencies.jar \ --elements $elements \ --entities $entities \ --versions $versions \ --depth $depth \ --fields $fields \ --mode file \ --flow stream \ --batch $batch \ --output input/collection.json \ --delay 10 """ Explanation: Creaciรณn de las colecciones de test Esta funciรณn crea en la carpeta input una lista de archivos json con colecciones de elementos: End of explanation """ def createTestMongoCollection(elements=120, entities=2, versions=2, depth=2, fields=2): out = !java -jar es.um.nosql.streaminginference.benchmark-0.0.1-SNAPSHOT-jar-with-dependencies.jar \ --elements $elements \ --entities $entities \ --versions $versions \ --depth $depth \ --fields $fields \ --mode mongo \ --host localhost \ --port 27017 \ --database benchmark """ Explanation: Esta funciรณn rellena la base de datos benchmark con entidades de prueba: End of explanation """ def createTestSingleCollection(elements=120, entities=2, versions=2, depth=2, fields=2): !rm -rf input !mkdir -p input out = !java -jar es.um.nosql.streaminginference.benchmark-0.0.1-SNAPSHOT-jar-with-dependencies.jar \ --elements $elements \ --entities $entities \ --versions $versions \ --depth $depth \ --fields $fields \ --mode file \ --output input/collection.json """ Explanation: Esta funciรณn crea un รบnico archivo json con una colecciรณn de elementos en la carpeta input con nombre "collection": End of explanation """ def createTestCollection(mode="file", elements=120, entities=2, versions=2, depth=2, fields=2, batch=12): !mkdir -p output if (mode == "file"): createTestFileCollection(elements, entities, versions, depth, fields, batch) elif (mode == "mongo"): createTestMongoCollection(elements, entities, versions, depth, fields) elif (mode == "single"): createTestSingleCollection(elements, entities, versions, depth, fields) """ Explanation: Esta funciรณn determina el comando a utilizar en funciรณn del modo de funcionamiento: End of explanation """ def benchmarkFile(interval=1000, kryo="true"): out = !spark-submit --driver-memory 8g --master local[*] es.um.nosql.streaminginference.json2dbschema-0.0.1-SNAPSHOT-jar-with-dependencies.jar \ --mode file \ --input input \ --benchmark true \ --interval $interval \ --kryo $kryo """ Explanation: Benchmarking de aplicaciones Esta funciรณn ejecuta la aplicaciรณn de inferencia sobre una serie de colecciones previamente creada y vuelca en stats.csv los resultados: End of explanation """ def benchmarkMongo(interval=1000, block=200, kryo="true"): out = !spark-submit --driver-memory 8g --master local[*] es.um.nosql.streaminginference.json2dbschema-0.0.1-SNAPSHOT-jar-with-dependencies.jar \ --mode mongo \ --database benchmark \ --host localhost \ --port 27017 \ --benchmark true \ --interval $interval \ --block-interval $block \ --kryo $kryo """ 
Explanation: Esta funciรณn ejecuta la aplicaciรณn de inferencia sobre la base de datos previamente creada y genera el archivo stats.csv: End of explanation """ def benchmarkSingle(): out = !spark-submit --driver-memory 8g --master local[*] es.um.nosql.streaminginference.json2dbschema-0.0.1-SNAPSHOT-jar-with-dependencies.jar \ --mode single \ --input input/collection.json \ --benchmark true """ Explanation: Esta funciรณn ejecuta la aplicaciรณn de inferencia sobre la colecciรณn creada genera el archivo stats.csv, en este caso solamente se mostrarรก el tiempo de procesamiento: End of explanation """ def benchmarkSparkApp(mode="file", interval=1000, block=200, kryo="true"): if (mode == "file"): benchmarkFile(interval, kryo) elif (mode == "mongo"): benchmarkMongo(interval, block, kryo) elif (mode== "single"): benchmarkSingle() """ Explanation: Esta funciรณn determina el comando a utilizar en funciรณn del modo de funcionamiento: End of explanation """ def benchmark(mode="file", interval=1000, block=200, elements=120, entities=2, versions=2, depth=2, fields=2, batch=12, kryo="true"): global benchmarked !rm -f output/stats.csv createTestCollection(mode, elements, entities, versions, depth, fields, batch) for x in range(0, 10): benchmarkSparkApp(mode, interval, block, kryo) benchmarked = pd.read_csv("output/stats.csv") return benchmarked """ Explanation: Todo junto La siguiente funciรณn compone las funciones anteriores para ejecutar una prueba con los parรกmetros introducidos: End of explanation """ createTestCollection(mode="file", elements=60000, batch=12000) """ Explanation: Pruebas Creaciรณn de una colecciรณn de 60000 elementos segmentada en 5 archivos: End of explanation """ createTestCollection(mode="single", elements=60000) """ Explanation: Creaciรณn de un รบnico archivo con 60000 elementos: End of explanation """ createTestCollection(mode="mongo", elements=60000) """ Explanation: Inserciรณn en la base de datos "benchmark" de MongoDB de 60000 elementos: End of explanation """ benchmark(mode="file",elements=60000, batch=12000) """ Explanation: Prueba de ejecuciรณn de 60000 elementos en modo file, en batches de 12000 elementos: End of explanation """ benchmark(mode="single",elements=60000) """ Explanation: Prueba de ejecuciรณn de 30000 elementos en modo single: End of explanation """ benchmark(mode="mongo", elements=60000) """ Explanation: Prueba de ejecuciรณn de 1200 elementos en modo mongo: End of explanation """ results = pd.DataFrame() df = benchmark(mode="file", elements=2400000, batch=80000, entities=30, versions=30, depth=5, fields=4, kryo="true") df.to_csv("kryo-enabled.csv") results["kryo enabled"] = df["TOTAL_PROCESSING"] df = benchmark(mode="file", elements=2400000, batch=80000, entities=30, versions=30, depth=5, fields=4, kryo="false") df.to_csv("kryo-disabled.csv") results["kryo disabled"] = df["TOTAL_PROCESSING"] ax = sns.barplot(data=results) ax.set_ylabel("Milisegundos de procesamiento") """ Explanation: Mediciรณn de parรกmetros Estudio del efecto de la serializaciรณn Kryo en la aplicaciรณn: End of explanation """ ents = np.array([]) mode = np.array([]) millis = np.array([]) for entities in [1, 50, 100, 200, 400]: df = benchmark(mode="file", elements=2400000, batch=80000, entities=entities, versions=1, depth=2, fields=2, kryo="true") df.to_csv("file-entities-"+str(entities)+".csv") length = df["TOTAL_PROCESSING"].size ents = np.append(ents, np.repeat(entities, length)) mode = np.append(mode, np.repeat("Paralelo", length)) millis = np.append(millis, 
df["TOTAL_PROCESSING"].as_matrix()) df = benchmark(mode="single", elements=2400000, entities=entities, versions=1, depth=2, fields=2) df.to_csv("original-file-entities-"+str(entities)+".csv") length = df["TOTAL_PROCESSING"].size ents = np.append(ents, np.repeat(entities, length)) mode = np.append(mode, np.repeat("Original", length)) millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix()) results = pd.DataFrame({"Entidades":ents, "Modo": mode, "Milisegundos de procesamiento": millis}) sns.factorplot(x="Entidades", y="Milisegundos de procesamiento", col="Modo", data=results, kind="bar", size=7) """ Explanation: Estudio del efecto del nรบmero de entidades en el tiempo de procesamiento: End of explanation """ vers = np.array([]) mode = np.array([]) millis = np.array([]) for versions in [1, 50, 100, 200, 400]: df = benchmark(mode="file", elements=2400000, batch=80000, entities=1, versions=versions, depth=2, fields=2, kryo="true") df.to_csv("file-versions-"+str(versions)+".csv") length = df["TOTAL_PROCESSING"].size vers = np.append(vers, np.repeat(versions, length)) mode = np.append(mode, np.repeat("Paralelo", length)) millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix()) df = benchmark(mode="single", elements=2400000, entities=1, versions=versions, depth=2, fields=2) df.to_csv("original-file-versions-"+str(versions)+".csv") vers = np.append(vers, np.repeat(versions, length)) mode = np.append(mode, np.repeat("Original", length)) millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix()) results = pd.DataFrame({"Versiones":vers, "Modo": mode, "Milisegundos de procesamiento": millis}) sns.factorplot(x="Versiones", y="Milisegundos de procesamiento", col="Modo", data=results, kind="bar", size=7) """ Explanation: Estudio del efecto del nรบmero de versiones en el tiempo de procesamiento: End of explanation """ elems = np.array([]) mode = np.array([]) micros = np.array([]) for elements in [60000, 120000, 480000, 1200000, 2400000, 3600000]: df = benchmark(mode="file", elements=elements, batch=(elements/30), entities=1, versions=1, depth=2, fields=2, kryo="true") df.to_csv("light-file-elements-"+str(elements)+".csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("Paralelo", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) df = benchmark(mode="single", elements=elements, entities=1, versions=1, depth=2, fields=2) df.to_csv("light-original-file-elements-"+str(elements)+".csv") elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("Original", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) results = pd.DataFrame({"Elementos":elems, "Modo": mode, "Microsegundos por elemento": micros}) sns.factorplot(x="Elementos", y="Microsegundos por elemento", col="Modo", data=results, kind="bar", size=7) elems = np.array([]) mode = np.array([]) micros = np.array([]) for elements in [60000, 120000, 480000, 1200000, 2400000, 3600000]: df = benchmark(mode="file", elements=elements, batch=(elements/30), entities=20, versions=20, depth=2, fields=2, kryo="true") df.to_csv("medium-file-elements-"+str(elements)+".csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("Paralelo", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) df = benchmark(mode="single", elements=elements, entities=20, versions=20, depth=2, 
fields=2) df.to_csv("medium-original-file-elements-"+str(elements)+".csv") elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("Original", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) results = pd.DataFrame({"Elementos":elems, "Modo": mode, "Microsegundos por elemento": micros}) sns.factorplot(x="Elementos", y="Microsegundos por elemento", col="Modo", data=results, kind="bar", size=7) elems = np.array([]) mode = np.array([]) micros = np.array([]) for elements in [60000, 120000, 480000, 1200000, 2400000, 3600000]: df = benchmark(mode="file", elements=elements, batch=(elements/30), entities=50, versions=50, depth=2, fields=2, kryo="true") df.to_csv("hard-file-elements-"+str(elements)+".csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("Paralelo", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) df = benchmark(mode="single", elements=elements, entities=50, versions=50, depth=2, fields=2) df.to_csv("hard-original-file-elements-"+str(elements)+".csv") elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("Original", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) results = pd.DataFrame({"Elementos":elems, "Modo": mode, "Microsegundos por elemento": micros}) sns.factorplot(x="Elementos", y="Microsegundos por elemento", col="Modo", data=results, kind="bar", size=7) """ Explanation: Estudio del efecto del nรบmero de elementos en el tiempo de procesamiento: End of explanation """ parts = np.array([]) millis = np.array([]) for partitions in [1, 2, 4, 8, 16]: df = benchmark(mode="file", elements=2400000, batch=(elements/partitions), entities=1, versions=1, depth=2, fields=2, kryo="true") df.to_csv("file-partitions-"+str(partitions)+".csv") length = df["TOTAL_PROCESSING"].size parts = np.append(parts, np.repeat(partitions, length)) millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix()) results = pd.DataFrame({"Particiones":parts, "Milisegundos de procesamiento": millis}) sns.factorplot(x="Particiones", y="Milisegundos de procesamiento", data=results, kind="bar", size=7) elems = np.array([]) mode = np.array([]) micros = np.array([]) for elements in [480000, 1200000, 2400000, 3600000]: for executors in [4, 16]: df = pd.read_csv("cesga/results-"+str(elements)+"-1-1-"+str(elements/30)+"-"+str(executors)+"-1.csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("CESGA-1-"+str(executors), length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) results = pd.DataFrame({"Elementos":elems, "Modo": mode, "Microsegundos por elemento": micros}) sns.factorplot(x="Elementos", y="Microsegundos por elemento", col="Modo", col_wrap=3, data=results, kind="bar", size=5) """ Explanation: Estudio del efecto del nรบmero de particiones en el tiempo de procesamiento: End of explanation """ import matplotlib.pyplot as plt import os.path f, ax = plt.subplots(1,3, figsize=(11, 7)) f.tight_layout() cmap = sns.color_palette("Blues", n_colors=1000) row = 0 for version in [1, 20, 50]: elems = np.array([]) mode = np.array([]) micros = np.array([]) for elements in [480000, 1200000, 2400000, 3600000]: if version == 1: strVersion = "light" if version == 20: strVersion = "medium" elif version == 50: strVersion = "hard" df = 
pd.read_csv("local/"+strVersion+"-file-elements-"+str(elements)+".csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("PARALELO", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) df = pd.read_csv("local/"+strVersion+"-original-file-elements-"+str(elements)+".csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("ORIGINAL", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) for executors in [4, 16]: df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-"+str(executors)+"-1.csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("CESGA-"+str(executors).zfill(2)+"-1", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-2-8.csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("CESGA-02-8", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-8-2.csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("CESGA-08-2", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) results = pd.DataFrame({"Documentos":elems.astype(int), "Modo": mode, "Microsegundos por documento": micros}) grouped = results.groupby(['Documentos', 'Modo'], as_index=False).mean() grouped.sort_values("Modo") pivoted = grouped.pivot("Modo", "Documentos", "Microsegundos por documento") #display(pivoted) sns.heatmap(pivoted, annot=True, linewidths=.5, fmt="1.2f", ax=ax[row], cmap=cmap, cbar=False, annot_kws={"size": 14}) #ax[row].yticks(np.arange(0, 1, step=0.2)) row += 1 plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.3) plt.show() """ Explanation: Lectura de resultados obtenidos Mapa de calor con ejecuciones en mรกquina local y CESGA End of explanation """ import matplotlib.pyplot as plt import os.path cmap = sns.color_palette("Blues", n_colors=1000) f, ax = plt.subplots(1,1, figsize=(12.95, 4.5)) elems = np.array([]) mode = np.array([]) micros = np.array([]) bestMode = "" bestMicros = 9999999 originalMicros = 0 labels = pd.DataFrame(columns=["Modo", "Documentos", "Candidato"]) results = pd.DataFrame(columns=["Modo", "Documentos", "Speedup"]) for version in [1, 20, 50]: for elements in [480000, 1200000, 2400000]: if version == 1: strVersion = "light" labelVersion = u"1 entidad\n1 versiรณn" if version == 20: strVersion = "medium" labelVersion = "20 entidades\n20 versiones" elif version == 50: strVersion = "hard" labelVersion = "50 entidades\n50 versiones" df = pd.read_csv("local/"+strVersion+"-file-elements-"+str(elements)+".csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) bestMode = "Local" bestMicros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()).mean() df = pd.read_csv("local/"+strVersion+"-original-file-elements-"+str(elements)+".csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, 
np.repeat(elements, length)) originalMicros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()).mean() if (originalMicros < bestMicros): bestMicros = originalMicros bestMode = "Original" for executors in [4, 16]: df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-"+str(executors)+"-1.csv") length = df["TOTAL_PROCESSING"].size micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()).mean() if (micros < bestMicros): bestMicros = micros bestMode = "CESGA\n" + str(executors) + " executors 1 core" df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-2-8.csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()).mean() if (micros < bestMicros): bestMicros = micros bestMode = "CESGA\n2 executors 8 cores" df = pd.read_csv("cesga/results-"+str(elements)+"-"+str(version)+"-"+str(version)+"-"+str(elements/30)+"-8-2.csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()).mean() if (micros < bestMicros): bestMicros = micros bestMode = "CESGA\n8 executors 2 cores" speedup = originalMicros/bestMicros bestMode += "\nSpeedup: " + "{0:.2f}".format(speedup) results = results.append({"Modo": labelVersion, "Documentos": elements, "Speedup": speedup}, ignore_index=True) labels = labels.append({"Modo": labelVersion, "Documentos": elements, "Candidato": bestMode}, ignore_index=True) #results["Tipo"] = results["Tipo"].astype(int) results["Documentos"] = results["Documentos"].astype(int) results = results.pivot("Modo", "Documentos", "Speedup") labels = labels.pivot("Modo", "Documentos", "Candidato") sns.heatmap(results, annot=labels, linewidths=.5, fmt="", cmap=cmap, cbar=False, annot_kws={"size": 16}, ax=ax) ax.set_ylabel('') ax.set_xlabel("Documentos",fontsize=14) ax.tick_params(labelsize="large") plt.yticks(rotation=0) plt.show() """ Explanation: Mapa de calor con mejor alternativa y speedup respecto a proceso de inferencia original End of explanation """ elems = np.array([]) mode = np.array([]) micros = np.array([]) for elements in [480000, 1200000, 2400000, 3600000]: for executors in [4, 16]: df = pd.read_csv("cesga/results-"+str(elements)+"-1-1-"+str(elements/30)+"-"+str(executors)+"-1.csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("CESGA-"+str(executors).zfill(2)+"-1", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) results = pd.DataFrame({"Documentos":elems.astype(int), "Modo": mode, "Microsegundos por documento": micros}) sns.factorplot(x="Documentos", y="Microsegundos por documento", col="Modo", col_wrap=3, data=results, kind="bar", size=3) """ Explanation: Evoluciรณn del tiempo de ejecuciรณn en funciรณn del nรบmero de executors (1 entidad 1 versiรณn) End of explanation """ elems = np.array([]) mode = np.array([]) micros = np.array([]) for elements in [480000, 1200000, 2400000, 3600000]: for executors in [4, 16]: df = pd.read_csv("cesga/results-"+str(elements)+"-50-50-"+str(elements/30)+"-"+str(executors)+"-1.csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("CESGA "+str(executors)+" Executors", length)) micros = 
np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) results = pd.DataFrame({"Documentos":elems.astype(int), "Modo": mode, "Microsegundos por documento": micros}) sns.factorplot(x="Documentos", y="Microsegundos por documento", col="Modo", col_wrap=3, data=results, kind="bar", size=3.5) """ Explanation: Evoluciรณn del tiempo de ejecuciรณn en funciรณn del nรบmero de executors (50 entidades 50 versiones) End of explanation """ elems = np.array([]) mode = np.array([]) micros = np.array([]) for elements in [480000, 1200000, 2400000, 3600000]: df = pd.read_csv("cesga/results-"+str(elements)+"-1-1-"+str(elements/30)+"-16-1.csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("CESGA-16-1", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) df = pd.read_csv("cesga/results-"+str(elements)+"-1-1-"+str(elements/30)+"-8-2.csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("CESGA-08-2", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) df = pd.read_csv("cesga/results-"+str(elements)+"-1-1-"+str(elements/30)+"-2-8.csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("CESGA-02-8", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) results = pd.DataFrame({"Documentos":elems.astype(int), "Modo": mode, "Microsegundos por documento": micros}) sns.factorplot(x="Documentos", y="Microsegundos por documento", col="Modo", col_wrap=3, data=results, kind="bar", size=4) """ Explanation: Evoluciรณn del tiempo de ejecuciรณn en funciรณn del nรบmero de cores (1 entidad 1 versiรณn) End of explanation """ ents = np.array([]) mode = np.array([]) millis = np.array([]) for entities in [1, 50, 100, 200, 400]: df = pd.read_csv("local/file-entities-"+str(entities)+".csv") length = df["TOTAL_PROCESSING"].size ents = np.append(ents, np.repeat(entities, length)) mode = np.append(mode, np.repeat("Paralelo", length)) millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix()) df = pd.read_csv("local/original-file-entities-"+str(entities)+".csv") length = df["TOTAL_PROCESSING"].size ents = np.append(ents, np.repeat(entities, length)) mode = np.append(mode, np.repeat("Original", length)) millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix()) results = pd.DataFrame({"Entidades":ents.astype(int), "Modo": mode, "Milisegundos de procesamiento": millis}) sns.factorplot(x="Entidades", y="Milisegundos de procesamiento", col="Modo", data=results, kind="bar", size=3.5) """ Explanation: Evoluciรณn del tiempo de ejecuciรณn en funciรณn del nรบmero de entidades End of explanation """ vers = np.array([]) mode = np.array([]) millis = np.array([]) for versions in [1, 50, 100, 200, 400]: df = pd.read_csv("local/file-versions-"+str(versions)+".csv") length = df["TOTAL_PROCESSING"].size vers = np.append(vers, np.repeat(versions, length)) mode = np.append(mode, np.repeat("Paralelo", length)) millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix()) df = pd.read_csv("local/original-file-versions-"+str(versions)+".csv") vers = np.append(vers, np.repeat(versions, length)) mode = np.append(mode, np.repeat("Original", length)) millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix()) results = pd.DataFrame({"Versiones":vers.astype(int), "Modo": mode, "Milisegundos de procesamiento": 
millis}) sns.factorplot(x="Versiones", y="Milisegundos de procesamiento", col="Modo", data=results, kind="bar", size=3.5) """ Explanation: Evoluciรณn del tiempo de ejecuciรณn en funciรณn del nรบmero de versiones End of explanation """ elems = np.array([]) mode = np.array([]) micros = np.array([]) for elements in [60000, 120000, 480000, 1200000, 2400000]: df = pd.read_csv("local/light-file-elements-"+str(elements)+".csv") length = df["TOTAL_PROCESSING"].size elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("Paralelo", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) df = pd.read_csv("local/light-original-file-elements-"+str(elements)+".csv") elems = np.append(elems, np.repeat(elements, length)) mode = np.append(mode, np.repeat("Original", length)) micros = np.append(micros, (df["TOTAL_PROCESSING"]*1000/elements).as_matrix()) results = pd.DataFrame({"Documentos":elems.astype(int), "Modo": mode, "Microsegundos por documento": micros}) sns.factorplot(x="Documentos", y="Microsegundos por documento", col="Modo", data=results, kind="bar", size=3.5) """ Explanation: Evoluciรณn del tiempo de ejecuciรณn en funciรณn del nรบmero de documentos End of explanation """ parts = np.array([]) millis = np.array([]) for partitions in [1, 2, 4, 8, 16]: df = pd.read_csv("local/file-partitions-"+str(partitions)+".csv") length = df["TOTAL_PROCESSING"].size parts = np.append(parts, np.repeat(partitions, length)) millis = np.append(millis, df["TOTAL_PROCESSING"].as_matrix()) results = pd.DataFrame({"Ficheros de entrada":parts.astype(int), "Milisegundos de procesamiento": millis}) sns.factorplot(x="Ficheros de entrada", y="Milisegundos de procesamiento", data=results, kind="bar", size=3.5) """ Explanation: Evoluciรณn del tiempo de ejecuciรณn en funciรณn del nรบmero de ficheros de entrada End of explanation """
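As a complementary numeric summary, the partition benchmark can also be condensed into a small table of mean and standard deviation of the processing time instead of a bar plot. This is only a sketch and assumes the same local/file-partitions-N.csv files read in the cell above are still available on disk:
# Sketch only: assumes the per-partition CSV files read above exist on disk.
summary = []
for partitions in [1, 2, 4, 8, 16]:
    df = pd.read_csv("local/file-partitions-" + str(partitions) + ".csv")
    summary.append({"input files": partitions,
                    "mean (ms)": df["TOTAL_PROCESSING"].mean(),
                    "std (ms)": df["TOTAL_PROCESSING"].std()})
print(pd.DataFrame(summary))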
fjaviersanchez/JupyterTutorial
index.ipynb
mit
import datetime print(datetime.datetime.now()) """ Explanation: Introduction to Jupyter Notebooks Tutorial by Javier Sรกnchez, University of California, Irvine Prepared for the DESC Collaboration Meeting - Oxford - July 2016. Requirements: * anaconda (includes jupyter, astropy, numpy, scipy and matplotlib) * seaborn (pip install seaborn or conda install seaborn) * bokeh (pip install bokeh or conda install bokeh) * sklearn (pip install sklearn, conda install sklearn) * speclite (pip install speclite) Jupyter creates an easy-to-read document that you can view in your web-browser with code (that runs and creates plots inside the document on the fly!) and text (with even math). The name "Jupyter" is a combination of Julia, Python, and R. However, it has support for over 40 programming languages. Jupyter is based on iPython notebooks, and, in fact you can still launch jupyter by typing ipython notebook on your terminal. The concept is similar to Mathematica and it works similarly (to run a code cell you can press shift+enter) 1) How to launch Jupyter You can launch a Jupyter notebook by just typing jupyter notebook on your terminal and this will open a new tab or window on your default browser. You can also select a different browser by setting the environment variable $BROWSER to the path of the browser that you want to use before launching or using the --browser option in the command line. In Windows under "Search programs and files" from the Start menu, type jupyter notebook and select "Jupyter notebook." 2) Cells A Jupyter notebook is internally a JSON document but appears as a collection of "cells". Each segment of this document is a called cell. There are several types of cells but we are interested mainly in two types: 2.1. Markdown cells: Used for explanatory text (like this), and written in GitHub-flavored markdown. A markdown cells is usually displayed in output format, but a double click will switch it to input mode. Try that now on this cell. Use SHIFT+RETURN to toggle back to output format. Markdown cells can contain latex math, for example: $$ \frac{d\log G(z)}{d\log a} \simeq \left[ \frac{\Omega_m (1 + z)^3}{\Omega_m (1 + z)^3 + \Omega_\Lambda} \right]^{0.55} $$ 2.2 Code cells: Contain executable source code in the language of the documentโ€™s associated kernel (usually python). Use SHIFT+RETURN to execute the code in a cell and see its output directly below. Try that now for the code cell below. Note that the output is not editable and that each code cell has an associated label, e.g. In [3], where the number records the order in which cells are executed (which is arbitrary since it depends on you). Re-run the code cell below and note that its number increases each time. End of explanation """ %pylab inline """ Explanation: More info on notebooks and cells is here. 3) Getting Started: Boilerplate and "magic functions" We will now focus on Python. To start a notebook it is a good practice to import all the packages and define the styles that we want to use in our "boilerplate". A good starting point is: import numpy as np import matplotlib.pyplot as plt With these commands we set up our notebook to use the numpy package and the matplotlib package. If we use them like that, the plots will pop-up in a new window instead of being shown in the notebook. To see them in the notebook we should use a "magic function". There are two kinds of magics, line-oriented and cell-oriented. 
Line magics are prefixed with the % character and work much like OS command-line calls: they get as an argument the rest of the line, where arguments are passed without parentheses or quotes. Cell magics are prefixed with a double %%, and they are functions that get as an argument not only the rest of the line, but also the lines below it in a separate argument. A useful example is: End of explanation """ !ls """ Explanation: The magic %pylab sets up the interactive namespace from numpy and matplotlib and inline adds the plots to the notebook. These plots are rendered in PNG format by default. More useful magic commands: %time and %timeit measure execution time. %run runs a Python script and loads all its data on the interactive namespace. %config InlineBackend.figure_formats = {'png', 'retina'} Enables high-resolution PNG rendering and if we change 'png' to 'svg' or any other format we change the format of plots rendered within the notebook. The magic %load is really useful since it allows us to load any other Python script. It has an option -s that allows us to modify the code inside the notebook. We use %load below to reveal solutions to some of the exercises. Command line magic: You can run any system shell command using ! before it. Example: End of explanation """ #Example of how to compute the sum of two lists def add(x,y): add=0 for element_x in x: add=add+element_x for element_y in y: add=add+element_y return add my_list = range(0,100) print(my_list) %timeit -n10 sum_1=add(my_list,my_list) #I compute 10 iterations #Example using numpy arrays my_array = np.arange(0,100,1) print(my_array) %timeit -n10 np.sum(my_array+my_array) #I compute 10 iterations """ Explanation: Advanced magic commands: %load_ext Cython %cython or %%cython More on "magics": * https://ipython.org/ipython-doc/3/interactive/magics.html * https://ipython.org/ipython-doc/3/interactive/tutorial.html 4) Numpy Numpy is a Python package that implements N-dimensional arrays objects and it is designed for scientific computing. It also implements a multitude of mathematical functions to operate efficiently with these arrays. The use of numpy arrays can significantly boost the performance of your Python script to be comparable to compiled C code. Some useful examples and tutorials can be found here and here. End of explanation """ #%load ex1.py #my_list = [[1,2],[3,4]] #my_array=np.arange(1,5,1) #my_array=my_array.reshape(2,2) """ Explanation: The improvement is especially significant when you have to use vectors or matrices Exercise 1: Compute the product element by element of a 2x2 list. Compare with numpy End of explanation """ #In this example we will split a random array into three different categories taking advantage of the numpy masks #We generate an array with 1000 random elements in the interval [0,1) my_array = np.random.random(1000) %time mask = [np.logical_and(my_array>i/3.,my_array<(i+1)/3.) for i in range(0,3)] print(len(my_array[mask[0]]), len(my_array[mask[1]]), len(my_array[mask[2]])) """ Explanation: A very useful feature of numpy are masks and masked arrays. You can easily select all the values of a vector or an array that fulfill certain condition using a mask. End of explanation """ #This is a very simple implementation. #Maybe sorting the list first or using a matrix instead of lists it would be faster %time #Use %%time for python2.x arr1=[] arr2=[] arr3=[] for element in my_array: if(element>0 and element<1./3.): arr1.append(element) if(element>1./3. 
and element<2./3.): arr2.append(element) else: arr3.append(element) """ Explanation: Let's compare to a traditional brute-force approach End of explanation """ #First we are going to set up the plots to be SVGs instead of the default PNGs ### Uncomment this cell to use SVG #%config InlineBackend.figure_formats = {'svg',} #We will sample the function in 100 points from 0 to pi x = np.linspace(0,np.pi,100) #We compute the sine of the numpy array x y = np.sin(x) #We make the plot (it automatically generates the figure) plt.plot(x,y,'-',color='green',label='$\sin(x)$') #We add the label to the X and Y axes plt.xlabel('$x$') plt.ylabel('$\sin(x)$') #We generate the legend plt.legend() #We change the limits of the X and Y axes plt.xlim(-0.05,np.pi+0.05) plt.ylim(-0.05,1.05) """ Explanation: 4) Plotting 4.1) Matplotlib (http://matplotlib.org) This is the most widespread package for plotting in Python. There are tons of examples on the web, and it is very well integrated in Jupyter. Example: Let's plot a sinusoidal wave using matplotlib. (It is imported already since we used the magic %pylab inline) End of explanation """ #plt.hist2d # %load ex2.py def ex2(): rs = np.random.RandomState(112) x=np.linspace(0,10,11) y=np.linspace(0,10,11) X,Y = np.meshgrid(x,y) X=X.flatten() Y=Y.flatten() weights=np.random.random(len(X)) plt.hist2d(X,Y,weights=weights); #The semicolon here avoids that Jupyter shows the resulting arrays ex2() """ Explanation: Exercise: Generate and plot a 2D histogram using matplotlib End of explanation """ #First let's import seaborn (a warning will appear because it conflicts with %pylab inline) import seaborn as sns """ Explanation: 4.2) Seaborn (https://web.stanford.edu/~mwaskom/software/seaborn/) Seaborn is a Python package based on matplotlib that includes some convenient plotting functions for statistical analysis. (Some people also like more its default style) End of explanation """ #Compare with matplotlib style (you can still use the same commands but they will render in seaborn style) #We make the plot (it automatically generates the figure) plt.plot(x,y,'-',color='green',label='$\sin(x)$') #We add the label to the X and Y axes plt.xlabel('$x$') plt.ylabel('$\sin(x)$') #We generate the legend plt.legend() #We change the limits of the X and Y axes plt.xlim(-0.05,np.pi+0.05) plt.ylim(-0.05,1.05) """ Explanation: Warning messages: Jupyter will output with a pink background all its warning messages. Most of them will tell us about deprecation or definition conflicts. The messages only appear the first time you run a cell that arises a warning. 
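If the warnings become distracting they can be silenced with the standard library (a small sketch, independent of seaborn itself):
import warnings
warnings.filterwarnings('ignore')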
End of explanation """ #sns.jointplot() # %load ex3.py def ex3(): rs = np.random.RandomState(112) x=np.linspace(0,10,11) y=np.linspace(0,10,11) X,Y = np.meshgrid(x,y) X=X.flatten() Y=Y.flatten() weights=np.random.random(len(X)) sns.jointplot(X,Y,kind='hex',joint_kws={'C':weights}); #The semicolon here avoids that Jupyter shows the resulting arrays ex3() """ Explanation: Exercise: Plot again your 2D histogram using seaborn jointplot (https://web.stanford.edu/~mwaskom/software/seaborn/examples/hexbin_marginals.html) End of explanation """ #We import the package needed to read the file import astropy.io.fits as fits path = './downloaded_data/LSST_i_trimmed.fits.gz' #We open the file and it gives us an hdulist hdulist = fits.open(path) #We can check what this hdulist has using print print(hdulist) #We are going to see what is in the image, we use imshow and select a gray colormap #we also select a minimum of 0 in the colorbar (vmin) and a maximum of 250 (vmax) plt.imshow(hdulist[0].data,vmin=0,vmax=250,cmap='gray') #Show the colorbar plt.colorbar() """ Explanation: 5) Use interactive documentation Jupyter also makes easier the use of new packages providing interactive documentation. The command help(name_of_the_package) lists the available documentation for a pacakge. ?name provides information about the package. shift+tab provides the arguments to a function. 6) Reading astronomical data (FITS) Some of us have struggled a little while creating a FITS file using, for example, cfitsio (you have to initialize status and things like that). The syntax is also kind of obscure and you have to be sure of the format of the variables you are reading. Reading images or FITS tables using Python and Jupyter is much easier and intuitive (and it is not much slower). There are basically two ways of reading a fits file using astropy: Using astropy.io.fits: The astropy.io.fits module (originally PyFITS) is a โ€œpure Pythonโ€ FITS reader in that all the code for parsing the FITS file format is in Python, though Numpy is used to provide access to the FITS data. astropy.io.fits currently also accesses the CFITSIO to support the FITS Tile Compression convention, but this feature is optional. It does not use CFITSIO outside of reading compressed images. Using astropy.table: It uses internally astropy.io.fits it is very convenient for BinarytableHDU in FITS. There exist other ways to read fits files using Python. For example, you can use the fitsio package (to install it do pip install fitsio). This other package is faster and works better for large files than astropy, making it necessary when performance is a strong requirement or constrained. However, it doesn't work under Windows and it needs to have a C compiler installed. The fitsio interface is pretty similar to astropy.table but, it is not identical (some of the things learned here can be directly applied and some other cannot) 6.1) Reading and plotting an image First we are going to download a small image from the WeakLensingDeblending package, which simulates one CCD chip in LSST at full depth (http://weaklensingdeblending.readthedocs.io/en/latest/products.html). The data can be downloaded using the link in here: ftp://ftp.slac.stanford.edu/groups/desc/WL/LSST_i_trimmed.fits.gz or from this repository. First we are going to use astropy.io.fits to read the FITS file as an hdulist (that includes an image HDU and a BinaryTableHDU) End of explanation """ #Importing astropy.table import astropy.table #reading the table. 
In a multi-hdu file we can specify the hdu with read(path,hdu=num_hdu) table = astropy.table.Table.read(path) #we show the contents of the table table """ Explanation: Now we are going to use astropy.table to read the BinaryTableHDU. We could also read it using hdulist[1].data but let's make use of this nice package End of explanation """ #We print the purity column of the table print(table['purity']) """ Explanation: We can also select any column by simply using table['NAME_OF_THE_COLUMN'] End of explanation """ plt.hist # %load ex4.py def ex4(): masks = [np.logical_and(table['purity']>i/4.,table['purity']<(i+1)/4.) for i in range(0,4)] for i in range(0,4): label = str(i/4.)+' < purity < '+str((i+1)/4.) plt.hist(table['snr_iso'][masks[i]],range=(0,20),bins=40, label=label, alpha=0.5, normed=True) plt.legend() plt.figure() for i in range(0,4): label = str(i/4.)+' < purity < '+str((i+1)/4.) plt.hist(table['snr_grpf'][masks[i]],range=(0,20),bins=40, label=label, alpha=0.5, normed=True) plt.legend() ex4() """ Explanation: Exercise: Make a histogram of the signal to noise snr_iso for different purity cuts (Hint: lookup the documentation for np.hist and make use of numpy masks) End of explanation """ #We are going to use some columns of the table above to produce a useful pairplot #We make use of numpy masks! selection = np.empty(len(table['snr_grpf']),dtype='a20') mask_03 = table['purity']<=0.3 mask_06 = np.logical_and(table['purity']>0.3,table['purity']<=0.6) mask_09 = np.logical_and(table['purity']>0.6,table['purity']<=0.9) mask_1 = table['purity']>0.9 selection[mask_03]="purity<=0.3" selection[mask_06]="0.3<purity<=0.6" selection[mask_09]="0.6<purity<=0.9" selection[mask_1]="purity>0.9" #We require the values dg1 and dg2 to be finite in order that seaborn creates automatically the histograms masked_array = np.logical_not(np.logical_or(np.isinf(table['dg1_grp']),np.isinf(table['dg2_grp']))) #We are going to plot just 1000 points nobj=500 #We will use certain columns of the table cols = [selection[masked_array][0:nobj],table['dg1_grp'][masked_array][0:nobj], \ table['dg2_grp'][masked_array][0:nobj],table['e1'][masked_array][0:nobj], \ table['e2'][masked_array][0:nobj]] new_table = astropy.table.Table(cols,names=('selection','dg1_grp','dg2_grp','e1','e2')) #Seaborn pairplot requires a pandas data frame df = new_table.to_pandas() sns.pairplot(df, hue='selection') #We are going to check the correlations using heatmap corr = df.corr() sns.heatmap(corr) """ Explanation: Exercise: Repeat that with snr_grpf 6.2) Using seaborn to create useful plots End of explanation """ import astropy.units as u x = 10*u.km x.to(u.imperial.mile) + 10*u.Mpc """ Explanation: 7) Keep track of the units. Use astropy.units Sometimes it is difficult to keep track of which units you are using when you write very long programs. This is simplified when you use astropy.units (http://docs.astropy.org/en/stable/units/). The package also handles equivalences and makes easy the unit conversion. It raises an error if you are operating with incompatible units. 
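A quick sketch of that behaviour, using the same `u` alias imported in the next cell:
10*u.km + 2*u.imperial.mile   # fine: compatible units, converted automatically
10*u.km + 3*u.s               # raises a UnitConversionError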
End of explanation """ #We read a quasar-catalog data table quasar_table = astropy.table.Table.read('./downloaded_data/quasar_table.fits') #We import speclite to compute magnitudes import speclite import speclite.filters sdss = speclite.filters.load_filters('sdss2010-*') #Spectrum of quasar #40 wave = np.load('./downloaded_data/wave.npy') #No units included but units are Angstroms flux = np.load('./downloaded_data/flux.npy') #It comes without units but they're 1e-17 erg/cm**2/s/AA #We use get magnitudes to compute the magnitudes. If the units are not included, it assumes (erg/cm**2/s/AA, AA)<-(flux, wave) mags = sdss.get_ab_magnitudes(flux*1e-17*u.erg/u.cm**2/u.s/u.AA,wave*u.AA) #If we don't use the correct units... mags_wrong = sdss.get_ab_magnitudes(flux,wave) mags_boss = np.hstack(quasar_table['PSFMAG_%d' %f][40] for f in range(0,5)) print(mags) print(mags_boss) print(mags_wrong) """ Explanation: Let's see an example where some units are assumed End of explanation """ #Now we are going to prepare a Boosted decision tree photo-z estimator from sklearn.ensemble import GradientBoostingRegressor #Prepare the training array mags = np.vstack([quasar_table['PSFMAG_%d' % f] for f in range(0,5)]).T z = quasar_table['Z_VI'] print(len(z)) #train on 20% of the points mag_train = mags[::5] z_train = z[::5] print(len(z_train)) #test on 5% of the points mag_test = mags[::18] z_test = z[::18] #Set up the tree clf = GradientBoostingRegressor(n_estimators=500, learning_rate=0.1,max_depth=3, random_state=0) #Train the tree clf.fit(mag_train, z_train) #Test it! z_fit_train = clf.predict(mag_train) z_fit = clf.predict(mag_test) #Compute rms in the training set and test set rms_train = np.mean(np.sqrt((z_fit_train - z_train) ** 2)) rms_test = np.mean(np.sqrt((z_fit - z_test) ** 2)) plt.scatter(z_test,z_fit, color='k', s=0.1) plt.plot([-0.1, 6], [-0.1, 6], ':k') plt.text(0.04, 5, "rms = %.3f" % (rms_test)) plt.xlabel('$z_{true}$') plt.ylabel('$z_{fit}$') """ Explanation: 8) A complete example: How to make a redshift fitter (photo-z) using sklearn. End of explanation """ # %load ex6.py def ex6(): colors = np.vstack([quasar_table['PSFMAG_%d' % f]-quasar_table['PSFMAG_%d' % (f+1)] for f in range(0,4)]).T color_train = colors[::5] color_test = colors[::18] clf.fit(color_train, z_train) #Test it! z_fit_train = clf.predict(color_train) z_fit = clf.predict(color_test) #Compute rms in the training set and test set rms_train = np.mean(np.sqrt((z_fit_train - z_train) ** 2)) rms_test = np.mean(np.sqrt((z_fit - z_test) ** 2)) plt.scatter(z_test,z_fit, color='k', s=0.1) plt.plot([-0.1, 6], [-0.1, 6], ':k') plt.text(0.04, 5, "rms = %.3f" % (rms_test)) plt.xlabel('$z_{true}$') plt.ylabel('$z_{fit}$') ex6() """ Explanation: Exercise: Train and evaluate the performance of the tree using colors instead of the magnitudes themselves End of explanation """ # %load opt_ex1.py """ Explanation: Optional exercise: Create a nearest-neighbors estimator (KNN) using from sklearn.neighbors import KNeighborsRegressor End of explanation """ # %load opt_nn.py """ Explanation: 8.b) Extra: Deep Neural Network photo-z (you need keras and theano or tensorflow for this part) I am going to use a Recurrent Neural network, it may not be the optimal choice but, this is to illustrate how to set up the network. 
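Before the recurrent version, a minimal fully-connected sketch could look like the following (assuming a working Keras install; the layer sizes are arbitrary choices, not tuned values):
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=mag_train.shape[1]))
model.add(Dense(32, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')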
More on recurrent neural networks here: http://colah.github.io/posts/2015-08-Understanding-LSTMs/ Optional exercise: Create your own Neural Network photo-z estimator End of explanation """ import randomfield %time generator = randomfield.Generator(8, 128, 1024, grid_spacing_Mpc_h=1.0, verbose=True) delta = generator.generate_delta_field(smoothing_length_Mpc_h=2.0, seed=123, show_plot=True) """ Explanation: 9) Create a lognormal simulation and compute its correlation function We will use randomfield to create a gaussian random field End of explanation """ %%time #Let's compute a simple version of the correlation function in the direction of the direction of the line-of-sight corr = np.zeros(delta.shape[2]) for i in range(1,delta.shape[2]-1): corr[i]=np.sum(delta[:,:,i:]*delta[:,:,:-i])/(delta.shape[0]*delta.shape[1]*(delta.shape[2]-1)) r = np.linspace(0,delta.shape[2],delta.shape[2]+1) plt.plot(r[1:-1],r[1:-1]**2*corr[1:]) plt.xlim(0,200) plt.xlabel(r'$r_{\parallel}$ [Mpc h$^{-1}$]') plt.ylabel(r'$r_{\parallel}^{2}*\xi_{\parallel}(r_{\parallel})$ [Mpc$^{2}$ h$^{-2}$]') plt.ylim(-4500,300); """ Explanation: We will try to calculate the correlation function in the direction of the line-of-sight: $$\xi_{\parallel}(r)=\langle \delta(r') \delta(r+r')\rangle$$ End of explanation """ def plot_sky(ra, dec, data=None, nside=4, label='', projection='eck4', cmap=plt.get_cmap('jet'), norm=None, hide_galactic_plane=False, healpy=False): from mpl_toolkits.basemap import Basemap from matplotlib.collections import PolyCollection from astropy.coordinates import SkyCoord ra=ra.to(u.deg).value dec=dec.to(u.deg).value if(healpy): import healpy as hp # get pixel area in degrees pixel_area = hp.pixelfunc.nside2pixarea(nside, degrees=True) # find healpixels associated with input vectors pixels = hp.ang2pix(nside, 0.5*np.pi-np.radians(dec), np.radians(ra)) # find unique pixels unique_pixels = np.unique(pixels) # count number of points in each pixel bincounts = np.bincount(pixels) # if no data provided, show counts per sq degree # otherwise, show mean per pixel if data is None: values = bincounts[unique_pixels]/pixel_area else: weighted_counts = np.bincount(pixels, weights=data) values = weighted_counts[unique_pixels]/bincounts[unique_pixels] # find pixel boundaries corners = hp.boundaries(nside, unique_pixels, step=1) corner_theta, corner_phi = hp.vec2ang(corners.transpose(0,2,1)) corner_ra, corner_dec = np.degrees(corner_phi), np.degrees(np.pi/2-corner_theta) # set up basemap m = Basemap(projection=projection, lon_0=-90, resolution='c', celestial=True) m.drawmeridians(np.arange(0, 360, 30), labels=[0,0,1,0], labelstyle='+/-') m.drawparallels(np.arange(-90, 90, 15), labels=[1,0,0,0], labelstyle='+/-') m.drawmapboundary() # convert sky coords to map coords x,y = m(corner_ra, corner_dec) # regroup into pixel corners verts = np.array([x.reshape(-1,4), y.reshape(-1,4)]).transpose(1,2,0) # Make the collection and add it to the plot. 
coll = PolyCollection(verts, array=values, cmap=cmap, norm=norm, edgecolors='none') plt.gca().add_collection(coll) plt.gca().autoscale_view() if not hide_galactic_plane: # generate vector in galactic coordinates and convert to equatorial coordinates galactic_l = np.linspace(0, 2*np.pi, 1000) galactic_plane = SkyCoord(l=galactic_l*u.radian, b=np.zeros_like(galactic_l)*u.radian, frame='galactic').fk5 # project to map coordinates galactic_x, galactic_y = m(galactic_plane.ra.degree, galactic_plane.dec.degree) m.scatter(galactic_x, galactic_y, marker='.', s=2, c='k') # Add a colorbar for the PolyCollection plt.colorbar(coll, orientation='horizontal', pad=0.01, aspect=40, label=label) else: nx, ny = nside, nside ra_bins = numpy.linspace(-180, 180, nx+1) cth_bins = numpy.linspace(-1., 1., ny+1) ra[ra>180]=ra[ra>180]-360 density, _, _ = numpy.histogram2d(ra, np.sin(dec*np.pi/180.), [ra_bins, cth_bins]) ra_bins_2d, cth_bins_2d = numpy.meshgrid(ra_bins, cth_bins) m = Basemap(projection=projection, lon_0=0, resolution='l', celestial=True) m.drawmeridians(np.arange(0, 360, 60), labels=[0,0,1,0], labelstyle='+/-') m.drawparallels(np.arange(-90, 90, 15), labels=[1,0,0,0], labelstyle='+/-') m.drawmapboundary() xs, ys = m(ra_bins_2d, np.arcsin(cth_bins_2d)*180/np.pi) pcm = plt.pcolormesh(xs, ys, density) plt.colorbar(pcm,orientation='horizontal', pad=0.04, label=label) if not hide_galactic_plane: # generate vector in galactic coordinates and convert to equatorial coordinates galactic_l = np.linspace(0, 2*np.pi, 1000) galactic_plane = SkyCoord(l=galactic_l*u.radian, b=np.zeros_like(galactic_l)*u.radian, frame='galactic').fk5 # project to map coordinates galactic_x, galactic_y = m(galactic_plane.ra.degree, galactic_plane.dec.degree) m.scatter(galactic_x, galactic_y, marker='.', s=2, c='k') ra = 360*np.random.random(10000)*u.deg dec = np.arcsin(-1+2*np.random.random(10000))*180/np.pi*u.deg plot_sky(ra,dec,healpy=False, nside=16, projection='eck4', label='Galaxies per pixel') """ Explanation: 10) Create sky plots Healpy (http://healpy.readthedocs.io/en/latest/) includes tools for visualizing skymaps but, what if we want to use different projections? Or what if we cannot use healpy? See here, and here for more info. 
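For reference, the plain healpy route would be something along these lines (a sketch, assuming healpy is installed):
import healpy as hp
m = np.zeros(hp.nside2npix(16))
hp.mollview(m, title='Empty HEALPix map')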
End of explanation """ # %load ex7.py def ex7(): plot_sky(quasar_table['RA']*u.deg,quasar_table['DEC']*u.deg,nside=128, healpy=False) ex7() """ Explanation: Exercise: Plot the positions of the quasars End of explanation """ from astropy.cosmology import Planck15 print(Planck15.__doc__) z=np.logspace(-4,4,30) om=Planck15.Om(z) ob=Planck15.Ob(z) plt.plot(z,om,label=r'$\Omega_{m}(z)$') plt.plot(z,ob,label=r'$\Omega_{b}(z)$') plt.legend(loc=2) plt.xscale('log') plt.xlabel(r'$z$') plt.ylabel(r'$\Omega(z)$') h=Planck15.H(z) plt.plot(z,h,label=r'$H(z)$') plt.legend(loc=2) plt.xscale('log') plt.yscale('log') plt.xlabel(r'$z$') plt.ylabel(r'$H(z)$ %s' % h.unit) from astropy.cosmology import z_at_value z_at_value(Planck15.comoving_distance, 1200 *u.Mpc) from astropy.cosmology import w0waCDM cosmo = w0waCDM(H0=75*u.km/u.s/u.Mpc,Om0=0.3,Ode0=0.7,w0=-1.2,wa=-3,Neff=4,Ob0=0.044,m_nu=1e-5*u.eV) h_cosmo = cosmo.H(z) plt.plot(z,h_cosmo, label='Random cosmology') plt.plot(z,h, label='Planck15') plt.legend(loc=2) plt.xscale('log') plt.yscale('log') plt.xlabel(r'$z$') plt.ylabel(r'$H(z)$ %s' % h.unit) plt.plot(z,h_cosmo/h-1) plt.legend(loc=2) plt.xscale('log') plt.yscale('log') plt.xlabel(r'$z$') plt.ylabel(r'$H_{cosmo}(z)/H_{Planck15}(z)$') """ Explanation: 11) Using astropy.cosmology (http://docs.astropy.org/en/stable/cosmology/) astropy.cosmology is a subpackage that contains several cosmologies implemented (LCDM, wCDM, etc) and computes some useful quantities for them such as: comoving distance, $H(z)$ or transverse separations from angular separations at redshift $z$. Example: Using $\Lambda CDM$ with Planck 2015 cosmological parameters End of explanation """
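# Two extra one-line sketches with the same Planck15 object: luminosity distances and the
# transverse scale ("separation from angular separation") mentioned in the text above.
print(Planck15.luminosity_distance([0.5, 1.0, 2.0]))
print(Planck15.kpc_proper_per_arcmin(0.5))  # proper kpc per arcminute at z = 0.5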
metpy/MetPy
v1.0/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
bsd-3-clause
import matplotlib.pyplot as plt import numpy as np from scipy.spatial import ConvexHull, Delaunay, delaunay_plot_2d, Voronoi, voronoi_plot_2d from scipy.spatial.distance import euclidean from metpy.interpolate import geometry from metpy.interpolate.points import natural_neighbor_point """ Explanation: Natural Neighbor Verification Walks through the steps of Natural Neighbor interpolation to validate that the algorithmic approach taken in MetPy is correct. Find natural neighbors visual test A triangle is a natural neighbor for a point if the circumscribed circle &lt;https://en.wikipedia.org/wiki/Circumscribed_circle&gt;_ of the triangle contains that point. It is important that we correctly grab the correct triangles for each point before proceeding with the interpolation. Algorithmically: We place all of the grid points in a KDTree. These provide worst-case O(n) time complexity for spatial searches. We generate a Delaunay Triangulation &lt;https://docs.scipy.org/doc/scipy/ reference/tutorial/spatial.html#delaunay-triangulations&gt;_ using the locations of the provided observations. For each triangle, we calculate its circumcenter and circumradius. Using KDTree, we then assign each grid a triangle that has a circumcenter within a circumradius of the grid's location. The resulting dictionary uses the grid index as a key and a set of natural neighbor triangles in the form of triangle codes from the Delaunay triangulation. This dictionary is then iterated through to calculate interpolation values. We then traverse the ordered natural neighbor edge vertices for a particular grid cell in groups of 3 (n - 1, n, n + 1), and perform calculations to generate proportional polygon areas. Circumcenter of (n - 1), n, grid_location Circumcenter of (n + 1), n, grid_location Determine what existing circumcenters (ie, Delaunay circumcenters) are associated with vertex n, and add those as polygon vertices. Calculate the area of this polygon. Increment the current edges to be checked, i.e.: n - 1 = n, n = n + 1, n + 1 = n + 2 Repeat steps 5 & 6 until all of the edge combinations of 3 have been visited. Repeat steps 4 through 7 for each grid cell. End of explanation """ np.random.seed(100) pts = np.random.randint(0, 100, (10, 2)) xp = pts[:, 0] yp = pts[:, 1] zp = (pts[:, 0] * pts[:, 0]) / 1000 tri = Delaunay(pts) fig, ax = plt.subplots(1, 1, figsize=(15, 10)) ax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility delaunay_plot_2d(tri, ax=ax) for i, zval in enumerate(zp): ax.annotate(f'{zval} F', xy=(pts[i, 0] + 2, pts[i, 1])) sim_gridx = [30., 60.] sim_gridy = [30., 60.] ax.plot(sim_gridx, sim_gridy, '+', markersize=10) ax.set_aspect('equal', 'datalim') ax.set_title('Triangulation of observations and test grid cell ' 'natural neighbor interpolation values') members, circumcenters = geometry.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy))) val = natural_neighbor_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], circumcenters) ax.annotate(f'grid 0: {val:.3f}', xy=(sim_gridx[0] + 2, sim_gridy[0])) val = natural_neighbor_point(xp, yp, zp, (sim_gridx[1], sim_gridy[1]), tri, members[1], circumcenters) ax.annotate(f'grid 1: {val:.3f}', xy=(sim_gridx[1] + 2, sim_gridy[1])) """ Explanation: For a test case, we generate 10 random points and observations, where the observation values are just the x coordinate value times the y coordinate value divided by 1000. 
We then create two test points (grid 0 & grid 1) at which we want to estimate a value using natural neighbor interpolation. The locations of these observations are then used to generate a Delaunay triangulation. End of explanation """ def draw_circle(ax, x, y, r, m, label): th = np.linspace(0, 2 * np.pi, 100) nx = x + r * np.cos(th) ny = y + r * np.sin(th) ax.plot(nx, ny, m, label=label) fig, ax = plt.subplots(1, 1, figsize=(15, 10)) ax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility delaunay_plot_2d(tri, ax=ax) ax.plot(sim_gridx, sim_gridy, 'ks', markersize=10) for i, (x_t, y_t) in enumerate(circumcenters): r = geometry.circumcircle_radius(*tri.points[tri.simplices[i]]) if i in members[1] and i in members[0]: draw_circle(ax, x_t, y_t, r, 'm-', str(i) + ': grid 1 & 2') ax.annotate(str(i), xy=(x_t, y_t), fontsize=15) elif i in members[0]: draw_circle(ax, x_t, y_t, r, 'r-', str(i) + ': grid 0') ax.annotate(str(i), xy=(x_t, y_t), fontsize=15) elif i in members[1]: draw_circle(ax, x_t, y_t, r, 'b-', str(i) + ': grid 1') ax.annotate(str(i), xy=(x_t, y_t), fontsize=15) else: draw_circle(ax, x_t, y_t, r, 'k:', str(i) + ': no match') ax.annotate(str(i), xy=(x_t, y_t), fontsize=9) ax.set_aspect('equal', 'datalim') ax.legend() """ Explanation: Using the circumcenter and circumcircle radius information from :func:metpy.interpolate.geometry.find_natural_neighbors, we can visually examine the results to see if they are correct. End of explanation """ x_t, y_t = circumcenters[8] r = geometry.circumcircle_radius(*tri.points[tri.simplices[8]]) print('Distance between grid0 and Triangle 8 circumcenter:', euclidean([x_t, y_t], [sim_gridx[0], sim_gridy[0]])) print('Triangle 8 circumradius:', r) """ Explanation: What?....the circle from triangle 8 looks pretty darn close. Why isn't grid 0 included in that circle? 
End of explanation """ cc = np.array(circumcenters) r = np.array([geometry.circumcircle_radius(*tri.points[tri.simplices[m]]) for m in members[0]]) print('circumcenters:\n', cc) print('radii\n', r) """ Explanation: Lets do a manual check of the above interpolation value for grid 0 (southernmost grid) Grab the circumcenters and radii for natural neighbors End of explanation """ vor = Voronoi(list(zip(xp, yp))) fig, ax = plt.subplots(1, 1, figsize=(15, 10)) ax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility voronoi_plot_2d(vor, ax=ax) nn_ind = np.array([0, 5, 7, 8]) z_0 = zp[nn_ind] x_0 = xp[nn_ind] y_0 = yp[nn_ind] for x, y, z in zip(x_0, y_0, z_0): ax.annotate(f'{x}, {y}: {z:.3f} F', xy=(x, y)) ax.plot(sim_gridx[0], sim_gridy[0], 'k+', markersize=10) ax.annotate(f'{sim_gridx[0]}, {sim_gridy[0]}', xy=(sim_gridx[0] + 2, sim_gridy[0])) ax.plot(cc[:, 0], cc[:, 1], 'ks', markersize=15, fillstyle='none', label='natural neighbor\ncircumcenters') for center in cc: ax.annotate(f'{center[0]:.3f}, {center[1]:.3f}', xy=(center[0] + 1, center[1] + 1)) tris = tri.points[tri.simplices[members[0]]] for triangle in tris: x = [triangle[0, 0], triangle[1, 0], triangle[2, 0], triangle[0, 0]] y = [triangle[0, 1], triangle[1, 1], triangle[2, 1], triangle[0, 1]] ax.plot(x, y, ':', linewidth=2) ax.legend() ax.set_aspect('equal', 'datalim') def draw_polygon_with_info(ax, polygon, off_x=0, off_y=0): """Draw one of the natural neighbor polygons with some information.""" pts = np.array(polygon)[ConvexHull(polygon).vertices] for i, pt in enumerate(pts): ax.plot([pt[0], pts[(i + 1) % len(pts)][0]], [pt[1], pts[(i + 1) % len(pts)][1]], 'k-') avex, avey = np.mean(pts, axis=0) ax.annotate(f'area: {geometry.area(pts):.3f}', xy=(avex + off_x, avey + off_y), fontsize=12) cc1 = geometry.circumcenter((53, 66), (15, 60), (30, 30)) cc2 = geometry.circumcenter((34, 24), (53, 66), (30, 30)) draw_polygon_with_info(ax, [cc[0], cc1, cc2]) cc1 = geometry.circumcenter((53, 66), (15, 60), (30, 30)) cc2 = geometry.circumcenter((15, 60), (8, 24), (30, 30)) draw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2], off_x=-9, off_y=3) cc1 = geometry.circumcenter((8, 24), (34, 24), (30, 30)) cc2 = geometry.circumcenter((15, 60), (8, 24), (30, 30)) draw_polygon_with_info(ax, [cc[1], cc1, cc2], off_x=-15) cc1 = geometry.circumcenter((8, 24), (34, 24), (30, 30)) cc2 = geometry.circumcenter((34, 24), (53, 66), (30, 30)) draw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2]) """ Explanation: Draw the natural neighbor triangles and their circumcenters. Also plot a Voronoi diagram &lt;https://docs.scipy.org/doc/scipy/reference/tutorial/spatial.html#voronoi-diagrams&gt;_ which serves as a complementary (but not necessary) spatial data structure that we use here simply to show areal ratios. Notice that the two natural neighbor triangle circumcenters are also vertices in the Voronoi plot (green dots), and the observations are in the polygons (blue dots). End of explanation """ areas = np.array([60.434, 448.296, 25.916, 70.647]) values = np.array([0.064, 1.156, 2.809, 0.225]) total_area = np.sum(areas) print(total_area) """ Explanation: Put all of the generated polygon areas and their affiliated values in arrays. Calculate the total area of all of the generated polygons. End of explanation """ proportions = areas / total_area print(proportions) """ Explanation: For each polygon area, calculate its percent of total area. 
End of explanation """ contributions = proportions * values print(contributions) """ Explanation: Multiply the percent of total area by the respective values. End of explanation """ interpolation_value = np.sum(contributions) function_output = natural_neighbor_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], circumcenters) print(interpolation_value, function_output) """ Explanation: The sum of this array is the interpolation value! End of explanation """ plt.show() """ Explanation: The values are slightly different due to truncating the area values in the above visual example to the 3rd decimal place. End of explanation """
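# Putting the manual steps together: the natural neighbor estimate is simply the
# area-weighted average of the neighboring values (a one-line sketch using the arrays above).
manual_estimate = np.sum(areas / np.sum(areas) * values)
print(manual_estimate)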
srcole/qwm
misc/Nonuniform phase distribution.ipynb
mit
from neurodsp import sim freq = 8 T = 60 Fs = 1000 x = sim.sim_bursty_oscillator(freq, T, Fs, rdsym = .2, prob_enter_burst=1, prob_leave_burst=0) # Cut out buffer time t = np.arange(0, T, 1/Fs) # Plot signal tlim = (0,2) tidx = np.logical_and(t>=tlim[0], t<tlim[1]) plt.figure(figsize=(16,3)) plt.plot(t[tidx], x[tidx], 'k') plt.xlabel('Time (seconds)') plt.xlim(tlim) """ Explanation: Simulate nonsinusoidal rhythm Nonsinusoidal rhythm with some variance in amplitude, period, and sawtoothness. But the average rise-decay symmetry is 0.2 (i.e. 20% of the cycle is in the rise and 80% is in the decay) End of explanation """ # Filter signal and compute phase time series import neurodsp from neurodsp import phase_by_time f_range = (4, 12) filter_length = .75 # Filter length in seconds x_filt = neurodsp.filter(x, Fs, 'bandpass', fc=f_range, N_seconds = filter_length, remove_edge_artifacts=False) pha = phase_by_time(x, Fs, f_range, filter_kwargs={'N_seconds': filter_length}) # Plot filtered signal on original plt.figure(figsize=(16,3)) plt.plot(t[tidx], x[tidx], 'k') plt.plot(t[tidx], x_filt[tidx], 'r') plt.xlabel('Time (seconds)') plt.xlim(tlim) # Plot distribution of phase plt.figure(figsize=(5,3)) plt.hist(pha, bins=np.linspace(-np.pi, np.pi, 21), color='.5', edgecolor='k'); plt.xticks([-np.pi, 0, np.pi]) plt.xlabel('Phase (rad)') plt.ylabel('Number of samples') """ Explanation: Filter nonsinusoidal rhythm 4-12 Hz and compute phase distribution See function in neurodsp End of explanation """ # Filter signal and compute phase time series from neurodsp import phase_by_time f_range = (4, 30) filter_length = .75 # Filter length in seconds x_filt = neurodsp.filter(x, Fs, 'bandpass', fc=f_range, N_seconds = filter_length, remove_edge_artifacts=False) pha = phase_by_time(x, Fs, f_range, filter_kwargs={'N_seconds': filter_length}) # Plot filtered signal on original plt.figure(figsize=(16,3)) plt.plot(t[tidx], x[tidx], 'k') plt.plot(t[tidx], x_filt[tidx], 'r') plt.xlabel('Time (seconds)') plt.xlim(tlim) # Plot distribution of phase plt.figure(figsize=(5,3)) plt.hist(pha, bins=np.linspace(-np.pi, np.pi, 21), color='.5', edgecolor='k'); plt.xticks([-np.pi, 0, np.pi]) plt.xlabel('Phase (rad)') plt.ylabel('Number of samples') """ Explanation: Filter signal 4-30Hz and recompute phase distribution End of explanation """ from neurodsp.shape.phase import extrema_interpolated_phase from neurodsp.shape.cyclepoints import find_extrema, find_zerox # Find peaks and troughs f_range = (4,12) Ps, Ts = find_extrema(x, Fs, f_range) # Find rise and decay midpoints zeroxR, zeroxD = find_zerox(x, Ps, Ts) # Compute phase by interpolating these points pha = extrema_interpolated_phase(x, Ps, Ts, zeroxR=zeroxR, zeroxD=zeroxD) # Plot distribution of phase plt.figure(figsize=(5,3)) plt.hist(pha, bins=np.linspace(-np.pi, np.pi, 21), color='.5', edgecolor='k'); plt.xticks([-np.pi, 0, np.pi]) plt.xlabel('Phase (rad)') plt.ylabel('Number of samples') """ Explanation: Use waveform shape to compute phase End of explanation """ np.savetxt('nonsinusoidal.txt', x) xload = np.loadtxt('nonsinusoidal.txt') plt.figure(figsize=(16,3)) plt.plot(xload[:2000], 'k') plt.xlim((0,2000)) """ Explanation: Save and load data End of explanation """
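# One possible way to quantify how non-uniform the extrema-based phase distribution is
# (a sketch; assumes scipy is available and drops any NaN samples before the first extremum):
from scipy import stats
pha_valid = pha[~np.isnan(pha)]
ks_stat, p_value = stats.kstest(pha_valid, 'uniform', args=(-np.pi, 2 * np.pi))
print(ks_stat, p_value)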
jeroarenas/MLBigData
0_Introduction/Intro_PySpark_1.ipynb
mit
fruits = ['apple', 'orange', 'banana', 'grape', 'watermelon', 'apple', 'orange', 'apple'] number_partitions = 4 dataRDD = sc.parallelize(fruits, number_partitions) print type(dataRDD) """ Explanation: Counting words 1.- Creating a simple RDD . We will create a simple RDD and apply basic operations End of explanation """ N_data = dataRDD.<COMPLETAR>() print "There are %d elements in the RDD\n" % N_data print "These are the first two:" print dataRDD.<COMPLETAR>(2) print "\nThese are the first two, alphabetically ordered:" print dataRDD.<COMPLETAR>(2) """ Explanation: Exercise: Apply the corresponding operation: - obtain the total number of elements in the RDD (count) - print the first two elements in the RDD (take) - print the first two alphabetically sorted elements in the RDD (takeOrdered) The answer should be: <pre><code> There are 8 elements in the RDD These are the first two: ['apple', 'orange'] These are the first two, alphabetically ordered: ['apple', 'apple'] </code></pre> End of explanation """ def complete_word(word): return <COMPLETAR> print "Testing the function:" print complete_word('apple') dataRDDprocessed = dataRDD.map(<COMPLETAR>) print "\nThese are all the elements in the RDD:" print dataRDDprocessed.<COMPLETAR>() """ Explanation: 2.- Simple transformations Exercise: Define a function 'complete_word' that adds ' fruit' to the input string. Use this function to process all elements in the RDD using map. Print all of the elements in the resulting RDD using collect(). The answer should be: <pre><code> Testing the function: apple fruit These are all the elements in the RDD: ['apple fruit', 'orange fruit', 'banana fruit', 'grape fruit', 'watermelon fruit', 'apple fruit', 'orange fruit', 'apple fruit'] </code></pre> End of explanation """ dataRDDprocessed_lambda = dataRDD.map(lambda x: x + ' fruit') print "Result with a lambda function:" print dataRDDprocessed_lambda.<COMPLETAR>() """ Explanation: We will use now a lambda function to do the same task End of explanation """ wordLengths = (dataRDDprocessed_lambda .map(<COMPLETAR>) .collect()) print wordLengths """ Explanation: Now let's count the number of characters of every processed word. The answer should be: <pre><code> [11, 12, 12, 11, 16, 11, 12, 11] </code></pre> End of explanation """ string1 = " ".join(<COMPLETAR>) print type(string1) print string1 string2 = dataRDD.reduce(lambda x, y: <COMPLETAR>) print type(string2) print string2 """ Explanation: Let's obtain a string with all the words in the original RDD using two different approaches. 
Exercise: Complete the code and discuss the results: The answer should be: <pre><code> type 'str' apple orange banana grape watermelon apple orange apple type 'str' apple orange banana grape watermelon apple orange apple </code></pre> End of explanation """ Nchars = sum(dataRDD.<COMPLETAR>) print Nchars Nchars = dataRDD.map(len).reduce(<COMPLETAR>) print Nchars """ Explanation: Exercise: Repeat the scheme above to obtain the total number of characters in the RDD: The answer should be: <pre><code> 48 48 </code></pre> End of explanation """ pairRDD = dataRDD.map(lambda x: (x, 1)) print pairRDD.collect() print "Result: (key, iterable):" groupedRDD = pairRDD.groupByKey() print groupedRDD.collect() print " " print "Result: (key, list of results):" groupedRDDprocessed = groupedRDD.mapValues(list) print groupedRDDprocessed.collect() print " " print "Result: (key, count):" groupedRDDprocessed = groupedRDD.mapValues(len) print groupedRDDprocessed.collect() print " " """ Explanation: 3.- Creating a pair RDD and counting Every element of a pair RDD is a tuple (k,v) where k is the key and v is the value. Exercise: Transform the original RDD into a pair RDD, where the value is always 1. The answer should be: <pre><code> [('apple', 1), ('orange', 1), ('banana', 1), ('grape', 1), ('watermelon', 1), ('apple', 1), ('orange', 1), ('apple', 1)] Grouped pairs as an interable: [('orange', <pyspark.resultiterable.ResultIterable object at 0xb0e455cc>), ('watermelon', <pyspark.resultiterable.ResultIterable object at 0xb0e45b8c>), ('grape', <pyspark.resultiterable.ResultIterable object at 0xb0e45bec>), ('apple', <pyspark.resultiterable.ResultIterable object at 0xb0e4546c>), ('banana', <pyspark.resultiterable.ResultIterable object at 0xb1f5dd6c>)] Grouped pairs as a list [('orange', [1, 1]), ('watermelon', [1]), ('grape', [1]), ('apple', [1, 1, 1]), ('banana', [1])] Grouped pairs + count [('orange', 2), ('watermelon', 1), ('grape', 1), ('apple', 3), ('banana', 1)] </code></pre> End of explanation """ print "Result: (key, count):" countRDD = pairRDD.groupByKey().map(<COMPLETAR>) print countRDD.collect() print " " """ Explanation: Exercise: Use groupByKey to count the frequencies of every word ( caution!: groupByKey transformation can be very inefficient, since it needs to exchange data among workers): The answer should be: <pre><code> Result: (key, count): [('apple', 1), ('orange', 1)] [('orange', 2), ('watermelon', 1), ('grape', 1), ('apple', 3), ('banana', 1)] </code></pre> End of explanation """ print "Result: (key, count):" countRDD = pairRDD.reduceByKey(<COMPLETAR>) print countRDD.collect() print " " """ Explanation: Exercise: Repeat the counting using reduceByKey, a much more efficient approach, since it operates at every worker before sharing results. The answer should be: <pre><code> Result: (key, count): [('orange', 2), ('watermelon', 1), ('grape', 1), ('apple', 3), ('banana', 1)] </code></pre> End of explanation """ counts = (dataRDD .<COMPLETAR> .<COMPLETAR> .<COMPLETAR> ) print counts """ Explanation: Exercise: Combine map, reduceByKey and collect to obtain the counts per word: The answer should be: <pre><code> [('orange', 2), ('watermelon', 1), ('grape', 1), ('apple', 3), ('banana', 1)] </code></pre> End of explanation """ N_unique_words = (dataRDD .<COMPLETAR> .<COMPLETAR> .filter(<COMPLETAR>) .count() ) print N_unique_words """ Explanation: 4.- Filtering a RDD Count the number of words that only appear once in the dataset. 
The answer should be: <pre><code> 3 </code></pre> End of explanation """ textRDD = sc.textFile('data/shakespeare.txt', 8) print "Number of lines of text = %d" % textRDD.count() """ Explanation: 5.- Counting words in a file We will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method. End of explanation """ counts = (textRDD .map(lambda x: (x, 1)) .<COMPLETAR> .take(10) ) print counts """ Explanation: Exercise: Use the code written in the previous sections to obtain the counts for every word in the text. Print the first 10 results. Observe the result, is this what we want? What is going wrong? The answer should be: <pre><code> [(u'', 9493), (u' thou diest in thine unthankfulness, and thine ignorance makes', 1), (u" Which I shall send you written, be assur'd", 1), (u' I do beseech you, take it not amiss:', 1), (u' their mastiffs are of unmatchable courage.', 1), (u' With us in Venice, if it be denied,', 1), (u" Hot. I'll have it so. A little charge will do it.", 1), (u' By what yourself, too, late have spoke and done,', 1), (u" FIRST LORD. He's but a mad lord, and nought but humours sways him.", 1), (u' none will entertain it.', 1)] </code></pre> End of explanation """ counts = (textRDD .flatMap(lambda x: x.split()) .map(<COMPLETAR>) .<COMPLETAR> .take(10) ) print counts """ Explanation: Exercise: Modify the code by introducing a flatMap operation and observe the result. The answer should be: <pre><code> [(u'fawn', 11), (u'bishops.', 2), (u'divinely', 1), (u'mustachio', 1), (u'four', 114), (u'reproach-', 1), (u'drollery.', 1), (u'conjuring', 1), (u'slew.', 1), (u'Calen', 1)] </code></pre> End of explanation """ counts = (textRDD .flatMap(<COMPLETAR>) .map(<COMPLETAR>) .reduceByKey(<COMPLETAR>) .filter(<COMPLETAR>) .take(<COMPLETAR>) ) print counts """ Explanation: Exercise: Modify the code to obtain 5 words that appear exactly 111 times in the text. The answer should be: <pre><code> [(u'think,', 111), (u'see,', 111), (u'gone.', 111), (u"King's", 111), (u'having', 111)] </code></pre> End of explanation """ counts = (textRDD .<COMPLETAR> .<COMPLETAR> .<COMPLETAR> .takeOrdered(5,key = lambda x: <COMPLETAR>) ) print counts """ Explanation: Exercise: Modify the code to obtain the 5 words that most appear in the text. The answer should be: <pre><code> [(u'the', 23197), (u'I', 19540), (u'and', 18263), (u'to', 15592), (u'of', 15507)] </code></pre> End of explanation """ def clean_text(string): string = string.lower() return string counts = (textRDD .flatMap(<COMPLETAR>) .map(<COMPLETAR>) .map(<COMPLETAR>) .reduceByKey(<COMPLETAR>) .takeOrdered(<COMPLETAR>) ) print counts """ Explanation: 6.- Cleaning the text You may see in the results that we observe some words in capital letters, that some other punctuation characters appear as well. We will incorporate in the code a cleaning function such that we eliminate unwanted characters. We provide a simple cleaning function that lowers all the characters. Exercise: Use it in the code and verify that the word "I" is printed as "i". Note: Since we are modifying the strings, the counts will differ with respect to the previous values. 
The answer should be: <pre><code> [(u'the', 27267), (u'and', 25340), (u'i', 19540), (u'to', 18656), (u'of', 17301)] </code></pre> End of explanation """ countsRDD = (textRDD .flatMap(<COMPLETAR>) .map(<COMPLETAR>) .filter(lambda x: not x.isalpha()) .map(<COMPLETAR>) .reduceByKey(<COMPLETAR>) ) countsRDD.cache() print "The database has %d words that need cleaning, for example:\n" % countsRDD.count() print countsRDD.takeOrdered(20,key = lambda x: -x[1]) """ Explanation: We will now search for non-alphabetical characters in the dataset. We can use the Python method 'isalpha' to decide wether or not a string is composed of characters a-z. Exercise: Use that function to print the 20 words with non-alphabetic characters that most appear in the text and print the total number of strings with non-alphabetic characters. The answer should be: <pre><code> The database has 40957 words that need cleaning, for example: [(u"i'll", 1737), (u'you,', 1478), (u"'tis", 1367), (u'sir,', 1235), (u'me,', 1219), (u"th'", 1146), (u'o,', 1008), (u'lord,', 977), (u'come,', 875), (u'me.', 823), (u'you.', 813), (u'why,', 805), (u'now,', 785), (u'it.', 784), (u'him.', 755), (u'lord.', 702), (u'him,', 698), (u'ay,', 661), (u'well,', 647), (u'and,', 647)] </code></pre> End of explanation """ def new_clean_text(string): string = string.lower() list_of_chars = ['.', <COMPLETAR>] for c in <COMPLETAR>: string = string.replace(c,'') return string countsRDD = (textRDD .flatMap(<COMPLETAR>) .map(new_clean_text) .filter(lambda x: not x.isnumeric()) .filter(lambda x: len(x)>0) .filter(lambda x: not x.isalnum()) .map(<COMPLETAR>) .reduceByKey(<COMPLETAR>) ) countsRDD.cache() Npreprocess = countsRDD.count() print "The database has %d elements that need preprocessing, for example:" % Npreprocess print countsRDD.takeOrdered(20,key = lambda x: -x[1]) """ Explanation: You can clearly observe now all the punctuation symbols that have not been removed yet. Exercise: Write a new_clean_function such that all the unwanted symbols have been remode. As a hint, we include the code for removing the symbol '.' The answer should be: <pre><code> The database has 0 elements that need preprocessing, for example: [] </code></pre> End of explanation """ print "Processing the dataset to find the 20 most frequent strings:\n" countsRDDclean = (textRDD .<COMPLETAR> ) countsRDDclean.cache() print countsRDDclean.takeOrdered(20,key = lambda x: -x[1]) """ Explanation: Exercise: Now that we have completely cleaned the words, try to find the 20 most frequent cleaned strings. The answer should be: <pre><code> Processing the dataset to find the 20 most frequent strings: [(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463), (u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890), (u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678), (u'for', 7558), (u'be', 6857), (u'his', 6857), (u'your', 6655), (u'this', 6602)] </code></pre> End of explanation """ import csv with open('data/english_stopwords.txt', 'rb') as csvfile: reader = csv.reader(csvfile) stopwords = [] for row in reader: stopwords.append(row[0].replace("'",'').replace('\t','')) stopwords = [unicode(s, "utf-8") for s in stopwords] print stopwords """ Explanation: 7.- Removing stopwords Many of the most frequent words obtained in the previous section are irrelevant to many tasks, they are know as stop-words. We will use here a stop list (list of meaningless words) to clean out those terms. 
Exercise: Observe the line used for converting the strings to unicode. This task could be implemented using a "for" loop, but we are using what is called a "List Comprehension". End of explanation """ countsRDDclean = (textRDD .<COMPLETAR> .filter(lambda x: <COMPLETAR> stopwords) .<COMPLETAR> ) countsRDDclean.cache() pairs = countsRDDclean.takeOrdered(50,key = lambda x: -x[1]) #print pairs words = ' '.join([x[0] for x in pairs]) print "These are the most frequent words:\n" print words """ Explanation: Exercise: Apply an extra filter that removes the stop words from the calculations. Print the 50 most frequent words ONLY THE WORDS separated with blank spaces. Are they informative about Shakespeare's books? The answer should be: <pre><code> These are the most frequent words: all no lord king good now sir come or let enter love hath man one go upon like say know may make us yet must see tis give can take speak mine first th duke tell time exeunt much think never heart exit queen doth art great hear lady death </code></pre> End of explanation """
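# One possible way to fill in the blanks of the last exercise (a sketch only; it assumes the
# same textRDD, new_clean_text and stopwords objects defined above):
countsRDDclean = (textRDD
                  .flatMap(lambda x: x.split())
                  .map(new_clean_text)
                  .filter(lambda x: len(x) > 0)
                  .filter(lambda x: x not in stopwords)
                  .map(lambda x: (x, 1))
                  .reduceByKey(lambda x, y: x + y)
                  )
countsRDDclean.cache()
print countsRDDclean.takeOrdered(5, key=lambda x: -x[1])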
dolejarz/engsci_capstone_transport
python/DDM/DDM.ipynb
mit
import pandas as pd import matplotlib.pyplot as plt import matplotlib import datetime as dt import scipy.stats as stats from scipy.stats import norm import numpy as np import math import seaborn as sns from InvarianceTestEllipsoid import InvarianceTestEllipsoid from autocorrelation import autocorrelation import statsmodels.api as sm from statsmodels.sandbox.regression.predstd import wls_prediction_std import statsmodels.tsa.ar_model as ar_model import pickle %matplotlib inline """ Explanation: Explanatory Model for Car2go Demand D End of explanation """ T = pd.read_csv("DDMFactors.csv") T.drop('Total Seconds Available', axis = 1,inplace=True) T = T.fillna(0) means = T.mean() stds = T.std() T = (T-means)/stds T.hist() """ Explanation: 0. The model: The model that will be used to demand of zone i $T_i = B_{k = 1:n}X_{ki} + \epsilon_i$ End of explanation """ Y = T["Trip Demand"] #generating the factor space X = T[T.columns[1:-1]] X """ Explanation: 1. Regression to obtain $m(t)$ End of explanation """ y = Y L = [] model = sm.OLS(y, X) for i in range(10): results = model.fit_regularized(method = 'elastic_net',alpha=i/40, L1_wt=0.5) L.append(results.params) L = pd.DataFrame(L) L = L/L.max(axis=0) L.plot(title = "Coefficient values as the regularization term is varied") L cols = L.columns[L.ix[len(L)-1] > 0.001] Xs = X[cols] """ Explanation: 1.1 Using Lasso Regression to shrink factors to zero The plot below varies the magnitude of the lasso regularization to see which parameters go to zero Training data: $(x_t,y_t)$ Model Specification: $Y = \beta X + C$ Lasso regularization: $\underset{\beta}{\operatorname{argmin}}\sum_t(y_t - (\beta x_t + C))^2 + \lambda||\beta||_{l1} $ Depending on the value of $\lambda$, the coefficients in beta will shrink to zero End of explanation """ model = sm.OLS(y,Xs) results = model.fit() print(results.summary()) Comparison = pd.DataFrame(results.predict(Xs)) Comparison["Actual"] = y Comparison.rename(columns={Comparison.columns[0]: 'Predicted'}, inplace=True) Comparison.ix[len(y)-365:len(y)].plot(title = "Normalized Zonal Demand for Car2Go in Toronto") """ Explanation: 1.2 Mean Regression Results (p-values, coefficients .... ) End of explanation """ epsi = Comparison['Actual'] - Comparison['Predicted'] epsi = np.array(epsi) epsi = np.expand_dims(epsi, axis=0) lag_ = 10 # number of lags (for auto correlation test) acf = autocorrelation(epsi, lag_) lag = 10 # lag to be printed ell_scale = 2 # ellipsoid radius coefficient fit = 0 # normal fitting InvarianceTestEllipsoid(epsi, acf[0,1:], lag, fit, ell_scale); epsi = Comparison['Actual'] - Comparison['Predicted'] epsi = np.array(epsi) model = sm.tsa.AR(epsi) AResults= model.fit(maxlag = 30, ic = "bic",method = 'cmle') print("The maximum number of required lags for the residuals above according to the Bayes Information Criterion is:") sm.tsa.AR(epsi).select_order(maxlag = 10, ic = 'bic',method='cmle') np.array([epsi[1:-1],epsi[:-2]]).shape epsi[2:].shape ar_mod = sm.OLS(epsi[2:],np.array([epsi[1:-1],epsi[:-2]])) ar_res = ar_mod.fit() print(ar_res.summary()) ep = ar_res.predict() print(len(ep),len(epsi)) z = ep - epsi[2:] plt.plot(epsi[2:], color='black') plt.plot(ep, color='blue',linewidth=3) plt.title('AR(1) Process') plt.ylabel(" ") plt.xlabel("Days") plt.legend() """ Explanation: 2. 
AR(1) Process for the Residuals
Let's check the residuals to make sure that they are approximately iid
$X_{t+1}= \alpha X_t + \sigma(t)\epsilon$
2.1 The code below is motivation for an AR(1) process for the residuals obtained above: we see that there is significant correlation among the residuals from the mean process
End of explanation
"""

z = np.expand_dims(z, axis=0)
lag_ = 10  # number of lags (for auto correlation test)
acf = autocorrelation(z, lag_)

lag = 10  # lag to be printed
ell_scale = 2  # ellipsoid radius coefficient
fit = 0  # normal fitting
InvarianceTestEllipsoid(z, acf[0,1:], lag, fit, ell_scale);
"""
Explanation: 2.2 Invariance check for the residuals of the AR(1) process
End of explanation
"""

# ep was fitted against epsi[2:], so the AR residual uses the same slice
z = ep - epsi[2:]

plt.plot(z**2)
"""
Explanation: 2.3 As per Benth, let's see what the residuals of the AR(1) process are doing...
End of explanation
"""
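# For comparison, a literal AR(1) fit of the mean-model residuals can be written as a
# single-lag OLS regression (a sketch using the epsi array defined above):
ar1 = sm.OLS(epsi[1:], epsi[:-1]).fit()
print('Estimated AR(1) coefficient:', ar1.params[0])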
rokkamsatyakalyan/Machine_Learning
K_NEAREST_IMPLEMENTATION.ipynb
gpl-3.0
import pandas as pd import numpy as np from collections import Counter from math import sqrt import random import warnings """ Explanation: IMPLEMENTING K_NEAREST_NEIGHBOUR In the given data set we have to classify into which cluster a instance is going to fall Importing required predifined methods End of explanation """ df = pd.read_table('train.csv', sep=',', header=None, names=['Type', 'LifeStyle', 'Vacation', 'eCredit', 'Salary', 'Property', 'Label']) df.head() """ Explanation: reading the training data into a data frame and assigning headings End of explanation """ dft = pd.read_table('test.csv', sep=',', header=None, names=['Type', 'LifeStyle', 'Vacation', 'eCredit', 'Salary', 'Property', 'Label']) dft.head() """ Explanation: reading the testing data into a data frame and assigning headings End of explanation """ df['Type'] = df.Type.map({'student':1,'engineer':2,'librarian':3,'professor':4,'doctor':5 }) # df.head() df['LifeStyle'] = df.LifeStyle.map({'spend<<saving':1, 'spend<saving':2, 'spend>saving':3, 'spend>>saving':4}) df['Label'] = df.Label.map({'C1':1, 'C2':2 ,'C3':3 ,'C4':4 ,'C5':5}) # df['Vacation']=df['Vacation']/100 df.head() """ Explanation: changing the strings in the train data into to numbers so that they can be used while calculating euclidean distance End of explanation """ dft['Type'] = dft.Type.map({'student':1,'engineer':2,'librarian':3,'professor':4,'doctor':5 }) # df.head() dft['LifeStyle'] = dft.LifeStyle.map({'spend<<saving':1, 'spend<saving':2, 'spend>saving':3, 'spend>>saving':4}) dft['Label'] = dft.Label.map({'C1':1, 'C2':2 ,'C3':3 ,'C4':4 ,'C5':5}) # df['Vacation']=df['Vacation']/100 dft.head() """ Explanation: changing the strings in the test data into to numbers so that they can be used while calculating euclidean distance End of explanation """ vacmaxval=0 vacminval=0 ecrmaxval=0 ecrminval=0 salmaxval=0 salminval=0 prpminval=0 prpmaxval=0 for attribute in list(df.columns.values): if attribute == 'Vacation' or attribute == 'eCredit' or attribute == 'Salary' or attribute == 'Property': if attribute == 'Vacation': vacmaxval=max(df[attribute]) vacminval=min(df[attribute]) elif attribute == 'eCredit': ecrmaxval=max(df[attribute]) ecrminval=min(df[attribute]) elif attribute == 'Salary': salmaxval=max(df[attribute]) salminval=min(df[attribute]) elif attribute == 'Property': prpmaxval=max(df[attribute]) prpminval=min(df[attribute]) maxValue = max(df[attribute]) minValue = min(df[attribute]) norm = [] for i in df[attribute]: normalisedValue = (i - minValue + 0.0)/(maxValue - minValue + 0.0) norm.append(normalisedValue) df[attribute] = norm df.head() # uncomment the below line to get a csv file of the dataframe #df.to_csv('sanitizedData.csv', sep=',', encoding='utf-8', header=False) """ Explanation: Normalizing the values in the Vacation,eCredit,Salary,Property so that all the values range in between 0 to 1.we use a formula to do this.we are going to capture min an max for the categories which should be normalized so that we can use them for test data normalization also End of explanation """ minValue=0 maxValue=0 for attribute in list(dft.columns.values): if attribute == 'Vacation' or attribute == 'eCredit' or attribute == 'Salary' or attribute == 'Property': norm = [] for i in dft[attribute]: if attribute == 'Vacation': minValue=vacminval maxValue=vacmaxval elif attribute == 'eCredit': minValue=ecrminval maxValue=ecrmaxval elif attribute == 'Salary': minValue=salmaxval maxValue=salminval elif attribute == 'Property': minValue=prpminval maxValue=prpmaxval 
normalisedValue = (i - minValue + 0.0)/(maxValue - minValue + 0.0) norm.append(normalisedValue) dft[attribute] = norm dft.head() """ Explanation: normalizing the testing data too End of explanation """ dataframe= df.astype(float).values.tolist() dataframet= dft.astype(float).values.tolist() """ Explanation: converting all the data too float into make calculations accurate End of explanation """ #print dataframe[:10] random.shuffle(dataframe) #print dataframe[:10] """ Explanation: we have to shuffle data so that we can see how good our own function is classifing the instances into clusters End of explanation """ def k_nearest(data,predict,k=5): #if(len(data)>=k): # warnings.warn('bye') distances =[] for group in data: for features in data[group]: euclidean_distance=0 if(predict[0]!=features[0]): euclidean_distance+=1 if(predict[1]!=features[1]): euclidean_distance+=1 euclidean_distance += ((predict[2]-features[2])**2 + (predict[3]-features[3])**2 +(predict[4]-features[4])**2 + (predict[5]-features[5])**2) #euclidean_distance = np.linalg.norm(np.array(features)-np.array(predict)) euclidean_distance=sqrt(euclidean_distance) distances.append([euclidean_distance,group]) votes = [i[1] for i in sorted(distances) [:k]] #print distances print votes #print (Counter(votes).most_common(1)) vote_result = Counter(votes).most_common(1)[0][0] #print vote_result return vote_result """ Explanation: developing our own k nearest algorithm with respective to our requirments, we use similarity matrix to take decision between then two parameters, and the rest we can go with euclidean distance. End of explanation """ test_size=0.2 train_set={1.0:[],2.0:[],3.0:[],4.0:[],5.0:[]} test_set={1.0:[],2.0:[],3.0:[],4.0:[],5.0:[]} test_setnew={1.0:[],2.0:[],3.0:[],4.0:[],5.0:[]} train_setnew={1.0:[],2.0:[],3.0:[],4.0:[],5.0:[]} train_data= dataframe[:-int(test_size*len(dataframe))] test_data= dataframe[-int(test_size*len(dataframe)):] train_datanew= dataframe[:int(1*len(dataframe))] test_datanew=dataframet[:int(1*len(dataframet))] #print test_set #print test_datanew """ Explanation: splitting the data with respective to ratio and segregating the train an test data into respective clustres End of explanation """ for i in train_data: train_set[i[-1]].append(i[:-1]) for i in test_data: test_set[i[-1]].append(i[:-1]) #print test_set for i in test_datanew: test_setnew[i[-1]].append(i[:-1]) for i in train_datanew: train_setnew[i[-1]].append(i[:-1]) #print test_set #print test_setnew """ Explanation: Removing the cluster names from the instances and adding them to lists. End of explanation """ correct=0.0 total=0.0 for group in test_set: for data in test_set[group]: vote= k_nearest(train_set,data,k=5) #print '*****' #print group #print vote #print '*****' if group==vote: correct+=1 total +=1 print correct print total print ('Accuracy:', correct/total) """ Explanation: Sending our slpitted test and train from the given train data in order to cross validadate End of explanation """ correct =0.0 total =0.0 for group in test_setnew: for data in test_setnew[group]: vote= k_nearest(train_set,data,k=5) print '*****' print group print vote #print '*****' if group==vote: correct+=1 total +=1 print correct print total print ('Accuracy:', (correct)/total) """ Explanation: Sending our actual testing data to find the accuracy our model End of explanation """
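# Optional cross-check of the hand-written classifier against scikit-learn (a sketch; it
# assumes scikit-learn is installed and simply reuses the normalised train/test lists above):
from sklearn.neighbors import KNeighborsClassifier
X_train_sk = [row[:-1] for row in dataframe]
y_train_sk = [row[-1] for row in dataframe]
X_test_sk = [row[:-1] for row in dataframet]
y_test_sk = [row[-1] for row in dataframet]
sk_knn = KNeighborsClassifier(n_neighbors=5)
sk_knn.fit(X_train_sk, y_train_sk)
print 'sklearn accuracy:', sk_knn.score(X_test_sk, y_test_sk)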
CAChemE/curso-python-datos
notebooks/010-NumPy-Intro.ipynb
bsd-3-clause
import numpy as np

# to check which version we have installed:
np.__version__
""" Explanation: Introduction to NumPy
So far we have seen the most basic data types that Python offers: integer, real, complex, boolean, list, tuple... But aren't you missing something? Exactly: arrays.
In this notebook we will dive into the NumPy package: we will learn how to create different arrays and how to operate on them.
What is an array?
An array is a block of memory that contains elements of the same type. Basically:
they remind us of vectors, matrices, tensors...
we can store the array under a name and access its elements through their indices.
they help manage memory efficiently and speed up computations.
| Index | 0 | 1 | 2 | 3 | ... | n-1 | n |
| ---------- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Value | 2.1 | 3.6 | 7.8 | 1.5 | ... | 5.4 | 6.3 |
What do we usually store in arrays?
Vectors and matrices.
Experimental data:
at different discrete instants in time.
at different points in space.
The result of evaluating functions on the data above.
Discretizations for algorithms such as: integration, differentiation, interpolation...
...
What is NumPy?
NumPy is a fundamental package for scientific programming that provides an array-type object to store data efficiently, together with a collection of functions to operate on and manipulate that data.
To use NumPy, the first thing we have to do is import it:
End of explanation """
import numpy as np

# One-dimensional array
mi_primer_array = np.array([1, 2, 3, 4])
mi_primer_array

# We can also use print
print(mi_primer_array)

# Check the type of mi_primer_array
type(mi_primer_array)

# Check the type of data it contains
mi_primer_array.dtype
""" Explanation: Our first array
Didn't we say Python was easy? Well then, let's create our first arrays:
End of explanation """
# Two-dimensional array
mi_segundo_array = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
""" Explanation: One-dimensional arrays are created by passing a list as the argument to the np.array function. To create a two-dimensional array we pass it a list of lists:
End of explanation """
mi_segundo_array = np.array([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
])
""" Explanation: <div class="alert alert-info">We can continue on the next line using `\`, but it is not necessary inside parentheses or brackets</div>
This would be a good way to define it, according to PEP 8 (indentation):
End of explanation """
# Sum
np.sum(mi_primer_array)

# Maximum
np.max(mi_primer_array)

# Sine
np.sin(mi_segundo_array)
""" Explanation: NumPy functions and constants
We said that NumPy also includes functions. A simple example:
End of explanation """
np.pi, np.e
""" Explanation: And some constants that we might need:
End of explanation """
# In one dimension
np.zeros(100)

# In two dimensions
np.zeros([10,10])
""" Explanation: Functions for creating arrays
Too much theory? Let's get practical. We have already seen that the np.array() function lets us create arrays with values that we enter manually through lists. Later on, we will learn how to read files and store them in arrays. In the meantime, what else might we need?
An array of zeros
End of explanation """
np.empty(10)
""" Explanation: <div class="alert alert-info"><strong>Note:</strong> In the 1D case both `np.zeros([5])` and `np.zeros(5)` (without the brackets) are valid, but this will not be the case for nD </div>
An "empty" array
End of explanation """
np.ones([3, 2])
""" Explanation: <div class="alert alert-error"><strong>Important:</strong> The empty array is created slightly faster than the array of zeros. However, the value of its elements will be arbitrary and will depend on the state of the memory. If you use it, make sure you fill in all of its elements properly afterwards, because otherwise you could introduce erroneous results. </div>
An array of ones
End of explanation """
np.identity(4)
""" Explanation: <div class="alert alert-info"><strong>Note:</strong> Other very useful functions are `np.zeros_like` and `np.ones_like`. Use the help to see what they do if you need to. </div>
The identity array
End of explanation """
a = np.arange(0, 5)
a
""" Explanation: <div class="alert alert-info"><strong>Note:</strong> You can also try `np.eye()` and `np.diag()`. </div>
Ranges
np.arange
NumPy, give me an array that goes from 0 to 5:
End of explanation """
np.arange(0, 11, 3)
""" Explanation: Look closely at the previous result. Is there something you should burn into your memory forever?
The last element is not 5 but 4
NumPy, give me an array that goes from 0 to 10, in steps of 3:
End of explanation """
np.linspace(0, 10, 21)
""" Explanation: np.linspace
If you have ever had to use MATLAB, this will surely look familiar:
End of explanation """
a = np.arange(1, 10)
M = np.reshape(a, [3, 3])
M

# It also works as a method
N = a.reshape([3,3])
N
""" Explanation: In this case the last element is indeed included.
<div class="alert alert-info"><strong>Note:</strong> You can also try `np.logspace()` </div>
reshape
With np.arange() it is possible to create "vectors" whose elements take consecutive or equally spaced values, as we saw earlier. Can we do the same with "matrices"? Yes, but not with a single function. Imagine you want to create something like this:
\begin{pmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9\\ \end{pmatrix}
We will start by creating a 1D array with the values $(1,2,3,4,5,6,7,8,9)$ using np.arange(). Then we will reshape it into a 2D array with np.reshape(array, (dim0, dim1)).
End of explanation """
# create an array and add a number to it
arr = np.arange(11)
arr + 55

# multiply it by a number
arr * 2

# square it
arr ** 2

# evaluate a function on it
np.tanh(arr)
""" Explanation: <div class="alert alert-info"><strong>Note:</strong> We will not go too deeply into what methods are, but you should know that they are associated with object-oriented programming and that in Python everything is an object. Think of them as special functions in which the most important argument (the one the action is performed on) is written first, followed by a dot. For example: `<object>.method(arguments)` </div>
Operations
Element-wise operations
Now that hardly anything about arrays escapes us, let's try a few operations.
The behaviour is the usual one in FORTRAN and MATLAB, so there is little to add:
End of explanation """
# we create two arrays
arr1 = np.arange(0, 11)
arr2 = np.arange(20, 31)

# we add them
arr1 + arr2

# we multiply them
arr1 * arr2
""" Explanation: <div class="alert alert-info"><strong>Exercise:</strong> You can try to compare the difference in time between performing the operation in one block, as we do here, and performing it element by element by looping over the array. </div>
If the operations involve two arrays, they are also performed element-wise
End of explanation """
# >, <
arr1 > arr2

# ==
arr1 == arr2  # careful! the arrays are of integers, not floats
""" Explanation: Comparisons
End of explanation """
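These boolean arrays are more useful than they look: they can be used directly as masks to select elements (standard NumPy "boolean indexing"). The short sketch below reuses the arr1 and arr2 arrays defined above.
# boolean arrays can be used directly as masks
mask = arr1 > 5
arr1[mask]           # keeps only the elements of arr1 greater than 5

# or, in a single step
arr2[arr1 % 2 == 0]  # elements of arr2 at the positions where arr1 is even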
GoogleCloudPlatform/ml-design-patterns
03_problem_representation/reframing.ipynb
apache-2.0
import numpy as np import seaborn as sns from google.cloud import bigquery import matplotlib as plt %matplotlib inline bq = bigquery.Client() query = """ SELECT weight_pounds, is_male, gestation_weeks, mother_age, plurality, mother_race FROM `bigquery-public-data.samples.natality` WHERE weight_pounds IS NOT NULL AND is_male = true AND gestation_weeks = 38 AND mother_age = 28 AND mother_race = 1 AND plurality = 1 AND RAND() < 0.01 """ df = bq.query(query).to_dataframe() df.head() fig = sns.distplot(df[["weight_pounds"]]) fig.set_title("Distribution of baby weight") fig.set_xlabel("weight_pounds") fig.figure.savefig("weight_distrib.png") #average weight_pounds for this cross section np.mean(df.weight_pounds) np.std(df.weight_pounds) weeks = 36 age = 28 query = """ SELECT weight_pounds, is_male, gestation_weeks, mother_age, plurality, mother_race FROM `bigquery-public-data.samples.natality` WHERE weight_pounds IS NOT NULL AND is_male = true AND gestation_weeks = {} AND mother_age = {} AND mother_race = 1 AND plurality = 1 AND RAND() < 0.01 """.format(weeks, age) df = bq.query(query).to_dataframe() print('weeks={} age={} mean={} stddev={}'.format(weeks, age, np.mean(df.weight_pounds), np.std(df.weight_pounds))) """ Explanation: Reframing Design Pattern The Reframing design pattern refers to changing the representation of the output of a machine learning problem. For example, we could take something that is intuitively a regression problem and instead pose it as a classification problem (and vice versa). Let's look at the natality dataset. Notice that for a given set of inputs, the weight_pounds (the label) can take many different values. End of explanation """ import os import numpy as np import pandas as pd import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.keras.utils import to_categorical from tensorflow import keras from tensorflow import feature_column as fc from tensorflow.keras import layers, models, Model %matplotlib inline df = pd.read_csv("./data/babyweight_train.csv") """ Explanation: Comparing categorical label and regression Since baby weight is a positive real value, this is intuitively a regression problem. However, we can train the model as a multi-class classification by bucketizing the output label. At inference time, the model then predicts a collection of probabilities corresponding to these potential outputs. Let's do both and see how they compare. End of explanation """ # prepare inputs df.is_male = df.is_male.astype(str) df.mother_race.fillna(0, inplace = True) df.mother_race = df.mother_race.astype(str) # create categorical label def categorical_weight(weight_pounds): if weight_pounds < 3.31: return 0 elif weight_pounds >= 3.31 and weight_pounds < 5.5: return 1 elif weight_pounds >= 5.5 and weight_pounds < 8.8: return 2 else: return 3 df["weight_category"] = df.weight_pounds.apply(lambda x: categorical_weight(x)) df.head() def encode_labels(classes): one_hots = to_categorical(classes) return one_hots FEATURES = ['is_male', 'mother_age', 'plurality', 'gestation_weeks', 'mother_race'] LABEL_CLS = ['weight_category'] LABEL_REG = ['weight_pounds'] N_TRAIN = int(df.shape[0] * 0.80) X_train = df[FEATURES][:N_TRAIN] X_valid = df[FEATURES][N_TRAIN:] y_train_cls = encode_labels(df[LABEL_CLS][:N_TRAIN]) y_train_reg = df[LABEL_REG][:N_TRAIN] y_valid_cls = encode_labels(df[LABEL_CLS][N_TRAIN:]) y_valid_reg = df[LABEL_REG][N_TRAIN:] """ Explanation: We'll use the same features for both models. 
But we need to create a categorical weight label for the classification model. End of explanation """ # train/validation dataset for classification model cls_train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train_cls)) cls_valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid_cls)) # train/validation dataset for regression model reg_train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train_reg.values)) reg_valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid_reg.values)) # Examine the two datasets. Notice the different label values. for data_type in [cls_train_data, reg_train_data]: for dict_slice in data_type.take(1): print("{}\n".format(dict_slice)) # create feature columns to handle categorical variables numeric_columns = [fc.numeric_column("mother_age"), fc.numeric_column("gestation_weeks")] CATEGORIES = { 'plurality': list(df.plurality.unique()), 'is_male' : list(df.is_male.unique()), 'mother_race': list(df.mother_race.unique()) } categorical_columns = [] for feature, vocab in CATEGORIES.items(): cat_col = fc.categorical_column_with_vocabulary_list( key=feature, vocabulary_list=vocab, dtype=tf.string) categorical_columns.append(fc.indicator_column(cat_col)) # create Inputs for model inputs = {colname: tf.keras.layers.Input( name=colname, shape=(), dtype="float32") for colname in ["mother_age", "gestation_weeks"]} inputs.update({colname: tf.keras.layers.Input( name=colname, shape=(), dtype=tf.string) for colname in ["plurality", "is_male", "mother_race"]}) # build DenseFeatures for the model dnn_inputs = layers.DenseFeatures(categorical_columns+numeric_columns)(inputs) # create hidden layers h1 = layers.Dense(20, activation="relu")(dnn_inputs) h2 = layers.Dense(10, activation="relu")(h1) # create classification model cls_output = layers.Dense(4, activation="softmax")(h2) cls_model = tf.keras.models.Model(inputs=inputs, outputs=cls_output) cls_model.compile(optimizer='adam', loss=tf.keras.losses.CategoricalCrossentropy(), metrics=['accuracy']) # create regression model reg_output = layers.Dense(1, activation="relu")(h2) reg_model = tf.keras.models.Model(inputs=inputs, outputs=reg_output) reg_model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError(), metrics=['mse']) """ Explanation: Create tf.data datsets for both classification and regression. End of explanation """ # train the classifcation model cls_model.fit(cls_train_data.batch(50), epochs=1) val_loss, val_accuracy = cls_model.evaluate(cls_valid_data.batch(X_valid.shape[0])) print("Validation accuracy for classifcation model: {}".format(val_accuracy)) """ Explanation: First, train the classification model and examine the validation accuracy. End of explanation """ # train the classifcation model reg_model.fit(reg_train_data.batch(50), epochs=1) val_loss, val_mse = reg_model.evaluate(reg_valid_data.batch(X_valid.shape[0])) print("Validation RMSE for regression model: {}".format(val_mse**0.5)) """ Explanation: Next, we'll train the regression model and examine the validation RMSE. End of explanation """ preds = reg_model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]), "is_male": tf.convert_to_tensor(["True"]), "mother_age": tf.convert_to_tensor([28]), "mother_race": tf.convert_to_tensor(["1.0"]), "plurality": tf.convert_to_tensor(["Single(1)"])}, steps=1).squeeze() preds """ Explanation: The regression model gives a single numeric prediction of baby weight. 
End of explanation """ preds = cls_model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]), "is_male": tf.convert_to_tensor(["True"]), "mother_age": tf.convert_to_tensor([28]), "mother_race": tf.convert_to_tensor(["1.0"]), "plurality": tf.convert_to_tensor(["Single(1)"])}, steps=1).squeeze() preds objects = ('very_low', 'low', 'average', 'high') y_pos = np.arange(len(objects)) predictions = list(preds) plt.bar(y_pos, predictions, align='center', alpha=0.5) plt.xticks(y_pos, objects) plt.title('Baby weight prediction') plt.show() """ Explanation: The classification model predicts a probability for each bucket of values. End of explanation """ # Read in the data and preprocess df = pd.read_csv("./data/babyweight_train.csv") # prepare inputs df.is_male = df.is_male.astype(str) df.mother_race.fillna(0, inplace = True) df.mother_race = df.mother_race.astype(str) # create categorical label MIN = np.min(df.weight_pounds) MAX = np.max(df.weight_pounds) NBUCKETS = 50 def categorical_weight(weight_pounds, weight_min, weight_max, nbuckets=10): buckets = np.linspace(weight_min, weight_max, nbuckets) return np.digitize(weight_pounds, buckets) - 1 df["weight_category"] = df.weight_pounds.apply(lambda x: categorical_weight(x, MIN, MAX, NBUCKETS)) def encode_labels(classes): one_hots = to_categorical(classes) return one_hots FEATURES = ['is_male', 'mother_age', 'plurality', 'gestation_weeks', 'mother_race'] LABEL_COLUMN = ['weight_category'] N_TRAIN = int(df.shape[0] * 0.80) X_train, y_train = df[FEATURES][:N_TRAIN], encode_labels(df[LABEL_COLUMN][:N_TRAIN]) X_valid, y_valid = df[FEATURES][N_TRAIN:], encode_labels(df[LABEL_COLUMN][N_TRAIN:]) # create the training dataset train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train)) valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid)) """ Explanation: Increasing the number of categorical labels We'll generalize the code above to accommodate N label buckets, instead of just 4. End of explanation """ # create feature columns to handle categorical variables numeric_columns = [fc.numeric_column("mother_age"), fc.numeric_column("gestation_weeks")] CATEGORIES = { 'plurality': list(df.plurality.unique()), 'is_male' : list(df.is_male.unique()), 'mother_race': list(df.mother_race.unique()) } categorical_columns = [] for feature, vocab in CATEGORIES.items(): cat_col = fc.categorical_column_with_vocabulary_list( key=feature, vocabulary_list=vocab, dtype=tf.string) categorical_columns.append(fc.indicator_column(cat_col)) # create Inputs for model inputs = {colname: tf.keras.layers.Input( name=colname, shape=(), dtype="float32") for colname in ["mother_age", "gestation_weeks"]} inputs.update({colname: tf.keras.layers.Input( name=colname, shape=(), dtype=tf.string) for colname in ["plurality", "is_male", "mother_race"]}) # build DenseFeatures for the model dnn_inputs = layers.DenseFeatures(categorical_columns+numeric_columns)(inputs) # model h1 = layers.Dense(20, activation="relu")(dnn_inputs) h2 = layers.Dense(10, activation="relu")(h1) output = layers.Dense(NBUCKETS, activation="softmax")(h2) model = tf.keras.models.Model(inputs=inputs, outputs=output) model.compile(optimizer='adam', loss=tf.keras.losses.CategoricalCrossentropy(), metrics=['accuracy']) # train the model model.fit(train_data.batch(50), epochs=1) """ Explanation: Create the feature columns and build the model. 
End of explanation """ preds = model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]), "is_male": tf.convert_to_tensor(["True"]), "mother_age": tf.convert_to_tensor([28]), "mother_race": tf.convert_to_tensor(["1.0"]), "plurality": tf.convert_to_tensor(["Single(1)"])}, steps=1).squeeze() objects = [str(_) for _ in range(NBUCKETS)] y_pos = np.arange(len(objects)) predictions = list(preds) plt.bar(y_pos, predictions, align='center', alpha=0.5) plt.xticks(y_pos, objects) plt.title('Baby weight prediction') plt.show() """ Explanation: Make a prediction on the example above. End of explanation """ import numpy as np import tensorflow as tf from tensorflow import keras MIN_Y = 3 MAX_Y = 20 input_size = 10 inputs = keras.layers.Input(shape=(input_size,)) h1 = keras.layers.Dense(20, 'relu')(inputs) h2 = keras.layers.Dense(1, 'sigmoid')(h1) # 0-1 range output = keras.layers.Lambda(lambda y : (y*(MAX_Y-MIN_Y) + MIN_Y))(h2) # scaled model = keras.Model(inputs, output) # fit the model model.compile(optimizer='adam', loss='mse') batch_size = 2048 for i in range(0, 10): x = np.random.rand(batch_size, input_size) y = 0.5*(x[:,0] + x[:,1]) * (MAX_Y-MIN_Y) + MIN_Y model.fit(x, y) # verify min_y = np.finfo(np.float64).max max_y = np.finfo(np.float64).min for i in range(0, 10): x = np.random.randn(batch_size, input_size) y = model.predict(x) min_y = min(y.min(), min_y) max_y = max(y.max(), max_y) print('min={} max={}'.format(min_y, max_y)) """ Explanation: Restricting the prediction range One way to restrict the prediction range is to make the last-but-one activation function sigmoid instead, and add a lambda layer to scale the (0,1) values to the desired range. The drawback is that it will be difficult for the neural network to reach the extreme values. End of explanation """
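If a single numeric prediction is still wanted from the bucketized classifier, the probability vector can be collapsed into a probability-weighted average of approximate bucket centers, recovering a point estimate while keeping the classification framing. This is only a sketch: it assumes preds, MIN, MAX and NBUCKETS from the 50-bucket model above are still defined, and the bucket centers are approximate.
# Collapse the softmax output back to one number: the expected weight under the predicted distribution.
bucket_edges = np.linspace(MIN, MAX, NBUCKETS)       # same edges used by categorical_weight
bucket_width = bucket_edges[1] - bucket_edges[0]
bucket_centers = bucket_edges + bucket_width / 2.0   # rough center of each bucket
point_estimate = float(np.sum(preds * bucket_centers))
print("Probability-weighted weight estimate: {:.2f} pounds".format(point_estimate))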
gee-community/gee_tools
notebooks/date/since_epoch.ipynb
mit
date_band = tools.date.getDateBand(test_image, 'day')
ui.eprint(date_band)
""" Explanation: get_date_band
Get the date of an image, compute how many units (for example, days) have elapsed since the epoch (1970-01-01), and set that value both as a band (called date) and as a property (called unit_since_epoch, for example day_since_epoch)
End of explanation """
image_date = date_band.get('day_since_epoch')
date_since_epoch = tools.date.dateSinceEpoch(image_date)
ui.eprint(date_since_epoch)
""" Explanation: date_since_epoch
Given an elapsed time since the epoch (for example, the result of get_date_band), compute which date it corresponds to
End of explanation """
date = ee.Date('2000-01-02')
days = tools.date.unitSinceEpoch(date)
ui.eprint(days)
""" Explanation: unit_since_epoch
Return the number of units (for example, days) since the epoch (1970-01-01)
End of explanation """
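The same "days since the epoch" number can be reproduced with plain Python's datetime module, independently of Earth Engine, which gives a quick local sanity check for the value printed above.
import datetime

epoch = datetime.date(1970, 1, 1)
date = datetime.date(2000, 1, 2)
days_since_epoch = (date - epoch).days
print(days_since_epoch)  # 10958 whole days between 1970-01-01 and 2000-01-02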
dietmarw/EK5312_ElectricalMachines
Chapman/Ch2-Problem_2-05.ipynb
unlicense
%pylab notebook %precision 4 from scipy import constants as c # we like to use some constants """ Explanation: Excercises Electric Machinery Fundamentals Chapter 2 Problem 2-5 End of explanation """ #60Hz side (North America) Vrms60 = 120 # [V] freq60 = 60 # [Hz] #50Hz side (Europe) Vrms50 = 240 # [V] freq50 = 50 # [Hz] """ Explanation: Description When travelers from the USA and Canada visit Europe, they encounter a different power distribution system. Wall voltages in North America are 120 V rms at 60 Hz, while typical wall voltages in Europe are 230 V at 50 Hz, which means: End of explanation """ S = 1000 # Apparent power (VA) NP60 = 500 # Primary turns at 115V side NP50 = 1000 # Primary turns at 230V side """ Explanation: Many travelers carry small step-up / step-down transformers so that they can use their appliances in the countries that they are visiting. A typical transformer might be rated at 1-kVA and 115/230 V. It has 500 turns of wire on the 115-V side and 1000 turns of wire on the 230-V side, through which it's known that: End of explanation """ #Load the magnetization curve data import pandas as pd # The data file is stored in the repository fileUrl = 'data/p22_mag.dat' data = pd.read_csv(fileUrl, # the address where to download the datafile from sep=' ', # our data source uses a blank space as separation comment='%', # ignore lines starting with a "%" skipinitialspace = True, # ignore intital spaces header=None, # we don't have a header line defined... names=['mmf_data', 'flux_data'] # ...instead we define the names here ) """ Explanation: The magnetization curve for this transformer is shown in Figure P2-2, and can be found in p22_mag.dat at this book's Web site. <img src="figs/FigC_P2-2.jpg" width="100%"> End of explanation """ w60 = 2 * pi * freq60 print('w = {:.4f} rad/s'.format(w60)) """ Explanation: (a) Suppose that this transformer is connected to a 120-V, 60 Hz power source with no load connected to the 240-V side. Sketch the magnetization current that would flow in the transformer. What is the rms amplitude of the magnetization current? What percentage of full-load current is the magnetization current? (b) Now suppose that this transformer is connected to a 240-V, 50 Hz power source with no load connected to the 120-V side. Sketch the magnetization current that would flow in the transformer. What is the rms amplitude of the magnetization current? What percentage of full-load current is the magnetization current? (c) In which case is the magnetization current a higher percentage of full-load current? Why? 
SOLUTION (a) When this transformer is connected to a 120-V 60 Hz source, the flux in the core will be given by the equation $$\phi(t) = - \frac{V_M}{\omega N_P}\cos(\omega t)$$ Calculate the angular velocity $\omega$: End of explanation """ VM60 = Vrms60 * sqrt(2) print('VM = {:.4f} V'.format(VM60) ) """ Explanation: Calculate the maximum voltage $V_M$: End of explanation """ time = linspace(0, 1./30, 100) # 0 to 1/30 sec flux60 = -VM60 / (w60 * NP60) * cos(w60 * time) """ Explanation: Calculate flux versus time $\phi(t)$ (saved as a vector): End of explanation """ mmf60 = interp(flux60, data['flux_data'], data['mmf_data']) """ Explanation: The magnetization current required for a given flux $\phi(t)$ can be found from Figure P2-2 or from the equivalent table in file p22_mag.dat by using the interpolation function: End of explanation """ im60 = mmf60 / NP60 """ Explanation: Calculate the magnetization current $i_m$: End of explanation """ irms60 = sqrt(sum(im60**2) / im60.size) print('The rms current at 120 V and 60 Hz is {:.4f} A'.format(irms60)) """ Explanation: Calculate the rms value of the current $i_\text{rms}$: End of explanation """ i_fl60 = S / Vrms60 """ Explanation: Calculate the full-load current: End of explanation """ percnt60 = irms60 / i_fl60 * 100 print('The magnetization current is {:.3f}% of full-load current.'.format(percnt60)) """ Explanation: Calculate the percentage of full-load current: End of explanation """ rc('text', usetex=True) # enable LaTeX commands for plot title(r'\bf Magnetization current at 60 Hz') xlabel(r'\bf Time (s)') ylabel(r'$\mathbf{I_m}$ \textbf{(A)}') axis([0,0.04,-0.5,0.5]) #set the axis range plot(time,im60) legend(('$60 Hz,\,I_{{RMS}} = {:.3f}\,A$'.format(irms60),), loc=4); grid() """ Explanation: Sketch the magnetization current $i_m$ that would flow in the transformer: End of explanation """ w50 = 2 * pi * freq50 print('w = {:.4f} rad/s'.format(w50) ) """ Explanation: (b) When this transformer is connected to a 240-V 50 Hz source, the flux in the core will be given by the equation $$\phi(t) = - \frac{V_M}{\omega N_S}\cos(\omega t)$$ Calculate the angular velocity $\omega$: End of explanation """ VM50 = Vrms50 * sqrt(2) print('VM = {:.4f} V'.format(VM50) ) """ Explanation: Calculate the maximum voltage $\text{V}_\text{M}$: End of explanation """ time = linspace(0, 1.0/25, 100) # 0 to 1/25 sec flux50 = -VM50 / (w50 * NP50) * cos(w50 * time) """ Explanation: Calculate flux versus time $\phi(t)$ (saved as a vector): End of explanation """ mmf50 = interp(flux50, data['flux_data'], data['mmf_data']) """ Explanation: The magnetization current required for a given flux $\phi(t)$ can be found from Figure P2-2 or from the equivalent table in file p22_mag.dat by using the interpolation function: End of explanation """ im50 = mmf50 / NP50 """ Explanation: Calculate the magnetization current $\text{i}_\text{m}$: End of explanation """ irms50 = sqrt(sum(im50**2) / im50.size) print('The rms current at 120 V and 50 Hz is {:.5f} A'.format(irms50)) """ Explanation: Calculate the rms value of the current $i_\text{rms}$: End of explanation """ i_fl50 = S / Vrms50 """ Explanation: Calculate the full-load current: End of explanation """ percnt50 = irms50 / i_fl50 * 100 print('The magnetization current is {:.3f}% of full-load current.'.format(percnt50)) """ Explanation: Calculate the percentage of full-load current: End of explanation """ rc('text', usetex=True) # enable LaTeX commands for plot title(r'\bf Magnetization current at 50 Hz') xlabel(r'\bf Time (s)') 
ylabel(r'$\mathbf{I_m}$ \textbf{(A)}') axis([0,0.04,-0.5,0.5]) #set the axis range plot(time,im50) legend(('$50 Hz,\,I_{{RMS}} = {:.3f} A$'.format(irms50),), loc=4); grid() """ Explanation: Sketch the magnetization current $i_m$ that would flow in the transformer: End of explanation """
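(c) The comparison can be read directly off the quantities computed above: the peak core flux is $V_M/(\omega N_P)$, and the 50 Hz connection drives the core to a higher peak flux, pushing it further into saturation, so the magnetization current is a larger fraction of the full-load current in that case. A possible closing cell, reusing the variables already defined in parts (a) and (b):
# Compare the peak flux and the magnetization current (as a percentage of full load) for the two cases
flux_max_60 = VM60 / (w60 * NP60)
flux_max_50 = VM50 / (w50 * NP50)
print('Peak flux at 60 Hz: {:.5f} Wb'.format(flux_max_60))
print('Peak flux at 50 Hz: {:.5f} Wb'.format(flux_max_50))
print('Magnetization current: {:.3f}% of full load at 60 Hz vs {:.3f}% at 50 Hz'.format(percnt60, percnt50))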
mne-tools/mne-tools.github.io
0.18/_downloads/7bb2e6f1056f5cae3a98ccc12aac266f/plot_eeg_no_mri.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr> # Joan Massich <mailsik@gmail.com> # # License: BSD Style. import os.path as op import mne from mne.datasets import eegbci from mne.datasets import fetch_fsaverage # Download fsaverage files fs_dir = fetch_fsaverage(verbose=True) subjects_dir = op.dirname(fs_dir) # The files live in: subject = 'fsaverage' trans = op.join(fs_dir, 'bem', 'fsaverage-trans.fif') src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif') bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif') """ Explanation: EEG forward operator with a template MRI This tutorial explains how to compute the forward operator from EEG data using the standard template MRI subject fsaverage. .. important:: Source reconstruction without an individual T1 MRI from the subject will be less accurate. Do not over interpret activity locations which can be off by multiple centimeters. <div class="alert alert-info"><h4>Note</h4><p>`plot_montage` show all the standard montages in MNE-Python.</p></div> :depth: 2 End of explanation """ raw_fname, = eegbci.load_data(subject=1, runs=[6]) raw = mne.io.read_raw_edf(raw_fname, preload=True) # Clean channel names to be able to use a standard 1005 montage ch_names = [c.replace('.', '') for c in raw.ch_names] raw.rename_channels({old: new for old, new in zip(raw.ch_names, ch_names)}) # Read and set the EEG electrode locations montage = mne.channels.read_montage('standard_1005', ch_names=raw.ch_names, transform=True) raw.set_montage(montage) raw.set_eeg_reference(projection=True) # needed for inverse modeling # Check that the locations of EEG electrodes is correct with respect to MRI mne.viz.plot_alignment( raw.info, src=src, eeg=['original', 'projected'], trans=trans, dig=True) """ Explanation: Load the data We use here EEG data from the BCI dataset. End of explanation """ fwd = mne.make_forward_solution(raw.info, trans=trans, src=src, bem=bem, eeg=True, mindist=5.0, n_jobs=1) print(fwd) # for illustration purposes use fwd to compute the sensitivity map eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed') eeg_map.plot(time_label='EEG sensitivity', subjects_dir=subjects_dir, clim=dict(lims=[5, 50, 100])) """ Explanation: Setup source space and compute forward End of explanation """
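If the forward operator computed above is going to be reused later (for example, for inverse modeling), it can be written to disk so the BEM-based computation does not have to be repeated. This is an optional step, and the output file name below is only a placeholder.
# Persist the forward solution for later reuse (file name is arbitrary; MNE expects the -fwd.fif suffix)
fwd_fname = 'fsaverage-eeg-fwd.fif'
mne.write_forward_solution(fwd_fname, fwd, overwrite=True)

# It can be read back later with:
# fwd = mne.read_forward_solution(fwd_fname)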
statsmodels/statsmodels.github.io
v0.13.1/examples/notebooks/generated/statespace_sarimax_faq.ipynb
bsd-3-clause
%matplotlib inline import numpy as np import pandas as pd rng = np.random.default_rng(20210819) eta = rng.standard_normal(5200) rho = 0.8 beta = 10 epsilon = eta.copy() for i in range(1, eta.shape[0]): epsilon[i] = rho * epsilon[i - 1] + eta[i] y = beta + epsilon y = y[200:] from statsmodels.tsa.api import SARIMAX, AutoReg from statsmodels.tsa.arima.model import ARIMA """ Explanation: SARIMAX and ARIMA: Frequently Asked Questions (FAQ) This notebook contains explanations for frequently asked questions. Comparing trends and exogenous variables in SARIMAX, ARIMA and AutoReg Reconstructing residuals, fitted values and forecasts in SARIMAX and ARIMA Initial residuals in SARIMAX and ARIMA Comparing trends and exogenous variables in SARIMAX, ARIMA and AutoReg ARIMA are formally OLS with ARMA errors. A basic AR(1) in the OLS with ARMA errors is described as $$ \begin{align} Y_t & = \delta + \epsilon_t \ \epsilon_t & = \rho \epsilon_{t-1} + \eta_t \ \eta_t & \sim WN(0,\sigma^2) \ \end{align} $$ In large samples, $\hat{\delta}\stackrel{p}{\rightarrow} E[Y]$. SARIMAX uses a different representation, so that the model when estimated using SARIMAX is $$ \begin{align} Y_t & = \phi + \rho Y_{t-1} + \eta_t \ \eta_t & \sim WN(0,\sigma^2) \ \end{align} $$ This is the same representation that is used when the model is estimated using OLS (AutoReg). In large samples, $\hat{\phi}\stackrel{p}{\rightarrow} EY$. In the next cell, we simulate a large sample and verify that these relationship hold in practice. End of explanation """ ar0_res = SARIMAX(y, order=(0, 0, 0), trend="c").fit() sarimax_res = SARIMAX(y, order=(1, 0, 0), trend="c").fit() arima_res = ARIMA(y, order=(1, 0, 0), trend="c").fit() autoreg_res = AutoReg(y, 1, trend="c").fit() """ Explanation: The three models are specified and estimated in the next cell. An AR(0) is included as a reference. The AR(0) is identical using all three estimators. End of explanation """ intercept = [ ar0_res.params[0], sarimax_res.params[0], arima_res.params[0], autoreg_res.params[0], ] rho_hat = [0] + [r.params[1] for r in (sarimax_res, arima_res, autoreg_res)] long_run = [ ar0_res.params[0], sarimax_res.params[0] / (1 - sarimax_res.params[1]), arima_res.params[0], autoreg_res.params[0] / (1 - autoreg_res.params[1]), ] cols = ["AR(0)", "SARIMAX", "ARIMA", "AutoReg"] pd.DataFrame( [intercept, rho_hat, long_run], columns=cols, index=["delta-or-phi", "rho", "long-run mean"], ) """ Explanation: The table below contains the estimated parameter in the model, the estimated AR(1) coefficient, and the long-run mean which is either equal to the estimated parameters (AR(0) or ARIMA), or depends on the ratio of the intercept to 1 minus the AR(1) parameter. End of explanation """ sarimax_exog_res = SARIMAX(y, exog=np.ones_like(y), order=(1, 0, 0), trend="n").fit() print(sarimax_exog_res.summary()) """ Explanation: Differences between trend and exog in SARIMAX When SARIMAX includes exog variables, then the exog are treated as OLS regressors, so that the model estimated is $$ \begin{align} Y_t - X_t \beta & = \delta + \rho (Y_{t-1} - X_{t-1}\beta) + \eta_t \ \eta_t & \sim WN(0,\sigma^2) \ \end{align} $$ In the next example, we omit the trend and instead include a column of 1, which produces a model that is equivalent, in large samples, to the case with no exogenous regressor and trend="c". Here the estimated value of const matches the value estimated using ARIMA. This happens since both exog in SARIMAX and the trend in ARIMA are treated as linear regression models with ARMA errors. 
End of explanation """ full_x = rng.standard_normal(eta.shape) x = full_x[200:] y += 3 * x sarimax_exog_res = SARIMAX(y, exog=x, order=(1, 0, 0), trend="c").fit() arima_exog_res = ARIMA(y, exog=x, order=(1, 0, 0), trend="c").fit() """ Explanation: Using exog in SARIMAX and ARIMA While exog are treated the same in both models, the intercept continues to differ. Below we add an exogenous regressor to y and then fit the model using all three methods. The data generating process is now $$ \begin{align} Y_t & = \delta + X_t \beta + \epsilon_t \ \epsilon_t & = \rho \epsilon_{t-1} + \eta_t \ \eta_t & \sim WN(0,\sigma^2) \ \end{align} $$ End of explanation """ def print_params(s): from io import StringIO return pd.read_csv(StringIO(s.tables[1].as_csv()), index_col=0) print_params(sarimax_exog_res.summary()) """ Explanation: Examining the parameter tables, we see that the parameter estimates on x1 are identical while the estimates of the intercept continue to differ due to the differences in the treatment of trends in these estimators. SARIMAX End of explanation """ print_params(arima_exog_res.summary()) """ Explanation: ARIMA End of explanation """ autoreg_exog_res = AutoReg(y, 1, exog=x, trend="c").fit() print_params(autoreg_exog_res.summary()) """ Explanation: exog in AutoReg When using AutoReg to estimate a model using OLS, the model differs from both SARIMAX and ARIMA. The AutoReg specification with exogenous variables is $$ \begin{align} Y_t & = \phi + \rho Y_{t-1} + X_{t}\beta + \eta_t \ \eta_t & \sim WN(0,\sigma^2) \ \end{align} $$ This specification is not equivalent to the specification estimated in SARIMAX and ARIMA. Here the difference is non-trivial, and naive estimation on the same time series results in different parameter values, even in large samples (and the limit). Estimating this model changes the parameter estimates on the AR(1) coefficient. AutoReg End of explanation """ y = beta + eta epsilon = eta.copy() for i in range(1, eta.shape[0]): y[i] = beta * (1 - rho) + rho * y[i - 1] + 3 * full_x[i] + eta[i] y = y[200:] """ Explanation: The key difference can be seen by writing the model in lag operator notation. $$ \begin{align} (1-\phi L ) Y_t & = X_{t}\beta + \eta_t \Rightarrow \ Y_t & = (1-\phi L )^{-1}\left(X_{t}\beta + \eta_t\right) \ Y_t & = \sum_{i=0}^{\infty} \phi^i \left(X_{t-i}\beta + \eta_{t-i}\right) \end{align} $$ where it is is assumed that $|\phi|<1$. Here we see that $Y_t$ depends on all lagged values of $X_t$ and $\eta_t$. This differs from the specification estimated by SARIMAX and ARIMA, which can be seen to be $$ \begin{align} Y_t - X_t \beta & = \delta + \rho (Y_{t-1} - X_{t-1}\beta) + \eta_t \ \left(1-\rho L \right)\left(Y_t - X_t \beta\right) & = \delta + \eta_t \ Y_t - X_t \beta & = \frac{\delta}{1-\rho} + \left(1-\rho L \right)^{-1}\eta_t \ Y_t - X_t \beta & = \frac{\delta}{1-\rho} + \sum_{i=0}^\infty \rho^i \eta_{t-i} \ Y_t & = \frac{\delta}{1-\rho} + X_t \beta + \sum_{i=0}^\infty \rho^i \eta_{t-i} \ \end{align} $$ In this specification, $Y_t$ only depends on $X_t$ and no other lags. Using the correct DGP with AutoReg Simulating the process that is estimated in AutoReg shows that the parameters are recovered from the true model. 
End of explanation """ autoreg_alt_exog_res = AutoReg(y, 1, exog=x, trend="c").fit() print_params(autoreg_alt_exog_res.summary()) """ Explanation: AutoReg with correct DGP End of explanation """ arima_res = ARIMA(y, order=(1, 0, 0), trend="c").fit() print_params(arima_res.summary()) arima_res.predict(0, 2) delta_hat, rho_hat = arima_res.params[:2] delta_hat + rho_hat * (y[0] - delta_hat) """ Explanation: Reconstructing residuals, fitted values and forecasts in SARIMAX and ARIMA In models that contain only autoregressive terms, trends and exogenous variables, fitted values and forecasts can be easily reconstructed once the maximum lag length in the model has been reached. In practice, this means after $(P+D)s+p+d$ periods. Earlier predictions and residuals are harder to reconstruct since the model builds the best prediction for $Y_t|Y_{t-1},Y_{t-2},...$. When the number of lags of $Y$ is less than the autoregressive order, then the expression for the optimal prediction differs from the model. For example, when predicting the very first value, $Y_1$, there is no information available from the history of $Y$, and so the best prediction is the unconditional mean. In the case of an AR(1), the second prediction will follow the model, so that when using ARIMA, the prediction is $$ Y_2 = \hat{\delta} + \hat{\rho} \left(Y_1 - \hat{\delta}\right) $$ since ARIMA treats both exogenous and trend terms as regression with ARMA errors. This can be seen in the next set of cells. End of explanation """ sarima_res = SARIMAX(y, order=(1, 0, 0), trend="c").fit() print_params(sarima_res.summary()) sarima_res.predict(0, 2) delta_hat, rho_hat = sarima_res.params[:2] delta_hat + rho_hat * y[0] """ Explanation: SARIMAX treats trend terms differently, and so the one-step forecast from a model estimated using SARIMAX is $$ Y_2 = \hat\delta + \hat\rho Y_1 $$ End of explanation """ rho = 0.8 beta = 10 epsilon = eta.copy() for i in range(1, eta.shape[0]): epsilon[i] = rho * eta[i - 1] + eta[i] y = beta + epsilon y = y[200:] ma_res = ARIMA(y, order=(0, 0, 1), trend="c").fit() print_params(ma_res.summary()) """ Explanation: Prediction with MA components When a model contains a MA component, the prediction is more complicated since errors are never directly observable. The prediction is still $Y_t|Y_{t-1},Y_{t-2},...$, and when the MA component is invertible, then the optimal prediction can be represented as a $t$-lag AR process. When $t$ is large, this should be very close to the prediction as if the errors were observable. For short lags, this can differ markedly. In the next cell we simulate an MA(1) process, and fit an MA model. End of explanation """ ma_res.predict(1, 5) """ Explanation: We start by looking at predictions near the beginning of the sample corresponding y[1], ..., y[5]. End of explanation """ ma_res.resid[:5] """ Explanation: and the corresponding residuals that are needed to produce the "direct" forecasts End of explanation """ delta_hat, rho_hat = ma_res.params[:2] direct = delta_hat + rho_hat * ma_res.resid[:5] direct """ Explanation: Using the model parameters, we can produce the "direct" forecasts using the MA(1) specification $$ \hat Y_t = \hat\delta + \hat\rho \hat\epsilon_{t-1} $$ We see that these are not especially close to the actual model predictions for the initial forecasts, but that the gap quickly reduces. End of explanation """ ma_res.predict(1, 5) - direct """ Explanation: The difference is nearly a standard deviation for the first but declines as the index increases. 
End of explanation """ t = y.shape[0] ma_res.predict(t - 3, t - 1) ma_res.resid[-4:-1] direct = delta_hat + rho_hat * ma_res.resid[-4:-1] direct """ Explanation: We next look at the end of the sample and the final three predictions. End of explanation """ ma_res.predict(t - 3, t - 1) - direct """ Explanation: The "direct" forecasts are identical. This happens since the effect of the short sample has disappeared by the end of the sample (In practice it is negligible by observations 100 or so, and numerically absent by around observation 160). End of explanation """ rho = 0.8 beta = 2 delta0 = 10 delta1 = 0.5 epsilon = eta.copy() for i in range(1, eta.shape[0]): epsilon[i] = rho * epsilon[i - 1] + eta[i] t = np.arange(epsilon.shape[0]) y = delta0 + delta1 * t + beta * full_x + epsilon y = y[200:] start = np.array([110, delta1, beta, rho, 1]) arx_res = ARIMA(y, exog=x, order=(1, 0, 0), trend="ct").fit() mod = SARIMAX(y, exog=x, order=(1, 0, 0), trend="ct") start[:2] *= 1 - rho sarimax_res = mod.fit(start_params=start, method="bfgs") """ Explanation: The same principle applies in more complicated model that include multiple lags or seasonal term - predictions in AR models are simple once the effective lag length has been reached, while predictions in models that contains MA components are only simple once the maximum root of the MA lag polynomial is sufficiently small so that the residuals are close to the true residuals. Prediction differences in SARIMAX and ARIMA The formulas used to make predictions from SARIMAX and ARIMA models differ in one key aspect - ARIMA treats all trend terms, e.g, the intercept or time trend, as part of the exogenous regressors. For example, an AR(1) model with an intercept and linear time trend estimated using ARIMA has the specification $$ \begin{align} Y_t - \delta_0 - \delta_1 t & = \epsilon_t \ \epsilon_t & = \rho \epsilon_{t-1} + \eta_t \end{align} $$ When the same model is estimated using SARIMAX, the specification is $$ \begin{align} Y_t & = \epsilon_t \ \epsilon_t & = \delta_0 + \delta_1 t + \rho \epsilon_{t-1} + \eta_t \end{align} $$ The differences are more apparent when the model contains exogenous regressors, $X_t$. The ARIMA specification is $$ \begin{align} Y_t - \delta_0 - \delta_1 t - X_t \beta & = \epsilon_t \ \epsilon_t & = \rho \epsilon_{t-1} + \eta_t \ & = \rho \left(Y_{t-1} - \delta_0 - \delta_1 (t-1) - X_{t-1} \beta\right) + \eta_t \end{align} $$ while the SARIMAX specification is $$ \begin{align} Y_t & = X_t \beta + \epsilon_t \ \epsilon_t & = \delta_0 + \delta_1 t + \rho \epsilon_{t-1} + \eta_t \ & = \delta_0 + \delta_1 t + \rho \left(Y_{t-1} - X_{t-1}\beta\right) + \eta_t \end{align} $$ The key difference between these two is that the intercept and the trend are effectively equivalent to exogenous regressions in ARIMA while they are more like standard ARMA terms in SARIMAX. The next cell simulates an ARX with a time trend using the specification in ARIMA and estimates the parameters using both estimators. End of explanation """ print(arx_res.summary()) print(sarimax_res.summary()) """ Explanation: The two estimators fit similarly, although there is a small difference in the log-likelihood. This is a numerical issue and should not materially affect the predictions. Importantly the two trend parameters, const and x1 (unfortunately named for the time trend), differ between the two. The other parameters are effectively identical. 
End of explanation """ import numpy as np import pandas as pd rho = 0.8 psi = -0.6 beta = 20 epsilon = eta.copy() for i in range(13, eta.shape[0]): epsilon[i] = ( rho * epsilon[i - 1] + psi * epsilon[i - 12] - (rho * psi) * epsilon[i - 13] + eta[i] ) y = beta + epsilon y = y[200:] """ Explanation: Initial residuals SARIMAX and ARIMA Residuals for observations before the maximal model order, which depends on the AR, MA, Seasonal AR, Seasonal MA and differencing parameters, are not reliable and should not be used for performance assessment. In general, in an ARIMA with orders $(p,d,q)\times(P,D,Q,s)$, the formula for residuals that are less well behaved is: $$ \max((P+D)s+p+d,Qs+q) $$ We can simulate some data from an ARIMA(1,0,0)(1,0,0,12) and examine the residuals. End of explanation """ res = ARIMA(y, order=(1, 0, 0), trend="c", seasonal_order=(1, 0, 0, 12)).fit() print(res.summary()) """ Explanation: With a large sample, the parameter estimates are very close to the DGP parameters. End of explanation """ import matplotlib.pyplot as plt plt.rc("figure", figsize=(10, 10)) plt.rc("font", size=14) _ = plt.scatter(res.resid[:13], eta[200 : 200 + 13]) """ Explanation: We can first examine the initial 13 residuals by plotting against the actual shocks in the model. While there is a correspondence, it is fairly weak and the correlation is much less than 1. End of explanation """ _ = plt.scatter(res.resid[13:37], eta[200 + 13 : 200 + 37]) """ Explanation: Looking at the next 24 residuals and shocks, we see there is nearly perfect correlation. This is expected in large samples once the less accurate residuals are ignored. End of explanation """ rng = np.random.default_rng(20210819) eta = rng.standard_normal(5200) rho = 0.8 beta = 20 epsilon = eta.copy() for i in range(2, eta.shape[0]): epsilon[i] = (1 + rho) * epsilon[i - 1] - rho * epsilon[i - 2] + eta[i] t = np.arange(epsilon.shape[0]) y = beta + 2 * t + epsilon y = y[200:] """ Explanation: Next, we simulate an ARIMA(1,1,0), and include a time trend. End of explanation """ res = ARIMA(y, order=(1, 1, 0), trend="t").fit() print(res.summary()) """ Explanation: Again the parameter estimates are very close to the DGP parameters. End of explanation """ res.resid[:5] """ Explanation: The residuals are not accurate, and the first residual is approximately 500. The others are closer, although in this model the first 2 should usually be ignored. End of explanation """ res.predict(0, 5) """ Explanation: The reason why the first residual is so large is that the optimal prediction of this value is the mean of the difference, which is 1.77. Once the first value is known, the second value makes use of the first value in its prediction and the prediction is substantially closer to the truth. End of explanation """ res.loglikelihood_burn, res.nobs_diffuse """ Explanation: It is worth noting that the results class contains two parameters than can be helpful in understanding which residuals are problematic, loglikelihood_burn and nobs_diffuse. End of explanation """
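One practical way to use these two attributes, sketched here under the assumption that res is still the ARIMA(1,1,0) fit from above, is to drop the burn-in observations before computing any residual diagnostics.
# Discard the unreliable start-up residuals before diagnostics
burn = int(res.loglikelihood_burn)
clean_resid = res.resid[burn:]
print("Dropped {} burn-in residual(s); mean of the rest: {:.3f}, std: {:.3f}".format(
    burn, clean_resid.mean(), clean_resid.std()))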
project-chip/connectedhomeip
docs/guides/repl/Matter - Multi Fabric Commissioning.ipynb
apache-2.0
import os, subprocess if os.path.isfile('/tmp/repl-storage.json'): os.remove('/tmp/repl-storage.json') # So that the all-clusters-app won't boot with stale prior state. os.system('rm -rf /tmp/chip_*') """ Explanation: Multi Fabric - Commissioning and Interactions <a href="http://35.236.121.59/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fproject-chip%2Fconnectedhomeip&urlpath=lab%2Ftree%2Fconnectedhomeip%2Fdocs%2Fguides%2Frepl%2FMatter%2520-%2520Multi%2520Fabric%2520Commissioning.ipynb&branch=master"> <img src="https://i.ibb.co/hR3yWsC/launch-playground.png" alt="drawing" width="130"/> </a> <br></br> This walks through creating multiple controllers on multiple fabrics, using those controllers to commission a target onto those fabrics and finally, interacting with them using the interaction model. FabricAdmins and Controllers The FabricAdmin class (present in the chip.FabricAdmin package) is responsible for adminstering a fabric. It houses the Fabric ID and Index, as well as an RCAC and ICAC that provides the certificate material grounding that fabric. The FabricAdmin can be used to vend ChipDeviceController objects that represent a controller instance with a specific identity grounded in the admin's fabric. This controller can then be used to commission and interact with devices. Clear Persisted Storage Let's clear out our persisted storage (if one exists) to start from a clean slate. End of explanation """ import chip.native import pkgutil module = pkgutil.get_loader('chip.ChipReplStartup') %run {module.path} """ Explanation: Initialization Let's first begin by setting up by importing some key modules that are needed to make it easier for us to interact with the Matter stack. ChipReplStartup.py is run within the global namespace. This results in all of its imports being made available here. NOTE: This is not needed if you launch the REPL from the command-line. End of explanation """ fabricAdmins devCtrl """ Explanation: At startup, the REPL will attempt to find any previously configured fabrics stored in persisted storage. If it can't find any (as is the case here), it will construct a default FabricAdmin object on Fabric 1 (Index 1) as well as construct a device controller (devCtrl) on that fabric. End of explanation """ import time, os import subprocess os.system('pkill -f chip-all-clusters-app') time.sleep(1) # The location of the all-clusters-app in the cloud playground is one level higher - adjust for this by testing for file presence. if (os.path.isfile('../../../out/debug/chip-all-clusters-app')): appPath = '../../../out/debug/chip-all-clusters-app' else: appPath = '../../../../out/debug/chip-all-clusters-app' process = subprocess.Popen(appPath, stdout=subprocess.DEVNULL) time.sleep(1) """ Explanation: Commission onto Fabric 1 Launch Server Let's launch an instance of the chip-all-clusters-app. End of explanation """ devCtrl.CommissionIP(b'127.0.0.1', 20202021, 2) """ Explanation: Commission Target Commission the target onto Fabric 1 using the default device controller instance with a NodeId of 1. End of explanation """ await devCtrl.ReadAttribute(2, [(Clusters.OperationalCredentials.Attributes.FabricsList)], fabricFiltered=False) """ Explanation: Read OpCreds Cluster Read out the OpCreds cluster to confirm membership into Fabric 1. 
End of explanation """ import chip.FabricAdmin as FabricAdmin fabric2 = FabricAdmin.FabricAdmin(fabricId = 2, fabricIndex = 2) """ Explanation: Commission onto Fabric 2 Create new FabricAdmin End of explanation """ builtins.chipStack.GetStorageManager().jsonData devCtrl2 = fabric2.NewController() """ Explanation: Here's a brief peek at the JSON data that is in the persisted storage file. End of explanation """ await devCtrl.SendCommand(2, 0, Clusters.AdministratorCommissioning.Commands.OpenBasicCommissioningWindow(180)) devCtrl2.CommissionIP(b'127.0.0.1', 20202021, 2) """ Explanation: Open Commissioning Window End of explanation """ await devCtrl2.ReadAttribute(2, [(Clusters.OperationalCredentials.Attributes.FabricsList)], fabricFiltered=False) """ Explanation: Read OpCreds Cluster Read out the OpCreds cluster to confirm membership into Fabric 2. End of explanation """ import chip.native import pkgutil module = pkgutil.get_loader('chip.ChipReplStartup') %run {module.path} """ Explanation: Relaunch REPL Let's simulate re-launching the REPL to show-case the capabilities of the persistence storage and its mechanics. End of explanation """ await devCtrl.SendCommand(2, 0, Clusters.OperationalCredentials.Commands.UpdateFabricLabel("Fabric1Label")) await devCtrl.ReadAttribute(2, [(Clusters.OperationalCredentials.Attributes.FabricsList)], fabricFiltered=False) """ Explanation: The REPL has now loaded the two fabrics that were created in the previous session into the fabricAdmins variable. It has also created a default controller on the first fabric in that list (Fabric 1) as devCtrl. Establish CASE and Read OpCreds To prove that we do indeed have two distinct fabrics and controllers on each fabric, let's go ahead and update the label of each fabric. To do so, you'd need to succcessfully establish a CASE session through a controller on the respective fabric, and call the 'UpdateLabel' command. Underneath the covers, each device controller will do operational discovery of the NodeId being read and establish a CASE session before issuing the IM interaction. End of explanation """ devCtrl2 = fabricAdmins[1].NewController() await devCtrl2.SendCommand(2, 0, Clusters.OperationalCredentials.Commands.UpdateFabricLabel("Fabric2Label")) await devCtrl2.ReadAttribute(2, [(Clusters.OperationalCredentials.Attributes.FabricsList)], fabricFiltered=False) devCtrl2.Shutdown() """ Explanation: Instantiate a controller on fabric 2 and use it to read out the op creds from that fabric. End of explanation """
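As a final check, the labels set above should be visible through either controller, since the FabricsList read is not fabric-filtered; the simulated all-clusters-app launched at the start of the session can then be stopped. Both calls below reuse commands already shown in this walkthrough.
# Both fabric labels should now appear when reading through the fabric 1 controller as well.
await devCtrl.ReadAttribute(2, [(Clusters.OperationalCredentials.Attributes.FabricsList)], fabricFiltered=False)

# Optional cleanup: stop the simulated all-clusters-app started earlier.
import os
os.system('pkill -f chip-all-clusters-app')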
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/end_to_end_ml/solutions/keras_dnn_babyweight.ipynb
apache-2.0
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst !pip install --user google-cloud-bigquery==1.25.0 """ Explanation: Creating Keras DNN model Learning Objectives Create input layers for raw features Create feature columns for inputs Create DNN dense hidden layers and output layer Build DNN model tying all of the pieces together Train and evaluate Introduction In this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born. We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create inputs layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Set up environment variables and load necessary libraries End of explanation """ from google.cloud import bigquery import pandas as pd import datetime import os import shutil import matplotlib.pyplot as plt import tensorflow as tf print(tf.__version__) """ Explanation: Note: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Import necessary libraries. End of explanation """ %%bash export PROJECT=$(gcloud config list project --format "value(core.project)") echo "Your current GCP Project Name is: "$PROJECT PROJECT = "cloud-training-demos" # Replace with your PROJECT """ Explanation: Set environment variables so that we can use them throughout the notebook. End of explanation """ bq = bigquery.Client(project = PROJECT) """ Explanation: Create ML datasets by sampling using BigQuery We'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab. End of explanation """ modulo_divisor = 100 train_percent = 80.0 eval_percent = 10.0 train_buckets = int(modulo_divisor * train_percent / 100.0) eval_buckets = int(modulo_divisor * eval_percent / 100.0) """ Explanation: We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash within the module. Feel free to play around with these values to get the perfect combination. End of explanation """ def display_dataframe_head_from_query(query, count=10): """Displays count rows from dataframe head from query. Args: query: str, query to be run on BigQuery, results stored in dataframe. count: int, number of results from head of dataframe to display. Returns: Dataframe head with count number of results. """ df = bq.query( query + " LIMIT {limit}".format( limit=count)).to_dataframe() return df.head(count) """ Explanation: We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows. 
End of explanation """ # Get label, features, and columns to hash and split into buckets hash_cols_fixed_query = """ SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, year, month, CASE WHEN day IS NULL THEN CASE WHEN wday IS NULL THEN 0 ELSE wday END ELSE day END AS date, IFNULL(state, "Unknown") AS state, IFNULL(mother_birth_state, "Unknown") AS mother_birth_state FROM publicdata.samples.natality WHERE year > 2000 AND weight_pounds > 0 AND mother_age > 0 AND plurality > 0 AND gestation_weeks > 0 """ display_dataframe_head_from_query(hash_cols_fixed_query) """ Explanation: For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results. End of explanation """ data_query = """ SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, FARM_FINGERPRINT( CONCAT( CAST(year AS STRING), CAST(month AS STRING), CAST(date AS STRING), CAST(state AS STRING), CAST(mother_birth_state AS STRING) ) ) AS hash_values FROM ({CTE_hash_cols_fixed}) """.format(CTE_hash_cols_fixed=hash_cols_fixed_query) display_dataframe_head_from_query(data_query) """ Explanation: Using COALESCE would provide the same result as the nested CASE WHEN. This is preferable when all we want is the first non-null instance. To be precise the CASE WHEN would become COALESCE(wday, day, 0) AS date. You can read more about it here. Next query will combine our hash columns and will leave us just with our label, features, and our hash values. End of explanation """ # Get the counts of each of the unique hash of our splitting column first_bucketing_query = """ SELECT hash_values, COUNT(*) AS num_records FROM ({CTE_data}) GROUP BY hash_values """.format(CTE_data=data_query) display_dataframe_head_from_query(first_bucketing_query) """ Explanation: The next query is going to find the counts of each of the unique 657484 hash_values. This will be our first step at making actual hash buckets for our split via the GROUP BY. End of explanation """ # Get the number of records in each of the hash buckets second_bucketing_query = """ SELECT ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index, SUM(num_records) AS num_records FROM ({CTE_first_bucketing}) GROUP BY ABS(MOD(hash_values, {modulo_divisor})) """.format( CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor) display_dataframe_head_from_query(second_bucketing_query) """ Explanation: The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records. End of explanation """ # Calculate the overall percentages percentages_query = """ SELECT bucket_index, num_records, CAST(num_records AS FLOAT64) / ( SELECT SUM(num_records) FROM ({CTE_second_bucketing})) AS percent_records FROM ({CTE_second_bucketing}) """.format(CTE_second_bucketing=second_bucketing_query) display_dataframe_head_from_query(percentages_query) """ Explanation: The number of records is hard for us to easily understand the split, so we will normalize the count into percentage of the data in each of the hash buckets in the next query. 
End of explanation """ # Choose hash buckets for training and pull in their statistics train_query = """ SELECT *, "train" AS dataset_name FROM ({CTE_percentages}) WHERE bucket_index >= 0 AND bucket_index < {train_buckets} """.format( CTE_percentages=percentages_query, train_buckets=train_buckets) display_dataframe_head_from_query(train_query) """ Explanation: We'll now select the range of buckets to be used in training. End of explanation """ # Choose hash buckets for validation and pull in their statistics eval_query = """ SELECT *, "eval" AS dataset_name FROM ({CTE_percentages}) WHERE bucket_index >= {train_buckets} AND bucket_index < {cum_eval_buckets} """.format( CTE_percentages=percentages_query, train_buckets=train_buckets, cum_eval_buckets=train_buckets + eval_buckets) display_dataframe_head_from_query(eval_query) """ Explanation: We'll do the same by selecting the range of buckets to be used evaluation. End of explanation """ # Choose hash buckets for testing and pull in their statistics test_query = """ SELECT *, "test" AS dataset_name FROM ({CTE_percentages}) WHERE bucket_index >= {cum_eval_buckets} AND bucket_index < {modulo_divisor} """.format( CTE_percentages=percentages_query, cum_eval_buckets=train_buckets + eval_buckets, modulo_divisor=modulo_divisor) display_dataframe_head_from_query(test_query) """ Explanation: Lastly, we'll select the hash buckets to be used for the test split. End of explanation """ # Union the training, validation, and testing dataset statistics union_query = """ SELECT 0 AS dataset_id, * FROM ({CTE_train}) UNION ALL SELECT 1 AS dataset_id, * FROM ({CTE_eval}) UNION ALL SELECT 2 AS dataset_id, * FROM ({CTE_test}) """.format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query) display_dataframe_head_from_query(union_query) """ Explanation: In the below query, we'll UNION ALL all of the datasets together so that all three sets of hash buckets will be within one table. We added dataset_id so that we can sort on it in the query after. End of explanation """ # Show final splitting and associated statistics split_query = """ SELECT dataset_id, dataset_name, SUM(num_records) AS num_records, SUM(percent_records) AS percent_records FROM ({CTE_union}) GROUP BY dataset_id, dataset_name ORDER BY dataset_id """.format(CTE_union=union_query) display_dataframe_head_from_query(split_query) """ Explanation: Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to that we were hoping to get. End of explanation """ # every_n allows us to subsample from each of the hash values # This helps us get approximately the record counts we want every_n = 1000 splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(every_n, modulo_divisor) def create_data_split_sample_df(query_string, splitting_string, lo, up): """Creates a dataframe with a sample of a data split. Args: query_string: str, query to run to generate splits. splitting_string: str, modulo string to split by. lo: float, lower bound for bucket filtering for split. up: float, upper bound for bucket filtering for split. Returns: Dataframe containing data split sample. 
""" query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format( query_string, splitting_string, int(lo), int(up)) df = bq.query(query).to_dataframe() return df train_df = create_data_split_sample_df( data_query, splitting_string, lo=0, up=train_percent) eval_df = create_data_split_sample_df( data_query, splitting_string, lo=train_percent, up=train_percent + eval_percent) test_df = create_data_split_sample_df( data_query, splitting_string, lo=train_percent + eval_percent, up=modulo_divisor) print("There are {} examples in the train dataset.".format(len(train_df))) print("There are {} examples in the validation dataset.".format(len(eval_df))) print("There are {} examples in the test dataset.".format(len(test_df))) """ Explanation: Now that we know that our splitting values produce a good global splitting on our data, here's a way to get a well-distributed portion of the data in such a way that the train, eval, test sets do not overlap and takes a subsample of our global splits. End of explanation """ train_df.head() """ Explanation: Preprocess data using Pandas We'll perform a few preprocessing steps to the data in our dataset. Let's add extra rows to simulate the lack of ultrasound. That is we'll duplicate some rows and make the is_male field be Unknown. Also, if there is more than child we'll change the plurality to Multiple(2+). While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below. Let's start by examining the training dataset as is. End of explanation """ train_df.describe() """ Explanation: Also, notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data) End of explanation """ def preprocess(df): """ Preprocess pandas dataframe for augmented babyweight data. Args: df: Dataframe containing raw babyweight data. Returns: Pandas dataframe containing preprocessed raw babyweight data as well as simulated no ultrasound data masking some of the original data. """ # Clean up raw data # Filter out what we don"t want to use for training df = df[df.weight_pounds > 0] df = df[df.mother_age > 0] df = df[df.gestation_weeks > 0] df = df[df.plurality > 0] # Modify plurality field to be a string twins_etc = dict(zip([1,2,3,4,5], ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"])) df["plurality"].replace(twins_etc, inplace=True) # Clone data and mask certain columns to simulate lack of ultrasound no_ultrasound = df.copy(deep=True) # Modify is_male no_ultrasound["is_male"] = "Unknown" # Modify plurality condition = no_ultrasound["plurality"] != "Single(1)" no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)" # Concatenate both datasets together and shuffle return pd.concat( [df, no_ultrasound]).sample(frac=1).reset_index(drop=True) """ Explanation: It is always crucial to clean raw data before using in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect. 
End of explanation """ train_df = preprocess(train_df) eval_df = preprocess(eval_df) test_df = preprocess(test_df) train_df.head() train_df.tail() """ Explanation: Let's process the train, eval, test set and see a small sample of the training data after our preprocessing: End of explanation """ train_df.describe() """ Explanation: Let's look again at a summary of the dataset. Note that we only see numeric columns, so plurality does not show up. End of explanation """ # Define columns columns = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"] # Write out CSV files train_df.to_csv( path_or_buf="train.csv", columns=columns, header=False, index=False) eval_df.to_csv( path_or_buf="eval.csv", columns=columns, header=False, index=False) test_df.to_csv( path_or_buf="test.csv", columns=columns, header=False, index=False) %%bash wc -l *.csv %%bash head *.csv %%bash tail *.csv %%bash ls *.csv %%bash head -5 *.csv """ Explanation: Write to .csv files In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers. End of explanation """ # Determine CSV, label, and key columns # Create list of string column headers, make sure order matches. CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"] # Add string name for label column LABEL_COLUMN = "weight_pounds" # Set default values for each CSV column as a list of lists. # Treat is_male and plurality as strings. DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]] """ Explanation: Create Keras model Set CSV Columns, label column, and column defaults. Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function. * CSV_COLUMNS is going to be our header name of our column. Make sure that they are in the same order as in the CSV files * LABEL_COLUMN is the header name of the column that is our label. We will need to know this to pop it from our features dictionary. * DEFAULTS is a list with the same length as CSV_COLUMNS, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column. End of explanation """ def features_and_labels(row_data): """Splits features and labels from feature dictionary. Args: row_data: Dictionary of CSV column names and tensor values. Returns: Dictionary of feature tensors and label tensor. """ label = row_data.pop(LABEL_COLUMN) return row_data, label # features, label def load_dataset(pattern, batch_size=1, mode='eval'): """Loads dataset using the tf.data API from CSV files. Args: pattern: str, file pattern to glob into list of files. batch_size: int, the number of examples per batch. mode: 'train' | 'eval' to determine if training or evaluating. Returns: `Dataset` object. 
""" # Make a CSV dataset dataset = tf.data.experimental.make_csv_dataset( file_pattern=pattern, batch_size=batch_size, column_names=CSV_COLUMNS, column_defaults=DEFAULTS, ignore_errors=True) # Map dataset to features and label dataset = dataset.map(map_func=features_and_labels) # features, label # Shuffle and repeat for training if mode == 'train': dataset = dataset.shuffle(buffer_size=1000).repeat() # Take advantage of multi-threading; 1=AUTOTUNE dataset = dataset.prefetch(buffer_size=1) return dataset """ Explanation: Make dataset of features and label from CSV files. Next, we will write an input_fn to read the data. Since we are reading from CSV files we can save ourselves from trying to recreate the wheel and can use tf.data.experimental.make_csv_dataset. This will create a CSV dataset object. However we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors. End of explanation """ # TODO 1 def create_input_layers(): """Creates dictionary of input layers for each feature. Returns: Dictionary of `tf.Keras.layers.Input` layers for each feature. """ inputs = { colname: tf.keras.layers.Input( name=colname, shape=(), dtype="float32") for colname in ["mother_age", "gestation_weeks"]} inputs.update({ colname: tf.keras.layers.Input( name=colname, shape=(), dtype="string") for colname in ["is_male", "plurality"]}) return inputs """ Explanation: Create input layers for raw features. We'll need to get the data to read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.Keras.layers.Input) by defining: * shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known. * name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided. * dtype: The data type expected by the input, as a string (float32, float64, int32...) End of explanation """ # TODO 2 def categorical_fc(name, values): """Helper function to wrap categorical feature by indicator column. Args: name: str, name of feature. values: list, list of strings of categorical values. Returns: Indicator column of categorical feature. """ cat_column = tf.feature_column.categorical_column_with_vocabulary_list( key=name, vocabulary_list=values) return tf.feature_column.indicator_column(categorical_column=cat_column) def create_feature_columns(): """Creates dictionary of feature columns from inputs. Returns: Dictionary of feature columns. """ feature_columns = { colname : tf.feature_column.numeric_column(key=colname) for colname in ["mother_age", "gestation_weeks"] } feature_columns["is_male"] = categorical_fc( "is_male", ["True", "False", "Unknown"]) feature_columns["plurality"] = categorical_fc( "plurality", ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"]) return feature_columns """ Explanation: Create feature columns for inputs. Next, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male and plurality, should be categorical. Remember, only dense feature columns can be inputs to a DNN. 
End of explanation """ # TODO 3 def get_model_outputs(inputs): """Creates model architecture and returns outputs. Args: inputs: Dense tensor used as inputs to model. Returns: Dense tensor output from the model. """ # Create two hidden layers of [64, 32] just in like the BQML DNN h1 = tf.keras.layers.Dense(64, activation="relu", name="h1")(inputs) h2 = tf.keras.layers.Dense(32, activation="relu", name="h2")(h1) # Final output is a linear activation because this is regression output = tf.keras.layers.Dense( units=1, activation="linear", name="weight")(h2) return output """ Explanation: Create DNN dense hidden layers and output layer. So we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and end with a dense output layer. This is regression so make sure the output layer activation is correct and that the shape is right. End of explanation """ def rmse(y_true, y_pred): """Calculates RMSE evaluation metric. Args: y_true: tensor, true labels. y_pred: tensor, predicted labels. Returns: Tensor with value of RMSE between true and predicted labels. """ return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2)) """ Explanation: Create custom evaluation metric. We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels. End of explanation """ # TODO 4 def build_dnn_model(): """Builds simple DNN using Keras Functional API. Returns: `tf.keras.models.Model` object. """ # Create input layer inputs = create_input_layers() # Create feature columns feature_columns = create_feature_columns() # The constructor for DenseFeatures takes a list of numeric columns # The Functional API in Keras requires: LayerConstructor()(inputs) dnn_inputs = tf.keras.layers.DenseFeatures( feature_columns=feature_columns.values())(inputs) # Get output of model given inputs output = get_model_outputs(dnn_inputs) # Build model and compile it all together model = tf.keras.models.Model(inputs=inputs, outputs=output) model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"]) return model print("Here is our DNN architecture so far:\n") model = build_dnn_model() print(model.summary()) """ Explanation: Build DNN model tying all of the pieces together. Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc. so we could have used Keras' Sequential Model API but just for fun we're going to use Keras' Functional Model API. Here we will build the model using tf.keras.models.Model giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics. End of explanation """ tf.keras.utils.plot_model( model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR") """ Explanation: We can visualize the DNN using the Keras plot_model utility. 
End of explanation """ # TODO 5 TRAIN_BATCH_SIZE = 32 NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around NUM_EVALS = 5 # how many times to evaluate # Enough to get a reasonable sample, but not so much that it slows down NUM_EVAL_EXAMPLES = 10000 trainds = load_dataset( pattern="train*", batch_size=TRAIN_BATCH_SIZE, mode='train') evalds = load_dataset( pattern="eval*", batch_size=1000, mode='eval').take(count=NUM_EVAL_EXAMPLES // 1000) steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS) logdir = os.path.join( "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) tensorboard_callback = tf.keras.callbacks.TensorBoard( log_dir=logdir, histogram_freq=1) history = model.fit( trainds, validation_data=evalds, epochs=NUM_EVALS, steps_per_epoch=steps_per_epoch, callbacks=[tensorboard_callback]) """ Explanation: Run and evaluate model Train and evaluate. We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. End of explanation """ # Plot import matplotlib.pyplot as plt nrows = 1 ncols = 2 fig = plt.figure(figsize=(10, 5)) for idx, key in enumerate(["loss", "rmse"]): ax = fig.add_subplot(nrows, ncols, idx+1) plt.plot(history.history[key]) plt.plot(history.history["val_{}".format(key)]) plt.title("model {}".format(key)) plt.ylabel(key) plt.xlabel("epoch") plt.legend(["train", "validation"], loc="upper left"); """ Explanation: Visualize loss curve End of explanation """ OUTPUT_DIR = "babyweight_trained" shutil.rmtree(OUTPUT_DIR, ignore_errors=True) EXPORT_PATH = os.path.join( OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S")) tf.saved_model.save( obj=model, export_dir=EXPORT_PATH) # with default serving function print("Exported trained model to {}".format(EXPORT_PATH)) !ls $EXPORT_PATH """ Explanation: Save the model End of explanation """
mastertrojan/Udacity
batch-norm/Batch_Normalization_Solutions.ipynb
mit
import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False) """ Explanation: Batch Normalization โ€“ Solutions Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now. This is not a good network for classfying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was: 1. Complicated enough that training would benefit from batch normalization. 2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization. 3. Simple enough that the architecture would be easy to understand without additional resources. This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package. Batch Normalization with tf.layers.batch_normalization Batch Normalization with tf.nn.batch_normalization The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook. End of explanation """ """ DO NOT MODIFY THIS CELL """ def fully_connected(prev_layer, num_units): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu) return layer """ Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a> This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function. This version of the function does not include batch normalization. End of explanation """ """ DO NOT MODIFY THIS CELL """ def conv_layer(prev_layer, layer_depth): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu) return conv_layer """ Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network. 
This version of the function does not include batch normalization. End of explanation """ """ DO NOT MODIFY THIS CELL """ def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100) # Create the output layer with 1 node for each logits = tf.layers.dense(layer, 10) # Define model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys}) # Periodically check the validation or training loss and accuracy if batch_i % 100 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually, just to make sure batch normalization really worked correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]]}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate) """ Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions). This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training. End of explanation """ def fully_connected(prev_layer, num_units, is_training): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :param is_training: bool or Tensor Indicates whether or not the network is currently training, which tells the batch normalization layer whether or not it should update or use its population statistics. 
:returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None) layer = tf.layers.batch_normalization(layer, training=is_training) layer = tf.nn.relu(layer) return layer """ Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.) Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches. Add batch normalization To add batch normalization to the layers created by fully_connected, we did the following: 1. Added the is_training parameter to the function signature so we can pass that information to the batch normalization layer. 2. Removed the bias and activation function from the dense layer. 3. Used tf.layers.batch_normalization to normalize the layer's output. Notice we pass is_training to this layer to ensure the network updates its population statistics appropriately. 4. Passed the normalized values into a ReLU activation function. End of explanation """ def conv_layer(prev_layer, layer_depth, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. :param is_training: bool or Tensor Indicates whether or not the network is currently training, which tells the batch normalization layer whether or not it should update or use its population statistics. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias=False, activation=None) conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training) conv_layer = tf.nn.relu(conv_layer) return conv_layer """ Explanation: To add batch normalization to the layers created by conv_layer, we did the following: 1. Added the is_training parameter to the function signature so we can pass that information to the batch normalization layer. 2. Removed the bias and activation function from the conv2d layer. 3. Used tf.layers.batch_normalization to normalize the convolutional layer's output. Notice we pass is_training to this layer to ensure the network updates its population statistics appropriately. 4. Passed the normalized values into a ReLU activation function. If you compare this function to fully_connected, you'll see that โ€“ย when using tf.layers โ€“ there really isn't any difference between normalizing a fully connected layer and a convolutional layer. However, if you look at the second example in this notebook, where we restrict ourselves to the tf.nn package, you'll see a small difference. End of explanation """ def conv_layer(prev_layer, layer_num, is_training): strides = 2 if layer_num % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=True, activation=None) conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training) conv_layer = tf.nn.relu(conv_layer) return conv_layer """ Explanation: Batch normalization is still a new enough idea that researchers are still discovering how best to use it. 
In general, people seem to agree to remove the layer's bias (because the batch normalization already has terms for scaling and shifting) and add batch normalization before the layer's non-linear activation function. However, for some networks it will work well in other ways, too. Just to demonstrate this point, the following three versions of conv_layer show other ways to implement batch normalization. If you try running with any of these versions of the function, they should all still work fine (although some versions may still work better than others). Alternate solution that uses bias in the convolutional layer but still adds batch normalization before the ReLU activation function. End of explanation """ def conv_layer(prev_layer, layer_num, is_training): strides = 2 if layer_num % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=True, activation=tf.nn.relu) conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training) return conv_layer """ Explanation: Alternate solution that uses a bias and ReLU activation function before batch normalization. End of explanation """ def conv_layer(prev_layer, layer_num, is_training): strides = 2 if layer_num % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=False, activation=tf.nn.relu) conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training) return conv_layer """ Explanation: Alternate solution that uses a ReLU activation function before normalization, but no bias. End of explanation """ def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Add placeholder to indicate whether or not we're training the model is_training = tf.placeholder(tf.bool) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i, is_training) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100, is_training) # Create the output layer with 1 node for each logits = tf.layers.dense(layer, 10) # Define loss and training operations model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) # Tell TensorFlow to update the population statistics while training with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) # Create operations to test accuracy correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True}) # Periodically check the validation or training loss and accuracy if batch_i % 100 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training: False}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 
== 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training: False}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels, is_training: False}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually, just to make sure batch normalization really worked correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]], is_training: False}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate) """ Explanation: To modify train, we did the following: 1. Added is_training, a placeholder to store a boolean value indicating whether or not the network is training. 2. Passed is_training to the conv_layer and fully_connected functions. 3. Each time we call run on the session, we added to feed_dict the appropriate value for is_training. 4. Moved the creation of train_opt inside a with tf.control_dependencies... statement. This is necessary to get the normalization layers created with tf.layers.batch_normalization to update their population statistics, which we need when performing inference. End of explanation """ def fully_connected(prev_layer, num_units, is_training): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :param is_training: bool or Tensor Indicates whether or not the network is currently training, which tells the batch normalization layer whether or not it should update or use its population statistics. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None) gamma = tf.Variable(tf.ones([num_units])) beta = tf.Variable(tf.zeros([num_units])) pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False) pop_variance = tf.Variable(tf.ones([num_units]), trainable=False) epsilon = 1e-3 def batch_norm_training(): batch_mean, batch_variance = tf.nn.moments(layer, [0]) decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon) batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference) return tf.nn.relu(batch_normalized_output) """ Explanation: With batch normalization, we now get excellent performance. In fact, validation accuracy is almost 94% after only 500 batches. Notice also the last line of the output: Accuracy on 100 samples. 
If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference. Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a> Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature โ€“ something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM โ€“ then you may need to know these sorts of things. This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization. This implementation of fully_connected is much more involved than the one that uses tf.layers. However, if you went through the Batch_Normalization_Lesson notebook, things should look pretty familiar. To add batch normalization, we did the following: 1. Added the is_training parameter to the function signature so we can pass that information to the batch normalization layer. 2. Removed the bias and activation function from the dense layer. 3. Added gamma, beta, pop_mean, and pop_variance variables. 4. Used tf.cond to make handle training and inference differently. 5. When training, we use tf.nn.moments to calculate the batch mean and variance. Then we update the population statistics and use tf.nn.batch_normalization to normalize the layer's output using the batch statistics. Notice the with tf.control_dependencies... statement - this is required to force TensorFlow to run the operations that update the population statistics. 6. During inference (i.e. when not training), we use tf.nn.batch_normalization to normalize the layer's output using the population statistics we calculated during training. 7. Passed the normalized values into a ReLU activation function. If any of thise code is unclear, it is almost identical to what we showed in the fully_connected function in the Batch_Normalization_Lesson notebook. Please see that for extensive comments. End of explanation """ def conv_layer(prev_layer, layer_depth, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. :param is_training: bool or Tensor Indicates whether or not the network is currently training, which tells the batch normalization layer whether or not it should update or use its population statistics. 
:returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 in_channels = prev_layer.get_shape().as_list()[3] out_channels = layer_depth*4 weights = tf.Variable( tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05)) layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME') gamma = tf.Variable(tf.ones([out_channels])) beta = tf.Variable(tf.zeros([out_channels])) pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False) pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False) epsilon = 1e-3 def batch_norm_training(): # Important to use the correct dimensions here to ensure the mean and variance are calculated # per feature map instead of for the entire layer batch_mean, batch_variance = tf.nn.moments(layer, [0,1,2], keep_dims=False) decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon) batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference) return tf.nn.relu(batch_normalized_output) """ Explanation: The changes we made to conv_layer are almost exactly the same as the ones we made to fully_connected. However, there is an important difference. Convolutional layers have multiple feature maps, and each feature map uses shared weights. So we need to make sure we calculate our batch and population statistics per feature map instead of per node in the layer. To accomplish this, we do the same things that we did in fully_connected, with two exceptions: 1. The sizes of gamma, beta, pop_mean and pop_variance are set to the number of feature maps (output channels) instead of the number of output nodes. 2. We change the parameters we pass to tf.nn.moments to make sure it calculates the mean and variance for the correct dimensions. 
End of explanation """ def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Add placeholder to indicate whether or not we're training the model is_training = tf.placeholder(tf.bool) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i, is_training) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100, is_training) # Create the output layer with 1 node for each logits = tf.layers.dense(layer, 10) # Define loss and training operations model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) # Create operations to test accuracy correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True}) # Periodically check the validation or training loss and accuracy if batch_i % 100 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training: False}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training: False}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels, is_training: False}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually, just to make sure batch normalization really worked correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]], is_training: False}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate) """ Explanation: To modify train, we did the following: 1. Added is_training, a placeholder to store a boolean value indicating whether or not the network is training. 2. Each time we call run on the session, we added to feed_dict the appropriate value for is_training. 3. We did not need to add the with tf.control_dependencies... statement that we added in the network that used tf.layers.batch_normalization because we handled updating the population statistics ourselves in conv_layer and fully_connected. End of explanation """
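# Added illustration -- not part of the original solutions. The batch
# normalization arithmetic used above, written out in plain NumPy so the
# train-time vs. inference-time behaviour is easy to compare side by side.
import numpy as np

def batch_norm_numpy(x, gamma, beta, pop_mean, pop_var, training,
                     decay=0.99, epsilon=1e-3):
    """Normalize a [batch, features] array the way the layers above do."""
    if training:
        batch_mean = x.mean(axis=0)
        batch_var = x.var(axis=0)
        # Update the population statistics, mirroring the tf.assign calls.
        pop_mean[:] = pop_mean * decay + batch_mean * (1 - decay)
        pop_var[:] = pop_var * decay + batch_var * (1 - decay)
        mean, var = batch_mean, batch_var
    else:
        # Inference uses the population statistics gathered during training.
        mean, var = pop_mean, pop_var
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta

# Tiny demo with made-up numbers.
rng = np.random.RandomState(0)
x = rng.normal(loc=5.0, scale=2.0, size=(64, 3))
gamma, beta = np.ones(3), np.zeros(3)
pop_mean, pop_var = np.zeros(3), np.ones(3)
out = batch_norm_numpy(x, gamma, beta, pop_mean, pop_var, training=True)
print("normalized batch mean:", out.mean(axis=0))  # approximately 0
print("normalized batch std: ", out.std(axis=0))   # approximately 1
print("updated population mean:", pop_mean)
""" Explanation: To close out, here is a minimal NumPy sketch (an added illustration, not part of the original solutions) of the arithmetic that both tf.layers.batch_normalization and tf.nn.batch_normalization perform: normalize with the batch statistics while training, keep exponential moving averages of the population statistics, and normalize with those population statistics at inference time. The decay and epsilon values match the ones used in the TensorFlow code above; everything else is made-up demo data. End of explanation """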
yevheniyc/Projects
1m_ML_Security/notebooks/answers/Worksheet 5 - DGA Detection Feature Engineering - Answers.ipynb
mit
## Load data df = pd.read_csv('../../data/dga_data_small.csv') df.drop(['host', 'subclass'], axis=1, inplace=True) print(df.shape) df.sample(n=5).head() # print a random sample of the DataFrame df[df.isDGA == 'legit'].head() # Google's 10000 most common english words will be needed to derive a feature called ngrams... # therefore we already load them here. top_en_words = pd.read_csv('../../data/google-10000-english.txt', header=None, names=['words']) top_en_words.sample(n=5).head() # Source: https://github.com/first20hours/google-10000-english """ Explanation: Worksheet - Answer - DGA Detection using Machine Learning This worksheet is a step-by-step guide on how to detect domains that were generated using "Domain Generation Algorithm" (DGA). We will walk you through the process of transforming raw domain strings to Machine Learning features and creating a decision tree classifer which you will use to determine whether a given domain is legit or not. Once you have implemented the classifier, the worksheet will walk you through evaluating your model. Overview 2 main steps: Feature Engineering - from raw domain strings to numeric Machine Learning features using DataFrame manipulations Machine Learning Classification - predict whether a domain is legit or not using a Decision Tree Classifier DGA - Background "Various families of malware use domain generation algorithms (DGAs) to generate a large number of pseudo-random domain names to connect to a command and control (C2) server. In order to block DGA C2 traffic, security organizations must first discover the algorithm by reverse engineering malware samples, then generate a list of domains for a given seed. The domains are then either preregistered, sink-holed or published in a DNS blacklist. This process is not only tedious, but can be readily circumvented by malware authors. An alternative approach to stop malware from using DGAs is to intercept DNS queries on a network and predict whether domains are DGA generated. Much of the previous work in DGA detection is based on finding groupings of like domains and using their statistical properties to determine if they are DGA generated. However, these techniques are run over large time windows and cannot be used for real-time detection and prevention. In addition, many of these techniques also use contextual information such as passive DNS and aggregations of all NXDomains throughout a network. Such requirements are not only costly to integrate, they may not be possible due to real-world constraints of many systems (such as endpoint detection). An alternative to these systems is a much harder problem: detect DGA generation on a per domain basis with no information except for the domain name. Previous work to solve this harder problem exhibits poor performance and many of these systems rely heavily on manual creation of features; a time consuming process that can easily be circumvented by malware authors..." [Citation: Woodbridge et. al 2016: "Predicting Domain Generation Algorithms with Long Short-Term Memory Networks"] A better alternative for real-world deployment would be to use "featureless deep learning" - We have a separate notebook where you can see how this can be implemented! However, let's learn the basics first!!! 
Worksheet for Part 1 - Feature Engineering End of explanation """ def H_entropy (x): # Calculate Shannon Entropy prob = [ float(x.count(c)) / len(x) for c in dict.fromkeys(list(x)) ] H = - sum([ p * np.log2(p) for p in prob ]) return H def vowel_consonant_ratio (x): # Calculate vowel to consonant ratio x = x.lower() vowels_pattern = re.compile('([aeiou])') consonants_pattern = re.compile('([b-df-hj-np-tv-z])') vowels = re.findall(vowels_pattern, x) consonants = re.findall(consonants_pattern, x) try: ratio = len(vowels) / len(consonants) except: # catch zero devision exception ratio = 0 return ratio """ Explanation: Part 1 - Feature Engineering Option 1 to derive Machine Learning features is to manually hand-craft useful contextual information of the domain string. An alternative approach (not covered in this notebook) is "Featureless Deep Learning", where an embedding layer takes care of deriving features - a huge step towards more "AI". Previous academic research has focused on the following features that are based on contextual information: List of features: Length ["length"] Number of digits ["digits"] Entropy ["entropy"] - use H_entropy function provided Vowel to consonant ratio ["vowel-cons"] - use vowel_consonant_ratio function provided N-grams ["n-grams"] - use ngram functions provided Tasks: Split into A and B parts, see below... Please run the following function cell and then continue reading the next markdown cell with more details on how to derive those features. Have fun! End of explanation """ # derive features df['length'] = df.domain.str.len() df['digits'] = df.domain.str.count('[0-9]') df['entropy'] = df.domain.apply(H_entropy) df['vowel-cons'] = df.domain.apply(vowel_consonant_ratio) # encode strings of target variable as integers df.isDGA = df.isDGA.replace(to_replace = 'dga', value=1) df.isDGA = df.isDGA.replace(to_replace = 'legit', value=0) print(df.isDGA.value_counts()) # check intermediate 2D pandas DataFrame df.sample(n=5).head() """ Explanation: Tasks - A - Feature Engineering Please try to derive a new pandas 2D DataFrame with a new column for each of feature. Focus on length, digits, entropy and vowel-cons here. Also make sure to encode the isDGA column as integers. pandas.Series.str, pandas.Series.replace and pandas.Series,apply can be very helpful to quickly derive those features. Functions you need to apply here are provided in above cell. The ngram is a bit more complicated, see next instruction cell to add this feature... 
End of explanation """ # ngrams: Implementation according to Schiavoni 2014: "Phoenix: DGA-based Botnet Tracking and Intelligence" # http://s2lab.isg.rhul.ac.uk/papers/files/dimva2014.pdf def ngrams(word, n): # Extract all ngrams and return a regular Python list # Input word: can be a simple string or a list of strings # Input n: Can be one integer or a list of integers # if you want to extract multipe ngrams and have them all in one list l_ngrams = [] if isinstance(word, list): for w in word: if isinstance(n, list): for curr_n in n: ngrams = [w[i:i+curr_n] for i in range(0,len(w)-curr_n+1)] l_ngrams.extend(ngrams) else: ngrams = [w[i:i+n] for i in range(0,len(w)-n+1)] l_ngrams.extend(ngrams) else: if isinstance(n, list): for curr_n in n: ngrams = [word[i:i+curr_n] for i in range(0,len(word)-curr_n+1)] l_ngrams.extend(ngrams) else: ngrams = [word[i:i+n] for i in range(0,len(word)-n+1)] l_ngrams.extend(ngrams) # print(l_ngrams) return l_ngrams def ngram_feature(domain, d, n): # Input is your domain string or list of domain strings # a dictionary object d that contains the count for most common english words # finally you n either as int list or simple int defining the ngram length # Core magic: Looks up domain ngrams in english dictionary ngrams and sums up the # respective english dictionary counts for the respective domain ngram # sum is normalized l_ngrams = ngrams(domain, n) # print(l_ngrams) count_sum=0 for ngram in l_ngrams: if d[ngram]: count_sum+=d[ngram] try: feature = count_sum/(len(domain)-n+1) except: feature = 0 return feature def average_ngram_feature(l_ngram_feature): # input is a list of calls to ngram_feature(domain, d, n) # usually you would use various n values, like 1,2,3... return sum(l_ngram_feature)/len(l_ngram_feature) l_en_ngrams = ngrams(list(top_en_words['words']), [1,2,3]) d = Counter(l_en_ngrams) from six.moves import cPickle as pickle with open('../../data/d_common_en_words' + '.pickle', 'wb') as f: pickle.dump(d, f, pickle.HIGHEST_PROTOCOL) df['ngrams'] = df.domain.apply(lambda x: average_ngram_feature([ngram_feature(x, d, 1), ngram_feature(x, d, 2), ngram_feature(x, d, 3)])) # check final 2D pandas DataFrame containing all final features and the target vector isDGA df.sample(n=5).head() df_final = df df_final = df_final.drop(['domain'], axis=1) df_final.to_csv('../../data/dga_features_final_df.csv', index=False) df_final.head() """ Explanation: Tasks - B - Feature Engineering Finally, let's tackle the ngram feature. There are multiple steps involved to derive this feature. Here in this notebook, we use an implementation outlined in the this academic paper Schiavoni 2014: "Phoenix: DGA-based Botnet Tracking and Intelligence" - see section: Linguistic Features. What are ngrams??? Imagine a string like 'facebook', if I were to derive all n-grams for n=2 (aka bi-grams) I would get '['fa', 'ac', 'ce', 'eb', 'bo', 'oo', 'ok']', so you see that you slide with one step from the left and just group 2 characters together each time, a tri-gram for 'facebook' would yielfd '['fac', 'ace', 'ceb', 'ebo', 'boo', 'ook']'. Ngrams have a long history in natural language processing, but are also used a lot for example in detecting malicious executable (raw byte ngrams in this case). Steps involved: We have the 10000 most common english words (see data file we loaded, we call this DataFrame top_en_words in this notebook). Now we run the ngrams functions on a list of all these words. 
The output here is a list that contains ALL 1-grams, bi-grams and tri-grams of these 10000 most common english words. We use the Counter function from collections to derive a dictionary d that contains the counts of all unique 1-grams, bi-grams and tri-grams. Our ngram_feature function will do the core magic. It takes your domain as input, splits it into ngrams (n is a function parameter) and then looks up these ngrams in the english dictionary d we derived in step 2. Function returns the normalized sum of all ngrams that were contained in the english dictionary. For example, running ngram_feature('facebook', d, 2) will return 171.28 (this value is just like the one published in the Schiavoni paper). Finally average_ngram_feature wraps around ngram_feature. You will use this function as your task is to derive a feature that gives the average of the ngram_feature for n=1,2 and 3. Input to this function should be a simple list with entries calling ngram_feature with n=1,2 and 3, hence a list of 3 ngram_feature results. YOUR TURN: Apply average_ngram_feature to you domain column in the DataFrame thereby adding ngram to the df. YOUR TURN: Finally drop the domain column from your DataFrame. Please run the following function cell and then write your code in the following cell. End of explanation """ df_final = pd.read_csv('../../data/dga_features_final_df.csv') print(df_final.isDGA.value_counts()) df_final.head() """ Explanation: Breakpoint: Load Features and Labels If you got stuck in Part 1, please simply load the feature matrix we prepared for you, so you can move on to Part 2 and train a Decision Tree Classifier. End of explanation """ feature_names = ['length','digits','entropy','vowel-cons','ngrams'] features = df_final[feature_names] target = df_final.isDGA visualizer = Rank2D(algorithm='pearson',features=feature_names) visualizer.fit_transform( features ) visualizer.poof() """ Explanation: Visualizing the Results At this point, we've created a dataset which has many features that can be used for classification. Using YellowBrick, your final step is to visualize the features to see which will be of value and which will not. First, let's create a Rank2D visualizer to compute the correlations between all the features. Detailed documentation available here: http://www.scikit-yb.org/en/latest/examples/methods.html#feature-analysis End of explanation """ sns.pairplot(df_final, hue='isDGA', vars=feature_names) """ Explanation: Now let's use a Seaborn pairplot as well. This will really show you which features have clear dividing lines between the classes. Docs are available here: http://seaborn.pydata.org/generated/seaborn.pairplot.html End of explanation """ X = df_final[feature_names].as_matrix() y = df_final.isDGA.as_matrix() radvizualizer = RadViz(classes=['Benign','isDga'], features=feature_names) radvizualizer.fit_transform( X, y) radvizualizer.poof() """ Explanation: Finally, let's try making a RadViz of the features. This visualization will help us see whether there is too much noise to make accurate classifications. End of explanation """
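# Added sketch -- not part of the original worksheet. A minimal version of the
# Part 2 step this worksheet leads into: fitting and scoring a simple Decision
# Tree on the engineered features. Assumes df_final and feature_names from the
# cells above are still in memory; the hyperparameters are illustrative only.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X = df_final[feature_names].values
y = df_final['isDGA'].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

clf = DecisionTreeClassifier(max_depth=5, random_state=42)
clf.fit(X_train, y_train)
print("Hold-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
""" Explanation: For readers who want a preview of Part 2, the cell above is a hedged sketch (added here, not part of the original worksheet) of how the engineered features feed a Decision Tree classifier: hold out a test split, fit the tree, and score it. The train/test ratio, max_depth and random_state are arbitrary illustrative choices, not the values used in the official Part 2 notebook. End of explanation """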
ledeprogram/algorithms
class7/donow/wang_zhizhou_7_donow.ipynb
gpl-3.0
import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
"""
Explanation: Apply logistic regression to categorize whether a county had a high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
End of explanation
"""
df = pd.read_csv("../data/hanford.csv")
df.head()
"""
Explanation: 2. Read in the hanford.csv file in the data/ folder
End of explanation
"""
df.describe()
df.corr()
"""
Explanation: <img src="../../images/hanford_variables.png"></img>
3. Calculate the basic descriptive statistics on the data
End of explanation
"""
lm = LogisticRegression()
df.std()
q1 = df['Exposure'].quantile(q=0.25)
q1
q3 = df['Exposure'].quantile(q=0.75)
q3
df['Mortality'].hist(bins=5)
# Recode mortality and exposure as binary indicators using the chosen thresholds
df['Mort_high'] = df['Mortality'].apply(lambda x: 1 if x >= 147.1 else 0)
df['Expo_high'] = df['Exposure'].apply(lambda x: 1 if x >= 3.41 else 0)
df.head()
## This function does the same thing as the lambda above
def exposure_high(x):
    if x >= 3.41:
        return 1
    else:
        return 0
"""
Explanation: 4. Find a reasonable threshold to say exposure is high and recode the data
End of explanation
"""
lm = LogisticRegression()
# Double brackets keep x two-dimensional (n_samples, n_features), as scikit-learn expects
x = np.asarray(df[['Expo_high']])
y = np.asarray(df['Mort_high'])
x, y
lm = lm.fit(x, y)
"""
Explanation: 5. Create a logistic regression model
End of explanation
"""
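# --- Illustrative sketch, not part of the original notebook ---
# A hedged follow-up to step 5: inspect the fitted coefficients and check how often
# the predicted class matches the observed one. Assumes `lm`, `x` and `y` are still
# defined as above.
print('Intercept:', lm.intercept_)
print('Coefficient on Expo_high:', lm.coef_)
predicted = lm.predict(x)
print('Share of counties classified correctly: {:.2f}'.format(lm.score(x, y)))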
jdvelasq/ingenieria-economica
05-bonos.ipynb
mit
# Import the financial library.
# The import only needs to be executed once.
import cashflows as cf
"""
Explanation: Bonds
Juan David Velásquez Henao
jdvelasq@unal.edu.co
Universidad Nacional de Colombia, Sede Medellín
Facultad de Minas
Medellín, Colombia
Click here to access the latest online version.
Click here to view the latest online version on nbviewer.
Preparation
End of explanation
"""
cf.bond(face_value=1000, coupon_value=20, num_coupons=28, ytm=4)
"""
Explanation: bond(face_value=None, coupon_rate=None, coupon_value=None, num_coupons=None, value=None, ytm=None)
Where:
face_value -- the amount paid by the bond at the expiration date.
coupon_rate -- the interest rate paid by the bond as a percentage of face_value. It is used to compute coupon_value when that parameter is not supplied.
coupon_value -- the interest paid by the bond. If this value is not supplied, it is computed as coupon_rate * face_value.
num_coupons -- the number of interest payments before the expiration date.
value -- the present value of the bond.
ytm -- (yield-to-maturity) the bond's interest rate (per period).
This function returns value when ytm is specified, and ytm when value is specified.
Example.-- A bond will pay $ 1000 in 14 years and pays semiannual coupons at a 2% rate. What is the current value of the bond if the minimum attractive rate of return is 4%?
End of explanation
"""
cf.bond(face_value=1000, coupon_value=20, num_coupons=28, ytm=[3, 4, 5])
"""
Explanation: Example.-- What is the value of the previous bond if interest rates between 3% and 5% are considered?
End of explanation
"""
cf.bond(face_value=1000, coupon_value=20, num_coupons=28, value=666.7, ytm=[3, 4, 5])
"""
Explanation: Example.-- Perform a sensitivity analysis if the bond value is 666.7 and rates of 3%, 4% and 5% are considered.
End of explanation
"""
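# --- Illustrative sketch, not part of the original notebook ---
# A hedged cross-check of the valuation above using the standard present-value
# formula: the bond price is the discounted stream of coupons plus the discounted
# face value. Here ytm is the per-period rate in percent, matching how it is passed
# to cf.bond above; with ytm = 4 this should come out near 666.7, the value used in
# the sensitivity example.
def bond_price(face_value, coupon_value, num_coupons, ytm):
    rate = ytm / 100.0
    coupons_pv = sum(coupon_value / (1 + rate) ** t for t in range(1, num_coupons + 1))
    face_pv = face_value / (1 + rate) ** num_coupons
    return coupons_pv + face_pv

print(round(bond_price(1000, 20, 28, 4), 2))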
scidash/sciunit
docs/chapter6.ipynb
mit
# Install SciUnit if necessary !pip install -q sciunit # Import the package import sciunit # Add some default CSS styles for these examples sciunit.utils.style() """ Explanation: <a href="https://colab.research.google.com/github/scidash/sciunit/blob/master/docs/chapter6.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Chapter 6. Workshop Tutorial, Real Example (or back to Chapter 5) End of explanation """ from IPython.display import HTML HTML("""<style> .medium { border: 10px solid black; font-size: 100%; } .big { font-size: 120%; } </style>""") """ Explanation: <div style='font-size:200%; text-align: center;'>Toy example: A brief history of cosmology</div> <br> End of explanation """ !pip install sympy # Some imports to make the code below run from math import pi, sqrt, sin, cos, tan, atan from datetime import datetime, timedelta import numpy as np # SymPy is needed because one of Kepler's equations # is in implicit form and must be solved numerically! from sympy import Symbol, sin as sin_ from sympy.solvers.solvers import nsolve class ProducesOrbitalPosition(sciunit.Capability): """ A model `capability`, i.e. a collection of methods that a test is allowed to invoke on a model. These methods are unimplemented by design, and the model must implement them. """ def get_position(self, t: datetime) -> tuple: """Produce an orbital position from a time point in polar coordinates. Args: t (datetime): The time point to examine, relative to perihelion Returns: tuple: A pair of (r, theta) coordinates in the oribtal plane """ raise NotImplementedError("") @property def perihelion(self) -> datetime: """Return the time of last perihelion""" raise NotImplementedError("") @property def period(self) -> float: """Return the period of the orbit""" raise NotImplementedError("") @property def eccentricity(self) -> float: """Return the eccentricity of the orbit""" raise NotImplementedError("") def get_x_y(self, t: datetime) -> tuple: """Produce an orbital position from a time point, but in cartesian coordinates. This method does not require a model-specific implementation. 
Thus, a generic implementation can be provided in advance.""" raise NotImplementedError("") """ Explanation: <table> <tr style='background-color: #FFFFFF'> <td></td><th style='text-align: center'>Experimentalists</th> </tr> <tr> <th>Modelers</th><td><table style='border: 1px solid black'></td> <tr> <th></th><th class="medium">Babylonians</th><th class="medium">Brahe</th><th class="medium">Galileo</th><th class="medium">Le Verrier</th> </tr> <tr> <th class="medium">Ptolemy</th><td class='green'></td><td class='red'></td><td class='red'></td><td class='red'></td> </tr> <tr> <th class="medium">Copernicus</th><td class='green'></td><td class='red'></td><td class='red'></td><td class='red'></td> </tr> <tr> <th class="medium">Kepler</th><td class='green'></td><td style='background-color:#FF0000'></td><td class='grey'></td><td class='red'></td> </tr> <tr> <th class="medium">Newton</th><td class='green'></td><td class='green'></td><td class='green'></td><td class='red'></td> </tr> <tr> <th class="medium">Einstein</th><td class='green'></td><td class='green'></td><td class='green'></td><td class='green'></td> </tr> </table> </td> </tr> </table> <table style='border: 1px solid black'> <tr> <td class='green'>Pass</td><td class='red'>Fail</td><td class='grey'>Unclear</td> </tr> </table> Model validation goals: - Generate one unit tests for each experimental datum (or stylized fact about data) - Execute these tests against all models capable of taking them - Programatically display the results as a &ldquo;dashboard" of model validity Optionally record and display non-boolean test results, test artifacts, etc. High-level workflow for validation: ```python Hypothetical examples of data-driven tests from cosmounit.tests import brahe_test, galileo_test, leverrier_test Hypothetical examples of parameterized models from cosmounit.models import ptolemy_model, copernicus_model Execute one test against one model and return a test score score = brahe_test.judge(copernicus_model) ``` This is the only code-like cell of the tutorial that doesn't contain executable code, since it is a high-level abstraction. Don't worry, you'll be running real code just a few cells down! Q: How does a test &ldquo;know" how to test a model? A: Through guarantees that models provide to tests, called <i>&ldquo;Capabilities"</i>. Code for sciunit.capabilities on GitHub Next we show an example of a <i>Capability</i> relevant to the cosmology case outlined above. End of explanation """ # An extremely generic model capability from sciunit.capabilities import ProducesNumber # A specific model capability used in neurophysiology #from neuronunit.capabilities import HasMembranePotential """ Explanation: <i>SciUnit</i> (and domain specific libraries that build upon it) also define their own capabilities End of explanation """ class BaseKeplerModel(sciunit.Model, ProducesOrbitalPosition): """A sciunit model class corresponding to a Kepler-type model of an object in the solar system. 
This model has the `ProducesOrbitalPosition` capability by inheritance, so it must implement all of the unimplemented methods of that capability""" def get_position(self, t): """Implementation of polar coordinate position as a function of time""" r, theta = self.heliocentric_distance(t), self.true_anomaly(t) return r, theta @property def perihelion(self): """Implementation of time of last perihelion""" return self.params['perihelion'] @property def period(self): """Implementation of period of the orbit""" return self.params['period'] @property def eccentricity(self): """Implementation of orbital eccentricity (assuming elliptic orbit)""" a, b = self.params['semimajor_axis'], self.params['semiminor_axis'] return sqrt(1 - (b/a)**2) def get_x_y(self, t: datetime) -> tuple: """Produce an orbital position from a time point, but in cartesian coordinates. This method does not require a model-specific implementation. Thus, a generic implementation can be provided in advance.""" r, theta = self.get_position(t) x, y = r*cos(theta), r*sin(theta) return x, y class KeplerModel(BaseKeplerModel): """This 'full' model contains all of the methods required to complete the implementation of the `ProducesOrbitalPosition` capability""" def mean_anomaly(self, t): """How long into its period the object is at time `t`""" time_since_perihelion = t - self.perihelion return 2*pi*(time_since_perihelion % self.period)/self.period def eccentric_anomaly(self, t): """How far the object has gone into its period at time `t`""" E = Symbol('E') M, e = self.mean_anomaly(t), self.eccentricity expr = E - e*sin_(E) - M return nsolve(expr, 0) def true_anomaly(self, t): """Theta in a polar coordinate system at time `t`""" e, E = self.eccentricity, self.eccentric_anomaly(t) theta = 2*atan(sqrt(tan(E/2)**2 * (1+e)/(1-e))) return theta def heliocentric_distance(self, t): """R in a polar coordinate system at time `t`""" a, e = self.params['semimajor_axis'], self.eccentricity E = self.eccentric_anomaly(t) return a*(1-e*cos(E)) """ Explanation: Now we can define a <i>model class</i> that implements this ProducesOrbitalPosition capability by inheritance. All models are subclasses of sciunit.Model and typically one or more subclasses of sciunit.Capability. End of explanation """ # The quantities module to put dimensional units on values import quantities as pq # `earth_model` will be a specific instance of KeplerModel, with its own parameters earth_model = KeplerModel(name = "Kepler's Earth Model", semimajor_axis=149598023 * pq.km, semiminor_axis=149577161 * pq.km, period=timedelta(365, 22118), # Period of Earth's orbit perihelion=datetime(2019, 1, 3, 0, 19), # Time and date of Earth's last perihelion ) """ Explanation: Now we can instantiate a <i>specific model</i> from this class, e.g. one representing the orbital path of Earth (according to Kepler) End of explanation """ # The time right now t = datetime.now() # Predicted distance from the sun, right now r = earth_model.heliocentric_distance(t) print("Heliocentric distance of Earth right now is predicted to be %s" % r.round(1)) """ Explanation: We can use this model to make specific predictions, for example the current distance between Earth and the sun. End of explanation """ # Several score types available in SciUnit from sciunit.scores import BooleanScore, ZScore, RatioScore, PercentScore # etc., etc. """ Explanation: Now let's build a test class that we might use to validate (i.e. 
unit test to produce test scores) with this (and hopefully other) models First, what kind of scores do we want our test to return? End of explanation """ class PositionTest(sciunit.Test): """A test of a planetary position at some specified time""" # This test can only operate on models that implement # the `ProducesOrbitalPosition` capability. required_capabilities = (ProducesOrbitalPosition,) score_type = BooleanScore # This test's 'judge' method will return a BooleanScore. def generate_prediction(self, model): """Generate a prediction from a model""" t = self.observation['t'] # Get the time point from the test's observation x, y = model.get_x_y(t) # Get the predicted x, y coordinates from the model return {'t': t, 'x': x, 'y': y} # Roll this into a model prediction dictionary def compute_score(self, observation, prediction): """Compute a test score based on the agreement between the observation (data) and prediction (model)""" # Compare observation and prediction to get an error measure delta_x = observation['x'] - prediction['x'] delta_y = observation['y'] - prediction['y'] error = np.sqrt(delta_x**2 + delta_y**2) passing = bool(error < 1e5*pq.kilometer) # Turn this into a True/False score score = self.score_type(passing) # Create a sciunit.Score object score.set_raw(error) # Add some information about how this score was obtained score.description = ("Passing score if the prediction is " "within < 100,000 km of the observation") # Describe the scoring logic return score """ Explanation: Code for sciunit.scores on GitHub Here's a first shot a test class for assessing the agreement between predicted and observed positions of orbiting objects. All test classes are subclasses of sciunit.Test. End of explanation """ class StricterPositionTest(PositionTest): # Optional observation units to validate against units = pq.meter # Optional schema for the format of observed data observation_schema = {'t': {'min': 0, 'required': True}, 'x': {'units': True, 'required': True}, 'y': {'units': True, 'required': True}, 'phi': {'required': False}} def validate_observation(self, observation): """Additional checks on the observation""" assert isinstance(observation['t'], datetime) return observation # Optional schema for the format of test parameters params_schema = {'rotate': {'required': False}} # Optional schema for the format of default test parameters default_params = {'rotate': False} def compute_score(self, observation, prediction): """Optionally use additional information to compute model/data agreement""" observation_rotated = observation.copy() if 'phi' in observation: # Project x and y values onto the plane defined by `phi`. observation_rotated['x'] *= cos(observation['phi']) observation_rotated['y'] *= cos(observation['phi']) return super().compute_score(observation_rotated, prediction) """ Explanation: We might want to include extra checks and constraints on observed data, test parameters, or other contingent testing logic. End of explanation """ # A single test instance, best on the test class `StricterPositionTest` combined with # a specific set of observed data (a time and some x, y coordinates) # N.B.: This data is made up for illustration purposes earth_position_test_march = StricterPositionTest(name = "Earth Orbital Data on March 1st, 2019", observation = {'t': datetime(2019, 3, 1), 'x': 7.905e7 * pq.km, 'y': 1.254e8 * pq.km}) """ Explanation: Now we can instantiate a test. 
Each test instance is a combination of the test class, describing the testing logic and required capabilties, plus some <i>'observation'</i>, i.e. data. End of explanation """ # Execute `earth_position_test` against `earth_model` and return a score score = earth_position_test_march.judge(earth_model) # Display the score score """ Explanation: Finally, we can execute this one test against this one model End of explanation """ # Describe the score in plain language score.describe() # What were the prediction and observation used to compute the score? score.prediction, score.observation # What was the raw error before the decision criterion was applied? score.get_raw() """ Explanation: And we can get additional information about the test, including intermediate objects computed in order to generate a score. End of explanation """ # A new test for a new month: same test class, new observation (data) # N.B. I deliberately picked "observed" values that will make the model fail this test earth_position_test_april = StricterPositionTest(name = "Earth Orbital Data on April 1st, 2019", observation = {'t': datetime(2019, 4, 1), 'x': 160000 * pq.km, 'y': 70000 * pq.km}) # A test suite built from both of the tests that we have instantiated earth_position_suite = sciunit.TestSuite([earth_position_test_march, earth_position_test_april], name = 'Earth observations in Spring, 2019') """ Explanation: We may want to bundle many such tests into a TestSuite. This suite may contain test from multiple classes, or simply tests which differ only in the observation (data) used to instantiate them. End of explanation """ # Run the whole suite (two tests) against one model scores = earth_position_suite.judge(earth_model) """ Explanation: We can then test our model against this whole suite of tests End of explanation """ # Display the returned `scores` object scores """ Explanation: Rich HTML output is automatically produced when this score output is summarized End of explanation """ # Just like the Kepler model, but returning a random orbital angle class RandomModel(KeplerModel): def get_position(self, t): r, theta = super().get_position(t) return r, 2*pi*np.random.rand() # A new model instance, using the same parameters but a different underlying model class random_model = RandomModel(name = "Random Earth Model", semimajor_axis=149598023 * pq.km, semiminor_axis=149577161 * pq.km, period=timedelta(365, 22118), # Period of Earth's orbit perihelion=datetime(2019, 1, 3, 0, 19), # Time and date of Earth's last perihelion ) # Run the whole suite (two tests) against two models scores = earth_position_suite.judge([earth_model, random_model]) # Display the returned `scores` object scores """ Explanation: We can then expand this to multiple models End of explanation """ # All the scores for just one model scores[earth_model] # All the scores for just one test scores[earth_position_test_march] """ Explanation: Or extract just a slice: End of explanation """ # A simple model which has some capabilities, # but not the ones needed for the orbital position test class SimpleModel(sciunit.Model, sciunit.capabilities.ProducesNumber): pass simple_model = SimpleModel() # Run the whole suite (two tests) against two models scores = earth_position_suite.judge([earth_model, random_model, simple_model]) """ Explanation: What about models that <i>can't</i> take a certain test? Some models aren't capable (even in principle) of doing what the test is asking of them. 
End of explanation """ # Display the returned `scores` object scores """ Explanation: Incapable models don't fail, they get the equivalent of 'incomplete' grades End of explanation """
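# --- Illustrative sketch, not part of the original notebook ---
# Capabilities are ordinary classes that models inherit from, so a plain isinstance()
# check shows up front which of the models above could take the position test at all.
# Assumes the three model instances defined above are still in scope.
for model in [earth_model, random_model, simple_model]:
    print(type(model).__name__, 'implements ProducesOrbitalPosition:',
          isinstance(model, ProducesOrbitalPosition))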
TeamHG-Memex/eli5
notebooks/Debugging scikit-learn text classification pipeline.ipynb
mit
from sklearn.datasets import fetch_20newsgroups categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med'] twenty_train = fetch_20newsgroups( subset='train', categories=categories, shuffle=True, random_state=42 ) twenty_test = fetch_20newsgroups( subset='test', categories=categories, shuffle=True, random_state=42 ) """ Explanation: Debugging scikit-learn text classification pipeline scikit-learn docs provide a nice text classification tutorial. Make sure to read it first. We'll be doing something similar to it, while taking more detailed look at classifier weights and predictions. 1. Baseline model First, we need some data. Let's load 20 Newsgroups data, keeping only 4 categories: End of explanation """ from sklearn.feature_extraction.text import CountVectorizer from sklearn.linear_model import LogisticRegressionCV from sklearn.pipeline import make_pipeline vec = CountVectorizer() clf = LogisticRegressionCV() pipe = make_pipeline(vec, clf) pipe.fit(twenty_train.data, twenty_train.target); """ Explanation: A basic text processing pipeline - bag of words features and Logistic Regression as a classifier: End of explanation """ from sklearn import metrics def print_report(pipe): y_test = twenty_test.target y_pred = pipe.predict(twenty_test.data) report = metrics.classification_report(y_test, y_pred, target_names=twenty_test.target_names) print(report) print("accuracy: {:0.3f}".format(metrics.accuracy_score(y_test, y_pred))) print_report(pipe) """ Explanation: We're using LogisticRegressionCV here to adjust regularization parameter C automatically. It allows to compare different vectorizers - optimal C value could be different for different input features (e.g. for bigrams or for character-level input). An alternative would be to use GridSearchCV or RandomizedSearchCV. Let's check quality of this pipeline: End of explanation """ import eli5 eli5.show_weights(clf, top=10) """ Explanation: Not bad. We can try other classifiers and preprocessing methods, but let's check first what the model learned using eli5.show_weights function: End of explanation """ # eli5.show_weights(clf, # feature_names=vec.get_feature_names(), # target_names=twenty_test.target_names) """ Explanation: The table above doesn't make any sense; the problem is that eli5 was not able to get feature and class names from the classifier object alone. We can provide feature and target names explicitly: End of explanation """ eli5.show_weights(clf, vec=vec, top=10, target_names=twenty_test.target_names) """ Explanation: The code above works, but a better way is to provide vectorizer instead and let eli5 figure out the details automatically: End of explanation """ eli5.show_prediction(clf, twenty_test.data[0], vec=vec, target_names=twenty_test.target_names) """ Explanation: This starts to make more sense. Columns are target classes. In each column there are features and their weights. Intercept (bias) feature is shown as &lt;BIAS&gt; in the same table. We can inspect features and weights because we're using a bag-of-words vectorizer and a linear classifier (so there is a direct mapping between individual words and classifier coefficients). For other classifiers features can be harder to inspect. Some features look good, but some don't. It seems model learned some names specific to a dataset (email parts, etc.) though, instead of learning topic-specific words. 
Let's check prediction results on an example: End of explanation """ twenty_train = fetch_20newsgroups( subset='train', categories=categories, shuffle=True, random_state=42, remove=['headers', 'footers'], ) twenty_test = fetch_20newsgroups( subset='test', categories=categories, shuffle=True, random_state=42, remove=['headers', 'footers'], ) vec = CountVectorizer() clf = LogisticRegressionCV() pipe = make_pipeline(vec, clf) pipe.fit(twenty_train.data, twenty_train.target); """ Explanation: What can be highlighted in text is highlighted in text. There is also a separate table for features which can't be highlighted in text - &lt;BIAS&gt; in this case. If you hover mouse on a highlighted word it shows you a weight of this word in a title. Words are colored according to their weights. 2. Baseline model, improved data Aha, from the highlighting above it can be seen that a classifier learned some non-interesting stuff indeed, e.g. it remembered parts of email addresses. We should probably clean the data first to make it more interesting; improving model (trying different classifiers, etc.) doesn't make sense at this point - it may just learn to leverage these email addresses better. In practice we'd have to do cleaning yourselves; in this example 20 newsgroups dataset provides an option to remove footers and headers from the messages. Nice. Let's clean up the data and re-train a classifier. End of explanation """ print_report(pipe) """ Explanation: We just made the task harder and more realistic for a classifier. End of explanation """ eli5.show_prediction(clf, twenty_test.data[0], vec=vec, target_names=twenty_test.target_names, targets=['sci.med']) """ Explanation: A great result - we just made quality worse! Does it mean pipeline is worse now? No, likely it has a better quality on unseen messages. It is evaluation which is more fair now. Inspecting features used by classifier allowed us to notice a problem with the data and made a good change, despite of numbers which told us not to do that. Instead of removing headers and footers we could have improved evaluation setup directly, using e.g. GroupKFold from scikit-learn. Then quality of old model would have dropped, we could have removed headers/footers and see increased accuracy, so the numbers would have told us to remove headers and footers. It is not obvious how to split data though, what groups to use with GroupKFold. So, what have the updated classifier learned? (output is less verbose because only a subset of classes is shown - see "targets" argument): End of explanation """ vec = CountVectorizer(stop_words='english') clf = LogisticRegressionCV() pipe = make_pipeline(vec, clf) pipe.fit(twenty_train.data, twenty_train.target) print_report(pipe) eli5.show_prediction(clf, twenty_test.data[0], vec=vec, target_names=twenty_test.target_names, targets=['sci.med']) """ Explanation: Hm, it no longer uses email addresses, but it still doesn't look good: classifier assigns high weights to seemingly unrelated words like 'do' or 'my'. These words appear in many texts, so maybe classifier uses them as a proxy for bias. Or maybe some of them are more common in some of classes. 3. 
Pipeline improvements To help classifier we may filter out stop words: End of explanation """ from sklearn.feature_extraction.text import TfidfVectorizer vec = TfidfVectorizer() clf = LogisticRegressionCV() pipe = make_pipeline(vec, clf) pipe.fit(twenty_train.data, twenty_train.target) print_report(pipe) eli5.show_prediction(clf, twenty_test.data[0], vec=vec, target_names=twenty_test.target_names, targets=['sci.med']) """ Explanation: Looks better, isn't it? Alternatively, we can use TF*IDF scheme; it should give a somewhat similar effect. Note that we're cross-validating LogisticRegression regularisation parameter here, like in other examples (LogisticRegressionCV, not LogisticRegression). TF*IDF values are different from word count values, so optimal C value can be different. We could draw a wrong conclusion if a classifier with fixed regularization strength is used - the chosen C value could have worked better for one kind of data. End of explanation """ vec = TfidfVectorizer(stop_words='english') clf = LogisticRegressionCV() pipe = make_pipeline(vec, clf) pipe.fit(twenty_train.data, twenty_train.target) print_report(pipe) eli5.show_prediction(clf, twenty_test.data[0], vec=vec, target_names=twenty_test.target_names, targets=['sci.med']) """ Explanation: It helped, but didn't have quite the same effect. Why not do both? End of explanation """ vec = TfidfVectorizer(stop_words='english', analyzer='char', ngram_range=(3,5)) clf = LogisticRegressionCV() pipe = make_pipeline(vec, clf) pipe.fit(twenty_train.data, twenty_train.target) print_report(pipe) eli5.show_prediction(clf, twenty_test.data[0], vec=vec, target_names=twenty_test.target_names) """ Explanation: This starts to look good! 4. Char-based pipeline Maybe we can get somewhat better quality by choosing a different classifier, but let's skip it for now. Let's try other analysers instead - use char n-grams instead of words: End of explanation """ vec = TfidfVectorizer(analyzer='char_wb', ngram_range=(3,5)) clf = LogisticRegressionCV() pipe = make_pipeline(vec, clf) pipe.fit(twenty_train.data, twenty_train.target) print_report(pipe) eli5.show_prediction(clf, twenty_test.data[0], vec=vec, target_names=twenty_test.target_names) """ Explanation: It works, but quality is a bit worse. Also, it takes ages to train. It looks like stop_words have no effect now - in fact, this is documented in scikit-learn docs, so our stop_words='english' was useless. But at least it is now more obvious how the text looks like for a char ngram-based classifier. Grab a cup of tea and see how char_wb looks like: End of explanation """ from sklearn.feature_extraction.text import HashingVectorizer from sklearn.linear_model import SGDClassifier vec = HashingVectorizer(stop_words='english', ngram_range=(1,2)) clf = SGDClassifier(n_iter=10, random_state=42) pipe = make_pipeline(vec, clf) pipe.fit(twenty_train.data, twenty_train.target) print_report(pipe) """ Explanation: The result is similar, with some minor changes. Quality is better for unknown reason; maybe cross-word dependencies are not that important. 5. Debugging HashingVectorizer To check that we can try fitting word n-grams instead of char n-grams. But let's deal with efficiency first. To handle large vocabularies we can use HashingVectorizer from scikit-learn; to make training faster we can employ SGDCLassifier: End of explanation """ eli5.show_prediction(clf, twenty_test.data[0], vec=vec, target_names=twenty_test.target_names, targets=['sci.med']) """ Explanation: It was super-fast! 
We're not choosing regularization parameter using cross-validation though. Let's check what model learned: End of explanation """ eli5.show_weights(clf, vec=vec, top=10, target_names=twenty_test.target_names) """ Explanation: Result looks similar to CountVectorizer. But with HashingVectorizer we don't even have a vocabulary! Why does it work? End of explanation """ from eli5.sklearn import InvertableHashingVectorizer import numpy as np ivec = InvertableHashingVectorizer(vec) sample_size = len(twenty_train.data) // 10 X_sample = np.random.choice(twenty_train.data, size=sample_size) ivec.fit(X_sample); eli5.show_weights(clf, vec=ivec, top=20, target_names=twenty_test.target_names) """ Explanation: Ok, we don't have a vocabulary, so we don't have feature names. Are we out of luck? Nope, eli5 has an answer for that: InvertableHashingVectorizer. It can be used to get feature names for HahshingVectorizer without fitiing a huge vocabulary. It still needs some data to learn words -> hashes mapping though; we can use a random subset of data to fit it. End of explanation """ rutgers_example = [x for x in twenty_train.data if 'rutgers' in x.lower()][0] print(rutgers_example) """ Explanation: There are collisions (hover mouse over features with "..."), and there are important features which were not seen in the random sample (FEATURE[...]), but overall it looks fine. "rutgers edu" bigram feature is suspicious though, it looks like a part of URL. End of explanation """ eli5.show_prediction(clf, rutgers_example, vec=vec, target_names=twenty_test.target_names, targets=['soc.religion.christian']) """ Explanation: Yep, it looks like model learned this address instead of learning something useful. End of explanation """
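# --- Illustrative sketch, not part of the original notebook ---
# One hedged way to act on this finding: treat the leaked address tokens as extra
# stop words and re-fit the same hashing pipeline, then re-inspect the weights.
# The two extra tokens below are illustrative, not an exhaustive clean-up.
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

extra_stop_words = list(ENGLISH_STOP_WORDS) + ['rutgers', 'edu']
vec = HashingVectorizer(stop_words=extra_stop_words, ngram_range=(1, 2))
clf = SGDClassifier(n_iter=10, random_state=42)
pipe = make_pipeline(vec, clf)
pipe.fit(twenty_train.data, twenty_train.target)
print_report(pipe)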
DavidDobr/icef_thesis
dobrinskiy_thesis_v2_october.ipynb
gpl-3.0
# You should be running python3 import sys print(sys.version) import pandas as pd # http://pandas.pydata.org/ import numpy as np # http://numpy.org/ import statsmodels.api as sm # http://statsmodels.sourceforge.net/stable/index.html import statsmodels.formula.api as smf import statsmodels print("Pandas Version: {}".format(pd.__version__)) # pandas version print("StatsModels Version: {}".format(statsmodels.__version__)) # StatsModels version """ Explanation: This is a Jupyter notebook for David Dobrinskiy's HSE Thesis How Venture Capital Affects Startups' Success End of explanation """ # load the pwc dataset from azure from azureml import Workspace ws = Workspace() ds = ws.datasets['pwc_moneytree.csv'] frame = ds.to_dataframe() frame.head() del frame['Grand Total'] frame.columns = ['year', 'type', 'q1', 'q2', 'q3', 'q4'] frame['year'] = frame['year'].fillna(method='ffill') frame.head() """ Explanation: Let us look at the dynamics of total US VC investment End of explanation """ deals_df = frame.iloc[0::2] investments_df = frame.iloc[1::2] # once separated, 'type' field is identical within each df # let's delete it del deals_df['type'] del investments_df['type'] deals_df.head() investments_df.head() def unstack_to_series(df): """ Takes q1-q4 in a dataframe and converts it to a series input: a dataframe containing ['q1', 'q2', 'q3', 'q4'] ouput: a pandas series """ quarters = ['q1', 'q2', 'q3', 'q4'] d = dict() for i, row in df.iterrows(): for q in quarters: key = str(int(row['year'])) + q d[key] = row[q] # print(key, q, row[q]) return pd.Series(d) deals = unstack_to_series(deals_df ).dropna() investments = unstack_to_series(investments_df).dropna() def string_to_int(money_string): numerals = [c if c.isnumeric() else '' for c in money_string] return int(''.join(numerals)) # convert deals from string to integers deals = deals.apply(string_to_int) deals.tail() # investment in billions USD # converts to integers - which is ok, since data is in dollars investments_b = investments.apply(string_to_int) # in python3 division automatically converts numbers to floats, we don't loose precicion investments_b = investments_b / 10**9 # round data to 2 decimals investments_b = investments_b.apply(round, ndigits=2) investments_b.tail() """ Explanation: Deals and investments are in alternating rows of frame, let's separate them End of explanation """ import matplotlib.pyplot as plt # http://matplotlib.org/ import matplotlib.patches as mpatches import matplotlib.ticker as ticker %matplotlib inline # change matplotlib inline display size # import matplotlib.pylab as pylab # pylab.rcParams['figure.figsize'] = (8, 6) # that's default image size for this interactive session fig, ax1 = plt.subplots() ax1.set_title("VC historical trend (US Data)") t = range(len(investments_b)) # need to substitute tickers for years later width = t[1]-t[0] y1 = investments_b # create filled step chart for investment amount ax1.bar(t, y1, width=width, facecolor='0.80', edgecolor='', label = 'Investment ($ Bln.)') ax1.set_ylabel('Investment ($ Bln.)') # set up xlabels with years years = [str(year)[:-2] for year in deals.index][::4] # get years without quarter ax1.set_xticks(t[::4]) # set 1 tick per year ax1.set_xticklabels(years, rotation=50) # set tick names ax1.set_xlabel('Year') # name X axis # format Y1 tickers to $ billions formatter = ticker.FormatStrFormatter('$%1.0f Bil.') ax1.yaxis.set_major_formatter(formatter) for tick in ax1.yaxis.get_major_ticks(): tick.label1On = False tick.label2On = True # create second Y2 axis for Num 
of Deals ax2 = ax1.twinx() y2 = deals ax2.plot(t, y2, color = 'k', ls = '-', label = 'Num. of Deals') ax2.set_ylabel('Num. of Deals') # add annotation bubbles ax2.annotate('1997-2000 dot-com bubble', xy=(23, 2100), xytext=(6, 1800), bbox=dict(boxstyle="round4", fc="w"), arrowprops=dict(arrowstyle="-|>", connectionstyle="arc3,rad=0.2", fc="w"), ) ax2.annotate('2007-08 Financial Crisis', xy=(57, 800), xytext=(40, 1300), bbox=dict(boxstyle="round4", fc="w"), arrowprops=dict(arrowstyle="-|>", connectionstyle="arc3,rad=-0.2", fc="w"), ) # add legend ax1.legend(loc="best") ax2.legend(bbox_to_anchor=(0.95, 0.88)) fig.tight_layout() # solves cropping problems when saving png fig.savefig('vc_trend_3.png', dpi=250) plt.show() def tex(df): """ Print dataframe contents in latex-ready format """ for line in df.to_latex().split('\n'): print(line) ds = ws.datasets['ipo_mna.csv'] frame = ds.to_dataframe() frame.tail() frame = frame.iloc[:-2] frame = frame.set_index('q') frame """ Explanation: Plot data from MoneyTree report http://www.pwcmoneytree.com End of explanation """ ds = ws.datasets['wsj_unicorns.csv'] frame = ds.to_dataframe() frame.tail() """ Explanation: WSJ Unicorns End of explanation """ # data from Founder Collective # http://www.foundercollective.com/ ds = ws.datasets['most_funded_ipo.csv'] frame = ds.to_dataframe() most_funded = frame.copy() most_funded.tail() from datetime import datetime most_funded['Firm age'] = datetime.now().year - most_funded['Founded'] most_funded['Years to IPO'] = most_funded['IPO Year'] - most_funded['Founded'] # extract all funding rounds # R1, R2, ... are funding rounds (Raising VC) most_funded.iloc[:,2:22:2].tail() # [axis = 1] to sum by row instead of by-column most_funded['VC'] = most_funded.iloc[:,2:22:2].sum(axis=1) # VC data is in MILLIONS of $ most_funded['IPO Raise'].head(3) # convert IPO string to MILLIONS of $ converter = lambda x: round(int((x.replace(',',''))[1:])/10**6, 2) most_funded['IPO Raise'] = most_funded['IPO Raise' ].apply(converter) most_funded['Current Market Cap'] = most_funded['Current Market Cap '].apply(converter) del most_funded['Current Market Cap '] most_funded['IPO Raise'].head(3) # MILLIONS of $ most_funded['VC and IPO'] = most_funded['VC'] + most_funded['IPO Raise'] # Price in ordinary $ most_funded['$ Price change'] = most_funded['Current Share Price'] - most_funded['IPO Share Price'] most_funded['% Price change'] = round(most_funded['$ Price change'] / most_funded['IPO Share Price'], 2) """ Explanation: Most funded IPO-reaching US startups End of explanation """ mask = most_funded['Firm'] == 'Facebook' most_funded[mask] # removing Facebook most_funded = most_funded[~mask] # look at all the columns [print(c) for c in most_funded.columns] None cols = most_funded.columns[:2].append(most_funded.columns[22:]) cols # remove individual funding rounds - we'll only analyze aggregates most_funded = most_funded[cols] from matplotlib.ticker import FuncFormatter x = most_funded['Firm'] y = sorted(most_funded['VC'], reverse=True) def millions(x, pos): 'The two args are the value and tick position' return '$%1.0fM' % (x) formatter = FuncFormatter(millions) fig, ax = plt.subplots(figsize=(6,4), dpi=200) ax.yaxis.set_major_formatter(formatter) #plt.figure(figsize=(6,4), dpi=200) # Create a new subplot from a grid of 1x1 # plt.subplot(111) plt.title("Total VC raised for unicorns") plt.bar(range(len(x)+2), [0,0]+y, width = 1, facecolor='0.80', edgecolor='k', linewidth=0.3) plt.ylabel('VC raised per firm\n(before IPO)') # plt.set_xticks(x) # set 1 
tick per year plt.xlabel('Firms') plt.xticks([]) plt.show() cols = ['Firm', 'Sector', 'VC', 'Current Market Cap'] df = most_funded[cols] df.set_index('Firm', inplace = True) df.head(2) tmp = df.groupby('Sector').sum().applymap(int) tmp.index += ' Total' tmp.sort_index(ascending=False, inplace = True) tmp tmp2 = df.groupby('Sector').mean().applymap(int) tmp2.index += ' Average' tmp2.sort_index(ascending=False, inplace = True) tmp2 tmp.append(tmp2).applymap(lambda x: "${:,}".format(x)) tex(tmp.append(tmp2).applymap(lambda x: "${:,}".format(x))) most_funded['Mult'] = (most_funded['Current Market Cap'] / most_funded['VC']).replace([np.inf, -np.inf], np.nan) most_funded.head() tex(most_funded.iloc[:,list(range(8))+list(range(11,20))].head().T) most_funded.head() most_funded['Current Market Cap'] least_20 = most_funded.dropna().sort_values('VC')[1:21] least_20 = least_20[['VC', 'Current Market Cap','Mult']].mean() least_20 most_20 = most_funded.dropna().sort_values('VC')[-20:] most_20 = most_20[['VC', 'Current Market Cap','Mult']].mean() most_20 pd.DataFrame([most_20, least_20], index=['most_20', 'least_20']).applymap(lambda x: round(x, 2)) tex(pd.DataFrame([most_20, least_20], index=['most_20', 'least_20']).applymap(lambda x: round(x, 2))) cols = ['Sector', 'VC', '% Price change', 'Firm age', 'Years to IPO', 'Current Market Cap', 'Mult'] df = most_funded[cols] df.columns = ['Sector', 'VC', 'Growth', 'Age', 'yearsIPO', 'marketCAP', 'Mult'] df.head(2) res = smf.ols(formula='Growth ~ VC + yearsIPO + C(Sector)', data=df).fit() print(res.summary()) res = smf.ols(formula='Growth ~ VC + Age + yearsIPO + C(Sector)', data=df).fit() print(res.summary()) print(res.summary().as_latex()) res = smf.ols(formula='Mult ~ VC + yearsIPO + C(Sector)', data=df).fit() print(res.summary()) """ Explanation: Facebook is an extreme outlier in venture capital, let's exclude it from our analysis End of explanation """
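# --- Illustrative sketch, not part of the original notebook ---
# VC raised is heavily right-skewed, so a hedged robustness check is to re-estimate
# the growth regression above with log(VC) on the right-hand side, keeping only
# firms with positive VC so the log is defined. Assumes `df` with the columns built
# above is still in scope.
df_logvc = df[df['VC'] > 0]
res_log = smf.ols(formula='Growth ~ np.log(VC) + yearsIPO + C(Sector)', data=df_logvc).fit()
print(res_log.summary())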
lknelson/DH-Institute-2017
06-Literary Distinction (Probably)/Literary Patterns (Probably).ipynb
bsd-2-clause
import nltk nltk.download('stopwords') from sklearn.naive_bayes import MultinomialNB import pandas # Get texts of interest that belong to identifiably different categories unladen_swallow = 'high air-speed velocity' swallow_grasping_coconut = 'low air-speed velocity' # Transform them into a format scikit-learn can use columns = ['high','low','air-speed','velocity'] indices = ['unladen', 'coconut'] dtm = [[1,0,1,1],[0,1,1,1]] dtm_df = pandas.DataFrame(dtm, columns = columns, index = indices) dtm_df # Train the Naive Bayes classifier nb = MultinomialNB() nb.fit(dtm,indices) # Make a prediction! unknown_swallow = "high velocity" unknown_features = [1,0,0,1] nb.predict([unknown_features]) """ Explanation: <h1 align='center'>It Starts with a Research Question...</h1> <img src='Long, So 263, Fig 8.png' width="66%" height="66%"> <img src='Long, So 257, Fig 5.png' width="66%" height="66%"> Literary Distinction (Probably) <ul><li>Preview</li> <li>Review</li> <li>Pre-Processing</li> <ul> <li>Import Corpus</li> <li>Stop Words</li> <li>Feature Selection</li></ul> <li>Classification</li> <ul> <li>Training, Feature Importance, & Prediction</li> <li>Literary Distinction</li> <li>Extra: Cross-Validation</li></ul> </ul> Although Long and So's study of modernist haiku motivates this lesson, a substantial portion of their corpus remains under copyright so they have not made it available publicly. Instead we will apply their methods to the corpus distributed by Ted Underwood and Jordan Sellers in support of their own literary historical study on nineteenth- and early-twentieth century volumes of poetry that were reviewed in prestigious magazines versus not at all. (The idea being that even a negative review indicates valuable, critical engagement.) In essence, our task will be to learn the vocabulary of literary prestige, rather than that of haiku. We will however be deliberate in using Long and So's methods, since they reflect assumptions about language that are more appropriate to a general introduction. 0. Preview End of explanation """ # Read Moby Dick moby_string = open('Melville - Moby Dick.txt').read() # Inspect the text moby_string # Make the text lower case moby_lower = moby_string.lower() # Tokenize Moby Dick moby_tokens = moby_lower.split() # Check out the tokens moby_tokens # Just how long is Moby Dick anyway? len(moby_tokens) # Create a list comprehension, including an 'if' statement just_whales = [token for token in moby_tokens if token=='whale'] # Hast seen the White Whale? just_whales # Make a new list maritime = ['ship','harpoon','sail'] # Multiply it maritime * 2 # Another list whaling = ['whiteness','whale','ambergris'] # Concatenate maritime + whaling """ Explanation: 1. Review End of explanation """ import os # Assign file paths to each set of poems review_path = 'poems/reviewed/' random_path = 'poems/random/' # Get lists of text files in each directory review_files = os.listdir(review_path) random_files = os.listdir(random_path) # Inspect review_files # Read-in texts as strings from each location review_texts = [open(review_path+file_name).read() for file_name in review_files] random_texts = [open(random_path+file_name).read() for file_name in random_files] # Inspect review_texts[0] # Collect all texts in single list all_texts = review_texts + random_texts # Get all file names together all_file_names = review_files + random_files # Keep track of classes with labels all_labels = ['reviewed'] * len(review_texts) + ['random'] * len(random_texts) ## EX. 
How many file names are listed in the directory for reviewed texts? ## EX. How many texts got read into 'review_texts'? Does it match the number of files in the directory? """ Explanation: 2. Pre-Process In their paper, Long and So describe their pre-processing as consisting of three major steps: stop word removal, lemmatization of nouns, and feature selection (based on document frequency). In this workshop, we will focus on the first and third steps, since they can be integrated seamlessly with our workflow and Underwood and Sellers use them as well. Lemmatization -- the transformation of words into their dictionary forms; e.g. plural nouns become singular -- is particularly useful to Long and So, since they partly aim to study imagery. That is, they find it congenial to collapse the words <i>mountains</i> and <i>mountain</i> into the same token, since they express a similar image. For an introduction to Lemmatization (and a related technique, Stemming), see NLTK: http://www.nltk.org/book/ch03.html#sec-normalizing-text Import Corpus Note that due to issues of copyright, volumes' word order has not been retained, although their total word counts have been. Fortunately, our methods do not require word-order information. Underwood and Sellers's literary corpus has been divided into three folders: "reviewed", "random", "canonic". (The last of these are canonic poets but who did not have the opportunity to be reviewed, such as Emily Dickinson.) End of explanation """ # By default scikit-learn uses this list of English stop words from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS # Inspect ENGLISH_STOP_WORDS # How many are here? len(ENGLISH_STOP_WORDS) # NLTK has its own collection of stop words from nltk.corpus import stopwords # Pull up NLTK's list of English-language stop words stopwords.words('english') # How many stop words are in the list? len(stopwords.words('english')) # NLTK has stopwords for many Western languages stopwords.words('spanish') tokenized_sentence = ['what', 'is', 'the', 'air-speed', 'velocity', 'of', 'an', 'unladen', 'swallow'] # Remove stopwords from tokenized sentence [word for word in tokenized_sentence if word not in stopwords.words('english')] ## Q. Stop words are typically the most frequent words in a language, yet do not convey semantic meaning. ## Does this make sense based on the words in NLTK's list of English stop words? ## What about other languages with which you are familar? stopword_languages = ['danish', 'dutch', 'english', 'finnish', 'french', 'german', 'hungarian', 'italian',\ 'norwegian', 'portuguese', 'russian', 'spanish', 'swedish', 'turkish'] ## EX. Use either sklearn or NLTK's stopword list to remove those words from the 'token_list' below. ## EX. How many tokens did you remove from the 'token_list' in total? What percent were removed? ## CHALLENGE. Stop words are often instrumental in language detection for unknown texts. ## How might you write a program to do this? token_list = ['in', 'a', 'station', 'of', 'the', 'metro',\ 'the', 'apparition', 'of', 'these', 'faces', 'in', 'the', 'crowd',\ 'petals', 'on', 'a', 'wet', 'black', 'bough'] """ Explanation: Stop Words <i>Stop words</i>, sometimes refered to as <i>function words</i>, include articles, prepositions, pronouns, and conjunctions among others. Although their frequencies encode information about textual features like authorship, they do not convey semantic meanings and are often removed before analysis. 
End of explanation """ from sklearn.feature_extraction.text import CountVectorizer # Intitialize the function that will transform our list of texts to a DTM # 'min_df' and 'max_features' are arguments that enable flexible feature selection # 'binary' tells CountVectorizer only to record whether a word appeared in a text or not cv = CountVectorizer(stop_words = 'english', min_df=180, binary = True, max_features = None) # Transform our texts to DTM cv.fit_transform(all_texts) # Transform our texts to a dense DTM cv.fit_transform(all_texts).toarray() # Assign this to a variable dtm = cv.fit_transform(all_texts).toarray() # Get the column headings cv.get_feature_names() # Assign to a variable feature_list = cv.get_feature_names() # Place this in a dataframe for readability dtm_df = pandas.DataFrame(dtm, columns = feature_list, index = all_file_names) # Check out the dataframe dtm_df # Get the dataframe's dimensions (# texts, # features) dtm_df.shape ## EX. Re-initialize the CountVectorizer function above with the the argument min_df = 1. ## How many unique words are in there in the total vocabulary of the corpus? ## EX. Repeat the exercise above with min_df = 360. (That is, words are only included if they appear ## in at least half of all documents.) What is the size of the vocabulary now? ## Does the list of these very common words look as you would expect? """ Explanation: Feature Selection At this point, we transform our texts into a Document-Term Matrix in the same manner we have employed previously. However, it is important to note that neither Long and So nor Underwood and Sellers use all of the words that appear in their respective corpora when constructing their matrices. The process of choosing which words to comprise the columns is referred to as <i>feature selection</i>. While there are several approaches one may take when selecting features, both of the literary studies under consideration use <i>document frequency</i> as the deciding criterion. The intuition is that a word that appears in a single text out of hundreds will not carry much weight when trying to determine the text's class membership. In order to be selected as a feature, Long and So require that words appear in at least 2 texts, whereas Underwood and Sellers require that a word appear in about a quarter of all texts. Although this is quite a large difference (a minimum of 2 texts vs. ~180 texts), it perhaps makes sense since the texts are of very different lengths: individual haiku vs entire volumes of poetry. The latter will have much greater overlap in its vocabulary. The process of feature selection is intimately tied to the object under study and the statistical model chosen. End of explanation """ from sklearn.naive_bayes import MultinomialNB # Train the classifier and assign it to a variable nb = MultinomialNB() nb.fit(dtm, all_labels) # Hand-waving the underlying statistics here... 
def most_informative_features(text_class, vectorizer = cv, classifier = nb, top_n = 10): import numpy as np feature_names = vectorizer.get_feature_names() class_index = np.where(classifier.classes_==(text_class))[0][0] class_prob_distro = np.exp(classifier.feature_log_prob_[class_index]) alt_class_prob_distro = np.exp(classifier.feature_log_prob_[1 - class_index]) odds_ratios = class_prob_distro / alt_class_prob_distro odds_with_fns = sorted(zip(odds_ratios, feature_names), reverse = True) return odds_with_fns[:top_n] # Returns feature name and odds ratio for a given class most_informative_features('reviewed') # Similarly, for words that indicate 'random' class membership most_informative_features('random') # Let's load up two poems that aren't in the training set and make predictions dickinson_canonic = """Because I could not stop for Death โ€“ He kindly stopped for me โ€“ The Carriage held but just Ourselves โ€“ And Immortality. We slowly drove โ€“ He knew no haste And I had put away My labor and my leisure too, For His Civility โ€“ We passed the School, where Children strove At Recess โ€“ in the Ring โ€“ We passed the Fields of Gazing Grain โ€“ We passed the Setting Sun โ€“ Or rather โ€“ He passed us โ€“ The Dews drew quivering and chill โ€“ For only Gossamer, my Gown โ€“ My Tippet โ€“ only Tulle โ€“ We paused before a House that seemed A Swelling of the Ground โ€“ The Roof was scarcely visible โ€“ The Cornice โ€“ in the Ground โ€“ Since then โ€“ โ€˜tis Centuries โ€“ and yet Feels shorter than the Day I first surmised the Horsesโ€™ Heads Were toward Eternity โ€“ """ anthem_patriotic = """O! say can you see, by the dawn's early light, What so proudly we hailed at the twilight's last gleaming, Whose broad stripes and bright stars through the perilous fight, O'er the ramparts we watched, were so gallantly streaming? And the rockets' red glare, the bombs bursting in air, Gave proof through the night that our flag was still there; O! say does that star-spangled banner yet wave O'er the land of the free and the home of the brave?""" # Transform these into DTMs with the same feature-columns as previously unknown_dtm = cv.transform([dickinson_canonic,anthem_patriotic]).toarray() # What does the classifier think? nb.predict(unknown_dtm) # Although our classification is binary, Bayes theorem assigns # a probability of membership in either category # Just how confident is our classifier of its predictions? nb.predict_proba(unknown_dtm) ## Q. What kinds of patterns do you notice among the 'most informative features'? ## Try looking at the top fifty most informative words for each category. """ Explanation: 3. Classification Training, Feature Importance, and Prediction Long and So selected a classification algorithm that specifically relies on <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem">Bayes' Theorem</a> to model relationships between textual features and categories in our corpus of poetry volumes. (See link for more information about the method and its assumptions.) Two ways that we learn about the model are its feature weights and predictions on new texts. The algorithm can explicity report to us which direction each word leans category-wise and how strongly. Based on those weights, it makes further predictions about the valences of previously unseen poetry volumes. End of explanation """ ## EX. Import and process the 'canonic' (albeit unreviewed) volumes of poetry. ## Use the poetry classifier to predict whether they might have been reviewed. ## Does the output make sense? 
Is it consistent with Underwood and Sellers's findings? canonic_path = 'poems/canonic/' """ Explanation: Literary Distinction In their study of critical taste, Underwood and Sellers find not only that literary standards change very slowly, but that contemporary evaluations of 'canonicity' resemble those of the nineteenth century. In order to test this idea, the authors trained a classifier on nineteenth- and early twentieth-century volumes of poetry that received reviews in a prestigious magazine versus those that didn't. The authors then used the classifier to predict a category for volumes of poetry that went unreviewed, in several cases because they were unpublished, but are now included in Norton anthologies. How closely does critical evaluation today match that of a century ago? End of explanation """ # Randomize the order of our texts import numpy randomized_review = numpy.random.permutation(review_texts) randomized_random = numpy.random.permutation(random_texts) # We'll train our classifier on the first 90% of texts in the randomized list # Then, we'll test it using the last 10% training_set = list(randomized_review[:324]) + list(randomized_random[:324]) test_set = list(randomized_review[324:]) + list(randomized_random[324:]) training_labels = ['reviewed'] * 324 + ['random'] * 324 test_labels = ['reviewed'] * 36 + ['random'] * 36 # Transform training and test texts into DTMs # Note that 'min_df' has been adjusted to one quarter of the size of the training set from sklearn.feature_extraction.text import CountVectorizer cv = CountVectorizer(stop_words = 'english', min_df = 162, binary=True) training_dtm = cv.fit_transform(training_set) test_dtm = cv.transform(test_set) # Train, Predict, Evaluate from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import accuracy_score nb = MultinomialNB() nb.fit(training_dtm, training_labels) predictions = nb.predict(test_dtm) accuracy_score(predictions, test_labels) ## CHALLENGE: In fact, when Underwood and Sellers cross-validate, they do so by setting aside a single ## author's texts (one or more) from the training set and making a prediction for that author alone. ## After doing this for all authors, they tally the number of texts that were correctly predicted ## to calculate their overall accuracy. Implement this. """ Explanation: Extra: Cross-Validation Just how good is our classifier? We can evaluate it by randomly selecting texts from each category and setting them aside before training. We then see how well the classifier predicts their (known) categories. Remember that if the classifier is trying to predict membership for just two categories, we would expect it to be correct about 50% of the time based on random chance. As a rule of thumb, if this kind of classifier has 65% accuracy or better under cross-validation, it has often identified a meaningful pattern. End of explanation """
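# --- Illustrative sketch, not part of the original notebook ---
# A hedged outline of the leave-one-author-out challenge above. The author-extraction
# rule is a placeholder assumption: it treats the part of each file name before the
# first underscore as an author ID, which may not match the corpus's real naming
# scheme and should be adjusted accordingly.
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

author_ids = [name.split('_')[0] for name in all_file_names]
dtm_all = cv.fit_transform(all_texts)
logo_scores = cross_val_score(MultinomialNB(), dtm_all, all_labels,
                              groups=author_ids, cv=LeaveOneGroupOut())
print('Mean accuracy across held-out authors: {:.3f}'.format(logo_scores.mean()))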
brclark-usgs/flopy
examples/Notebooks/flopy3_ZoneBudget_example.ipynb
bsd-3-clause
%matplotlib inline import os import sys import platform import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import pandas as pd import flopy print(sys.version) print('numpy version: {}'.format(np.__version__)) print('matplotlib version: {}'.format(mpl.__version__)) print('pandas version: {}'.format(pd.__version__)) print('flopy version: {}'.format(flopy.__version__)) # Set path to example datafiles loadpth = os.path.join('..', 'data', 'zonbud_examples') cbc_f = os.path.join(loadpth, 'freyberg_mlt', 'freyberg.gitcbc') """ Explanation: FloPy ZoneBudget Example This notebook demonstrates how to use the ZoneBudget class to extract budget information from the cell by cell budget file using an array of zones. First set the path and import the required packages. The flopy path doesn't have to be set if you install flopy from a binary installer. If you want to run this notebook, you have to set the path to your own flopy path. End of explanation """ from flopy.utils import read_zbarray zone_file = os.path.join(loadpth, 'zonef_mlt') zon = read_zbarray(zone_file) nlay, nrow, ncol = zon.shape fig = plt.figure(figsize=(10, 4)) for lay in range(nlay): ax = fig.add_subplot(1, nlay, lay+1) im = ax.pcolormesh(zon[lay, :, :]) cbar = plt.colorbar(im) plt.gca().set_aspect('equal') plt.show() np.unique(zon) """ Explanation: Read File Containing Zones Using the read_zbarray utility, we can import zonebudget-style array files. End of explanation """ # Create a ZoneBudget object and get the budget record array zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=(0, 1096)) zb.get_budget() # Get a list of the unique budget record names zb.get_record_names() # Look at a subset of fluxes names = ['RECHARGE_IN', 'ZONE_1_IN', 'ZONE_3_IN'] zb.get_budget(names=names) # Look at fluxes in from zone 2 names = ['RECHARGE_IN', 'ZONE_1_IN', 'ZONE_3_IN'] zones = ['ZONE_2'] zb.get_budget(names=names, zones=zones) # Look at all of the mass-balance records names = ['TOTAL_IN', 'TOTAL_OUT', 'IN-OUT', 'PERCENT_DISCREPANCY'] zb.get_budget(names=names) """ Explanation: Extract Budget Information from ZoneBudget Object At the core of the ZoneBudget object is a numpy structured array. The class provides some wrapper functions to help us interogate the array and save it to disk. End of explanation """ cmd = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=(0, 0)) cfd = cmd / 35.3147 inyr = (cfd / (250 * 250)) * 365 * 12 cmdbud = cmd.get_budget() cfdbud = cfd.get_budget() inyrbud = inyr.get_budget() names = ['RECHARGE_IN'] rowidx = np.in1d(cmdbud['name'], names) colidx = 'ZONE_1' print('{:,.1f} cubic meters/day'.format(cmdbud[rowidx][colidx][0])) print('{:,.1f} cubic feet/day'.format(cfdbud[rowidx][colidx][0])) print('{:,.1f} inches/year'.format(inyrbud[rowidx][colidx][0])) cmd is cfd """ Explanation: Convert Units The ZoneBudget class supports the use of mathematical operators and returns a new copy of the object. End of explanation """ aliases = {1: 'SURF', 2:'CONF', 3: 'UFA'} zb = flopy.utils.ZoneBudget(cbc_f, zon, totim=[1097.], aliases=aliases) zb.get_budget() """ Explanation: Alias Names A dictionary of {zone: "alias"} pairs can be passed to replace the typical "ZONE_X" fieldnames of the ZoneBudget structured array with more descriptive names. 
End of explanation """ zon = np.ones((nlay, nrow, ncol), np.int) zon[1, :, :] = 2 zon[2, :, :] = 3 aliases = {1: 'SURF', 2:'CONF', 3: 'UFA'} zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=None, totim=None, aliases=aliases) df = zb.get_dataframes() print(df.head()) print(df.tail()) """ Explanation: Return the Budgets as a Pandas DataFrame Set kstpkper and totim keyword args to None (or omit) to return all times. The get_dataframes() method will return a DataFrame multi-indexed on totim and name. End of explanation """ dateidx1 = 1092. dateidx2 = 1097. names = ['RECHARGE_IN', 'WELLS_OUT', 'CONSTANT_HEAD'] zones = ['SURF', 'CONF'] df = zb.get_dataframes(names=names) df.loc[(slice(dateidx1, dateidx2), slice(None)), :][zones] """ Explanation: Slice the multi-index dataframe to retrieve a subset of the budget. NOTE: We can now pass "names" directly to the get_dataframes() method to return a subset of reocrds. By omitting the "_IN" or "_OUT" suffix we get both. End of explanation """ dateidx1 = 1092. dateidx2 = 1097. zones = ['SURF'] # Pull out the individual records of interest rech = df.loc[(slice(dateidx1, dateidx2), ['RECHARGE_IN']), :][zones] pump = df.loc[(slice(dateidx1, dateidx2), ['WELLS_OUT']), :][zones] # Remove the "record" field from the index so we can # take the difference of the two DataFrames rech = rech.reset_index() rech = rech.set_index(['totim']) rech = rech[zones] pump = pump.reset_index() pump = pump.set_index(['totim']) pump = pump[zones] * -1 # Compute pumping as a percentage of recharge pump_as_pct = (pump / rech) * 100. pump_as_pct """ Explanation: Look at pumpage (WELLS_OUT) as a percentage of recharge (RECHARGE_IN) End of explanation """ dateidx1 = pd.Timestamp('1972-12-01') dateidx2 = pd.Timestamp('1972-12-06') names = ['RECHARGE_IN', 'WELLS_OUT', 'CONSTANT_HEAD'] zones = ['SURF', 'CONF'] df = zb.get_dataframes(start_datetime='1970-01-01', timeunit='D', names=names) df.loc[(slice(dateidx1, dateidx2), slice(None)), :][zones] """ Explanation: Pass start_datetime and timeunit keyword arguments to return a dataframe with a datetime multi-index End of explanation """ df = zb.get_dataframes(index_key='kstpkper') df.head() """ Explanation: Pass index_key to indicate which fields to use in the multi-index (defualt is "totim"; valid keys are "totim" and "kstpkper") End of explanation """ zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=[(0, 0), (0, 1096)]) zb.to_csv(os.path.join(loadpth, 'zonbud.csv')) # Read the file in to see the contents fname = os.path.join(loadpth, 'zonbud.csv') try: import pandas as pd print(pd.read_csv(fname).to_string(index=False)) except: with open(fname, 'r') as f: for line in f.readlines(): print('\t'.join(line.split(','))) """ Explanation: Write Budget Output to CSV We can write the resulting recarray to a csv file with the .to_csv() method of the ZoneBudget object. End of explanation """ zon = np.ones((nlay, nrow, ncol), np.int) zon[1, :, :] = 2 zon[2, :, :] = 3 aliases = {1: 'SURF', 2:'CONF', 3: 'UFA'} zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=None, totim=None, aliases=aliases) cfd = zb.get_budget(names=['STORAGE', 'WELLS'], zones=['SURF', 'UFA'], net=True) cfd df = zb.get_dataframes(names=['STORAGE', 'WELLS'], zones=['SURF', 'UFA'], net=True) df.head(6) """ Explanation: Net Budget Using the "net" keyword argument, we can request a net budget for each zone/record name or for a subset of zones and record names. Note that we can identify the record names we want without the added "_IN" or "_OUT" string suffix. 
End of explanation """ def tick_label_formatter_comma_sep(x, pos): return '{:,.0f}'.format(x) def volumetric_budget_bar_plot(values_in, values_out, labels, **kwargs): if 'ax' in kwargs: ax = kwargs.pop('ax') else: ax = plt.gca() x_pos = np.arange(len(values_in)) rects_in = ax.bar(x_pos, values_in, align='center', alpha=0.5) x_pos = np.arange(len(values_out)) rects_out = ax.bar(x_pos, values_out, align='center', alpha=0.5) plt.xticks(list(x_pos), labels) ax.set_xticklabels(ax.xaxis.get_majorticklabels(), rotation=90) ax.get_yaxis().set_major_formatter(mpl.ticker.FuncFormatter(tick_label_formatter_comma_sep)) ymin, ymax = ax.get_ylim() if ymax != 0: if abs(ymin) / ymax < .33: ymin = -(ymax * .5) else: ymin *= 1.35 else: ymin *= 1.35 plt.ylim([ymin, ymax * 1.25]) for i, rect in enumerate(rects_in): label = '{:,.0f}'.format(values_in[i]) height = values_in[i] x = rect.get_x() + rect.get_width() / 2 y = height + (.02 * ymax) vertical_alignment = 'bottom' horizontal_alignment = 'center' ax.text(x, y, label, ha=horizontal_alignment, va=vertical_alignment, rotation=90) for i, rect in enumerate(rects_out): label = '{:,.0f}'.format(values_out[i]) height = values_out[i] x = rect.get_x() + rect.get_width() / 2 y = height + (.02 * ymin) vertical_alignment = 'top' horizontal_alignment = 'center' ax.text(x, y, label, ha=horizontal_alignment, va=vertical_alignment, rotation=90) # horizontal line indicating zero ax.plot([rects_in[0].get_x() - rects_in[0].get_width() / 2, rects_in[-1].get_x() + rects_in[-1].get_width()], [0, 0], "k") return rects_in, rects_out fig = plt.figure(figsize=(16, 5)) times = [2., 500., 1000., 1095.] for idx, time in enumerate(times): ax = fig.add_subplot(1, len(times), idx + 1) zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=None, totim=time, aliases=aliases) recname = 'STORAGE' values_in = zb.get_dataframes(names='{}_IN'.format(recname)).T.squeeze() values_out = zb.get_dataframes(names='{}_OUT'.format(recname)).T.squeeze() * -1 labels = values_in.index.tolist() rects_in, rects_out = volumetric_budget_bar_plot(values_in, values_out, labels, ax=ax) plt.ylabel('Volumetric rate, in Mgal/d') plt.title('totim = {}'.format(time)) plt.tight_layout() plt.show() """ Explanation: Plot Budget Components The following is a function that can be used to better visualize the budget components using matplotlib. End of explanation """
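As a small follow-on example (not part of the original notebook), a single budget record can also be tracked through time for one zone by slicing the (totim, name) multi-index returned by get_dataframes(); the sketch below assumes the aliased ZoneBudget object zb built above.

# pull one record for all times and plot it for a single zone
ts = zb.get_dataframes(names=['RECHARGE_IN'])
recharge_surf = ts.xs('RECHARGE_IN', level='name')['SURF']  # series indexed by totim
recharge_surf.plot()
plt.xlabel('totim')
plt.ylabel('RECHARGE_IN for SURF zone')
plt.show()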
letsgoexploring/teaching
winter2017/econ129/python/Econ129_Class_06_Complete.ipynb
mit
# Use the requests module to download cross country GDP per capita url = 'http://www.briancjenkins.com/data/international/csv/crossCountryIncomePerCapita.csv' filename='crossCountryIncomePerCapita.csv' r = requests.get(url,verify=True) with open(filename,'wb') as newFile: newFile.write(r.content) # Import the cross-country GDP data into a DataFrame called incomeDf with index_col=0 incomeDf = pd.read_csv('crossCountryIncomePerCapita.csv',index_col=0) # Print the first five rows of incomeDf print(incomeDf.head()) # Print the columns of incomeDf print(incomeDf.columns) # Print the number of countries represented in incomeDf print(len(incomeDf.columns)) # Print the index of incomeDf print(incomeDf.index) # Print the number of years of data in incomeDf print(len(incomeDf.index)) # Print the first five rows of the 'United States - USA 'column of incomeDf print(incomeDf['United States - USA'].head()) # Print the last five rows of the 'United States - USA' column of incomeDf print(incomeDf['United States - USA'].tail()) # Create a plot of income per capita from 1960 to 2011 for the US plt.plot(incomeDf['United States - USA'].index,incomeDf['United States - USA'],lw=3,alpha = 0.7) plt.grid() plt.ylabel('Dollars') plt.xlim([incomeDf.index[0],incomeDf.index[-1]]) plt.title('Income per capita: United States') # Create a plot of income per capita from 1960 to 2011 for another country in the dataset # Use the random module to randomly draw a value from the column titles of incomeDf import random some_country = random.choice(incomeDf.columns) plt.plot(incomeDf[some_country].index,incomeDf[some_country],lw=3,alpha = 0.7) plt.grid() plt.ylabel('Dollars') plt.xlim([incomeDf.index[0],incomeDf.index[-1]]) plt.title('Income per capita: '+some_country[:-6]) # Create a new variable called income60 equal to the 1960 row from incomeDf income60 = incomeDf.loc[1960] # Print the index of income60 print(income60) # Print the average world income per capita in 1960 print('average income per capita in 1960: ',np.mean(income60)) # Print the standard deviation in world income per capita in 1960 print('standard deviation of income per capita in 1960:',np.sqrt(np.var(income60))) # Print the names of the five countries with the highest five incomes per capita in 1960 print(income60.sort_values(ascending=False).head()) # Print the names of the five countries with the lowest five incomes per capita in 1960 print(income60.sort_values(ascending=True).head()) # Create a new variable called income11 equal to the 2011 row from incomeDf income11 = incomeDf.loc[2011] # Print the average world income per capita in 2011 print('average income per capita in 2011: ',np.mean(income11)) # Print the standard deviation in world income per capita in 2011 print('standard deviation of income per capita in 2011:',np.sqrt(np.var(income11))) # Print the names of the five countries with the highest five incomes per capita in 2011 print(income11.sort_values(ascending=False).head()) # Print the names of the five countries with the lowest five incomes per capita in 2011 print(income11.sort_values(ascending=True).head()) """ Explanation: Class 6: More Pandas Objectives: Analize some cross-country GDP per capita data Create a new DataFrame Export a DataFrame to a csv file Exercise: Cross-country income per capita statistics Download a file called corssCountryIncomePerCapita.csv by visiting http://www.briancjenkins.com/data/international/ and following the link for: "GDP per capita (constant US 2005 PPP $, levels)" End of explanation """ # Create a 
DataFrame called growthDf with columns 'income 1960' and 'income 2011' equal to income per capita # in 1960 and 2011 and an index equal to the index of income60 growthDf = pd.DataFrame({'income 1960':income60,'income 2011':income11},index=income60.index) # Create a new column equal to the difference between 'income 2011' and 'income 1960' for each country growthDf['difference'] = growthDf['income 2011']-growthDf['income 1960'] """ Explanation: Creating a new DataFrame Now we'll use our cross-country income per capita data to create a new DataFrame containing growth data. End of explanation """ # Create a new column equal to the average annual growth rate between for each country between 1960 and 2011 T = len(incomeDf.index) -1 growthDf['growth'] = (growthDf['income 2011']/growthDf['income 1960'])**(1/T) - 1 # Print the first five rows of growthDf print(growthDf.head()) # Print the names of the five countries with the highest average annual growth rates print(growthDf['growth'].sort_values(ascending=False).head()) # Print the names of the five countries with the lowest average annual growth rates print(growthDf['growth'].sort_values(ascending=True).head()) # Print the average annual growth rate of income per capita from 1960 to 2011 print('average growth in income per capita in 2011: ',np.mean(growthDf['growth'])) print() # Print the standard deviation of the annual growth rate of income per capita from 1960 to 2011 print('standard deviation of growth in income per capita in 2011:',np.sqrt(np.var(growthDf['growth']))) # Construct a scatter plot: # Use the plt.scatter function # income per capita in 1960 on the horizontal axis and average annual growth rate on the vertical axis # Set the opacity of the points to something like 0.25 - 0.35 # Label the plot clearly with axis labels and a title plt.scatter(growthDf['income 1960'],growthDf['growth'],s=100,alpha = 0.3) plt.xlim([-1000,20000]) plt.grid() plt.xlabel('income per capita in 1960') plt.ylabel('growth in income per capita\nfrom 1960 to 2011') plt.title('income per capita versus growth for '+str(len(growthDf.index))+' countries') """ Explanation: Let $y_t$ denotes income per capita for some country in some year $t$ and let $g$ denotes the average annual growth in income per capita between years 0 and $T$. $g$ is defined by: \begin{align} y_T & = (1+g)^T y_0 \end{align} which implies: \begin{align} g & = \left(\frac{y_T}{y_0}\right)^{1/T} - 1 \end{align} Note that since our data are from 1960 to 2011, $T = 51$. Which is also equal to len(incomeDf.index)-1. End of explanation """ # Export the growthDf DataFrame to a csv file called 'growth_data.csv' growthDf.to_csv('my_growth_data.csv') """ Explanation: Exporting a DataFrame to csv Use the DataFrame method to_csv(). End of explanation """
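As a quick sanity check (not part of the original exercise), the average growth formula can be verified by hand for a single country and compared against the corresponding entry of growthDf; the snippet below uses the 'United States - USA' column referenced earlier.

# verify g = (y_T/y_0)**(1/T) - 1 for the United States
y0 = incomeDf['United States - USA'].loc[1960]
yT = incomeDf['United States - USA'].loc[2011]
T = len(incomeDf.index) - 1
g_us = (yT/y0)**(1/T) - 1
print('direct computation: ', g_us)
print('from growthDf:      ', growthDf.loc['United States - USA', 'growth'])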
sdpython/ensae_teaching_cs
_doc/notebooks/exams/td_note_2018_1.ipynb
mit
from jyquickhelper import add_notebook_menu add_notebook_menu() """ Explanation: 1A.e - Enoncรฉ 12 dรฉcembre 2017 (1) Correction du premier รฉnoncรฉ de l'examen du 12 dรฉcembre 2017. Celui-ci mรจne ร  l'implรฉmentation d'un algorithme qui permet de retrouver une fonction $f$ en escalier ร  partir d'un ensemble de points $(X_i, f(X_i))$. End of explanation """ import random X = [random.random() * 16 for i in range(0,1000)] Y = [ int(x**0.5) % 2 for x in X] """ Explanation: Q1 - รฉchantillon alรฉatoire Gรฉnรฉrer un ensemble alรฉatoire de 1000 nombres $(X_i,Y_i)$ qui vรฉrifie : $X_i$ suit une loi uniforme sur $[0,16]$ $Y_i = \mathbb{1}_{[\sqrt{X_i}] \mod 2}$ oรน $[A]$ est la partie entiรจre de $A$. On pourra se servir de la fonction random du module random. End of explanation """ %matplotlib inline import matplotlib.pyplot as plt plt.plot(X, Y, '.') """ Explanation: Q1 - dessiner le nuage de points - donnรฉe Le code suivant vous est donnรฉ afin de vรฉrifier vos rรฉponses. End of explanation """ nuage = [(x,y) for x,y in zip(X,Y)] nuage.sort() nuage[:5] """ Explanation: Q2 - tri Trier les points selon les $X$. End of explanation """ def somme_diff(xy, i, j): m = sum(e[1] for e in xy[i:j]) / (j-i) return sum(abs(e[1]-m) for e in xy[i:j]) somme_diff(nuage, 0, 5), somme_diff(nuage, 0, len(nuage)) """ Explanation: Q3 - moyenne On suppose que les $Y$ sont triรฉs selon les $X$ croissants. Calculer la moyenne des diffรฉrences entre $Y$ et la moyenne $m$ des $Y$ (en valeur absolue) sur un intervalle $[i,j]$, $j$ exclu. Ecrire une fonction def somme_diff(nuage, i, j) qui exรฉcute ce calcul qui correspond ร  $\sum_{k=i}^{j-1} |Y_k - m|$ avec $m = (\sum_{k=i}^{j-1} Y_k) / (j-i)$. End of explanation """ def difference(nuage, i, j, k): m1 = somme_diff(nuage, i, k) m2 = somme_diff(nuage, k, j) m = somme_diff(nuage, i, j) return abs(m1+m2-m) difference(nuage, 0, len(nuage), 100) """ Explanation: Q4 - distance Soit $i,j$ deux entiers, on coupe l'intervalle en deux : $i,k$ et $k,j$. On calcule la somme_diff sur ces deux intervalles, on fait la somme des diffรฉrences (en valeurs absolues) de ces moyennes par rapport ร  la valeur sur le plus grand intervalle. On รฉcrit la fonction def difference(nuage, i, j, k):. End of explanation """ def fct(x, y): return abs(x-y) def distance_list(list_x, list_y, f): return sum(f(x,y) for x,y in zip(list_x, list_y)) distance_list([0, 1], [0, 2], fct) """ Explanation: Q5 - fonction comme paramรจtre Le langage Python permet de passer une fonction ร  une autre fonction en tant qu'argument. Un exemple : End of explanation """ def somme_diff(xy, i, j, f): m = sum(e[1] for e in xy[i:j]) / (j-i) # On a modifiรฉ les fonctions prรฉcรฉdentes pour calculer # une fonction d'erreur "custom" ou dรฉfinie par l'utilisateur. return sum(f(e[1], m) for e in xy[i:j]) def difference(nuage, i, j, k, f): m1 = somme_diff(nuage, i, k, f) m2 = somme_diff(nuage, k, j, f) m = somme_diff(nuage, i, j, f) return abs(m - m1) + abs(m - m2) difference(nuage, 0, len(nuage), 100, fct) """ Explanation: Ecrire la fonction prรฉcรฉdente en utilisant la fonction fct. End of explanation """ def optimise(nuage, i, j, f): mx = -1 ib = None for k in range(i+1,j-1): d = difference(nuage, i,j,k, f) if ib is None or d > mx: mx = d ib = k if ib is None: # Au cas oรน l'intervalle est vide, on retourne une coupure # รฉgale ร  i. 
ib = i mx = 0 return ib, mx optimise(nuage, 0, len(nuage), fct) import matplotlib.pyplot as plt x = nuage[552][0] plt.plot(X,Y,'.') plt.plot([x,x], [0,1]) """ Explanation: Q6 - optimiser On veut dรฉterminer le $i$ optimal, celui qui maximise la diffรฉrence dans l'intervalle $[i,j]$. On souhaite garder la fonction fct comme argument. Pour cela, implรฉmenter la fonction def optimise(nuage, i, j, f):. End of explanation """ optimise(nuage, 0, 68, fct), optimise(nuage, 68, len(nuage), fct) import matplotlib.pyplot as plt x = nuage[58][0] x2 = nuage[552][0] plt.plot(X,Y,'.') plt.plot([x,x], [0,1]) plt.plot([x2,x2], [0,1]) """ Explanation: Le premier point de coupure trouvรฉ (le trait orange) correspond ร  un des bords d'un des escaliers. Q7 - optimisation encore Recommencer sur les deux intervalles trouvรฉs. La question รฉtait juste histoire que le rรฉsultat ร  la question prรฉcรฉdente est reproductible sur d'autres intervalles. End of explanation """ def recursive(nuage, i, j, f, th=0.1): k, mx = optimise(nuage, i, j, f) if mx <= th: return None r1 = recursive(nuage, i, k, f, th=th) r2 = recursive(nuage, k, j, f, th=th) if r1 is None and r2 is None: return [k] elif r1 is None: return [k] + r2 elif r2 is None: return r1 + [k] else: return r1 + [k] + r2 r = recursive(nuage, 0, len(nuage), fct) r import matplotlib.pyplot as plt plt.plot(X, Y, '.') for i in r: x = nuage[i][0] plt.plot([x,x], [0,1]) """ Explanation: Q8 - fonction rรฉcursive Pouvez-vous imaginer une fonction rรฉcursive qui produit toutes les sรฉparations. Ecrire la fonction def recursive(nuage, i, j, f, th=0.1):. End of explanation """ def somme_diff_abs(xy, i, j): m = sum(e[1] for e in xy[i:j]) / (j-i) return sum(abs(e[1]-m) for e in xy[i:j]) def difference_abs(nuage, i, j, k): m1 = somme_diff_abs(nuage, i, k) m2 = somme_diff_abs(nuage, k, j) m = somme_diff_abs(nuage, i, j) return abs(m1+m2-m) def optimise_abs(nuage, i, j): mx = -1 ib = None for k in range(i+1,j-1): d = difference_abs(nuage, i,j,k) if ib is None or d > mx: mx = d ib = k if ib is None: ib = i mx = 0 return ib, mx %timeit optimise_abs(nuage, 0, len(nuage)) """ Explanation: Q9 - coรปt Quel est le coรปt de la fonction optimize en fonction de la taille de l'intervalle ? Peut-on mieux faire (ce qu'on n'implรฉmentera pas). Tel qu'il est implรฉmentรฉ, le coรปt est en $O(n^2)$, le coรปt peut รชtre linรฉaire en triant les รฉlรฉments dans l'ordre croissant, ce qui a รฉtรฉ fait, ou $n\ln n$ si on inclut le coรปt du tri bien qu'on ne le fasse qu'une fois. Voyons plus en dรฉtail comment se dรฉbarrasser du coรปt en $O(n^2)$. Tout d'abord la version actuelle. End of explanation """ # %prun optimise_abs(nuage, 0, len(nuage)) """ Explanation: L'instruction suivante permet de voir oรน le programme passe la majeure partie de son temps. End of explanation """ def histogramme_y(xy, i, j): d = [0, 0] for x, y in xy[i:j]: d[y] += 1 return d def somme_diff_histogramme(d): m = d[1] * 1.0 / (d[0] + d[1]) return (1-m) * d[1] + m * d[0] def optimise_rapide(nuage, i, j): # On calcule les histogrammes. d1 = histogramme_y(nuage, i, i+1) d2 = histogramme_y(nuage, i+1, j) d = d1.copy() d[0] += d2[0] d[1] += d2[1] m = somme_diff_histogramme(d) m1 = somme_diff_histogramme(d1) m2 = somme_diff_histogramme(d2) mx = -1 ib = None for k in range(i+1,j-1): d = abs(m1+m2-m) if ib is None or d > mx: mx = d ib = k # On met ร  jour les histogrammes. On ajoute d'un cรดtรฉ, on retranche de l'autre. 
y = nuage[k][1] d1[y] += 1 d2[y] -= 1 m1 = somme_diff_histogramme(d1) m2 = somme_diff_histogramme(d2) if ib is None: ib = i mx = 0 return ib, mx # On vรฉrifie qu'on obtient les mรชmes rรฉsultats. optimise_rapide(nuage, 0, len(nuage)), optimise_abs(nuage, 0, len(nuage)) """ Explanation: La fonction sum cache une boucle, avec la boucle for dans la fonction optimise, cela explique le coรปt en $O(n^2)$. Le fait qu'ร  chaque itรฉration, on passe une observation d'un cรดtรฉ ร  l'autre de la coupure puis on recalcule les moyennes... Il y a deux faรงons d'optimiser ce calcul selon qu'on tient compte du fait que les valeurs de $Y$ sont binaires ou non. Dans le premier cas, il suffit de compter les valeurs 0 ou 1 de part et d'autres de la coupure (histogrammes) pour calculer la moyenne. Lorsque $k$ varie, il suffit de mettre ร  jour les histogrammes en dรฉduisant et en ajoutant le $Y_k$ aux bons endroits. End of explanation """ %timeit optimise_rapide(nuage, 0, len(nuage)) """ Explanation: C'est carrรฉment plus rapide et cela marche pour toute fonction fct. End of explanation """ import random X2 = list(range(10)) Y2 = X2 import matplotlib.pyplot as plt plt.plot(X2,Y2,'.') nuage2 = [(x,y) for x,y in zip(X2,Y2)] nuage2.sort() r = recursive(nuage2, 0, len(nuage2), fct) len(r), r import matplotlib.pyplot as plt plt.plot(X2,Y2,'.') for i in r: x = nuage2[i][0] plt.plot([x,x], [0,10]) """ Explanation: Si on ne suppose pas que les $Y_i$ sont binaires et qu'ils sont quelconques, les histogrammes contiendront plus de deux รฉlรฉments. Dans ce cas, il faut conserver deux tableaux triรฉs des $Y_i$, de part et d'autres de la coupure. Lorsqu'on bouge la coupure $k$, cela revient ร  dรฉplacer $Y_k$ d'un tableau ร  l'autre ce qui se fera par recherche dichotomique donc en $O(\ln n)$. La mise ร  jour de la moyenne des valeurs absolues est immรฉdiate si la fonction fct=abs(x-y) et pas forcรฉment immรฉdiate dans le cas gรฉnรฉral. Lorsque c'est une valeur absolue, il faut utiliser quelques rรฉsultats sur la rรฉgression quantile. Q10 - autre nuage de points Comment l'algorithme se comporte-t-il lorsque tous les points sont distincts ? End of explanation """
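Pour illustrer la remarque de la question 9 sur des $Y_i$ quelconques, voici une esquisse (non demandée par l'énoncé) qui maintient deux listes triées de part et d'autre de la coupure : le déplacement de $Y_k$ se fait en $O(\ln n)$ grâce au module bisect ; la mise à jour incrémentale du critère lui-même n'est pas implémentée ici, on se contente d'un recalcul naïf.

import bisect

def optimise_tri(nuage, i, j):
    # critère simple recalculé sur une liste (la mise à jour incrémentale
    # de la moyenne des écarts absolus est laissée de côté ici)
    def critere(ys):
        if not ys:
            return 0.0
        m = sum(ys) / len(ys)
        return sum(abs(y - m) for y in ys)

    gauche = sorted(y for _, y in nuage[i:i+1])
    droite = sorted(y for _, y in nuage[i+1:j])
    total = critere(sorted(y for _, y in nuage[i:j]))
    meilleur, ib = -1, None
    for k in range(i+1, j-1):
        d = abs(critere(gauche) + critere(droite) - total)
        if ib is None or d > meilleur:
            meilleur, ib = d, k
        # on déplace Y_k de droite à gauche en O(ln n)
        y = nuage[k][1]
        bisect.insort(gauche, y)
        del droite[bisect.bisect_left(droite, y)]
    if ib is None:
        ib, meilleur = i, 0
    return ib, meilleur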
graphistry/pygraphistry
demos/demos_databases_apis/gremlin-tinkerpop/TitanDemo.ipynb
bsd-3-clause
import asyncio import aiogremlin # Create event loop and initialize gremlin client loop = asyncio.get_event_loop() client = aiogremlin.GremlinClient(url='ws://localhost:8182/', loop=loop) # Default url """ Explanation: In this notebook, we demonstrate how to create and modify a Titan graph in python, and then visualize the result using Graphistry's visual graph explorer. We assume the gremlin server for our Titan graph is hosted locally on port 8182 - This notebook utilizes the python modules aiogremlin and asyncio. - The GremlinClient class of aiogremlin communicates asynchronously with the gremlin server using websockets via asyncio coroutines. - This implementation allows you to submit additional requests to the server before any responses are recieved, which is much faster than synchronous request / response cycles. - For more information about these modules, please visit: - aiogremlin: http://aiogremlin.readthedocs.org/en/latest/index.html - asyncio: https://pypi.python.org/pypi/asyncio End of explanation """ @asyncio.coroutine def add_vertex_routine(name, label): yield from client.execute("graph.addVertex(label, l, 'name', n)", bindings={"l":label, "n":name}) def add_vertex(name, label): loop.run_until_complete(add_vertex_routine(name, label)) @asyncio.coroutine def add_relationship_routine(who, relationship, whom): yield from client.execute("g.V().has('name', p1).next().addEdge(r, g.V().has('name', p2).next())", bindings={"p1":who, "p2":whom, "r":relationship}) def add_relationship(who, relationship, whom): loop.run_until_complete(add_relationship_routine(who, relationship, whom)) @asyncio.coroutine def remove_all_vertices_routine(): resp = yield from client.submit("g.V()") results = [] while True: msg = yield from resp.stream.read(); if msg is None: break if msg.data is None: break for vertex in msg.data: yield from client.submit("g.V(" + str(vertex['id']) + ").next().remove()") def remove_all_vertices(): results = loop.run_until_complete(remove_all_vertices_routine()) @asyncio.coroutine def remove_vertex_routine(name): return client.execute("g.V().has('name', n).next().remove()", bindings={"n":name}) def remove_vertex(name): return loop.run_until_complete(remove_vertex_routine(name)); """ Explanation: Functions for graph modification End of explanation """ @asyncio.coroutine def get_node_list_routine(): resp = yield from client.submit("g.V().as('node')\ .label().as('type')\ .select('node').values('name').as('name')\ .select('name', 'type')") results = []; while True: msg = yield from resp.stream.read(); if msg is None: break; if msg.data is None: break; else: results.extend(msg.data) return results def get_node_list(): results = loop.run_until_complete(get_node_list_routine()) return results @asyncio.coroutine def get_edge_list_routine(): resp = yield from client.submit("g.E().as('edge')\ .label().as('relationship')\ .select('edge').outV().values('name').as('source')\ .select('edge').inV().values('name').as('dest')\ .select('source', 'relationship', 'dest')") results = []; while True: msg = yield from resp.stream.read(); if msg is None: break; if msg.data is None: break; else: results.extend(msg.data) return results def get_edge_list(): results = loop.run_until_complete(get_edge_list_routine()) return results """ Explanation: Functions for translating a graph to node and edge lists: - Currently, our API can only upload data from a pandas DataFrame, but we plan to implement more flexible uploads in the future. 
- For now, we can rely on the following functions to create the necessary DataFrames from our graph. End of explanation """ remove_all_vertices() """ Explanation: Let's start with an empty graph: End of explanation """ add_vertex("Paden", "Person") add_vertex("Thibaud", "Person") add_vertex("Leo", "Person") add_vertex("Matt", "Person") add_vertex("Brian", "Person") add_vertex("Quinn", "Person") add_vertex("Paul", "Person") add_vertex("Lee", "Person") add_vertex("San Francisco", "Place") add_vertex("Oakland", "Place") add_vertex("Berkeley", "Place") add_vertex("Turkey", "Thing") add_vertex("Rocks", "Thing") add_vertex("Motorcycles", "Thing") add_relationship("Paden", "lives in", "Oakland") add_relationship("Quinn", "lives in", "Oakland") add_relationship("Thibaud", "lives in", "Berkeley") add_relationship("Matt", "lives in", "Berkeley") add_relationship("Leo", "lives in", "San Francisco") add_relationship("Paul", "lives in", "San Francisco") add_relationship("Brian", "lives in", "Oakland") add_relationship("Paden", "eats", "Turkey") add_relationship("Quinn", "cooks", "Turkey") add_relationship("Thibaud", "climbs", "Rocks") add_relationship("Matt", "climbs", "Rocks") add_relationship("Brian", "rides", "Motorcycles") add_vertex("Graphistry", "Work") add_relationship("Paden", "works at", "Graphistry") add_relationship("Thibaud", "works at", "Graphistry") add_relationship("Matt", "co-founded", "Graphistry") add_relationship("Leo", "co-founded", "Graphistry") add_relationship("Paul", "works at", "Graphistry") add_relationship("Quinn", "works at", "Graphistry") add_relationship("Brian", "works at", "Graphistry") """ Explanation: And then populate it with the Graphistry team members and some of thier relationships: End of explanation """ import pandas nodes = pandas.DataFrame(get_node_list()) edges = pandas.DataFrame(get_edge_list()) """ Explanation: Now, let's convert our graph database to a pandas DataFrame, so it can be uploaded into our tool: End of explanation """ # Assign different color to each type in a round robin fashion. # For more information and coloring options please visit: https://graphistry.github.io/docs/legacy/api/0.9.2/api.html unique_types = list(nodes['type'].unique()) nodes['color'] = nodes['type'].apply(lambda x: unique_types.index(x) % 11) nodes edges """ Explanation: And color the nodes based on their "type" property: End of explanation """ import graphistry # To specify Graphistry account & server, use: # graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com') # For more options, see https://github.com/graphistry/pygraphistry#configure g = graphistry.bind(source="source", destination="dest", node='name', point_color='color', edge_title='relationship') g.plot(edges, nodes) """ Explanation: Finally, let's vizualize the results! End of explanation """
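To make the asynchronous behaviour described at the top of this notebook concrete, the sketch below (not part of the original demo) submits several vertex-creation queries concurrently with asyncio.gather, reusing the same client, loop, and Gremlin strings already used above; the names passed in are only examples.

@asyncio.coroutine
def add_many_vertices_routine(names, label):
    # all requests are created up front and submitted together,
    # rather than waiting for each response before sending the next
    tasks = [client.execute("graph.addVertex(label, l, 'name', n)",
                            bindings={"l": label, "n": name})
             for name in names]
    yield from asyncio.gather(*tasks)

def add_many_vertices(names, label):
    loop.run_until_complete(add_many_vertices_routine(names, label))

# example usage: add_many_vertices(["Alice", "Bob"], "Person")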
mapagron/Boot_camp
hm7/Homework #7.ipynb
gpl-3.0
# Dependencies import numpy as np import pandas as pd import matplotlib.pyplot as plt import json import tweepy import time import seaborn as sns %pylab notebook # Initialize Sentiment Analyzer from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer analyzer = SentimentIntensityAnalyzer() # Twitter API Keys consumer_key = "8kRQQdDT8zGpyOBTUbXqGF2nc" consumer_secret = "1D2oGovKe15PwLTKEAlI7ao4nAnHqSYfkGH5mQAdx1T7BUdEmX" access_token = "68786821-99qoOdGVdmeskFhyanhuj5G1UgTjLXy3zsHtmBTB4" access_token_secret = "WrIEOcbavzeNmeEoCH2ZxfqIMlB1KbMYCVNnihgQTUC0c" # Setup Tweepy API Authentication auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth, parser=tweepy.parsers.JSONParser()) # Target Account target_terms = ["@CNN", "@BBC", "@CBS", "@Fox", "@nytimes"] # Counter counter = 1 # Variables for holding sentiments sentiments = [] # Loop through the list of targets for target in target_terms: # Get all tweets from home feed public_tweets = api.user_timeline(target, count = 100) #(total 100 tweets) tweetnumber = 1 # Loop through all tweets for tweet in public_tweets: # Print Tweets print("Tweet %s: %s" % (counter, tweet["text"])) # Run Vader Analysis on each tweet compound = analyzer.polarity_scores(tweet["text"])["compound"] pos = analyzer.polarity_scores(tweet["text"])["pos"] neu = analyzer.polarity_scores(tweet["text"])["neu"] neg = analyzer.polarity_scores(tweet["text"])["neg"] tweets_ago = tweetnumber # Add sentiments for each tweet into an array sentiments.append({"User": target, "Date": tweet["created_at"], "Compound": compound, "Positive": pos, "Negative": neu, "Neutral": neg, "Tweets Ago": tweetnumber}) # Add to counter tweetnumber += 1 counter = counter + 1 # sentiments to DataFrame sentiments_pd = pd.DataFrame.from_dict(sentiments) sentiments_pd.head(25) #Checking that it works #sentiments_pd.describe() #sentiments_pd.tail() sentiments_pd["User"].describe() # Convert sentiments to DataFrame sentiments_pd = pd.DataFrame.from_dict(sentiments) sentiments_pd.head() sentiments_pd.columns """ Explanation: Sentiment Analysis End of explanation """ sentiments_pd.to_csv("newsTweets.csv") """ Explanation: trying for several targets End of explanation """ # Firt : Create plot - general view plt.plot(np.arange(len(sentiments_pd["Compound"])), sentiments_pd["Compound"], marker="o", linewidth=0.5, alpha=0.8) # # Incorporate the other graph properties plt.title("Sentiment Analysis of Tweets (%s) for %s" % (time.strftime("%x"), target_terms)) plt.ylabel("Tweet Polarity") plt.xlabel("Tweets Ago") plt.show() plt.savefig('Overall.png') """ Explanation: Plots End of explanation """ news_colors = {"@CNN": "red", "@BBC": "blue", "@CBS": "yellow", "@Fox": "lightblue", "@nytimes" : "green"} # Create a scatterplot of, sns.set() plt.figure(figsize = (10,6)) plt.xlabel ("tweet Ago", fontweight = 'bold') plt.ylabel ("Tweet Polarity", fontweight ='bold') plt.title ("Sentiment Analysis") plt.xlim (102, -2, -1) plt.ylim(-1,1) for target_terms in news_colors.keys(): df = sentiments_pd[sentiments_pd["User"] == target_terms] sentiment_analysis = plt.scatter(df["Tweets Ago"],df["Compound"], label = target_terms, color = news_colors[target_terms], edgecolor = "black", s=125) plt.legend(bbox_to_anchor = (1,1), title = 'Media Sources') plt.show() sentiment_analysis.figure.savefig('SentimentAnalysis.png') """ Explanation: Scatterplot Scatter plot of sentiments of the last 100 tweets sent out by each news organization, ranging from 
-1.0 to 1.0, where a score of 0 expresses a neutral sentiment, -1 the most negative sentiment possible, and +1 the most positive sentiment possible. End of explanation """
# find the average compound score per news outlet with groupby
groupbyagency = sentiments_pd.groupby("User")["Compound"].mean()

# count the tweets with a compound score of exactly 0 for each outlet
sentiments_pd[(sentiments_pd["Compound"] == 0)].groupby("User").count()

groupbyagency_df = pd.DataFrame(groupbyagency)
groupbyagency_df

# the mean compound scores, in the same order as groupbyagency_df
data = [0.099411, 0.203394, -0.087430, 0.179426, 0.039392]

# plot the per-outlet means directly from the grouped DataFrame
sns.barplot(x=groupbyagency_df["Compound"], y=groupbyagency_df.index, orient="h")
plt.title("Sentiment Analysis")
plt.savefig('SentimentAnalysisII.png')
plt.show()
""" Explanation: 3. Bar plot visualizing the overall sentiments of the last 100 tweets from each organization. For this plot, you will again aggregate the compound sentiments analyzed by VADER. End of explanation """
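As a possible extension (not part of the original homework), all four VADER components can be aggregated per outlet into one summary table and written to disk; the file name below is only a placeholder.

# per-outlet means of every sentiment component
summary = sentiments_pd.groupby("User")[["Compound", "Positive", "Neutral", "Negative"]].mean()
summary.to_csv("sentiment_summary_by_outlet.csv")  # placeholder file name
summary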
matthewpecsok/development
imbd.ipynb
apache-2.0
import tensorflow as tf import numpy as np import pandas as pd (x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=10000) word_index = tf.keras.datasets.imdb.get_word_index() word_index['fawn'] # why in the world it's indexed by word? reverse_word_index = dict([(value,key) for (key,value) in word_index.items()]) reverse_word_index[4] reverse_word_index[1] min([max(sequence) for sequence in x_train]) pd.DataFrame(x_train) np.sum(y_train)/y_train.shape[0] """ Explanation: <a href="https://colab.research.google.com/github/matthewpecsok/development/blob/master/imbd.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> End of explanation """ def vector_seq(seq,dim=10000): res = np.zeros((len(seq),dim)) for i, seq in enumerate(seq): res[i,seq] = 1. return(res) """ Explanation: encode fun End of explanation """ x_train_enc = vector_seq(x_train) x_test_enc = vector_seq(x_test) x_train_enc x_train_enc.shape x_train_enc.dtype """ Explanation: encode End of explanation """ y_train.dtype type(y_train) y_train = np.asarray(y_train).astype('float32') y_test = np.asarray(y_test).astype('float32') y_train.dtype type(y_train) np.dot((1,2),(2,3)) tf.keras.activations.relu(np.dot((1,2,3,4),(2,3,4,5))) tf.keras.activations.relu(np.dot((1,2,3,4),(2,3,4,5))) """ Explanation: we have an array of 25,000 rows and 10,000 columns. columns are words, rows are documents End of explanation """
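As an optional check that is not in the original notebook, an encoded review can be decoded back to words; Keras' IMDB loader reserves indices 0-2 for padding/start/unknown and offsets the real word indices by 3, hence the i - 3 below.

def decode_review(sequence):
    # map each integer back to its word, shifting by the 3 reserved indices
    return ' '.join(reverse_word_index.get(i - 3, '?') for i in sequence)

print(decode_review(x_train[0])[:200])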
boffi/boffi.github.io
dati_2018/04/EP_Exact+Numerical.ipynb
mit
def resp_elas(m,c,k, cC,cS,w, F, x0,v0): wn2 = k/m ; wn = sqrt(wn2) ; beta = w/wn z = c/(2*m*wn) wd = wn*sqrt(1-z*z) # xi(t) = R sin(w t) + S cos(w t) + D det = (1.-beta**2)**2+(2*beta*z)**2 R = ((1-beta**2)*cS + (2*beta*z)*cC)/det/k S = ((1-beta**2)*cC - (2*beta*z)*cS)/det/k D = F/k A = x0-S-D B = (v0+z*wn*A-w*R)/wd def x(t): return exp(-z*wn*t)*(A*cos(wd*t)+B*sin(wd*t))+R*sin(w*t)+S*cos(w*t)+D def v(t): return (-z*wn*exp(-z*wn*t)*(A*cos(wd*t)+B*sin(wd*t)) +wd*exp(-z*wn*t)*(B*cos(wd*t)-A*sin(wd*t)) +w*(R*cos(w*t)-S*sin(w*t))) return x,v """ Explanation: Exact Integration for an EP SDOF System We want to compute the response, using the constant acceleration algorithm plus MNR, of an Elasto Plastic (EP) system... but how we can confirm or reject our results? It turns out that computing the exact response of an EP system with a single degree of freedom is relatively simple. Here we discuss a program that computes the analytical solution of our problem. The main building blocks of the program will be two functions that compute, for the elastic phase and for the plastic phase, the analytical functions that give the displacement and the velocity as functions of time. Elastic response We are definining a function that, for a linear dynamic system, returns not the displacement or the velocity at a given time, but rather a couple of functions of time that we can use afterwards to compute displacements and velecities at any time of interest. The response depends on the parameters of the dynamic system $m,c,k,$ on the initial conditions $x_0, v_0,$ and on the characteristics of the external load. Here the external load is limited to a linear combination of a cosine modulated, a sine modulated (both with the same frequency $\omega$) and a constant force, <center>$P(t) = c_C \cos\omega t + c_S \sin\omega t + F,$</center> but that's all that is needed for the present problem. The particular integral being <center>$\xi(t) = S \cos\omega t + R \sin\omega t + D,$</center> substituting in the equation of motion and equating all the corresponding terms gives the undetermined coefficients in $\xi(t)$, then evaluation of the general integral and its time derivative for $t=0$ permits to find the constants in the homogeneous part of the integral. The final step is to define the displacement and the velocity function, according to the constants we have determined, and to return these two function to the caller End of explanation """ def resp_yield(m,c, cC,cS,w, F, x0,v0): # csi(t) = R sin(w t) + S cos(w t) + Q t Q = F/c det = w**2*(c**2+w**2*m**2) R = (+w*c*cC-w*w*m*cS)/det S = (-w*c*cS-w*w*m*cC)/det # x(t) = A exp(-c t/m) + B + R sin(w t) + S cos(w t) + Q t # v(t) = - c A/m exp(-c t/m) + w R cos(w t) - w S sin(w t) + Q # # v(0) = -c A / m + w R + Q = v0 A = m*(w*R + Q - v0)/c # x(0) = A + B + S = x0 B = x0 - A - S def x(t): return A*exp(-c*t/m)+B+R*sin(w*t)+S*cos(w*t)+Q*t def v(t): return -c*A*exp(-c*t/m)/m+w*R*cos(w*t)-w*S*sin(w*t)+Q return x,v """ Explanation: Plastic response In this case the equation of motion is <center>$m\ddot x + c \dot x = P(t),$</center> the homogeneous response is <center>$x(t)=A\exp(-\frac{c}{m}t)+B,$</center> and the particular integral, for a load described as in the previous case, is (slightly different...) 
<center>$\xi(t) = S \cos\omega t + R \sin\omega t + Dt.$</center> Having computed $R, S,$ and $D$ from substituting $\xi$ in the equation of motion, $A$ and $B$ by imposing the initial conditions,we can define the displacement and velocity functions and, finally, return these two functions to the caller. End of explanation """ def bisect(f,val,x0,x1): h = (x0+x1)/2.0 fh = f(h)-val if abs(fh)<1e-8 : return h f0 = f(x0)-val if f0*fh > 0 : return bisect(f, val, h, x1) else: return bisect(f, val, x0, h) """ Explanation: An utility function We need to find when the spring yields the velocity is zero to individuate the three ranges of different behaviour elastic plastic elastic, with permanent deformation. We can use the simple and robust algorithm of bisection to find the roots for <center>$x_{el}(t)=x_y \text{ and } \dot{x}_{ep}(t)=0$.</center> End of explanation """ mass = 1000. # kg k = 40000. # N/m zeta = 0.03 # damping ratio fy = 2500. # N print('Limit displacement Uy =', fy*1000/k, 'mm') """ Explanation: The system parameters End of explanation """ damp = 2*zeta*sqrt(k*mass) xy = fy/k # m """ Explanation: Derived quantities The damping coefficient $c$ and the first yielding displacement, $x_y$. End of explanation """ t1 = 0.3 # s w = pi/t1 # rad/s Po = 6000. # N """ Explanation: Load definition Our load is a half-sine impulse <center>$p(t)=\begin{cases}p_0\sin(\frac{\pi t}{t_1})&0\leq t\leq t_1,\\ 0&\text{otherwise.}\end{cases}$</center> In our exercise End of explanation """ x0=0.0 # m v0=0.0 # m/s x_next, v_next = resp_elas(mass,damp,k, 0.0,Po,w, 0.0, x0,v0) """ Explanation: The actual computations Elastic, initial conditions, get system functions End of explanation """ t_yield = bisect(x_next, xy, 0.0, t1) print(t_yield, x_next(t_yield)*k) """ Explanation: Yielding time is The time of yielding is found solving the equation $x_\text{next}(t) = x_y$ End of explanation """ t_el = linspace( 0.0, t_yield, 201) x_el = vectorize(x_next)(t_el) v_el = vectorize(v_next)(t_el) # ------------------------------ figure(0) plot(t_el,x_el, (0,0.25),(xy,xy),'--b', (t_yield,t_yield),(0,0.0699),'--b') title("$x_{el}(t)$") xlabel("Time, s") ylabel("Displacement, m") # ------------------------------ figure(1) plot(t_el,v_el) title("$\dot x_{el}(t)$") xlabel("Time, s") ylabel("Velocity, m/s") """ Explanation: Forced response in elastic range is End of explanation """ x0=x_next(t_yield) v0=v_next(t_yield) print(x0, v0) """ Explanation: Preparing for EP response First, the system state at $t_y$ is the initial condition for the EP response End of explanation """ cS = Po*cos(w*t_yield) cC = Po*sin(w*t_yield) print(Po*sin(w*0.55), cS*sin(w*(0.55-t_yield))+cC*cos(w*(0.55-t_yield))) """ Explanation: now, the load must be expressed in function of a restarted time, <center> $\tau=t-t_y\;\rightarrow\;t=\tau+t_y\;\rightarrow\;\sin(\omega t)=\sin(\omega\tau+\omega t_y)$ </center> <center> $\rightarrow\;\sin(\omega t)=\sin(\omega\tau)\cos(\omega t_y)+\cos(\omega\tau)\sin(\omega t_y)$ </center> End of explanation """ x_next, v_next = resp_yield(mass, damp, cC,cS,w, -fy, x0,v0) """ Explanation: Now we generate the displacement and velocity functions for the yielded phase, please note that the yielded spring still exerts a constant force $f_y$ on the mass, and that this fact must be (and it is) taken into account. 
End of explanation """ t_y1 = linspace(t_yield, t1, 101) x_y1 = vectorize(x_next)(t_y1-t_yield) v_y1 = vectorize(v_next)(t_y1-t_yield) figure(3) plot(t_el,x_el, t_y1,x_y1, (0,0.25),(xy,xy),'--b', (t_yield,t_yield),(0,0.0699),'--b') xlabel("Time, s") ylabel("Displacement, m") # ------------------------------ figure(4) plot(t_el, v_el, t_y1, v_y1) xlabel("Time, s") ylabel("Velocity, m/s") """ Explanation: At this point I must confess that I have already peeked the numerical solution, hence I know that the velocity at $t=t_1$ is still greater than 0 and I know that the current solution is valid in the interval $t_y\leq t\leq t_1$. End of explanation """ x0 = x_next(t1-t_yield) v0 = v_next(t1-t_yield) print(x0, v0) x_next, v_next = resp_yield(mass, damp, 0, 0, w, -fy, x0, v0) t2 = t1 + bisect( v_next, 0.0, 0, 0.3) print(t2) t_y2 = linspace( t1, t2, 101) x_y2 = vectorize(x_next)(t_y2-t1) v_y2 = vectorize(v_next)(t_y2-t1) print(x_next(t2-t1)) figure(5) plot(t_el,x_el, t_y1,x_y1, t_y2, x_y2, (0,0.25),(xy,xy),'--b', (t_yield,t_yield),(0,0.0699),'--b') xlabel("Time, s") ylabel("Displacement, m") # ------------------------------ figure(6) plot(t_el, v_el, t_y1, v_y1, t_y2, v_y2) xlabel("Time, s") ylabel("Velocity, m/s") """ Explanation: In the next phase, still it is $\dot x> 0$ so that the spring is still yielding, but now $p(t)=0$, so we must compute two new state functions, starting as usual from the initial conditions (note that the yielding force is still applied) End of explanation """ x0 = x_next(t2-t1) ; v0 = 0.0 x_next, v_next = resp_elas(mass,damp,k, 0.0,0.0,w, k*x0-fy, x0,v0) t_e2 = linspace(t2,4.0,201) x_e2 = vectorize(x_next)(t_e2-t2) v_e2 = vectorize(v_next)(t_e2-t2) """ Explanation: Elastic unloading The only point worth commenting is the constant force that we apply to our system. 
The force-displacement relationship for an EP spring is <center>$f_\text{E} = k(x-x_\text{pl})= k x - k (x_\text{max}-x_y)$</center> taking the negative, constant part of the last expression into the right member of the equation of equilibrium we have a constant term, as follows End of explanation """ # ------------------------------ figure(7) ; plot(t_el, x_el, '-b', t_y1, x_y1, '-r', t_y2, x_y2, '-r', t_e2, x_e2, '-b', (0.6, 4.0), (x0-xy, x0-xy), '--y') title("In blue: elastic phases.\n"+ "In red: yielding phases.\n"+ "Dashed: permanent plastic deformation.") xlabel("Time, s") ylabel("Displacement, m") """ Explanation: now we are ready to plot the whole response End of explanation """ def make_p(p0,t1): """make_p(p0,t1) returns a 1/2 sine impulse load function, p(t)""" def p(t): "" if t<t1: return p0*sin(t*pi/t1) else: return 0.0 return p """ Explanation: Numerical solution first, we need the load function End of explanation """ def make_kt(k,fy): "make_kt(k,fy) returns a function kt(u,v,up) returning kt, up" def kt(u,v,up): f=k*(u-up) if (-fy)<f<fy: return k,up if fy<=f and v>0: up=u-uy;return 0,up if fy<=f and v<=0: up=u-uy;return k,up if f<=(-fy) and v<0: up=u+uy;return 0,up else: up=u+uy;return k,up return kt """ Explanation: and also a function that, given the displacement, the velocity and the total plastic deformation, returns the stiffness and the new p.d.; this function is defined in terms of the initial stiffness and the yielding load End of explanation """ # Exercise from lesson 04 # mass = 1000.00 # kilograms k = 40000.00 # Newtons per metre zeta = 0.03 # zeta is the damping ratio fy = 2500.00 # yelding force, Newtons t1 = 0.30 # half-sine impulse duration, seconds p0 = 6000.00 # half-sine impulse peak value, Newtons uy = fy/k # yelding displacement, metres """ Explanation: Problem data End of explanation """ # using the above constants, define the loading function p=make_p(p0,t1) # the following function, given the final displacement, the final # velocity and the initial plastic deformation returns a) the tangent # stiffness b) the final plastic deformation kt=make_kt(k,fy) # we need the damping coefficient "c", to compute its value from the # damping ratio we must first compute the undamped natural frequency wn=sqrt(k/mass) # natural frequency of the undamped system damp=2*mass*wn*zeta # the damping coefficient # the time step h=0.005 # required duration for the response t_end = 4.0 # the number of time steps to arrive at t_end nsteps=int((t_end+h/100)/h)+1 # the maximum number of iterations in the Newton-Raphson procedure maxiters = 30 # using the constant acceleration algorithm # below we define the relevant algorithmic constants gamma=0.5 beta=1./4. 
gb=gamma/beta a=mass/(beta*h)+damp*gb b=0.5*mass/beta+h*damp*(0.5*gb-1.0) """ Explanation: Initialize the algorithm compute the functions that return the load and the tangent sstiffness + plastic deformation compute the damping constant for a given time step, compute all the relevant algorithmic constants, with $\gamma=\frac12$ and $\beta=\frac14$ End of explanation """ t0=0.0 u0=0.0 up=0.0 v0=0.0 p0=p(t0) (k0, up)=kt(u0,v0,up) a0=(p0-damp*v0-k0*(u0-up))/mass time = []; disp = [] """ Explanation: System state initialization and a bit more, in species we create two empty vectors to hold the computation results End of explanation """ for i in range(nsteps): time.append(t0); disp.append(u0) # advance time, next external load value, etc t1 = t0 + h p1 = p(t1) Dp = p1 - p0 Dp_= Dp + a*v0 + b*a0 k_ = k0 + gb*damp/h + mass/(beta*h*h) # we prepare the machinery for the modified Newton-Raphson # algorithm. if we have no state change in the time step, then the # N-R algorithm is equivalent to the standard procedure u_init=u0; v_init=v0 # initial state f_spring=k*(u0-up) # the force in the spring DR=Dp_ # the unbalanced force, initially equal to the # external load increment for j in range(maxiters): Du=DR/k_ # the disp increment according to the initial stiffness u_next = u_init + Du v_next = v_init + gb*Du/h - gb*v_init + h*(1.0-0.5*gb)*a0 # we are interested in the total plastic elongation oops,up=kt(u_next,v_next,up) # because we need the spring force at the end # of the time step f_spring_next=k*(u_next-up) # so that we can compute the fraction of the # incremental force that's equilibrated at the # end of the time step df=f_spring_next-f_spring+(k_-k0)*Du # and finally the incremental forces unbalanced # at the end of the time step DR=DR-df # finish updating the system state u_init=u_next; v_init=v_next; f_spring=f_spring_next # if the unbalanced load is small enough (the # criteria used in practical programs are # energy based) exit the loop - note that we # have no plasticization/unloading DR==0 at the # end of the first iteration if abs(DR)<fy*1E-6: break # now the load increment is balanced by the spring force and # increments in inertial and damping forces, we need to compute the # full state at the end of the time step, and to change all # denominations to reflect the fact that we are starting a new time step. Du=u_init-u0 Dv=gamma*Du/(beta*h)-gamma*v0/beta+h*(1.0-0.5*gamma/beta)*a0 u1=u0+Du ; v1=v0+Dv k1,up=kt(u1,v1,up) a1=(p(t1)-damp*v1-k*(u1-up))/mass t0=t1; v0=v1; u0=u1 ; a0=a1 ; k0=k1 ; p0=p1 """ Explanation: Iteration We iterate over time and, if there is a state change, over the single time step to equilibrate the unbalanced loadings End of explanation """ figure(8) plot(time[::4],disp[::4],'xr') plot(t_el, x_el, '-b', t_y1, x_y1, '-r', t_y2, x_y2, '-r', t_e2, x_e2, '-b', (0.6, 4.0), (x0-xy, x0-xy), '--y') title("Continuous line: exact response.\n"+ "Red crosses: constant acceleration + MNR.\n") xlabel("Time, s") ylabel("Displacement, m"); """ Explanation: Plotting our results we plot red crosses for the numericaly computed response and a continuous line for the results of the analytical integration of the equation of motion. End of explanation """
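As a rough quantitative check that is not part of the original notebook, the piecewise analytical response can be interpolated at the numerical time stations and compared with the constant-acceleration + MNR displacements collected in time and disp.

import numpy as np
# stitch the four exact segments together (endpoints repeat, which np.interp tolerates)
t_exact = np.concatenate((t_el, t_y1, t_y2, t_e2))
x_exact = np.concatenate((x_el, x_y1, x_y2, x_e2))
err = np.abs(np.array(disp) - np.interp(np.array(time), t_exact, x_exact))
print('max abs displacement error: %.3e m' % err.max())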
pligor/predicting-future-product-prices
04_time_series_prediction/26_price_history_generate_train_test_and_baseline.ipynb
agpl-3.0
bltest = MyBaseline(npz_path=npz_test) bltest.getMSE() bltest.renderMSEs() plt.show() bltest.getHuberLoss() bltest.renderHuberLosses() plt.show() bltest.get_dtw() bltest.renderRandomTargetVsPrediction() plt.show() """ Explanation: Baseline is static, a straight line for each input - Test End of explanation """ cur_baseline = MyBaseline(npz_path=npz_train_reduced) cur_baseline.getMSE() cur_baseline.renderMSEs() plt.show() cur_baseline.getHuberLoss() cur_baseline.renderHuberLosses() plt.show() cur_baseline.get_dtw() cur_baseline.renderRandomTargetVsPrediction() plt.show() """ Explanation: Baseline is static, a straight line for each input - Train (reduced) End of explanation """ cur_baseline = MyBaseline(npz_path=npz_train) cur_baseline.getMSE() cur_baseline.renderMSEs() plt.show() cur_baseline.getHuberLoss() cur_baseline.renderHuberLosses() plt.show() cur_baseline.get_dtw() cur_baseline.renderRandomTargetVsPrediction() plt.show() """ Explanation: Baseline is static, a straight line for each input - Train Full End of explanation """
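Purely as an illustration (this is NOT the MyBaseline implementation, whose code lives elsewhere in the project), the sketch below shows what a "static, straight line for each input" baseline and its MSE could look like for a batch of price-history targets; targets and last_prices are assumed arrays that are not defined in this notebook.

import numpy as np

def static_baseline_mse(targets, last_prices):
    # targets: (n_series, horizon) true future prices
    # last_prices: (n_series,) last observed price of each input window
    preds = np.repeat(last_prices[:, None], targets.shape[1], axis=1)  # flat line per series
    return np.mean((preds - targets) ** 2, axis=1)  # one MSE per series

# np.mean(static_baseline_mse(targets, last_prices)) would then be comparable
# to the aggregate MSE reported by MyBaseline above.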
astroai/starnet
VAE/StarNet_VAE.ipynb
bsd-2-clause
import numpy as np import time import h5py import keras import matplotlib.pyplot as plt import sys from keras.layers import (Input, Dense, Lambda, Flatten, Reshape, BatchNormalization, Activation, Dropout, Conv1D, UpSampling1D, MaxPooling1D, ZeroPadding1D, LeakyReLU) from keras.engine.topology import Layer from keras.optimizers import Adam from keras.models import Model from keras import backend as K plt.switch_backend('agg') """ Explanation: training an unsupervised VAE with APOGEE DR14 spectra this notebooke takes you through the building and training of a fairly deep VAE. I have not actually done too much work with DR14, so it may pose some potential difficulties, but this should be a good start. The training is suited to be put into a python script and run from the command line. If you're inclined, you may want to experiment with the model architecture, but I'm pretty sure this will work. End of explanation """ # Define edges of detectors (for APOGEE) blue_chip_begin = 322 blue_chip_end = 3242 green_chip_begin = 3648 green_chip_end = 6048 red_chip_begin = 6412 red_chip_end = 8306 # function for loading data def load_train_data_weighted(data_file,indices=None): # grab all if indices is None: with h5py.File(data_file,"r") as F: ap_spectra = F['spectrum'][:] ap_err_spectra = F['error_spectrum'][:] # grab a batch else: with h5py.File(data_file, "r") as F: indices_bool = np.ones((len(F['spectrum']),),dtype=bool) indices_bool[:] = False indices_bool[indices] = True ap_spectra = F['spectrum'][indices_bool,:] ap_err_spectra = F['error_spectrum'][indices_bool,:] # combine chips ap_spectra = np.hstack((ap_spectra[:,blue_chip_begin:blue_chip_end], ap_spectra[:,green_chip_begin:green_chip_end], ap_spectra[:,red_chip_begin:red_chip_end])) # set nan values to zero ap_spectra[np.isnan(ap_spectra)]=0. 
ap_err_spectra = np.hstack((ap_err_spectra[:,blue_chip_begin:blue_chip_end], ap_err_spectra[:,green_chip_begin:green_chip_end], ap_err_spectra[:,red_chip_begin:red_chip_end])) return ap_spectra,ap_err_spectra # function for reshaping spectra into appropriate format for CNN def cnn_reshape(spectra): return spectra.reshape(spectra.shape[0],spectra.shape[1],1) """ Explanation: load data this is file dependent this particular function expects the aspcap dr14 h5 file that can be downloaded from vos:starnet/public End of explanation """ img_cols, img_chns = 7214, 1 num_fluxes=7214 input_shape=(num_fluxes,1) # z_dims is the dimension of the latent space z_dims = 64 batch_size = 64 epsilon_std = 1.0 learning_rate = 0.001 decay = 0.0 padding=u'same' kernel_init = keras.initializers.RandomNormal(mean=0.0, stddev=0.01) bias_init = keras.initializers.Zeros() """ Explanation: set some model hyper-parameters End of explanation """ # zero-augmentation layer (a trick I use to input chunks of zeros into the input spectra) class ZeroAugmentLayer(Layer): def __init__(self, **kwargs): self.is_placeholder = True super(ZeroAugmentLayer, self).__init__(**kwargs) def zero_agument(self, x_real, zero_mask): return x_real*zero_mask def call(self, inputs): x_real = inputs[0] zero_mask = inputs[1] x_augmented = self.zero_agument(x_real, zero_mask) return x_augmented # a function for creating the zero-masks used during training def create_zero_mask(spectra,min_chunks,max_chunks,chunk_size,dataset=None,ones_padded=False): if dataset is None: zero_mask = np.ones_like(spectra) elif dataset=='apogee': zero_mask = np.ones((spectra.shape[0],7214)) elif dataset=='segue': zero_mask = np.ones((spectra.shape[0],3688)) num_spec = zero_mask.shape[0] len_spec = zero_mask.shape[1] num_bins = len_spec/chunk_size remainder = len_spec%chunk_size spec_sizes = np.array([chunk_size for i in range(num_bins)]) spec_sizes[-1]=spec_sizes[-1]+remainder num_bins_removed = np.random.randint(min_chunks,max_chunks+1,size=(num_spec,)) for i, mask in enumerate(zero_mask): bin_indx_removed = np.random.choice(num_bins, num_bins_removed[i], replace=False) for indx in bin_indx_removed: if indx==0: mask[indx*spec_sizes[indx]:(indx+1)*spec_sizes[indx]]=0. else: mask[indx*spec_sizes[indx-1]:indx*spec_sizes[indx-1]+spec_sizes[indx]]=0. return zero_mask """ Explanation: Zero-Augmentation In an effort to evenly distribute the weighting of the VAE, throughout training, a zero-augmentation technique was applied to the training spectra samples - both synthetic and observed. The zero-augmentation is implemented as the first layer in the encoder where a zero-augmentation mask is sent as an input along with the input spectrum and the two are multiplied together. The zero-augmentation mask is the same size as the input spectrum vector and is composed of ones and zeros. For the APOGEE wave-grid, the spectral region is divided into seven chunks and for each input spectrum a random 0-3 of these chunks are assigned to be zeros while the remainder of the zero-augmentation mask is made up of ones. This means for a given spectrum, the input for training may be 4/7ths, 5/7ths, 6/7ths, or the entire spectrum. This augmentation is done randomly throughout training, meaning that each spectrum will be randomly assigned a different zero-augmentation mask at every iteration. 
End of explanation """ def build_encoder(input_1,input_2): # zero-augment input spectrum x = ZeroAugmentLayer()([input_1,input_2]) # first conv block x = Conv1D(filters=16, kernel_size=8, strides=1, kernel_initializer=kernel_init, bias_initializer=bias_init, padding=padding)(x) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) x = Dropout(0.2)(x) # second conv bloack x = Conv1D(filters=16, kernel_size=8, strides=1, kernel_initializer=kernel_init, bias_initializer=bias_init, padding=padding)(x) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) # maxpooling layer and flatten x = MaxPooling1D(pool_size=4, strides=4, padding='valid')(x) x = Flatten()(x) x = Dropout(0.2)(x) # intermediate dense block x = Dense(256)(x) x = LeakyReLU(0.3)(x) x = BatchNormalization()(x) x = Dropout(0.3)(x) # latent distribution output z_mean = Dense(z_dims)(x) z_log_var = Dense(z_dims)(x) return Model([input_1,input_2],[z_mean,z_log_var]) # function for obtaining a latent sample given a distribution def sampling(args, latent_dim=z_dims, epsilon_std=epsilon_std): z_mean, z_log_var = args epsilon = K.random_normal(shape=(z_dims,), mean=0., stddev=epsilon_std) return z_mean + K.exp(z_log_var) * epsilon """ Explanation: build encoder takes spectra (x) and zero-augmentation mask as inputs and outputs latent distribution (z_mean and z_log_var) End of explanation """ def build_decoder(inputs): # input fully-connected block x = Dense(256)(inputs) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) x = Dropout(0.2)(x) # intermediate fully-connected block w = input_shape[0] // (2 ** 3) x = Dense(w * 16)(x) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) x = Dropout(0.2)(x) # reshape for convolutional blocks x = Reshape((w, 16))(x) # first deconv block x = UpSampling1D(size=4)(x) x = Conv1D(kernel_initializer=kernel_init,bias_initializer=bias_init,padding="same", filters=16,kernel_size=8)(x) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) x = Dropout(0.1)(x) # zero-padding to get x in the right dimension to create the spectra x = ZeroPadding1D(padding=(2,1))(x) # second deconv block x = UpSampling1D(size=2)(x) x = Conv1D(kernel_initializer=kernel_init,bias_initializer=bias_init,padding="same", filters=16,kernel_size=8)(x) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) # output conv layer x = Conv1D(kernel_initializer=kernel_init,bias_initializer=bias_init,padding="same", filters=1,kernel_size=8,activation='linear')(x) return Model(inputs,x) """ Explanation: build decoder takes z (latent variables) as an input and outputs a stellar spectrum End of explanation """ # encoder and predictor input placeholders input_spec = Input(shape=input_shape) input_mask = Input(shape=input_shape) # error spectra placeholder input_err_spec = Input(shape=input_shape) # decoder input placeholder input_z = Input(shape=(z_dims,)) model_name='vae_test' start_e = 0 # if you want to continue training from a certain epoch, you can uncomment the load models lines # and comment out the build_encoder, build_decoder lines ''' encoder = keras.models.load_model('models/encoder_'+model_name+'_epoch_'+str(start_e)+'.h5', custom_objects={'ZeroAugmentLayer':ZeroAugmentLayer}) decoder = keras.models.load_model('models/decoder_'+model_name+'_epoch_'+str(start_e)+'.h5', custom_objects={'ZeroAugmentLayer':ZeroAugmentLayer}) ''' # encoder model encoder = build_encoder(input_spec, input_mask) # decoder layers decoder = build_decoder(input_z) #''' encoder.summary() decoder.summary() # outputs for encoder z_mean, z_log_var = encoder([input_spec, input_mask]) # 
sample from latent distribution given z_mean and z_log_var z = Lambda(sampling, output_shape=(z_dims,))([z_mean, z_log_var]) # outputs for decoder output_spec = decoder(z) """ Explanation: build models End of explanation """ # loss for evaluating the regenerated spectra and the latent distribution class VAE_LossLayer_weighted(Layer): __name__ = u'vae_labeled_loss_layer' def __init__(self, **kwargs): self.is_placeholder = True super(VAE_LossLayer_weighted, self).__init__(**kwargs) def lossfun(self, x_true, x_pred, z_avg, z_log_var, x_err): mse = K.mean(K.square((x_true - x_pred)/x_err)) kl_loss_x = K.mean(-0.5 * K.sum(1.0 + z_log_var - K.square(z_avg) - K.exp(z_log_var), axis=-1)) return mse + kl_loss_x def call(self, inputs): # inputs for the layer: x_true = inputs[0] x_pred = inputs[1] z_avg = inputs[2] z_log_var = inputs[3] x_err = inputs[4] # calculate loss loss = self.lossfun(x_true, x_pred, z_avg, z_log_var, x_err) # add loss to model self.add_loss(loss, inputs=inputs) # returned value not really used for anything return x_true # dummy loss to give zeros, hence no gradients to train # the real loss is computed as the layer shown above and therefore this dummy loss is just # used to satisfy keras notation when compiling the model def zero_loss(y_true, y_pred): return K.zeros_like(y_true) """ Explanation: create loss function This VAE has two loss functions that are minimized simultaneously: a weighted mean-squared-error to analyze the predicted spectra: \begin{equation} mse = \frac{1}{N}\sum{\frac{(x_{true}-x_{pred})^2}{(x_{err})^2}} \end{equation} a relative entropy, KL (Kullbackโ€“Leibler divergence) loss to keep the latent variables within a similar distribuition: \begin{equation} KL = \frac{1}{N}\sum{-\frac{1}{2}(1.0+z_{log_var} - z_{avg}^2 - e^{z_{log_var}})} \end{equation} End of explanation """ # create loss layer vae_loss = VAE_LossLayer_weighted()([input_spec, output_spec, z_mean, z_log_var, input_err_spec]) # build trainer with spectra, zero-masks, and error spectra as inputs. 
output is the final loss layer vae = Model(inputs=[input_spec, input_mask, input_err_spec], outputs=[vae_loss]) # compile trainer vae.compile(loss=[zero_loss], optimizer=Adam(lr=1.0e-4, beta_1=0.5)) vae.summary() # a model that encodes and then decodes a spectrum (this is used to plot the intermediate results during training) gen_x_to_x = Model([input_spec,input_mask], output_spec) gen_x_to_x.compile(loss='mse', optimizer=Adam(lr=1.0e-4, beta_1=0.5)) # a function to display the time remaining or elapsed def time_format(t): m, s = divmod(t, 60) m = int(m) s = int(s) if m == 0: return u'%d sec' % s else: return u'%d min' % (m) # function for training on a batch def train_on_batch(x_batch,x_err_batch): # create zero-augmentation mask for batch zero_mask = create_zero_mask(x_batch,0,3,1030,dataset=None,ones_padded=False) # train on batch loss = [vae.train_on_batch([cnn_reshape(x_batch), cnn_reshape(zero_mask),cnn_reshape(x_err_batch)], cnn_reshape(x_batch))] losses = {'vae_loss': loss[0]} return losses def fit_model(model_name, data_file, epochs, reporter): # get the number of spectra in the data_file with h5py.File(data_file, "r") as F: num_data_ap = len(F['spectrum']) # lets use 90% of the samples for training num_data_train_ap = int(num_data_ap*0.9) # the remainder will be grabbed for testing the model throughout training test_indices_range_ap = [num_data_train_ap,num_data_ap] # loop through the number of epochs for e in xrange(start_e,epochs): # create a randomized array of indices to grab batches of the spectra perm_ap = np.random.permutation(num_data_train_ap) start_time = time.time() # loop through the batches losses_=[] for b in xrange(0, num_data_train_ap, batchsize): # determine current batch size bsize = min(batchsize, num_data_train_ap - b) # grab a batch of indices indx_batch = perm_ap[b:b+bsize] # load a batch of data x_batch, x_err_batch= load_train_data_weighted(data_file,indices=indx_batch) # train on batch losses = train_on_batch(x_batch,x_err_batch) losses_.append(losses) # Print current status ratio = 100.0 * (b + bsize) / num_data_train_ap print unichr(27) + u"[2K",; sys.stdout.write(u'') print u'\rEpoch #%d | %d / %d (%6.2f %%) ' % \ (e + 1, b + bsize, num_data_train_ap, ratio),; sys.stdout.write(u'') for k in reporter: if k in losses: print u'| %s = %5.3f ' % (k, losses[k]),; sys.stdout.write(u'') # Compute ETA elapsed_time = time.time() - start_time eta = elapsed_time / (b + bsize) * (num_data_train_ap - (b + bsize)) print u'| ETA: %s ' % time_format(eta),; sys.stdout.write(u'') sys.stdout.flush() print u'' # Print epoch status ratio = 100.0 print unichr(27) + u"[2K",; sys.stdout.write(u'') print u'\rEpoch #%d | %d / %d (%6.2f %%) ' % \ (e + 1, num_data_train_ap, num_data_train_ap, ratio),; sys.stdout.write(u'') losses_all = {} for k in losses_[0].iterkeys(): losses_all[k] = tuple(d[k] for d in losses_) for k in reporter: if k in losses_all: losses_all[k]=np.sum(losses_all[k])/len(losses_) for k in reporter: if k in losses_all: print u'| %s = %5.3f ' % (k, losses_all[k]),; sys.stdout.write(u'') # save loss to evaluate progress myfile = open(model_name+'.txt', 'a') for k in reporter: if k in losses: myfile.write("%s," % losses[k]) myfile.write("\n") myfile.close() # Compute Time Elapsed elapsed_time = time.time() - start_time eta = elapsed_time print u'| TE: %s ' % time_format(eta),; sys.stdout.write(u'') #sys.stdout.flush() print('\n') # save models encoder.save('models/encoder_'+model_name+'_epoch_'+str(e)+'.h5') 
decoder.save('models/decoder_'+model_name+'_epoch_'+str(e)+'.h5') # plot results for a test set to evaluate how the vae is able to reproduce a spectrum test_sample_indices = np.random.choice(range(test_indices_range_ap[0],test_indices_range_ap[1]), 5, replace=False) sample_orig,_, = load_train_data_weighted(data_file,indices=test_sample_indices) zero_mask_test = create_zero_mask(sample_orig,0,3,1030) test_x = gen_x_to_x.predict([cnn_reshape(sample_orig),cnn_reshape(zero_mask_test)]) sample_orig_aug = sample_orig*zero_mask_test sample_diff = sample_orig-test_x.reshape(test_x.shape[0],test_x.shape[1]) # save test results fig, axes = plt.subplots(20,1,figsize=(70, 20)) for i in range(len(test_sample_indices)): # original spectrum axes[i*4].plot(sample_orig[i],c='r') axes[i*4].set_ylim((0.4,1.2)) # input zero-augmented spectrum axes[1+4*i].plot(sample_orig_aug[i],c='g') axes[1+4*i].set_ylim((0.4,1.2)) # regenerated spectrum axes[2+4*i].plot(test_x[i],c='b') axes[2+4*i].set_ylim((0.4,1.2)) # residual between original and regenerated spectra axes[3+4*i].plot(sample_diff[i],c='m') axes[3+4*i].set_ylim((-0.3,0.3)) # save results plt.savefig('results/test_sample_ap_'+model_name+'_epoch_'+str(e)+'.jpg') plt.close('all') """ Explanation: build and compile model trainer End of explanation """ reporter=['vae_loss'] epochs=30 batchsize=64 if start_e>0: start_e=start_e+1 data_file = '/data/stars/aspcapStar_combined_main_dr14.h5' fit_model(model_name,data_file, epochs,reporter) """ Explanation: train model you can experiment with the number of epochs. I suggest starting with fewer and seeing if the results are adequate. if not, continue training. The models and results are saved in models/ and results/ after each epoch, so you can run analyses throughout training. End of explanation """ import numpy as np import h5py import keras import matplotlib.pyplot as plt import sys from keras.layers import (Input, Lambda) from keras.engine.topology import Layer from keras import backend as K %matplotlib inline # Define edges of detectors (for APOGEE) blue_chip_begin = 322 blue_chip_end = 3242 green_chip_begin = 3648 green_chip_end = 6048 red_chip_begin = 6412 red_chip_end = 8306 # function for loading data def load_train_data_weighted(data_file,indices=None): # grab all if indices is None: with h5py.File(data_file,"r") as F: ap_spectra = F['spectrum'][:] ap_err_spectra = F['error_spectrum'][:] # grab a batch else: with h5py.File(data_file, "r") as F: indices_bool = np.ones((len(F['spectrum']),),dtype=bool) indices_bool[:] = False indices_bool[indices] = True ap_spectra = F['spectrum'][indices_bool,:] ap_err_spectra = F['error_spectrum'][indices_bool,:] # combine chips ap_spectra = np.hstack((ap_spectra[:,blue_chip_begin:blue_chip_end], ap_spectra[:,green_chip_begin:green_chip_end], ap_spectra[:,red_chip_begin:red_chip_end])) # set nan values to zero ap_spectra[np.isnan(ap_spectra)]=0. 
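    # combine chips for the error spectra as well (same blue/green/red slices as the fluxes)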
ap_err_spectra = np.hstack((ap_err_spectra[:,blue_chip_begin:blue_chip_end], ap_err_spectra[:,green_chip_begin:green_chip_end], ap_err_spectra[:,red_chip_begin:red_chip_end])) return ap_spectra,ap_err_spectra # zero-augmentation layer (a trick I use to input chunks of zeros into the input spectra) class ZeroAugmentLayer(Layer): def __init__(self, **kwargs): self.is_placeholder = True super(ZeroAugmentLayer, self).__init__(**kwargs) def zero_agument(self, x_real, zero_mask): return x_real*zero_mask def call(self, inputs): x_real = inputs[0] zero_mask = inputs[1] x_augmented = self.zero_agument(x_real, zero_mask) return x_augmented # a function for creating the zero-masks used during training def create_zero_mask(spectra,min_chunks,max_chunks,chunk_size,dataset=None,ones_padded=False): if dataset is None: zero_mask = np.ones_like(spectra) elif dataset=='apogee': zero_mask = np.ones((spectra.shape[0],7214)) elif dataset=='segue': zero_mask = np.ones((spectra.shape[0],3688)) num_spec = zero_mask.shape[0] len_spec = zero_mask.shape[1] num_bins = len_spec/chunk_size remainder = len_spec%chunk_size spec_sizes = np.array([chunk_size for i in range(num_bins)]) spec_sizes[-1]=spec_sizes[-1]+remainder num_bins_removed = np.random.randint(min_chunks,max_chunks+1,size=(num_spec,)) for i, mask in enumerate(zero_mask): bin_indx_removed = np.random.choice(num_bins, num_bins_removed[i], replace=False) for indx in bin_indx_removed: if indx==0: mask[indx*spec_sizes[indx]:(indx+1)*spec_sizes[indx]]=0. else: mask[indx*spec_sizes[indx-1]:indx*spec_sizes[indx-1]+spec_sizes[indx]]=0. return zero_mask # function for reshaping spectra into appropriate format for CNN def cnn_reshape(spectra): return spectra.reshape(spectra.shape[0],spectra.shape[1],1) losses = np.zeros((1,)) with open("vae_test.txt", "r") as f: for i,line in enumerate(f): currentline = np.array(line.split(",")[0],dtype=float) if i ==0: losses[0]=currentline.reshape((1,)) else: losses = np.hstack((losses,currentline.reshape((1,)))) plt.plot(losses[0:16],label='vae_loss') plt.legend() plt.show() # function for encoding a spectrum into the latent space def encode_spectrum(model_name,epoch,spectra): encoder = keras.models.load_model('models/encoder_'+model_name+'_epoch_'+str(epoch)+'.h5', custom_objects={'ZeroAugmentLayer':ZeroAugmentLayer}) z_avg,z_log_var = encoder.predict([cnn_reshape(spectra),cnn_reshape(np.ones_like(spectra))]) return z_avg, z_log_var data_file = '/data/stars/aspcapStar_combined_main_dr14.h5' test_range = [0,30000] test_sample_indices = np.random.choice(range(0,30000), 5000, replace=False) sample_x,_, = load_train_data_weighted(data_file,indices=test_sample_indices) model_name = 'vae_test' epoch=16 z_avg, z_log_var = encode_spectrum(model_name,epoch,sample_x) """ Explanation: analyze results Note, this is a dummy result. I haven't trained the models for a proper epoch yet End of explanation """ from tsne import bh_sne perplex=80 t_data = z_avg # convert data to float64 matrix. 
float64 is need for bh_sne t_data = np.asarray(t_data).astype('float64') t_data = t_data.reshape((t_data.shape[0], -1)) # perform t-SNE embedding vis_data = bh_sne(t_data, perplexity=perplex) # separate 2D into x and y axes information vis_x = vis_data[:, 0] vis_y = vis_data[:, 1] fig = plt.figure(figsize=(10, 10)) synth_ap = plt.scatter(vis_x, vis_y, marker='o', c='r',label='APOGEE', alpha=0.4) plt.tick_params( axis='x', which='both', bottom='off', top='off', labelbottom='off') plt.tick_params( axis='y', which='both', right='off', left='off', labelleft='off') plt.legend(fontsize=30) plt.tight_layout() plt.show() """ Explanation: t-sne an example of an unsupervised clustering method on the latent space. End of explanation """
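# Optional sketch (not from the original notebook): the same 2-D embedding can be
# computed with scikit-learn's t-SNE instead of the standalone `tsne`/bh_sne package.
# `z_avg` and `perplex` are the latent means and perplexity used above; treat this as
# an illustrative alternative rather than an exact drop-in for bh_sne.
from sklearn.manifold import TSNE

t_data_sk = np.asarray(z_avg).astype('float64').reshape((len(z_avg), -1))
vis_data_sk = TSNE(n_components=2, perplexity=perplex, init='random').fit_transform(t_data_sk)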
probml/pyprobml
notebooks/book2/04/rbm_contrastive_divergence.ipynb
mit
!pip install -qq optax import numpy as np import jax from jax import numpy as jnp from jax import grad, jit, vmap, random try: import optax except ModuleNotFoundError: %pip install -qq optax import optax try: import tensorflow_datasets as tfds except ModuleNotFoundError: %pip install -qq tensorflow tensorflow_datasets import tensorflow_datasets as tfds from sklearn.linear_model import LogisticRegression from matplotlib import pyplot as plt import matplotlib.gridspec as gridspec """ Explanation: A demonstration of using contrastive divergence to train the parameters of a restricted Boltzmann machine. References and Materials This notebook has made use of various textbooks, articles, and other resources with some particularly relevant examples given below. RBM and CD Background: - [1] K. Murphy. Probabilistic Machine Learning: Advanced Topics. MIT Press, 2023. - D. MacKay. Information theory, inference and learning algorithms. Cambridge University Press, 2003. - Hastie, Trevor, et al. The elements of statistical learning: data mining, inference, and prediction. Vol. 2. New York: springer, 2009. Practical advice for training RBMs with the CD algorithm: - [2] G. Hinton. A Practical Guide to Training Restricted Boltzmann Machines. Tech. rep. U. Toronto, 2010. Code: - gugarosa/learnenergy - yell/boltzmann-machines - Ruslan Salakhutdinov Matlab code End of explanation """ def plot_digit(img, label=None, ax=None): """Plot MNIST Digit.""" if ax is None: fig, ax = plt.subplots() if img.ndim == 1: img = img.reshape(28, 28) ax.imshow(img.squeeze(), cmap="Greys_r") ax.axis("off") if label is not None: ax.set_title(f"Label:{label}", fontsize=10, pad=1.3) return ax def grid_plot_imgs(imgs, dim=None, axs=None, labels=None, figsize=(5, 5)): """Plot a series of digits in a grid.""" if dim is None: if axs is None: n_imgs = len(imgs) dim = np.sqrt(n_imgs) if not dim.is_integer(): raise ValueError("If dim not specified `len(imgs)` must be a square number.") else: dim = int(dim) else: dim = len(axs) if axs is None: gridspec_kw = {"hspace": 0.05, "wspace": 0.05} if labels is not None: gridspec_kw["hspace"] = 0.25 fig, axs = plt.subplots(dim, dim, figsize=figsize, gridspec_kw=gridspec_kw) for n in range(dim**2): img = imgs[n] row_idx = n // dim col_idx = n % dim axi = axs[row_idx, col_idx] if labels is not None: ax_label = labels[n] else: ax_label = None plot_digit(img, ax=axi, label=ax_label) return axs def gridspec_plot_imgs(imgs, gs_base, title=None, dim=5): """Plot digits into a gridspec subgrid. Args: imgs - images to plot. gs_base - from `gridspec.GridSpec` title - subgrid title. Note that, in general, for this type of plotting it is considerably more simple to using `fig.subfigures()` however that requires matplotlib >=3.4 which has some conflicts with the default colab setup as of the time of writing. """ gs0 = gs_base.subgridspec(dim, dim) for i in range(dim): for j in range(dim): ax = fig.add_subplot(gs0[i, j]) plot_digit(imgs[i * dim + j], ax=ax) if (i == 0) and (j == 2): if title is not None: ax.set_title(title) """ Explanation: Plotting functions End of explanation """ def initialise_params(N_vis, N_hid, key): """Initialise the parameters. Args: N_vis - number of visible units. N_hid - number of hidden units. key - PRNG key. Returns: params - (W, a, b), Weights and biases for network. 
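            Each is drawn from a zero-mean normal scaled to standard deviation 0.01.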
""" W_key, a_key, b_key = random.split(key, 3) W = random.normal(W_key, (N_vis, N_hid)) * 0.01 a = random.normal(a_key, (N_hid,)) * 0.01 b = random.normal(b_key, (N_vis,)) * 0.01 return (W, a, b) @jit def sample_hidden(vis, params, key): """Performs the hidden layer sampling, P(h|v;ฮธ). Args: vis - state of the visible units. params - (W, a, b), Weights and biases for network. key - PRNG key. Returns: The probabilities and states of the hidden layer sampling. """ W, a, _ = params activation = jnp.dot(vis, W) + a hid_probs = jax.nn.sigmoid(activation) hid_states = random.bernoulli(key, hid_probs).astype("int8") return hid_probs, hid_states @jit def sample_visible(hid, params, key): """Performs the visible layer sampling, P(v|h;ฮธ). Args: hid - state of the hidden units params - (W, a, b), Weights and biases for network. key - PRNG key. Returns: The probabilities and states of the visible layer sampling. """ W, _, b = params activation = jnp.dot(hid, W.T) + b vis_probs = jax.nn.sigmoid(activation) vis_states = random.bernoulli(key, vis_probs).astype("int8") return vis_probs, vis_states @jit def CD1(vis_sample, params, key): """The one-step contrastive divergence algorithm. Can handle batches of training data. Args: vis_sample - sample of visible states from data. params - (W, a, b), Weights and biases for network. key - PRNG key. Returns: An estimate of the gradient of the log likelihood with respect to the parameters. """ key, subkey = random.split(key) hid_prob0, hid_state0 = sample_hidden(vis_sample, params, subkey) key, subkey = random.split(key) vis_prob1, vis_state1 = sample_visible(hid_state0, params, subkey) key, subkey = random.split(key) # It would be more efficient here to not actual sample the unused states. hid_prob1, _ = sample_hidden(vis_state1, params, subkey) delta_W = jnp.einsum("...j,...k->...jk", vis_sample, hid_prob0) - jnp.einsum( "...j,...k->...jk", vis_state1, hid_prob1 ) delta_a = hid_prob0 - hid_prob1 delta_b = vis_sample - vis_state1 return (delta_W, delta_a, delta_b) @jit def reconstruct_vis(vis_sample, params, key): """Reconstruct the visible state from a conditional sample of the hidden units. Returns Reconstruction probabilities. """ subkey1, subkey2 = random.split(key, 2) _, hid_state = sample_hidden(vis_sample, params, subkey1) vis_recon_prob, _ = sample_visible(hid_state, params, subkey2) return vis_recon_prob @jit def reconstruction_loss(vis_samples, params, key): """Calculate the L2 loss between a batch of visible samples and their reconstructions. Note this is a heuristic for evaluating training progress, not an objective function. """ reconstructed_samples = reconstruct_vis(vis_samples, params, key) loss = optax.l2_loss(vis_samples.astype("float32"), reconstructed_samples).mean() return loss @jit def vis_free_energy(vis_state, params): """Calculate the free enery of a visible state. The free energy of a visible state is equal to the sum of the energies of all of the configurations of the total state (hidden + visible) which contain that visible state. Args: vis_state - state of the visible units. params - (W, a, b), Weights and biases for network. key - PRNG key. Returns: The free energy of the visible state. """ W, a, b = params activation = jnp.dot(vis_state, W) + a return -jnp.dot(vis_state, b) - jnp.sum(jax.nn.softplus(activation)) @jit def free_energy_gap(vis_train_samples, vis_test_samples, params): """Calculate the average difference in free energies between test and train data. The free energy gap can be used to evaluate overfitting. 
If the model starts to overfit the training data the free energy gap will start to become increasingly negative. Args: vis_train_samples - samples of visible states from training data. vis_test_samples - samples of visible states from validation data. params - (W, a, b), Weights and biases for network. Returns: The difference between the test and validation free energies. """ train_FE = vmap(vis_free_energy, (0, None))(vis_train_samples, params) test_FE = vmap(vis_free_energy, (0, None))(vis_test_samples, params) return train_FE.mean() - test_FE.mean() @jit def evaluate_params(train_samples, test_samples, params, key): """Calculate performance measures of parameters.""" train_key, test_key = random.split(key) train_recon_loss = reconstruction_loss(train_samples, params, train_key) test_recon_loss = reconstruction_loss(test_samples, params, test_key) FE_gap = free_energy_gap(train_samples, test_samples, params) return train_recon_loss, test_recon_loss, FE_gap """ Explanation: Restricted Boltzmann Machines Restricted Boltzmann Machines (RBMs) are a type of energy based model in which the connectivity of nodes is carefully designed to facilitate efficient sampling methods. For details of RBMs see the sections on undirected graphical models (Section 4.3) and energy-based models (Chapter 23) in [1]. We reproduce here some of the relevant sampling equations which we will instrumenting below. We will be considering RBMs with binary units in both the hidden, $\mathbf{h}$, and visible, $\mathbf{v}$, layers. In general for Boltzmann machines with hidden units the probability of a particular state for the visible nodes is given by: $$ P_{\theta}(\mathbf{v}) = \frac{\sum_{\mathbf{h}}\ \exp(-\mathcal{E}\left(\mathbf{h},\mathbf{v},\theta)\right)}{Z(\theta)} $$ where $\theta$ is the collection of parameters $\theta = (\mathbf{W}, \mathbf{a}, \mathbf{b})$: - $\mathbf{W} \in \mathbb{R}^{N_{\mathrm{vis}} \times N_{\mathrm{hid}}}$ - $\mathbf{a} \in \mathbb{R}^{N_{\mathrm{hid}}}$ - $\mathbf{b} \in \mathbb{R}^{N_{\mathrm{vis}}}$ and the energy of state is given by: $$ \mathcal{E}(\mathbf{h}, \mathbf{v}, \theta) = \mathbf{v}^\top \mathbf{W} \mathbf{h} + \mathbf{h}^\top \mathbf{a} + \mathbf{v}^\top \mathbf{b}. $$ In restricted Boltzmann machines the hidden units are independent from one another conditional on the visible units, and vic versa. This means that it is straightforward to do conditional block-sampling of the state of the network. This independence structure has the property that when conditionally sampling, the probability that the $j$th hidden unit is active is, $$ p(h_j = 1 | \mathbf{v}, \theta) = \sigma\left(b_j + \sum_i v_i w_{ij}\right), $$ and probability that the $i$th visible unit is active is given by, $$ p(v_i = 1 | \mathbf{h}, \theta) = \sigma\left(a_i + \sum_j h_j w_{ij}\right). $$ The function $\sigma(\cdot)$ is the sigmoid function: $$ \sigma(x) = \frac{1}{1 + e^{-x}}. $$ Contrastive Divergence Contrastive divergence (CD) is the name for a family of algorithms used to perform approximate maximum likelihood training for RBMs. Contrastive divergence approximates the gradient of the log probability of the data (our desired objective function) by intialising an MCMC chain on the data vector and sampling for a small number of steps. The insight behind CD is that even with a very small number of steps the process still provides gradient information which can be used to fit the model parameters. Here we implement the CD1 algorithm which uses just a single round of Gibbs sampling. 
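In sketch form: starting from a data vector $\mathbf{v}$, one samples $\mathbf{h} \sim p(\mathbf{h}|\mathbf{v})$, a reconstruction $\mathbf{v}' \sim p(\mathbf{v}|\mathbf{h})$ and finally $\mathbf{h}' \sim p(\mathbf{h}|\mathbf{v}')$, and the intractable model expectation in the log-likelihood gradient is replaced by an average over these one-step reconstructions,
$$
\Delta w_{ij} \propto \langle v_i h_j \rangle_{\mathrm{data}} - \langle v_i h_j \rangle_{\mathrm{recon}},
$$
with the analogous differences $\langle h_j \rangle_{\mathrm{data}} - \langle h_j \rangle_{\mathrm{recon}}$ and $\langle v_i \rangle_{\mathrm{data}} - \langle v_i \rangle_{\mathrm{recon}}$ for the hidden and visible biases. This is exactly what the CD1 function above returns as (delta_W, delta_a, delta_b); following the practical advice in [2], the hidden probabilities are used in place of binary hidden samples to reduce sampling noise.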
For more details on the CD algorithm see [1] (Section 23.2.2). End of explanation """ def preprocess_images(images): images = images.reshape((len(images), -1)) return jnp.array(images > (255 / 2), dtype="float32") def load_mnist(split): images, labels = tfds.as_numpy(tfds.load("mnist", split=split, batch_size=-1, as_supervised=True)) procced_images = preprocess_images(images) return procced_images, labels mnist_train_imgs, mnist_train_labels = load_mnist("train") mnist_test_imgs, mnist_test_labels = load_mnist("test") """ Explanation: Load MNIST End of explanation """ def train_RBM(params, train_data, optimizer, key, eval_samples, n_epochs=5, batch_size=20): """Optimize parameters of RBM using the CD1 algoritm.""" @jit def batch_step(params, opt_state, batch, key): grads = jax.tree_map(lambda x: x.mean(0), CD1(batch, params, key)) updates, opt_state = optimizer.update(grads, opt_state, params) params = jax.tree_map(lambda p, u: p - u, params, updates) return params, opt_state opt_state = optimizer.init(params) metric_list = [] param_list = [params] n_batches = len(train_data) // batch_size for _ in range(n_epochs): key, subkey = random.split(key) perms = random.permutation(subkey, len(mnist_train_imgs)) perms = perms[: batch_size * n_batches] # Skip incomplete batch perms = perms.reshape((n_batches, -1)) for n, perm in enumerate(perms): batch = mnist_train_imgs[perm, ...] key, subkey = random.split(key) params, opt_state = batch_step(params, opt_state, batch, subkey) if n % 200 == 0: key, eval_key = random.split(key) batch_metrics = evaluate_params(*eval_samples, params, eval_key) metric_list.append(batch_metrics) param_list.append(params) return params, metric_list, param_list # In practice you can use many more than 100 hidden units, up to 1000-2000. # A small number is chosen here so that training is fast. N_vis, N_hid = mnist_train_imgs.shape[-1], 100 key = random.PRNGKey(111) key, subkey = random.split(key) init_params = initialise_params(N_vis, N_hid, subkey) optimizer = optax.sgd(learning_rate=0.05, momentum=0.9) eval_samples = (mnist_train_imgs[:1000], mnist_test_imgs[:1000]) params, metric_list, param_list = train_RBM(init_params, mnist_train_imgs, optimizer, key, eval_samples) """ Explanation: Training with optax End of explanation """ train_recon_loss, test_recon_loss, FE_gap = list(zip(*metric_list)) epoch_progress = np.linspace(0, 5, len(train_recon_loss)) fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6)) ax1.plot(epoch_progress, train_recon_loss, label="Train Reconstruction Loss") ax1.plot(epoch_progress, test_recon_loss, label="Test Reconstruction Loss") ax1.legend() ax1.set_xlabel("Epoch") ax1.set_ylabel("Loss") ax2.plot(epoch_progress, FE_gap) ax2.set_xlabel("Epoch") ax2.set_ylabel("Free Energy Gap"); vis_data_samples = mnist_test_imgs[:25] fig = plt.figure(figsize=(15, 5)) gs_bases = gridspec.GridSpec(1, 3, figure=fig) recon_params = (param_list[0], param_list[1], param_list[-1]) subfig_titles = ("Initial", "Epoch 1", "Epoch 5") key, subkey = random.split(key) for gs_base, epoch_param, sf_title in zip(gs_bases, recon_params, subfig_titles): # Use the same subkey for all parameter sets. vis_recon_probs = reconstruct_vis(vis_data_samples, epoch_param, subkey) title = f"{sf_title} Parameters" gridspec_plot_imgs(vis_recon_probs, gs_base, title) fig.suptitle("Reconstruction Samples", fontsize=20); """ Explanation: Evaluating Training The reconstruction loss is a heuristic measure of training performance. 
It measures a combination of two effects: The difference between the equilibrium distribution of the RBM and the empirical distribution of the data. The mixing rate of the Gibbs sampling. The first of these effects tends to be what we care about however it is impossible to distinguish it from the second [2]. The objective function which contrastive divergence optimizes is the probability that the RBM assigns to the dataset. For the reasons outlined above we cannot calculate this directly because it requires knowledge of the partition function. We can however compare the average free energy between two different sets of data. In the comparison the partition function cancel out. Hinton [2] suggests using this comparison as a measure of overfitting. If the model is not overfitting the values should be approximately the same. As the model starts to overfit the free energy of the validation data will increase with respect to the training data so the difference between the two values will become increasingly negative. End of explanation """ class RBM_LogReg: """ Perform logistic regression on samples transformed to RBM hidden representation with `params`. """ def __init__(self, params): self.params = params self.LR = LogisticRegression(solver="saga", tol=0.1) def _transform(self, samples): W, a, _ = self.params activation = jnp.dot(samples, W) + a hidden_probs = jax.nn.sigmoid(activation) return hidden_probs def fit(self, train_samples, train_labels): transformed_samples = self._transform(train_samples) self.LR.fit(transformed_samples, train_labels) def score(self, test_samples, test_labels): transformed_samples = self._transform(test_samples) return self.LR.score(transformed_samples, test_labels) def predict(self, test_samples): transformed_samples = self._transform(test_samples) return self.LR.predict(transformed_samples) def reconstruct_samples(self, samples, key): return reconstruct_vis(samples, self.params, key) train_data = (mnist_train_imgs, mnist_train_labels) test_data = (mnist_test_imgs, mnist_test_labels) # Train LR classifier on the raw pixel data for comparison. LR_raw = LogisticRegression(solver="saga", tol=0.1) LR_raw.fit(*train_data) # LR classifier trained on hidden representations after 1 Epoch of training. rbm_lr1 = RBM_LogReg(param_list[1]) rbm_lr1.fit(*train_data) # LR classifier trained on hidden representations after 5 Epochs of training. rbm_lr5 = RBM_LogReg(param_list[-1]) rbm_lr5.fit(*train_data) print("Logistic Regression Accuracy:") print(f"\tRaw Data: {LR_raw.score(*test_data)}") print(f"\tHidden Units Epoch-1: {rbm_lr1.score(*test_data)}") print(f"\tHidden Units Epoch-5: {rbm_lr5.score(*test_data)}") """ Explanation: Classification While Boltzmann Machines are generative models they can be adapted to be used for classification and other discriminative tasks. Here we use RBM to transform a sample image into the hidden representation and then use this as input to a logistic regression classifier. This classification is more accurate than when using the raw image data as input. Furthermore, the hidden the accuracy of classification increases as the training time increases. Alternatively, a RBM can made to include a set of visible units which encode the class label. Classification is then performed by clamping each of the class units in turn along with the test sample. The unit that gives the lowest free energy is the chosen class [2]. 
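As a sketch of that variant (assuming an RBM whose visible layer is the image pixels concatenated with a one-hot label vector, which is not the RBM trained in this notebook), the predicted class would be $\hat{c} = \arg\min_c F([\mathbf{x}, \mathbf{e}_c])$, where $\mathbf{e}_c$ is the one-hot encoding of class $c$ and $F$ is the free energy computed by the vis_free_energy function defined earlier.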
End of explanation """ class1_correct = rbm_lr1.predict(mnist_test_imgs) == mnist_test_labels class5_correct = rbm_lr5.predict(mnist_test_imgs) == mnist_test_labels diff_class_img_idxs = np.where(class5_correct & ~class1_correct)[0] print(f"There are {len(diff_class_img_idxs)} images which were correctly labelled after >1 Epochs of training.") """ Explanation: The increase in accuracy here is modest because of the small number of hidden units. When 1000 hidden units are used the Epoch-5 accuracy approaches 97.5%. End of explanation """ key = random.PRNGKey(100) # Try out different subsets of img indices. idx_list = diff_class_img_idxs[100:] n_rows = 5 fig, axs = plt.subplots(n_rows, 3, figsize=(9, 20)) for img_idx, ax_row in zip(idx_list, axs): ax1, ax2, ax3 = ax_row img = mnist_test_imgs[img_idx] plot_digit(img, ax=ax1) true_label = mnist_test_labels[img_idx] ax1.set_title(f"Raw Image\nTrue Label: {true_label}") epoch1_recon = rbm_lr1.reconstruct_samples(img, key) plot_digit(epoch1_recon, ax=ax2) hid1_label = rbm_lr1.predict(img[None, :])[0] ax2.set_title(f"Epoch 1 Reconstruction\nPredicted Label: {hid1_label} (incorrect)") epoch5_recon = rbm_lr5.reconstruct_samples(img, key) hid5_label = rbm_lr5.predict(img[None, :])[0] plot_digit(epoch5_recon, ax=ax3) ax3.set_title(f"Epoch 5 Reconstruction\nPredicted Label: {hid5_label} (correct)"); """ Explanation: We can explore the quality of the learned hidden tranformation by inspecting reconstructions of these test images. You can explore this by choosing different subsets of images in the cell below: End of explanation """
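# Optional sketch (not part of the original notebook): because the RBM is a generative
# model, new "fantasy" digits can be drawn by running a longer Gibbs chain with the
# trained parameters, reusing the sample_hidden/sample_visible helpers defined above.
# `params` is assumed to hold the trained values returned by train_RBM earlier.
def gibbs_sample_digits(params, key, n_samples=25, n_steps=500):
    key, subkey = random.split(key)
    # start each chain from a random binary visible state (784 = 28x28 MNIST pixels)
    vis = random.bernoulli(subkey, 0.5, (n_samples, 784)).astype("float32")
    for _ in range(n_steps):
        key, h_key, v_key = random.split(key, 3)
        _, hid_state = sample_hidden(vis, params, h_key)
        vis_probs, vis_state = sample_visible(hid_state, params, v_key)
        vis = vis_state.astype("float32")
    # return the final visible probabilities, which plot more smoothly than binary samples
    return vis_probs

fantasy_digits = gibbs_sample_digits(params, random.PRNGKey(0))
grid_plot_imgs(fantasy_digits);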
enchantner/python-zero
lesson_2/Slides.ipynb
mit
%time "list(range(1000000)); print('ololo')" """ Explanation: ะŸะฐะบะตั‚ั‹ ะธ ะพะบั€ัƒะถะตะฝะธะต ะดะปั Python easy_install (setuptools) - ัั‚ะฐั€ั‹ะธฬ† ะผะตะฝะตะดะถะตั€ ะฟะฐะบะตั‚ะพะฒ (ะฟั€ะฐะบั‚ะธั‡ะตัะบะธ ะฝะต ะธัะฟะพะปัŒะทัƒะตั‚ัั) pip - ะฝะพะฒั‹ะธฬ† ะผะตะฝะตะดะถะตั€ ะฟะฐะบะตั‚ะพะฒ virtualenv - ัƒัั‚ะฐะฝะพะฒะธั‚ัŒ ะบะพะฝะบั€ะตั‚ะฝั‹ะต ะฒะตั€ัะธะธ ะฟะฐะบะตั‚ะพะฒ ะปะพะบะฐะปัŒะฝะพ virtualenvwrapper - ะพั‚ะปะธั‡ะฝะฐั ะพะฑะตั€ั‚ะบะฐ ะดะปั virtualenv ะžะฑั‰ะธะน ั€ะตะฟะพะทะธั‚ะพั€ะธะน ะฟะฐะบะตั‚ะพะฒ https://pypi.python.org/pypi ะŸั€ะตะดัะพะฑั€ะฐะฝะฝั‹ะต ะฟะฐะบะตั‚ั‹ ะดะปั Windows: http://www.lfd.uci.edu/~gohlke/pythonlibs/ http://docs.python-guide.org/en/latest/dev/virtualenvs/ ะ ะฐะฑะพั‚ะฐ ั virtualenv (ะตัะปะธ ัƒ ะฒะฐั Python ั python.org) ะกะพะทะดะฐะตะผ ะพะบั€ัƒะถะตะฝะธะต python3 -m venv my_venv ะ—ะฐั…ะพะดะธะผ ะฒ ะฝะตะณะพ: source my_venv/bin/activate (ะฝะฐ Windows: my_venv\scripts\activate.bat) ะ’ั‹ั…ะพะดะธะผ: deactivate ะ•ั‰ะต ัƒะดะพะฑะฝะพ ั‡ะตั€ะตะท virtualenvwrapper: workon my_venv ; lsvirtualenv ; rmvirtualenv my_venv ะ›ะธั‡ะฝะพ ั ะฒัะตะณะดะฐ ะฟะตั€ะฒั‹ะผ ะดะตะปะพะผ ะพะฑะฝะพะฒะปััŽ ัƒัั‚ะฐะฝะพะฒั‰ะธะบะธ: pip install -U pip wheel setuptools ะ›ัƒั‡ัˆะต ะ’ะžะžะ‘ะฉะ• ะะ˜ะšะžะ“ะ”ะ ะฝะต ะดะตะปะฐั‚ัŒ "sudo pip install" ะ ะฐะฑะพั‚ะฐ ั virtualenv (ะตัะปะธ ัƒ ะฒะฐั Anaconda) ะกะพะทะดะฐะตะผ ะพะบั€ัƒะถะตะฝะธะต conda create --name my_venv ะ—ะฐั…ะพะดะธะผ ะฒ ะฝะตะณะพ: source activate my_venv/activate myenv ะ’ั‹ั…ะพะดะธะผ: deactivate/source deactivate ะกั‚ะฐะฒะธะผ ะฟะฐะบะตั‚ั‹ ั ะฟะพะผะพั‰ัŒัŽ conda install ะฒะผะตัั‚ะพ pip install: conda install requests ะงั‚ะพะฑั‹ ัƒะดะฐะปะธั‚ัŒ ะพะบั€ัƒะถะตะฝะธะต, ะธัะฟะพะปัŒะทัƒะตะผ conda remove --name my_venv --all ะ›ัƒั‡ัˆะต ะ’ะžะžะ‘ะฉะ• ะะ˜ะšะžะ“ะ”ะ ะฝะต ะดะตะปะฐั‚ัŒ "sudo pip install" ะŸะพะดั€ะพะฑะฝะตะต: https://conda.io/docs/user-guide/tasks/manage-environments.html IPython! pip install ipython ะŸะพะดัั‚ะฐะฝะพะฒะบะฐ ะฟะพ Tab ะ˜ัั‚ะพั€ะธั ะบะพะผะฐะฝะด (ัั‚ั€ะตะปะบะฐ ะฒะฒะตั€ั…) ะ˜ะฝั‚ะตะณั€ะฐั†ะธั ั ัะธัั‚ะตะผะฝั‹ะผ shell ะ’ัั‚ั€ะพะตะฝะฝั‹ะน ะผะตั…ะฐะฝะธะทะผ ะฟะพะบะฐะทะฐ ัะฟั€ะฐะฒะบะธ ("?") %magic ะ ะตะดะฐะบั‚ะธั€ะพะฒะฐะฝะธะต ะผะฝะพะณะพัั‚ั€ะพั‡ะฝั‹ั… ะบัƒัะบะพะฒ ะบะพะดะฐ End of explanation """ def my_cool_function(a, b): return a + b def my_cool_function2(a: int, b: int) -> int: return a + b def my_cool_function(a, b): return a + b my_cool_function2("foo", "bar") """ Explanation: ะ’ะพะฟั€ะพั ะšะฐะบะพะน "ะผะฐะณะธั‡ะตัะบะพะน" ะบะพะผะฐะฝะดะพะน ะผะพะถะฝะพ ะฟะพะผะตั€ะธั‚ัŒ ะฒั€ะตะผั ะฒั‹ะฟะพะปะฝะตะฝะธั ะบัƒัะบะฐ ะบะพะดะฐ? 
Jupyter (ex IPython Notebook) ะะตะทะฐะฒะธัะธะผั‹ะต ัั‡ะตะนะบะธ ั ะบะพะดะพะผ ะขะตะบัั‚ะพะฒั‹ะต ะดะฐะฝะฝั‹ะต, Markdown ะธ LaTeX ะŸั€ะพัั‚ะพะน ั‚ะตะบัั‚ะพะฒั‹ะน ั„ะพั€ะผะฐั‚ .ipynb Inline-ะณั€ะฐั„ะธะบะฐ, ะฒ ั‚ะพะผ ั‡ะธัะปะต ั‚ั€ะตั…ะผะตั€ะฝะฐั ะธ ะธะฝั‚ะตั€ะฐะบั‚ะธะฒะฝะฐั ะ’ะธะดะถะตั‚ั‹ ะŸะพะดะดะตั€ะถะบะฐ ยซัะดะตั€ยป @ellisonbg @fperez_org ะšะฐะบ ัะดะตะปะฐะฝะฐ ัั‚ะฐ ะฟั€ะตะทะตะฝั‚ะฐั†ะธั https://github.com/damianavila/RISE ะŸั€ะตะถะดะต ั‡ะตะผ ะฟั€ะพะดะพะปะถะฐั‚ัŒ, ะดะฐะฒะฐะนั‚ะต ะฝะฐัั‚ั€ะพะธะผ, ั‡ั‚ะพะฑั‹ ะพะฝะฐ ั€ะฐะฑะพั‚ะฐะปะฐ ัƒ ะฒัะตั… ะ“ะดะต ะปัƒั‡ัˆะต ะฒัะตะณะพ ะฟะธัะฐั‚ัŒ ะบะพะด ะฟั€ะพะตะบั‚ะฐ Sublime Text 3 VSCode Atom PyCharm ะšะฐะบ ะฒั‹ะณะปัะดะธั‚ ะฟั€ะพะตะบั‚ ะฝะฐ Python ะคะฐะนะปั‹ (ะผะพะดัƒะปะธ) Python ะธะผะตัŽั‚ ั€ะฐััˆะธั€ะตะฝะธะต ".py" "ะŸะฐะบะตั‚ั‹" Python - ัั‚ะพ ะฟะฐะฟะบะธ, ะฒ ะบะพั‚ะพั€ั‹ั… ะตัั‚ัŒ ั„ะฐะนะป __init__.py ะ’ะฝัƒั‚ั€ะธ ะผะพะดัƒะปะตะน ะบะพะด ั€ะฐะทะดะตะปะตะฝ ะฝะฐ ั„ัƒะฝะบั†ะธะธ ะธ ะบะปะฐััั‹ ัะพะณะปะฐัะฝะพ ะปะพะณะธะบะต ะฟั€ะธะปะพะถะตะฝะธั ะŸั€ะฐะฒะธะปะพะผ ั…ะพั€ะพัˆะตะณะพ ั‚ะพะฝะฐ ัั‡ะธั‚ะฐะตั‚ัั ะพั„ะพั€ะผะปัั‚ัŒ ะบะพะด ะฟะพ PEP8 ะคัƒะฝะบั†ะธะธ https://www.tutorialspoint.com/python/python_functions.htm End of explanation """ def main(): # here be dragons return if __name__ == "__main__": main() """ Explanation: ะ—ะฐะณะพั‚ะพะฒะบะฐ ะดะปั ั‚ะธะฟะธั‡ะฝะพะณะพ ัะบั€ะธะฟั‚ะฐ ะฝะฐ Python End of explanation """ import random # ะฒัั‚ั€ะพะตะฝะฝั‹ะน ะผะพะดัƒะปัŒ import os.path as op # ะธะผะฟะพั€ั‚ ั ะฟัะตะฒะดะพะฝะธะผะพะผ random.randint(1, 10) from os.path import * from sample2 import fibonacci sample2.fibonacci(5) """ Explanation: ะœะพะดัƒะปะธ End of explanation """ import traceback d = {} try: 1 / 0 except KeyError as exc: print(traceback.format_exc()) except ZeroDivisionError as an_exc: print("bad luck") """ Explanation: ะžะฑั€ะฐะฑะพั‚ะบะฐ ะพัˆะธะฑะพะบ End of explanation """ import random a = [ random.randint(-10, 10) for _ in range(10) ] a b = [item ** 2 for item in a if item > 0] b """ Explanation: ะ“ะตะฝะตั€ะฐั†ะธั ัะฟะธัะบะพะฒ (ัะฟะธัะบะพะฒั‹ะต ะฒะบะปัŽั‡ะตะฝะธั) End of explanation """ a = "asfdhasdfh" dir(a) """ Explanation: ะ•ั‰ะต ะฝะตะผะฝะพะณะพ ะฒัั‚ั€ะพะตะฝะฝั‹ั… ั„ัƒะฝะบั†ะธะน * ะœั‹ ัƒะถะต ะทะฝะฐะตะผ max(), min(), sorted(), len(), sum(). ะงั‚ะพ ะพะฝะธ ะดะตะปะฐัŽั‚? * ะ ะฒะพั‚ ะตั‰ะต - any(), all(), abs(), dir() End of explanation """
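# A quick illustration of the built-ins mentioned above (added example, not from the
# original slides): any() is True if at least one element is truthy, all() only if
# every element is, and abs() returns the absolute value.
nums = [3, -1, 4, 0]
print(any(n < 0 for n in nums))   # True  - there is a negative number
print(all(n != 0 for n in nums))  # False - one element is zero
print(abs(-2.5))                  # 2.5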
Karuntg/SDSS_SSC
Analysis_2020/sdss_gaia_matching_IZv2.ipynb
gpl-3.0
# workhorse packages import matplotlib.pyplot as plt import numpy as np # Data handling from astropy.table import Table from astropy.coordinates import SkyCoord from astropy import units as u from astropy.table import hstack # for fits with log likelihood import scipy from scipy import stats from scipy import optimize # for plotting histograms from astroML.plotting import hist """ Explanation: 6 May 2020: THIS IS revised VERSION OF ZI'S ORIGINAL TAKEN FROM GITHUB ON APRIL 23, 2020 https://github.com/Karuntg/SDSS_SSC/raw/master/analysis/sdss_gaia_matching.ipynb SEE ADDNL INFO REG GAIA PHOT CALIB AT https://gea.esac.esa.int/archive/documentation/GDR1/Data_processing/chap_cu5phot/sec_phot_calibr.html CHANGES DONE 1. FOCUS ENTIRELY ON THE ESA GAIA ANALYSIS/ PLOTS 2. PLOTS DONE: Gg vs gr, gi, gz, and Gi vs ri 3. FIT ONLY 3RD ORDER POLYNOMIALS 4. FOR ALL OTHER ANALYSES/ PLOTS REFER TO V1 End of explanation """ SDSS_CAT = 'stripe82calibStars_v2.6.dat' GAIA_CAT = 'Stripe82_GaiaDR1.dat' # MATCHED GAIA-SDSS OBJECTS SDSS2GAIA = 'S82_SDSS2GAIA_matchkln.csv' # MAGS/ERRS OF MATCHED GAIA-SDSS OBJECTS SDSS2GAIA_magerr = 'S82_SDSS2GAIA_matchkln_magerr.csv' """ Explanation: DEFINE CAT NAMES, ETC. End of explanation """ # CONVERT IQD TO STD DEV IQD2STD = 0.741 # GAIA ZEROPOINT GAIA_ZP = 25.525 # DEFINE POLYNOMIAL DEGREES deg1,deg2,deg3,deg5,deg7 = 1,2,3,5,7 """ Explanation: DEFINE PROG CONSTS End of explanation """ %%time colnames = ['calib_fla', 'ra', 'dec', 'raRMS', 'decRMS', 'nEpochs', 'AR_val', 'u_Nobs', 'u_mMed', 'u_mMean', 'u_mErr', 'u_rms_scatt', 'u_chi2', 'g_Nobs', 'g_mMed', 'g_mMean', 'g_mErr', 'g_rms_scatt', 'g_chi2', 'r_Nobs', 'r_mMed', 'r_mMean', 'r_mErr', 'r_rms_scatt', 'r_chi2', 'i_Nobs', 'i_mMed', 'i_mMean', 'i_mErr', 'i_rms_scatt', 'i_chi2', 'z_Nobs', 'z_mMed', 'z_mMean', 'z_mErr', 'z_rms_scatt', 'z_chi2'] sdss = Table.read(SDSS_CAT, format='ascii', names=colnames) %%time colnames = ['ra', 'dec', 'nObs', 'Gmag', 'flux', 'fluxErr'] # gaia = Table.read('Stripe82_GaiaDR1_small.dat', format='ascii', names=colnames) gaia = Table.read(GAIA_CAT, format='ascii', names=colnames) """ Explanation: Read the sdss and gaia catalogs Goes quite slowly, consider DASK? End of explanation """ # PRINT OUT NUMBER OF OBJ READ print('Num 2007 cat obj: {0}'.format(len(sdss['ra']))) print('Num gaia obj: {0}'.format(len(gaia['ra']))) %%time sdss_coords = SkyCoord(ra = sdss['ra']*u.degree, dec= sdss['dec']*u.degree) gaia_coords = SkyCoord(ra = gaia['ra']*u.degree, dec= gaia['dec']*u.degree) # this is matching gaia to sdss, so that indices are into sdss catalog # makes sense in this case since the sdss catalog is bigger than gaia idx, d2d, d3d = gaia_coords.match_to_catalog_sky(sdss_coords) """ Explanation: Match gaia to sdss, since here sdss is much larger End of explanation """ # THIS IS THE PRIMARY DF AFTER SDSS-GAIA MATCHING # object separation is an object with units, # I add that as a column so that one can # select based on separation to the nearest matching object # since it's matching gaia to sdss, # the resulting catalog has the same length # as gaia ... 
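# the hstack below keeps one row per gaia source and disambiguates duplicated
# column names with the _gaia/_sdss suffixes (e.g. ra_gaia vs ra_sdss)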
gaia_sdss = hstack([gaia, sdss[idx]], table_names = ['gaia', 'sdss']) gaia_sdss['sep_2d_arcsec'] = d2d.arcsec print('Num gaia-SDSS 2007 matched: {0}'.format(len(idx))) print(gaia_sdss.info()) """ Explanation: HERE ARE ALL THE DATAFRAMES AND SELECTION CUTS THIS IS THE PRIMARY DF AFTER SDSS-GAIA MATCHING gaia_sdss End of explanation """ r_all = gaia_sdss['r_mMed'] G_all = gaia_sdss['Gmag'] # sdss colors gr_all = gaia_sdss['g_mMed'] - gaia_sdss['r_mMed'] gi_all = gaia_sdss['g_mMed'] - gaia_sdss['i_mMed'] gz_all = gaia_sdss['g_mMed'] - gaia_sdss['z_mMed'] ri_all = gaia_sdss['r_mMed'] - gaia_sdss['i_mMed'] # Gaia colors Gg_all = gaia_sdss['Gmag'] - gaia_sdss['g_mMed'] Gr_all = gaia_sdss['Gmag'] - gaia_sdss['r_mMed'] Gi_all = gaia_sdss['Gmag'] - gaia_sdss['i_mMed'] ra_all = gaia_sdss['ra_gaia'] raW_all = np.where(ra_all > 180, ra_all-360, ra_all) dec_all = gaia_sdss['dec_gaia'] """ Explanation: GET ALL THE REQUIRED QUANTITIES MAGS AND COLORS FROM SDSS AND GAIA POSITIONS FROM GAIA ONLY _all End of explanation """ # I would call good match to be within a certain limit # there is no built-in boundary - match_to_catalog_sky() # will find the nearest match, regardless if it's an arcsecond # or five degrees to the nearest one. # gaia sources that have a good sdss match flag = (gaia_sdss['sep_2d_arcsec'] < 0.5) # 486812 for <1 arcsec gaia_matched = gaia_sdss[flag] print('Num matched obj: %d' % len(gaia_sdss)) print('Num dist < 0.5 arc.sec: %d' % len(gaia_matched)) """ Explanation: SELECTION I: BASED ON MATCH DISTANCE SELECT WITH DIST < 0.5 ARC.SEC gaia_matched End of explanation """ ## MAGS AND COLORS r_pk1 = gaia_matched['r_mMed'] G_pk1 = gaia_matched['Gmag'] # sdss colors gr_pk1 = gaia_matched['g_mMed'] - gaia_matched['r_mMed'] gi_pk1 = gaia_matched['g_mMed'] - gaia_matched['i_mMed'] gz_pk1 = gaia_matched['g_mMed'] - gaia_matched['z_mMed'] ri_pk1 = gaia_matched['r_mMed'] - gaia_matched['i_mMed'] # gaia colors Gg_pk1 = gaia_matched['Gmag'] - gaia_matched['g_mMed'] Gr_pk1 = gaia_matched['Gmag'] - gaia_matched['r_mMed'] Gi_pk1 = gaia_matched['Gmag'] - gaia_matched['i_mMed'] # POSITIONS BASED ON GAIA ra_pk1 = gaia_matched['ra_gaia'] raW_pk1 = np.where(ra_pk1 > 180, ra_pk1-360, ra_pk1) dec_pk1 = gaia_matched['dec_gaia'] """ Explanation: PICK ALL THE REQUIRED QUANTITIES WITH SELECTION I MAGS AND COLORS FROM SDSS AND GAIA POSITIONS FROM GAIA ONLY _pk1 End of explanation """ # flagOK = ((raW > -10) & (raW < 50) & (rMed>15) & (rMed<20)) # flagOK = ((raW > -10) & (raW < 50) & (rMed>16) & (rMed<19)) flagOK = ((raW_pk1 > -10) & (raW_pk1 < 50) & (r_pk1>16) & (r_pk1<19) & (gi_pk1>0) & (gi_pk1<3.0)) gaia_matchedOK = gaia_matched[flagOK] print('Num obj after select by RA, rMag, gi color: %d' % (len(gaia_matchedOK))) """ Explanation: SELECTION II: BY RA, RMAG AND SDSS GI COLOR flagOK = ((raW_pk1 > -10) & (raW_pk1 < 50) & (r_pk1>16) & (r_pk1<19) & (gi_pk1>0) & (gi_pk1<3.0)) gaia_matchedOK End of explanation """ g_pk2 = gaia_matchedOK['g_mMed'] r_pk2 = gaia_matchedOK['r_mMed'] i_pk2 = gaia_matchedOK['i_mMed'] z_pk2 = gaia_matchedOK['z_mMed'] G_pk2 = gaia_matchedOK['Gmag'] # sdss colors gr_pk2 = gaia_matchedOK['g_mMed'] - gaia_matchedOK['r_mMed'] gi_pk2 = gaia_matchedOK['g_mMed'] - gaia_matchedOK['i_mMed'] gz_pk2 = gaia_matchedOK['g_mMed'] - gaia_matchedOK['z_mMed'] ri_pk2 = gaia_matchedOK['r_mMed'] - gaia_matchedOK['i_mMed'] # Gaia colors Gg_pk2 = gaia_matchedOK['Gmag'] - gaia_matchedOK['g_mMed'] Gr_pk2 = gaia_matchedOK['Gmag'] - gaia_matchedOK['r_mMed'] Gi_pk2 = gaia_matchedOK['Gmag'] - gaia_matchedOK['i_mMed'] ra_pk2 
= gaia_matchedOK['ra_gaia'] raW_pk2 = np.where(ra_pk2 > 180, ra_pk2-360, ra_pk2) dec_pk2 = gaia_matchedOK['dec_gaia'] """ Explanation: PICK ALL THE REQUIRED QUANTITIES WITH SELECTION II** MAGS AND COLORS FROM SDSS AND GAIA POSITIONS FROM GAIA ONLY _pk2 End of explanation """ flagOK = ((raW_pk2 > -10) & (raW_pk2 < 50) & (r_pk2>16) & (r_pk2<19) & (gi_pk2>0) & (gi_pk2<3.0)) flagOK2 = (flagOK & (G_pk2 > 16) & (G_pk2 < 16.5)) gaia_matchedOK2 = gaia_matchedOK[flagOK2] print('Num obj after select by RA, rMag, gi color: %d' % len(gaia_matchedOK)) print('Num obj after Gaia G-mag cut: %d' % len(gaia_matchedOK2)) print(len(flagOK2)) """ Explanation: SELECTION III: INCLUDE GAIA GMAG WITH SELECTION II flagOK = ((raW_pk2 > -10) & (raW_pk2 < 50) & (r_pk2>16) & (r_pk2<19) & (gi_pk2>0) & (gi_pk2<3.0)) flagOK2 = (flagOK & (G_pk2 > 16) & (G_pk2 < 16.5)) gaia_matchedOK2 End of explanation """ g_pk3 = gaia_matchedOK2['g_mMed'] r_pk3 = gaia_matchedOK2['r_mMed'] i_pk3 = gaia_matchedOK2['i_mMed'] z_pk3 = gaia_matchedOK2['z_mMed'] G_pk3 = gaia_matchedOK2['Gmag'] # also get mag errors for the next selection cut # Gaia err Gflux_pk3 = gaia_matchedOK2['flux'] Gfluxerr_pk3 = gaia_matchedOK2['fluxErr'] G_err_pk3 = -2.5*np.log10(1.+ (Gfluxerr_pk3/Gflux_pk3)) # sdss errs g_err_pk3 = gaia_matchedOK2['g_mErr'] r_err_pk3 = gaia_matchedOK2['r_mErr'] i_err_pk3 = gaia_matchedOK2['i_mErr'] z_err_pk3 = gaia_matchedOK2['z_mErr'] # sdss colors gr_pk3 = gaia_matchedOK2['g_mMed'] - gaia_matchedOK2['r_mMed'] gi_pk3 = gaia_matchedOK2['g_mMed'] - gaia_matchedOK2['i_mMed'] gz_pk3 = gaia_matchedOK2['g_mMed'] - gaia_matchedOK2['z_mMed'] ri_pk3 = gaia_matchedOK2['r_mMed'] - gaia_matchedOK2['i_mMed'] # Gaia colors Gg_pk3 = gaia_matchedOK2['Gmag'] - gaia_matchedOK2['g_mMed'] Gr_pk3 = gaia_matchedOK2['Gmag'] - gaia_matchedOK2['r_mMed'] Gi_pk3 = gaia_matchedOK2['Gmag'] - gaia_matchedOK2['i_mMed'] ra_pk3 = gaia_matchedOK2['ra_gaia'] raW_pk3 = np.where(ra_pk3 > 180, ra_pk3-360, ra_pk3) dec_pk3 = gaia_matchedOK2['dec_gaia'] """ Explanation: PICK ALL THE REQUIRED QUANTITIES WITH SELECTION III MAGS AND COLORS FROM SDSS AND GAIA POSITIONS FROM GAIA ONLY NOTE Also get the mag errs for sdss griz and Gaia Gmag. 
These are needed for the next selection cut _pk3 End of explanation """ flagOK3 = ((G_err_pk3 < 0.01) & (g_err_pk3 < 0.01) & (r_err_pk3 < 0.01) & (i_err_pk3 < 0.01) & (z_err_pk3 < 0.01)) gaia_matchedOK3 = gaia_matchedOK2[flagOK3] print('Num obj after select by Gaia mags: %d' % len(gaia_matchedOK2)) print('Num obj after Gaia G, SDSS griz err cuts: %d' % len(gaia_matchedOK3)) g_pk4 = gaia_matchedOK3['g_mMed'] r_pk4 = gaia_matchedOK3['r_mMed'] i_pk4 = gaia_matchedOK3['i_mMed'] z_pk4 = gaia_matchedOK3['z_mMed'] G_pk4 = gaia_matchedOK3['Gmag'] # also get mag errors for the next selection cut # Gaia errs Gflux_pk4 = gaia_matchedOK3['flux'] Gfluxerr_pk4 = gaia_matchedOK3['fluxErr'] G_err_pk4 = -2.5*np.log10(1.+ (Gfluxerr_pk4/Gflux_pk4)) # sdss errs g_err_pk4 = gaia_matchedOK3['g_mErr'] r_err_pk4 = gaia_matchedOK3['r_mErr'] i_err_pk4 = gaia_matchedOK3['i_mErr'] z_err_pk4 = gaia_matchedOK3['z_mErr'] # SDSS clrs gr_pk4 = gaia_matchedOK3['g_mMed'] - gaia_matchedOK3['r_mMed'] gi_pk4 = gaia_matchedOK3['g_mMed'] - gaia_matchedOK3['i_mMed'] gz_pk4 = gaia_matchedOK3['g_mMed'] - gaia_matchedOK3['z_mMed'] ri_pk4 = gaia_matchedOK3['r_mMed'] - gaia_matchedOK3['i_mMed'] # Gaia-SDSS clrs Gg_pk4 = gaia_matchedOK3['Gmag'] - gaia_matchedOK3['g_mMed'] Gr_pk4 = gaia_matchedOK3['Gmag'] - gaia_matchedOK3['r_mMed'] Gi_pk4 = gaia_matchedOK3['Gmag'] - gaia_matchedOK3['i_mMed'] ra_pk4 = gaia_matchedOK3['ra_gaia'] raW_pk4 = np.where(ra_pk4 > 180, ra_pk4-360, ra_pk4) dec_pk4 = gaia_matchedOK3['dec_gaia'] """ Explanation: SELECTION IV: INCLUDE MAG ERR CUTS FOR SDSS G,R,I,Z, AND GAIA G MAG flagOK3 = ((G_err_pk3 < 0.01) & (g_err_pk3 < 0.01) & (r_err_pk3 < 0.01) & (i_err_pk3 < 0.01) & (z_err_pk3 < 0.01)) End of explanation """ %%time paths = SDSS2GAIA gaia_matchedOK2.write(paths,format='ascii.csv', delimiter=',', comment='#',overwrite=True) """ Explanation: WRITE THIS MATCHED, CLEANED SDSS-GAIA DF TO FILE End of explanation """ # GET A SUBSET OF MAG/ERR COLS ONLY cols_needed = ['ra_gaia','dec_gaia','Gmag','flux','fluxErr', 'g_mMed','g_mErr','r_mMed','r_mErr','i_mMed','i_mErr','z_mMed','z_mErr'] gaia_matchedOK2_subset = gaia_matchedOK2[cols_needed] # WRITE OUT A CSV FILE paths = SDSS2GAIA_magerr gaia_matchedOK2_subset.write(paths,format='ascii.csv', delimiter=',', comment='#',overwrite=True) """ Explanation: ALSO WRITE OUT ONLY THE SDSS-GAIA MAGS/ERRS MAKE A SUBSET OF THE DF WITH ONLY GAIA RA, DEC, SDSS GRIZ MED MAGS, ERRS, GAIA GMAG, FLUX AND ERR End of explanation """ # given vectors x and y, fit medians in bins from xMin to xMax, with Nbin steps, # and return xBin, medianBin, medianErrBin def fitMedians(x, y, xMin, xMax, Nbin, verbose=1): # first generate bins xEdge = np.linspace(xMin, xMax, (Nbin+1)) xBin = np.linspace(0, 1, Nbin) nPts = 0*np.linspace(0, 1, Nbin) medianBin = 0*np.linspace(0, 1, Nbin) sigGbin = -1+0*np.linspace(0, 1, Nbin) for i in range(0, Nbin): xBin[i] = 0.5*(xEdge[i]+xEdge[i+1]) yAux = y[(x>xEdge[i])&(x<=xEdge[i+1])] if (yAux.size > 0): nPts[i] = yAux.size medianBin[i] = np.median(yAux) # robust estimate of standard deviation: 0.741*(q75-q25) sigmaG = 0.741*(np.percentile(yAux,75)-np.percentile(yAux,25)) # uncertainty of the median: sqrt(pi/2)*st.dev/sqrt(N) sigGbin[i] = np.sqrt(np.pi/2)*sigmaG/np.sqrt(nPts[i]) else: nPts[i], medianBin[i], sigGBin[i] = 0 if (verbose): print('median:', np.median(medianBin[Npts>0])) return xBin, nPts, medianBin, sigGbin """ Explanation: FUNCTION DEFINITIONS fitMedians for fitting med and std dev in bins over a x,y distribution End of explanation """ def mag2flx(SDSSmag): 
Flux = 10**(0.4*SDSSmag) return Flux """ Explanation: mag2flux = convert SDSS mag to flux with ZP = 0 End of explanation """ # this function computes polynomial models given some data x # and parameters theta def polynomial_fit(theta, x): """Polynomial model of degree (len(theta) - 1)""" return sum(t * x ** n for (n, t) in enumerate(theta)) # compute the data log-likelihood given a model def logLv0(data, theta, model=polynomial_fit): """Gaussian log-likelihood of the model at theta""" x, y, sigma_y = data y_fit = model(theta, x) return sum(stats.norm.logpdf(*args) for args in zip(y, y_fit, sigma_y)) # a direct optimization approach is used to get best model # parameters (which minimize -logL) def best_theta(data, degree, model=polynomial_fit): theta_0 = (degree + 1) * [0] neg_logLv0 = lambda theta: -logLv0(data, theta, model) return optimize.fmin_bfgs(neg_logLv0, theta_0, disp=False) """ Explanation: best_theta = fits a polynomial for a given (x, y, sigma_y) End of explanation """ # this function computes a linear combination of 4 functions # given parameters theta def linear_fit(coeffs, x, w, y, z): ffit = coeffs[0]*x + coeffs[1]*w + coeffs[2]*y + coeffs[3]*z return ffit # compute the data log-likelihood given a model def logLv1(dataL, coeffs, model=linear_fit): """Gaussian log-likelihood of the model at theta""" x, w, y, z, f, sigma_f = dataL f_fit = model(coeffs, x, w, y, z) return sum(stats.norm.logpdf(*args) for args in zip(f, f_fit, sigma_f)) # a direct optimization approach is used to get best model # parameters (which minimize -logL) def best_lintheta(dataL, degree=4, model=linear_fit): coeffs_0 = degree * [0] neg_logLv1 = lambda coeffs: -logLv1(dataL, coeffs, model) return optimize.fmin_bfgs(neg_logLv1, coeffs_0, disp=False) """ Explanation: best_lintheta = fits a linear function of several vars, (x, w, y, z, f, sigma_f) End of explanation """ # FUNCTION TO PLOT THE CCD, FIT THE POLY, AND OVERPLOT THE FITTED VALUES def CCDpltNfit(clr1a,clr1b,clr1c,clr1d,clr2a,clr2b,clr2c,clr2d,labls, xlim,ylim,ax_labl,plt_titl,fitclr1,fitclr2,Binz,degr=3): # clr1a,clr1b,clr1c,clr1d = the four cuts on clr on x-axis # clr2a,clr2b,clr2c,clr2d = the four cuts on clr on y-axis # labls = labels for the datasets given as a 4-element list # xlim, ylim = plot limits on x and y as 2-element lists # ax_labl,plt_titl = axes labels, and plot title # fitclr1,fitclr2 = cut on clr1, clr2 with which to comp medians and fit poly # Binz = minx, maxx and nbins for fitting range # degr = degree of fit poly, default = 3 # plot the base CCD fig,ax = plt.subplots(1,1,figsize=(8,6)) ax.scatter(clr1a,clr2a, s=0.01, c='orange',label=labls[0]) ax.scatter(clr1b,clr2b, s=0.01, c='green',label=labls[1]) ax.scatter(clr1c,clr2c, s=0.01, c='blue',label=labls[2]) ax.scatter(clr1d,clr2d, s=0.01, c='red',label=labls[3]) ax.set_xlim(xlim) ax.set_ylim(ylim) ax.set_xlabel(ax_labl[0]) ax.set_ylabel(ax_labl[1]) ax.set_title(plt_titl,fontdict={'fontsize': 14, 'fontweight': 'bold'}) ax.legend(loc='best') ax.grid(True) plt.show() # now fit the medians to the specified data set minx,maxx,nbin = Binz[0],Binz[1],Binz[2] xBin, nPts, medBin, sigBin = fitMedians(fitclr1,fitclr2,minx,maxx,nbin,0) # now fit the poly of spec degr data = np.array([xBin,medBin,sigBin]) theta = best_theta(data,degr) return xBin, nPts, medBin, sigBin,theta clr1a,clr1b,clr1c,clr1d = gr_pk1,gr_pk2,gr_pk3,gr_pk4 clr2a,clr2b,clr2c,clr2d = Gg_pk1,Gg_pk2,Gg_pk3,Gg_pk4 labls = ['pk1','pk2','pk3','pk4'] xlim = (-0.6,1.8) ylim = (-3.0,0.5) ax_labl = ['(g-r)','(G-g)'] plt_titl = 
'CCD SDSS gr VS Gaia Gg' fitclr1,fitclr2 = gr_pk4,Gg_pk4 Binz = [0.2,1.2,20] xBin, nPts, medBin, sigBin,theta = CCDpltNfit(clr1a,clr1b,clr1c,clr1d,clr2a,clr2b,clr2c,clr2d,labls, xlim,ylim,ax_labl,plt_titl,fitclr1,fitclr2,Binz,degr=3) print(theta) %%time clr1a,clr1b,clr1c,clr1d = gi_pk1,gi_pk2,gi_pk3,gi_pk4 clr2a,clr2b,clr2c,clr2d = Gg_pk1,Gg_pk2,Gg_pk3,Gg_pk4 labls = ['pk1','pk2','pk3','pk4'] xlim = (-1.,3.5) ylim = (-3.2,0.4) ax_labl = ['(g-i)','(G-g)'] plt_titl = 'CCD SDSS gi VS Gaia Gg' fitclr1,fitclr2 = gi_pk4,Gg_pk4 Binz = [0.5,2.5,30] xBin, nPts, medBin, sigBin,theta = CCDpltNfit(clr1a,clr1b,clr1c,clr1d,clr2a,clr2b,clr2c,clr2d,labls, xlim,ylim,ax_labl,plt_titl,fitclr1,fitclr2,Binz,degr=3) print(theta) clr1a,clr1b,clr1c,clr1d = gz_pk1,gz_pk2,gz_pk3,gz_pk4 clr2a,clr2b,clr2c,clr2d = Gg_pk1,Gg_pk2,Gg_pk3,Gg_pk4 labls = ['pk1','pk2','pk3','pk4'] xlim = (-1.5,5) ylim = (-3.2,0.4) ax_labl = ['(g-z)','(G-g)'] plt_titl = 'CCD SDSS gz VS Gaia Gg' fitclr1,fitclr2 = gz_pk4,Gg_pk4 Binz = [0.5,3,35] xBin, nPts, medBin, sigBin,theta = CCDpltNfit(clr1a,clr1b,clr1c,clr1d,clr2a,clr2b,clr2c,clr2d,labls, xlim,ylim,ax_labl,plt_titl,fitclr1,fitclr2,Binz,degr=3) print(theta) """ Explanation: PLOTS ONLY CCD End of explanation """ %%time # plot fig,ax = plt.subplots(1,1,figsize=(8,6)) # ax.scatter(gr_all, ri_all, s=0.01, c='green') ax.scatter(gr_pk1, ri_pk1, s=0.01, c='orange',label='pk1') ax.scatter(gr_pk2, ri_pk2, s=0.01, c='green',label='pk2') ax.scatter(gr_pk3, ri_pk3, s=0.01, c='blue',label='pk3') ax.scatter(gr_pk4, ri_pk4, s=0.01, c='red',label='pk4') ax.set_xlim(-0.7,2.2) ax.set_ylim(-0.7,2.4) ax.set_xlabel('SDSS(g-r)') ax.set_ylabel('SDSS(r-i)') ax.set_title('CCD: gr vs ri all-pk1-pk2',fontdict={'fontsize': 14, 'fontweight': 'bold'}) ax.legend(loc='best') ax.grid(True) """ Explanation: CCD gr vs ri End of explanation """ %%time # Gaia quantities fluxGaia = Gflux_pk4 fluxGaiaErr = Gfluxerr_pk4 magGaiaErr = G_err_pk4 # sdss quantities gFlux = 10**(0.4*g_pk4) rFlux = 10**(0.4*r_pk4) iFlux = 10**(0.4*i_pk4) zFlux = 10**(0.4*z_pk4) dataL = np.array([gFlux, rFlux, iFlux, zFlux, fluxGaia, fluxGaiaErr]) x, w, y, z, f, sigma_f = dataL coeffs1 = best_lintheta(dataL) ffit = linear_fit(coeffs1, x, w, y, z) dmag = -2.5*np.log10(ffit/f) ## PRINT OUT THE FIT RESULTS Gmag_resd = dmag Gmag_resd_exp = magGaiaErr print('Med Gmag resd: ',np.median(Gmag_resd)) print('Med exp Gmag resd: ',np.median(Gmag_resd_exp)) """ Explanation: LINEAR FIT OF SDSS GRIZ TO GAIA G End of explanation """ # Convert flux to mag Gmag_fit = GAIA_ZP - 2.5*np.log10(ffit) Gmag_msr = GAIA_ZP - 2.5*np.log10(Gflux_pk4) Gmag_resd = Gmag_fit - Gmag_msr # Plot train and val losses %matplotlib inline # plot fig,ax = plt.subplots(1,2,figsize=(10,5)) # Plt 1: mag vs mag xmin,xmax = 16,16.5 ax[0].plot(Gmag_msr,Gmag_fit,'mo',markersize=2) ax[0].plot([xmin,xmax],[xmin,xmax],'k-') ax[0].set_xlabel('Measured Gmag') ax[0].set_ylabel('Fit Gmag') ax[0].set_xlim(xmin,xmax) ax[0].set_ylim(xmin,xmax) # Plt 2: mag vs resd ax[1].plot(Gmag_msr,Gmag_resd,'bo') ax[1].plot([xmin,xmax],[0,0],'k-') ax[1].set_xlabel('Measured Gmag') ax[1].set_ylabel('Gmag Resd (= Pred - Meas)') ax[1].set_xlim(xmin,xmax) ax[1].set_ylim(-0.15,0.15) """ Explanation: PLOT FiT VS MEASURED GMAG AND RESIDUAL End of explanation """ # plot fig,ax = plt.subplots(1,1,figsize=(8,6)) ax.scatter(gi_all, Gr_all, s=0.01, c='green') ax.scatter(gi_pk1, Gr_pk1, s=0.01, c='blue') ax.scatter(gi_pk2, Gr_pk2, s=0.01, c='red') ax.set_xlim(-0.5,3.5) ax.set_ylim(-1.5,1.) 
ax.set_xlabel('SDSS(g-i)') ax.set_ylabel('Gaia G - SDSS r') ax.set_title('CCD: gi vs Gr all-pk1-pk2',fontdict={'fontsize': 14, 'fontweight': 'bold'}) """ Explanation: STOPPED HERE CCD gi vs Gr End of explanation """ %%time # medians #xBin, nPts, medianBin, sigGbin = fitMedians(gi, Gr, -0.7, 4.0, 47, 0) #xBinOK, nPtsOK, medianBinOK, sigGbinOK = fitMedians(giOK, GrOK, -0.2, 3.2, 34, 0) xBin, nPts, medianBin, sigGbin = fitMedians(gi_all, Gr_all, 0.0, 3.0, 30, 0) xBinOK1, nPtsOK1, medianBinOK1, sigGbinOK1 = fitMedians(gi_pk1, Gr_pk1, 0.0, 3.0, 30, 0) xBinOK2, nPtsOK2, medianBinOK2, sigGbinOK2 = fitMedians(gi_pk2, Gr_pk2, 0.0, 3.0, 30, 0) xBinOK3, nPtsOK3, medianBinOK3, sigGbinOK3 = fitMedians(gi_pk3, Gr_pk3, 0.0, 3.0, 30, 0) #print xBin, nPts, medianBin, sigGbin medOK1 = medianBinOK1[(xBinOK1>2)&(xBinOK1<3)] medOK2 = medianBinOK2[(xBinOK2>2)&(xBinOK2<3)] medOK3 = medianBinOK3[(xBinOK3>2)&(xBinOK3<3)] dmedOK21 = medOK2 - medOK1 dmedOK32 = medOK3 - medOK2 print('Med offset pk2 - pk1: ',dmedOK21) print('Med offset pk3 - pk2: ',dmedOK32) """ Explanation: FIT MEDIANS IN BINS TO gi on the CCD End of explanation """ %%time data = np.array([xBinOK1, medianBinOK1, sigGbinOK1]) #x, y, sigma_y = data theta1 = best_theta(data,deg1) print('Fit values: ',theta1) """ Explanation: FIT POLYNOMIAL TO THE gi MEDIANS TRY ORDER = 1 End of explanation """ %%time data = np.array([xBinOK2, medianBinOK2, sigGbinOK2]) # x, y, sigma_y = data Ndata = xBinOK2.size # get best-fit parameters for linear, quadratic and cubic models theta1 = best_theta(data,deg3) theta2 = best_theta(data,deg5) theta3 = best_theta(data,deg7) """ Explanation: USE POLYNOMIAL FITS OF DIFFERENT ORDERS USE _PK2 ORDERS TESTED: 3,5,7 End of explanation """ # generate best fit lines on a fine grid xfit = np.linspace(-1.1, 4.3, 1000) yfit1 = polynomial_fit(theta1, xfit) yfit2 = polynomial_fit(theta2, xfit) yfit3 = polynomial_fit(theta3, xfit) # and compute chi2 per degree of freedom: sum{[(y-yfit)/sigma_y]^2} x, y, sigma_y = xBinOK2, medianBinOK2, sigGbinOK2 chi21 = np.sum(((y-polynomial_fit(theta1, x))/sigma_y)**2) chi22 = np.sum(((y-polynomial_fit(theta2, x))/sigma_y)**2) chi23 = np.sum(((y-polynomial_fit(theta3, x))/sigma_y)**2) # the number of fitted parameters is 2, 3, 4 chi2dof1 = chi21/(Ndata - deg3) chi2dof2 = chi22/(Ndata - deg5) chi2dof3 = chi23/(Ndata - deg7) print("CHI2:") print(' Model deg, chi2 :', deg3,chi21) print('Model deg, chi2 ::', deg5, chi22) print(' Model deg, chi2 ::', deg7, chi23) print("CHI2 per degree of freedom:") print(' best model 1:', chi2dof1) print('best model 2:', chi2dof2) print(' best model 3:', chi2dof3) # Plot a (gaia - r) vs (g-i) for photometric transformation %matplotlib inline # plot fig,ax = plt.subplots(1,1,figsize=(12,8)) ax.scatter(gi_pk1, Gr_pk1, s=0.01, c='green') ax.scatter(gi_pk2, Gr_pk2, s=0.01, c='blue') ax.scatter(gi_pk3, Gr_pk3, s=0.01, c='red') # medians ax.scatter(xBinOK1, medianBinOK1, s=30.0, c='black', alpha=0.5) ax.scatter(xBinOK2, medianBinOK2, s=30.0, c='yellow', alpha=0.5) ax.scatter(xBinOK2, medianBinOK2, s=30.0, c='green', alpha=0.5) ax.set_xlim(-1,4) ax.set_ylim(-2.0,0.5) ax.set_xlabel('SDSS(g-i)') ax.set_ylabel('Gaia G - SDSS r') ax.errorbar(x, y, sigma_y, fmt='ok', ecolor='gray') ax.plot(xfit, polynomial_fit(theta1, xfit), label='best P3 model') ax.plot(xfit, polynomial_fit(theta2, xfit), label='best P5 model') ax.plot(xfit, polynomial_fit(theta3, xfit), label='best P7 model') ax.legend(loc='best', fontsize=14) print('Coeffts 3rd order fit: ',theta3) # GrModel = -0.06348 -0.03111*gi 
+0.08643*gi*gi -0.05593*gi*gi*gi GrModel = sum(t * gi_pk2 ** n for (n, t) in enumerate(theta3)) GrResid = Gr_pk2 - GrModel minx,maxx,nx = min(xBinOK2),max(xBinOK2),10*len(xBinOK2) xBinM, nPtsM, medianBinM, sigGbinM = fitMedians(gi_pk2, GrResid,minx,maxx,nx, 0) fig,ax = plt.subplots(1,1,figsize=(12,8)) ax.scatter(gi_pk2, GrResid, s=0.01, c='blue') # medians ax.scatter(xBinM, medianBinM, s=30.0, c='black', alpha=0.8) ax.scatter(xBinM, medianBinM, s=20.0, c='yellow', alpha=0.3) TwoSigP = medianBinM + 2*sigGbinM TwoSigM = medianBinM - 2*sigGbinM ax.plot(xBinM, TwoSigP, c='yellow') ax.plot(xBinM, TwoSigM, c='yellow') rmsBin = np.sqrt(nPtsM) / np.sqrt(np.pi/2) * sigGbinM rmsP = medianBinM + rmsBin rmsM = medianBinM - rmsBin ax.plot(xBinM, rmsP, c='cyan') ax.plot(xBinM, rmsM, c='cyan') ax.set_xlim(-1,4.2) ax.set_ylim(-0.14,0.14) ax.set_xlabel('g-i') ax.set_ylabel('residuals for (Gaia G - SDSS r)') xL = np.linspace(-10,10) ax.plot(xL, 0*xL+0.00, c='red') ax.plot(xL, 0*xL+0.01, c='red') ax.plot(xL, 0*xL-0.01, c='red') ax.plot(0*xL+0.4, xL, c='yellow') ax.plot(0*xL+2.0, xL, c='yellow') medOK = medianBinM[(xBinM>0.4)&(xBinM<2.0)] print('Median mag diff: ',medOK) np.median(medOK) np.std(medOK) np.max(medOK) np.min(medOK) residOK = GrResid[(gi_pk2>0.4)&(gi_pk2<2.0)&(raW_pk2>-10)&(raW_pk2<50)] magOK = r_pk2[(gi_pk2>0.4)&(gi_pk2<2.0)&(raW_pk2>-10)&(raW_pk2<50)] gaiaG = gaia_matched['Gmag'] GOK = gaiaG[(gi>0.4)&(gi<2.0)&(raW>-10)&(raW<50)] print('Med of resid: ',np.median(residOK)) print('Std of resid: ',np.std(residOK)) xBinMg, nPtsMg, medianBinMg, sigGbinMg = fitMedians(magOK, residOK, 14, 20.5, 65, 0) fig,ax = plt.subplots(1,1,figsize=(8,6)) ax.scatter(magOK, residOK, s=0.01, c='blue') # medians ax.scatter(xBinMg, medianBinMg, s=30.0, c='black', alpha=0.8) ax.scatter(xBinMg, medianBinMg, s=20.0, c='yellow', alpha=0.3) ax.set_xlim(13,23) ax.set_ylim(-0.15,0.15) ax.set_xlabel('SDSS r') ax.set_ylabel('residuals for $G-G_{SDSS}$ ') xL = np.linspace(-10,30) ax.plot(xL, 0*xL+0.00, c='yellow') ax.plot(xL, 0*xL+0.01, c='red') ax.plot(xL, 0*xL-0.01, c='red') print medianBinMg fig,ax = plt.subplots(1,1,figsize=(8,6)) ax.scatter(magOK, residOK, s=0.01, c='blue') # medians ax.scatter(xBinMg, medianBinMg, s=30.0, c='black', alpha=0.8) ax.scatter(xBinMg, medianBinMg, s=20.0, c='yellow', alpha=0.3) ax.set_xlim(13,23) ax.set_ylim(-0.05,0.05) ax.set_xlabel('SDSS r') ax.set_ylabel('residuals for $G-G_{SDSS}$ ') xL = np.linspace(-10,30) ax.plot(xL, 0*xL+0.00, c='yellow') ax.plot(xL, 0*xL+0.01, c='red') ax.plot(xL, 0*xL-0.01, c='red') print(magOK.size) print(GOK.size) xBinMg, nPtsMg, medianBinMg, sigGbinMg = fitMedians(GOK, residOK, 14, 20.5, 130, 0) fig,ax = plt.subplots(1,1,figsize=(8,6)) ax.scatter(GOK, residOK, s=0.01, c='blue') # medians ax.scatter(xBinMg, medianBinMg, s=30.0, c='black', alpha=0.8) ax.scatter(xBinMg, medianBinMg, s=20.0, c='yellow', alpha=0.3) TwoSigP = medianBinMg + 2*sigGbinMg TwoSigM = medianBinMg - 2*sigGbinMg ax.plot(xBinMg, TwoSigP, c='yellow') ax.plot(xBinMg, TwoSigM, c='yellow') rmsBin = np.sqrt(nPtsMg) / np.sqrt(np.pi/2) * sigGbinMg rmsP = medianBinMg + rmsBin rmsM = medianBinMg - rmsBin ax.plot(xBinMg, rmsP, c='cyan') ax.plot(xBinMg, rmsM, c='cyan') ax.set_xlim(13,23) ax.set_ylim(-0.05,0.05) ax.set_xlabel('Gaia G mag') ax.set_ylabel('residuals for $G_{Gaia}-G_{SDSS}$ ') xL = np.linspace(-10,30) ax.plot(xL, 0*xL+0.00, c='cyan') ax.plot(xL, 0*xL+0.01, c='red') ax.plot(xL, 0*xL-0.01, c='red') gi = gaia_matched['g_mMed'] - gaia_matched['i_mMed'] ra = gaia_matched['ra_gaia'] raW = np.where(ra > 
180, ra-360, ra) flux = gaia_matched['flux'] fluxErr = gaia_matched['fluxErr'] fluxOK = flux[(gi>0.4)&(gi<2.0)&(raW>-10)&(raW<50)] fluxErrOK = fluxErr[(gi>0.4)&(gi<2.0)&(raW>-10)&(raW<50)] rBandErr = gaia_matched['r_mErr'] rBandErrOK = rBandErr[(gi>0.4)&(gi<2.0)&(raW>-10)&(raW<50)] ## Gaia's errors underestimated by a factor of ~2 sigma = np.sqrt((2*fluxErrOK/fluxOK)**2 + 1*rBandErrOK**2) chi = residOK / sigma xBinMg, nPtsMg, medianBinMg, sigGbinMg = fitMedians(GOK, chi, 14, 20.5, 130, 0) fig,ax = plt.subplots(1,1,figsize=(8,6)) ax.scatter(GOK, chi, s=0.01, c='blue') # medians ax.scatter(xBinMg, medianBinMg, s=30.0, c='black', alpha=0.8) ax.scatter(xBinMg, medianBinMg, s=20.0, c='yellow', alpha=0.3) TwoSigP = medianBinMg + 2*sigGbinMg TwoSigM = medianBinMg - 2*sigGbinMg ax.plot(xBinMg, TwoSigP, c='yellow') ax.plot(xBinMg, TwoSigM, c='yellow') rmsBin = np.sqrt(nPtsMg) / np.sqrt(np.pi/2) * sigGbinMg rmsP = medianBinMg + rmsBin rmsM = medianBinMg - rmsBin ax.plot(xBinMg, rmsP, c='cyan') ax.plot(xBinMg, rmsM, c='cyan') ax.set_xlim(13,23) ax.set_ylim(-5,5) ax.set_xlabel('Gaia G mag') ax.set_ylabel('($G_{Gaia}-G_{SDSS}$)/$\sigma$') xL = np.linspace(-10,30) ax.plot(xL, 0*xL+0.00, c='cyan') ax.plot(xL, 0*xL+2, c='red') ax.plot(xL, 0*xL-2, c='red') Gerr = fluxErrOK/fluxOK print(np.median(Gerr)) print(np.median(rBandErrOK)) residOK2 = residOK[(magOK>15)&(magOK<16)] print np.median(residOK2) mm = medianBinMg[(xBinMg>15)&(xBinMg<16)] xx = xBinMg[(xBinMg>15)&(xBinMg<16)] print(mm) print(xx) print("transition at G ~ 15.6") ## conclusions # 1) select:(-10 < RA < 50) & (16 < SDSSr < 19) & (0.4< g-i < 2.0) thetaFinal = theta3 print(thetaFinal) rMed = gaia_matched['r_mMed'] gi = gaia_matched['g_mMed'] - gaia_matched['i_mMed'] ra = gaia_matched['ra_gaia'] raW = np.where(ra > 180, ra-360, ra) flagOK = ((raW > -10) & (raW < 50) & (rMed>16) & (rMed<18) & (gi>0) & (gi<3.0)) # flagOK = ((raW > -10) & (raW < 50) & (rMed>16) & (rMed<19) & (gi>0) & (gi<3.0)) gaia_matchedOK = gaia_matched[flagOK] print(len(gaia_matchedOK)) giOK = gaia_matchedOK['g_mMed'] - gaia_matchedOK['i_mMed'] GrOK = gaia_matchedOK['Gmag'] - gaia_matchedOK['r_mMed'] GmagOK = gaia_matchedOK['Gmag'] GrModel = sum(t * giOK ** n for (n, t) in enumerate(theta3)) GrResid = GrOK - GrModel print(np.median(GrResid)) print(np.std(GrResid)) print(np.min(GrResid)) print(np.max(GrResid)) #xBinMg, nPtsMg, medianBinMg, sigGbinMg = fitMedians(GmagOK, GrResid, 16, 18.8, 14, 0) xBinMg, nPtsMg, medianBinMg, sigGbinMg = fitMedians(GmagOK, GrResid, 16, 17.8, 14, 0) fig,ax = plt.subplots(1,1,figsize=(8,6)) ax.scatter(GmagOK, GrResid, s=0.01, c='blue') # medians ax.scatter(xBinMg, medianBinMg, s=30.0, c='black', alpha=0.9) ax.scatter(xBinMg, medianBinMg, s=15.0, c='yellow', alpha=0.5) ax.set_xlim(15.5,19) ax.set_ylim(-0.07,0.07) ax.set_xlabel('Gaia G mag') ax.set_ylabel('Gr residuals') xL = np.linspace(-10,30) ax.plot(xL, 0*xL+0.00, c='yellow') ax.plot(xL, 0*xL+0.01, c='red') ax.plot(xL, 0*xL-0.01, c='red') print(np.median(medianBinMg)) print(np.std(medianBinMg)) GrResidN = GrResid - np.median(medianBinMg) ra = gaia_matchedOK['ra_sdss'] raW = np.where(ra > 180, ra-360, ra) fig,ax = plt.subplots(1,1,figsize=(8,6)) ax.scatter(raW, GrResidN, s=0.01, c='blue') ax.set_xlim(-12,52) ax.set_ylim(-0.06,0.06) ax.set_xlabel('R.A.') ax.set_ylabel('Gmag residual') xBin, nPts, medianBin, sigGbin = fitMedians(raW, GrResidN, -10, 50, 60, 0) ax.scatter(xBin, medianBin, s=30.0, c='black', alpha=0.9) ax.scatter(xBin, medianBin, s=15.0, c='yellow', alpha=0.5) TwoSigP = medianBin + 
2*sigGbin TwoSigM = medianBin - 2*sigGbin ax.plot(xBin, TwoSigP, c='yellow') ax.plot(xBin, TwoSigM, c='yellow') xL = np.linspace(-100,100) ax.plot(xL, 0*xL+0.00, c='yellow') ax.plot(xL, 0*xL+0.01, c='red') ax.plot(xL, 0*xL-0.01, c='red') print(np.median(medianBin)) print(np.std(medianBin)) print(np.min(medianBin)) print(np.max(medianBin)) dec = gaia_matchedOK['dec_sdss'] fig,ax = plt.subplots(1,1,figsize=(8,6)) ax.scatter(dec, GrResidN, s=0.01, c='blue') ax.set_xlim(-1.3,1.3) ax.set_ylim(-0.06,0.06) ax.set_xlabel('Declination') ax.set_ylabel('Gmag residual') xBin, nPts, medianBin, sigGbin = fitMedians(dec, GrResidN, -1.2, 1.2, 120, 0) ax.scatter(xBin, medianBin, s=30.0, c='black', alpha=0.9) ax.scatter(xBin, medianBin, s=15.0, c='yellow', alpha=0.5) TwoSigP = medianBin + 2*sigGbin TwoSigM = medianBin - 2*sigGbin ax.plot(xBin, TwoSigP, c='yellow') ax.plot(xBin, TwoSigM, c='yellow') xL = np.linspace(-100,100) ax.plot(xL, 0*xL+0.00, c='yellow') ax.plot(xL, 0*xL+0.01, c='red') ax.plot(xL, 0*xL-0.01, c='red') for i in range(1,12): decCol = -1.2655 + i*0.2109 ax.plot(0*xL+decCol, xL, c='red') print(np.median(medianBin)) print(np.std(medianBin)) print(np.min(medianBin)) print(np.max(medianBin)) """ Explanation: USE COEFFTS TO GENERATE FITTED VALUES COMPARE WITH Y-VALUES, ESTIMATE CHI2 End of explanation """
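# The transformation fits above rely on helper functions defined earlier in this
# notebook (fitMedians, best_theta, polynomial_fit).  The block below is a minimal,
# self-contained sketch of the same idea -- median-binning G-r against g-i and
# fitting a weighted low-order polynomial -- using synthetic data and plain NumPy,
# so each step can be checked in isolation.  The coefficients, noise level and bin
# ranges here are illustrative assumptions, not values taken from the data above.
import numpy as np

rs = np.random.RandomState(42)

# synthetic colour-colour locus: a cubic relation plus Gaussian photometric noise
gi_syn = rs.uniform(0.0, 3.0, 20000)
true_coeffs = [-0.06, -0.03, 0.09, -0.06]            # lowest power first, made up
Gr_syn = sum(c * gi_syn**k for k, c in enumerate(true_coeffs))
Gr_syn = Gr_syn + rs.normal(0.0, 0.03, gi_syn.size)

# median-binning, analogous to fitMedians(): medians are robust against outliers
edges = np.linspace(0.0, 3.0, 31)
centers = 0.5 * (edges[:-1] + edges[1:])
med, sig_med = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    vals = Gr_syn[(gi_syn >= lo) & (gi_syn < hi)]
    med.append(np.median(vals))
    mad = np.median(np.abs(vals - med[-1]))
    sig_med.append(1.4826 * mad / np.sqrt(vals.size))  # rough error of each median
med, sig_med = np.array(med), np.array(sig_med)

# weighted cubic fit to the binned medians; np.polyfit plays the same role as
# best_theta above, but returns its coefficients highest power first
coeffs_fit = np.polyfit(centers, med, deg=3, w=1.0 / sig_med)
resid = med - np.polyval(coeffs_fit, centers)

print('recovered coefficients (highest power first):', coeffs_fit)
print('rms of the binned residuals [mag]:', np.std(resid))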
dsacademybr/PythonFundamentos
Cap08/Notebooks/DSA-Python-Cap08-01-NumPy.ipynb
gpl-3.0
# Versรฃo da Linguagem Python from platform import python_version print('Versรฃo da Linguagem Python Usada Neste Jupyter Notebook:', python_version()) """ Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capรญtulo 8</font> Download: http://github.com/dsacademybr End of explanation """ # Importando o NumPy import numpy as np np.__version__ """ Explanation: NumPy Para importar numpy, utilize: import numpy as np Vocรช tambรฉm pode utilizar: from numpy import * . Isso evitarรก a utilizaรงรฃo de np., mas este comando importarรก todos os mรณdulos do NumPy. Para atualizar o NumPy, abra o prompt de comando e digite: pip install numpy -U End of explanation """ # Help help(np.array) # Array criado a partir de uma lista: vetor1 = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8]) print(vetor1) # Um objeto do tipo ndarray รฉ um recipiente multidimensional de itens do mesmo tipo e tamanho. type(vetor1) # Usando mรฉtodos do array NumPy vetor1.cumsum() # Criando uma lista. Perceba como listas e arrays sรฃo objetos diferentes, com diferentes propriedades lst = [0, 1, 2, 3, 4, 5, 6, 7, 8] lst type(lst) # Imprimindo na tela um elemento especรญfico no array vetor1[0] # Alterando um elemento do array vetor1[0] = 100 print(vetor1) # Nรฃo รฉ possรญvel incluir elemento de outro tipo vetor1[0] = 'Novo elemento' # Verificando o formato do array print(vetor1.shape) """ Explanation: Criando Arrays End of explanation """ # A funรงรฃo arange cria um vetor contendo uma progressรฃo aritmรฉtica a partir de um intervalo - start, stop, step vetor2 = np.arange(0., 4.5, .5) print(vetor2) # Verificando o tipo do objeto type(vetor2) # Formato do array np.shape(vetor2) print (vetor2.dtype) x = np.arange(1, 10, 0.25) print(x) print(np.zeros(10)) # Retorna 1 nas posiรงรตes em diagonal e 0 no restante z = np.eye(3) z # Os valores passados como parรขmetro, formam uma diagonal d = np.diag(np.array([1, 2, 3, 4])) d # Array de nรบmeros complexos c = np.array([1+2j, 3+4j, 5+6*1j]) c # Array de valores booleanos b = np.array([True, False, False, True]) b # Array de strings s = np.array(['Python', 'R', 'Julia']) s # O mรฉtodo linspace (linearly spaced vector) retorna um nรบmero de # valores igualmente distribuรญdos no intervalo especificado np.linspace(0, 10) print(np.linspace(0, 10, 15)) print(np.logspace(0, 5, 10)) """ Explanation: Funรงรตes NumPy End of explanation """ # Criando uma matriz matriz = np.array([[1,2,3],[4,5,6]]) print(matriz) print(matriz.shape) # Criando uma matriz 2x3 apenas com nรบmeros "1" matriz1 = np.ones((2,3)) print(matriz1) # Criando uma matriz a partir de uma lista de listas lista = [[13,81,22], [0, 34, 59], [21, 48, 94]] # A funรงรฃo matrix cria uma matria a partir de uma sequรชncia matriz2 = np.matrix(lista) matriz2 type(matriz2) # Formato da matriz np.shape(matriz2) matriz2.size print(matriz2.dtype) matriz2.itemsize matriz2.nbytes print(matriz2[2,1]) # Alterando um elemento da matriz matriz2[1,0] = 100 matriz2 x = np.array([1, 2]) # NumPy decide o tipo dos dados y = np.array([1.0, 2.0]) # NumPy decide o tipo dos dados z = np.array([1, 2], dtype=np.float64) # Forรงamos um tipo de dado em particular print (x.dtype, y.dtype, z.dtype) matriz3 = np.array([[24, 76], [35, 89]], dtype=float) matriz3 matriz3.itemsize matriz3.nbytes matriz3.ndim matriz3[1,1] matriz3[1,1] = 100 matriz3 """ Explanation: Criando Matrizes End of explanation """ print(np.random.rand(10)) import matplotlib.pyplot as plt %matplotlib inline import matplotlib as mat mat.__version__ print(np.random.rand(10)) 
plt.show((plt.hist(np.random.rand(1000)))) print(np.random.randn(5,5)) plt.show(plt.hist(np.random.randn(1000))) imagem = np.random.rand(30, 30) plt.imshow(imagem, cmap = plt.cm.hot) plt.colorbar() """ Explanation: Usando o Mรฉtodo random() do NumPy End of explanation """ import os filename = os.path.join('iris.csv') # No Windows use !more iris.csv. Mac ou Linux use !head iris.csv !head iris.csv #!more iris.csv # Carregando um dataset para dentro de um array arquivo = np.loadtxt(filename, delimiter=',', usecols=(0,1,2,3), skiprows=1) print (arquivo) type(arquivo) # Gerando um plot a partir de um arquivo usando o NumPy var1, var2 = np.loadtxt(filename, delimiter=',', usecols=(0,1), skiprows=1, unpack=True) plt.show(plt.plot(var1, var2, 'o', markersize=8, alpha=0.75)) """ Explanation: Operaรงรตes com datasets End of explanation """ # Criando um array A = np.array([15, 23, 63, 94, 75]) # Em estatรญstica a mรฉdia รฉ o valor que aponta para onde mais se concentram os dados de uma distribuiรงรฃo. np.mean(A) # O desvio padrรฃo mostra o quanto de variaรงรฃo ou "dispersรฃo" existe em # relaรงรฃo ร  mรฉdia (ou valor esperado). # Um baixo desvio padrรฃo indica que os dados tendem a estar prรณximos da mรฉdia. # Um desvio padrรฃo alto indica que os dados estรฃo espalhados por uma gama de valores. np.std(A) # Variรขncia de uma variรกvel aleatรณria รฉ uma medida da sua dispersรฃo # estatรญstica, indicando "o quรฃo longe" em geral os seus valores se # encontram do valor esperado np.var(A) d = np.arange(1, 10) d np.sum(d) # Retorna o produto dos elementos np.prod(d) # Soma acumulada dos elementos np.cumsum(d) a = np.random.randn(400,2) m = a.mean(0) print (m, m.shape) plt.plot(a[:,0], a[:,1], 'o', markersize=5, alpha=0.50) plt.plot(m[0], m[1], 'ro', markersize=10) plt.show() """ Explanation: Estatรญstica End of explanation """ # Slicing a = np.diag(np.arange(3)) a a[1, 1] a[1] b = np.arange(10) b # [start:end:step] b[2:9:3] # Comparaรงรฃo a = np.array([1, 2, 3, 4]) b = np.array([4, 2, 2, 4]) a == b np.array_equal(a, b) a.min() a.max() # Somando um elemento ao array np.array([1, 2, 3]) + 1.5 # Usando o mรฉtodo around a = np.array([1.2, 1.5, 1.6, 2.5, 3.5, 4.5]) b = np.around(a) b # Criando um array B = np.array([1, 2, 3, 4]) B # Copiando um array C = B.flatten() C # Criando um array v = np.array([1, 2, 3]) # Adcionando uma dimensรฃo ao array v[:, np.newaxis], v[:,np.newaxis].shape, v[np.newaxis,:].shape # Repetindo os elementos de um array np.repeat(v, 3) # Repetindo os elementos de um array np.tile(v, 3) # Criando um array w = np.array([5, 6]) # Concatenando np.concatenate((v, w), axis=0) # Copiando arrays r = np.copy(v) r """ Explanation: Outras Operaรงรตes com Arrays End of explanation """
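# A minimal supplementary sketch of two idioms that complement the array operations
# shown above -- broadcasting and boolean masking; the values are arbitrary and only
# illustrate the mechanics.
import numpy as np

a = np.arange(12).reshape(3, 4)      # 3x4 matrix with the values 0..11
col_means = a.mean(axis=0)           # shape (4,)

# broadcasting: the (4,) vector is stretched across the 3 rows automatically
centered = a - col_means
print(centered.mean(axis=0))         # approximately 0 for every column

# boolean masking: select elements by a condition instead of writing explicit loops
mask = a % 2 == 0
print(a[mask])                       # all even entries as a 1-D array
print(np.where(mask, a, -1))         # keep the even entries, replace odd ones by -1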
amogh3892/Context-based-sentence-classification-using-word2vec
main_sentence_classification.ipynb
apache-2.0
# Importing all the required modules and the helper functions import numpy as np import urllib.request from bs4 import BeautifulSoup from nltk import sent_tokenize from nltk import word_tokenize import re from gensim.models import Word2Vec import pickle # the following two modules are helper functions to generate features from the sentences def get_feature(text,feature_dimension,wordset,model, label = None): features = None for sample in text: paragraph = sample.lower() sentences = sent_tokenize(paragraph) for sentence in sentences: feature_vector = np.zeros(feature_dimension) words = word_tokenize(sentence) count = 0 for word in words: if word in wordset and word.isalnum(): count = count + 1 feature_vector = feature_vector + model[word] if count != 0: feature_vector = feature_vector / float(count) if label is not None: feature_vector = np.append(feature_vector, label) feature_vector = feature_vector[np.newaxis] if features is None: features = feature_vector else: features = np.concatenate((features, feature_vector)) return features def generate_features(feature_dimension,wordset,model): with open("patient.txt") as patfile: patient = patfile.readlines() patfile.close() with open("doctor.txt") as docfile: doctor = docfile.readlines() docfile.close() patient_features = get_feature(patient,feature_dimension,wordset,model,label=0) doctor_features = get_feature(doctor,feature_dimension,wordset,model,label=1) features = np.concatenate((patient_features,doctor_features)) return features def predict(clf, text,feature_dimension, wordset, model): paragraph = text.lower() sentences = sent_tokenize(paragraph) features = get_feature([text],feature_dimension,wordset,model) pred = clf.predict(features) for i,item in enumerate(pred): if item == 0: ret = "patient" else: ret = "doctor" print("{} : {}".format(sentences[i],ret)) print() return pred """ Explanation: Context Based Sentence Classification This program classifies sentences based on the context and predicts whether a sentence might be related to a "patient" or "doctor" spoken sentences. <strong> Run the cell below to import all the required modules End of explanation """ base_url = "https://www.askthedoctor.com/browse-medical-questions" base_f = urllib.request.urlopen(base_url) base_soup = BeautifulSoup(base_f,"lxml") # categories of diseases categories = [(base_anchor["href"],base_anchor["title"]) for base_div in base_soup.findAll("div",{"class":"disease_column"}) for base_anchor in base_div.findAll("a",{"itemtype":"https://schema.org/category"})] print("Collecting data ... 
") with open("patient.txt","w") as patientfile, open("doctor.txt", "w") as doctorfile: for category in categories: topic = category[1] print(topic) try: url = category[0] f = urllib.request.urlopen(url) soup = BeautifulSoup(f,"lxml") divs = soup.findAll('div',{"class":"question_az"}) for i,div in enumerate(divs): inner_url = div.find('a')['href'] inner_f = urllib.request.urlopen(inner_url) inner_soup = BeautifulSoup(inner_f,"lxml") question = inner_soup.find('span',{"class":"quesans"}) question = question.text.replace(","," ") question = re.sub('[.]+', '.',question) for token in sent_tokenize(question): if len(word_tokenize(token)) > 3: patientfile.write("{}\n".format(token)) answer = inner_soup.find('span', {"class": "answer quesans"}) answer = answer.text.replace(""" \n(adsbygoogle = window.adsbygoogle || []).push({});""","").replace("\n"," ").replace(" "," ").replace(","," ") answer = re.sub('[.]+', '.',answer) for token in sent_tokenize(answer): if len(word_tokenize(token)) > 3: doctorfile.write("{}\n".format(token)) except: print("Error ................ {}".format(topic)) patientfile.close() doctorfile.close() print("Data saved !") """ Explanation: Data collection The data is crawled from www.askthedoctor.com. The website contains data of questions asked by the patients, and the corresponding answers given by the doctor. The data is categorized into different categories based on the diseases. Here, each of the category is looped and corresponding data is stored as "patient" data or "doctor" data Run the code below to collect data from the above website. Two files "patient.txt" and "doctor.txt" are saved <strong>Note that it might take several minutes depending on the internet connection. <strong>Skip the below cell if the data is already saved. End of explanation """ data_matrix = [] with open("patient.txt","r") as patfile, open("doctor.txt","r") as docfile: data_matrix = patfile.readlines() data_matrix.extend(docfile.readlines()) patfile.close() docfile.close() # converting the whole data into lower case data_matrix = [sample.lower() for sample in data_matrix] print("The Dataset consists of {} sentences.".format(len(data_matrix))) # Formatting the data to provide as input to gensim package's word2vec model words_matrix = [] for sample in data_matrix: sentences = sent_tokenize(sample) for sentence in sentences: words = word_tokenize(sentence) words_new = [word for word in words if word.isalnum()] words_matrix.append(words_new) import logging logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',\ level=logging.INFO) # Parameters required for training word2vec model num_features = 300 # Word vector dimensionality min_word_count = 5 # Minimum word count num_workers = 4 # Number of threads to run in parallel context = 10 # Context window size downsampling = 1e-3 # Downsample setting for frequent words # Initialize and train the model (this will take some time) from gensim.models import word2vec print("Training model...") model = word2vec.Word2Vec(words_matrix, workers=num_workers, \ size=num_features, min_count = min_word_count, \ window = context, sample = downsampling) # If you don't plan to train the model any further, calling # init_sims will make the model much more memory-efficient. model.init_sims(replace=True) # It can be helpful to create a meaningful model name and # save the model for later use. 
You can load it later using Word2Vec.load() model_name = "word2vec_model" model.save(model_name) print("Word2Vec model saved !") """ Explanation: Word2Vec Word2vec is a method of word embeddings where the words in a sentence are mapped to their corresponding vectors representation. Here the whole dataset is considered, both patient and doctor spoken sentences to learn word embeddings. A python library "gensim" is used to train a word2vec model <strong> Run the code below to generate a word2vec model. <string> Skip the cell below if model is already created. End of explanation """ model = Word2Vec.load("word2vec_model") word_vectors = model.wv.syn0 print("The model has {} words in the vocabulary and the dimension of the vectors is {}".format(word_vectors.shape[0],word_vectors.shape[1])) print("I\n{}\n".format(model.most_similar("i"))) print("swelling\n{}\n".format(model.most_similar("swelling"))) print("headache\n{}\n".format(model.most_similar("headache"))) print("fever\n{}\n".format(model.most_similar("fever"))) """ Explanation: Testing Word2Vec Model Given a word, the model should be able to give similar words after being trained depending on the context words that appeared in sentences of the dataset. End of explanation """ model = Word2Vec.load("word2vec_model") word_vectors = model.wv.syn0 feature_dimension = word_vectors.shape[1] # all words in the vocabulary wordset = set(model.wv.index2word) print("Generating features ...") features = generate_features(feature_dimension,wordset,model) # dividing the dataset into train and test datasets. indices = np.random.permutation(features.shape[0]) test_idx,training_idx = indices[:2000], indices[2000:] test_features, train_features = features[test_idx,:], features[training_idx,:] train_labels = train_features[:,-1] train_features = train_features[:,:-1] test_labels = test_features[:,-1] test_features = test_features[:,:-1] # Saving features np.save("train_features.npy",train_features) np.save("train_labels.npy", train_labels) np.save("test_features.npy",test_features) np.save("test_labels.npy",test_labels) print("Features saved") """ Explanation: Saving the features Each sentence vector is formed by averaging the vectors corresponding to each word in a sentence. The whole set of features is divided into train and test features. <strong> This might take some time <strong> Skip the cell if features are already saved End of explanation """ # Loading train features train_features = np.load("train_features.npy") train_labels = np.load("train_labels.npy") from sklearn.svm import SVC clf = SVC(kernel="linear",C=100) print("Training classifier ....") clf.fit(train_features,train_labels) import pickle with open("clf_model.pkl","wb") as clffile: pickle.dump(clf,clffile) clffile.close() print("Classifier Model saved !") """ Explanation: Training Classifier Model A simple SVM classifier is trained by converting a sentence into vectors. <strong> Run the following code a train a simple SVM classifier and save the model. This might take some time <Strong> Skip the cell if model is already generated. 
End of explanation
"""
with open("clf_model.pkl","rb") as clffile:
    clf = pickle.load(clffile)
clffile.close()

test_features = np.load("test_features.npy")
test_labels = np.load("test_labels.npy")

pred = clf.predict(test_features)

from sklearn.metrics import accuracy_score
acc = accuracy_score(test_labels,pred)*100

print("The accuracy for {} samples for the model : {}%".format(test_features.shape[0],acc))
"""
Explanation: Evaluating Classifier Model
End of explanation
"""
with open("clf_model.pkl","rb") as clffile:
    clf = pickle.load(clffile)
clffile.close()

model = Word2Vec.load("word2vec_model")
word_vectors = model.wv.syn0
feature_dimension = word_vectors.shape[1]

# all words in the vocabulary
wordset = set(model.wv.index2word)

text = "i still cough few times a day. what should i do?"
res = predict(clf, text,feature_dimension, wordset, model)

text = "i have severe pain in my abdomen. do i have to go to the doctor? wash your hands everytime and follow hygenic practices"
res = predict(clf, text,feature_dimension, wordset, model)

text = "i have a sore throat. it has been there for the past week."
res = predict(clf, text,feature_dimension, wordset, model)

text = "do you have sore throat? Does your throat feel itchy? Do you have flu?"
res = predict(clf, text,feature_dimension, wordset, model)

text = "you should apply neomycin ointment on your chin"
res = predict(clf, text,feature_dimension, wordset, model)

text = "do you think I have infection which is causing my blood pressure to rise? Yes, your blood pressure is increasing because of infection"
res = predict(clf, text,feature_dimension, wordset, model)

text = "Are you comfortable? If you are not comfortable, please let me know. No I am not comfortable and in too much in pain right now."
res = predict(clf, text,feature_dimension, wordset, model)
"""
Explanation: Trying out some random sentences
End of explanation
"""
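# Accuracy alone can hide an imbalance between the two classes, so it is worth looking
# at a per-class breakdown as well.  This sketch reloads the classifier and the test
# split from the files written above (clf_model.pkl, test_features.npy,
# test_labels.npy) and prints a confusion matrix and classification report; label 0
# corresponds to patient sentences and label 1 to doctor sentences, as defined when
# the features were generated.  It assumes the earlier cells were run so those files
# exist on disk.
import pickle
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

with open("clf_model.pkl", "rb") as clffile:
    clf = pickle.load(clffile)

test_features = np.load("test_features.npy")
test_labels = np.load("test_labels.npy")
pred = clf.predict(test_features)

# rows are the true class, columns the predicted class (0 = patient, 1 = doctor)
print(confusion_matrix(test_labels, pred))
print(classification_report(test_labels, pred, target_names=["patient", "doctor"]))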
computational-class/cjc2016
code/08.05-gradient_descent.ipynb
mit
import numpy as np # Size of the points dataset. m = 20 # Points x-coordinate and dummy value (x0, x1). X0 = np.ones((m, 1)) X1 = np.arange(1, m+1).reshape(m, 1) X = np.hstack((X0, X1)) # Points y-coordinate y = np.array([3, 4, 5, 5, 2, 4, 7, 8, 11, 8, 12, 11, 13, 13, 16, 17, 18, 17, 19, 21]).reshape(m, 1) # The Learning Rate alpha. alpha = 0.01 def error_function(theta, X, y): '''Error function J definition.''' diff = np.dot(X, theta) - y return (1./2*m) * np.dot(np.transpose(diff), diff) def gradient_function(theta, X, y): '''Gradient of the function J definition.''' diff = np.dot(X, theta) - y return (1./m) * np.dot(np.transpose(X), diff) def gradient_descent(X, y, alpha): '''Perform gradient descent.''' theta = np.array([1, 1]).reshape(2, 1) gradient = gradient_function(theta, X, y) while not np.all(np.absolute(gradient) <= 1e-5): theta = theta - alpha * gradient gradient = gradient_function(theta, X, y) return theta # source๏ผšhttps://www.jianshu.com/p/c7e642877b0e optimal = gradient_descent(X, y, alpha) print('Optimal parameters Theta:', optimal[0][0], optimal[1][0]) print('Error function:', error_function(optimal, X, y)[0,0]) """ Explanation: Introduction to Gradient Descent The Idea Behind Gradient Descent ๆขฏๅบฆไธ‹้™ <img src='./img/stats/gradient_descent.gif' align = "middle" width = '400px'> <img align="left" style="padding-right:10px;" width ="400px" src="./img/stats/gradient2.png"> ๅฆ‚ไฝ•ๆ‰พๅˆฐๆœ€ๅฟซไธ‹ๅฑฑ็š„่ทฏ๏ผŸ - ๅ‡่ฎพๆญคๆ—ถๅฑฑไธŠ็š„ๆต“้›พๅพˆๅคง๏ผŒไธ‹ๅฑฑ็š„่ทฏๆ— ๆณ•็กฎๅฎš; - ๅ‡่ฎพไฝ ๆ‘”ไธๆญป๏ผ - ไฝ ๅช่ƒฝๅˆฉ็”จ่‡ชๅทฑๅ‘จๅ›ด็š„ไฟกๆฏๅŽปๆ‰พๅˆฐไธ‹ๅฑฑ็š„่ทฏๅพ„ใ€‚ - ไปฅไฝ ๅฝ“ๅ‰็š„ไฝ็ฝฎไธบๅŸบๅ‡†๏ผŒๅฏปๆ‰พ่ฟ™ไธชไฝ็ฝฎๆœ€้™กๅณญ็š„ๆ–นๅ‘๏ผŒไปŽ่ฟ™ไธชๆ–นๅ‘ๅ‘ไธ‹่ตฐใ€‚ <img style="padding-right:10px;" width ="500px" src="./img/stats/gradient.png" align = 'right'> Gradient is the vector of partial derivatives One approach to maximizing a function is to - pick a random starting point, - compute the gradient, - take a small step in the direction of the gradient, and - repeat with a new staring point. <img src='./img/stats/gd.webp' width = '700' align = 'middle'> Let's represent parameters as $\Theta$, learning rate as $\alpha$, and gradient as $\bigtriangledown J(\Theta)$, To the find the best model is an optimization problem - โ€œminimizes the error of the modelโ€ - โ€œmaximizes the likelihood of the data.โ€ Weโ€™ll frequently need to maximize (or minimize) functions. - to find the input vector v that produces the largest (or smallest) possible value. Mathematics behind Gradient Descent A simple mathematical intuition behind one of the commonly used optimisation algorithms in Machine Learning. 
https://www.douban.com/note/713353797/ The cost or loss function: $$Cost = \frac{1}{N} \sum_{i = 1}^N (Y' -Y)^2$$ <img src='./img/stats/x2.webp' width = '700' align = 'center'> Parameters with small changes: $$ m_1 = m_0 - \delta m, b_1 = b_0 - \delta b$$ The cost function J is a function of m and b: $$J_{m, b} = \frac{1}{N} \sum_{i = 1}^N (Y' -Y)^2 = \frac{1}{N} \sum_{i = 1}^N Error_i^2$$ $$\frac{\partial J}{\partial m} = 2 Error \frac{\partial}{\partial m}Error$$ $$\frac{\partial J}{\partial b} = 2 Error \frac{\partial}{\partial b}Error$$ Let's fit the data with linear regression: $$\frac{\partial}{\partial m}Error = \frac{\partial}{\partial m}(Y' - Y) = \frac{\partial}{\partial m}(mX + b - Y)$$ Since $X, b, Y$ are constant: $$\frac{\partial}{\partial m}Error = X$$ $$\frac{\partial}{\partial b}Error = \frac{\partial}{\partial b}(Y' - Y) = \frac{\partial}{\partial b}(mX + b - Y)$$ Since $X, m, Y$ are constant: $$\frac{\partial}{\partial m}Error = 1$$ Thus: $$\frac{\partial J}{\partial m} = 2 * Error * X$$ $$\frac{\partial J}{\partial b} = 2 * Error$$ Let's get rid of the constant 2 and multiplying the learning rate $\alpha$, who determines how large a step to take: $$\frac{\partial J}{\partial m} = Error * X * \alpha$$ $$\frac{\partial J}{\partial b} = Error * \alpha$$ Since $ m_1 = m_0 - \delta m, b_1 = b_0 - \delta b$: $$ m_1 = m_0 - Error * X * \alpha$$ $$b_1 = b_0 - Error * \alpha$$ Notice that the slope b can be viewed as the beta value for X = 1. Thus, the above two equations are in essence the same. Let's represent parameters as $\Theta$, learning rate as $\alpha$, and gradient as $\bigtriangledown J(\Theta)$, we have: $$\Theta_1 = \Theta_0 - \alpha \bigtriangledown J(\Theta)$$ <img src='./img/stats/gd.webp' width = '800' align = 'center'> Hence,to solve for the gradient, we iterate through our data points using our new $m$ and $b$ values and compute the partial derivatives. This new gradient tells us - the slope of our cost function at our current position - the direction we should move to update our parameters. The size of our update is controlled by the learning rate. End of explanation """ def difference_quotient(f, x, h): return (f(x + h) - f(x)) / h """ Explanation: This is the End! Estimating the Gradient If f is a function of one variable, its derivative at a point x measures how f(x) changes when we make a very small change to x. It is defined as the limit of the difference quotients: ๅทฎๅ•†๏ผˆdifference quotient๏ผ‰ๅฐฑๆ˜ฏๅ› ๅ˜้‡็š„ๆ”นๅ˜้‡ไธŽ่‡ชๅ˜้‡็š„ๆ”นๅ˜้‡ไธค่€…็›ธ้™ค็š„ๅ•†ใ€‚ End of explanation """ def square(x): return x * x def derivative(x): return 2 * x derivative_estimate = lambda x: difference_quotient(square, x, h=0.00001) def sum_of_squares(v): """computes the sum of squared elements in v""" return sum(v_i ** 2 for v_i in v) # plot to show they're basically the same import matplotlib.pyplot as plt x = range(-10,10) plt.plot(x, list(map(derivative, x)), 'rx') # red x plt.plot(x, list(map(derivative_estimate, x)), 'b+') # blue + plt.show() """ Explanation: For many functions itโ€™s easy to exactly calculate derivatives. 
For example, the square function: def square(x): return x * x has the derivative: def derivative(x): return 2 * x End of explanation """ def partial_difference_quotient(f, v, i, h): # add h to just the i-th element of v w = [v_j + (h if j == i else 0) for j, v_j in enumerate(v)] return (f(w) - f(v)) / h def estimate_gradient(f, v, h=0.00001): return [partial_difference_quotient(f, v, i, h) for i, _ in enumerate(v)] """ Explanation: When f is a function of many variables, it has multiple partial derivatives. End of explanation """ def step(v, direction, step_size): """move step_size in the direction from v""" return [v_i + step_size * direction_i for v_i, direction_i in zip(v, direction)] def sum_of_squares_gradient(v): return [2 * v_i for v_i in v] from collections import Counter from linear_algebra import distance, vector_subtract, scalar_multiply from functools import reduce import math, random print("using the gradient") # generate 3 numbers v = [random.randint(-10,10) for i in range(3)] print(v) tolerance = 0.0000001 n = 0 while True: gradient = sum_of_squares_gradient(v) # compute the gradient at v if n%50 ==0: print(v, sum_of_squares(v)) next_v = step(v, gradient, -0.01) # take a negative gradient step if distance(next_v, v) < tolerance: # stop if we're converging break v = next_v # continue if we're not n += 1 print("minimum v", v) print("minimum value", sum_of_squares(v)) """ Explanation: Using the Gradient End of explanation """ step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001] """ Explanation: Choosing the Right Step Size Although the rationale for moving against the gradient is clear, - how far to move is not. - Indeed, choosing the right step size is more of an art than a science. Methods: 1. Using a fixed step size 1. Gradually shrinking the step size over time 1. At each step, choosing the step size that minimizes the value of the objective function End of explanation """ def safe(f): """define a new function that wraps f and return it""" def safe_f(*args, **kwargs): try: return f(*args, **kwargs) except: return float('inf') # this means "infinity" in Python return safe_f """ Explanation: It is possible that certain step sizes will result in invalid inputs for our function. So weโ€™ll need to create a โ€œsafe applyโ€ function - returns infinity for invalid inputs: - which should never be the minimum of anything End of explanation """ def minimize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001): """use gradient descent to find theta that minimizes target function""" step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001] theta = theta_0 # set theta to initial value target_fn = safe(target_fn) # safe version of target_fn value = target_fn(theta) # value we're minimizing while True: gradient = gradient_fn(theta) next_thetas = [step(theta, gradient, -step_size) for step_size in step_sizes] # choose the one that minimizes the error function next_theta = min(next_thetas, key=target_fn) next_value = target_fn(next_theta) # stop if we're "converging" if abs(value - next_value) < tolerance: return theta else: theta, value = next_theta, next_value # minimize_batch" v = [random.randint(-10,10) for i in range(3)] v = minimize_batch(sum_of_squares, sum_of_squares_gradient, v) print("minimum v", v) print("minimum value", sum_of_squares(v)) """ Explanation: Putting It All Together target_fn that we want to minimize gradient_fn. 
For example, the target_fn could represent the errors in a model as a function of its parameters, To choose a starting value for the parameters theta_0. End of explanation """ def negate(f): """return a function that for any input x returns -f(x)""" return lambda *args, **kwargs: -f(*args, **kwargs) def negate_all(f): """the same when f returns a list of numbers""" return lambda *args, **kwargs: [-y for y in f(*args, **kwargs)] def maximize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001): return minimize_batch(negate(target_fn), negate_all(gradient_fn), theta_0, tolerance) """ Explanation: Sometimes weโ€™ll instead want to maximize a function, which we can do by minimizing its negative End of explanation """ def in_random_order(data): """generator that returns the elements of data in random order""" indexes = [i for i, _ in enumerate(data)] # create a list of indexes random.shuffle(indexes) # shuffle them for i in indexes: # return the data in that order yield data[i] """ Explanation: Using the batch approach, each gradient step requires us to make a prediction and compute the gradient for the whole data set, which makes each step take a long time. Error functions are additive - The predictive error on the whole data set is simply the sum of the predictive errors for each data point. When this is the case, we can instead apply a technique called stochastic gradient descent - which computes the gradient (and takes a step) for only one point at a time. - It cycles over our data repeatedly until it reaches a stopping point. Stochastic Gradient Descent During each cycle, weโ€™ll want to iterate through our data in a random order: End of explanation """ def minimize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01): data = list(zip(x, y)) theta = theta_0 # initial guess alpha = alpha_0 # initial step size min_theta, min_value = None, float("inf") # the minimum so far iterations_with_no_improvement = 0 # if we ever go 100 iterations with no improvement, stop while iterations_with_no_improvement < 100: value = sum( target_fn(x_i, y_i, theta) for x_i, y_i in data ) if value < min_value: # if we've found a new minimum, remember it # and go back to the original step size min_theta, min_value = theta, value iterations_with_no_improvement = 0 alpha = alpha_0 else: # otherwise we're not improving, so try shrinking the step size iterations_with_no_improvement += 1 alpha *= 0.9 # and take a gradient step for each of the data points for x_i, y_i in in_random_order(data): gradient_i = gradient_fn(x_i, y_i, theta) theta = vector_subtract(theta, scalar_multiply(alpha, gradient_i)) return min_theta def maximize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01): return minimize_stochastic(negate(target_fn), negate_all(gradient_fn), x, y, theta_0, alpha_0) print("using minimize_stochastic_batch") x = list(range(101)) y = [3*x_i + random.randint(-10, 20) for x_i in x] theta_0 = random.randint(-10,10) v = minimize_stochastic(sum_of_squares, sum_of_squares_gradient, x, y, theta_0) print("minimum v", v) print("minimum value", sum_of_squares(v)) """ Explanation: This approach avoids circling around near a minimum forever - whenever we stop getting improvements weโ€™ll decrease the step size and eventually quit. End of explanation """
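# Between the full-batch update and the one-point-at-a-time stochastic update shown
# above sits mini-batch gradient descent, which averages the gradient over a small
# random subset at every step.  This standalone sketch applies it to a synthetic
# straight-line problem and checks the result against the closed-form least-squares
# answer; the data, batch size and learning rate are illustrative choices only.
import numpy as np

rs = np.random.RandomState(0)

n = 200
x = rs.uniform(0, 2, n)
y = 4 + 3 * x + rs.normal(0, 0.5, n)             # "true" intercept 4, slope 3
X = np.column_stack([np.ones(n), x])             # first column models the intercept

theta = np.zeros(2)
alpha = 0.1                                      # fixed step size
batch_size = 20

for epoch in range(500):
    order = rs.permutation(n)
    for start in range(0, n, batch_size):
        idx = order[start:start + batch_size]
        resid = X[idx].dot(theta) - y[idx]
        theta -= alpha * X[idx].T.dot(resid) / idx.size   # mean gradient of the squared error

theta_exact = np.linalg.lstsq(X, y, rcond=None)[0]
print("mini-batch estimate :", theta)
print("closed-form lstsq   :", theta_exact)
# With a fixed step size the iterates keep jittering slightly around the optimum,
# which is why minimize_stochastic above shrinks alpha whenever progress stalls.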
Neuroglycerin/neukrill-net-work
notebooks/model_run_and_result_analyses/Analyse alexnet_extra_layer_dropouts models.ipynb
mit
import pylearn2.utils import pylearn2.config import theano import neukrill_net.dense_dataset import neukrill_net.utils import numpy as np %matplotlib inline import matplotlib.pyplot as plt import holoviews as hl %load_ext holoviews.ipython import sklearn.metrics cd .. m = pylearn2.utils.serial.load("/disk/scratch/neuroglycerin/models/alexnet_extra_layer_dropouts2.pkl.recent") %run check_test_score.py -v run_settings/alexnet_extra_layer_dropouts.json """ Explanation: This notebook investigates alexnet-based models with 8-augmentation, an extra convolutional layer with 192 output channels and various dropout values at each layer. When training the models, we saved both files from the best and most recent epoch. Let's see what score we got with the best parameter values from the models. We first look at the model specified in alexnet_extra_layer_dropouts.json model, which has 0.75 dropout on all but last convolutional layers, and 0.5 dropout on the last layer. End of explanation """ %run check_test_score.py -v run_settings/alexnet_extra_layer_dropouts2.json """ Explanation: Now check the model specified in alexnet_extra_layer_dropouts2.json model, which has 0.9 dropout on all but last convolutional layers, and 0.5 dropout on the last layer. End of explanation """ %run check_test_score.py -v run_settings/alexnet_extra_layer_dropouts3.json """ Explanation: Finally, check the model specified in alexnet_extra_layer_dropouts3.json model, which has 0.5 dropout on all convolutional layers. End of explanation """ def plot_monitor(model,c = 'valid_y_nll'): channel = model.monitor.channels[c] plt.title(c) plt.grid(which="both") plt.plot(channel.example_record,channel.val_record) return None plot_monitor(m) plot_monitor(m,c="train_y_nll") """ Explanation: Looks like the alexnet_extra_layer_dropouts2.json model gave the best score. It also still continues to improve. Let's look at the evolution of nll: End of explanation """ %run check_test_score.py -v run_settings/alexnet_extra_layer_dropouts2.json """ Explanation: The graph looks sort of stable for valid_y_nll. Perhaps there's not that much room for improvement? It could be useful to let the model run for around 85 epochs, which is roughly the time the current best model ran for. The #3 model was stopped, and #1 and #2 are let to run until ~epoch 85. After ~85 epochs, the score is not better than the current best. End of explanation """
mne-tools/mne-tools.github.io
stable/_downloads/6608d2f46fa33fc4dfd4a7f07bd9bdc9/10_ieeg_localize.ipynb
bsd-3-clause
# Authors: Alex Rockhill <aprockhill@mailbox.org> # Eric Larson <larson.eric.d@gmail.com> # # License: BSD-3-Clause import os.path as op import numpy as np import matplotlib.pyplot as plt import nibabel as nib import nilearn.plotting from dipy.align import resample import mne from mne.datasets import fetch_fsaverage # paths to mne datasets: sample sEEG and FreeSurfer's fsaverage subject, # which is in MNI space misc_path = mne.datasets.misc.data_path() sample_path = mne.datasets.sample.data_path() subjects_dir = op.join(sample_path, 'subjects') # use mne-python's fsaverage data fetch_fsaverage(subjects_dir=subjects_dir, verbose=True) # downloads if needed # GUI requires pyvista backend mne.viz.set_3d_backend('pyvistaqt') """ Explanation: Locating intracranial electrode contacts Analysis of intracranial electrophysiology recordings typically involves finding the position of each contact relative to brain structures. In a typical setup, the brain and the electrode locations will be in two places and will have to be aligned; the brain is best visualized by a pre-implantation magnetic resonance (MR) image whereas the electrode contact locations are best visualized in a post-implantation computed tomography (CT) image. The CT image has greater intensity than the background at each of the electrode contacts and for the skull. Using the skull, the CT can be aligned to MR-space. This accomplishes our goal of obtaining contact locations in MR-space (which is where the brain structures are best determined using the tut-freesurfer-reconstruction). Contact locations in MR-space can also be warped to a template space such as fsaverage for group comparisons. Please note that this tutorial requires nibabel, nilearn and dipy which can be installed using pip as well as 3D plotting (see manual-install). End of explanation """ T1 = nib.load(op.join(misc_path, 'seeg', 'sample_seeg', 'mri', 'T1.mgz')) viewer = T1.orthoview() viewer.set_position(0, 9.9, 5.8) viewer.figs[0].axes[0].annotate( 'PC', (107, 108), xytext=(10, 75), color='white', horizontalalignment='center', arrowprops=dict(facecolor='white', lw=0.5, width=2, headwidth=5)) viewer.figs[0].axes[0].annotate( 'AC', (137, 108), xytext=(246, 75), color='white', horizontalalignment='center', arrowprops=dict(facecolor='white', lw=0.5, width=2, headwidth=5)) """ Explanation: Aligning the T1 to ACPC For intracranial electrophysiology recordings, the Brain Imaging Data Structure (BIDS) standard requires that coordinates be aligned to the anterior commissure and posterior commissure (ACPC-aligned). Therefore, it is recommended that you do this alignment before finding the positions of the channels in your recording. Doing this will make the "mri" (aka surface RAS) coordinate frame an ACPC coordinate frame. This can be done using Freesurfer's freeview: console $ freeview $MISC_PATH/seeg/sample_seeg_T1.mgz And then interact with the graphical user interface: First, it is recommended to change the cursor style to long, this can be done through the menu options like so: :menuselection:`Freeview --&gt; Preferences --&gt; General --&gt; Cursor style --&gt; Long` Then, the image needs to be aligned to ACPC to look like the image below. 
This can be done by pulling up the transform popup from the menu like so: :menuselection:`Tools --&gt; Transform Volume` <div class="alert alert-info"><h4>Note</h4><p>Be sure to set the text entry box labeled RAS (not TkReg RAS) to ``0 0 0`` before beginning the transform.</p></div> Then translate the image until the crosshairs meet on the AC and run through the PC as shown in the plot. The eyes should be in the ACPC plane and the image should be rotated until they are symmetrical, and the crosshairs should transect the midline of the brain. Be sure to use both the rotate and the translate menus and save the volume after you're finished using Save Volume As in the transform popup :footcite:HamiltonEtAl2017. End of explanation """ def plot_overlay(image, compare, title, thresh=None): """Define a helper function for comparing plots.""" image = nib.orientations.apply_orientation( np.asarray(image.dataobj), nib.orientations.axcodes2ornt( nib.orientations.aff2axcodes(image.affine))).astype(np.float32) compare = nib.orientations.apply_orientation( np.asarray(compare.dataobj), nib.orientations.axcodes2ornt( nib.orientations.aff2axcodes(compare.affine))).astype(np.float32) if thresh is not None: compare[compare < np.quantile(compare, thresh)] = np.nan fig, axes = plt.subplots(1, 3, figsize=(12, 4)) fig.suptitle(title) for i, ax in enumerate(axes): ax.imshow(np.take(image, [image.shape[i] // 2], axis=i).squeeze().T, cmap='gray') ax.imshow(np.take(compare, [compare.shape[i] // 2], axis=i).squeeze().T, cmap='gist_heat', alpha=0.5) ax.invert_yaxis() ax.axis('off') fig.tight_layout() CT_orig = nib.load(op.join(misc_path, 'seeg', 'sample_seeg_CT.mgz')) # resample to T1's definition of world coordinates CT_resampled = resample(moving=np.asarray(CT_orig.dataobj), static=np.asarray(T1.dataobj), moving_affine=CT_orig.affine, static_affine=T1.affine) plot_overlay(T1, CT_resampled, 'Unaligned CT Overlaid on T1', thresh=0.95) del CT_resampled """ Explanation: Freesurfer recon-all The first step is the most time consuming; the freesurfer reconstruction. This process segments out the brain from the rest of the MR image and determines which voxels correspond to each brain area based on a template deformation. This process takes approximately 8 hours so plan accordingly. The example dataset contains the data from completed reconstruction so we will proceed using that. console $ export SUBJECT=sample_seeg $ export SUBJECTS_DIR=$MY_DATA_DIRECTORY $ recon-all -subjid $SUBJECT -sd $SUBJECTS_DIR \ -i $MISC_PATH/seeg/sample_seeg_T1.mgz -all -deface <div class="alert alert-info"><h4>Note</h4><p>You may need to include an additional ``-cw256`` flag which can be added to the end of the recon-all command if your MR scan is not ``256 ร— 256 ร— 256`` voxels.</p></div> <div class="alert alert-info"><h4>Note</h4><p>Using the ``-deface`` flag will create a defaced, anonymized T1 image located at ``$MY_DATA_DIRECTORY/$SUBJECT/mri/orig_defaced.mgz``, which is helpful for when you publish your data. You can also use :func:`mne_bids.write_anat` and pass ``deface=True``.</p></div> Aligning the CT to the MR Let's load our T1 and CT images and visualize them. You can hardly see the CT, it's so misaligned that all you can see is part of the stereotactic frame that is anteriolateral to the skull in the middle plot. Clearly, we need to align the CT to the T1 image. 
End of explanation """ reg_affine = np.array([ [0.99270756, -0.03243313, 0.11610254, -133.094156], [0.04374389, 0.99439665, -0.09623816, -97.58320673], [-0.11233068, 0.10061512, 0.98856381, -84.45551601], [0., 0., 0., 1.]]) # use a cval='1%' here to make the values outside the domain of the CT # the same as the background level during interpolation CT_aligned = mne.transforms.apply_volume_registration( CT_orig, T1, reg_affine, cval='1%') plot_overlay(T1, CT_aligned, 'Aligned CT Overlaid on T1', thresh=0.95) del CT_orig """ Explanation: Now we need to align our CT image to the T1 image. We want this to be a rigid transformation (just rotation + translation), so we don't do a full affine registration (that includes shear) here. This takes a while (~10 minutes) to execute so we skip actually running it here:: reg_affine, _ = mne.transforms.compute_volume_registration( CT_orig, T1, pipeline='rigids', zooms=dict(translation=5))) Instead we just hard-code the resulting 4x4 matrix: End of explanation """ # make low intensity parts of the CT transparent for easier visualization CT_data = CT_aligned.get_fdata().copy() CT_data[CT_data < np.quantile(CT_data, 0.95)] = np.nan T1_data = np.asarray(T1.dataobj) fig, axes = plt.subplots(1, 3, figsize=(12, 6)) for ax in axes: ax.axis('off') axes[0].imshow(T1_data[T1.shape[0] // 2], cmap='gray') axes[0].set_title('MR') axes[1].imshow(np.asarray(CT_aligned.dataobj)[CT_aligned.shape[0] // 2], cmap='gray') axes[1].set_title('CT') axes[2].imshow(T1_data[T1.shape[0] // 2], cmap='gray') axes[2].imshow(CT_data[CT_aligned.shape[0] // 2], cmap='gist_heat', alpha=0.5) for ax in (axes[0], axes[2]): ax.annotate('Subcutaneous fat', (110, 52), xytext=(100, 30), color='white', horizontalalignment='center', arrowprops=dict(facecolor='white')) for ax in axes: ax.annotate('Skull (dark in MR, bright in CT)', (40, 175), xytext=(120, 246), horizontalalignment='center', color='white', arrowprops=dict(facecolor='white')) axes[2].set_title('CT aligned to MR') fig.tight_layout() del CT_data, T1 """ Explanation: <div class="alert alert-info"><h4>Note</h4><p>Alignment failures sometimes occur which requires manual pre-alignment. Freesurfer's ``freeview`` can be used to to align manually ```console $ freeview $MISC_PATH/seeg/sample_seeg/mri/T1.mgz \ $MISC_PATH/seeg/sample_seeg_CT.mgz:colormap=heat:opacity=0.6 ``` - Navigate to the upper toolbar, go to :menuselection:`Tools --> Transform Volume` - Use the rotation and translation slide bars to align the CT to the MR (be sure to have the CT selected in the upper left menu) - Save the linear transform array (lta) file using the ``Save Reg...`` button Since we really require as much precision as possible for the alignment, we should rerun the algorithm starting with the manual alignment. This time, we just want to skip to the most exact rigid alignment, without smoothing, since the manual alignment is already very close. 
```python from dipy.align import affine_registration # load transform manual_reg_affine_vox = mne.read_lta(op.join( # the path used above misc_path, 'seeg', 'sample_seeg_CT_aligned_manual.mgz.lta')) # convert from vox->vox to ras->ras manual_reg_affine = \ CT_orig.affine @ np.linalg.inv(manual_reg_affine_vox) \ @ np.linalg.inv(T1.affine) CT_aligned_fix_img = affine_registration( moving=np.array(CT_orig.dataobj), static=np.array(T1.dataobj), moving_affine=CT_orig.affine, static_affine=T1.affine, pipeline=['rigid'], starting_affine=manual_reg_affine, level_iters=[100], sigmas=[0], factors=[1])[0] CT_aligned = nib.MGHImage( CT_aligned_fix_img.astype(np.float32), T1.affine) ``` The rest of the tutorial can then be completed using ``CT_aligned`` from this point on.</p></div> We can now see how the CT image looks properly aligned to the T1 image. <div class="alert alert-info"><h4>Note</h4><p>The hyperintense skull is actually aligned to the hypointensity between the brain and the scalp. The brighter area surrounding the skull in the MR is actually subcutaneous fat.</p></div> End of explanation """ # estimate head->mri transform subj_trans = mne.coreg.estimate_head_mri_t( 'sample_seeg', op.join(misc_path, 'seeg')) """ Explanation: Now we need to estimate the "head" coordinate transform. MNE stores digitization montages in a coordinate frame called "head" defined by fiducial points (origin is halfway between the LPA and RPA see tut-source-alignment). For sEEG, it is convenient to get an estimate of the location of the fiducial points for the subject using the Talairach transform (see :func:mne.coreg.get_mni_fiducials) to use to define the coordinate frame so that we don't have to manually identify their location. End of explanation """ # load electrophysiology data to find channel locations for # (the channels are already located in the example) raw = mne.io.read_raw(op.join(misc_path, 'seeg', 'sample_seeg_ieeg.fif')) gui = mne.gui.locate_ieeg(raw.info, subj_trans, CT_aligned, subject='sample_seeg', subjects_dir=op.join(misc_path, 'seeg')) # The `raw` object is modified to contain the channel locations # after closing the GUI and can now be saved # gui.close() # typically you close when done """ Explanation: Marking the Location of Each Electrode Contact Now, the CT and the MR are in the same space, so when you are looking at a point in CT space, it is the same point in MR space. So now everything is ready to determine the location of each electrode contact in the individual subject's anatomical space (T1-space). To do this, we can use the MNE intracranial electrode location graphical user interface. <div class="alert alert-info"><h4>Note</h4><p>The most useful coordinate frame for intracranial electrodes is generally the ``surface RAS`` coordinate frame because that is the coordinate frame that all the surface and image files that Freesurfer outputs are in, see `tut-freesurfer-mne`. 
These are useful for finding the brain structures nearby each contact and plotting the results.</p></div> To operate the GUI: Click in each image to navigate to each electrode contact Select the contact name in the right panel Press the "Mark" button or the "m" key to associate that position with that contact Repeat until each contact is marked, they will both appear as circles in the plots and be colored in the sidebar when marked <div class="alert alert-info"><h4>Note</h4><p>The channel locations are saved to the ``raw`` object every time a location is marked or removed so there is no "Save" button.</p></div> <div class="alert alert-info"><h4>Note</h4><p>Using the scroll or +/- arrow keys you can zoom in and out, and the up/down, left/right and page up/page down keys allow you to move one slice in any direction. This information is available in the help menu, accessible by pressing the "h" key.</p></div> <div class="alert alert-info"><h4>Note</h4><p>If "Snap to Center" is on, this will use the radius so be sure to set it properly.</p></div> End of explanation """ T1_ecog = nib.load(op.join(misc_path, 'ecog', 'sample_ecog', 'mri', 'T1.mgz')) CT_orig_ecog = nib.load(op.join(misc_path, 'ecog', 'sample_ecog_CT.mgz')) # pre-computed affine from `mne.transforms.compute_volume_registration` reg_affine = np.array([ [0.99982382, -0.00414586, -0.01830679, 0.15413965], [0.00549597, 0.99721885, 0.07432601, -1.54316131], [0.01794773, -0.07441352, 0.99706595, -1.84162514], [0., 0., 0., 1.]]) # align CT CT_aligned_ecog = mne.transforms.apply_volume_registration( CT_orig_ecog, T1_ecog, reg_affine, cval='1%') raw_ecog = mne.io.read_raw(op.join(misc_path, 'ecog', 'sample_ecog_ieeg.fif')) # use estimated `trans` which was used when the locations were found previously subj_trans_ecog = mne.coreg.estimate_head_mri_t( 'sample_ecog', op.join(misc_path, 'ecog')) gui = mne.gui.locate_ieeg(raw_ecog.info, subj_trans_ecog, CT_aligned_ecog, subject='sample_ecog', subjects_dir=op.join(misc_path, 'ecog')) """ Explanation: Let's do a quick sidebar and show what this looks like for ECoG as well. End of explanation """ # plot projected sensors brain_kwargs = dict(cortex='low_contrast', alpha=0.2, background='white') brain = mne.viz.Brain('sample_ecog', subjects_dir=op.join(misc_path, 'ecog'), title='Before Projection', **brain_kwargs) brain.add_sensors(raw_ecog.info, trans=subj_trans_ecog) view_kwargs = dict(azimuth=60, elevation=100, distance=350, focalpoint=(0, 0, -15)) brain.show_view(**view_kwargs) """ Explanation: for ECoG, we typically want to account for "brain shift" or shrinking of the brain away from the skull/dura due to changes in pressure during the craniotomy Note: this requires the BEM surfaces to have been computed e.g. using mne watershed_bem or mne flash_bem. First, let's plot the localized sensor positions without modification. End of explanation """ # project sensors to the brain surface raw_ecog.info = mne.preprocessing.ieeg.project_sensors_onto_brain( raw_ecog.info, subj_trans_ecog, 'sample_ecog', subjects_dir=op.join(misc_path, 'ecog')) # plot projected sensors brain = mne.viz.Brain('sample_ecog', subjects_dir=op.join(misc_path, 'ecog'), title='After Projection', **brain_kwargs) brain.add_sensors(raw_ecog.info, trans=subj_trans_ecog) brain.show_view(**view_kwargs) """ Explanation: Now, let's project the sensors to the brain surface and re-plot them. 
End of explanation """ # plot the alignment brain = mne.viz.Brain('sample_seeg', subjects_dir=op.join(misc_path, 'seeg'), **brain_kwargs) brain.add_sensors(raw.info, trans=subj_trans) brain.show_view(**view_kwargs) """ Explanation: Let's plot the electrode contact locations on the subject's brain. MNE stores digitization montages in a coordinate frame called "head" defined by fiducial points (origin is halfway between the LPA and RPA see tut-source-alignment). For sEEG, it is convenient to get an estimate of the location of the fiducial points for the subject using the Talairach transform (see :func:mne.coreg.get_mni_fiducials) to use to define the coordinate frame so that we don't have to manually identify their location. The estimated head->mri trans was used when the electrode contacts were localized so we need to use it again here. End of explanation """ # load the subject's brain and the Freesurfer "fsaverage" template brain subject_brain = nib.load( op.join(misc_path, 'seeg', 'sample_seeg', 'mri', 'brain.mgz')) template_brain = nib.load( op.join(subjects_dir, 'fsaverage', 'mri', 'brain.mgz')) plot_overlay(template_brain, subject_brain, 'Alignment with fsaverage before Affine Registration') """ Explanation: Warping to a Common Atlas Electrode contact locations are often compared across subjects in a template space such as fsaverage or cvs_avg35_inMNI152. To transform electrode contact locations to that space, we need to determine a function that maps from the subject's brain to the template brain. We will use the symmetric diffeomorphic registration (SDR) implemented by Dipy to do this. Before we can make a function to account for individual differences in the shape and size of brain areas, we need to fix the alignment of the brains. The plot below shows that they are not yet aligned. End of explanation """ zooms = dict(translation=10, rigid=10, affine=10, sdr=5) reg_affine, sdr_morph = mne.transforms.compute_volume_registration( subject_brain, template_brain, zooms=zooms, verbose=True) subject_brain_sdr = mne.transforms.apply_volume_registration( subject_brain, template_brain, reg_affine, sdr_morph) # apply the transform to the subject brain to plot it plot_overlay(template_brain, subject_brain_sdr, 'Alignment with fsaverage after SDR Registration') del subject_brain, template_brain """ Explanation: Now, we'll register the affine of the subject's brain to the template brain. This aligns the two brains, preparing the subject's brain to be warped to the template. 
<div class="alert alert-danger"><h4>Warning</h4><p>Here we use custom ``zooms`` just for speed (this downsamples the image resolution), in general we recommend using ``zooms=None`` (default) for highest accuracy!</p></div> End of explanation """ # first we need our montage but it needs to be converted to "mri" coordinates # using our ``subj_trans`` montage = raw.get_montage() montage.apply_trans(subj_trans) montage_warped, elec_image, warped_elec_image = mne.warp_montage_volume( montage, CT_aligned, reg_affine, sdr_morph, thresh=0.25, subject_from='sample_seeg', subjects_dir_from=op.join(misc_path, 'seeg'), subject_to='fsaverage', subjects_dir_to=subjects_dir) fig, axes = plt.subplots(2, 1, figsize=(8, 8)) nilearn.plotting.plot_glass_brain(elec_image, axes=axes[0], cmap='Dark2') fig.text(0.1, 0.65, 'Subject T1', rotation='vertical') nilearn.plotting.plot_glass_brain(warped_elec_image, axes=axes[1], cmap='Dark2') fig.text(0.1, 0.25, 'fsaverage', rotation='vertical') fig.suptitle('Electrodes warped to fsaverage') del CT_aligned """ Explanation: Finally, we'll apply the registrations to the electrode contact coordinates. The brain image is warped to the template but the goal was to warp the positions of the electrode contacts. To do that, we'll make an image that is a lookup table of the electrode contacts. In this image, the background will be 0 s all the bright voxels near the location of the first contact will be 1 s, the second 2 s and so on. This image can then be warped by the SDR transform. We can finally recover a position by averaging the positions of all the voxels that had the contact's lookup number in the warped image. End of explanation """ # first we need to add fiducials so that we can define the "head" coordinate # frame in terms of them (with the origin at the center between LPA and RPA) montage_warped.add_estimated_fiducials('fsaverage', subjects_dir) # compute the head<->mri ``trans`` now using the fiducials template_trans = mne.channels.compute_native_head_t(montage_warped) # now we can set the montage and, because there are fiducials in the montage, # the montage will be properly transformed to "head" coordinates when we do # (this step uses ``template_trans`` but it is recomputed behind the scenes) raw.set_montage(montage_warped) # plot the resulting alignment brain = mne.viz.Brain('fsaverage', subjects_dir=subjects_dir, **brain_kwargs) brain.add_sensors(raw.info, trans=template_trans) brain.show_view(**view_kwargs) """ Explanation: We can now plot the result. You can compare this to the plot in tut-working-with-seeg to see the difference between this morph, which is more complex, and the less-complex, linear Talairach transformation. By accounting for the shape of this particular subject's brain using the SDR to warp the positions of the electrode contacts, the position in the template brain is able to be more accurately estimated. <div class="alert alert-info"><h4>Note</h4><p>The accuracy of warping to the template has been degraded by using ``zooms`` to downsample the image before registration which makes some of the contacts inaccurately appear outside the brain.</p></div> End of explanation """
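A closing aside on the lookup-table warping used above: conceptually, recovering each warped contact position just means averaging the RAS coordinates of every voxel that carries that contact's label in the warped image. A schematic sketch of that idea (not the actual implementation inside ``mne.warp_montage_volume``; ``warped_data`` and ``affine`` stand for the warped label volume and its voxel-to-RAS affine):
def label_centroid(warped_data, affine, label):
    vox = np.array(np.nonzero(warped_data == label)).T   # (n_voxels, 3) voxel indices
    vox = np.column_stack([vox, np.ones(len(vox))])      # homogeneous coordinates
    return (vox @ affine.T)[:, :3].mean(axis=0)          # mean position in RAS space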
slowvak/MachineLearningForMedicalImages
notebooks/Module 2 .ipynb
mit
%matplotlib inline import warnings warnings.filterwarnings('ignore') import os import numpy as np import matplotlib.pyplot as plt import pylab from mpl_toolkits.mplot3d import Axes3D from sklearn import svm import pandas as pd from matplotlib.colors import ListedColormap from sklearn.model_selection import StratifiedShuffleSplit from sklearn.model_selection import GridSearchCV from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.tree import DecisionTreeClassifier,export_graphviz import sklearn.metrics as metrics from sklearn import tree from IPython.display import Image from sklearn.externals.six import StringIO import pydotplus from sklearn.learning_curve import learning_curve from sklearn.preprocessing import StandardScaler def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)): """ Generate a simple plot of the test and training learning curve. Parameters ---------- estimator : object type that implements the "fit" and "predict" methods An object of that type which is cloned for each validation. title : string Title for the chart. X : array-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape (n_samples) or (n_samples, n_features), optional Target relative to X for classification or regression; None for unsupervised learning. ylim : tuple, shape (ymin, ymax), optional Defines minimum and maximum yvalues plotted. cv : int, cross-validation generator or an iterable, optional Determines the cross-validation splitting strategy. Possible inputs for cv are: - None, to use the default 3-fold cross-validation, - integer, to specify the number of folds. - An object to be used as a cross-validation generator. - An iterable yielding train/test splits. For integer/None inputs, if ``y`` is binary or multiclass, :class:`StratifiedKFold` used. If the estimator is not a classifier or if ``y`` is neither binary nor multiclass, :class:`KFold` is used. Refer :ref:`User Guide <cross_validation>` for the various cross-validators that can be used here. n_jobs : integer, optional Number of jobs to run in parallel (default 1). 
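    train_sizes : array-like, optional
        Relative or absolute numbers of training examples used to generate
        the learning curve (default ``np.linspace(0.1, 1.0, 5)``).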
""" plt.figure() plt.title(title) if ylim is not None: plt.ylim(*ylim) plt.xlabel("Training examples") plt.ylabel("Score") train_sizes, train_scores, test_scores = learning_curve( estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes) train_scores_mean = np.mean(train_scores, axis=1) train_scores_std = np.std(train_scores, axis=1) test_scores_mean = np.mean(test_scores, axis=1) test_scores_std = np.std(test_scores, axis=1) plt.grid() plt.fill_between(train_sizes, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.1, color="r") plt.fill_between(train_sizes, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.1, color="g") plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score") plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score") plt.legend(loc="best") return plt """ Explanation: Supervised Classification: Decision Trees Import Libraries End of explanation """ Data=pd.read_csv ('DataExample.csv') # if you need to print or have access to the data as numpy array you can execute the following commands # print (Data) # print(Data.as_matrix(columns=['NAWMpost'])) """ Explanation: Read the dataset In this case the training dataset is just a csv file. In case of larger dataset more advanced file fromats like hdf5 are used. Pandas is used to load the files. End of explanation """ ClassBrainTissuepost=(Data['ClassTissuePost'].values) ClassBrainTissuepost= (np.asarray(ClassBrainTissuepost)) ClassBrainTissuepost=ClassBrainTissuepost[~np.isnan(ClassBrainTissuepost)] ClassBrainTissuepre=(Data[['ClassTissuePre']].values) ClassBrainTissuepre= (np.asarray(ClassBrainTissuepre)) ClassBrainTissuepre=ClassBrainTissuepre[~np.isnan(ClassBrainTissuepre)] ClassTUMORpost=(Data[['ClassTumorPost']].values) ClassTUMORpost= (np.asarray(ClassTUMORpost)) ClassTUMORpost=ClassTUMORpost[~np.isnan(ClassTUMORpost)] ClassTUMORpre=(Data[['ClassTumorPre']].values) ClassTUMORpre= (np.asarray(ClassTUMORpre)) ClassTUMORpre=ClassTUMORpre[~np.isnan(ClassTUMORpre)] X_1 = np.stack((ClassBrainTissuepost,ClassBrainTissuepre)) # we only take the first two features. X_2 = np.stack((ClassTUMORpost,ClassTUMORpre)) X=np.concatenate((X_1.transpose(), X_2.transpose()),axis=0) y =np.zeros((np.shape(X))[0]) y[np.shape(X_1)[1]:]=1 """ Explanation: Creating training sets Each class of tissue in our pandas framework has a pre assigned label (Module 1). This labels were: - ClassTissuePost - ClassTissuePre - ClassTissueFlair - ClassTumorPost - ClassTumorPre - ClassTumorFlair - ClassEdemaPost - ClassEdemaPre - ClassEdemaFlair For demonstration purposes we will create a feature vector that contains the intensities for the tumor and brain tissue are from the T1w pre and post contrast images. End of explanation """ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0) """ Explanation: X is the feature vector y are the labels Split Training/Validation End of explanation """ model = DecisionTreeClassifier() model.fit(X_train, y_train) # Print a summary of our model print(model) """ Explanation: Create the classifier For the following example we will consider a Decision tree classifier. 
The classifier is provided by the Scikit-Learn library End of explanation """ # make predictions expected = y_test predicted = model.predict(X_test) # summarize the fit of the model print(metrics.classification_report(expected, predicted)) print(metrics.confusion_matrix(expected, predicted)) """ Explanation: Run some basic analytics Calculate some basic metrics. End of explanation """ max_leaf_nodes_eval =[2,3,4,5,6,9] classifier = GridSearchCV(estimator=model, cv=5, param_grid=dict(max_leaf_nodes=max_leaf_nodes_eval)) classifier.fit(X_train, y_train) """ Explanation: Correct way Fine tune hyperparameters End of explanation """ title = 'Learning Curves (Decision Tree, max_leaf_nodes=%.1f)' %classifier.best_estimator_.max_leaf_nodes estimator = DecisionTreeClassifier( max_leaf_nodes=classifier.best_estimator_.max_leaf_nodes) plot_learning_curve(estimator, title, X_train, y_train, cv=5) plt.show() """ Explanation: Debug algorithm with learning curve X_train is randomly split into a training and a test set 3 times (n_iter=3). Each point on the training-score curve is the average of 3 scores where the model was trained and evaluated on the first i training examples. Each point on the cross-validation score curve is the average of 3 scores where the model was trained on the first i training examples and evaluated on all examples of the test set. End of explanation """ title = 'Learning Curves (Decision Tree, max_leaf_nodes=%.1f)' %classifier.best_estimator_.max_leaf_nodes estimator = DecisionTreeClassifier( max_leaf_nodes=classifier.best_estimator_.max_leaf_nodes) IND=np.random.randint(np.shape(X_train)[0], size=100) plot_learning_curve(estimator, title, X_train[IND], y_train[IND], cv=5) plt.show() """ Explanation: Example of Underfitting End of explanation """ classifier.score(X_test, y_test) """ Explanation: Final evaluation on the test set End of explanation """ from IPython.display import Image dot_data = tree.export_graphviz(model, out_file=None, feature_names=['T1 pre', 'T1w post'], class_names=['WM', 'Tumor'], filled=True, rounded=True, special_characters=True) graph = pydotplus.graph_from_dot_data(dot_data) Image(graph.create_png()) """ Explanation: Create a visualization of our decision tree End of explanation """
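A compatibility note on this module (an assumption about newer library versions, not part of the original exercise): on recent scikit-learn releases learning_curve lives in sklearn.model_selection rather than sklearn.learning_curve, and the Graphviz/pydotplus visualization can be replaced by a plain-text rendering of the fitted tree:
from sklearn.model_selection import learning_curve   # replaces sklearn.learning_curve
from sklearn.tree import export_text                 # available in newer scikit-learn
# feature names assumed to follow the column order used when X was assembled (post, pre)
print(export_text(model, feature_names=['T1 post', 'T1 pre']))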
phoebe-project/phoebe2-docs
2.2/examples/sun_earth.ipynb
gpl-3.0
!pip install -I "phoebe>=2.1,<2.2" """ Explanation: Sun-Earth System NOTE: planets are currently under testing and not yet supported Setup Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). End of explanation """ %matplotlib inline import phoebe from phoebe import u # units from phoebe import c # constants import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary(starA='sun', starB='earth', orbit='earthorbit') """ Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details. End of explanation """ b.set_value('teff@sun', 1.0*u.solTeff) b.set_value('requiv@sun', 1.0*u.solRad) b.flip_constraint('period@sun', solve_for='syncpar') b.set_value('period@sun', 24.47*u.d) #b.set_value('incl', 23.5*u.deg) b.set_value('teff@earth', 252*u.K) b.set_value('requiv@earth', 1.0*c.R_earth) b.flip_constraint('period@earth', solve_for='syncpar') b.set_value('period@earth', 1*u.d) b.set_value('sma@earthorbit', 1*u.AU) b.set_value('period@earthorbit', 1*u.yr) b.set_value('q@earthorbit', c.M_earth/c.M_sun) #b.set_value('ecc@earthorbit') print("Msun: {}".format(b.get_quantity('mass@sun@component', unit=u.solMass))) print("Mearth: {}".format(b.get_quantity('mass@earth@component', unit=u.solMass))) """ Explanation: Setting Parameters End of explanation """ b.add_dataset('mesh', times=[0.5], dataset='mesh01') b.add_dataset('lc', times=np.linspace(-0.5,0.5,51), dataset='lc01') b.set_value('ld_func@earth', 'logarithmic') b.set_value('ld_coeffs@earth', [0.0, 0.0]) """ Explanation: Running Compute End of explanation """ b['distortion_method@earth'] = 'rotstar' """ Explanation: We'll have the sun follow a roche potential and the earth follow a rotating sphere (rotstar). NOTE: this doesn't work yet because the rpole<->potential is still being defined by roche, giving the earth a polar radius way too small. End of explanation """ b['atm@earth'] = 'blackbody' b.set_value_all('ld_func@earth', 'logarithmic') b.set_value_all('ld_coeffs@earth', [0, 0]) b.run_compute() axs, artists = b.plot(dataset='mesh01', show=True) axs, artists = b.plot(dataset='mesh01', component='sun', show=True) axs, artists = b.plot(dataset='mesh01', component='earth', show=True) b['requiv@earth@component'] axs, artists = b.plot(dataset='lc01', show=True) """ Explanation: The temperatures of earth will fall far out of bounds for any atmosphere model, so let's set the earth to be a blackbody and use a supported limb-darkening model (the default 'interp' is not valid for blackbody atmospheres). End of explanation """
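As a rough sanity check on the synthetic light curve above (still a useful reference even though, per the notes above, planet support is experimental): the purely geometric transit depth for an Earth-sized body crossing a Sun-sized star is (R_earth/R_sun)^2 = (6371 km / 695700 km)^2, about 8.4e-5 of the total flux. Assuming c and u behave like astropy constants and units (as the calls above suggest), that number can be confirmed with:
depth = ((1.0 * c.R_earth) / (1.0 * u.solRad)).decompose()**2
print(depth)  # ~8.4e-5 -- expected fractional flux drop during the Earth transit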