Pandas: select rows and columns in a MultiIndex dataframe

We want to select or slice the rows and columns of a MultiIndex dataframe. In this post we will look at how to slice the dataframe using the index at all levels of the rows and columns. A MultiIndex dataframe can have a multi-level index for both rows and columns, and the MultiIndex keeps all the defined levels of an index even if they are not actually used.

There are four methods that can be used to select the rows/columns of a MultiIndex dataframe:

- loc and iloc, which access groups of rows and columns by labels and integer positions respectively
- DataFrame.xs, which returns a cross-section from the Series/DataFrame
- DataFrame.query, which queries the columns of a DataFrame with a boolean expression
- pd.IndexSlice, which creates an object to more easily perform multi-index slicing

We will follow these steps to select rows and columns from a MultiIndex dataframe:

- Create a MultiIndex dataframe
- Use loc to slice the dataframe using labels
- Use iloc to slice the dataframe based on the integer positions of the indexes
- Use slicers to slice a MultiIndex by providing multiple indexers
- Use the xs method, which takes a key argument to select data at a particular level of a MultiIndex
- Use query to select rows based on conditions with the help of a boolean expression
- Use IndexSlice with the default slice syntax to perform MultiIndex slicing

Create a MultiIndex dataframe

We first create a MultiIndex from the cartesian product of a list of rows and columns, and then build a dataframe using the MultiIndex rows and columns (one possible construction is sketched after the loc/iloc examples below). The dataframe row index has two levels, Region and Division, and the columns likewise have two levels: Quarter (Q1 & Q2) and Buy & Sell.

```python
import pandas as pd
import numpy as np
```

Using loc and iloc to slice a MultiIndex dataframe

We want to slice the above dataframe using loc and iloc. Let's start by slicing just one row by Region.

Slice a row at level=0

The row label at level=0, i.e. Region, is passed as a list to slice the Region East rows:

```python
multidf.loc[['East']]
```

This slices the rows with Region East at level=0 and all rows at level=1.

Slice rows and columns at level=0

We can select all the rows starting from Region North and just the Q1 columns:

```python
multidf.loc['North':, :'Q1']
```

Slice columns at level=1

We want to slice with the column index at level=1; in this case we would like to see all the rows and just two columns, Q1-Sell and Q2-Buy:

```python
multidf.loc[:, ('Q1', 'Sell'):('Q2', 'Buy')]
```

Slice rows at level=1

Using iloc, we can select rows and columns based on integer position at all levels of the row and column indexes. We want every second row at level=1 of the row index, and one column, Q2-Sell:

```python
multidf.iloc[::2, 3:4]
```

This skips every other row and selects the column at position 3. You can see that the row East-B is skipped, and thereafter North-A and North-C are skipped.
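Before moving on to slicers: the examples above and below assume a frame named multidf whose construction code did not survive here. The following is a minimal sketch of one way to build it; the exact labels and the random data are assumptions inferred from the outputs shown in this post.

```python
import pandas as pd
import numpy as np

# Assumed labels, inferred from the examples: four regions, three divisions,
# two quarters with Buy/Sell columns under each.
rows = pd.MultiIndex.from_product(
    [['East', 'North', 'South', 'West'], ['A', 'B', 'C']],
    names=['Region', 'Division'])
cols = pd.MultiIndex.from_product(
    [['Q1', 'Q2'], ['Buy', 'Sell']],
    names=['Quarter', 'Type'])

# Random sample data, just so the slicing examples have something to show.
multidf = pd.DataFrame(
    np.round(np.random.uniform(1, 5, size=(len(rows), len(cols))), 2),
    index=rows, columns=cols)
print(multidf)
```

Because from_product generates the labels in sorted order, label-range slices such as multidf.loc['North':, :'Q1'] work without an explicit sort_index() call.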
Using slicers to slice a MultiIndex by multiple indexers

We can provide any of the selectors as if we were indexing by label, and we can use slice(None) to select all the content of a level without having to specify all the deeper levels. Let's understand how to use slicers with an example: we want to slice the rows with Regions East & North at level=0 and Divisions B & C at level=1:

```python
multidf.loc[(slice('East', 'North'), slice('B', 'C')), :]
```

Another example: here we want to slice the East region and the column Q1-Buy:

```python
multidf.loc[(slice('East')), (slice('Q1'), slice('Buy'))]
```

If you want both the Q1 & Q2 Buy columns:

```python
multidf.loc[(slice('East')), (slice('Q1', 'Q2'), slice('Buy'))]
```

Using xs to slice a MultiIndex dataframe

The xs method returns a cross-section from the Series/DataFrame. It takes a key argument to select data at a particular level of a MultiIndex; we can also specify the level argument to indicate which levels are used. Levels can be referred to by label or position.

Select the rows for Region East; we can also specify level=0 in this case:

```python
multidf.xs('East', level=0)
```

We can also slice by multiple levels by passing the levels as a list. Here we want to select the rows with Region North at level=0 and Division C at level=1:

```python
multidf.xs(('North', 'C'), level=[0, 1])
```

We can slice columns too. Here we want only Q1-Buy, so the column index levels are passed as [0, 1] along with axis=1:

```python
multidf.xs(('Q1', 'Buy'), level=[0, 1], axis=1)
```

Using query to slice a MultiIndex dataframe based on a condition

We want to query the columns of a DataFrame with a boolean expression. query takes an expression to be evaluated as its parameter and returns the DataFrame resulting from the provided query expression.

We want to get all the rows where Division is 'A':

```python
multidf.query("Division == 'A'")
```

Select all the rows in Divisions A & B:

```python
multidf.query("Division.isin(['A', 'B'])")
```

Using get_level_values()

This method returns an Index of values for the requested level and is useful for getting an individual level of values from a MultiIndex.

Select all rows where Division is A; you can pass the label as a parameter to get_level_values():

```python
multidf[multidf.index.get_level_values('Division') == 'A']
```

We can also filter the column index. Here we want all the Buy columns in the dataframe; you can also pass the level's integer position as the parameter:

```python
multidf.loc[:, multidf.columns.get_level_values(1) == 'Buy']
```

Select rows based on conditions

Let's select rows based on conditions, using get_level_values() and loc to filter the dataframe. We first define our MultiIndex condition and save it in a variable; here we want Q1-Sell > 2.5, Q1-Buy > 4, and Region in North or South:

```python
condition = ((multidf[('Q1', 'Sell')] > 2.5)
             & (multidf[('Q1', 'Buy')] > 4)
             & (multidf.index.get_level_values(0).isin(['South', 'North'])))
multidf[condition]
```

Alternatively, we can use np.where() to filter the rows based on the condition. The numpy.where() function returns the indexes of all the rows for which the condition evaluates to true:

```python
multidf.iloc[np.where(condition)]
```

Another example, where we want the row where Q1-Sell is at its maximum. We use argmax, which returns the index of the maximum value along an axis:

```python
multidf.iloc[np.argmax(multidf[('Q1', 'Sell')])]
```

Out:

```
Q1  Buy     1.08
    Sell    4.54
Q2  Buy     4.73
    Sell    2.19
Name: (North, C), dtype: float64
```

Using IndexSlice

IndexSlice creates an object to perform multi-index slicing, which means we don't have to construct the slices ourselves: all usages of the colon (:) are converted into slice objects.
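For instance, here is a quick sketch of that conversion; nothing here is specific to our frame, it just shows the tuple that IndexSlice builds under the hood:

```python
idx = pd.IndexSlice
idx[:, 'B':'C']
# -> (slice(None, None, None), slice('B', 'C', None))
```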
If multiple arguments are passed to the index operator, the arguments are turned into n-tuples:

```python
idx = pd.IndexSlice
multidf.loc[idx[:, 'A':'B'], :]
```

This returns the Division A and B rows for all Regions. Note that IndexSlice requires you to specify enough levels of the MultiIndex to remove ambiguity. Here we slice by both Region and Division:

```python
idx = pd.IndexSlice
multidf.loc[idx['East':'North', 'A':'B'], :]
```

We can also slice the columns; here we want only Q1 at level=0 for all the Regions and Divisions:

```python
idx = pd.IndexSlice
multidf.loc[:, :'Q1']
```
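The same object works on the column axis as well. A short sketch, using the assumed multidf from earlier, selecting the Buy column under each quarter:

```python
idx = pd.IndexSlice
# All rows; only the Buy columns at level=1 of the column index.
multidf.loc[:, idx[:, 'Buy']]
```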
https://kanoki.org/2022/07/25/pandas-select-slice-rows-columns-multiindex-dataframe/
Hi Richard Did I understand your "1." correctly that this suite (Test for APIs) would be something competing with Sun's TCK? Thanks, Mikhail 2006/3/21, Richard Liang <richard.liangyx@gmail.com>: > Geir Magnusson Jr wrote: > > Good unit tests are going to be testing things that are package > > protected. You can't do that if you aren't in the same package > > (obviously). With the "custom" of putting in things in o.a.h.t are we > > implicitly discouraging good testing practice? Given that this > > o.a.h.t.* pattern comes from Eclipse-land, how do they do it? I > > couldn't imagine that the Eclipse tests don't test package protected > > things. > > > Hello Geir, > > Maybe we should have two types of test suites: > 1. Test for APIs including public and protected methods which could be > run against different Java SE implementations. > ==>> If we want to test a protected method of a class, we could mock a > subclass of this class. And write test case against the subclass. > (Protected methods are accessible to subclass) > > 2. Test for internal implementation which may include tests for package > private methods and tests for other internal-used classes. > ==>>We must put the tests into the same package if we want to test > package private methods of a class. > > These are just some rough thinking ;-) Any comments? Thanks a lot. > > I've been short of Round Tuits lately, but I still would like to > > investigate a test harness that helps us by mitigating the security > > issues... > > > > geir > > > > Mark Hindess wrote: > >> I thought the crucial thing was that tests should be in a separate > >> namespace not in the namespace of the package they are testing (at > >> least not unless it was absolutely necessary). > >> -Mark. > >> > >> On 3/20/06, Geir Magnusson Jr <geir@pobox.com> wrote: > >>> I'm doing it now. > >>> > >>> I need to go back and stare at our discussion on test setup, because > >>> I'm > >>> still not a raving fan of o.a.h.test.... > >>> > >>> geir > >>> > >>> Mark Hindess wrote: > >>>> Don't worry, you'd have to be less subtle for me to take something > >>>> as criticism. > >>>> > >>>> I've had an attempt at moving beans out - HARMONY-218. If that gets > >>>> committed I'll do the other too. > >>>> > >>>> -Mark. > >>>> > >>>> On 3/20/06, Geir Magnusson Jr <geir@pobox.com> wrote: > >>>>> That wasn't a criticism, btw. It seemed like a natural thing to > >>>>> do when > >>>>> I first saw it, but when I was actually dealing w/ it, my opinion > >>>>> changed. > >>>>> > >>>>> Yah, split away! That was going to be my next question, how to > >>>>> split.. > >>>>> > >>>>> geir > >>>>> > >>>>> Mark Hindess wrote: > >>>>>> Fair enough. > >>>>>> > >>>>>> Mind if I redo the script/patch to split the three modules to match > >>>>>> the structure of the others? That is, into separate modules/math, > >>>>>> modules/beans, modules/regex directories? > >>>>>> > >>>>>> Regards, > >>>>>> Mark. > >>>>>> > >>>>>> On 3/20/06, Geir Magnusson Jr <geir@pobox.com> wrote: > >>>>>>> I just committed. There was some delay because of a missing > >>>>>>> CCLA. Sorry. > >>>>>>> > >>>>>>> I've committed the code as is from the JIRA. I'm going to do > >>>>>>> some basic > >>>>>>> cleanup and then look at hte patches to integrate. > >>>>>>> > >>>>>>> Looking at this (and 88?) I think that this "add patches" > >>>>>>> approach is a > >>>>>>> bad one, because it complicates what this JIRA is now. 
> >>>>>>> > >>>>>>> In the future, I think we should just create new JIRA's for > >>>>>>> add-ons (if > >>>>>>> the add-on contributor isn't the contributor of the original > >>>>>>> JIRA) and > >>>>>>> just link them so they are easy to keep track of... > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> Richard Liang wrote: > >>>>>>>> Geir Magnusson Jr wrote: > >>>>>>>>> Despite a touch of trouble with the packaging of the > >>>>>>>>> contribution, it > >>>>>>>>> passed with flying colors ( or 'colours', for our UK friends...) > >>>>>>>>> > >>>>>>>>> +1 from : > >>>>>>>>> > >>>>>>>>> Geir > >>>>>>>>> Stefano > >>>>>>>>> Dims > >>>>>>>>> Tim > >>>>>>>>> Leo > >>>>>>>>> > >>>>>>>>> In it comes.... > >>>>>>>>> > >>>>>>>>>. > >>>>>>>>>> > >>>>>>>>>> Go... > >>>>>>>>>> > >>>>>>>>>> geir > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>> Hello Geir, > >>>>>>>> > >>>>>>>> As this contribution has been accepted for a long time, I'm > >>>>>>>> wondering > >>>>>>>> when the source code could be put into Harmony SVN. > >>>>>>>> > >>>>>>>> I'm working on the implementation of java.text.DecimalFormat > >>>>>>>> which has > >>>>>>>> enhancements on BigDecimal and BigInteger support. Now I just > >>>>>>>> use this > >>>>>>>> contribution as external jars in Eclipse. > >>>>>>>> > >>>>>> -- > >>>>>> Mark Hindess <mark.hindess@googlemail.com> > >>>>>> IBM Java Technology Centre, UK. > >>>>>> > >>>>>> > >>>> > >>>> -- > >>>> Mark Hindess <mark.hindess@googlemail.com> > >>>> IBM Java Technology Centre, UK. > >>>> > >>>> > >> > >> > >> -- > >> Mark Hindess <mark.hindess@googlemail.com> > >> IBM Java Technology Centre, UK. > >> > >> > > > > > -- > Richard Liang > China Software Development Lab, IBM > > >
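(Editorial addendum: a minimal Java sketch of the subclassing approach Richard describes near the top of this thread, where a test-only subclass exposes a protected method. Class and method names here are hypothetical, not from Harmony.)

```java
// Hypothetical production class with a protected method.
public class Formatter {
    protected String pad(String s, int width) {
        StringBuilder sb = new StringBuilder(s);
        while (sb.length() < width) {
            sb.append(' ');
        }
        return sb.toString();
    }
}

// Test-only subclass, which may live in a different (test) package;
// it can still call pad() because protected members are visible to subclasses.
class FormatterTestHelper extends Formatter {
    public String callPad(String s, int width) {
        return pad(s, width);
    }
}
```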
http://mail-archives.apache.org/mod_mbox/harmony-dev/200603.mbox/%3C906dd82e0603210501i396de20bp@mail.gmail.com%3E
[The Moogle’s] winning solution: First, [The Moogle] opened up the barcode in Photoshop, zoomed in, and added a grid of lines below to help in reading out the binary code. The red markers were used to help delineate between data chunks. The image was then put into a spreadsheet program (OpenOffice Calc in this case) and the binary for each chunk was read out by hand. He formatted the binary in order to make sure he hadn’t made errors, then used a lookup table for Code 128 to generate the characters from each data chunk. Nice work! This solution was executed with tools that everyone has and knows how to use.

40 thoughts on “Barcode Challenge – Part 2”

i got the same code “hackaday.com – hacking since 2004” i used

That one was easy. “Smart? Try different languages” The barcode was hex which when translated into ASCII was “Smart? Subukan ang ibang mga wika.” Translate the “Subukan ang ibang mga wika” from Filipino to English and you get the final string. Smart? Try different languages. HEX, Filipino

Damn, so close. Oh and I used:

i got as far as the fact it was hex ascii and was gonna start doing it but then figured that someone would get it first….. prob not the right attitude. I do enjoy them, but perhaps having a secret submission would spur ppl on to getting it.

1. Used old POS scanner to get code from barcode: “536D6172743F20537562756B616E20616E67206962616E67206D67612077696B612E”
2. used to convert hex to ASCII: “Smart? Subukan ang ibang mga wika.”
3. used google translate… said Filipino equivalent was “Smart? Try different languages.”

Use to get the data of the barcode. That gets: 536D6172743F20537562756B616E20616E67206962616E67206D67612077696B612E. Looks like hexadecimal. Use to convert it to text. That gives: “Smart? Subukan ang ibang mga wika.” Use to detect and translate to English, giving: “Smart? Try different languages.” (Translated from Filipino.)

Next Hackaday should have an encrypted code, just to see how fast it could be broken ;) … Just not AES…

Right before publishing this we asked ourselves “Is this one difficult enough?”… I guess we were wrong.

@Mike Szczys Still fun to see the rush for an answer… you guys should have some incredibly hard competition for everyone, not saying i would even have the remote chance, but it would be interesting to see people struggle for a few days to find the answer to a simple code, and would get a few bizarre, crazy solutions.

Yeah it was fun even though once I saw the post it took about 2 minutes to get the answer. Easy doesn’t necessarily mean un-enjoyable. That being said, a harder one would be nice. :)

UGH I worked so hard on this and you guys got it so fast. i feel dumb ;_; I used mspaint to count the pixels, similar to the previous contest’s method. Then I decoded the binary string with Code 128, again like the previous contest. it looked like gibberish so i was about to try another encoding, when i noticed there were a lot of 6’s. i converted hex to ascii and got Smart? Subukan ang ibang mga wika. That second part looked like gibberish so I tried to solve it like a cryptoquip, only to fail miserably. I looked up the individual words and got results for a Filipino dictionary. So I translated it and got Smart? Try different languages. maybe i’m a little out of my league

To be correct, it is Tagalog, not Filipino.

I agree with JP. Is so easy. Get famous with this: Hint: German

I have a good idea: why don’t we set up a decode-athon instead of these short and easy challenges.
just put a bunch of different languages with different messages and see who deciphers them first. you and the google can make it happen!

googled barcode decoder. 2nd result: submitted picture of barcode, received string of hex 536D6172743F20537562756B616E20616E67206962616E67206D67612077696B612E. googled hex converter. 1st result: pasted string into “hex” box, converted; got “Smart? Subukan ang ibang mga wika.” as text. typed “google translate” in chrome address bar, chose detect language, pasted “Smart? Subukan ang ibang mga wika.” result: Smart? Try different languages.

There used to be a wargames site called dievo.org that was very very thorough, teaching the basics of research up to live shell wargames where to make it from level to level you needed to exploit a binary that ran as the next level’s user and coerce it to give you a shell. Hackaday should try to contact the site owner and get it back up and running. It was seriously the greatest computer security education I’ve ever gotten, and I have a bachelors degree in computer science with a specialization in information security.

onlinebarcodereader.com 536D6172743F20537562756B616E20616E67206962616E67206D67612077696B612E i know, too simple.

oh, i didn’t even think about a hex converter. crap. 536D6172743F20537562756B616E20616E67206962616E67206D67612077696B612E IS Smart? Subukan ang ibang mga wika. IS Smart? Try different languages. there we go. i lose

The answer in one URL: I wanted to have a solution entirely “in the cloud” using a chain of different Web services, one function at a time. But I failed to find a hexadecimal-to-text Web service that takes a URL as a GET parameter (so I have made a temporary one on my own server). Furthermore, due to the word “smart”, the automatic language detection of Google is not working. So not a very cool answer in the end :-(

You can also get the binary string encoded in the image using imagemagick (convert barcode_challenge_part_2.jpg -crop 800x1+0+99 barcode_challenge_part_2.pbm), and get it printed to the screen by means of od (with something like od -j74 -x barcode_challenge_part_2.pbm | cut -d' ' -f2-). Then you could apply the decoding algorithm. Of course it’s one hell of a job.
Again, I used the python script I wrote for the last challenge, just substituting the old url with the new one:

```python
import cStringIO, urllib, Image

# Code 128 bar patterns, indexed by symbol value (106 symbols plus the stop pattern).
bin = ['11011001100', '11001101100', '11001100110', '10010011000', '10010001100',
       '10001001100', '10011001000', '10011000100', '10001100100', '11001001000',
       '11001000100', '11000100100', '10110011100', '10011011100', '10011001110',
       '10111001100', '10011101100', '10011100110', '11001110010', '11001011100',
       '11001001110', '11011100100', '11001110100', '11101101110', '11101001100',
       '11100101100', '11100100110', '11101100100', '11100110100', '11100110010',
       '11011011000', '11011000110', '11000110110', '10100011000', '10001011000',
       '10001000110', '10110001000', '10001101000', '10001100010', '11010001000',
       '11000101000', '11000100010', '10110111000', '10110001110', '10001101110',
       '10111011000', '10111000110', '10001110110', '11101110110', '11010001110',
       '11000101110', '11011101000', '11011100010', '11011101110', '11101011000',
       '11101000110', '11100010110', '11101101000', '11101100010', '11100011010',
       '11101111010', '11001000010', '11110001010', '10100110000', '10100001100',
       '10010110000', '10010000110', '10000101100', '10000100110', '10110010000',
       '10110000100', '10011010000', '10011000010', '10000110100', '10000110010',
       '11000010010', '11001010000', '11110111010', '11000010100', '10001111010',
       '10100111100', '10010111100', '10010011110', '10111100100', '10011110100',
       '10011110010', '11110100100', '11110010100', '11110010010', '11011011110',
       '11011110110', '11110110110', '10101111000', '10100011110', '10001011110',
       '10111101000', '10111100010', '11110101000', '11110100010', '10111011110',
       '10111101110', '11101011110', '11110101110', '11010000100', '11010010000',
       '11010011100', '1100011101011']

# Code 128B value v maps to chr(32 + v) for v in 0..94; the middle of the
# transcribed table was lost, so the ASCII portion is rebuilt here.
val = [chr(32 + i) for i in range(95)] + [
    'DEL', 'FNC3', 'FNC2', 'SHIFT', 'Code C', 'FNC4', 'Code A', 'FNC1',
    'START A', 'START B', 'START C', 'STOP']

def parsePixel(pix):
    r, g, b = pix
    if r + g + b > 500:
        return 0
    else:
        return 1

def parseChar(ch):
    for i in range(len(bin)):
        if ch == bin[i]:
            return i
    return -1

def parseImage(name):
    i = Image.open(name)
    count = -1
    char = ""
    for a in range(i.size[0]):
        pixel = parsePixel(i.getpixel((a, i.size[1] / 2)))
        if count == -1 and pixel == False:
            continue
        elif count == -1:
            count = 0
        count += 1
        char += str(pixel)
        if count >= 11:
            num = parseChar(char)
            if num == -1:
                continue
            print val[num],
            if num == 106:  # STOP symbol
                return
            char = ""
            count = 0

fp = urllib.urlopen('')
img = cStringIO.StringIO(fp.read())
parseImage(img)
```

this gave me: “(START B) 536D6172743F20537562756B616E20616E67206962616E67206D67612077696B612E- (STOP)”

Another python script:

```python
import binascii
print binascii.a2b_hex("536D6172743F20537562756B616E20616E67206962616E67206D67612077696B612E")
```

gave me: “Smart? Subukan ang ibang mga wika.”

Googling ‘subukan’ brought me to this page: from which I understood that the language was Filipino. Google translate then translated the last part for me: “Try different languages.” Thus the final answer is: “Smart? Try different languages.”

I googled some interesting combination and it exists: Code of Life: If only it would result in some readable text at one point.

Yes….. (rubs hands together with an evil grin)

I second a Weekly Hack-A-Day Challenge. This time make it fun! Perhaps you could find a sponsor for a prize ;)

I keep missing the posts for these challenges lol, but I think doing this kind of “challenge the audience” thing makes hackaday more interactive.
I like it, and for the most part agree with Nikropht – it would definitely be a plus, prize or not.

Found this for linux users: compile and launch zbarimg. Seems nice and fast.

1) On cursory examination, the code is black and white.
2) It appears the information is encoded horizontally, therefore (using GIMP) crop the picture down to a single row of black and white pixels.
3) Assign the color of negative space around the code – white – a value of 0, and the black valued pixels making up the positive space a value of 1.
4) Assume that the code does not begin with whitespace. Crop leading and trailing zero valued pixels.
5) Code is now 783 bits long. Bits will be referred to from left to right, bit 1 to bit 783. Assume each letter is encoded in the same length of bits. 783 can be (using a pocket calculator) factored to the following pairs: 783 = 3 * 3 * 3 * 29, giving
3 letters of length 261
9 letters of length 87
27 letters of length 29
29 letters of length 27
87 letters of length 9
261 letters of length 3
6) Break into lengths of letters. Assume that no letter begins with whitespace. Assume the code is meant to be machine readable with simple technology, therefore each letter should begin with a binary value and end in its complement. The code begins with 1, so assume each letter should end in 0. The code does not work for the above letter bit-lengths, therefore some sort of start or stop code is being utilized, i.e., the code is less than 783 bits long.
7) Perform cursory analysis of length between “0,1” pairs in code starting from beginning:
3, 6, 11, 14, 19, 22, 26, 28, 33, 37, 41, 44, 46, 51 (the multiples of 11 stand out)
Code appears to have breaks every 11 bits. Check bits 55 and 99 to examine hypothesis:
10001 | 00101
5 | 9
_8_) Recap: Shrink picture to manageable image of code, a vector with 783 values. 783’s factors do not point to a simple code, so check for patterns to decide letter length. “01” occurs every 11th bit, so work from that.
9) Highest multiple of 11 less than 783 is 781. This suggests that the last two bits are a stop code. Bits 781, 782, and 783 are “011”, which would indicate a stop code of “11.” Proceed assuming 11 bit letters.
10) (Using GIMP) convert image to a 1 bit palette. Since most computer graphics use additive color mixing, invert the values of the bitmap. Save image as “barcode2.c” (Filetype: C source code) in order to open it in a text editor in human readable fashion. Heed the warning that the plugin must convert to RGB for export.
11) Open “barcode2.c” in a text editor. Ignoring possible knowledge of the C programming language, it is trivial to see that the image data is stored in a series of “\377”s and “\0”s between quotes. Delete the text before and after this series of values.
12) Using the text editor’s functionality, delete every instance of quotation marks, all white space, and all {NEWLINE} characters.
13) Observe that the first 11 bits of our code are “11010010000” but the first 11 values in our text file are “\377\377\377\377\377\377\377\377” or “11111100011”. Remember the warning we heeded earlier; exported images will be encoded in RGB color depth. Each bit of our code is stored as “\377\377\377” = “1” and “\0\0\0” = “0”.
14) Using the text editor’s text substitution function, translate the code to 1’s and 0’s.
15) Observe that the first 11 bits of our text are “11010010000” as expected.
_16_) Recap: Using basic functionalities of GIMP and a text editor, translate the code into easily manipulated and simplified text, as opposed to an unwieldy graphic.
17) Using the text editor, break the code into 11 bit “letters.”
18) Each letter begins with “1” and ends with “0”, and there are two “1”s left over at the end. Our hypothesis of 11 bit letters is reinforced.
19) Substitute “A” for every instance of the first 11-bit letter. Likewise “B” for every instance of “101110010”, and continue through the code until it is simplified to single characters:
ABCDEDFGHGICJHKBCGBDHGBDLDFDMHKDFDMDGHKDNDHDFDMDGHKDEDGDFHKGGDNDLDFHMOP
20) Intuitively substitute letters to find patterns. There are 16 unique “letters” encoded, some of which might be markup or non-alphabet symbols – numbers, punctuation, etc. – so grind away at it.
e for most common letter (D)? looks weird.
{SPACE} for most common letter (D)? Thirteen single letter words, one thirteen letter word… clue?
e for G. It’s the only double in the code…
21) oy.
22) Give up on find and replace. Cheat and search for “11010010000” on google, hit “I’m Feeling Lucky”. If this doesn’t work, give up altogether.
23) I, in fact, am lucky.
_24_) Recap: Find the codetext. Cheat.
25) Using (the source of) translate a few “letters” and see if that’s enough of a clue. Will start with A, G, D, and H.
26) A = {START B}, D = “6”, G = “7”, H = “2”. Assuming double substitution. Since this will probably remove the patterns I could see, translate by han– wait!
27) In sorting the 11 bit letters in order to simplify translation, I again notice the 16 unique letters. Since the first four I translated were numbers and a start code, I assume HEX encoding. Our problem now simplifies itself from:
ABCDEDFGHGICJHKBCGBDHGBDLDFDMHKDFDMDGHKDNDHDFDMDGHKDEDGDFHKGGDNDLDFHMOP
to
{START B} BC 6E 6F 72 7I CJ 2K BC 7B 62 7B 6L 6F 6M 2K 6F 6M 67 2K 6N 62 6F 6M 67 2K 6E 67 6F 2K 77 6N 6L 6F 2M OP
to (the “+” appears due to a transcription error. The code is still valid, as the symbols in place are meaningless and were just shifted to not interfere with HEX notation)
{START B} MN 6P 6Q 72 7T NU 2V MN 7M 62 7M 6W 6Q 6+ 2V 6Q 6+ 67 2V 6X 62 6Q 6+ 67 2V 6P 67 6Q 2V 77 6X 6W 6Q 2+ YZ
to (knowing that lowercase a is 0x61)
{START B} MN 6P 6Q r 7T NU 2V MN 7M b 7M 6W 6Q 6+ 2V 6Q 6+ g 2V 6X b 6Q 6+ g 2V 6P g 6Q 2V w 6X 6W 6Q 2+ YZ
or (knowing lowercase letters go from 0x61 TO 0x7A)
___r_____b_______g__b__g__b__w_____
to (2V is common and has the right digit in the 16’s place for {SPACE})
___r__ __b____ __g _b__g _g_ w_____
to (MN, NU, YZ are either capitals or punctuation. NU {SPACE} MN suggests that MN is a capital letter, so M is either 5 or 4. Test a few letters:)
M=5 => _ubu___. possible, but strange.
M=4 => _tbt___. doubt it.
5N 6P 6Q r 7T NU 2V 5N 75 b 75 6W 6Q 6+ 2V 6Q 6+ g 2V 6X b 6Q 6+ g 2V 6P g 6Q 2V w 6X 6W 6Q 2+ YZ
5N 6P 6Q r 7T 3U 5N u b u 6W 6Q 6+ 6Q 6+ g 6X b 6Q 6+ g 6P g 6Q w 6X 6W 6Q 2+ YZ
to (NU and YZ are punctuation. U and Z are 0xE, 0xF, or 0x1. N and Y are 2 or 3. Starts with 0x52 or 0x53, “R” or “S”:)
N=2: R__r_! or R__r_. | Roars Rears | Rubu___
N=3: S__r_? | Store? Stork? Spork? Smart? | Subu___
What is being used? 0, 2, 5, 6, 7. N can’t be 2, so N=3.
53 6P 6Q r 7T 3U 53 u b u 6W 6Q 6+ 6Q 6+ g 6X b 6Q 6+ g 6P g 6Q w 6X 6W 6Q 2+ YZ
to
S 6P 6Q r 7T 3U S u b u 6W 6Q 6+ 6Q 6+ g 6X b 6Q 6+ g 6P g 6Q w 6X 6W 6Q 2+ YZ
or
S__r__ Subu___ __g _b__g _g_ w_____
Terminal punctuation with 3 in the 16s place has to be “?”:
S__r_? Subu___ __g _b__g _g_ w_____
U = 0xF. What is being used? 0, 3, 2, 5, 6, 7, F.
28) Sleep on it.
29) First word is “Smart? ” since 6P and 6Q have to be from the first 16 letters of the alphabet. P=0xD, Q=0x1, T=4.
S 6D 61 r 74 3U S u b u 6W 61 6+ 61 6+ g 6X b 61 6+ g 6D g 61 w 6X 6W 61 2+ YZ
What is being used? 0, 2, 5, 6, 7, F, D, 1.
30) Realize that the 16 unique values that inspired the HEX guess don’t really follow, since one was a START CODE. Feel lucky still.
31) S m a r t ? S u b u 6W a 6+ a 6+ g 6X b a 6+ g m g a w 6X 6W a 2+ YZ
Smart? Subu 6W a 6+ a 6+ g 6X ba 6+ g mga w 6X 6W a 2+ YZ
YZ appears to be a stop code, since 2+ is punctuation, and it doesn’t have an open quote or anything like that earlier. We also have a complete word. Probably triple substitution. I hope not. Maybe it means something. If it’s another substitution code, I don’t want to play.
W, +, X need decoding. 0, 1, 2, 3, 4, 5, 6, 7, D, F being used.
2+ is terminal punctuation, either “.” or “!”, plus either 0xE or 0x1. 1 is in use, so + = E.
Smart? Subu 6W a 6E a 6E g 6X ba 6E g mga w 6X 6W a 2E
Smart? Subu6Wang 6Xbang mga w6X6Wa.
0, 1, 2, 3, 4, 5, 6, 7, D, E, F being used. W and X are 8, 9, A, B, or C. Random subbing:
Smart? Subuhang ibang mga wiha. looks familiar. Google search for a word that is definite, “mga”: nothing. Not an abbreviation, since there are capital letters… searching other random words. Subuhang = lots of asian stuff. ibang = 11th result down on google says “…this is Tagalog…” so I’ll pop it in a translator and see what comes out. Google doesn’t have a Tagalog translator, but a google search for “Tagalog translator” gives the info that it’s the Philippines’ national language… No Reservations: Philippines on Tivo while I was working on this. No lie.
Google translate from Filipino to English: Smart? Subuhang other wiha. The “i” is correct, X = 9, but Subuhang doesn’t translate. Trying A, B, and C:
Smart? Subu6Wang ibang mga wi6Wa.
Smart? Subujang ibang mga wija. = Smart? Subujang other wija.
Smart? Subukang ibang mga wika. = Smart? Try other languages.
Smart? Subulang ibang mga wila. = Smart? Subulang other wila.
_32_) Recap: Kind of like a Sudoku from Hell. Sub letters until it starts making a little sense, and then just go with it. Keep track of what you’ve eliminated. Same general feeling as a Legend of Zelda puzzle. hmm.
-33-) Looks like the bar code translates to “Smart? Try other languages.” Whether that means I should continue to work on the code, I don’t know. But I think I’m done.

I agr33 with “thomas” n00bs need l0ve t0

@Ben: You sir, are a cognitive (and note taking) god. Good work grinding it out like that!

Took a similar process. luckily you stuck with 128B and didn’t change the code-set in the middle of the barcode (i would have had to make new lookup tables, bleh). after looking at the string I got… “536D6172743F20537562756B616E20616E67206962616E67206D67612077696B612E” it looked like it was made of hex. like Hakon, i used binascii in python and got “Smart? Subukan ang ibang mga wika.” then used google translate from there… nice challenge.

@Mike Szczys Unfortunately, as you can see, I cheated by trusting my luck to google and looking up four of the 11 bit letters. In hindsight, I wish I hadn’t, but I don’t think I would have gotten it otherwise. Thank you so much for your kind words! Oh, and how ethnocentric are we all for assuming the plaintext was supposed to be in English.

So what did you guys win? :P

An enthusiastic seconding of bringing dievo.org’s wargames back! Digital Evolution’s forums were a fount of info; Norse & co. were very generous with their time and energy, and the wargames were beyond compare. Norse & co.
have moved on, the forums are lost to the sands of time, but it would be amazing to see those wargames running somewhere again. For those who weren’t lucky enough to have played there, they ran the gamut from crypto to vulnerability exploitation (metasploit wouldn’t help you!) to riddle/puzzles. Most wargames went by levels, so before you could (for example) try your hand at heap-smashing, you had to successfully smash the stack.

Congrats Everyone! Great job!

Congrats, I wish I could read binary code that well.

```vb
Private Sub Form_Load()
    st = "536D6172743F20537562756B616E20616E67206962616E67206D67612077696B612E"
    For n = 1 To Len(st) Step 2
        strMyHexNum = "&H" & Mid(st, n, 2)
        lngdecimalvalue = Val(strMyHexNum)
        Debug.Print (Mid(st, n, 2)), lngdecimalvalue, Chr(lngdecimalvalue)
    Next n
End Sub
```

i have used to generate Code 128 with algorithm in java.
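(Editorial addendum: for a modern reader, the hex-to-text step that the commenters performed with web tools is a one-liner today. A quick sketch in Python 3:)

```python
# The hex string recovered from the Code 128 barcode, as reported in the thread.
hex_string = ("536D6172743F20537562756B616E20616E67"
              "206962616E67206D67612077696B612E")

# Decode pairs of hex digits to bytes, then interpret as ASCII text.
print(bytes.fromhex(hex_string).decode('ascii'))
# -> Smart? Subukan ang ibang mga wika.
```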
http://hackaday.com/2009/10/08/barcode-challenge-part-2/?like=1&source=post_flair&_wpnonce=f5309b4960
08 April 2011

Friday, 8 April.

Japan

JAPAN DISASTER: Maruzen to restart benzene unit this week.

JAPAN DISASTER: Idemitsu to shut benzene unit for turnaround. Japanese aromatics producer Idemitsu Kosan will take its 327,000 tonne/year benzene unit in Chiba off line for scheduled maintenance on 27 March.

JAPAN DISASTER: JX Nippon Oil to restart Kawasaki benzene unit. Japanese benzene major JX Nippon Oil & Energy Corp is expected to restart production at its Kawasaki-based steam cracker and 110,000 tonne/year benzene unit this week, following its shutdown after the 11 March earthquake.

Japan

Honda has extended its production suspension in
http://www.icis.com/Articles/2011/04/08/9451023/japan-disaster-summary-of-japans-petchem-plants-after-quake.html
Simplify Your Code With Rocket Science: C++20’s Spaceship Operator

Cameron DaCamara

Comparisons

It is not an uncommon thing to see code like the following:

```cpp
struct IntWrapper {
  int value;
  constexpr IntWrapper(int value): value{value} { }
  bool operator==(const IntWrapper& rhs) const { return value == rhs.value; }
  bool operator!=(const IntWrapper& rhs) const { return !(*this == rhs); }
  bool operator<(const IntWrapper& rhs) const { return value < rhs.value; }
  bool operator<=(const IntWrapper& rhs) const { return !(rhs < *this); }
  bool operator>(const IntWrapper& rhs) const { return rhs < *this; }
  bool operator>=(const IntWrapper& rhs) const { return !(*this < rhs); }
};
```

Note: eagle-eyed readers will notice this is actually even less verbose than it should be in pre-C++20 code, because these functions should really all be nonmember friends; more about that later.

That is a lot of boilerplate code to write just to make sure that my type is comparable to something of the same type. Well, OK, we deal with it for a while. Then comes someone who writes this:

```cpp
constexpr bool is_lt(const IntWrapper& a, const IntWrapper& b) {
  return a < b;
}

int main() {
  static_assert(is_lt(0, 1));
}
```

The first thing you will notice is that this program will not compile:

```
error C3615: constexpr function 'is_lt' cannot result in a constant expression
```

Ah! The problem is that we forgot constexpr on our comparison function, drat! So one goes and adds constexpr to all of the comparison operators. A few days later someone adds an is_gt helper, notices that all of the comparison operators lack an exception specification, and goes through the same tedious process of adding noexcept to each of the 5 overloads.

This is where C++20’s new spaceship operator steps in to help us out. Let’s see how the original IntWrapper can be written in a C++20 world:

```cpp
#include <compare>

struct IntWrapper {
  int value;
  constexpr IntWrapper(int value): value{value} { }
  auto operator<=>(const IntWrapper&) const = default;
};
```

The first difference you may notice is the new inclusion of <compare>. The <compare> header is responsible for populating the compiler with all of the comparison category types necessary for the spaceship operator to return a type appropriate for our defaulted function. In the snippet above, the return type auto will be deduced to std::strong_ordering.

Not only did we remove 5 superfluous lines, but we don’t even have to define anything; the compiler does it for us! Our is_lt remains unchanged and just works, while still being constexpr even though we didn’t explicitly specify that in our defaulted operator<=>. That’s well and good, but some people may be scratching their heads as to why is_lt is allowed to compile even though it does not use the spaceship operator at all. Let’s explore the answer to this question.

Rewriting Expressions

In C++20, the compiler is introduced to a new concept referred to as “rewritten” expressions. The spaceship operator, along with operator==, are among the first two candidates subject to rewritten expressions. For a more concrete example of expression rewriting, let us break down the example provided in is_lt. During overload resolution the compiler is going to select from a set of viable candidates, all of which match the operator we are looking for. The candidate gathering process is changed very slightly for the case of relational and equivalence operations, where the compiler must also gather special rewritten and synthesized candidates ([over.match.oper]/3.4).
For our expression a < b, the standard states that we can search the type of a for an operator<=>, or for a namespace-scope operator<=> which accepts its type. So the compiler does, and it finds that a’s type does in fact contain IntWrapper::operator<=>. The compiler is then allowed to use that operator and rewrite the expression a < b as (a <=> b) < 0. That rewritten expression is then used as a candidate for normal overload resolution.

You may find yourself asking why this rewritten expression is valid and correct. The correctness of the expression actually stems from the semantics the spaceship operator provides. The <=> is a three-way comparison, which implies that you get not just a binary result but an ordering (in most cases), and if you have an ordering you can express that ordering in terms of any relational operation. A quick example: the expression 4 <=> 5 in C++20 will give you back the result std::strong_ordering::less. The std::strong_ordering::less result implies that 4 is not only different from 5 but strictly less than that value, which makes applying the operation (4 <=> 5) < 0 correct and exactly accurate to describe our result.

Using the information above, the compiler can take any generalized relational operator (i.e. <, >, etc.) and rewrite it in terms of the spaceship operator. In the standard, the rewritten expression is often referred to as (a <=> b) @ 0, where the @ represents any relational operation.

Synthesizing Expressions

Readers may have noticed the subtle mention of “synthesized” expressions above; they play a part in this operator rewriting process as well. Consider a different predicate function:

```cpp
constexpr bool is_gt_42(const IntWrapper& a) {
  return 42 < a;
}
```

If we use our original definition for IntWrapper this code will not compile:

```
error C2677: binary '<': no global operator found which takes type 'const IntWrapper' (or there is no acceptable conversion)
```

This makes sense in pre-C++20 land, and the way to solve this problem would be to add some extra friend functions to IntWrapper which take a left-hand side of int. If you try to build that sample with a C++20 compiler and our C++20 definition of IntWrapper, you might notice that it, again, “just works”. Another head-scratcher. Let’s examine why the code above is still allowed to compile in C++20.

During overload resolution the compiler will also gather what the standard refers to as “synthesized” candidates: a rewritten expression with the order of the parameters reversed. In the example above, the compiler will try to use the rewritten expression (42 <=> a) < 0, but it will find that there is no conversion from IntWrapper to int to satisfy the left-hand side, so that rewritten expression is dropped. The compiler also conjures up the “synthesized” expression 0 < (a <=> 42) and finds that there is a conversion from int to IntWrapper through its converting constructor, so this candidate is used.

The goal of synthesized expressions is to avoid the mess of needing to write boilerplate friend functions to fill in gaps where your object could be converted from other types. Synthesized expressions are generalized to 0 @ (b <=> a).

More Complex Types

The compiler-generated spaceship operator doesn’t stop at single members of classes; it will generate a correct set of comparisons for all of the sub-objects within your types, and it knows how to expand members of classes that are arrays into their lists of sub-objects and compare them recursively.
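(The code sample that originally illustrated this point did not survive extraction; the following is a reconstructed sketch in the spirit of the post, with member names that are assumptions, not the author's exact example.)

```cpp
#include <compare>

struct Basics {
  int i;
  char c;
  float f;
  double d;
  auto operator<=>(const Basics&) const = default;
};

struct Arrays {
  int ai[1];
  char ac[2];
  float af[3];
  double ad[2][2];
  auto operator<=>(const Arrays&) const = default;
};

// Base-class subobjects participate in the defaulted comparison too.
struct Bases : Basics, Arrays {
  auto operator<=>(const Bases&) const = default;
};

int main() {
  constexpr Bases a = { { 0, 'c', 1.f, 1. },
                        { { 1 }, { 'a', 'b' }, { 1.f, 2.f, 3.f },
                          { { 1., 2. }, { 3., 4. } } } };
  constexpr Bases b = a;
  static_assert(a == b);      // memberwise, recursive comparison
  static_assert(!(a != b));   // rewritten in terms of ==
}
```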
Of course, if you wanted to write the bodies of these functions yourself, you would still get the benefit of the compiler rewriting expressions for you.

Looks Like a Duck, Swims Like a Duck, and Quacks Like operator==

Some very smart people on the standardization committee noticed that the spaceship operator will always perform a lexicographic comparison of elements no matter what. Unconditionally performing lexicographic comparisons can lead to inefficient generated code with the equality operator in particular.

The canonical example is comparing two strings. If you have the string "foobar" and you compare it to the string "foo" using ==, one would expect that operation to be nearly constant time. The efficient string comparison algorithm is thus:

- First compare the sizes of the two strings; if the sizes differ, return false; otherwise
- step through each element of the two strings in unison and compare until one differs or the end is reached, and return the result.

Under spaceship operator rules we would need to start with the deep comparison on each element first, until we find the one that is different. In our example of "foobar" and "foo", only when comparing 'b' to '\0' do you finally return false.

To combat this there was a paper, P1185R2, which details a way for the compiler to rewrite and generate operator== independently of the spaceship operator. Our IntWrapper could be written as follows:

```cpp
#include <compare>

struct IntWrapper {
  int value;
  constexpr IntWrapper(int value): value{value} { }
  auto operator<=>(const IntWrapper&) const = default;
  bool operator==(const IntWrapper&) const = default;
};
```

Just one more step… however, there’s good news: you don’t actually need to write the code above, because simply writing auto operator<=>(const IntWrapper&) const = default is enough for the compiler to implicitly generate the separate, and more efficient, operator== for you!

The compiler applies a slightly altered “rewrite” rule specific to == and !=, wherein these operators are rewritten in terms of operator== and not operator<=>. This means that != also benefits from the optimization.

Old Code Won’t Break

At this point you might be thinking: OK, if the compiler is allowed to perform this operator rewriting business, what happens when I try to outsmart the compiler:

```cpp
struct IntWrapper {
  int value;
  constexpr IntWrapper(int value): value{value} { }
  auto operator<=>(const IntWrapper&) const = default;
  bool operator<(const IntWrapper& rhs) const { return value < rhs.value; }
};

constexpr bool is_lt(const IntWrapper& a, const IntWrapper& b) {
  return a < b;
}
```

The answer here is: you didn’t. The overload resolution model in C++ has this arena where all of the candidates do battle, and in this specific battle we have 3 candidates:

- IntWrapper::operator<(const IntWrapper& a, const IntWrapper& b)
- IntWrapper::operator<=>(const IntWrapper& a, const IntWrapper& b) (rewritten)
- IntWrapper::operator<=>(const IntWrapper& b, const IntWrapper& a) (synthesized)

Under the overload resolution rules of C++17, the result of that call would have been ambiguous, but the C++20 overload resolution rules were changed to allow the compiler to resolve this situation to the most logical overload. There is a phase of overload resolution where the compiler must perform a series of tiebreakers. In C++20, there is a new tiebreaker which states that we must prefer overloads that are not rewritten or synthesized; this makes our overload IntWrapper::operator< the best candidate and resolves the ambiguity.
This same machinery prevents synthesized candidates from stomping on regular rewritten expressions.

Closing Thoughts

The spaceship operator is a welcome addition to C++ and it is one of the features that will simplify and help you to write less code, and, sometimes, less is more. So buckle up with C++20’s spaceship operator!

We urge you to go out and try the spaceship operator; it’s available right now in Visual Studio 2019 under /std:c++latest! As a note, the changes introduced through P1185R2 will be available in Visual Studio 2019 version 16.2. Please keep in mind that the spaceship operator is part of C++20 and is subject to some changes up until such a time that C++20 is finalized.

As always, we welcome your feedback. Feel free to send any comments through e-mail at visualcpp@microsoft.com, through Twitter @visualc, or Facebook at Microsoft Visual Cpp. Also, feel free to follow me on Twitter @starfreakclone. If you encounter other problems with MSVC in VS 2019, please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions or bug reports, let us know through DevComm.

Can I just say this makes me come back to C++? C++ getting sexier!

Will there be some STL algorithms understanding the spaceship operator? It would e.g. be awesome to std::sort a vector of strings with a lexicographic predicate! I didn’t find an answer to this question yet…

The STL algorithms should implicitly adopt the new operators, because they are just templates internally (via `std::less`, by default) also using `a < b`.

Am sorry, but this stuff makes me despair of C++, and I say that as a user of it for over 20 years. The rush with the recent modern C++ standards has been to make code ever “more powerful” and “ever more terse”. Little of what has been introduced in C++14 and later has given any thought to readability/maintainability; unless all of your team are familiar and well versed in the latest versions of the standards, using code like this just leads to unmaintainable code that the one “C++ guru” on the team will understand, and everyone else will be scratching their heads over. I’ve learnt in my time with C++ that pursuing the latest and greatest C++ styles and standards is a pointless exercise:
1) Not everyone has the same level of expertise, making hiring staff tricky and supporting the code.
2) Compilers have variable levels of compliance anyway.
3) Toolsets (e.g. static analysis tools) likewise have variable levels of compliance/support for the standards anyway – so you may end up introducing code that you can’t analyse with your tools – is that really a win?
4) All too often working existing code is then “updated” to use new language features, without fully understanding nuances etc, and the net “gain” is a new set of subtle bugs.
It’s sad to see, frankly: a manageable language is turning into a terse mess that’s trying to be ever more terse, because “less is more” – but in the real world it’s not. Terse, heavily templated, uncommented code becomes an unmaintainable mess.

1) Not a problem with the language
2) Yes, but you have people that check this and update your coding standards accordingly, don’t you?
3) Probably the only real pain point, but again not a problem with the language. If you’re paying for those tools, demand the support or get better tools
4) Definitely not a problem of the language
Two of the four points are people problems that can be fixed by fostering an attitude of continued learning.
One point is organizational, and I find the last point to have some merit, but it’s not the fault of the language. The process of updating the C++ standard is open, and if these tooling companies want to stay in business, they should be following along and not reacting after the fact.

Nice article, but for me (I have Visual Studio version 16.1.6) it doesn’t work for strings. If I try your first example, the IntWrapper, but change int to std::string, I get an error:

```
Error C2678: binary '<=>': no operator found which takes a left-hand operand of type 'const std::string' (or there is no acceptable conversion)
```

Bit late on this one. I don’t see __cpp_lib_three_way_comparison defined; is that expected?

What’s the practical difference between strong_ordering and weak_ordering, beyond making the declaration self-explanatory? Does any STL algorithm or class rely on this “concept”?

This is really great to have and see all the fantastic support for C++ by Microsoft, including clang and WSL, as well as cross-development with linux, IIoT, etc! Do you know when this will work with “Platform Toolset: LLVM (clang-cl)”? It has been a year since you made this post. “-std=c++2a” with clang (any supporting version) fails with any version of VS2019 when using “Platform Toolset: LLVM (clang-cl)”, which uses the VS2019 std libs, even on a skeleton app. Similar issues occur for any Windows SDK newer than “10.0.18362.0”. But it works just fine with VS2019 with “Platform Toolset: LLVM”, which uses the VS2017 std libs. For more details, see: Visual Studio 2019 C++ std libs used with clang “-std=c++2a” – fails with numerous compilation errors on a skeleton app

The more interesting question is: when would it be actually usable, and how can we test that it’s usable? Compare: clang support was added late, but it generates decent code, while Microsoft… well… didn’t, at the time the feature was added. What should I check? __cpp_lib_three_way_comparison? Or? When could I actually use this facility if I care about generated code (if I don’t care, there are plenty of other languages besides C++)?
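(Editorial note on the last question: a common way to gate on support is the feature-test macros. A sketch follows; the 201907L cutoffs are the published C++20 values, but treat the exact gating policy as an assumption for your toolchain.)

```cpp
#include <version>  // C++20 header exposing library feature-test macros

// __cpp_impl_three_way_comparison is defined by the compiler for core-language
// support; __cpp_lib_three_way_comparison covers the <compare> library types.
#if defined(__cpp_impl_three_way_comparison) && __cpp_impl_three_way_comparison >= 201907L && \
    defined(__cpp_lib_three_way_comparison) && __cpp_lib_three_way_comparison >= 201907L
  #define HAS_SPACESHIP 1
#else
  #define HAS_SPACESHIP 0
#endif
```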
https://devblogs.microsoft.com/cppblog/simplify-your-code-with-rocket-science-c20s-spaceship-operator/?utm_campaign=%E3%80%8A%E5%A4%A7%E5%B1%B1%E5%A7%86%E7%9A%84%E6%A9%9F%E6%A9%9F%E8%BB%8A%E8%BB%8A%E2%84%A2%E3%80%8B%E9%9B%BB%E5%AD%90%E5%A0%B1&utm_medium=email&utm_source=Revue%20newsletter
Hello, so basically i am getting started with java.. I am doing work on eclipse, i am making a library project as a sample.. The library project includes the classes of person and books.. here is my sample class of person:

```java
package Library;

public class person {
    private String name;
    private int maxBooks;

    public person() {
        name = "unknown name";
        maxBooks = 0;
    }

    public String getName() {
        return name;
    }

    public int getMaxBooks() {
        return maxBooks;
    }

    public void setMaxBooks(int maxBooks) {
        this.maxBooks = maxBooks;
    }

    public void setName(String name) {
        this.name = name;
    }
}
```

I have done this, but now i need code by which a user will enter his maxBooks and Name, and also code that will output the name and maximum books of the person. it may be like this?

??system.out.println(getPerson.name)

I have only taken two classes of java yet, sorry for this noob question. Thanks for reading.
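(Editorial answer sketch: one common approach is java.util.Scanner for console input, combined with the setters and getters the class already has. The driver class name below is an assumption; the person class is the one from the question.)

```java
package Library;

import java.util.Scanner;

public class LibraryDemo {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        person p = new person();

        // Read the values from the user
        System.out.print("Enter name: ");
        p.setName(in.nextLine());
        System.out.print("Enter max books: ");
        p.setMaxBooks(in.nextInt());

        // Output goes through the getters, since the fields are private
        System.out.println("Name: " + p.getName());
        System.out.println("Max books: " + p.getMaxBooks());
    }
}
```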
http://www.javaprogrammingforums.com/java-theory-questions/9990-newbie-getting-started-java-need-help.html
Opened 3 years ago. Last modified 3 weeks ago.

Although not all backends support the interval data type, it can be implemented using some other data type. This field is quite useful and it would be nice if it were available.

I'm tentatively marking this as wontfix, as surely a combination of "start-interval" and "end-interval" fields is better normalisation. Feel free to correct me if I'm wrong though!

start-interval and end-interval don't work for all situations. Suppose you want to track "amount of time spent driving in the past week"; unless I'm misunderstanding your use of start-interval and end-interval, there isn't an easy way to store this value with date/time fields. The real bonus of intervals comes in when you are able to do computations with them (such as "amount of time spent driving in May" + "amount of time spent driving in June").

patches are welcome.

DurationField

This simple patch adds a DurationField, which, in my opinion, better encapsulates the use cases described by anonymous above. Calling it a duration implies that it's not tied to any specific points in time, but rather just the passage of a certain amount of time. This makes more sense for things like the length (duration!) of audio and video content, for instance.

The patch itself is fairly simple, based on a FloatField whose value represents the number of seconds to be represented. This seems sufficient for database storage, since all supported backends already support the equivalent of a FloatField, while max_digits and decimal_places allow for a number large enough to represent timedelta.max, complete with a full set of microseconds. It doesn't take calendars into account at all, but neither does timedelta, which is also in keeping with the duration concept as opposed to intervals.

The one problem I have with it so far is that it seems like Django is relying on backend database connection modules to handle coercion into Python types, rather than using to_python, so the value isn't changed into a timedelta unless you manually call .validate() on the model. This also prevents the validation errors from occurring properly when editing in the admin. That may be another issue for another day, however. I've marked the issue as "Patch needs improvement" in case I'm missing something that would make this happen.

P.S. Pardon the WikiFormatting on the attachment comment. That is, unless it gets a wiki page, in which case it'll be fine.

I should also add, under the heading of "Patch needs improvement," that typing in decimals as a float value is hardly intuitive, and a better UI should be used. I, unfortunately, haven't thought of a good way to visually represent it, so I don't have any suggestions.

Updated DurationField location to actually work (I hadn't tested the other one).

Complete patch with widget, supporting oldforms and newforms, including newforms-admin.

This most recent patch covers many more aspects of DurationField, including a newforms field and widget, fully compatible with newforms-admin. This new widget implements a bit better of a UI, separating the value into its components (days, hours, minutes, seconds, microseconds). This new widget can't reasonably be done with oldforms, and since oldforms is going out anyway, the old admin will simply use a FloatField describing the number of seconds (including microseconds). Also, this version of the patch relies on #3982 to enable support of the coerce method.
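(Editorial aside: to make the seconds-based storage described above concrete, here is a hedged sketch of the round-trip conversion as plain helper functions. This is an illustration of the idea, not the ticket's actual patch.)

```python
import datetime
from decimal import Decimal

def timedelta_to_seconds(td):
    """Flatten a timedelta to a Decimal number of seconds, keeping microseconds."""
    return (Decimal(td.days) * 86400 + td.seconds
            + Decimal(td.microseconds) / 1000000)

def seconds_to_timedelta(value):
    """Rebuild a timedelta from a stored seconds value."""
    return datetime.timedelta(seconds=float(value))

week = datetime.timedelta(days=7, seconds=30)
assert seconds_to_timedelta(timedelta_to_seconds(week)) == week
```

Storing plain seconds is also what keeps ordering in the database consistent with ordering of the corresponding timedelta values, as noted later in this thread.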
New patch, complete with documentation.

Once again, this patch supersedes all others, and once again relies on a new patch from #3982. However, it now uses lazy instantiation of timedelta objects, which should improve performance. Documentation is also included for trunk, with a few minor changes needed for the newforms-admin branch (due to the newforms widget included in the patch). Tests will come shortly, once I have a few other patches ironed out.

Updated to work with latest trunk, and to no longer rely on #3982.

This latest patch is missing a few things before it can be considered final, but it's getting close. It no longer relies on #3982, but it doesn't yet support the recent error_messages stuff, and fails with the JSON serializer. It works with the XML serializer though, and should work in newforms-admin as is.

Fixed a problem with creating objects by hand, and removed a couple of debug lines.

I've had some cases (mainly in the admin interface) where get_db_prep_save(self, value) gets called with a value of type Decimal. I don't know why that happens, but I added this code to fix DurationField in these circumstances:

```python
def get_db_prep_save(self, value):
    if value is None:
        return None
    if (type(value) == decimal.Decimal) or (type(value) == float):
        return str(value)
    return str(value.days * 24 * 3600 + value.seconds
               + float(value.microseconds) / 1000000)
```

If this is the correct way to fix this, please add it to the next diff you make.

I'm more interested to know why it's getting a Decimal, because that would probably break other expectations as well. Do you have any code that reliably shows that to happen? If you don't know exactly where it's going wrong, that's fine; just being able to reproduce it will help me figure it out.
When I did get it applied I was getting some errors when I went into the admin and tried to add data to my model... Traceback: File ".../django_trunk/django/contrib/admin/views/main.py" in add_stage 287. new_data = manipulator.flatten_data() Exception Value: 'str' object has no attribute 'items' Related: The 3 Best Abdominal Exercises that Are Not Traditional Ab Exercises Updated patch for current trunk, without docs (no idea how much they've changed) What's holding DurationField from going upstream (at least in contrib)? I can help fix whatever is necessary. Well, you'll notice that yours is the first patch to hit this ticket since Jacob said it "needs some UI help." That criticism still stands, since a half-dozen input fields don't make for a very good user experience. I had a look through the patch today. Using a DecimalField? and playing with commas seems unreasonably hacky. Is using a 64-bit int an option? It shouldn't be a problem under postgres, but I don't know about the other db backends. For the interface part, there's two decent choices. Either go for a parse of the values in a single field, for example (input) "3d 12h 4m 7s", "7d 10m", "150000 ms"; output would be highest value possible first - 60000ms would be "1m", 75000ms would be "1m 15s" (abbreviations are up for change, would go for the standard ones). Or, go for a single field with a <select> box, with some js shortcuts. [_] [ Unit ... ] (X) In both cases, the possible units would be years, months, weeks, days, hours, minutes, seconds, milliseconds, microseconds (assuming microsecond precision). Thoughts? I'm a bit confused as to what you mean by "playing with commas," but thus far, I don't see anything at all hacky with using a DecimalField. Going with a 64-bit int would only be an option if Django supports a standalone cross-database 64-bit integer field. I don't want to invent one just for this. As for being hacky, I find it extremely elegant that the current approach stores seconds in the database, allowing for super-simple conversion: value = datetime.timedelta(seconds=float(value)). Plus, it works with all existing backends and maintains the ability to order records as you would expect (though, admittedly, your 64-bit int would also preserve that feature). What's exactly is hacky about it? On the interface front, there was (once upon a time) a prevailing notion of using two text boxes: one for days and one for everything else. Basically, the days would be cleaned as a standard integer field, while the other would be cleaned as a time field, so the syntax for stuff less than a day would follow the "hh:mm:ss" format. The only thing that held me back from actually implementing that was that it would also be great to have a JavaScript helper similar to that of TimeField, showing a few useful options to get you started. Something like: 30 seconds, 2 hours, 1 day, 6 weeks (or some such similar options). I obviously haven't gotten back around to this ticket in quite some time, but I'll see if I can devote some time to it soon to get the original UI proposal up and running. It feels wrong to "store" as seconds. Then again, just my opinion. I was thinking the same thing about a js helper, and a friend came up with the idea of just parsing the field for units. I'll see if I can get a few mins to work on it tonight, I really like the idea. 
Three big advantages: - No need for javascript (6 weeks would be "6w", 30 seconds would be "30s", 2 hours would be "2h", etc) - Easy to understand, very user friendly (parsing "3h 10d" is okay, I suppose we'd give validationerror on "1h 10m 2h" or similar tho) - No need for multiple input boxes, avoids confusion On a slightly-related note, I don't know what to do about microseconds. Standard display is µs, but that could cause problems for users with special keyboard layouts etc. I think the year and month units are misleading, as in real life they have no fixed length. This is probably why the timedelta type does accept day as the longest unit possible. We can assume that year is an astronomical unit, but it's almost never useful in real life. A year has to round to days and it's length is known only when applied to a certain start date. The same (or even worse) happens with a month. In Gregorian calendar there are 4 different lengths possible: 28, 29, 30, 31 days. We could assume that month is astronomical unit again (moon cycle) but it will roughly match only the Muslim calendar. I'd prefer to leave a week as the longest unit possible. Emes, I agree with you. I just wanted to get the implementation in. It's easier to remove than to add-on. I sent the Dev list a mail a couple of days ago: Another question I bumped into this morning: Should we allow negative values? cc michal@salaban.info, sorry for the spam, Firefox is horrible. New DurationField implementation (try 3) Any feedback on the DurationField patch? I still need to figure out the admin output. Fully working DurationField implementation with changes and fixes from Yuri Baburov Yuri did some updates to the patch posted above; I'll post a rebase by tomorrow. This rebase also comments out support for months and years as per list discussion; since it's just two values, might as well leave them commented. 1.1's almost out if I am to trust the targeted bugs. I suppose we could start looking at DurationField implementation for 1.2? This ticket has been open long enough. Reassigning to myself. DurationField patch, with year/month support removed, bugfixes and rebased against last svn By Edgewall Software.
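Since most of the UI discussion above centers on parsing unit-tagged input like "3d 12h 4m 7s", here is a minimal sketch of such a parser. It is purely illustrative and not part of any attached patch; the unit abbreviations and the rejection of repeated units are assumptions taken from the proposal above:

    import datetime
    import re

    UNITS = {'w': 'weeks', 'd': 'days', 'h': 'hours', 'm': 'minutes',
             's': 'seconds', 'ms': 'milliseconds', 'us': 'microseconds'}

    def parse_duration(text):
        """Parse strings like '3d 12h 4m 7s' into a datetime.timedelta."""
        kwargs = {}
        for value, unit in re.findall(r'(\d+)\s*(ms|us|[wdhms])', text):
            name = UNITS[unit]
            if name in kwargs:
                # e.g. "1h 10m 2h" gives the same unit twice
                raise ValueError('unit %r given twice' % unit)
            kwargs[name] = int(value)
        if not kwargs:
            raise ValueError('no recognizable duration in %r' % text)
        return datetime.timedelta(**kwargs)

    # parse_duration('3d 12h 4m 7s') -> datetime.timedelta(days=3, seconds=43447)

Notice that weeks is deliberately the largest unit, matching the argument above that years and months have no fixed length.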
Hints for Using SVN to collaborate on school projects

This is a "how to" document for using a graphical user interface with SVN. Graphical user interfaces include:

- RabbitVCS for Linux
- RapidSVN for Mac
- TortoiseSVN for Windows

Help improve this document by adding SVN command line commands for examples.

Contents

- 1 SVN Basics
- 2 Starting a Project On SVN
- 3 Preparing Your Own Workspace for Development
- 4 Working within your own Workspace
- 5 Merging your work back to trunk
- 6 Resources

SVN Basics

- Each team member should have their own home directory or workspace (member-id1, member-id2, ...) for their own development tasks within branches.
- Each team member should divide their workspace into several sub-directories (workspaces) during the development of the project. These workspaces (Task1, Task2, ...) are usually copies of the trunk to be worked on.
- These sub-directories (Task1, Task2, ...) are called branches of trunk. When the word branch is used as a verb, it means copying the whole trunk into a sub-directory, either in branches or tags. (This method of development is called Feature Branching.)
- tags is the directory that holds copies of successful stages of trunk throughout development. (These are also called milestones.)
- tags are never modified or edited. You may branch a directory of tags into branches under a workspace and then modify the branch and apply the changes back to trunk, but you should never change the contents of a tag.
- The action of branching the trunk into tags is often called tagging. When the repository holds more than one project, trunk is divided into several sub-directories, one for each project.
- In this case, we divide tags into exactly the same project sub-directories as present in trunk.

Basic Actions

A few important facts and terminology to help clarify the basic actions:

- One responsibility of a code repository is to keep track of all of the modifications done to a project by its team members.
- In a project that is tracked by a code repository (version-controlled, or in short, versioned), you can focus on any changes during the project's development life, such as who modified/added/deleted what and when. You can undo work or roll back the work to any stage of the development, and much more.
- SVN is a client/server repository: code is kept on a server, and those members who have access can copy the whole or parts of the project to their local machines, work on the whole or parts, and then apply their changes back to the server. Because the code is kept on a server, one member may be unaware of the changes made by another member, unless the other member has applied their changes to the server and the first member has updated their local copy.
- Merging the modifications of different members into the repository is another responsibility.
- A version-controlled (or versioned) file is a file that is handled and tracked by a repository.

checkout

To checkout is to copy the code from a repository server to a versioned directory on the client, so that you can start working on the code.

branch

To branch is to copy one directory on the repository server into another directory on the repository server.

- Note that branching the code copies it on the repository itself and not on the local (client) machine. To work on the branched (copied) code, you must checkout the directory to which you copied the branch.

add

To add is to flag a non-versioned file or directory to be added to the repository server at the next commit.

commit

To commit is to apply (that is, to copy) your modifications and additions to the repository server.
merge

To merge is to merge a branched directory back to the original directory; that is, to apply to the original directory those modifications and additions that you have made to your branched directory.

- merge is the opposite of branch.

export

To export is to copy the whole or part of a repository to a non-versioned directory on the client machine.

- You export when you want either to package the project, to make it ready for production, or to copy a piece of work from one repository to another repository.

import

To import is to copy a non-versioned work (directory) to the repository server.

- Note that although the imported code is on the server, it is still NOT versioned on the client machine. To start working on an imported directory, you need to first checkout the directory from the repository to the client machine.

Starting a Project On SVN

There are two ways to initiate a project on SVN:

- Start the project from zero. We do this when we create an empty project and start to write the code from scratch.
- Start the project by continuing existing work. We do this when someone else has started the project (that is, the professor, other team members, etc.) and we want to copy the work into our own repository and continue that work.

Start the project from zero

To start from zero, create the initial code of your project in trunk, add the code, and then commit it. These are the detailed steps (command-line equivalents are sketched at the end of this section):

- Checkout the project repository in a new directory on your local computer: create a new directory on your local computer, right click on that directory, and then click on SVN Checkout. In "URL of the repository" type your repository path (svn://zenit.senecac.on.ca/....) and click on OK.
- If the basic directories (trunk, tags, branches) don't exist, create them and add them by right clicking on them and selecting ...SVN/add.
- In trunk, create your project, compile it, and run it. (This could be as simple as a few empty files or a Hello World application.)
- Right click on the trunk, select ...SVN/add, and then select all of the files you would like to add to the repository. Add only those files that you want to track for modification. Only source and project files need to be version-controlled. We don't usually add binary or executable files to the repository; add them only if you have a reason for doing so.
- Right click on trunk and select SVN Commit to commit your work to the repository server.

Start the project by continuing an existing work

To continue an existing work, you should have a non-versioned copy of the initial code for your project. Copy this code into the trunk, add it, and commit it. These are the detailed steps:

- Copy the initial code into the trunk of your repository. If the code is available in another repository (say RepoSrc), export from RepoSrc into the trunk of your own repository (say RepoDest):
- Update RepoSrc to make sure that it is in sync with the server.
- Right click on the directory with the initial code in RepoSrc and select ...SVN/export.
- Select the trunk of your own repository (RepoDest) and click OK. This will create a "non-versioned" copy of the initial code in RepoDest/trunk.
- Do any modification needed to make the initial code ready for your own work.
- Right click on RepoDest/trunk and select Add, then choose the files that you would like versioned.
- Finally, commit the trunk to copy those files to the SVN server.
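As the introduction above requests, here is a sketch of command-line equivalents for the start-from-zero workflow (the repository URL and file names are placeholders):

    # check out the (possibly empty) repository into a working copy
    svn checkout svn://zenit.senecac.on.ca/yourrepo myproject
    cd myproject

    # create the basic layout if it doesn't exist yet, and schedule it for addition
    mkdir trunk branches tags
    svn add trunk branches tags

    # add your initial source files under trunk, then commit everything
    svn add trunk/hello.cpp
    svn commit -m "Initial project layout and Hello World"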
Preparing Your Own Workspace for Development

Create a home directory in branches for your development and name that directory with your Seneca ID. Then branch the trunk into the proper workspace in your home directory under branches. Finally, update your workspace, so that the code is added to your copy of the repository on your machine. These are the detailed steps:

- Create a directory in branches and name it with your Seneca ID. Add the directory to the repository and commit branches to update the repository server.
- Right click on trunk and select ...SVN/"Branch/Tag" to create a branch for your next task. This will create your first workspace. Choose a relevant name for your assigned task. For example, if your project is writing a text editor and your next task is to implement a "Copy And Paste" feature, then a suitable name would be "CopyPaste".
- In "To URL" type or select your home sub-directory in the branches directory and add "/CopyPaste" to it: (svn://RepoUrlAndPath/branches/yourSenecaid/CopyPaste)
- Select HEAD revision, or a specific revision if necessary (mostly HEAD revision applies in our case).
- Add a descriptive message to inform others about what you have done.
- Click on OK. This copies the trunk into a branch so you can start your own implementation. This action is done on the server, and your local copy of the repository remains unchanged.
- Right click on branches and click SVN Update. This will download the new branch to your local machine.

Working within your own Workspace

You can now start to implement your assigned tasks.

- Unlike trunk, you can leave YOUR workspace in any state you like. Comment each and every commit so that later you know which commit belongs to what, and your professor knows what to mark. If you do not comment a commit, it means that it was minor and does not need to be marked.
- You can work from home, commit your work, come to school, checkout your committed code, and continue later.
- If you are working on a public computer, make sure to delete your work after you have committed it to the repository. It is your responsibility to keep your code safe.
- If you have any problem with your code and need help, contact your professor and send the path of your workspace. If needed, he can checkout your code, see what is wrong with it, leave comments on it, and commit it. Afterwards, all you need to do is update your repo and read the comments and corrected code.

Merging your work back to trunk

After you have completed your work within your own workspace and your code compiles and is ready to go, you can merge your work back to trunk. These are the detailed steps:

- Right click on trunk or trunk/prj, depending on what you branched into branches.
- Select "...SVN/Merge", select "reintegrate a branch", and click on Next.
- Make sure "From URL" is the branch you want to merge and click on Next.
- Click on "Test merge" to see if the merge is successful.
- Click on "Merge" to merge the branch back to trunk.
- Now update the trunk to apply the changes.
- If there are any conflicts, click on "...SVN/edit conflicts", fix each conflict, save, and click on "conflict resolved".
- Check the trunk status in your team page on the wiki.
- If the trunk status is "committed", then change it to "being committed by your_name".
- If the trunk status is "being committed by member_name", wait for them to complete their commit and go to the previous step.
- Commit the trunk and, when done, update the trunk status to "committed by your_name".
- If this commit was worth recording, branch it in the tags directory under a new release.

If you receive a "branch/project must be ancestrally related to trunk/project" error, try going into your trunk and right click > merge one file at a time. Make sure to click the "show log" button in the merge wizard to get the latest revision for the merge added for you.

Command-line equivalents for the branching and merging steps above are sketched below.
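A sketch of the command-line equivalents for the branch-and-merge workflow (URLs and task names are placeholders, matching the examples used above):

    # branch trunk into your workspace (done entirely on the server)
    svn copy svn://RepoUrlAndPath/trunk \
             svn://RepoUrlAndPath/branches/yourSenecaid/CopyPaste \
             -m "Branch trunk for the CopyPaste task"

    # pull the new branch down to your machine
    svn update branches

    # ...work inside the branch, committing as you go...
    svn commit -m "Implement copy and paste"

    # reintegrate the finished branch back into a clean trunk working copy
    cd trunk
    svn merge --reintegrate svn://RepoUrlAndPath/branches/yourSenecaid/CopyPaste
    svn commit -m "Merge CopyPaste branch back to trunk"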
pi = sqrt(12) * (1 - 1/(3*3) + 1/(5*3^2) - 1/(7*3^3) + ... )

I am teaching myself through the interactive 'How to Think Like A Computer Scientist' online textbook, and so far we have not been introduced to if statements or anything beyond very basic functions (accumulator pattern, drawing multiple shapes with turtles, etc.), so knowing what I know (the problem before this used the Leibniz approximation, and in the book solution they set it up in this general way), here's what I tried:

    import math

    def madhavapi(n):
        """ Give an approximation for pi using the Madhava method out to n iterations """
        sign = 1               # First term positive
        denominator = 1        # First term 1 = 1/1
        series = 0
        for i in range(n):
            series = series + (sign / denominator)
            sign = sign * -1   # Alternate sign of terms as in approximation
            denominator = (denominator + 2) * 3
        madhavapi = math.sqrt(12) * series
        return madhavapi

    n = int(input('How many iterations should the Madhava method go through? '))
    result = madhavapi(n)
    print('An approximation for pi using', n, 'Madhava method iterations is', result)

Because the Wiki article mentions that 21 terms used in the approximation gives pi accurately out to 11 decimal places, 21 seemed like a natural number to try for n. Using that as my input I get the following output:

    How many iterations should the Madhava method go through? 21
    An approximation for pi using 21 Madhava method iterations is 3.1592915113695215

So far when I've made an error in a calculation on a practice problem it has been glaringly obvious (negative numbers, answers off by a factor of 10 or more, etc.), but this result is sorta kinda close. Maybe a rounding error? Honestly I've been out of school for several years now and my math hasn't been kept as sharp as I'd like it to be. It may just be one of those things where I've stared at it for far too long and just need to step away for a bit... which I'm definitely going to do after I post this, haha. Any insight on whatever dumb mistake I've made would be most welcome. Thank you very much!

EDIT: Okay, well, just going over things after coming back from dinner, I gather that it's definitely my denominator statement. The first trial works, ending up with the correct 1/9... but then the second trial gives 1/33 (9+2 is 11, times 3 is 33) when it should yield 1/45, so that's where the extra bit is coming from, I guess. Time to start playing.
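The diagnosis in the EDIT is right: the k-th denominator of the series is (2k+1) * 3^k, which the incremental update (denominator + 2) * 3 cannot produce past the second term. A minimal fix (an illustrative sketch, not from the original thread) is to compute the denominator directly from the loop index:

    import math

    def madhavapi(n):
        """Approximate pi with n terms of the Madhava series."""
        series = 0
        sign = 1.0  # float so the division stays exact on Python 2 as well
        for i in range(n):
            # the i-th denominator is (2i+1) * 3**i, computed directly
            # instead of updated incrementally
            series += sign / ((2 * i + 1) * 3 ** i)
            sign = -sign
        return math.sqrt(12) * series

    # madhavapi(21) is approximately 3.14159265359, matching pi to
    # roughly 11 decimal places, as the Wiki article promises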
On 15.10.2010 10:28, Vesa wrote:
> Unfortunately it is not working. The working fix is this:
>
>     - return (stc == 0) ? -1LL : stc;
>     + if (Transferring()) {
>     +     stc -= 90000L;
>     + }
>     + else {
>     +     stc -= 520000L;
>     + }
>     + return (stc == 0) ? -1LL : (stc & 0x00000000FFFFFFFF);
>
> Replaying returns always true; Transferring returns true only for live. eHD
> seems to need a small delay also for live.
>
> With this patch, eHD looks like a usable device. I have to do long-term testing
> for stability, but so far it is more stable than xine/vdpau based solutions. I'm
> using reelbox testing tree version 15208 for reelbox-3 and the eHD device
> driver.

Is it still required to also modify dvbsubtitle.c?

...hanu
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.

On 2015.03.19 at 15:17 -0400, Jason Merrill wrote:
> This patch makes some significant changes to attribute abi_tag.
>
> First, it allows explicit naming of tags on inline namespaces, which
> previously always had a tag with the same name as the namespace itself;
> this is still the default if no tag is specified.
>
> It also introduces automatic tagging of functions and variables with
> tagged types where the tags are not already reflected in the mangled
> name. I feel somewhat uneasy about this change, but I think it's the
> right answer. -Wabi-tag will also warn about this so that people are
> aware of it and can tag explicitly if they want to.

This breaks compatibility with other compilers. Consider the case where a user compiles a library, which contains e.g. some member function with a std::string return type, with clang using gcc-5's libstdc++. It will be mangled without abi tags, because clang doesn't support them. Now, when the user switches back to gcc-5 and uses the library's headers in a new project, he will get undefined-symbol errors when linking with the library, because gcc-5 adds an abi tag to the member function declaration.

Another issue is that the -Wabi-tag warning isn't enabled by -Wall or even by -Wextra.

--
Markus
W3C WAI ER

This is a little service to check web pages for some of the Web Content Accessibility Guidelines. It correctly identifies some, but not all, accessibility problems in HTML pages. It also falsely reports some issues. I hope to fix that over time.

It's mostly just a repackaging of the WAI example in the Schematron materials by Rick Jelliffe, Academia Sinica Computing Centre. I modified wai.xml and schematron-report to use XHTML with namespaces, for compatibility with tools like tidy. Rick notes that his WAI schema is incomplete, and I haven't fixed that (yet).

If your document is already XHTML, you can skip to the next step.

NOTE: the result is XHTML, but this HTTP server might label it as text/xml, so you might have to do something like save it locally with a .html extension to get it to look right in your client. I hope to fix this soon.
Pinned topic: Support for the IBM Advanced Toolchain

sphfastcpufreq - what frequency is returned? (2014-02-11)

Hi, I'm running at7.0.3 on SUSE Linux 11 SP3, on a POWER7 LPAR. In the C code I'm working with (sysjitter.c), the following call is made to sphfastcpufreq:

    static unsigned measure_cpu_mhz(void)
    {
        return (unsigned) (sphfastcpufreq() / 1000000);
    }

But the problem is this returns a value of 512! (That is, sphfastcpufreq is returning 512000000, or 512 MHz.) But my POWER7 CPU is a 3.3 GHz model; shouldn't this return 3300000000 and not 512000000?

What is this call doing exactly? Is it the headline CPU frequency that is meant to be returned, or something else? Later in the code, sphgettimer() is called to get the number of cycles before and after a thread is executed. Are these cycles relative to the 512 MHz reported clock or to the true CPU frequency of 3.3 GHz?

I did look for the man page for this call in /opt/.at7.0/share/man, but if it's in there it's not obvious to me where, having set the man path. "man sphfastcpufreq" also returns nothing (although there is a brief description in sphtimer.h).

Thanks, Andy

Re: sphfastcpufreq - what frequency is returned? (2014-02-12)

On Intel processors this will return something close to the nominal CPU clock frequency. But on POWER (server PowerISA) there is a separate timebase register and clock that is constant and independent of the CPU clock. The CPU clock is constantly adjusted for power management, but the timebase clock is constant, and the timebase value is synchronized across all cores in the system.

So 512 MHz is just the timebase frequency for POWER7. This is used for time of day and makes a fast, accurate interval timer for performance measurement. This function is returning a value read from /proc/cpuinfo and cached for fast access:

    processor : 71
    cpu       : POWER7 (architected), altivec supported
    clock     : 3000.000000MHz
    revision  : 2.1 (pvr 003f 0201)
    timebase  : 512000000
    platform  : pSeries
    model     : IBM,8233-E8B
    machine   : CHRP IBM,8233-E8B
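To make the timebase arithmetic concrete, here is a minimal sketch of an interval measurement using the two calls discussed above. It assumes the sphtimer.h declarations as described in the thread (sphgettimer() returning timebase ticks, sphfastcpufreq() returning ticks per second, and an sphtimer_t tick type from that header):

    #include <stdio.h>
    #include <sphtimer.h>

    int main(void)
    {
        /* ticks per second of the constant timebase clock,
           e.g. 512000000 on POWER7 - not the variable CPU clock */
        sphtimer_t freq = sphfastcpufreq();

        sphtimer_t start = sphgettimer();
        /* ... code to be timed ... */
        sphtimer_t end = sphgettimer();

        /* elapsed wall-clock seconds, independent of power management */
        printf("elapsed: %f s\n", (double)(end - start) / (double)freq);
        return 0;
    }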
I am in an introductory programming course and have a few months of self-teaching under my belt as well, but I have yet to come across a problem where I have had to use "this." Today my teacher told our class to ALWAYS use "this" for accessing instance variables, but in all of the programming I have ever done, I've never used "this" to access my instance variables. Furthermore, one of the books I have says that putting "this" in front of every instance variable (no matter what) is somewhat of a sin. For example, here is my finished code for my assignment today:

    public class MyApp {
        public static void main(String[] args) {
            Student myStudent = new Student(101, "Tyler");
            myStudent.setGrade(86.0);
            myStudent.printInfo();

            Student otherStudent = new Student(102, "Tom");
            otherStudent.setGrade(71.0);
            otherStudent.printInfo();
        } //end of main
    } //end of class MyApp

    public class Student {
        private String name;
        private String grade;
        private int id;
        public static final int maxCourses = 5;

        public Student() {
        }

        public Student(int id) {
            this.id = id;
        }

        public Student(int id, String name) {
            this.name = name;
            this.id = id;
        }

        public String getName() {
            return this.name;
        }

        public int getId() {
            return this.id;
        }

        public String getGrade() {
            return this.grade;
        }

        public void setName(String name) {
            this.name = name;
        }

        public void setId(int id) {
            this.id = id;
        }

        public void setGrade(double mark) {
            if (mark < 50.0) {
                this.grade = "F";
            } else if (mark < 70.0) {
                this.grade = "C";
            } else if (mark < 85.0) {
                this.grade = "B";
            } else {
                this.grade = "A";
            }
        }

        public void printInfo() {
            System.out.println("Name: " + this.name);
            System.out.println("ID: " + this.id);
            System.out.println("Grade: " + this.grade);
            System.out.println("Max Courses: " + Student.maxCourses + "\n");
        }
    } //end of class Student

It is my gut reaction that this isn't a good use of the "this" keyword. Am I right in assuming that I could eliminate every "this" in the Student class simply by changing the parameter names to something other than the instance variable names? Why use it in the get and set methods? It also seems pointless to use the "this" keyword when printing the variables, because different object references are invoking the print method and therefore already know which instance variables to print. For example:

    public Student(int anId) {
        id = anId;
    }

So I guess what I am trying to say is: why use "this" everywhere; why not just change the parameter name? Is it a matter of taste or convention to use one over the other? To conclude, can anyone show examples where the "this" keyword is properly used; something more complex than simply getters and setters? If there are any threads I may have missed in my search, a point in the right direction would be appreciated. Thanks.
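For reference, a few situations where "this" is actually required, beyond the shadowed-parameter case in the code above. This is an illustrative sketch; the Counter class is hypothetical and not from the original post:

    import java.util.List;

    public class Counter {
        private int count;

        // 1. Disambiguation: the parameter shadows the field, so "this" is required
        public Counter(int count) {
            this.count = count;
        }

        // 2. Method chaining: returning "this" lets calls be strung together,
        //    e.g. new Counter(0).increment().increment()
        public Counter increment() {
            count++;           // no shadowing here, so no "this" needed
            return this;
        }

        // 3. Passing the current object to another method or collection
        public void register(List<Counter> registry) {
            registry.add(this);
        }
    }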
I've spent more than a week trying to figure out how to show and hide buttons on dynamic pages based on these criteria:

- The visitor must be logged in.
- The visitor's role (i.e. "Gold", "Silver", "Bronze") must match the 'Access Conditions' I set.

Each dynamic page has year information (the 'pageYear'). I want to restrict access on certain dynamic pages based on this year information and the user role (by Wix). Here are the conditions:

Role access conditions:
- Gold: access to dynamic pages that contain the years 1970 to the current year
- Silver: access to dynamic pages that contain the years 2011 to the current year
- Bronze: access to dynamic pages that contain the years 2011 to last year

I tried writing code (see below) but am unable to make the button show or hide properly. For some reason, the button is always hidden, leading me to think that maybe I'm not calling the user role (e.g. Bronze, Silver, Gold) correctly. I've followed a lot of Wix forum posts and I've tried them all -- nothing seems to be working. I need help!

I have already:
- Checked it using the live site. (I always do.)
- Looked at the Wix API references.
- Set up the member roles such that they appear on the member list.

Here is my code:

    import wixUsers from 'wix-users';
    import wixData from 'wix-data';
    import wixLocation from 'wix-location';

    let source;
    let user = wixUsers.currentUser;

    $w.onReady(function () {
        $w("#dynamicDataset").onReady(() => {
            let itemObj = $w("#dynamicDataset").getCurrentItem();
            let pageYear = itemObj.year;
            console.log(pageYear);

            var today = new Date();
            var currentYear = today.getFullYear();
            var lastYear = today.getFullYear() - 1;

            if (user.loggedIn) {
                user.getRoles()
                    .then((roles) => {
                        let firstRole = roles[0];
                        let roleName = firstRole.name;

                        // Checks under which tier/role the user falls and then forms the array
                        var tier = roleName;
                        var arr;
                        if (tier === "Bronze") {
                            var t1 = [];
                            var t1_start;
                            for (t1_start = 2011; t1_start <= lastYear; t1_start++) {
                                t1.push(t1_start);
                            }
                            arr = t1;
                        } else if (tier === "Silver") {
                            var t2 = [];
                            var t2_start;
                            for (t2_start = 2011; t2_start <= currentYear; t2_start++) {
                                t2.push(t2_start);
                            }
                            arr = t2;
                        } else if (tier === "Gold") {
                            var t3 = [];
                            var t3_start;
                            for (t3_start = 1970; t3_start <= currentYear; t3_start++) {
                                t3.push(t3_start);
                            }
                            arr = t3;
                        } else {
                        }

                        // Checks if the pageYear is part of the array. If yes, show; otherwise, hide.
                        function tier_1() {
                            if (arr.includes(pageYear)) {
                            } else {
                            }
                        }

                        // Passes the value of array arr and runs the function
                        tier_1(arr);
                    });
            } else {
            }
        });
    });

What am I doing wrong? I would really appreciate any help! I'm so confused. Even a simple conditional is not working. This really confirms my doubts that the problem lies in the acquisition of user roles.

Push... Really need help, guys! :(

Were you able to get any help with this? I am having a similar issue whereby I am trying to display buttons based on three conditions: logged in and assigned a specific role; logged in with no role assigned; and not logged in. I have the logged in/not logged in part working perfectly, but the buttons display as though no role was assigned in all logged-in cases.
Hi Christopher,

I have done some work on the Swing console to allow for input and display of all Unicode characters (assuming you have the proper font installed). I've used it with Cyrillic characters and Japanese. I submitted the patch to Pat a while back. Hopefully we can get it into the next release.

Regards,
Dan

-----Original Message-----
From: beanshell-developers-admin@lists.sourceforge.net, On Behalf Of Christopher Dillon
Sent: Thursday, January 23, 2003 11:46 AM
To: Beanshell Developer Lists
Subject: [Beanshell-dev] Non-ASCII characters not supported...

Hi,

Why isn't it possible to use non-ASCII characters? Such characters are, for example, é, ù, ç, à, ... (yes, I'm French ;D). It *is* possible to do so in Java (compiles & works fine):

    public class NonAsciiCharTest {
        static String e = "hello";
        static String é = "héhé";   // notice the é

        static void façon() {       // notice the ç
            System.out.println("façon");
        }

        public static void main(String a[]) {
            façon();
            System.out.println("e: " + e + " - - é: " + é);
        }
    }

I know it may sound a little bit unnecessary, as it would be possible to just use the "standard" letters (e, u, c, a) (and actually I tend to do so...). The problem is that I am working on a project to allow non computer scientists to create programs (you wish ;D). And, in this country (and I guess some others too...) we use these "special" characters... so it would be kind of *really* nice to be able to do so.

I had a quick look at ASCII_UCodeESC_CharStream.java. It seems that some special characters have to be handled; wouldn't it be possible to let the strange frenchy characters go through? Would that cause a problem?

And finally, I know so many people have said this before me, so I feel a bit stupid writing it, but Beanshell is GREAT (formidable, magnifique, génial, de la bombe de balle, ... in French ;D).

Cheers,
Chris
ndctl-zero-labels - Man Page

Zero out the label area on a dimm or set of dimms.

Synopsis

    ndctl zero-labels <nmem0> [<nmem1>..<nmemN>] [<options>]

Description

The namespace label area is a small persistent partition of capacity available on some NVDIMM devices. The label area is used to resolve aliasing between pmem and blk capacity by delineating namespace boundaries. This command resets the device to its default state by deleting all labels.

Options

- <memory device(s)>: A nmemX device name, or a dimm id number. Restrict the operation to the specified dimm(s). The keyword all can be specified to indicate the lack of any restriction; however, this is the same as not supplying a --dimm option at all.
- -s, --size=: Limit the operation to the given number of bytes. A size of 0 indicates to operate over the entire label capacity.
- -O, --offset=: Begin the operation at the given offset into the label area.
- -v: Turn on verbose debug messages in the library (if ndctl was built with logging and debug enabled).

Copyright © 2016 - 2020, Intel Corporation.

License

GPLv2: GNU GPL version 2. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.

See Also

UEFI NVDIMM Label Protocol [1]

Notes

1. UEFI NVDIMM Label Protocol

Referenced By

ndctl(1), ndctl-create-namespace(1).
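Example

Illustrative invocations built from the synopsis and options above (device names are examples):

    # wipe the label area on a single dimm
    ndctl zero-labels nmem0

    # wipe the labels on every dimm in the system
    ndctl zero-labels all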
Red Hat Bugzilla - Bug 139462: yum doesn't fail gracefully on early user cancel

Last modified: 2014-01-21 17:50:46 EST

Description of problem: I start a package installation through yum and cancel it immediately. It fails with a traceback.

Version-Release number of selected component (if applicable):

How reproducible: every time

Steps to Reproduce:
1. yum install package
2. Cancel using Ctrl+C immediately

Actual results:

    yum -y gnome-panel
    Traceback (most recent call last):
      File "/usr/bin/yum", line 6, in ?
        import yummain
      File "/usr/share/yum-cli/yummain.py", line 23, in ?
        import yum
      File "/usr/lib/python2.3/site-packages/yum/__init__.py", line 35, in ?
        import depsolve
      File "/usr/lib/python2.3/site-packages/yum/depsolve.py", line 30, in ?
        import packages
      File "/usr/lib/python2.3/site-packages/yum/packages.py", line 18, in ?
        import rpm
    KeyboardInterrupt

Expected results: output "exit on user cancel"

Additional info: It does this correctly if cancelled after a minute or so.

Comment: The main yum handler that captures the keyboard interrupt isn't loaded at that time. There's not much I can do.

Comment: I have no idea about the actual code, but perhaps trying to load the keyboard-handling module, or something similar, as early as possible might help.

Comment: I load it in /usr/bin/yum now. If you Ctrl+C at that point it will try to catch it. Beyond that, there's not a lot I can do. This is fixed in yum 2.1.12.
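The shape of the fix described in the last comment, sketched for illustration (this is not the actual yum 2.1.12 code, and the message text is hypothetical; the yummain module name comes from the traceback above):

    #!/usr/bin/python
    # /usr/bin/yum: wrap the slow imports themselves, so a Ctrl+C that
    # arrives before the main handler is loaded is still caught cleanly
    import sys

    try:
        import yummain          # pulls in yum, depsolve, packages, rpm, ...
        yummain.main(sys.argv[1:])
    except KeyboardInterrupt:
        sys.stderr.write("\nExiting on user cancel.\n")
        sys.exit(1)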
Kubernetes Interview Questions And Answers 2020

- What is Kubernetes?
- What is Kubernetes and how do you use it?
- What is the meaning of Kubernetes?
- What is Docker?
- What is orchestration in software?
- What is a cluster in Kubernetes?
- What is a swarm in Docker?
- What is OpenShift?
- What is a namespace in Kubernetes?
- What is a node in Kubernetes?
- What is Docker and what does it do?
- What is a Heapster?
- Why do we use Docker?
- What is Docker in the cloud?
- What is the kubelet?
- What is Minikube?
- What is kubectl?
- What is GKE?
- What is k8s?
- What is kube-proxy?

Kubernetes Interview Questions And Answers

Kubernetes Interview Question # 1) What is Kubernetes?

A) Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery.

Kubernetes Interview Question # 2) What is Kubernetes and how do you use it?

A) Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers. With Kubernetes, you are able to quickly and efficiently respond to customer demand: deploy your applications quickly and predictably.

Kubernetes Interview Question # 3) What is the meaning of Kubernetes?

A) Kubernetes (commonly referred to as "K8s") is an open-source system for automating deployment, scaling and management of containerized applications. It was originally designed by Google and donated to the Cloud Native Computing Foundation.

Kubernetes Interview Question # 4) What is Docker?

A) Docker is an open-source software development platform. Its main benefit is to package applications in "containers," allowing them to be portable among any system running the Linux operating system (OS).

Kubernetes Interview Question # 5) What is orchestration in software?

A) Application or service orchestration is the process of integrating two or more applications and/or services together to automate a process, or synchronize data in real time. Often, point-to-point integration may be used as the path of least resistance.

Kubernetes Interview Question # 6) What is a cluster in Kubernetes?

A) A container cluster is a set of master and node machines that run the Kubernetes cluster orchestration system. A container cluster is the foundation of Container Engine: the Kubernetes objects that represent your containerized applications all run on top of a cluster.

Kubernetes Interview Question # 7) What is a swarm in Docker?

A) Docker Swarm is a clustering and scheduling tool for Docker containers. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system.

Kubernetes Interview Question # 8) What is OpenShift?

A) OpenShift is Red Hat's container application platform. It is built on top of Kubernetes and adds developer and operations tooling around it.

Advanced Kubernetes Interview Questions

Kubernetes Interview Question # 9) What is a namespace in Kubernetes?

A) Namespaces are intended for use in environments with many users spread across multiple teams or projects. Namespaces are a way to divide cluster resources between multiple uses (via resource quota). In future versions of Kubernetes, objects in the same namespace will have the same access control policies by default.

Kubernetes Interview Question # 10) What is a node in Kubernetes?

A) A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or a physical machine, depending on the cluster. Each node has the services necessary to run pods and is managed by the master components. The services on a node include Docker, kubelet and kube-proxy.

Kubernetes Interview Question # 11) What is Docker and what does it do?

A) Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.

Kubernetes Interview Question # 12) What is a Heapster?

A) Heapster is a cluster-wide aggregator of monitoring and event data. It supports Kubernetes natively and works on all Kubernetes setups, including our Deis Workflow setup.

Kubernetes Interview Question # 13) Why do we use Docker?

A) Docker provides this same capability without the overhead of a virtual machine. It lets you put your environment and configuration into code and deploy it. The same Docker configuration can also be used in a variety of environments. This decouples infrastructure requirements from the application environment.

Kubernetes Interview Question # 14) What is Docker in the cloud?

A) A node is an individual Linux host used to deploy and run your applications. Docker Cloud does not provide hosting services, so all of your applications, services, and containers run on your own hosts. Your hosts can come from several different sources, including physical servers, virtual machines, or cloud providers.

Kubernetes Interview Question # 15) What is a cluster of containers?

A) A container cluster is a set of Compute Engine instances called nodes. It also creates routes for the nodes, so that containers running on the nodes can communicate with each other. The Kubernetes API server does not run on your cluster nodes. Instead, Container Engine hosts the API server.

Real-Time Kubernetes Scenario-Based Interview Questions

Kubernetes Interview Question # 16) What is the kubelet?

A) Kubelets run pods. The unit of execution that Kubernetes works with is the pod. A pod is a collection of containers that share some resources: they have a single IP, and can share volumes.

Kubernetes Interview Question # 17) What is Minikube?

A) Minikube is a tool that runs a single-node Kubernetes cluster locally, so you can try out Kubernetes or develop against it on your own machine.

Kubernetes Interview Question # 18) What is kubectl?

A) kubectl is the command-line tool for running commands against a Kubernetes cluster.

Kubernetes Interview Question # 19) What is GKE?

A) Google Container Engine (GKE) is a management and orchestration system for Docker containers and container clusters that run within Google's public cloud services. Google Container Engine is based on Kubernetes, Google's open-source container management system.

Kubernetes Interview Question # 20) What is k8s?

A) Kubernetes, also sometimes called K8s (K, eight characters, S), is an open-source orchestration framework for containerized applications that was born in the Google data centers.

Kubernetes Interview Question # 21) What is kube-proxy?

A) The Kubernetes network proxy runs on each node. Service cluster IPs and ports are currently found through Docker-links-compatible environment variables specifying ports opened by the service proxy. There is an optional addon that provides cluster DNS for these cluster IPs.

Kubernetes Interview Question # 22) Which process runs on the Kubernetes master node?

A) The kube-apiserver process runs on the Kubernetes master node.

Kubernetes Interview Question # 23) Which process runs on Kubernetes non-master nodes?

A) The kube-proxy process runs on Kubernetes non-master nodes.

Kubernetes Interview Question # 24) Which process validates and configures data for the API objects like pods and services?

A) The kube-apiserver process validates and configures data for the API objects.

Kubernetes Interview Question # 25) What is the use of kube-controller-manager?

A) kube-controller-manager embeds the core control loop, which is a non-terminating loop that regulates the state of the system.

Kubernetes Interview Question # 26) What are Kubernetes objects made up of?

A) Kubernetes objects are made up of Pod, Service and Volume.

Kubernetes Interview Question # 27) What are Kubernetes controllers?

A) Kubernetes controllers include the ReplicaSet and Deployment controllers.

Kubernetes Interview Question # 28) Where is Kubernetes cluster data stored?

A) etcd is responsible for storing Kubernetes cluster data.

Kubernetes Interview Question # 29) What is the role of kube-scheduler?

A) kube-scheduler is responsible for assigning a node to newly created pods.

Kubernetes Interview Question # 30) Which container runtimes are supported by Kubernetes?

A) Kubernetes supports the docker and rkt container runtimes.

Kubernetes Interview Question # 31) Which components interact with the Kubernetes node interface?

A) The kubectl, kubelet, and node controller components interact with the Kubernetes node interface.
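To ground a few of the answers above, here are standard kubectl commands corresponding to the node, namespace, and kubectl questions (the cluster and namespace names are hypothetical):

    # list the worker machines (nodes) in the cluster
    kubectl get nodes

    # create a namespace to divide cluster resources between teams
    kubectl create namespace team-a

    # list the pods running in that namespace
    kubectl get pods --namespace=team-a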
Examples

    scala> import cats.implicits._, cats._, cats.derived._

    scala> case class Cat[Food](food: Food, foods: List[Food])
    defined class Cat

    scala> val cat = Cat(1, List(2, 3))
    cat: Cat[Int] = Cat(1,List(2, 3))

Derive Functor

    scala> implicit val fc: Functor[Cat] = derived.semi.functor

Derive Show

The derived Show instance prints field names, which is more informative than the default toString:

    scala> case class Address(street: String, city: String, state: String)
    scala> case class ContactInfo(phoneNumber: String, address: Address)
    scala> case class People(name: String, contactInfo: ContactInfo)

    scala> val mike = People("Mike", ContactInfo("202-295-3928", Address("1 Main ST", "Chicago", "IL")))

    scala> // existing Show instance for Address
    scala> implicit val addressShow: Show[Address] =
         |   Show.show(a => s"${a.street}, ${a.city}, ${a.state}")

    scala> implicit val peopleShow: Show[People] = derived.semi.show

Note that the derived instance respects the existing Address instance.

For the sequence operations below, you should add the following to your build.sbt:

    scalacOptions += "-Ypartial-unification"

    scala> import cats.implicits._, cats.sequence._
    import cats.implicits._
    import cats.sequence._

    scala> val f1 = (_: String).length
    f1: String => Int = <function1>

    scala> val f2 = (_: String).reverse
    f2: String => String = <function1>

    scala> val f3 = (_: String).toFloat
    f3: String => Float = <function1>

    scala> val f = sequence(f1, f2, f3)
    f: String => shapeless.::[Int,shapeless.::[String,shapeless.::[Float,shapeless.HNil]]] = <function1>

    scala> f("42.0")
    res0: shapeless.::[Int,shapeless.::[String,shapeless.::[Float,shapeless.HNil]]] = 4 :: 0.24 :: 42.0 :: HNil

or generic over ADTs:

    scala> case class MyCase(a: Int, b: String, c: Float)
    defined class MyCase

    scala> val myGen = sequenceGeneric[MyCase]
    myGen: cats.sequence.sequenceGen[MyCase] = cats.sequence.SequenceOps$sequenceGen@63ae3243

    scala> val f = myGen(a = f1, b = f2, c = f3)
    f: String => MyCase = <function1>

    scala> f("42.0")
    res1: MyCase = MyCase(4,0.24,42.0)

Traverse works similarly, but you need a Poly.

Lift examples

    scala> import cats._, implicits._, lift._
    import cats._
    import implicits._
    import lift._

    scala> def foo(x: Int, y: String, z: Float) = s"$x - $y - $z"

    scala> val lifted = Applicative[Option].liftA(foo _)
    lifted: (Option[Int], Option[String], Option[Float]) => Option[String] = <function3>

    scala> lifted(Some(1), Some("a"), Some(3.2f))
    res0: Option[String] = Some(1 - a - 3.2)

Three Modes of Derivation

Kittens provides three objects for derivation: cats.derived.auto, cats.derived.cached and cats.derived.semi.

The recommended best practice is going to be a semi-auto one:

    import cats.derived

    implicit val showFoo: Show[Foo] = {
      import derived.auto.show._
      derived.semi.show
    }

This will respect all existing instances even if the field is a type constructor. For example, Show[List[A]] will use the native Show instance for List and a derived instance for A. And it manually caches the result to the val showFoo. The downside is that the user will need to write one such val for every type they directly need a Show instance for.

There are 3 alternatives:

- Full auto:

    import derived.auto.show._

The downside is that it will re-derive for every use site, which multiplies the compilation time cost.

- Full auto cached:

    import derived.cached.show._

Use this one with caution. It caches the derived instance globally, so it's only applicable if the instance is global in the application. This could be problematic for libraries, which have no control over the uniqueness of an instance on the use site. It relies on shapeless.Cached, which is buggy. Miles Sabin is working on a language-level mechanism for instance sharing.
- Manual semi:

    implicit val showFoo: Show[Foo] = derived.semi.show

It has the same downside as the recommended semi-auto practice, but it also suffers from the type-constructor field issue. That is, if a field type is a type constructor whose native instance relies on the instance of the parameter type, this approach will by default derive an instance for the type constructor one. To overcome this, users have to first derive the instance for the type parameter. E.g. given:

    case class Foo(bars: List[Bar])
    case class Bar(a: String)

Since the bars field of Foo is a List of Bar, which breaks the chain of auto derivation, you will need to derive Bar first and then Foo:

    implicit val showBar: Show[Bar] = derived.semi.show
    implicit val showFoo: Show[Foo] = derived.semi.show

:-)
American fuzzy lop is a polished and effective fuzzing tool. It has found tons of bugs and there are any number of blog posts talking about that. Here we're going to take a quick look at what it isn't good at. For example, here's a program that's trivial to crash by hand, that afl-fuzz isn't likely to crash in an amount of time you're prepared to wait:

    #include <stdlib.h>
    #include <stdio.h>

    int main(void) {
        char input[32];
        if (fgets(input, 32, stdin)) {
            long n = strtol(input, 0, 10);
            printf("%ld\n", 3 / (n + 1000000));
        }
        return 0;
    }

(By hand, entering -1000000 triggers the division by zero.) The problem, of course, is that we've asked afl-fuzz to find a needle in a haystack, and its built-in feedback mechanism does not help guide its search towards the needle. Actually there are two parts to the problem. First, something needs to recognize that divide-by-zero is a crash behavior that should be targeted. Second, the search must be guided towards inputs that result in a zero denominator. A finer-grained feedback mechanism, such as how far the denominator is from zero, would probably do the job here. Alternatively, we could switch to a different generation technology such as concolic testing, where a solver is used to generate test inputs. The real question is how to get the benefits of these techniques without making afl-fuzz into something that it isn't, and thereby making it worse at things that it is already good at.

To see why concolic testing might make things worse, consider that it spends a lot of time making solver calls instead of actually running tests: the base testing throughput is a small fraction of what you get from plain old fuzzing. A reasonable solution would be to divide up the available CPU time among strategies; for example, devote one core to concolic testing and seven to regular old afl-fuzz. Alternatively, we could wait for afl-fuzz to become stuck (not hitting any new coverage targets within the last hour, perhaps) and then switch to an alternate technique for a little while before returning to afl-fuzz. Obviously the various search strategies should share a pool of testcases so they can interact (afl-fuzz already has good support for communication among instances). Hopefully some concolic testing people will spend a bit of time bolting their tools onto afl-fuzz so we can play with these ideas.

A variant of the needle-in-a-haystack problem occurs when we have low-entropy input such as the C++ programs we would use to test a C++ compiler. Very few ASCII strings are C++ programs, and concolic testing is of very little help in generating interesting C++. The solution is to build more awareness of the structure of C++ into the testcase generator. Ideally, this would be done without requiring the somewhat elaborate effort we put into Csmith. Something like LangFuzz might be a good compromise. (Of course I am aware that afl-fuzz has been used against clang and has found plenty of ways to crash it! This is great, but the bugs being found are not the kind of semantic bugs that Csmith was designed to find. Different testcase generators solve different problems.)

So we have this great fuzzing tool that's easy to use, but it also commonly runs up against testing problems that it can't solve. A reasonable way forward will be for afl-fuzz to provide the basic framework and then we can all build extensions that help it reach into domains that it couldn't before. Perhaps Valgrind is a good analogy: it provides not only an awesome baseline tool but also a rich infrastructure for building new tools.
[Pascal Cuoq gave me some feedback on this post, including a bunch of ideas about afl-fuzz-related things that he'll hopefully blog about in the near future.]

afl seems to benefit greatly from seed test cases. I'd regard the SMT solver strategy as an automated way to find additional seeds.

Does afl have any mechanisms to steer it away from known bugs? I.e., once I have 8-10 crashes from the same bug, I'd rather not spend time generating another 100 if that time could be spent generating cases for another bug. If I need more characterization, I'd throw the first 8 at a reducer and log all the transition pairs.

bcs, I don't think so, but it does look like there's some crash triage code for trying to weed out the redundant tests later on.

Hey, although afl-fuzz isn't very modular internally [*], it actually does provide a fairly convenient way to sync coverage and test cases with other tools. It's not documented as such, and I'll try to fix that. But the mechanism used to synchronize instances of afl-fuzz actually gives you a very simple tool: pull in queue/ files from a running AFL instance, extend them, write them back to another subdirectory, and have AFL pull them in.

[*] It's been on my TODO list, but it requires ripping out and redoing most of the guts, and I'm not sure how many people would genuinely benefit from it. I always worry about giving folks too much flexibility in a setting where it's a lot easier to come up with bad ideas and settings than with good ones. Curiosity killed the cat and all =)

Hi Michal, communication through the filesystem sounds like the right solution for most use cases I can think of. Regarding internal control, one thing I'd like is to be able to turn off mutation strategies that I'm fairly certain will not help solve a particular problem (e.g., bit flips when generating C programs). BTW, when I was writing programs that afl is bad at finding bugs in, it surprised me a few times by generating test cases I didn't expect it to get in a short time.

+1 for communication through the filesystem. There is a page that documents the compatibility between AFL and LLVM's libFuzzer. Turns out this is not the 100% correct description of the current state (which is a bit more complex), but I'd love to make it correct. :)

Kostya, looks great, except I don't want to have to restart the fuzzers to get them to talk!

One stupid-simple thing AFL could do that would eventually work in this case is to harvest integer literals from the code to add to its pool of magic numbers. It's not obvious to me that this strategy would pay its way, but it seems worth trying.

Sean, I've seen that done but it seems a little too "this one weird trick" to me. But along the same lines, running strings on a binary could easily be useful in finding the tokens you should use to fuzz that binary, assuming it contains some sort of parser.

I haven't tried it, but wouldn't it be easy to hook AFL up to KLEE's zesti version? It shouldn't be hard to set up a scan that looks for new files in AFL's output directory and launches a make-test-zesti instance on them. That's basically what we did in our paper, except offline, since instead of AFL we were waiting for a delta-debugger to finish minimizing things. You could actually put in the whole pipeline we had in that paper:

1. Scan for new files arriving in AFL's output directory.
2. When you see one, minimize it by coverage (AFL already does some kind of minimization on request, so you could use that or plain old Zeller DD, though Zeller will require more work since I think AFL "knows" what a crash and a valid minimization are, etc., but you might want more semantics at work in your minimization).

3. Drop it in the KLEE-zesti queue, perhaps prioritizing by an FPF ranking like we did in the paper if AFL is getting ahead of KLEE (which seems plausible even if we give KLEE several cores).

I haven't played with AFL enough to know: does it handle closing the feedback loop? Can I drop new files into the AFL input testcase directory and expect AFL to notice them and add them to a running fuzzer instance?

Alex, see Michal's comment above; sounds like the documentation for syncing up testcases is coming soon.

Is Zesti the right tool here? How does it differ from KLEE?

zesti's a bit of "KLEE made easy" if you have existing test cases lying around to base your symbolic exploration on. I haven't directly played with it myself, just KLEE, but Super found using it for the ISSTA paper and his thesis work since then quite pleasant, I think. For existing test cases vs. building a harness, I think it may be The Right Thing right now. Given what KLEE and company are doing, saying "elaborate on this existing test case" vs. "let me right a harness with symbolic holes" definitely has some nice aspects.

er. "write" a harness; you can tell I'm describing how to determine if a specification is right in another window...
https://blog.regehr.org/archives/1238
It is really easy to send emails using gmail and Python ()

I just want to know how to use this as a sublime plugin.

I'll make it for you but I won't be home for ~3-4 hours.

Oh, it would be cool if you could email the current file as an attachment! Might make it useful.

whoops, forgot to do this. I'll get around to it, though.

I am trying here with no success. Which method do I have to use to get the selected text? self.view.sel()? I wanted to have this: in a dialog I put subject, to, cc, bcc, and a checkbox if I want to include the file as an attachment. Even if a dialog is not possible, this could be set in the text with special markup {{G:to:cc:bcc:subject:True,False}}. Any idea?

self.view.substr(self.view.sel()[0]) gets you the currently selected text. While you can't have a custom dialog box with multiple fields, you can use an input panel.

Hey guys. I don't know if this has been answered or not, or whether the guy sent around the plugin. Here is the plugin that does JUST THAT. Upon pressing ctrl+alt+g it opens up a small input panel where you can enter the destination address and send the selected text:

Google code: code.google.com/p/sublime-gmail-plugin/
Github: github.com/Skarlso/SublimeGmailPlugin

Hope that helped. Cheers, Skarlso

Sorry to necropost but here goes: I find this plugin very useful. I'd like to be able to add an input panel that allows you to customize/enter the subject line. How should I go about doing that?

Try this (a small change to the gmail.py code from here):

    import sublime, sublime_plugin, smtplib

    class GmailCommand(sublime_plugin.TextCommand):
        to = "example@gmail.com"
        text = "selected text"

        def on_done(self, to, text):
            self.view.window().show_input_panel("Subject:", 'Sent from SublimeText',
                lambda s, content=text, recipient=to: self.send_gmail(recipient, content, s),
                None, None)

        def send_gmail(self, to, text, subject):
            gmail_user = "your@gmail.com"
            gmail_pwd = "yourpassword"
            FROM = 'your@gmail.com'
            TO = ['%s' % to]  # from input; sendmail() expects a list
            SUBJECT = subject
            TEXT = text
            # The SMTP send itself was garbled out of the archived post; the
            # lines below are a plausible reconstruction of the usual smtplib
            # boilerplate the try/except implies.
            message = 'From: %s\nTo: %s\nSubject: %s\n\n%s' % (FROM, ', '.join(TO), SUBJECT, TEXT)
            try:
                server = smtplib.SMTP("smtp.gmail.com", 587)
                server.starttls()
                server.login(gmail_user, gmail_pwd)
                server.sendmail(FROM, TO, message)
                server.close()
                sublime.status_message("Email sent successfully to: %s" % to)
            except:
                sublime.status_message("There was an error sending the email to: %s " % to)

        def run(self, edit):
            for region in self.view.sel():
                if not region.empty():
                    # Get the selected text
                    text = self.view.substr(region)
                    self.view.window().show_input_panel("To:", 'email@gmail.com',
                        lambda s, content=text: self.on_done(s, content), None, None)

Note: I cannot test this as smtp access is blocked at work, but I only changed the on_done function and the signature and subject assignment in the send_gmail function (so it should work as well as it did before).

Thanks for the reply! No, it doesn't seem to work. I figured I needed to change the on_done function. I'm more used to vanilla C programming for microcontrollers and object-oriented programming seems a bit confusing for me... Especially the lambda function and the s variable, as it doesn't seem to be defined. How exactly does self.view.window().show_input_panel("Subject:", 'Sent from SublimeText', lambda s, content=text, recipient=to: self.send_gmail(recipient, content, s), None, None) get the subject string so that it can be attributed to the SUBJECT variable?

Yeeeah... Sorry... It actually DOES work. I just kept forgetting to actually input my username/password. Sorry...

Cool. Glad to help.

I'm sorry I have to necropost this thread too, but there is apparently an issue with special characters with this plugin.
I used the above code (which allows setting an email subject), but I will always get an error when using special characters, at least the "ç". I have done several trials to identify that this was what caused the error. Any idea on how to solve that?

Also, a few minor issues:
- How can I change the display name of the emails I send? I tried to change the "From:" in the gmail.py to "Firstname Name", but it will still show the username of my Gmail address (which is "firstname.name").
- Any way to send an email in "thread" mode, where all messages with the same subject are put in the same conversation?
- How should I proceed to send an email to multiple recipients? Should I use "," or ";" as a separator, and is it supported?

Many thanks in advance for the answers.

1) Not sure about the unicode issue... I'll have to look closer.
2) To set the display name, use the following format for the "from": Display Name <address@gmail.com>
3) An email will be included in a thread if the subjects are the same (ignoring FW/RE prefixes) and the sender is part of the thread: sensefulsolutions.com/2010/0 ... gmail.html
4) You have to send a list rather than a string: ["first@recipient.com", "second@recipient.com", "third@recipient.com"]

I'll see if I can whip something together to help with this...

Thanks a lot for the detailed answer! Hope you'll find something about that; I think accented characters also cause the issue, so that's a big one for my language.

Thanks. I've just tried that but it still seems to show my username instead of the display name. I was using an alias in Gmail, so I disabled it and put my Gmail address as default sender again, but still the same issue. Same here: it apparently does not work! Does it, on your side? Thanks a lot!

Unfortunately I've never actually used the plugin since gmail is blocked from my work computer... I just thought I could help answer your questions.

2) Try it with quotes around the name: "Firstname Name" <example@gmail.com> (although this won't work if you have unicode characters in your name - you need to encode the name part of the string if you want to do that)

It looks like creating emails in python that contain unicode is a bit difficult. I did some research and tried some things out, but it's tough when I can't actually test the sending code. Even so, I started running into problems as soon as I started using unicode characters... It's very possible that my errors are a result of my limited python knowledge - I seem to frequently have issues handling unicode in my plugins. If I get some time later I'll give it another shot and see if I can get any further.

Many thanks again for your time Jbjornson. The only form that worked for me was:

FROM = '"Firstname Name" <firstname.name@gmail.com>'

Any other use of the quotes disabled the plugin (Ctrl+Shift+G wouldn't work anymore), like:

FROM = ''Firstname Name' <firstname.name@gmail.com>'
FROM = 'Firstname Name' <firstname.name@gmail.com>

Don't bother too much with the display name if you don't find anything about the unicode issue; that is by far the most problematic one because I cannot use the plugin without unicode characters anyway. Display name and other details can be discussed later if that big one gets solved. I am sorry, I don't code - there is no way I can help you with python, I'm afraid.

Sorry for the delay... I got distracted with actual work. I tried to implement support for unicode. I'm not sure if it works (gmail is blocked at work)... can you test it out and give some feedback?
I also added some functionality to allow you to be prompted for any/all of the mail fields, but it requires a bit of configuration (see the TODOs at the top of the file). Check it out here: github.com/jbjornson/SublimeGMa ... r/GMail.py

If there are issues, maybe you can open an issue on the github repository rather than boring the people in the sublime forum here: github.com/jbjornson/SublimeGMail/issues

If it works I'll split the configuration into a settings file and look into adding other features...

This would be excellent! I'll try to find a solution; it would be very useful to me.
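For what it's worth, the unicode and display-name problems discussed above can probably be attacked with the stdlib email package. The sketch below is untested, and the function and variable names are our own, so treat it as a starting point rather than a confirmed fix:

    import smtplib
    from email.mime.text import MIMEText
    from email.header import Header
    from email.utils import formataddr

    def send_gmail(user, password, to_list, subject, body, display_name):
        # An explicit charset is what handles accented characters ("ç", etc.)
        msg = MIMEText(body, 'plain', 'utf-8')
        msg['Subject'] = Header(subject, 'utf-8')
        # formataddr() quotes/encodes the display name correctly
        msg['From'] = formataddr((display_name, user))
        msg['To'] = ', '.join(to_list)  # the header is a single string...

        server = smtplib.SMTP('smtp.gmail.com', 587)
        server.starttls()
        server.login(user, password)
        server.sendmail(user, to_list, msg.as_string())  # ...but sendmail() wants a list
        server.quit()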
https://forum.sublimetext.com/t/is-there-a-sublimegmail-plugin/4448/8
From: David Abrahams (abrahams_at_[hidden])
Date: 2000-09-19 13:11:00

----- Original Message -----
From: <jsiek_at_[hidden]>
To: <boost_at_[hidden]>
Sent: Tuesday, September 19, 2000 11:44 AM
Subject: [boost] Graph stuff

> Hi Dave,
>
> There's three things you point out:
>
> 1. replacing 'plugin' with 'property' and changing the order of
> the tag and value type.
> This seems reasonable to me.
>
> 2. Not using an extra typedef.
> Of course we don't have control over how users will do this,
> but I can certainly change the examples to use this style.
>
> 3. Using 'weight' instead of 'weight_tag'.
> I'm hesitant because this will cause annoying name conflicts. I like
> to use 'weight' for local variables. We've been using 'S' at
> the end of the names of selectors in the interface, perhaps
> 'weightS'?

The 'S' rubs me the wrong way also; it goes against my personal rule "no non-standard abbrvs" ;) To follow boost conventions (?) it ought to be vec_s, but I would prefer select_vector or use_vector. Isn't it really there to make up for the lack of template template parameters? If so, I would hesitate to use the same convention for weight, which will never be a TTP.

Your local variable conflict shouldn't be an issue for users who abstain from "using namespace boost" and "using boost::weight"... but the latter construct probably ought to be given consideration.

If there is room to experiment, consider the following ideas:

// cumbersome, inflexible
properties< std::pair<int, weight_property>,
            std::pair<bool, visited_property>, ... >

property<int, weights>         // s just pluralizes
declare<int, weight_property>  // cumbersome?
property<int, weight_tag>      // back to using _tag. Maybe that's OK if you don't use "plugin"
property<int, weight_>         // too easy to lose the underscore
inject<int, weight_property>   // obscure?
add<int, weight_property>      // add is probably too general a name
property<int, edge_weight>     // nobody puts weights on nodes anyway
property<int, edge_color>      // I realize there are some orthogonality issues here...
property<int, node_color>      // ...but it may actually make the declaration of the graph a lot clearer

Any of these things strike your fancy or set off a spark?

-Dave
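(For readers puzzled by the TTP aside: the selector idiom being debated works roughly like the sketch below. The names are illustrative, not the actual Boost interface of the time; the point is that an ordinary tag type plus a traits specialization stands in for a template template parameter.)

    #include <list>
    #include <vector>

    struct use_vector {};   // selector tags
    struct use_list {};

    template <class Selector, class T> struct container_gen;

    template <class T>
    struct container_gen<use_vector, T> { typedef std::vector<T> type; };

    template <class T>
    struct container_gen<use_list, T> { typedef std::list<T> type; };

    // A graph class can then pick its storage from a plain type parameter
    // instead of a template template parameter:
    template <class EdgeListS>
    class graph {
        typename container_gen<EdgeListS, int>::type edges_;
    };

    graph<use_vector> g1;   // vector-backed
    graph<use_list>   g2;   // list-backed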
https://lists.boost.org/Archives/boost/2000/09/5258.php
Mixing C and C++/Qt code

// sum.h
#ifdef SUM_H
extern "C" {
#endif

int mySum(int a, int b);

#ifdef SUM_H
}
#endif // SUM_H

// sum.c
#include "sum.h"

int mySum(int a, int b)
{
    return a + b;
}

// main.cpp
#include <QtCore/QCoreApplication>
#include <QDebug>
#include "sum.h"

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    int i = 5;
    int j = 6;
    int k;
    k = mySum(i,j);
    qDebug() << "this sum is " << k;
    return a.exec();
}

But it fails with:

/home/lovebingheji/qt/test-build-desktop-Qt_4_6_2_in_PATH__System__Release/../test/main.cpp:13: error: undefined reference to `mySum(int, int)'

But when I add #include "sum.c" in main.cpp, it works:

#include <QtCore/QCoreApplication>
#include <QDebug>
#include "sum.h"
#include "sum.c"

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    int i = 5;
    int j = 6;
    int k;
    k = mySum(i,j);
    qDebug() << "this sum is " << k;
    return a.exec();
}

Why? I just want main.cpp to include "sum.h", not "sum.c". What should I do? I created a Qt console application project. Please help me, thanks very much.

Welcome to the forums. Some housekeeping first:
- wrap code in @-tags or use the editor's button. It makes your code look nice and adds some pretty printing. I've added them for you, but please format it yourself next time.
- adding your question to quite old threads (the original thread [qt-project.org] dates back to October last year) isn't that useful; better to open a new thread with a decent topic. I've split your question off to a new thread for that reason.
- begging for private answers usually leads to no answers at all. This is a public forum, with everyone answering in their spare time. We do not do private support. Read for some explanations.

Regarding your actual question: the sum.h header file is plain wrong. Use this one:

#ifndef SUM_H
#define SUM_H

#ifdef __cplusplus
extern "C" {
#endif

int mySum(int a, int b);

#ifdef __cplusplus
}
#endif

#endif // SUM_H

And then make sure that your sum.c implementation file is actually added to the list of source files and gets compiled and linked to your application; a sketch of the relevant .pro entries follows below.
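In qmake terms, that last point looks roughly like this. The .pro file below is a guess at the poster's project file (file and target names are assumptions), but the key is that sum.c appears in SOURCES:

    # test.pro - hypothetical project file for the thread above
    QT       -= gui
    TARGET    = test
    CONFIG   += console
    TEMPLATE  = app

    HEADERS  += sum.h
    SOURCES  += main.cpp \
                sum.c    # without this line the linker never sees mySum()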
http://qt-project.org/forums/viewthread/17546/
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

On Fri, 23 Feb 2001, Benjamin Kosnik wrote:

> > The Sun WS6 compiler supplies its own standard C++ header files which do nothing but include the /usr/include/iso
> > header files directly.
>
> This is what the include/c headers were supposed to do. They might not be
> in the correct form at the moment to be useful though.

I understand what you were trying to do with the include/c headers, but it will never work in their current form with a compliant implementation. They simply include the deprecated C headers. Annex D section 1.5.2 of the C++ standard says, in substance, that the deprecated <name.h> headers place their names in the global namespace. That means just passing the contents of these headers through as-is will introduce name conflicts and defeat the entire purpose of putting the <c..> headers in the std namespace. The only way they would work is if the C headers put everything in namespace std and only there.

> ...or this. Perhaps you are right, we need a fourth. (Please use the
> naming conventions in the c_shadow directory though for the namespaces.) If
> these solaris headers are in fact correct, then doing this should be relatively
> painless, and your problems are indicative of v3 trying to smash a square
> peg into a round hole. Let's not do that.
>
> Can you work up this last (Option/Solution 4) and see if that solves this
> insanity?

I can certainly work this up. Unfortunately I seem to have discovered a possible bug in the compiler that seems to prevent certain key header files from being wrapped in a namespace, so any results will have to wait until that issue is resolved.

Stephen M. Webb
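To make the problem concrete, here is an illustrative fragment (not actual libstdc++ or Sun code) showing the difference between merely forwarding and a conforming wrapper:

    // Non-conforming: a <cstdlib> that merely forwards leaves every name
    // in the global namespace and nothing in std:
    //     #include <stdlib.h>   // ::atoi, ::abort, ... but no std::atoi
    //
    // Conforming (one possible shape): hoist the names into std explicitly.
    #include <stdlib.h>

    namespace std {
        using ::atoi;
        using ::abort;
        using ::malloc;
        // ...and so on, for every name the header owns
    }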
http://gcc.gnu.org/ml/libstdc++/2001-02/msg00314.html
Java Constructor – An Exclusive Guide on Constructors

In this Java tutorial, we are going to discuss everything that you must know about a constructor in Java. A constructor in Java is a block of code that creates an object; we can also call it an object builder. Constructors are similar to methods in Java, but they differ from methods in that they do not have a return type. In this article, we will learn what a constructor is, the need for constructors, their types, and the rules for writing constructors in Java. We will also cover some other topics like constructor overloading and constructor chaining, and we will see how methods differ from constructors in Java.

Before learning about Java constructors, it is recommended that you first take a quick sneak peek at Java Methods to clear your basics with TechVidvan.

Constructor in Java

"A constructor is a member function which has the same name as its class and is used to initialize the object of that class type with a legal initial value."

A constructor is a member function of a class that is called to initialize objects when we create an object of that class. It is a special type of method that instantiates a newly created object; just after the memory allocation for the object takes place, the constructor is called. The name of the constructor is the same as that of the class, and its primary job is to initialize the object with a legal initial value for the class. It is not necessary for Java coders to write a constructor for a class.

Get familiar with the concept of Java Classes and Objects with TechVidvan.

Note: When we create an object of a class, at least one constructor is called. If we do not write any constructor in the class, then the default constructor is called.

Need for Java Constructor

We use constructors when we want to assign values to the class variables at the time of object creation. To understand the importance of constructors, let's take an example. Suppose we create a class named Apple; it will have some class variables like shape, color, and taste. When we create an object of the class Apple, it will reside in the computer's memory. Can we define an Apple with no value defined for its properties? We certainly cannot. Constructors allow us to define the values while creating the objects.

We can create a constructor explicitly through programming, and if we do not define it explicitly, the Java compiler itself defines the default constructor.

Rules for Writing Constructors in Java

Following are some rules for writing constructors in Java:
- The name of the constructor must be the same as the name of its class.
- A constructor must have no return type; it cannot have even void as its return type.
- We can use access modifiers with a constructor to control its access, so that other classes can call the constructor.
- We can not declare a constructor as final, static, abstract or synchronized.

Dive a little deeper into the basic concept of Access Modifiers in Java.

The Syntax for Writing a Constructor

A constructor has the same name as the class, and we can write a constructor as follows:

public class MyClass
{
  //This is the constructor
  MyClass()
  {
    //Constructor body
  }
  ...
}

Note that the constructor name matches the class name and it has no return type.

How does a constructor work in Java?

Let's take an example to understand the working of a constructor. Suppose we have a class named MyClass.
When we initialize or create the object of MyClass, it looks like this:

MyClass obj = new MyClass();

In the above line, the new keyword creates the object of class MyClass and invokes the constructor to initialize this newly created object.

Types of Constructor in Java

There are two types of constructors in Java, which are:
- Default Constructor
- Parameterized Constructor

Let's discuss each of them with examples:

1. Default Constructor

A default constructor is a constructor with no parameters. The Java compiler automatically creates a default constructor if we do not write any constructor in our program. This default constructor is not present in your source code or the java file, as the compiler automatically puts it into the Java code during the compilation process; therefore we can't find it in our java file, rather it exists in the bytecode or .class file.

If we do not provide a user-defined constructor in a class, the compiler initializes the member variables to their default values, such as:
- numeric data types set to 0
- char data types set to a null character ('\0')
- reference variables set to null

Code to understand Default Constructors:

package com.techvidvan.constructors;

class TechVidvan
{
  int number;
  String name;

  TechVidvan()
  {
    System.out.println("Default Constructor called");
  }
}

public class DefaultConstructor
{
  public static void main(String[] args)
  {
    TechVidvan object = new TechVidvan();
    System.out.println(object.name);
    System.out.println(object.number);
  }
}

Output:

Default Constructor called
null
0

Note: If you implement any constructor, then the Java compiler will no longer provide a default constructor.

2. Parameterized Constructor

A parameterized constructor is a constructor with a specific number of parameters. We use a parameterized constructor mainly to initialize the members of the class with different values or objects.

Code to understand Parameterized Constructors:

package com.techvidvan.constructors;

class TechVidvan
{
  String name;
  int id;

  //Creating a parameterized constructor
  TechVidvan(String name, int id)
  {
    this.name = name;
    this.id = id;
  }
}

public class ParamaterizedConstructor
{
  public static void main(String[] args)
  {
    TechVidvan object = new TechVidvan("Raj", 16);
    System.out.println("Name: " + object.name);
    System.out.println("id: " + object.id);

    TechVidvan object1 = new TechVidvan("Shivani", 24);
    System.out.println("Name: " + object1.name);
    System.out.println("id: " + object1.id);
  }
}

Output:

Name: Raj
id: 16
Name: Shivani
id: 24

Constructor Chaining in Java

Constructor chaining in Java is a process in which a constructor calls another constructor of the same class on the current object. The concept of constructor chaining helps to pass parameters through different constructors, but with the same object. A short sketch follows below.
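Code to understand Constructor Chaining (the Employee class below is our own minimal sketch, not from the original tutorial; note that the this() call must be the first statement in a constructor):

class Employee
{
  String name;
  int id;

  Employee()
  {
    this("Unknown");       //chains to Employee(String)
  }

  Employee(String name)
  {
    this(name, -1);        //chains to Employee(String, int)
  }

  Employee(String name, int id)  //end of the chain
  {
    this.name = name;
    this.id = id;
  }
}

Creating an object with new Employee() runs all three constructors on the same object, passing the parameters down the chain.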
Constructor Overloading in Java – Multiple Constructors for a Java Class

Overloading generally means "to have multiple instances of the same thing". Constructor overloading in Java is the practice of having more than one constructor with different parameter lists. It allows a constructor to behave differently and perform a different task depending on its parameters. Constructor overloading is the same as method overloading in Java.

Code to understand Constructor Overloading in Java:

package com.techvidvan.constructors;

class TechVidvan
{
  TechVidvan(String name)
  {
    System.out.println("Constructor with one parameter: String: ");
    System.out.println("Name: " + name);
  }

  TechVidvan(String name, int age)
  {
    System.out.println("Constructor with two parameters: String and Integer: ");
    System.out.println("Name: " + name);
    System.out.println("Age: " + age);
  }

  TechVidvan(long id)
  {
    System.out.println("Constructor with one parameter: Long: ");
    System.out.println("id: " + id);
  }
}

public class ConstructorOverloading
{
  public static void main(String[] args)
  {
    TechVidvan ObjectName = new TechVidvan("Sameer");
    TechVidvan ObjectName1 = new TechVidvan("Neeraj", 25);
    TechVidvan ObjectName2 = new TechVidvan(235784567);
  }
}

Output:

Constructor with one parameter: String:
Name: Sameer
Constructor with two parameters: String and Integer:
Name: Neeraj
Age: 25
Constructor with one parameter: Long:
id: 235784567

Constructor and Inheritance: the super() Keyword

The compiler implicitly invokes the constructor of the parent class whenever we invoke a constructor of its child class. To do this, the Java compiler inserts a super() call at the beginning of the child class constructor.

In the below code, we call the child class constructor, but the parent class constructor runs first and then the child class constructor executes. Because the compiler puts the super keyword at the beginning of the child class constructor, control first moves to the parent class constructor and then to the child class constructor.

class Parent
{
  Parent()
  {
    System.out.println("Parent Class Constructor");
  }
}

class Child extends Parent
{
  Child()
  {
    System.out.println("Child Class Constructor");
  }

  public static void main(String args[])
  {
    new Child();
  }
}

Output:

Parent Class Constructor
Child Class Constructor

Difference Between Constructor and Method in Java

The following points explain the difference between a constructor and a method in Java:
- A constructor is a block of code that instantiates a newly created object, while a method is a set of statements that returns a value depending upon its execution.
- The name of a constructor must be the same as the class name; the name of a method should not be the same as the class name.
- Constructors are called implicitly, while we call methods explicitly.
- A constructor has no return type, not even void, but a method must have a return type.
- The compiler automatically creates a constructor if there is no constructor in the class, but there is no default method provided by the compiler.
- We can override a method, but we can't override a constructor.

Important Points

- Constructors are called implicitly when we instantiate objects with the new operator.
- The two rules for creating a constructor are:
  - The name of a Java constructor must exactly match the class name.
  - A Java constructor must not have a return type.
- If there is no constructor in a class, then the Java compiler automatically creates a default constructor during compilation.
- We can't declare constructors as abstract, synchronized, static or final.
- We can overload a constructor but we can't override a constructor.
- Every class has a constructor, whether it is a concrete class or an abstract class.
- A constructor can use any access specifier.
- Interfaces can not have constructors.

Summary

Constructors are useful to instantiate an object.
They are similar to methods but have some differences, which we covered in this article. That was all about Java constructors. Coming to the end of this article, we learned how to create a constructor and how it works. We discussed the importance of constructors, covered the two types of constructors in Java with examples, and saw how to overload a constructor. We also studied constructor chaining briefly. This article will surely help you sharpen your concepts of Java constructors.

Thank you for reading our article. Do share your feedback through the comment section below. Happy Learning 🙂
https://techvidvan.com/tutorials/java-constructor/
I've been asked to look at versioning on PI - specifically, how we maintain multiple versions of a single interface for our various releases. I've researched the options, run a few tests, and my conclusions are below, but I would appreciate your feedback.

Options

1) Create a new SWCV. According to SAP's documentation, "a software component version is a shipment unit for design objects in the ES Repository. You use a SWCV to group together objects that are shipped or installed together." SAP uses this approach for the majority of their standard content. You can't copy objects between different releases of a SWCV; you have to use the Transfer Design Objects function (ESR menu, under Tools). It's a neat tool that will transfer objects only if the source objects are more recent or the target objects don't exist. You can preview the changes and it will tell you what will be updated (source version is more recent), what can't be transferred (the target version is more recent) and what doesn't need to be transferred (the versions are the same). This is appropriate for deploying different versions of a particular software package but doesn't allow us to use both versions. The majority of the ID objects refer to the interface/namespace and not the SWCV. If you try to add two identical interfaces from separate SWCVs to the same Communication Component, you get an error.

2) Create a new namespace with a version number. SAP has used this approach for the NFE updates and it works.

3) Ensure that the interface is backwards compatible. Backwards compatibility isn't always easy to achieve. The functional requirements we are given evolve over time, and what seems to be a simple change with no impact may end up changing things significantly.

4) Add a version number to the object names. Not the most elegant approach, but it would work.

5) Stand up a separate PI landscape. Not my preferred approach, as it introduces additional costs, maintenance, monitoring and development.

Summary

Using different SWCVs allows new releases to be created and deployed, but you can only have a single release active at any moment in time. Using namespaces allows multiple releases to be deployed. If you look at the SWCV for NFE, SAP SLL-NFE, you can see they've taken this approach. I've checked the SAP support sites and can't find anything on versioning.

My preferred option is number 2. Using a different namespace works, but it will require the calling system to update its configuration to support this.

You have to take into account the landscape and how it works, but 2 and 4 are probably the easiest ways to have versioning.

Hello Robert,

Thanks for a blog on that topic; I think it deserves some attention. I actually prefer option 1, since the restriction you mention usually does not apply. It depends on your landscape, of course, but in the installations I have seen there is only one application per system using the interface, and either they move to the new version (in that case you switch the configuration for that system) or they keep using the old one. I found this is also a quite elegant way to support project and maintenance landscapes with just one PI system. The advantage over option 2 is that you keep your SWCV clean: you don't drag along old versions and can delete old ones more cleanly.

Regards, Jörg

Jörg, many thanks for your comments.
I prefer option 1 too, but if we have a global application and are dependent on the local businesses to upgrade, we will need to have multiple versions deployed for a period of time. I agree, option 1 is very clean and easy to manage; however, in this situation it wouldn't work. I have recently added some more information in a follow-up post (part 2) that may be worth reading.

Regards, Rob Warde

Hello Robert, thanks, I've read both parts before posting here. My point is that in most cases an interface is only consumed or provided once on an ABAP system, no matter if it is used locally or globally. If that is the case, you can perfectly well use option 1. If that is not the case - so if your interface is called from multiple applications on the same system - then you're in trouble with that approach, that is true. However, I would always recommend option 1 if possible, and if not, go for option 2. But the main mind-shift is to think in interface lifecycles at all. This is my experience so far. People know build and run, but usually not decommission, or even how to evolve their interfaces. This is why I like a blog on that topic. Keep it up!

Regards, Jörg

Hi Robert, nice blog, thanks for sharing.

Regards, Yomesh

What is NFE?

Hi, this is a process specific to Brazil. It helps companies comply with the Brazilian requirements for electronic invoicing.

Regards, Rob Warde
https://blogs.sap.com/2013/01/25/versioning-in-sap-pi/
Am I missing something? (Score:5, Insightful)
Are they kicking & screaming about it being a private account or something? I mean, it doesn't sound like they are hiding anything by publicly asking people to use it to contact them temporarily.

Re:Am I missing something? (Score:5, Insightful)
Seriously though, I found this to be perhaps the least interesting ./ item ever

Re:Am I missing something? (Score:5, Insightful)
"Seriously though, I found this to be perhaps the least interesting ./ item ever"
You haven't been to the idle section, have you?

Re:Am I missing something? (Score:5, Funny)
No I haven't, is it pants?

Re:Am I missing something? (Score:5, Insightful)
______ - insert whichever politician you dislike: McCain, Palin, or Obama.
"It's not a great idea to run a government using web e-mail accounts. That's the word from experts, anyway, reacting to news that ______________ used web e-mail. The practice is dangerous, said experts, and can run counter to laws ensuring government is open and accountable. _______'s use of the private account to discuss public business - a practice reportedly shared by top aides - also raised concerns from open-government advocates, who fear the practice could impede the spirit of laws designed to preserve government communications and documents. Recently, the office has fought to withhold some emails from public release, saying they were exempt from disclosure because state law protected certain categories of communication, such as those related to the 'deliberative process.' ______'s email habits echo the worst practices of the Bush administration. 'Maybe they did it because they thought the records wouldn't be disclosed,' said Fuchs. 'That raises possible destruction-of-evidence issues - if they expected litigation.'" [go.com]

Re:Am I missing something?
You can choose to believe that Obama is somehow different from every other politician in Washington if you so choose, but it is pure ignorance to assume that EVERYONE in his administration, from Cabinet members to secretaries for the secretary's secretary, is just as noble.

Re:Am I missing something? (Score:4, Funny)
How is this funny? It's more informative. It's highlighting the double standard that exists on this site for Bush v. Obama, or more generally Republicans v. Democrats. Look at the article -- the Republicans use Yahoo!, the Democrats use Google. Of course /. comes out in favour of the Democrats when there's such a clear and significant issue dividing them! By the way, isn't a majority endorsing the Democrat position an accurate reflection of opinion in the USA as a whole? Maybe they should put it to a vote or something to find out.

Re: (Score:3, Insightful)
"By the way, isn't a majority endorsing the Democrat position an accurate reflection of opinion in the USA as a whole?"
There is a world of difference between being, or endorsing, a Democrat and willingly letting one side slide for doing the same thing you slam the opponents for. As I said in my post, I'm a Republican and I was furious with the Bush Administration for hiding official communications behind RNC email addresses. Regardless of your party affiliation, you should have certain lines that divide "ok" from "not ok", and they should apply equally to everyone. Obviously there is room for grey area and interpretation.

Re:Am I missing something? (Score:5, Insightful)
They ANNOUNCED the fucking addresses. OF COURSE they knew it would be reported. The Bush staff had government accounts and chose to use RNC ones specifically to avoid oversight. And they did it for YEARS.

Re:Am I missing something? (Score:4, Informative)
"The Bush staff had government accounts and chose to use RNC ones specifically to avoid oversight. And they did it for YEARS"
Did you skip this line when reading my post? "As a Republican I was just as upset about the Bush administration trying to hide official communications behind RNC email addresses."

Re: (Score:3, Informative)
"Did you skip this line when reading my post?"
No. But you seemed to be saying that the day or two some Obama staff were using webmail, openly, because they had no official accounts yet, was comparable to the years that Bush staff covertly used non-.gov accounts.

Re:
And it's being reported why, exactly? Possibly because they announced the fact? And with what motivation? In order to be somehow publicly accountable, maybe? But back to the summary: comparing this to Palin's gaffe is a red herring. Palin sought to conduct business with a private email address to avoid accountability. She also used a service which provides a method for the informed to take over any account -- even your pet's name is guessable. In fact, the only similarity is that the two services are in compet…

Re:Am I missing something? (Score:4, Informative)
I dunno, seems like the real story is how backwards the White House is technologically. A few quotes from the Washington Post story: "It is kind of like going from an Xbox to an Atari," Obama spokesman Bill Burton said of his new digs. And: […] And finally: […] This doesn't seem to have much to do with trying to circumvent any sort of records keeping, but rather a way to function for a few days while a #&$%@# up system is worked out. Though I admit, I would be more suspicious of the last president doing this than the current one, but I suspect with the last guy we wouldn't have heard about it for 3 years until a whistleblower leaked it.

Re: (Score:3, Insightful)
"'Maybe they did it because they thought the records wouldn't be disclosed,' said Fuchs. 'That raises possible destruction-of-evidence issues - if they expected litigation.'"
And how exactly does this apply to TEMPORARY e-mail addresses used for a day until they got their White House accounts working? Hmmm?

Re: (Score:3, Insightful)
They want it both ways. They want secure email to block spies, but also want it to be stored someplace for later usage in a trial against the president. With a webmail account, they have neither.

Re:Am I missing something? (Score:4, Interesting)
I agree, but I can see a scenario someday whereby someone files a Freedom of Information Act request to Google. Must they comply?

Re:Am I missing something? (Score:4, Insightful)
"I agree, but I can see a scenario someday whereby someone files a Freedom of Information Act request to Google. Must they comply?"
Firstly, something tells me that 99.999% of emails to/from staffers directed to this account on this particular day was logistical/planning. Secondly, unlike the Bush/RNC, they aren't going to continue using the accounts in an effort to hide anything. Thirdly, Obama has already made it clear that this White House is going to be much more transparent. Finally, pretty sure FOIA would be served to the White House, not Google. His answer, should someone want the emails: "pfft. Take them."

Re: (Score:2, Insightful)
"unlike the Bush/RNC, they aren't going to continue using the accounts in an effort to hide anything. Thirdly, Obama has already made it clear that this White House is going to be much more transparent."
From: "Although President Obama swept into office pledging transparency and a new air of openness, the press hammered spokesman Robert Gibbs for nearly an hour over a slate of perceived secretive slights that have piled up…"

Re: (Score:3, Insightful)
You do know that the Washington Times [wikipedia.org] is a Moonie newspaper, right?

Re: (Score:3, Insightful)
"Thirdly, Obama has already made it clear that this White House is going to be much more transparent."
Your faith in a politician's ability to follow through with things they say is... naive, at best.

Re:Am I missing something? (Score:5, Informative)
You can always track [politifact.com] his campaign promises. As of right now, 7 are kept, 1 stalled, 14 in the works, and no status on 488. Not a bad start after 3 days.

Re: (Score:3, Interesting)
"You can always track [politifact.com] his campaign promises. As of right now, 7 are kept, 1 stalled, 14 in the works, and no status on 488. Not a bad start after 3 days."
Holy shit, did you take a look at the promise that was stalled? It reads, "During 2009 and 2010, existing businesses will receive a $3,000 refundable tax credit for each additional full-time employee hired." This is a bit of a conspiracy theory, but... companies like Microsoft and IBM actually reported quarterly profits (not losses), but didn't meet expectations. You think they might be exaggerating their condition and going with mass layoffs in anticipation of that tax credit? They would get to hire i…

Re: (Score:3, Interesting)
Is this a cleverly crafted example of word salad, or is it a Google translation?

Re: (Score:3, Informative)
oh my god will_die please stop, reading your posts is causing a headache of extreme nature that you must understand. first i thought you were posting this way on purpose as some kind of inverse meta-meta-irony to another poster but now i see it is your style and it hurts. You do see what it is that is wrong with your posts and are doing it on purpose, correct? There is considerable risk of damage to the space-time continuum if you persist.

Re: (Score:3, Funny)
Why should I trust politifact.com? They can't even add! (7+1+14+488=500? Not when I went to school!)

Apparently, you missed the meaning of the word "about" in school too. "PolitiFact has compiled about 500 promises that Barack Obama made..."

Re:Am I missing something? (Score:5, Insightful)
It's frightening that you take a politician *especially one from the Chicago political machine* at his word..

I wouldn't either, but in this case the Executive Orders he's been signing (particularly the one about FOIA requests) in the last couple of days indicate that he's prepared to back that one up with some action.

Re: (Score:3, Insightful)
I think the promise of transparency is one that needs to be watched very closely.

Chicago Machine vs Obama... (Score:3, Interesting)
Actually, if you look at the Obama crowd, they (Jarret, Axelrod, etc.) are from the UofC/Hyde Park/Harold Washington Party crowd -- the folks that beat the Machine in Chicago, at least for a while. You could argue that since then, a new and bigger Machine has evolved, I suppose, but I don't think that would be accurate.

Re:Am I missing something? (Score:5, Funny)
Slashdot.... where naivete meets rampant paranoia and cynicism.

Re: (Score:3, Insightful)
On his first day, Obama closed the revolving door, closed the secret military base that was holding people without evidence, and instructed his staff to default to leaving files open instead of closed. [upi.com] That was DAY 1.

Re: (Score:3, Informative)
The difference was that Bush always did the exact opposite of what he said, but this Obama puts his presidential powers where his mouth is:

PRESIDENTIAL MEMORANDA [whitehouse.gov] January 21, 2009
* Freedom of Information Act
* Pay Freeze
* Transparency and Open Government

Re: (Score:3, Insightful)
It's easy to do nice things quickly; after all, let's not forget, no matter how much the Democrats want, that Bush worked tightly with Ted Kennedy early in his admin to forge No Child Left Behind. But FYI:
* Freedom of Information Act - Nice change, but all it does is add review, *not* in and of itself release info. If he follows through and controversial material is released (about his admin) I will be impressed.
* Pay Freeze - All hat, no horse. He hires someone Jan 20th at a salary of $130,000 and implements a pay…

Re: (Score:3, Interesting)
"So, asking a legitimate question is 'grilling'. So much for transparency and a new tone."
Did you stop at that part of the article? You probably should have read on to the point where Obama explained that he would be answering questions later on that day. The purpose of the surprise visit was just to say hello, hence the comment of "I came down to visit, not to answer questions, I'll do that later on."

This submission is a troll (Score:5, Insightful)
This is clearly a transitional measure, and not a concerted effort to hide communications from mandated records-keeping procedures, as Bush and Palin are accused of.

Re: (Score:3, Funny)
Also, none of them is likely to be using the password "popcorn". They use the more secure p0pc0rn, instead.

Re: (Score:2)
This is clearly a double standard applied because they are Democrats.

Re: (Score:3, Insightful)
Yeah, except what Palin did was idiocy, and not a concerted effort to hide communications from mandated records keeping. Hence the tit-for-tat. Our major political parties are run as if twelve-year-olds were in charge. (Incidentally, I'd also chalk this case up to idiocy as well. Obama's staff should have gone without e-mail for the day. But clearly he's decided that day-1 is so important that his VP shouldn't even be making jokes. I'd hate to see what Obama is going to look like in four years if he already h…

Re:Parent is troll (Score:5, Informative)
When this was started, it was noted in official White House policy that these email accounts will be archived with the rest of the official White House email. The issue with the previous administration was that they were using RNC accounts precisely because they wouldn't be archived and therefore could remain hidden from the press and future historians trying to delve into what made the Bush White House tick. It's the archiving that is the problem, not the private mail service.

Re: (Score:2)
"It doesn't matter if you are accused of something if the accusation is not credible."
So you don't think there are any missing emails, despite the millions found after Bush and Co. said there were no missing emails?

Re: (Score:2, Troll)
So, someone presents a rational argument, and it's mocked because they're defending Palin? Nice. Maybe you think his facts are BS. That's fair. Attack his facts, provide a reputable source of evidence that things are not as he claims they are. If things are as he says they are (and I have no idea if that's the case), his statement is very reasonable. If you refuse to counter his statement with fact, then you're just spouting partisan drivel.

story? (Score:2)
What's with every single article on slashdot being tagged with "story"? Even this??

How long? (Score:2)
I think it takes 3 minutes to create an account, including exchange. How long does it take in the head office of the USA?

Re:How long? (Score:5, Insightful)
It's a pretty poor system, IMHO. Imagine a complete refresh of IT staff in an office. There would be chaos for weeks.

Re: (Score:3, Interesting)
There's also nothing to prevent me from using wh.whatever@gmail.com and sending fake orders out.

This is something I'm not really clear on, even after reading the Washington Times piece. Is the staff really using the GMail accounts for all of their normal work-related communications, or were the accounts just created for the general public to send stuff to, which will then be forwarded to the regular accounts when they come online? The piece even explicitly says that official press releases will not be sent from any GMail accounts, which leads me to believe that the accounts are "receive-only".

You don't really want them to inherit GWB IT Staff (Score:3, Insightful)
GWB's IT staff managed to "lose" massive amounts of email. These aren't the career professionals that serve one administration after the next. It looks like we may see a more technologically enlightened administration this time around. The changeover, while painful, at least should function as an effective purge of the incompetent and/or corrupt predecessors.

wh.azzup@gmail.com? (Score:5, Funny)
Can anyone confirm that Mr. Azzup is a staffer? :o)

Re: (Score:3, Funny)
Yeah, but he's busy watching the game, having a Bud.

This is not the same thing as Palin's situation (Score:5, Insightful)
Palin staff: already had government e-mail accounts, but used Yahoo accounts to conduct business that they did not want to reveal to the public. Obama staff: losing one e-mail account before they gained their next one, so for a few hours they needed transitional addresses, and Gmail was free and easy to use. If Obama staff continue to use Gmail for government business, THEN we can equate these two situations. But not until then.

Re: (Score:3, Informative)
[wikileaks.org]

and as long as they archive it, there's no problem (Score:3, Funny)
The problem with Palin's Yahoo use is that it was secret, for one, and second that the emails involved govt. business but weren't recorded anywhere. So, as long as the mails sent and received using Gmail are subsequently archived somewhere, there's no problem. Whether they will be? Who knows.

Who really cares? (Score:3)
I argue, again, that Obama, as does any President, has the right to set up a communications infrastructure that is private and unrecordable. But even if we put that issue aside, how far up the priority list is this issue, versus this list:
a) jobs
b) budget deficit
c) looming entitlements meltdown
d) not one, but two wars
e) aligning tax rates and health care with NATO allies
f) trade imbalances with Asia
just to throw a couple out there. If we're going to be political, can we talk about something important?

Re: (Score:3, Funny)
I agree - these Obama stories are pretty stupid. I'm proposing the tag 'obamagasm' for stories like this in the future.

Next we are going to find out (Score:2)
they are using google docs for collaboration...

What the HELL. (Score:4, Interesting)
I love Gmail, but this is ridiculous. Google has no contract with the government, and its terms of service void most liability (that's what "free" means). It also uses a non-reserved namespace. Right now, within a few minutes, I could sign up for wh.obamma, wh.barrak-obama, wh1te.house and any number of other unclaimed addresses and possibly pick up sensitive email sent to misspelled addresses. Regardless of whether all email is encrypted or signed (and remember, this is the government, half of which is probably using Outlook), this is a bad idea. Kudos for using Gmail, which is the best webmail service in existence, but this shouldn't have been necessary. Who the hell is running IT at the White House? Shouldn't they have set up .gov accounts for the entire administrative staff some time back in November? What was the hold-up?

How the US works, federal and state. (Score:3, Informative)
Palin is the governor of the STATE of Alaska who ran for a FEDERAL position. During the time she ran for the federal position she was still the state governor and did work as the state governor. She did state and political-party work on a Yahoo account. If she had been elected as Vice-President or had been working for the White House, work-related documents on Yahoo would have been illegal, but she was not; she was doing state-related work, and so far no one has pointed to an Alaskan law saying she could not do it. Not that this should be a shock: she had many claims put against her that were correct and permitted under Alaska law, but which members of the opposition political party figured they would use to attack her. Now, in the USA, federal and state laws are separate, and while many federal laws must be followed by the states, the laws that the article complains Governor Palin did not follow do not apply to the states.

Breaking the Law (Score:3, Informative)
Okay, so we have staffers using non-government email to conduct government business? There is at least one law on the books about archiving WH emails for various purposes. That they are relying on external systems at all for that purpose seems like a clear violation. Whether Palin did it or not does not justify the new WH staff violating the law. "He committed murder, so I can commit murder, too." That they are using a rationalization means they know they are violating something. But now, they have established a shadow infrastructure that allows them to continue to carry on government business outside government channels. Nothing prevents them from continuing to use this shadow infrastructure after they have legitimate accounts. I would have thought that most of these accounts could have been created during the transition. It's not like the previous transition, where members of the outgoing administration ripped the letter 'W' off the keyboards and slipped porn into the paper in printers and copiers. If the prior administration here had caused any significant delay, you can bet your bippy the press would have been informed by the incoming administration. My point is 1) that the delay is probably a ruse, or at best a minor inconvenience, and 2) the new administration has established a way to violate federal law. Maybe we should all set up gmail accounts with WH....

That thing that just went over your head... (Score:3, Interesting)
that was the real problem, you missed it... This is not about a technical protocol being more secure; this is about an organization. How many employees does Google have worldwide? How many have been screened to the same level that folks in the federal government have? You are putting mail from executive employees onto a mail server read by people not vetted to be/not to be security threats, from more than a half dozen nations...

Re: (Score:3, Informative)
Really? Why the hell do companies bother to put mail servers behind firewalls then... oh yeah, because after transit you have the content sitting on the server. You do understand that having any data (mail, file, db) sit on a third party's equipment is pretty damn irresponsible, especially a third party whose TOS says: "You acknowledge and agree that Subsidiaries and Affiliates will be entitled to provide the Services to you." "you acknowledge and agree that Google may stop (permanently or temporarily) providin…"

Re:That thing that just went over your head... (Score:5, Funny)
"I wonder what kind of ads showed up next to White House emails concerning political appointments?"
Buy SENATE SEATS Online Now!
Anyone who didn't see that one coming from a million miles away deserves a shot in the balls.

Re:politicians != understand IT security (Score:4, Insightful)
Because whitehouse.gov mail is more secure? It's e-mail, people. You know. SMTP. It's sent in plaintext over the wire through SMTP servers. That's why stuff like PGP, GPG, etc. exist.

Re:politicians != understand IT security (Score:5, Informative)
The issue was never security. Dude, it's unencrypted e-mail; there's no such thing. The issue was an attempt to dodge records-retention laws that allow "we the people" to keep an eye on what our employees - public officials - are doing. Since 1) the official e-mail accounts are not yet available, 2) it seems to be only for a few hours, and 3) in TFA, an Obama staffer notes that the messages "could be forwarded to White House accounts and subject to the Presidential Records Act," these concerns don't seem to apply. (Though I wonder WTF these folks couldn't either be provided with the new e-mail addresses earlier, or hold the transition accounts a little longer.)

Re: (Score:2)
Yes, it's actually just you, because the number of dopes who think the White House doesn't have an IT staff is very small. Now, for a little puzzle, ask yourself how long it would normally take to create hundreds of email accounts in a secured system?

Re: (Score:3, Insightful)
"Now, for a little puzzle, ask yourself how long it would normally take to create hundreds of email accounts in a secured system?"
About as long as it would take to create them in a regular system? Unless the person entering the account data has to do on-the-fly RSA encryption in their head. Seriously, that security for @whitehouse.gov is (hopefully) tighter than for, say, GMail does not mean that accounts are not likely managed by a few folks via a sleek administrative GUI, just like it's done at any well-managed IT department at medium-sized to large organisations.

Re:Kind of a side note... (Score:5, Insightful)
The delay is not in clicking 'create account' on the administrative interface, or running a list of names through a Perl script; it's in processing the paperwork that ensures that the people getting accounts are who they say they are, and that their account access is appropriately restricted.

Re: (Score:2)
"Now, for a little puzzle, ask yourself how long it would normally take to create hundreds of email accounts in a secured system?"
If the system wasn't designed by chimps, about as long as it takes to create and upload a csv.

Re:Kind of a side note... (Score:5, Informative)
I know this is /. and I know people can't be bothered to read... what bothers me is that, knowing this was coming, they didn't have everything tested and ready to go at the throw of a switch (or literally, the click of a mouse). I'm not even going to get into the whole "the staff isn't familiar with the Windows platform and wants Apple" issue, because that was covered extensively a few days ago, except to say: it's not as if they haven't had since November to plan for this transition...

Re: (Score:3, Interesting)
I went to school with someone who was on the Bush IT team. Nice guy, btw. Anyway, while Bush did actually work with Obama from a security standpoint, there was no such working together when it came to IT. Not implying anything malicious either; it just didn't happen. Bush's people were VERY busy making sure nothing that wasn't supposed to be there would be hanging around for the Obama people to come across.

Why would Bush have anything to hide? (Score:4, Insightful)
Only criminals require privacy. The Obama team has as much clearance as Bush did and should have access to everything.

Re:Why would Bush have anything to hide? (Score:5, Funny)
"Why would Bush have anything to hide? Only criminals require privacy."
Congratulations, you've reached a level of irony we thought to be unattainable.

Re: (Score:3, Insightful)
If you were taking over my job, I'd gladly give you the passwords to my work computer.

Re: (Score:2)
Nice. What about my post says that I didn't know that? What I was saying is that if there was a dedicated staff (…

Re: (Score:2)
Actually, there is. The White House is an institution unto itself. The security, logistics, kitchen, cleaning, groundskeeping, engineering and, yes, IT staffs work for the Government but are not administration appointees. The problem isn't a lack of staff. The problem is a bureaucracy. Part of this is good: institutional pushback that serves to protect the White House and Executive Branch by not being overly concerned with the state of the art. Part of this is bad: forcing the WH to stay in a perpetual 15-year…

Hello, Captain Obvious (Score:2)
"...the e-mail servers, for which the White House is infamously known, seemed to be down."
Well, duh! You can't really expect a server to boot immediately after someone runs shred /dev/hda.

Re: (Score:3, Interesting)
I used to work in WHCA (White House Communications Agency). I don't know how the PC side of things was or is being handled; however, I'm quite aware of how the mainframe side of things is handled. And I'd be very surprised if things were working at 12:01. For the mainframe, on the day of inauguration, full system backups are performed. These backups are then sent to the National Archives. After the backups are made, *everything* associated with the old administration is removed from the system. Only af…

Re: (Score:3, Insightful)
The White House was a functioning government office with thousands of employees up until 12:00 on Tuesday, and at 12:01 all ~3000 employees were replaced. If it all worked smoothly it would have been nothing short of a miracle. What's amazing is that a should-have-been-expected bump in the road has turned into a partisan political battle, where Democrats say the Republicans lived in the technological dark ages for 8 years, and Republicans say the Democrats botched the transition. This is the kind of story that…

Re: (Score:2)
Really? What is your source for this claim?

Re: (Score:3, Informative)
Let's start here: White House Vandalized In Transition, G.A.O. Finds [nytimes.com] But if you want, you can search for "clinton white house vandalism" if you like. To be honest, I thought everyone knew that transitions of the White House between parties were filled with this stuff. Is the Bush staff playing dirty pool with the Obama staff? Probably, but it's more of a tradition than an isolated "Bush is Evil" incident.

Re: (Score:3, Insightful)
IANAL, but: even a few hours before inauguration, using a whitehouse.gov email address could be considered impersonation or forgery. I suspect most of these people had email addresses ending with @democrats.org (or even @rnc.org), which could be considered bad taste to use in an official capacity outside of a campaign. Yeah, the best solution would have been @change.gov; Gmail comes second. Anyway, it is disturbing that Google could potentially spy on this.

Re: (Score:2)
I wonder what context-sensitive ads they were seeing? "Extreme Rendition? Come enjoy all EXTREME sports at Vale." "Global Thermonuclear War? Get WARGAMES on DVD for $1.99 at Overstock.com"

Re: (Score:3, Informative)
"...Google's free Gmail accounts to work around the fact that their transition emails will go dark at 11 a.m. Tuesday, at least an hour before they will have access to their new government accounts."

Re: (Score:3, Informative)
"Will those emails then be transferred to the official email server?"
Most likely, yes. FTFA: "In addition, Cherlin noted that any e-mail sent to the Gmail accounts 'could be forwarded to White House accounts and subject to the Presidential Records Act.'"

Re: (Score:2, Insightful)
We can find political news anyplace else. This stuff really is not news for nerds and does not matter here.

It's a technology story, not just an Obama story (as was the last one involving cookies). E-mail is Internet tech, last I checked. Gmail is a state-of-the-art free Web-based e-mail service. Obama is the most technologically fluent President ever. What's not to like?

Re: (Score:3, Interesting)
"Obama is the most technologically fluent President ever."
You know, this gets tossed around a lot, and it bugs the living hell out of me. Who the fuck cares? It's irrelevant! Praising Obama for using technology is no different than something like praising him because he likes rock music. It's a completely superficial thing, and doesn't affect his ability to be president in the least. "What's not to like?" So far? Lying to us, ranging from the petty ("My grandma survived WWI, which she was born after") to the serious ("I oppose telecom immunity in the wiretapping fiasco"). Spouting eliti…

Re: (Score:2, Funny)
Typically when a Dem gets into hot water, it also has a half dozen strippers in it.

Re: (Score:2)
I really wish I spoke murloc [wowwiki.com]. I'm really curious what you just said that got you modded Interesting.

Re: (Score:2)
"Once you start using an email address it is with you forever unless you're willing to dump all of your contacts."
Gmail has supported forwarding mail and exporting contacts for as long as I can remember.
https://tech.slashdot.org/story/09/01/23/0551227/obama-staffers-followed-palins-email-lead-on-inauguration-day
CC-MAIN-2017-26
refinedweb
5,555
64.3
This Guide is most relevant to Sencha Touch, 1.x.
Sencha.io Src helps you dynamically resize images for the ever-increasing number of mobile screen sizes. We've previously done a lot of work in Sencha Touch to make your UI resolution independent, and Src expands this to include your image assets. It's easy to use, and in this guide, we run through the main API options for the service.
Sencha.io Src is essentially a proxy that sits between image assets (hosted either on your own server or by a third party) and the browser or application requesting them via HTTP. The API is accessed entirely by placing a prefix before the original image URL. This prefix gives you declarative access to all of the different types of transformation that the service can perform. This approach makes the service very easy to add to existing web sites or apps without any programming knowledge.
Let's start with a quick example. Let's assume you are inserting a 640px × 400px image into your web app or site with markup something like this:
<img src='' alt='My large image' />
To use Sencha.io Src in its default mode, you simply prefix your absolute src attribute with http://src.sencha.io/. Add that into the tag and your image will be magically resized for a smaller, mobile screen:
<img src='' alt='My smaller image' />
Unless you tell it otherwise, Sencha.io Src will resize the image to fit the physical screen of the mobile handset visiting your site, based on its user-agent string. For example, if an iPhone 3GS visits the site, the image will be constrained to its screen size of 320px × 480px. In this particular case, the image is of landscape orientation, and so width becomes the constraining dimension. Aspect ratios are always preserved by Sencha.io Src, so our 640px × 400px image will emerge resized for an iPhone 3GS as 320px × 200px.
If you want to resize graphics to be constrained by something other than the full screen in width or height (and you probably will), there are plenty of other ways the API can be used. Let's take a look.
Defined sizing
If, instead of keying the size off the user-agent string, you want to resize your images to precise dimensions, Sencha.io Src takes optional parameters to let you define width and height, in that order. These need to appear prior to the image URL in your src attribute. So for example:
<img src='' alt='My constrained image' width='320' height='200' />
Because we are being explicit about the resizing, we can also be reasonably confident about using the width and height attributes in the tag as well.
If you are only concerned about constraining the image's width, just provide a single numeric argument:
<img src='' alt='My constrained image' width='320' height='200' />
And remember, Sencha.io Src always preserves aspect ratio, so in this example, we can still leave the height attribute of 200px in the tag, even though it is not explicit in the src.
Important note: Sencha.io Src will only shrink images. It will not enlarge them. If you were to specify the following:
<img src='' alt='My huge image' width='1280' height='800' />
...then the returned image would be the same as the original. This means that you should make sure the original graphics are large enough to fulfill your needs, especially for high-resolution smart-phone devices.
Client-side measurements
This is currently an experimental feature. Sencha.io Src provides a small JavaScript file which obtains the browser's screen dimensions and places them in a cookie scoped to the src.sencha.io domain.
Subsequent requests to Sencha.io Src to have an image resized can then refer to these dimensions rather than explicit values. To insert the JavaScript into an HTML page, use the following snippet: <script src=''></script> Where you place this can matter. If you place it at the start of the document, it's slightly more likely that the cookie has been set before the images within the page are downloaded - increasing the chance that your newly-measured dimensions can be used within this load of the page itself. By placing the script after the closing </body>, however, increases the chance that the page has laid itself out, and this can sometimes affect some of the measurements made. To constrain an image to the width of the device returned by the screen.width measurement, insert it where you would have placed an explicit value: <script src=''></script> <img src='' alt='My JS-measured image' /> You can also abbreviate the measurement names for brevity: <script src=''></script> <img src='' alt='My JS-measured image' /> The full set of client measurements available, with abbreviations, is in the API summary at the end of this document For example, if you wanted to include an image that was constrained by 'available' height and width measurements, you could use the following: <script src=''></script> <img src='' alt='My JS-measured image' /> Note: The document.body.* properties are particularly affected by where you place the script snippet, although under certain conditions, window.outerHeight will also vary on mobile browsers during page load. Also note that the values returned from many of these properties will depend upon the DOCTYPE of the document and any viewport scale setting. Real-device testing is highly recommended when using this experimental technique. Orientation You may explicitly indicate the orientation of image constraints, by placing landscape or portrait in the URL, which will flip the width and height constraints if required. (Note that this will only have effect if the screen dimensions have been identified from the device's user-agent: if you have explicitly specified a width and height, the flip will not occur.) On an iPhone, for example, the following code will constrain an image to be 480px wide (and 320px high), instead of the normal 320 pixels wide (and 480px high): <img src='' alt='Constrained to WxH, W greater than H' /> As a further experimental feature, client-side measurement can be attempted to detect the current orientation of a device, using the window.orientation API. Specify detect for the orientation to try to use the value that may have been recently sent by the screen.js cookie: <script src=''></script> <img src='' alt='Constrained according to orientation' /> Remember again that there is a very likely race condition whereby the cookie's value may not be set before the image starts to download, and so the immediate effect may not be apparent in the first rendering of the page. The test page for this experimental feature uses DOM manipulation on the document's load event to ensure the cookie is set, before inserting the resized element. Try it on a modern mobile device: rotate the device and reload the page to see the feature in action. Altering sizes As we have seen, an explicit width or height before the URL will fix the image size. Sometimes, however, you want these dimensions to be based on physical screen size, but then altered slightly. 
For example you might notice in our example above that the 8 pixel white margin around the web page means that the 320-pixel-wide image actually truncates on the right hand side of the iPhone screen. Instead of using absolute values, we can alter width and height parameters by prefixing operands with characters like -, a or x. Under these conditions, the numbers would represent subtraction, addition and percentage scaling of the dynamic screen size, respectively. So for example, if you wanted your image to be, at most, 16 pixels narrower than the width of the screen, (whatever that actually is), you would use -16, as in the following: <img src='' alt='My image, constrained to 16px less than the screen' /> This is useful if you wish to leave a border around your images or if you want to account for the scroll bar of the browser screen. On the iPhone example to the side, you should be able to see it now accounts for the default document margin effect. Percentage sizing Similarly, if you want to scale the graphic to a proportion of the screen, use the x prefix. The value provided is interpreted as an integer percentage from 1 to 100. So to ensure the image takes no more than half the screen, use x50, as in the following: <img src='' alt='My image, constrained by half the width of the screen' /> It is possible to use either of these modifiers on both the width and the height simultaneously. For example, the following would ensure the image fitted into a quadrant of the screen (although in this case it has no effect): <img src='' alt='My image, at most half width, half height' /> Different modifiers can be used for width and height. This example ensures the image will never run the width of the screen, but also that it doesn't take up more than 15% of its vertical height. This obviously suits wide, landscape images best, such as banner ads, for example. <img src='' alt='My image, constrained in banner-ad style' /> Other adjustments For completeness, the addition operator allows you to increase the image size. a20 will expand the constraints of the image by 20 pixels, for example. Remember, Sencha.io Src will never grow an image beyond its original size - but it might occasionally be useful if you have knowingly reduced an image's size and then want to grow it slightly. There is a rounding-down operator, r, which will round a size down to the nearest multiple of that value. This might be useful if you have a tile- or column-based layout. The following will round down the constraint to ensure that the half-width image's width is always a multiple of 20 pixels: <img src='' alt='Half the screen, rounded down to nearest 20 pixels' /> Finally, you can specify a maximum value for the image's size, so that you can be sure that, regardless of any other transformations, it does not exceed a limit. You can specify two maxima, to depend upon whether the browser has been identified as being mobile or non-mobile. For mobile browsers, the m operator is obeyed, and for desktop browsers, the n operator is used. The code below will display an image no larger than 500 pixels on a desktop browser, and no larger than 100 pixels on a mobile browser: <img src='' alt='Max 500 or 100, depending on browser' /> Complex formulaic operations You can string together all of these formulaic operations. If you wanted to have two images side-by-side, with balanced margins, we can combine the -16 from the example above with x50 to halve the width. 
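To make such combined prefixes concrete, here is a sketch of fully assembled URLs following the API summary at the end of this guide (example.com and the image path are stand-ins, not real assets):
<!-- width constrained to the screen width minus 16 pixels -->
<img src='http://src.sencha.io/-16/http://example.com/photo.jpg' alt='My margin-aware image' />
<!-- half the screen width, then two pixels narrower -->
<img src='http://src.sencha.io/x50-2/http://example.com/photo.jpg' alt='My gallery-sized image' />
The gallery markup below combines these same operators.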
The following approach might work well for a gallery app, for example: <img src='' alt='My image, in a gallery' /> <img src='' alt='Another image, in a gallery' /> (Note we also deduct a further 2 pixels to account for the 4 pixel gap caused by whitespace between the <img> tags in the markup.) And finally, if the width (or height) does not start with -, a or x modifiers, it is interpreted as an absolute pixel number upon which further operations can be applied. In other words, it is possible to specify 320-8 to get the same effect as using 312 explicitly. This is particularly useful when you use the (experimental) client-side measurement technique, allowing you to deduct values from those measurements. This will use a client-side measurement of screen.width and then deduct 16 pixels: <img src='' alt='Client-measurement, reduced' /> You can use different techniques in different dimensions of course. This example constrains the image by half the width of the screen and exactly 90 pixels in height. This technique might be useful if you want your gallery to handle both landscape and portrait images alongside each other. <img src='' alt='My half-width image, not too tall' /> It is important to remember that in all these cases, however, the image retains its aspect ratio. Even if the resulting dimensions of the above examples are letterbox in shape, a portrait image will remain portrait. File formats You can specify the file format of the resulting image that's returned from Sencha.io Src. You can choose either PNG or JPG encoding by using the png or jpg token. This parameter goes before the dimensions (if present) in the URL. In the following example, an original JPG is converted to both default, and explicitly resized, PNG images: <img src='' alt='My PNG' /> <img src='' alt='My small PNG' /> If not specified, Sencha.io Src will decide on a suitable format based on the original image. PNGs and JPGs will remain in their original format, and GIF images will get converted to PNG. Sencha.io Src will never emit a GIF. Note that JPG is a 'lossy' encoding, so the quality of a resized JPG image will not necessarily be as good as the original PNG (although it should be smaller in file size). You can also take control of the degree to which the JPG is compressed, by appending a number between 1 and 100 to the end of the jpg formatting token, like this: <img src='' alt='My JPG' /> <img src='' alt='My highly compressed JPG' /> It's important to note that the compression will only change if the dimensions of the image have changed or if the original format was PNG and has been converted to JPG. If you have a small JPG image, smaller than the constraints you've specified for it, it will always remain with its original compression. Also note that PNG images will lose their the alpha channel when converted to JPG, so you are advised to keep any PNG files with transparent regions in that format. Resizing PNG files while remaining in the same format, however, will preserve the alpha channel. Data URLs In some circumstances, you might want to return your image in an encoded format. In particular, images can be encoded into data URLs. This is useful if you want to embed an image into the markup or stylesheet of a site or application, or if you want to cache an image resource offline in a textual form in a browser's local storage. Unlike performing the toDataURL method of a <canvas> element, using Sencha.io Src allows you to encode images from other origin servers. 
To have Sencha.io Src return the requested image in data URL format, simply place 'data' as the first segment of the request:

This returns the encoded image as a plain text response (a data: URL). You can also have the response wrapped in a JSON-P callback by adding a callback name after the 'data' segment. For example:

This will return, as a JavaScript response: myCallBack(');

This now means that you can do something programmatically with the string - in the browser - such as set it as the source of an <img> element or cache it to the browser's local storage. Because it's possible you might want to run this callback multiple times with some sort of identifier to attach it to different <img> elements throughout the document, the callback allows you to add additional initial arguments. If your callback name contains hyphens ( - ), these are used as separators, and subsequent portions get treated as successive string arguments. (Also, the callback name itself can contain dotted syntax, so you can invoke a method within an object or a function within a specific namespace.) So, for example:

This will return, as a JavaScript response: MyApp.myCallBack('img2',');

Typically, your callback would use that first argument as a way to reference an image within the document. This callback will be run asynchronously to the original JSON-P mechanism and scope, and otherwise you'd have no way to correlate responses with target elements.

Note that if you need to use hyphens in your arguments, you can use commas ( , ), the presence of which means that commas will be used as the separator instead of hyphens. For example:

will return: MyApp.myCallBack('img-2','123',');

The following is a full, working example, which replaces an hourglass image with the data URL for a tick:

<!DOCTYPE html>
<html>
 <head>
  <title>Sencha.io Src stepping up a gear</title>
  <script type='text/javascript'>
   // run the JSON-P
   window.addEventListener("load", function () {
     var script = document.createElement("script");
     script.setAttribute("src",
       "" +
       ""
     );
     script.setAttribute("type", "text/javascript");
     document.head.appendChild(script);
   }, false);

   // the JSON-P callback
   function setDataUrl(id, dataUrl) {
     document.getElementById(id).src = dataUrl;
   };
  </script>
 </head>
 <body>
  <img id='img1' src='' />
 </body>
</html>

Cache flushing
Sencha.io Src caches images for up to one day. The cache is sensitive to all of the API settings above, so if you request or expect images with different sizes, formats or compression ratios, the cache will store each version. Nevertheless, there are times when you might want to manually force Sencha.io Src to refetch an image from your server - such as when you have updated the original without changing its URL, for example. To force the cached version of an image to be ignored, add flush to the start of the URL thus:

This will always cause a new request to be made to the server and the cache updated accordingly. This API flag should ONLY be used manually or for very short periods of time when you know a set of images should be refreshed. Not only will you invoke a lot of additional traffic to the origin server, but it will increase the latency of the user's fetch of the images. If you have no need to urgently update images, you are strongly advised to wait 24 hours for the cache to expire naturally. Abuse of this flushing feature may result in it being removed.

Domain Sharding
Some browsers are not able to make large numbers of simultaneous requests to servers on the same domain, and a well-known technique for improving page load times is to host images on different domains so that more requests can be parallelized.
Sencha.io Src facilitates this by allowing you to use the src1 to src4 subdomains in addition to just src. So it's possible to improve the likelihood of efficient loading by cycling through these four sub-domains in the markup:

<img src='' alt='My first parallel JPG' />
<img src='' alt='My second parallel JPG' />
<img src='' alt='My third parallel JPG' />
<img src='' alt='My fourth parallel JPG' />

Obviously this will be very useful when you have large numbers of resized images on a single page, and easy when you are looping over the images on the server-side anyway. In PHP, for example:

$picture_urls = array( '', '', '', '' );
foreach ($picture_urls as $index => $url) {
    $shard = $index % 4 + 1;
    print "<img src='http://src{$shard}.sencha.io/{$url}' />";
}

Suffice to say, on mobile devices, where latency is often more of a concern than throughput, maximizing the amount of parallelization the browser can perform is A Good Thing.

The API in Summary
The full syntax of the Sencha.io Src API is as follows (with linebreaks added for clarity only):

[shard].sencha.io [/flush] [/data] [/format[quality]] [/orientation] [/width[/height]] /url

Where:
- shard (optional). A number between 1 and 4, to distribute loading across subdomains.
- flush (optional). If flush, then the original image is refetched and its cached copy updated.
- data (optional). If data, then Sencha.io Src returns a data URL. Also takes a callback suffix and arguments for JSON-P use.
- format (optional). This is either jpg or png. Defaults to the original image format.
- quality (optional). When the format is jpg, a compression value from 1 to 100. Defaults to 85.
- orientation (optional). If 'landscape' or 'portrait', this will swap X/Y constraints if required. Defaults to no effect. 'detect' is experimental to use window.orientation if present.
- width (optional). A width in pixels (which overrides the adaptive- or family-sizing). Can also contain formulaic adjustment.
- height (optional). A height in pixels, if width is also present. Can also contain formulaic adjustment.
- url (required). The absolute path of the original image. It must start with http://

Formulaic adjustments use the operators described above: - (subtract from the dimension), a (add to it), x (scale it by a percentage), r (round it down to the nearest multiple), m (maximum for mobile browsers) and n (maximum for non-mobile browsers).

Apps using screen.js are also able to use client-side measurements such as screen.width, screen.availWidth and screen.availHeight, window.outerHeight, and the document.body.* properties (or their abbreviations) at the start of the width or height parameters.

It's worth re-iterating that each aspect of the API we've described in this document can normally be used alongside others. You can resize images, convert their format, and turn them into data URLs - all in one go - if you want. Simply ensure that you adhere to the order of the URL fragments above to ensure that the service performs the transformations you need.
Have fun!

46 Comments
Swarnendu De 4 years ago
Very interesting.
AwesomeBob 4 years ago
Cool stuff. I'm already using io, but now I'll implement the domain sharding!
Augie 4 years ago
This is awesome! I am a huge fan of everything you guys do, and this is no exception. I am curious to know how you are able to offer this service for free… I would think that eventually you would be hosting millions and millions of images. Am I missing something, perhaps? I am new to Sencha so please forgive me if this is a total noob question.
Th@o 4 years ago
Very interesting, I'll use it immediately. Thank you Sencha
James Pearce Sencha Employee 4 years ago
@Augie, it's more of a proxy than a hosting solution (although there is caching), so it's efficient to run and we can afford to offer it for free, despite its huge popularity!
Shali Nguyen 4 years ago
This is amazing!
Rajganesh4 years ago This is too good. Tried it from desktop browser and works great. Percentage sizing works with the desktop size, will it be possible to make it work with browser window size? Brani Mead4 years ago Am I right in assuming that there is no cropping, just resizing? Binod Bajracharya4 years ago Indeed Very Professional! -Thanks. James Pearce Sencha Employee4 years ago @Rajganesh The proxy can’t tell what current viewport size you have (on a desktop browser). It would be possible to create a custom solution to query that in JavaScript and then use the explicit sizing options above. Maybe we should also explore the idea of looking for cookies (in src.sencha.io) so you can set these hints in the client for future use. James Pearce Sencha Employee4 years ago @Brani; no cropping… currently Rajganesh4 years ago @James Pearce Great… looking for cookies support in src.sencha.io Niall McC4 years ago Is there a fair usage policy on using it? What if it was implemented by an ISP/telecoms carrier for example and was hitting your servers with tens/hundreds of thousands of TPS? In that scenario is it no longer free or a hosted solution available for purchase? Kaimo4 years ago Seems like useful stuff. But i’d like to know if it means smaller image sizes also. I can understand that it changes the dimensions of the image to fit the screen, but does it also mean that the image will be smaller in file size, in kilobytes. Lavi Yatziv3 years ago Awesome! How long do images remain cached on sencha servers? Thanks! James Pearce Sencha Employee3 years ago @Lavi - 1 day Daniel Koskinen3 years ago Does the default resizing have an upper limit? I have a 1600px wide image, and my screen is 1920px wide. Despite this I’m getting a 980px wide image through sencha.io. I’ve tried using just the default and x100 parameter, but still get the reduced size image. Am I missing something? James Pearce Sencha Employee3 years ago Daniel, for desktop user-agent strings, the width defaults to 980px (which happens to be the same as the iOS viewport). So I’m afraid you’d have to explicitly set it to be larger than that; e.g. Arseny3 years ago That’s great! As all stuff made by Sencha! Daniel Koskinen3 years ago James: OK thanks for the clarification. But if I explicitly specify a large width, won’t mobile devices also get the large version, defeating the whole purpose of using the proxy? James Pearce Sencha Employee3 years ago @Daniel - yes, exactly… which is why I said ‘I’m afraid’. Let me just see if I can add an override to the API somehow. John Clark3 years ago I can’t seem to make sencha.io src scale to the device screen size without using specific sizes. I have a page up at: which uses the very first example code from this page: [img] alt=‘My smaller image’ [/img] However, I cannot get this to display a full screen image on any device. Is this not the correct code? Terence Eden3 years ago Any chance of getting animate gif support? By which, I mean, shrinking the dimensions but keeping the animation? Currently it just shows the first frame. Addrienne3 years ago I had no idea how to approach this before-now I’m lecokd and loaded. Mark Zeman3 years ago When using this for thumbnails it looks like the images are being blurred or a low quality scaling method is being used. Would be great to have a way to control things like scaling method or sharpness of the resulting image. 
James Pearce3 years ago Mark, the JPG option allows you to specify the compression ratio which should help: (It defaults to 85) Mark Zeman3 years ago Hi James, I experimented with jpg quality but with little difference. Even png’s lose a lot of detail. Looks like a nearest neighbor scale rather than bicubic. Leaving the image at one to one provides great quality but the scaling is preventing us using the service. Is there anything we can do to help as we’d love to use such a promising tool? Can we fork it on App Engine? Here’s an example to compare… Daniel Anderson3 years ago @Daniel and @James Any progress on this issue ” for desktop user-agent strings, the width defaults to 980px” ? “@Daniel - yes, exactly… which is why I said ‘I’m afraid’. Let me just see if I can add an override to the API somehow.” I have images that are 1140px wide that I am having issues with. Sumu3 years ago Thanks for awesome API. This consistently improved response time for displaying images at our website. Are you planning on charging for this service in future? Todd Santoro3 years ago Is this service down right now??? I can’t seem to get anything to work. Even the images with the examples are not showing up. Please advise… And if it is down how often does it go down… James Pearce Sencha Employee3 years ago Todd, no, it’s up - we have 83(!) server instances all apparently running fine. What URL were you trying? PS all the images in this article are also served by src.sencha.io - so if you can see them, the service is up… can you? James Pearce Sencha Employee3 years ago Oh, and @Mark, I haven’t forgotten about your image quality issue. The good news is that our platform now has an upgraded image library, so we’re going to experiment with using that and see if it produces better results for you. Outlawsessy3 years ago James, I am currently using Sencha src for a mobile site and it works pretty good with most devices. It does have some issues with the Android. Is there a fix for this coming soon? Mark Zeman3 years ago Thanks James, any progress on image scaling quality? Also what are the terms of use for the service? We’re actually looking to use the service on a site that gets approx 2-3 million pageviews per month. Is that a problem? We’re happy to pay if needed. BBQbrains3 years ago This is a great service. 2 quick questions: 1. When an image is resized, is it resized for portrait orientation only? If I want an image to look good when a device is rotated to landscape, I need to request a larger version (like 150%)? 2. I tested some of the example URLs on an Android (LG Ally), and the images were about 30% smaller than they should have been in both orientations. Is this the Android issue mentioned above (@Outlawsessy), or is this a specific device issue? Alex Garcia3 years ago If I already have an existing data url that I am retrieving from a database can I still use ? something like this or similar:[removed]/9j/4AAQSkZJRgABAQ…’ Thanks, Mike Taylor3 years ago Getting a super tiny image (120x75) when trying in Opera Mobile on my Nexus S. That seems… broken. James Pearce Sencha Employee3 years ago @Mike, could you please try and tell me what you get? Non-default browsers obviously bring some special conditions. Will be helpful to debug. JP Mike Taylor3 years ago Hey James, UA: Opera/9.80 (Android 2.3.5; Linux; Opera Mobi/ADR-1110171336; U; en) Presto/2.9.201 Version/11.50 mobile_device: 1 full_width: 120 full_height: 120 Same for Opera Mini. 
Haven’t even tried it on my Galaxy Tab yet… John Clark3 years ago Debug returns fullwidth: 320 and full_height: 120 on an iPod Touch in portrait and landscape. James Pearce Sencha Employee3 years ago @John we can’t distinguish between landscape and portrait (because this is only available to the client-side Javascript environment, and is not hinted at in the headers). Height: 120 is obviously not right, but we had a similar issue with the iPhone 4S when first released (and since resolved by updating the UA database). I assume you’ve upgraded to iOS5, so could you possibly send me the user-agent header you see in the debug? Thanks John Clark3 years ago Sorry. I was running on the simulator: UA:Mozilla/5.0 (iPhone Simulator; U; CPU iPhone OS 4_3_2 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8H7 Safari/6533.18.5 mobile_device:1 full_width:320 full_height:120 The image returned is 192 x 120 I also have an elderly gen 1 iPod touch: UA:Mozilla/5.0 (iPod; U; CPU iPhone OS 3_1_3 like Mac OS X; en-us) AppleWebKit/528.18 (KHTML, like Gecko) Version/4.0 Mobile/7E18 Safari/528.16 mobile_device:1 full_width:320 full_height:480 Image returned is 320x200 Matteo3 years ago This looks great! I have the same questions as Daniel K and BBQbrains, namely: - is there a way to override the minimum dimensions? - what is the correct way of addressing portrait to landscape device rotation? Furthermore, how does Google Images react to images being passed through Sencha.io? i.e will indexation been performed anyway? Regards, Matteo pete3 years ago Would it be possible to specify height instead of width, and have the width be calculated based on aspect ratio? If you have boht portrait and landscape images, you actually want to specify their height, so that they take up the same vertical space and can wrap more consistently. By spcifying width, landscape images that are scaled down, ar dramtically smaller than the portrait ones and leave a fair amount of empty space as well as being too small to see. Halvard L. Simonsen3 years ago It doesn’t work on my local MAMP setup, like timthumb does. Any reason for this? Jean-François Fortier3 years ago Very nice, but we face a problem with Androïd. When we compile with Phonegap a page that contains an image load with src.sencha.io, on iPhone, all works perfectly. On Androïd, version 2.3.3 (phone) and 3.2 (tablet), the image resize very small. Just like if the detection is not working properly. If we set the size we need, it resize to that, but the default does not seams to work. Any idea how to fix it? Leave a comment:Commenting is not available in this channel entry.
https://www.sencha.com/learn/how-to-use-src-sencha-io
CC-MAIN-2015-14
refinedweb
5,339
62.07
Label not moved from input on browser autofill in Chrome
Environment
Description
When the browser autofills a textbox, the label is not moved from the input - it remains overlaying the text instead of moving up. This happens only in Chrome. It works fine in Firefox and Edge.
(The original article includes a short video of the browser behavior, showing how the Blazor model data is only updated when some other action happens - a click somewhere, or typing in the input.)
Steps to Reproduce
Create a simple form with a textbox and a password box. Click its Submit button and, when Chrome prompts you to save the user-password pair, save them. Refresh the page. The browser autofills the form.
Expected: The textbox label is above the textbox.
Actual: The textbox label is still "inside" the textbox.
Sample reproducible:

<form>
    @* The autofill should populate these lines here, but it does not *@
    @SomeUserModel.UserName
    <br />
    @SomeUserModel.Password
    <br />

    <input @bind="SomeUserModel.UserName" />
    <br />
    <TelerikTextBox @bind-Value="@SomeUserModel.UserName" />
    <br />
    <input type="password" @bind="SomeUserModel.Password" />
    <br />
    <button type="submit">Submit</button>
</form>

@code{
    public class SomeModel
    {
        public string UserName { get; set; }
        public string Password { get; set; }
    }

    SomeModel SomeUserModel { get; set; } = new SomeModel();
}

Cause\Possible Cause(s)
It seems Chrome updates only the rendering, but not the actual value of the input. This also happens with regular <input> elements. The data Chrome filled in does not appear in the Blazor model, and so the Blazor components don't actually see a change in order to update.
Suggested Workarounds
At the time of writing, we are not aware of ways to work around this browser behavior.
You can try setting the autocomplete="off" attribute on the form to prevent the autofill in the first place, yet it is up to the browser to respect it and it may not.
General suggestions on the Internet are to handle events like mouseover or mouseenter on the form/page to detect that the user is doing something, so you can try updating your form - in Blazor that would be calling StateHasChanged(). The downside of this is that it can cause severe performance issues in Blazor if called repeatedly. Moreover, in our local tests, neither calling StateHasChanged(), nor invoking a JS click on the body, nor updating a third field in the model helped (you can't update the autofilled fields because that would defeat the purpose of the autofill, and it seems that their value still does not exist - even if you update one, the other does not get populated).
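If you do try suppressing the autofill, the attribute goes on the form element. A minimal sketch based on the sample above (whether Chrome honors autocomplete="off" varies by version, so treat it as best-effort):

<form autocomplete="off">
    <TelerikTextBox @bind-Value="@SomeUserModel.UserName" />
    <input type="password" @bind="SomeUserModel.Password" />
    <button type="submit">Submit</button>
</form>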
https://docs.telerik.com/blazor-ui/knowledge-base/textbox-chrome-autofill-label
CC-MAIN-2021-31
refinedweb
410
58.72
Read Analog Data Directly in Processing Introduction: Read Analog Data Directly in Processing This instructable presents a fast an easy way to use data received from an analog sensor in Processing. You will learn to utilize the Arduino and prototype electronic boards to read meaningful data from the environment. The sensors can be affected by the light, the orientation, or a user’s physical input. For this matter we will use photocell, accelerometer, and potentiometer. Step 1: Preparation Steps 1. Upload the “standardFirmata” example from the Arduino IDE in your Arduino board. If you are using Arduino UNO, there is a standard firmata for UNO. 2. Install Arduino library for processing in your libraries folder in the processing sketchbook For a detailed instructable for this process go to the Arduino playground example Step 2: Build the Circuits Choose one of the circuits from the images below depending on your preferred sensor: 1. Accelerometer 2. Potentiometer 3. Photocell Step 3: Test the Code and Calibrate Sensor Range 1.Use the following code in processing: import processing.serial.*; import cc.arduino.*; Arduino arduino; //creates arduino object color back = color(64, 218, 255); //variables for the 2 colors int sensor= 0; int read; float value; void setup() { size(800, 600); arduino = new Arduino(this, Arduino.list()[0], 57600); //sets up arduino arduino.pinMode(sensor, Arduino.INPUT);//setup pins to be input (A0 =0?) background(back); } void draw() { read=arduino.analogRead(sensor); background(back); println (read); value=map(read, 0, 680, 0, width); //use to callibrate ellipse(value, 300,30, 30); } 2. Use the “println” command to output your sensor’s minimum and maximum values. They will be different depending on the context. Plug the min and max values in your map() function. Step 4: Add Other Variables to Your Code -Add other variables to manipulate the X and Y axis of the accelerometer. This program tracks down the X and Y values from the accelerometer and displays them altering the position of a graphic in the screen. It needs to be calibrated for both X and Y parameters. import processing.serial.*; import cc.arduino.*; Arduino arduino; //creates arduino object color back = color(64, 218, 255); //variables for the 2 colors int xpin= 0; int ypin= 1; float value=0; void setup() { size(800, 600); arduino = new Arduino(this, Arduino.list()[0], 57600); //sets up arduino arduino.pinMode(xpin, Arduino.INPUT);//setup pins to be input (A0 =0?) arduino.pinMode(ypin, Arduino.INPUT); background(back); } void draw() { noStroke(); ellipse(arduino.analogRead(xpin)-130, arduino.analogRead(ypin)-130,30, 30);//needs to be calibrated for X and Y separately } Instead of an accelerometer, I'm using a gyroscope. The circuit is basically the same as an accelerometer. I have a functioning circuit but when combined with processing it is not working. It is reading 0 values Hi, Nice work here. Can you help me with plotting xyz values on Processing ? I am using an ADXL345. I am quite new to Embedded systems and processing ... Would appreciate any help ..thanks I am receiving 0 values all across the board when I put the first sample code in. Hi Zac, I think it depends on which accelerometer you are using. The connections might be different. If you are not getting any values it might be that the accelerometer is connected wrong (see image for a different accelerometer). I made a more detailed manual here:... Here is the link to some information i've found. 
I've had the same issue using analogRead(A0); You can translate the A0 pin to 14, A1 to 15 and so on until A7, which is 21.
Good catch! Thank you. I got confused when they added the A1 thing
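To unpack the pin-number exchange above, a hypothetical sketch (not tested on hardware): with the Firmata-based Arduino library for Processing, analogRead() takes the analog channel index, while the digital alias of A0 on an UNO-style board is pin 14 (A1 is 15, up to A7 which is 21 on boards that have that many):

// analog channel numbering vs. digital aliases (UNO-style boards)
arduino.pinMode(14, Arduino.INPUT);    // A0 addressed by its digital alias
int reading = arduino.analogRead(0);   // analog channel 0, i.e. the A0 input
println(reading);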
http://www.instructables.com/id/Read-analog-data-directly-in-Processing/
CC-MAIN-2017-39
refinedweb
619
57.47
Hi. Why does vBulletin 5 Connect not support PHP versions older than 5.3? Tx.
No idea. My guess is they are using 5.3-specific features in vBulletin 5 Connect; you really should ask them that question.
Seems that you have got wrong information somewhere. System requirements on their page say that you need PHP 5.3.0 or greater to install vBulletin 5.
"Older than 5.3" and "PHP 5.3.0 or greater" doesn't mean the information is wrong, just that there is little of it, as you said, ronalds..
Any version of PHP below 5.3 is no longer supported and no longer getting security updates. Do not use a version lower than 5.3.
My bad, misread the question. Anyway, as they don't have a change log, there's no telling for sure, but I guess they are using some of the new PHP features available from version 5.3, like namespaces.
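For context, namespaces are one such construct: the snippet below is valid from PHP 5.3 onwards and a parse error on anything older (illustrative only, not actual vBulletin code):

<?php
// namespaces were introduced in PHP 5.3
namespace vendor\forum;

class Installer
{
    public function run()
    {
        return __NAMESPACE__; // "vendor\forum"
    }
}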
http://community.sitepoint.com/t/forums-scripts-work-on-php-at-least-5-3-vbulletin/27183
CC-MAIN-2014-52
refinedweb
165
79.36
Hi all! After a few weeks of inactivity we are here again and we want to show you some new hints which will try to improve your code in NetBeans 7.4 (at least we hope so ;-). So here they are:
Too Many Lines
So what do you think that it does? Yes, exactly :-) It checks the number of lines of class, interface, trait and method (function) declarations. If the number is exceeded, then the IDE warns you. So your elements will be more readable and pretty thin :-) Do you think that the default value is too strict? Don't worry, you can simply adjust it ;-)
Superglobals
Everyone knows superglobals, but not everyone knows that superglobal arrays shouldn't be accessed directly. Never ever trust user input. Every user input should be somehow filtered/validated. And that's why this hint was created :-)
Braces
As you certainly know, one can use conditional statements and control structures without curly braces (if the block doesn't contain more than one statement). But it makes code less readable, so we suggest you use braces every time. This hint works for Do-While, While, For and Foreach loops and for If-Else-ElseIf statements.
Empty Statement
An empty statement is just an empty statement, so why pollute your code with stuff which does nothing?
Error Control Operator Misused
The error control operator is pure evil (we are talking about the "at" sign). It hides many more errors than you think. So don't use it. But we allow its use in some special cases like fopen() and such.
Nested Blocks in Functions
Just another hint which tries to make your code more readable. Too many nested blocks make your code ugly, and they also tend to violate the single responsibility principle. So this hint checks the depth of your blocks and warns you if it seems suspicious. And you can set your custom depth number, of course ;-)
Parent Constructor Call
The constructor of the parent class should be called if it exists. It ensures the right initialization of the instantiated object. So if you override the parent constructor, we try to check whether you should call the parent constructor or not.
Unnecessary Closing Delimiter
Using the PHP closing delimiter at the end of a file is unnecessary when there is no text after it. It is just a cause of "Headers already sent" errors and such. So if the IDE finds such an unnecessary delimiter, it will warn you.
Unreachable Statement
What are unreachable statements? E.g. statements after a return statement. Just snippets which will never be executed. They just pollute your code. The hint tries to detect them and warn you.
Please don't forget that all hints are just our suggestions. If you don't want to, you don't have to use them. You can simply disable every hint which you don't like in: Tools -> Options -> Editor -> Hints -> PHP. Just uncheck it in the list.
And that's all for today. As usual, please test it and if you find something strange, don't hesitate to file a new issue (product php, component Editor). Thanks.
I didn't get the closing ?> delimiter. Does it cause a "headers already sent" error? Thank you!
It looks to be very practical. Good job.
@Fernando - If you accidentally place a space after the ?> tag, it can cause a "headers already sent" error. This is pretty frustrating for new programmers, especially.
Thank you NB core, for helping make us better coders! Loving the updates. Netbeans is leading the way on static PHP checking!
Is it too challenging for Netbeans's parser to suppress hints? For example, most of the time, I want to be warned about unused variables. Sometimes, I must hack suppressions to please Netbeans.
I already filed this feature request: foreach ($myArray as $unused => $value) { //would be nice to suppress $unused echo $value; } unset($unused); //sugar to suppress Netbeans hint $neverUsed = "This has a real warning that can be fixed"; Netbeans Java has @suppress, and PhpCodesniffer can suppress blocks guest: it works properly it 7.4. I think the "Superglobals" hint should also warn about using $_REQUEST. :) It warns in the dev build ;) Plz make a setting for the psr guidelines to it will be really helpfull, and an autoformat to if can be :) I prepare some code hints for PSR-0 and PSR-1 for next NB version. I think this is pretty good information which you shared with us. I always follow your blog for good stuff which I think the good decision of mine. :) Ondrej: Netbeans has 2 Codesniffer plugins. PHP_Codesniffer supports PSR-0, PSR-1 and more. I really know that ;) But those are external CLI tools which have to be executed somehow, I talk about native Hints support. For other readers, who have not used the Netbeans CodeSniffer plugin, it does require you to call `pear install php_Codesniffer`, just as Netbeans requires you to install php, and other php tools. The codesniffer plugin does have a native feel inside Netbeans, just like all the php tools in the options menu. However, like php itself, Codesniffer also runs as a CLI program. Netbeans is missing a PSR-? code formatting style, which can be saved into XML, and a good place to start. This also means that every time you automatically re-indent your code, you also must fix your code afterwards, to make it PSR compliant. I recommend everyone use the Codesniffer plugin (for now), available at And just one note from me, CodeSniffer and Mess Detector tools will have a native integration in NB 7.4 as well ;) But you will have to install them too, of course ;) That will be great :) will there also be an psr format option? Currently it's not really possible to set this up correctly in netbeans. Not sure if I'll make it in a time to 8.0 but I have it in my todo list. It's a lot of work. It would be cool if CodeSniffer/MD which is new in 7.4 beta could be added to the "Hints" feature so that checks are made on-the-fly. I have opened a bug with that request: Please support my request :) I could not find a way, to disable the "multiple assignment" hint. Sometimes it helps, but more often than not, I wish to ignore warnings, for code, like: $stmt = $db->prepare('...'); //... $stmt = null; //another assignment; mark for garbage collection I realize that alternative syntax sugar exists, such as unset(), or creating a null-setting function (not necessarily improving code). guest: Hint is called "Immutable variables" Is there a way to know which projects are currently being worked on by the Netbeans team? For example I want to know what features are currently being developed and the road map for the pending features if that's possible. I'd like to put up my hand and ask for Fixer to go with sniffer and MD. If you have ever had to deal with something really big and horribly written, like say WordPress, you will want to have Fixer to whip the files you decide to work on into shape before you start. Otherwise there will be so many sniffer finds on the page you will hardly be able to read it. And of course it is nice to work with clean, grown up code instead of the armature hour stuff that is in the plugins and templates. I know it can't be hard to integrate because I have a plugin handling it in Sublime-Text right now. 
For those who are not familiar, it is kind of the PHP lovechild of HTML Tidy and Rebase. Just be careful with it. Doc and Sniffer will go out and find your code smells, and Fixer will go take care of the fact your code smells, not the WHY it smells. It is only designed so that people who work in shops that have very tight coding style guidelines can get their stuff checked into git. Like I said, great for PHP in WordPress projects. It will fix the 100 ugly things wrong with the code in somebody's functions.php and leave the half dozen really broken things exposed for fixing. Cheers!
Thanks for these hints, I generally find them helpful. Just a small criticism on a couple of them where I think more careful use of English would be appropriate:- It says here 'Please, don't forget, that all hints are just our suggestions'. If this is true then it is not good English to use words in the Hints like "Must", "Allowed" or "Do not", which imply that the code won't work: ("If-Else Statements Must Use Braces", "Method Length Is 11 Lines (10 allowed)", "Do Not Access Superglobals..."). This is very confusing. Better English would be "Should", "Recommended", "Should not", or "Preferred", "Suggested", etc. Thanks again, and keep up the good work!
It's rather unhelpful to flag references to $_SERVER as requiring filtering. Many of the fields provided do not ever need filtering.
Error suppression is not always evil. Sometimes it's perfectly reasonable, especially when applied to a simple variable. The only error that can arise through referring to a simple variable is that it is unset, so preceding it with the at sign recognises that there are good reasons why the variable may not be set and that null is acceptable. For example, suppose a loop that conditionally sets array elements, but may not always set any element, resulting in the array being unset, and assume the processing is in a method that returns the array. This is neatly achieved by return (array) @$resultarray; and gives the desirable degenerate case of an empty array.
I'd like to second the request for a simple ability to suppress a hint. That would be far more useful than questionable hints such as line counts (sometimes, it just does make sense to have a lot of lines in one place, and I can see by looking how large a class/method is). The example given above is rather artificial, so here is another. Suppose an abstract class that wants to allow subclasses to do extra processing when a particular situation arises. The abstract class has a method protected function special ($foo) {} which the subclasses can override. It is called by method(s) in the abstract class. If a subclass does not want to process the special situation, it doesn't override the method. The abstract class doesn't want to use an abstract method, because then all the subclasses would be forced to implement the method, even if empty, adding unnecessary clutter. But the method in the abstract class generates a hint ($foo is unused) that is liable to hide genuinely useful hints. And I really don't want to write $foo = $foo; just to make Netbeans happy - that's surely not the way to good coding.
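Coming back to the Superglobals hint from the article, here is a minimal sketch of the kind of filtered access it nudges you towards (the validation strategy itself is up to you; this just avoids touching $_GET directly):

<?php
// read and validate a query parameter without accessing $_GET directly
$id = filter_input(INPUT_GET, 'id', FILTER_VALIDATE_INT);
if ($id === null || $id === false) {
    // parameter missing or not an integer
    $id = 0;
}
echo $id;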
https://blogs.oracle.com/netbeansphp/improve-your-code-with-new-hints
CC-MAIN-2020-34
refinedweb
1,820
72.16
I originally joined this site almost a year ago so I could find out the name of the GUI library that's in most compilers. Didn't get much help back then. But I hope that will change.
I'm making a sudoku game in C++, so obviously I need a matrix (or a vector of vectors, if possible). Also, I could make it text-based, but I'd rather have some graphics. So, I need a matrix library, and a GUI library.
I thought that vectors were:
#include <vector.h>
...
vector<allocator_type> name(length);
But apparently not. And I thought that matrices were:
#include <matrix.h>
...
matrix<allocator_type> name(xDim,yDim);
But apparently my compiler doesn't even have the matrix library. So could somebody help me out? In a nutshell, I'm having trouble with GUI, matrices, and vectors.
(By the way, I'm using Dev-C++ 5 on Windows XP.)
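For reference, the standard headers dropped the .h suffix long ago, and there is no matrix type in the standard library; a vector of vectors is the usual substitute. A sketch for a 9x9 sudoku grid:

#include <vector>

int main() {
    std::vector<int> row(9);                                     // 9 ints, zero-initialized
    std::vector<std::vector<int> > grid(9, std::vector<int>(9)); // 9x9 "matrix"; note the space in "> >" for pre-C++11 compilers
    grid[0][0] = 5;                                              // row 0, column 0
    return 0;
}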
https://www.daniweb.com/programming/software-development/threads/75487/gui-matrix-and-vector-libraries
CC-MAIN-2018-13
refinedweb
153
69.28
I'm trying to create a video player which will play several videos one after another, depending on video indexes sent through an OSC message. Example: I send an OSC message "playlist" with the following arguments playlist = [2,5,12,34,8], then my player will play video N°2, then, when it reaches the end, switch to video N°5 and so on, until it reaches the end of the last video and pauses, waiting for the next incoming OSC message. The different video files available are loaded from a text file "playlist.txt". Note that the playlist length can change (so I can't create a fixed number of OMXPlayer instances and fill them with the videos).
I've already done this whole project in Processing, but I'm trying to reproduce it in Python to see if I can get some performance improvements, and also it's a good project to start a bit with Python... Oh yes, important: I'M VERY NEW TO PYTHON
For now I'm only focusing on getting videos to play one after another, using omxplayer-wrapper ( ... yer.player)
My first try was:

Code:

from omxplayer.player import OMXPlayer
from time import sleep
import os
import OSC

curMovie = 0
playing = True
playlist_file = open("playlist.txt")
movie = playlist_file.read().split("\n")
path = "/home/pi/Desktop/PYPlayer/"

def handler(addr, tags, data, client_address):
    print(data)
    curMovie = data

def player():
    global curMovie
    print("playlist length is", len(movie), "and movie count is ", curMovie)
    if curMovie == 0:
        player = OMXPlayer(path + movie[curMovie])
    while playing:
        if curMovie < len(movie) - 1:
            if player.position() > (player.duration() - 0.2):  # temporary solution to determine when the player is about to reach the end of the video
                curMovie = curMovie + 1
                player.load(path + movie[curMovie])
                player()
        elif curMovie == len(movie) - 1:
            if player.position() > (player.duration() - 0.2):
                player.pause()
        else:
            player.quit()

player()

I have 2 problems with that solution:
1- The omxplayer window closes when switching between videos, which I would like to avoid
2- It works for the first 2 videos I have in my folder (out of a total of 5), then stops and gives me the error: 'OMXPlayer' object is not callable

I tried a second approach with 2 instances of OMXPlayer, to be able to load the next video while one is playing. Here, my player function is:

Code:

def multiplayer():
    global curMovie
    if curMovie == 0:
        print("first video is playing", curMovie)
        player0 = OMXPlayer(path + movie[curMovie])
        player1 = OMXPlayer(path + movie[curMovie + 1])
        player1.pause()
        player1.hide_video()
    elif curMovie % 2 == 0 and curMovie != 0:
        print(" curMovie is even number ", curMovie)
        player0.show_video()
        player1 = OMXPlayer(path + movie[curMovie + 1])
        player1.pause()
        player1.hide_video()
    elif curMovie % 2 == 1:
        print("curMovie is uneven number ", curMovie)
        player1.show_video()
        player0 = OMXPlayer(path + movie[curMovie + 1])
        player0.pause()
        player0.hide_video()
    while playing:
        if curMovie < len(movie):
            if player0.position() > (player0.duration() - 0.2):
                curMovie = curMovie + 1
                print("curMovie is ", curMovie)
                multiplayer()
            elif player1.position() > (player1.duration() - 0.2):
                curMovie = curMovie + 1
                print("curMovie is ", curMovie)
                multiplayer()
        else:
            print("it's done")
            player1.quit()
            player0.quit()

With this option, my first 2 videos play at the same time, blinking really quickly from one to the other; after they're done, the program quits and returns an error on this line:
Code:

player0.position() > (player0.duration() - 0.2)

I also tried to dynamically create the right number of OMXPlayer instances, to call them one after another:

Code:

for i in range(len(movie)):
    player[i] = OMXPlayer(path + movie[i])
    player[i].pause()

This would work in Processing, but apparently not in Python:

Code:

TypeError: 'function' object does not support item assignment

I know it's going in many directions, but I'm trying several options because of my lack of knowledge in Python, and now I feel a bit stuck...
If anybody has any clue or example, or can point out something obviously wrong in my code, that would help me a lot
Thanks!
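One likely culprit for that last error, as a sketch (untested on a Pi): player is already the name of your function, and Python functions don't support item assignment; keeping the instances in a list avoids both that and the fixed-count problem:

# keep the players in a list instead of indexing into a function name
from time import sleep
from omxplayer.player import OMXPlayer

path = "/home/pi/Desktop/PYPlayer/"
with open("playlist.txt") as f:
    movies = f.read().splitlines()

players = []
for name in movies:
    p = OMXPlayer(path + name)   # one instance per file; heavy if the playlist is long
    p.pause()
    players.append(p)

for p in players:                # play them back to back
    p.play()
    while p.position() < p.duration() - 0.2:   # same end-of-file heuristic as above
        sleep(0.1)
    p.quit()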
https://www.raspberrypi.org/forums/viewtopic.php?f=32&t=219989
CC-MAIN-2018-47
refinedweb
678
50.16
Clojure and Leiningen to manage Clojure projects

I would like to introduce you to Clojure. I am fascinated by how it works, and it is also known to sharpen your thinking as a programmer. In this article, we'll do a basic setup and start programming in Clojure!

What is Clojure?

Daniel Higginbotham, in his book Clojure for the Brave and True, has humorously defined Clojure as an alloy of Lisp, functional programming and a lock of Rich Hickey's own epic hair. It combines a Lisp dialect with the powerful and expressive features of functional programming. Clojure is a hosted language, meaning Clojure programs run on platforms like Java, JavaScript and .NET and use their underlying features.

Leiningen

Leiningen is a useful tool to automate and manage your Clojure projects. Leiningen and Clojure require Java. Since we are focusing on the JVM implementation of Clojure, OpenJDK will be required, and you must have Java version 1.6 or higher installed. Follow the steps on leiningen.org to download Leiningen. It will automatically download the Clojure compiler, clojure.jar.

Creating a new project

lein new app clojure-project

After installing, you will find the following files in the directory:
- project.clj (like package.json in npm) contains the project dependencies.
- a resources folder to save assets (like images).
- src/clojure-project/core.clj is where you write your code.

Running the project

(ns clojure-project.core
  (:gen-class))

(defn -main
  "I don't do a whole lot...yet."
  [& args]
  (println "Hello, World!"))

This code will already be present in the src/clojure-project/core.clj file. If you have ever used C or C++, you would know namespaces and the main function. Well, defn -main is the starting point of your program. To run, cd into clojure-project and run:

lein run

You should see Hello, World! as the output!

Building the project

Clojure works even when the environment changes; Leiningen is not required to run the project elsewhere. To allow code shareability, we can create a stand-alone file that works wherever Java is installed.

lein uberjar

After running this command, a stand-alone jar file will be created in the target/uberjar/ folder (at the same directory level as src). This jar file can be distributed to any platform.

Using REPL

REPL — read, evaluate and print loop. It is a tool that takes a single line and executes the code as soon as it evaluates it (just like the Console in web developer tools or the python prompt). To start a Clojure repl:

lein repl

The prompt would look something like this:

nREPL server started on port 56969 on host 127.0.0.1 - nrepl://127.0.0.1:56969
REPL-y 0.4.4, nREPL 0.7.0
Clojure 1.10.1
OpenJDK 64-Bit Server VM 1.8.0_242-b08
    Docs: (doc function-name-here)
          (find-doc "part-of-name-here")
  Source: (source function-name-here)
 Javadoc: (javadoc java-object-or-class-here)
    Exit: Control+D or (exit) or (quit)
 Results: Stored in vars *1, *2, *3, an exception in *e

clojure-project.core=>

Clojure follows prefix notation, and everything resides in matching parentheses. Samples to try out:
- (+ 1 2 3 4 5) should add up all the numbers and return 15
- (str "hello world") should print "hello world"
- (str "Hi! " "John " "Doe") should concatenate all the strings
- (* 1 2 3 4 5) should return the multiplied value, 120
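To go one step beyond the built-in samples, here is a small example of defining and calling your own function at the REPL. This snippet is an addition for illustration (the function name greet is made up, not from the article):

;; define a function that builds a greeting string for the given name
(defn greet [name]
  (str "Hello, " name "!"))

;; call it
(greet "Clojure")
;; => "Hello, Clojure!"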
https://divyajyotiuk.hashnode.dev/clojure-and-leiningen-to-manage-clojure-projects?guid=none&deviceId=8be841bc-77f5-4c7c-b5df-3a4d04b78096
CC-MAIN-2020-50
refinedweb
556
68.26
Path Operation Advanced Configuration OpenAPI operationId¶ Danger If you are not an "expert" in OpenAPI, you probably don't need this. You can set the OpenAPI operationId to be used in your path operation with the parameter operation_id. You would have to make sure that it is unique for each operation. from fastapi import FastAPI app = FastAPI() @app.get("/items/", operation_id="some_specific_id_you_define") async def read_items(): return [{"item_id": "Foo"}] Using the path operation function name as the operationId¶ If you want to use your APIs' function names as operationIds, you can iterate over all of them and override each path operation's operation_id using their APIRoute.name. You should do it after adding all your path operations. from fastapi import FastAPI from fastapi.routing import APIRoute app = FastAPI() @app.get("/items/") async def read_items(): return [{"item_id": "Foo"}] def use_route_names_as_operation_ids(app: FastAPI) -> None: """ Simplify operation IDs so that generated API clients have simpler function names. Should be called only after all routes have been added. """ for route in app.routes: if isinstance(route, APIRoute): route.operation_id = route.name # in this case, 'read_items' use_route_names_as_operation_ids(app) Tip If you manually call app.openapi(), you should update the operationIds before that. Warning If you do this, you have to make sure each one of your path operation functions has a unique name. Even if they are in different modules (Python files). Exclude from OpenAPI¶ To exclude a path operation from the generated OpenAPI schema (and thus, from the automatic documentation systems), use the parameter include_in_schema and set it to False; from fastapi import FastAPI app = FastAPI() @app.get("/items/", include_in_schema=False) async def read_items(): return [{"item_id": "Foo"}] Advanced description from docstring¶ You can limit the lines used from the docstring of a path operation function for OpenAPI. Adding an \f (an escaped "form feed" character) causes FastAPI to truncate the output used for OpenAPI at this point. It won't show up in the documentation, but other tools (such as Sphinx) will be able to use the rest. from typing import Set from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): name: str description: str = None price: float tax: float = None tags: Set[str] = [] @app.post("/items/", response_model=Item, summary="Create an item") async def create_item(*, item: Item): """ Create an item with all the information: - **name**: each item must have a name - **description**: a long description - **price**: required - **tax**: if the item doesn't have tax, you can omit this - **tags**: a set of unique tag strings for this item \f :param item: User input. """ return item
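As a quick way to verify these settings (an addition to the page above, reusing its first example): app.openapi() returns the generated OpenAPI document as a plain dict, so you can check that a custom operationId was applied and that excluded routes are absent.

from fastapi import FastAPI

app = FastAPI()


@app.get("/items/", operation_id="some_specific_id_you_define")
async def read_items():
    return [{"item_id": "Foo"}]


# Inspect the generated schema to confirm the configuration took effect.
schema = app.openapi()
print(schema["paths"]["/items/"]["get"]["operationId"])
# some_specific_id_you_define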
https://fastapi.tiangolo.com/tutorial/path-operation-advanced-configuration/
CC-MAIN-2020-05
refinedweb
426
61.56
Data Points - Looking Ahead to Entity Framework 7

By Julie Lerman | January 2015

Development of the next version of Entity Framework is well underway. I got my first glimpse of what the EF team was working on at TechEd North America 2014, when Program Manager Rowan Miller talked about the goals for Entity Framework 7 (EF7) and demonstrated some very early bits. That was five months ago as I'm writing this column, and although EF7 is still an early alpha, it has already come a long way. In this column I want to make you aware of what EF7 will bring to developers, the motivations behind decisions being made about EF7, and what this version means to existing apps that use EF6 or earlier. I'll also give you a peek at what some of the code will look like.

Open Source, but Now on GitHub

Entity Framework has been open source for some time, and its development is now hosted on GitHub.

EF6 Is Not Going Away Any Time Soon

Don't worry—you won't be forced to move to EF7. Think back to ADO.NET DataSets and DataReaders. Much like ASP.NET Web Forms, which are still supported and even benefiting from occasional tweaks, ADO.NET is still part of the Microsoft .NET Framework, even though EF has been the primary data access technology for .NET for many years. Not much has happened to enhance those technologies, but they're still there, and still supporting a lot of legacy code (mine included). One of the big advantages EF6 has over those other technologies is that it's open source, so even though the team at Microsoft won't be making big investments in EF6, the community will still be able to. And the EF team is committed to EF6. They will continue to make tweaks, closely inspect pull requests and update EF6. Even though they've been going full-bore on EF7 for a good part of 2014, EF6 was updated. Version 6.1.0 was released in February 2014; 6.1.1 in June 2014; and as I'm writing this article, 6.1.2 is in beta, to be released soon. I was originally concerned, but am no longer worried, about being able to keep older apps going. The only ones I worry about are the earliest apps that used EF with the .NET Framework 3.5, ObjectContext and more. But if you haven't updated those apps to leverage all of the great improvements to EF over the years, you may not be worrying too much about EF7, anyway. You can also find all of the earlier packages for EF on NuGet, all the way back to EF 4.1.10311.

Familiar Coding Surface, New Code Base

Each version of EF has evolved the framework, adding new capabilities and fine-tuning performance and APIs. As I've written about before in this column, as well as in an overview article in the December 2013 issue, "Entity Framework 6: The Ninja Edition" (bit.ly/1cwjyCD), the latest version brought EF to a new level, giving the framework many of the features users have been asking for along the way, such as asynchronous database execution, tapping into the query pipeline, customizing Code First conventions and so much more. I drill much further into these features in my Pluralsight course, "Entity Framework 6, Ninja Edition: What's New in EF6" (bit.ly/PS-EF6). There were even more features developers sought for EF that Microsoft was eager to implement, but the 10-plus-year-old code base EF is built on—with a continued dependency on ObjectContext and less flexible coding patterns—prevented the team from getting to this next level of capabilities. A difficult decision—which surely many of you have faced with your own legacy software—was made to rebuild Entity Framework from scratch. EF7 isn't creating a new framework for data access.
Instead, it's building a new, more sustainable base on which to support not only the features and workflow you've depended on for years with EF, but also one that will enable so much more. The team struggled with whether this should be the next EF or a new data access technology. At one point, I even questioned whether it would be "EF Light." But the core functionality of EF is still there and after a lot of consideration, I agree it makes sense to think of this as the next version of Entity Framework. You can read more about this in the team's blog post, "EF7 – v1 or v7?" (bit.ly/1EFEdRH).

Shedding Some Legacy Weight, Keeping the Good Parts

Yet there's also news about EF7 that's worrisome for some developers. While the most common EF classes, patterns and workflows will remain intact, some of the lesser-used members will be left behind. But please don't panic; I'll talk more about this in a bit. Enabling developers to continue using familiar patterns and even be able to port good amounts of existing code to EF7 was a critical goal. You'll still use DbContext, DbSet, LINQ queries, SaveChanges and many of the means of interaction that have been part of EF for a long time: you define a DbContext class with DbSet properties just as you always have, and a simple update or query reads the same as in EF6. (I'm using synchronous saves in my samples, but all of the async methods are there, as well.) I'm using the version of EF7 that's in the packages with version beta2-11616. EF7 isn't really beta at this time, but the "beta2" is related to a NuGet package-naming decision. By the time this article is published, EF7 will have evolved further, so please consider this a look, not a promise. I still have a DbContext and define DbSets, just as I've always done. OnModelCreating is still there, although I'm not using it in these samples. EF4.1 introduced the DbContext API, which was much more focused on typical EF usage. Underneath, it still relied on the original ObjectContext that provides database interaction, manages transactions and tracks the state of objects. Since then, DbContext has become the default class to use and you'd dip down to the lower-level APIs if you wanted to do a rare interaction with the ObjectContext. EF7 will shed the obese ObjectContext; only the DbContext will remain. But some of those tasks you've relied on the ObjectContext for will still be accessible. Some of the very complicated mappings that are difficult to support and not commonly used will go away with EF7. The aforementioned blog post says, "For example, you could have an inheritance hierarchy that combined TPH, TPT and TPC mappings, as well as Entity Splitting, all in the same hierarchy." If you've ever attempted to work directly with the MetadataWorkspace API and run away screaming, you know it's an intricate and complex beast, useful for being able to support this kind of flexibility. But that complexity has prevented the team from being able to support other scenarios for which users have asked. By simplifying the mapping possibilities, the MetadataWorkspace API also became simpler and much more flexible. You can easily get to metadata about your model schema from the DbContext API in EF7, which gives you a low-level capability to perform advanced techniques without having to have the low-level ObjectContext at your disposal.

Dropping EDMX, but Database First Will Continue

Entity Framework currently has two ways to describe a model.
One uses an EDMX in the designer; the other involves the classes, a DbContext and mappings that are used by the Code First APIs. If you're using the EDMX and designer, at run time EF creates an in-memory model from the XML behind the EDMX. If you choose the Code First path, EF creates the same in-memory model by reading the classes, DbContext and mappings you provided. From that point on, EF works the same, regardless of how you describe your model. Note that with the EDMX/Designer workflow, you also get POCO classes and a DbContext to work with in your code. But because the EDMX is there, they aren't used to create that in-memory model. This is important to understand as you read the next sentences: EF7 will not support the designer-based EDMX model. It will not have the ability to read the EDMX XML at run time to create the in-memory model. It will use only the Code First workflow.

When the team blogged about this, it caused panic among developers. Partly this was due to the fact that many still don't realize you can reverse-engineer a database to POCO classes, DbContext and mappings. In other words, you can start with a database to get a Code First model. This has been possible since the EF Power Tools Beta was first released in early 2011. It's supported by the EF6.1 designer and it will definitely be supported for EF7. I've said many times that the "Code First" moniker is a little confusing and misleading. It was originally called "Code Only," but the name was changed to "Code First" to make a nice match with "Database First" and "Model First." So you don't need the designer or an EDMX to start with an existing database. But what if you have existing EDMX models and don't want to lose the ability to use a designer? There are third-party designers that support Entity Framework, such as the LLBLGen Pro Designer, which already supports EF Code First (bit.ly/11OLlN2), and the Devart Entity Developer (bit.ly/1yHWbB2). Look for those tools and possibly others to potentially provide designer support for EF7. There is yet another path to keep in mind: sticking with EF6!

Smaller Footprint, More Devices and OSes

Additionally, Microsoft strove to streamline the distribution of the EF APIs. The NuGet package folder for EF6.1.1 is about 22MB. This includes a 5.5MB assembly for the .NET Framework 4.5 and another in case you're using .NET Framework 4. With EF7, there are a number of smaller DLLs. You'll combine only the DLLs necessary to support your workflow. For example, if you're targeting SQL Server, you'd use a core EntityFramework.dll, a DLL for SQL Server and another with APIs common to relational data stores. If you want to use migrations, that's a separate assembly you can skip. Otherwise, you may want to create and execute migrations from the Package Manager Console. There's an API for commands. Using the NuGet package manager, the proper packages will be identified and downloaded via their dependencies, so you won't have to worry too much about the details. What this does is minimize the EF7 footprint on the end user's computer or device, which is especially important on devices. ASP.NET is going this route, as well. Both of these technologies are dropping their reliance on the full .NET Framework. Instead, they'll distribute only the DLLs necessary for accomplishing the tasks of a given application. This means the already-streamlined version of .NET used by Windows Phone and Windows Store apps will be able to use EF7.
It also means that OSes like OS X and Linux that use Mono rather than the full .NET Framework will also be able to support client-side Entity Framework.

Beyond Relational

When Entity Framework was first introduced, Microsoft had a vision of it being used for a variety of data stores, though the first pass focused on relational databases. Non-relational databases existed at that time, but were not widely used, unlike the NoSQL databases—especially document databases—that are so popular today. While EF is an Object Relational Mapper (ORM), developers who use it want to be able to use the same constructs to interact with non-relational databases. EF7 will provide a high level of support for this, but keep in mind what high level really means. There are vast differences between relational databases and non-relational databases and EF will not make any attempt to mask those differences. But for basic querying and updates, you'll be able to use the patterns with which you're already familiar. Figure 1 shows code from a sample app that targets Microsoft Azure Table Storage, which is a non-relational document database. The sample comes from EF Program Manager Rowan Miller at github.com/rowanmiller/Demo-EF7. Note that the sample runs against the 11514 version of the EF7 alpha nightly builds.

public class WarrantyContext : DbContext
{
    public DbSet<WarrantyInfo> Warranties { get; set; }

    protected override void OnConfiguring(DbContextOptions options)
    {
        var connection = ConfigurationManager.ConnectionStrings["WarrantyConnection"]
            .ConnectionString;
        options.UseAzureTableStorage(connection);
    }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        builder.Entity<WarrantyInfo>()
            .ForAzureTableStorage()
            .PartitionAndRowKey(w => w.BikeModelNo, w => w.BikeSerialNo);
    }
}

The OnConfiguring method is new. It's a way to affect how EF configures the DbContext at run time, somewhat like you can do today with the DbConfiguration class. Notice the options.UseAzureTableStorage extension method, which exists because I've also installed the EntityFramework.AzureTableStorage package into my project. EF7 uses this pattern for its various providers. In a project that targets SQLite instead, the OnConfiguring method calls a UseSQLite extension method, which is available because that project has the EntityFramework.SQLite package installed. Back in the WarrantyContext class in Figure 1, you can see the familiar OnModelCreating override for DbContext and in there I'm doing some special mapping. Again, I have methods provided by the EntityFramework.AzureTableStorage NuGet package. I get to pick and choose the packages I want based on the features I need. Azure Table Storage relies on a key-value pair for unique identity and to support table partitioning. In order to retrieve or store data, it's critical to know what values are to be used for the PartitionKey and the RowKey, so the API provides a method—PartitionAndRowKey—that allows you to map the properties to the appropriate keys. The concept is no different from how you've been able to use the fluent API or Data Annotations to specify the property that maps to a relational database's primary key. Thanks to this mapping, I can write a familiar LINQ query to retrieve some data—a typical LINQ query, but one that executes against the Azure Table Storage data store, just as you can do with a relational database today.
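A representative sketch of such a query, based on the mapping above. The model and serial values here are invented for illustration, and the exact query in the original demo may differ:

using (var db = new WarrantyContext())
{
    // The PartitionAndRowKey mapping lets EF resolve this lookup against
    // Azure Table Storage, using BikeModelNo as the PartitionKey and
    // BikeSerialNo as the RowKey.
    var warranty = db.Warranties
        .SingleOrDefault(w => w.BikeModelNo == "1050" && w.BikeSerialNo == "A1052");
}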
This same demo also updates warranty objects; creates and inserts new ones using DbSet.Add; and uses DbContext.SaveChanges to persist everything back to the data store, just as it's done today with EF6—and has been done throughout the history of EF. Also interesting to consider is how Entity Framework has always supported a set of canonical features for mapping to relational databases, but left it up to the database providers to specify how those would translate to the database they target. EF7 will have a high-level set of canonical features that can be understood by relational and non-relational data stores. There's also a lower-level set of features that focus on relational databases, and they're encapsulated in the EntityFramework.Relational assembly. All of the relational database providers will depend on those and, just like today, their specific handling of database interaction will be housed in their own provider APIs, like the EntityFramework.SQLite I used earlier. You'll find extension methods in the providers that spin off of an AsRelational method, which is in the Relational API. It's an extension method of DbContext.

There's even an in-memory data store provider, which is for unit testing when you want to avoid a database interaction that might be involved in the logic you're testing. Typically in these scenarios you use fakes or mocking frameworks to impersonate the database interaction. Ordinarily, a test that performs a query or update would have some code to instantiate the database; you can easily switch to an in-memory store instead by first installing the entityframework.InMemory package to your test project, defining a DbContextOption for InMemoryStore and then specifying that the context should use that option, all of which is possible thanks to extension methods provided by this API.

More Features, More Capabilities, Much More Flexible

You can already see the benefits of the new code base in the flexibility the extension methods provide, and in the ability to affect the Entity Framework pipeline with the OnConfiguring overload. There are extensibility points throughout the new code base, not only for changing EF7, but also for making it simpler for you to plug in your own logic to EF7. The new core code base gives the EF team a chance to solve some age-old problems. For example, the version I'm using already has support for batch updating, which is the default for relational databases. I've played with code that allows me to use my own methods inline in LINQ queries without receiving the dreaded "Entity Framework cannot translate this method into SQL." Instead, EF and the providers are able to parse out which part of the query will become SQL and which will get run locally on the client. I'm sure there will be protection from and guidance for avoiding some potential performance issues for that particular feature. The team was able to add the long-requested Unique Foreign Keys capability for models. They're also looking closely at providing support for table-valued functions and cleaner ways to handle disconnected data, which is something I've focused on for many years with Entity Framework. It's a common problem with disconnected applications—not just when Entity Framework is involved—and it's not easy to create algorithms that will work consistently in every scenario. So a new approach is needed, for sure. There's a lot more to get excited about with EF7.
I highly recommend a close look at the posts on the ADO.NET Team Blog at blogs.msdn.com/adonet. In addition to the post I linked to earlier, Rowan Miller wrote in-depth about the decision to drop designer support in EF7; see "EF7 - What Does 'Code First Only' Really Mean" at bit.ly/1sLM3Ur. Keep an eye on that blog, as well as the GitHub project. The wiki on GitHub (bit.ly/1viwqXu) has links to how to access the nightly builds; how to download, compile and debug the source code; some walk-throughs and the design meeting notes. The team is eager for your feedback and is excited to receive pull requests.

A Decision Not Taken Lightly

It's important to me to write about EF7 to help allay some fears about such a big change, and about the fact that some of the existing EF features that might be integral to your applications will not make it into EF7. These fears are not unfounded and the team is not taking them lightly, nor am I. But understanding that EF6 will not go away and will continue to evolve with contributions from the community is critical. If you want to take advantage of the forward movement, you'll have some tough choices to make. Upgrading big applications will not be easy and you should weigh the options carefully. Perhaps you can break up your application, rewriting only some portions to benefit from EF7. Again, as I'm writing this column, EF7 is still in its early stages, and I'm not sure how far along it will be by the time you read this. But the currently available source and NuGet packages are there to explore, experiment with and provide feedback on. Bear in mind that the team may not always keep all of the provider APIs (such as Redis, SQLite and others) up-to-date as they evolve the core API. According to the post at bit.ly/1ykagF0, "EF7 - Priorities, Focus and Initial Release," the first release of EF7 will focus on compatibility with ASP.NET 5. Subsequent releases will add more features. Still, even though EF7 is not yet stable enough to begin building applications with, there's definitely enough there to let you start planning ahead.

Thanks to the following Microsoft technical expert for reviewing this article: Rowan Miller. This column is based on an alpha version of Entity Framework 7; all information is subject to change.
https://msdn.microsoft.com/en-us/magazine/dn890367.aspx
CC-MAIN-2019-09
refinedweb
3,450
62.38
Michael Krimgen — Ranch Hand, member since Jul 08, 2012. 54 posts, 8 threads started, 4 likes received.

Recent posts by Michael Krimgen

Advanced data structures?
Hi Marcello! In your reply about what advanced data structures are (in comparison to basic ones), you say that they build on top of the basic ones. For example, a d-ary heap would be an improvement (depending on the use case) of a binary heap. Another definition for an advanced data structure or algorithm could be one where multiple basic data structures are combined (e.g. Tim Sort, where the algorithm decides which sorting algorithm to use based on the nature of the input). Or a HashMap in Java, where a bucket is represented as a linked list or a red-black tree depending on the number of entries. Does the book also cover examples of combined algorithms or data structures and how to analyse them? Cheers, Michael
(1 month ago, in Design)

Advanced data structures
Hi Marcello! Does the book describe the data structures and algorithms in pseudo code or in any programming language? Also, are real implementations of all topics in the book available for download? Cheers, Michael
(1 month ago, in Design)

Quick questions about the High Performance Python book
Hi Tiago! Could the techniques described in your book also be useful for other applications than data analysis, like building APIs or games in Python?
(9 months ago, in Jython/Python)

Do I need a quantum computer?
Hi Eric, for now, the only reason for me to learn about QC algorithms would be curiosity. However, do you see any applications of the concepts of QC algorithms in other areas like concurrent programming?
(2 years ago, in Quantum Computing)

Programming Quantum Computers: Theoretical background of algorithms in the book
Hi Eric, thanks for your detailed answer!
(2 years ago, in Quantum Computing)

Programming Quantum Computers: Theoretical background of algorithms in the book
Dear authors, does the book also discuss the theoretical background of the presented algorithms? For example, proofs that the algorithms actually work (provided they are executed on a working quantum computer).
(2 years ago, in Quantum Computing)

Are there any prerequisites?
Hi authors, there are some experimental quantum computers out there. Do you know if any of the presented algorithms has already been tested on such a computer?
(2 years ago, in Quantum Computing)

Java Code Review and Psychology
Done! Good luck with your research!
(2 years ago, in Java in General)

Building Ethereum DApps: Question regarding the content
Hi Roberto, I have two questions regarding the content of your book. a) Does the book also cover the cryptographic details of the blockchain, or is the reader expected to know these? b) I see that there is a chapter on unit testing the application. Is integration testing also covered? Cheers, Michael
(2 years ago, in Cloud/Virtualization)

Display of USA flag at Command Prompt???
Hi Mark, you should consider making your code more readable, so it's easier for people to review it. Here are some points to consider: - You have one long method with nested for/if; consider using methods. You could, for example, create a method which takes a String and a number n as input and prints the string n times. - Try to find an alternative for the first_time / second_time flags.
(2 years ago, in Code Reviews)

Discussion on quick sort
Java uses TimSort for Objects.
(2 years ago, in Java in General)

Verhas Java 11: what are the new Java 11 features you show?
Hi Peter, does the book point out which are Java 11-specific features, so the reader can easily see which Java 11 improvements are applied in the projects presented in the book?
(3 years ago, in Beginning Java)

Switch
You could use a switch with an enum. This could be an option if you want to reuse the conditions and keep the switch short. Here is a simplified example. Please note that I do not advocate this pattern, but it might be appropriate in some cases.

public class Switch {
    public static void main(String[] argv) {
        int i = 5;
        switch (Compare.getByValue(i)) {
            case RANGE_1_5:
                System.out.println("one to five");
                break;
            case RANGE_8_12:
                System.out.println("eight to twelve");
                break;
            default:
                System.out.println("default value");
        }
    }

    private enum Compare {
        // do not add overlapping values
        RANGE_1_5(1, 5),
        RANGE_8_12(8, 12),
        DEFAULT(0, -1);

        int startRange, endRange;

        Compare(int start, int end) {
            startRange = start;
            endRange = end;
        }

        public static Compare getByValue(int i) {
            for (Compare c : Compare.values()) {
                if (i <= c.endRange && i >= c.startRange) {
                    return c;
                }
            }
            return DEFAULT;
        }
    }
}

(3 years ago, in Beginning Java)

javac plugin
Hello, I recently read about the possibility to create a java compiler plugin, a feature which was added in version 1.8. I could only find a few resources on it. I am looking for some good use cases for a plugin, and for examples of plugins which actually do more than can be done with annotations and reflection. Has anyone here used a compiler plugin, and for which purpose? Thanks, Michael
(3 years ago, in Java in General)

Java Literals
Note that you are assigning a character to your variable a, not an integer value. If you want the value of a to be 7, you should write: int a = 7; (without the single quotes)
(3 years ago, in Beginning Java)
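A compact illustration of the point in that last post (the class name and values are made up for illustration, not from the original thread):

public class Literals {
    public static void main(String[] args) {
        int fromChar = '7'; // assigns the char's code point, which is 55
        int fromInt = 7;    // assigns the numeric value 7
        System.out.println(fromChar); // prints 55
        System.out.println(fromInt);  // prints 7
    }
}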
https://coderanch.com/u/269433/Michael-Krimgen
CC-MAIN-2021-49
refinedweb
1,082
60.55
Source, Sink generally, because I felt it would be too intrusive when just working with Monix or scalarx or airstream natively. Maybe just hide them under a syntax._ import? It is nicer to write. And that is an advantage of the Source, Sink architecture.

Observable / Observer there?

Source and Sink would feel wrong, because usually algebraic methods are members these days. I hope that's going to change with Scala 3 and the extension keyword. No more .map on everything. Just a single global extension method for all functor instances.

a.withLatest(b) and the method on the static object Observable.withLatest(a, b). While I think it's fine to have both, outwatch tends to use them at random in some places, and I believe it would make the code more readable to make that a little more consistent.

not found: type Handler
[error] def cmpDate(hdl: Handler[Long]) =

import outwatch.reactive.handler.Handler

import outwatch.reactive.handlers.monix._

Handler. To me, this concept feels too complex, that you indirectly create new instances of your stream-types (just with the help of a magic import). It just seems more straightforward to directly create your subjects with the library of your choice. The intent is much more clear.

Observable in your dom nodes.

any idea why this code works perfectly, but if I put the class FrmClass in another file it doesn't?

class TestingOnClassSpec extends JSDomAsyncSpec {

  class FrmClass {
    val testClick = new Observer[String] {
      def onNext(elem: String): Future[Ack] = {
        Future {
          println(elem)
          Continue
        }
      }
      def onError(ex: Throwable): Unit = {
        println(ex.printStackTrace.toString)
      }
      def onComplete(): Unit = println("O completed Handler")
    }

    val but = button(
      idAttr := "cmdTestOnClass",
      "Save",
      cls := "myButton",
      onClick.use("on the observer a class ----------") --> testClick
    )
  }

  it should "be test the save button" taggedAs(ButtonTestingTest) in {
    val frm = new FrmClass
    for {
      _ <- OutWatch.renderInto[IO]("#app", frm.but)
    } yield {
      val element = document.getElementById("cmdTestOnClass")
      sendEvent(element, "click")
      succeed
    }
  }
}

am I missing something?

VNode. The element is what you can append the canvas to, which is rendered by nspl.
https://gitter.im/OutWatch/Lobby?at=626139ac8db2b95f0ac822d5
CC-MAIN-2022-27
refinedweb
330
57.98
a simple static fileserver and directory index server in python (WSGI app)

About

Often for testing you will want a static fileserver and directory index as part of your WSGI stack. In addition, you may have requirements to run such a server as part of a production WSGI stack. FileServer fits these needs.

Motivation

I needed a directory index server a la Apache to test a PyPI clone I was using. After surveying what was out there, there didn't seem to be anything easily consumable for my purposes. So I wrote one, depending only on webob.

Contents

from fileserver import * should give you access to all of the usable components of fileserver:
- file_response: return a webob response object appropriate to a file name
- FileApp: WSGI app that wraps file_response
- DirectoryServer: serves a directory tree and generated indices
- main: command line entry point

FileApp and file_response are heavily borrowed from existing code; I also borrowed from Paste's StaticURLParser and static.Cling. In addition there is a command line script, serve, which may be used to serve a directory with the wsgiref server.

Tests

doctests and a test runner, test.py, exist in the tests/ subdirectory of the source. I currently use paste.fixture.TestApp to mock requests and inspect responses, but should probably move to WebTest.

Other Projects

While I didn't find them suitable for my use, there are other standalone static fileservers available for python:
- static
- Paste StaticURLParser
- SimpleHTTPServer

Jeff Hammel
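The README names the components but not their signatures, so here is a rough sketch of how such a WSGI directory server is typically wired up. The constructor argument is an assumption on my part, not documented here; check the package source for the real API:

from wsgiref.simple_server import make_server

from fileserver import DirectoryServer  # component named in the README above

# Serve the current directory with generated indices on port 8000.
# DirectoryServer('.') is a guess at the constructor signature.
app = DirectoryServer('.')
make_server('', 8000, app).serve_forever()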
https://pypi.org/project/FileServer/
CC-MAIN-2017-39
refinedweb
274
53.51
In this article, I'll show you:

💬 How to check the version of the Python module (package, library) array? And how to check if array is installed anyway?

These are the eight best ways to check the installed version of the Python module array:
- Method 1: pip show array
- Method 2: pip list
- Method 3: pip list | findstr array
- Method 4: library.__version__
- Method 5: importlib.metadata.version
- Method 6: conda list
- Method 7: pip freeze
- Method 8: pip freeze | grep array

Before we go into these ways in detail, let's start with the simplest one to check whether array is installed in your current Python environment at all.

Method 1: pip show

To check which version of the Python library array is installed, run pip show array or pip3 show array:

pip show array

Name: array
Version: a.b.c
Summary: ...
Home-page: ...
Author: ...
Author-email: ...
License: ...
Location: ...
Requires: ...
Required-by: ...

In some instances, this will not work—depending on your environment. In this case, try those commands before giving up:

python -m pip show array
python3 -m pip show array
py -m pip show array
pip3 show array

Next, we'll dive into more ways to check your array version.

Method 3: pip list | findstr array

To locate the version of array in the output list of package versions automatically, filter the pip list output using the CMD or Powershell command:

pip3 list | findstr array

Method 5: importlib.metadata.version

You can call importlib.metadata.version('array') for library array. This returns a string representation of the specific version, such as 1.2.3, depending on the concrete version in your environment. Here's the code:

import importlib.metadata
print(importlib.metadata.version('array'))

Method 6: conda list

In a conda environment, list version information about the specific package with:

conda list '^array'

Method 7: pip freeze

The pip freeze command without any option lists all installed Python packages in your environment in alphabetical order (ignoring UPPERCASE or lowercase). You can spot your specific package array if it is installed in the environment.

pip freeze

Output example (depending on your concrete environment/installation):

PS C:\Users\xcent> pip freeze
aaa==1.2.3
...
array==1.2.3
...

Method 8: pip freeze | grep array

To programmatically locate the version of your particular package array in the output list of package versions, filter the output using the CMD or Powershell command:

pip freeze | grep array

Here's an example for array:

pip freeze | grep array
array==1.2.3

Related Questions

Check array Installed Python

How to check if array is installed in your Python script? To check if array is installed in your Python script, you can run import array in your Python shell and surround it by a try/except to catch a potential ModuleNotFoundError.

try:
    import array
    print("Module array installed")
except ModuleNotFoundError:
    print("Module array not installed")

Check array Version Python

How to check the package version of array in Python? To check which version of array is installed, use pip show array or pip3 show array in your CMD/Powershell (Windows), or terminal (macOS/Linux/Ubuntu) to obtain the output major.minor.patch.

pip show array
# or
pip3 show array
# 1.2.3

Check array Version Linux

How to check my array version in Linux? To check which version of array is installed, use pip show array or pip3 show array in your Linux terminal.

pip show array
# or
pip3 show array
# 1.2.3

Check array Version Ubuntu

How to check my array version in Ubuntu? To check which version of array is installed, use pip show array or pip3 show array in your Ubuntu terminal.

pip show array
# or
pip3 show array
# 1.2.3

Check array Version Windows

How to check my array version on Windows?
To check which version of array is installed, use pip show array or pip3 show array in your Windows CMD, command line, or PowerShell. pip show array # or pip3 show array # 1.2.3 Check array Version Mac How to check my array version on macOS? To check which version of array is installed, use pip show array or pip3 show array in your macOS terminal. pip show array # or pip3 show array # 1.2.3 Check array Version Jupyter Notebook How to check my array version in my Jupyter Notebook? To check which version of array is installed, add the line !pip show array to your notebook cell where you want to check. Notice the exclamation mark prefix ! that allows you to run commands in your Python script cell. !pip show array Output: The following is an example on how this looks for array in a Jupyter Notebook cell: Package Version ------------- – – ------- aaa 1.2.3 ... array 1.2.3 ... zzz 1.2.3 Check array Version Conda/Anaconda How to check the array version in my conda installation? Use conda list 'array' to list version information about the specific package installed in your (virtual) environment. conda list 'array' Check array Version with PIP How to check the array version with pip? You can use multiple commands to check the array version with PIP such as pip show array, pip list, pip freeze, and pip list. pip show array pip list pip freeze pip list The former will output the specific version of array. The remaining will output the version information of all installed packages and you have to locate array first. Check Package Version in VSCode or PyCharm How to check the array version in VSCode or PyCharm? Integrated Development Environments (IDEs) such as VSCode or PyCharm provide a built-in terminal where you can run pip show array to check the current version of array in the specific environment you’re running the command in. pip show array pip3 show array pip list pip3 list pip freeze pip3 freeze You can type any of those commands in your IDE terminal like so: Summary In this article, you’ve learned those best ways to check a Python package version: - Method 1: pip show array - Method 2: pip list - Method 3: pip list | findstr array - Method 4: library.__version__ - Method 5: importlib.metadata.version - Method 6: conda list - Method 7: pip freeze - Method 8: pip freeze | grep array.
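One more defensive variant to round this out (an addition, not part of the original article): if the package isn't installed, importlib.metadata raises PackageNotFoundError, which you can catch explicitly instead of letting the script crash.

from importlib.metadata import version, PackageNotFoundError

try:
    print(version("array"))
except PackageNotFoundError:
    print("array is not installed in this environment")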
https://blog.finxter.com/how-to-check-array-package-version-in-python-2/
CC-MAIN-2022-33
refinedweb
993
61.77
This post comes in preparation for a post on decision trees (a specific type of tree used for classification in machine learning). While most mathematicians and programmers are familiar with trees, we have yet to discuss them on this blog. For completeness, we'll give a brief overview of the terminology and constructions associated with trees, and describe a few common algorithms on trees. We will assume the reader has read our first primer on graph theory, which is a light assumption. Furthermore, we will use the terms node and vertex interchangeably, as mathematicians use the latter and computer scientists the former.

Definitions

Mathematically, a tree can be described in a very simple way.

Definition: A path $(v_1, v_2, \dots, v_n)$ in a graph $G$ is called a cycle if $v_1 = v_n$. Here we assume no vertex is repeated in a path (we use the term trail for a path which allows repeated vertices or edges).

Definition: A graph $G$ is called connected if every pair of vertices has a path between them. Otherwise it is called disconnected.

Definition: A connected graph $G$ is called a tree if it has no cycles. Equivalently, $G$ is a tree if for any two vertices $v, w$ there is a unique path connecting them.

The image at the beginning of this post gives an example of a simple tree. Although the edges need not be directed (as implied by the arrows on the edges), there is usually a sort of hierarchy associated with trees. One vertex is usually singled out as the root vertex, and the choice of a root depends on the problem. Below are three examples of trees, each drawn in a different perspective. People who work with trees like to joke that trees are supposed to grow upwards from the root, but in mathematics they're usually drawn with the root on top. We call a tree with a distinguished root vertex a rooted tree, and we denote it $(T, r)$, where $T$ is the tree and $r$ is the root.

The important thing about the hierarchy is that it breaks the tree into discrete "levels" of depth. That is, we call the depth of a vertex $v$ the length of the shortest path from the root to $v$. As you can see in the rightmost tree in the above picture, we often draw a tree so that its vertices are horizontally aligned by their depth. Continuing with nature-inspired names, the vertices at the bottom of the tree (more rigorously, vertices of degree 1) are called leaves. A vertex which is neither a leaf nor the root is called an internal node. Extending the metaphor to family trees, given a vertex $v$ of depth $n$, the adjacent vertices of depth $n + 1$ (if there are any) are called the child nodes (or children) of $v$. Similarly, $v$ is called the parent node of its children. Extrapolating, any node on the path from $v$ to the root is an ancestor of $v$, and $v$ is a descendant of each of them.

As a side note, all of this naming is simply a fancy way of imposing a partial ordering on the vertices of a tree, in that $v \leq w$ whenever $w$ is on the path from $v$ to the root. Using some mathematical lingo, a "chain" in this partial order is simply a traversal down the tree from some stopping vertex. All of the names simply make this easier to talk about in English: $v \leq w$ if and only if $w$ is an ancestor of $v$. Of course, there are also useful total orderings on a tree (where you can compare two vertices, neither of which is a descendant of the other), and we will describe some later in this post.

In applications, there is usually some data associated with the vertices and edges of a tree. For example, in our future post on decision trees, the vertices will represent attributes of the data, and the edges will represent particular values for those attributes.
A traversal down the tree from root to a leaf will correspond to an evaluation of the classification function. The meat of the discussion will revolve around how to construct a sensible tree.

The important thing about depth in trees is that, given sufficient bounds on the degree of each vertex, the depth of a tree which is not egregiously unbalanced is logarithmic in the number of vertices. In fact, most trees in practice will satisfy this. Perhaps the most common kind is a so-called binary tree, in which each internal node has degree at most 3 (two children, one parent). To see that this satisfies the logarithmic claim, simply count nodes by depth: the $k$-th level of the tree can have at most $2^k$ vertices. And so if all of the levels are filled (the tree is not "unbalanced") and the tree has depth $d$, then the number of nodes in the tree is

$\displaystyle n = \sum_{k=0}^{d} 2^k = 2^{d+1} - 1.$

Taking a logarithm recovers a term that is linear in $d$, and the same argument holds if we can fix a global bound on the degree of each internal node.

In other words, if one can model their data in a binary tree, then searching through the data takes logarithmic time in the number of data points! For those readers unfamiliar with complexity theory, that is wicked fast. To put things into perspective, it's commonly estimated that there are less than a billion websites on the internet. If one could search through all of these in logarithmic time, it would take roughly 30 steps to find the right site (and that's using a base of 2; in base 10 it would take 9 steps). As a result, much work has been invested in algorithms to construct and work with trees. Indeed the crux of many algorithms is simply in translating a problem into a tree. These data structures pop up in nearly every computational field in existence, from operating systems to artificial intelligence and many many more.

Representing a Tree in a Computer

The remainder of this post will be spent designing a tree data structure in Python and writing a few basic algorithms on it. We're lucky to have chosen Python in that the class representation of a tree is particularly simple. The central compound data type will be called "Node," and it will have three associated parts:

- A list of child nodes, or an empty list if there are none.
- A parent node, or "None" if the node is the root.
- Some data associated with the node.

In many strongly-typed languages (like Java), one would need to be much more specific in number 3. That is, one would need to construct a special Tree class for each kind of data associated with a node, or use some clever polymorphism or template programming (in Java lingo, generics), but the end result is often still multiple versions of one class. In Python we're lucky, because we can add or remove data from any instance of any class on the fly. So, for instance, we could have our leaf nodes use different internal data than our internal nodes, or have our root contain additional information. In any case, Python will have the smallest amount of code while still being readable, so we think it's a fine choice.

The node class is simply:

class Node:
    def __init__(self):
        self.parent = None
        self.children = []

That's it! In particular, we will set up all of the adjacencies between nodes after initializing them, so we don't need to put anything else in the constructor.
Here’s an example of using the class: root = Node() root.value = 10 leftChild = Node() leftChild.value = 5 rightChild = Node() rightChild.value = 12 root.children.append(leftChild) root.children.append(rightChild) leftChild.parent = root rightChild.parent = root We should note that even though we called the variables “leftChild” and “rightChild,” there is no distinguishing from left and right in this data structure; there is just a list of children. While in some applications the left child and right child have concrete meaning (e.g. in a binary search tree where the left subtree represents values that are less than the current node, and the right subtree is filled with larger elements), in our application to decision trees there is no need to order the children. But for the examples we are about to give, we require a binary structure. To make this structure more obvious, we’ll ugly the code up a little bit as follows: class Node: def __init__(self): self.parent = None self.leftChild = None self.rightChild = None In-order, Pre-order, and Post-order Traversals Now we’ll explore a simple class of algorithms that traverses a tree in a specified order. By “traverse,” we simply mean that it visits each vertex in turn, and performs some pre-specified action on the data associated with each. Those familiar with our post on functional programming can think of these as extensions of the “map” function to operate on trees instead of lists. As we foreshadowed earlier, these represent total orders on the set of nodes of a tree, and in particular they stand out by how they reflect the recursive structure of a tree. The first is called an in-order traversal, and it is perhaps the most natural way to traverse a tree. The idea is to hit the leaves in left-to-right order as per the usual way to draw a tree, ignoring depth. It generalizes easily from a tree with only three nodes: first you visit the left child, then you visit the root, then you visit the right child. Now instead of using the word “child,” we simply say “subtree.” That is, first you recursively process the left subtree, then you process the current node, then you recursively process the right subtree. This translates easily enough into code: def inorder(root, f): ''' traverse the tree "root" in-order calling f on the associated node (i.e. f knows the name of the field to access). ''' if root.leftChild != None: inorder(root.leftChild, f) f(root) if root.rightChild != None: inorder(root.rightChild, f) For instance, suppose we have a tree consisting of integers. Then we can use this function to check if the tree is a binary search tree. That is, we can check to see if the left subtree only contains elements smaller than the root, and if the right subtree only contains elements larger than the root. def isBinarySearchTree(root): numbers = [] f = lambda node: numbers.append(node.value) inorder(root, f) for i in range(1, len(numbers)): if numbers[i-1] > numbers[i]: return False return True As expected, this takes linear time in the number of nodes in the tree. The next two examples are essentially the same as in-order; they are just a permutation of the lines of code of the in-order function given above. 
The first is pre-order, and it simply evaluates the root before either subtree:

def preorder(root, f):
    f(root)

    if root.leftChild != None:
        preorder(root.leftChild, f)

    if root.rightChild != None:
        preorder(root.rightChild, f)

And post-order, which evaluates the root after both subtrees:

def postorder(root, f):
    if root.leftChild != None:
        postorder(root.leftChild, f)

    if root.rightChild != None:
        postorder(root.rightChild, f)

    f(root)

Pre-order does have some nice applications. The first example requires us to have an arithmetical expression represented in a tree:

root = Node()
root.value = '*'

n1 = Node()
n1.value = '1'
n2 = Node()
n2.value = '3'
n3 = Node()
n3.value = '+'
n4 = Node()
n4.value = '3'
n5 = Node()
n5.value = '4'
n6 = Node()
n6.value = '-'

root.leftChild = n3
root.rightChild = n6
n3.leftChild = n1
n3.rightChild = n2
n6.leftChild = n4
n6.rightChild = n5

This is just the expression $(1 + 3) * (3 - 4)$, and the tree structure specifies where the parentheses go. Using pre-order traversal in the exact same way we used in-order, we can convert this representation to another common one: Polish notation.

def polish(exprTree):
    exprString = []
    f = lambda node: exprString.append(node.value)
    preorder(exprTree, f)

    return ''.join(exprString)

One could also use a very similar function to create a copy of a binary tree, as one needs to have the root before one can attach any children, and this rule applies recursively to each subtree. On the other hand, post-order traversal can represent mathematical expressions in post-fix notation (reverse-polish notation), and it can be useful for deleting a tree. This would come up if, say, each node had some specific cleanup actions required before it could be deleted, or alternatively if one is working with dynamic memory allocation (e.g. in C) and must explicitly "free" each node to clear up memory.

So now we've seen a few examples of trees and mentioned how they can be represented in a program. Next time we'll derive and implement a meatier application of trees in the context of machine learning, and in future primers we'll cover minimum spanning trees and graph searching algorithms. Until then!
https://jeremykun.com/2012/09/16/trees-a-primer/
CC-MAIN-2019-09
refinedweb
2,172
62.38
On Tuesday 31 January 2012, Michael S. Tsirkin wrote: > I have an idea: we can make the generic one inline > if we keep it in the .c file. So something like > the below on top of my patch will probably work. > Ack? IMHO this is still worse than the macro, because it breaks common practice. The common way to do this is #ifdef/#else/#endif in the header file to provide either an extern or a macro/inline definition, while having the inline definition in a separate place makes it harder to understand what's going on. E.g. a frequent review comment is to not put extern declarations inside of #ifdef, but if someone tries that here, it would break. You also still need the #ifdef in the implementation file, which we try to avoid normally just like we try to avoid macros where possible. Arnd
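For readers outside the thread, the header-file convention being discussed looks roughly like this generic sketch (the names are illustrative, not from the patch in question):

/* foo.h */
#ifdef CONFIG_FOO_INLINE
/* the configuration provides a fast inline/macro definition */
static inline void foo_flush(void) { /* fast path */ }
#else
/* otherwise fall back to the generic out-of-line version, defined once in foo.c */
extern void foo_flush(void);
#endif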
http://www.linux-mips.org/archives/linux-mips/2012-01/msg00155.html
CC-MAIN-2014-52
refinedweb
147
67.79
Announcing MSTest Framework support for .NET Core RC2 / ASP.NET Core RC2

May 30

Installing the SDK

Install the official Visual Studio MSI installer.

Creating a class library project

Create a .NET Core Class Library application. Open Visual Studio, and choose File | New | Project.

Adding references for MSTest

From nuget.org, install the MSTest.TestFramework package. Now, install the runner – look for the dotnet-test-mstest package, and install it.

[Editor's note: Thank you @IanGriffiths and @BenHysell for reporting an issue with the package we published earlier. We have since uploaded a fixed version, as shown in the updated image above.]

Open the project.json file in the solution. You will already see the packages you just installed mentioned under "dependencies". We need to add the "testRunner" property and set that to "mstest". To make it easier, just replace the content of the project.json file with something along the following lines (exact version strings may vary with your installation):

{
  "version": "1.0.0-*",
  "testRunner": "mstest",
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0-rc2-3002702"
    },
    "MSTest.TestFramework": "1.0.0-preview",
    "dotnet-test-mstest": "1.0.1-preview"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": [
        "dnxcore50",
        "portable-net45+win8"
      ]
    }
  }
}

Notice that the class library project we created is getting marked as an application (netcoreapp1.0). That is because when using the .NET CLI for testing, unit test projects are actually an application, not a class library. That application's Main method is provided by the runner.

Writing the tests

Visual Studio would have automatically created a file named Class1.cs. Open that file and replace its content with the following:

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace SampleNetCoreUnitTests
{
    [TestClass]
    public class TestClass
    {
        [TestMethod]
        public void TestMethodPassing()
        {
            Assert.IsTrue(true);
        }

        [TestMethod]
        public void TestMethodFailing()
        {
            Assert.IsTrue(false);
        }
    }
}

Running the tests from Visual Studio

Open the Test Explorer window (Test | Windows | Test Explorer in Visual Studio). Build the solution, and you should see the tests listed. Click on "Run All" to run the tests.

Running tests from the console

Open a command prompt and navigate to the folder containing the solution. Type dotnet test to run the .NET CLI test runner:

D:\Samples\dotNetCoreTests\src\dotNetCoreTests>dotnet test
Project dotNetCoreTests (.NETFramework,Version=v4.5.1) was previously compiled. Skipping compilation.
Discovering Tests ...
Executing Tests ...
Passed TestMethodPassing
Failed TestMethodFailing
Error Message:
Assert.IsTrue failed.
Stack Trace:
at SampleNetCoreUnitTests.TestClass.TestMethodFailing() in D:\Samples\dotNetCoreTests\src\dotNetCoreTests\Class1.cs:line 17
============ Test Run Summary ============
Total tests: 2. Passed: 1. Failed: 1. Skipped: 0
Test Run Failed.
SUMMARY: Total: 1 targets, Passed: 1, Failed: 0.

The tests are discovered and executed as expected.

Targeting the desktop .NET

In addition to .NET Core, the .NET CLI runner can run tests targeting desktop .NET (minimum version 4.5.1) as well. To target desktop .NET, update the project.json to use this frameworks section instead:

"frameworks": {
  "net451": { }
}

Summary

There, it is as simple as that – MSTest support for .NET Core 1.0 RC2 and ASP.NET Core 1.0 RC2, fully integrated with Visual Studio.

Hi, first of all, this is awesome! Does the "dotnet test" command described in "Running tests from the console" work on other platforms like Mac and Linux? Do you guys plan to (or already) have an integration with this in Visual Studio Code? (Similar to Visual Studio described in "Running the tests from Visual Studio"). Thanks, Greg

Just tried this using Visual Studio Code on OSX and it works.
It’s a little tedious typing all this out without R# or VS, using a combination of Terminal and VS Code. I took the example off the MS site using xUnit and replaced all that with the MSTest implementation successfully.

When I try this it builds, but I don’t see any tests in the test runner. It seems to complain that dotnet.exe is not starting. I checked and it is there.

------ Discover test started ------
Discovering tests in 'D:\MyPath\test\MyProject.test\project.json'
["C:\Program Files\dotnet\dotnet.exe" test "D:\MyPath\test\MyProject.test\project.json" --output "D:\MyPath\test\MyProject.test\bin\Release\netcoreapp1.0" --port 61103 --parentProcessId 13820]
Unhandled Exception: (...)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
dotnet-test Error: 0 : [ReportingChannel]: Error sending (...)
dotnet-test Error: 0 : (...)
   at Microsoft.DotNet.Tools.Test.ReportingChannel.SendError(String error)
   at Microsoft.DotNet.Tools.Test.ReportingChannel.SendError(Exception ex)
   at Microsoft.DotNet.Tools.Test.DesignTimeRunner.HandleDesignTimeMessages(ProjectContext projectContext, DotnetTestParams dotnetTestParams)
   at Microsoft.DotNet.Tools.Test.DesignTimeRunner (...)
========== Discover test finished: 0 found (0:00:00.6145596) ==========

Running from the command line, it looks like it’s not finding dotnet-test-MSTest. I have installed it via NuGet (and uninstalled then reinstalled). My project.json is copied from here & the values look OK.

>dotnet.exe test project.json
Project TestTest (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.
dotnet-test Error: 0 : Microsoft.DotNet.Cli.Utils.CommandUnknownException: No executable found matching command "dotnet-test-MSTest"
   at Microsoft.DotNet.Cli.Utils.ProjectDependenciesCommandFactory.FindProjectDependencyCommands(String commandName, IEnumerable`1 commandArgs, String configuration, NuGetFramework framework, String outputPath, String buildBasePath, String projectDirectory)
   at Microsoft.DotNet.Cli.Utils.ProjectDependenciesCommandFactory.Create(String commandName, IEnumerable`1 args, NuGetFramework framework, String configuration)
   at Microsoft.DotNet.Tools.Test.ConsoleTestRunner (...)

I got this error when I had not installed the second package from NuGet: dotnet-test-mstest. As soon as I installed it, Test Explorer started working.

Is it project.json which is now obsolete?

ReSharper-MSTest integration found the tests, but returned “Inconclusive: Test not run”. I switched off ReSharper unit testing, and built — Test Explorer is still not discovering my tests.

I copied the project.json file exactly. When I navigate to the folder containing my solution file, open a command prompt and type in ‘dotnet test’, it says it can’t find a project.json file.

As a follow-up, when I navigate to the folder containing the *project*, not the solution, and run the command dotnet test, I get the following: No executable found matching command “dotnet-test-MSTest”

This is the same as I see.

Ehh, so I think I found the problem? In the project.config, the value provided for ‘testRunner’ is ‘MSTest’, which apparently gets concatenated into dotnet-test-MSTest. However, if you look in your C:\Users\{username}\.nuget\packages\dotnet-test-mstest\1.0.0-preview\lib\net451 folder, you’ll see the executable is dotnet-test-mstest. Change your project.config to be “testRunner”: “mstest” and it should work, at least for the command line.

Fantastic – now picked up for me in the Visual Studio Test Explorer.

Worked great – thanks for the research and sharing.

Just to avoid confusion: it should be “project.json” instead of “project.config”.

@SpikeMelbost, @Alexander, @kris, @Paul, Yes, the testRunner property in the project.json should be set to “mstest” (note the lower case). I was sure I had done that, but somehow seem to have missed it. Good catch.

@GregKalapos, The answer to your first question is yes. Regarding integration with Visual Studio Code, that is under consideration, but I do not have anything concrete to share on that.

Firstly, thanks for this post. We have a scenario where we have a new .NET Core library that multi-targets net40 and netcoreapp1.0. We need to test this project, and so far using xUnit and MSTest we cannot seem to set it up to do so, since adding a reference to the project throws a warning:

Project ReportingSystem.AwsQueueWriter is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0).
Project ReportingSystem.AwsQueueWriter supports: net40 (.NETFramework,Version=v4.0)
One or more projects are incompatible with .NETCoreApp,Version=v1.0.

Is it possible to build a test project that will allow us to run tests when multi-targeting like this? This is our class library project.json, as an example of how that is set up. This is the library we need to test.

{
  "version": "1.0.0-*",
  "dependencies": {
    "Cloud.Abstractions": "1.0.0-*",
    "ReportingSystem.Interfaces": "1.0.0-*",
    "Microsoft.NETCore.App": "1.0.0-rc2-3002702",
    "Newtonsoft.Json": "7.0.1"
  },
  "frameworks": {
    "net40": { },
    "netcoreapp1.0": {
      "imports": [ "dnxcore50", "portable-net45+win8" ]
    }
  }
}

Does MSTest have parameterized unit tests? That was the reason I stopped using it.

Pratap, Will this test framework work against UWP apps? Thanks! Ken

@AKilgour, Yes. It supports DataTestMethod/DataRow.

@Ken, Yes. This test framework will work against UWP. Here are the steps: (1) Create a UWP Unit Test App using the “Unit Test App (Universal Windows)” template. That is already referencing an earlier version of the MSTest framework. (2) Then, add NuGet references to “MSTest.TestFramework v1.0.0-preview” and to “MSTest.TestAdapter v1.0.0-preview”. (3) Replace “using Microsoft.VisualStudio.TestPlatform.UnitTestFramework;” with “using Microsoft.VisualStudio.TestTools.UnitTesting;”. Now you are all set. I will be blogging about that shortly as well.

Does this not work with the “dotnet-watch test” command? It seems to throw an exception when I try it, saying it can’t find an entry point.

Unhandled Exception: System.MissingMethodException: Entry point not found in assembly 'MudDesigner.Engine.Tests, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.
[DotNetWatcher] fail: dotnet exit code: 134
[DotNetWatcher] info: Waiting for a file to change before restarting dotnet...

If I just run “dotnet test” from the directory, it works without an issue. It seems to only apply to my using dotnet-watch.

I’m seeing two problems with this: 1) following the steps you provide, I get errors installing the 2nd package. 2) the executable looks like it doesn’t have a valid signature – I’m unable to get any of this working for the full .NET Framework unless I register the dotnet-test-mstest.exe file for strong name verification.

More detail on this: If I just follow your instructions, I get a ‘Package restore failed’ message in Solution Explorer when trying to install the 2nd package (dotnet-test-mstest), and the following in the Output window (showing the Package Manager output):

Errors in c:\users\ian\documents\visual studio 2015\Projects\MsTestTest2\src\MsTestTest2\MsTestTest2.xproj
Package dotnet-test-mstest 1.0.0-preview is not compatible with netstandard1.5 (.NETStandard,Version=v1.5). Package dotnet-test-mstest 1.0.0-preview supports:
- net451 (.NETFramework,Version=v4.5.1)
- netcoreapp1.0 (.NETCoreApp,Version=v1.0)
- netstandardapp1.5 (.NETStandardApp,Version=v1.5)
One or more packages are incompatible with .NETStandard,Version=v1.5.

If I replace the project.json with the one you supply (which looks quite different from the one you get when creating a new .NET Core Class Library project in RC2), then it seems to work OK for targeting .NET Core, although I’m not sure what the side effects of such radical changes to the project.json will be. (The class library template has the framework as “netstandard1.5” with “imports” of “dnxcore”, with no mention of this Microsoft.NETCore.App stuff.) But this doesn’t really help me, because I’m trying to use this with an ASP.NET Core 1.0 project that runs against the full .NET Framework, but when it gets to test discovery I see this:

Discovering tests in 'c:\users\ian\documents\visual studio 2015\Projects\MsTestTest2\src\MsTestTest2\project.json'
["C:\Program Files\dotnet\dotnet.exe" test "c:\users\ian\documents\visual studio 2015\Projects\MsTestTest2\src\MsTestTest2\project.json" --output "c:\users\ian\documents\visual studio 2015\Projects\MsTestTest2\src\MsTestTest2\bin\Debug\net451\win7-x64" --port 25241 --parentProcessId 27144 --no-build]
'test-mstest' returned '-532462766'.

And when I run ‘dotnet test’ from the command line I get this output:

Project MsTestTest2 (.NETFramework,Version=v4.5.1) will be compiled because Input items removed from last build
Compiling MsTestTest2 for .NETFramework,Version=v4.5.1
Compilation succeeded.
0 Warning(s)
0 Error(s)
Time elapsed 00:00:02.2736481
Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly 'dotnet-test-mstest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A) ---> System.Security.SecurityException: Strong name validation failed. (Exception from HRESULT: 0x8013141A)
--- End of inner exception stack trace ---
SUMMARY: Total: 1 targets, Passed: 0, Failed: 1.

It occurred to me that I might possibly have somehow got a corrupted copy in my NuGet cache, so I tried this from a completely different machine. Same outcome. I also ran Peverify.exe on the offending executable to see if there was evidence of corruption, but it reported that it was fine. Only by running the ‘strong name’ utility (and specifically, the 64-bit version, because 32-bit and 64-bit verification skipping entries are stored separately) am I able to get ‘dotnet test’ to run without error. (And then, having done that, test discovery starts working inside VS as well.) So either this preview was accidentally shipped without a valid signature on the test executable (which doesn’t matter for .NET Core but does matter for the full .NET runtime), or that was deliberate but you’re missing a step in your instructions.

Get the same problems as you. The tests are not found by the test runner. How do I register the dotnet-test-mstest.exe file for strong name verification?
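For what it’s worth, the verification-skipping registration Ian describes is done with the Strong Name tool (sn.exe) from a Developer Command Prompt; on a 64-bit OS be sure to use the 64-bit sn.exe, since 32-bit and 64-bit skip entries are stored separately. A sketch of the command, with the package path as it typically appears in the NuGet cache (adjust to your machine):

sn.exe -Vr "C:\Users\{username}\.nuget\packages\dotnet-test-mstest\1.0.0-preview\lib\net451\dotnet-test-mstest.exe"

Here -Vr registers the given assembly for strong-name verification skipping; sn.exe -Vu on the same path undoes it later.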
I’m the same again – I can’t seem to get any unit testing working with .NET Core at all. I’ve burnt days on this so far.

Hi, Can I use dotnet test to generate a results file the same way as (dotnet test . -xml test-results.xml) for xUnit? If so, what are the arguments and options to be passed? In the publish test results task in hosted VSO there is the test result format vstest. Can this also be used to output the test results for a build task?

@Pratap, “Yes. It supports DataTestMethod/DataRow.” I can’t get multiple data rows to work. Can you post a sample on how to set up multiple data rows? I thought multiple data rows would have worked like this:

@Tadar: Here is how you would use this:

[DataTestMethod]
[DataRow(1, 1, 2)]
[DataRow(1, 3, 4)]
[DataRow(2, 4, 6)]
[DataRow(6, 1, 7)]
public void AddTest(int a, int b, int result)
{
    Assert.AreEqual(result, Calculator.Add(a, b));
}

Abhitej, that is really helpful. Thank you. How would I report on which data row passed?

This doesn’t seem to work that great in TFS, unless I’m doing something wrong =(. I get this message:

Starting test execution, please wait...
Warning: No test is available in the provided sources. Make sure that installed test discoverers & executors, platform & framework version settings are appropriate and try again.
Information: Additionally, you can try specifying '/UseVsixExtensions' command if the test discoverer & executor is installed on the machine as vsix extensions and your installation supports vsix extensions. Example: vstest.console.exe myTests.dll /UseVsixExtensions:true
No results found to publish.

Great to see this functionality. How can I generate a TRX test result run report? Usually we use the command line and the /logger:trx parameter:

vstest.console.exe /logger:trx .\TestProject\bin\debug\YourUnitTestAssembly.dll

Can you share a real-life example of how to test an ASP.NET controller with dependency injection? I am struggling to find the mock framework to create the injected object. Thanks, Walter

Hooray! Would have liked to have seen this earlier, but glad it’s here now! Things are starting to shape up. W

Not sure what I’m doing wrong, but I cannot get the test runner to find any of my tests if I target the net46 or net451 framework. Given a project.json of:

{
  "version": "1.0.0-*",
  "testRunner": "mstest",
  "dependencies": {
    "dotnet-test-mstest": "1.0.0-preview",
    "MSTest.TestFramework": "1.0.0-preview"
  },
  "frameworks": {
    "net451": { }
  }
}

I get an output of:

------ Discover test started ------
Discovering tests in 'C:\Users\benh\documents\visual studio 2015\Projects\ClassLibrary1\src\ClassLibrary1\project.json'
["C:\Program Files\dotnet\dotnet.exe" test "C:\Users\benh\documents\visual studio 2015\Projects\ClassLibrary1\src\ClassLibrary1\project.json" --output "C:\Users\benh\documents\visual studio 2015\Projects\ClassLibrary1\src\ClassLibrary1\bin\Debug\net451\win7-x64" --port 58466 --parentProcessId 1472 --no-build]
'test-mstest' returned '-532462766'.
========== Discover test finished: 0 found (0:00:01.4959364) ==========

Not sure what I’m going wrong here…

Ben, You should see an update on this issue soon. We are actively investigating this.

Is there an issue I can track on GitHub?

This has been fixed with the latest dotnet-test-mstest package: “1.0.1-preview”.

I just tried to update my project again… same result, same error: ‘test-mstest’ returned ‘-532462766’.

I know this is old, but I’m still seeing this problem.

Am I correct that this version of the test runner does not support existing test assemblies that reference Microsoft.VisualStudio.QualityTools.UnitTestFramework? Test projects must reference the new test framework package? I am not seeing any tests for such projects.

Can I use such a connection to TFS with a lot of parameters/iterations? [DataSource(“Microsoft.VisualStudio.TestTools.DataSource.TestCase”, “;def”, “123123”, DataAccessMethod.Sequential)] (After running the example from the first post with such a connection, the test method executed without connecting to TFS.) Thanks, Nick

@SteveGordon, Sure, targeting multiple frameworks is supported – for example, netcoreapp1.0 and net451. “dotnet test” will identify all of the targets of the test project and, by default, run the tests for each target. This can be controlled by specifying the --runtime and --framework options when invoking.

@JohnathanSullinger, Will get back regarding use with “dotnet watch”.

@IanGriffiths, Thank you for reporting this issue. We have since fixed it, published a revised package to NuGet, and I have updated this blog post inline as well.

@MarkPurdon, If you are on Windows, then you can use “vstest.console.exe project.json /UseVsixExtensions:true /logger:trx” to generate a results file.

@Tadar, Thank you for the feedback. We will consider this enhancement going forward.

@GordonBeeming, Please use the following: “vstest.console.exe project.json /UseVsixExtensions”.

@Siktel, This is similar to the question from @MarkPurdon. Please see the response to that, above.

@BenHysell, This is the same issue @IanGriffiths is facing. Please see the response to that, above.

@mthamil, Correct.

@Nick, Yes.

@Pratap Thanks for the reply that you are going to consider an enhancement around the logging of test results in the console. I came up with a workaround and then discovered what appears to be a bug. For my workaround, I appended a message to the Assert.AreEqual() code statement. At least for errors I get the message, and honestly, for passing tests I’d like to see the message printed as well. Could you please consider or provide a way to write out messages for passing tests as well? In the process of looking at the results in the test result log, I think I’ve found an error: for each failing test a red F is printed, but a red F is also printed a second time, in front of a passing test. This appears to be a bug, as it’s super unclear why the passed test has a red failure marker in front of it. Can you confirm that this is a bug, or is this how the test results should be viewed in the command line? Here is my test:

[DataTestMethod]
[DataRow(1, 1, 2)]
[DataRow(1, 3, 4)]
[DataRow(2, 4, 6)]
[DataRow(6, 1, 7)]
[DataRow(1, 1, 2)]
[DataRow(1, 3, 4)]
[DataRow(2, 4, 6)]
[DataRow(6, 1, 7)]
[DataRow(1, 1, 2)]
[DataRow(1, 3, 5)]
[DataRow(2, 4, 6)]
[DataRow(6, 1, 7)]
[DataRow(1, 1, 8)]
[DataRow(1, 3, 4)]
[DataRow(2, 4, 6)]
[DataRow(6, 1, 9)]
[DataRow(1, 1, 2)]
public void AddTest(int a, int b, int result)
{
    Assert.AreEqual(result, Calculator.Add(a, b), " DataRowAttributes_" + a + "_" + b + "_" + result);
}

@Pratap Thanks for your answer, and could you please help me to fix this code? Because it is not connecting to TFS and is not executing iterations from the Test Case during a run in the console (dotnet test).

=======
Code:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace ConsoleApplication
{
    [TestClass]
    public class TestClass
    {
        [TestMethod]
        [DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase", ";test", "123", DataAccessMethod.Sequential)]
        public void TestMethodFailing()
        {
            Assert.IsTrue(true, "ok");
        }
    }
}

=======
project.json:

{
  "version": "1.0.0-*",
  "testRunner": "mstest",
  "dependencies": {
    "dotnet-test-mstest": "1.0.1-preview",
    "MSTest.TestFramework": "1.0.0-preview",
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0-rc2-3002702"
    }
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": [ "dnxcore50", "portable-net45+win8" ]
    }
  }
}

=======
OUTPUT with fake TFS link is:

λ dotnet test
Project pro2 (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.
Discovering Tests ...
Executing Tests ...
Passed   TestMethodFailing
============ Test Run Summary ============
Total tests: 1. Passed: 1. Failed: 0. Skipped: 0
Test Run Successful.
SUMMARY: Total: 1 targets, Passed: 1, Failed: 0.

@Nick, Apologies; I spoke too soon. TestCase as a DataSource is not supported in this release.

Hi Pratap, It looks like TestCase as a DataSource is not supported even in MSTest V2. Could you please clarify when this feature will work? Thanks, Nick

Am I correct that MSTest targeting .NET Core is also *not* open source?

@Tadar, Can you tell me what you did to see the red F and the green dots? Regarding writing out the message only for failing tests – that is by design. Please see here:.

@PaulRohorzka, Yes, it is not. However, I noticed an item on uservoice already for this here:. Please consider voting.

@Pratap I recorded a video showing you the red dots and green dots in the wrong(?) place. I am running Windows 7 Enterprise. Here is a link to the video recording:

@Tadar, I am unable to repro this. Can you tell me more about your setup? Can you share your project with me (pratapL _at_ microsoft)?

yeah baby yeah

Our team successfully switched a small .NET 452 project from using the version of MSTest included with VS2015 to this new NuGet-based version of MSTest. This is appreciated, because we want to be able to run MSTest on a build server (TeamCity) without having Visual Studio installed, and MSTest has been the only remaining hold-up. Once TeamCity natively supports this new version of MSTest, things will be great. I’m very happy to see MSTest finally moving in this direction, and I hope it points towards a future where improvements/fixes to MSTest will be shipped independently of Visual Studio.

I’m having a weird problem with this. It works perfectly when testing pure .NET Core assemblies. But if my .NET Core project has a reference to a full .NET Framework project, the test runner does not find the tests. Any suggestions on what might be the issue?

@FelipeAndrade, I am unable to repro this. Can you share your project with me (pratapL _at_ microsoft)?

I can reproduce the problem with xUnit. If you have a project.json with both frameworks “net452” and “netcoreapp1.0”, then it fails with the error:

Discovering tests in 'C:\Users\fabricio\Desktop\data-stresstest\Test-DataDriver\project.json'
["C:\Program Files\dotnet\dotnet.exe" test "C:\Users\fabricio\Desktop\data-stresstest\Test-DataDriver\project.json" --output "C:\Users\fabricio\Desktop\data-stresstest\Test-DataDriver\bin\Debug\net452\win7-x64" --port 59805 --parentProcessId 3676

It looks like the TCP port used for IDE communication is being used twice – once for each framework.

Could you point me in the direction of a mocking framework that I can use with MSTest & Core?

NSubstitute

I am sure somebody already asked this question… I assume that this is the first blog post to set up infrastructure for unit testing.
And I am looking forward to more real-life scenarios of testing controllers and services. Specifically, I would love to see examples of dependency injection (such as an in-memory database) and mocking a logged-in user.

In case anyone needs to do something similar: I have managed to set this up so I can test my controllers in an ASP.NET Core project that targets the .NET Framework 4.6.1 and also references class libraries that are standard .NET, not .NET Core (we will have this hybrid situation for some time!). I am using VS2015 Update 3 and .NET Core 1.0. My project.json is:

{
  "version": "1.0.0-*",
  "testRunner": "mstest",
  "dependencies": {
    "dotnet-test-mstest": "1.0.1-preview",
    "MSTest.TestFramework": "1.0.0-preview",
    "MyASP.NetCoreProject": "1.0.0-*"
  },
  "frameworks": {
    "net461": {
      "dependencies": {
        "MyClassLibrary": {
          "target": "project"
        }
      }
    }
  }
}

This is great. Is there a dotnet-test-mstest update available that will work with .NETStandard version 1.6? I can’t install the package because of the dependency on .NETStandardApp version 1.5.

I am experiencing an exception outside of my code with AssemblyCleanup when targeting framework “net451”. I have detailed my setup in the MSDN forums. Please have a look when you get a chance.

This was resolved by VS Update KB3165756. However, I am still encountering the error posted below regarding DateTime.Parse while running unit tests.

I have also since run into an issue when calling var date = DateTime.Parse("2016-01-01"); in a test. The stack trace I encounter is displayed below.

System.TypeInitializationException was unhandled by user code
HResult=-2146233036
Message=The type initializer for 'System.Runtime.Versioning.BinaryCompatibility' threw an exception.
Source=System.Private.CoreLib
TypeName=System.Runtime.Versioning.BinaryCompatibility
StackTrace:
at System.Globalization.DateTimeFormatInfo.InsertHash(TokenHashValue[] hashTable, String str, TokenType tokenType, Int32 tokenValue)
at System.Globalization.DateTimeFormatInfo.AddMonthNames(TokenHashValue[] temp, String monthPostfix)
at System.Globalization.DateTimeFormatInfo.CreateTokenHashTable()
at System.Globalization.DateTimeFormatInfo.Tokenize(TokenType TokenMask, TokenType& tokenType, Int32& tokenValue, __DTString& str)
at System.__DTString.GetSeparatorToken(DateTimeFormatInfo dtfi, Int32& indexBeforeSeparator, Char& charBeforeSeparator)
at System.DateTimeParse.Lex(DS dps, __DTString& str, DateTimeToken& dtok, DateTimeRawInfo& raw, DateTimeResult& result, DateTimeFormatInfo& dtfi, DateTimeStyles styles)
at System.DateTimeParse.TryParse(String s, DateTimeFormatInfo dtfi, DateTimeStyles styles, DateTimeResult& result)
at System.DateTimeParse.Parse(String s, DateTimeFormatInfo dtfi, DateTimeStyles styles)
at ClassLibrary1.Class1.Test()
InnerException:
FileName=System.Runtime.InteropServices.PInvoke, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
HResult=-2147024894
Message=Could not load file or assembly 'System.Runtime.InteropServices.PInvoke, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. The system cannot find the file specified.
Source=System.Private.CoreLib
StackTrace:
at System.Reflection.RuntimeModule (..., RuntimeModule decoratedModule, MetadataToken decoratedToken, RuntimeType attributeFilterType, Boolean mustBeInheritable, Object[] attributes, IList derivedAttributes, RuntimeType& attributeType, IRuntimeMethodInfo& ctor, Boolean& ctorHasParameters, Boolean& isVarArg)
at System.Attribute.GetCustomAttributes(Assembly element, Type attributeType, Boolean inherit)
at System.AppDomain.GetTargetFrameworkName()
at System.Runtime.Versioning.BinaryCompatibility.ReadTargetFrameworkId()
at System.Runtime.Versioning.BinaryCompatibility.get_AppWasBuiltForFramework()
at System.Runtime.Versioning.BinaryCompatibility.BinaryCompatibilityMap..ctor()
at System.Runtime.Versioning.BinaryCompatibility..cctor()
InnerException:

I am using the code samples in this blog post, and I am discovering that the test suite returns an exit code of zero even when there are failing tests. Is this a bug? I’ve opened a Stack Overflow question to discuss the problem:

There is no documentation whatsoever for MSTest on .NET Core. I found out about the --test and --list command line options, and you don’t mention them anywhere. There is no GitHub project mentioned; we are lost, and there seems to be no support. This has to improve. I am trying to use MSTest instead of xUnit, and I think I am going to give it up. Again.

This all works for me only if I have something like "runtimes": { "win8-x64": {} } in project.json. Otherwise I get the message that no runtime is specified. Why? Should it pick the runtime by default?

@johnnymo87, this is now fixed, and updated on NuGet here:

@Giovanni Bassi, we will have more to share soon. Kindly stay tuned.

@Mikhail Orlov, I wonder if you are on an older/intermediate dotnet CLI installer. Kindly check.

When I try to install the dotnet-test-mstest package I get the error “Package dotnet-test-mstest 1.1.1-preview is not compatible with netstandard1.6 (.NETStandard,Version=v1.6)”. I am using Visual Studio 2015 Update 3, and I have created the library using the default library template, which by default uses "NETStandard.Library": "1.6.0". I hope we are not going back to the DLL hell problem 🙂 with .NET Core.

hi, I want to add a unit test for a project “MyProject”. In project.json I added "dependencies": { "MyProject" : "1.0.0" }. In global.json I added { "projects": [ "src", "test" ] }. “MyProject” is under “src” and the test is under “test”, but when running “dotnet test” it shows error CS0246: The type or namespace name ‘MyProject’ could not be found (are you missing a using directive or an assembly reference?). What am I missing?

Thank you for this blog post. I am in the process of converting a .NET Framework library to .NET Core. My unit test project has an ordered test file (I need to be able to control the order in which certain things happen). After following this blog I am able to get the individual tests discovered and running, but the ordered test file is not opening. Visual Studio 2015 Community shows an error stating it can’t open a file containing tests in a non-test project.

Any plans to provide the ordered test feature in Microsoft.VisualStudio.TestTools?

@mvadu, ordered tests are not yet supported for MSTest V2 based tests. I have added this as a suggestion on uservoice here:. Please consider voting for it and spreading the word for others to vote for, and comment upon. Thank you.

Hi, I get the following error with “MSTest.TestFramework”: “1.0.5-preview” and “dotnet-test-mstest”: “1.1.1-preview”:

Exception thrown while executing test. If using an extension of TestMethodAttribute then please contact the vendor. Error message: Could not load type 'LogMessageHandler' from assembly 'Microsoft.VisualStudio.TestPlatform.TestFramework, Version=14.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'

Second question: the default test assembly pattern (build step on the build server) is **\$(BuildConfiguration)\*test*.dll;-:**\obj\**. What’s the recommended way to run tests for an ASP.NET Core Web Application (.NET Framework) on a build server? At the moment, the test assemblies are located in the bin\Release\net452 folder (not a subfolder of obj). Thanks, Michael

My second question was answered here:

@Michael, If you are using “dotnet-test-mstest 1.1.1-preview”, then please use “MSTest.TestFramework 1.0.4-preview”. We have not yet updated dotnet-test-mstest to work with the latest framework.

Tried to install for .NET Standard 1.6, but I get a “not supported” error. When will 1.6 be supported?

@hawi: Unit testing for .NET Core requires the unit test project to be an app, which is why it currently targets netcoreapp1.0. And from this link, netcoreapp1.0 has the same API set as netstandard1.6. So it should be possible to write unit tests for a project targeting netstandard1.6. Let us know if you hit any issues with that.
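Pulling the multi-targeting answers in this thread together: a project.json along the following lines should let dotnet test build and run the same MSTest suite once per target. The package versions are the preview versions discussed above, so treat them as placeholders for whatever is current:

{
  "version": "1.0.0-*",
  "testRunner": "mstest",
  "dependencies": {
    "dotnet-test-mstest": "1.0.1-preview",
    "MSTest.TestFramework": "1.0.0-preview"
  },
  "frameworks": {
    "net451": { },
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.0-rc2-3002702"
        }
      },
      "imports": [ "dnxcore50", "portable-net45+win8" ]
    }
  }
}

By default dotnet test then runs both targets; per the replies above, dotnet test --framework net451 (or --framework netcoreapp1.0) restricts the run to a single one.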
Derby and JDBC version How-To

"Help! Which Derby version am I using?"

Possible scenario: You found out that you have a Derby installation on your hard drive, but you don't know which version it is.

Possible solution: Run sysinfo.

Example: If you have derbyrun.jar available (it should be in the lib directory of Derby versions 10.2.1.6 and later), you can do:

java -jar /home/user/derby/lib/derbyrun.jar sysinfo

In the above example, the Derby installation is located in the directory /home/user/derby/ on a Unix/Linux system, and the PATH points to a valid Java installation. Look for something like this in your output, which will tell you the version of your Derby jar files:

--------- Derby Information --------
JRE - JDBC: J2SE 5.0 - JDBC 3.0
[/home/user/derby/lib/derby.jar] 10.3.1.4 - (561794)
[/home/user/derby/lib/derbytools.jar] 10.3.1.4 - (561794)
[/home/user/derby/lib/derbynet.jar] 10.3.1.4 - (561794)
[/home/user/derby/lib/derbyclient.jar] 10.3.1.4 - (561794)
------------------------------------------------------

Here, the version of all the Derby jar files is 10.3.1.4 (SVN revision 561794). Remember that derby.jar contains the embedded JDBC driver, while derbyclient.jar contains the client JDBC driver. If derbyrun.jar is not available, you probably have an older version of Derby. Then you must access sysinfo directly, by doing for example:

cd /home/user/derby/lib/
java -cp derby.jar:derbytools.jar:derbyclient.jar:derbynet.jar org.apache.derby.tools.sysinfo

Refer to the documentation on Derby tools for more information about sysinfo.

Derby driver and server version (CLASSPATH)

Possible scenario: You don't know which version of Derby your application is using (that is, which version of Derby is in your CLASSPATH). For example, you have downloaded several products which bundle different versions of Derby and/or Java DB, and you are not quite sure which version your IDE is using.

Possible solution: Use the JDBC API (DatabaseMetaData).

Example: The following Java class demonstrates how to retrieve software version information using the database metadata.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

/**
 * Checks the version of the Derby software running a database.
 */
public class DerbyVersionChecker {

    public static void main(String[] args) {
        try {
            // load the embedded driver
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
            // or the client driver...
            //Class.forName("org.apache.derby.jdbc.ClientDriver").newInstance();

            // replace the following URL with your own, or use an existing connection:
            Connection conn = DriverManager.getConnection("jdbc:derby:testDB;create=true");

            // this will print the name and version of the software used for running this Derby system
            DatabaseMetaData dbmd = conn.getMetaData();
            String productName = dbmd.getDatabaseProductName();
            String productVersion = dbmd.getDatabaseProductVersion();
            System.out.println("Using " + productName + " " + productVersion);
        } catch (Exception ex) {
            ex.printStackTrace();
            System.exit(1);
        }
    }
}

For example, if you are using Derby 10.3.1.4, and derby.jar is in your CLASSPATH, the above class will print:

Using Apache Derby 10.3.1.4 - (561794)

Note: If you are using the client driver, the DatabaseMetaData#getDatabaseProductVersion() method returns the version of the software running the database, that is, the version of the Derby Network Server (actually, the server is using the embedded driver to connect to the database). If you want to get the version of your JDBC driver (the Derby client driver) instead, use the DatabaseMetaData#getDriverVersion() method, as shown below. In an embedded scenario the embedded driver itself runs the database, so the two methods will return the same result.

String driverVersion = dbmd.getDriverVersion();

JDBC specification support

Possible scenario: You want to use some special feature that is not available in all versions of JDBC. You are not sure which JDBC version your Java VM supports, but want to find out.

Possible solution: Try to access classes/interfaces introduced in or removed from certain JDBC-related specifications.

Example: The following Java code demonstrates this in a way that is similar to how Derby's test harness determines the level of JDBC support in the current JVM:

private void printJDBCSupportInVM() {

    /* Check the availability of classes or interfaces introduced in or
     * removed from specific versions of JDBC-related specifications. This
     * will give us an indication of which JDBC version this Java VM is
     * supporting.
     */
    if (haveClass("java.sql.SQLXML")) {
        System.out.println("JDBC 4");
    } else if (haveClass("java.sql.Savepoint")) {
        // indication of JDBC 3 or JSR-169.
        // JSR-169 is a subset of JDBC 3 which does not include the java.sql.Driver interface
        if (haveClass("java.sql.Driver")) {
            System.out.println("JDBC 3");
        } else {
            System.out.println("JSR-169");
        }
    } else if (haveClass("java.sql.Blob")) {
        // new in JDBC 2.0.
        // We already checked for JDBC 3.0, 4.0 and JSR-169, all of which also
        // include this class. Chances are good this is JDBC 2.x
        System.out.println("JDBC 2");
    } else if (haveClass("java.sql.Connection")) {
        // included in most (all?) JDBC specs
        System.out.println("Older than JDBC 2.0");
    } else {
        // JDBC support is missing (or is older than JDBC 1.0?)
        System.out.println("No valid JDBC support found");
    }
}

/**
 * Checks whether or not we can load a specific class.
 * @param className Name of class to attempt to load.
 * @return true if class can be loaded, false otherwise.
 */
private static boolean haveClass(String className) {
    try {
        Class.forName(className);
        return true;
    } catch (Exception e) {
        return false;
    }
}

For example, if you are running a Java SE 6 VM you will have JDBC 4 support, so the printJDBCSupportInVM() method prints:

JDBC 4

Note that Derby JDBC drivers of version 10.2.2.0 and later include support for JDBC 4.
Threads Versus The Singleton Pattern

The Simple Singleton Factory

Consider that you decide you do not want to instantiate multiple Helper objects—you only want one Helper object to exist. One reason for this decision is that you might have several methods in several classes that all need access to the same "state" information. That is, things are spread out farther than the simple (and admittedly contrived) do-while loop shown above. In that case, a common solution is to arrange for a factory class to provide, via a static getter method, just one instance of the Helper. Consider this simple HelperFactory class, whose job is to ensure that, when asked, it only hands out the same singular instance of a Helper class:

public class HelperFactory {
    private static Helper instance = new Helper();

    public static Helper getHelper() {
        return instance;
    }
}

The usage code above changes from Helper helper = new Helper(); to Helper helper = HelperFactory.getHelper();. Now, for that demonstrational code block, little is gained. But in a real application where, as I mentioned, your working code is spread far and wide, yet tied to the same progression of state, each interested piece of code can directly access this common, application-static Helper object via this factory getter method. (The alternative is fairly cumbersome: pass the same Helper object around in all your method calls. Enjoy the additional maintenance headaches for doing so while you're at it!) Because my assumption for this article is that you are familiar with this pattern, I won't dwell on it further.

Threaded Headaches

Now move the state-driven usage code into a Thread subclass, so that several threads can work against the factory at once:

public class Runner extends Thread {
    public Runner(String name) {
        super(name);
    }

    public void run() {
        Helper helper = HelperFactory.getHelper();
        do {
            try {
                Thread.sleep((long)(250*Math.random()));
            } catch (InterruptedException e) {
                break;
            }
            switch (helper.getState()) {
                case Helper.BEGINNING:
                    System.out.println(getName()+": "+helper);
                    System.out.println(getName()+": Beginning operation finished.");
                    helper.setState(Helper.MIDDLE);
                    break;
                case Helper.MIDDLE:
                    System.out.println(getName()+": "+helper);
                    System.out.println(getName()+": Middle operation finished.");
                    helper.setState(Helper.END);
                    break;
                case Helper.END:
                    System.out.println(getName()+": "+helper);
                    System.out.println(getName()+": End operation finished.");
                    helper.setState(Helper.DONE);
                    break;
            }
        } while (helper.getState() != Helper.DONE);
        System.out.println("Code section finished.");
    }
}

...and then modify the main method to create a couple of Threads of this class and start them:

public class Demo2 {
    public static void main(String[] args) {
        Thread t1 = new Runner("threadA");
        t1.start();
        Thread t2 = new Runner("threadB");
        t2.start();
    }
}

Run this several times and compare the output for each run. Here's a sample from when I did:

threadB: Helper has state 0
threadB: Beginning operation finished.
threadA: Helper has state 1
threadA: Middle operation finished.
threadB: Helper has state 2
threadB: End operation finished.
Code section finished.
Code section finished.
----------------
threadB: Helper has state 0
threadB: Beginning operation finished.
threadA: Helper has state 1
threadA: Middle operation finished.
threadA: Helper has state 2
threadA: End operation finished.
Code section finished.
Code section finished.
----------------
threadB: Helper has state 0
threadB: Beginning operation finished.
threadB: Helper has state 1
threadB: Middle operation finished.
threadA: Helper has state 2
threadA: End operation finished.
Code section finished.
Code section finished.

All three output sections are different, and given the random variation of the sleep timer, this is somewhat expected. More importantly, all three are wrong—in none of them does any one thread print messages for all three helper states. (Although for different purposes you might desire the effect shown, it is wrong for the purposes of this article.) This is, at last, a visual illustration of how the two threads compete with each other for the state of the same singleton Helper object. The failure point, and the solution, lie in the factory class itself.
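The fix itself is developed on the next page, which is not reproduced here, but the general shape of one solution can be sketched. If each thread is supposed to drive its own BEGINNING-MIDDLE-END progression, the factory can hand out one Helper per thread instead of one per application, for example via ThreadLocal (my illustration of the idea, not necessarily the exact code the article arrives at):

public class HelperFactory {
    // One Helper per thread: each thread now owns its own state machine
    // instead of racing the other threads for a shared one.
    private static final ThreadLocal<Helper> instance =
        new ThreadLocal<Helper>() {
            protected Helper initialValue() {
                return new Helper();
            }
        };

    public static Helper getHelper() {
        return instance.get();
    }
}

With this change, each Runner thread prints all three state messages. If the design truly requires a single shared Helper, then the check-then-act sequence in run() would instead have to be made atomic, for example by synchronizing on the Helper while reading and updating its state.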
Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 0.90.4
- Component/s: regionserver
- Labels: None
- Environment: all
- Hadoop Flags: Reviewed

Description

This follows the discussion around HBASE-3855, and the random errors (20% failure on trunk) of the unit test org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting.

I saw some points related to numIterReseek, used in MemStoreScanner#getNext (line 690):

679   protected KeyValue getNext(Iterator it) {
680     KeyValue ret = null;
681     long readPoint = ReadWriteConsistencyControl.getThreadReadPoint();
682     //DebugPrint.println( " MS@" + hashCode() + ": threadpoint = " + readPoint);
683
684     while (ret == null && it.hasNext()) {
685       KeyValue v = it.next();
686       if (v.getMemstoreTS() <= readPoint) {
687         // keep it.
688         ret = v;
689       }
690       numIterReseek--;
691       if (numIterReseek == 0) {
692         break;
693       }
694     }
695     return ret;
696   }

This function is called by seek, reseek, and next. The numIterReseek counter is only useful for reseek. There are some issues; I am not totally sure they are the root cause of the test case error, but they could partly explain the randomness of the error, and one point is for sure a bug.

1) In getNext, numIterReseek is decreased, then compared to zero. The seek function sets numIterReseek to zero before calling getNext. It means that the value will actually be negative, hence the test will always fail, and the loop will continue. It is the expected behaviour, but it's quite smart.

2) In "reseek", numIterReseek is not reset between the loops on the two iterators. If numIterReseek equals zero after the loop on the first one, the loop on the second one will never call seek, as numIterReseek will be negative.

3) Still in "reseek", the test to call "seek" is (kvsetNextRow == null && numIterReseek == 0). In other words, if kvsetNextRow is not null when numIterReseek equals zero, numIterReseek will start to be negative at the next iteration and seek will never be called.

4) You can have side effects if reseek ends with numIterReseek > 0: the following calls to the "next" function will decrease numIterReseek to zero, and getNext will break instead of continuing the loop. As a result, later calls to next() may return null or not, depending on how the default value for numIterReseek is configured.

To check whether the issue comes from point 4, you can set numIterReseek to zero before returning in reseek:

  numIterReseek = 0;
  return (kvsetNextRow != null || snapshotNextRow != null);
}

On my env, on trunk, it seems to work, but as it's random I am not really sure. I also had to modify the test (I added a loop) to make it fail more often; the original test was working quite well here. It has to be confirmed that this totally fixes org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting (it could be partial or unrelated) before implementing a complete solution.

Issue Links
- relates to HBASE-6385 [0.90 branch] Backport HBASE-4195 to 0.90 - Resolved

Activity

Committed to TRUNK and fixed the Ted and Andrew comments on commit. Resolving against 0.92. I think we might want to commit this to 0.90 to get over HBASE-3855. Waiting on Andrew feedback.

Nice one N. Thanks.
Thank you. What is the next step? Should I write a new patch taking into account all the different comments?

Yes, it's because I removed a field in MemStore (the reseek counter). It seems that the fix is to change 12 to 11 in MemStore.java. The unit test is happy after this change; do you confirm it's the right solution?

I am going to check.

I think the following test failure (w.r.t. MemStore.FIXED_OVERHEAD) is related to this JIRA:

testSizes(org.apache.hadoop.hbase.io.TestHeapSize) Time elapsed: 0.056 sec <<< FAILURE!
junit.framework.AssertionFailedError: expected:<104> but was:<112>
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:283)
at junit.framework.Assert.assertEquals(Assert.java:64)
at junit.framework.Assert.assertEquals(Assert.java:130)
at junit.framework.Assert.assertEquals(Assert.java:136)
at org.apache.hadoop.hbase.io.TestHeapSize.testSizes(TestHeapSize.java:272)

Minor comment:

+ // kvset and snapshot will never be null.
+ // if tailSet can't find anything, SS is empty (not null).

I think SS above should be replaced with SortedSet.

Looks good, especially the new comments, +1. There is a minor typo in the comments that can be cleaned up on commit:

+ 1) t's not possible to use the 'kvTail' and 'snapshot'

should be

+ 1) It's not possible to use the 'kvTail' and 'snapshot'

+1 on latest patch. A few empty lines can be removed at time of commit. Ran TestHRegion#testWritesWhileGetting 101 times, all of which passed based on the latest patch.

Patch for 4195 and 4188, taking into account all the points mentioned above. The patch for the test case itself is recommended as well, but not mandatory. I have also done some very minor refactoring on some functions that I was touching for the patch:
- @Override added
- getLower renamed to getLowest, as in MemStore
- test and temp var removed in next()

Ok, will do (likely this weekend). Do you mind if I do a single patch for this JIRA and for HBASE-4188? They both touch MemStore seek and reseek.

N, would you mind making a patch that puts common code into a single method and that heavily docs what you've found; i.e. repeat in code your expectations above, so if we want to change this code subsequently, or the scope of readpoint changes, the editor will get the benefit of your rumination? Thanks boss.

@stack: yes, I am ok with all your points. Thanks! Some details below:

"Are seek and reseek the same now? Or it seems like they have a bunch of common code... can we factor it out to a common method if so?" The initialization of kvTail & snapshotTail differs, then it's the same code. There are only 6 lines of code, but I agree, it would be cleaner if shared in a private method (this would simplify as well the improvement on peek). That's what I expect.
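To make that refactoring concrete, here is a sketch of what the shared private method being discussed could look like; this is my illustration built from the reseek code shown later in this thread, not the code as actually committed:

private boolean seekInSubLists() {
    // The 6 shared lines: rebuild the iterators and prime the "next row"
    // pointers from whatever kvTail/snapshotTail currently hold.
    kvsetIt = kvTail.iterator();
    snapshotIt = snapshotTail.iterator();
    kvsetNextRow = getNext(kvsetIt);
    snapshotNextRow = getNext(snapshotIt);
    // has data := (lowest != null)
    return getLowest() != null;
}

seek would initialize kvTail and snapshotTail from the full kvset and snapshot before calling it; reseek would instead narrow the existing tails with tailSet(key, true), as in the implementation quoted further down.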
Note that between the 3 implementations:
- the initial one: it was impossible, because we were just using the iterator without going back to the list.
- the one currently in the trunk: possible, because we're restarting from the very beginning of the list.
- the proposed one, in the middle: we're not restarting from the beginning but from an intermediate point of the list.

So we're not in the same situation as we were 2 years ago, but I expect (without having done a full analysis) that the readpoint will hide this. The best of the best, in terms of performance and similarity to the initial implementation, would be to get the sub-skiplist implicitly pointed to by the iterator, but there is nothing in the Java API to do that today: it would require implementing a specific skip list.

N: We should remove:

this.reseekNumKeys = conf.getInt(RESEEKMAX_KEY, RESEEKMAX_DEFAULT);

and reseekNumKeys and the defines, else someone will come along later and wonder what these were about. Are seek and reseek the same now? Or it seems like they have a bunch of common code... can we factor it out to a common method if so? To add to your synthesis:
+ We're removing the numIterReseek facility because we get a new tailSet every time we reseek.
+ It looks like there is a nice perf benefit if we go w/ this patch.

If you agree on the above changes and synthesis, no need of a new patch. I'll do the clean up on commit. Thanks boss.

Synthesis:
- the patch on the test case itself helps to understand where the error comes from, but does not change anything else
- the fix on the memstore fixes the memstore part
- there is still another issue, not linked to the memstore but to the flush part (see message above from 23/Aug/11) - the test case may fail for this reason. The probability of a failure is increased by increasing the test count and lowering the flush interval
- anyway, the flush issue is already handled in HBASE-2856.

So I think we can consider the patch as ok for the scope of this bug? @Ted, @Stack: do you agree? Do you need more info?

Clarification for my earlier comments: I performed the test (TestHRegion#testWritesWhileGetting) using the patch 20110824_4195_MemStore.patch. With the current implementation, it can fail in TestHRegion at the assert assertEquals("i=" + i, expectedCount, result.size()); when there is a mess-up in the lists. It should not occur with the new implementation. The issue in the flush is shown by the assert just next to this one. It's this (log added in the new patch):

LOG.warn("These two KV should have the same value." +
    " Previous KV:" + previousKV + "(memStoreTS:" + previousKV.getMemstoreTS() + ")" +
    ", New KV: " + kv + "(memStoreTS:" + kv.getMemstoreTS() + ")" );
assertEquals(previousKV.getValue(), thisValue);

With these modified values on testWritesWhileGetting:

int testCount = 1000; // more iterations. Increase it more if necessary.
// [...]
int flushInterval = 2; // more flush
int compactInterval = 1000 * flushInterval; // no compact (should have no impact, but...)

I produced this error on the trunk + the proposed patch:

[...]
put iteration = 1927
put iteration = 2021
These two KV should have the same value.
Previous KV: row0/family4:qual99/1942/Put/vlen=4 (memStoreTS:993), New KV: row0/family5:qual0/1944/Put/vlen=4 (memStoreTS:0)
E
Time: 19.051
There was 1 failure:
1) testWritesWhileGetting(org.apache.hadoop.hbase.regionserver.TestHRegion)
junit.framework.AssertionFailedError: expected:<1942> but was:<1944>
at org.apache.hadoop.hbase.HBaseTestCase.assertEquals(HBaseTestCase.java:704)
at org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:2781)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[...]

Maybe there are more memory allocations on 0.90, and as a consequence more random interruptions caused by the GC? Note that I am doing the tests outside of Maven, but with JUnit, without specific Java parameters. It's worth checking whether on 0.90 it is also an error when the family changes (qualifier equal to qual0 or qual1). Maybe it's something different on 0.90. Unfortunately (or not) I am on vacation for a week, so I won't be able to give a hand in the next days, but I will be back in the middle of next week.

This state does not last long, as the Store will recreate the scanner when notified of the flush. It looks like the region scanner takes out a read point on construction and holds on to it as the scanner runs, so I guess neither put would be seen (since they came in after the scanner started, so they will have write points in excess of the read point). I'm just a tourist in this code, so I could be wrong.

N, so the failing TestHRegion#testWritesWhileGetting happens on the 0.90 branch only. Do you want me to try this patch on 0.90? We don't fail on TRUNK (Ted showed this is the case above, and I verified it a while back). How could this be the case when the MemStore is the same in both code branches?

Consecutively ran TestHRegion#testWritesWhileGetting 100 times, all of which passed.

+1 on patch (some extra empty lines should be removed). All tests pass.

TestHRegion: added a log to help understand where the problem comes from + modification of the put thread to insert strings (easier to read). MemStore: implementation of "reseek" by using the sublist set in "seek".

I tested 2 cases:
1) The current implementation
2) A new one (reusing the first one's logic without calling seek).

Times (for public void TestHRegion#testWritesWhileGetting()) are, on a virtualized environment:
Current - 26s
Proposed - 13s

It's a little bit surprising to have such a good result, but it could be explained by the behaviour when the counter gets negative: we can have both a lot of iterations on the list + a call to seek. There is also a lot of GC-related time, so it will depend on the memory available. But as a conclusion, it seems that it's interesting to use the search on the sublist directly. I will do a patch on the trunk.

The behavior we have with the previous (or with the one I propose) implementation is:
- before the flush, the MemScanner iterator points to the KV lists of the MemStore. So a "put" on MemStore.kvset will be seen by the scanner.
- the flush does: snapshot = kvset; kvset = new SkipList();
- so after the flush, the "put" will be made on the new MemStore.kvset, hence not visible to the existing scanner.

So the memScanner behaves differently before & after the flush. But maybe it's not an issue, as these puts could be skipped anyway by the readpoint criteria? I just don't know.

"This state does not last long, as the Store will recreate the scanner when notified of the flush." Problem solved then? Ehh... not exactly (smile). A consistent view across the flush is what we want, right? Definitely not part of 'snapshot' only. Your suggested fix sounds good (we'll keep iterators on whatever the sets were when the scan started, in spite of a flush coming in midway through the scan?). For #2, yes, it seems that HBASE-2856 is addressing this type of issue.

As said, I believe it's possible to have an implementation with the same properties as the previous one, with an optimized reseek time, by keeping the pointers to the sublists in the MemStoreScanner. The "reseek" implementation would then become very similar to the "seek" one. I tested this approach; it seems to work functionally like the previous one (i.e. I fail in case #2 mentioned above). I have not tested the reality of the performance improvement, but if there is an agreement on this approach, I can do it. The implementation of reseek would be:

public synchronized boolean reseek(KeyValue key) {
  // kvset and snapshot will never be empty.
  // if tailSet can't find anything, SS is empty (not null).
  kvTail = kvTail.tailSet(key, true);
  snapshotTail = snapshotTail.tailSet(key, true);
  kvsetIt = kvTail.iterator();
  snapshotIt = snapshotTail.iterator();
  kvsetNextRow = getNext(kvsetIt);
  snapshotNextRow = getNext(snapshotIt);
  KeyValue lowest = getLowest();
  // has data := (lowest != null)
  return lowest != null;
}

#2 above looks like HBASE-2856 (HBASE-2856 is a long issue, but skim it and I think you'll find it is what you describe here – Amit is working on this one; we chatted at the hackathon yesterday on the issue). #1 sounds bad; should we back out the reseek?

Update: I have two other scenarios for failure, both linked to a flush occurring during a get.

1) With the current/optimized implementation of reseek, the snapshot and kvset sets can be changed by the thread doing the flush right in the middle of the reseek. This will lead to an inconsistent state. A second effect of the flush process creating a new kvset is that later writes may not be seen by the MemStore scanner, as it will still be connected to the previous kvset.

2) More importantly, and actually not linked to the reseek optimization itself, the following scenario will make a write on multiple families appear non-atomic:

t1: put, it finishes at t2. Write a single row with multiple families.
t3: get starts, it finishes at t9
t4: the get continues, reads the value for the first family (scanner)
t5: put, it finishes at t6. Change the values of the row previously written.
t7: flush starts, it finishes at t8.
t9: the get continues, reads the value for the second family (scanner) from the FileScanner
t10: get finishes

In this case, the get will have the values of the first write for the first families, and the values of the second write for the last families. This is due to the fact that the flush process creates a file and notifies the scanners. The scanners then refresh their view. The notification is "java synchronized" with the next in the StoreFileScanner, so it does not happen during a scan within a family, but it occurs between the families within a read, as there is one scanner per family. If you add a read lock in next() (on HRegion.updatelock), the problem does not occur, as the flush will not take place during a read. As it's a random bug, there can be other scenarios.
In my environnement, when there is a failure: - it's always with a KV with a memstoreTS equals to 0 - the column is always "qual1" As said, this second is actually not linked to the modifications on "MemStoreScanner#reseek", but is linked to the flush/get parallel execution. I would tend to think that the issue happens in production as well. I can do a simple patch (removing all the code around numIterReseek). However, it would conflict with the patch for HBASE-4188/ HBASE-1938. Is it possible for you to commit this one first? Note that I have been able to make this reseek implementation fails as well by adding a Thread.sleep between the search on the two iterators. In other words, there is a race condition somewhere. It could be a conflict with the "flush" process. I noticed that a flush cannot happen during a put (lock on hregion.update) or a seek (lock on store), but there is nothing to prevent a reseek to take place during the snapshot. But I don't how long it will take to find the real issue behind all this, so a partial fix lowering the probability of having an issue makes sense... TestHRegion.testWritesWhileGetting failed in build 266: Can someone confirm that we should not see partial writes in this case? If we are seeing partial writes then we are not paying attention to RWCC sequence numbers properly. The refactored reseek is probably how the code was before hbase-3855 (I didn't check) and that code was around when RWCC was being worked out is my guess... so it probably 'worked' (not seeing parital writes). The new patch adding seeks may not be watching RWCC properly. You've done some nice analysis above N. Any chance of your seeing it home? I will run my dumb TestHRegion test if you have any patch you'd like me to try. Do you want me do write the patch? You confirm that we should not see a partial write? I think that will do the trick. I propose setting RESEEKMAX_DEFAULT to -1. With the current implementation, setting the config RESEEKMAX_KEY to -1 (read with conf.getInt(RESEEKMAX_KEY, RESEEKMAX_DEFAULT) will have this effect. disclaimer: i did not test it. I think the feature of reseek() selectively calling seek() should be governed by a config parameter until total understanding of the intricacies is reached. The issue with the implementation calling only seek is that we can see "writes in progress". From my understanding, it should not be the case (and at least, if it's allowed, there is an issue in the test case itself). The error is this assert: Assert.assertEquals("i=" + i, expectedCount, result.size());, that's different from the one mentionned in HBASE-3855. If I change the reseek implementation to something that does no call seek at all, like: public boolean reseek(KeyValue key) { while (kvsetNextRow != null && comparator.compare(kvsetNextRow, key) < 0) { kvsetNextRow = getNext(kvsetIt); } while (snapshotNextRow != null && comparator.compare(snapshotNextRow, key) < 0) { snapshotNextRow = getNext(snapshotIt); } numIterReseek = 0; return (kvsetNextRow != null || snapshotNextRow != null); } The whole test works fine. So it seems the issue really comes from using seek. The current implementation should have the same issue I think. May be we don't see it often (or at all) because seek is not called that often because of the points mentionned in 2 & 3 in the analysis above. Can someone confirm that we should not see partial writes in this case? FWIW, I also tried to change reseek to call "seek" all the time, but I have strange results (i.e. errors). 
I don't know if we can swap a loop of "next" with a single call to seek. Here is the implementation I tried:

public boolean reseek(KeyValue key) {
  return seek(key);
}

But the test case fails (randomly). So there could be more than one issue.

Integrated in HBase-TRUNK #2222: HBASE-4195 Possible inconsistency in a memstore read after a reseek, possible performance improvement (committed by stack).
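To make the two-iterator reseek discussed in these comments concrete, here is a minimal, illustrative Python sketch. This is not HBase code: sorted lists of integers stand in for the ConcurrentSkipListSets of KeyValues, and bisect plays the role of tailSet.

import bisect

class ToyMemStoreScanner:
    """Toy model of a scanner over the two memstore sets."""
    def __init__(self, kvset, snapshot):
        self.kv_tail = sorted(kvset)
        self.snapshot_tail = sorted(snapshot)

    def reseek(self, key):
        # tailSet(key, true): keep only the elements >= key
        self.kv_tail = self.kv_tail[bisect.bisect_left(self.kv_tail, key):]
        self.snapshot_tail = self.snapshot_tail[
            bisect.bisect_left(self.snapshot_tail, key):]
        # has data := at least one of the two tails still has a head
        return bool(self.kv_tail or self.snapshot_tail)

    def peek(self):
        # getLowest(): the smaller of the two heads, or None if both are empty
        heads = [t[0] for t in (self.kv_tail, self.snapshot_tail) if t]
        return min(heads) if heads else None

scanner = ToyMemStoreScanner(kvset=[1, 4, 7], snapshot=[2, 5, 8])
print(scanner.reseek(5), scanner.peek())  # True 5

Note that, just as in the Java version, nothing here protects the two tail views from being swapped out by a concurrent flush between the two slicing operations - which is exactly the inconsistency that scenario #1 above describes.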
semget - get a semaphore set identifier

Synopsis

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int semget(key_t key, int nsems, int semflg);

Description

semget() returns the System V semaphore set identifier associated with the argument key, and sets the associated data structure, semid_ds (see semctl(2)). A new set is created if key has the value IPC_PRIVATE, or if no existing set is associated with key and IPC_CREAT is specified in semflg. The argument nsems can be 0 (a "don't care") when the call is not creating a new set.

Errors

On failure, -1 is returned, with errno indicating the error.

See Also

semctl(2), semop(2), ftok(3), capabilities(7), sem_overview(7), svipc(7)

Colophon

This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
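If you want to experiment with System V semaphores from a higher-level language rather than C, the third-party sysv_ipc package wraps these calls for Python. A minimal sketch, assuming sysv_ipc is installed (pip install sysv_ipc); the key value is an arbitrary example, and the exact keyword arguments follow that package's documented API rather than anything on this man page:

import sysv_ipc

KEY = 0x5EED  # arbitrary example key

# Roughly semget(KEY, 1, IPC_CREAT | 0600): attaches, or creates if absent
sem = sysv_ipc.Semaphore(KEY, sysv_ipc.IPC_CREAT, mode=0o600, initial_value=1)
sem.acquire()      # like semop() with sem_op = -1
try:
    print("in critical section, semaphore id:", sem.id)
finally:
    sem.release()  # like semop() with sem_op = +1
    sem.remove()   # like semctl(..., IPC_RMID)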
My task

I wasn't in a hurry to accomplish mission no. 016 (in decimal), because I'm the author of this mess ;) So in this post I will show "my intended solution". Before we begin, I want to give a huge shoutout to my IRC mate, who told me a lot about SSTV and ham radio. This inspired me to create this mission. Kudos to @fishcake! The rating of this mission was done by @Gynvael. So what were the orders?

MISSION 016 goo.gl/zqvPQD DIFFICULTY: ████████░░ [8/10]

Oh hack… All specialists are deployed, so once again we have to save the world ;) today's world is called "Chunar"…

Intel

Let's start simply by downloading the file that is located under the link shown in the mission description. What is it? Huh, a txt file with some weird encoding. If you scroll down, there is a = sign. Together with the character set, the obvious conclusion hits my mind: this is base64! Let's decode it with a command-line tool.

cat mission_file.txt | base64 -d > somefile

This command will create a binary file. Using file, radare2 (\m) or simply a hex editor, we can identify these bytes as a WAV file. So it's a sound! Let's play it!

mv somefile somefile.wav

Unfortunately, this is not music. Let's use Audacity and search for some patterns in this file. There is definitely an encoded message. But how do we get it? This link should help. Searching by waterfall image shows that this characteristic pattern belongs to SSTV (Slow-Scan Television). You can even play a sample file to be 100% sure of the identification. Honestly, this was the hardest part of this task. The weird codenames in the description also hint that SSTV was used to encode an image.

Solve!

I used software called RX-SSTV to obtain the image. It looks like this:

To get the best quality I suggest either redirecting sound from output to input in your soundcard settings, or simply putting the microphone inside the closed shells of your headphones. This is how I got mine. The type of encoding should be automatically discovered by the software. It was Robot36, which was not mentioned in the description ;) Now the message looks like:

REPORT ONE: ? ? R O N D I Y M A U Z ? ? ? B C K P ? ? ? V W X Y DHXDMW BQLF KDYNV Over and out.

So this is not the end of this challenge. Remember what the description told us: if something is not clear, it is good to bruteforce it just in case. First and most important is to identify the cipher used. What we see is a 5x5 letter matrix. Some of the letters are blurred. There aren't many ciphers that use this kind of matrix. The most popular one is the Playfair. Searching for it on Wikipedia gives us the name of Lyon Playfair. Check where he was born ;) - if you weren't convinced before, now you should be. So this cipher is the Playfair. Unfortunately, radio transmissions are vulnerable to interference, and we have to somehow recover the damaged matrix to decode the message. I really like Pawel Lukasik's way of thinking, so if you are interested in the cryptanalytic way of finding the flag, check out his blog post! In such cases I prefer to use the power of the CPU and simply try all combinations.
I use my Playfair library and this dirty Python script:

import itertools
import playfair

def main():
    ph = playfair.Playfair()
    # The letters that belong in the blurred cells, in unknown order
    left = ["E", "F", "G", "H", "L", "Q", "S", "T"]
    # The fragments of the key square that survived the transmission
    part1 = "RONDIYMAUZ"
    part2 = "BCKP"
    part3 = "VWX"
    # Try every arrangement of the missing letters in the unknown cells
    for i in itertools.permutations(left, 8):
        pswd = ''.join(i[:2]) + part1 + ''.join(i[2:5]) + part2 + ''.join(i[5:]) + part3
        ph.setPassword(pswd)
        dc = ph.decrypt('Y DHXDMW BQLF KDYNV')
        dc = dc[0] + ' ' + dc[1:7] + ' ' + dc[7:11] + ' ' + dc[11:]
        print("Password: {}\tmessage {}".format(pswd, dc))

if __name__ == "__main__":
    main()

to recover the password. The output of this program can be redirected to a file; then, using a command like fgrep -f "first-file" "second-file" with a dictionary like rockyou.txt, or simply by looking for the line that matches English words the most, the most English flag will be found. The flag is I ALWAYS PLAY FAIRX. The X comes from the fact that the Playfair cipher needs an even number of letters. That's all. I hope you've enjoyed the mission. I had a lot of fun creating it! From this place I want to give kudos to Adrian Laskowski. He didn't solve the mission (although he was really close), but he decoded the SSTV by hand! Gratz, I'm impressed! And kudos to all of you who solved the mission ;) Thank you for your writeups! If you are interested in what the password was, well, I was trying to randomize Playfair's board a little bit ;) This is how I've encoded the message using this website:

And this is how the image looked before blurring some stuff:

foxtrot_charlie over and out!
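As a closing technical aside: the "find the most English line" step above is easy to automate as well. A minimal sketch; candidates.txt stands for the redirected output of the brute-force script, and the tiny word list is just an example:

def englishness(line, words=("ALWAYS", "PLAY", "FAIR", "THE", "AND")):
    # Score a candidate plaintext by how many known English words it contains
    return sum(1 for w in words if w in line.upper())

with open("candidates.txt") as f:
    best = max(f, key=englishness)

print(best)  # the most English-looking candidate, hopefully the flag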
Compared to Graphite, installing a StatsD server is just a piece of cake.

1. Install Node.js. For better management of Node.js, you could consider using nvm.

2. Check out the StatsD project on GitHub.

git clone

3. Copy the exampleConfig.js, name it whatever you like (e.g. statsdConfig.js) and edit it as follows.

{ graphitePort: 2003
, graphiteHost: "<graphite host>"
, port: 8125
, backends: [ "./backends/graphite", "./backends/console" ] // console is for debug
, debug: true // For debug
, graphite: { legacyNamespace: false } // Better group all collected metrics under stats
}

4. Edit the Graphite storage-schemas.conf to define the schema for those metrics whose namespace starts with stats. The file size of a .wsp file under this retention is about 6.2MB.

[stats]
pattern = ^stats\.
retentions = 10s:24h,1min:7d,2min:365d,10min:1825d

5. Since we have different retention periods, we need to create the storage-aggregation.conf to define the rules for the data downsampling.

[min]
pattern = \.min$
xFilesFactor = 0.1
aggregationMethod = min

[max]
pattern = \.max$
xFilesFactor = 0.1
aggregationMethod = max

[lower]
pattern = \.lower$
xFilesFactor = 0.1
aggregationMethod = min

[upper]
pattern = \.upper(_\d+)?$
xFilesFactor = 0.1
aggregationMethod = max

[sum]
pattern = \.sum$
xFilesFactor = 0
aggregationMethod = sum

[count]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum

[default_average]
pattern = .*
xFilesFactor = 0.3
aggregationMethod = average

6. Restart the carbon-cache.py and start the StatsD server.

cd <statsd root>
node stats.js statsdConfig.js

7. There are two ways to test your StatsD setup. The first one is to use Netcat, but I found that not all the data written to the StatsD listening port was received.

# Install Netcat
sudo apt-get install netcat
# Send data to the StatsD server
echo "ykyuen.test.random:1|c" | nc -u -w0 127.0.0.1 8125

8. Another way is to use the StatsD example client, with which I didn't experience any data loss. Go to <statsd root>/examples and send some testing data to the StatsD server. You can keep sending data; the default StatsD flush interval is 10s, which should be greater than or equal to the minimum retention period defined in storage-schemas.conf.

./statsd-client.sh 'ykyuen.test.random:1|c'

9. Check it out on your graphite-web portal.

Done =)
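As a final aside, if you prefer Python over Netcat for testing, the same counter metric can be sent with a few lines of the standard library (a sketch; adjust the host and port to your own setup):

import socket

# StatsD speaks a plain-text protocol over UDP: "<metric>:<value>|<type>"
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"ykyuen.test.random:1|c", ("127.0.0.1", 8125))
sock.close()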
In my programming class I work with Visual C++, and I made a program that compiles and runs perfectly on Visual C++, but Dev C++ doesn't recognise things like: left, fixed, right, showpoint. It sees them as undeclared identifiers and such. I e-mailed the guys and they said that not only are those not ANSI standard, but they don't know what they are used for. Now I have another program similar to this one that is due tomorrow, but I won't be able to test it with Dev C++, and that is why I am worried.

Code:
#include <iostream>
#include <fstream>
#include <iomanip>

using namespace std;

const float propTaxCal = 0.92;
const float taxRate = 1.05;

int main()
{
    double assessedVal, taxAmount, propertyTax;
    ofstream outFile;

    outFile.open("a:ch3_Ex4out.txt");

    cout << "This Program will help you Calculate property tax\n\n";
    cout << "Enter the Assessed value of the Property: \n";
    cin >> assessedVal;
    cout << "Calculating values to output file on Drive A";

    outFile << fixed << showpoint << setprecision(2);
    taxAmount = assessedVal * propTaxCal;
    outFile << left << "Assessed Value:" << setw(34) << setfill('.') << right << assessedVal << endl;
    outFile << left << "Tax Amount:" << setw(38) << setfill('.') << right << taxAmount << endl;
    outFile << left << "Tax Rate for each $100.00:" << setw(22) << setfill('.') << right << "1.05" << endl;
    propertyTax = (taxAmount / 100) * taxRate;
    outFile << left << "Property Tax: " << setw(34) << setfill('.') << right << propertyTax << endl;

    return 0;
}
DCPython Meetup Sprint Plan (Nov 2010)

This is the project planning page for the Python Beginner's Open Source Sprint, organized by the DCPython Meetup (primarily Alex Clark). There are a lot of people planning to attend, so a little advance planning is worthwhile. If people can figure out what projects they want to work on and use this page to connect with others interested in the same projects, it should be easier to get ourselves organized when we get together. This is organized with a section for each "larger" project. If you're interested in a project listed here, add your name and any emphasis you'd like to bring to the sprint. If you want to work on a project not listed already, add it!

Beginners

Everyone attending the sprint that has never sprinted before, add your name under "Developers". If you plan to be available to help these folks, add your name and add (helper) afterward.

Developers
- Alex Clark (helper)
- Fred Drake (helper)
- Eric Groo
- Owen Martin
- Barry Austin
- Bob Schmertz
- Mohamed Ainab <MohamedAinab>
- Eric Palakovich Carr
- Londell
- Gabriel Getzie
- Mike Onzay
- Adam Reilly
- Sergejs Melderis
- Ryan Luu

Tasks
- Pick a project
- Get your laptop set up for development
- Fix bugs!

Python core

The interpreter and standard library.

Developers

Tasks
- PEP-382, namespace packages
- Add alternate float format specifiers (and Decimal, if time)
- Straighten out the JSON issues:

distutils2 (d2)

The next generation of Python packaging. You'll need Mercurial (hg) for revision control, and several versions of Python. See for detailed preparation instructions. For this, it's important to use "clean" Pythons; you don't want to have any extras installed in your site-python directory.

Developers

Tasks
- Remove fancygetopt; move to a clean application of optparse. The command line tool is now in run.py, with new options, and passes them to dist+fancygetopt to call the old system, so what could be done is to:
  - list all command line options in dist.py that are not a call to a command and its options
  - move them to run.py
  - remove fancygetopt and clean up dist.
- Try converting a few projects to use d2, preferably based on the documentation alone.
- Continue the work on py3 support (ask tarek or regebro on irc)

Zope Toolkit (ZTK)

Django

Look at fixing some long-standing and reasonably trivial documentation issues. Shared exploration of the code base. Help 1.3 get out the door.

Developers
- Steve Holden but not on my own! ;-)
- Steve Waterbury but this is not a Steves-only project! ;-)
- Alex Clark Maybe I'll watch! :-)
- Eric Palakovich Carr
- Adam Reilly

Tasks
- Possibly consider whether context objects might benefit from the use of structural modifications.

Plone

Python-based, open-source CMS

Tasks
- PloneSoftwareCenter add-on () development
- Fix bugs:
- Fix broken tests

Parse2Plone

Utility app for importing static website content into Plone

Tasks
- Improve test coverage from 50% to 100% for parse2plone (you'll need a github account for this, so you can fork: and send me pull requests.)

zc.buildout

A tool for creating repeatable environments

Tasks
- Fix bugs:

DCPython Meetup Sprint Wrap Up

Here is a wrap-up of what was accomplished at the sprint.

Distutils2 (d2)

People
- Fred Drake
- Alex Clark
- Travis Pinney
- Arc Riley
- Jim Fulton

Progress
- Fred schooled Travis, Alex and Jim in distutils2 setup, with a dev environment from: and a distutils2 checkout from:
- The group ran the tests, but unfortunately did not manage to get much further than that.
- Fred worked on ripping out the "Fancy option parser" to replace it with optparse.
- Arc added Python 3 support to the distutils2 setup.py and started adapting its unit tests to run clean on Py3.

Buildout

People
- Jim Fulton
- Alex Clark

Progress
- Jim and Alex brainstormed on the status quo: "brittle tests" (i.e. tests giving spurious results depending on the system they are being run on), and then began to fix tests.
- Jim worked on a test failure having to do with zc.recipe.egg tests that used the interpreter parameter to create a script; the script being created in the tests used a technique only valid on newer Ubuntus and Windows (where #!/path/to/exe refers to another script that has #!/path/to/exe instead of an executable).
- Alex worked on a test having to do with Buildout being able to work with more than one version of Python; the test requires that more than one version of Python be found. I committed a "modernization" here: <> (to make the test find 2.6 and 2.5 vs. 2.4 and 2.3).
- Alex plans to do further improvements that will make the test look for all 2.x versions.

Beginners

People
- Steve Holden
- Bob Schmertz
- Anyone else?

Progress
- Steve Holden was corralled with the beginners and discussed basic Python concepts
- Looked at urllib as a code example
- Some of the beginners broke off to do a "fun" project.
- The "fun" project group mostly worked on bootstrapping an Android environment to develop an Android app with Python. The project created is called 'ConnectUs' and the project is hosted on Google Code.
- All participants joined other projects after lunch.

I think next time we need to be better prepared for the beginners, and have a simple project that they can work on.

Python Core

People
- Eric Smith
- Eric Groo
- Vlad Korolev
- Bob Schmertz
- Owen Martin

Progress

The core group worked on "Alternate float formatting", which actually affects float, Decimal, and complex. Eric Smith has posted their patch, is cleaning it up and will commit it for Python 3.2.

Additional comments from Londell: During the sprint, I also worked on some Python core stuff too. I'm chasing a buffering issue in the pty code (that I stumbled upon before participating in the sprint). Eric gave me some ideas of some people I could reach out to for more info on the pty code base. Additionally, Arc and I had several discussions. He challenged me to dig deeper into the C/Python bridging code, and he challenged me to rethink how I implement behavior in my code to handle C and Python exceptions in the bridging code.

Django

People
- Steve Holden
- Anyone else?

Progress
- Steve Holden and Gabe Getzie worked on and submitted a documentation bug fix for issue #3529, a long-standing though minor bug in the documentation for context.update(). This served to refresh memory about the documentation setup.

GeoDjango

People
- Eric Palakovich Carr
- Travis Pinney
- Adam Reilly
- Barry Austin

Progress
- The GeoDjango group managed to get a patch out for.
This article is based on Enterprise OSGi in Action, to be published. If you're like a lot of developers, you'll probably divide your testing into a few phases. The lowest level tests you'll run are simple unit tests that exercise individual classes but not their interactions with one another. The next group of tests in your test hierarchy is the integration tests, which test the operation of your application. Finally, at the highest level, you may have system tests, which test the complete system, preferably in an environment that is very close to your production one.

When you're testing OSGi bundles, each of these types of testing requires a different approach—different from the other phases, and also different from how you'd do similar testing for an application intended to run on a JEE server or standalone. We'll start by discussing unit testing, since that's the simplest case in many ways. We'll then show you some tools and strategies that we hope you'll find useful for integration and system testing.

Unit testing OSGi bundles is pretty straightforward, but you'll find you've got choices to make and a bewildering array of tool options when you start looking at integration testing. The good news is that, if you've already decided how to build your bundles, some of the choices about how best to test them will have been made for you. The bad news is that you've still got quite a few choices! Figure 1 shows some of these choices.

Figure 1 The ways of testing OSGi applications. Any test process should include a simple unit test phase, but the best way to do integration testing depends on a number of factors, including which build tools are already being used.

Unit testing OSGi

By definition, unit tests are designed to run in the simplest possible environment. The purest unit tests have only the class you're testing and any required interfaces on the classpath. In practice, you'll probably also include classes that are closely related to the tested class, and possibly even some external implementation classes. However, unit tests needn't—and shouldn't—require big external infrastructure like an OSGi framework. So how can code that's written to take advantage of the great things OSGi offers work without OSGi? How can such code be tested? Luckily, the answer is: pretty easily.

Mocking out

The enterprise OSGi features of your runtime environment will ideally be handling most of the direct contact with the OSGi libraries for you. Even if you need to reference OSGi classes directly, you may feel more comfortable writing your application so that it doesn't actually have too many explicit OSGi dependencies. After all, loose coupling is good, even when the coupling is to a framework as fabulous as OSGi. If you do have direct OSGi dependencies, using mocking frameworks to mock out OSGi classes like BundleContext can be helpful. You can even mock up service lookups, although if you're using Blueprint or declarative services you'll probably very rarely have cause to use the OSGi services API directly. In fact, using Blueprint has a lot of testability advantages beyond just eliminating direct OSGi dependencies.

Blueprint-driven testing

One of the really nice side effects of dependency injection frameworks like Blueprint is that unit testing becomes much easier. Although Blueprint won't do your testing for you, testing code which was written with Blueprint in mind is an awful lot easier.
Separating out the logic to look up or construct dependencies from the work of doing things with those dependencies allows dependencies to be stubbed out without affecting the main control flow you're trying to test. The key is to realize that, while you certainly need a Blueprint framework to be present to run your application end-to-end, you don't actually need it at all to test components of the application in isolation. Instead, your test harness can inject carefully managed dependencies into the code you're trying to test. You can even inject mock objects instead of real objects, which would be almost impossible if your inter-component links were all hard-wired.

To see this in action, let's have a look at the DesperateCheeseOffer again. It depends on the Inventory, which requires container-managed JPA and a functioning database. Clearly, we don't want to be trying to get a bunch of JPA entities and a database going just for a unit test. Instead, we'll mock up an Inventory object and use the setter methods we created for Blueprint injection for unit testing instead.

Listing 1 Using mocked injection to unit test the desperate cheese offer

public class DesperateCheeseOfferTest {
  @Test
  public void testOfferReturnsCorrectFood() {
    Food food = mock(Food.class);                              #1
    when(food.getName()).thenReturn("Green cheese");
    Inventory inventory = mock(Inventory.class);
    List<Food> foods = new ArrayList<Food>();
    foods.add(food);
    when(inventory.getFoodsWhoseNameContains("cheese", 1))
        .thenReturn(foods);
    DesperateCheeseOffer offer = new DesperateCheeseOffer();   #2
    offer.setInventory(inventory);
    assertNotNull(offer.getOfferFood());                       #3
    assertEquals("Green cheese", offer.getOfferFood().getName());
  }
}

#1 Use Mockito to mock out classes
#2 Initialise the cheese offer
#3 Test the cheese offer behaves as expected

This sort of testing allows you to verify that if the injected Inventory object behaves as expected, the DesperateCheeseOffer does the right thing. You could also add more complex tests which confirm that the cheese offer tolerates the case when Inventory has no cheeses in it at all, or the case when there is more than one cheese present in the inventory.

Although tests running outside an OSGi framework are straightforward and can spot a range of problems, there are several classes of problems they can't detect. In particular, unit tests will still continue to run cleanly even if bundles fail to start or Blueprint services never get injected. To catch these issues, you'll also want to test the end-to-end behavior of your application, inside an OSGi framework. How closely this OSGi framework matches your production environment is a matter of taste. You may choose to do automated integration testing in a quite minimal environment, and follow it up with a final system test in a mirror of your production system. Alternatively, you may find you flush more problems out more quickly if you test in a more fully featured environment. The tools and techniques we'll describe for testing inside an OSGi environment are broadly suitable for both integration and system testing.

Pax Exam

The gold standard for OSGi testing tools is a tool called Pax Exam. Pax Exam is part of a suite of OSGi-related tools developed by the OPS4J open source community. In contrast to other open source communities like the Apache and Eclipse Foundations, OPS4J has an interestingly flat structure that emphasizes open participation as well as open consumption.
There is no notion of a committer, no barrier to committing source changes, and very little internal hierarchy. Pax Exam builds on other tools developed by OPS4J, such as Pax Runner, to provide a sophisticated framework for launching JUnit (or TestNG) tests inside an OSGi framework and collecting the results. Under the covers, the Pax Exam framework wraps test classes into a bundle (using bnd to generate the manifest) and then automatically exposes the tests as OSGi services. Pax Exam then invokes each test in turn and records the results.

How clean is your framework?

By default, Pax Exam will start a fresh framework for each test method, which means Pax Exam tests may run rather slowly if you've got a lot of them. In recent versions, you can speed things up—at the risk of inter-test side effects—by specifying an @ExamReactorStrategy annotation. You can also choose whether Pax Exam launches the OSGi frameworks inside the main JVM, or forks a new JVM for each framework and runs tests by RMI invocation. Not spawning a new JVM makes things far faster, and it also means you can debug your tests without having to attach remote debuggers. However, many of the more useful options for configuring frameworks are only supported for the remote framework case. Which container to use is determined by which container you list in your Maven dependency. To use the quicker non-forking container, add the following dependency:

<dependency>
  <groupId>org.ops4j.pax.exam</groupId>
  <artifactId>pax-exam-container-native</artifactId>
  <version>${paxexamversion}</version>
  <scope>test</scope>
</dependency>

To use the more powerful but slower Pax Runner-based container, specify the following:

<dependency>
  <groupId>org.ops4j.pax.exam</groupId>
  <artifactId>pax-exam-container-paxrunner</artifactId>
  <version>${paxexamversion}</version>
  <scope>test</scope>
</dependency>

Enabling tests for Pax Exam

A JUnit test intended for Pax Exam has a few key differences from one which runs standalone. Running your test code in an entirely different JVM from the one used to launch the test, with RMI and all sorts of network communication going on in the middle, isn't something the normal JUnit test runner is designed to handle. You'll need to run with a special Pax Exam runner instead, by adding a class-level annotation:

@RunWith(org.ops4j.pax.exam.junit.JUnit4TestRunner.class)

You can also choose to have a bundle context injected into your test class:

@Inject
protected BundleContext ctx;

Configuring a framework

You'll also need to tell Pax Exam what you want in your OSGi framework. This is done in a method annotated @Configuration. Pax Exam provides a fluent API for building up configuration options. Pax Exam gives you detailed control over the contents and configuration of your OSGi framework. You can specify the OSGi framework implementation (Equinox, Felix, Knopflerfish) and version, or any combination of implementations and versions, system properties, and installed bundles. You can specify a list of bundles to install, as well as VM options and OSGi frameworks. All of these can be controlled using methods statically imported from org.ops4j.pax.exam.CoreOptions, and combined into an array using the CoreOptions.options() method.
@Configuration
public static Option[] configuration() {
  MavenArtifactProvisionOption foodGroup = mavenBundle().groupId("fancyfoods");
  Option[] fancyFoodsBundles = options(
      foodGroup.artifactId("fancyfoods.department.cheese").version("1.0.0"),
      foodGroup.artifactId("fancyfoods.api").version("1.0.0"),
      foodGroup.artifactId("fancyfoods.persistence").version("1.0.0"),
      foodGroup.artifactId("fancyfoods.datasource").version("1.0.0"));
  Option[] server = PaxConfigurer.getServerPlatform();
  Option[] options = OptionUtils.combine(fancyFoodsBundles, server);
  return options;
}

Here we're installing the fancyfoods.department.cheese bundle, along with its dependencies and the bundles that make up the hosting server. Most of your tests will probably run on the same base platform (or platforms), so it's worth pulling out common configuration code into a configuration utility, PaxConfigurer in this case. If you do this, you can use OptionUtils.combine() to merge the option array and Option varargs into one big array.

Using Maven

Pax Exam is very well integrated with Maven, so one of the most convenient ways of specifying bundles to install is using their Maven coordinates and the CoreOptions.mavenBundle() method. Versions can be explicitly specified, pulled from a pom.xml using versionAsInProject(), or left implicit to default to the latest version.

Using an existing server install

In addition to your application bundles, you'll need to list all the other bundles in your target runtime environment. If you think about using mavenBundle() calls to specify every bundle in your server runtime, you may start to feel uneasy. Do you really need to list out the Maven coordinates of every bundle in your Aries assembly—or worse yet, every bundle in your fully fledged application server? Luckily, the answer is no—Pax Exam does provide alternate ways of specifying what should get installed into your runtime. You can install Pax Runner or Karaf features if any exist for your server, or just point Pax Exam at a directory which contains your server. For testing applications intended to run on a server, this is the most convenient option—you probably don't need to test your application with several different OSGi frameworks or see what happens with different Blueprint implementations, because your application server environment will be well-defined. Listing 2 shows how to configure a Pax Exam environment based on an Aries assembly used for testing.

Using profiles

Pax Exam also provides some methods to reference convenient sets of bundles, such as a webProfile() and a set of junitBundles(). Remember that Pax Exam installs only the bundles you tell it to install – your server probably doesn't ship JUnit, so if you point Pax Exam at your server directory, you'll need to add in your test class's JUnit dependencies separately. Since it can be complex, even with profiles, we find it can be convenient to share the code for setting up the test environment. Listing 2 shows a utility class for setting up a test environment that reproduces an Aries assembly used for testing.
Listing 2 A class that provides options which can be shared between tests

package fancyfoods.department.cheese.test;

import org.ops4j.pax.exam.Option;
import org.ops4j.pax.exam.options.extra.DirScannerProvisionOption;
import static org.ops4j.pax.exam.CoreOptions.*;

public class PaxConfigurer {
  public static Option[] getServerPlatform() {
    String ariesAssemblyDir = "${aries.assembly}/target";      #1
    Option bootPackages = bootDelegationPackages("javax.transaction",
        "javax.transaction.*");
    String f = "*-*.jar";
    DirScannerProvisionOption unfiltered = scanDir(ariesAssemblyDir);
    Option ariesAssembly = unfiltered.filter(f);               #2
    Option osgiFramework = equinox().version("3.5.0");
    return options(bootPackages, ariesAssembly, junitBundles(),
        osgiFramework);
  }
}

#1 The path to the Aries assembly
#2 Only include jars with *-* in the name

Here, ${aries.assembly} should be replaced with an actual path—there's no clever variable substitution going on!

WARNING: One OSGi framework good, two OSGi frameworks bad. Pax Exam will install all the bundles in the scanned directory into an OSGi framework. Since the scanned directory contains its own OSGi framework bundle, this means you may be getting slightly more than you bargained for. Installing one OSGi framework into another is possible, but the classloading gets complicated! To avoid installing multiple frameworks, you can either copy all your server bundles except for the OSGi framework itself to another directory, or rename the OSGi bundle so that it's not captured by your filter. For example, if you rename the bundle to osgi.jar (no dash) and then specify the filter "*-*.jar", Pax Exam will install every jar except for your framework jar (assuming all the other jars have dashes before the version number).

All this setup might seem like quite a bit of work, and it is. Luckily, once you've done it for one test, writing all your other tests will be much easier.

The Pax Exam test

So what kind of things should you be doing in your tests? The first thing to establish is that your bundles are present and that they're started. (And if not, why not!) Bundles that haven't started are one of the more common problems in OSGi applications, and are especially common with Pax Exam because of the complexity of setting up the environment. Despite this, checking bundle states isn't a first-class Pax Exam diagnostic feature. It's worth adding to your tests some utility methods that try to start the bundles in your framework, to make sure everything is started, and fail the test if any bundles can't start. After this setup verification, what you test will depend on the exact design of your application. Verifying that all your services, including Blueprint ones, are present in the service registry is a good next step—and the final step is to make sure your services all behave as expected, for a variety of inputs. Listing 3 shows a test for the cheese bundle.
Listing 3 A Pax Exam integration test

@Test
public void testOfferReturnsCorrectFood(BundleContext ctx) {
  Bundle bundle = getInstalledBundle(ctx, "fancyfoods.department.cheese");
  try {
    bundle.start();
  } catch (BundleException e) {
    fail(e.toString());                                        #1
  }
  SpecialOffer offer = waitForService(bundle, SpecialOffer.class);
  assertNotNull("The special offer gave a null food.",         #2
      offer.getOfferFood());
  assertEquals("Did not expect " + offer.getOfferFood().getName(),
      "Wensleydale cheese", offer.getOfferFood().getName());
}

protected <T> T waitForService(Bundle b, Class<T> clazz) {
  try {
    BundleContext bc = b.getBundleContext();
    ServiceTracker st = new ServiceTracker(bc, clazz.getName(), null);
    st.open();
    Object service = st.waitForService(30 * 1000);             #3
    assertNotNull("No service of the type " + clazz.getName()
        + " was registered.", service);
    st.close();
    return (T) service;
  } catch (Exception e) {
    fail("Failed to register services for " + b.getSymbolicName()
        + e.getMessage());
    return null;
  }
}

#1 If the bundle can't start, find out why
#2 Get the offer service
#3 Give the service thirty seconds to appear

Unlike normal JUnit test methods, Pax Exam test methods can take an optional BundleContext parameter. You can use it to locate the bundles and services you're trying to test.

WARNING: Blueprint and fast tests. An automated test framework will generally start running tests as soon as the OSGi framework is initialized. This can cause fatal problems when testing Blueprint bundles, because Blueprint initializes asynchronously. At the time you run your first test, half your services may not have been registered! You may find you suffer from perplexing failures, reproducible or intermittent, unless you slow Pax Exam down. Waiting for a Blueprint-driven service is one way of ensuring things are mostly ready, but unfortunately just because one service is enabled doesn't mean they all will be. In the case of the cheese test, waiting just for the SpecialOffer service will do the trick, since that's the service you're testing.

Running the tests

If you're using Maven, and you keep your test code in Maven's src/test/java folder, your tests will automatically be run when you invoke the integration-test or install goals. You'll need one of the Pax Exam containers declared as a Maven dependency. If you're using Ant instead, don't worry—Pax Exam also supports Ant.

Tycho test

Although Pax Exam is a very popular testing framework, it's not the only one. In particular, if you're using Tycho to build your bundles, you're better off using Tycho to test them as well. Tycho offers a nice test framework that, in many ways, is less complex than Pax Exam. Tycho is a Maven-based tool, but like Tycho build, Tycho test uses an Eclipse PDE directory layout. Instead of putting your tests in a test folder inside your main bundle, Tycho expects a separate bundle. It relies on naming conventions to find your tests, so you'll need to name your test bundle with a .tests suffix.

Fragments and OSGi unit testing

Tools like Pax Exam will generate a bundle for your tests, but with Tycho you've got control of where your tests go. To allow full white-box unit testing of your bundle, you may find it helpful to make the test bundle a fragment of the application bundle. This will allow it to share a classloader with the application bundle and drive all classes, even those which aren't exported.

Configuring a test framework

Tycho will use your bundle's package imports to provision required bundles into the test platform.
Like any provisioning which relies solely on package dependencies, it's unlikely that all the bundles your application needs to function properly will be provisioned. API bundles will be provisioned, but service providers probably won't be. It's certain that the runtime environment for your bundle won't be exactly the same as the server you eventually intend to deploy on. In order to ensure Tycho provisions all your required bundles, you can add them to your test bundle's manifest using Require-Bundle. Tycho will automatically provision any dependencies of your required bundles, so you won't need to include your entire runtime platform. However, you may find the list is long enough that it's a good idea to make one shared 'test.dependencies' bundle whose only function is to require your test platform. All your other test bundles can just require the test.dependencies bundle. Your test bundles will almost certainly have a dependency on JUnit, so you'll need to add in one of the main Eclipse p2 repositories (for example, the eclipse-helios p2 repository) so that Tycho can provision the JUnit bundle.

As with Pax Exam, you may find it takes you a few tries to get the Tycho runtime platform quite right. Until you've got a reliable platform definition, you may spend more time debugging platform issues than legitimate failures. Don't be deterred—having a set of solid tests will pay back the effort many times over, we promise!

Rolling your own test framework

OSGi testing tools provide some way of integrating a unit test framework, like JUnit, into an OSGi runtime. In order to do this, they require you to do some fairly elaborate definition and configuration of this runtime, either in test code (Pax Exam) or in Maven scripts and manifest files (Tycho). Sometimes the benefits of running JUnit inside an OSGi framework don't justify the complexity of getting everything set up to achieve this. Instead of using a specialized OSGi testing framework, some developers rely on much more low-tech or hand-rolled solutions. A bundle running inside an OSGi framework can't interact with a JUnit runner without help, but this doesn't mean it can't interact with the outside world. It can have a bundle activator or an eager Blueprint bean write out files, or it can expose a servlet and write status to a web page. Your test code can find the file or hit the servlet and parse the output to work out if everything is behaving as expected (a minimal sketch of such an outer harness appears at the end of this article). It can even use a series of JUnit tests which hit different servlets or read different log files, so that you can take advantage of your existing JUnit reporting infrastructure.

This method of testing OSGi applications always feels a little inelegant to us, but it can work really well as a pragmatic test solution. Separating out the launching of the framework from the test code makes it much easier to align the framework with your production system. In fact, you can just use your production system as is to host the test bundles. It's also much easier to debug configuration problems if the framework doesn't start as expected. We have witnessed the sorry spectacle of confused Pax Exam users shouting at an innocent laptop "Where's my bundle? And why won't you tell me that six bundles in my framework failed to start?" at the end of a long day's testing.

If you're planning to deploy your enterprise OSGi application on one of the bigger and more muscular application servers, you may find that it's more application server than you need for your integration testing.
In this case, you may be better off preparing a sandbox environment for testing, either by physically assembling one or just by working out the right minimum set of dependencies. The Aries assembly we've been using to run the examples is a good example of a hand-assembled OSGi runtime. We started with a simple Equinox OSGi framework and just added in the bundles needed to support the enterprise OSGi programming model. Alternatively, there are several open source projects that provide lightweight and flexible OSGi runtimes.

Pax Runner

Pax Runner is a slim container for launching OSGi frameworks and provisioning bundles. Pax Exam relies heavily on Pax Runner to configure and launch its OSGi framework. Pax Runner comes with a number of predefined profiles, which can be preloaded into the framework. For example, to launch an OSGi framework which is capable of running simple web applications, it's sufficient to use the command:

pax-run.sh --"profiles=web"

At the time of writing, there isn't a Pax Runner profile that fully supports the enterprise OSGi programming model. However, profiles are basically text files that list bundle locations, so it's simple to write your own profiles. These profiles can be shared between members of your development team, which is nice, and fed to Pax Exam for automated testing, which is even nicer. Pax Runner also integrates with Eclipse PDE, so you can use the same framework in your automated testing and your development-time testing. Although Pax Runner is described as a provisioning tool, it can't provision bundles the way full provisioners can. You'll need to list every bundle you want to include in your profile file.

Karaf

An alternative to Pax Runner with slightly more dynamic provisioning behavior is Apache Karaf. Like the name implies, Karaf is a little OSGi container. Karaf has some handy features missing in Pax Runner, like hot deployment and runtime provisioning. This functionality makes Karaf suitable for use as a production runtime rather than just as a test container. In fact, Karaf is so suitable for production that it underpins the Apache Geronimo server. However, Karaf isn't as well integrated into existing unit test frameworks as Pax Runner, so if you use Karaf for your testing, you'll mostly be limited to the "use JUnit to scrape test result logs" approach we described above. Karaf has quite a lot of support for Apache Aries, which makes it a convenient environment for developing and testing enterprise OSGi applications. So well integrated is Karaf with Aries that Karaf comes with Blueprint support by default. Even better, if you list the OSGi bundles using osgi:list, there's a special column which shows each bundle's Blueprint status.

HOT DEPLOYMENT

Like the little Aries assembly we've been using, Karaf can install bundles from the filesystem dynamically. Karaf monitors the ${karaf.home}/deploy directory, and will silently install any bundles (or feature definitions) dropped into it. To confirm which bundles are installed and see their status, you can use the list command.

KARAF FEATURES

Karaf features, like Pax Runner profiles, are pre-canned groups of bundles that can easily be installed. As an added bonus, Karaf features have been written for many of the Apache Aries components. To see all the available features, type features:list. To install new features, type features:install -v (the -v option does a verbose install).
To get a working Apache Aries runtime with Karaf features, it's sufficient to execute the following commands:

features:install -v war
features:install -v jpa
features:install -v transaction
features:install -v jndi

You'll also need to enable the http server by creating a file called ${karaf.home}/etc/org.ops4j.pax.web.cfg and specifying an http port:

org.osgi.service.http.port=8080

To get our sample Fancy Foods application going, the final step is to install a database by copying one into the deploy directory, and then copy your fancy foods jars into the deploy directory. You can also install the bundles from the console using Maven coordinates. The Karaf console is really nice, but of course it doesn't lend itself so well to automation. Luckily, repositories can be defined, features installed, and bundles started by changing files in ${karaf.home}. If you're very keen to automate, the Karaf console can even be driven programmatically.

Summary

The OSGi tools ecosystem is big, and many of the tools are very powerful (or complicated!). The key differentiator between many of the tool stacks is whether your OSGi development starts with an OSGi manifest, or starts with plain code. Once you've made this decision, you'll find the choice between tools like bnd and Tycho starts to get made for you. As you move on to test your bundles, you'll find that many of your build-time tools have natural testing extensions. In particular, almost all testing tools offer some level of Maven integration.
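Appendix: as promised in the "Rolling your own test framework" section, here is a minimal sketch of an outer harness that polls a status servlet and checks its output. It is deliberately written in Python to stress that this harness can live entirely outside OSGi and the JVM; the URL and the "OK" marker are placeholder conventions, not part of any real project:

import urllib.request

def test_status_servlet():
    # Poll the status servlet exposed by the bundle under test
    url = "http://localhost:8080/test/status"  # placeholder URL
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # By convention here, the bundle writes OK once all its services are up
    assert "OK" in body, "bundle reported: " + body

test_status_servlet()
print("status servlet reports OK")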
Hands-On Tutorial: Machine Learning push-down to SAP HANA with Python

With this tutorial you will learn how to train Machine Learning (ML) models in SAP HANA through Python code. Trigger predictive algorithms either from local Jupyter Notebooks or, even better, from Jupyter Notebooks within SAP Data Intelligence.

If you are using SAP HANA, you probably have valuable business data in that system. This data can also be a very valuable asset for Machine Learning tasks. Since SAP HANA contains predictive algorithms, you can train ML models within SAP HANA on the existing information – without having to extract and duplicate the data! I like to call this the "push-down".

In case you are not familiar with Machine Learning or Python, this project can be a starting point. If you are already experienced with Machine Learning, you might be curious how to train ML models directly in SAP HANA from your preferred Python environment. That's right, leverage the power of SAP HANA without leaving your existing Python framework!

You can implement the scenario yourself using your own SAP HANA instance. Or if you just want to get an idea of the concept without getting hands-on, you can also just scroll through the Notebooks that are shared. To get hands-on you need to:

- have access to a SAP HANA system (version 2.0 SPS 03 or higher)
- have a Python development environment, preferably JupyterLab
- install the libraries to your Python environment, which are needed to connect and push down calculation and training logic to SAP HANA
- download a set of Jupyter Notebooks that have been prepared for you

If you have access to SAP Data Intelligence, you can get started quicker, as SAP Data Intelligence already has JupyterLab integrated. Those who will work with SAP Data Intelligence can jump to the SAP Data Intelligence chapter in this blog after having read through the remainder of this chapter.

The notebooks will implement a typical Machine Learning scenario, in which a regression model is trained using the Predictive Analysis Library (PAL). You will estimate the price of a used vehicle based on the car model, the year in which it was built, as well as the car's mileage and other parameters. For this scenario we are using a dataset of cars that were offered for sale at the end of 2016. This dataset was compiled by scraping offers on eBay and shared as "Ebay Used Car Sales Data" on Kaggle. As the data is from 2016, any analysis or prediction refers to that time. By now the used cars' value would have dropped further. Unless you are looking at an old-timer, for which the price might rise over time…

Needless to say, this blog, the code and any advice that comes with it are not part of an official product of SAP. I am hoping you find the information useful to learn and create your own analysis on your own data, but there is no support for any of the content. The official documentation for the components used in this blog is:

- SAP HANA Python Client API for Machine Learning Algorithms
- SAP HANA Predictive Analysis Library (PAL)

A big "Thank you" goes to Thomas Bitterle, who was the first to test out this blog before publication! His feedback rounded off a number of areas, making it easier for everyone.

Install Python environment

You should be able to use your own Python environment, in case you already have one. In case you do not have Python installed yet, I suggest using the Anaconda installer.
Anaconda is a free and open-source distribution that installs everything that is typically needed to get started. After the installation you can easily open the JupyterLab environment from the Anaconda Navigator. Alternatively, you can also start the JupyterLab environment from the "Anaconda Prompt" application with the command:

jupyter lab

JupyterLab provides a browser-based interface to work with multiple Jupyter Notebooks. A Jupyter Notebook allows you to code and execute Python syntax from your browser. You can add nicely formatted comments to describe the code. And the Notebooks can display the output of the code, i.e. some text or charts that were produced. Having all this information in the same place makes it easier to keep track of your project. Don't worry if this sounds complex. It doesn't take long to pick up, and it is good fun to use! If you haven't worked with Jupyter Notebooks so far, this collection of very brief introductory videos is a good start.

SAP HANA access and configuration

The Python wrapper, which facilitates the push-down, is supported beginning with SAP HANA 2.0 SPS 03. Should you not have such a system ready for testing, a quick way to get access can be to start SAP HANA Express on a hyperscaler, e.g. on Amazon Web Services, Microsoft Azure or Google Cloud Platform.

For this blog I chose to use a 32 GB instance of SAP HANA Express 2.0 SPS 04 on AWS, as outlined in this guide. Please keep a close eye on the hosting costs. Do check daily to avoid surprises! I understand the SAP HANA Express on AWS is not covered by the AWS free tier.

Once you have an appropriate SAP HANA available, you need to run some SQL statements to configure it to be used by the Python wrapper. SQL syntax can be executed in different environments. In this blog I am using the SAP Web IDE for SAP HANA. If you are also using this interface, you must add your SAP HANA instance to the Web IDE. In the Web IDE's "Database Explorer" on the left-hand side click the "+" sign and choose:

- Database Type: SAP HANA Database (Multitenant)
- Host: Your SAP HANA's IP address or server name (hxehost if you set up SAP HANA Express on AWS)
- Identifier: Instance Number: 90
- Tenant database: HXE
- User: SYSTEM
- Password: [the password you specified]

Open the SQL Console for that connection and execute the following statements. Create a user named ML, who will access SAP HANA to upload data, to analyse the data and to train Machine Learning models. At the risk of stating the obvious: please replace 'YOURPASSWORD' with a password of your own choosing.

CREATE USER ML Password "YOURPASSWORD";

Optionally, you can ensure that the user will never be prompted to change the password:

ALTER USER ML DISABLE PASSWORD LIFETIME;

Assign the user the necessary rights to trigger the Predictive Analysis Library (PAL):

GRANT AFLPM_CREATOR_ERASER_EXECUTE TO ML;
GRANT AFL__SYS_AFL_AFLPAL_EXECUTE TO ML;

The Predictive Analysis Library (PAL) requires that the script server is running on the tenant database. The script server can be activated through the System database.
Therefore, add the System database as an additional connection to the Database Explorer in the SAP Web IDE for SAP HANA:

- Database Type: SAP HANA Database (Multitenant)
- Host: Your SAP HANA's IP address or server name (hxehost if you set up SAP HANA Express on AWS)
- Identifier: Instance Number: 90
- Database: System database
- User: SYSTEM
- Password: [the password you specified]

Now use this connection to start the script server on the HXE tenant database with this statement:

ALTER DATABASE HXE ADD 'scriptserver';

Later on we will need to know the SQL port of the HXE tenant. The port can be retrieved from the System database. Use the same connection to the System database to execute this SQL statement. Note down the SQL_PORT that is shown for our tenant HXE. Credit for that clever SQL statement goes to this developer tutorial!

SELECT DATABASE_NAME, SERVICE_NAME, PORT, SQL_PORT, (PORT + 2) HTTP_PORT
FROM SYS_DATABASES.M_SERVICES
WHERE (SERVICE_NAME = 'indexserver' and COORDINATOR_TYPE = 'MASTER')
   OR SERVICE_NAME = 'xsengine';

Install the Python libraries for SAP HANA push-down

By now you have JupyterLab installed and have access to a SAP HANA system. Now you need to install the wrapper, which allows Python to connect to SAP HANA and to push down data calculations and the training of ML models to SAP HANA. Start JupyterLab as explained above, either through the Anaconda Navigator or from the Anaconda Prompt. Then create a new Jupyter Notebook and install these two libraries:

- The SAP HANA Python Client, which is the underlying connectivity from Python to SAP HANA:

!pip install hdbcli

- The Python wrapper, which facilitates the push-down to SAP HANA.

Update January 2020: This Python wrapper is now also available through pip!

!pip install hana_ml

Since the library is now available through pip, the following manual installation steps are not needed anymore.

The Python wrapper, which facilitates the push-down to SAP HANA, is currently (October 2019) not available through pip. You need to download and install a recent version of the SAP HANA 2.0 client (at least SAP HANA 2.0 SPS 04 Revision 42). After installation you find in "C:\Program Files\SAP\hdbclient" the file hana_ml-1.0.7.tar.gz. Install this library in the Jupyter Notebook with:

!pip install "C:\Program Files\SAP\hdbclient\hana_ml-1.0.7.tar.gz"

Test the installation with the following code, which should print the version of the hana_ml package. This hands-on guide requires you to have at least version 1.0.7.

import hana_ml
print(hana_ml.__version__)

In case the above import statement throws an error related to shapely, try the following:

- Open the Anaconda Prompt
- Install shapely through conda: conda install shapely
- Open your JupyterLab as usual: jupyter lab

Test the connection from JupyterLab to SAP HANA

Run a quick test of whether the hana_ml package can indeed connect to your SAP HANA system. To keep things simple for now, log on with the user name and password. The code connects to SAP HANA, executes a very simple SELECT statement, and retrieves and displays the result in Python. You may need to change the server name of the SAP HANA; "hxehost" is the name given in the above AWS guide.
import hana_ml.dataframe as dataframe

# Instantiate connection object
conn = dataframe.ConnectionContext("hxehost", 39015, "ML", "YOURPASSWORD")

# Send basic SELECT statement and display the result
sql = 'SELECT 12345 FROM DUMMY'
df_pushdown = conn.sql(sql)
print(df_pushdown.collect())

# Close connection
conn.close()

Running the cell should display the value 12345. Should you receive an error, scroll to the end of the error message. Typically the last line of the error is the most helpful one.

Connect with secure password

In the previous cell it was convenient to write the SAP HANA password directly into the Python code. This is obviously not very secure, and you may want to take a more sophisticated approach. It would be better to save the logon parameters securely with the hdbuserstore application, which is part of the SAP HANA client. Navigate in a command prompt (cmd.exe) to the folder that contains the hdbuserstore, i.e. C:\Program Files\SAP\hdbclient. Then store the logon parameters in the hdbuserstore. In the example below the parameters are saved under a key called hana_hxe. You are free to choose your own name, but if you stick with hana_hxe you can execute the Jupyter Notebooks as they are. You do have to modify this command, though, to include your own server name, SQL port and user name.

C:\Program Files\SAP\hdbclient>hdbuserstore -i SET hana_hxe "SERVER:SQL_PORT" YOURUSER

The password is not specified in the above command, as you will be prompted for it. Bear in mind that the above command only stores the logon parameters. It does not test whether they are valid. Now that the logon credentials are securely saved, they can be leveraged by the hana_ml wrapper to log on to SAP HANA. Create a new cell in the Jupyter Notebook and repeat the earlier test, but now use the securely stored credentials.

import hana_ml.dataframe as dataframe

# Instantiate connection object, using the credentials stored in hdbuserstore
conn = dataframe.ConnectionContext(userkey="hana_hxe")

# Send basic SELECT statement and display the result
sql = 'SELECT 12345 FROM DUMMY'
df_pushdown = conn.sql(sql)
print(df_pushdown.collect())

# Close connection
conn.close()

You should see the familiar output of 12345.
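With the secure connection in place, you can already push some simple work down to SAP HANA before touching the prepared notebooks. A small sketch; the table name "USED_CARS" is a placeholder, any table your ML user can read will do:

import hana_ml.dataframe as dataframe

conn = dataframe.ConnectionContext(userkey="hana_hxe")

# This only defines the query; no data is transferred to Python yet
df_pushdown = conn.sql('SELECT * FROM "ML"."USED_CARS"')

# The row count is computed inside SAP HANA; only the number comes back
print(df_pushdown.count())

# Fetch just a small sample into a local pandas data frame
print(df_pushdown.head(5).collect())

conn.close()

Everything up to the final collect() happens inside the database, which is exactly the push-down idea this tutorial builds on.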
Within these notebooks you find additional comments and explanations. Therefore, the blog here gives only a high-level summary of the different notebooks.
Data upload
The notebook "00 Preparation" loads the data from autos.csv first into a local pandas data frame and does some data preparation before using the hana_ml wrapper to save the data to SAP HANA. This data will be used to train ML models. The notebook also creates a second table in SAP HANA which contains additional cars, on which we will apply the trained model to predict the price.
Introduction to Python wrapper
Before going into a longer and more realistic project, run the notebook "05 Introduction" to train a very simple model through the hana_ml wrapper. If you are comfortable with the steps in this notebook, you have already got the hang of it!
Exploratory Data Analysis
With notebook "10 Exploratory Data Analysis" you start a more comprehensive project. You will explore and filter the data. The transformed data is saved as a view to SAP HANA, which will be used in the following notebook. As the transformation is saved as a view, no data got duplicated!
Imputation and Model Training
With "20 Imputation and model training" the data gets transformed further, and missing data is imputed. The data is split into train and test sets. These data sets are used to train different decision trees and to test the model's quality. The best model is chosen and an error analysis is carried out to see how the model performed in different areas. The model is then saved to SAP HANA.
Apply the trained model / predict
In the first notebook ("00 Preparation") we created a table in SAP HANA with cars whose price we want to predict. The moment has come! Run "30 Apply saved model" to load the model that was created in the previous notebook ("20 Imputation and model training"). Then apply the model on these cars and see how the difference in mileage affects the price of the cars.
Tidy up
Optionally, if you want to delete the tables and views that have been created, you can run the Notebook "40 Tidy up".
SAP Data Intelligence
This chapter is only relevant for those who have access to SAP Data Intelligence. The above notebooks can also be executed within the interface of SAP Data Intelligence. Ideally you should already have some familiarity with SAP Data Intelligence, i.e. by having read through or even implemented your first ML Scenario. You need to follow these steps to run the Notebooks in SAP Data Intelligence:
- Prepare a SAP HANA system as described in the chapter "SAP HANA access and configuration".
- In SAP Data Intelligence create a new connection of type HANA_DB to that SAP HANA instance. Name the connection "di_hana_hxe". The "Host" is the SAP HANA's IP address or server name. As "Port" enter the SQL port (see above for SQL_PORT). Specify the ML user and the corresponding password.
- In the "ML Scenario Manager" create a new "ML Scenario" named "Car price prediction blog"
- Create a Notebook called "dummy", just to open the JupyterLab interface. If you get prompted for a kernel, select "Python 3".
- Import the five Notebooks and the data file autos.csv (in the "File Browser" on the left).
- In each Notebook change the command that logs on to SAP HANA, since now the connection "di_hana_hxe" needs to be used.
Replace this line:
conn = dataframe.ConnectionContext(userkey='hana_hxe')
with
from notebook_hana_connector.notebook_hana_connector import NotebookConnectionContext
conn = NotebookConnectionContext(connectionId = 'di_hana_hxe')
Only for Notebook "40 Tidy up" replace the existing command with
from notebook_hana_connector.notebook_hana_connector import NotebookConnectionContext
conn = NotebookConnectionContext(connectionId = 'di_hana_hxe').connection
- And finally, you may need to update the hana_ml package. Currently (October 2019) the version that comes with SAP Data Intelligence is not at the version that is required for this blog's notebooks. Uninstall the current version with:
!pip uninstall hana_ml --yes
Install the latest version of the library with:
!pip install hana_ml
Restart the Python kernel through the JupyterLab menu (in the "Kernel" menu select "Restart Kernel…").
Now you should be good to run all Notebooks as explained above.
Deployment into production
Now we have a model that we can manually work with. To bring the predictions into an ongoing business process you can leverage SAP Data Intelligence to retrain and apply the model as needed. With SAP Data Intelligence you can script in Jupyter Notebooks without having to install any components on your own computer. The code you want to bring into the productive process can be deployed through graphical pipelines, which help IT to run the code in a governed framework. This sounds like a topic for another day and another blog. Update June 2, 2020: That day has come and here is the blog, which focusses on deploying HANA ML through graphical pipelines in SAP Data Intelligence.
Conclusion
Well done! If you have read the blog this far, you have an understanding of how Machine Learning can be carried out within SAP HANA. If you have followed this guide hands-on, you now even have experience with Machine Learning in SAP HANA and are ready to experiment with your own data!
Great! When will SAP Data Intelligence be available to try?
You can find details about a SAP Data Intelligence trial here:
Hello Rodrigo, does your company have a Cloud Platform Enterprise Agreement (CPEA)? This would allow you to start SAP Data Intelligence yourself. Alternatively, I suggest reaching out to your Account Executive at SAP.
There's a small typo when testing for the installed hana_ml version. Currently, it reads: but it should be
Hello Lars Breddemann, thank you for the heads up. In the "00 Preparation" notebook the code seems fine. Where did you see this?
That piece of code was from this very blog post. The tutorial files worked all fine for me.
Ah got it, thank you for spotting this, Lars. It's now corrected.
Thank you for this really nice tutorial!
Hi Andreas, while trying to install the HANA Client Library using the following command in a Jupyter Notebook, I am getting the following error.
ERROR: Could not find a version that satisfies the requirement hdbcli (from hana-ml==1.0.7) (from versions: none)
ERROR: No matching distribution found for hdbcli (from hana-ml==1.0.7)
Hi Aniruddha, a new version of hdbcli (2.4.171) has just been released on pypi. Maybe hana_ml 1.0.7 is not compatible with that release. Please try installing the hdbcli version that comes with the HANA client:
!pip install "C:\Program Files\SAP\hdbclient\hdbcli-2.4.151.zip"
Thanks Andreas; I got an error after trying to install hdbcli-2.4.151.zip.
hdbcli does not support 32-bit (x86) python
It looks like hdbcli doesn't support 32-bit Python; do I uninstall my 32-bit Anaconda and reinstall 64-bit Anaconda? Thanks, Aniruddha
Could it be that you downloaded the 64-bit HANA client? If that's the case you can try downloading the 32-bit HANA client. Or use 64-bit Anaconda.
Hi Andreas, this is one of the best SAP PA blogs explaining all the details, and it worked from the very first step. Many thanks, it was a pleasure to go through the Jupyter Notebook files. Looking forward to your next blog! KR Angelo
Hi Andreas, thanks for the detailed explanation. Could you please explain to me: without SAP Data Intelligence, can I train the PAL machine learning models using JupyterLab and SAP HANA, which you have explained initially?
Hello Jwala, yes absolutely. The six Notebooks that you can download here are designed to be used in a locally installed JupyterLab environment, without SAP Data Intelligence. The deployment of PAL algorithms through Data Intelligence is explained in this blog:
Hi Andreas, what approach for production deployment do you suggest when you have an algorithm in R (catboost) which is not shipped with PAL? I am thinking about a dedicated R server, but the problem is that in SAP Note 2185029 only version R 3.2 is supported (the last SAP Note update was 2018), so it seems that SAP may not support RServe any further? Thanks, Kevin
Hello Kevin, the SAP Note 2185029 is referring to an R server that can be connected directly to HANA. Data Intelligence also supports R, which is independent of the HANA + R option. Currently DI is supporting "R 3.3.3 and 3.5.1". The HANA data could then stream to DI on prem or cloud, the R code is executed and the results can be written to HANA or elsewhere. Here are two blogs on using R in Data Intelligence. Andreas
Thanks for the answer. I should have mentioned before that SAP DI is currently not a fast option for us, maybe long term. The only approach that comes to my mind is: over the SAP ML API, run the R script on a dedicated server. So first transfer the HANA data to the server where the R script runs; the output then comes back to HANA. The consequence: the control flow is situated on the server where the R script is running.
Hi Andreas, I'm getting an issue while running the imputer function; it's throwing an error from your code. I'm using the code below and running on Python 3.7, connected to a HANA Database 2.0 via GCP. But I'm getting the error below; it's been a while that I have been trying to resolve it but I am not able to. Can you please help?
TypeError: __init__() got an unexpected keyword argument 'conn_context'
Looks like the __init__ function is not accepting the conn as a parameter. Help will be appreciated. Thanks, Rubane
Hi Rubane, a new version of hana_ml was released recently, which is treating the connection context differently. I assume you are on the new version 2.5.x. As a quick fix, try uninstalling that version, then install with pip install hana-ml==1.0.8.post11. Please let me know if this is working and I will add a comment to the blog. Andreas
Thanks Andreas. The error is gone, but the imputer function is not replacing null values. I ran the code for the imputer as you described in the blog, but upon checking it is still showing no change; in other words, it doesn't change null values. I checked at the backend too, there are still null values there. Not sure why it's not working. One observation: the code below doesn't filter null gearboxes, it needs to be changed to ==.
df_pushdown_history.filter('GEARBOX IS NULL').head(5).collect()
But anyway, this is just for retrieving results, nothing to do with the imputer.
Please let me know if you have any suggestions. Thanks
Hi Rubane, the describe() output shows only 0s in the nulls column. That means the hana_ml dataset does not contain null values anymore. It might be that the GEARBOX is an empty string. Imputing through hana_ml does not change the underlying data in the physical table. This is on purpose, as the original data might be used for other purposes. You can persist the changes, but that's an extra save command. Maybe during the upload to HANA the gearbox was already saved with an empty string, or maybe an earlier imputation was done on the hana_ml DataFrame, where nulls were replaced with that empty string?
Hi Andreas, I've not changed any dataset since the imputer wasn't working earlier... I do remember running the imputer without conn_context but I am not sure whether it had done anything. I see a lot of blank values in HANA still.
Hi Rubane, this looks like an empty string, not a null value. It makes sense if hana_ml is not finding any nulls here. Maybe the new hana_ml version is replacing nulls from the csv with empty strings during upload. You should be able to continue the hands-on with that table. Or you can try uploading the csv again into a new table (different table name). Since you are now on 1.0.8.post11 you might get the null values as shown in the Notebooks.
Thanks Andreas. I used the same data and it worked. Appreciate your help! Please let me know if you have any other hands-on blogs on ML. It's very useful for learning. Looks like we need to learn Python now for data science.
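As an aside for readers following this thread: the "extra save command" mentioned above is not spelled out in the posts, so here is a minimal sketch of what persisting imputed data could look like. The table name is made up, and it assumes an open ConnectionContext conn and an imputed hana_ml DataFrame df_imputed as in the notebooks.
# Minimal sketch (not from the blog): persist an imputed hana_ml
# DataFrame into a new table so the changes survive the session.
# `conn` is an open ConnectionContext, `df_imputed` is the DataFrame
# returned by the imputation step, and the table name is hypothetical.
df_imputed.save('USED_CARS_IMPUTED')

# Read the persisted table back to verify the rows
print(conn.table('USED_CARS_IMPUTED').head(5).collect())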
13 August 2012 05:10 [Source: ICIS news] SINGAPORE (ICIS)--China's Bluestar Petrochemical restarted its butadiene (BD) unit on 12 August after an unexpected shutdown on 8 August, a company source said, without disclosing the reason for the shutdown. "The BD plant's operating rate is about 80% of capacity, following yesterday's restart," the source added. Bluestar Petrochemical's on-stream unit will increase the BD supply in the domestic market, a market player said. The company is selling its BD cargoes at yuan (CNY) 17,500/tonne ($2,752/tonne) EXW (ex-works) on 13 August, down by CNY1,000/tonne from 8 August, according to Chemease, an ICIS service in China. The BD prices may decline further because demand from the downstream butadiene rubber (BR) and styrene butadiene rubber (SBR) sectors remains weak.
What about GD::Graph and related modules in that namespace? A number of years ago I was using GD::Graph to emit daily reports from overnight regression tests. I was writing the graphs to the filesystem, which were then served up as static files by Apache; however, if you prefer, you can generate your graph within a CGI script and serve it up directly in response to a CGI query. I believe that it is possible to get GD::Graph to emit graphs in SVG form (instead of bitmap) by passing it a GD::SVG render surface, though I have never tried that. Back in the day, I submitted a patch related to how graph axes were generated and where the tick marks would appear.
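For anyone wanting the skeleton of that setup, a minimal GD::Graph script writing a PNG to the filesystem might look like this (a sketch from memory of the GD::Graph API; the data and the file name are invented):
# Sketch only: write a simple line chart to disk, as described above.
use strict;
use warnings;
use GD::Graph::lines;

my $graph = GD::Graph::lines->new(400, 300);
my @data = (
    [qw(Mon Tue Wed Thu Fri)],   # x-axis labels
    [12, 9, 14, 11, 15],         # e.g. nightly passing-test counts
);
my $gd = $graph->plot(\@data) or die $graph->error;

open my $fh, '>', 'nightly_report.png' or die "open: $!";
binmode $fh;
print {$fh} $gd->png;
close $fh;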
Noxive Ok, finally solved it. I missed this fragment in the docs: Overriding the default provided on_message forbids any extra commands from running. To fix this, add a bot.process_commands(message) line at the end of your on_message. Thanks everyone for the help.
Noxive @JonB Tried the function, nothing: no response on the server or in the terminal
@bot.command()
async def test(ctx):
    print("test Called")
    await ctx.send('test')
I changed the client.event to bot.event; the on_message event and on_ready work fine. The command still doesn't respond.
import discord
from discord.ext.commands import Bot

TOKEN = 'MyToken'
bot = Bot(command_prefix='$')

@bot.command()
async def test(ctx):
    print("test Called")
    await ctx.send('test')

@bot.event
async def on_ready():
    print("We have logged in as: {0.user}".format(bot))

@bot.event
async def on_message(message):
    if message.author == bot.user:
        return
    if message.content.startswith("shrug"):
        print("shrug")
        await message.channel.send('¯\_(ツ)_/¯')

bot.run(TOKEN)
Noxive @ellie_ff1493 I did. Nothing happens. Like the command event never even occurred. @ccc @cvp I deleted all the comments and PyDoc; there is no change.
Noxive Ok, so I'm trying to create a Discord bot and I can't figure out why my commands don't work. The on_message() events work properly: if I type shrug the bot sends the emoji, but when I try to call the $test command the bot doesn't respond, literally no errors and no response on the channel. I've spent like 2 hours reading the documentation and everything is ok, but it's not.
Python 3.6.8, discord-py 1.2.2. Please somebody help.
import discord
from discord.ext.commands import Bot

TOKEN = 'Here goes my token'
client = discord.Client()
bot = Bot(command_prefix='$')
""" Command prefix is marked red in PyCharm"""

@bot.command()
async def test(ctx, arg):
    await ctx.send(arg)

""" After the bot logging on to the server on_ready event occurs and prints the name of the logged in Bot """
@client.event
async def on_ready():
    print("We have logged in as: {0.user}".format(client))

""" When the message is sent event on_message() occurs """
@client.event
async def on_message(message):
    """ If the message is from the bot nothing happens. ... When the user inputs $hello bot sends Hello! message on the channel. ... For the message = 'shrug' bot sends shrug emoji in ASCII onto the channel. """
    if message.author == client.user:
        return
    if message.content.startswith('$hello'):
        await message.channel.send("Hello!")
    if message.content.startswith("shrug"):
        await message.channel.send('¯\_(ツ)_/¯')

client.run(TOKEN)
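Putting the accepted fix together, the working version would look roughly like this (a sketch; the prefix, token placeholder and shrug handler are carried over from the posts above):
# Sketch of the fix described at the top of the thread: keep the custom
# on_message handler, but hand the message back to the command processor
# so @bot.command() handlers still run.
from discord.ext.commands import Bot

bot = Bot(command_prefix='$')

@bot.command()
async def test(ctx):
    await ctx.send('test')

@bot.event
async def on_message(message):
    if message.author == bot.user:
        return
    if message.content.startswith('shrug'):
        await message.channel.send('¯\\_(ツ)_/¯')
    # Without this line, overriding on_message swallows all commands:
    await bot.process_commands(message)

bot.run('MyToken')  # placeholder token from the posts above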
java.lang.Object
  java.util.EventObject
    javax.sql.RowSetEvent

public class RowSetEvent

An Event object generated when an event occurs to a RowSet object. A RowSetEvent object is generated when a single row in a rowset is changed, the whole rowset is changed, or the rowset cursor moves. When an event occurs on a RowSet object, one of the RowSetListener methods will be sent to all registered listeners to notify them of the event. An Event object is supplied to the RowSetListener method so that the listener can use it to find out which RowSet object is the source of the event.

public RowSetEvent(RowSet source)
Constructs a RowSetEvent object initialized with the given RowSet object.
Parameters: source - the RowSet object whose data has changed or whose cursor has moved

Copyright © 2004, 2010 Oracle and/or its affiliates. All rights reserved. Use is subject to license terms. Also see the documentation redistribution policy.
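To illustrate how the class is consumed, a listener might look like this (a sketch, not part of the Javadoc; the log messages are invented):
// Sketch: a RowSetListener inspecting the RowSetEvent source, as the
// class description above outlines.
import javax.sql.RowSet;
import javax.sql.RowSetEvent;
import javax.sql.RowSetListener;

public class LoggingRowSetListener implements RowSetListener {
    @Override
    public void rowSetChanged(RowSetEvent event) {
        // Find out which RowSet object is the source of the event
        RowSet source = (RowSet) event.getSource();
        System.out.println("Whole rowset changed: " + source);
    }

    @Override
    public void rowChanged(RowSetEvent event) {
        System.out.println("Single row changed");
    }

    @Override
    public void cursorMoved(RowSetEvent event) {
        System.out.println("Cursor moved");
    }
}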
Running tests
Kolla-ansible contains a suite of tests in the tests directory. Any proposed code change in gerrit is automatically rejected by the Zuul CI system if the change causes test failures. Tox manages the installation of the dependencies listed in the test-requirements.txt and doc/requirements.txt files, so the only package you install is tox itself:
pip install tox
For more information, see the unit testing section of the Testing wiki page. For example:
To run the default set of tests:
tox
To run the Python 3.8 tests:
tox -e py38
To run the style tests:
tox -e linters
To run multiple tests, separate items by commas:
tox -e py38,linters
Running a subset of tests
Instead of running all tests, you can specify an individual directory, file, class or method that contains test code, i.e. filter full names of tests by a string.
To run the tests located only in the kolla-ansible/tests directory use:
tox -e py38 kolla-ansible.tests
To run the tests of a specific file kolla-ansible/tests/test_kolla_docker.py:
tox -e py38 test_kolla_docker
To run the tests in the ModuleArgsTest class in the kolla-ansible/tests/test_kolla_docker.py file:
tox -e py38 test_kolla_docker.ModuleArgsTest
To run the ModuleArgsTest.test_module_args test method in the kolla-ansible/tests/test_kolla_docker.py file:
tox -e py38 test_kolla_docker.ModuleArgsTest.test_module_args
Debugging unit tests
In order to break into the debugger from a unit test we need to insert a breakpoint into the code:
import pdb; pdb.set_trace()
Then run tox with the debug environment as one of the following:
tox -e debug
tox -e debug test_file_name.TestClass.test_name
For more information, see the oslotest documentation.
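For a concrete picture of the debugging workflow, a breakpoint dropped into a test would look like this (a made-up test module, not from the kolla-ansible tree):
# Hypothetical test module, e.g. kolla_ansible/tests/test_example.py.
# Running `tox -e debug test_example.ExampleTest.test_addition` would
# pause execution at the pdb.set_trace() line.
from unittest import TestCase


class ExampleTest(TestCase):
    def test_addition(self):
        result = 1 + 1
        import pdb; pdb.set_trace()  # drops into the interactive debugger
        self.assertEqual(result, 2)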
Date posted: 2017-10-09
In my last post I talked about some optimizations that the C compiler can do to our code. This time, I will talk a little about vectorization and how the compiler can also do it automatically. First, I want to talk about what vectorization is. Suppose we have the following code:
short a[] = { 1, 1, 2, 2 };
short b[] = { 3, 4, 4, 3 };
short c[4];
Now, say we want to sum the elements of these arrays (1 + 3, 1 + 4, 2 + 4, and 2 + 3) into the array c. How could we do this? A simple approach is to do the following:
for (size_t i = 0; i < 4; i++)
    c[i] = a[i] + b[i];
This is great, but we are not being as efficient as we could be. Let's think about memory for a bit: a short is typically 2 bytes long (16 bits), and words for modern processors are 64 bits long. This means that for every iteration, we are loading 16 bit numbers into 64 bit registers:
I am going to use decimal notation for some examples, because it is much easier to read
// Imagine we are loading the numbers "4" and "1" into registers
// in order to add them. This is what the compiler would do:
16 bit 16 bit 16 bit 16 bit
|----------------|----------------||----------------|----------------|
4 0 0 0 // <- a[]
1 0 0 0 // <- b[]
5 0 0 0 // <- c[] (result)
This is a lot of wasted space. If we somehow loaded more numbers into that wasted space, we would be doing 4 operations at a time, instead of only 1:
16 bit 16 bit 16 bit 16 bit
|----------------|----------------||----------------|----------------|
1 1 2 2 // <- a[]
3 4 4 3 // <- b[]
4 5 6 5 // <- c[] (result)
This would be great! We could theoretically align the numbers in memory (or not, if they are already aligned), load them into the register, and do these operations in parallel! But there is a catch: if the numbers are large enough, they will overflow to the left, ruining the whole operation:
// Third lane: 0 + 0; fourth lane: 65535 + 1 (* assuming unsigned)
16 bit 16 bit 16 bit 16 bit
|----------------|----------------||----------------|----------------|
???????????????? ???????????????? 0000000000000000 1111111111111111
???????????????? ???????????????? 0000000000000000 0000000000000001
???????????????? ???????????????? 0000000000000000 0000000000000000 // <- what we want
???????????????? ???????????????? 0000000000000001 0000000000000000 // <- what we get
// ^- This should not be here!
Checking for this pitfall manually would be too tedious and too complicated - it would likely make the process slow again. It would just not be worth it. However, modern CPUs have special registers for this kind of operation, they are called vector registers: we can tell them how long our data is, and they will avoid this overflow. Since they are optimized for a high volume of data, they can process larger words, like 128 bits instead of 64. The process of using these registers to make operations in parallel is called vectorization - it is great for applying single instructions to multiple data (SIMD). Luckily, compilers like GCC have built-in vectorization - this means that if we are using optimization, the compiler will try to vectorize operations when it finds necessary. To make this possible, however, we need to adopt some good practices. These practices will make sure that the vectorization process is easy and will not carry extra overhead, making it too complicated and ruining the performance. The two main practices I would like to talk about are memory alignment and data dependencies.
1. Memory alignment
Memory is just a long strip of data.
Say that this is what our memory looks like, with both our arrays in it:
Word 1 Word 2
|---------------------------------------|---------------------------------------|
0x0 0x1 0x2 0x3 0x4 0x5 0x6 0x7 0x8 0x9 0xA 0xB 0xC 0xD 0xE 0xF
|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|
? ? 1 1 2 2 ? ? ? 3 4 4 3 ? ? ?
Cool. Now let's try loading both words in vector registers to add them together:
|----|----|----|----|----|----|----|----|
? ? 1 1 2 2 ? ?
? 3 4 4 3 ? ? ?
Well, that is a problem: they are misaligned. We can't do the vectorization here properly. Well, we can, but it would take some time to properly align the memory in order to make the vectorization easy. This is why I said memory alignment is a problem. However, C is a really cool language, and it has some tools we can use to align the memory:
short a[] __attribute__((__aligned__(16))) = { 1, 1, 2, 2 };
short b[] __attribute__((__aligned__(16))) = { 3, 4, 4, 3 };
It is not very pretty, but it will align our arrays: we are telling the compiler to put the start of the array at any address that is a multiple of 16. Why 16? Because 16 bytes is 128 bits, which will align our memory right at the beginning of every 128 bit word for the vector registers. Notice that even if we are using those attributes, our data can still be misaligned. For example, if this is our operation:
for (size_t i = 0; i < 4; i++)
    c[i] = a[i] + b[i + 1];
We don't want a[i] to be aligned with b[i], we want a[i] to be aligned with b[i + 1]! We must keep this in mind when aligning our values. C structs are a good way to keep variables together. For example:
struct {
    int a;
    char b;
    short c;
};
These three variables would be declared sequentially in memory. Assuming an int of 32 bits, a char of 8 bits, and a short of 16 bits, we would have 56 bits in total. We align these structs in memory so they will occupy 64 bits each. However, if we were to vectorize the sum of the member "a" from several of these structs, we would run into another problem: the member "a" of two such structs would be too far apart in memory, and we will not be able to fit more than one in the same vector register easily. For this reason, it is recommended to have objects of arrays instead of arrays of objects if you are planning to vectorize their values: keep the values you want to vectorize close to each other.
2. Data dependencies
Say that we are doing this operation in the loop, and it will be vectorized:
a[i + 1] = a[i] + b[i];
This is a big problem: in a normal loop, we would be modifying the next value in the array, which is totally fine. However, since we will be working with a[i] and a[i + 1] at the same time, this would not be possible. Data dependencies like this can make the vectorization process too difficult or impossible.
3.
Other practices Other practices that will make life easier for the compiler to vectorize our code (but I won't spend too much time talking about) are: - Make the number of iterations easily countable - Having the loop as single entry and single exit (with no breaks, continues, etc) - Avoid branches, like switches and functions - Use the loop index ("i") for accessing the array Vectorizing I wrote the following application in C: #include <stdio.h> #include <stdlib.h> int main() { srand(1); int a1[1000] __attribute__((__aligned__(16))); int a2[1000] __attribute__((__aligned__(16))); int a3[1000] __attribute__((__aligned__(16))); int sum = 0; for (int i = 0; i < 1000; i++) { a1[i] = (rand() % 2000) - 1000; a2[i] = (rand() % 2000) - 1000; } for (int i = 0; i < 1000; i++) { a3[i] = a1[i] + a2[i]; } for (int i = 0; i < 1000; i++) { sum += a3[i]; } printf("Sum: %d ", sum); return 0; } The idea is simple: 3 arrays, load two arrays with random numbers, add the numbers into the third array, add the numbers from the third array into a variable "sum", print the sum, exit. I compiled the code above in an Aarch64 machine, with the flag -O3 (optimized), and here is the result in assembly: I cleaned up the code a little bit, removing some useless comments and leaving part of the debugging source. I also added some observations to what is important. /* Just some initialization stuff */ mov x16, #0x2f20 sub sp, sp, x16 /* Seeding the random generator */ mov w0, #0x1 stp x29, x30, [sp] mov x29, sp stp x19, x20, [sp,#16] /* for (int i = 0; i < 1000; i++) { */ /* a1[i] = (rand() % 2000) - 1000; */ mov w20, #0x4dd3 stp x21, x22, [sp,#32] /* a1[i] = (rand() % 2000) - 1000; */ movk w20, #0x1062, lsl #16 str x23, [sp,#48] add x22, x29, #0x40 add x21, x29, #0xfe0 /* a1[i] = (rand() % 2000) - 1000; */ mov w19, #0x7d0 mov x23, #0x0 bl 400510 <srand@plt> /* a22,x23] /* a21,x23] add x23, x23, #0x4 /* for (int i = 0; i < 1000; i++) { */ cmp x23, #0xfa0 b.ne 40056c <main+0x3c> mov x2, #0x1f80 add x1, x29, x2 mov x0, #0x0 /* } */ /* for (int i = 0; i < 1000; i++) { */ /* a3[i] = a1[i] + a2[i]; */ ldr q0, [x22,x0] ldr q1, [x21,x0] add v0.4s, v0.4s, v1.4s /* <--- Notice the name of these strange registers! */ str q0, [x1,x0] add x0, x0, #0x10 cmp x0, #0xfa0 b.ne 4005bc <main+0x8c> movi v0.4s, #0x0 /* <--- WOW!!!! */ mov x0, x1 mov x1, #0x2f20 add x1, x29, x1 /* } */ /* for (int i = 0; i < 1000; i++) { */ /* sum += a3[i]; */ ldr q1, [x0],#16 add v0.4s, v0.4s, v1.4s /* <--- O NO!!!!!!! */ cmp x0, x1 b.ne 4005e8 <main+0xb8> /* } */ /* printf("Sum: %d ", sum); */ addv s0, v0.4s adrp x0, 400000 <_init-0x498> add x0, x0, #0x7f0 mov w1, v0.s[0] /* <--- SEND HELP!!!!!!!!!!!!! */ bl 400520 <printf@plt> /* return 0; */ /* } */ ldr x23, [sp,#48] ldp x19, x20, [sp,#16] mov w0, #0x0 ldp x21, x22, [sp,#32] mov x16, #0x2f20 ldp x29, x30, [sp] add sp, sp, x16 ret .inst 0x00000000 ; undefined If the compiler was not optimizing this code, we would see 3 loops in total, but it does not happen here: the loops get merged together, and they also get unrolled. You can see more details about this in my previous post. The most important thing, however, are those weird registers, like v0.4s. You will never guess what that v in the name of the register stand for in this blog post about vector registers. They are vector registers! The compiler optimized the code so it would make use of vector registers to make the operations! 
The name v0.4s refers to the first (index 0) vector register, dividing it into 4 lanes of 32 bits (that's the meaning of the "s") each. This means that we can fit 4 pieces of data with 32 bits each in each vector. You can find more information about this naming convention here. Now, let's take a closer look at this part: /* Load the Quadword 0 (128 bit register) with the content of x22 + x0 */ ldr q0, [x22,x0] /* Load the Quadword 1 (128 bit register) with the content of x21 + x0 */ ldr q1, [x21,x0] /* Add vector 0 + vector 1 (both), storing the result in vector 0 */ add v0.4s, v0.4s, v1.4s /* Store the content of Quadword 0 (our result) into x1 + x0 */ str q0, [x1,x0] And there it is. Our vectorized code. Now, I also compiled the same code in -O0, which should NOT vectorize the code - and indeed, it was not vectorized. Here it is, notice that there are no vector registers being used here: mov x16, #0x2f00 sub sp, sp, x16 stp x29, x30, [sp] mov x29, sp /* srand(1); */ mov w0, #0x1 bl 400510 <srand@plt> /* int a1[1000] __attribute__((__aligned__(16))); */ /* int a2[1000] __attribute__((__aligned__(16))); */ /* int a3[1000] __attribute__((__aligned__(16))); */ /* int sum = 0; */ str wzr, [x29,#12028] /* for (int i = 0; i < 1000; i++) { */ str wzr, [x29,#12024] b 4006f0 <main+0xbc> /* a1, lsl #12 add x1, x1, #0xf50 str w2, [x1,x0] /* afb0 str w2, [x1,x0] /* for (int i = 0; i < 1000; i++) { */ ldr w0, [x29,#12024] add w0, w0, #0x1 str w0, [x29,#12024] ldr w0, [x29,#12024] cmp w0, #0x3e7 b.le 400658 <main+0x24> /* } */ /* for (int i = 0; i < 1000; i++) { */ str wzr, [x29,#12020] b 400748 <main+0x114> /* a3[i] = a1[i] + a2[i]; */ ldrsw x0, [x29,#12020] lsl x0, x0, #2 add x1, x29, #0x1, lsl #12 add x1, x1, #0xf50 ldr w1, [x1,x0] ldrsw x0, [x29,#12020] lsl x0, x0, #2 add x2, x29, #0xfb0 ldr w0, [x2,x0] add w2, w1, w0 ldrsw x0, [x29,#12020] lsl x0, x0, #2 add x1, x29, #0x10 str w2, [x1,x0] /* for (int i = 0; i < 1000; i++) { */ ldr w0, [x29,#12020] add w0, w0, #0x1 str w0, [x29,#12020] ldr w0, [x29,#12020] cmp w0, #0x3e7 b.le 400704 <main+0xd0> /* } */ /* for (int i = 0; i < 1000; i++) { */ str wzr, [x29,#12016] b 400784 <main+0x150> /* sum += a3[i]; */ ldrsw x0, [x29,#12016] lsl x0, x0, #2 add x1, x29, #0x10 ldr w0, [x1,x0] ldr w1, [x29,#12028] add w0, w1, w0 str w0, [x29,#12028] /* for (int i = 0; i < 1000; i++) { */ ldr w0, [x29,#12016] add w0, w0, #0x1 str w0, [x29,#12016] ldr w0, [x29,#12016] cmp w0, #0x3e7 b.le 40075c <main+0x128> /* } */ /* printf("Sum: %d ", sum); */ adrp x0, 400000 <_init-0x498> add x0, x0, #0x870 ldr w1, [x29,#12028] bl 400520 <printf@plt> /* return 0; */ mov w0, #0x0 /* } */ ldp x29, x30, [sp] mov x16, #0x2f00 add sp, sp, x16 ret .inst 0x00000000 ; undefined
#include "petscdmplex.h"
#include "petscdmlabel.h"

PetscErrorCode DMPlexCreateHybridMesh(DM dm, DMLabel label, DMLabel bdlabel, DMLabel *hybridLabel, DMLabel *splitLabel, DM *dmInterface, DM *dmHybrid)

Collective on dm

Note: The hybridLabel indicates what parts of the original mesh impinged on the division surface. For points directly on the division surface, they are labeled with their dimension, so an edge 7 on the division surface would be 7 (1) in hybridLabel. For points that impinge from the positive side, they are labeled with 100+dim, so an edge 6 with one vertex 3 on the surface would be 6 (101) and 3 (0) in hybridLabel. If an edge 9 from the negative side of the surface also hits vertex 3, it would be 9 (-101) in hybridLabel. The splitLabel indicates what points in the new hybrid mesh were the result of splitting points in the original mesh. The label value is ±(100+dim) for each point. For example, if two edges 10 and 14 in the hybrid mesh resulted from splitting an edge in the original mesh, you would have 10 (101) and 14 (-101) in the splitLabel. The dmInterface is a DM built from the original division surface. It has a label which can be retrieved using DMPlexGetSubpointMap() which maps each point back to the point in the surface of the original mesh.
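As an illustration of how the call fits together (a sketch, not from the man page: the label name "fault" is invented, and passing NULL for the optional boundary label is an assumption about this signature):
/* Sketch: split a DMPlex mesh along a labeled surface, following the
 * signature above. Assumes `dm` already holds a DMPlex and that a
 * DMLabel named "fault" marks the division surface. */
#include "petscdmplex.h"
#include "petscdmlabel.h"

PetscErrorCode SplitAlongFault(DM dm, DM *dmHybrid)
{
  DMLabel        label, hybridLabel, splitLabel;
  DM             dmInterface;
  PetscErrorCode ierr;

  ierr = DMGetLabel(dm, "fault", &label);CHKERRQ(ierr);
  ierr = DMPlexCreateHybridMesh(dm, label, NULL, &hybridLabel, &splitLabel,
                                &dmInterface, dmHybrid);CHKERRQ(ierr);
  /* ... use hybridLabel / splitLabel to locate the split points ... */
  ierr = DMDestroy(&dmInterface);CHKERRQ(ierr);
  return 0;
}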
1.0 Introduction
A C program comprises global data and functions. A program must have a main function and the execution starts at the first statement in the main function. A function has local data and statements. The control flow deals with the order in which statements are executed by a program. In this post, we will take a close look at control statements and functions.
2.0 Statements
A statement is an expression terminated with a semicolon. A block comprises zero or more statements enclosed between a pair of braces. Syntactically, a block can be used in all places where a single statement is used. The scope of variables defined in a block ranges from the point of definition to the closing brace of the block. For example, consider the program below.
#include <stdio.h>
#include <stdlib.h>

int main ()
{
    for (int i = 0, sum = 0; i < 10; i++) {
        sum += i;
        if (i == 9)
            printf ("Sum = %d\n", sum);
    }
}
The scope of variables i and sum is the block associated with the for loop.
3.0 If statement
The if statement has the following form.
if (expression) statement else statement
A statement, above, can be a single statement, or a block comprising zero or more statements inside matching braces. If expression evaluates as non-zero, the statement(s) just after if are executed. Otherwise, statement(s) after else are executed. The else part is optional. And, the statement after if or else can itself be another if statement. Consider the following code.
int i = 8;
int j = 10;
if (i == 7)
    if (j == 10)
        printf ("One\n");
    else
        printf ("Two\n");
The question is whether else is paired with the first if or the second if. The answer is that an else is paired with the most recent if. So, it is paired with the second if. And since the second if is the statement to be executed only if the expression for the first if holds true, and since i is 8, the expression for the first if is false and the code does not print anything. If we want to pair the else with the first if, we need to put braces around the statement after the first if as shown below.
int i = 8;
int j = 10;
if (i == 7) {
    if (j == 10)
        printf ("One\n");
} else
    printf ("Two\n");
An interesting usage of the if statement is when it is used as the statement in the else part of the previous if. Consider the following usage.
if (expr-1) statement-1;
else if (expr-2) statement-2;
else if (expr-3) statement-3;
...
else statement-n;
This usage is known as the if-else-if. The execution starts with the first if. As soon as an expression evaluates as non-zero, the statement(s) after that if are executed and the execution of the entire if-else-if is over. If none of the expressions evaluate non-zero, the statement(s) after the last else, if present, are executed. For example, consider the following code for printing the day of the week.
void print_day (int day)
{
    if (day == 0)
        printf ("Sunday\n");
    else if (day == 1)
        printf ("Monday\n");
    else if (day == 2)
        printf ("Tuesday\n");
    else if (day == 3)
        printf ("Wednesday\n");
    else if (day == 4)
        printf ("Thursday\n");
    else if (day == 5)
        printf ("Friday\n");
    else if (day == 6)
        printf ("Saturday\n");
    else
        printf ("Error\n");
}
4.0 Switch statement
A switch statement is of the form,
switch (integer-expression) {
    case constant-1 :
        statement;
        statement;
        ...
        break;
    case constant-2 :
        statement;
        statement;
        ...
        break;
    ...
    default:
        statement;
        statement;
        ...
}
It starts with the switch keyword, followed by an integer expression in parentheses. Then, there are multiple case statements inside braces.
The integer expression is evaluated and the control goes on to the case statement with constant equal to the value of the integer expression. Often, a default option is put after all the case statements and the control goes to the default if the value of the integer expression does not match any of the constants with the case statements. The cases and the default can be in any order and the default is optional. The control flow goes on till the closing brace. If a break statement is found, the control immediately goes to the closing brace. However, if there are no break statements, the control flow "falls through", that is, all the following statements are executed, including those that are next to other cases. It is quite common to group case statements with common processing. For example, suppose you wish to find the number of days in a month, a la, "thirty days hath September, ...".
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

enum {January = 1, February, March, April, May, June, July, August,
      September, October, November, December};

int days_in_month (int month, bool leap)
{
    switch (month) {
        case January:
        case March:
        case May:
        case July:
        case August:
        case October:
        case December:
            return 31;
        case February:
            return (leap ? 29 : 28);
        case April:
        case June:
        case September:
        case November:
            return 30;
        default:
            return -1;
    }
}

int main ()
{
    printf ("January = %d\nFebruary = %d\nFebruary (leap year) = %d\nDecember = %d\n",
            days_in_month (January, 0), days_in_month (February, 0),
            days_in_month (February, 1), days_in_month (December, 0));
}
In the above program, break statements are not necessary, because the return statements, anyway, break the control flow and return to the calling function. We can compile and run this program, as below.
$ gcc try.c -o try
$ ./try
January = 31
February = 28
February (leap year) = 29
December = 31
The switch statement is often a substitute for the if-else-if construct.
5.0 Loops
There are three loops: the while loop, the for loop and the do-while loop.
5.1 The while loop
while (expression) statement
The while loop starts with the while keyword, followed by an expression in parentheses. Then, there is the statement, which may be a single statement or a block comprising zero or more statements. For each iteration, the expression is evaluated. If the expression evaluates as zero, the while loop terminates. Otherwise, that is, if the expression evaluates as non-zero, another iteration in the loop is done and the statement(s) under the while loop are executed.
5.2 The for loop
for (expr1; expr2; expr3) statement
The expression, expr1, is evaluated once at the start of the loop. expr2 is the condition that is evaluated at the start of each iteration. An iteration is done only if expr2 evaluates non-zero. The expression expr3 is evaluated at the end of each iteration. Any of the three expressions can be omitted but the two semicolons must always be present. The processing of the for loop can be expressed as below.
expr1;
while (expr2) {
    statement
    ....
    expr3;
}
The for loop is well suited to processing a fixed number of items like the elements of an array, members of a linked list, etc.
5.3 The do-while loop
do {
    statement
    ...
} while (expr);
The statement(s) inside the braces are executed at least once. (The braces are not necessary if there is only one statement.) At the end of each iteration, expr is executed. If it evaluates non-zero, another iteration is started. If expr evaluates as zero, the loop terminates. Note the semicolon at the end of the do-while loop.
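To make the form concrete, here is a small illustrative example (my own, not from the original tutorial): a prompt loop whose body must run at least once.
/* A do-while loop runs its body at least once; natural for
   prompt-and-repeat input. Illustrative example only. */
#include <stdio.h>

int main ()
{
    int n;

    do {
        printf ("Enter a number (0 to stop): ");
        if (scanf ("%d", &n) != 1)
            break;
        printf ("You entered %d\n", n);
    } while (n != 0);

    return 0;
}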
The do-while loop is used much less than the while and for loops. But it is useful, especially in cases when at least one iteration is to be done and nothing special is required to be done at the start of the loop.
6.0 break statement
break;
The break statement breaks the control flow out of the innermost while, for or do-while loop and the switch statement. The control flow resumes at the next statement after the closing brace of the concerned loop or switch statement.
7.0 continue statement
continue;
The continue statement immediately causes the next iteration of the innermost while, do-while or for loop by skipping the rest of the loop body. In the case of while and do-while, the control passes right away to the evaluation of the loop condition. In the case of the for statement, the third expression in the parentheses of the for loop is evaluated and then the loop condition is tested before the next iteration.
8.0 Functions
A C program comprises functions. One of the functions is the main function. The program execution starts with the main function. C is a procedural programming language. To develop a program you need to conceptualize the procedure or the algorithm to solve a problem. Traditionally, a big problem is broken into smaller sub-problems. The solution of smaller sub-problems put together makes the solution of the bigger problem. Eventually, the solution to a small problem is implemented in a function. It is recommended that each function be a cohesive unit. A function should be small and do a specific job well. In the C language, the parameters are passed by value to the functions. That is, a function gets its own local copy of parameters, passed on the stack. It can modify the passed parameters, but the corresponding variables in the calling function are not affected. If it is required that a function access and/or modify the variables in the calling function, a pointer to the concerned variable can be passed. Now, the pointer is passed by value; so the called function cannot modify the pointer variable in the calling function. But it can use the pointer to access the variable and read or modify it. A function returns a value, which can be used by the calling function. As an example, we will write a function to find the value of a number raised to the power of another number. Our numbers are non-negative integers. We call this function xpowery. It takes in two integers x and y and returns x to the power y. Our algorithm is based on the fact that if the power is even, we can square the base number and divide the power by 2. To take care of odd values of power, we subtract 1 from the power and multiply a product of factors by the base number. We continue doing this till the power becomes 1. And then we multiply the base number by the factor. That is,
factor = 1;
while (power != 1) {
    if (power is odd) {
        power--;
        factor *= base_number;
    }
    else if (power is even) {
        power /= 2;
        base_number *= base_number;
    }
}
result = factor * base_number;
And, the program is as given below.
#include <stdio.h>
#include <stdlib.h>

unsigned long xpowery (unsigned long x, unsigned long y);

int main ()
{
    unsigned long x, y;
    while (scanf ("%lu %lu", &x, &y) != EOF)
        printf ("%lu\n", xpowery (x, y));
}

unsigned long xpowery (unsigned long x, unsigned long y)
{
    unsigned long factor = 1;
    if (!y) return 1; // y == 0
    if (!x) return 0; // x == 0
    while (y != 1) {
        if (y % 2) { // y is odd
            factor *= x;
            y--;
            continue;
        }
        x *= x;
        y /= 2;
    }
    return factor * x;
}
The function xpowery needs to be declared before use in the main function. A function declaration like,
unsigned long xpowery (unsigned long x, unsigned long y);
is called a function prototype. The compiler checks the parameter types declared in the function prototype against those in the function calls and also the function definition. In case of a mismatch, a compile time error is generated, which needs to be corrected before the program compiles cleanly. We can compile and run the above program as below.
$ gcc try.c -o try
$ ./try
10 3
1000
25 3
15625
900 4
656100000000
125 4
244140625
$
9.0 Recursion
A function may call itself and in doing so it is said to be working recursively. Why is recursion important? The reason is that some problems are inherently recursive. For example, we can define natural numbers like this.
- 1 is the first natural number.
- The successor of a natural number is the next natural number.
A recursive solution is generally neat, compact, elegant and intuitive. However, for most recursive algorithms, an iterative solution can be developed. In recursion, the stack grows for each recursive call and extensive recursion may lead to stack overflow. The iterative solutions may be more robust and free from memory problems. As an example, consider the problem of finding the factorial of a number. Factorials are defined for non-negative whole numbers. The symbol for factorial is the exclamation mark and it is defined for a number n as
- if (n == 0), n! = 1
- if (n > 0), n! = n * (n - 1)!
The program for finding the factorial of a number is as follows.
#include <stdio.h>
#include <stdlib.h>

unsigned long factorial (unsigned long);

int main ()
{
    unsigned long number;
    while (scanf ("%lu", &number) != EOF)
        printf ("%lu! = %lu\n", number, factorial (number));
}

unsigned long factorial (unsigned long number)
{
    if (number == 0)
        return 1;
    return number * factorial (number - 1);
}
We can compile and run the above program as below.
$ gcc try.c -o try
$ ./try
0
0! = 1
1
1! = 1
2
2! = 2
3
3! = 6
4
4! = 24
5
5! = 120
6
6! = 720
7
7! = 5040
9
9! = 362880
10
10! = 3628800
10.0 Reference
Brian W. Kernighan and Dennis M. Ritchie, "The C Programming Language", Second Edition, Prentice Hall, 1989
Created on 2009-04-28 13:01 by alanh, last changed 2009-04-28 18:19 by mark.dickinson. This issue is now closed. mathmodule.c fails to compile because math_log1p() is missing in mathmodule.c... gcc -fno-strict-aliasing -DNDEBUG -O2 -pipe -fomit-frame-pointer -I. -IInclude -I./Include -DPy_BUILD_CORE -c ./Modules/mathmodule.c -o Modules/mathmodule.o ./Modules/mathmodule.c: In function 'math_log1p': ./Modules/mathmodule.c:353: error: 'log1p' undeclared (first use in this function) ./Modules/mathmodule.c:353: error: (Each undeclared identifier is reported only once ./Modules/mathmodule.c:353: error: for each function it appears in.) ./Modules/mathmodule.c: In function 'math_fsum': ./Modules/mathmodule.c:574: warning: passing argument 1 of 'PyFPE_dummy' discards qualifiers from pointer target type Thanks for the report. Could you please tell me what platform you're on? And does the configure script detect that log1p is available? (There should be a line in the (stdout) output of the configure script that looks something like: checking for log1p... yes except that in your case I'm guessing that you get a 'no' there rather than a 'yes'. Is that true? I do have log1p() available... checking for log1p... yes And it's in math.h and libm.a on my system. I still can't see any reference to math_log1p() in mathmodule.c which is why it's barfing. math_log1p should be produced by the line FUNC1(log1p, log1p, 1, ... FUNC1 is a macro that (in this instance) creates the math_log1p function. The original error message that you posted seems to say that it's log1p that's undeclared, not math_log1p. It's strange that log1p isn't being picked up. What's your operating system? It might also be helpful if you could tell me how you're building Python (what configure options, etc.), and which version of the source you have---is this the 2.6.2 tarball from python.org, or a recent svn; if the latter, which revision? Thanks! Ah. right. I see what you mean about the macro expansion. So I'm on an Atari FreeMiNT platform which is an m68k box which has no shared libraries so I'm setting up for static only via Setup.dist. Note that cmathmodule.c compiles fine and that does reference log1p() too. This is basic 2.6.2 from the tarball. > Note that cmathmodule.c compiles fine and that does reference > log1p() too. Now that's *really* peculiar. I can't imagine why one would work and the other not. Some ideas about how to proceed from here: - try doing a debug build (./configure --with-pydebug) to see if the errors still occur. (They probably will, but you never know...) - What happens if you add an #include <float.h> after the line #include "Python.h" in mathmodule.c? I'm clutching at straws here, but that's the only difference I can see between cmathmodule.c and mathmodule.c that might be affecting this. Your error output from gcc looks as though either: - math.h isn't being included, or - you've got more than one version of math.h, and the wrong one's being included, or - math.h doesn't declare log1p (but you've already said that it does). What *should* be happening is that the line #include "Python.h" includes Python.h, which in turn includes pyport.h, which in turn includes math.h. It's also odd that this is the first function that fails. If math.h isn't being included somehow, then I'd also expect acosh, etc. to be failing to build. Do people really still use m68k. :-) It's ok, I see what the problem is. It's GCC's headers that are causing trouble. Closing. I'm glad you sorted it out. 
:) Any chance you could tell us what the fix was, in case anyone else runs into something similar? Or is that unlikely to happen? Also, while you're there, I have a favour to ask: could you tell me what the result of doing >>> 1e16 + 2.99999 is on that platform? It's not often I get to find out much about Python on m68k, and I'm curious to know whether it has the same double rounding problems as x86. Thanks! GCC was munging math.h when it did fixincludes. I'm fixing the GCC port now. Result of 1e16+2.99999 is... 10000000000000002.00 Many thanks. Looks like no double rounding then; that's good news. (32-bit Linux on x86 typically produces 10...0004.0 for this example).
Ok, we’ll admit it. We like FPGAs because it reminds us of wiring up a 100-in-1 kit when we were kids. But the truth is, many projects are just as well off to have a CPU. But there’s a real sweet spot when you have a CPU and an FPGA together. Intel (or Altera, if you prefer) has the NIOS II CPU core, but that’s hard to configure, right? Maybe not, thanks to a project by [jefflieu] over on GitHub. He’s assembled some basic definitions and libraries to easily — relatively speaking — use NIOS II on the MAX1000 as well as a few other boards. The MAX1000 is a pretty nice board for about $30, so this is a very inexpensive way to get into “System on Chip” (SOC) development. [jeff] goes into more detail in a blog post, but the idea is pretty simple. We tried it, and it works very well, although we found a few things hard to follow so read on to see how we managed. The idea behind SoC development is you define your CPU configuration and then your hardware devices. Then you write software to talk to those custom hardware devices and — of course — write your actual application code. So you don’t just write a program, you also define the CPU the program will run on and the hardware that it will talk to. There are several ready-to-go I/O devices included in the project, but the real fun will be writing your own. The Intel tools have the C compiler and everything else you need. You could also do everything from scratch, but these tools make it much easier to get started. Go To Shell The first thing you need is to open a NIOS shell. On our machine, this was handled by a script in the nios2eds directory called nios2_command_shell.sh. We were running Linux, but [jeff] was using Cygwin under Windows. This just runs a system shell but sets up the PATH and other environment variables so you can use the tools. The project files — called recon — live in a few subdirectories rooted where you unpacked the archive. As you might guess the hw directory has Verilog and other hardware-related items in it. The sw directory contains libraries and header files and, of course, your program to run in the CPU. If you are used to sending code to, say, an Arduino or a PIC, this is going to be a little different. You still generate code. But instead of sending it to the CPU, the hardware configuration for the NIOS CPU gets bundled with the code into a single configuration file you can program with the standard Intel programmer. Making it Work The key to this is a series of Makefiles. If you type just make, you should get some help text. However, that doesn’t work unless your make uses bash by default — which is usually not true on Linux. However, just add the following line near the top of the Makefile in question and things will work better: SHELL:=/bin/bash Or don’t ask for help. Everything else we tried worked fine. If you don’t want to fix it, here’s the help from a fixed Linux installation: There are five steps: - Create a board support package (BSP) for the CPU - Generate a library - Prepare your source code, or use an example - Generate an application - Create a POF file The POF file is exactly what the Intel programming software takes to burn the nonvolatile configuration memory on the board. If you’ve done the getting started tutorial, you should know how to burn this file into the FPGA. 
There's more detail on the blog, but here are the basic commands for the example frequency counter, all issued from the sw/recon_0 directory:
# step 1
make new_bsp
# step 2
make generate_lib
# step 3
make generate_example
# step 4
make app APP_NAME=frequency_counter
# step 5
make pof APP_NAME=frequency_counter BOARD_NAME=max1000
Once you program the FPGA, you should see a USB serial port enumerate. Use a terminal program set to 115,200 baud and you'll see the software counting an internal clock.
Software Things
The NIOS II is a pretty robust CPU with a good C compiler implementation. Check out the component block diagram to get an idea of how things are laid out. However, since you'll mostly be programming it in C, you probably don't care much about the actual layout, but more the board support package (BSP) and associated libraries. What does the software look like? Good question. There are two files. One is just a little Arduino-like main:
#include <system.h>
#include <sys/alt_sys_init.h>
#include <sys/alt_irq.h>

void setup();
void loop();

int main(void)
{
    /* Altera specific */
    alt_irq_init(0);
    alt_sys_init();
    setup();
    while(1) {
        loop();
    }
}
The main code is a little longer but shows how the libraries are set up very similarly to an Arduino. Here are a few lines:
/////////////////////////////////////////////////////////////////////////////////////////////////
// Bind Serial Object to UART_0 component by assigning the base address
Serial.bind(UART_0_BASE);
/////////////////////////////////////////////////////////////////////////////////////////////////
// Setup serial object with baudrate 115200
Serial.begin(115200);
/////////////////////////////////////////////////////////////////////////////////////////////////
// Setup mode of Pins, we have INPUT, OUTPUT and OUTPWM modes
pinMode(0,OUTPUT);
pinMode(PWM_PIN,OUTPWM);
setPWMPeriod(1000);
analogWrite(PWM_PIN, 200);
. . .
/*
 * ISR that handles IO IRQ
 */
void io_isr(void* freqCntr) {
    ///////////////////////////////////////////////
    //Read back which pin is interrupted
    u32 pin = recon_io_rd32(IRQ_STATUS);
    ///////////////////////////////////////////////
    //Clear Interrupt
    recon_io_wr32(IRQ_STATUS,pin);
    ///////////////////////////////////////////////
    // On each interrupt, we increment the counter
    (*((u32*)freqCntr))++;
}
Next Step
Not bad for a few quick command lines. Of course, the real test is when you go to make customizations, but this should give you an excellent start. If you look at the cmn (common) subdirectories in the hw and sw directories, you can see how the existing I/O devices are created and controlled. That should get you started pretty easily. There is a standard called Avalon for how things should interface with the NIOS core. The example I/O devices use the simplest method, but there are more complex setups like memory mapping if you need it. You can send the Makefile the edit_bsp command if you want to experiment with changing the board support package. You'll need to do a little research, though, as there are lots of options. In addition, regenerating the NIOS II core isn't trivial although the Quartus project files are in the hw subfolder.
In particular, we had to take the following steps to get everything upgrading and building properly:

- Open the recon_0.qpf file in Quartus
- Open the recon_0.qsys file, which opens Platform Designer
- Go to Tools | Options in Platform Designer
- Set the IP Search Path to the hw/cmn directory
- Save your work in Platform Designer
- Go back to Quartus and compile your design
- Copy the .sof file generated in the output_files directory to the release directory (you might want to back up the old .sof file first)

Just make sure you pick up the right POF file from the sw directory after you go through the five steps to build your app, and not the POF of the blank CPU in the hw directory. Don’t ask us how we know that. You should only have to set the directory one time.

Of course, there are lots of tempting things to play with in Platform Designer, too, including adding custom opcodes to NIOS. If you prefer the GUI method, or you start working with the tools to do your own extensions, you can follow the official tutorial. You might also find the official page useful. There’s also a video that you can see below.

If you need more of a jump start on FPGA coding, check out our FPGA boot camp over on Hackaday.io. It doesn’t use the MAX10 part, but the general principles still apply, and the tutorial is hardware-neutral until the last boot camp, anyway. If you are interested in using the MAX10 analog inputs, the same site has a recon_0 setup for that, too.

7 thoughts on “Easy FPGA CPU with MAX1000”

“Ok, we’ll admit it. We like FPGAs because it reminds us of wiring up a 100-in-1 kit when we were kids.”

Hey! What about those who never were kids? Seriously, there’s still the disconnect between the physicality of the latter and the mental breadboard of the former.

Oops, sorry Ostracus; my fat finger hit ‘report.’ Sorry!

I thought the problem with Nios was that the license is a little pricey for the average maker?

Historically, NIOS II/e (economy edition) was free, but looking just now I couldn’t find anything current that specifically says that.

Ah, ok, if you dig around there you’ll find:

Nios® II/e “Economy”

Intel specifically designed the Nios® II/e “economy” processor cores to use the fewest FPGA logic and memory resources. It is now offered free for both Nios® II Classic and Nios® II Gen2 processors. No license is required with the Intel Quartus® Prime software and Intel Quartus® development software version 9.1 and later. The Nios® II/e core has higher performance but is in the same cost class as a typical 8051 architecture, achieving over 30 DMIPS at up to 200 MHz while using fewer than 700 logic elements (LEs). The core is supported by the Nios® II Embedded Design Suite (EDS), including the Eclipse-based Nios® II Integrated Development Environment (IDE).

The free Nios® II/e core features:
- Up to 2 GB of external address space
- JTAG debug module
- Complete processors in fewer than 700 LEs
- Optional debug enhancements
- Up to 256 custom instructions

I remember that the free version of NIOS used to only work on the hook. Am I mistaken, or can it be used completely freely, without a JTAG cable attached?

700 LEs sounds very compact. I considered ZPU for my current project, but gave up figuring out their mess of weird and incompatible implementations. Ended up using a 6502 core.

Another project in this space is LiteX (). It supports a few different open source soft cores and has a ready library of peripherals. There’s a getting started demo that goes through creating a design and booting Linux on it.

no hook.
Honestly. The /e version is free and works standalone. And it is small.

I am still confused about the difference between MAX 10 and Cyclone 10 LP. There is the MAX1000 board and the CYC1000, and they are very similar. Arrow has the MAX1000 with 8k LUTs for $30 and the CYC1000 with 25k LUTs for $40. Which FPGA family is faster?
https://hackaday.com/2018/10/05/easy-fpga-cpu-with-max1000/
I've been trying to get this code to build, but I keep receiving the following errors:

week5Crook.obj : error LNK2019: unresolved external symbol "void __cdecl decrypt(char *)" (?decrypt@@YAXPAD@Z) referenced in function _main
week5Crook.obj : error LNK2019: unresolved external symbol "void __cdecl encrypt(char *)" (?encrypt@@YAXPAD@Z) referenced in function _main
Debug/week5Crook.exe : fatal error LNK1120: 2 unresolved externals

The program compiles fine, and I haven't been able to figure out exactly what this error is trying to tell me. I was able to build and run this program in Dev C++, but I'm now getting these errors when I try to build it in VS .NET 2003. Any guidance you may be able to provide would be greatly appreciated.

#include <iostream>
using namespace std;

void encrypt(char *);
void decrypt(char *);

int main()
{
    char string[] = "This is a secret!";
    cout << "Encrypted string is: ";
    encrypt(string);
    cout << "\nDecrypted string is: ";
    decrypt(string);
    cout << endl << endl;
    return 0;
} //end main()

//encrypts string by adding 1 to the value of each character
void encrypt(char *sPtr)
{
    while (*sPtr != '\0') //exit while at null character
    {
        *sPtr += 1;
        cout << *sPtr;
        ++sPtr;
    } //end while
} //end encrypt()

//decrypts string by subtracting 1 from the value of each character
void decrypt(char *sPtr)
{
    while (*sPtr != '\0') //exit while at null character
    {
        *sPtr -= 1;
        cout << *sPtr;
        ++sPtr;
    } //end while
} //end decrypt()
https://www.daniweb.com/programming/software-development/threads/38525/lnk2019-error
This manual (23 January 2015) is for GNU Bison (version 3.0.4), the GNU parser generator.

You need to be fluent in C or C++ programming in order to use Bison or to understand this manual. Java is also supported as an experimental feature.

Bison was written originally by Robert Corbett. Richard Stallman made it Yacc-compatible. Wilfred Hansen of Carnegie Mellon University added multi-character string literals and other features. Since then, Bison has grown more robust and evolved many other new features thanks to the hard work of a long list of volunteers. For details, see the THANKS and ChangeLog files included in the Bison distribution. This edition corresponds to version 3.0.4 of Bison.

This chapter introduces many of the basic concepts without which the details of Bison will not make sense. If you do not already know how to use Bison or Yacc, we suggest you start by reading this chapter carefully.

A formal grammar is a mathematical construct. To define the language for Bison, you must write a file expressing the grammar in Bison syntax: a Bison grammar file. See Symbols.

A terminal symbol can also be represented as a character literal, just like a C character constant. You should do this whenever a token is just a single character (parenthesis, plus-sign, etc.): use that same character in a literal as the terminal symbol for that token. A third way to represent a terminal symbol is with a C string constant containing several characters. See Symbols, for more information.

The nature of GLR parsing and the structure of the generated parsers give rise to certain restrictions on semantic values and actions. By definition, a deferred semantic action is not performed at the same time as the associated reduction. This raises caveats for several Bison features you might use in a semantic action in a GLR parser.

In any semantic action, you can examine yychar to determine the type of the lookahead token present at the time of the associated reduction. After checking that yychar is not set to YYEMPTY or YYEOF, you can then examine yylval and yylloc to determine the lookahead token’s semantic value and location, if any. In a nondeferred semantic action, you can also modify any of these variables to influence syntax analysis. See Lookahead Tokens.

In a deferred semantic action, it’s too late to influence syntax analysis. In this case, yychar, yylval, and yylloc are set to shallow copies of the values they had at the time of the associated reduction. For this reason alone, modifying them is dangerous. Moreover, the result of modifying them is undefined and subject to change with future versions of Bison. For example, if a semantic action might be deferred, you should never write it to invoke yyclearin (see Action Features) or to attempt to free memory referenced by yylval.
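As a concrete illustration of examining the lookahead, here is a minimal sketch of our own (not taken from the manual), assuming a C parser with %locations enabled, the default location type, #include <stdio.h> in the prologue, and a grammar in which a rule like this appears:

exp: exp '+' exp
    {
      /* Nondeferred action: the lookahead may be examined here,
         but YYEMPTY and YYEOF must be ruled out first.  */
      if (yychar != YYEMPTY && yychar != YYEOF)
        fprintf (stderr, "lookahead token %d at line %d\n",
                 yychar, yylloc.first_line);
      $$ = $1 + $3;
    }
;

In a GLR parser this is only safe while the action is nondeferred; once actions are deferred, yychar and friends are shallow copies, as described above.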
Another Bison feature requiring special consideration is YYERROR (see Action Features), which you can invoke in a semantic action to initiate error recovery. During deterministic GLR operation, the effect of YYERROR is the same as its effect in a deterministic parser. The effect in a deferred action is similar, but the precise point of the error is undefined; instead, the parser reverts to deterministic operation, selecting an unspecified stack on which to continue with a syntax error. In a semantic predicate (see Semantic Predicates) during nondeterministic parsing, YYERROR silently prunes the parse that invoked the test.

GLR parsers require that you use POD (Plain Old Data) types for semantic values and location types when using the generated parsers as C++ code.

In addition to the %dprec and %merge directives, GLR parsers allow you to reject parses on the basis of arbitrary computations executed in user code, without having Bison treat this rejection as an error if there are alternative parses. (This feature is experimental and may evolve. We welcome user feedback.) For example,

widget:
  %?{ new_syntax } "widget" id new_args   { $$ = f($3, $4); }
| %?{ !new_syntax } "widget" id old_args  { $$ = f($3, $4); }
;

is one way to allow the same parser to handle two different syntaxes for widgets. The clause preceded by %? is treated like an ordinary action, except that its text is treated as an expression and is always evaluated immediately (even when in nondeterministic mode). If the expression yields 0 (false), the clause is treated as a syntax error, which, in a nondeterministic parser, causes the stack in which it is reduced to die. In a deterministic parser, it acts like YYERROR.

As the example shows, predicates otherwise look like semantic actions, and therefore you must take them into account when determining the numbers to use for denoting the semantic values of right-hand side symbols. Predicate actions, however, have no defined value, and may not be given labels.

There is a subtle difference between semantic predicates and ordinary actions in nondeterministic mode, since the latter are deferred. For example, we could try to rewrite the previous example as

widget:
  { if (!new_syntax) YYERROR; } "widget" id new_args  { $$ = f($3, $4); }
| { if (new_syntax) YYERROR; } "widget" id old_args   { $$ = f($3, $4); }
;

(reversing the sense of the predicate tests to cause an error when they are false). However, this does not have the same effect if new_args and old_args have overlapping syntax. Since the mid-rule actions testing new_syntax are deferred, a GLR parser first encounters the unresolved ambiguous reduction for cases where new_args and old_args recognize the same string before performing the tests of new_syntax. It therefore reports an error.

Finally, be careful in writing predicates: deferred actions have not been evaluated, so using them in a predicate will have undefined effects.

The GLR parsers require a compiler for ISO C89 or later. In addition, they use the inline keyword, which is not C89, but is C99 and is a common extension in pre-C99 compilers. It is up to the user of these parsers to handle portability issues. For instance, if using Autoconf and the Autoconf macro AC_C_INLINE, a mere

%{
  #include <config.h>
%}

will suffice. Otherwise, we suggest

%{
  #if (__STDC_VERSION__ < 199901 && ! defined __GNUC__ \
       && ! defined inline)
  # define inline
  #endif
%}
A Bison grammar file has four main sections, shown here with the appropriate delimiters:

%{
  Prologue
%}

Bison declarations

%%
Grammar rules
%%

Epilogue

The ‘%%’, ‘%{’ and ‘%}’ are punctuation that appears in every Bison grammar file to separate the sections.

The prologue may define types and variables used in the actions. You can also use preprocessor commands to define macros used there, and use #include to include header files that do any of these things. You need to declare the lexical analyzer yylex and the error printer yyerror here, along with any other global identifiers used by the actions in the grammar rules.

The Bison declarations declare the names of the terminal and nonterminal symbols, and may also describe operator precedence and the data types of semantic values of various symbols.

The grammar rules define how to construct each nonterminal symbol from its parts.

The epilogue can contain any code you want to use. Often the definitions of functions declared in the prologue go here. In a simple program, all the rest of the program can go here.

Here are the C and Bison declarations for the reverse polish notation calculator (rpcalc). As in C, comments are placed between ‘/*…*/’.

/* Reverse polish notation calculator. */

%{
  #include <stdio.h>
  #include <math.h>
  int yylex (void);
  void yyerror (char const *);
%}

%define api.value.type {double}
%token NUM

%% /* Grammar rules and actions follow. */

The declarations section (see The prologue) contains two preprocessor directives and two forward declarations. The #include directive is used to declare the exponentiation function pow. The forward declarations for yylex and yyerror are needed because the C language requires that functions be declared before they are used. These functions will be defined in the epilogue, but the parser calls them, so they must be declared in the prologue.

The second section, Bison declarations, provides information to Bison about the tokens and their types (see The Bison Declarations Section). The %define directive defines the variable api.value.type, thus specifying the C data type for semantic values of both tokens and groupings (see Data Types of Semantic Values). The Bison parser will use whatever type api.value.type is defined as; if you don’t define it, int is the default. Because we specify ‘{double}’, each token and each expression has an associated value, which is a floating point number. C code can use YYSTYPE to refer to the value of api.value.type.

Each terminal symbol that is not a single-character literal must be declared. (Single-character literals normally don’t need to be declared.) In this example, all the arithmetic operators are designated by single-character literals, so the only terminal symbol that needs to be declared is NUM, the token type for numeric constants.

Here are the grammar rules for the reverse polish notation calculator.
input:
  %empty
| input line
;

line:
  '\n'
| exp '\n'      { printf ("%.10g\n", $1); }
;

exp:
  NUM           { $$ = $1; }
| exp exp '+'   { $$ = $1 + $2; }
| exp exp '-'   { $$ = $1 - $2; }
| exp exp '*'   { $$ = $1 * $2; }
| exp exp '/'   { $$ = $1 / $2; }
| exp exp '^'   { $$ = pow ($1, $2); }  /* Exponentiation */
| exp 'n'       { $$ = -$1; }           /* Unary minus */
;
%%

The groupings of the rpcalc “language” defined here are the expression (given the name exp), the line of input (line), and the complete input transcript (input). Each of these nonterminal symbols has several alternate rules, joined by the vertical bar ‘|’, which is read as “or”. The following sections explain what these rules mean.

The semantics of the language is determined by the actions taken when a grouping is recognized. The actions are the C code that appears inside braces.

In keeping with the spirit of this example, the controlling function is kept to the bare minimum. The only requirement is that it call yyparse to start the process of parsing.

int
main (void)
{
  return yyparse ();
}

Before running Bison to produce a parser, we need to decide how to arrange all the source code in one or more source files. For such a simple example, the easiest thing is to put everything in one file, the grammar file. The definitions of yylex, yyerror and main go at the end, in the epilogue of the grammar file (see The Overall Layout of a Bison Grammar).

For a large project, you would probably have several source files, and use make to arrange to recompile them.

With all the source in the grammar file, you use the following command to convert it into a parser implementation file:

bison file.y

In this example, the grammar file is called rpcalc.y (for “Reverse Polish CALCulator”). Bison produces a parser implementation file named file.tab.c, removing the ‘.y’ from the grammar file name. The parser implementation file contains the source code for yyparse. The additional functions in the grammar file (yylex, yyerror and main) are copied verbatim to the parser implementation file.

Here is how to compile and run the parser implementation file:

# List files in current directory.
$ ls
rpcalc.tab.c  rpcalc.y

# Compile the Bison parser.
# ‘-lm’ tells the compiler to search the math library for pow.
$ cc -lm -o rpcalc rpcalc.tab.c

# List files again.
$ ls
rpcalc  rpcalc.tab.c  rpcalc.y

The file rpcalc now contains the executable code. Here is an example session using rpcalc.

$ rpcalc
4 9 +
⇒ 13
3 7 + 3 4 5 *+-
⇒ -13
3 7 + 3 4 5 * + - n      Note the unary minus, ‘n’
⇒ 13
5 6 / 4 n +
⇒ -3.166666667
3 4 ^                    Exponentiation
⇒ 81
^D                       End-of-file indicator
$

Here are the grammar rules for the multi-function calculator (mfcalc). Most of them are copied directly from calc; three rules, those which mention VAR or FNCT, are new.

%% /* The grammar follows. */
input:
  %empty
| input line
;

line:
  '\n'
| exp '\n'   { printf ("%.10g\n", $1); }
…
%%
The new version of main will call init_table to initialize the symbol table:

struct init
{
  char const *fname;
  double (*fnct) (double);
};

struct init const arith_fncts[] =
{
  { "atan", atan },
  { "cos",  cos  },
  { "exp",  exp  },
  { "ln",   log  },
  { "sin",  sin  },
  { "sqrt", sqrt },
  { 0, 0 },
};

/* The symbol table: a chain of 'struct symrec'. */
symrec *sym_table;

/* Put arithmetic functions in table. */
static void
init_table (void)
{
  int i;
  for (i = 0; arith_fncts[i].fname != 0; i++)
    {
      symrec *ptr = putsym (arith_fncts[i].fname, FNCT);
      ptr->value.fnct = arith_fncts[i].fnct;
    }
}

#include <stdlib.h> /* malloc. */
#include <string.h> /* strlen. */

symrec *
putsym (char const *sym_name, int sym_type)
{
  symrec *res = (symrec *) malloc (sizeof (symrec));
  res->name = (char *) malloc (strlen (sym_name) + 1);
  strcpy (res->name, sym_name);
  res->type = sym_type;
  res->value.var = 0; /* Set value to 0 even if fctn. */
  res->next = (struct symrec *) sym_table;
  sym_table = res;
  return res;
}

symrec *
getsym (char const *sym_name)
{
  symrec *ptr;
  for (ptr = sym_table; ptr != (symrec *) 0;
       ptr = (symrec *) ptr->next)
    if (strcmp (ptr->name, sym_name) == 0)
      return ptr;
  return 0;
}

To add constants to the calculator, modify init_table to add these constants to the symbol table. It will be easiest to give the constants type VAR.

Bison takes as input a context-free grammar specification and produces a C-language function that recognizes correct instances of the grammar. The Bison grammar file conventionally has a name ending in ‘.y’. See Invoking Bison.

The Prologue section contains macro definitions and declarations of functions and variables that are used in the actions in the grammar rules. These are copied to the beginning of the parser implementation.

The functionality of Prologue sections can often be subtle and inflexible. As an alternative, Bison provides a %code directive with an explicit qualifier field, which identifies the purpose of the code and thus the location(s) where Bison should generate it. For C/C++, the qualifier can be omitted for the default location, or it can be one of requires, provides, top. See %code Summary.

Look again at the example of the previous section:

%{
  #define _GNU_SOURCE
  #include <stdio.h>
  #include "ptypes.h"
%}

%union {
  long int n;
  tree t;  /* tree is defined in ptypes.h. */
}

%{
  static void print_token_value (FILE *, int, YYSTYPE);
  #define YYPRINT(F, N, L) print_token_value (F, N, L)
%}

…

Notice that there are two Prologue sections here, but there’s a subtle distinction between their functionality. For example, if you decide to override Bison’s default definition for YYLTYPE, in which Prologue section should you write your new definition? You should write it in the first, since Bison will insert that code into the parser implementation file before the default YYLTYPE definition. In which Prologue section should you prototype an internal function, trace_token, that accepts YYLTYPE and yytokentype as arguments? You should prototype it in the second, since Bison will insert that code after the YYLTYPE and yytokentype definitions.

This distinction in functionality between the two Prologue sections is established by the appearance of the %union between them. This behavior raises a few questions. First, why should the position of a %union affect definitions related to YYLTYPE and yytokentype? Second, what if there is no %union? In that case, the second kind of Prologue section is not available. This behavior is not intuitive.

To avoid this subtle %union dependency, rewrite the example using a %code top and an unqualified %code.
Let’s go ahead and add the new YYLTYPE definition and the trace_token prototype at the same time:

%code top {
  #define _GNU_SOURCE
  #include <stdio.h>

  /* WARNING: The following code really belongs
   * in a '%code requires'; see below. */
  #include "ptypes.h"
  #define YYLTYPE YYLTYPE
  typedef struct YYLTYPE
  {
    int first_line;
    int first_column;
    int last_line;
    int last_column;
    char *filename;
  } YYLTYPE;
}

%union {
  long int n;
  tree t;  /* tree is defined in ptypes.h. */
}

%code {
  static void print_token_value (FILE *, int, YYSTYPE);
  #define YYPRINT(F, N, L) print_token_value (F, N, L)
  static void trace_token (enum yytokentype token, YYLTYPE loc);
}

…

In this way, %code top and the unqualified %code achieve the same functionality as the two kinds of Prologue sections, but it’s always explicit which kind you intend. Moreover, both kinds are always available even in the absence of %union.

The %code top block above logically contains two parts. The first two lines before the warning need to appear near the top of the parser implementation file. The first line after the warning is required by YYSTYPE and thus also needs to appear in the parser implementation file. However, if you’ve instructed Bison to generate a parser header file (see %defines), you probably want that line to appear before the YYSTYPE definition in that header file as well. The YYLTYPE definition should also appear in the parser header file to override the default YYLTYPE definition there.

In other words, in the %code top block above, all but the first two lines are dependency code required by the YYSTYPE and YYLTYPE definitions. Thus, they belong in one or more %code requires:

%code top {
  #define _GNU_SOURCE
  #include <stdio.h>
}

%code requires {
  #include "ptypes.h"
}

%union {
  long int n;
  tree t;  /* tree is defined in ptypes.h. */
}

%code requires {
  #define YYLTYPE YYLTYPE
  typedef struct YYLTYPE
  {
    int first_line;
    int first_column;
    int last_line;
    int last_column;
    char *filename;
  } YYLTYPE;
}

%code {
  static void print_token_value (FILE *, int, YYSTYPE);
  #define YYPRINT(F, N, L) print_token_value (F, N, L)
  static void trace_token (enum yytokentype token, YYLTYPE loc);
}

…

Now Bison will insert #include "ptypes.h" and the new YYLTYPE definition before the Bison-generated YYSTYPE and YYLTYPE definitions in both the parser implementation file and the parser header file. (By the same reasoning, %code requires would also be the appropriate place to write your own definition for YYSTYPE.)

When you are writing dependency code for YYSTYPE and YYLTYPE, you should prefer %code requires over %code top regardless of whether you instruct Bison to generate a parser header file. When you are writing code that you need Bison to insert only into the parser implementation file and that has no special need to appear at the top of that file, you should prefer the unqualified %code over %code top. These practices will make the purpose of each block of your code explicit to Bison and to other developers reading your grammar file. Following these practices, we expect the unqualified %code and %code requires to be the most important of the four Prologue alternatives.

At some point while developing your parser, you might decide to provide trace_token to modules that are external to your parser. Thus, you might wish for Bison to insert the prototype into both the parser header file and the parser implementation file. Since this function is not a dependency required by YYSTYPE or YYLTYPE, it doesn’t make sense to move its prototype to a %code requires. More importantly, since it depends upon YYLTYPE and yytokentype, %code requires is not sufficient.
Instead, move its prototype from the unqualified %code to a %code provides:

%code provides {
  void trace_token (enum yytokentype token, YYLTYPE loc);
}

%code {
  static void print_token_value (FILE *, int, YYSTYPE);
  #define YYPRINT(F, N, L) print_token_value (F, N, L)
}

…

Bison will insert the trace_token prototype into both the parser header file and the parser implementation file after the definitions for yytokentype, YYLTYPE, and YYSTYPE.

The above examples are careful to write directives in an order that reflects the layout of the generated parser implementation and header files: %code top, %code requires, %code provides, and then %code. While your grammar files may generally be easier to read if you also follow this order, Bison does not require it. Instead, Bison lets you choose an organization that makes sense to you.

You may declare any of these directives multiple times in the grammar file. In that case, Bison concatenates the contained code in declaration order. This is the only way in which the position of one of these directives within the grammar file affects its functionality.

The result of the previous two properties is greater flexibility in how you may organize your grammar file. For example, you may organize semantic-type-related directives by semantic type:

%code requires { #include "type1.h" }
%union { type1 field1; }
%destructor { type1_free ($$); } <field1>
%printer { type1_print (yyoutput, $$); } <field1>

%code requires { #include "type2.h" }
%union { type2 field2; }
%destructor { type2_free ($$); } <field2>
%printer { type2_print (yyoutput, $$); } <field2>

You could even place each of the above directive groups in the rules section of the grammar file next to the set of rules that uses the associated semantic type. (In the rules section, you must terminate each of those directives with a semicolon.) And you don’t have to worry that some directive (like a %union) in the definitions section is going to adversely affect their functionality in some counter-intuitive manner just because it comes first. Such an organization is not possible using Prologue sections.

This section has been concerned with explaining the advantages of the four Prologue alternatives over the original Yacc Prologue. However, in most cases when using these directives, you shouldn’t need to think about all the low-level ordering issues discussed here. Instead, you should simply use these directives to label each block of your code according to its purpose and let Bison handle the ordering. %code is the most generic label. Move code to %code requires, %code provides, or %code top as needed.

The Bison declarations section contains declarations that define terminal and nonterminal symbols, specify precedence, and so on. In some simple grammars you may not need any declarations. See Bison Declarations.

A Bison grammar is a list of rules.

A rule is said to be empty if its right-hand side (components) is empty. It means that result can match the empty string. For example, here is how to define an optional semicolon:

semicolon.opt: | ";";

It is easy not to see an empty rule, especially when | is used. The %empty directive allows you to make explicit that a rule is empty on purpose:

semicolon.opt:
  %empty
| ";"
;

Flagging a non-empty rule with %empty is an error.
If run with -Wempty-rule, bison will report empty rules without %empty. Using %empty enables this warning, unless -Wno-empty-rule was specified.

The %empty directive is a Bison extension; it does not work with Yacc. To remain compatible with POSIX Yacc, it is customary to write a comment ‘/* empty */’ in each rule with no components:

semicolon.opt:
  /* empty */
| ";"
;

A rule is called recursive when its result nonterminal appears also on its right hand side. Nearly all Bison grammars need to use recursion, because that is the only way to define a sequence of any number of a particular thing. Consider this recursive definition of a comma-separated sequence of one or more expressions:

expseq1:
  exp
| expseq1 ',' exp
;

Since the recursive use of expseq1 is the leftmost symbol in the right hand side, we call this left recursion. By contrast, here the same construct is defined using right recursion:

expseq1:
  exp
| exp ',' expseq1
;

Any kind of sequence can be defined using either left recursion or right recursion, but you should always use left recursion, because it can parse a sequence of any number of elements with bounded stack space. Right recursion uses up space on the Bison stack in proportion to the number of elements in the sequence, because all the elements must be shifted onto the stack before the rule can be applied even once. See The Bison Parser Algorithm, for further explanation of this.

Indirect or mutual recursion occurs when the result of the rule does not appear directly on its right hand side, but does appear in rules for other nonterminals which do appear on its right hand side. For example:

expr:
  primary
| primary '+' primary
;

primary:
  constant
| '(' expr ')'
;

defines two mutually-recursive nonterminals, since each refers to the other.

In a calculator, for instance, the meaning of the grouping ‘x + y’ is to add the numbers associated with x and y.

As another extension to POSIX, you may specify multiple %union declarations; their contents are concatenated. However, only the first %union declaration can specify a tag. Note that, unlike making a union declaration in C, you need not write a semicolon after the closing brace.

Alternatively, you can provide the union yourself, for instance in a header such as parser.h:

typedef union YYSTYPE
{
  …
} YYSTYPE;

and then your grammar can use the following instead of %union:

%{ #include "parser.h" %}
%define api.value.type {union YYSTYPE}
%type <val> expr
%token <tptr> ID

Actually, you may also provide a struct rather than a union, which may be handy if you want to track information for every symbol (such as preceding comments). The type you provide may even be structured and include pointers, in which case the type tags you provide may be composite, with ‘.’ and ‘->’ operators.

If you have chosen a single data type for semantic values, the $$ and $n constructs always have that data type. If you have used %union to specify a variety of data types, then you must declare a choice among these types for each terminal or nonterminal symbol that can have a semantic value. Then each time you use $$ or $n, its data type is determined by which symbol it refers to in the rule. In this example,

exp: … | exp '+' exp { $$ = $1 + $3; }

$1 and $3 refer to instances of exp, so they all have the data type declared for the nonterminal symbol exp.
If $2 were used, it would have the data type declared for the terminal symbol '+', whatever that might be.

Alternatively, you can specify the data type when you refer to the value, by inserting ‘<type>’ after the ‘$’ at the beginning of the reference. For example, if you have defined types as shown here:

%union {
  int itype;
  double dtype;
}

then you can write $<itype>1 to refer to the first subunit of the rule as an integer, or $<dtype>1 to refer to it as a double.

Occasionally it is useful to put an action in the middle of a rule. These actions are written just like usual end-of-rule actions, but they are executed before the parser even recognizes the following components. (A mid-rule action cannot use the lookahead token at this time, since the parser is still deciding what to do about it. See Lookahead Tokens.)

Actions are not only useful for defining language semantics, but also for describing the behavior of the output parser with locations.

The most obvious way of building locations of syntactic groupings is very similar to the way semantic values are computed. In a given rule, several constructs can be used to access the locations of the elements being matched. The location of the nth component of the right hand side is @n, while the location of the left hand side grouping is @$. In addition, the named references construct @name and @[name] may also be used to address the symbol locations. See Named References, for more information about using the named references construct.

Here is a basic example using the default data type for locations:

exp:
  …
| exp '/' exp
    {
      @$.first_column = @1.first_column;
      @$.first_line = @1.first_line;
      @$.last_column = @3.last_column;
      @$.last_line = @3.last_line;
      if ($3)
        $$ = $1 / $3;
      else
        {
          $$ = 1;
          fprintf (stderr, "%d.%d-%d.%d: division by zero",
                   @3.first_line, @3.first_column,
                   @3.last_line, @3.last_column);
        }
    }

As for semantic values, there is a default action for locations that is run each time a rule is matched. It sets the beginning of @$ to the beginning of the first symbol, and the end of @$ to the end of the last symbol. With this default action, the location tracking can be fully automatic. The example above simply rewrites this way:

exp:
  …
| exp '/' exp
    {
      if ($3)
        $$ = $1 / $3;
      else
        {
          $$ = 1;
          fprintf (stderr, "%d.%d-%d.%d: division by zero",
                   @3.first_line, @3.first_column,
                   @3.last_line, @3.last_column);
        }
    }

It is also possible to access the location of the lookahead token, if any, from a semantic action. This location is stored in yylloc. See Special Features for Use in Actions.

As described in the preceding sections, the traditional way to refer to any semantic value or location is a positional reference, which takes the form $n, $$, @n, and @$. However, such a reference is not very descriptive. Moreover, if you later decide to insert or remove symbols in the right-hand side of a grammar rule, the need to renumber such references can be tedious and error-prone.

To avoid these issues, you can also refer to a semantic value or location using a named reference. First of all, original symbol names may be used as named references.
For example:

invocation: op '(' args ')'
  { $invocation = new_invocation ($op, $args, @invocation); }

Positional and named references can be mixed arbitrarily. For example:

invocation: op '(' args ')'
  { $$ = new_invocation ($op, $args, @$); }

However, sometimes regular symbol names are not sufficient due to ambiguities:

exp: exp '/' exp
  { $exp = $exp / $exp; } // $exp is ambiguous.

exp: exp '/' exp
  { $$ = $1 / $exp; }     // One usage is ambiguous.

exp: exp '/' exp
  { $$ = $1 / $3; }       // No error.

When ambiguity occurs, explicitly declared names may be used for values and locations. Explicit names are declared as a bracketed name after a symbol appearance in rule definitions. For example:

exp[result]: exp[left] '/' exp[right]
  { $result = $left / $right; }

In order to access a semantic value generated by a mid-rule action, an explicit name may also be declared by putting a bracketed name after the closing brace of the mid-rule action code:

exp[res]: exp[x] '+' {$left = $x;}[left] exp[right]
  { $res = $left + $right; }

In references, in order to specify names containing dots and dashes, an explicit bracketed syntax $[name] and @[name] must be used:

if-stmt: "if" '(' expr ')' "then" then.stmt ';'
  { $[if-stmt] = new_if_stmt ($expr, $[then.stmt]); }

It often happens that named references are followed by a dot, dash or other C punctuation marks and operators. By default, Bison will read ‘$name.suffix’ as a reference to symbol value $name followed by ‘.suffix’, i.e., an access to the suffix field of the semantic value. In order to force Bison to recognize ‘name.suffix’ in its entirety as the name of a semantic value, the bracketed syntax ‘$[name.suffix]’ must be used.

The named references feature is experimental. More user feedback will help to stabilize it.

The first rule in the grammar file also specifies the start symbol, by default. If you want some other symbol to be the start symbol, you must declare it explicitly (see Languages and Context-Free Grammars).

You may require the minimum version of Bison to process the grammar. If the requirement is not met, bison exits with an error (exit status 63).

%require "version"

You can use %left, %right, %precedence, or %nonassoc instead of %token, if you wish to specify associativity and precedence (and you can give the tokens value types as well; see More Than One Value Type).

Use the %left, %right, %nonassoc, or %precedence declaration to declare a token and specify its precedence and associativity, all at once. These are called precedence declarations. See Operator Precedence, for general information on operator precedence.

The syntax of a precedence declaration is nearly the same as that of %token: either

%left symbols…

or

%left <type> symbols…

And indeed any of these declarations serves the purposes of %token. But in addition, they specify the associativity and relative precedence for all the symbols:

- %left specifies left-associativity (grouping x with y first) and %right specifies right-associativity (grouping y with z first).
- %nonassoc specifies no associativity, which means that ‘x op y op z’ is considered a syntax error.
- %precedence gives only precedence to the symbols, and defines no associativity at all. Use this to define precedence only, and leave any potential conflict due to associativity enabled.
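To make the associativity choices concrete, here is a small illustration of our own (not quoted from the manual) of how each declaration groups ‘a op b op c’:

%left '-'        /* a - b - c  is parsed as  (a - b) - c    */
%right '='       /* a = b = c  is parsed as  a = (b = c)    */
%nonassoc '<'    /* a < b < c  is reported as a syntax error */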
For backward compatibility, there is a confusing difference between the argument lists of %token and precedence declarations. Only a %token can associate a literal string with a token type name. A precedence declaration always interprets a literal string as a reference to a separate token. For example:

%left OR "<="         // Does not declare an alias.
%left OR 134 "<=" 135 // Declares 134 for OR and 135 for "<=".

When run-time traces are enabled (see Tracing Your Parser), the parser reports its actions, such as reductions. When a symbol involved in an action is reported, only its kind is displayed, as the parser cannot know how semantic values should be formatted.

The %printer directive defines code that is called when a symbol is reported. Its syntax is the same as %destructor (see Freeing Discarded Symbols).

The braced code is invoked whenever the parser displays one of the symbols. Within the code, yyoutput denotes the output stream (a FILE* in C, and an std::ostream& in C++), $$ (or $<tag>$) designates the semantic value associated with the symbol, and @$ its location. The additional parser parameters are also available (see The Parser Function yyparse).

The symbols are defined as for %destructor (see Freeing Discarded Symbols): they can be per-type (e.g., ‘<ival>’), per-symbol (e.g., ‘exp’, ‘NUM’, ‘"float"’), typed per-default (i.e., ‘<*>’), or untyped per-default (i.e., ‘<>’). For example:

%union { char *string; }
%token <string> STRING1 STRING2
%type <string> string1 string2
%union { char character; }
%token <character> CHR
%type <character> chr
%token TAGLESS

%printer { fprintf (yyoutput, "'%c'", $$); } <character>
%printer { fprintf (yyoutput, "&%p", $$); } <*>
%printer { fprintf (yyoutput, "\"%s\"", $$); } STRING1 string1
%printer { fprintf (yyoutput, "<>"); } <>

This guarantees that, when the parser prints any symbol that has a semantic type tag other than <character>, it displays the address of the semantic value by default. However, when the parser displays a STRING1 or a string1, it formats it as a string in double quotes. It performs only the second %printer in this case, so it prints only once. Finally, the parser prints ‘<>’ for any symbol, such as TAGLESS, that has no semantic type tag. See Enabling Debug Traces for mfcalc, for a complete example.

Bison assumes by default that the start symbol for the grammar is the first nonterminal specified in the grammar specification section. The programmer may override this restriction with the %start declaration as follows:

%start symbol

Normally, Bison generates a parser which is not reentrant. This is suitable for most uses, and it permits compatibility with Yacc. (The standard Yacc interfaces are inherently nonreentrant, because they use statically allocated variables for communication with yylex, including yylval and yylloc.)

Alternatively, you can generate a pure, reentrant parser. The Bison declaration ‘%define api.pure’ says that you want the parser to be reentrant. It looks like this:

%define api.pure full
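As an illustration of what reentrancy means for the lexer, here is a sketch of our own, adapting the rpcalc-style lexer to the pure calling convention. It assumes ‘%define api.value.type {double}’ (so YYSTYPE is double), a NUM token, and <stdio.h> plus <ctype.h> in the prologue; without %locations, a pure yylex receives a pointer to the semantic value instead of assigning the global yylval:

int
yylex (YYSTYPE *lvalp)
{
  int c = getchar ();

  /* Skip white space.  */
  while (c == ' ' || c == '\t')
    c = getchar ();

  /* Process numbers: write the value through the pointer.  */
  if (c == '.' || isdigit (c))
    {
      ungetc (c, stdin);
      if (scanf ("%lf", lvalp) != 1)
        return 0;
      return NUM;
    }

  /* Return end-of-input.  */
  if (c == EOF)
    return 0;

  /* Return a single char.  */
  return c;
}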
(The current push parsing interface is experimental and may evolve. More user feedback will help to stabilize it.)

A pull parser is called once and it takes control until all its input is completely parsed. A push parser, on the other hand, is called each time a new token is made available.

A push parser is typically useful when the parser is part of a main event loop in the client’s application. This is typically a requirement of a GUI, when the main event loop needs to be triggered within a certain time period.

Normally, Bison generates a pull parser. The following Bison declaration says that you want the parser to be a push parser (see api.push-pull):

%define api.push-pull push

In almost all cases, you want to ensure that your push parser is also a pure parser (see A Pure (Reentrant) Parser). The only time you should create an impure push parser is to have backwards compatibility with the impure Yacc pull mode interface. Unless you know what you are doing, your declarations should look like this:

%define api.pure full
%define api.push-pull push

There is a major notable functional difference between the pure push parser and the impure push parser. It is acceptable for a pure push parser to have many parser instances, of the same type of parser, in memory at the same time. An impure push parser should only use one parser at a time.

When a push parser is selected, Bison will generate some new symbols in the generated parser. yypstate is a structure that the generated parser uses to store the parser’s state. yypstate_new is the function that will create a new parser instance. yypstate_delete will free the resources associated with the corresponding parser instance. Finally, yypush_parse is the function that should be called whenever a token is available to provide the parser. A trivial example of using a pure push parser would look like this:

int status;
yypstate *ps = yypstate_new ();
do {
  status = yypush_parse (ps, yylex (), NULL);
} while (status == YYPUSH_MORE);
yypstate_delete (ps);

If the user decided to use an impure push parser, a few things about the generated parser will change. The yychar variable becomes a global variable instead of a variable in the yypush_parse function. For this reason, the signature of the yypush_parse function is changed to remove the token as a parameter. A nonreentrant push parser example would thus look like this:

extern int yychar;
int status;
yypstate *ps = yypstate_new ();
do {
  yychar = yylex ();
  status = yypush_parse (ps);
} while (status == YYPUSH_MORE);
yypstate_delete (ps);

That’s it. Notice the next token is put into the global variable yychar for use by the next invocation of the yypush_parse function.

Bison also supports both the push parser interface along with the pull parser interface in the same generated parser. In order to get this functionality, you should replace the ‘%define api.push-pull push’ declaration with the ‘%define api.push-pull both’ declaration. Doing this will create all of the symbols mentioned earlier along with the two extra symbols, yyparse and yypull_parse. yyparse can be used exactly as it normally would be used. However, the user should note that it is implemented in the generated parser by calling yypull_parse. This makes the yyparse function that is generated with the ‘%define api.push-pull both’ declaration slower than the normal yyparse function. If the user calls the yypull_parse function it will parse the rest of the input stream.
It is possible to yypush_parse tokens to select a subgrammar and then yypull_parse the rest of the input stream. If you would like to switch back and forth between parsing styles, you would have to write your own yypull_parse function that knows when to quit looking for input. An example of using the yypull_parse function would look like this:

yypstate *ps = yypstate_new ();
yypull_parse (ps); /* Will call the lexer */
yypstate_delete (ps);

Adding the ‘%define api.pure’ declaration does exactly the same thing to the generated parser with ‘%define api.push-pull both’ as it did for ‘%define api.push-pull push’.

There are many features of Bison’s behavior that can be controlled by assigning the feature a single value. For historical reasons, some such features are assigned values by dedicated directives, such as %start, which assigns the start symbol. However, newer such features are associated with variables, which are assigned by the %define directive:

%define variable value

This defines variable to value. The type of the values depends on the syntax. Braces denote a value in the target language (e.g., a namespace, a type, etc.). Keyword values (no delimiters) denote finite choice (e.g., a variation of a feature). String values denote remaining cases (e.g., a file name).

It is an error if a variable is defined by %define multiple times, but see -D name[=value].

The rest of this section summarizes variables and values that %define accepts.

Some variables take Boolean values. In this case, Bison will complain if the variable definition does not meet one of the following conditions:

- value is true
- value is omitted (or "" is specified); this is equivalent to true
- value is false

What variables are accepted, as well as their meanings and default values, depend on the selected target language and/or the parser skeleton (see %language, see %skeleton). Unaccepted variables produce an error. Some of the accepted variables are described below.

%define api.namespace {foo::bar}

Bison uses foo::bar verbatim in references such as:

foo::bar::parser::semantic_type

However, to open a namespace, Bison removes any leading :: and then splits on any remaining occurrences:

namespace foo { namespace bar {
  class position;
  class location;
} }

Accepted values: any absolute or relative C++ namespace, such as "foo" or "::foo::bar", without a trailing "::".

Default value: the value specified by %name-prefix, which defaults to yy. This usage of %name-prefix is for backward compatibility and can be confusing since %name-prefix also specifies the textual prefix for the lexical analyzer function. Thus, if you specify %name-prefix, it is best to also specify ‘%define api.namespace’ so that %name-prefix only affects the lexical analyzer function. For example, if you specify:

%define api.namespace {foo}
%name-prefix "bar::"

the parser namespace is foo and yylex is referenced as bar::lex.

%define api.location.type
History: introduced as location_type for C++ in Bison 2.5 and for Java in Bison 2.4.

%define api.pure
Accepted values: true, false, full. The value may be omitted: this is equivalent to specifying true, as is the case for Boolean values.

When %define api.pure full is used, the parser is made reentrant. This changes the signature for yylex (see Pure Calling), and also that of yyerror when the tracking of locations has been activated, as shown below.

The true value is very similar to the full value; the only difference is in the signature of yyerror on Yacc parsers without %parse-param, for historical reasons. I.e., if ‘%locations %define api.pure’ is passed, then the prototypes for yyerror are:

void yyerror (char const *msg);                 // Yacc parsers.
void yyerror (YYLTYPE *locp, char const *msg);  // GLR parsers.

But if ‘%locations %define api.pure %parse-param {int *nastiness}’ is used, then both parsers have the same signature:

void yyerror (YYLTYPE *llocp, int *nastiness, char const *msg);

(see The Error Reporting Function yyerror). Default value: false. The full value was introduced in Bison 2.7.

%define api.push-pull
Accepted values: pull, push, both. Default value: pull.

%define api.token.prefix
For example:

%token FILE for ERROR
%define api.token.prefix {TOK_}
%%
start: FILE for ERROR;

generates the definition of the symbols TOK_FILE, TOK_for, and TOK_ERROR in the generated source files. In particular, the scanner must use these prefixed token names, while the grammar itself may still use the short names (as in the sample rule given above). The generated informational files (*.output, *.xml, *.dot) are not modified by this prefix.

Bison also prefixes the generated member names of the semantic value union. See Generating the Semantic Value Type, for more details. See Calc++ Parser and Calc++ Scanner, for a complete example.

%define api.value.type
Accepted values:

- none: the grammar has no semantic value at all. This is not properly supported yet.
- union-directive: the type is defined thanks to the %union directive. You don’t have to define api.value.type in that case; using %union suffices. See The Union Declaration. For instance:

  %define api.value.type union-directive
  %union
  {
    int ival;
    char *sval;
  }
  %token <ival> INT "integer"
  %token <sval> STR "string"

- union: the symbols are defined with type names, from which Bison will generate a union. For instance:

  %define api.value.type union
  %token <int> INT "integer"
  %token <char *> STR "string"

  This feature needs user feedback to stabilize. Note that most C++ objects cannot be stored in a union.

- variant: this is similar to union, but special storage techniques are used to allow any kind of C++ object to be used. For instance:

  %define api.value.type variant
  %token <int> INT "integer"
  %token <std::string> STR "string"

  This feature needs user feedback to stabilize. See C++ Variants.

- {type}: use this type as the semantic value. For instance:

  %code requires
  {
    struct my_value
    {
      enum { is_int, is_str } kind;
      union
      {
        int ival;
        char *sval;
      } u;
    };
  }
  %define api.value.type {struct my_value}
  %token <u.ival> INT "integer"
  %token <u.sval> STR "string"

Default value: union-directive if %union is used, otherwise int if type tags are used (i.e., ‘%token <type>…’ or ‘%type <type>…’ is used).

%define api.value.union.name
The name of the union (not the name of the typedef). This variable is set to id when ‘%union id’ is used. There is no clear reason to give this union a name. Default value: YYSTYPE.

location_type: obsoleted by api.location.type since Bison 2.7.

%define lr.default-reduction
Accepted values: most, consistent, accepting. Default value: accepting if lr.type is canonical-lr, most otherwise. History: introduced as lr.default-reductions in 2.5, renamed as lr.default-reduction in 3.0.

%define lr.keep-unreachable-state
Default value: false. History: introduced as lr.keep_unreachable_states in 2.3b, renamed as lr.keep-unreachable-states in 2.5, and as lr.keep-unreachable-state in 3.0.

%define lr.type
Accepted values: lalr, ielr, canonical-lr. Default value: lalr.

namespace: obsoleted by api.namespace.

%define parse.error
Accepted values: simple, verbose. With simple, error messages passed to yyerror are simply "syntax error". With verbose, error messages report the unexpected token, and possibly the expected ones; however, this report can often be incorrect when LAC is not enabled (see LAC). Default value: simple.

%define parse.lac
Accepted values: none, full. Default value: none.

%define parse.trace
In C/C++, define the macro YYDEBUG (or prefixDEBUG with ‘%define api.prefix {prefix}’; see Multiple Parsers in the Same Program) to 1 in the parser implementation file if it is not already defined, so that the debugging facilities are compiled. Default value: false.

The renamed external symbols include yyparse, yylex, yyerror, yynerrs, yylval, yylloc, yychar and yydebug.
If you use a push parser, yypush_parse, yypull_parse, yypstate, yypstate_new and yypstate_delete will also be renamed. (See Bison Symbols, and the option --name-prefix in Bison Options.)

You call the function yypstate_delete to delete a parser instance. (The current push parsing interface is experimental and may evolve. More user feedback will help to stabilize it.) This function is available if either the ‘%define api.push-pull push’ or ‘%define api.push-pull both’ declaration is used. See A Push Parser. This function will reclaim the memory associated with a parser instance. After this call, you should no longer attempt to use the parser instance.

Use the ‘-d’ option when you run Bison, so that it will write these macro definitions into the separate parser header file, name.tab.h, which you can include in the other source files that need it. See Invoking Bison.

The value that yylex returns must be the positive numeric code for the type of token it has just found; a zero or negative value signifies end-of-input.

When a token is referred to in the grammar rules by a name, that name in the parser implementation file becomes a C macro whose definition is the proper numeric code for that token type. So yylex can use the name to indicate that type. See Symbols.

When a token is referred to in the grammar rules by a character literal, the numeric code for that character is also the code for the token type. So yylex can simply return that character code, possibly converted to unsigned char to avoid sign-extension. The null character must not be used this way, because its code is zero and that signifies end-of-input.

Here is an example showing these things:

int
yylex (void)
{
  …
  if (c == EOF)    /* Detect end-of-input. */
    return 0;
  …
}

yylex can use these symbolic names like all others. In this case, the use of the literal string tokens in the grammar file has no effect on yylex. Alternatively, yylex can find the multicharacter token in the yytname table. The index of the token in the table is the token type’s code. The name of a multicharacter token is recorded in yytname with a double-quote, the token’s characters, and another double-quote. The token’s characters are escaped as necessary to be suitable as input to Bison.

Here’s code for looking up a multicharacter token in yytname, assuming that the characters of the token are stored in token_buffer, and assuming that the token does not contain any characters like ‘"’ that require escaping. See Decl Summary.

When you use the Bison declaration ‘%define api.pure full’, yylex is called with additional arguments (see below). To pass additional arguments to yylex, use %lex-param just like %parse-param (see Parser Function). To pass additional arguments to both yylex and yyparse, use %param.

%lex-param {argument-declaration} …
Specify that argument-declaration are additional yylex argument declarations. You may pass one or more such declarations, which is equivalent to repeating %lex-param.

%param {argument-declaration} …
Specify that argument-declaration are additional yylex/yyparse argument declarations. This is equivalent to ‘%lex-param {argument-declaration} … %parse-param {argument-declaration} …’. You may pass one or more declarations, which is equivalent to repeating %param.
For instance: %lex-param {scanner_mode *mode} %parse-param {parser_mode *mode} %param {environment_type *env} results in the following signatures: int yylex (scanner_mode *mode, environment_type *env); int yyparse (parser_mode *mode, environment_type *env); If ‘%define api.pure full’ is added: int yylex (YYSTYPE *lvalp, scanner_mode *mode, environment_type *env); int yyparse (parser_mode *mode, environment_type *env); and finally, if both ‘%define api.pure full’ and %locations are used: int yylex (YYSTYPE *lvalp, YYLTYPE *llocp, scanner_mode *mode, environment_type *env); int yyparse (parser_mode *mode, environment_type *env); Next: Action Features, Previous: Lexical, Up: Interface [Contents][Index]. Next: Internationalization, Previous: Error Reporting, Up: Interface [Contents][Index] declaration. See Data Types of Values in Actions. Like $n but specifies alternative typealt in the union specified by the %union declaration.. Value stored in yychar when there is no lookahead token. Value stored in yychar when the lookahead is the end of the input stream. ;. The expression YYRECOVERING () yields 1 when the parser is recovering from a syntax error, and 0 otherwise. See Error Recovery. Variable containing either the lookahead token, or YYEOF when the lookahead is the end of the input stream, or YYEMPTY when no lookahead has been performed so the next token is not yet known. Do not modify yychar in a deferred semantic action (see GLR Semantic Actions). See Lookahead Tokens. ; Discard the current lookahead token. This is useful primarily in error rules. Do not invoke yyclearin in a deferred semantic action (see GLR Semantic Actions). See Error Recovery. ; Resume generating error messages immediately for subsequent syntax errors. This is useful primarily in error rules. See Error Recovery. Variable containing the lookahead token location when yychar is not set to YYEMPTY or YYEOF. Do not modify yylloc in a deferred semantic action (see GLR Semantic Actions). See Actions and Locations. Variable containing the lookahead token semantic value when yychar is not set to YYEMPTY or YYEOF. Do not modify yylval Locations. Previous: Action Features, Up: Interface [Contents][Index] A Bison-generated parser can print diagnostics, including error and tracing messages. By default, they appear in English. However, Bison also supports outputting diagnostics in the user’s native language. To make this work, the user should set the usual environment variables. See The User’s View in GNU gettext utilities. For example, the shell command ‘export LC_ALL=fr_CA.UTF-8’ might set the user’s locale to French Canadian using the UTF-8 encoding. The exact set of available locales depends on the user’s installation. The maintainer of a package that uses a Bison-generated parser enables the internationalization of the parser’s output through the following steps. Here we assume a package that uses GNU Autoconf and GNU Automake. cp /usr/local/share/aclocal/bison-i18n.m4 m4/bison-i18n.m4 AM_GNU_GETTEXTinvocation, add an invocation of BISON_I18N. This macro is defined in the file bison-i18n.m4 that you copied earlier. It causes ‘configure’ to find the value of the BISON_LOCALEDIRvariable, and it defines the source-language symbol YYENABLE_NLSto enable translations in the Bison-generated parser. mainfunction of your program, designate the directory containing Bison’s runtime message catalog, through a call to ‘bindtextdomain’ with domain name ‘bison-runtime’. 
For example: bindtextdomain ("bison-runtime", BISON_LOCALEDIR); Typically this appears after any other call bindtextdomain (PACKAGE, LOCALEDIR) that your package already has. Here we rely on ‘BISON_LOCALEDIR’ to be defined as a string through the Makefile. mainfunction, make ‘BISON_LOCALEDIR’ available as a C preprocessor macro, either in ‘DEFS’ or in ‘AM_CPPFLAGS’. For example: DEFS = @DEFS@ -DBISON_LOCALEDIR='"$(BISON_LOCALEDIR)"' or: AM_CPPFLAGS = -DBISON_LOCALEDIR='"$(BISON_LOCALEDIR)"' autoreconfto generate the build infrastructure. Next:: Shift/Reduce, Up: Algorithm [Contents][Index]. Next: Precedence, Previous: Lookahead, Up: Algorithm [Contents][Index] Suppose we are parsing a language which has if-then and if-then-else statements, with a pair of rules like this: if_stmt: "if" expr "then" stmt | "if" expr "then" stmt "else" stmt ; Here "if", "then" and "else" are terminal symbols for specific keyword tokens. When the "else" token is read and becomes the lookahead, you can use the %expect n declaration. There will be no warning as long as the number of shift/reduce conflicts is exactly n, and Bison will report an error if there is a different number. See Suppressing Conflict Warnings. However, we don’t recommend the use of %expect (except ‘%expect 0’!), as an equal number of conflicts does not mean that they are the same. When possible, you should rather use precedence directives to fix the conflicts explicitly (see Using Precedence For Non Operators). The definition of if_stmt above is solely to blame for the conflict, but the conflict does not actually appear without additional rules. Here is a complete Bison grammar file that actually manifests the conflict: %% stmt: expr | if_stmt ; if_stmt: "if" expr "then" stmt | "if" expr "then" stmt "else" stmt ; expr: "identifier" ; Next: Contextual Precedence, Previous: Shift/Reduce, Up: Algorithm [Contents][Index] Another situation where shift/reduce conflicts appear is in arithmetic expressions. Here shifting is not always the preferred resolution; the Bison declarations for operator precedence allow you to specify when to shift and when to reduce. Next:. Next:… Next: How Precedence, Previous: Precedence Only, Up: Precedence [Contents][Index] In our example, we would want the following declarations: %left '<' %left '-' %left '*' In a more complete example, which supports other operators as well, we would declare them in groups of equal precedence. For example, '+' is declared with '-': %left '<' '>' '=' "!=" "<=" ">=" %left '+' '-' %left '*' '/' Next: Non Operators, Previous: Precedence Examples, Up: Precedence [Contents][Index] Context-Dependent Precedence.) Finally, the resolution of conflicts works by comparing the precedence of the rule being considered with that of the lookahead ‘-v’ (see Invoking Bison) says how each conflict was resolved. Not all rules and not all tokens have precedence. If either the rule or the lookahead token has no precedence, then the default is to shift. Previous:”. Next: Parser States, Previous: Precedence, Up: Algorithm [Contents][Index] Next: Reduce/Reduce, Previous: Contextual Precedence, Up: Algorithm [Contents][Index]ahead token is read, the current parser state together with the type of lookahead token are looked up in a table. This table entry can say, “Shift the lookaheadahead token is erroneous in the current state. This causes error processing to begin (see Error Recovery). Next:: Tuning LR, Previous: Reduce/Reduce, Up: Algorithm [Contents][Index]. Next:. 
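Returning to the dangling-else conflict shown earlier, here is a hedged sketch of the recommended precedence-based fix (the %precedence declarations are an illustration added here, not part of the original text; the token names follow the example grammar):

%precedence "then"
%precedence "else"   /* Declared later, so "else" has higher precedence. */
%%
if_stmt:
  "if" expr "then" stmt
| "if" expr "then" stmt "else" stmt
;

Because "then" is the last terminal of the first rule, that rule takes the precedence of "then"; the higher precedence of the lookahead "else" then makes the parser shift it, binding each "else" to the innermost unmatched "if".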
Next:: LAC, Previous: LR Table Construction, Up: Tuning LR [Contents][Index]-reduction.) Next: Unreachable States, Previous: Default Reductions, Up: Tuning LR [Contents][Index] (see Error Reporting), the expected token list in the syntax error message can both contain invalid tokens and omit valid tokens. The culprits for the above problems are %nonassoc, default reductions in inconsistent states (see Default Reductions), and parser state merging. Because IELR and LALR merge parser states, they suffer the most. Canonical LR can suffer only if %nonassoc is used or if default reductions are enabled for inconsistent states. LAC (Lookahead Correction) is a new mechanism within the parsing algorithm that solves these problems for canonical LR, IELR, and LALR without sacrificing %nonassoc, default reductions, or state merging. You can enable LAC with the %define parse.lac directive. Enable LAC to improve syntax error handling. none(default) full (This feature is experimental. More user feedback will help to stabilize it. Moreover, it is currently only available for deterministic parsers in C.) Conceptually, the LAC mechanism is straight-forward. Whenever the parser fetches a new token from the scanner so that it can determine the next parser action, it immediately suspends normal parsing and performs an exploratory parse using a temporary copy of the normal parser state stack. During this exploratory parse, the parser does not perform user semantic actions. If the exploratory parse reaches a shift action, normal parsing then resumes on the normal parser stacks. If the exploratory parse reaches an error instead, the parser reports a syntax error. If verbose syntax error messages are enabled, the parser must then discover the list of expected tokens, so it performs a separate exploratory parse for each token in the grammar. There is one subtlety about the use of LAC. That is, when in a consistent parser state with a default reduction, the parser will not attempt to fetch a token from the scanner because no lookahead is needed to determine the next parser action. Thus, whether default reductions are enabled in consistent states (see Default Reductions) affects how soon the parser detects a syntax error: immediately when it reaches an erroneous token or when it eventually needs that token as a lookahead to determine the next parser action. The latter behavior is probably more intuitive, so Bison currently provides no way to achieve the former behavior while default reductions are enabled in consistent states. Thus, when LAC is in use, for some fixed decision of whether to enable default reductions in consistent states,. There are a few caveats to consider when using LAC: IELR plus LAC does have one shortcoming relative to canonical LR. Some parsers generated by Bison can loop infinitely. LAC does not fix infinite parsing loops that occur between encountering a syntax error and detecting it, but enabling canonical LR or disabling default reductions sometimes does. Because of internationalization considerations, Bison-generated parsers limit the size of the expected token list they are willing to report in a verbose syntax error message. If the number of expected tokens exceeds that limit, the list is simply dropped from the message. Enabling LAC can increase the size of the list and thus cause the parser to drop it. Of course, dropping the list is better than reporting an incorrect list. Because LAC requires many parse actions to be performed twice, it can have a performance penalty. 
However, not all parse actions must be performed twice. Specifically, during a series of default reductions in consistent states and shift actions, the parser never has to initiate an exploratory parse. Moreover, the most time-consuming tasks in a parse are often the file I/O, the lexical analysis performed by the scanner, and the user’s semantic actions, but none of these are performed during the exploratory parse. Finally, the base of the temporary stack used during an exploratory parse is a pointer into the normal parser state stack so that the stack is never physically copied. In our experience, the performance penalty of LAC has proved insignificant for practical grammars.

While the LAC algorithm shares techniques that have been recognized in the parser community for years, for the publication that introduces LAC, see Denny 2010 May.

Error Recovery

It is not usually acceptable to have a program terminate on a syntax error. You can define how to recover from a syntax error by writing rules to recognize the special token error. For example:

stmts:
  %empty
| stmts '\n'
| stmts exp '\n'
| stmts error '\n'

The fourth rule in this example says that an error followed by a newline makes a valid addition to any stmts.

What happens if a syntax error occurs in the middle of an exp? The error recovery rule, interpreted strictly, applies to the precise sequence of a stmts, an error and a newline. If an error occurs in the middle of an exp, there will probably be some additional tokens and subexpressions on the stack after the last stmts, and there will be tokens to read before the next newline, so the rule is not applicable in the ordinary way. But Bison can force the situation to fit the rule, by discarding part of the semantic context and part of the input: it discards states and objects from the stack until it gets back to a state in which the error token is acceptable. (This means that the subexpressions already parsed are discarded, back to the last complete stmts.) At this point the error token can be shifted. Then, if the old lookahead token is not acceptable to be shifted next, the parser reads tokens and discards them until it finds a token which is acceptable. In this example, Bison reads and discards input until the next newline so that the fourth rule can apply. Note that discarded symbols are possible sources of memory leaks, see Freeing Discarded Symbols, for a means to reclaim this memory.

The choice of error rules in the grammar is a choice of strategies for error recovery. A simple and useful strategy is simply to skip the rest of the current input line or current statement if an error is detected:

stmt: error ';'  /* On error, skip until ';' is read. */

Suppose that instead a spurious semicolon is inserted in the middle of a valid stmt. After the error recovery rule recovers from the first error, another syntax error will be found straightaway, since the text following the spurious semicolon is also an invalid stmt. To prevent an outpouring of messages, the parser suppresses error reports until three consecutive input tokens have been successfully shifted; the macro yyerrok makes reports resume immediately, and ‘yyerrok;’ is a valid C statement.

The previous lookahead token is reanalyzed immediately after an error. If this is unacceptable, then the macro yyclearin may be used to clear this token. Write the statement ‘yyclearin;’ in the error rule’s action. See Special Features for Use in Actions. For example, suppose that on a syntax error, an error handling routine is called that advances the input stream to some point where parsing should once again commence. The next symbol returned by the lexical scanner is probably correct. The previous lookahead token ought to be discarded with ‘yyclearin;’.

The expression YYRECOVERING () yields 1 when the parser is recovering from a syntax error, and 0 otherwise. Syntax error diagnostics are suppressed while recovering from a syntax error.
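A hedged sketch pulling the pieces above together (the grammar is the stmts example from this section; the action is illustrative):

stmts:
  %empty
| stmts '\n'
| stmts exp '\n'
| stmts error '\n'  { yyerrok; }  /* Resume error messages immediately. */
;

The action runs once the three symbols of the recovery rule have been shifted; ‘yyclearin;’ could be added to the same action if the lookahead token that triggered the error should be discarded as well.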
Previous: Lexical Tie-ins, Up: Context Dependency [Contents][Index]. Next: Invocation, Previous: Context Dependency, Up: Top [Contents][Index]). As documented elsewhere (see The Bison Parser Algorithm) Bison parsers are shift/reduce automata. In some cases (much more frequent than one would hope), looking at this automaton is required to tune or simply fix a parser. The textual file is generated when the options --report or --verbose are specified, see Invoking Bison. Its name is made by removing ‘.tab.c’ or ‘.c’ from the parser implementation file name, and adding ‘.output’ instead. Therefore, if the grammar file is foo.y, then the parser implementation useless in grammar calc.y: warning: 1 rule useless in grammar calc.y:12.1-7: warning: nonterminal useless in grammar: useless calc.y:12.10-12: warning: rule useless in grammar: useless: STR calc.y: conflicts: 7 shift/reduce When given --report=state, in addition to calc.tab.c, it creates a file calc.output with contents detailed below. The order of the output and the exact presentation might vary, but the interpretation is the same. The first section reports useless tokens, nonterminals lists states that still have conflicts. State 8 conflicts: 1 shift/reduce State 9 conflicts: 1 shift/reduce State 10 conflicts: 1 shift/reduce State 11 conflicts: 4 shift/reduce Then Bison reproduces the exact grammar it used: Grammar 0 $accept: exp $end 1 exp: exp '+' exp 2 | exp '-' exp 3 | exp '*' exp 4 | exp '/' exp 5 | NUM and reports the uses of the symbols: Terminals, with rules where they appear $end (0) 0 '*' (42) 3 '+' (43) 1 '-' (45) 2 '/' (47) 4 error (256) NUM (258) 5 STR (259) Nonterminals, with rules where they appear $accept (9) on left: 0 exp (10) on left: 1 2 3 4 5, on right: 0 1 2 3 4 Bison then proceeds onto the automaton itself, describing each state with its set of items, also known as pointed rules. Each item is a production rule together with a point (‘.’) marking the location of the input cursor. State 0 0 $accept: . exp $end NUM shift, and go to state 1 exp go to state 2 This reads as follows: “state 0 corresponds to being at the very beginning of the parsing, in the initial rule, right before the start symbol (here, exp). When the parser returns to this state right after having reduced a rule that produced an exp, the control flow jumps to state 2. If there is no such transition on a nonterminal symbol, and the lookahead is a NUM, then this token is shifted onto the parse stack, and the control flow jumps to state 1. Any other lookahead triggers a syntax error.” Even though the only active rule in state 0 seems to be rule 0, the report lists NUM as a lookahead token because NUM can be at the beginning of any rule deriving an exp. By default Bison reports the so-called core or kernel of the item set, but if you want to see more detail you can invoke bison with --report=itemset to list the derived items as well: State 0 0 $accept: . exp $end 1 exp: . exp '+' exp 2 | . exp '-' exp 3 | . exp '*' exp 4 | . exp '/' exp 5 | . NUM NUM shift, and go to state 1 exp go to state 2 In the state 1… State 1 5 exp: NUM . 0 $accept: exp . $end 1 exp: exp . '+' exp 2 | exp . '-' exp 3 | exp . '*' exp 4 | exp . '/' exp $end is ‘+’ it is shifted onto the parse stack, and the automaton jumps to state 4, corresponding to the item ‘exp: exp '+' . exp’. Since there is no default action, any lookahead not listed triggers a syntax error. The state 3 is named the final state, or the accepting state: State 3 0 $accept: exp $end . 
$default accept the initial rule is completed (the start symbol and the end-of-input were read), the parsing exits successfully. The interpretation of states 4 to 7 is straightforward, and is left to the reader. State 4 1 exp: exp '+' . exp NUM shift, and go to state 1 exp go to state 8 State 5 2 exp: exp '-' . exp NUM shift, and go to state 1 exp go to state 9 State 6 3 exp: exp '*' . exp NUM shift, and go to state 1 exp go to state 10 State 7 4 exp: exp '/' . exp NUM shift, and go to state 1 exp go to state 11 As was announced in beginning of the report, ‘State 8 conflicts: 1 shift/reduce’: State 8 1 exp: exp . '+' exp 1 | exp '+' exp . 2 | exp . '-' exp 3 | exp . '*' exp 4 | exp . '/' exp '*' shift, and go to state 6 '/' shift, and go to state 7 '/' [reduce using rule 1 (exp)] $default reduce using rule 1 (exp) Indeed, there are two actions associated to the lookahead ‘/’: either shifting (and going to state deterministic parsing a single decision can be made, Bison arbitrarily chose to disable the reduction, see Shift/Reduce Conflicts. Discarded actions are reported 8 is one such state: if the lookahead is: State 8 1 exp: exp . '+' exp 1 | exp '+' exp . [$end, '+', '-', '/'] 2 | exp . '-' exp 3 | exp . '*' exp 4 | exp . '/' exp '*' shift, and go to state 6 '/' shift, and go to state 7 '/' [reduce using rule 1 (exp)] $default reduce using rule 1 (exp) Note however that while ‘NUM + NUM / NUM’ is ambiguous (which results in the conflicts on ‘/’), ‘NUM + NUM * NUM’ is not: the conflict was solved thanks to associativity and precedence directives. If invoked with --report=solved, Bison includes information about the solved conflicts in the report: Conflict between rule 1 and token '+' resolved as reduce (%left '+'). Conflict between rule 1 and token '-' resolved as reduce (%left '-'). Conflict between rule 1 and token '*' resolved as shift ('+' < '*'). The remaining states are similar: State 9 1 exp: exp . '+' exp 2 | exp . '-' exp 2 | exp '-' exp . 3 | exp . '*' exp 4 | exp . '/' exp '*' shift, and go to state 6 '/' shift, and go to state 7 '/' [reduce using rule 2 (exp)] $default reduce using rule 2 (exp) State 10 1 exp: exp . '+' exp 2 | exp . '-' exp 3 | exp . '*' exp 3 | exp '*' exp . 4 | exp . '/' exp '/' shift, and go to state 7 '/' [reduce using rule 3 (exp)] $default reduce using rule 3 (exp) State 11 1 exp: exp . '+' exp 2 | exp . '-' exp 3 | exp . '*' exp 4 | exp . '/' exp 4 | exp '/' exp . '+'. Bison may also produce an HTML version of this output, via an XML file and XSLT processing (see Visualizing your parser in multiple formats). Next: Xml, Previous: Understanding, Up: Debugging [Contents][Index] As another means to gain better understanding of the shift/reduce automaton corresponding to the Bison parser, a DOT file can be generated. Note that debugging a real grammar with this is tedious at best, and impractical most of the times, because the generated files are huge (the generation of a PDF or PNG file from it will take very long, and more often than not it will fail due to memory exhaustion). This option was rather designed for beginners, to help them understand LR parsers. This file is generated when the --graph option is specified (see Invoking Bison). Its name is made by removing ‘.tab.c’ or ‘.c’ from the parser implementation file name, and adding ‘.dot’ instead. If the grammar file is foo.y, the Graphviz output file is called foo.dot. A DOT file may also be produced via an XML file and XSLT processing (see Visualizing your parser in multiple formats). 
The following grammar file, rr.y, will be used in the sequel:

%%
exp: a ";" | b ".";
a: "0";
b: "0";

The graphical output (see Figure 8.1) is very similar to the textual one, and as such it is more easily understood by making direct comparisons between them. See Debugging Your Parser, for a detailed analysis of the textual report.

The items (pointed rules) for each state are grouped together in graph nodes. Their numbering is the same as in the verbose file. See the following points, about transitions, for examples.

When invoked with --report=lookaheads, the lookahead tokens, when needed, are shown next to the relevant rule between square brackets as a comma separated list. This is the case in the figure for the representation of reductions, below.

The transitions are represented as directed edges between the current and the target states. Shifts are shown as solid arrows, labelled with the lookahead token for that shift. The following describes a reduction in the rr.output file:

State 3

    1 exp: a . ";"

    ";"  shift, and go to state 6

A Graphviz rendering of this portion of the graph is shown in the corresponding figure.

Reductions are shown as solid arrows, leading to a diamond-shaped node bearing the number of the reduction rule. The arrow is labelled with the appropriate comma separated lookahead tokens. If the reduction is the default action for the given state, there is no such label. This is how reductions are represented in the verbose file rr.output:

State 1

    3 a: "0" .  [";"]
    4 b: "0" .  ["."]

    "."       reduce using rule 4 (b)
    $default  reduce using rule 3 (a)

A Graphviz rendering of this portion of the graph is shown in the corresponding figure.

When unresolved conflicts are present, because in deterministic parsing a single decision must be made, Bison can arbitrarily choose to disable a reduction, see Shift/Reduce Conflicts. Discarded actions are distinguished by a red filling color on these nodes, just as they are reported between square brackets in the verbose file.

The reduction corresponding to rule number 0 is the acceptance of the input. It is shown as a blue diamond, labelled “Acc”.

The ‘go to’ jump transitions are represented as dotted lines bearing the name of the rule being jumped to.

Tracing Your Parser

When a Bison grammar compiles properly but parses “incorrectly”, the yydebug parser-trace feature helps you figure out why.

The YYPRINT Macro

Before %printer support, semantic values could be displayed using the YYPRINT macro, which works only for terminal symbols and only with the yacc.c skeleton. If you define YYPRINT, it should take three arguments. The parser will pass a standard I/O stream, the numeric code for the token type, and the token value (from yylval). For yacc.c only. Obsoleted by %printer. Here is an example of YYPRINT suitable for the multi-function calculator:

%{
  static void print_token_value (FILE *, int, YYSTYPE);
  #define YYPRINT(File, Type, Value) \
    print_token_value (File, Type, Value)
%}

…

static void
print_token_value (FILE *file, int type, YYSTYPE value)
{
  if (type == VAR)
    fprintf (file, "%s", value.tptr->name);
  else if (type == NUM)
    fprintf (file, "%d", value.val);
}

Invoking Bison

The usual way to invoke Bison is as follows:

bison infile

Here infile is the grammar file name, which usually ends in ‘.y’. The parser implementation file’s name is made by replacing the ‘.y’ with ‘.tab.c’ and removing any leading directory. Thus, the ‘bison foo.y’ file name yields foo.tab.c, and the ‘bison hack/foo.y’ file name yields foo.tab.c. It’s also possible, in case you are writing C++ code instead of C in your grammar file, to name it foo.ypp or foo.y++. Then, the output files will take an extension like the given one as input (respectively foo.tab.cpp and foo.tab.c++). This feature takes effect with all options that manipulate file names.
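For instance (an illustration assembled from the rules above; the file name hack/foo.ypp is hypothetical):

bison -d hack/foo.ypp

should produce foo.tab.cpp and the matching header foo.tab.hpp: the leading directory is removed and the ‘.ypp’ extension is mapped onto the names of both output files.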
Next:: Yacc Library, Previous: Bison Options, Up: Invocation [Contents][Index] Here is a list of options, alphabetized by long option, to help you find the corresponding short option and directive. Previous: Option Cross Key, Up: Invocation [Contents][Index] *); The int value returned by this yyerror is ignored. The implementation of Yacc library’s main function is: int main (void) { setlocale (LC_ALL, ""); return yyparse (); } so if you use it, the internationalization support is enabled (e.g., error messages are translated), and your yyparse function should have the following type signature: int yyparse (void); Next: FAQ, Previous: Invocation, Up: Top [Contents][Index] Next: Java Parsers, Up: Other Languages [Contents][Index] Next:. Next:). Next: C++ Variants, Up: C++ Semantic Values [Contents][Index] The %union directive works as for C, see The Union Declaration. In particular it produces a genuine union, which have a few specific features in C++. YYSTYPEis defined but its use is discouraged: rather you should refer to the parser’s encapsulated type yy::parser::semantic_type. Because objects have to be stored via pointers, memory is not reclaimed automatically: using the %destructor directive is the only means to avoid leaks. See Freeing Discarded Symbols. Previous:. Next:. Next: C++ location, Up: C++ Location Values [Contents][Index] position Create a position denoting a given point. Note that file is not reclaimed when the position"’. The line, starting at 1. If height is not null, advance by height lines, resetting the column number. The resulting line number cannot be less than 1. The column, starting at 1. Advance by width columns, without changing the line number. The resulting column number cannot be less than 1. Various forms of syntactic sugar for columns. Whether *this and that denote equal/different positions. Report p on o like this: ‘file:line.column’, or ‘line.column’ if file is null. Next: User Defined Location Type, Previous: C++ position, Up: C++ Location Values [Contents][Index] location Create a Location from the endpoints of the range. Create a Location denoting an empty range located at a given point. Reset the location to an empty range at the given values. The first, inclusive, position of the range, and the first beyond. Forwarded to the end position. Various forms of syntactic sugar for columns. Join two locations: starts at the position of the first one, and ends at the position of the second. Move begin onto end. Whether *this and that denote equal/different ranges of positions. Report p on o, taking care of special cases such as: no filename defined, or equal filename/line or column. Previous: C++ location, Up: C++ Location Values [Contents][Index] Instead of using the built-in types you may use the %define variable api.location.type to specify your own type: %define api.location.type {LocationType} The requirements over your LocationType are: @$in a reduction, the parser basically runs @$.begin = @1.begin; @$.end = @N.end; // The location of last right-hand side symbol. so there must be copyable begin and end members; In programs with several C++ parsers, you may also use the %define variable api.location.type to share a common set of built-in definitions for position and location. 
For instance, one parser master/parser.yy might use: %defines %locations %define api.namespace {master::} to generate the master/position.hh and master/location.hh files, reused by other parsers as follows: %define api.location.type {master::location} %code requires { #include <master/location.hh> } Next: C++ Scanner Interface, Previous: C++ Location Values, Up: C++ Parsers [Contents][Index] The output files output.hh and output.cc declare and define the parser class in the namespace yy. The class name defaults to parser, but may be changed using ‘%define parser_class_name {name}’. The interface of this class is detailed below. It can be extended using the %parse-param feature: its semantics is slightly changed since it describes an additional member of the parser class, and an additional argument for its constructor. The types for semantic values and locations (if enabled). A structure that contains (only) the yytokentype enumeration, which defines the tokens. To refer to the token FOO, use yy::parser::token::FOO. The scanner can use ‘typedef yy::parser::token token;’ to “import” the token enumeration (see Calc++ Scanner). This class derives from std::runtime_error. Throw instances of it from the scanner or from the user actions to raise parse errors. This is equivalent with first invoking error to report the location and message of the syntax error, and then to invoke YYERROR to enter the error-recovery mode. But contrary to YYERROR which can only be invoked from user actions (i.e., written in the action itself), the exception can be thrown from function invoked from the user action. Build a new parser object. There are no arguments by default, unless ‘%parse-param {type1 arg1}’ was used. Instantiate a syntax-error exception. Run the syntactic analysis, and return 0 on success, 1 otherwise. The whole function is wrapped in a try/ catch block, so that when an exception is thrown, the %destructors are called to release the lookahead symbol, and the symbols pushed on the stack.. If location tracking is not enabled, the second signature is used. Next: A Complete C++ Example, Previous: C++ Parser Interface, Up: C++ Parsers [Contents][Index] The parser invokes the scanner by calling yylex. Contrary to C parsers, C++ parsers are always pure: there is no point in using the ‘%define api.pure’ directive. The actual interface with yylex depends whether you use unions, or variants. Next:; } Previous: Split Symbols, Up: C++ Scanner Interface [Contents][Index] If you specified both %define api.value.type variant and %define api.token.constructor, the parser class also defines the class parser::symbol_type which defines a complete symbol, aggregating its type (i.e., the traditional value returned by yylex), its semantic value (i.e., the value passed in yylval, and possibly its location ( yylloc). Build a complete terminal symbol which token type is type, and which semantic value is value. If location tracking is enabled, also pass the location. This interface is low-level and should not be used for two reasons. First, it is inconvenient, as you still have to build the semantic value, which is a variant, and second, because consistency is not enforced: as with unions, it is still possible to give an integer as semantic value for a string. So for each token type, Bison generates named constructors as follows. Build a complete terminal symbol for the token type token (not including the api.token.prefix) whose possible semantic value is value of adequate value_type. 
If location tracking is enabled, also pass the location. For instance, given the following declarations: %define api.token.prefix {TOK_} %token <std::string> IDENTIFIER; %token <int> INTEGER; %token COLON; Bison generates the following functions: symbol_type make_IDENTIFIER(const std::string& v, const location_type& l); symbol_type make_INTEGER(const int& v, const location_type& loc); symbol_type make_COLON(const location_type& loc); which should be used in a Lex-scanner as follows. [0-9]+ return yy::parser::make_INTEGER(text_to_int (yytext), loc); [a-z]+ return yy::parser::make_IDENTIFIER(yytext, loc); ":" return yy::parser::make_COLON(loc); Tokens that do not have an identifier are not accessible: you cannot simply use characters such as ':', they must be declared with %token. Previous: C++ Scanner Interface, Up: C++ Parsers [Contents][Index] interactions. A hand-written scanner is actually easier to interface with. Next: Next: Calc++ Parser, Previous: Calc++ --- C++ Calculator, Up: A Complete C++ Example [Contents][Index] To support a pure interface with the parser (and the scanner) the technique of the “parsing context” is convenient: a structure containing all the data to exchange. Since, in addition to simply launch the parsing, there are several auxiliary tasks to execute (open the file for parsing, instantiate the parser etc.), we recommend transforming the simple parsing context structure into a fully blown parsing driver class. The declaration of this driver class, calc++-driver.hh, is as follows. The first part includes the CPP guard and imports the required standard library components, and the declaration of the parser class. #ifndef CALCXX_DRIVER_HH # define CALCXX_DRIVER_HH # include <string> # include <map> # include "calc++-parser.hh" Then comes the declaration of the scanning function. Flex expects the signature of yylex to be defined in the macro YY_DECL, and the C++ parser expects it to be declared. We can factor both as follows. // Tell Flex the lexer's prototype ... # define YY_DECL \ yy::calcxx_parser::symbol_type yylex (calcxx_driver& driver) // ... and declare it for the parser's sake. YY_DECL; The calcxx_driver class is then declared with its most obvious // Conducting the whole scanning and parsing of Calc++. class calcxx_driver { public: calcxx_driver (); virtual ~calcxx_driver (); std::map<std::string, int> variables; int result; To encapsulate the coordination with the Flex scanner, it is useful to have member functions to open and close the scanning phase. // Handling the scanner. void scan_begin (); void scan_end (); bool trace_scanning; Similarly for the parser itself. // Run the parser on file F. // Return 0 on success. int parse (const std::string& f); // The name of the file being parsed. // Used later to pass the file name to the location tracker. std::string file; // Whether parser traces should be generated. bool trace_parsing; To demonstrate pure handling of parse errors, instead of simply dumping them on the standard error output, we will pass them to the compiler driver using the following two member functions. Finally, we close the class declaration and CPP guard. // Error handling. void error (const yy::location& l, const std::string& m); void error (const std::string& m); }; #endif // ! CALCXX_DRIVER_HH The implementation of the driver is straightforward. The parse member function deserves some attention. The error functions are simple stubs, they should actually register the located error messages and set error state. 
#include "calc++-driver.hh" #include "calc++-parser.hh" calcxx_driver::calcxx_driver () : trace_scanning (false), trace_parsing (false) { variables["one"] = 1; variables["two"] = 2; } calcxx_driver::~calcxx_driver () { } int calcxx_driver::parse (const std::string &f) { file = f; scan_begin (); yy::calcxx_parser parser (*this); parser.set_debug_level (trace_parsing); int res = parser.parse (); scan_end (); return res; } void calcxx_driver::error (const yy::location& l, const std::string& m) { std::cerr << l << ": " << m << std::endl; } void calcxx_driver::error (const std::string& m) { std::cerr << m << std::endl; } Next: Calc++ Scanner, Previous: Calc++ Parsing Driver, Up: A Complete C++ Example [Contents][Index] The grammar file calc++-parser.yy starts by asking for the C++ deterministic parser skeleton, the creation of the parser header file, and specifies the name of the parser class. Because the C++ skeleton changed several times, it is safer to require the version you designed the grammar for. %skeleton "lalr1.cc" /* -*- C++ -*- */ %require "3.0.4" %defines %define parser_class_name {calcxx_parser} This example will use genuine C++ objects as semantic values, therefore, we require the variant-based interface. To make sure we properly use it, we enable assertions. To fully benefit from type-safety and more natural definition of “symbol”, we enable api.token.constructor. %define api.token.constructor %define api.value.type variant %define parse.assert Then come the declarations/inclusions needed by the semantic values. Because the parser uses the parsing driver and reciprocally, both would like to include the header of the other, which is, of course, insane. This mutual dependency will be broken using forward declarations. Because the driver’s header needs detailed knowledge about the parser class (in particular its inner types), it is the parser’s header which will use a forward declaration of the driver. See %code Summary. %code requires { # include <string> class calcxx_driver; } The driver is passed by reference to the parser and to the scanner. This provides a simple but effective pure interface, not relying on global variables. // The parsing context. %param { calcxx_driver& driver } Then we request location tracking, and initialize the first location’s file name. Afterward new locations are computed relatively to the previous locations: the file name will be propagated. %locations %initial-action { // Initialize the initial location. @$.begin.filename = @$.end.filename = &driver.file; }; Use the following two directives to enable parser tracing and verbose error messages. However, verbose error messages can contain incorrect information (see LAC). %define parse.trace %define parse.error verbose names are provided for each symbol. To avoid name clashes in the generated files (see Calc++ Scanner), prefix tokens with TOK_ (see api.token.prefix). %define api.token.prefix {TOK_} %token END 0 "end of file" ASSIGN ":=" MINUS "-" PLUS "+" STAR "*" SLASH "/" LPAREN "(" RPAREN ")" ; Since we use variant-based semantic values, %union is not used, and both %type and %token expect genuine types, as opposed to type %token <std::string> IDENTIFIER "identifier" %token <int> NUMBER "number" %type <int> exp No %destructor is needed to enable memory deallocation during error recovery; the memory, for strings for instance, will be reclaimed by the regular destructors. All the values are printed using their operator<< (see Printing Semantic Values). 
%printer { yyoutput << $$; } <*>; The grammar itself is straightforward (see Location Tracking Calculator - ltcalc). %% %start unit; unit: assignments exp { driver.result = $2; }; assignments: %empty {} | assignments assignment {}; assignment: "identifier" ":=" exp { driver.variables[$1] = $3; }; %left "+" "-"; %left "*" "/"; exp: exp "+" exp { $$ = $1 + $3; } | exp "-" exp { $$ = $1 - $3; } | exp "*" exp { $$ = $1 * $3; } | exp "/" exp { $$ = $1 / $3; } | "(" exp ")" { std::swap ($$, $2); } | "identifier" { $$ = driver.variables[$1]; } | "number" { std::swap ($$, $1); }; %% Finally the error member function registers the errors to the driver. void yy::calcxx_parser::error (const location_type& l, const std::string& m) { driver.error (l, m); } Next: Calc++ Top Level, Previous: Calc++ Parser, Up: A Complete C++ Example [Contents][Index] The Flex scanner first includes the driver declaration, then the parser’s to get the set of defined tokens. %{ /* -*- C++ -*- */ # include <cerrno> # include <climits> # include <cstdlib> # // The location of the current token. static yy::location loc; %} Because there is no #include-like feature we don’t need yywrap, we don’t need unput either, and we parse an actual file, this is not an interactive session with the user. Finally, we enable scanner tracing. %option noyywrap nounput batch debug noinput, its width is added to the end column. When matching ends of lines, the end cursor is adjusted, and each time blanks are matched, the begin cursor is moved onto the end cursor to effectively ignore the blanks preceding tokens. Comments would be treated equally. %{ // Code run each time a pattern is matched. # define YY_USER_ACTION loc.columns (yyleng); %} %% %{ // Code run each time yylex is called. loc.step (); %} {blank}+ loc.step (); [\n]+ loc.lines (yyleng); loc.step (); The rules are simple. The driver is used to report errors. "-" return yy::calcxx_parser::make_MINUS(loc); "+" return yy::calcxx_parser::make_PLUS(loc); "*" return yy::calcxx_parser::make_STAR(loc); "/" return yy::calcxx_parser::make_SLASH(loc); "(" return yy::calcxx_parser::make_LPAREN(loc); ")" return yy::calcxx_parser::make_RPAREN(loc); ":=" return yy::calcxx_parser::make_ASSIGN(loc); {int} { errno = 0; long n = strtol (yytext, NULL, 10); if (! (INT_MIN <= n && n <= INT_MAX && errno != ERANGE)) driver.error (loc, "integer is out of range"); return yy::calcxx_parser::make_NUMBER(n, loc); } {id} return yy::calcxx_parser::make_IDENTIFIER(yytext, loc); . driver.error (loc, "invalid character"); <<EOF>> return yy::calcxx_parser::make_END(loc); %% Finally, because the scanner-related driver’s member-functions depend on the scanner’s data, it is simpler to implement them in this file. void calcxx_driver::scan_begin () { yy_flex_debug = trace_scanning; if (file.empty () || file == "-") yyin = stdin; else if (!(yyin = fopen (file.c_str (), "r"))) { error ("cannot open " + file + ": " + strerror(errno)); exit (EXIT_FAILURE); } } void calcxx_driver::scan_end () { fclose (yyin); } Previous: Calc++ Scanner, Up: A Complete C++ Example [Contents][Index] The top level file, calc++.cc, poses no problem. 
#include <iostream>
#include "calc++-driver.hh"

int
main (int argc, char *argv[])
{
  int res = 0;
  calcxx_driver driver;
  for (int i = 1; i < argc; ++i)
    if (argv[i] == std::string ("-p"))
      driver.trace_parsing = true;
    else if (argv[i] == std::string ("-s"))
      driver.trace_scanning = true;
    else if (!driver.parse (argv[i]))
      std::cout << driver.result << std::endl;
    else
      res = 1;
  return res;
}

Java Parsers

Java Semantic Values

The type of semantic values can be changed with the ‘%define api.value.type’ directive; after such a declaration, tokens and groupings are declared with the given type instead of the default base type.

Java Location Values

Create a Location denoting an empty range located at a given point. Create a Location from the endpoints of the range. Prints the range represented by the location. For this to work properly, the position class should override the equals and toString methods appropriately.

Java Action Features

The following special constructs can be used in Java actions. Other analogous C action features are currently unavailable for Java. Use ‘%define throws’ to specify any uncaught exceptions from parser actions, and initial actions specified by %initial-action.

The semantic value for the nth component of the current rule. This may not be assigned to. See Java Semantic Values.

Like $n but specifies an alternative type typealt. See Java Semantic Values.

The semantic value for the grouping made by the current rule. As a value, this is in the base type (Object or as specified by ‘%define api.value.type’), i.e., not cast to the declared subtype because casts are not allowed on the left-hand side of Java assignments. Use an explicit Java cast if the correct subtype is needed. See Java Semantic Values.

Same as $$ since Java always allows assigning to the base type. Perhaps we should use this and $<>$ for the value and $$ for setting the value, but there is currently no easy way to distinguish these constructs. See Java Semantic Values.

The location information of the nth component of the current rule. This may not be assigned to. See Java Location Values.

The location information of the grouping made by the current rule. See Java Location Values.

Return immediately from the parser, indicating failure. See Java Parser Interface.

Return immediately from the parser, indicating success. See Java Parser Interface.

Start error recovery (without printing an error message). See Error Recovery.

Return whether error recovery is being done. In this state, the parser reads tokens until it reaches a known state, and then restarts normal operation. See Error Recovery.

Print an error message using the yyerror method of the scanner instance in use. The Location and Position parameters are available only if location tracking is active.

Java Push Parser Interface

(The current push parsing interface is experimental and may evolve. More user feedback will help to stabilize it.)

Normally, Bison generates a pull parser for Java. The following Bison declaration says that you want the parser to be a push parser (see api.push-pull):

%define api.push-pull push

Most of the discussion about the Java pull Parser Interface (see Java Parser Interface) applies to the push parser interface as well. When generating a push parser, the method push_parse is created with the following signature (depending on whether locations are enabled).
The primary difference with respect to a pull parser is that the parser method push_parse is invoked repeatedly to parse each token. This function is available if either the "%define api.push-pull push" or "%define api.push-pull both" declaration is used (see api.push-pull). The Location and Position parameters are available only if location tracking is active. The value returned by the push_parse method is one of the following four constants: YYABORT, YYACCEPT, YYERROR, or YYPUSH_MORE. This new value, YYPUSH_MORE, may be returned if more input is required to finish parsing the grammar. If api.push-pull is declared as both, then the generated parser class will also implement the parse method. This method’s body is a loop that repeatedly invokes the scanner and then passes the values obtained from the scanner to the push_parse method. There is one additional complication. Technically, the push parser does not need to know about the scanner (i.e. an object implementing the YYParser.Lexer interface), but it does need access to the yyerror method. Currently, the yyerror method is defined in the YYParser.Lexer interface. Hence, an implementation of that interface is still required in order to provide an implementation of yyerror. The current approach (and subject to change) is to require the YYParser constructor to be given an object implementing the YYParser.Lexer interface. This object need only implement the yyerror method; the other methods can be stubbed since they will never be invoked. The simplest way to do this is to add a trivial scanner implementation to your grammar file using whatever implementation of yyerror is desired. The following code sample shows a simple way to accomplish this. %code lexer { public Object getLVal () {return null;} public int yylex () {return 0;} public void yyerror (String s) {System.err.println(s);} } Next:. Previous: Java Differences, Up: Java Parsers [Contents][Index] This summary only include declarations specific to Java or have special meaning when used in a Java parser. Generate a Java class for the parser. A parameter for the lexer class defined by %code lexer only, added as parameters to the lexer constructor and the parser constructor that creates a lexer. Default is none. See Java Scanner Interface. The prefix of the parser class name prefixParser if ‘%define parser_class_name’ package declaration. See Java Differences. Code inserted at the beginning of the parser constructor body. See Java Parser Interface. Code added to the body of a inner lexer class within the parser class. See Java Scanner Interface. Code (after the second %%) appended to the end of the file, outside the parser class. See Java Differences. Not supported. Use %code imports instead. See Java Differences. Whether the parser class is declared abstract. Default is false. See Java Bison Interface. The Java annotations for the parser class. Default is none. %code init from the parser class constructor. Default is none. See Java Parser Interface. The exceptions thrown by the yylex method. Formerly named location_type. See Java Location Values. The package to put the parser class in. Default is none. See Java Bison Interface. The name of the parser class. Default is YYParser or name-prefixParser. See Java Bison Interface. The name of the class used for positions. This class must be supplied by the user. Default is Position. Formerly named position. Next: Table of Symbols, Previous: Other Languages, Up: Top [Contents][Index] Several questions about Bison come up occasionally. 
Here, some of them are addressed.

My parser returns with error with a ‘memory exhausted’ message. What can I do?

This question is already addressed elsewhere, see Recursive Rules.

I can’t build Bison because make complains that msgfmt is not found. What should I do?

Like most GNU packages with internationalization support, that feature is turned on by default. If you have problems building in the po subdirectory, it indicates that your system’s internationalization support is lacking. You can re-configure Bison with --disable-nls to turn off this support, or you can install GNU gettext and re-configure Bison. See the file ABOUT-NLS for more information.

I’m having trouble using Bison. Where can I find help?

First, read this fine manual.

I found a bug. What should I include in the bug report?

Include the version of Bison you are using and the options you gave to ‘configure’. Depending on the nature of the bug, you may be asked to send additional files as well (such as config.h or config.cache). Patches are most welcome, but not required. That is, do not hesitate to send a bug report just because you cannot provide a fix. Send bug reports to bug-bison@gnu.org.

Will Bison ever have C++ and Java support? How about insert your favorite language here?

C++ and Java support is there now, and is documented. We’d love to add other languages; contributions are welcome.

What is involved in being a beta tester?

It is especially useful to test Bison on platforms to which the developers do not have easy access. They currently have easy access to recent GNU/Linux and Solaris versions. Reports about other operating systems are especially welcome.

How do I join the help-bison and bug-bison mailing lists?

Both lists are hosted at lists.gnu.org.

Table of Symbols

In an action, the location of the left-hand side of the rule. See Tracking Locations.

In an action, the location of the n-th symbol of the right-hand side of the rule. See Tracking Locations.

In a grammar, the Bison-generated nonterminal symbol for a mid-rule action with a semantic value. See Mid-Rule Action Translation.

In an action, the location of a symbol addressed by name. See Tracking Locations.

In a grammar, the Bison-generated nonterminal symbol for a mid-rule action with no semantic value. See Mid-Rule Action Translation.

In an action, the semantic value of the left-hand side of the rule. See Actions.

In an action, the semantic value of the n-th symbol of the right-hand side of the rule. See Actions.

In an action, the semantic value of a symbol addressed by name. See Actions.

Delimiter used to separate the grammar rule section from the Bison declarations section or the epilogue. See The Overall Layout of a Bison Grammar.

All code listed between ‘%{’ and ‘%}’ is copied verbatim to the parser implementation file. Such code forms the prologue of the grammar file. See Outline of a Bison Grammar.

Predicate actions. This is a type of action clause that may appear in rules. The expression is evaluated, and if false, causes a syntax error.
In GLR parsers during nondeterministic operation, this silently causes an alternative parse to die. During deterministic operation, it is the same as the effect of YYERROR. See Semantic Predicates. This feature is experimental. More user feedback will help to determine whether it should become a permanent feature. Separates a rule’s result from its components. See Syntax of Grammar Rules. Terminates a rule. See Syntax of Grammar Rules. Separates alternate rules for the same result nonterminal. See Syntax of Grammar Rules. Used to define a default tagged %destructor or default tagged %printer. This feature is experimental. More user feedback will help to determine whether it should become a permanent feature. See Freeing Discarded Symbols. Used to define a default tagless %destructor. Insert code verbatim into the output parser source at the default location or at the location specified by qualifier. See %code Summary. Equip the parser for debugging. See Decl Summary. Define a variable to adjust Bison’s behavior. See %define Summary. Bison declaration to create a parser header file, which is usually meant for the scanner. See Decl Summary. Same as above, but save in the file defines-file. See Decl Summary. Specify how the parser should reclaim the memory associated to discarded symbols. See Freeing Discarded Symbols. Bison declaration to assign a precedence to a rule that is used at parse time to resolve reduce/reduce conflicts. See Writing GLR Parsers. Bison declaration to declare make explicit that a rule has an empty right-hand side. See Empty Rules. The predefined token marking the end of the token stream. It cannot be used in the grammar. A token name reserved for error recovery. This token may be used in grammar rules so as to allow the Bison parser to recognize an error in the grammar without halting the process. In effect, a sentence containing an error may be recognized as valid. On a syntax error, the token error becomes the current lookahead token. Actions corresponding to error are then executed, and the lookahead token is reset to the token that originally caused the violation. See Error Recovery. An obsolete directive standing for ‘%define parse.error verbose’ (see The Error Reporting Function yyerror). Bison declaration to set the prefix of the output files. See Decl Summary. Bison declaration to produce a GLR parser. See Writing GLR Parsers. Run user code before parsing. See Performing Actions before Parsing. Specify the programming language for the generated parser. See Decl Summary. Bison declaration to assign precedence and left associativity to token(s). See Operator Precedence. Bison declaration to specifying additional arguments that yylex should accept. See Calling Conventions for Pure Parsers. Bison declaration to assign a merging function to a rule. If there is a reduce/reduce conflict with a rule having the same merging function, the function is applied to the two semantic values to get a single result. See Writing GLR Parsers. Obsoleted by the %define variable api.prefix (see Multiple Parsers in the Same Program). Rename the external symbols (variables and functions) used in the parser so that they start with prefix instead of ‘yy’. Contrary to api.prefix, do no rename types and macros. %define api.namespace documentation in this section. Bison declaration to avoid generating #line directives in the parser implementation file. See Decl Summary. Bison declaration to assign precedence and nonassociativity to token(s). See Operator Precedence. 
Bison declaration to set the name of the parser implementation file. See Decl Summary. Bison declaration to specify additional arguments that both yylex and yyparse should accept. See The Parser Function yyparse. Bison declaration to specify additional arguments that yyparse should accept. See The Parser Function yyparse. Bison declaration to assign a precedence to a specific rule. See Context-Dependent Precedence. Bison declaration to assign precedence to token(s), but no associativity See Operator Precedence. Deprecated version of ‘%define api.pure’ (see api.pure), for which Bison is more careful to warn about unreasonable usage. Require version version or higher of Bison. See Require a Version of Bison. Bison declaration to assign precedence and right associativity to token(s). See Operator Precedence. Specify the skeleton to use; usually for development. See Decl Summary. Bison declaration to specify the start symbol. See The Start-Symbol. Bison declaration to declare token(s) without specifying precedence. See Token Type Names. Bison declaration to include a token name table in the parser implementation file. See Decl Summary. Bison declaration to declare nonterminals. See Nonterminal Symbols. The predefined token onto which all undefined values returned by yylex are mapped. It cannot be used in the grammar, rather, use error. Bison declaration to specify several possible data types for semantic values. See The Union Declaration. Macro to pretend that an unrecoverable syntax error has occurred, by making yyparse return 1 immediately. The error reporting function yyerror is not called. See The Parser Function yyparse. For Java parsers, this functionality is invoked using return YYABORT; instead. Macro to pretend that a complete utterance of the language has been read, by making yyparse return 0 immediately. See The Parser Function yyparse. For Java parsers, this functionality is invoked using return YYACCEPT; instead. Macro to discard a value from the parser stack and fake a lookahead token. See Special Features for Use in Actions. External integer variable that contains the integer value of the lookahead token. (In a pure parser, it is a local variable within yyparse.) Error-recovery rule actions may examine this variable. See Special Features for Use in Actions. Macro used in error-recovery rule actions. It clears the previous lookahead token. See Error Recovery. Macro to define to equip the parser with tracing code. See Tracing Your Parser. External integer variable set to zero by default. If yydebug is given a nonzero value, the parser will output information on input symbols and parser action. See Tracing Your Parser. Macro to cause parser to recover immediately to its normal mode after a syntax error. See Error Recovery.. For Java parsers, this functionality is invoked using return YYERROR; instead. User-supplied function to be called by yyparse on error. See The Error Reporting Function yyerror. An obsolete macro used in the yacc.c skeleton, that you define with #define in the prologue to request verbose, specific error message strings when yyerror is called. It doesn’t matter what definition you use for YYERROR_VERBOSE, just whether you define it. Using ‘%define parse.error verbose’ is preferred (see The Error Reporting Function yyerror). Macro used to output run-time traces. See Enabling Traces. Macro for specifying the initial size of the parser stack. See Memory Management. User-supplied lexical analyzer function, called with no arguments to get the next token. 
See The Lexical Analyzer Function yylex.
yylloc External variable in which yylex should place the line and column numbers associated with a token. (In a pure parser, it is a local variable within yyparse, and its address is passed to yylex.) You can ignore this variable if you don't use the '@' feature in the grammar actions. See Textual Locations of Tokens. In semantic actions, it stores the location of the lookahead token. See Actions and Locations.
YYLTYPE Data type of yylloc; by default, a structure with four members. See Data Types of Locations.
yylval External variable in which yylex should place the semantic value associated with a token. (In a pure parser, it is a local variable within yyparse, and its address is passed to yylex.) See Semantic Values of Tokens. In semantic actions, it stores the semantic value of the lookahead token. See Actions.
YYMAXDEPTH Macro for specifying the maximum size of the parser stack. See Memory Management.
yynerrs Global variable which Bison increments each time it reports a syntax error. (In a pure parser, it is a local variable within yyparse. In a pure push parser, it is a member of yypstate.) See The Error Reporting Function yyerror.
yyparse The parser function produced by Bison; call this function to start parsing. See The Parser Function yyparse.
YYPRINT Macro used to output token semantic values. For yacc.c only. Obsoleted by %printer. See The YYPRINT Macro.
yypstate_delete The function to delete a parser instance, produced by Bison in push mode; call this function to delete the memory associated with a parser. See The Parser Delete Function yypstate_delete. (The current push parsing interface is experimental and may evolve. More user feedback will help to stabilize it.)
yypstate_new The function to create a parser instance, produced by Bison in push mode; call this function to create a new parser. See The Parser Create Function yypstate_new. (The current push parsing interface is experimental and may evolve. More user feedback will help to stabilize it.)
yypull_parse The parser function produced by Bison in push mode; call this function to parse the rest of the input stream. See The Pull Parser Function yypull_parse. (The current push parsing interface is experimental and may evolve. More user feedback will help to stabilize it.)
yypush_parse The parser function produced by Bison in push mode; call this function to parse a single token. See The Push Parser Function yypush_parse. (The current push parsing interface is experimental and may evolve. More user feedback will help to stabilize it.)
YYRECOVERING The expression YYRECOVERING () yields 1 when the parser is recovering from a syntax error, and 0 otherwise. See Special Features for Use in Actions.
YYSTACK_USE_ALLOCA Macro used to control the use of alloca when the deterministic parser in C needs to extend its stacks. If defined to 0, the parser will use malloc to extend its stacks. If defined to 1, the parser will use alloca. Values other than 0 and 1 are reserved for future Bison extensions. If not defined, YYSTACK_USE_ALLOCA defaults to 0. In the all-too-common case where your code may run on a host with a limited stack and with unreliable stack-overflow checking, you should set YYMAXDEPTH to a value that cannot possibly result in unchecked stack overflow on any of your target hosts when alloca is called. You can inspect the code that Bison generates in order to determine the proper numeric values. This will require some expertise in low-level implementation details.
YYSTYPE Deprecated in favor of the %define variable api.value.type. Data type of semantic values; int by default. See Data Types of Semantic Values.
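As a quick illustration of how several of the precedence-related entries above (%left, %precedence, %prec) fit together, here is a small hypothetical grammar fragment; the token and rule names are made up for this sketch:

%left '+' '-'
%left '*' '/'
%precedence NEG   /* unary minus: binds tightest, no associativity */
%%
exp: exp '+' exp
   | exp '*' exp
   | '-' exp %prec NEG
   | NUM
   ;

Because '*' and '/' are declared after '+' and '-', they bind more tightly, and %prec NEG gives the unary-minus rule the precedence of the NEG placeholder token rather than that of '-'.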
Accepting state: A state whose only action is the accept action. The accepting state is thus a consistent state. See Understanding Your Parser.
Backus-Naur Form (BNF): Formal method of specifying context-free grammars originally proposed by John Backus, and slightly improved by Peter Naur in his 1960-01-02 committee document contributing to what became the Algol 60 report. See Languages and Context-Free Grammars.
Consistent state: A state containing only one possible action. See Default Reductions.
Context-free grammars: Grammars specified as rules that can be applied regardless of context. Thus, if there is a rule which says that an integer can be used as an expression, integers are allowed anywhere an expression is permitted. See Languages and Context-Free Grammars.
Default reduction: The reduction that a parser should perform if the current parser state contains no other action for the lookahead token. In permitted parser states, Bison declares the reduction with the largest lookahead set to be the default reduction and removes that lookahead set. See Default Reductions.
Defaulted state: A consistent state with a default reduction. See Default Reductions.
Deterministic parser: See The Bison Parser Algorithm.
Generalized LR (GLR): A parsing algorithm that can handle all context-free grammars, including those that are not LR(1). It resolves situations that Bison's deterministic parsing algorithm cannot by effectively splitting off multiple parsers, trying all possible parsers, and discarding those that fail in the light of additional right context. See Generalized LR Parsing.
Grouping: A language construct that is (in general) grammatically divisible; for example, 'expression' or 'declaration' in C. See Languages and Context-Free Grammars.
IELR(1): A minimal LR(1) parser table construction method for a grammar. See LR Table Construction.
Infix operator: An arithmetic operator that is placed between the operands on which it performs some operation.
Input stream: A continuous flow of data between devices or programs.
LAC (Lookahead Correction): A parsing mechanism that fixes the problem of delayed syntax error detection, which is caused by LR state merging, default reductions, and the use of %nonassoc. Delayed syntax error detection results in unexpected semantic actions, initiation of error recovery in the wrong syntactic context, and an incorrect list of expected tokens in a verbose syntax error message. See LAC.
Language construct: One of the typical usage schemas of the language. For example, one of the constructs of the C language is the if statement. See Languages and Context-Free Grammars.
Left associativity: Operators having left associativity are analyzed from left to right: 'a+b+c' first computes 'a+b' and then combines with 'c'. See Operator Precedence.
Left recursion: A rule whose result symbol is also its first component symbol; for example, 'expseq1 : expseq1 ',' exp;'. See Recursive Rules.
Left-to-right parsing: Parsing a sentence of a language by analyzing it token by token from left to right. See The Bison Parser Algorithm.
Lexical analyzer (scanner): A function that reads an input stream and returns tokens one by one. See The Lexical Analyzer Function yylex.
Lexical tie-in: A flag, set by actions in the grammar rules, which alters the way tokens are parsed. See Lexical Tie-ins.
Literal string token: A token which consists of two or more fixed characters. See Symbols.
Lookahead token: A token already read but not yet shifted. See Lookahead Tokens.
LALR(1): The class of context-free grammars that Bison (like most other parser generators) can handle by default; a subset of LR(1). See Mysterious Conflicts.
LR(1): See The Bison Parser Algorithm.
Reentrant: A reentrant subprogram is a subprogram which can be invoked any number of times in parallel, without interference between the various invocations. See A Pure (Reentrant) Parser.
Reverse polish notation: A language in which all operators are postfix operators.
Right recursion: A rule whose result symbol is also its last component symbol; for example, 'expseq1: exp ',' expseq1;'. See Recursive Rules.
Semantics: In computer languages, the semantics are specified by the actions taken for each instance of the language, i.e., the meaning of each statement. See Defining Language Semantics.
Shift: A parser is said to shift when it makes the choice of analyzing further input from the stream rather than reducing immediately some already-recognized rule. See The Bison Parser Algorithm.
Single-character literal: A single character that is recognized and interpreted as is. See From Formal Rules to Bison Input.
Start symbol: The nonterminal symbol that stands for a complete valid utterance in the language being parsed. The start symbol is usually listed as the first nonterminal symbol in a language specification. See The Start-Symbol.
Symbol table: A data structure where symbol names and associated data are stored during parsing to allow for recognition and use of existing information in repeated uses of a symbol. See Multi-function Calc.
Syntax error: An error encountered during parsing of an input stream due to invalid syntax. See Error Recovery.
Token: A basic, grammatically indivisible unit of a language. The symbol that describes a token in the grammar is a terminal symbol. The input of the Bison parser is a stream of tokens which comes from the lexical analyzer. See Symbols.
Terminal symbol: A grammar symbol that has no rules in the grammar and therefore is grammatically indivisible. The piece of text it represents is a token. See Languages and Context-Free Grammars.
Unreachable state: A parser state to which there does not exist a sequence of transitions from the parser's start state. A state can become unreachable during conflict resolution. See Unreachable States.

Footnote: Java parsers include the actions in a separate method from yyparse in order to have an intuitive syntax that corresponds to these C macros.
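To make the recursion entries above concrete, here is how the two styles look in a grammar; a sketch, with left recursion being the form the manual recommends because it keeps the parser stack bounded:

expseq1: expseq1 ',' exp   /* left recursive */
       | exp
       ;

expseq2: exp ',' expseq2   /* right recursive: the stack grows with the input */
       | exp
       ;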
http://www.gnu.org/software/bison/manual/bison.html
Q: What's the error in my code?

#include <iostream>
#include <stdlib.h>
#include <time.h>
using namespace std;

int main()
{
    cout << endl << "\nWeight:";
    double weight;
    cin >> weight;
    cout << endl << "\nHeight:";
    double height;
    cin >> height;
    double heightpow2 = height * height;
    double bmi = weight / heightpow2;
    cout << endl << "\nBmi = " << bmi;
    double hlr = rand() % 17.9 + 12;   // compiler error: % does not take a double operand
    if (bmi == hlr) {
        cout << "\nWeight loss range";
    }
    double hwr = rand() % 24 + 18;
    if (bmi == hwr) {
        cout << "\nHealthy weight range";
    }
    return 0;
}

Answers:

Like almost all languages: if (low <= bmi && bmi < high). Reminder: == equals, != different, < lower than, <= lower or equal, > greater than, >= greater or equal, && logical AND, || logical OR.

What are the errors you are getting from it?

Arash, the error says that you're trying to get the remainder of an integer division (the % operator) with a decimal (double) as the right operand (divisor). Maybe you're trying to get 17.9 percent of rand()? If so, you must do rand() * 17.9 / 100, and add 12, obviously: double hlr = rand() * 17.9 / 100 + 12; (edit: similar for hwr). But first, you don't need random values, nor hlr and hwr; you just need to test whether bmi is greater than (or equal to) the lowest boundary AND lower than the highest boundary. What did you think you were doing with % initially, and what are you trying to achieve? As written, the if blocks have almost no chance of ever being executed, even if you provide the right calculation to get a decimal random value between two bounds.

I named the variables under bmi in the code, and now I want it to print a message if the value is between the two selected numbers.

You typed a double value where an int is expected. For example: int x = 28; double y = 17.9.

#include <iostream>
#include <stdlib.h>
#include <time.h>
using namespace std;

int main()
{
    double weight;
    cout << "Weight : ";
    cin >> weight;
    cout << endl;
    double height;
    cout << "Height : ";
    cin >> height;
    cout << endl;
    double heightpow2 = height * height;
    double bmi = weight / heightpow2;
    cout << "Bmi " << bmi << endl;
    int hlr = rand() % 18 + 12;   // % needs integer operands
    if (bmi == hlr) {
        cout << "Weight loss range";
    }
    int hwr = rand() % 24 + 18;
    if (bmi == hwr) {
        cout << "Healthy weight range";
    }
    system("pause");
    return 0;
}

Also: if it is C++, did you include <ctime> (#include <ctime>, for rand)? And why don't you add srand(time(0))?
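Pulling the answers together, the program only needs fixed boundaries and two-comparison range checks; a minimal sketch (the 12/18/25 cut-offs are inferred from the asker's numbers and are an assumption, not medical advice):

#include <iostream>
using namespace std;

int main()
{
    double weight, height;
    cout << "Weight (kg): ";
    cin >> weight;
    cout << "Height (m): ";
    cin >> height;
    double bmi = weight / (height * height);
    cout << "Bmi = " << bmi << endl;
    // Compare against fixed boundaries; no rand() involved.
    if (12 <= bmi && bmi < 18) {
        cout << "Weight loss range" << endl;
    } else if (18 <= bmi && bmi < 25) {
        cout << "Healthy weight range" << endl;
    }
    return 0;
}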
https://www.sololearn.com/Discuss/2706422/what-s-error-of-my-code/
Tags and usertags

The BTS supports tagging mechanisms to organize and manage bugs:

- global pre-defined tags: patch, wontfix, moreinfo, unreproducible, help
- free-form usertags associated with a user

There are 2 typical use cases of the usertags:

- package-centric usertags, which use the e-mail address packagename@packages.debian.org for user
- personal or team usertags, which use a personal or team e-mail address for user

Package centric usertags

The use of package-centric usertags makes it possible to organize the default BTS package view if usercategory is set for the package. See BTS for devscripts as an example. Here are some typical cases:

To report a bug on the bts command in the devscripts package with its version 1.0, you set the pseudo-header fields like this:

Package: devscripts
Version: 1.0
User: devscripts@packages.debian.org
Usertags: bts bugreport...

The fields User and Usertags control the user tags for the bug report. If you wish to set the usertag bts for a bug #12345, you send an e-mail to control@bugs.debian.org with the following content:

user devscripts@packages.debian.org
usertags 12345 bts
...
thanks

Alternatively, you may use the bts command from the shell as:

$ bts user devscripts@packages.debian.org , usertags 12345 bts

The tweaking of usercategory should be done only by people involved with resolving bugs.

Personal or team usertags

The use of a personal or team usertag makes it possible to organize the user-specific BTS package view associated with a user identified by a personal e-mail address or a team e-mail address, e.g. ;users=debian-lsb@lists.debian.org (Bug-Urls listed in lenny release goal).

The email address is not important and doesn't even need to be valid. It just provides a namespace for the tag to exist in. This means there is no need to co-ordinate a central list of tags. If you wish to mark bug #12345 as todo, use the bts command from the shell as:

$ bts user foo@example.org , usertags 12345 todo

Then, you can see all bugs with the todo usertag with ;users=foo@example.org

Note on usercategory and display

- Use <pkg>@packages.debian.org as user@domain.tld (only if you are actually from that team).
- Pay attention to the order you write the tags in the usercategory, since it will be the same order the tags show up in on the BTS page.
- When you add a tag to bugs, you need to modify the usercategory to treat that tag too (I don't like this, but that's the way it is).
- You can add a usertag to usercategory even if no bug is usertagged (yet) with it: it will appear as soon as some bugs have that usertag.
- A usertag name can have only alphanumerics, at, dot, plus and dash, while <cat> in <cat> [<tag>] can have underscore too (maybe other chars, untested).

Access to usertags and usercategories

Usertags are available on the bugs page for individual packages, in UDD, via a UDD CGI and via rsync:

rsync rsync://bugs-mirror.debian.org/bts-spool-index/user/*/someone*example.org ./

Links

The official documentation: user, usertags, usercategory. Original announcement by aj. Some links to existing views from #debian-mentors:

<pabs> ooh, usertags
<pabs> bts++
<Nigel> pabs: i thought it was a random users suggestion at first, was about to reply, "HAHAHA, they will never do that"
<pabs> :)
<pabs> aj does roxor
<Nigel> yeah
<Nigel> i must go and set usertags on all my bugs now...
<Nigel> i must say the following though: it'd be nice for the maintainer to get a copy of all tags set...
<pabs> yeah, indeed
<Nigel> also, it'd be nice if there was a link up the top now, with "Maintainers Tags"
<Nigel> or (dunno if this defeats the purpose) set it up so tags set against the maintainer's email address, or the package@packages.d.o addy shown by default...
<pabs> even better, add a thing to request@ for advertising usertags by default for a certain user/package/etc combo
<pabs> adding it to the wiki page :)

After playing around with usertags a bit for the purpose of using them for tracking bugs about non-DFSG-free documentation, here are some thoughts about further improvements: One could add an option to the CGI to select a predefined categorisation class. I would like to say something like this:

Example 1: ...;class0=Status;nam1=myown;pri1=...;ttl1=...;class2=Severity

Another useful addition could be the ability to globally rearrange the ordering of the categories. If I have used example 1 and now I want to rearrange the categorisation to move Severity to the top level, I currently would have to do this (we assume for a moment the class parameters would actually work):

Example 2: ...;class1=Status;nam2=myown;pri2=...;ttl2=...;class0=Severity

better: ...;class0=Status;nam1=myown;pri1=...;ttl1=...;class2=Severity;ord=1,2,0

Here are my first tries with categorisation together with usertags:

I've started to use usertags to track QA issues, using "proposed-{removal,orphan}" on bugs that I've filed on very old and rc-buggy packages. At the moment, I'm using this to track my bugs (as they were filed yesterday, I don't have much experience yet). It's quite useful, but the long URIs are not ideal, so I second Frank's request for a way to specify standard classifications in a shorter way. It would also be very nice to use the package a bug is filed against to divide them into sections. I'd like to use something like this:

tag=proposed-orphan,proposed-removal; users=debian-qa@lists.debian.org; ...
nam1=Status;pri1=pkg:;
ttl1=Pending removal,Waiting for adoption,No answer yet;
ord1=2,1,0

Couldn't we show the usertags of packagename@packages.debian.org by default on the BTS site of packagename? Like this the maintainer could categorise the bugs with a set of his own tags and all users could see them. (This is done.)

I've created a page, OngoingTransitions, that tracks library transitions in unstable. It'd be nice to have support for classifying by "is there a tag by this user set?", in order to separate claimed bugs (by bugsquash@qa.debian.org).

DebianInstaller uses usertags and usercategories to produce some useful views of our bugs and installation reports. See DebianInstaller/Bugs for details.

The testing security team tags bugs with their CVE id (i.e., CVE-2005-0001) and also uses a "tracked" tag to indicate security bugs that they are tracking and to find ones they are not. These tags are added automatically based on info in their database. Set users=debian-security@lists.debian.org to see these tags. Here's a useful view in the BTS: ;users=debian-security@lists.debian.org;ordering=tracked
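For instance, the devscripts bts tool can batch these operations; a hypothetical session tagging two of the QA bugs mentioned above might look like this (the bug numbers are made up):

$ bts user debian-qa@lists.debian.org \
    , usertags 123456 + proposed-removal \
    , usertags 654321 + proposed-orphan

The + adds the tag, - removes it, and = replaces the whole tag set, mirroring the syntax accepted by the control@bugs.debian.org interface.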
https://wiki.debian.org/bugs.debian.org/usertags
16 March 2009

Requirements: You should have a good understanding of Adobe Flash Professional and ActionScript 3.0. User level: Intermediate.

In many SWF applications, developers need a way to programmatically monitor a timeline as it is played back so the code can take some action when a frame label or the end of the timeline is reached. Techniques for achieving this, however, have often conflicted with the design goal of separating timelines from actions.

In the past decade Adobe Flash has grown from a simple animation tool into a comprehensive authoring environment for creating rich content. Nowadays many rich media agencies have implemented a workflow where multiple people in different roles, including Flash designers and developers, can simultaneously work together on one project, and projects are split into many different FLA and ActionScript files that all fit a specific framework or conceptual structure. As a result, it has become a key best practice to use FLA files primarily as a collection of multimedia elements and timeline animations, and to embody all logic in external ActionScript files. To enable this workflow, Flash CS3 Professional introduced the Document class as a default mechanism to link a FLA file to external ActionScript code. Overall it has become pretty easy to separate timelines from actions or code. However, there is unfortunately one exception to the rule.

Consider the following scenario: A developer would like to run a piece of code after a certain timeline has played to a specific point, which is marked with a label. How does he know exactly when this point has been reached? To solve this problem, the developer might place the following action on a timeline inside a FLA file:

dispatchEvent(new Event("animationHasFinished"));

Although there is nothing invalid about this approach, it does have drawbacks. For developers, timeline actions are harder to author, adjust, and locate than code inside external ActionScript files. Many designers will also find this approach too technical. Lastly, it simply violates the principle of keeping your actions and timelines separated (yes, we "Flashers" are proud people). At Refunk, where I work, we have developed a small utility that elegantly solves this problem.

The TimelineWatcher class is a small, simple utility class. Here is the full documentation:

TimelineWatcher(timeline:MovieClip): Creates a TimelineWatcher object to watch a single timeline
dispose():void: Removes the TimelineWatcher object's internal listeners and references, so it can be garbage collected without any memory leaks
labelReached: Dispatched when a new timeline label has been reached
endReached: Dispatched when the end of a timeline has been reached

Because the TimelineWatcher class extends the built-in EventDispatcher class you can add event listeners using the standard addEventListener() method. It goes hand-in-hand with the TimelineEvent class, which is a custom event class that explicitly types the events used by TimelineWatcher. These events additionally store the current frame number and label, so they can be retrieved via the event object. You can find the TimelineWatcher and TimelineEvent classes in the sample file for this article (timelinewatcher.zip). Both classes are made available as free code by Refunk and are distributed under the GNU General Public License.
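The zip file contains the actual implementation; as a rough sketch of how such a watcher can work (this is an illustration, not the Refunk source, and the TimelineEvent constructor signature shown here is an assumption), the timeline can simply be polled on every frame:

package {
    import flash.display.MovieClip;
    import flash.events.Event;
    import flash.events.EventDispatcher;

    // Illustrative only: the real class ships in timelinewatcher.zip.
    public class TimelineWatcher extends EventDispatcher {
        private var timeline:MovieClip;
        private var lastLabel:String;

        public function TimelineWatcher(timeline:MovieClip) {
            this.timeline = timeline;
            timeline.addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }

        public function dispose():void {
            timeline.removeEventListener(Event.ENTER_FRAME, onEnterFrame);
            timeline = null; // drop the reference so it can be garbage collected
        }

        private function onEnterFrame(e:Event):void {
            if (timeline.currentLabel != lastLabel) {
                lastLabel = timeline.currentLabel;
                // assumed signature: (type, frame, label)
                dispatchEvent(new TimelineEvent(TimelineEvent.LABEL_REACHED,
                                                timeline.currentFrame, lastLabel));
            }
            if (timeline.currentFrame == timeline.totalFrames) {
                dispatchEvent(new TimelineEvent(TimelineEvent.END_REACHED,
                                                timeline.currentFrame, lastLabel));
            }
        }
    }
}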
To see a working example of TimelineWatcher, start by unzipping the timelinewatcher.zip file and opening index.html in your desktop web browser (make sure you have Flash Player 9 or later installed). You will see a simple timeline animation of a red ball that animates from the left to the right and back. It loops three times before the animation stops. A text field in the top left corner displays the timeline label that is currently playing and the number of loops the animation has made (see Figure 1).

Open the test.fla file in your Flash authoring environment, and you will see a white canvas with a MovieClip of a red ball (see Figure 2). The main Timeline contains two classic tweens, one that moves the ball to the right and one that moves it back to the left, with two frame (or timeline) labels, moveRight and moveLeft, indicating the direction of the animation (see Figure 3). That's all the design work that went into the example. When you test this movie without the attached Document class, which I will discuss next, you will see an endlessly looping animation of a red ball that moves back and forth.

The next step is to add an external ActionScript file that will do the following:

- stop the main Timeline and declare the ball instance
- add a text field that shows the current timeline label
- watch the main Timeline and respond to its label and end events
- stop the animation after three loops

To link the external ActionScript file named Test.as to the test.fla file, type Test as the Class in the Publish settings of the Properties panel (see Figure 4). At compile time this Document class will be associated with the main Timeline, and will be able to provide functionality much like actions on the main Timeline.

Before examining the external ActionScript file, there is one additional setting you need to be aware of. Choose File > Publish Settings and click the Flash tab. Make sure ActionScript 3.0 is specified in the Script menu and click Settings to open the Advanced ActionScript 3.0 dialog box (see Figure 5). By default the Automatically Declare Stage Instances check box is selected, which means that Flash will automatically declare any element with an instance name that resides on the main Timeline at compile time. Although this is a handy feature for designers and developers who use the Flash authoring environment for ActionScript development, it can also cause problems for developers who use external code editors, because these editors will complain that these declarations have not been created. (Flash acts as if they have been created, creating them for you in memory, so it's not really a matter of visibility; they are required but don't exist, and Flash creates them for you automagically.) To make your code editable in multiple development environments, you are better off unchecking this option and explicitly declaring every element with an instance name that resides on the main Timeline as a public class variable in your Document class. Without these declarations, Flash will throw a compile-time error.

In the example there is just one MovieClip on the main Timeline with the instance name ball, so the following declaration is required:

public var ball:MovieClip;

From this point onwards you automatically have a reference to the ball instance on the main Timeline. Here is the basic scaffolding for the Document class named Test.as:

package {
    import flash.display.MovieClip;

    public class Test extends MovieClip {
        public var ball:MovieClip;

        public function Test() {
            super();
            stop();
        }
    }
}

The code above declares the ball MovieClip on the main Timeline and stops its playback.
Although Flash implicitly calls the super() method in the constructor, I usually add it for all extended classes as a best practice; this reminds me to make the method call when additional arguments are required. Starting with the basic scaffolding above, I add the text field for the current timeline label to the top left of the screen (highlighted text indicates the newly added code):

package {
    import flash.display.MovieClip;
    import flash.text.TextField;

    public class Test extends MovieClip {
        public var ball:MovieClip;
        private var output:TextField;

        public function Test() {
            super();
            stop();
            output = new TextField();
            addChild(output);
            output.text = "testing: 1, 2, 3"; // just for testing purposes
        }
    }
}

At this point it starts to get more interesting. I add the TimelineWatcher and TimelineEvent classes as well as the necessary logic:

package {
    import flash.display.MovieClip;
    import flash.text.TextField;
    // plus imports for TimelineWatcher and TimelineEvent, as in the sample files

    public class Test extends MovieClip {
        public var ball:MovieClip;
        private var output:TextField;
        private var timelineWatcher:TimelineWatcher;
        private static const MOVE_LEFT:String = "moveLeft";
        private static const MOVE_RIGHT:String = "moveRight";

        public function Test() {
            super();
            stop();
            output = new TextField();
            addChild(output);
            timelineWatcher = new TimelineWatcher(this);
            timelineWatcher.addEventListener(TimelineEvent.LABEL_REACHED, handleTimelineEvent);
            gotoAndPlay(1);
        }

        private function handleTimelineEvent(e:TimelineEvent):void {
            if (e.currentLabel === MOVE_LEFT || e.currentLabel === MOVE_RIGHT) {
                output.text = "label: " + e.currentLabel;
            }
        }
    }
}

First I import both classes so the Flash compiler knows where to find them. Next I add two constants named MOVE_LEFT and MOVE_RIGHT that match the timeline labels. This is a good practice to avoid errors due to typing mistakes. Next I declare the TimelineWatcher instance named timelineWatcher as a private class variable. In the main constructor named Test(), I create the actual TimelineWatcher instance and pass it the timeline that I want it to watch as a parameter. In this case I can use the keyword this, because it reflects the main Timeline. I also add an event listener to listen for the TimelineEvent.LABEL_REACHED event, which triggers the custom handleTimelineEvent() method. Next, I start playing the main Timeline again.

The handleTimelineEvent() method uses the returned event object named e to retrieve the current label and checks whether it corresponds with the predefined labels I am monitoring. If so, it updates the timeline label on the screen. When you run this example you will see the red ball continuously moving back and forth with the current timeline label in the top left of the canvas. After adding only a few lines of external ActionScript code I now know exactly what's going on with the main Timeline. And, look mum, I used no timeline actions at all!
The edits to the code below stop the timeline after it has played three loops:

package {
    import flash.display.MovieClip;
    import flash.text.TextField;
    // plus imports for TimelineWatcher and TimelineEvent, as in the sample files

    public class Test extends MovieClip {
        public var ball:MovieClip;
        private var output:TextField;
        private var timelineWatcher:TimelineWatcher;
        private static const MOVE_LEFT:String = "moveLeft";
        private static const MOVE_RIGHT:String = "moveRight";
        private var loops:uint = 1;

        public function Test() {
            super();
            stop();
            output = new TextField();
            addChild(output);
            timelineWatcher = new TimelineWatcher(this);
            timelineWatcher.addEventListener(TimelineEvent.LABEL_REACHED, handleTimelineEvent);
            timelineWatcher.addEventListener(TimelineEvent.END_REACHED, handleTimelineEvent);
            gotoAndPlay(1);
        }

        private function handleTimelineEvent(e:TimelineEvent):void {
            switch (e.type) {
                case TimelineEvent.LABEL_REACHED:
                    if (e.currentLabel === MOVE_LEFT || e.currentLabel === MOVE_RIGHT) {
                        output.text = "label: " + e.currentLabel + "\nloops: " + loops;
                    }
                    break;
                case TimelineEvent.END_REACHED:
                    loops++;
                    if (loops > 3) {
                        stop();
                        timelineWatcher.removeEventListener(TimelineEvent.LABEL_REACHED, handleTimelineEvent);
                        timelineWatcher.removeEventListener(TimelineEvent.END_REACHED, handleTimelineEvent);
                        timelineWatcher.dispose();
                        timelineWatcher = null;
                    }
                    break;
            }
        }
    }
}

I have added a second listener to listen for the TimelineEvent.END_REACHED event to count each timeline loop, and I stop the timeline playback after three loops. Finally, I clean up by removing the two listeners, disposing the TimelineWatcher instance, and setting its reference to null. That completes all the code for the example! When you test this code you should see the example as displayed in Figure 1.

In this article you've seen how to use the TimelineWatcher class to dispatch an event when a frame label or the end of a timeline is reached, while keeping timelines and actions separate. Thibault Imbert from ByteArray.org recently blogged about the FrameLabel event, a new feature request for the next version of Adobe Flash Player. For more information about ActionScript, visit the ActionScript Technology Center.

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
https://www.adobe.com/devnet/flash/articles/timelinewatcher.html
wanted: sbcl version assert

Bug Description

Several times in my work with other people, since I am keeping up to date with SBCL, when I have them pull my latest code we go through a cycle of:

them: "I'm getting error xyz."
me: "Oh, you need to update to the latest SBCL."

So I wrote the following in my initialization file. I'm wondering if there is a place for this or something like it in the main code base. If so, please review and I can clean it up as needed. Also, if this launchpad system is not the best place for things like this and the sbcl-devel list would be better, please inform me of my transgressions.

----

(defparameter *required-sbcl-version* ...)

(defun split-sbcl-version (str)
  (flet ((list-to-num (list) ...))
    (loop with acc and vals
          for c across str
          do (if (digit-char-p c) ...)
          finally (return (cons (list-to-num acc) vals)))))

(defun sbcl-version->num (version)
  (destructuring-bind (a b c d) version
    (+ (* a 1000000000000) (* b 100000000) (* c 10000) d)))

(let ((this-version (lisp-implementation-version)))
  (when (< (sbcl-version->num (split-sbcl-version this-version))
           (sbcl-version->num (split-sbcl-version *required-sbcl-version*)))
    (format t "SBCL version of at least ~A is required. This SBCL is ~A.~%"
            *required-sbcl-version* this-version)
    (quit)))

Comments:

Wouldn't it make more sense to call it sb-ext:version<=, since I imagine we would want (version<= "1.0.42. ...") style calls?

The attached patch adds SB-EXT:VERSION-ASSERT. It takes a version number as a string, and asserts that (lisp-implementation-version) is at least that new. I've chosen to only parse leading numerical components in the version strings and ignore anything after the numerical components stop. Trying to make sense of whether a bunch of letters are some kind of versioning scheme (1.1.12a vs. 1.1.12b), a designator for release candidates (1.1.12rc1), or a specific git commit or branch (1.1.12. ...) is not worth it. Thus, as soon as version-assert encounters a non-digit character that isn't a period in either the version string or (lisp-implementation-version), it stops parsing. Examples:

If the current version is 1.1.12, (version-assert "1.1.12") passes.
If the current version is 1.1.12, (version-assert "1.1.12.1") does not pass.
If the current version is 1.1.1, (version-assert "1.0") passes.

It seems that ASDF3 actually includes functions that do this: uiop:version<, uiop:version<=, uiop:version-compatible-p.

I pushed something similar in 920b5eb (new function SB-EXT:ASSERT-VERSION->=).

Launchpad is fine. In principle I think SB-EXT:VERSION< or similar would be possible. It does need to deal with non-numeric components, but probably TRT is to only parse the leading numeric components. So (version< "1.0.34.12-foo" "1.0.44.1.master.0-dirty") => T and (version< "1.0.42.debian-something" "1.0.42.debian-something-other") => NIL. Or possibly use STRING< to compare non-numeric tails. Dunno. While there's no strict need to have this in SBCL itself, obviously, SBCL _is_ a sufficiently moving target that it might be a good thing.
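For reference, here is one self-contained way to express the same idea in portable Common Lisp; this is a re-implementation sketch, not the reporter's patch or the committed SB-EXT function, and all of the names below are made up:

(defun version-components (string)
  "Collect the leading dotted numeric components of STRING,
e.g. \"1.0.42.3-foo\" => (1 0 42 3)."
  (loop for start = 0 then (1+ end)
        for end = (position #\. string :start start)
        for n = (parse-integer string :start start :end end
                               :junk-allowed t)
        while n
        collect n
        while end))

(defun version-string<= (a b)
  "True when version string A is not newer than B;
missing components count as zero."
  (let ((va (version-components a))
        (vb (version-components b)))
    (loop for i from 0 below (max (length va) (length vb))
          for x = (or (nth i va) 0)
          for y = (or (nth i vb) 0)
          when (< x y) return t
          when (> x y) return nil
          finally (return t))))

(defun require-sbcl-version (wanted)
  (unless (version-string<= wanted (lisp-implementation-version))
    (format t "SBCL version of at least ~A is required. This SBCL is ~A.~%"
            wanted (lisp-implementation-version))
    #+sbcl (sb-ext:quit)))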
https://bugs.launchpad.net/sbcl/+bug/674372
Socket programming is an important concept in the world of computer programming. When we are sending or receiving data between two communication links, we need to use socket programming.

What is Socket Programming?

Socket programming tells us how to create a link between local and remote processes by using sockets. Now you might be thinking, what do you mean by sockets? Well, a socket is a communications endpoint that you can name and address in a network. To learn more about socket programming in general you can visit the IBM article on socket programming. It is super informative.

Socket Programming with Python

There are high-level libraries in Python which make it easy for us to avoid hand-written socket code, for example the requests module, which can help us communicate with endpoints far more easily than we might think.

Python Socket Library

The socket library is one of the Python standard libraries for socket programming, and it makes socket programming possible in a few lines. Although you do not have the control over sockets that one has with a low-level programming language such as C or C++, where you have complete access to the low level of sockets, and although there are certain limitations in socket programming with the Python programming language, we still have a socket module that makes networking very easy for us. In the next few lines, I will show how we can actually write code that creates connections to a remote server.

How to get the IP address of a domain with socket in Python?

To get the IP address of a domain with socket we can use the following code snippet (google.com is just an example; any hostname works):

import socket

ip = socket.gethostbyname('google.com')
print(ip)
# output
# 142.250.180.51

The above is a simple program that gives us the IP address of a domain; this is not the basic purpose of socket programming, though. Socket programming is used to create a connection between two processes, whether on the same computer or on a remote computer. We will see what we can do with socket programming in the next sections, and we will create a server-client application where the client will send data to the server. To keep things simple we will just send text, but you can send files and whatever data you have.

Create a server and client to send data with Python

We can create a server and a client locally on our computer for testing purposes that can send and receive data. We can do so by creating one program for the client and one program for the server. On our server side, we will be waiting for the connection, and when we receive the connection we will send a text to our client and print it on a terminal.

Creating our server that will wait for the connection

The server will keep waiting for a connection until one process connects to it. We do so using a while loop; once a client is connected, the server sends the text data, exits the while loop, and shuts down.

import socket

s = socket.socket()
port = 34541
host = socket.gethostname()
ip = socket.gethostbyname(host)
s.bind(('', port))  # '' binds to all available interfaces
print(f"server created on {ip}:{port}")
s.listen(2)
while True:
    client, addr = s.accept()
    print('Connected to', addr)
    client.send('Data from the server'.encode())
    client.close()
    break

Creating the client program with socket using Python

import socket

s = socket.socket()
port = 34541
s.connect(('10.48.57.254', port))  # use the server's IP address here
print(s.recv(1024).decode())
s.close()

A few important methods in the Python socket library

socket.bind(address): Bind the socket to address.
The socket must not already be bound.

socket.close(): Mark the socket closed. Sockets are automatically closed when they are garbage-collected, but it is recommended to close() them explicitly, or to use a with statement around them. Changed in version 3.6: OSError is now raised if an error occurs when the underlying close() call is made.

socket.connect(address): Connect to a remote socket at the address. (The format of address depends on the address family; see above.) If the connection is interrupted by a signal, the method waits until the connection completes, or raises a TimeoutError on timeout, if the signal handler doesn't raise an exception and the socket is blocking or has a timeout. For non-blocking sockets, the method raises an InterruptedError exception if the connection is interrupted by a signal (or the exception raised by the signal handler).

socket.connect_ex(address): Like connect(address), but return an error indicator instead of raising an exception for errors returned by the C-level connect() call.

socket.detach(): Put the socket object into closed state without actually closing the underlying file descriptor. The file descriptor is returned, and can be reused for other purposes.

socket.getpeername(): Return the remote address to which the socket is connected.

socket.getsockname(): Return the socket's own address. This is useful to find out the port number of an IPv4/v6 socket, for instance.

socket.listen([backlog]): Enable a server to accept connections. If backlog is specified, it must be at least 0 (if it is lower, it is set to 0); it specifies the number of unaccepted connections that the system will allow before refusing new connections. If not specified, a default reasonable value is chosen.
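Putting several of these methods together, here is a compact variant of the server above that uses with statements so the sockets are closed automatically (the loopback address and port are arbitrary choices for local testing):

import socket

HOST, PORT = "127.0.0.1", 34541

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen(1)                 # allow one pending connection
    print(f"listening on {HOST}:{PORT}")
    conn, addr = s.accept()     # blocks until a client connects
    with conn:
        print("connected by", addr)
        conn.sendall(b"Data from the server")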
https://www.alixaprodev.com/2021/10/socket-programming-with-python.html
C++ Tutorial: Keyword - 2017

Q: Which are the right forms of main()?

1. void main()
2. static int main()
3. int main(char** argv, int argc)
4. int main(int argc, char** argv)
5. int main()
6. inline int main(int argc, char* argv[])
7. int main(char* argv[], int argc)
8. int main(int argc, char* argv[])
9. int main(int argc, char* argv[], char* options[])

Ans: 4, 5, 8 (form 9 is also accepted by many implementations as a common extension). main() must return an int, and must have either no parameters or (int argc, char* argv[]) as its first set of parameters. A program that declares main() to be inline or static is ill-formed.

With C++11, we can use auto to declare types that can be deduced from context, so that we can iterate over complex data structures conveniently. auto is no longer a storage specifier used to put a variable on the stack; that usage was never widely used since its inception anyway. So, the following code

for (vector<int>::iterator it = v.begin(); it != v.end(); ++it)
    cout << *it << '\t';
cout << endl;

can be simplified by using auto:

for (auto it = v.begin(); it != v.end(); ++it)

Here is another sample for the usage of auto:

#include <iostream>
#include <list>
using namespace std;

ostream& operator<<(ostream& os, const list<int>& lst)
{
    for (auto elm : lst) {
        os << " " << elm;
    }
    return os;
}

int main()
{
    list<int> listA;
    list<int> listB;
    for (int i = 1; i < 6; i++) listA.push_back(i);
    for (int i = 1; i < 6; i++) listB.push_back(i*100);

    cout << "listA: " << listA << endl;
    cout << "listB: " << listB << endl;

    auto iter = listA.begin();
    advance(iter, 3);

    /* Moves all elements from listB into *this.
       The elements are inserted before the element pointed to by iter.
       The container listB becomes empty after the operation. */
    listA.splice(iter, listB);

    cout << "listA: " << listA << endl;
    cout << "listB: " << listB << endl;

    /* Moves the elements in the range [iter, listA.end()) from listA into *this.
       The elements are inserted before the element pointed to by listB.begin(). */
    listB.splice(listB.begin(), listA, iter, listA.end());

    cout << "listA: " << listA << endl;
    cout << "listB: " << listB << endl;
    return 0;
}

Output:

listA: 1 2 3 4 5
listB: 100 200 300 400 500
listA: 1 2 3 100 200 300 400 500 4 5
listB:
listA: 1 2 3 100 200 300 400 500
listB: 4 5

The const qualifier allows us to ask the compiler to enforce a semantic constraint: a particular object should not be modified. It also allows us to tell other programmers that a value should remain invariant. The general form for creating a constant is:

const type name = value;

Note that we initialize a const in the declaration. So, the following is an error:

const int cint;
cint = 10; // too late

We'll get an error message something like this:

error: 'cint' : const object must be initialized if not extern
error: 'cint' : you cannot assign to a variable that is const

If we don't provide a value when we declare the constant, it ends up with an unspecified value that we cannot modify.

What is const? If we answer that question with "const is constant," we may get 0 points for that answer. If we answer with "const is read only," then we may get some points. Const correctness refers to the use of the C++ const keyword to declare a variable or method as immutable. It is a compile-time construct that can be used to maintain the correctness of code that shouldn't modify certain variables. We can define variables as const, to indicate that they should not be modified, and we can also define methods as const, to mean that they should not modify any member variables of the class.
Using const correctness is simply good programming practice. It can also provide documentation on the intent of our methods, and hence make them easier to use. For pointers, we can specify whether the pointer itself is const, the data it points to is const, both, or neither:

char str[] = "constantness";
char *p = str;                  // non-const pointer to non-const data
const char *pc = str;           // non-const pointer to const data
char * const cp = str;          // const pointer to non-const data
const char * const cpc = str;   // const pointer to const data

When const appears to the left of the *, what's pointed to is constant, and if const appears to the right of the *, the pointer itself is constant. If const appears on both sides, both are constant.

Since STL iterators are modeled on pointers, an iterator behaves much like a T* pointer. So, declaring an iterator as const is like declaring a pointer const. If we want an iterator that points to something that can't be altered (const T*), we want to use a const_iterator:

vector<int> v;
vector<int>::const_iterator itc = v.begin();
*itc = 2012; // error: *itc is const
++itc;       // ok, itc is not const

How about a T* const iterator:

vector<int> v;
const vector<int>::iterator cit = v.begin();
*cit = 2012; // ok
++cit;       // error: cit is const

Making a function to display an array is simple. We pass the name of the array and the number of elements to the function. However, there are some implications. We need to guarantee that the display doesn't change the original array. In other words, we need to guard it from altering the values of the array. That kind of protection comes automatically with ordinary parameters of a function because C++ passes them by value, and the function plays with a copy. But functions that use an array play with the original. To keep a function from accidentally modifying the contents of an array, we can use the keyword const:

void display_array(const int arr[], int sz);

This declaration says that the pointer arr points to constant data, which means that we can't use arr to alter the data.

A class designer indicates which member functions do not modify the class object by declaring them as const member functions. For example:

class Testing {
public:
    void foo() const {}
};

In that way, we can protect the members of an object from being modified. So, in the following example, we'll get an error. For VS, we get "Error: expression (val) must be a modifiable lvalue."

#include <iostream>
using namespace std;

class Testing {
public:
    Testing(int n) : val(n) {}
    int getValue() const { return val; }
    void setValue(int n) const { val = n; }
private:
    int val;
};

int main()
{
    Testing test1(10);
    return 0;
}

That's because the member function setValue() is trying to modify the member variable val even though the function is declared as const. So, we should remove the const from the setValue() function.

There is another case in which a member function appears to go against the const declaration:

#include <iostream>
using namespace std;

class Testing {
public:
    Testing(int n) : val(n) {}
    void foo1() const { foo2(); }
    void foo2() {}
private:
    int val;
};

int main()
{
    Testing test1(10);
    return 0;
}

In this case, the member function foo2() is not doing anything. However, the compiler thinks foo2() is not safe because it does not have a const declaration. In other words, the compiler thinks that by calling a non-constant function from a const function, the code may try to change the value of the class object.
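One way to make that last example compile, sketched below, is simply to declare foo2() const as well, so that a const member function calls only const member functions:

class Testing {
public:
    Testing(int n) : val(n) {}
    void foo1() const { foo2(); } // ok: const may call const
    void foo2() const {}          // now callable from foo1()
private:
    int val;
};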
So, as a constant member function can't modify a data member of its class, a constant member function cannot make a call to a non-constant function. However, there are exceptions to the rule:

- A constant member function can alter a static data member.
- If we qualify a data member with the mutable keyword, then even a constant member function can modify it.

The example below shows that a const member function can be overloaded with a non-const member function that has the same parameter list. In this case, the constness of the class object determines which of the two functions is invoked:

#include <iostream>
using namespace std;

class Testing {
public:
    Testing(int n) : val(n) {}
    int getVal() const {
        cout << "getVal() const" << endl;
        return val;
    }
    int getVal() {
        cout << "getVal() non-const" << endl;
        return val;
    }
private:
    int val;
};

int main()
{
    const Testing ctest(10);
    Testing test(20);
    ctest.getVal();
    test.getVal();
    return 0;
}

Output is:

getVal() const
getVal() non-const

Other examples using const related to returning objects can be found in Object Returning.

#include <iostream>

struct Foo {
    Foo() {}
    void go() { std::cout << "Foo" << std::endl; }
};

struct Bar : public Foo {
    Bar() {}
    void go() { std::cout << "Bar" << std::endl; }
};

int main(int argc, char** argv)
{
    Bar b;
    const Foo f = b;
    f.go(); // error: 'Foo::go' : cannot convert 'this' pointer from 'const Foo' to 'Foo &'
    return 0;
}

All header files should have #define guards to prevent multiple inclusion. The format of the symbol name should be <PROJECT>_<PATH>_<FILE>_H_. To guarantee uniqueness, they should be based on the full path in a project's source tree. For example, the file foo/src/bar/baz.h in project foo should have the following guard:

#ifndef FOO_BAR_BAZ_H_
#define FOO_BAR_BAZ_H_
...
#endif // FOO_BAR_BAZ_H_

An enum is a very simple user-defined type, specifying its set of values as symbolic constants.

#include <iostream>

enum Month {
    Jan = 1, Feb, Mar, Apr, May, June,
    Jul, Aug, Sep, Oct, Nov, Dec
};

int main()
{
    using namespace std;
    Month f = Feb;
    Month j = Jul;
    cout << "f = " << f << endl;
    // f = 2; // error: cannot convert from 'int' to 'Month'
    int jj = j;           // allowed: we can get the numeric value of a 'Month'
    Month jjj = Month(7); // converting int to 'Month'
    cout << "jj = " << jj << ", jjj = " << jjj << endl;
    return 0;
}

Output is:

f = 2
jj = 7, jjj = 7

Let's look at another usage example of enum:

#define NumArrays 10

class ArrayObj {
private:
    int array[NumArrays];
};

int main()
{
    ArrayObj a;
    return 0;
}

Here, #define does its job. However, we can use const instead. The const qualifier lets us specify the type explicitly, and we can use scoping rules to limit the definition to particular functions or files. In other words, there's no way to create a class-specific constant using a #define, because #define doesn't respect scope.

class ArrayObj {
private:
    static const int NumArrays = 5;
    int array[NumArrays];
};

We can also use enum for the array size:

class ArrayObj {
private:
    enum { NumArrays = 5 };
    int array[NumArrays];
};

The following example defines days as an enum, increments the days by one day, and then displays them as strings by overloading + and <<.
#include <iostream>
using namespace std;

typedef enum days { SUN, MON, TUE, WED, THURS, FRI, SAT } days;

days operator+(days d)
{
    return static_cast<days>((static_cast<int>(d) + 1) % 7);
}

ostream& operator<<(ostream& os, days d)
{
    switch (d) {
    case SUN:   os << "SUN";   break;
    case MON:   os << "MON";   break;
    case TUE:   os << "TUE";   break;
    case WED:   os << "WED";   break;
    case THURS: os << "THURS"; break;
    case FRI:   os << "FRI";   break;
    case SAT:   os << "SAT";   break;
    }
    return os;
}

int main()
{
    days d, a;
    d = WED;
    a = +d;
    cout << d << " " << a << endl;
    d = SAT;
    a = +d;
    cout << d << " " << a << endl;
    return 0;
}

Output:

WED THURS
SAT SUN

Adding explicit is a good practice for any constructor that accepts a single argument. It is used to prevent a specific constructor from being called implicitly when constructing an object. For example, without the explicit keyword, the following is valid C++ code:

Array a = 10;

This will call the Array single-argument constructor with the integer argument of 10:

Array::Array(int size) {}

This type of implicit behavior, however, can be confusing, and in most cases, unintended. As a further example of this kind of undesired implicit conversion, let's consider the following function:

void checkArraySize(const Array& array, int size);

Without declaring the single-argument constructor of Array as explicit, we could call this function as

checkArraySize(10, 10);

As another example, let's look at the following class, which has a constructor that takes a single argument. It actually defines a conversion from its argument type to its class:

class Complex {
public:
    Complex(double);          // this defines double-to-Complex conversion
    Complex(double, double);
};
...
Complex cmplx = 3.14;               // OK: converts 3.14 to (3.14, 0)
Complex cmplx = Complex(1.0, 2.5);

But this kind of conversion may cause unexpected and undesirable effects, as we see in the example below:

class Vector {
    int sz;
    double *elem;
public:
    Vector(int s) : sz(s), elem(new double[s]) {
        for (int i = 0; i < s; ++i) elem[i] = 0;
    }
    ...
};

The Vector has a constructor that takes an int, which implies that it defines a conversion from int to Vector:

class Vector {
    ...
    Vector(int);
    ...
};

Vector v = 10; // makes a vector of 10 doubles?
v = 20;        // assigns a new Vector of 20 doubles to v?

This weakens the type safety of our code because now the compiler will not enforce the type of the first argument to be an explicit Array/Vector object in the above examples. As a result, there is the potential for the user to forget the correct order of arguments and pass them in the wrong order. This is why we should always use the explicit keyword for any single-argument constructor unless we know that we want to support implicit conversion. So, for the second example, we add explicit:

class Vector {
    ...
    explicit Vector(int);
    ...
};

Vector v = 10; // error: no int-to-Vector conversion
v = 20;        // error: no int-to-Vector conversion
Vector v(10);  // OK

Though implementing a program as a set of functions is good from a software engineering standpoint, function calls involve execution-time overhead. So, C++ provides inline functions to help reduce function-call overhead, especially for small functions. Placing the qualifier inline before a function's return type in the function definition tells the compiler to generate a copy of the function's code in place to avoid a function call.
The trade-off is that multiple copies of the function code are inserted in the program rather than there being a single copy of the function to which control is passed each time the function is called. The compiler can ignore the inline qualifier and typically does so for all but the smallest functions. The complete definition of the function should appear before it is used in the program. This is required so that the compiler knows how to expand a function call into its inlined code. For this reason, reusable inline functions are typically placed in header files, so that their definitions can be included in each source file that uses them.

In general, we provide declarations in our .h files and the associated definitions in our .cpp files. However, it's also possible to provide a definition for a method at the point where we declare it in a .h file:

class MyClass {
public:
    void MyMethod() { }
};

This implicitly requests the compiler to inline the MyMethod() member function at all places where it is called. In terms of API design, this is a bad practice since it exposes the code for how the method has been implemented and directly inlines the code into our clients' programs. There are exceptions to this rule to support templates and intentional use of inline.

A decent rule of thumb is to not inline a function if it is more than 10 lines long. Beware of destructors, which are often longer than they appear because of implicit member- and base-destructor calls! Another useful rule of thumb: it's typically not cost-effective to inline functions with loops or switch statements (unless, in the common case, the loop or switch statement is never executed). It is important to know that functions are not always inlined even if they are declared as such; for example, virtual and recursive functions are not normally inlined. Usually recursive functions should not be inline. The main reason for making a virtual function inline is to place its definition in the class, either for convenience or to document its behavior, e.g., for accessors and mutators.

To allow a class data member to be modified even though it is the data member of a const object, we can declare the data member as mutable. A mutable member is a member that is never const, even when it is the data member of a const object. A mutable member can always be updated, even in a const member function.

struct account {
    char name[50];
    mutable int id;
};

const account ac = {"Bush", 0};

strcpy(ac.name, "Obama"); // not allowed
ac.id++;                  // allowed

The following example has an error because it tries to modify a variable in a const member function:

#include <iostream>
#include <cstring>

class MyText {
public:
    std::size_t getLength() const;
private:
    char * ptrText;
    std::size_t txtLen;
};

std::size_t MyText::getLength() const
{
    // error: l-value specifies const object
    // cannot assign to txtLen because we are in a const member function
    txtLen = std::strlen(ptrText);
    return txtLen;
}

We can solve the problem.
mutable frees non-static data members from the const constraints:

#include <iostream>
#include <cstring>

class MyText {
public:
    std::size_t getLength() const;
private:
    char * ptrText;
    mutable std::size_t txtLen;
};

std::size_t MyText::getLength() const
{
    txtLen = std::strlen(ptrText); // ok
    return txtLen;
}

How can we make the code below work by making changes only inside class C?

#include <iostream>

class C {
private:
    int num;
public:
    C(int a) : num(a) {}
    int get_val() const;
};

// changes are not allowed in the code below
int C::get_val() const
{
    num++;
    return num;
}

int main()
{
    C obj(29);
    std::cout << obj.get_val() << std::endl;
}

Ans: mutable int num;

As programming projects grow large, the potential for name conflicts increases. The language mechanism for organizing classes, functions, data, and types into an identifiable and named part of a program, without defining a type, is a namespace. The C++ standard provides namespaces, which allow us to have greater control over the scope of names. Note that a namespace scope does not end with a semicolon (;). The following code uses namespace to create two namespaces, Stock and Market:

namespace Stock {
    double penny;
    void order();
    int amount;
    struct Option { ... };
}

namespace Market {
    double dollar;
    void order();
    int amount;
    struct Purchase { ... };
}

The names in any one namespace don't conflict with names in another namespace. So, the order in Stock is not confused with the order in Market.

Namespaces can be located at the global level or inside other namespaces, but they cannot be in a block. Therefore, a name declared in a namespace has external linkage by default. Namespaces are open (they can be discontiguous); in other words, we can add names to existing namespaces. For instance, we can add another name to the existing list of names in Stock:

namespace Stock {
    string getCompanyName();
}

The original Stock namespace provides a prototype for the order() function. We can provide the code for the function later in the file, or in another file, by using the Stock namespace again:

namespace Stock {
    void order() { ... }
}

How do we access names in a given namespace? We use the scope-resolution operator (::) to qualify a name with its namespace:

Stock::amount = 200;
Market::Purchase p;
Market::order();

A plain variable name, such as penny, is called an unqualified name, while a name with the namespace, as in Market::dollar, is called a qualified name.

As a quick summary: If we want to reference "j" in "main()", how do we do that?

namespace {
    int j;
}

int main() {
    // ??
    return 0;
}

Answer: int i = ::j;

So, it looks like this when we have j both in global and local scope:

#include <iostream>

namespace {
    int j;
}

int main()
{
    ::j = 911;
    int j = 800;
    int n1 = ::j;
    int n2 = j;
    std::cout << "n1 = " << n1 << std::endl;
    std::cout << "n2 = " << n2 << std::endl;
    return 0;
}

Output:

n1 = 911
n2 = 800

The ::j appears to be defined at global scope, and it can be handled as such. In other words, it is declared outside any class, function, or namespace. The global namespace is implicitly declared and exists in every program. Each file that defines variables at global scope adds those names to the global namespace. To refer to a member of the global namespace, we use the scope operator (::). However, ::j here is not actually defined at global scope. More specifically, it is declared in an unnamed namespace. So, unlike a variable in the global namespace, it is local to a particular file and never spans multiple files.
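A long namespace name can also be shortened with a namespace alias; a small sketch with made-up names:

namespace StockMarketAnalysis {
    int amount;
}

namespace sma = StockMarketAnalysis; // alias

int n = sma::amount; // same as StockMarketAnalysis::amount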
The register keyword is a hint to the compiler that we want it to provide fast access to the variable, perhaps by using a CPU register instead of the stack to handle a particular variable. The CPU can access a value in one of its registers more quickly than it can access memory in the stack. Some compilers may ignore the hint and use register allocation algorithms to figure out the best candidates to be placed within the available machine registers. Because the compiler is aware of the machine architecture on which the program is run, it is often able to make a more informed decision when selecting the content of machine registers. Usually, automatic objects used heavily within a function can be declared with the keyword register. If possible, the compiler will load the object into a machine register. If it cannot, the object remains in memory. To declare a register variable, we preface the type with the keyword register:

register int heavy_use;

Array indexes and pointers occurring within a loop are good candidates for register objects.

for (register int i = 0; i < sz; i++) ...
for (register int *ip = array; ip < array + arraySize; ip++) ...

If a variable is stored in a register, it doesn't have a memory address. So, we can't apply the address-of operator to a register variable. Therefore, in the following code, it's okay to take the address of the variable xStack but not of the register variable xRegister:

void f(int *);

int main() {
    int xStack;
    register int xRegister;
    f(&xStack);    // ok
    f(&xRegister); // not ok
    ...
}

The keyword struct introduces a structure declaration, which is a list of declarations enclosed in braces.

struct structure_name {
    type1 member1;
    type2 member2;
} object_name;

A specific example looks like this:

struct data {
    int idata;
    float fdata;
} myData;

myData.idata = 10;
myData.fdata = 3.14;

Once used like the above, the tag data stands for the declaration {...}, and it can be used later in definitions of instances of the structure. For example, it can be used like this:

struct data yourData;

It defines a variable yourData which is a structure of type struct data. We can also define an array of structures with an initializer:

#include <stdio.h>

struct data {
    int idata;
    float fdata;
} myData[] = {
    {10, 3.14},
    {20, 0.314},
    {30, 0.0314}
};

int main() {
    printf("%d,%f\n", myData[0].idata, myData[0].fdata);
    printf("%d,%f\n", myData[1].idata, myData[1].fdata);
    printf("%d,%f\n", myData[2].idata, myData[2].fdata);
    printf("%zu\n", sizeof (struct data)); // 8
    printf("%zu\n", sizeof myData);        // 24
    printf("%zu\n", sizeof *myData);       // 8
    return 0;
}

The number of entries in the array myData[] will be computed if initializers are there and the [] is left empty, as in the above example. We can also calculate the size of the myData[] array using:

sizeof myData / sizeof (struct data)

or

sizeof myData / sizeof myData[0]

The 2nd one is better because it does not need to be changed if the type changes. For more on the size of a struct, see Size of struct.

The struct is very useful when we build a linked list:

struct list {
    int data;
    struct list *next;
};

The recursive declaration looks illegal because it seems to refer to itself. But it's not: the struct does not contain an instance of itself. Rather, struct list *next; declares next to be a pointer to a list, not a list itself.

Suppose we have the struct as below:

typedef struct t {
    int i;
    float f;
    struct t *ptr;   /* "t* ptr" compiles in C++, but C needs the struct tag here */
} T;

How can we initialize an array of that struct?
Here is the answer: T myStruct[10] = {{0}};

The example below is a sort of quiz for switch, related to break and continue. We should be able to guess the output from the code:

#include <iostream>
using namespace std;

int main()
{
    for (int i = 0; i < 5; ++i) {
        switch (i) {
        case 0 : cout << "0";
        case 1 : cout << "1"; continue;
        case 2 : cout << "2"; break;
        case 3 : cout << "3"; break;
        default : cout << "d"; break;
        }
        cout << "*";
    }
    return 0;
}

The output should look like this: 0112*3*d*

At i = 0, it prints 0, falls through to case 1, and prints 1; continue jumps to the next iteration of the loop. At i = 1, it skips case 0, goes to case 1, prints 1, and continues again. At i = 2, it skips cases 0 and 1, goes to case 2, prints 2, breaks out of the switch, and prints *. At i = 3, it skips cases 0-2, prints 3 at case 3, breaks, and prints *. At i = 4, it skips all cases, prints d at default, and prints *.

We can define a new name for an existing type. We use typedef to create an alias:

typedef typeName aliasName;

So, we can make byte_pointer an alias for char *:

typedef char* byte_pointer;

Or we can create shorter names for types with longer names:

typedef unsigned short int ushort;

It defines ushort as another name for the type unsigned short int. As another example:

struct node {
    int data;
    node *next;
};
typedef node node_t;

or

typedef struct node {
    int data;
    node *next;
} node_t;

But a typedef declaration does not create a new type. It just adds a new name for an existing type. The primary reason for using typedef is to parameterize code against portability issues. So, by just changing typedefs, we can minimize the changes in our source code.

There is another issue regarding the difference between C and C++ in using the struct tag namespace. In C, we'll get an error if we do:

struct Foo { ... };
Foo x;

while we do not get any error in C++. So, in C we should use the following instead:

struct Foo { ... };
struct Foo x;

Why? Here is an explanation. So, I usually use typedef to avoid this subtle difference:

typedef struct Foo { ... } Foo;

C++ provides two mechanisms to qualify names:

- A using declaration lets us make particular identifiers available.
  using Stock::order; // a using declaration
- A using directive makes the entire namespace accessible.
  using namespace Stock; // make all the names in Stock available

But we should not use the using keyword in the global scope of our public headers, because it would defeat the purpose of using namespaces in the first place. If we want to reference symbols from another namespace in our header, we should use a fully qualified name, e.g., std::string.

The volatile keyword indicates that the value in a memory location can be altered in ways unknown to the compiler or have other unknown side effects (e.g. modification via a signal interrupt, hardware register, or memory-mapped I/O) even though nothing in the program code modifies the contents. In other words, volatile informs the compiler that the value of the variable can change from the outside, without any update done by the code. As an example, we can think of a memory-mapped register representing a DIP-switch input: once the register has been read and cached in a general-purpose CPU register, our program may keep reading the same stale value, even if the hardware has changed it. The point of the volatile keyword is its effect on compiler optimization. For more on optimization with the volatile keyword, visit Volatile Optimization. In that optimization, a compiler can cache a value in a register if it's used several times with the same value, under the assumption that the variable doesn't change during those uses.
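A minimal sketch of the kind of code where that cached-read optimization bites (the flag name is hypothetical, standing in for something set by an interrupt handler or another context):

volatile bool data_ready = false;   // set from outside normal program flow

void waitForData()
{
    // Without volatile, the compiler may read data_ready once, cache it
    // in a register, and turn this into an infinite loop. With volatile,
    // every iteration performs a fresh load from memory.
    while (!data_ready) {
        // busy-wait
    }
}

With the qualifier in place, each test of data_ready is a fresh read, which is exactly the behavior described next.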
If we don't declare a variable as volatile, then the compiler may make the optimization. If we do declare a variable as volatile, we're telling the compiler not to apply that optimization to code referring to the object. In other words, volatile just tells the compiler to reload such variables from memory before using them and store them back to memory after they have been modified. Declaring a variable as volatile is more applicable to systems-level programming than to normal applications-level programming.

More on volatile:

- Can a parameter be both const and volatile? Yes. A read-only status register is an example. Also, embedded systems can have many kinds of input peripherals such as free-running timers and keypad interfaces. They must be declared "const volatile", because they both
  - change value by means outside our C program, and
  - must not be written to by our C program (it makes no sense to write a value to a 10-key keypad).
  It is const because our C program must not try to modify it.
- Can a pointer be volatile? Yes. An Interrupt Service Routine (ISR) can modify a pointer, for example.
- Check the following square code:

int square(volatile int *p) {
    return *p * *p;
}

Will this code always work? I found this code on the Web but was not sure about the answer. The answer from the site: it may not. That's because the compiler could produce internal code like this:

int square(volatile int *p) {
    int a = *p;
    int b = *p;
    return a * b;
}

In other words, we may end up multiplying two different numbers, because the value is volatile and could change unexpectedly between the two reads. So, the site suggested the following code instead:

int square(volatile int *p) {
    int a = *p;
    return a * a;
}

The essence of embedded programming is that it requires communication with the outside world, so it has to deal with both input and output devices.

References on volatile:
- "Empirical data suggests that incorrect optimization of volatile objects is one of the most common defects in C optimizers".
- Placing C variables at specific addresses to access memory-mapped peripherals.
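To round out the const volatile point above, here is a minimal sketch of a read-only, hardware-updated input register; the address, width and names are invented for the example:

#include <cstdint>

// const: our code must never write to it.
// volatile: the hardware may change it at any time.
const volatile std::uint32_t* const DIP_SWITCHES =
    reinterpret_cast<const volatile std::uint32_t*>(0x40001000);

std::uint32_t readSwitches()
{
    return *DIP_SWITCHES;   // every call performs a fresh hardware read
}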
http://www.bogotobogo.com/cplusplus/cplusplus_keywords.php
CC-MAIN-2017-34
refinedweb
4,904
58.52
I have tried now for half a day, but I don't get "Sencha package build" working. Why is it that "Sencha package build" keeps complaining that it can't find my Ext JS classes (e.g. Ext.panel.Panel)? I have done this so often, but with the latest Sencha Cmd it simply doesn't work. Has this been tested by Sencha, I mean with more than just the Readme file in the Src folder?

At the top of my package.json (and when I put "classic" in "toolkit", it is not even processing anything):

Code:
"name": "ext-markdown-panel",
"sencha": {
    "namespace": "MdPanel",
    "type": "code",
    "toolkit": "",
    "framework": "ext",
    "creator": "Intranet",
    "summary": "Panel that supports Remarkable.js Markdown (view only)",
    "detailedDescription": "Panel that supports Remarkable.js Markdown (view only)",
    "version": "1.0.0",
    "compatVersion": "1.0.0",
    "format": "1",

Code:
[INF] Processing Build Descriptor : default
[ERR] Cannot satisfy requirements for "classic"!
[ERR] The following versions cannot be satisfied:
[ERR] ext-markdown-panel: classic (No matches!)
[ERR] Cannot resolve package requirements

As far as I can see, the .sencha folder has disappeared, so that also takes away a few options. The question is quite simple: why, within a workspace (set up following the documentation), does a local package not simply use the "ext" folder of that workspace when I do a "sencha package build"?

** And a recommendation to Sencha Inc.: please make sure the documentation for Sencha Cmd is up to date, which it isn't !!!
https://www.sencha.com/forum/showthread.php?468626-Sencha-CMD-6-5-2-15-package-build-dependencies-problem&p=1314323
CC-MAIN-2018-05
refinedweb
252
58.48
If you are familiar with Brad Frost's Atomic Design or Pattern Lab, you're probably wondering how it can be applied to a website layout styling language like CSS, since it's not really as structured as other programming languages, and all examples of Atomic Design involve the organisation of HTML. However, if you've actually applied the Pattern Lab web style guide to a project, you'll quickly discover that anything that can be broken down to basic reusable code can be subject to Atomic Design. This has inevitably found its way to CSS and has led to the rise of something called Atomic CSS (ACSS). ACSS has been around for a while, but didn't really get much traction until early 2015, and it's increasing in popularity.

So – what is Atomic CSS?

ACSS is a method of using one class for one CSS property, so that the same property can be used in different parts of a site. This eliminates the possibility of duplicating that property in the stylesheet, since the class itself is being used multiple times in the HTML. This results in smaller, "drier" stylesheets, and is also a great way to quickly prototype layout and components on a web page. This usually results in the use of multiple CSS classes on an HTML element, similar to what's shown below:

<div class="fz(2em) c($custom-blue) mt(1.5em)"> Text </div>

The syntax above might shock you a little, as you've probably never seen brackets used in CSS class names before, and it might look unnecessary, but it makes sense for complex atomic classes like pseudo-classes and media queries. This isn't the canonical way to write atomic classes, but it's the best way I've found so far.

Using brackets for class names

In order for the browser to read brackets in the class names, they need to be escaped in CSS. I don't want to talk too much about escaping characters in this article, but it's a simple method that involves placing a backslash \ before the character you want to escape. If we wanted to create an atomic class out of the property text-transform: uppercase, the class in HTML would look like this, tt(u), and this in CSS: .tt\(u\). All modern browsers support escaping characters in CSS; however, I believe IE8 has some issues with it. If you would like to know more, I suggest reading this article by Mathias Bynens.

Another benefit of including brackets in your atomic CSS class names is that it clearly separates them from your non-atomic classes, both because of the brackets and because it's best practice to namespace your other classes. Harry Roberts wrote a great article on namespacing classes, and I myself have come up with a methodology for reusable components with namespacing classes. As mentioned before, however, you can write atomic classes however you want.

So what about BEM?

In my opinion, atomic classes should be used only for properties that will be reused throughout the site; for properties you know will not be changed, it's fine to stick with BEM, or whatever naming convention you are comfortable with. Say, for example, you have a default input box style for your site. I see no issue in using BEM for that class name and adding atomic classes after that. So for example:

<div class="c-form__input"></div>

Could have the properties:

.c-form__input {
  border-radius: 2px;
  border: 1px solid #555;
  color: white;
  height: 30px;
}

These properties make up the base state of the input element; additional states such as success or error could have atomic classes added to them.
.bd\(green\) { background: green }
.bd\(red\) { background: red }

<div class="c-form__input bd(green)"></div> <!-- for success -->
<div class="c-form__input bd(red)"></div> <!-- for error -->

What's the difference between inline styles and ACSS?

In short, there isn't much of a difference; however, using inline CSS for each element might cause you to repeat properties, whereas with ACSS a property is only written once. I still recommend inlining critical CSS, but not adding properties to elements if they will be used over and over again.

Will this cause bloat in CSS?

ACSS can cause bloat if there are unused classes. You can avoid this by using a tool such as uncss or Purify CSS to remove unused styles for production. The best tool for ACSS, however, is Atomizer by Yahoo, which reads the atomic classes used in a document (or several documents) and automatically adds the relevant CSS to your destination file. Atomizer works with gulp and grunt, which means it's easy to add as one of your frontend processes.

How can I convince my superiors to use this technique?

Not only are there large, well-known companies such as Yahoo and Mailchimp that already use ACSS in their production code, but if you're a believer in Object Oriented CSS (OOCSS), ACSS makes even more sense, as it further modularises your code.

I consider Atomic CSS (not to be confused with Atomic Design) to represent OOCSS taken to the nth degree. – Ben Frain

Conclusion

I'm a strong believer in ACSS and don't believe I could ever go back to creating a web-based project without it. I am yet to try it alongside something like Radium for React, and I am yet to see how different it is to CSS Modules, but if you currently aren't using those tools (and even if you are) I'm sure you'll find it to be a very beneficial way to make your CSS properties reusable.
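As a closing illustration of the escaping discussed earlier, here's a hedged sketch of what a pseudo-class utility and a media-query utility might look like; the class names and breakpoint are made up for the example:

/* hover state: class="td(u):h" in the HTML */
.td\(u\)\:h:hover { text-decoration: underline; }

/* breakpoint variant: class="d(n)--sm" in the HTML */
@media (min-width: 768px) {
  .d\(n\)--sm { display: none; }
}

The brackets and colon are escaped with backslashes in the stylesheet, while the HTML class attribute uses them literally.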
https://www.creativebloq.com/css3/atomic-css-11619006
CC-MAIN-2022-27
refinedweb
996
54.66
Managing Dates and Times in JavaScript Using date-fns

Moment.js is a fantastic library for working with dates in JavaScript — it has many great features and offers a whole host of useful utilities. It is not, however, without its critics. Many people cite the fact that Moment objects are mutable (i.e. operations like add or subtract change the original Moment object) as being confusing for developers and a source of bugs. It has also come under fire for its large size. Moment doesn't play well with modern "tree shaking" algorithms, and if you require internationalization or time zone support, you can quickly find yourself with a rather large JavaScript bundle. This has gone so far that Chrome's dev tools now highlight the fact that using Moment can lead to poor performance.

If JavaScript libraries are proving costly, replace them with smaller alternatives. Lighthouse in @ChromeDevTools now recommends smaller libraries that improve bundle size. pic.twitter.com/VFe8TFV9Y5 — Addy Osmani (@addyosmani) September 12, 2020

All of which has led the Moment maintainers to place the project into maintenance mode and to discourage Moment from being used in new projects going forward.

"We recognize that many existing projects may continue to use Moment, but we would like to discourage Moment from being used in new projects going forward. Instead, we would like to recommend alternatives that are excellent choices for use in modern applications today."

This makes date-fns one of the best alternatives to Moment.js out there.

Installation

Since version two of the library, the only way to install date-fns is as an npm package.

npm install date-fns

Or via Yarn:

yarn add date-fns

You can use date-fns with both the CommonJS module system and also with ES modules:

// CommonJS
const { lastDayOfMonth } = require('date-fns');

or:

// ES Modules
import { lastDayOfMonth } from 'date-fns';

There is unfortunately currently no CDN version of date-fns available. Its removal and possible reinstatement is being discussed in this GitHub issue. But that's not to say you cannot use it in a browser, just that you'll need to introduce a bundling step into your workflow. Let's look at how to do that now.

How to Bundle date-fns for Use in a Browser

I'll assume you have Node and npm installed on your machine. If not, please consult our tutorial on installing Node.

Next, install Parcel. This is a bundler (similar to Webpack), which will allow you to bundle up your JavaScript and serve it in a browser.

npm install -g parcel-bundler

Next, make a new project with a package.json file.

mkdir datefns
cd datefns
npm init -y

Install the date-fns library, as above:

npm install date-fns

Note: this will create a date-fns folder inside a node_modules folder in your project directory. If you look inside the date-fns folder, you will see lots more folders and files. Don't worry though, we won't be shipping much of this to the client. We'll only be selecting the functions we need and then running everything through Parcel to make a minified and tree-shaken bundle.

Now create two files, index.html and index.js.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8" />
    <title>date-fns</title>
</head>
<body>
    <script src="index.js"></script>
</body>
</html>

import { lastDayOfMonth } from 'date-fns';

const today = new Date();
console.log(lastDayOfMonth(today));

Start Parcel's inbuilt development server:

parcel index.html

And navigate to the local URL it prints. You won't see anything displayed on the page, but if you open the browser's console,
you should see that it has logged the last day of the current month.

When it comes to deployment, you can run:

parcel build index.js --experimental-scope-hoisting

to have Parcel output a minified and tree-shaken bundle in the dist folder.

Date-fns Basic Usage

Now that we're up and running, let's look at what date-fns can do. One of the most common tasks when working with dates is the ability to format them nicely. We can do this with the date-fns format function. Alter the HTML from our example page above to look like this:

<body>
    <h1>The date today is <span></span></h1>
    <script src="index.js"></script>
</body>

In index.js we want to import the format function, which we can then pass today's date and a format string. We then want to output the result to the page.

import { format } from 'date-fns';

const today = new Date();
const formattedDate = format(today, 'dd.MM.yyyy');

document.querySelector('h1 > span').textContent = formattedDate;

Of course, we are not limited to a dd.MM.yyyy format. Let's try something different:

const formattedDate = format(today, 'PPPP');

This will format the output like so: Wednesday, September 16th, 2020. You can find a full list of formatting options in the docs.

Change Locale

If you have a website in multiple languages, then date-fns makes it simple to internationalize times and dates. Let's greet our German guests:

<h1>Heute ist <span></span></h1>

And in the JavaScript file, we can import the German locale and pass it to the format function:

import { format } from 'date-fns';
import { de } from 'date-fns/locale';

const today = new Date();
const formattedDate = format(today, 'PPPP', { locale: de });

document.querySelector('h1 > span').textContent = formattedDate;

This will output something along the lines of: Heute ist Mittwoch, 16. September 2020.

It might seem complicated to require and pass locales as options, but unlike Moment.js, which bloats your build with all the locales by default, date-fns forces developers to manually require locales as and when they are needed. You can view a list of available locales by looking in the node_modules/date-fns/locale folder in your project.

Immutability, Pureness and Simplicity

One of the selling points of date-fns is that its functions are pure and simple to explain. This leads to easy-to-understand code, which is easier to debug when things go wrong. Let me demonstrate this using Moment.js as a counterexample. As mentioned before, dates in Moment are mutable, which can lead to unexpected behavior.

const moment = require('moment');

const now = new Date();
const mNow = moment(now);

mNow.add('day', 3);
console.log(mNow.toDate());

mNow.add(3, 'day');
console.log(mNow.toDate());

// 2020-09-19T10:08:36.999Z
// 2020-09-22T10:08:36.999Z

There are a couple of things to take note of here. Moment's add function is not fussy about the order in which it accepts its arguments (although the first form will now throw a deprecation warning). But more confusing is that if you call add multiple times in a row, you won't get the same result, because Moment objects are mutable:

mNow.add(3, 'day'); // add 3 days
mNow.add(3, 'day'); // adds 3 **more** days

Now compare that to date-fns, which keeps its arguments in one order and always returns the same result, returning a new Date object for each call.
import { addDays } from 'date-fns';

const today = new Date();
const threeDaysTime = addDays(today, 3);
const sixDaysTime = addDays(threeDaysTime, 3);

console.log(today);         // Wed Sep 16 2020 12:11:55 GMT+0200
console.log(threeDaysTime); // Sat Sep 19 2020 12:11:55 GMT+0200
console.log(sixDaysTime);   // Tue Sep 22 2020 12:11:55 GMT+0200
// note: today itself is unchanged by either call

Also notice how the method name is more expressive (addDays instead of just add), keeping things consistent and having one method do one thing and one thing only.

Comparing Dates

If you look at the list of posts on SitePoint's JavaScript channel, you can see that some are listed as being published on a certain date, whereas others are listed as being published X days ago. It might take a while if you tried to implement this in vanilla JavaScript, but with date-fns this is a breeze – just use the formatDistance method. Let's compare two different dates.

import { formatDistance } from 'date-fns';

const startDate = new Date(2020, 8, 16); // (Sep 16 2020)
const endDate = new Date(2020, 11, 25);  // (Dec 25 2020)

const distanceInWords = formatDistance(startDate, endDate);

console.log(`It is ${distanceInWords} until Christmas`);
// It is 3 months until Christmas

Notice how, when working with JavaScript, months are zero-based (e.g. month 11 = December), but days count up from one. This trips me up time and again.

Working With Collections of Dates

Date-fns has some very handy helper methods which you can use to manipulate collections of dates in all kinds of ways.

Ordering a Collection of Dates

The following example uses compareAsc to sort dates into ascending order. To do this, it returns 1 if the first date is after the second, -1 if the first date is before the second, or 0 if the dates are equal.

import { compareAsc } from 'date-fns';

const date1 = new Date('2005-01-01');
const date2 = new Date('2010-01-01');
const date3 = new Date('2015-01-01');

const arr = [date3, date1, date2];
const sortedDates = arr.sort(compareAsc);
// [ 2005-01-01, 2010-01-01, 2015-01-01 ]

As you can see, the dates are now in ascending order. The counterpart method to compareAsc is compareDesc.

import { compareDesc } from 'date-fns';
...
const sortedDates = arr.sort(compareDesc);
// [ 2015-01-01, 2010-01-01, 2005-01-01 ]

Generating the Days Between Two Dates

To generate the days between two dates, you can use the addDays method we met previously, as well as the eachDayOfInterval helper, which returns an array of dates within the specified range.

import { addDays, eachDayOfInterval } from 'date-fns';

const today = new Date();
const aWeekFromNow = addDays(today, 7);

const thisWeek = eachDayOfInterval(
  { start: today, end: aWeekFromNow },
);

console.log(thisWeek);
/*
[
  Wed Sep 16 2020 00:00:00 GMT+0200 (Central European Summer Time),
  Thu Sep 17 2020 00:00:00 GMT+0200 (Central European Summer Time),
  Fri Sep 18 2020 00:00:00 GMT+0200 (Central European Summer Time),
  Sat Sep 19 2020 00:00:00 GMT+0200 (Central European Summer Time),
  Sun Sep 20 2020 00:00:00 GMT+0200 (Central European Summer Time),
  Mon Sep 21 2020 00:00:00 GMT+0200 (Central European Summer Time),
  Tue Sep 22 2020 00:00:00 GMT+0200 (Central European Summer Time),
  Wed Sep 23 2020 00:00:00 GMT+0200 (Central European Summer Time)
]
*/

Finding the Closest Date

Finding the closest date to a certain date in an array of dates can be done using the closestTo method. This code snippet follows on from the previous example:

import { addDays, eachDayOfInterval, closestTo } from 'date-fns';
...
const christmas = new Date(2020, 11, 25); const closestToChristmasDate = closestTo(christmas, thisWeek); console.log(closestToChristmasDate); // Wed Sep 23 2020 00:00:00 GMT+0200 (Central European Summer Time) There’s also the closestIndexTo method if you want to get the index of the array instead. Validating a Date The final helper I want to look at is the isValid method which, as the name suggests, checks if a given date is valid. However, because of the way JavaScript deals with dates, there are a couple of gotchas to be aware of: import { isValid } from 'date-fns'; const invalidDate = new Date('2020, 02, 30'); console.log(isValid(invalidDate)); // true, lol, wut? You would be forgiven for thinking that the above snippet should output false, as the 30th February, 2020 is obviously an invalid date. To understand what is happening, enter new Date('2020, 02, 30') in your browser’s console. You will see Sun Mar 01 2020 come back to you — JavaScript has taken the extra day from the end of February, and turned it into the 1st March (which is of course a valid date). To work around this, we can parse the date before checking its validity: import { isValid, parse } from 'date-fns'; const validDate = parse('29.02.2020', 'dd.MM.yyyy', new Date()); const invalidDate = parse('30.02.2020', 'dd.MM.yyyy', new Date()); console.log(validDate); // Sat Feb 29 2020 00:00:00 GMT+0100 (Central European Standard Time) console.log(invalidDate); // Invalid Date console.log(isValid(validDate)); // true console.log(isValid(invalidDate)); // false This can easily be extracted out into a little helper method, useful, for example, for validating user input in forms. Time Zones One disadvantage of date-fns is that it doesn’t currently have any time zone helper functions like Moment.js does, rather it returns the local time zone that the code is running on. This Stack Overflow answer gives some background on how native Date objects don’t actually store “real time zone” data. In that thread you’ll notice that they mention a method of setting time zones natively in JavaScript. This isn’t a comprehensive solution, but it works for many scenarios that require only output conversion (from UTC or local time to a specific time zone). new Date().toLocaleString("en-US", {timeZone: "America/New_York"}); Time zones are actually a complicated problem to solve which is why MomentJS has a separate library for it. There are plans afoot to add time zone support to date-fns, but at the time of writing, this is still a work in progress. There is however a package available on npm (based on an unmerged pull request to date-fns) which adds time zone support for date-fns v2.0.0 using the Intl API. With 140k weekly downloads it seems popular, but at the time of writing, it hasn’t been updated for several months. That said, here’s how you might use it: npm i date-fns-tz import { format, utcToZonedTime } from 'date-fns-tz'; const today = new Date(); // Wed Sep 16 2020 13:25:16 const timeZone = 'Australia/Brisbane'; // Let's see what time it is Down Under const timeInBrisbane = utcToZonedTime(today, timeZone); console.log(` Time in Munich: ${format(today, 'yyyy-MM-dd HH:mm:ss')} Time in Brisbane: ${format(timeInBrisbane, 'yyyy-MM-dd HH:mm:ss')} `); // Time in Munich: 2020-09-16 13:26:48 // Time in Brisbane: 2020-09-16 21:26:48 Conclusion Date-fns is a great little library that puts a whole bunch of helper methods for working with dates and times in JavaScript at your finger tips. 
It is under active development, and now that Moment.js has been put into maintenance mode, it makes a great replacement. I hope this article has given you enough understanding and inspiration to go check it out and start using it in your own projects.
https://www.sitepoint.com/date-fns-javascript-date-library/
CC-MAIN-2022-21
refinedweb
2,408
62.17
TerraPOV is a long-term project I started a while ago. I wanted it to be as declarative as I could make it, allowing the artist to minimize programming, and then have those declarations be processed by a specialized engine. All in POV-SDL as a starting point.

The process of making a landscape scene would be the following:
-) evaluate a (large ...) set of defined parameters for the landscape framework of my scene, with defaults and predefined stuff, and with preview utilities
-) perform little and 'quite defined' programming for what cannot be expressed with single values
-) put my objects in the scene with placement/populating macros and a few rules (user-defined code)
-) render the scene using the TerraPOV engine that sets up the scene accordingly

The first question I had to answer was: what is a landscape? Observing nature, seeing 3D masters' artwork, and using Terragen for a few tries helped me define the main areas of what a landscape is made of:
-) Atmosphere, sky, cloudscape and so forth
-) Terrain shaping and texturing, water
-) Vegetation (very, very vast area ...)
-) user-defined objects

I have explored those areas more or less: more for sky-related features, less for terrain, and even less for vegetation.

In this series of articles, I will provide some code fragments that implement the main principles of TerraPOV, and let you use them right now. I will use the following basic coding rules:
-) All TerraPOV-related identifiers start with 'TP_' or '_tp_', an easy answer to some 'namespace' considerations
-) Constants are uppercase: TP_SOME_CONSTANT
-) Macros and parameters are lowercase: _tp_some_macro (_parameter_1, parameter_2). Macros shall not be considered as some kind of text substitution, but as 'real' functions or procedures, with typed parameters and returned values, and are written accordingly

Before I explain how all this assembles into something usable, I'll describe the sets of parameters I defined, first for the skyscape, then for the terrain. When I have something consistent and usable for vegetation, I'll let you know. For vegetation, I have in mind POV-Tree and PlantStudio to help define a ready-to-use collection of vegetation objects, with placement and populating macros.

Well, now, let's start in the next article with the first area: TerraPOV's sky system: Atmosphere.

I hope I won't be too boring.... If so, let me know, I'll return to RSOCP :p

Bruno
--
Using Opera's revolutionary e-mail client
http://news.povray.org/povray.binaries.tutorials/thread/%3Cop.utedr1iom1sclq%40pignouf%3E/
CC-MAIN-2018-51
refinedweb
409
53.44
hi
I have connected MySQL with JSP in Linux and I have used JDBC connectivity, but when I run the program it's not working. The program is displaying...

hi
I've a project on railway reservation... I need to connect NetBeans and MySQL with the help of the JDBC driver... then I need to design... whatever is entered in my frame should reach MySQL and should get saved in a database which we've...

hi!
public NewJFrame() { initComponents(); try { Class.forName("java.sql.Driver"); con=DriverManager.getConnection("jdbc:mysql://localhost:3306/test","root","admin"); } catch(Exception e...

hi
I want to develop an online bit-by-bit examination process... = DriverManager.getConnection("jdbc:mysql://localhost:3306/register","root", "root"); Statement st...").newInstance(); Connection connection = DriverManager.getConnection("jdbc:mysql...

HI!!!!!!!!!!!!!!!!!!!!!
import java.awt.*; import java.sql....("com.mysql.jdbc.Driver"); Connection con =DriverManager.getConnection("jdbc:mysql://localhost...:mysql://localhost:3306/test","root","admin"); Statement stmt=con.createStatement...

hi.......
import java.awt.*; import java.sql.*; import javax.swing.... =DriverManager.getConnection("jdbc:mysql://localhost:3306/test","root","admin...("com.mysql.jdbc.Driver"); Connection con =DriverManager.getConnection("jdbc:mysql...

hi - SQL
Hi sir, I want to copy the MySQL prompt queries to one text file. How do I achieve this? Please tell me. Thank you.

hi - SQL
Hi sir, how do I create a database in SQL? Thanks in advance. Answer: Hi Friend, please visit the following links:...

Hi - Struts
Hi friends, is MySQL a must for Struts, or not necessary... very urgent.... Answer: Hi friend, I am sending you a link...:// Thanks. Hi Soniya, we can use Oracle too in Struts.

MySql
What is the default password of MySQL, and how do I configure MySQL? Answer: Hi, if you are installing MySQL on Windows then you will have to provide the password for the user root at installation time. You may try...

mysql
How can I install MySQL drivers using a jar file? Answer: Hi, you need to download the mysql-connector jar file for connecting a Java program to a MySQL database....... Hi friend, MySQL is an open source database...

hi - Ajax
("jdbc:mysql://localhost:3306/ashok", "root", "mysql"); Statement stmt...

Hi, - Java Beginners
Hi Friends, I want to write code in Java for change...,con_pwd Thanks. Answer: Hi Friend, create a database table login[id...; String id="2"; String connectionURL = "jdbc:mysql://localhost:3306/test...

Hi... - Java Beginners
I want to write code for changing a password from the database please... is occur. Answer: Hi Friend, try the following code: 1)change.jsp..."); if(pass.equals(conpass)){ String connectionURL = "jdbc:mysql://localhost...

Hi... - Java Beginners
Hi friends, I want to make a file upload module please...; Answer: Hi Friend, try the following code: 1)page.jsp: Display file... connectionURL = "jdbc:mysql://localhost:3306/test"; ResultSet rs = null...

Hi,
Hi, what is the purpose of a hash table?

hi - Java Beginners
Hi sir, when I enter one value in a JTextField the related... phone no sir. Answer: Hi Friend, try the following code: import... = DriverManager.getConnection("jdbc:mysql://localhost:3306/test","root","root"); Statement st...

hi - Java Beginners
Hi sir, thanks for your cooperation. I am saving the 1 image... that image is rendered to my frame, that's my question. Answer: Hi Friend... = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "root");...

- JSP-Servlet
Form Code in JSP and Servlet: Hi, I am looking for a form code in JSP and Servlet. Answer: HI, here is the form code: ----<html><title>...; String url = "jdbc:mysql://localhost:3306/"; String db = "...

Mysql connector
Where can I get mysql-connector? Is it a jar or an exe file? Answer: Hi Friend, visit the following link: it will provide you...

mysql - SQL
Hi, I want to export my data from MySQL to Excel using a PHP script. Please help me. Answer: Hi Friend, please visit the following link: Thanks.

Image in mysql
Hi. How to insert and retrieve images in a MySQL db using JSP or a Java Servlet? Thanks in advance.

MYSQL Database
Can anyone brief me about how to use a MySQL database to store data, create a new database, and create tables? Thanks. Answer: Hi, the MySQL database server is the most popular database server, and if you want to know...

MySQL - SQL
How to set a time to repeat a trigger in MySQL? Answer: Hi Friend, please visit the following links:...

mysql - SQL
Hi, I am using MySQL. I want to create a simple table... the table with a foreign key. Answer: Hi friend, code to solve...=INNODB; For more information on MySQL visit:...

mysql - MobileApplications
Hi friend, the WHERE clause is used with the SELECT keyword. 'Where' is a clause... * FROM Persons WHERE unit='india'. For more information on JDBC-MySQL visit: http...

persistence.xml for MySQL
Hi, can anyone share an example of persistence.xml for a MySQL database? Thanks. Answer: Hi, here is the code for MySQL: <persistence xmlns="" xmlns:xsi...

mysql-java
I try to execute this code stmt1.executeUpdate("insert... the manual that corresponds to your MySQL server version for the right syntax to use...; Answer: Hi friend, you are missing the single quote in your SQL statement before...

mysql - WebServices
Hello, MySQL is storing values in a column with zero... when I want to store this value in MySQL it stores 0 first and then 60... Answer: Hi Friend, now it is clear from your explanation, since you have time...

jsp_mysql
Hi, please help me by providing the code for displaying SELECTED columns from a MySQL table which are given dynamically through checkboxes...; <html> <head><title>Read from mySQL Database</title>...

mysql - SQL
Hi sir, I want to display the first two persons with the highest salary in MySQL. Answer: Hi, I am sending a link where you can find the max function to find the maximum marks of a student.

mysql - WebServices
Regards Ahmed. Answer: Hi Friend, we are providing you the sample code... = DriverManager.getConnection("jdbc:mysql://localhost:3306/test","root","root"); try
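The snippets above keep reaching for the same JDBC boilerplate without ever showing it whole. For reference, here is a minimal self-contained sketch of that connection pattern; the database name, user and password are placeholders, and the MySQL Connector/J jar must be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MySqlExample {
    public static void main(String[] args) {
        // Placeholder connection settings; adjust for your own setup.
        String url = "jdbc:mysql://localhost:3306/test";
        try {
            // Old driver class name used throughout these answers;
            // newer Connector/J versions use com.mysql.cj.jdbc.Driver.
            Class.forName("com.mysql.jdbc.Driver");
            Connection con = DriverManager.getConnection(url, "root", "root");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("SELECT 1");
            while (rs.next()) {
                System.out.println(rs.getInt(1));  // prints 1 if connected
            }
            con.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}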
http://www.roseindia.net/tutorialhelp/comment/97245
CC-MAIN-2014-49
refinedweb
988
64.81
Living on the Peninsula, Winter 2012

WINTER 2012
NONPROFITS ENRICH LIVES ON THE PENINSULA
Supplement to the Sequim Gazette and Port Townsend & Jefferson County Leader

Pg. 6 Lending a hand not a chore for volunteers
Pg. 10 Native horse therapy
Pg. 16 Humane Society: finding forever homes for pets in need
Pg. 18 Mosaic: Embracing the beauty of being different
Pg. 26 A community treasure: The Dungeness Valley Health & Wellness Clinic
Pg. 28 Handing down history in Quilcene

DEPARTMENTS
13 Food & Spirits - Sequim's Fresh Seafood
21 Heart & Soul - I can't thank you enough
32 The Living End - The value of nonprofits
34 Now & Then - Photographic journal

SPOTLIGHT
6 Lending a hand not a chore for volunteers
10 Native horse therapy
14 Landmark reaches a milestone: Historical Dungeness Schoolhouse marks 120th anniversary
16 Humane Society: Finding forever homes for pets in need
18 Mosaic Vision: Embracing the beauty of being different
26 A community treasure: The Dungeness Valley Health & Wellness Clinic
28 Handing down history in Quilcene

Cover photo: Humane Society - Finding forever homes for pets in need

Contact us: Patricia Morrison Coate - pcoate@sequimgazette.com, P.O. Box 1750, Sequim, WA 98382. For information on advertising, please call Debi Lahmeyer, 360-683-3311 x 3058. 226 Adams St., Port Townsend, WA 98368, 360-385-2900, Fred Obee: fobee@ptleader.com

Vol. 8, Number 4. Living on the Peninsula is a quarterly publication. © 2012 Sequim Gazette © 2012 Port Townsend & Jefferson County Leader

Railroad Bridge in Winter by Jay Cline. Cline is a graphic designer for the Sequim Gazette and plays with cameras in his spare time.

Lending a hand not a chore for volunteers
Story and photos by Mary Powell

Teri Wensits has been the program coordinator for the Clallam County Volunteer Chore Services for the past 2½ years, but has worked with the program since 2003.

Imagine, if you will, being visually impaired to the point of not being able to drive. Or, perhaps a broken hip or other medical condition precludes a trip to the store for groceries, prescriptions or other necessities. Or, maybe it's simply time to hang up the car keys, which means finding some sort of transportation for medical appointments and other essential errands.
Now suppose there are no family or friends readily available to provide help. A daunting circumstance, to be sure.

Take Kate Sheffield, for example. She has had health concerns for several years, but after breaking a hip in 2009, she found she needed to use a wheelchair for mobility. She also knew she needed someone to get groceries, take her to appointments and tidy up the apartment a bit. Sheffield contacted Volunteer Chore Services, a program of Catholic Community Services, and has used their services since. "Without chore services I wouldn't be here," she maintains. "I'm not able to do what I did before and cannot live independently." However, with a chore service volunteer and caregivers — who are not with chore services — Sheffield is able to stay in her apartment rather than going to an assisted living facility, something she says wouldn't work for her.

Teri Wensits describes herself as a workaholic. Sipping a latte at a local coffee shop, she talks about working two jobs, yet says she has been very blessed. "Working hard has enabled me to give my kids opportunities," says the 57-year-old mom of two daughters. It's the job with Volunteer Chore Services that keeps Wensits busy for most of the working week, matching clients who are in need of assistance with a variety of life issues and volunteers who are ready and able to provide that assistance. Wensits has worked for the Clallam County Volunteer Chore Services since 2003. Two years ago she became the program coordinator for VCS, located in Port Angeles, and calls her work a wonderful experience. "There is so much volunteering here, that is such a rewarding feeling," says Wensits. She should know. With nearly 100 clients, whose needs range anywhere from a trip to the grocery store to a longer trip to the doctor in Seattle, and 35 volunteers, Wensits nearly always finds a volunteer to serve one of those clients. That's not to say she couldn't use a few more helping hands, saying VCS needs a big roster of volunteers. "One of the most difficult situations is to find volunteers to help within the home," she explains. "Most are ready to drive someone, but might be uncomfortable in a home situation." She makes it clear she isn't looking for housekeepers per se, but someone to lend a hand with light housekeeping.

There are a few restrictions for those seeking services. Clients are 60 years of age and older, living on a fixed income with some health and/or mobility limitations, and adults, ages 18-59, who have temporary or permanent functional limitations. VCS volunteers are screened and have a Washington State Patrol background check.

Maxine Richards, right, is the primary caregiver to her daughter, Yolanda Brandon, left. Richards has volunteers from VCS periodically take her to run errands.

"We like to get a good match with a volunteer and a client," Wensits says. "We have to protect both." Wensits goes on to say many of the people served don't have a lot of contact with the outside world. Often, a volunteer serves as a conduit for that contact. "There is a lot of socialization involved," Wensits says of the client-volunteer relationship.

Volunteering a 'rewarding experience'

It's amazing what the cadre of volunteers actually does for the seniors in need here on the peninsula. Grocery shopping trips, yard care, home repairs, and help with reading and writing are but a few of the services volunteers provide.

June Smith, who is a young 81 years old, has lived in Sequim for 40 years and has spent a good deal of time volunteering for a number of organizations, including the American Legion and VFW, the Children's Hospital Guild and her church. Apparently she didn't have enough to do, because four years ago she added chore services to her resume. When she agreed to be interviewed for this story, she had just returned from taking a woman to the dentist – at 8 in the morning, no less. Then it was off to Sunny Farms for vitamins for the client.
June Smith, who is a young 81 years old, has lived in Sequim for 40 years and has spent a good deal of time volunteering for a number of organizations, including the American Legion and VFW, the Children’s Hospital Guild and her church. Apparently she didn’t have enough to do because four years ago she added chore services to her resume. When she agreed to be interviewed for this story, she had just returned from taking a woman to the dentist – at 8 in the morning, no less. Then it was off to Sunny Farms for vitamins for the client. LIVING ON THE PENINSULA | WINTER | DECEMBER 2012 Two-pronged program provides respite for those in need Taking those with limited mobility to the grocery story is a mainstay for Volunteer Chore Service volunteers. Photo courtesy of Catholic Community Services of Western Washington June Smith, a volunteer with chore services, drives those who need assistance to doctor or dental appointments and the grocery store, and has used what she calls her “trusty” car to take people as far as Olympia and Seattle. O nc e she drove a client to Seattle for a medical appointment. “The wind and rain was terrific,” Smith chuckles. “I thought I would never do it again, but I do.” Interestingly, Smith has been on the other side of the fence, using chore services when her late husband was ill. At the time she discovered she needed a wheelchair ramp to get her husband in and out of the house. Enter the Olympic Peninsula Boeing Bluebills, a group of retired Boeing workers who volunteer their services building projects, such as a wheelchair ramp. “It was a big help,” Smith says. And, she adds, the group recently improved the aging ramp a bit. As for helping others, Smith maintains she can still drive, and why not? “It gives me something to do,” says Smith of the 25 or so hours a month she volunteers for VCS. Other than their generosity toward others, many volunteers do help out because they are looking for something meaningful to fill their time. Quite a few are retired. Bob Forton spent his career working for the telephone company in California. After retirement in 1983, he and his wife, Peggy, traveled the country in an RV, bought a house in Oregon, didn’t like that, then decided to check out Sequim. “Everybody goes to Sequim,” he laughs. “But we like it up here.” Two years ago, his neighbor, a VCS volunteer, “kept after me to volunteer.” He finally acquiesced and today spends a couple of hours a week mostly driving clients to appointments or running errands for them. He periodically takes groceries to a woman who is blind. She calls the store, orders what she needs, Forton goes to her house to collect the money and then picks up her order. “It works pretty well,” he says. Wensits calls volunteering for VCS a “generally rewarding experience.” “There are so many possibilities and volunteers can ‘work’ as many or few hours as they wish. Some only work in the summer, some work with groups, like Boeing Bluebills. It’s very flexible,” Wensits explains. Volunteers may help the same person each week or be listed for on-call assistance depending on their schedules and interests. Most requests are for help with shopping, light house- LIVING ON THE PENINSULA | WINTER | DECEMBER 2012 keeping, laundry and yard work. Volunteers have an orientation before they begin serving clients and receive mileage reimbursement. Background Volunteer Chore Services is a program of Catholic Community Services, an outreach of the Catholic Church in Western Washington. 
Employees and volunteers of CCS come from many faith traditions to serve and support the poor and vulnerable people through the provision of quality, integrated services and housing. Catholic Community Services is the largest nonprofit organization that provides services to the poor and underserved in Washington, according to Penny Grellier, program manager, volunteer services, who works at the Tacoma CCS offices. Volunteer Chore Services began in 1981 in response to cuts in services for elders by the state Legislature. The program works with thousands of senior citizens and adults with disabilities statewide. The commitment is to help this population remain independent in their own homes through a network of caring community members. The services are provided at no charge and serve as a safety net for those individuals who cannot afford to pay for assistance and do not qualify for other assistance. In the southwest region for CCS and VCS, 13 counties are served, including Jefferson and Clallam. 7 Russell May, a volunteer with Volunteer Chore Services, stands by client and good friend, Kate Sheffield. Photo by Mary Powell “We serve very rural to the urban, with Tacoma being the largest,” Grellier explains. Funding comes largely from DSHS, the United Ways of the counties served and individual and corporate donations. In Clallam County, some funding comes from the Olympic Area Agency on Aging and in Jefferson County, from the United Good Neighbors of Jefferson County. “We did get some funding from the City of Sequim,” Wensits adds. “We’ll take any variety of funding we can get.” According to Grellier, the statewide budget for VCS is about $1.8 million; the projected budget for Clallam County is right around $90,000. “Because I can sit up and talk, I don’t look like I need help, but I need a lot of help.” — Kate Sheffield, Volunteer Chore Services client Clients To inquire about Volunteer Chore Services, either as a prospective client or volunteer, contact Teri Wensits, program coordinator, at 360-417-5640 or 888-415-4267. 8 Sheffield likes to say she wears a wheelchair. “Everybody else wears shoes to get around,” she says with a wry smile. “I wear a wheelchair.” Although Sheffield has had a number of medical issues for several years, it wasn’t until she broke her hip in 2009 that she began to use a wheelchair for mobility. It also was when she began depending on services from Volunteer Chore Services. Thirty-two years ago, Sheffield traveled from her home in Sunnyvale, Calif., to Port Angeles to visit a friend. The friend had a “guy” she wanted her to meet. The guy, Russell May, did indeed become a good friend and one reason for Sheffield to relocate to Sequim. She quickly became entrenched in Sequim life and its issues, enough so that she ran for and was elected in 1988 to the Sequim City Council. She served until 1991. She also worked for many years advocating for affordable housing in Sequim, going as far as to testify in Olympia before the Ways and Means Committee. When Sheffield’s health began to fail, May was there to help with daily chores or to run errands. He eventually decided to volunteer for chore services and she has been one of his primary clients. “Kate is one of the brightest women I have met,” May says of his friend. There is an easy rapport between the pair. They share many of the same interests, including both being science fiction fans. “We’re big in 'Star Trek',” she laughs. 
May says he enjoys volunteering with chore services and does yard work and gardening for his landlady, as well. During the Thanksgiving and Christmas seasons, he helps deliver food baskets to those in need. This Thanksgiving, May drove Sheffield to California for a long-awaited family reunion. “I wouldn’t have it another way,” he says, with an everpresent twinkle in his eye. It’s difficult to believe Maxine Richards is 98 years old. She is a beautiful, intelligent woman who not only cares for herself, but also is the primary caregiver for her disabled 77-year-old daughter, Yolanda. Yes, Richards has independent caregivers who come to her tastefully appointed apartment in Sequim to help take care of Yolanda, but Richards is with her daughter nearly every minute of every day. She does not complain. The unadulterated love she has for her daughter means she wouldn’t turn away from the responsibility of caring for Yolanda. Yolanda’s life changed significantly after she underwent surgery at the age of 49. A once promising ballerina and model and a mother of two, she no longer could care for herself. Richards, who has outlived two husbands, has been by her daughter’s side since. However, there are times when Richards needs to shop or run errands and that is when she depends upon chore services volunteers. “We have to arrange it for when a caregiver is here,” Richards says of her outings. “Yolanda can’t be left alone.” It can be very frustrating if the caregiver doesn’t show up after she has arranged for the VCS volunteer to take her shopping. Then, the trip must be canceled. Richards has long studied genealogy and has compiled a book, “From Noah’s Ark to Me.” “I can trace my roots back to Noah,” she claims, as she turns pages and pages of photos from homesteading days in Scottsbluff, Neb., to pictures of Yolanda dancing and modeling. It’s tough at times for Richards, but while she tells her story, she occasionally turns and puts her hand on Yolanda’s shoulder and the two share a smile. Like Russell May, she says she wouldn’t have it any other way. LIVING ON THE PENINSULA | WINTER | DECEMBER 2012 Welcome to the Finest Thai Cuisine in Sequim! OLYMPIC HEARING CENTER “Dine with us here at Galare Thai and travel to my hometown of Chiang Mai without ever having to leave the country.” Suree Chommuang, Proprietor & Chef An artful dining experience Improve Your Hearing. Improve Your Life. 120 West Bell St. • Sequim, WA 360-683-8069 Open Monday-Saturday • Lunch 11 am - 3 pm • Dinner 4 pm - 9 pm Custom Cabinets & WoodWorking Jeff Howat - 681-3451 - Over 30 years experience jeff@hfww.biz • We’ll help you take the first step in staying closely connected to your family and friends by providing a comfortable, personalized experience. You’ll be resting in the comfort of your own home the SAME DAY . • Hand Surgery • Gastroenterology Including EGD and Colonoscopy, with the option of anesthesia • Foot Surgery • Pain Management With Fluoroscopy-guided epidural steroid injections • SSDS is an innovative health care facility designed for minor outpatient procedures. • Cost-effective alternative • General Anesthesia • Bronchoscopy • Urology • Dental • Knee Arthroscopy We provide an experienced and professional atmosphere for a positive surgical experience. Sequim Same Day Surgery 777 North Fifth Ave, Suite 113 Sequim, WA 98382 Serv ing What is Sequim Same Day Surgery Center? Hearing loss affects all aspects of your life, but only as long as you allow it. 
th e eP nin c sula Sin e 1983 (360) 582-2632 Living on the Peninsula | WINTER | DECEMBER 2012 We offer only the world’s best equipment and the education you need to make today’s miniature devices an unobtrusive part of your active, everyday life. NO HASSLE, NO PRESSURE, GUARANTEED! Curtis Miller, M.A. CCC-A, F-AAA Clinical Audiologist Contact us today! Proud distributors of olympic 538 N. Fifth Ave. Sequim, WA 98382 hearing center 360-681-7500 9 Poppy Cunningham and Colleen Brastad, volunteer physical therapist Submitted photo Native Horse Therapy Felicia Gowdy shows her brother, Doug, how to curry a horse. Story and photos by Elizabeth Kelly W hat began as a for-profit business eventually changed to a nonprofit organization when director, founder, caretaker and lead instructor Yvette TwoRabbits Ludwar learned of the need to help students and adults working through physical and/or mental difficulties and of how therapeutic working with horses can be. “In 2001, we were operating a basic horsemanship center when we kept getting requests for therapeutic horse training,” said TwoRabbits. In order to change the business and work under the guidelines of PATH International, the Professional Association of Therapeutic Horsemanship (formerly NARTHA, North American Riding for the Handicapped Association), she had many roads to travel and a lot of knowledge to acquire. To that end, TwoRabbits went to Temple, N.H., where she learned the discipline “from the ground up,” at a PATH-recognized instructor training school. “We were taught first by horse people and then instructed by university level therapists about mental disorders using the DSM (Diagnostic and Statistical Manual),” she said. Allowed to ride twice a week, they learned how clients could relate to horses in ways that open them to the healing power of horses. She also is a certified instructor with Horses for Heroes, a national program for veterans dealing with PTSD (Post-traumatic Stress Disorder). Over the next several years, she completed a 40-hour course with Peninsula Dispute Resolution and other training courses. She holds an equine science degree from Uptalla University. Her Native most recent training was an Equine Horsemanship Facilitated Therapy (EFT) course, Riding Center which she completed in 2012. On Dec. 1, 2007, her center 396 Taylor Cutoff Road officially became a 501(c)(3) desigSequim, WA 98382 spottedponynhrc@hotmail.com nated nonprofit business. “With the 360-582-0907 training I’ve received, I am able to offer better instruction in horse therapy and to structure a lesson so it really helps,” she said. Her clientele consists of special needs students, veterans, and 10 an Over-Forty group, enabling people needing physical therapy to interact with the horses. Also patients with Parkinson’s disease benefit, she said. The 50-150 clients/ students a year come to her mostly from word-of-mouth advertising, she said. The riding center has four horses, four ponies, four mini-horses and a goat. Plus it has other animals like bunnies, chickens and guinea pigs. TwoRabbits came up with her color-coding concept out of necessity after she decided to start having home-schooled children from ages 5-12 at the center. “It helps empower a child when they can’t read, but can remember colors,” she explained. 
"When a child can get their own horse's gear, it helps them feel in control." A horse or pony is given a specific color — for example, Comet's color is green — then all the equipment used with that horse, such as the halter and lead rope, the grooming box, bridle and saddle pad, is colored green. A student is assigned to a particular horse and then can gather all the necessary items for his or her ride.

Australian saddles are used at the center and in most therapeutic riding because they are more stable and provide a secure seat for the rider. "The saddle automatically tilts up and puts the rider in a deep-seated position which is more comfortable for the horse," she said. A hybrid blend of English and Western saddles, the Australian saddle is considered more suitable for activities requiring long hours on a horse because of its deeper seat, higher cantle (the raised, curved part at the back of the saddle) and front knee pads.

Featured at the riding center are a covered walking area for the animals and clients they call "The Dome" and a maze or obstacle course. However, TwoRabbits sees it as much more than an obstacle course. "When a student is walking around the course with their horse, they have to think about the animal. They have to think about moving their own feet and looking ahead at the obstacles that are coming. This gives them brain exercise and helps them to feel good about themselves when they finish." They walk over a bridge, over rubber bicycle tires, over small poles and over a 12-inch walking bar. The exercise enhances the client's hand-eye coordination and helps with residual memory, balance, self-esteem and empowerment. "They are in charge of the horse when they are in the maze," she said.

Above: Color-coded equipment. Below: Australian saddles

Susan Hillgren, director of TAFY (The Answer For Youth), a nonprofit organization that seeks to feed "not only the stomachs of at-risk young people, but their emotional and spiritual needs as well," has included a request for funding for the Native Horsemanship Riding Center in a recent grant she wrote. "I think Yvette is awesome," Hillgren said, "and I have a lot of confidence that she knows what she's doing." Hillgren added that the 40-72 clients they serve each day at TAFY are ages 13-35 and, "They all have grounding issues. They have never been grounded and many were abandoned by their mothers. They need a way to connect to themselves and others." Working with the horses gives them that connection, she explained. "TAFY goes hand-in-hand with Yvette's program," she said. "It's about healing broken people."

TwoRabbits also applies for grants and currently is looking for funding, either by grant or private donations, for the EFT program, which she describes as "working with an equine professional and a certified therapist to structure and maintain a plan with goals to move clients forward where conventional therapy has failed." Tuition fees, donations and grants fund the center. Some of their sponsors have been Soroptimists, Veterans of Foreign War Women's Auxiliary Post 1024, Elks and Lions clubs. "Also, our own volunteers (a total of 22 in all with a core group of nine) give when they can," TwoRabbits said.

Students and helpers in the dome with animals. Left to right standing: Michael Colt, student; Doug Gowdy, student; Felicia Gowdy, volunteer; Mikayla Adams, assistant instructor.
Left to right kneeling: Jeremy Reyes, student; Yvette TwoRabbits Ludwar, director.

"We will make it because we just can't close. This program works," she continued, "and there are too many people who depend on us to quit. We want to promote education and how to be safe around the animals." Students learn how to safely get on and off a horse, how to cause a horse to stop and go, turn right and left. TwoRabbits said she does not want to be called a horse whisperer, but, "as an Indian person, I speak gently and become a partner with the horse." She is proud to be of the Haida Gwaii Nation from the islands off the coast of British Columbia. She dreams of having a large covered arena, either on the property or on an acreage somewhere nearby, so the students can ride year-round. Her positive outlook, generous nature and upbeat smile help her maintain the center and give many an opportunity to learn about horses in a safe, gentle manner. "I can walk into any field here and the horses will come to me," she said. Smiling, she added, "It all boils down to trust."

FOOD & Spirits

Recipe courtesy of Chef Patrick Townsend at Sequim's Fresh Seafood, 540-A W. Washington St., 360-681-0664 or foodandbbq.com. Townsend earned an Associate in Applied Sciences degree in Culinary Arts at the Culinary Institute of America in San Francisco, Calif. He is a member of the United States Personal Chef Association and successfully completed the Personal Chef program at the Culinary Business Academy. Sequim's Fresh Seafood specializes in Northwest cuisine with an emphasis on local fresh-farmed ingredients. Hours are 11 a.m.-4 p.m. for lunch and 4-8 p.m. for dinner Tuesday-Saturday.

Fresh Steelhead with chanterelle mushrooms, Brussels sprout leaves and fingerling potatoes

Makes 4 servings

1 pound of fingerling potatoes
1 carrot, peeled and cut into ¼-inch dice
6 ounces pearl onions cut into ¼ slices (5 or 6)
8 ounces fresh Brussels sprouts, cored and separated into leaves
1 cup chicken stock
½ cup heavy cream
¼ cup unsalted butter
Coarse salt and freshly ground white pepper to taste
2 teaspoons fresh lime juice

In a large saucepan of lightly salted boiling water, cook the potatoes for 10-15 minutes over medium-high heat until tender when pierced with the tip of a sharp knife. Drain and set aside to cool. At the same time, bring another pot of salted water to boil over medium-high heat. Cook the carrot for about 2 minutes until just barely tender.
Remove with a slotted spoon or strainer and set aside. Cook the Brussels sprout leaves in the boiling water for 2-3 minutes until barely tender. Drain and place them immediately into a bowl of ice water. Drain again. Squeeze out as much excess water as possible from the leaves and pat them dry with a kitchen towel. Cut the cooled potatoes into ½-inch thick rounds. Set aside.

In a large saucepan, combine the stock and cream and bring to a boil. Cook for 7-8 minutes, until reduced by one-third and thickened. Whisk in butter, a piece at a time, until the sauce is thickened and enriched. Add the potatoes, carrot, onions and Brussels sprout leaves. Reduce the heat to low and simmer gently for about 5 minutes. Season with salt and pepper. Remove from heat and stir in the lime juice. Cover and keep warm.

ASSEMBLY

1 tablespoon unsalted butter
6 ounces of fresh chanterelle mushrooms (can substitute cremini mushrooms)
4 7-ounce steelhead or salmon fillets
Coarse salt and fresh-ground white pepper to taste
3 tablespoons canola oil
3 tablespoons finely minced fresh chives

In a large sauté pan, heat the butter over medium heat. Add the mushrooms and cook for 3-5 minutes until they begin to release their liquid and soften. Remove from the heat and set aside. Season the steelhead with salt and pepper on both sides. Heat the oil in a 12-inch sauté pan over medium-high heat. Cook the salmon for about 3 minutes until lightly browned. Turn over, reduce the heat, and cook for about 3 minutes longer until the fish is opaque in the center. Stir chives into the vegetables and sauce. Using a slotted spoon, lift about two-thirds of the vegetables from the sauce and arrange them on the individual plates. Top with the steelhead fillets. Spoon the remaining vegetables over the steelhead and then top with the mushrooms. Spoon sauce remaining in the pan over the vegetables and steelhead.

Local landmark reaches a milestone

Historical Dungeness Schoolhouse marks 120th anniversary

By Reneé Mizar, Communications Coordinator, Museum & Arts Center in the Sequim-Dungeness Valley

The historical Dungeness Schoolhouse, as seen from Towne Road in Sequim. Photo by Reneé Mizar

More than 20 years before Sequim formally became a city and just three years after Washington attained statehood, residents of the small but bustling commercial seaport of Dungeness embarked on a $3,000 project to improve their children's futures. Not long after residents established Dungeness School District No. 29 in 1892, a time when one-room schoolhouses dotted the rural landscape, voters passed a district bond to build a new school along the banks of the Dungeness River on land donated by local entrepreneur Charles Franklin Seal and using lumber harvested on-site. The following year, amid the agriculturally robust farmlands of the Dungeness Valley emerged the two-story, two-room Dungeness Schoolhouse, which opened its doors to students for the first time on Feb. 27, 1893. What began as a four-month school term led by one teacher for 70-75 students ages 5-20 expanded over the years to a nine-month school year headed by two teachers and a 1921 building remodel that added indoor plumbing, electric lighting, central heat and a new west wing.
To commemorate the 120th anniversary of the Dungeness Schoolhouse having first opened its doors, the Museum & Arts Center in the Sequim-Dungeness Valley (MAC) is throwing a community-wide party on Feb. 27, 2013, in the local landmark at 2781 Towne Road. "One of the best acknowledgements of the life of a culture, a people or their structures is celebrating the passage of time," MAC Executive Director DJ Bassett said. "It's an opportunity for us to get together and build that sense of commonality and community."

In preparation for the celebration, the MAC is actively gathering stories, information, memorabilia and photographs pertaining to the school, its construction and storied past, and welcomes hearing from those once affiliated with the schoolhouse. This includes students, teachers, school board members and family members thereof, as well as former Dungeness Schoolhouse Volunteer Committee members, groups once affiliated with the schoolhouse such as the Dungeness Community Club and Women of the Dungeness, and those who have given to the schoolhouse in various ways over the years. Those who can contribute are encouraged to contact the MAC at 360-681-2257 or publicity@macsequim.org.

Becoming a beloved building

The Dungeness School operated for more than 60 years until dwindling enrollment numbers hastened its closure in 1955 upon consolidation with the Sequim School District. The years that followed saw the elements, occasional vandalism and overall non-use take a noticeable toll on the vacant building as it fell into disrepair until being purchased by the Dungeness Community Club in 1967.

Intent on transforming the dilapidated building into a community center, the club set about renovating and restoring the historical structure as time and fundraising efforts allowed. During this period, a Dungeness Community Club-related group called the Women of the Dungeness also became an instrumental force in helping transform the former school into a social center by offering educational and cultural opportunities at its bimonthly luncheon meetings that regularly featured topical guest lectures, themed activities and theatrical presentations. "People who lived in this area were in the group and so there was a real sense of ownership there," said Dungeness Schoolhouse Volunteer Committee member Pat Marcy, whose mother-in-law Jeri Marcy was a member of the Women of the Dungeness for several years. "The schoolhouse is a part of our heritage and the history of this little area that is pretty remarkable."

The Women of the Dungeness also founded an elaborate holiday fundraising event that has filled the Dungeness Schoolhouse with the sounds, sights and smells of Christmas each December for more than 40 years. Since 1969, the schoolhouse has been home to an annual tea and bake sale event to usher in the holiday season and raise funds for its continued preservation. Now undertaken by the Museum & Arts Center's Dungeness Schoolhouse Volunteer Committee, the festive event began in 1966 under its original moniker of Christmas House and was held in private homes for its first few years until being moved to the Dungeness Schoolhouse. The immense popularity of Christmas House, which reflected a different holiday-inspired theme each year, was such that in 1982, the group compiled the spiral-bound "Christmas House Delights from the Women of the Dungeness," a compilation cookbook of their favorite holiday recipes.
"Maintaining this tradition of the Christmas Tea is important," Marcy said. "It's a pleasant time to get together with friends, enjoy tea and just relax in a beautiful setting."

A covered bridge linking Anderson and Towne roads once spanned the Dungeness River in the shadow of a pre-1921 Dungeness Schoolhouse. Bert Kellogg Collection, Museum & Arts Center in the Sequim-Dungeness Valley

In the years that followed, the MAC and its volunteers, supporters and community partners have invested their time and talents toward the local landmark's continued upkeep, with ongoing maintenance and preservation efforts funded through donations, fundraisers, grants and facility rentals. Such efforts, which recently included electrical upgrades throughout the building that now allow for wireless Internet accessibility, were recognized in 2012 when the MAC received a State Historic Preservation Officer's Award for Outstanding Achievement in Historic Preservation for its stewardship of the Dungeness Schoolhouse. "We're continuing the spirit of community and honoring the pioneers and their descendants, area old-timers, the Dungeness Community Club, Women of the Dungeness and myriad of service organizations and volunteers who have helped keep this well-loved landmark alive," Bassett said. "If it wasn't for their vision, dedication, hard work and perseverance, this living testament to our past would have faded away and with it a little piece of us."

After nearly 30 years of stewardship, during which time the building was recognized as a Washington State Historical Site and placed on the National Register of Historic Places, ownership and operation of the Dungeness Schoolhouse was transferred from the Dungeness Community Club to the MAC in 1995.

Preserving into perpetuity

Noting that the future of the Dungeness Schoolhouse rests in the present, Bassett said its continued care and operation is largely dependent upon substantial monetary donations and the community's renting of the facility. Maintenance costs of the aged building have proven both small and large, Bassett said, the most substantial expense of late being the installation of a new well in the spring of 2012 after the old well failed completely. "Just as the community once saved the schoolhouse from disrepair, it now has that same opportunity just in a different way," Bassett said. "The schoolhouse is important because it is a tangible connection to the past and, particularly for those who have chosen to retire here, is an aesthetical component of developing a sense of place and community."

Recent maintenance efforts at the Dungeness Schoolhouse have included installing a new well, shown here being done by Oasis Well Drilling, following the old one's failure in the spring of 2012. Photo by Reneé Mizar

Seeking to ramp up rentals of the multi-use structure, which includes two downstairs classrooms and an upstairs auditorium that is ADA accessible via chairlift, Bassett said the MAC is embarking on a marketing campaign specifically for the schoolhouse to draw both long-term renters as well as event-specific users. As part of the campaign, the MAC recently has secured grant funding to create a Dungeness Schoolhouse website, complete with an online booking calendar and photo galleries of the rooms and grounds, as well as other marketing materials and new building signage.

"The surest way to ensure a historical structure is preserved is to continue building its economic base. In many ways the schoolhouse is like the barns of the area; they're expensive to care for and maintain," Bassett said. "They might look nice, but if you can actually utilize one to its full potential and best use, its future will be better ensured. That is how we are going to ensure the Dungeness Schoolhouse can be enjoyed by those another 120 years from now."

Additional information about the Dungeness Schoolhouse, including rental agreement forms, is available on the MAC website.

Museum & Arts Center Executive Director DJ Bassett, Dungeness Schoolhouse managers Mike and Kathy Bare and several members of the Dungeness Schoolhouse Volunteer Committee pose on the front steps of the landmark building in August 2012. Photo by Reneé Mizar

Finding forever homes for pets in need

Story and photos by Ashley Miller

Four-year-old Jack Root loves on his 6-year-old dog and best friend Duke, a Labrador/mastiff mix. The Root family adopted Duke from the Olympic Peninsula Humane Society three years ago.

Mary Beth Wegener is grinning like a Cheshire cat. Not only is the euthanasia rate at the Olympic Peninsula Humane Society lower than ever, the nonprofit organization has purchased property for a new animal shelter. Down from 30 percent five years ago, the kill rate is at 7 percent. Wegener, who was hired as executive director of the center last June, credits the decrease in numbers of animals being put to sleep to the combination of hiring a vet on staff two years ago and working with other organizations to transfer and adopt animals.

"The philosophy has changed," Wegener explained over coffee. "We are never going to put down a healthy, adoptable pet again. We might transfer it to a no-kill facility in a more metropolitan area with more foot traffic but we won't put it down."

Wegener applied for the position because she likes animals and was hoping to get back into nonprofit work after several years in the banking industry. "I applied with absolutely no hope of getting the job," she recalled. "I was completely surprised." Since she started, Wegener has been working very hard to change the public's perception of the humane society, develop new relationships and strengthen the organization from the inside out. "I think the employees feel more empowered now than ever before," she said. "I am honest with them and expect them to do their jobs. I don't micromanage them. They take care of the animals and I trust them to do so."

At full capacity with more than 200 animals on site and in foster homes, a new facility has been on the group's wish list for years. On Oct. 12, the shelter's board completed the purchase of a 9.5-acre site at 1743 Old Olympic Highway, located between Port Angeles and Sequim, for $325,000. Wegener first learned about the property last May when a volunteer e-mailed her about the "perfect" site for a new shelter. On a whim, she decided to check it out. "I drove out and saw it and thought, 'Wow, it really is perfect,'" she said.

Compared to the current 2,900-square-foot facility, the new property with three modular buildings will be very spacious. The tentative plan is to use one of the existing buildings for administration, another to house cats and the third for veterinary care. A barn on the property could be used to temporarily house larger animals. A capital campaign to raise $1.2 million — expected to kick off Jan. 1, 2014 — will focus on building a dog kennel from the ground up and completing additional improvements to the property.
Until then, Wegener plans to focus her energy on restructuring the board, redefining important policies and creating a much-needed succession plan. In the meantime, business will run as usual. Hundreds of animals are available and waiting for their forever homes.

Lexie, a 4-year-old blue-nosed pit bull, is one of several dogs available for adoption. She's the longest standing resident at the shelter and has been in foster care and on-site for two years. "She's a really nice girl and she'd make somebody a great pet but she has to be the only animal," Wegener said. "I know what most people think when they see a pit bull, and I admit I didn't like pit bulls at first either, but they are truly awesome dogs who love people. They're just a breed that's gotten a lot of bad press."

Wegener has adopted two dogs from the center since she started working there. Buddy, a 6-year-old shepherd mix, has become the center's "ambassa-dog." He comes to work with Wegener every day and attends events with her off-site, too. "I didn't think much about Buddy when I first met him until I took him to an adoption event and realized what a great dog he was," she said. "I took him home that weekend and knew I'd never give him back." "He's a great example of the dogs we get in here," she continued. "He's had an up-and-down life but when you get him out of the shelter he's the best dog ever." She also adopted Lulu, an 11-year-old pit bull that wasn't thriving at the center, after she took the canine home for long-term foster. In the past year and a half, Wegener has fostered more than 30 puppies, some for just a couple of days and others for a few weeks. Finding families to foster animals helps reduce the number of animals at the center and provides very important socialization, Wegener said.

As an open admission shelter, the humane society takes in any animal that's brought in. Each year, more than 2,000 animals are brought to the shelter. No animals are turned away. Because of a cat's quick gestation period, the shelter almost always has kittens available for adoption. The adoption fee includes spay or neutering, a microchip, a health visit from a vet and the initial vaccines. A complete list of adoption costs for both cats and dogs is available online at www.ophumanesociety.org. All available pets can be viewed online at Petfinder.com. Just type in the ZIP code and begin searching. The site is updated regularly and includes pictures.

In addition to puppies and kittens, the center has several adult animals needing homes. For many people, adopting an adult cat or dog is a better option, Wegener pointed out. Kittens and puppies require a lot of attention, training and patience. Young adult and older pets generally are housebroken and have some degree of training and socialization.

Sean and Megan Root, of Port Angeles, adopted both of their dogs from the Olympic Peninsula Humane Society. Sugar is a 7-year-old mix and Duke is a 6-year-old Labrador/mastiff. They found both dogs on Petfinder and adopted them almost four years ago this Christmas. At the time, the couple's oldest son was just a baby and Duke was an energetic 2-year-old dog with separation anxiety. Now, he is a mature adult dog and their 4-year-old's best friend — a perfect example that an adult dog from a shelter can be successfully integrated into a family.

Olympic Peninsula Humane Society Executive Director Mary Beth Wegener gives some attention to an adult calico cat available for adoption.
Wish list

The Olympic Peninsula Humane Society eagerly accepts donations in the form of:
• Dry adult dog food
• Dry puppy food
• Dry adult cat food
• Wet and dry kitten food
• High-efficiency laundry detergent
• Paper towels
• Toilet paper
• Bleach
• Non-clumping cat litter
• Dog and cat treats

Embracing the beauty of being different

Mosaic vision: A community of individuals working together to improve the quality of life for people with developmental disabilities.

Story and photos by Patricia Morrison Coate

"We become not a melting pot but a beautiful mosaic. Different people, different beliefs, different yearnings, different hopes, different dreams." – President Jimmy Carter

Although President Carter was describing the fabric of America, board members of Special Needs Advocacy Parents were inspired by the concept of a mosaic in defining its community of developmentally disabled adults in Port Angeles and Sequim. The nonprofit organization was founded in 1998 by a dozen parents of special needs children who were concerned about their youths' futures and quality of life after high school. They were joined by several care providers and community members.

According to the Centers for Disease Control and Prevention, developmental disabilities are a group of conditions due to an impairment in physical, learning, language or behavior areas. About one in six children in the U.S. have one or more developmental disabilities or other developmental delays. These conditions begin during the developmental period, may impact day-to-day functioning and usually last throughout a person's lifetime. Some originate from genetic causes such as Down syndrome, while others are birth-related such as a lack of oxygen during birth or a maternal infection.
Others derive from fetal alcohol syndrome or drug use during pregnancy or poor prenatal health in the mother. In many situations, the cause of developmental disabilities is unknown, such as in autism.

"Parents were thinking, 'What is out there for my child once he or she leaves school?'" said Tracy Wilson, one of SNAP's founders and a longtime professional caregiver. "We agreed the solution had to come from the parents and we knew the need was tremendous in the community for services for families and individuals. The first thing we did was to form a rec club."

Wilson has been the caregiver of Violet Snodgrass, now 31, for the past nine years. "The first couple of years were rocky because change is hard, but we had the whole SNAP community to support and accept us," Wilson said. "No way could I have done this in a vacuum without the support we've received. It's been huge, huge! It's become our family and Violet has made great friends there."

About a year ago SNAP morphed into Mosaic with the mission statement: "To enrich, encourage and empower people with developmental disabilities toward achieving independence, social skills and community inclusion through the arts, education and social activities." "We picked Mosaic as a name because our participants are a mosaic of different abilities," said Anna Wilson (no relation to Tracy Wilson). "That really spoke to us because we all are different and we all contribute in different ways. We want to embrace the beauty of being different."

"I'm so excited about Mosaic because we don't have two people the same and that's a good thing and a bad thing," said Sandy Voelz, the sister of participant Stacey Shipley, 60. "It's good because it forces us to look at everyone as individuals who respond to different stimuli and have different needs and desires. The 'bad' part is we really have to think about activities we offer because some will need one-on-one attention and others are quite independent."

After returning to Port Angeles seven years ago, Voelz saw that although her brother's basic needs were being met with a caregiver, his quality of life was suffering. "All of a sudden I was looking at my brother in an apartment by himself — he wasn't doing anything," Voelz said, "and I realized after all these years, when he was growing up, that we never asked him to speak, so the first thing I did was take him to a speech therapist. Stacey had to have someone to talk to, so we joined SNAP and because he has people to talk to, he can communicate. They help him enunciate, so he's learning to talk, and he's become extremely social. In so many ways we have a population that's pigeon-holed. With Mosaic we're opening their door and our door."

Photos at top, left to right: 1. Nina Wilson at First Step Family Support Center in Port Angeles, where she works two days a week. Submitted photo. 2. Tracy Wilson, left, shares her home and her life with Mosaic participant Violet Snodgrass. Patricia Morrison Coate photo. 3. Stacey Shipley enjoys working four days a week at Lee Shore Boats in Port Angeles. Submitted photo. 4. Courtney Brown takes part in a painting class through Mosaic. Submitted photo.

After moving to Sequim in 2001, Anna Wilson wanted to expand the world of her developmentally disabled daughter Nina, now 41. Nina, naturally shy, needed more social interaction, her mother felt, and has found it through the programs Mosaic offers, which include life skills, self-advocacy, peer support,
cultural arts and sciences, recreation and social enrichment, community outreach and project outreach.

Core programs

Program director Bonne Smith summarized the scope of Mosaic's curriculum: "Evolving from a recreational-based curriculum in 1998, Mosaic now serves more than 140 people about 1,200 hours a quarter in three-month sessions. Meeting four days weekly and a fifth monthly, Mosaic continues to provide recreation, but increasingly, based on outcomes, added educational experiences for adults with developmental disabilities. With outcome-driven programs, Mosaic continues to evolve embodying the ever-changing needs of the Clallam and Jefferson County participants that we serve.

"Based on last year's outcomes, Mosaic's fall classes included healthy eating, nutritional information in supporting diet choices, an exercise companion class, and we will add a soft employment skills class starting in January. The cooking class presents alternative food choices, local whole foods and reduction of sugar and salt. The specific class responds to members that struggle with weight issues, chronic conditions, flavor and palatable issues and was part of the genesis for Mosaic's first health fair, Oct. 14. The second educational experience goal is formed through the theater class, which serves as public relations for the community. Through the theater experience the Snappy Players Troupe build their self-confidence in public speaking and the message about the struggles of everyday life is conveyed in comedic form, breaking down the barriers one show at a time."

Challenges

Most people with developmental disabilities do realize they are different, said Anna Wilson, and that can cause frustration and a tendency to withdraw from the world. Mosaic counteracts those challenges by providing a safe haven through its classes and recreational activities — dances draw upwards of 50 — and by being visible in the community through outings, community service projects, an art show and the annual theater production in May, developed from dialogues to props by Mosaic participants.

"This group, with all different levels of awareness and life skills, reaches out and helps each other," Voelz said. "They are a family of sorts — they connect with each other. Just being part of a team gives them feelings of self-worth in having a purpose, just like we all need one. They are adults and they deserve and want respect."

Making themselves understood to others outside the special needs community also is a challenge. "Certainly communication is the major one," said Tracy Wilson. "I think anyone with developmental disabilities faces challenges in the community about belonging. We say a community with heart has room for everyone. The task that's always been before us is to educate the community with exposure to people with developmental disabilities. It's been an adjustment but we've found people to be receptive."

Inclusion in small ways and large is a driving force within Mosaic. Voelz admitted sometimes even family members can be uncomfortable or even fearful of their developmentally disabled relatives, and more so the general public. "A smile, a 'How ya doin'?' absolutely makes their day," Voelz said. "They have feelings, hopes and desires, but who's listening? Since they're limited or wired differently, they don't know how to get there even if they know how to ask. Stacey and many others don't express pain even if they have it.
Pain doesn't generate the thought to complain in them." That's one of the reasons Mosaic organized a health fair last year so participants could be checked from head to toe and so caregivers could learn what symptoms of illness or injury they should look for.

76 percent of Mosaic's classes and activities are funded through personal donations, United Way, grants and fundraising. Program fees cover 24 percent of Mosaic's expenses.

Participant profiles

Violet Snodgrass

Snodgrass is an enthusiastic and engaging young woman, full of natural curiosity, which extended to the interviewer's rapid note-taking. Because language is difficult for her, her caregiver Tracy Wilson filled in the gaps. "Violet worked 7½ years at The Buzz and made lots of friends there. She's someone who never called in sick, who doesn't take cell phone or cigarette breaks and is a hard worker who really takes pride in doing a great job," Wilson said. An article in the Sequim Gazette from Feb. 23, 2005, chronicled her skills and duties from cleaning to stocking. "No other employee is so happy to come to work," said her employer Deb Ferguson at the time. Unfortunately, The Buzz closed this spring and Snodgrass misses all that having a job gave her.

"Violet is ready to work hard and is actively seeking work," Wilson said. "She'd like to be around people and have something active to do with people who are kind with a variety of tasks to keep her busy. Violet is involved in lots of things in the community — she goes horseback riding, to plays and musical events, out to dinner and lunch. She's part of Mosaic's community service class, which meets at parks to pick up garbage, and goes to church and visits people in nursing homes. We're very busy and we're gone the majority of every day." Snodgrass also is involved in Special Olympics softball, basketball and bowling, Mosaic classes and its theater project, learning dialogue and making props and scenery. "Life is good," she said.

Although Wilson has taught Snodgrass many practical things in the ways of the world, it's been a two-way learning curve. "She has really taught me that we all have the capacity to learn and grow and make changes in our lives. She continues to exceed everyone's expectations, including her own, and clearly delights in herself. She's taught me the value of seeing people for who they are, and can be, and not to limit people by labeling them," Wilson said. "I think I am teaching Violet to believe in herself, to step outside her 'comfort zone' and to think beyond herself by reaching out to others. When my stepfather died five years ago, we went to his memorial service. People were invited to stand up and say a few words. To my great surprise, Violet stood up. She said, 'Bob nice. I like Bob. I miss Bob.' It's moments like that that remind me I really do have the best job in the whole world."

Nina Wilson

"You could not ask for a better travel companion and there is not another person I'd rather spend time with than my daughter Nina, my buddy," said Anna Wilson, now 70. "She's very capable of taking care of herself and could function well on her own, but she's shy, not assertive and not a good advocate for herself. Her reading level may be at the second grade and she doesn't do well with money, so to live independently would be a challenge for her. She's very loving and happy with life as it is now, with no desire to live on her own."

Her mother observes, "She is shy at first, but when comfortable with people she displays leadership qualities which her instructor Bonne Smith takes advantage of. She is very organized and is one step ahead of you, when she knows what kind of project is on the agenda. Nina is very practical and surprisingly computer savvy in spite of her reading challenges. She has an impressive visual memory and is a great asset in a grocery store, classroom or at home as far as locating whatever is needed. She has a very sunny disposition and is very loving with great compassion."

That gift makes Nina a favorite with the youngsters at First Step Family Support Center in Port Angeles where she works two days a week in the kitchen and as a children's aide. "They are kind of drawn to her — they are a good match," Wilson said. Nina also attends Mosaic's Voyagers class which exposes participants to culture and science. "Bonne wants them to be able to have conversations on different subject matters that people wouldn't expect from them," Wilson said. "It's a good dynamic class with a wealth of information." She also noted Nina is an accomplished equestrian with many high-point year-end awards during her riding career in Las Vegas, Nev. She also was the recipient of two silver medals in International Special Olympics as an equestrian. "She has shown great courage these last few years, when health issues required surgery and frequent medical procedures. Even under those challenging circumstances we manage to have a good time together."

Stacey Shipley

In the past six years since Sandy Voelz got her brother Stacey Shipley involved in Mosaic, his quality of life has improved dramatically. "He was alone so much, as are so many others of our population, and I decided he had not developed, only having a caregiver. Now he is interested in things we never thought to expose him to, like watching history programs. His teacher thinks he has gained three years in cognitive function since Mosaic — Mosaic has literally brought him to more of his potential."

David Dow celebrated his 50th birthday Dec. 5 at a spaghetti luncheon prepared by fellow Mosaic participants.

With skills that he learned in classes, Shipley is now able to work four days a week at Lee Shore Boats in Port Angeles and live independently in a house that the owners made possible on their property. The company makes aluminum boats and Shipley is meticulous in sweeping up the metal shavings, his sister said. "Lee Shore Boats is a poster employer for those with special needs. Every single worker embraces Stacey and his job and they make him a part of the team," Voelz said. "Everybody needs a reason to get up in the morning, a place to go, a place to be wanted, something to do and a place to socialize. He keeps a very organized house and does his own laundry and dishes. If he can do it, others that are developmentally disabled can, too." Between work development, Special Olympics, Mosaic and using art skills at home he's learned at Mosaic, Shipley stays very busy. Bowling is one of his favorite activities and he bowls an impressive 246. "Stacey is a real easygoing fellow and everyone seems to like him."

Mosaic's future

Mosaic holds its classes at Holy Trinity Lutheran Church, 301 Lopez Ave., Port Angeles, and in a small space in the Sequim High School Special Ed classroom.
"My dream for Mosaic is that we have a place of our own, a place to call home," said Tracy Wilson, "that's centrally located so we can offer activities and programs throughout the day and in the early evening to accommodate as many as we can. The biggest need is for that social aspect in all that we do. Loneliness is rampant among people with developmental disabilities."

Actors with Mosaic produced "The Battle between Funk and Disco" at Olympic Theatre Arts in March. Sequim Gazette photo by Matthew Nash

"It doesn't matter where you start, just don't look away," Voelz said. "Our developmentally disabled population is so starving for that look, that wave," when they're out and about in public.

Anna Wilson said her desire for Mosaic's future is to find a volunteer with marketing experience, as 76 percent of Mosaic's classes and activities are funded through personal donations, United Way, grants and fundraising. Program fees cover only 24 percent of Mosaic's expenses. "Marketing an organization such as ours makes it very hard to compete with groups like the Boys & Girls Club," Wilson said. "It's very challenging to get the public to attend our fundraisers like our theater production in May. It's very hard to measure and show improvements we see among our participants to the public and I would love for us to connect in a better way in the community."

"We've got great people but we really need more volunteers, volunteers who think out of the box with new ideas and approaches," Voelz said.

For more information, call 360-681-8642 or visit the Mosaic office, 301 Lopez Ave. #4, Port Angeles.

HEART & Soul

I can't thank you enough

By Karen Frank

When Dana and I eat dinner, we hold hands and say what we are grateful for. Most of the time they are the same things — each other, a house, food, cats, friends. Some days I'm grumpy or feeling sick, only willing to grumble, "I'm not grateful for anything right now," which is momentarily true.

The holiday season is a time when children and adults often feel they are required to seem grateful even when they receive clothes that are a size too small (or two sizes too large) or are presented with their third fruit cake when they don't even particularly like fruit. It's a time when we try to appear thankful to hug relatives we never see during the year and don't really want to see. It's a time of year when it's really, really dark and people go to work in the dark and return home in the dark and are as sleepy as hibernating bears 80 percent of the time. It's a time of year when the grey is unbroken even during the "daylight" hours and the damp cold seeps into our bones, leaving us shivering even when the heat is cranked up. It's also the season of light and love and gratitude.
I like the kind of Christmas tree lights that blink on and off or look like white icicles dripping from rooftops. I appreciate the story behind the lighting of the Hanukkah candles. I wish I had one of those propane fireplaces so that I could be mesmerized and warmed by the crackle of the fake logs. I imagine roaring yule logs and bonfires on Winter Solstice, symbols of the community's faith that the sun will return even though the night is deep and long.

When Meister Eckhart said that if the only prayer we ever pray is "thank you," that is enough, or when Shug in "The Color Purple" said that God wanted us to notice and appreciate the ordinary daily surroundings of creation, like the color purple, that's the kind of gratitude we are called to in every season. This is something we can learn.

In one of our old home movies, my brother is "helping" me open one of my presents. He's tearing the paper off and flinging it behind him, excited by all the hoopla and anticipation of the day. When he opened his own presents, it was clear which ones he preferred. Those that contained clothes got tossed over his shoulder in preference of toys. It was funny then, but it wouldn't be humorous if he still did that as an adult (although it kind of would be). Now he knows, as we all know, that the person who bought that present for us did it out of love and the hope that it would bring us pleasure. We don't have to fake joy at receiving something we don't want. It's not the specific gift that we respond to, but the love of the person who gave it to us.

Another thing I've noticed since I was a child is that it is better to give and receive. It was always fun to pick out some kind of costume jewelry or perfume for my mother, but a little boring to find another belt or tie for my father. It was even more exciting to get presents. I have a photo of me from when I was about 12 standing in front of a cardboard chimney in my pajamas holding a little black purse in one hand and a fishing pole in the other.

It's hard these days to surprise people. Most of the adults in my life have everything they need and very specific tastes. No jacket has been made that Dana won't appreciate. My mother loves lavender and candles. My brother, music videos. I think he is the most innovative when it comes to presents. One year he gave me a scrapbook of old pictures of my cousin and me. Another year, he gave each of us DVDs that he had made from spliced-together bits of my grandfather's home movies. The last present I gave my dad before he died was a double picture frame with me on one side and Dana on the other. He kept it on his dresser where he could see it from his bed and it gave him great pleasure.

Along about February, during the grey dismal days, remember that the light will return (maybe by August) and that each morning the sun rises to shine upon us in its peculiar Northwestern way. Right now, I am grateful for the colors of green, from cedars to Douglas-fir to willows to salal, for all the woodsy stuff that people weave into wreaths and hang from their doors this time of year.

I am grateful for all of you. I thank you very much for reading my column all these years. I thank you for calling me up or e-mailing me or talking to me on the street and telling me when something I wrote was meaningful to you. I thank you for appreciating me the way that I appreciate you. Giving and receiving has been a gift. Many blessings to you all and a Happy New Year!
I'm no longer going to be writing this column and I will miss you. I'm still a writer and spiritual director in Port Townsend. If you have any questions or comments, please feel free to e-mail me at karenanddana1@q.com.

Friends of Forks Library

A FRIEND TO THE ENTIRE FORKS COMMUNITY

Story by Chris Cook

Forks High School's multi-million dollar addition is now open, a 21st-century centerpiece for the rural logging community. Now it's time to spruce up and modernize the town's downtown library. Both the high school and library are busy crossroads for residents of Forks, La Push and other neighboring towns.

The library, located on South Forks Avenue in a brick-walled former bank building built in the 1960s, is solid, but needs a replacement for its outmoded flat roof. The original commercial-grade carpeting has stood the test of time, but is ready for replacement. Knocking out walls would open up the interior. A meeting room in the back corner of the building is used regularly for library talks plus community meetings. The room, and its kitchen facilities, would be modernized and set up to be used securely after hours when the main library rooms are locked.

Thanks to the Clallam County library levy of 2010, the Forks Library and the three other libraries in the NOLS system are seeing increased circulation of their books, DVDs and other materials. The tally for the Forks Library for 2011 was 84,827 items, quite a count for a town with under 3,200 in population.

"The library is a cornerstone of our community," Friends of Forks Library treasurer Ellen Matheny says of the role of the library in the rural West End community. "In contrast to Port Angeles or Sequim that have a wider variety of resources, our library provides an essential meeting place and information resource for our community. Through the library, we can access recently released books, audio books, movies, newspapers and magazines. It provides a portal to the Internet for those without computers at home and for people traveling through town.
It also offers periodic programs of interest to learners of all ages."

At the forefront of the effort to raise $175,000 in funds from community sources for the library renovation are the members of the Friends of Forks Library. The nonprofit organization is actively pursuing that goal. "We currently have $60,159 in the fund," Matheny says. "Several community members have stepped forward with contributions of $15,000 and $10,000 and many other people with contributions of $500 or less. No contribution is too small. We have a donation can on the book cart in the lobby where folks can leave smaller donations for the renovation. Friends of Forks Library has contributed $10,000 toward the renovation and is sending an additional $3,000 to them this week. This money has been collected by the group over the past several years, primarily from book sales."

The North Olympic Library System – a junior taxing district within Clallam County that staffs and funds the Forks Library – has committed $559,000 to the library renovation project. The NOLS funds are mostly coming from 2010 library levy funds and from timber tax receipts.

"The Forks Library needs an upgrade and renovation," a flier sent to West End residents by the FOFL states. "In its prior life, it was a bank building. Now we need to tear down the interior walls and create a library for Forks that is modern, efficient, accessible and welcoming. Some of the proposed upgrades include a new roof that can withstand our abundant rainfall, high-efficiency lighting and windows, a new heating and cooling system, an upgraded electrical system with the capacity to handle the demands of new library technology and an outside entrance to the meeting room for after-hours use by community groups. The renewed library spaces will be welcoming and flexible, better able to meet the changing needs of our community today and for decades to come."

Friends of Forks Library president Debbie McIntyre and treasurer Ellen Matheny continued the organization's ongoing Forks Library renovation efforts at "Raise the Roof." Photo courtesy of Friends of Forks Library

Above: This conceptual drawing produced by Jerry Schlie Design of Beaver shows how the children's section of the Forks Library would look following a remodel. At right: This conceptual elevation drawing shows how the Forks Library's exterior would look following the proposed $775,000 renovation. Most notable is the raised roof, which would replace the building's problematic flat roof, a troublesome design feature in the rainiest town in the lower 48 states. Drawings courtesy of North Olympic Library System

Matheny sums up the plans: "The library is housed in an old bank building complete with a vault in the center. It needs to be more open and modular to allow multiple uses of its spaces. It also needs updated wiring to support the increased use of technology in the library. The roof leaks and needs to be replaced. The added convenience of after-hours access to the meeting room would be a welcomed result of the renovation."

Looking back decades, the story of the Forks Library reflects the roots of the strong support the project now is receiving. The North Olympic Library System recently posted a history of the library and of its support within the West End community.

"On January 19, 1946, the first public library in Forks opened in an unused room of the town's elementary school," the brief history reads. "As a branch of the county's Clallam Rural Library, it had an initial collection of 600 books, augmented by volumes from the county system, and Lillian Dimmel was the first librarian. Del [Huggins] later built shelves in their enclosed porch, where people were free to come and go and borrow what they liked. Huggins joined others when planning for an official library began in 1944, and it opened in January 1946."
“As a branch of the county's Clallam Rural Library, it had an initial collection of 600 books, augmented by volumes from the county system, and Lillian Dimmel was the first librarian. Del later built shelves in their enclosed porch, where people were free to come and go and borrow what they liked. Huggins joined others when planning for an official library began in 1944, and it opened in January 1946.” With the incorporation of Forks as a city in August 1945 the new library lost its standing within the rural Clallam County library district. In stepped the local PTA, with members forming a nonprofit library association. Forks residents are noted for their thrift, especially in reusing discarded buildings, moving the buildings when and where needed. “The fledgling library also found itself in need of a new home barely a year and a half into operation,” the NOLS history continues. “As the library's grade-school home was needed to accommodate increasing enrollment. A small building once owned by long-time resident Bert Fletcher, where he raised rabbits in the I920s, became temporary quarters in 1947, and the short-lived ‘rabbit-hutch library’ was born.” The official name of the Forks library became Forks Memorial Library, in remembrance of World War II veterans. The forerunner to today’s Friends of Forks Library was formed and fittingly named the Forks Memorial Library Association, though this name was officially changed to Forks Library when the library became part of the North Olympic Library System. The organization began an ongoing drive for support funds for the library into 1973. During this era library staff plus books and other materials, and facility maintenance, were funded by Clallam County’s rural library district. The history tells how the Forks Library moved from location to location over the years. “In 1951 a site was donated on the corner of B Street (now Bogachiel Way) and the 24 “Twilight” author Stephenie Meyer signs books for teenage girls (and their moms) during a book signing at the Forks Library held in 2006. Meyer contacted the library to arrange the signing as part of her promotion for the early books in her “Twilight” series. Forks Forum file photo Olympic Loop Highway (U.S. 101); volunteers completed the building's construction that year, and the library's first permanent home was dedicated June 28, 1952. The library merged with the newly formed North Olympic Library System in 1973 and plans to resolve crowding at the Forks branch were under way by 1979. The solution turned up just across B Street in the form of the old Seafirst Bank building, which was remodeled and opened January 19, 1981, with 20,000 books. Volunteers moved the library's collection, including grade-schoolers who formed a human ‘book brigade,’ passing books hand-to-hand across the street and through a window of the new library.” The current Friends of Forks Library group began meeting in October 2003. The organization is an associate member of the West Olympic Peninsula Business Association, which is a 501(c)(3) organization, allowing it to take tax-deductible donations. Running the FOFL alongside Matheny are president Debbie McIntyre and vice president/secretary Kate Monahan. “Our mission is to promote the love of reading in our community with various activities,” Matheny says. “We’ve sponsored book reading groups, presentations on a variety of topics including authors doing book readings like when Stephenie Meyer visited Forks in 2006, and children’s reading activities. 
We also provide the library with things they otherwise might not have in their budget – a data projector for community checkout, several custom-made bookshelves, the train in the children’s reading room, for instance.” The ladies, their team of volunteers within the FOFL, the staff at Forks Library and a long list of community supporters are being very creative in their fundraising. Matheny says, “This year we’ve held a couple book sales and a community dance to promote the renovation of our library. The dance was one of the last activities held at the Rain Forest Arts Center before its demise (in the early morning fire Oct. 29 that devastated a corner of downtown Forks, ed.). We host a ‘perpetual’ book sale in the library lobby – with a shelving cart that offers a wide selection of gently used books to community members, by donation. The book cart is replenished daily by the Friends. “The community dance brought together about 80 people from Forks and the surrounding area. We had two local bands, Crescent Blue and Therapy Session, perform that evening. A caller taught us the Virginia reel and local dance instructors got the evening started with an hour of lessons on the country swing. It was enthusiastically attended and brought in $800 from the evening and an additional $600 in direct contributions to the renovation fund. We plan to hold a Mexican-themed dinner in January at the Congregation Church as another fundraiser.” Forks’ renown as the setting for author Stephenie Meyer’s blockbuster-selling “Twilight” books and movies is aiding the fundraising, too, in some ways. “The focus on Forks due to the ‘Twilight’ books and movies has affected our fundraising by bringing in out-of-town people to our book sales and the community dance,” Matheny says. “We had hoped there would be a greater tie-in and resulting donations due to the library being mentioned in the first book, ‘Twilight.’” Anyone interested in helping the Friends of the Forks Library reach their library renovation fundraising goals has several options for contributing. Checks for larger donations can be mailed to the library, or dropped off there, made out to North Olympic Library System. The address is: Forks Library, 171 S. Forks Ave., Forks, WA 98331. Online donations can be made by going to the home page and clicking on the “Donate” button. Make sure you specify that your donation is to go to the Forks branch, then choose “Other,” then type in “Renovation Fund.”
[Photo captions: A community treasure – Alice McDonald says being a patient at the DVHWC is like having the best doctor ever. She fell in love with the clinic on her first visit and now volunteers at the front desk and does data entry. DVHWC Director Rose Gibbs and provider Larry Germain confer about a patient at the Chronic Health Care Clinic.]

Dungeness Valley Health & Wellness Clinic. Story and photos by Kelly McKillip.

“Thank you for being so kind and for leaving me with my shred of dignity. Somehow I will find a way to pass it on and return your care. Thank you most sincerely.” P.M. “Thank you for making me better when I was very sick. I couldn’t have done it without you. When I drive by your clinic my son says look mommy there’s the place that made you better. Thank you.” These letters convey the gratitude patients often express for the care received at the clinic. Health is indispensable for a good quality of life. Since opening its doors in 2001, the Dungeness Valley Health & Wellness Clinic (DVHWC) has provided more than 12,000 free patient visits to the underinsured members of our community. The care, which is focused on wellness, gives patients an avenue toward a healthier life. Patients are grateful for the outstanding professional services received and some are inspired to become volunteers themselves.

A GREAT IDEA: The vision of the DVHWC began with Mary Griffith, a parish nurse with Dungeness Lutheran Church, who saw many of her fellow parishioners suffering because basic health care was beyond their grasp. To respond to this need, Griffith began a free walk-in clinic. Increasingly popular, the organization grew in size and scope, always with the goal of providing great care and the tools and information needed for patients to take responsibility for their own health.

A GREAT IDEA GAINS MOMENTUM: In 2008, the clinic moved from the small house at the Dungeness Valley Lutheran Church to its current Fifth Avenue location. In response to the need of many patients who require ongoing management of chronic illnesses beyond the scope of the walk-in clinic, the Chronic Health Care Clinic (CHCC) was created and now serves over 300 patients. Rose Gibbs, who came on board as director in 2009, says the urgent-care walk-in clinic and the CHCC continue to provide access to people who otherwise would be denied the most basic medical care. This amazing accomplishment is achieved through the hard work of a dedicated board of directors, staff, community physicians and organizations, including an essential, supportive partnership with Olympic Medical Center.
The organization is able to continue the excellent level of care because of the generosity of 11 providers, 22 RNs, 12 assessment staff, 16 receptionists, a physical therapist, a massage therapist, a diabetes educator, an information technology specialist, support personnel and six interpreters who collectively give $300,000 worth of unpaid services annually. CHCC provider Larry Germain, ARNP, helps patients manage complex, chronic illnesses such as diabetes, hypertension and heart disease. Germain believes that all Americans should have access to health care and finds that the patients he serves are very appreciative. Another critical service of the clinic is the Patient Assistance Program (PAP). Coordinator Dian Woodle leads a team of three people who assist patients in obtaining much needed but prohibitively expensive medications for free from 14 participating pharmaceutical foundations. Each foundation has its own set of rules, with all requiring a new application annually for each patient. About 50 percent of the patients utilizing the program have diabetes. Gibbs states that without the PAP and the CHCC, many of these patients would be poorly managed, utilizing hospital emergency departments as they go from crisis to crisis. In an effort to help people take control of their own health, free monthly Working on Wellness (WOW) forums provide information to the entire community on a variety of health topics ranging from managing chronic pain to coping with holiday stress.

THE SPIRIT OF GIVING IS CONTAGIOUS: During the five years that I have volunteered at the DVHWC as a registered nurse, it never ceases to amaze me what individuals with compassion and regard for their fellow human beings can accomplish. The positive attitude and clarity of purpose permeates the organization on all levels. Many dedicated providers and other volunteers arrive to give care at the walk-in clinic after working a full day in their own offices. Then there are the patients who become volunteers, such as Shannon Lott, who was traveling to Seattle every 3-4 months for health care when she saw an ad for the clinic in the newspaper. Both she and her husband, Edward, have diabetes and were struggling to manage their health. Impressed and grateful that they could have their illnesses monitored locally and be helped with the cost of expensive medicines, Lott became a volunteer and does everything from assessment to data entry, to being a receptionist and file clerk. Alice McDonald arrived at the walk-in clinic very sick one evening. She was helped and on the spot fell in love with the place. She says it’s like having the best doctor ever and is so appreciative that she became a volunteer working the front desk and data entry.

[Photo captions: Shannon Lott became a volunteer in gratitude for the great care that she and her husband, Edward, who are both diabetics, receive at the clinic. Stanley Skrobecky feels the clinic has been a blessing in his life. He is grateful for the care in managing his diabetes and the help with obtaining life-saving insulin and other medications.]

Stanley Skrobecky also has diabetes and feels the clinic has been a blessing in his life. He is grateful to have his illness managed so well and receive help with obtaining life-saving insulin and other medicines. Gibbs also arranged an appointment for Skrobecky at Seattle’s Virginia Mason Clinic to address his vision problems.
To return his blessings to the community, Skrobecky volunteers at the Salvation Army to help feed the hungry.

[Photo captions: Medical assistant and volunteer Grace Turley does double duty as a receptionist at the Chronic Health Care Clinic and as an assessment staff member during the evening walk-in clinic. Patient Assistance Program (PAP) coordinator Dian Woodle leads the team that helps many of the patients at the clinic receive their expensive, life-saving medicines for free.]

The DVHWC is a 501(c)(3) organization funded by a combination of contributions, patient donations, grants, contributed services and events. This year’s Clinic Fun Walk in September raised nearly $34,000. For every $1 donated, $3.67 of care is provided. The DVHWC is at 777 N. Fifth Ave. at the Fifth Avenue Medical Plaza in Sequim. The walk-in clinic begins at 5 p.m. Mondays and Thursdays. Patients are encouraged to arrive at least 30 minutes early. The Chronic Health Care Clinic is by appointment only. For more information, or to volunteer or donate to the clinic, call 582-0218 or visit sequimfreeclinic.org.

[Photo caption: Quilcene Historical Museum members always are happy to receive artifacts for the museum. Here, with recent acquisitions, docent Ellen Worthington Jenner holds a covered dish, secretary Larry McKeehan holds a bed warmer and chairman Mari Phillips holds a wooden caliper.]

Handing down history

The little leather-covered notebook is carefully opened. On the inside of the cover’s overleaf hinge, next to a small leather tube for holding a pen, a name is carefully inscribed in India ink. The faded handwriting on the small, lined pages is neat and even, with precise loopings. It is still readable, although the uneven fading of the lines from a pen dipped in ink makes for some challenges. It reveals a recipe collection. There are recipes for all kinds of cakes and baked goods, none of which gives flour amounts, oven temperatures or baking times. It comes from a time of wood stove ovens. The Cheap Cookies recipe is mostly lard and flour. Molasses is the predominant sweetener. Measurements are given in gills, teacupfuls and teaspoonfuls. We are glimpsing the past through these pages. The past is hard to hold. It can fade from view and be lost in an astonishingly short time. Luckily for Quilcene’s past, the Quilcene Historical Museum has taken on the mission of preserving its history for future generations. As the museum has grown, so has its collection of artifacts, reflecting an increasing measure of trust by the community and the depth of the museum’s resource commitment. Starting from a casual conversation in 1991, the museum is now solidly planted and looking to expand. “It really started with the Quilcene Fair,” recalled Mari Phillips, museum board chairman. “Al Jakeway was the chair and we had the idea to have exhibits of old photographs.” That was around a quarter of a century ago.
[Story and photos by Viviann Kuehl.]

Phillips volunteered to organize the display, working with some old photos she’d recently acquired. She put out a request for more photos, promising to return them. And she made copies of any photos submitted. “When we showed the photos, there was tremendous interest in them,” said Phillips. “That first year, they were mostly logging pictures and the second year we had farm photos,” she recalled. “After a few years, I had quite a collection and we began to talk about where to put them.” Steve Ricketts, at the time a silviculturist at the Quilcene Forest Service station, was very interested in starting a museum, recalled Phillips. He got permission to use an old Forest Service building for the purpose. It was historical but had problems with access and layout: steps to negotiate out front, narrow stairs inside, narrow doors and small rooms. “I was nervous because it was going to be very difficult,” recalled Phillips. “At that time, the Worthingtons were donating $250,000 to the church for an upgrade,” recalled Larry McKeehan, museum secretary. Part of the project was to remove the annex, a large fellowship room that had been an addition to the original church, to make way for the new church building. “I heard about the annex,” said Phillips. “It was still available for a buck and it was great for our needs. It was essentially one big room, with a kitchen and a couple of small rooms, which really served our purpose a lot more readily, but we had no place to put it.” Driving by the pasture at the corner of Columbia Street and Center Road, Phillips thought that would be the ideal spot. She approached owner Eileen Worthington. “Her response was, ‘Well, sure, dear,’” said Phillips, accurately imitating Worthington’s sweetness of voice. “So we had a building and a site, we just needed a house mover.” Jeff Monroe, an experienced mover who happened to live across the street, agreed to do the moving with his brother, John Monroe. A group of community members met and soon the museum had a charter and nonprofit status. “So many people got involved,” said Phillips. “Tom McClanahan designed the septic, Maberry put in the well, Brinnon Builders did work paid by Kay Anderson, Kristy Ackerman did the floors, Rick Muesterman built display dividers, Erwin Dence did the interior painting, Scott Abbott painted the outside, with a mural added by Dence.” McKeehan spent one summer of his life working on the museum building’s walls. In 1993, the museum was open to the public. “The first exhibit was only about 25 things,” recalled Phillips laughingly. “Now when we look back, it was virtually empty.” “We didn’t decide what to put in the spaces, we just collected things,” said McKeehan. “It took a few years for people to realize that we were really, truly going to be here and would take good care of their things. Now it’s look out, we’re getting so many donations.” Over the past 21 years of the museum’s existence, “We’ve seen an ever-growing interest by local people to preserve their history,” said McKeehan. “We’ve been brought an amazing amount of information and artifacts. Membership has increased, we have committed workers and last year we had about 1,000 visitors.” “As a docent, I meet people with a Quilcene connection who have come back,” said Ellen Worthington Jenner, who grew up in the Worthington house.
“We’re a destination and they have wonderful stories.” The museum has made efforts to record oral history and now has a collection of CDs available of 17 individuals speaking on life in Quilcene. The museum has become a recognized repository of historical records, including those of the Quilcene Alumni Association, and has lots of local information in files. “I was able to pull out an old phone book to answer a question the other day,” related McKeehan, “but what’s really cool is that someone donated all these old phone books, dating back to 1953. This is a reflection of the faith that people have in us to take care of their things, to preserve the history.” “Another exciting thing is our local authors,” said Phillips. “We’re proud of our authors and their works.” In its gift shop, the museum has several volumes on local history and some historical fiction by local authors, along with Victorian crafts, postcards and other items of interest. The museum also has sponsored community events over the years, to celebrate Quilcene history, and maybe even create some. Heritage Days, Volksmarch outings, Rendezvous events, Quilcene High School reunion luncheons, specialty tours, wine tastings, salmon barbecues and book signings have all raised awareness and/or funds for the museum. “Now we’ve launched into the big time,” said Phillips, of the museum’s next step, the purchase of what has come to be called Worthington Park. The historical Worthington home and grounds, recognized as 10 acres of possibilities, is a planned purchase for preservation, expansion of museum space and use as a funding source. Noel Criscuola, heir to the Worthington property, has followed in his mother’s footsteps in his exceeding generosity, said Phillips. He gifted the contents of the house to the museum and has been very supportive. “He has given permission to use the grounds and do work on the property, so by the time we sign the deed, we’ll already have our goals accomplished. It’s so well-located and we can have three or four things going on at the same time, with the components so well-placed, the groups won’t impact each other.” A stage has been built, and two concerts given, by a museum subcommittee. December events included a volunteer potluck, a Christmas tree raffle and a Christmas Tea fundraiser in the Worthington mansion. The tea sold out and another is planned for the spring. The museum has gathered $205,000 toward the property purchase price of $300,000 and has until June 30, 2013, to come up with the balance, as contracted by the late Eileen Worthington a year ago. Community matching grants of $35,000 and a Satterlee family matching grant, which netted $19,650 and inspired McKeehan to launch his own family challenge, are making a real difference, noted Phillips. “That’s the kind of enthusiasm we like to see,” she said. Planned for later development are restoration of the mansion, a river trail system and a main entry off Center Road. Davis Steelquist has donated 22 pieces of Victorian furniture toward the mansion’s original period restoration and the museum has three sets of donated dishes. Meanwhile, McKeehan has typed out the recipes in the little book, which came from the Worthington family, for the museum members to try at home and share results. The past is mingling with the present.

[Photo caption: Eileen Worthington, who provided the building site for the museum.]

Viviann Kuehl is a freelance writer who enjoys the history of Quilcene and peering into the past and future from different perspectives.
THE Living END: The Value of Nonprofits. By Jody Moss, executive director, United Way of Clallam County.

Nonprofits enrich our lives and our communities in a multitude of ways.

Address unmet needs: Foremost, nonprofits fulfill unmet needs. In the human service sector, they feed and shelter needy community members. They invest in permanent solutions, moving individuals toward self-sufficiency. Nonprofits provide care for seniors, enabling them to remain in their homes longer and later providing assisted living and end-of-life care. Nonprofits care for and mentor children and youth, helping children to be successful in school and graduate from high school; they impart parenting skills to young parents and grandparents who are parenting again; they help youth and adults recover from addictions. They deliver medical, dental and behavioral health care, saving people’s lives. They prevent abuse and violence and address the causes of these terrible problems. They provide legal and mediation services and help people gain job skills. They offer educational opportunities to promote education and enhance local schools and colleges. Nonprofits serve as a connector to animals; our beloved pets often come to us through a nonprofit; and they rescue those animals that have been mistreated. They beautify and preserve our natural world, creating outdoor opportunities for all; our waterfront trails, our clean beaches and our national, state and local parks are supported by nonprofits and volunteers. Nonprofits also research diseases and find cures that no regular business model could afford to address.

Support for local government and other public services: Our local cities, counties, schools and health care systems are overburdened and often underfunded. These challenges do not look to be getting easier in the near term. Imagine if they also were suddenly charged with delivering all of the services listed above because there was no one to do this work. Imagine the drain on police and court resources if more people did not have their addictions addressed, or the state of the emergency room if all free clinic patients received services there. Nonprofits reduce the need for investment of public dollars and save taxpayers money.

At half the cost! And nonprofits do all of the above on a shoestring budget. Despite that e-mail circulating about high salaries and misspent funds, most nonprofits operate very efficiently with lower salaries than in business or government sectors. They rely on dedicated volunteers and staff who have chosen this work because of commitment to mission. Nonprofit leaders have learned to be frugal, to partner and collaborate and stretch our resources.

Teach us how to play nice! Nonprofits have learned how to collaborate with one another, with businesses and with government to assure the most efficient use of resources. There are many examples of these collaborations, including our Clallam County Homelessness Task Force, which has reduced homelessness by 65 percent! Our broad early learning community works together across our region to strengthen families, help children have the best start possible and begin school ready to learn. By doing this, nonprofits don’t duplicate work but share scarce resources and serve even more clients.
Give us a place to volunteer: Nonprofits rely heavily on volunteers and these resources cannot be underestimated. Last year, the 25 United Way partner agencies reported almost 3,000 volunteers, donating 164,000 hours of service valued at close to $3.5 million! This is only one small sector of our local nonprofit community. Can you imagine the value of our collective volunteerism in animal rescue, environment and our faith communities? An astounding value and huge, huge return on investment! And we don’t just gain the value from the donated time and work of our volunteers, we also gain new skills, new friends, sometimes a new job, and shared opportunities with one another. Ultimately the act of volunteering can lead to the great feeling of a life well-lived.

Employers, return on investment, outcomes-focused and drivers of economic development: For 25 years, nonprofits have been the largest growth sector in our country, significantly outpacing for-profit sector job growth. In 2011, United Way’s 25 partner agencies employed 233 people full time and 400 people working part time. Their budgets exceeded $19 million. Nationwide, 1 in 10 individuals works for a nonprofit, generating $1.9 trillion in revenues! Nonprofits are working toward demonstrating value to their clients and to their donors. Good nonprofits are focusing on research-based best practices and delivering outcomes they can report back to the community. By nature they are working on sustainability and return on investment. For example, based on research, for every dollar invested in best-practice early learning experiences for children, there is a return of up to $17 – much better than the stock market. Combining efficiency, best practice/outcomes-focused work, employment statistics, and results, nonprofits are a key component of economic development and of a thriving community!

United Way partner agencies by areas of service:

EDUCATION – Assist people of all ages to reach their potential through education: Boys & Girls Clubs of the Olympic Peninsula, Camp Fire USA – Juan de Fuca Council, Concerned Citizens for Special Children, First Step Family Support Center, Girl Scouts of Western Washington, North Olympic AmeriCorps, Parent Line, Lutheran Community Services, Parenting Matters Foundation, Peninsula Dispute Resolution Center.

INCOME – Assist people to meet their basic needs and to achieve self-sufficiency: American Red Cross, Clallam County Chapter; Clallam Bay/Sekiu Crisis Center; Forks Food Bank; Olympic Community Action Programs; Pro Bono Lawyers; Salvation Army; Serenity House of Clallam County.

HEALTH – Provide opportunities for people to have a healthy life: Forks Abuse, Healthy Families of Clallam County, Olympic Peninsula YMCA, Peninsula Community Mental Health Center, Mosaic (formerly SNAP),
St. Andrew’s Place Assisted Living, Volunteer Chore Services, Catholic Community Services, Volunteers in Medicine of the Olympics Clinic, West End Youth & Community Club.

UNITED WAY COMMUNITY SOLUTIONS INITIATIVES – Mobilizing the caring power of communities to create solutions that improve lives; creating lasting change and preventing problems from happening in the first place. New in 2012! Great Beginnings – Early Learning Grants, Clallam County Literacy Council, Access to Health Care Coalition, Peninsulas’ 2-1-1 Help Line. For more information about United Way of Clallam County, call 457-3011, e-mail info@unitedwayclallam.org or visit.

Then & NOW: Sequim Avenue/Washington Street intersection

Now housing Hurricane Coffee Co., the building at the northwest corner of Sequim Avenue and Washington Street in downtown Sequim (at far right) was, at the turn of the 20th century, home to the Sequim Trading Company. Owned and operated by prolific area businessman Charles Franklin Seal, namesake of nearby Seal Street Park, the mercantile offered customers a one-stop shop for groceries and an array of dry goods, including seasonal fashions. The Sequim Trading Company also holds the historical distinction of having employed Sequim’s first mayor, Jilson White, who worked as resident manager. One of the city’s oldest commercial buildings, it has stood adjacent to the Sequim Opera House, which Seal also owned, for more than 100 years. Historical photo from the Mary Dittmer Collection, Museum & Arts Center in the Sequim-Dungeness Valley. Current photo by Reneé Mizar, Museum & Arts Center in the Sequim-Dungeness Valley.
Townsend Laundry & Cleaners

Townsend Laundry and Cleaners ran the dry cleaning operation in the old concrete block building off Monroe Street in downtown Port Townsend. The dry cleaning plant wasn’t much to look at, but its phone number was easy to remember – just three digits – 444. All of the buildings in this photo have pretty much survived as is over the years, except that the dry cleaner gave way to a blacksmith shop in the late 1970s when Dean Mook started the business. Twenty years later, Mook was back working for Steve Lopes, who since has established a larger shop elsewhere. Today, Mook is where he started, at the Town Forge. The forge is located almost in an alley off Monroe Street, wedged between a brick medical clinic and Point Hudson’s Sail Loft building with its trademark tower rising above this tangle of buildings. Historical photo from The Leader collection. Today’s photo by Fred Obee.
http://issuu.com/sgazette/docs/lopwinter2012?mode=window
CC-MAIN-2015-32
refinedweb
19,411
61.16
I am working on a deployment scenario in which I need to create and run a Jenkins task on a list of hosts, i.e. create something like a parametrized task (where the IP address is a parameter) or a Multijob Plugin task with a HOST axis, but run only 2 at a time in parallel over multiple hosts. What are my options?

Here is a concept which you can follow. Assume that the hosts have been configured as Jenkins slaves already, and that the hosts are provided in the pipeline job parameter HOSTS as a whitespace-separated list. Below is an example which you can take as reference:

def host_pairs = HOSTS.split().collate(2)
for (pair in host_pairs) {
    def branches = [:]
    for (h in pair) {
        def host = h // fresh variable per iteration; the loop variable itself is mutated
        branches[host] = {
            stage(host) {
                node(host) {
                    // do the actual job here, e.g. execute a shell script
                    sh "echo hello world"
                }
            }
        }
    }
    parallel branches
}

I think you should try this.
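For completeness, here is one way the HOSTS parameter the answer assumes could be declared around that logic in a declarative pipeline. This is a sketch, not from the original answer: the parameter default and the job name are illustrative assumptions, and toList() is added so collate is available on the split result.

pipeline {
    agent none
    parameters {
        // whitespace-separated list of slave names; the default is a made-up example
        string(name: 'HOSTS', defaultValue: 'host-a host-b host-c host-d',
               description: 'Hosts to deploy to, two at a time')
    }
    stages {
        stage('Deploy in pairs') {
            steps {
                script {
                    def host_pairs = params.HOSTS.split().toList().collate(2)
                    for (pair in host_pairs) {
                        def branches = [:]
                        for (h in pair) {
                            def host = h
                            branches[host] = {
                                node(host) {
                                    sh "echo hello world" // the actual deployment step goes here
                                }
                            }
                        }
                        parallel branches // waits for the pair before starting the next one
                    }
                }
            }
        }
    }
}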
https://www.edureka.co/community/11539/jenkins-task-for-remote-hosts
CC-MAIN-2019-39
refinedweb
206
66.03
At the heart of Planet Atom is the mergeatom module. I've updated mergeatom a lot since I first released it. It's still a simple Python utility for merging multiple Atom 1.0 feeds into an aggregated feed. Some of the features:

- Reads in a list of Atom URLs, files or content strings to be merged into a given target document
- Puts out a complete, merged Atom document (duplicates by atom:id are suppressed)
- Collates the entries according to date, allowing you to limit the total. WARNING: Entries from the original Atom feed may be deleted according to ID duplicate removal or entry count limits.
- Allows you to set the sort order of resulting entries
- Uses atom:source elements, according to the spec, to retain key metadata from the originating feeds
- Normalizes XML namespace prefixes for output Atom elements (uses atom:*)
- Allows you to limit contained entries to a date range
- Handles base URI fixup intelligently (base URIs on feed elements are migrated on to copied entries so that contained relative links remain correct)

It requires atomixlib 0.3.0 or more recent, and Amara 1.1.6 or more recent
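The dedupe-and-collate behavior the feature list describes is easy to picture with a small, self-contained sketch. To be clear, this is not mergeatom's actual API; the entry dicts and the merge_entries function below are hypothetical stand-ins used only to illustrate the merging logic:

# Hypothetical illustration of mergeatom-style merging (not its real API).
# Each entry stands in for an atom:entry, reduced to its 'id' and 'updated' fields.

def merge_entries(feeds, limit=10, newest_first=True):
    seen_ids = set()
    merged = []
    for feed in feeds:
        for entry in feed:
            if entry['id'] in seen_ids:  # suppress duplicates by atom:id (first one wins)
                continue
            seen_ids.add(entry['id'])
            merged.append(entry)
    # Collate according to date; ISO 8601 strings sort correctly as text.
    merged.sort(key=lambda e: e['updated'], reverse=newest_first)
    return merged[:limit]  # entries beyond the count limit are dropped

feed_a = [{'id': 'urn:uuid:1', 'updated': '2007-01-02T00:00:00Z'}]
feed_b = [{'id': 'urn:uuid:1', 'updated': '2007-01-02T00:00:00Z'},
          {'id': 'urn:uuid:2', 'updated': '2007-01-03T00:00:00Z'}]
print(merge_entries([feed_a, feed_b]))  # two entries; the duplicate urn:uuid:1 is suppressed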
http://copia.posthaven.com/merging-atom-10-feeds-with-python
CC-MAIN-2017-13
refinedweb
192
52.9
How do I make a random number in python? On repl.it, import Math does not work, so I can not use math.random to make a random number. What do I do to generate a number?

Answered by UzayAnil (24)

AdamZow (9): If you want just a random integer and not a floating point, here is the solution:

a = random.randint(1, 100)
print(a)

UzayAnil (24): here's some code for you to look at. In the code, look at lines 1-10.

ThePuzzlerThree (7): @MrEconomical When I try to install pycopy-math, it gives me an error message that the package installation failed.

LeonDoesCode (438): @ThePuzzlerThree math is part of the standard library, so all you have to do is:

import math

But for random integers, you can use:

from random import randint
num = randint(0, 100)

this:

import random
x = random.randrange(1, 100)

First you import random, then use the function randrange to create the random number, with the first parameter being the lowest number it can generate and the second being the highest. Hope this helped you out!

apparently, you're supposed to do:

import random
x = random.randint(1, 100)
print(x)

@DarshanRajpara @UzayAnil thank you, it worked! I hope you will check out this project when it is finished!

@ThePuzzlerThree if you don't want to import the whole random lib, you could use:

from random import randint
x = randint(1, 100)
print(x)
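One clarification worth adding to this thread: the float the asker was reaching for (Python's math module has no random function) is random.random(), and the two integer helpers differ at the upper bound. A quick illustration:

import random

print(random.random())           # float in [0.0, 1.0), like Math.random() in other languages
print(random.randint(1, 100))    # integer from 1 to 100, both endpoints inclusive
print(random.randrange(1, 100))  # integer from 1 to 99; the stop value is excluded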
https://replit.com/talk/ask/How-do-I-make-a-random-number-in-python/23174
CC-MAIN-2021-25
refinedweb
257
73.58
The Arduino Yun has built-in WiFi and a second microprocessor which runs Linux. That means that you can write programs in your favorite scripting language and interact with APIs directly from your Arduino. In this tutorial, we’ll learn how to send SMS and MMS from our Arduino Yun using Python and Twilio. By the end we will:

- Install pip and the Twilio Python helper library on the Yun
- Write a Python script to send SMS and MMS
- Send a text message from an Arduino program

This tutorial on sending text messages should take about 15 minutes. The ingredients you’ll need are:

- Arduino Yun (~$75 at Spark Fun or Ada Fruit)
- microUSB cable (~$5)
- microSD card (~$5)

To get started, sign up for a free Twilio account and buy an MMS enabled phone number.

Install pip on the Arduino Yun

In order to install the Twilio Python helper library, we’ll need to install a Python package manager called pip. SSH into your Yun (if you don’t know how to do this, check out our getting started guide). From there:

opkg update
opkg install distribute
opkg install python-openssl
easy_install pip

And that’s it – now pip’s installed. By default, pip would install Python packages to the Yun’s onboard memory. However, the official Arduino Yun docs have an ominous warning that says, “You’re discouraged from using the Yún’s built-in non-volatile memory, because it has a limited number of writes.” Let’s install the Twilio library to the SD card instead. This requires three steps:

- Create a new directory on the SD card for our Python packages
- Set a PYTHONPATH to tell Python to look for packages there
- Force pip to install the Twilio library in our new directory

Plug a properly formatted SD Card into your Arduino (again, check out the Yun getting started guide if you need help with this), then create a new directory:

mkdir /mnt/sda1/python-packages

Now let’s edit our /etc/profile and set a PYTHONPATH so that Python will check our new directory for packages:

vim /etc/profile

You’ll see several lines that start with export. Add a line that exports a PYTHONPATH pointing at /mnt/sda1/python-packages, then save and exit. Now we can force pip to install the Twilio library to our python-packages directory:

pip install --target /mnt/sda1/python-packages twilio

Send SMS from your Arduino Yun

If you don’t already have a Twilio account, sign up now. You can do everything in this tutorial with the free trial. We’re going to write a simple Python script to send a text message. There are two ways we could get this code on our Arduino:

- plug the SD card into our computer, write the code there using our favorite text editor, save it to the card and plug it back into the Yun
- write the code directly on the Yun using Vim

For this tutorial, we’ll go with the latter. First we need to make a directory for our Python code and navigate there:

mkdir /mnt/sda1/arduino
cd /mnt/sda1/arduino

Then create a new python script using Vim:

vim send-sms.py

Once in Vim, press i to enter insert mode. Then include the Twilio python library:

from twilio.rest import TwilioRestClient

Next we need to set our Twilio credentials. In a browser, navigate over to your Twilio Dashboard, and click Show Credentials. Copy these values into variables:

account_sid = "YOURACCOUNTSID"
auth_token = "YOURAUTHTOKEN"

Then create variables for both the Twilio phone number you’ll be using for this project and your personal cellphone:

twilio_phone_number = "YOURTWILIONUMBER"
cellphone = "YOURCELLPHONE"
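One caveat worth flagging before wiring up the client: this 2015-era tutorial uses the old TwilioRestClient class. If pip happens to pull a modern version of the twilio package (6.x or later), that class was renamed to Client, and the equivalent setup would look like the sketch below; the phone numbers are placeholders, and messages.create works the same way in both versions.

from twilio.rest import Client  # newer twilio-python; replaces TwilioRestClient

account_sid = "YOURACCOUNTSID"
auth_token = "YOURAUTHTOKEN"
client = Client(account_sid, auth_token)

# same call signature as in the tutorial below
client.messages.create(to="+15551234567", from_="+15557654321", body="hello")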
Create a REST client to connect with the Twilio API:

client = TwilioRestClient(account_sid, auth_token)

Once we’ve got our client, sending an SMS is a simple exercise of passing a to, a from and a body to client.messages.create():

client.messages.create(to=cellphone, from_=twilio_phone_number, body="COMING TO YOU LIVE FROM THE ARDUINO YUN!")

Type :wq and press enter to save and exit. Then run our script:

python send-sms.py

If all goes well your phone should light up with a text message. If all doesn’t go well but Python didn’t give you an error message, check the Dev Tools under your Twilio Dashboard. Chances are, you don’t want to edit the Python script every time you want to change the message body. Let’s modify the script to accept a message body as a command line parameter. Open send-sms.py again and add this line to the top of your file:

import sys

Then change the line that sends the SMS to replace the hardcoded message body with the first command line argument:

client.messages.create(to=cellphone, from_=twilio_phone_number, body=sys.argv[1])

Save and quit vim, then run your script again with the message body in quotes:

python send-sms.py "Coming to you live from the Arduino command line!"

Send MMS from your Arduino Yun

Last year Twilio launched the ability to send picture messages, a.k.a. MMS. To send an MMS we need only to pass one additional parameter to the client.messages.create method: a media_url to tell Twilio where our picture is located. For the sake of accurate file names, let’s make a copy of our script. Then open the file in vim:

cp send-sms.py send-mms.py
vim send-mms.py

We can use a couple of vim’s keyboard shortcuts to navigate to our special spot:

- Press shift-g to move to the end of the file
- Press $ to move to the end of the line
- Press i to insert text at the spot prior to the cursor
- Paste this code (make sure you get the preceding comma!):

, media_url=sys.argv[2]

So that whole line should look like:

client.messages.create(to=cellphone, from_=twilio_phone_number, body=sys.argv[1], media_url=sys.argv[2])

Save and quit vim, and run your script with two parameters: one for the message body, the other with a url of an image you’d like to send. Here’s one of our puppy on the day we brought her home:

python send-mms.py "Here’s my puppy."

While you’re waiting for that picture to arrive on your phone, let’s chat about MMS. First, MMS is a pretty slow way to send data and an image is a few orders of magnitude more data than 140 characters of text. It could take up to 60 seconds before you receive your picture. Second, because MMS requires a publicly accessible url, it’s a non-trivial exercise to send an MMS with a picture that’s residing on your Yun. Two options are:

- Open a tunnel through your router to give your Arduino Yun a publicly accessible IP using a service such as Yaler
- Upload your file to the cloud

If that last method interests you, check out our tutorial on how to build a photobooth with an Arduino Yun, where we demonstrate how to upload pictures to Dropbox from your Yun. Alright, hopefully by now your picture has arrived on your phone. Let’s play with some Arduino code.

Send an SMS from an Arduino Sketch

If all you wanted to do was send an SMS, you wouldn’t need an Arduino. The reason we’re doing this on the Yun is so that we can do some hardware hacking along with our software writing. Let’s write an Arduino sketch that will run our text message sending Python script. Open the Arduino IDE and create a new sketch.
The Arduino Yun has two processors: the “Arduino chip” which controls the pins and interfaces with the hardware, and the “Linux and WiFi chip.” These two chips communicate with one another via the Bridge library. The Process library is how we execute Linux commands from our Arduino code. At the top of the sketch, include the bridge and process libraries:

#include <Bridge.h>
#include <Process.h>

Your sketch comes pre-populated with setup() and loop() functions. We’ll come back to those in a minute. First, let’s write the function to call our Python script. Add this to the bottom of your sketch:

void sendSms() {
  Process p;  // Create a process and call it "p"
  p.begin("python");  // Process that launches the "python" command
  p.addParameter("/mnt/sda1/arduino/send-sms.py");  // Add the path parameter
  p.addParameter("\"Coming to you from the sketch\"");  // The message body
  p.run();  // Run the process and wait for its termination
}

Now back to our setup() and loop(). The setup() runs one time after you upload the sketch to your Arduino. Ours is pretty simple – we’re just going to initiate the Bridge:

void setup() {
  Bridge.begin();
}

Our loop is pretty simple too. We’ll call our sendSms() function, then wait for 10 seconds:

void loop() {
  sendSms();
  delay(10000);
}

Click the checkmark in the top left corner to verify your script. Then click the upload button to send your script to the Arduino Yun. Shortly after you do that, your phone will light up with a text message. Then another. Then another. Once you’ve had enough, comment out the sendSms() line in the sketch and re-upload it to your Yun.

Next Steps

Now you’ve got an Arduino sketch that can trigger a Python script that can send a message to the 6.8 billion cellphones of the world. What does one do with that kind of power? Perhaps you could:

- Use environmental sensors to alert you when the temperature in the fridge rises above a certain temperature
- Build a security system that texts you when motion is detected in your house
- Hook up a webcam and have your dog send you selfies by hitting a big red button

Also, if you’re looking to go further with the Yun, check out some of our IoT Quickstarts for working with Twilio Programmable Wireless and Twilio’s Sync for IoT. Whatever you build with your Arduino Yun – or something else – I’d love to hear about it. Leave me a message in the comments below, drop me an email at gb@twilio.com or hit me up on Twitter. Happy Hacking!
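The “big red button” idea from the list above maps almost directly onto this sketch. Here is a hedged sketch of that variant; the pin number, the wiring (a pushbutton between digital pin 2 and ground) and the message text are assumptions for illustration, not part of the original tutorial:

#include <Bridge.h>
#include <Process.h>

const int BUTTON_PIN = 2;  // assumed wiring: pushbutton from pin 2 to ground

void sendSms() {
  Process p;
  p.begin("python");
  p.addParameter("/mnt/sda1/arduino/send-sms.py");  // the script built above
  p.addParameter("\"The big red button was pressed!\"");
  p.run();
}

void setup() {
  Bridge.begin();
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // internal pull-up: pin reads LOW when pressed
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {
    sendSms();
    delay(5000);  // crude debounce so one press doesn't send a flood of texts
  }
}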
https://www.twilio.com/blog/2015/02/send-sms-and-mms-from-your-arduino-yun.html
CC-MAIN-2020-50
refinedweb
1,692
70.63
We are following the development of Angular 2.0.0 since the beginning and are also contributing to the project. Just recently we’ve built a simple zippy component in Angular and in this article we want to show you how. Want to see things in action first?

Getting started with Angular 2.0.0

There are several options today to get started with Angular. For instance, we can go to angular.io and use the quickstart guide. Or, we can install the Angular CLI, which takes care of scaffolding, building and serving Angular applications. In this article we will use the Angular CLI (rather than Pawel Kozlowski's ng2-play repository), but again, you can use whatever suits you. We start by installing Angular CLI as a global command on our local machine using npm.

$ npm install -g angular-cli

Once that is done, we can scaffold a new Angular project by running ng new <PROJECT_NAME>. Note that the project is scaffolded in the directory where we’re in at this moment.

$ ng new zippy-app

Next, we navigate into the project and run ng serve, which will essentially build and serve a hello world app on http://localhost:4200.

$ cd zippy-app
$ ng serve

We open a browser tab on http://localhost:4200 and what we see is the text “zippy-app works!”. Cool, we’re all set up to build a zippy component in Angular!

Building the zippy component

Before we start building the zippy component with Angular, we need to clarify what we’re talking about when using the term “zippy”. It turns out that a lot of people think they don’t know what a zippy is, even if they do, just because of the naming. Long story short: this is a zippy. Also known as “accordion”. You can click the summary text and the actual content toggles accordingly. If you take a look at this particular plunk, you’ll see that we actually don’t need to do any special implementation to get this working. We have the <details> element that does the job for us. But how can we implement such a thing in Angular? We start off by adding a new file src/app/my-zippy.component.ts and creating a class in ES2015 that we export, so it can be imported by other consumers of this class, by using the ES2015 module system. If you’re not familiar with modules in ES2015 you might want to read our article on using ES2015 with Angular today.

Special Tip: We would normally use Angular CLI to generate a component for us, instead of creating the files manually, but this article focuses on understanding the building blocks of creating a custom component.

export class ZippyComponent {
}

The next thing we want to do, is to make our ZippyComponent class an actual component and give it a template so that we can see that it is ready to be used. In order to tell Angular that this particular class is a component, we use something called “Decorators”. Decorators are a way to add metadata to our existing code. Those decorators are actually not supported by ES2015 but have been developed as a language extension of the TypeScript transpiler, which is used in this project. We’re not required to use decorators though. As mentioned, those are just transpiled to ES5 and then simply used by the framework. However, for simplicity’s sake we’ll use them in this article. Angular provides us with a couple of decorators so we can express our code in a much more elegant way. In order to build a component, we need the @Component() decorator. Decorators can be imported just like classes or other symbols, by using ES2015 module syntax.
If you heard about annotations in traceur before and wonder how they relate to decorators, you might want to read our article on the difference between annotations and decorators.

import { Component } from '@angular/core';

export class ZippyComponent {
}

The Component decorator adds information about what our component’s element name will be, what input properties it has and more. We can also add information about the component’s view and template. We want our zippy component to be usable as a <my-zippy> element. So all we need to do, is to add a @Component() decorator with that particular information. To specify the element name, or rather CSS selector, we need to add a selector property that matches a CSS selector.

import { Component } from '@angular/core';

@Component({
  selector: 'my-zippy'
})
export class ZippyComponent {
}

Next, our component needs a template. We add information about the component’s view. templateUrl tells Angular where to load the component template from. To make templateUrl work with relative paths, we add another property moduleId with a value module.id. To get more information on moduleId, make sure to check out our article on Component-Relative Paths in Angular.

import { Component } from '@angular/core';

@Component({
  moduleId: module.id,
  selector: 'my-zippy',
  templateUrl: 'my-zippy.component.html'
})
export class ZippyComponent {
}

Later at runtime, when Angular compiles this component, it’ll fetch my-zippy.component.html asynchronously. Let’s create a file src/app/my-zippy.component.html with the following contents:

<div class="zippy">
  <div class="zippy__title">
    ▾ Details
  </div>
  <div class="zippy__content">
    This is some content.
  </div>
</div>

CSS classes can be ignored for now. They just give us some semantics throughout our template. Alright, believe it or not, that’s basically all we need to do to create a component. Let’s use our zippy component inside the application. In order to do that, we need to do two things:

- Add our new component to the application module
- Use ZippyComponent in ZippyAppComponent’s template

Angular comes with a module system that allows us to register directives, components, services and many other things in a single place, so we can use them throughout our application. If we take a look at the src/app/app.module.ts file, we see that Angular CLI already created a module for us. To register ZippyComponent on AppModule, we import it and add it to the list of AppModule’s declarations:

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { ZippyAppComponent } from './zippy-app.component';
import { ZippyComponent } from './my-zippy.component';

@NgModule({
  imports: [BrowserModule],
  declarations: [ZippyAppComponent, ZippyComponent], // we're adding ZippyComponent here
  bootstrap: [ZippyAppComponent]
})
export class AppModule {}

We don’t worry too much about the imports for now, but we acknowledge that Angular needs BrowserModule to make our app run in the browser. The declarations property defines all directives and pipes that are used in this module and bootstrap tells Angular which component should be bootstrapped to run the application. ZippyAppComponent is our root component and has been generated by Angular CLI as well, ZippyComponent is our own custom component that we’ve just created. Now, to actually render our zippy component in our application, we need to use it in ZippyAppComponent’s template.
Let’s do that right away:

import { Component } from '@angular/core';

@Component({
  moduleId: module.id,
  selector: 'zippy-app',
  template: '<my-zippy></my-zippy>'
})
export class ZippyAppComponent { }

Nice! Running this in the browser gives us at least something that looks like a zippy component. The next step is to bring our component to life.

Bringing the component to life

In order to bring this component to life, let’s recap quickly what we need:

- Clicking on the zippy title should toggle the content
- The title should be configurable from the outside world, currently hard-coded in the template
- DOM that is used inside the <my-zippy> element should be projected in the zippy content

Let’s start with the first one: when clicking on the zippy title, the content should toggle. How do we implement that in Angular? We know, in Angular 1.x, we’d probably add an ngClick directive to the title and set a scope property to true or false and toggle the zippy content respectively by using either ngHide or ngShow. We can do pretty much the same in Angular >= 2.x as well, just that we have a bit different semantics. Instead of adding an ngClick directive (which we don’t have in Angular 2.x) to call for instance a method toggle(), we bind to the click event directly using the following template syntax:

<div class="zippy__title" (click)="toggle()">
  ▾ Details
</div>

If you’re not familiar with this syntax I recommend you either reading this article on integrating Web Components with Angular, or this article about Angular’s template syntax demystified. Misko’s keynote from this year’s ng-conf is also a nice resource. Now we’re basically listening on a click event and executing a statement. But where does toggle() come from? We can access component methods directly in our template. There’s no $scope service or controller that provides those methods. Which means, toggle() is just a method defined in ZippyComponent. Here’s what the implementation of this method could look like:

export class ZippyComponent {
  toggle() {
    this.visible = !this.visible;
  }
}

We simply invert the value of the component’s visible property. In order to get a decent default state, we set visible to true when the component is loaded.

export class ZippyComponent {
  visible = true;

  toggle() {
    this.visible = !this.visible;
  }
}

Now that we have a property that represents the visibility state of the content, we can use it in our template accordingly. Instead of ngHide or ngShow (which we also don’t have in Angular >= 2.x), we can simply bind the value of our visible property to our zippy content’s hidden property, which every DOM element has by default.

<div class="zippy__content" [hidden]="!visible">
  This is some content.
</div>

Again, what we see here is part of the new template syntax in Angular. Angular >= 2.x binds to properties rather than attributes in order to work with Web Components, and this is how you do it. We can now click on the zippy title and the content toggles! Oh! The little arrow in the title still points down, even if the zippy is closed. We can fix that easily with Angular’s interpolation like this:

<div class="zippy__title" (click)="toggle()">
  {{ visible ? '▾' : '▸' }} Details
</div>

Okay, we’re almost there. Let’s make the zippy title configurable. We want consumers of our component to be able to define how they pass a title to it.
Here's what our consumer will be able to do:

<zippy title="Details"></zippy>
<zippy [title]="'Details'"></zippy>
<zippy [title]="evaluatesToTitle"></zippy>

In Angular >= 2.x, we don't need to specify how scope properties are bound in our component; the consumer does. That means this gets a lot easier in Angular too, because all we need to do is import the @Input() decorator and teach our component about an input property, like this:

import { Component, Input } from '@angular/core';

@Component({
  moduleId: module.id,
  selector: 'my-zippy',
  templateUrl: 'my-zippy.component.html'
})
export class ZippyComponent {
  @Input() title;
  ...
}

Basically what we're doing here is telling Angular that the value of the title attribute is projected to the title property: input data that flows into the component. If we want to map the title property to a different attribute name, we can do so by passing the attribute name to @Input():

@Component({
  moduleId: module.id,
  selector: 'my-zippy',
  templateUrl: 'my-zippy.component.html'
})
export class ZippyComponent {
  @Input('zippyTitle') title;
  ...
}

But for simplicity's sake, we stick with the shorthand syntax. There's nothing more to do to make the title configurable, so let's update the template for the ZippyAppComponent app.

@Component({
  moduleId: module.id,
  selector: 'zippy-app',
  template: '<my-zippy title="Details"></my-zippy>',
})
...

Now we need to change the template of the zippy to make the title appear in the correct place, so let's update the template for the zippy title.

...
<div class="zippy__title" (click)="toggle()">
  {{ visible ? '▾' : '▸' }} {{title}}
</div>
...

Insertion Points instead of Transclusion

Our component's title is configurable. But what we really want to enable is that a consumer can decide what goes into the component and what not, right? We could for example use our component like this:

<my-zippy title="Details">
  <p>Here's some detailed content.</p>
</my-zippy>

In order to make this work, we used transclusion in Angular 1. We don't need transclusion anymore, since Angular 2.x makes use of Shadow DOM (emulation), which is part of the Web Components specification. Shadow DOM comes with something called "Content Insertion Points" or "Content Projection", which lets us specify where DOM from the outside world is projected into the Shadow DOM or view of the component. I know, it's hard to believe, but all we need to do is add an <ng-content> tag to our component template.

...
<div class="zippy__content" [hidden]="!visible">
  <ng-content></ng-content>
</div>
...

Angular has used Shadow DOM (emulation) by default since 2.x, so we can just take advantage of that technology. It turns out that insertion points in Shadow DOM are even more powerful than transclusion in Angular. Angular 1.5 introduces multiple transclusion slots, so we can explicitly "pick" which DOM is going to be projected into our directive's template. The <ng-content> tag lets us define which DOM elements are projected too. If you want to learn more about Shadow DOM, I recommend the articles on html5rocks.com or watching this talk from ng-europe.

Putting it all together

Yay, this is how we build a zippy component in Angular.
Just to make sure we're on the same page, here's the complete zippy component code we've written throughout this article:

import { Component, Input } from '@angular/core';

@Component({
  moduleId: module.id,
  selector: 'my-zippy',
  templateUrl: 'my-zippy.component.html'
})
export class ZippyComponent {
  @Input() title;
  visible = true;

  toggle() {
    this.visible = !this.visible;
  }
}

And here's the template:

<div class="zippy">
  <div (click)="toggle()" class="zippy__title">
    {{ visible ? '▾' : '▸' }} {{title}}
  </div>
  <div [hidden]="!visible" class="zippy__content">
    <ng-content></ng-content>
  </div>
</div>

I've set up a repository so you can play with the code here. In fact, I've also added this component to the Angular project. The pull request is pending here and likely to be merged in the next few days. At this point I'd like to say thank you to Victor and Misko for helping me out with getting this implemented. You might notice that it also comes with e2e tests. The component itself even emits its own events using EventEmitter, which we haven't covered in this article. Check out the demos to see event emitters in...
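Since the event-emitting part isn't shown in the article, here is a minimal sketch of what an output on the zippy could look like. The output name toggled and its boolean payload are assumptions for illustration; the actual component in the Angular repository may use different names.

```typescript
import { Component, EventEmitter, Input, Output } from '@angular/core';

@Component({
  moduleId: module.id,
  selector: 'my-zippy',
  templateUrl: 'my-zippy.component.html'
})
export class ZippyComponent {
  @Input() title;
  // Hypothetical output; emits the new visibility state on every toggle.
  @Output() toggled = new EventEmitter<boolean>();

  visible = true;

  toggle() {
    this.visible = !this.visible;
    this.toggled.emit(this.visible);
  }
}
```

A consumer could then listen for the event with something like <my-zippy title="Details" (toggled)="onToggled($event)"></my-zippy>.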
AltCover 7.2.801

A cross-platform pre-instrumenting code coverage tool set for .net/.net core and Mono

Install-Package AltCover -Version 7.2.801
dotnet add package AltCover --version 7.2.801
<PackageReference Include="AltCover" Version="7.2.801" />
paket add AltCover --version 7.2.801

Release Notes

Q. Never mind the fluff -- how do I get started?
A. Start with the Quick Start guide :

# 7.2.801 (Genbu series release 9)
* [BUGFIX] Don't `ArgumentNullException` when running the `--callContext` for `async` feature off the build machine
* [BUGFIX] Refactor to avoid "System.ArgumentException: An item with the same key has already been added. Key: AltCover.Recorder.g/7.2.0.0" that could occur in some rare circumstances while instrumenting code.
* Other minor build process improvements

# 7.2.800 (Genbu series release 8)
* [BUGFIX] Don't produce invalid IL when `--callContext` indicates a method with a non-`void` return (issue #105, and probably #26 too)
* [BUGFIX] Restore application icons, even if they only show in the `.exe` forms (lost in 7.1.795 if not before)
* [BUGFIX] Add `AltCover` prefix to MSBuild property names `NetCoreEngine`, `NetStdEngine` (global), `InputDirectory` and `OutputDirectory` (target-scoped) in the injected `.targets` file for `dotnet test` integration.
* [BUGFIX] Let the AvaloniaUI based visualizer roll forwards from netcoreapp2.1 onto later runtimes
* Finer-grained control of the coverage summary output
* When `--callContext` indicates an `async` method, then track all calls within the same async flow, and not just in the direct same-thread call stack. **Note** other out-of-thread calls (`Thread.Start`, `Parallel.Invoke`, explicit Async-named method invocation, ...) are not tracked.
* Build with net5.0 SDK (modulo work-round for) in net5-only environments
* Still build `AltCover.exe/.dll` against `net472` for framework support, `netcoreapp2.1` for the global tool and `netcoreapp2.0` for everywhere else
* Still build the GTK2 visualizer against `net472` for consistency
* Still build the recorder at `net20` and use that assembly everywhere (see F# compiler issue noted above) except where a `net46` version is substituted for tracking through `async` calls
* Move unit tests to `net5.0`, as they are not public API

# 7.1.795 (Genbu series release 7)
* [BUGFIX] Make LCov tracefile output follow what is actually generated, and not just what the `man` page says
* As well as interfaces, hide other types with no non-abstract methods (e.g. plain enums) in the coverage
* Show the branch in `public int Ternary(bool select) => !select ? Left : Right;` just like it is shown in `public int Ternary (bool select, int left, int right) { return select ? left : right; }`.
* For <TrackedMethod /> records, add `entry` and `exit` attributes as semicolon-separated lists of the UTC times in ticks at which the method was entered and returned
* For `dotnet test ... /p:AltCoverXmlReport=...`, if the value for the report file path contains one of the following literals
  * $(ProjectName)
  * $(SolutionDir)
  * $([System.Guid]::NewGuid())
  then substitute in the actual values of those build parameters where they haven't already been replaced by MSBuild. Example: Using `/p:AltCoverXmlReport=$(SolutionDir)/_Reports/solution.$(ProjectName).xml` with `dotnet test` of a solution to place distinctly named report files in a common folder.
Also
* Rationalise .net versions to help speed up the build and ease the net5.0 transition
* Clear out some corner case differences between .net core and .net framework builds based on old work-arounds for symbol writing for the instrumented files
* Build the recorder at `net20` only and use the same assembly everywhere
* Move all the core logic from `AltCover.exe/.dll` to `AltCover.Engine.dll`
* Unify the three different entry-point assembly instances into the now shim-like `AltCover.exe/.dll`
* Build everything against `netstandard2.0` except executable shims and unit tests (tests at `netcoreapp3.0` by default)
* Build `AltCover.exe/.dll` against `net472` for framework support, `netcoreapp2.1` for the global tool and `netcoreapp2.0` for everywhere else
* Build the GTK2 visualizer against `net472` for consistency
* `net472` debug builds for published libraries are retained purely for FxCop consumption
* Collect coverage from unit tests at build time too

# 7.1.783 (Genbu series release 6a)
* [Visualizer-global-tool]
* [BUGFIX] Don't NRE when cancelling a File Open dialog when Avalonia uses its GTK binding (Linux)
* Support font selection on Windows natively (monospace fonts only)
* On non-Windows platforms, if Tcl/Tk `wish` is present, use that to perform font selection (choose wisely)
Developing Smart Tag Solutions with Microsoft Office Access 2003

Frank C. Rice
Microsoft Corporation

March 2003

Applies to: Microsoft Office Access 2003

Summary: Learn about an exciting new feature in Microsoft Office Access 2003: smart tags. With smart tags, you can extend your Access solutions by easily adding additional functionality for your users. There is also programmatic support for smart tags that allows you to automate setting or modifying smart tag settings. (11 printed pages)

Contents

Introduction
Background
How Smart Tags Work in Access
Using Smart Tags
Enabling or Disabling the Display of Smart Tags
Setting or Modifying the SmartTags Property Programmatically
Other Pertinent Information
Additional Resources
Conclusion

Introduction

Microsoft Office Access 2003 introduces smart tag technology to database developers. This allows you to expose some of the smart tag functionality seen in other Microsoft Office programs to your users. You can embed smart tags in database fields using a new SmartTags property. As users scroll through form or data access page records, for example, they can click a field's smart tag icon and select an action appropriate to that field. Reports do not support smart tags.

Background

Smart tags provide the unique ability to link text in Office applications (Microsoft Word, Microsoft Excel, Microsoft Outlook, Microsoft PowerPoint, and now, Access) to other business applications and processes. This enables users to have much easier access to data and external functionality than before. Rather than moving among multiple applications, smart tags can add context by passing data from Office programs such as Access or Excel to other applications, including other Office applications, based on the content of a field. This contextual linking provides a great potential for productivity improvement. For example, a smart tag may be enabled in a customer name or address field that enables the user to take action in another application such as Outlook or a customer database. More information about smart tags is available in the Additional Resources section at the end of this article.

There are some differences between the way that smart tags work in Access 2003 and the way they work in other Office applications such as Word or Excel. In the following sections, we will explore and demonstrate in detail this new addition for Access.

How Smart Tags Work in Access

Table fields, as well as form and data access page controls, now have a new property setting for specifying smart tags named, appropriately enough, SmartTags. In addition, query columns inherit this property from the source tables on which they are based. For example, consider the following scenario:

Joe has heard about the new features in Access 2003 and wants to add smart tags to a data access page his company uses for publishing information to the Web. He opens the page in Design view, opens the Property page for the control to which he wants to apply the smart tag, and then clicks the SmartTags property. The Smart Tags dialog box appears, which displays a list of smart tags installed on his system. Joe selects the smart tags that he wants applied to this field and then saves and republishes the data access page. Now, when users browse the page and move their cursor to the field, the smart tag icon appears and they can select the smart tag actions appropriate for the particular field.
Providing smart tags in Access solutions has a number of benefits:

- The technology can be targeted so that just the smart tags you want your users to use are included with the solution.
- Enhanced functionality is available without you having to add code to your solution to implement it.
- The available smart tag actions can be modified or new ones can be added without modifying the Access solution.
- Smart tag technology has been available in Word, Excel, and Outlook since Office XP, so little or no learning curve is involved in using it in Access.

One of the key differences between the Access smart tag implementation and the smart tags found in Word or Excel is that there are no recognizers available in Access. In Word or Excel, the code associated with the smart tag can monitor text entered by the user. When a particular term is "recognized," a smart tag is displayed with actions appropriate to that term. For example, if a user types a person's name in a Word document, a smart tag might recognize that term as a name and display a set of actions, including one which allows the user to look up the e-mail address of the person. In Access, smart tags do not respond based on the type or context of a particular term; rather, once set, the smart tag is either available for a field or not, regardless of the text in the field.

In addition, setting the SmartTags property on a table or query field does not propagate the smart tag to any dependent forms or data access pages. You would need to explicitly set the property on the control in the dependent object, or recreate the form or page based on the updated table or query. However, smart tags are inherited by any new forms or data access pages created from a table or query containing smart tags.

Using Smart Tags

Here is a walkthrough that you can try to enable smart tags in the Northwind sample database in Access 2003.

Note This procedure will modify the Northwind sample database provided with Access 2003. It is recommended that you make a copy of the database and use the copy in this walkthrough. In a default installation of Access 2003, the Northwind sample database is installed at C:\Program Files\Microsoft Office\OFFICE11\SAMPLES.

- Start Access 2003.
- Open the Northwind sample database by pointing to Sample Databases on the Help menu, and then clicking Northwind Sample Database.
- Click Tables on the Objects list, click the Employees table, and then click Design.
- Click the LastName field.
- Under Field Properties, click Smart Tags, and then click the ellipses (...). The Smart Tags dialog box displays smart tags that are installed on your system, including default name and address actions.

Figure 1. Smart Tags dialog box

- Select the Person name check box. Notice that the Smart Tag Details section displays the available smart tag actions. There is also a More Smart Tags button that connects you to the Microsoft Office Web site to find more information about smart tags.
- Click OK to close the Smart Tags dialog box. Notice that in the Smart Tags field, the smart tag selection you made inserted "urn:schemas-microsoft-com:office:smarttags#PersonName". This is a semicolon-separated list (assuming that you selected more than one smart tag) of smart tag names, which use the format namespaceURI#tagname.
- Open the Employees table in Datasheet view, saving the changes to the table when prompted. Notice the triangular-shaped object in the corner of the LastName field. This is a visual indicator that alerts users that smart tags are enabled for the field.
To call an action for this smart tag, rest the mouse cursor over the triangle, and then move the cursor over to the square icon. Click the arrow to the right of the icon and select one of the actions. Close the Employees table.

Figure 2. Smart tag indicator on the LastName field

Smart Tags and Existing Dependent Objects

Next, you can open the existing Employees form to see what, if any, changes resulted from adding a smart tag to the LastName field in the table. The Employees form is dependent on the Employees table for data.

- Click Forms under Objects, and then double-click the Employees form.
- Looking at the form's LastName field, notice that the field is missing the smart tag indicator that you saw when you viewed the LastName field in the Employees table. This confirms that setting smart tags on a table does not propagate those settings to any dependent objects. Close the Employees form.

Smart Tags and New Dependent Objects

Now, you can create a new form based on the updated Employees table to see if the smart tag created in the table is carried over to the form.

- In the Database window, click Forms under Objects.
- Click the New button on the Database window toolbar.
- In the New Form dialog box, click AutoForm: Columnar.
- Click the Employees table in the drop-down list.
- Click OK. The new form is created and displayed.
- Looking in the LastName field, you see the smart tag indicator.
- Switch the form to Design view and then double-click the BirthDate control to open the Property page.
- Click the All tab, scroll down to the SmartTags property, click the field, and then click the ellipses (...).
- Select the Date check box and click OK.
- Close the Property page, and then open the form in Form view.
- Notice that a smart tag indicator is now displayed in the BirthDate field. Save the new form as Employees1.

To summarize so far, an object's smart tags can be enabled by setting the SmartTags property directly on the object or, for dependent objects, recreating the object based on the table or query containing the smart tag.

Figure 3. Form based on Employees table with smart tags

Smart Tag Inheritance with the Save As Option

Next, create a new data access page by using the Save As option and see if the smart tags created so far are inherited by the new page.

- With the Employees1 form in Form view, click Save As on the File menu.
- In the Save As dialog box, click Data Access Page in the drop-down list under As, and then click OK.
- In the New Data Access Page dialog box, keep the default name, and then click OK. The data access page is created and displayed.
- Looking at the page, it appears that the form did not carry over the smart tags. However, rest your mouse cursor over the BirthDate field. Notice that a smart tag icon is displayed.
- Click the down arrow on the icon. The smart tag's associated actions are displayed.

Figure 4. Smart tag actions for a date field on a data access page

Based on the above, we can conclude that smart-tag-enabled controls can propagate from one dependent object to another dependent object but not from the data source to a dependent object.

**Note** As shown in Figure 4, clicking the smart tag icon displays a caption at the top of the list followed by a list of actions. The caption and smart tag actions are contained in a dynamic-link library (DLL). A DLL is a collection of functions grouped into an external file that can be used by other programs. Smart tag information is embedded in the data access page's HTML.
To see this, with the data access page displayed, click HTML Source on the View menu. Looking through the <META> tags, you will see the following code, which provides the same namespace and smart tag name information that we saw in the SmartTags property for the Employees table:

<o:SmartTagType namespaceuri="urn:schemas-microsoft-com:office:smarttags" name="PersonName">
</o:SmartTagType>
<o:SmartTagType namespaceuri="urn:schemas-microsoft-com:office:smarttags" name="Date">
</o:SmartTagType>

The procedures we have looked at so far allow you to add or remove smart tags on an individual control or field basis. You can also enable or disable smart tags for all forms or datasheets.

Enabling or Disabling the Display of Smart Tags

There are two application-level options that allow you to enable or disable smart tags: one to Show Smart Tags on Forms, and one to Show Smart Tags on Datasheets. The settings are available by clicking Options on the Tools menu, and then clicking either the Forms/Reports tab or the Datasheet tab.

Figure 5. Show Smart Tags on Forms option

Figure 6. Show Smart Tags on Datasheets option

Now that you have seen how to work with smart tags through the user interface, let's look at what you can do programmatically.

Setting or Modifying the SmartTags Property Programmatically

In addition to setting the SmartTags property from a Property page or in Design view, you can also set or modify the property programmatically. You essentially use the same methods you use for working with any property in Access. The following samples can be used to set or modify the SmartTags property for tables and forms in Access database (.mdb) and Access project (.adp) files.

Setting or Modifying the SmartTags Property in Tables

Depending on your environment, do one of the following:

.mdb support

Application.CurrentDb.TableDefs(1).Fields(1).Properties("SmartTags").Value

.adp support (this has to be done through T-SQL)

EXEC sp_addextendedproperty 'MS_SmartTags', 'urn:side-bar-com:bar:side#SideBar', 'user', dbo, 'table', Side, 'column', Bar

Setting or Modifying the SmartTags Property in Forms

This is the same syntax you would use to set or modify properties on other objects.

Application.Forms(1).ActiveControl.Properties("SmartTags").Value

Programmatically Executing Smart Tag Actions and Other Tasks

Besides setting the SmartTags property programmatically, you can also add a smart tag, delete a smart tag, execute a smart tag action, and perform other tasks by using the SmartTags collection object. For example, the statement

Forms(0).ActiveControl.SmartTags(0).SmartTagActions(0).Execute

runs the first action for the first smart tag of the active control on the first open form. To avoid receiving an error, the control containing the smart tag must have the focus, making it the active control.

Other Pertinent Information

This section lists more useful information about using smart tags in Access:

Which Controls Can Be Smart Tag Enabled?

Form controls

- Label
- Text Box
- Combo Box
- List Box

Data Page controls

- Label
- Bound Span
- Text Box
- Scrolling Text
- Dropdown List
- List Box
- Hyperlink

Smart Tag Support in Linked Tables

For linked tables, the SmartTags property is editable. However, if a user changes it, it is only changed in the local copy of that property. Editing does not change the property in the database to which the table is linked.

Converting From Earlier Versions of Access

Whether the SmartTags property is preserved for tables, queries, and dependent objects depends on the version of the Access database. In addition, the ability to create new smart tags also depends on the object and the Access version.
The following table defines this relationship in more detail.

1 Originally an Access 2000 database converted to Access 97 from Access 2003.
2 Data access pages aren't available for Access 97 databases.

Smart Tag Support in Access Projects

Access projects only support the SmartTags property for forms and data access pages. Support for those objects is the same as for .mdbs. Access projects store the SmartTags property as an extended property. This means that the SmartTags property cannot be supported in any version of Microsoft SQL Server™ prior to SQL Server 2000. The new extended property in SQL Server is named MS_SmartTags. In SQL Server 2000, there is generally no easy way to edit extended properties, which means that the MS_SmartTags property can only be set on tables using T-SQL or with the SQL Enterprise Manager.

Smart Tag Information Not Persisted on Import or Export

The SmartTags property information is not included when exporting to an XML or XSL formatted file. The smart tag property information is saved when exporting a data access page.

What Is Not Supported?

Access 2003 does not support the following smart tag operations:

- Importing smart tags from Excel.
- Exporting smart tags to Excel versions 97 through 2002.
- Exporting smart tags to Rich Text Format.
- Exporting smart tags to Active Server Pages.

Additional Resources

Below is a list of additional resources where you can get more information on smart tags:

- Microsoft Office XP Smart Tag SDK 1.1
- Developing Smart Tag Solutions
- Using the Smart Tag Shim Solution to Deploy Managed Smart Tags in Office XP
- Deploying Smart Tags with Enterprise Applications
- Smart Tag Installation and Security for Microsoft Office XP
- Deploying Smart Tag DLLs by Using the Visual Studio Installer

Conclusion

In this article, we have explored the new smart tag feature in Access 2003. I have demonstrated how smart tags can allow you to extend your Access solutions by easily adding additional functionality for your users. You have seen how smart tags can be enabled individually or for all forms or datasheets. You have also seen in which file formats and for which objects smart tags are enabled and can be added. We also looked at some of the limitations of the smart tag feature in Access. However, it should be clear after trying some of the procedures in this article that the benefits of using smart tags in Access 2003 far outweigh any limitations.
I am trying to write a program for pig latin. I am not getting the output I expect.

Take the first letter of a "word" and append that letter to the end of the word, with "ay" added to the end as well.

Input: Darrin, what are you doing with 500 and 100?
Output: arrin, hatway reaay ouyay oingday ithway 500 ndaay 100?
Expected Output: arrinday,hatway reay ouyay oingday ithway 500 nday 100?

What's wrong with the output:

- The first word is not appended with ay.
- Since I am appending 'ay', I need to eliminate the extra 'a' if the word starts with 'a' or ends with 'a'. I just need to add 'ay' at the end instead of first letter + 'ay'. For example: if the input is "Alex and allen are 500", the output should be "lexay nday llenay".
- Also, if the starting letter is not a letter of the alphabet, then we should print the same word.

Please help me solve this.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>

static char inputBuffer[100];
static char outputBuffer[100];

void translate (void)
{
    char bufferValue;
    char firstLetter;
    int j = 0, k = 0, m = 0;
    printf("\n");
    while (j < (sizeof(inputBuffer) - 1))
    {
        bufferValue = inputBuffer[j];
        if (((bufferValue >= 'A') && (bufferValue <= 'Z')) || ((bufferValue >= 'a') && (bufferValue <= 'z')))
        {
            if (j == 0)
            {
                firstLetter = bufferValue;
            }
            else if (inputBuffer[j-1] == ' ')
            {
                firstLetter = bufferValue;
            }
            else
            {
                printf("%c", bufferValue);
                outputBuffer[m] = bufferValue;
                m++;
            }
        }
        else if ((bufferValue == ' ') && !( ((inputBuffer[j-1] < 'A') || ((inputBuffer[j-1] > 'Z') && (inputBuffer[j-1] < 'a')) || (inputBuffer[j-1] > 'z'))))
        {
            printf("%cay%c", firstLetter, bufferValue);
            outputBuffer[m] = firstLetter; m++;
            outputBuffer[m] = 'a'; m++;
            outputBuffer[m] = 'y'; m++;
            outputBuffer[m] = bufferValue; m++;
            firstLetter = ' ';
        }
        else
        {
            printf("%c", bufferValue);
            outputBuffer[m] = bufferValue;
            m++;
        }
        j++;
    }
    printf("\n final output: %s", outputBuffer);
    return;
}

int main(void)
{
    printf("enter the string\t");
    fflush(stdin);
    gets(inputBuffer);
    printf("\nInput buffer contents: %s", inputBuffer);
    translate();
    return 0;
}

The real problem is that you didn't see the forest for the trees, which made the implementation awful to read. To add insult to injury, you decided to break the basic rules of code locality (not using globals unless necessary) and DRY (functions to tell if a character is a letter exist in the standard library of any language I can think of; don't reimplement them), which made it pretty much irrecoverable as far as maintenance is concerned.

Now, let's read the task description again:

take the first letter of a "word" and appending that letter to the end of the word with "ay" added to the end as well.

Notice what already stands out because of quoting: word. So, I'd divide the implementation into two distinct tasks:

1. iterating over the input word by word, and
2. transforming each word on its own.

The end result might look like this:

#include <stdio.h>
#include <string.h>
#include <ctype.h>

void piglatinize(const char* in)
{
    static const char* SEP = " .,?"; // word separators
    // Iterate input by words
    const char *sep = NULL, *word = NULL, *end = in;
    while (sep = end,                       // separators from previous word end
           word = &end[strspn(end, SEP)],   // start of word
           end = &word[strcspn(word, SEP)], // end of word
           *sep)                            // iterate until we hit terminating zero character
    {
        int wordlen = (int)(end - word);
        int seplen = (int)(word - sep);
        if (wordlen > 0 && isalpha(word[0])) // word starts with a letter, pig it!
        {
            char firstletter = tolower(word[0]);
"y" : "ay"; printf("%.*s%.*s%c%s", seplen, sep, // separators from previous word wordlen - 1, &word[1], // word without first letter firstletter, suffix); } else // not a real word, just print unchanged { printf("%.*s%.*s", seplen, sep, wordlen, word); } } } int main() { piglatinize("Darrin, what are you doing with 500 and 100?"); } I admit the while loop continuation condition is a handful. If you have trouble understanding this example you might want to read on strspn (and its opposite strcspn) and the comma operator.
I think we could add in the future a wonderful feature to the product: allow any element in a browsed document to become editable. For instance, allow the user to directly modify in the browser the textual contents of a paragraph, or even add elements and style them. Used in conjunction with DOM Load & Save, that could be a killer app, letting us put two feet instead of one fingernail into web publishing and/or collaborative edition. I found that Microsoft started doing something like this during the CSS WG meeting in Cleveland in 1999 when the first mention of the |isContentEditable| proprietary extension to js appeared on their developer's web site. See below for (a) technical data about the feature (b) a demo you should **really** try with IE5.5 or higher. We have a good browser, a powerful DOM, a clever editor based on the browser and the DOM. It should be possible, at quite low cost, to add such a feature to our product (me is perhaps dreaming aloud but, hey, that's a part of what I am paid for, right ?-) (a) (b)

This should be done in a way that embedders can use it. The real issue here is user experience. How do you indicate to the user that certain parts of a document are editable? How do you manage focus and keyboard navigability of editable elements?

I think, if we are to do this, it needs to be keyed off of CSS (-moz-editable or something).

This is what hixie has proposed in his CSS UI draft. A property called user-can-edit, I think. (Goes along with user-can-focus, etc.)

Simon: in my opinion, the only feedback the browser should give is changing the mouse cursor. The rest of the UI should be the responsibility of the server. The 'contentEditable' elements should be out of the tab list and without focus indication (a bit like word processors: there is no focus and you cannot tab from a paragraph to the next). For instance in a groupware application, the host would serve documents depending on the user access level, which could be read-only, partially modifiable or fully editable. The author will have to decide what visual feedback better represents the user access level, and it is nice to imagine that the whole mechanism (making things modifiable and providing feedback) can be implemented by a simple switch of stylesheet depending on the user.

spam composer change

I think what this RFE is talking about is basically what IE does with <DIV CONTENTEDITABLE>, but to implement it on a broader scale. This would indeed be a definite IE killer. Implementing a CMS would be several orders of magnitude easier with this! If this is so, why is this in the Editor: Composer component?

Because the Editor component has been divided in two after the bug was filed, that's all.

Original question sent to Sujay: Is there, or going to be, a way to access the composer components via JavaScript from html documents? Like the execCommand("command") and .contentEditable='true' methods used by Internet Explorer 5.5? These commands are the remains of what used to be Frontpage Express, which have become accessible via JScript, VBScript, Visual Basic and C++.

Added comments: Currently we're using the IE components as a clientside browser-based content editor. Developers set the editability 'freedom' via contentEditable. Editing is done via a Wysiwyg interface run via JScript execCommand('commandname'). Edited webpages are submitted to a database which does the actual content management. Content management is done via database.

*** Bug 122158 has been marked as a duplicate of this bug. ***
If this functionality is ever implemented, I would suggest trying to match Micro$oft's syntax whenever possible. Not because Micro$oft's syntax is especially great, but simply because hundreds of web applications are currently being written with it.

I have, unfortunately, to agree with the last comment... The syntax doesn't have to be the same. Currently working systems that rely on this feature will use other IE-centric features such as document.all, filters, etc., so it's not likely that just by adding some of the syntax to deal with the editor it will automagically work in every CMS around. What's most likely is that if people are interested in supporting Mozilla then they will add a second page and will redirect the user according to the browser, and to design that second page they will use some sample that in the future will be developed to test the implementation by Mozilla developers. So there's no real reason to try and force the development of this feature to be guided by the current syntax of IE. It can use whatever the Mozilla developers like. Just do it wisely and everybody will be happy.

Is this actively being worked on? As for IE syntax, etc., it would help if it was maybe similar, but I don't think it's necessary to be exact. It may be good to even implement just the methods used in the example for this bug. My reasoning is this: if a developer of an existing IE application sees that the example above works, they may consider "supporting" Moz. If they do support it, they will maybe read a little more to get the rest of the functionality working on IE and NS. Unfortunately it's too easy to slap an "IE Only" label on an app. So this halfway compromise may get Moz's foot in the door. BTW the editing stuff in IE doesn't even work on the Mac side, which is typical. At least we know if it's done in Mozilla it will work everywhere.

Yes, I am very interested in supporting the MS interface. We have to see how it fits in with our current embedding re-architecture work.

The most complete implementation of this feature in IE can be seen at. Something that looks very similar to this is what is used in Hotmail. If this could be implemented I am sure that covers most of the features that are necessary.

To call this "the most complete" is an insult to everyone out there who has made superior editors compared to this one. Basically Mozilla has all the features needed to support this; the only thing to do is to bring back the possibility to edit the document using the keyboard. This was available in early previews of Gecko. I have no idea why this was removed. The range methods of DOM Level 2 Range would allow the developer to insert images and change formats and all that stuff.

What I meant, rather, was this is as complete as I've seen an implementation of this particular technology that is available in IE. I am not talking about having an editor as complete as Composer embedded in the page, but more a basic editor so the user can have a WYSIWYG editor within the page, but those changes they made could still be submitted as a form element.

After a review of the embedding work being done by mjudge, we are confident that we will be able to support this, but probably not for 1.0.

Whoo Hoo! A target is set after just a little talking to the developers. This is why Open Source development is great.

In the nightly builds there is a menu item called "Editable mode" - is it possible to add a javascript method on frames/iframes that toggles/switches this mode?
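In hindsight, this kind of per-frame toggle is roughly what the document.designMode switch provides (IE exposed it early on; Mozilla added it in later releases). A minimal sketch of toggling an iframe's editability from script, where the id editFrame is hypothetical:

```typescript
// Toggle edit mode for a framed document via the standard designMode property.
const frame = document.getElementById('editFrame') as HTMLIFrameElement;
const doc = frame.contentDocument;
if (doc) {
  // 'on' makes the whole framed document editable, 'off' switches it back
  doc.designMode = doc.designMode === 'on' ? 'off' : 'on';
}
```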
IMHO it's not a big deal to add this feature and it gives us a lot. Using DOM/js, a much better (say HTML4.x-compliant) editor component than the IE Frontpage toy could be easily done. Please! Jaro

Jaro v. Flocken--I think it's a great idea too. We would love it if you or someone else would submit a patch to address this rfe.

I'm a very poor C++ programmer, more into the HTML/java and javascript stuff. However I could assist a coder during the implementation (building a sample Editor app etc.) Jaro

What do you mean actually? A javascript hook to open the current page in Composer, but within an iframe in a certain page (with or without the menubar)? This could be a good idea, but as is illustrated in the example there should be a way for the HTML on that page to be "captured", that is, allow the iFrame.body.innerHTML (and innerText) to be assigned to a JS variable in the opener page. This way the edited document could be sent back to the server as raw HTML to be stored in a database, file, etc. Also, just editing the whole page would be no good. It would be better to specify what would be edited, meaning document.openInComposer.src=""

In XUL you can add a simple <editor> tag to get a WYSIWYG editor area. At the moment it stops at the sandbox - if one tries to pull such a XUL document from another location or using http: instead of chrome:, a js security exception will be fired. A simple solution is indeed switching a completed document object into the editor mode because we can use frames or - much better: iframe. The source attribute of the iframe tag gets another document; this can be retrieved with *.body.innerHTML or whatever (I'm sure there is a way to get the complete html code.) Please correct me if I'm completely wrong :-) Jaro

What this bug refers to is not using <editor> or <HTMLArea> or anything. It's about allowing arbitrary content in a web page to be editable. Imagine:

<html>
  <body>
    <p>Hello there. Type something here:</p>
    <div editable="true">
      Type here...
    </div>
  </body>
</html>

I thought that the Css3 proposed properties user-input and/or user-modify are supposed to do this. (Maybe user-select and user-focus as well in some way?) These seem to be implemented using -moz-user-input and -moz-user-modify but these only work for input[type=text] and textarea.

Where do you see a menu item for "Editable mode"? I just checked both the MacOSX and Windows builds for today, and cannot find it.

For what it is worth, I would love to see this feature and would immediately use it, but the real issue for me is to be able to submit edited HTML content in a form. So if it worked completely differently that would be OK too - for instance, click a link and a page opens in Composer, click the Composer "save" button and the HTML gets written to a form value, click the "Submit" button to send it to wherever. Whether the page is edited within an iframe or in a separate composer window is immaterial (for me at least).

This is my exact need also. I think the menu item he was talking about is Edit Page. I too have an urgent need, as we are releasing a new feature as a major part of our application and right now it's an IE-only feature.

A <div> would be also great; there is no need for proprietary tags here, I just tried to explain what I mean. I definitely don't mean "auto open" a page in Composer; it's more an embedded widget or area (call it what you like) that enables editing the innerHTML (or source in case of IFrame) in a WYSIWYG style, exactly what <editor> is doing in the composer XUL file.
Many users want to edit parts of pages in content management environments or bulletin boards in the same way as in a word processor (say Word or StarOffice here). Another benefit is storing the edited content using any POST method; I doubt that this is possible/wished in Composer.

Well, that's what the original request is for, exactly what you described. I thought you were originally suggesting a temporary workaround until the full fix is implemented. As of right now, I think the leap from adding the <editor> tag in XUL to implementing all of that in a <div> is more difficult than it seems, because of code issues, not architecture.

I've been reading things about just using composer(-like inline elements), complete with buttons and all. I think it would be really important to be able to control the mark-up with javascript. Sure, it would be easy if we (the webdesigners) don't have to add buttons and javascript anymore, but it would certainly limit the flexibility. One thing I don't see you doing without javascript being able to control the text is adding an image (one that's on the server or that can be uploaded by the user or...).

Created attachment 73094 [details] Example of Implemented Editor in IE

This is the code currently in use by our company to implement this functionality in IE.

removing myself from the cc list

jpd@TeachX.rutgers.edu : the "Editable mode" menu item is not in the browser but in viewer; on windows, try mozilla\dist\win32_d.obj\bin\viewer

But that's not, as Simon pointed out, the main purpose of the current bug. Interesting idea, though. Btw, that menu item does not "toggle" editor mode, it turns editor mode on, that's all. There is no way in viewer to turn it off.

Not allowing arbitrary elements to become editable is totally sucking the monster bug wang. This bug being fixed would be of great use to not only myself, but developers around the world. I will feed you beer by the bucket if some kind heart will consider implementing|correcting this.

See also:
bug 96392 [RFE] Exposing HTML Editor commands to Browser Dom
bug 130796 HTML editor for <textarea> (HTMLArea)
I think both should be duped to this bug or wontfixed in favor of this bug.

This is *not* a dup of bug 130796. They are rather different issues. I think bug 96392 should be dupped against bug 130796.

Simon, can you explain why you think they aren't dups of this bug and why you think they're dups of each other?

Bug 130796 is about <htmlarea>, which is an HTML-capable <textarea> (though closer to an editable <iframe> in implementation). HTML editing is limited to a specific frame in the page. This bug, and bug 96392, are about making arbitrary parts of the content of a document editable via script, or CSS attributes. The implementation would be different.

I'm the lead programmer on a popular OS CMS called WebGUI (plainblack.com). I can't even fathom the amount of effort you guys have already put into Mozilla, but I can tell you that in the CMS world, there is no higher priority than trying to find a cross-platform inline editor. I can tell you that you do not have to strive for compliance with IE or Opera or any other browser provided that the functionality works on all versions of Mozilla. I can also tell you that IE will immediately be replaced by Mozilla as the browser of choice by all of the WebGUI users (including some Fortune 500 companies) if this functionality were implemented. There is no hotter topic than CMS as far as browsers are concerned in any company we've dealt with.
And every one of them wants to be able to publish content in their browsers from Windows, Mac, Linux, and Solaris desktops scattered throughout the company. There are a couple of commercial companies out there trying to make this work (as java plugins), but we've purchased both of them and they both fail in many aspects. No matter which of the bugs (listed above or this one) you follow as an implementation method, you will knock IE on its can, as long as the resultant HTML snippet can be passed back through a form. I cannot wait to see this type of functionality implemented.

JT Smith, would an editable iframe be good enough, or do you really need arbitrary elements in a page to be editable? Those are pretty different things, from our perspective anyway. I'm thinking that if the Java solutions are an option then an editable iframe would be good too.

An editable IFRAME may do the trick, but I think there's really more to it than that. We need methods through javascript to determine what is highlighted, perform some manipulation on that (like add style information), and then replace the highlighted text with our new text. There is a great open source DHTML editor here: that you could take a look at. However, they're using some other IE-only functionality like modal windows. We have another editor that we got from some DHTML site. It works very well too and doesn't appear to use as much IE-specific code. You can see it in action if you go here: and you can download the source from here:

However, the thing that I think would be absolutely ideal would be to see Composer loaded as a form element (instead of a textarea). That way users would have all the power of Composer at their fingertips, and CMS developers could integrate the output of composer into their form post without any trouble. There are a couple of other advantages to using Composer as a form element that I forgot to mention. 1) If all that power were implemented, there'd be no reason to reinvent the wheel. That way developers could spend their time working on something that hadn't already been done. 2) It would give you an edge over Internet Explorer (until they copied it by creating a form element that used Front Page). 3) It would be fast as hell because there's nothing else to download (unlike java and dhtml based solutions).

Composer as a form element could be an option. Composer today works by making a document editable, but all the editing UI is wrapping that editable document. Another option, one that was explored in the now defunct 5.0 codebase, was to have a floating editing palette available that we implemented. Like you said, this has download speed advantages among others. However, what some people want is the IE-style ability to create your own editing UI with HTML that listens for selection as you described. The core functionality could be the same, but we'd have to allow custom or canned UI selection somehow. This is one of the big things both internal and external customers have asked for post 1.0, so now is the time to weigh in on exactly what you need, and want (noting that they can be different things).

I work mostly on the server side components of WebGUI. I've asked our resident DOM/JavaScript guru to put together a list of what he'd need to make things work properly if we built our own editor. I'll post that list as soon as he gets it to me (a day or two).
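As a concrete illustration of the "determine what is highlighted, manipulate it, replace it" workflow described in the comment above, here is a minimal sketch using the standard DOM Selection and Range APIs. The helper name wrapSelection is hypothetical, and this is only one way such scripting could look, not an API that this bug produced.

```typescript
// Minimal sketch: wrap whatever the user has highlighted in a styling element.
// wrapSelection is a hypothetical helper, not part of any browser API.
function wrapSelection(tagName: string): void {
  const sel = window.getSelection();
  if (!sel || sel.rangeCount === 0 || sel.isCollapsed) {
    return; // nothing highlighted
  }
  const range = sel.getRangeAt(0);
  const wrapper = document.createElement(tagName);
  wrapper.appendChild(range.extractContents()); // lift the selected DOM out
  range.insertNode(wrapper);                    // re-insert it, now wrapped
}

// e.g. a toolbar "bold" button could call:
wrapSelection('strong');
```

extractContents/insertNode is used here instead of surroundContents because the latter throws when the selection only partially covers an element.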
I have to say though, that even with the functionality added to create our own editor, I think I'd rather use composer as a form element, because it would be a monstrous effort to try and recreate the power of composer in our own editor. Then again, if we could just extend/alter one of the open source editors that I mentioned previously, perhaps it wouldn't be so big a deal.

If you view the attachment to this bug (, IE 5.5+), you will see how we implemented the editor. One feature that was hard to do for us was to allow certain text to be inserted at the caret via a double-click with the .value (or innerText) of some other element in the page. This is an important feature for us because we would like to use Moz as an HTML editor, mainly for creating form letter templates that then get values swapped in for keywords. This would eliminate the need for an external word processor (Word, etc.) and keep our application entirely browser-based AND cross-platform (all this IE stuff only works in IE for Windows). So we are less interested in customization of UI and more in solid Javascript hooks into the embedded Composer element, especially the ability to figure out where the caret position is and insert something (text, HTML) there.

As mentioned before, there are two different features being discussed here.

1. The editable IFRAME or <HTMLAREA>. This would be a nice feature if it has a toolbar. It would make a great rich text input field for forums etc. IE (windows only) has had this feature (DHTML Editing Component) since version 5.0 (or as a plugin to 4.0) and it is used in various input forms. See for instance for an editor demo using that feature (IE 5.0+ on windows only). The site is in Dutch; click on "demostratie" for a demo, click on "log in" to activate the editor and on the wrench to start editing.

2. The ability to make arbitrary elements editable. IE (windows only) has had this feature (contentEditable="true") since version 5.5. This feature really moves browser technology out of the input form era and paves the way for real applications in the browser. See for instance an editor comparable to ecop using contentEditable="true" at (IE 5.5+ on windows only). Click on one of the images to edit that page.

As I said, contentEditable paves the way for real applications. Another example which shows some of the power of contentEditable is Xopus. Xopus is a browser-based in-place WYSIWYG XML editor. For a demo see: (again: IE 5.5+ on windows only)

Implementing just an editable IFRAME would limit us to input forms. Implementing contentEditable in the same way as it is done in IE would make it possible to make these applications, and applications we haven't even thought of, available for Mozilla users.

Created attachment 79617 [details] Discussion on netscape.public.mozilla.wishlist

Er. Apparently the '-moz-user-input' property already works to some extent.

Hixie: I can't figure out how to use -moz-user-input (and -moz-user-modify and -moz-user-focus) to let a visitor edit a page. Do you know how?

PLEASE PLEASE PLEASE - just do it! Do it with a <HTMLEDIT> tag, do it by making any element editable - do it with proprietary extensions of HTML - do it by emulating IE - do it every which way - but for the sake of having a truly viable alternative to IE, and for the love of the Internet - DO IT - Please. How can I as a web-developer convince my marketing/administration/power guys that Moz is the way to go, when they cannot use the fancy editor they are used to from Hotmail, Yahoo mail etc?
To them that spells 'inferior browser' and I can whine for days about platform independence, standards conformance etc. What do they care? They all use windows anyway. The Internet is moving from a Server/Client model to a P2P model - still more and more websites become dynamic, and less tech-savvy users are beginning to provide content on a massive scale. CMS is THE THING on the WWW part of the Internet right now (much like real P2P is the buzz at the moment); people are blogging, chatting and providing content like never before. I've just implemented a cool editor in IE - and it carves holes in my soul that I can't do it in Mozilla but have to resort to a textarea and bbcode. My mother will never get bbcode, and she'll never understand the connection between raw text with some strange markings in it and a nicely formatted piece of text. Listen to JT Smith - he has got more than just a point. I beg you - please do it - do it for the sake of Linux/Mac users, for the sake of my mother, do it to bash IE and all of its security holes, do it to help rid us all of the Microsoft way of thinking, do it for your own reasons. I swear that if I could - I would, but I am not a C++ programmer, merely a humble web-developer.

One more thing: all this XUL business is really great, and extending the editor is great too - but first things first. If Mozilla is going to make it, it has to cater for the users, then the web-developers and then the application-developers, not the other way around. You cannot underestimate 'sex-appeal' in the user-interface when it comes to swaying users, and a textarea just isn't very sexy!

Just to add my 2p's worth to this RFE. I have been advocating Mozilla over IE for 2 years now at every opportunity. Perhaps the most common request I ever get hit with is - give us an alternative to MSHTML - we want to do rich HTML editing via a web page and post back the content - but without IE. This feature would truly, truly rock my world and the worlds of shitloads of other people. I have been playing with SOAP in Mozilla and the GoogleAPI all day and it is superb. Now if only I could press "Save HTML" and get my rich WYSIWYG HTML saved via a SOAP call to a backend. How cool would that be?

Hi again, I don't wanna start a flame war, but using the DHTML Editor of IExplorer is a pain in the ass; nice html code will be scrambled without logic and for no reason. But, as Esben pointed out: Users first - in my case customers first. And they wanna make changes on a html page like writing a letter in a word processor. I love mozilla and still dream about such a tool for moz. I strongly feel that it can fight back some lost market share just about this feature (Mac users!) Again: most of the functionality is already built in - so it cannot be such a big deal. Please! Just my 2c, Jaro

Hej Esben,

> You cannot underestimate 'sex-appeal' in the user-interface when it comes to
> swaying users, and a textarea just isn't very sexy!

Well we can. It's just a question of priorities. (before flaming me, please check that I am the reporter of this bug). The list of priorities and the prioritization (we need to invent a P12N acronym for this word :-) of these priorities made it impossible to have the current bug on top of the list. If there is a commercial need for this feature I think we could go and 'sponsor' a developer to build what is needed (I saw this happening with the roaming feature)...

Hey Daniel!

>Well we can. It's just a question of priorities.
(before flaming me, please check that I am the reporter of this bug).

Oh - I never flame people :) Unless, of course, they want me to. RC2 is wonderful - it's slimmer, faster and generally a darling in all respects. IE6 still makes a big white flash when rendering a new page - sometimes with Moz you have to actually read the page to be sure it changed :) And the javascript engine is a little wonder. I am doing a rather javascript-intensive project and it is IE 5.5 and IE 6 that come with strange errors all the time; Moz just does what I tell it to do :) Sometimes IE6 and less just plainly "forgets" to load something that you want included, and it changes the readystate of elements before they are ready - sometimes (but not all of the time) it fires onload before the page-elements are ready and so on and blablabla. But these are not things that an average user appreciates. The average user is primarily concerned with features -

User: Can it do this, can it do that?
Dev: well yes but it leaves a giant security hole on your computer!
User: Cool - I always wanted it to do this and that!
Dev: errh... yes but it leaves a giant security hole on your computer, and the mac-users won't be able to do it!
User: Do you think that we could make it do some other stuff too?
Dev: *shrug* yes I think that it is possible

My motivation for working with Moz is a mix of microsoft resentment, and the fact that Moz comes on all platforms (the latter being the most important). Moz makes my work easier, more creative and more enjoyable. Therefore it is very important to me that people actually start using it on a massive scale. That would have a direct and positive impact on my professional life. However people do not understand 'pipelining' and they don't care. Which is not to say that pipelining is not important and not cool - it is. In my book however it comes second to editable elements, because editable elements will have a direct and positive effect with the userbase; a thing like pipelining has a more subtle impact and can therefore wait. I am sure that there are a lot of important issues, and what I am doing here is trying to put some weight in behind the editable thingy - to make that more important, because to me and a lot of other web-devs it IS very important. I really think that it should be done the easiest way - expose the elements as editable to the scriptable part of the DOM, and let the web-dev community take care of the rest. Very soon you will have an abundance of editors with toolbars and all sorts of cool stuff. If you force some sort of toolbar with it you will only restrain web-devs and their creativity. Imagine me saying: "Oh your company prefers IE on the intranet? Well we can work with that, but it puts certain limitations on the project... Have you considered Mozilla? It's free and it is superior and it will even work for those of you who use Mac" :) ?

>if there is a commercial need for this feature i think we could go and 'sponsor' a developer to build what is needed (i saw this happening with the roaming feature)...

There is a commercial need, and it is currently being met by IE. I am only representing myself and my one-man company so I won't be able to sponsor it - sadly... With the advent of OpenOffice.org 1.0 and Mozilla 1.0 (and KDE and Ximian Evolution and...) Microsoft will be getting a run for their money. There is not one thing that an average medium-sized company is doing right now that they can't do on a Linux desktop.
Now what is left is making sure that the advocates get some ammo - and cool, eye-catching features are precisely that sort of ammo - hence I'll reiterate: "You cannot underestimate 'sex-appeal' in the user-interface when it comes to swaying users". And I'll boldly continue: "...and a textarea just isn't very sexy!" And this feature is not only cool and eye-catching, it's actually valuable! Priorities is what it is all about, and I am doing what I can to make this edit-thingy seem more important, because to me it is.

Me too. I feel exactly the same way. And I can add even more weight to this on two counts:

1) I'm the Director of Technology at a Fortune 500 company in Chicago (to cover my butt I won't say which one). Therefore I'm in a position to help mould the direction toward products like Mozilla. Before I came to this company, almost no one had even heard the word open source. They are now using Apache, Linux, Perl, Java, Tomcat, JBoss, MySQL, NetBeans, WebGUI, and many other open source products. And now I'm making a push for the desktop.

2) I'm the leader of WebGUI (plainblack.com), the open source CMS. There are many big companies, universities, and schools using WebGUI, and for the most part they'd drop IE in a second if it would mean they could use our rich edit functions in one common browser among every platform they use. Desktop-wise that includes every flavour of Windows, Mac, Linux, and Solaris you can imagine.

If I had any programmers I could spare at either company, you can bet I'd be sponsoring this myself. As it stands I'll do whatever I can, though I'm not sure what that is. Except maybe to say what I just said.

FYI, we have internal customers as well for this, so it has a high likelihood of happening, although not before 1.0 goes final. Perhaps 1.2, depending on how timing goes and what ends up being implemented/needed first. So please be specific about what you need and what you want ideally, and thanks to those that already have said as much.

What our organization needs at a minimum is shown in the example attachment ( ).

I thought the target was 1.1 ...

Target is as soon as we can get done whatever we need to get done :-) That may be 1.1, maybe 1.2; won't know for sure until we decide on a course of action.

What I'd like from an editable content-thingy
===============================================

1. First and foremost, I'd like it to be there! Silly point perhaps - but I can eat any solution, as long as it gives users a familiar word-processing-like environment in which to supply content.

2. I'd like it to set me free - I am a web-developer; I can judge my users' needs when I talk to them. I'd like to be able to respond to those needs, and not the needs of the majority of browser users in general. Any embedment of the composer should give absolute freedom in regards to toolbars etc. Keybindings should be the standard - I have no need to introduce CTRL+SHIFT+F12 when people are used to CTRL+C (or APPLE-C or whatever) - but all that has to do with the appearance and layout should be optional.

3. I'd like to be able to tell where the caret is, so that I can mangle the text as I please, and put the caret back where the user expects it to be (dynamic creation of URLs is a good example). Why this isn't in the W3C standard is beyond me.
4. The idea of a TEXTAREA with a new property type is in a sense OK, but not as the only option - I mean, if it is somewhat quicker to implement, then perhaps this is where it should start - but I think that the idea of a 'contentEditable' property of div and other block-tags (or any tags at all) is really the way to go. Somewhere inside the belly of the browser's memory it's all just data anyhow, and everything is just a node on a tree - why not be able to access it? I mean, click on an image and turn it into text - or, at the webdev's discretion, pop up some image-choosing functionality. The argument that the TEXTAREA will let older browsers display it as a plain TEXTAREA really isn't that good. If you are doing some server-side programming (you'd have to in order to make the thing work), you can do a quick server-side browser sniff and give people what they need (you'd probably have to anyway - there *are* IE users out there :).

5. All issues in the direction of 'how will the user know that this is editable' and 'how do we deal with linebreaks in a non-block element' are up to the webdevs. They can use this poorly and fail, or they can use it brilliantly and prevail; it's really not the concern of the people making the browser.

So to sum it up - what I pine for is FREEDOM to develop what I need/want. A solution that makes a lot of choices for me is also dictating what I can offer my customers. I'd like to dictate that myself :) This is also one of the charms of the whole OSS scene, that freedom where the only restriction you meet is that of your own limitations. My limitation is that I haven't got the time nor the skills to just download the Moz source and start hacking it myself. I'd be happy to participate in a more implementation-centric discussion, but the headline for my wishlist is FREEDOM!

I add my votes to Esben's post (esp. to point 1). A div (with css-p) or textarea would be cool. Access over the DOM to the inner content of this node is a must, to make fancy dialog boxes/buttons etc. (I think this is already the case in Composer). Moving the caret with the arrow keys; CTRL+X/V/C; DEL; BACKSPACE; PG UP/DOWN; POS1/END. Optional: an event/command for paste/copy from the local clipboard would be great (for the usual toolbar) -- but this can be a security issue. Jaro

Esben: #3 has a W3C standard. Try this:

  var range = window.getSelection().getRangeAt(0);
  var nodeAfter = range.startContainer.splitText(range.startOffset);
  nodeAfter.parentNode.insertBefore(document.createTextNode('hello'), nodeAfter);

range.insertNode(document.createTextNode('hello')) should work too, but it has a bug (I think).

A lot of things can already be done through javascript. The most important missing feature is a visible caret. And it would be nice if user input handling were taken care of by Mozilla. (Moving the caret one position to the left when the left key is pressed can be done with javascript, but you don't want to script those things.) These two features should be enabled when the CSS property -moz-user-modify is set to read-write, for any element.

> The most important missing feature is a visible caret.

Have you tried caret browsing mode? Press F7 and you have a visible caret in the browser window. It enables you to select stuff and move the caret with arrow keys, but you can't edit anything.

Caret browsing: That's sweet. Shouldn't there be a menu item for this in Edit or View? When was this added? Is there any way to set this using scripting?
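For reference, a slightly fuller version of the Range trick above: a minimal sketch that only handles the simple case where the caret sits inside a text node (the function name is just illustrative):

  function insertAtCaret(text) {
    var sel = window.getSelection();
    if (sel.rangeCount == 0) return;          // no caret or selection anywhere
    var range = sel.getRangeAt(0);
    var container = range.startContainer;
    if (container.nodeType != 3) return;      // only handling text nodes here
    // Split the text node at the caret and insert the new text in between.
    var nodeAfter = container.splitText(range.startOffset);
    nodeAfter.parentNode.insertBefore(document.createTextNode(text), nodeAfter);
  }

Calling insertAtCaret('hello') then behaves like the snippet above, but degrades quietly when there is no usable selection.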
Look at for editing text in Mozilla. This is very basic, but I think a good script to start with.

Sjoerd's example combined with caret-browsing is very exciting. It goes to show that Mozilla is actually not far from there. I agree, however, that scripting key events in order to produce content is not optimal. Funny thing, that caret-browsing; it may be buggy though, try it here: My caret runs around inside the same element and cannot escape (I'm using RC2) - I am not sure whether to file this as a bug, and I request that some of you go have a look and decide if it is a bug. It seems to me that all the functionality needed is already in Mozilla, and it only has to be fitted together?

To turn on browse-with-caret (read only) mode:

  var prefs = Components.classes["@mozilla.org/preferences-service;1"]
                        .getService(Components.interfaces.nsIPrefBranch);
  prefs.setBoolPref("accessibility.browsewithcaret", true); // or false to turn it off

Most of the "caret browsing gets stuck" bugs are actually editor bugs. The same bugs appear while typing an e-mail message or editing a page in Composer, and will appear in <div style="-moz-user-modify: read-write; height: 5em; border: 2px inset #ccc; overflow: auto;"> or <htmlarea>. If you find "caret browsing gets stuck" bugs, please file them after searching the Editor components, because that will make Composer and -moz-user-modify suck less at the same time it makes caret browsing work better. By the way, have you tried the "edit page" bookmarklet in IE?

  javascript:void(document.body.contentEditable = 'true')

It has many of the same problems as Mozilla Composer on complex web pages, because it's intended for creating simple content. Many bugs that show up while editing forms and tables on cnn.com don't show up often when you're typing an e-mail message. That doesn't mean we should ignore bugs found while editing cnn.com (they do interfere with caret browsing), but those bugs don't prevent us making -moz-user-modify work reasonably well.

> Most of the "caret browsing gets stuck" bugs are actually editor bugs.

I believe most if not all of these types of bugs are actually in the "selection" component (not in an editor component).

If simply specifying a content type of text/html on a textarea element brought up the composer for that textarea, it would be a killer. It's easy to use for web developers, and it's easy to implement in different ways in different browsers (i.e., call up an external app in a browser-only browser). Browsers not supporting it would give the raw HTML to edit, so it is transparent to older browsers (it should require quoted HTML in the textarea to maintain that backwards compatibility). Thanks /LaH

Hey folks, I've been following this bug with interest for a while now. I have a client who asked for a way to edit and/or add to pages in house, so I recently embarked on the journey to furnish them with such. I don't like doing platform-specific development, and generally do all my development in Mozilla, but after much research it seemed that IE was the only browser which had all the functionality I needed. I've never done any "IE only" development before, so it was an interesting learning experience. I've developed an application which basically lets certain parts of a page be set editable, wraps them in a WYSIWYG editor with standard toolbar buttons to do the formatting, and publishes the changes back to the server. It works a treat if I do say so myself.
It uses W3C standard DOM wherever possible, and for those areas where it isn't possible I wrote thin wrappers which do the right thing depending on which browser is used. All in all, it turned out much better than I ever anticipated. The thing that's really great (or really a shame, depending on your viewpoint) is that it's almost totally cross-platform. I did 90% of the development work in Mozilla (and let me say now that development in Mozilla, while not perfect, is at least an order of magnitude less painful than doing the same thing in IE); save for the actual editing of text, everything looks and works exactly the same in both browsers, including instantiating the editor and publishing changes. All I'm missing for it to be completely cross-platform is the ability to make a block editable in Mozilla.

I've thought a lot about what form I'd like that to take. I think I agree with a previous poster that exposing the basic functionality and letting the developer build the actual editing application is the best way; it has the maximum flexibility, and it's just not so hard to do that we need the Mozilla team to present us with a fait accompli. I know there has been a lot of discussion about simply putting in a tag and having an instance of Composer show up, but that would actually be bad for me. It's much heavier than I need or want, and almost certainly wouldn't give me the control I need. What I'd like is to be able to set a div, say, to editable and be able to do operations on entered text. What that would mean is that I hit a key, the letter shows up and the caret moves. Behind the scenes, I'd like to be able to either set formatting on highlighted text (through an API a la IE, perhaps) or wrap it in tags of my choosing and have it rendered accordingly (I've actually managed to do this in Moz, but it's a little clunky because of problems with the range object, and it's not worth much if I can only use it with existing text). As I say, I've gotten everything I described above working, with the exception of entering text, and I have a feeling that if I wanted to write some really hairy Javascript, I could probably make that happen too; I may just try it.

The whole functionality is so damn close I can taste it, and it's really frustrating not to be able to carry through. It's especially frustrating because I have clients who would be willing to buy this from me tomorrow if it worked on Macs, and it doesn't look like MS intends to expose this functionality there anytime soon. I know you want to focus on end users, but I can move a fair number of people from IE to Moz immediately if you just make this functionality available. Moz doesn't have to do all the work for me, it just needs to make it possible. I'll take care of the rest. Before I go, I'd also like to say that I've more than once stumbled across functionality in Moz that I didn't know was there and had never seen documented; if something I'm wanting is already available (in any form, as long as it can be used without hacking C++) please let me know where to find info on it.

Can I just echo the sentiments expressed above? I also have done a fairly complete editor that works in Mozilla. All I need to polish it and make it better than IE is to be able to insert text at the caret. I do not want an embedded Composer - although I could put that to good use as well.
I want to be able to set a DIV on the page as editable, so that my keystrokes will insert text at the caret, or so it can take a selected range of text and wrap or unwrap tags around it (see the sketch further below). Even if it were only possible to calculate where the caret was in a DIV declared as editable, I could hack some JS to insert my keystrokes. Incidentally, the mshtml feature of IE is kinda nice, but it has a weakness that Mozilla could exploit with the aforementioned caret enhancements. That is, the code the various execCommand() calls produce is not valid XHTML. So, if you are trying to produce well-formed XHTML, mshtml is of little use. You have to resort to using the textRange object in a textArea or in an editable DIV. We already have a range object in Mozilla, and we already have getSelection() - we just need to go that last half step and allow an editable area where selections and ranges and caret insertion are possible. As the previous posts have said, give us that and we will do the rest. Hell, I will even stick my editor on Mozdev and let everyone finish it and use it.

We needed contentEditable now to build a proof of concept of our Xopus editor on Mozilla. So we have made a very crude implementation ourselves. See This implementation is by no means complete nor stable, but only meant as a case study. We still hope contentEditable will be part of Mozilla 1.1. Comments and additions are welcome.

Just previewing this brings tears to my eyes - *snort*... Can't wait to download and play with it. Esben Maaløe

*** Bug 147575 has been marked as a duplicate of this bug. ***

I'm just gonna agree with all of the above; we have an IE-only app for editing web sites that we would love to make platform-independent. Please, please, please put this in soon.

The Xopus implementation is very close to my version. It is obvious that very little really needs to be done to get this working. For me the following is enough to allow an explosion of HTML editors to appear:

1) Introduction of a contentEditable tag.
2) The ability to identify the caret position inside an area where contentEditable=true, in relation to the area. E.g. report that the caret is 10 characters along in the string representation of the content.

*** Bug 150609 has been marked as a duplicate of this bug. ***

See also bug 151765, which asks for a way to open Composer using javascript in a web page (4xp). I prefer the method in this bug: allowing the page to make a div editable, and giving the page a way to ask the browser for a hovering composer toolbar for functions like Copy and Paste that can't be done using javascript.

I have recently implemented a simple XML editor in Mozilla using -moz-user-input styles. This CSS3 approach shows great promise. I have sent a copy of the editor to Ian Oeschger for him to review. Unfortunately the CSS3 -moz-user-input styles are only partially completed. Missing are:

1. A caret in input-capable areas.
2. Tabbing between input-capable areas.
3. A keyboard interface for character insertion/deletion.

I have managed to get around some of these limitations in the prototype. I have been persuaded to place this project on MozDev with the intent of adding XML-dialect-specific editors/viewers, e.g. DocBook, XUL, SVG, XSL etc. The editor also currently has a partial implementation of RELAX NG for document validation. I really need some progress to be made in resolving the above problems before any further progress can be made. Any ideas of what and when?
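Picking up that "last half step" point: the wrap-tags-around-a-selection part can already be sketched today with the standard Range API. A minimal illustration; it assumes the selection stays within a single text node, since surroundContents() throws when a range partially selects a non-text node:

  function wrapSelection(tagName) {
    var sel = window.getSelection();
    if (sel.rangeCount == 0) return;          // nothing selected
    var range = sel.getRangeAt(0);
    // surroundContents() extracts the range contents and re-inserts
    // them inside the new element, in place.
    range.surroundContents(document.createElement(tagName));
  }

  wrapSelection('strong');  // e.g. mark the selection as strong emphasis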
Would someone like to bump the target milestone? Sounds like "Future" to me. No point in having it set to a milestone that's already passed.

I run a Linux shop. I do development in PHP/MySQL CMS tools mostly. However, in looking for a way to allow users to do basic WYSIWYG, I was damn impressed with what you could do with IE and javascript (that isn't possible in Mozilla). Something like this would kick ass: the ability to specify a textarea which would allow folks to write as they would in a word processor. Mike

Wondering about the discussion on dhtml, htmlarea, htmledit etc.: I agree with Laurens van den Oever and Peter Wilson, XML is the way forward. Microsoft uses execCommand in contentEditable areas. execCommand('command') places tags around or opens a tag depending on the command. For instance, execCommand('Bold') places <STRONG> tags around the selection, or opens the tag for new inserted content and closes the tag when the command is called again (a sketch follows below). These commands are browser built-in (Dhtmled.ocx, IE5+), so you're stuck with what Micro$oft grants you. It would be more useful if execCommand could invoke commands based on the DTD used by the document you're editing... JP

When is the release date (when Mozilla starts supporting functionality similar to IE's contentEditable=true property)?

There is no release date yet; we're scheduling the work and figuring out what exactly we'll do in the first go-round and who will be doing it.

I also hope for this. My reason: at this time it is not possible to use Mozilla for ANY content management system which uses a web-based WYSIWYG editor. ALL the CMSs I know only support Internet Explorer. You also cannot develop a WYSIWYG editor for Mozilla with JavaScript or DHTML, because there is no engine which you can use for this. See my attachment at: Bug 163249

*** Bug 163249 has been marked as a duplicate of this bug. ***

Having the same view as stated in comment #89. This is highly needed, as leading content-management-system vendors are all facing this problem, and major companies won't decide for Mozilla if they are not able to use their web-based CMS systems with it. Major European companies have asked about such a beast as well, so marking this topembed. If this was then wrapped in an ActiveX control, it could be embedded in IE for a consistent cross-browser tool too.

Looks like someone just put a bounty out on this bug... or its kin... the bounty includes a monetary sum (about 1/4 of the way down; search for 'I Just Can't Take Internet Exploder Anymore').

That's interesting! I've made editor embedding topics such as this (tracked by bug 157128) a major priority during the next few months (and didn't need any monetary incentive :-) I studied the MS/IE HTML editor interface carefully today, and we will have no problems supporting this once the editor embedding goals are fulfilled; but if you design an editor to our interfaces, there will be much better capability, of course!

I would be happy to throw in a case of beer to add to the bounty. I'm so sick of recommending that our clients use Explorer if they want to use our web apps. And they still don't work on Mac, only under Windoze. Whoever resolves this bug is making inroads against two monopolies. They certainly deserve all the support we can offer.

So contentEditable doesn't work in IE/Mac? This editor functionality in IE definitely does not work on IE Mac.
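For reference, the IE model JP describes boils down to something like this (IE 5.5+/Windows only at the time; the toolbar wiring is just illustrative):

  <div id="editor" contentEditable="true">
    Click here and type; the region behaves like a small word processor.
  </div>
  <button onclick="document.execCommand('Bold', false, null)">Bold</button>
  <button onclick="document.execCommand('Italic', false, null)">Italic</button>

The execCommand vocabulary is fixed by the browser, which is exactly the "stuck with what Micro$oft grants you" complaint above.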
This is a real stopper for us, because we would like to do a fairly large rollout of MacOS X. Mozilla/Netscape is preferred, but IE would have been adequate.

*** Bug 166987 has been marked as a duplicate of this bug. ***

Changing alias per cmanske, akkana, glazman.

The content-decoding issue should not be an issue. You spool the raw stream, then once you know the target location, you move the file and pass it through the chain of decoders (and this should all be happening on a non-UI thread, of course). If the decoders are HTTP-specific then that should be changed, since other protocols may well want to do content decoding. Er, wrong bug, my bad.

Midas, because everything it touches becomes (Netscape) Gold :-)

Sounds interesting. 1.1 is out the door.

> ------- Additional Comment #106 From Alex Vincent 2002-09-12 18:36 -------
> Sounds interesting. 1.1 is out the door.

Why? There is nothing about contenteditable in 1.1 or 1.2, or at least nothing I can see from the release notes. Is the problem making it work on all platforms? If so, make incremental releases that work on whatever platforms allow; hopefully those that don't (Apple...) will expose things that allow their users to enjoy the same benefits as Windows users. -Rob

I EARNESTLY and completely vote for this functionality to be added to Mozilla ASAP!!! This would be nirvana for our application - but it MUST work across platforms. We currently use IE contenteditable for our CMS to allow users to edit their page content. I implore you to add contentEditable to Mozilla. It will have a much greater impact than any other single thing that you can do, and it will pay huge dividends in user base and increased downloads of Mozilla. I have several thousand customers alone who would be encouraged to download Mozilla in order to edit the CMS content on Macs.

Yes, yes! This is the holy grail of the editor. We are working hard on this! We need to fulfill basic embedding requirements such as "removing editorshell" (architecture restructuring) and conversion to a command-oriented API; then we'll be able to make the jump to "in place editing" or, what I like to call it, "editing everywhere." Added nsbeta1 keyword.

_____________________

This feature will have numerous benefits leading to increased Mozilla browser usage and added value for embedded browsers:

1) Webmail sites (AOL, Netscape, Yahoo, etc.) can easily offer parity with IE's functionality, where you can compose messages with a WYSIWYG editor.
2) AOL Hometown and other sites that host web pages could include HTML editing capabilities in their page creation GUI.
3) Companies that manage their intranets and websites using content management systems can support Mozilla instead of or in addition to IE. Several "evangelists" inside top 100 companies are trying to get their IT depts. to support Mozilla instead of IE or Netscape 4 only. This is another critical feature that will help them in this effort. It is important to get more people using Mozilla inside companies so they then want to use it at home etc.

I know I'm preaching to the choir here! This comment from a content management vendor: Everyone from Vignette to Broadvision uses IE for their online editing tool because of "contenteditable". There is nothing else available for this task short of a Java applet. Mozilla's adoption of this feature should naturally be embraced by this market, as well as countless developers all around the world who expose editing functionality for their clients in this manner.
Mozilla would allow this functionality to be exposed on the Macintosh platform, which is used by better than 90% of the design/ad agencies, which would further drive adoption.

*This feature will have numerous benefits leading to increased Mozilla browser usage*

I really wonder if the Mozilla folks know how true that statement is. This feature has been a long time coming, and I've seen a lot of statements along the lines of "we're concentrating on features useful to end users instead." Right now I'm recommending IE to my end users, many of whom prefer Netscape 6.x, because of the lack of this one thing. Not only is Mozilla not gaining users, it's actively losing them. The combination of browser-based editing and Mozilla's cross-platform nature is a killer app, people. There is a huge opportunity here, for both Mozilla and developers. I've been following this with increasing frustration (I have one client who's a Mac user who asks me every few weeks when I'll have something he can use) and I've about given up hope. I wish I could help, but I've tried several different ways of doing this in Moz (as well as the Bitflux and Xopus editors), and nothing works nearly well enough to actually be usable by my clients. I assume it requires hacking in the core Moz code and I'm no programmer, so I'm useless there (go ahead, I gave you the opening, smack me down ;). Give me contentEditable functionality and I can (and will) have my editor ported in less than a couple of hours. Once that's done I will have recommended IE to my clients for the last time. I *know* I'm not the only one that's true of. If you want to see an overnight explosion of new end users, it's right there waiting.

I hope the Mozilla developers are listening to this latest round of requests. It is really true that this is the thing that could really drive adoption forward. The request should be given much more priority. We have many designer clients who do not have Windows machines. We keep a couple of laptops around to loan out to them. One of our designers who is on his 3rd project with us finally broke down and bought VirtualPC (we have been telling him Mozilla will have this functionality soon...). The people who are requesting this feature should not be counted as single users (if that is the determining criterion for working on bugs). Rather, they should be counted as 10s or 100s or possibly 1000s of users, because they would directly cause the spread of the browser. When this feature is implemented the browser will grow much stronger roots. Is there any kind of a status report from the Moz devs on where this feature is? best, -Rob

Likewise, we have a site that is in use by thousands daily, and a feature that I am not comfortable really pushing unless it can be offered cross-platform. I would be behind the feature if I could recommend Netscape/Mozilla for it.

As one of the main editor engineers working for Netscape, I will again say we are working very hard toward this goal. I'm very sorry we don't have any specs up, but basically we plan to support the MS/IE interfaces. I'm talking with our own evangelism people today and also with the DIG group on this issue. We are making good progress in the 1.2 timeframe to have major editor embedding work done that will enable this to proceed.

Charles, that is great news. I'm sure there are many like me subscribed to this bug who over the past couple of months have seen many users subscribing to the bug, but haven't really gotten any update on the fact that developers were actually working on it.
The last few posts, I'm sure, will give many of the subscribers a lot of hope that we will soon see this appearing in nightly builds and releases. Thanks again for your efforts, developers.

Great news, Charles! Just to clarify though: are you saying that the contentEditable attribute is scheduled for inclusion in the 1.2 final (October 30, 2002)? Or are you saying that you will have made the necessary modifications to the editor that will ALLOW contenteditable to be worked on and included in a future release? If the latter is the case, what is the projected date that we will see this in a release? And perhaps when in a nightly build? Someone please alert the list when this appears in a nightly build!!!

The "contentEditable" attribute will probably not make it by 1.2, unless we are exceedingly lucky. And don't worry! You'll know when anything appears in a nightly build. We need to investigate some security and events/focus issues besides the basic editor work we are now doing. We'll post time estimates as soon as we can.

Created attachment 102403 [details] Baseline content editable features list

For all people cc'd who want to use this functionality, you can help in a couple of ways:

1) Please read this attached file, which contains the potential base set of features that would be available.
2) Attach a text or HTML file with any additional "must have" and "nice to have" features that would make content editable acceptable for you. Any example URLs that illustrate your requested functionality would be really helpful. Or you could attach additional screenshots illustrating features you're using in your systems today... or whatever you think will help. (Please make sure to use Create an Attachment instead of writing in Comments, which could very well get lost in this long bug.)

There's no guarantee all requirements will be met, but your input will provide great direction. Thanks.

I think the features listed in the attachment are heading in the wrong direction. By simply allowing javascript access to the selected text or the word under the insertion point, and the ability to wrap/unwrap any arbitrary tag around that text, or find out what tags enclose that text, it would be easy to implement all these features and more in javascript. Then a simple .js file could be included to simulate the Microsoft functionality for those who want it, but those who want to do more (like me) will have the power to do it.

I agree that inserting arbitrary text or HTML at the current cursor position or around the current selection is a must-have feature. But I also think that providing easy methods for performing the functions mentioned in the previous attachment isn't a bad idea, especially if it facilitates compatibility with currently implemented web apps (that use the M$HTML functions).

We at Bitflux () don't need all those HTML capabilities either, but I certainly see use for them for a lot of people. But what we really would like is to get rid of our now-written-in-js-character-inserting-with-caret-mode :) It works, but a lot of people complain that it's not the "Real Thing". For example, we can't simulate the common use of the "end" key or going to the beginning of a line. Therefore what we (not speaking for others) really need is just good character editing possibilities (typing, deleting, moving around and maybe copy/paste) within dedicated nodes. If we can then still use the standard JS/DOM stuff within those nodes, we're all more than happy and satisfied.
And we should be able to use it on non-HTML-namespace nodes as well, please :) Just my 2 cents, and thanks for your great work.

Ditto the Bitflux guys et al. That is all we are looking for too. We would not use anything in the list posted a few messages ago.

---------

One thing I like about IE is the ability to click an item and the control (outerHTML) is selected. When you click again you are inside the element. Also, I would want the ability to tab through the nodes. Example:

  <div class="article">
    <h1>Title</h1>
    <div class="section">
      <h2>Sub title</h2>
      <p>Some <strong>para</strong> stuff</p>
    </div>
  </div>

I can click on the article div, which gets selected - when I hit tab it goes to the H1, next to the section div, then to the H2, then the p, then the strong (a rough sketch of such a traversal appears below). The tabbing selects the entire node/element. If the enter key is pressed, focus goes inside the selected element. I will get an example up tomorrow sometime. sleepily, -Rob

Created attachment 102426 [details] 2 extra wishlist items

I agree with the sentiment that it would be better simply to expose a usable API for caret browsing and so on, because that will better leverage the skills of the wider Mozilla community - i.e. javascript coding and UI design. However, I'd rather have a suboptimal solution that is largely compatible with IE contenteditable in the next few months, than an optimal solution in a couple of years. I don't care if it's written in *machine code*, if we can get it soon :-) Thanks for the feedback, Netscape people.

One of the greatest features of the MSIE WYSIWYG editors is the capability of pasting rich text from the clipboard -- it is particularly important for cut-n-paste from MS Word. This is needed for those site administrators who don't create text on the webpage, but transfer texts to the web. If it is possible (at least on Win32) to implement such a feature, it will be great.

> One of the greatest features of MSIE WYSIWYG Editors is a capability of
> pasting richtext from a clipboard

If this gets implemented, you must be able to _turn it off_! We have so many problems where clients paste their text in Times New Roman, and then a piece of Times text appears on a Verdana site.

> We have so many problems, where clients paste their text in times new roman, and then a piece of Times text appears on a Verdana site.

It is so simple to remove any unwanted tags and attributes... I can tell you about it.

> If this get implemented, you must be able to _turn it off_!

Yes, you are right.

Created attachment 102459 [details] contentEditable API

I have been working with contentEditable since it was first introduced. My company produces a commercial CMS that uses this functionality. Here's a suggested API for Mozilla contentEditable:

Why not just implement it the same way as IE?

Created attachment 102462 [details] contentEditable wishlist

Attached are my thoughts on a contentEditable implementation. I do want to point out that I don't want or need Composer- or FrontPage-level functionality. The page layouts and styling will already be set; I just need to be able to give my clients the ability to change some of the content within the site, and give them some functionality as far as structural markup goes. They don't need to decide that a heading is +1 font size in dark red; they just need to be able to mark it as a heading. I should also probably point out that one of the reasons I'm so hot to have this is that I've found that Mozilla is a vastly superior development environment to IE.
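Returning to Rob's tab-through-nodes idea: a rough sketch of "find the next element in document order", using nothing beyond DOM Level 1 (the function name is just illustrative):

  function nextElement(node) {
    // First try to descend to a child element.
    for (var c = node.firstChild; c; c = c.nextSibling)
      if (c.nodeType == 1) return c;
    // Otherwise walk up until a following sibling element is found.
    while (node) {
      for (var s = node.nextSibling; s; s = s.nextSibling)
        if (s.nodeType == 1) return s;
      node = node.parentNode;
    }
    return null; // ran off the end of the document
  }

Starting from the article div in Rob's example, repeated calls visit the h1, the section div, the h2, the p and the strong, which matches the tab order he describes.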
I actually do most of my development in Moz, then tweak as necessary to make it work in IE. Having to tell people they need IE to use the end result just kills me (I just had a client yesterday tell me she'd use IE if it was absolutely necessary, but she'd really rather find another way). Cross-platform in-browser editing is my current holy grail, and you guys are so close. I really do believe making this available will make Mozilla *the* browser to use for content management, and given that there is no question in my mind that this is where the Web is headed, well... hopefully it'll come soon. Thanks folks.

Created attachment 102484 [details] contentEditable wishlist by Erik Arviddson

Here is my list of wishes

Created attachment 102485 [details] contentEditable wishlist by Erik Arvidsson

Here is my list of wishes

Created attachment 102509 [details] GED's Wishlist for Moz Editable content

Also has some ideas of how one might implement editable content... IE emulation through Javascript, etc...

Created attachment 102540 [details] Requirements for in-browser, user-editable DOM nodes.

A summary from the attached document: an emulation of content-editable is *not* required, just the building blocks to allow developers to make something similar. A proper implementation of something like the (now-defunct) CSS3 UI WD with the above features would be fantastic.

Comment #130 and its attachment say exactly what I was trying to say, but much clearer. I really think this would be the way to go. And as someone already said, you could implement IE's functionality in javascript and automatically load that script when a page designed for IE is viewed.

I will be seemingly one of the few voices *against* this proposed "feature", mostly because suddenly adding contenteditable attributes will not allow pages to validate as valid HTML. In addition, while we're adding contenteditable, can we please get <blink> back, and maybe <marquee> since IE has that, too? All that I ask is that everyone stop for a second, and think of the standards, and the possible repercussions of another browser war.

> I will be seemingly one of the few voices *against* this proposed "feature",
> mostly because suddenly adding contenteditable attributes will not allow pages
> to validate as valid HTML.

You can add these attributes dynamically via JS for contenteditable user-agents, no?

> I will be seemingly one of the few voices *against* this proposed "feature",
> mostly because suddenly adding contenteditable attributes will not allow
> pages to validate as valid HTML.

Alright, I take that back; my main reason for being opposed to this idea is the factor of playing "catch-up" with IE. Please don't do it. Stick to the W3C specs, and only the W3C specs. Otherwise, in 8 years, we'll be stuck rewriting another browser from scratch. Don't get me wrong, I *like* this idea of being able to edit a page's contents directly in the browser, and I've played with Xopus and have been very impressed with what it can do. However, I can't agree with a browser (or the group of people who are responsible for it) that adds a feature for rendering or interacting with a web page that is not in the W3C's specs, or tries to compete with another browser. Say no to war on IE.

I don't see how the goal of this RFE - allowing a page to make arbitrary elements user-editable in the browser - violates HTML validity or a spec.
IE's use of a contentEditable attribute does this, but there is no reason that Mozilla can't expose a simple editing API that doesn't violate existing specifications. Mozilla's mission is not to only implement things that are standardized. There is no standard for how to create an e-mail compose box (as far as I know at least :) ), yet Mozilla still implements one. Likewise, there is no standard for how an HTML (or XML) editor should work. There is a standard for its output, so provided that the result of using Mozilla's implementation for this RFE is valid HTML/XML, this should be a non-issue.

> ------- Additional Comments From sjoerd_visscher@hotmail.com 2002-10-11 01:35 -------

A little less than a year ago, I asked both the W3C CSS and HTML lists about this. The CSS list powers that be said it was an HTML concern. The HTML list powers said it was a UA's problem - it should not be part of the spec. Here is a post I found in a quick search:

As to those who do not know how to use this without creating invalid markup, I say you don't know how to use it. You can assign these items on page load or at other events. There is no need to write your HTML markup invalidly. -Rob

I don't think the question here is creating valid XHTML documents, and it is not a question of being backwards compatible. The goal is to provide the functionality to get away from a static document to a document (or part of a document) that can be edited by the user directly inside the browser. (Universal Canvas?) The goal is not to extend HTML. The goal is to allow developers to create applications that allow editing.

I agree that the idea to use a contentEditable attribute is a bit stupid. It was most likely done that way because it allowed MS a simple implementation. MS has a history of using attributes when CSS is more suitable. The way to go in Mozilla is of course to use a CSS property. How about -moz-user-modify, or is that only for inputs? This also fits well with how IE is doing it with contentEditable, which uses the inherit value by default, which follows the CSS semantics for the same value. Just like -moz-binding is ignored by non-Mozilla browsers, -moz-user-modify is ignored as well. Going this way will allow any XML document to validate, and it will also make all old browsers able to display the significant information.

> All that I ask is that everyone stop for a second, and think of the standards,
> and the possible repercussions of another browser war.

Standards are not standards until they are standardized. That's true for everything in our world. Everything. And standards can become extinct and be replaced by another one. And standards can change, Eric Meyer saying that in that case, they are not standards any more until they are standardized again. There is no current standard for making an element editable. And there is no standards body currently working on this. It implies that in the meantime, all vendors are going to find the best implementation they can have, and the day they have it, they'll ask for a standards body to be the battlefield. Being committed to standards does not mean not being pragmatic. Daniel, Netscape's rep at the CSS WG

As there is no W3C standard for this and a very widespread existing implementation, just do it simple... as mentioned before, implement contenteditable and forget all the XHTML/XML blah stuff for the first cut... we just want something simple which works... honestly, if the Xopus code was reliable it would be enough...
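To make the CSS route above concrete, a minimal sketch (the class name is just illustrative); validators and other browsers simply ignore the unknown vendor property, so the markup itself stays plain, valid HTML or XML:

  /* Mark a region editable via Mozilla's existing vendor property. */
  .editable {
    -moz-user-modify: read-write;
  }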
The direction of all the editor rework is cool, but that is not IMHO what this bug is about... it's about simple, basic editing of text. KISS principle....

> ------- Additional Comments From michael@kitsite.com 2002-10-11 05:43 -------

So what? The basic functionality needed is to be able to select nodes or partial nodes and affect them with JS. Most of the stuff you should be handling yourself (especially if you are trying to validate the content). Aside: since contentEditable did not exist before MS created it, then they defined it. It is defined. -Rob

> Being committed to standards does not mean not being pragmatic.

YES! That is it. Mozilla is being caged into standards which are already subsets of another browser ruling the planet. What are the chances that Mozilla will be adopted widely in its present form? Uhm, almost 2%, jumping to 5% perhaps. By the way, is Microsoft sleeping at the moment? NO! They will come up with even more unignorable features, pushing Mozilla back into oblivion. By the way, who is more active in the W3C anyway? MS??

This discussion is destined to degenerate into a flame war. I suggest moving non-development-related comments to the newsgroups. This bug is already spammed enough as it is (not that I haven't contributed to this myself).

Tony, see comment 10 and comment 114 (see some of the DHTML properties). Also see comment 108, comment 110, comment 111, and comment 113. I see this as pragmatic: the more that is in common between the major browsers, the better for everyone.

"...?" My god, get off the ego trip. I say we just be compatible with IE so that Mozilla will be compatible with more websites and maybe we'll get more than 0.8% of the market some day.

> Why should Mozilla care what IE does? By implementing nonstandard things that IE does,
> the developers are admitting that IE is better, and they want Mozilla to be more like IE.
> Am I way off base here?

Surely it can't have escaped your notice that this bug's raison d'être is that IE *is* better than Mozilla with respect to editable content. It may be that Mozilla can implement editable content in a more coherent way than through the contentEditable attribute, but the more market share Mozilla loses through not providing editable content in *whatever* form, the more unlikely it is that Mozilla will be able to influence *any* future direction of browser standards.

Alright, I still think it's a bad idea, but in the interest of letting people get to work, I'll drop the argument. Best of luck, Tony

Coming very late into the game with this thread... I would have to argue: contentEditable would be fantastic; duplicating MS functionality using execCommand et al. would be horrendous. I've just finished a 2-month-long battle with creating a CMS WYSIWYG editor, and the workarounds I had to perform simply because of the MS model force me to drink... I do not know about anyone else who has created / talked about the foundation of a CMS, but there is one thing that clients have ALWAYS wanted: the ability to limit the user of the CMS to a very particular style sheet. The MS control does not allow you to do this without nasty workarounds (which I have done), but it would be SO much better if there were a way to insert / alter / edit the entire node properties through assignment. Perhaps something like document.execCommand, but instead of "guessing" which parameters are available, the method could perhaps be passed an object (perhaps even a Node), which would serve as the template for the execCommand.
This to me (aside from the Range object) would be the most important thing in a WYSIWYG editor model.

I would like to draw attention back to comment #119. Rib Rdb I think has a good point with what he says there. What he describes would be just enough to create a range of functionalities. And it would be 'clean' as well (right?), as opposed to the m$ contenteditable ****.

*** Bug 177128 has been marked as a duplicate of this bug. ***

Created attachment 105172 [details] some security implications of scriptable copy/paste

Mozilla should omit clipboard commands from whatever API it gives to web pages for wysiwyg editing. Since Mozilla doesn't give users cut/copy/paste buttons on the toolbar whenever they focus non-wysiwyg text fields, I don't think this should be a problem.

jruderman: I agree with you that the clipboard should not be scriptable, but note that text areas and other input fields are clipboard-enabled. For example, you can paste into a text area via the context menu, the Edit menu, or via the platform-specific keyboard accelerator. In the same way, I'd be looking to have the clipboard functions automagically enabled for any elements/nodes marked as user-editable, and disabled when not user-editable. Text editing without copy and paste is pretty sucky.

It would be nice to be able to script as a result of copy/paste, not necessarily to initiate copy/paste. A related example is dragging of images into an area from an image repository in a separate frame.

I will take whatever the engineers are able to give us as quickly as they can. I am hearing this will be in the next release of Moz. However, it would be shortsighted to abandon the ability to paste to/from the clipboard from an editable area. This is something that users consider fundamental to any input area. Rich text paste would be useful.

Did this make the target milestone of 1.2beta?

Hey, you should all download the latest Mozilla build (1.3alpha -- sorry, this did not make 1.2) and try this:

Nice work. This is definitely a good start. I guess you already know that Mozilla raises an exception if this is tried twice in the same session. Should we start filing bugs for this or wait until more is in place?

Erik Arvidsson--

> I guess you already know that Mozilla raises an exception if this is tried
> twice in the same session.

Are you trying to create two editable iframes, or do you mean if you reload? There is a bug filed about the 'reload' problem, which is a regression (Monday's build works for that; possibly Tuesday's build too?).

> Should we start filing bugs for this or wait until more is in place?

Depends what it is. If it's a security exploit, file the bug and be sure I'm cc'd (brade@netscape.com). If it's a missing command, there is another bug filed on it (look in my bug list), so add a comment to that bug. At this point I have no plans to implement the selection API IE has, since IE's own implementation is buggy/inconsistent and the document API seems to be sufficient.

*** Bug 151967 has been marked as a duplicate of this bug. ***

I've been playing around with javascript to make an inline HTML editor, one that would drive off style sheets and write "proper" XHTML. It's not trying to be IE. I think Mozilla opens up a lot more possibilities. The result so far is at:. The basic selection handling is there. Deletion was the biggest pain, due in part to what I think is a browser bug, but it's largely there now but for debugging. Styles et al. should come before Christmas.
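For anyone trying the 1.3alpha bits mentioned above: the new capability is driven through an iframe's document. A minimal sketch of the pattern (the element id is just illustrative):

  <iframe id="edit" src="about:blank"></iframe>
  <script type="text/javascript">
    // Switch the iframe's document into rich editing mode.
    var doc = document.getElementById("edit").contentWindow.document;
    doc.designMode = "on";
    // Formatting then goes through execCommand on that document, e.g.:
    // doc.execCommand("bold", false, null);
  </script>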
Hey folks, I just got the chance to check this out a little, and it looks as if it only works in an iframe? Is that so, and if so, is it going to stay that way? That would make it next to useless for the way I use this functionality; I really need a way to make a fragment of the currently loaded document editable. Is a contentEditable attribute coming, or is setting an entire document to designMode all there is going to be (my apologies if these questions are already answered somewhere; I've searched, but not found anything outside the Midas spec page)?

bugzilla@webwidgets.net (comment 167)--

You are correct that right now the rich editing is only available on an iframe's document. When that work is completed, someone may be able to undertake the content editable aspects, which will be much trickier. Will it stay this way? Hard to say for sure. If someone steps up and writes the code and tests it, it can go in. If not, then it won't. Personally, I won't be able to undertake this task for the foreseeable future (next 3 months). Are you sure you can't use iframes which contain the data you want to edit?

> ...is setting an entire document to designMode all there is going to be

Setting designMode in Mozilla is done on an iframe's document (not the parent document).

Thanks for the info.

> Are you sure you can't use iframes which contain the data you want to edit?

Well, I was hoping to duplicate the operation of my editor for IE, which depends on the ability to set certain (and arbitrary) blocks of a page editable in place. I might be able to fake it with iframes, but not easily and not without writing a completely different implementation, which I was hoping to avoid. I'll spend some time looking into it.

> Setting designMode in mozilla is done on an iframe's document (not the parent
> document).

Yes, I understand that; sorry if I wasn't clear. Again, though, for me to be able to avoid maintaining two totally separate code bases (which I'm not sure I have the time for), I'd need to be able to have only certain parts of that loaded document be editable, or be able to load fragments of an existing document as the iframe's doc. Doable, but not easily with my current backend, and given that IE works now, I just don't know that I can justify the time and effort. I'd love to have something cross-platform, but all my current clients have Windows available or can delegate to someone who does, so... In any case, thanks for the prompt answer; I do appreciate it.

bugzilla@webwidgets.net (comment 169)--

> ...duplicate the operation of my editor for IE...

You might want to look at the samples at these locations (which I'm told work in both IE and Mozilla): The above don't use contenteditable but do use designMode and iframes. It may not work for you in your situation if you have a complex process which involves multiple editors etc.

> You might want to look at the samples at these locations (which I'm told work in
> both IE and mozilla):

I'd already seen both of those, but thanks. My problem isn't that iframes don't work in IE; it's that I tried and abandoned that approach a long time ago, and my current setup isn't really compatible with doing it that way. In essence, making an editable iframe is doing things the old way, only instead of having just a form on a page to enter plain text into, you have something that can create HTML. An improvement, but the other features of modern browsers make so much more possible, and I'm loath to go back to that way of doing things.
It may be fine for something like a blog or posting to a message board, but get beyond that and things start to get clunky. I've played with the iframe stuff a little, and unfortunately it's not going to work for me. I may do something with it in the future so those who can't or won't use IE on Windows have *something* to work with, but as it won't be nearly as seamless or as usable as what I already have, I fear my best bet is to keep recommending that my clients use IE. Disappointing, that.

So do you want inline editing, of HTML as HTML? Say user-modify=read-write for a section - say any id'ed section - lets you type away in that section.

That's what I want.

As I said in a previous comment (#166), I'm playing with javascript to allow inline editing like that. It's basic now, but it should support styles and spans by Christmas, and there's a focus bug in the browser that can kick in after deletion. I don't think you need any custom widget per se, just something to push the DOM around as you click away.

conor play: Yes, I took a look at what you have, and that's very much along the lines of what I need. I had tried to do something like this in Moz some time ago, but various bugs in the range object prevented it from working well enough to be usable. What you've got done so far is too basic for my uses, but where you plan to go with it sounds good. Given that it doesn't look like Moz is going to get a contentEditable attribute, I'll take another look. I dislike having to deal with all the keystroke handling manually (and using caret browsing *really* feels like a nasty kluge), but if that's the way it is then that's the way it is. If this looks like it'll eventually work for me and I can budget some time, I'd be happy to contribute to the effort if you'd like. In any case I'll take a closer look at it within the next few days.

We have today released a new sub-release of Xopus: contentEditable for Mozilla. Please look it up at. Mozilla 1.3 supports editing of pages, but only by entering designMode for a complete page. This obviously isn't how you and I would want it, but still, it's a start. Our unsurpassed moz2ie.js library has now been extended to tweak this designMode behavior and make it look like true contenteditable, the way Bill intended it to be. You can make almost any HTML element editable in one of two ways:

1. <div contenteditable="true">you can edit this!</div>

2. <div id="test">you can edit this</div>
   <script language="javascript">
     document.getElementById('test').contentEditable = true;
   </script>

We hope this will help developers build richer Mozilla-based internet applications. Our implementation isn't complete yet, of course. Please feel free to enhance the code.

Here's another iteration of inline editing using a tweaked DOM:. Use either -moz-user-modify CSS or ContentEditable and include one file, wrap.js, in any page you want to make editable. This works in the currently released browsers - it doesn't rely on "DesignMode". Of course, it needs a lot of work still, and a good reliability pass is in order. My two pence worth on inline editing in Mozilla is:

- With some fixes, the core Mozilla DOM can easily and cleanly support editing. Mainly this amounts to a few extensions and fixes to Range.
- Cross-browser compatibility should mean nothing more than supporting IE's proprietary contentEditable attribute. It does not necessitate emulation of every IE DOM routine and attribute.
  ContentEditable should be supported, but support should mean letting end users set this property to activate and deactivate native DOM editing appropriately.
- Fragmenting the code base by maintaining a proprietary "editor" DOM is a waste of resources and of the core DOM's potential. It will probably mean never pulling ahead of IE, at least in the area of editing. With all the work in XML, XHTML and CSS, there is a ton of room to leapfrog current IE-based editing. It seems a shame to just "me too" old IE features rather than supporting more through fixes and small extensions to the core DOM.

Happy holidays!

Why is the cursor visible on the entire content? Isn't it supposed to show up only if the purpose is editing? It is good to have the contentEditable feature for Mozilla, but users should not get the feeling that they are in edit mode for all the web pages they browse. Is there a way to turn it off, or is it there to stay (hope not)? Never mind, it is fixed in today's build.

If I understand this bug correctly, then it would make Mozilla meet one of TimBL's visions for a browser: it could also edit. See which is a browser/editor that works in this way. (Added myself to the CC list.)

I'm very interested in this, but only really if it can give proper XHTML editing as a minimum, and free XML editing (with CSS formatting of XML elements) in the best case. The former would allow me to use XSLT to get the output into the correct format without doing any messy HTML->XHTML conversions first, which would be great, and the latter would just be fantastic. Still, the progress made so far is good.

Yes, I agree that it should generate proper XML-compliant code. This allows us to do:

  Editor => XML
  XML + XSLT => XHTML

or even:

  XML + XSLT => XSL-FO => PDF

And all without the need of cleaning up HTML code inserted via the editor. This saves us from the garbage-in, garbage-out routines ;-) Microsoft's contentEditable also has some "support" for this: when you copy/paste data from Word 2000 into a contenteditable div, it shows up as XHTML-ish code... Keep up the good stuff.

Oh please, JP. You're right, Microsoft inserts Word data as XHTML ****, in the sense that it has XML-valid markup. It does, however, include all kinds of proprietary garbage tags and a lot of markup that is very inefficient. I sure hope the Mozilla crew will make something that produces valid XML, and it would be very nice to get CLEAN XHTML, but I'm not throwing away my garbage cleanup routines just yet. Sorry for the sarcasm, but I think it is VERY hard to produce clean HTML, considering my cleanup routines (and others I've seen) aren't altogether perfect yet either.

I know, I know. My cleanup regexes are starting to look like the Himalayas to straighten Micro$oft pastes...

*** Bug 191833 has been marked as a duplicate of this bug. ***

Minusing request for 1.3; there isn't even a patch to land here!

Some of you watching this bug may be interested in trying this in a recent 1.3 build: also see: Any issues found should be filed as new bugs (assuming they aren't already filed). Updates to the above documentation are welcome too!

Thanks for the very nice example. There are some errors being reported in IE6 (Windows XP Pro) regarding the InitToolbarButtons() function, as well as when trying to view source. Anyone else see this? Overall though, a very nice example, and glad to see this finally available. If anyone has any more examples and contributions, please post them - this is very exciting stuff!!!
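As an aside on the XML + XSLT pipeline sketched a few comments up: the XML => XHTML step can even run client-side with Mozilla's XSLTProcessor. A rough sketch, assuming xmlDoc and xslDoc are already-parsed documents you supply (the "preview" element is hypothetical):

  var processor = new XSLTProcessor();
  processor.importStylesheet(xslDoc);   // xslDoc: your parsed XSL stylesheet
  var fragment = processor.transformToFragment(xmlDoc, document);
  document.getElementById("preview").appendChild(fragment);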
I am curious if anyone besides myself is in the process of creating a Flash-based toolbar for the editor? I think this would be perfect, since you could execute the function calls using the getURL() syntax from Flash, and it would at least ensure that your toolbar would look exactly the same in IE as in Moz.

I found that IE 6.0 has a problem with addEventListener. I used this construct, which works for me:

if (!document.addEventListener) {
  // IE path: fall back to the on* handler properties.
  document.onmousedown = dismisscolorpalette;
  document.getElementById("edit").contentWindow.document.onmousedown = dismisscolorpalette;
  document.onkeypress = dismisscolorpalette;
  document.getElementById("edit").contentWindow.document.onkeypress = dismisscolorpalette;
} else {
  // Mozilla path: register capturing listeners on both documents.
  document.addEventListener("mousedown", dismisscolorpalette, true);
  document.getElementById("edit").contentWindow.document.addEventListener("mousedown", dismisscolorpalette, true);
  document.addEventListener("keypress", dismisscolorpalette, true);
  document.getElementById("edit").contentWindow.document.addEventListener("keypress", dismisscolorpalette, true);
}

In order to prevent this bug from getting more spammed than it already is, I've set up a Midas listserv to facilitate discussion among web application developers regarding Midas implementation and troubleshooting. I will send you all an invite shortly. Sorry if I invited some of you twice; I was having trouble mass-inviting everyone at once. Anyway, here's the info regarding the mailing list. You can subscribe or unsubscribe to the list here: . If you are a member, you can post by emailing midas@listserv.moses.com.

Re: comment 186, IE does not implement addEventListener. If memory serves me, the equivalent call is |window.attachEvent("onload", myHandler);| (for the case of setting an onload handler). Note there is no 'capturing' parameter, and the event identifier includes "on".

I don't know if this bug is still "active" given Midas' release, but for the interested I've added to the pure DOM-based implementation of ContentEditable that I'd posted here around Christmas (comment #175). My main goal is an XHTML/XML editing module that builds on the W3C DOM - among other things, such a module provides for a straightforward implementation of standards-conformant ContentEditable. Demos and scripts and more information at: .

For those interested specifically in "contentEditable" as opposed to "designMode", the contentEditable/mozUserModify implementation I posted about before is now a mozdev project called "mozile", for Moz(illa) i(n)l(ine) e(ditor): .

adt: nsbeta1-

*** Bug 209553 has been marked as a duplicate of this bug. ***

OK, I'm now running 1.6b and this was targeted for 1.3b. What is happening with contentEditable? Yes, we have Mozile, but I would much rather have it in Mozilla (and Safari, but that's another case).

(In reply to comment #194)
> OK, I'm now running 1.6b and this was targeted for 1.3b. What is happening with contentEditable?
It appears to be dead. Nobody in the Mozilla organization appears to have a clue as to why it would be useful, nor do they appear to be willing to get one from those who are more informed.
> Yes, we have Mozile, but I would much rather have it in Mozilla (and Safari, but that's another case).
You may get your wish wrt Safari. Dave Hyatt has said that contentEditable is near the top of his list of things to implement, and I read within the last couple of weeks (can't remember where or I'd give a link) that Safari will be getting cE in the not too distant future.
IE and Safari users will therefore be moving ahead with the functionality that users are demanding, while Mozilla argues about important stuff like who gets to use what icon...

This is great news! Finally we will have a browser on OS X that will support contentEditable. I actually did think that Mozilla would get there *much* faster than Safari, but how wrong I was! Time to degrade features for Mozilla, or drop Mozilla completely from our CMS product, then. Quite sad, actually, that it takes >2.5 years to implement a heavily requested feature (it's not magic) in Mozilla.

Has this perhaps been silently solved - at least for some people's needs - while we weren't looking? :-) Mihai Bazon's HTMLArea (currently at version 3.0 RC 1) works with both IE 5.5+ and Mozilla 1.3+, apparently using the "Midas" component in the latter case. Works with Firefox as well as Mozilla. See: HTMLArea - A replacement for TEXTAREA elements.

You are right, it will work for some people, but not for those wanting to, for example, edit certain parts of a web page but not others.

The Midas component in Mozilla 1.3+ and Firefox 0.6.1+ implements "designMode", which is very similar to MSIE's "contentEditable". It works great for editing content in web content management systems etc., and it is quite easy to make a simple web content editor with basic functionality. (However, cross-browser support and more advanced functionality require a lot of additional JavaScript/DOM programming to handle differences and limitations.) We have supported it in our own cross-platform web content editor product () ever since Mozilla 1.3, and we would love to hear from you if it does not work with your version of Mozilla 1.3+ / Firefox 0.7+ (on Macintosh, Linux or Windows).

If HTMLArea doesn't meet your needs, see the Bitflux editor.

Insults about cluelessness from people who spam bugs with advocacy comments are a good reason to revoke Bugzilla access from those clueless people. Clueless Safari fans can cling to every word from the mouth of hyatt if they want to, but let's get real and talk again (and not in the bug system) only after Safari actually ships working content-editable support. However, most of the talented hackers I know dislike bugspam intensely, so I'm inclined to close this bug, open another one, and revoke Bugzilla access from anyone who spams that bug. How's that for vending a clue, hmm? /be

(In reply to comment #201)
Brendan, I think it would be useful for people like me, who would like to help in getting this feature in place but have no strong knowledge of the Mozilla internals, to have a few pointers on how this should be done. I mean, Mozilla is a relatively complex piece of software, and content-editable is not something you get on the side (like a few XUL and JavaScript files), but a rather intrusive feature that will probably have to deal with a lot of the Mozilla internals. Personally, I'm scared by the complexity of this task. But it would be *extremely* helpful to have an insider view of the problem and guide people inside the jungle so that they don't have to explore it themselves. All we need is a little bridge across these two islands so that we can walk to your side and help.

(In reply to comment #202)
> Personally, I'm scared by the complexity of this task. But it would be
> *extremely* helpful to have an insider view of the problem and guide people
> inside the jungle so that they don't have to explore it themselves.
I agree. I think I know how to implement contentEditable in Mozilla.
But the current bug went totally out of control, from a comments point of view. I am just fed up with reading noise, and seeing the useful comments in this bug drown in an ocean of spam. This is the case for all bugs.
> All we need is a little bridge across these two islands so that we can walk to
> your side and help.
Eh. We don't need "a little bridge". We need 60-hour days. More contributors, fewer trollers. Closing (see comment #201).

*** Bug 249692 has been marked as a duplicate of this bug. ***

I was wondering, have we gotten any closer to a working contentEditable solution?

(In reply to comment #206)
> I was wondering, have we gotten any closer to a working contentEditable solution?
It's unfortunate that, scanning this bug, I don't recall seeing anyone mention the killer reason for having something like contentEditable. This is to allow Mozilla to become the 'universal canvas' for UI design across platforms. With SVG as well, you can then design a complex UI that is controlled by an outside business application, written in something stronger than a script language (e.g. Java, Mono, etc.). Commercially, this is what the company I work for has done. We use IE as the rendering surface for our UI, and we also use it as the print and reporting engine (there's a 'DrawToDC' method that allows anything on the browser to be sent to the printer), though I doubt Mozilla has anything directly comparable with this. The business logic communicates with the UI via JavaScript (using execScript). This provides a very elegant tiered layer which can be customised in the field. Using behaviors, we have UI widgets like tabbed dialogues etc. (though XUL would obviously be an acceptable replacement for that). But I need to be able to mark individual fields contenteditable, and I also need a working TextRange object and good JavaScript hooks, before I could consider doing anything like this outside of IE. So that's the real reason for contentEditable: not just in-place HTML editing of web pages. In our application, in fact, all HTML is loaded from local files. But HTML provides everything that Avalon promises - only here and now.

PS: If I had to vote, I would much prefer that contentEditable were a style, so that I could use CSS to control which fields are enabled, rather than have the rather ugly contenteditable=true attribute.

*** Bug 322335 has been marked as a duplicate of this bug. ***
*** Bug 340375 has been marked as a duplicate of this bug. ***

Fixed on trunk in bug 237964, so unless something surprising happens, this will be in Firefox 3.
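For readers landing here later, a minimal sketch of the feature as it eventually shipped; the element id is an arbitrary placeholder, while the contenteditable attribute, the reflecting contentEditable property, and the read-only isContentEditable flag are the standard pieces:

<div id="caption" contenteditable="true">Click and type to edit this text in place.</div>
<script type="text/javascript">
// The DOM property reflects the markup attribute; either form enables editing.
var el = document.getElementById("caption");
el.contentEditable = "true";   // same effect as the attribute above
// isContentEditable reports the effective state, including inherited editability.
alert(el.isContentEditable);   // true
</script>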
Safety Notices

This printer is certified as a Class 1 laser product under the U.S. Department of Health and Human Services (DHHS) Radiation Performance Standard. The Center for Devices and Radiological Health (CDRH) of the U.S. Food and Drug Administration implemented regulations for laser products on August 2, 1976. These regulations apply to laser products marketed in the United States. The label on the printer indicates compliance with the CDRH regulations and must be attached to laser products marketed in the United States.

Caution: use of controls or adjustments, or performance of procedures other than those specified herein, may result in hazardous radiation exposure.

Federal Communications Commission Radio Frequency Interference Statement

If this equipment causes interference to radio or television reception, the user is encouraged to try to correct it by one or more of the following measures:
- Reorient or relocate the receiving antenna.
- Increase the separation between the equipment and receiver.
- Connect the equipment into an outlet on a circuit different from that to which the receiver is connected.
- Consult the dealer or an experienced radio/TV technician for help.
For compliance with the Federal Noise Interference Standard, this equipment requires a shielded cable.
The above statements apply only to printers marketed in the U.S.A.

Self Declaration

Radio interference regarding this equipment has been eliminated according to Vfg 1056/1984, announced by the DBP. The DBP has been informed of the introduction of this special equipment and has been granted the right to examine the whole series. It is the user's responsibility to see that his own assembled system is in accordance with the technical regulations under Vfg 1046/1984. To conform to FTZ regulations, it is necessary to make all connections to the printer with shielded cable. The equipment may only be opened by qualified service representatives.
The above statement applies only to printers marketed in Germany.

Statement of the Canadian Department of Communications Radio Interference Regulations

This digital apparatus does not exceed the Class B limits for radio noise emissions from digital apparatus set out in the Radio Interference Regulations of the Canadian Department of Communications.
Le présent appareil numérique n'émet pas de bruits radioélectriques dépassant les limites applicables aux appareils numériques de la classe B prescrites dans le Règlement sur le brouillage radioélectrique édicté par le ministère des Communications du Canada.
The above statement applies only to printers marketed in Canada.

Safety Notices for Finland

Tämä kirjoitin on LUOKAN 1 LASERLAITE. Laitteen käyttäminen muulla kuin tässä käyttöohjeessa mainitulla tavalla saattaa altistaa käyttäjän turvallisuusluokan 1 ylittävälle näkymättömälle lasersäteilylle. (This printer is a Class 1 laser device. Operating it in any way other than described in this manual may expose the user to invisible laser radiation exceeding the Class 1 limit.)
DENNA SKRIVARE ÄR EN KLASS 1 LASERAPPARAT. Om apparaten används på annat sätt än i denna bruksanvisning specificerats, kan användaren utsättas för osynlig laserstrålning, som överskrider gränsen för laserklass 1.
The above statement applies only to printers marketed in Finland.

Trademark Acknowledgements

LaserPrinter 4, StarScript, StarPage: Star Micronics Co., Ltd.
AppleTalk: Apple Computer Inc.
FX-850: Seiko Epson Corporation
HP, HP LaserJet: Hewlett Packard Company
IBM PC/XT, IBM Proprinter: International Business Machines Corporation
Lotus 1-2-3: Lotus Development Corporation
Microsoft Word, Microsoft Windows: Microsoft Corporation
PageMaker: Aldus Corporation
PostScript: Adobe Systems Inc.
WordPerfect: WordPerfect, Inc.

NOTICE
- All rights reserved. Reproduction of any part of this manual in any form whatsoever without STAR's express permission is forbidden.
- The contents of this manual are subject to change without notice.
- All efforts have been made to ensure the accuracy of the contents of this manual at the time of press. However, should any errors be detected, STAR would greatly appreciate being informed of them.
- The above notwithstanding, STAR can assume no responsibility for any errors in this manual.

© Copyright 1991 Star Micronics Co., Ltd.

TABLE OF CONTENTS

1. How to Use This Manual
  1.1 Laser Printing
  1.2 The Star LaserPrinter 4 - An Introduction
2. Setting Up the Star LaserPrinter 4
  2.1 Unpacking
    2.1.1 Unpacking the Printer
    2.1.2 Checking the Parts
    2.1.3 Unpacking the EP-L Cartridge
    2.1.4 Optional Items
    2.1.5 Carrying the Printer
    2.1.6 Opening and Closing the Printer
  2.2 Installing the EP-L Cartridge
  2.3 Loading Paper
    2.3.1 About Paper
    2.3.2 Loading Paper Into the Multi-purpose Tray
  2.4 Selecting Paper Delivery
  2.5 Connecting the Power Cord
  2.6 Connecting the Interface Cable
  2.7 Adjusting Print Quality
3. Initial Operation
  3.1 Front Panel
    3.1.1 Powering Up
    3.1.2 The Buttons
    3.1.3 Light Indicators
    3.1.4 Hex Dump
  3.2 Self Test
    3.2.1 Printing Test Sheets
    3.2.2 Status Sheet Description
  3.3 Programming from the Control Panel
4. Quick Start with the Star LaserPrinter 4: A Tutorial
  4.1 Preliminaries
  4.2 Control Panel
    4.2.1 Basic Operations
  4.3 Connecting the Printer to the Computer
    4.3.1 Selecting the Interface
    4.3.2 Activating the Parallel Interface
    4.3.3 Saving the Setting
    4.3.4 Returning to Factory Settings
  4.4 Manual Feed
  4.5 Paper Size
  4.6 Printing in Landscape Orientation
  4.7 Changing Character Set
  4.8 Selecting Display Language
5. Font Selection
  5.1 Hewlett Packard LaserJet IIP
  5.2 Epson FX-850
6. Interfacing With Applications Programs
  6.1 Lotus 1-2-3 Release 2
  6.2 WordPerfect Version 5
  6.3 Microsoft Word 4.0
  6.4 Microsoft Windows
7. Maintaining the Star LaserPrinter 4
  7.1 Replacing the EP-L Cartridge
  7.2 Storage and Handling Precautions for the EP-L Cartridge
  7.3 Cleaning the Fixing Assembly
  7.4 Cleaning the Exterior of the Printer
8. Troubleshooting
  8.1 Error Messages Displayed on the Screen
  8.2 Service Call Messages
    8.2.1 Engine Service Call Messages
    8.2.2 Controller Service Call Messages
  8.3 Operator Call Messages
    8.3.1 Engine Problems
    8.3.2 Font/Emulation Cartridge Problems
    8.3.3 Optional Hardware
    8.3.4 Change Paper Size
    8.3.5 Manual Paper Feed
  8.4 Operator Information Messages
    8.4.1 Host Communication Problem
    8.4.2 Function or Size Incompatibility
    8.4.3 Font/Emulation Cartridge
  8.5 Status Messages
  8.6 Paper Jamming
    8.6.1 Clearing Paper Jams
  8.7 Streaky Prints
    8.7.1 White Streaks
    8.7.2 Stains on Transparency Films
9. Options
  9.1 Paper Feeder and Cassettes
  9.2 Expansion RAM Board
  9.3 Font Cartridge
10. Specifications
  10.1 Specifications
  10.2 Reliability
  10.3 Pin Functions on Interfaces
Glossary

How to Use This Manual

Congratulations on purchasing a Star LaserPrinter 4 StarScript version. You will be delighted with both the quality of the printed images and the ease of operation. With your computer and this printer, you can create professional-looking documents.

This Operations Manual is one of two that explore the entire range of printing possibilities of the Star LaserPrinter 4. This manual is for beginners and for those who plan to concentrate on the basics. Advanced users and those interested in programming should refer to the Applications Manual.

Chapter 2 begins with an overview of the manual and of the Star LaserPrinter. This chapter explains how to unpack your new printer and prepare it for initial operation.

Chapter 3 explains how to operate the printer's control panel and display screen.

Some people like to skip the preliminary explanations and begin using the printer immediately. Chapter 4 provides the information for such a fast start. If you begin with Chapter 4, at some later time you should read the rest of the manual in sequence, for a more complete picture of your printer and its operations.

Chapter 5 describes the type characteristics that will enable you to give your printed pages that professional look.

The Star LaserPrinter 4 StarScript version emulates the operation of the following widely-used printers:
- PostScript
- HP LaserJet IIP, a laser printer
- Epson FX-850, a dot-matrix printer

With this capability, the Star LaserPrinter 4 StarScript version will operate with a wide range of applications programs on the market, both old and new. Chapter 6 provides the information that will allow you to use your printer with four popular applications programs: Lotus 1-2-3, WordPerfect, Microsoft Word, and Microsoft Windows.

Chapters 7 and 8 describe the maintenance and troubleshooting operations to keep your Star LaserPrinter 4 working in "perfect" condition.

1.1 LASER PRINTING

Before you begin learning about your new Star LaserPrinter 4, you may find it helpful to know something about laser printing itself.

A laser is actually a beam of light of just one wavelength (laser is an acronym for Light Amplification by Stimulated Emission of Radiation).
Such a beam of light, described as "highly coherent", can be focused very sharply. Lasers, generated by gases, liquids or semiconductors, are widely used in applications ranging from surgery to the visual arts.

Laser printing is a process that uses a laser beam - in this case, generated by a semiconductor - to activate portions of an electrically charged surface. These activated parts represent the words, numbers, or graphics being sent from the computer for printing. Other parts of the printer transfer this image to paper, then clean the surface and prepare it to receive more information to be printed. In other words, this process is a type of laser-activated temporary engraving.

In the Star LaserPrinter 4, two interconnected units produce the complete printing process: the toner cartridge and the printer body.

The toner cartridge contains the drum, which is the rotating surface. In the darkness of the toner cartridge, the drum holds a negative charge placed on it by the primary corona wire in the printer. Shutters on the bottom of the toner cartridge assure that no unwanted light penetrates its interior.

When text or graphics are sent by the computer to the printer, a laser beam is generated by a semiconductor laser diode. This beam is focused by special scanning mirrors that turn the light beam into a tool that "writes" or "prints" on the surface of the drum. The areas of the drum touched by the laser beam lose their negative charge and contain a reverse image of the information sent from the computer.

As the drum rotates, it passes a developer unit that is also rotating, but in the opposite direction. The surface of the developer unit is covered with toner ("ink") which has a negative charge. The neutralized portions of the drum, containing the information to be printed, pick up negatively charged toner from the developer unit.

At this point, paper fed from the paper cassette moves through the transfer unit, from which it receives a positive charge. As the drum rotates, the negatively charged particles of toner are attracted to the positively charged surface of the paper. A combination of heat and pressure fuses the image to the paper. The paper is then ejected from the printer.

Finally, a cleaning mechanism in the toner cartridge cleans excess toner from the drum, and a special light beam neutralizes its entire surface. Then the entire process can begin again.

1.2 THE STAR LASERPRINTER 4 - AN INTRODUCTION

You will meet each portion of your Star LaserPrinter 4 as you read this manual, but a brief introduction is in order here. The following figures show the front, rear and inside views of the printer.

Front view: Side Cover, Face-down Tray, Multi-purpose Tray (MP Tray), Paper Stop, Extension Tray, Control Panel, Paper Guide, Cartridge Slot, Front Upper Door, Power Switch, Release Button, Paper Delivery Selector.

Rear view: Parallel Interface Connector, AppleTalk Interface Connector, Serial Interface Connector, Air Vent, Power Receptacle.

Inside view: Density Adjustment Lever, Transfer Roller, EP-L Cartridge, Paper Access Door, Pick-up Roller, Separation Pad, Fixing Assembly Cover, Feed Roller.

Setting Up the Star LaserPrinter 4

The fact that you're now reading this manual shows that you've got at least as far as opening the carton containing your new Star LaserPrinter 4. This chapter will help you to unpack the printer, set it up, and get it running. First, though, you should make some preparations. You may already have decided on the printer's new location. Whether you have or have not, run through this checklist of requirements:

- Environmental control: The printer and toner cartridge should never be exposed to strong sunshine or other direct heat sources. It should also be located away from air conditioning ducts, dust and fumes. Excessive moisture should be avoided, such as humidity in excess of 80% or less than 20%. If it is comfortable for you, then it will be comfortable for your printer.
- A large, strong table or stand: The printer weighs approx. 22.5 lbs (10 kg) and must be firmly supported. Also, the printer will need more room than it takes up in the shipping carton, because the paper tray will extend forward from the front; so plan for a space of at least 2 x 3.2 ft (0.6 m²).
- A three-pronged outlet: The outlet should be no more than 8 feet (2.5 m) from the printer (the length of the power cord), preferably one shielded from power fluctuations. In any case, no motor-driven appliance should be connected to the same outlet, to avoid interference with the printer's operations.
- A fresh toner cartridge (product #EP-L).
- Paper: A package of 16-20 lb (60-135 g/m²) photocopier paper is best to start with; however, the printer can use thicker paper, as well as special media, such as envelopes and transparent sheets.
Whether you have or have not, run through this checklist of requirements: l Environmental control The printer and toner cartridge should never be exposed to strong sunshine or other direct heat sources. It should also be located away from air conditioning ducts, dust and fumes. Excessive moisture should be avoided, such as humidity in excess of 80% or less than 20%. If it is comfortable for you, then it will be comfortable for your printer. l .-. i A large, strong table or stand The printer weighs approx. 22Slbs (10kg) and must be firmly supported. Also, the printer will need more room than it takes up in the shipping carton, because the paper tray will extend forward from the front; so plan for a space at least 2 x 3.2 ft (0.6m2). -. l ‘A three-pronged outlet The outlet should be no more than 8 feet (2Sm) from the printer (the length of the power cord), preferably one shielded from power fluctuations. In any case, no motor-driven appliance should be connected to the same outlet, to avoid interface with the printer’s operations. -_.. l A fresh toner cartridge (product #EP-L) l Paper A package of 16-201b (60- 135g/m2) photocopier paper is best to start with; however, the printer can use thicker paper, as well as special media, such as envelopes and transparent sheets. 7 2.1 UNPACKING The printer comes in two boxes. The large box contains the printer and its accessories, and the smaller one contains the EP-L cartridge. Follow these instructions when unpacking. 2.1.1 Unpacking the Printer Follow the instructions below to unpack the printer. 1. Open the large box, and remove the accessories box. 2. Remove the printer from the box. 8 11 ,I L. i_ “. -- 3. Remove the packing material from round the printer. - / - - - 4. Open the accessories box. Remove the face-up tray, power cord, Operations Manual and Applications Manual. , i - - - - NOTE: Save the packing boxes and materials. If you need to move the printer (for new location or service etc.), use these materials to protect the printer from damage. 2.1.2 Checking the Parts Before setting up the printer, make sure that all standard items shown below are provided and they are free from damage. If any of these items are missing or damaged, contact your supplier. Power cord Operations manual Face-up tray Applications manual 2.1.3 Unpacking the EP-L Cartridge Open the small box (see the illustration material from the EP-L cartridge. below) and remove the packing -- CAUTION: 1) Do not open the aluminum bag containing the cartridge until you are ready to install it in the printer. 2) Do not lean the cartridge against anything or turn it upside down. 10 1 j! 2.1.4 Optional Items Some of the following items may have been ordered. Unpack them. For details, refer to Chapter 9. Options. l Paper feeder l Cassette (A4, Letter, Legal, Executive size) l RAM board (2 or 4MB) l Emulation cartridge 2.1.5 Carrying the Printer Whenever moving the printer from one place to another, always make sure the multi-purpose tray is closed, and carry the printer firmly at the bottom with two hands. CAUTION: Never attempt to carry the printer using the face-down slot. NOTE: When moving the printer, remove the EP-L cartridge from the printer. After removing the cartridge from the printer, replace it in the aluminum bag in which it was originally packed, or cover it with a thick cloth to protect it from direct light. - 11 . 2.1.6 Opening and Closing the Printer l l l When opening the printer’s side cover, do not stop it in the half-way. 
This will open the protective shutter of the drum, and light will permanently damage the drum. Do not place anything beside the printer. It may damage the multi-purpose tray when the tray is opened. Do not put anything in or on the multi-purpose not press the tray downward. tray except paper, and do CAUTION: The parts of the printer shaded in the illustration below become extremely hot when the printer is used. To avoid any personal injury, do not touch these parts when the printer is open. Although the printer’s cover is closed while printing, it will need to be opened when replacing the EP-L cartridge when clearing paper jams. Open or close the cover in the following way: 12 .- A 5. j,.. i_ Opening the Cover 1. Hold the knob of the multi-purpose tray and pull it to open the tray. 2. The tray opens to about 80 degrees. 13 3. Press the release button (at the right ) upwards and open the cover. _- The cover opens to about 80 degrees (the multi-purpose about 90 degrees.). 14 tray then opens to NOTE: Remove the two orange stoppers by pushing the lower part of the stoppers upwards and removing below). it (as shown in the illustration Closing the Cover 1. Using both hands, lift the cover and close it gently until it latches. - 15 2. Lift the multi-purpose tray and close it until it latches on both sides. 2.2 INSTALLING THE EP-L CARTRIDGE Important Notice Install the EP-L cartridge immediately after opening the aluminum bag. Permanent damage can be caused by light to the photosensitive drum. NEVER expose the cartridge to the strong light (more than 15000 lux) or room lighting (1000 lux) for more than five minutes. Do not open the drum protective shutter on the EP-L cartridge. Keeplthe cartridge away from CRTs, disk drives, diskettes, etc. Otherwise, CRTs and disk drives can be damaged, and data on diskettes can be destroyed by the magnet inside the cartridge. When handling the cartridge, do not touch the bottom of the EP-L cartridge. Print quality will be adversely affected if the protective shutter is open and the drum is damaged. ‘The drum can also be damaged if it exposed to light. Always keep the EP-L cartridge with the label facing up. Do not turn it upside down or stand it on end. The toner may become caked, and this causes print quality to deteriorate. Use the cartridge before the expiry date printed on the carton. Otherwise, print quality may deteriorate. 16 l Use only genuine EP-L cartridges as recommended by your supplier. Follow the instructions below to install the EP-L cartridge for the first time, or replace an used one. 1. Open the multi-purpose tray and the side cover as described in “2.1.6 Opening and Closing the Printer”. 2. If you are installing the EP-L cartridge for the first time, skip to step 3. If you are replacing the old EP-L cartridge, remove it by pulling it by the tab. I I j .-- , i NOTE: If it is difficult to remove the EP-L cartridge, press the green lever located at the right, and try again. j - 3. Open the aluminum bag containing the EP-L cartridge and remove it. NOTE: Save the aluminum bag, as you may need it for storing the cartridge when you move the printer to another place in the future. 4. Holding the cartridge with both hands, rock it gently from end to end, 5 or 6 times, to distribute the toner evenly. If the toner is not distributed evenly in the cartridge, it may adversely affect print quality. .- 18 5. Place the cartridge on a flat surface. While holding down the cartridge with one hand, use the other hand to pull the orange tab to remove the seal. 
Make sure that the tab is pulled smoothly in a direction parallel to the flat surface. Otherwise, the tape may break or snap, making the cartridge unusable. 1 1 c t . i - 6. Holding the cartridge by the tab, align the arrow on the cartridge with the V mark on the printer. _.. 19 7. Slide the cartridge into the printer gently. 8. Make sure that the cartridge is securely seated, then close the cover gently. NOTE: 1) If you have purchased accordance optional accessories, with their installation manuals. install them in 2) When replacing the cartridge, clean the fixing assembly, described in “7.3 Cleaning the Fixing Assembly”. 20 as 2.3 LOADING PAPER 2.3.1 About Paper Print quality and printer life are greatly affected by the paper used. Paper can be fed into the printer either from the multi-purpose tray or paper cassette (if cassette is installed). Types and sizes of paper can be used with the multipurpose tray are listed below. Size Type Weight Print delivery Plain paper 97 x 148 - 216 x 356mm (Letter, Legal, A4, B5, Executive) Transparency films Letter, A4 Face-up Labels Letter, A4 Face-up Envelopes 97 x 148 - 216 x 356mm 60 - 105g/m2 60 - 90g/m2 Face-down/up Face-up 2.3.2 Loading Paper Into the Multi-purpose Tray 1. If the multi-purpose it forward. tray is not open, open it by unlatching and bringing 2. Hold the arrow at the center of the multi-purpose extension tray. 3. Slide the paper guide to the left. 22 tray, then pull out the 4. Take a small stack of paper and align the edges by tapping it on a flat surface. 5. Align the right side of the stack of paper with the right-hand paper guide. Slide the paper stack gently into the printer as far as it will go. Make sure that the height of the paper stack does not exceed the mark on the paper guide. .~ . .,. -. b... L.. 23 6. Slide the left-hand paper guide so that it touches the left side of the paper stack. Make sure that the paper guide does not press too tightly or fit too loosely against the paper stack. Otherwise, this can cause paper jams or other problems. 2.4 SELECTING PAPER DELIVERY The printer provides two types of paper delivery; face-up and face-down. With face-down delivery, the paper is ejected from the printer with the printed side facing downward. With face-up delivery, the paper is ejected with the printed side facing upward. Selecting Face-down Delivery 1. Open the multi-purpose tray by unlatching and bringing it forward. 2. For face-down delivery, simply open the multi-purpose tray. 3. Adjust the paper stop located on the top of the printer to the position which matches the paper size. 24 Selecting Face-up Delivery 1. Open the multi-purpose tray by unlatching and bringing it forward. 2. Switch the paper delivery selector for the face-up position. 3. Attach the face-up print tray to the side cover of the printer. Fit the pegs at the sides of the tray into the holes in the printer. When face-up printing is finished and the multi-purpose tray is closed, the printer automatically switches back to face-down delivery. 25 2.5 CONNECTING THE POWER CORD Make sure that the power switch on the printer is set to OFF, then connect the power cord to the printer and a AC power outlet as shown below: 1. Make sure that the power switch of the printer is off (that the “0” side of the switch is pressed down). 2. Plug the power cord into the power connector, then plug the other end into the AC outlet. 26 Only use the power cable supplied with the printer. Note that this power cable is fitted with a ground pin. 
This grounding is an important safety feature and should not be ignored. If a suitable grounded socket is not available, contact a qualified electrician to rectify the situation. 2.6 CONNECTING THE INTERFACE CABLE The host computer transmits information to the printer along an interface cable. The printer is provided with three types of interface (serial, parallel and AppleTalk). No interface cable is supplied as standard with the printer. Determine the kind of interface cable (serial, parallel or AppleTalk) you want to use, and purchase the appropriate cable from your supplier. 1. Before connecting the interface cable, make sure that the power to the printer and the computer is turned off. 2. Plug one end of the interface cable into the appropriate connector on the rear side of the printer. 3. For parallel (Centronics) interface, secure the cable connector connector clips. For serial interface, fasten the cable connector screws. using using 27 - 4. Connect the other end of the cable to the appropriate interface on the host computer. NOTE: 28 The printer’s factory setting is for a parallel interface. If the other interface is to be used, you need to select the interface on the control panel. See “4.3.1 Selecting the Interface” for selecting the type of interface. 2.7 ADJUSTING PRINT QUALITY At some time you may find that the quality of the printing is not what you want. You may run test prints and then make the printing lighter or darker by adjusting the the print density adjustment lever located inside the printer. 1. Hold the knob of the multi-purpose tray and pull it to open the tray. 2. Press the release button (at the right) upwards and open the cover. 3. To increase print density, move the density adjustmen&%r&ne To reduce the density, move the lever to the left. right. NOTE: The density adjustment lever has four settings from left to right. As you move the lever, it clicks at each of the two positions in the middle. la -BI - 30 Initial Operation - - 3.1 FRONT PANEL The front panel of the Star LaserPrinter 4 is a combination control board and interactive message center. The panel consists of: I. , L ! i l a 1 line, 16 character LCD display screen l 5 LED lights, 1 orange and 4 green l 7 momentary contact buttons The momentary contact buttons permit you to instructions to the printer. In turn, the printer uses the display screen and the light indicators to convey information to you. The screen display is the primary way the printer communicates with the operator. It informs the operator about the machine’s state, the printer’s status, alarm conditions that require some action by the operator, “soft” errors, and messages when selecting current, initial and default printer parameters. The LEDs provide an “at glance” summary of the printer’s status. The printer operates in two main modes when the printer is Off Line: 1. Normal Mode (white buttons) - performs function labeled on the button. 2. Program Mode (brown buttons) - press the [PROGRAM] button to enter menu selections. The meaning and use of the panel buttons depends on the mode in which the printer is operating. 31 3.1.1 Powering Up Take note of the following points when turning on or off. l When the printer is connected directly to a computer: Power on: First turn on the computer, and then turn on the printer. Power off: First turn off the printer, and then turn off the computer. l Power on: First turn on the computer, then other devices, and finally turn on the printer. 
Power off: First turn off the printer, then other devices, and finally turn off the computer. CAUTION: Always wait at least three seconds between turning off and turning on again. Turn on the printer by pressing the “I” side of the power switch. The printer will begin its internal diagnostics and warming up, displaying a series of message on the screen as follows. 1. All LEDs are lit and the LCD display screen turns black. 2. The screen displays “LaserPrinter 4”. 3. Then the screen displays “Memory Test 2 MB”. If an optional 2 MB RAM board has been installed, the message will display “Memory Test 3 MB”. The actual value will depend on the size of the RAM board installed (2 or 4 megabytes). 4. After a short while, the screen displays “EEPROM LOAD DONE” for a moment. This means that default parameters have been read from the EEPROM and the EEPROM CRC has been checked. 5. “PRINTER WARMUP” is displayed, the READY indicator starts blinking and the ON LINE lights up. 6. When the printer is ready, the screen displays “PRINTER READY” and the READY indicator is now continuously lit. 32 -- When the printer is connected to a computer through other devices: - 1 3.7.2 The Buttons This section explains the various meaning and uses of the buttons, including their light indicators, where present. Keep in mind that the buttons’ functions depend on the mode the printer is in: the Normal Mode and Program Mode. [ON LINE] Pressing this button switches the printer from Off Line to On Line or vice versa. When the printer is On Line (ON LINE LED is lit), the printer is able to receive information from the computer and print it. All other buttons (except for the ERROR SKIP button under certain conditions) are inactive when the printer is On Line. When the printer is Off Line, the printer is unable to receive information and print, but the other switches can be used. If the printer is in the Program Mode, pressing this button exits from the Program Mode, and enters to the Normal Mode. L-J When the printer is On Line, the LED is on. When the printer is Off Line, the LED is off. [PRINT] PRINT El This button is active only when the printer is Off Line. Pressing the switch will print any page. If there is no page, the depression of the switch will be ignored. When printing is in operation, the LED is on. [ERROR SKIP/<] l-l This button is basically active only when the printer is Off Line. In the Normal Mode, a depression of this button has no effect if no alarm condition exists. However, if an alarm or warning condition exists, pressing this switch causes the printer to return to the previous state after corrective action has been taken. ERROR SKIP - When the printer is Off Line and in the Program Mode, this key is called NEXT. During programming, pressing this button displays the various items under a category for the operator to select next in the sequence. - 33 [TEST/>] 1 TEST j This button has two functions when the printer is Off Line. l Pressing this button when the printer is in the Non-menu Mode will operate in the following sequence: 1) Pressing this button displays “HOLD FOR TEST”. If the button is released while this message is displayed, the printer returns to the previous state. 2) If the button is held down for three seconds, the screen will display “STATUS SHEET”. Whether pressing the button or not within one second after “STATUS SHEET” is displayed will alter the printer operation. 
3) If the button is released without being pressed within one second, the printer will print a status sheet (see 3.2.1 Printing Test Seets). 4) If the button is pressed once within one second, the screen will display “FONT LIST”, and a font list will be printed (see 3.2.1 Printing Test Sheets). 5) If the button is pressed twice within one second, “CLEANING PAPER” will be displayed and a cleaning paper will be printed (see 3.2.1 Printing Test Sheets). 6) If the button is held down for more than 6 seconds after “STATUS SHEET” is displayed, the screen will display “TEST PRINT MODE”. If the button is released while this message is displayed, the printer will print a test ,print mode (see 3.2.1 Printing Test Sheets). l In the Program Mode, pressing the [TEST] button as PREVIOUS presents the available items in reverse sequence. [RESET/v] RESET ‘= V7 This button has three functions when the printer is Off Line. Pressing and holding this button when the printer is in the Normal Mode displays “HOLD FOR RESET”. If this button is Iii held for more than three seconds, “REINITIALIZE” message is displayed and the printer is reinitialized to the initial settings of the emulation currently selected. This will also clear the input buffer, any page in composition, and composed pages queued up. 34 l l l When the printer is in the Program Mode, this button is called ENTER. Pressing the button selects the value with a @. Turning on the power while holding down the button causes the printer to enter the display language selection mode. (See “4.8 Selecting Display Language”.) [PROGRAM/A] q To start using the Program Mode, you must first set the printer Off Line, then press the [PROGRAM] button. Within a programming sequence, pressing the [PROGRAM] button takes the A=2 programming Menu to the next higher level, or exits from the Program Mode and displays “PRINTER READY”. For details, refer to 3.3 Programming from the Control Panel. PROGRAM [FEEDER SELECT] This button controls from where the printer will expect paper when printing. To change from one source to another, the printer must be Off Line, press the ON LINE button if the ON LINE LED is on, then hold down the FEEDER SELECT button 111 for a while. The screen will display “FEEDER SELECT”. FEEDER SELECT 1) If the button is released while this message is displayed, the screen displays the current feeder selection (e.g. multi-purpose tray). Further presses will cycle through the options below: Cassette Only Auto Selection Cassette MP Tray ’ Manual Feed - Cassette Only . 35 NOTE: (1) The screen display will indicate only available selection. If the LC Cassette unit is not installed, only MP TRAY and MANUAL FEED are displayed. (2) The Factory Setting is MP Tray. However, when the LC Cassette unit is installed, the Factory Setting will be changed to CASSETTE ONLY. (3) If the DATA LED is lit or blinking, the new selection will be stored in Initial Setting and issued at the top of the first page stored in the printer. 2) Holding down this button for more than 2 seconds causes the screen to display “MP TRAY SIZE”. If the button is released while this message is displayed, the screen will display the current multi-purpose tray size. Further presses will cycle through the options below: - Paper: Letter Paper: Legal Paper: A4 Paper: Executive Paper: B5 ENV: Monarch ENV: COM-10 ENV: lntntnl DL ENV: lntntnl C5 .- Paper: Letter NOTE: (1) The paper size selected by this button will be stored in Current and Initial Settings. 
(2) This option is available in all feeder selections except Cassette Only. To exit from this mode, press the [ON LINE], [TEST], [RESET] or [PROGRAM] button. 36 - 3.1.3 Light Indicators [READY] READY The READY LED indicates that the printer is ready for use when it is lit. This LED flashes the printer is warming up, then lights continuously. [ALARM] ALARM The ALARM LED will light up in the event of an error which requires operator to take action (e.g. paper out). The bell will sound for 2 seconds whenever this LED lights up. [DATA] The DATA LED shows the data status of the printer. It is continuously lit when data has been received and not printed DATA yet, and it flashes when the printer is waiting for more data. The DATA LED will go out when all received data has been processed. Do not turn the printer off while the DATA LED is lit, otherwise data will be lost. [ON LINE] The ON LINE LED shows that the printer is ready to receive data (the printer is set On Line when this LED is lit. This LED ON LINE is off when the printer is not ready for receiving data (the printer is set Off Line). The LED flashes when the printer is printing a page or when the printing is switching from On Line to Off Line. [PRINT] PRINT This LED is continuously lit while a page is being transferred through the printer. Otherwise, the LED is always off. 37 3.1.4 Hex Dump The Star LaserPrinter 4 can also operate in the hexadecimal mode. This means that it will print information sent to it in its equivalent hexadecimal notation (in base 16 numbers). This mode is useful in correcting communication incompatibilities between some computers and Star LaserPrinter 4, that can result in incorrect text or formatting. For information on performing a Hex Dump, see “Chapter 8. Troubleshooting”. 3.2 SELF TEST The Star LaserPrinter 4’s test print provides a complete summary of what it is currently capable doing for you. It shows the quality of the printing of text. It summarizes the printer’s configuration, including amount of memory and which fonts are installed or available. It explains how the printed page will look at the current settings. This includes paper size and page layout. It tells whether the parallel or serial interface is active. - 3.2.7 Printing Test Sheets There are four stages to the self test. Also refer to [TEST/>] Buttons. of 3.1.2 The 1) Printing a status sheet Hold the [TEST] button until the screen displays “HOLD FOR TEST” then “STATUS SHEET”. Releasing the button will print the 1 page status sheet. 2) Printing a font list (This operation is effective only in StarPage and HP modes.) Hold the [TEST] button until the screen displays “HOLD FOR TEST” then “STATUS SHEET” Press the [TEST] button “once” within one second after “STATUS SHEET” is displayed. The screen displays “FONT LIST”. When you release the button, the printer will produce a font list. 38 - 3) Printing a cleaning paper Hold the [TEST] button until the screen displays “HOLD FOR TEST” then “STATUS SHEET”. Press the [TEST] button “twice” within one second after “STATUS SHEET” is displayed. The screen displays “CLEANING PAPER”. When you release the button, the printer will produce a cleaning paper. This paper is used to clean the fixing assembly (see 7.3 Cleaning the Fixing Assembly). 4) Printing a test print mode paper Keep holding down the [TEST] button until the screen displays “HOLD FOR TEST”, “STATUS SHEET” then “TEST PRINT MODE”. When you release the button, the printer will produce multi-page “barber pole”. 
Pressing the button again will display “TEST PRINT STOP” and stop Printing. 3.2.2 Status Sheet Description You can tell several things about the printer’s settings by just looking at the print, without reading any of it. First (and obviously) the printer is operating. Second, the page is printed in the portrait orientation. That is, the printing runs on the page the way portraits are usually painted, higher than they are wide. When the printed area is wider than it is high, the orientation is referred to as landscape. If the printer is set for landscape orientation, the sample is printed that way. The status sheet is divided into two sections. The first section summarizes your printer’s basic configuration, identifying the version of its operating system (Firmware rev.) and specifying the amount of total RAM (memory) and the amount of RAM available for your use. If your printer has an optional RAM expansion board, its memory size will also be listed here. The latter portion of the status sheet provides information on the options for printer operation that are currently active, in the order you are most likely to need the information. You can change any of these by programming from the front panel. Two sets of selected (or default) options are listed: l Initial - the settings that were selected when the printer was first turned on. 39 l Power-up - the settings that are stored in EEPROM (see Glossary). The function of the EMULATION l GROUP is: Emulation - the printer whose functions the Star LaserPrinter emulate. 4 is set to The NUMBER OF COPIES GROUP function includes; l Number of copies - the number of copies of each page to be printed. The function of CHARACTER l GROUP is: Character - character set, pitch, table - The function of JOB SIZE GROUP is: l Job Size - the size of paper. The functions of the LAYOUT are: l l l l l Orientation - portrait or landscape. Margin settings - left, right and top in relation to the available page area, and page (text) length in number of lines. VMI - Vertical Motion Index. VMI refers to the smallest increment that can be made in the vertical or y axis. Line spacing is a multiple of VMI. End of line - whether auto wrap function is on or off. Auto line feed - whether there is an automatic carriage return (CR) at a paragraph break, an automatic line feed (LF) at the end of each line, and a form feed (FF) after each page. The PAPER FEED GROUP includes: l Feeder - whether the paper will be fed from the cassette or manually. l MP tray size - the size of paper to be used by the multi-purpose The PAGE MODE GROUP function includes: l Page Mode - partial or full The INPUT BUFFER GROUP function includes: l Input Buffer - memory capacity of the input buffer The function of the INTERFACE l 40 GROUP is: Interface - parallel, AppleTalk, or serial tray. - 3.3 PROGRAMMING PANEL FROM THE CONTROL As the status report indicates, the various functions that you can select from the control panel are arranged in an outline form, or a hierarchy. This means there are main groups and several levels within each one. Remember: you enter (by pressing [v]) or exit (by pressing [A]) a Level to select the previous (by pressing [>I) or next (by pressing [<I) Item within a Group. [EXIT] [PREVIOUS] [NEXT] + [ENTER] To get into the Program Mode, you must perform two actions: l l Press [ON LINE] to go off the light, then Press [PROGRAM]. This will cause the screen to respond by displaying the phrase “NUMBER OF COPIES”. 
The order of the main categories within Program Mode is: CHARACTER JOB SIZE LAYOUT PAPER FEED EMULATION PAGE MODE INPUT BUFFER INTERFACE SET POWER-UP LOAD FACTORY SET 41 You move from one item to the next in this level by pressing [<I. Pressing [<I displays the next item of the next higher level. If no such item exists, [v] displays the value that is to be selected with @ added. [ON LINE] ends the Program Mode. Pressing [>I returns the display to the previous item at the same level you are currently in. Pressing [A] displays the next lower (previous) level. Here’s how this works in real life, using a portion of the EMULATION group hierarchy. Set the printer Off Line and press [PROGRAM] so the screen reads “NUMBER OF COPIES”. Then press [<I until the screen displays “EMULATION”. Pressing [v] will display one of the following: StarPage HP LaserJet IIP Epson FX-850 Hex Dump Pressing [<I continuously will list all the available emulations. Suppose you want to select HP LaserJet IIP as the emulation StarPage emulation. from the First get into the Program Mode, l l Press [ON LINE], so that the light is off Press [PROGRAM], COPIES”. after which the screen displays l Now press [<I. The screen responds with “JOB SIZE”. l Pressing [<I twice displays, in order PAPER FEED EMULATION 42 “NUMBER OF I l l l l Pressing [v] here displays “[email protected]” as the default emulation. want to stay in this level of the hierarchy, so this time You Press [<I, giving you “HP LaserJet IIP” which is the emulation you want. To select it, Press [v]. This displays “REINITIALIZE” which means that the printer is being initialized for HP LaserJet IIP emulation. After a while, the screen displays “HP LaserJet [email protected]“. Press [ON LINE] to exit the Program Mode. To see if the printer has been set for HP LaserJet IIP emulation. l Carry out the same operation until the screen displays “EMULATION”. Then press [v]. The screen will display as the default “HP LaserJet [email protected]“. The entire hierarchy is shown on the next pages. 43 STAR LASERPRINTER 4 CONTROL PANEL HIERARCHY To enter program mode: 1. Take the printer off line by pressing the ONLINE button. 2. Enter program mode by pressing the PROGRAM button. 3. Follow the charts below, moving with the buttons indicated. @I 0 a E!!! INTERFACE FACTORY INPUT EMULATION PAGE POWER-UP APPLETALK SERIAL PARALLEL FULL PARTIAL PAGE PAGE - 44 DTR ROBUST POLARITY XON PROTOCOL STOP BIT PARITY DATA BAUD BIT RATE NOTE: 0 HP LaserJet IIP mode only 0 Epson FX-850 mode only 0 Skip in StarPage mode @ Skip in Hex Dump mode NUMBER CHARACTER JOB SIZE LAYOUT PAPER FEED OF COPIES 0 MP TRAY TABLE FEEDER PITCH CHARACTER SET SIZE AUTO END OF LINEFEED LINE MARGIN VMI ORIENTATION 1 1 AUTO MANUAL SET DEFAULT AUTO MARGIN MARGIN TEXT TOP RIGHT LEFT LENGTH MARGIN MARGIN MARGIN SOURCE NUMBER SYMBOL SET 45 MEMO 46 Quick Start with the Star LaserPrinter 4: A Tutorial This chapter is designed for dual use, since most computer users fall into one of two categories. Some people like to read the documentation through from beginning to end. Others like to plunge right into the hands-on mode, using documentation as a quick reference, at most. This chapter is for both groups. It can be read in sequence with the rest of the book. or it can be used as a stand-alone aid. 
4.1 PRELIMINARIES

Your Star LaserPrinter 4 comes in two basic parts:

• the printer body
• the EP-L cartridge

When you place your printer for use with your computer, be sure that:

• it sits on a strong, stable table
• there is circulation on all sides, including the bottom
• you remove all the packing material before you use the printer.

If you are just setting up your Star LaserPrinter 4 and have not read the material in Chapter 2, please take the time to read the section "2.2 Installing the EP-L Cartridge". It is vital that you handle and install this component precisely, because it contains important laser printing devices, as well as the toner ("ink") that actually prints on the paper.

In addition to the multi-purpose tray that is provided as a standard feature, various types of paper can be fed into the printer using an optional paper feeder unit. Five different cassettes are available for use with the feeder unit: A4, Letter, Legal, Executive and Envelope. For installing the feeder unit and cassettes, see the separate manuals.

The printer's on-off switch is located at the front.

4.2 CONTROL PANEL

The front panel of the printer provides information about the printer's internal and operating status. It also allows you to program it for your (and your computer's) specific needs.

The Star LaserPrinter 4 works in two basic modes when the printer is off line:

Normal mode - using the white buttons
Program mode - using the brown buttons

The buttons have different functions in each mode:

BUTTON | NORMAL MODE (WHITE BUTTONS) | PROGRAM MODE (BROWN BUTTONS)
[ON LINE] | When light is on, printer can print. When light is off, all other buttons are enabled | Fast exit (termination) of Program Mode (returns to Printer Ready)
[PRINT] | Print information in memory | No function
[ERROR SKIP/<] | Continue printing or display error code | Display next item within current level of menu selections
[TEST/>] | Initiate tests | Display previous item within current level of menu selections
[RESET/v] | Discard information | Display next level within current item of menu selection. When in last level of current item, executes function
[PROGRAM/A] | Enter program mode | Display previous level within current item of menu selection
[FEEDER SELECT] | Select paper source for printing | No function

([A] = EXIT, [<] = NEXT, [>] = PREVIOUS, [v] = ENTER)

Remember: you enter (by pressing [v]) or exit (by pressing [A]) a Level; you select the previous (by pressing [>]) or next (by pressing [<]) Item within a Group.

The front panel also has some lights. When lit they mean:

READY (green) - the printer is ready for printing or for programming
ALARM (orange) - an error condition exists and the printer is Off Line
DATA (green) - information received, not yet printed
ON LINE (green) - the printer is ready for printing
PRINT (green) - the printer is printing information

4.2.1 Basic operations

There are four basic operations performed from the panel:

• On Line and Off Line - The printer can receive information from the computer and print it only when it is On Line. When the printer is On Line, the light on the [ON LINE] button will be lit. For all other functions, the printer must be Off Line. This is accomplished by pressing the [ON LINE] button until its light is out.

• Form feed - The printer may have unprinted information in its memory. This is indicated by the lighted DATA light. To clear it from the printer, take the printer Off Line. Then press [PRINT]. When all the remaining information is printed, the paper will be ejected and the DATA light will go out.
• Error Skip - In case of an error, the printer will go Off Line and stop printing. If the error is minor, it is possible to continue printing. To do this, press [ERROR SKIP], then [ON LINE] until its light is on. If the error was minor, printing will resume. If the error was more serious, the panel will provide information on how to handle it.

• Reset - To clear the printer memory and restore settings to emulation defaults, press [RESET].

Before you begin printing, at any time, but especially when you are new to the process, it is a good idea to run a test or sample print. To do this, take the printer Off Line. Then press and hold [TEST] until the display reads STATUS SHEET. The printer will provide a status sheet showing:

• its configuration
• the quality of its text printing, and
• a summary of its settings - initial and power-up preset.

4.3 CONNECTING THE PRINTER TO THE COMPUTER

The Star LaserPrinter 4 StarScript version comes equipped with a standard Centronics parallel interface, an AppleTalk interface, and an RS-232 serial interface. They can be connected simultaneously, though only one can be active at any one time.

The printer comes from the factory pre-set with the Centronics parallel interface active. You can confirm this by looking at the Interface Group portion of the sample print.

[The sample print excerpt shows, in Initial and Power-up columns: PAPER FEED - Feeder: Auto Selection, Paper: A4; PAGE MODE: Partial page; INPUT BUFFER: 1K byte; INTERFACE: Parallel; followed by an error page history.]

This means that the Centronics parallel interface is active. If the serial port were active, that line would state serial.

If you are going to use a Centronics interface, you don't need to make any changes. If you want to use an RS-232 interface, you must perform a programming sequence. Programming will involve pressing various panel keys and following the information provided on the display screen. The control panel indicates the selected function with an @ symbol following the function displayed.

4.3.1 Selecting the Interface

Note that when you have completed the selection sequence, the settings must be stored in EEPROM (see 4.3.3 Saving Settings).

• Take the printer Off Line by pressing the [ON LINE] button until the button's light is off.
• Then press [PROGRAM].
• Press [<] until the display screen reads INTERFACE.
• Press [v]. Now the screen will read PARALLEL @.
• Now press [<]. The screen will change to SERIAL.
• With the display reading SERIAL, this time press [v]. The screen will show BAUD RATE.
• Pressing [v] now displays 9600 baud.
• Press [<] to display a series of options, from 300 to 19200. Keep pressing until the one you want appears on the screen.
• Pressing [v] now displays 9600 baud @.
• Then pressing [A] displays BAUD RATE. Pressing [<] displays these options:

DATA BIT
PARITY
STOP BIT
PROTOCOL
ROBUST-XON
DTR POLARITY

The factory settings for these options are:

8 bit
no parity
1 stop bit
DTR High

Whenever you want to change any of the serial parameters, follow the same sequence as for selecting the baud rate. For example, to change the data bit, at SERIAL press [v], then [<] until DATA BIT is displayed on the screen. Pressing [v] will display the first option, and [<] will give the rest of them. Press [v] when your choice appears on the screen.

Finally, press [ON LINE], which readies the Star LaserPrinter 4 for printing.
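If you are driving the printer from a modern host, the host's serial port must be opened with exactly the parameters chosen above. The following minimal sketch is an illustration only, not part of the original manual: it assumes Python 3 with the pyserial package on the host, and the port name "/dev/ttyS0" is hypothetical (on a DOS or Windows host it would be COM1 or similar).

    # Minimal host-side sketch (assumes Python 3 + pyserial; port name is
    # hypothetical). The settings mirror the printer's serial factory defaults.
    import serial

    ser = serial.Serial(
        port="/dev/ttyS0",
        baudrate=9600,                 # must match the BAUD RATE menu setting
        bytesize=serial.EIGHTBITS,     # DATA BIT: 8 bit
        parity=serial.PARITY_NONE,     # PARITY: no parity
        stopbits=serial.STOPBITS_ONE,  # STOP BIT: 1 stop bit
        xonxoff=False,                 # PROTOCOL: DTR ("hardware") handshake,
        dsrdtr=True,                   #   signalled on the DSR/DTR lines
    )
    ser.write(b"A line of text for the current emulation\r\n")
    ser.close()

If you select the XON/XOFF protocol instead (see below), open the port with xonxoff=True and dsrdtr=False so that the host honours the software handshake.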
If your applications program requires an XON/XOFF handshake ("software" rather than DTR "hardware"), set it from the SERIAL menu by:

• Pressing [v], then [<] until you see PROTOCOL.
• Pressing [v], which displays DTR.
• Pressing [<] until the display reads XON/XOFF. Press [v] to execute.

The next step is to make your choice active in the printer.

• Press [A] 3 times to display INTERFACE.
• Press [<] to display SET POWER-UP.
• Press [v] to store the current settings into EEPROM.
• Press [ON LINE], which allows the Star LaserPrinter 4 to begin printing through the serial interface.

4.3.2 Activating the Parallel Interface

To return to a parallel interface, do this:

• Take the printer Off Line by pressing the [ON LINE] button until the button's light is off.
• Then press [PROGRAM].
• Press [<] until the screen displays INTERFACE.
• Press [v], which displays SERIAL @.
• Press [<], which gives PARALLEL.
• Press [v] to select parallel.
• Press [A] once to display INTERFACE.
• Press [<] to display SET POWER-UP.
• Press [v] to store the current settings into EEPROM.

Pressing [ON LINE] will allow the Star LaserPrinter 4 to begin printing through the parallel interface.

4.3.3 Saving the Setting

The new setting will remain in the printer's RAM memory only until it is turned off. To understand why, you should realize that there are four different types of settings stored in the printer's memories.

• The FACTORY SETTINGS, which are those put into the printer's unchangeable ROM memory at the factory.
• The POWER-UP SETTINGS, which the user can create, then store in a permanent memory called EEPROM, where they are kept even when the power is turned off at the end of the session in which they are created. These settings override the Factory Settings, being activated when the printer is turned off, then on again.
• The INITIAL SETTINGS, which the user can create and use as long as the printer is not turned off. These are stored in temporary or volatile RAM memory. They override the Power-up Settings.
• The CURRENT SETTINGS, which are those stored in temporary RAM memory. These may be issued by software commands that override all other settings.

To make the setting permanent, you must save it in EEPROM! Perform this programming sequence, after taking the printer Off Line:

• Press [PROGRAM].
• Press [<] until the screen displays SET POWER-UP.
• Press [v], after which the screen will display EEPROM LOAD DONE.
• Finally, press [ON LINE].

Your new settings are now saved, and you can continue with other printing, if desired.

4.3.4 Returning to Factory Settings

If for some reason you want to start from the beginning, with the original factory settings, follow this procedure:

• In the Program mode, press [<] until the screen reads LOAD FACTORY SET.
• Press [v]. The screen will display EC Set @.
• Pressing [<] displays the other option: US Set. Select the one you want.
• Press [v]. The screen will briefly display LOAD FROM ROM OK, then LOAD FACTORY SET.

The factory settings are now restored, and you can go On Line for other activities.

4.4 MANUAL FEED

The Star LaserPrinter 4 can feed paper automatically using an optional cassette. If an optional cassette is installed, the factory default of the feeder is Cassette Only. To change this setting from the control panel, put the printer Off Line, and press [PROGRAM].

• Press [<] until the screen displays: PAPER FEED.
• Press [v], and the screen will now display: FEEDER.
• Press [v], which will display Cassette Only @.
• Pressing [<] displays these options:

Auto Selection
Cassette
MP Tray
Manual Feed

• Press [<] to display MP Tray or Manual Feed.
• Pressing [v] confirms the change.
• Finally, press the [ON LINE] button to make the Star LaserPrinter 4 ready for printing.

Manual paper feeding should always be done from the multi-purpose tray (MP tray). If OHP transparencies, label sheets or envelopes are to be fed manually, the face-up tray should be used.

If you get a paper jam during either manual or automatic feed, refer to the chapter on "Troubleshooting" for ways to clear the machine.

4.5 PAPER SIZE

The factory default of MP TRAY SIZE is A4 paper. If you want to use another size of paper, or envelopes, you must first instruct the printer. After going Off Line and pressing [PROGRAM]:

• Press [<] until the screen reads PAPER FEED.
• Press [v], which will display FEEDER.
• Pressing [<] displays MP TRAY SIZE.
• Press [v], which will display Paper : A4 (for EC set) or Letter (for US set) @.
• Pressing [NEXT] repeatedly will list the other choices:

Paper : Letter
Paper : Legal
Paper : A4
Paper : Executive
Paper : B5
Env. : Monarch
Env. : COM-10
Env. : Intntnl DL
Env. : Intntnl C5

• Press [v] when the required paper/envelope size is displayed on the screen.
• Finally, press [ON LINE], which readies the Star LaserPrinter 4 for printing.

Before you start printing, refill the cassette with the appropriate paper.

4.6 PRINTING IN LANDSCAPE ORIENTATION

If you want to change from portrait to landscape orientation for your printed page, follow this procedure, after going Off Line and pressing [PROGRAM]:

• Press [<] until you get LAYOUT.
• Press [v], which will display ORIENTATION.
• Press [v] for Portrait @.
• Pressing [<] gives Landscape.
• Pressing [v] selects Landscape.

4.7 CHANGING CHARACTER SET

If you want to use a character set other than that provided at the factory, follow this procedure after going Off Line and pressing [PROGRAM]:

• Press [<] until you see CHARACTER.
• Press [v], which displays, in the case of the HP LaserJet IIP emulation, SOURCE R (or SOURCE C or SOURCE S) @. (SOURCE R means Resident, SOURCE C Cartridge, and SOURCE S Soft font.)
• Pressing [v] displays NUMBER 1 @.
• Pressing [v] again gives various options for SYMBOL SET.
• Press [<] until you see the desired one.
• Then press [v].
• Press [ON LINE].

4.8 SELECTING DISPLAY LANGUAGE

The language for the display can be selected from English, French, German, Italian or Spanish. To select the display language, turn off the printer, then follow the procedure below.

1. Hold down [RESET/v] and turn on the printer (make sure that [RESET/v] is held at least until the screen displays "SELECT LANGUAGE"). The screen displays:

SELECT LANGUAGE
Memory Test 2 MB
English @

2. Pressing [ERROR SKIP/<] or [TEST/>] changes the language displayed on the screen.

3. Press [ON LINE] after selecting the new language. The language will be saved and the printer goes On Line.

If [RESET/v] is released before the "SELECT LANGUAGE" message is displayed, the printer will not enter the display language mode (i.e. normal startup operation will be carried out). The factory setting of the display language is English.
Font Selection

The style of printing that appears in your finished work consists of five elements:

• the symbol/character set - the letters, numbers, and symbols available for use
• the fonts present in the Star LaserPrinter 4 - some fonts come as standard equipment, while others are available in cartridges that are installed in the cartridge slot of the printer; you may also purchase fonts on a disk and download them from the computer to the Star LaserPrinter 4's RAM (for more information consult a software dealer)
• font attributes - consisting of spacing, pitch, point size, character style, stroke weight, and typeface
• the printer you select to emulate - the Star LaserPrinter 4 StarScript version can emulate PostScript and two widely available printers, the Hewlett Packard LaserJet IIP (a laser printer) and the Epson FX-850 (a dot-matrix printer)
• the capabilities of your applications software.

The Star LaserPrinter 4 StarScript version comes equipped with many families of fonts. Try printing out the Font List with the control panel operation. (Refer to "3.2 SELF TEST".) You will get a Font List like the one shown on the following pages.

[The printed Font List is reproduced as an image in the manual. Its first page lists the resident StarPage interpreter fonts - the ITC Avant Garde Gothic, ITC Bookman, Courier, Helvetica (Swiss721), New Century Schoolbook, Palatino (Zapf Calligraphic), Symbol, Times (Dutch801), ITC Zapf Chancery and ITC Zapf Dingbats families - with their PostScript and SWA equivalents. The second page lists the resident HP LaserJet IIP-mode fonts (Courier and LinePrinter), giving number, pitch, point size, style, stroke weight, typeface, symbol set, escape sequence and a printed sample for each.]

Many of the available fonts can be selected from the control panel in the HP LaserJet IIP mode. The ultimate in printing flexibility is provided by programming with the command sets.
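As one example of such command-set programming, the sketch below is illustrative only and does not appear in the original manual. It assumes the printer is in HP LaserJet IIP emulation and that the host runs Python; the device path "/dev/usb/lp0" is hypothetical and stands in for whatever port or spooler your system provides. The escape sequences themselves are standard HP PCL font-selection commands.

    # Select a font in HP LaserJet IIP (PCL) emulation, then print a line.
    # Device path is hypothetical; the escape sequences are standard PCL.
    ESC = b"\x1b"

    job = b"".join([
        ESC + b"E",       # printer reset
        ESC + b"(8U",     # symbol set: Roman-8
        ESC + b"(s0P",    # spacing: fixed
        ESC + b"(s10H",   # pitch: 10 characters per inch
        ESC + b"(s12V",   # height: 12 point
        ESC + b"(s0S",    # style: upright
        ESC + b"(s0B",    # stroke weight: medium
        ESC + b"(s3T",    # typeface: Courier
        b"Sample line in 10-cpi, 12-point Courier.\r\n",
        ESC + b"E",       # reset again; any partial page is ejected
    ])

    with open("/dev/usb/lp0", "wb") as printer:
        printer.write(job)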
For more on this, see Chapter 6, "Interfacing With Applications Programs", and the Applications Manual.

For each of the two printers that the Star LaserPrinter 4 emulates, there are default print styles that it will use if you do not make other choices. In the material that follows, the default values will first be presented together, for your convenience. Then the entire range of choices for each attribute without an optional Font Cartridge will be presented; the default values will be marked with an *.

5.1 HEWLETT PACKARD LASERJET IIP

Default Values, primary and secondary fonts:

Character number: 1 (Courier, 10 cpi, 12 point, Medium)
Symbol set: Roman-8

Available Values, primary and secondary fonts:

Character number:
1* (Courier, 10 cpi, 12 point, Medium)
2 (Courier, 10 cpi, 12 point, Bold)
3 (Courier, 10 cpi, 12 point, Italic)
4 (Courier, 12 cpi, 10 point, Medium)
5 (Courier, 12 cpi, 10 point, Bold)
6 (Courier, 12 cpi, 10 point, Italic)
7 (LinePrinter, 16.66 cpi, 8.5 point, Medium)

Symbol set:
Roman-8*, ECMA94, IBM PC, IBM DN PC-850, ISO 2: IRV, ISO 4: UK, ISO 6: USASCII, ISO 10: Swedish, ISO 11: Swedish, ISO 14: JIS ASCII, ISO 15: Italian, ISO 16: Portug, ISO 17: Spanish, ISO 21: German, ISO 25: French, ISO 57: Chinese, ISO 60: Norweg, ISO 61: Norweg, ISO 69: French, ISO 84: Portug, ISO 85: Spanish, HP German, HP Spanish

5.2 EPSON FX-850

Default Values:

Character set: USA
Pitch: 10 cpi
Table: Italic

Available Values:

Character set: USA*, French, Germany, UK, Denmark I, Sweden, Italy, Spain, Japan, Norway, Denmark II
Pitch: 10 cpi*, 12 cpi, 17 cpi, 20 cpi, Proportional
Table: Italic*, Graphics Set #1, Graphics Set #2

Interfacing With Applications Programs

If this were the best of all possible computer worlds, every software application would run automatically on the Star LaserPrinter 4 or any other printer. The fact is that software applications - word processing, spreadsheets, databases, or others - are designed to work with a specific printer or a limited group of printers. Some very popular software applications were written before laser printers became common, and require a dot-matrix or daisy-wheel printer. Some software is linked to a particular brand of computer and printer. And some can be used on only one brand of laser printer.

As you have read, some of the commands needed for printing are resident in the Star LaserPrinter 4. Some can be programmed from the control panel. And some must be issued by the computer, from within a program (a short sketch of such commands follows this introduction).

Some applications programs allow the user to specify a printer or printers, either by name or by type. Once the selection is made, it becomes part of the program file. As a result, printing usually requires nothing more than issuing a one or two keystroke command.

In this chapter you will learn how to specify the Star LaserPrinter 4 for four popular applications programs for IBM-compatible computers: Lotus 1-2-3, WordPerfect, Microsoft Word, and Microsoft Windows.

The Star LaserPrinter 4 will print graphics as well as words and numbers, emulating Hewlett Packard graphics and all densities of the Epson dot matrix printers. It can also be used with the two popular desktop publishing programs, Aldus PageMaker and Xerox Ventura Publisher, which uses the GEM desktop environment. Both are compatible with the HP LaserJet IIP. For further information consult the Applications Manual.

Before you begin printing, be sure your Star LaserPrinter 4 is correctly configured for your computer by having either the parallel or serial interface selected and the correct cabling.
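As an illustration of commands issued from within a program (mentioned in the introduction above), the following sketch, which is not from the original manual, switches the character set and pitch of the Epson FX-850 emulation with standard ESC/P control codes. It assumes Python on the host; the handle "LPT1" is hypothetical and stands in for your actual printer port or spooler.

    # Change Epson FX-850 emulation settings by software (standard ESC/P codes).
    ESC = b"\x1b"

    job = b"".join([
        ESC + b"R" + bytes([2]),  # international character set: 2 = Germany
        ESC + b"M",               # elite pitch, 12 cpi (ESC "P" restores 10 cpi)
        b"A line printed at 12 cpi.\r\n",
        ESC + b"p" + bytes([1]),  # proportional spacing on (0 = off)
        b"A line printed in proportional type.\r\n",
        b"\x0c",                  # form feed ejects the page
    ])

    with open("LPT1", "wb") as printer:  # hypothetical port handle
        printer.write(job)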
Prior to printing, make sure that you have selected the same emulation from the front panel as the one you specified in the application.

6.1 LOTUS 1-2-3 RELEASE 2

This section explains how to install Lotus 1-2-3 for use with your Star LaserPrinter 4. For detailed information, refer to the Lotus Getting Started manual.

First, add the following commands to the AUTOEXEC.BAT file, depending on whether the parallel or serial interface is used.

Parallel: If your printer is connected to the computer's parallel port LPT1, add the following command to the AUTOEXEC.BAT file:

MODE LPT1:,,P

Serial: If your printer is connected to the computer's serial port COM1, add the following commands to the AUTOEXEC.BAT file:

MODE COM1:9600,N,8,1,P
MODE LPT1:=COM1

• Installing the Lotus 1-2-3 INSTALL Program

If Lotus 1-2-3 has not been installed, you must install it before proceeding. At the DOS prompt, type LOTUS and press [RETURN]. This causes the computer to display the 1-2-3 Access System. Move the cursor to "Install", then press [RETURN] to run the INSTALL program. Lotus may ask you to insert the UTILITY disk. This will display the MAIN MENU. You can select your printer following the procedure below.

1) Select "Change Selected Equipment" and press [RETURN], which displays the Selected Equipment screen.
2) Select "Text Printer(s)" and press [RETURN], which displays the Text Printer(s) screen.
3) Select "HP" and press [RETURN]. This displays the Text Printer(s) screen.
4) Select "2686 LaserJet or LaserJet+" and press [RETURN]. The printer type for text has now been selected.

To select the printer type for graphics, go back to the Selected Equipment screen.

5) Select "Graphics Printer(s)" and press [RETURN], which displays the Graphics Printer(s) screen.
6) Select "HP" and press [RETURN] to display the Graphics Printer(s) screen.
7) For 150 or 300 DPI, select "LaserJet+" and press [RETURN]. For 75 DPI, select "LaserJet" and press [RETURN].

When you have completed the selection of a graphics printer, press [F10]. Your current selections will be displayed. Make sure that the selections are correct. If not, repeat the installation process.

To save your selections, press [Esc] to return to the Selected Equipment screen.

1) Select "Save Changes" and press [RETURN], which displays the Saving Changes screen.
2) Read the message and press [RETURN].
3) Now you have saved the changes you made. Press [F9] to go to the Main Menu, or press [RETURN] to leave the install program. The EXIT screen will be displayed.
4) Select "YES" and press [RETURN].

• Selecting Default Settings

After running the INSTALL program, you must select printer defaults. At the DOS prompt, type LOTUS and press [RETURN] to display the 1-2-3 Access System. Move the cursor to "1-2-3", then press [RETURN] to run the 1-2-3 program. Lotus may ask you to insert the SYSTEM disk.

At the opening screen, press [/] to access the spreadsheet menu. Then type [W]orksheet, [G]lobal, [D]efault and [P]rinter. The configuration menu will be displayed. Now you are ready to make some selections about the configuration of your Star LaserPrinter 4.

1) Type [I]nterface and select the number which corresponds to the interface for your printer. For example, for parallel type "1", and for serial type "2".
2) Select appropriate settings for auto linefeed, left margin, right margin, top margin, bottom margin, page length, wait, setup and name.
3) When you have completed these, type [Q]uit, then [U]pdate. 1-2-3 will save your selections and use these defaults when printing.
Now you are ready to use your Star LaserPrinter 4. You can also use the Star LaserPrinter 4's Epson FX-850 emulation, following the same routine from the Lotus menus.

6.2 WORDPERFECT VERSION 5.0

WordPerfect must be installed before you can select a printer. Refer to the "Installation" section in your WordPerfect Manual. Once this is done, follow the procedure below to select a printer.

1) Press [Shift] and [F7], then select [S] Select Printer.
2) Select [2] Additional Printers, which displays the Select Printer screen.
3) From the list of defined printers, select either HP LaserJet IIP (or HP LaserJet II) or Epson FX-850, then press [RETURN].
4) Press [RETURN] to accept the displayed printer filename, or enter a new name, then press [F7] Exit. The Select Printer: Edit screen will be displayed.
5) Now you must specify the interface connection. Select [2] Port at the Select Printer: Edit menu, which displays a list of ports.
6) If you are using a parallel interface, for LPT1, type "1".
7) If you are using a serial interface, for COM1, type "4". WordPerfect will display the Select Printer: COM port menu. You must now specify the serial parameters, including baud rate, parity, stop bits, character length and XON/XOFF function. Be sure that you specify the same values in WordPerfect as you do for the Star LaserPrinter 4.

To save your WordPerfect selections, press [F7] to exit the program.

6.3 MICROSOFT WORD 4.0

To use Microsoft Word with your Star LaserPrinter 4, appropriate Printer Descriptions (PRDs) must be installed. PRDs are found on the Supplemental Printer Diskettes. These PRDs provide (1) internal and cartridge fonts in both orientations and (2) Hewlett-Packard soft fonts. Install them following the procedure described in the Microsoft Word manuals.

In Microsoft Word, you can make your printer selections through the Print Menu.

1) At the text screen, type [E]sc and [P]rint.
2) In the resulting menu, select OPTIONS by typing [O]ptions. A list of available printers appears at the top of the screen. Each of these printers is the subject of a Printer Description (PRD) file.

6.4 MICROSOFT WINDOWS

Microsoft Windows is a program that coordinates the use of various programs that might not otherwise be compatible on the screen. Windows can be configured for your printer, but you should also be aware that any applications program that you use may have its own printer requirements, which must be taken into consideration.

To choose a printer for Windows, at the initial Menu select CONTROL PANEL. From the resulting Menu, select the INSTALLATION Menu, then ADD A PRINTER. The screen will display a box that reads:

Add printer
Insert the disk with the printer file you wish to add into drive A, or choose an alternative drive/directory:
A:\
OK    CANCEL

Insert the Utilities disk in drive A and press OK. Now the screen displays this box:

Available Printers
Printer File:
Epson FX-80
NEC 3550
HP LaserJet
HP LaserJet Plus
HP 7470A
Add    Cancel

Highlight HP LaserJet Plus or Epson and select ADD. Another box appears in response:

Copy associated printer file HPLASERP.DRV to drive/directory:
C:\Windows (if you have a hard disk)
Yes    No    Cancel

Selecting YES confirms your printer choice and returns you to the CONTROL PANEL Menu.

If you are just starting with Windows, you should select SETUP from the CONTROL PANEL Menu. This will present a Menu with three options. You should first select PRINTER and from the list choose the printer you wish to use. Next, from SETUP select CONNECTIONS.
This will let you designate the parallel or serial port for the printer, clicking OK to confirm it.

If you are using a serial printer, go to SETUP once again and select COMMUNICATIONS. After you indicate the active port you have already chosen, Windows will present a series of settings options. Type in the desired baud rate and "push the button" for your selection of word (bit) length, parity, stop bits, handshake, and port. Select HARDWARE handshake and 1 stop bit. Be sure that the other selections match the printer settings shown under the Interface Group on the Star LaserPrinter 4's sample print.

Maintaining the Star LaserPrinter 4

The LaserPrinter 4 does not require much care and maintenance. However, it is important to perform a few cleaning tasks to maintain your printer in good condition. This chapter explains procedures for replacing the EP-L cartridge, handling the EP-L cartridge and printer, and cleaning the printer.

7.1 REPLACING THE EP-L CARTRIDGE

Replace the EP-L cartridge in the following cases:

• If print quality is still low, even after you have redistributed the toner by removing the cartridge and rocking it gently from end to end about five or six times.
• If transparency films show a stain problem at either the top or bottom of the films.

NOTE: For directions on replacing the EP-L cartridge, refer to "2.2 Installing the EP-L Cartridge".

7.2 STORAGE AND HANDLING PRECAUTIONS FOR THE EP-L CARTRIDGE

The EP-L cartridge contains both the photosensitive drum and the toner used for printing. Since the drum is extremely sensitive to light, it may be permanently damaged if it is exposed to direct sunlight or strong light. Caked or unevenly distributed toner may result in poor print quality. Therefore, always observe the following rules:

• Always keep the EP-L cartridge in the aluminum bag in which it was originally packed, and do not open the bag until you are ready to install it in the printer.
• Do not store the cartridge where it will be exposed to direct sunlight.
• Store the cartridge with the label facing up. Do not turn it upside down or stand it on end.
• Store the cartridge at a temperature of between 0°C and 35°C (32°F and 95°F).
• Do not store the cartridge in salty air, or where there are corrosive gases such as ammonia.
• Keep the cartridge away from CRTs, disk drives and floppy disks. The magnet in the cartridge can adversely affect them.
• Always keep the cartridge away from children.
• Be sure to use the cartridge before its expiration date. Otherwise, print quality will be affected.

When handling the EP-L cartridge, also pay attention to the following points, in addition to the above:

• Do not touch the bottom of the cartridge when handling it.
• Do not open the drum protective shutter. If it is opened, print quality may be affected.

7.3 CLEANING THE FIXING ASSEMBLY

Transparency films may sometimes show a stain problem at either the top or bottom of the films. This problem may be caused by a dirty paper path or by a flaw on the EP-L cartridge. In this case, clean the fixing assembly, and replace the EP-L cartridge if necessary. Regular cleaning of the fixing assembly will reduce the possibility of paper jams and will prolong the life of the printer.

The fixing assembly can be cleaned using cleaning paper. First print a sheet of "CLEANING PAPER" on letter, A4, or legal size paper according to the following procedure (see "3.2.1 Printing Test Sheets").
1) Hold the [TEST] button until the screen displays "HOLD FOR TEST" then "STATUS SHEET".
2) Press the [TEST] button twice within one second after "STATUS SHEET" is displayed. The screen displays "CLEANING PAPER".
1Message 1Action 1Meaning CALL SERVICE 01 fixing assembly; bit 2 status 2, processing stops; alarm beeps; press [ERROR SKIP] to reset engine and recheck CALL SERVICE 02 faulty BD; status 2, bit 3 CALL SERVICE 03 scanner motor malfunction, status 2, bit 4 - processing stops; alarm beeps; press [ERROR SKIP] to reset engine and recheck processing stops; alarm beeps; press [ERROR SKIP] to reset engine and recheck CALL SERVICE 04 improper communication processing stops; alarm beeps; turn off the power then on to reset engine and recheck 8.2.2 Controller Service Call Messages Controller error messages result from failure in either hardware or software. Some controller error messages occur during startup tests. The messages mean that the controller is malfunctioning, but is still able to display the error massage. Certain conditions can also be detected during controller reset, on insertion of a cartridge, and at other times after the completion of power-up initialization. An error stops the test sequence, the unit goes off line, and the alarm sounds. The controller will attempt to recover, if possible. If the problem is not found during initialization, but recurs during operation, most likely the fault is in the hardware. 78 _ - r Message Meaning Action CALL SERVICE 05 CRC error in controller main ROM program program halts; fatal CALL SERVICE 06 CRC error in resident font ROM program halts; fatal CALL SERVICE 07 read/write test error in program halts; fatal on-board RAM CALL SERVICE 08 read/write test error in sequence stops; alarm beeps; shut off power and fix probexpansion RAM lem; or press [ERROR SKIP] to continue, in which case the expansion RAM will be ignored CARTRIDGE BAD CRC error in Cartridge (if installed) sequence stops; alarm beeps; shut off power and fix problem; or press [ERROR SKIP] to continue, in which case the cartridge will be ignored ZALL SERVICE 10 CRC error in EEPROM sequence stops; alarm beeps; shut off power and fix problem; or press [ERROR SKIP] to continue, in which case program uses factory settings as power-up settings UO FONT PRESENT no font present program halts; fatal UMER ERROR counter-timer error program halts; fatal 79 - 8.3 OPERATOR CALL MESSAGES Whenever the screen displays operator call messages, the printer goes off line and the alarm sounds. The problem is one that the operator should be able to resolve. After making any corrections, you should press [ERROR SKIP] to recheck the status. If the error message is not displayed again, you may continue printing straight away (if the error was one of the Engine Problems below). If the error message was of any other type, you should press [ON LINE] before continuing the print job. 8.3.1 Engine Problems These are mechanical problems that prevent the engine from being ready to print. Message Meaning Action CASS: REFILL X (X means the current job size) cassette is current feeder; out of paper printing stops; refill paper TRAY: REFILL X (X means the current job size) no paper in the multipurpose tray printing stops; refill paper LOAD X CASS (X means the current job size) cassette is current printing feeder; no cassette in LC sette slot PAPER JAMMING paper is jammed within; status 1, bit 5 SET EPKLOSE side cover is open or no close the side cover or install the EP-L cartridge EP-L cartridge 80 stops; install the cas- follow printer procedure “8.6 Paper Jamming” in - 8.3.2 Font/Emulation Cartridge Problems The font and emulation cartridge are plugged into the printer. 
They can should not be inserted or withdrawn while the DATA LED is continuously lit. If someone does this, the screen will indicate “CARTRIDGE ERROR”. The printer must be reinitialized by turning off the power and then on again. If an emulation cartridge is removed while it is being accessed, the screen will indicate “CARTRIDGE ERROR” until the printer is re-powered up. If a font cartridge is removed while the DATA LED is blinking, the screen will display “REINSERT CART”. In this case, the operator must insert the font cartridge or press [ERROR SKIP]. If [ERROR SKIP] is pressed, the printer will select the closest font. Meaning Message Action CARTRIDGE ROR ER- emulationorfont cartridge Turn the power off then on CARTRIDGE ROR ER- emulation cartridge is re- Turn the power off then on is removed while DATA again to initialize the printer LED is on REINSERT CART moved while emulation again to initialize the printer font cartridge is removed insert the cartridge or press while DATA LED is [ERROR SKIP] blinking 8.3.3 Optional Hardware The following messages will be displayed if necessary optional hardwares (e.g. RAM expansion board) have been removed. Action Message Meaning INSUFFICIENT RAM full page .mode as power- repower-up with expansion up setting; no expansion RAM board, orpress [ERROR SKIP] RAM installed NO EMULATION CART cartridge emulation mode repower-up with the emulaas power-up setting; no tion cartridge, or press [ERemulation cartridge in- ROR SKIP] stalled 81 8.3.4 Change Paper Size -. Certain emulations permit job size to be specified by the host computer. Also, job size can be selected from the menu. If a change in job size is required, the screen will display a message to that effect. The alarm will beep and the printer will go Off Line. You may have the printer ignore such a message by pressing [ERROR SKIP]. (In this case, the printer will print the job size image on the different size paper.) If the current feeder is cassette, inserting the correct paper cassette causes the printer to start printing automatically. If the current feeder is the multi-purpose tray, changing the multi-purpose tray size via panel starts printing automatically. 
Action Message Meaning CASS: CHANGE A4 A4 size is required; cas- follow procedure above sette TRAY: CHANGE A4 A4 size is required; multi- follow procedure above purpose tray CASS: CHANGE LTR Letter size is required; cas- follow procedure above sette TRAY: CHANGE LTR Letter size is required; multi-purpose tray CASS: CHANGE LGL Legal size is required; cas- follow procedure above sette TRAY: CHANGE LGL Legal size is required; multi-purpose tray CASS: CHANGE EXE Executive size is required; follow procedure above cassette TRAY: CHANGE EXE Executive size is required; follow procedure above multi-purpose tray CASS: CHANGE B5 B5 size is required; cas- follow procedure above sette TRAY: CHANGE B5 size is required; multi- follow procedure above purpose tray BS 82 - follow procedure above follow procedure above - - Message Meaning Action CASS: CHANGE MON Monarch size is required; cassette follow procedure above TRAY: CHANGE MON Monarch size is required; multi-purpose tray follow procedure above CASS: CHANGE COM Corn- 10 size is required; cassette follow procedure above TRAY: CHANGE COM Corn-10 size is required; multi-purpose tray follow procedure above CASS: CHANGE DL DL size is required; sette cas- follow procedure above TRAY: CHANGE DL DL size is required; multipurpose tray follow procedure above CASS: CHANGE C5 C5 size is required; sette cas- follow procedure above TRAY: CHANGE CS C5 size is required; multipurpose tray follow procedure above 83 8.3.5 Manual Paper Feed Certain emulations permit specifying manual paper feed from the host computer. Also, you may select manual feed from the menu. The following message tell you to hand feed paper of a specific size. The alarm beeps and the printer goes Off Line. When you feed the appropriate paper into the multipurpose tray and press [ON LINE], the message will be turned off and printing will continue. The operator has the option of pressing [ERROR SKIP], which turns off the message and causes the printer to continue printing. If there is no paper in the multi-purpose tray, pressing [ERROR SKIP] turns off the message and causes the printer to feed paper from cassette and then displays the message again. Message Meaning Action M-FEED A4 A4 paper to be hand fed follow procedure above M-FEED LETTER Letter paper to be hand fed follow procedure above M-FEED LEGAL Legal paper to be hand fed follow procedure above M-FEED EXEC. Exec. paper to be hand fed follow procedure above M-FEED BS B5 paper to be hand fed follow procedure above M-FEED MONARCH Monarch paper to be hand fed follow procedure above M-FEED COM-10 COM- 10 size paper to be hand follow procedure above fed M-FEED DL DL size paper to be hand fed follow procedure above M-FEED CS C5 paper to be hand fed follow procedure above I - __. . 84 - 8.4 OPERATOR INFORMATION MESSAGES Certain problems with communication with the host computer may be detected. Messages listed below provide information, but do not halt the printing process. The printer uses a default value if necessary. You may remove the message by pressing [ERROR SKIP]. Note that if there are multiple errors, the last one detected is displayed. 8.4. 
8.4.1 Host Communication Problems

Message | Meaning | Action/Status
LINE ERROR | parity error in host data | follow procedure above
INPUT OVERFLOW | input buffer overflow | follow procedure above
BAD FONT DATA | downloaded font bad | downloaded font ignored
BAD FONT CODE | downloaded font in error | downloaded font ignored
BAD CHAR CODE | downloaded character outside of font index | downloaded font ignored

8.4.2 Function or Size Incompatibility

These are problems that occur when the host computer requests a function not provided, or a function that requires more memory than is available.

Message | Meaning | Status
PAGE OVERFLOW | page buffer overflow | the partial page currently composed is printed and ejected; remaining data for the page is printed on the next sheet
CANNOT ROTATE | insufficient memory to rotate font | font rotation aborted; printing continues using the closest available font
CANNOT DOWNLOAD | insufficient memory to download font | download is aborted; printing continues using the closest available font
CHAR NOT IN FONT | received character not in currently selected font | a blank space is printed for that character

8.4.3 Font/Emulation Cartridge

The font and emulation cartridges plug into the printer. The operator may insert or withdraw a cartridge only when the DATA LED is off or blinking. The following messages acknowledge the change.

Message | Meaning | Status
CART. CHANGED | font cartridge is inserted or removed when the DATA LED is off | message remains for one second
CART. CHANGED | font cartridge is inserted when the DATA LED is blinking | message remains for one second
CART. CHANGED | emulation cartridge is inserted when the DATA LED is off or blinking | message remains for one second
CART. CHANGED | emulation cartridge is removed when the DATA LED is off or blinking, or when the printer is not in cartridge emulation | message remains for one second

8.5 STATUS MESSAGES

Status messages are low priority messages that either indicate normal conditions or provide warnings that need not be acted upon immediately. These conditions should not occur when the printer is On Line. If they do, however, the alarm will beep and the printer will go Off Line. For some conditions, however, there is the possibility of the condition occurring when the printer is On Line. If it does, the alarm will not sound and the printer will remain On Line.
Message | Meaning | Action/Status
PRINTER WARMUP | printer is warming up | service call not necessary
ENGINE TEST | operator has initiated a test of the printing engine | -
ENGINE RESET | engine is being reset by the controller | -
REINITIALIZE | soft initialization in progress | occurs when [RESET] is pressed or the emulation is changed; may also occur when the printer is on line if the emulation is changed or a reset is sent from the host
STATUS SHEET | status sheet printout | occurs when [TEST] is held for at least 3 but less than 6 seconds
FONT LIST | font list printout | occurs when [TEST] is held at least 3 seconds and pressed again within one second
CLEANING PAPER | cleaning paper printout | occurs when [TEST] is held for at least 3 seconds and pressed twice
MP LOAD THE PAP | waiting for cleaning paper | occurs after printing the cleaning paper
NOW CLEANING | feeding cleaning paper | occurs while the cleaning paper is being fed
CLEANING DONE | finished cleaning | occurs after cleaning; remains for one second
TEST PRINT MODE | test pattern printout | occurs when [TEST] is held for more than six seconds and the terminating switch is not pressed
TEST PRINT STOP | test pattern stopped | occurs when the test print terminating switch is pressed
REPRINT LOST PGS | retransmission of lost sheets | may occur after a paper jam
ROTATING FONT | waiting for font rotation | may occur when on line
PRINTER READY | all conditions normal and proper, but the printer is not in operation; no unused data in the controller or pages in printing | -
PRINTER ACTIVE | all conditions normal and proper; the printer is in operation; no unused data in the controller or pages in printing | -

8.6 PAPER JAMMING

As paper travels from the multi-purpose tray (or cassette, if installed) through the printing area and is ejected, it can jam at the following locations:

A: Paper pick-up area (multi-purpose tray)
B: Paper pick-up area with optional cassette paper feeder
C: Fixing assembly area
D: Face-up print delivery area
E: Paper access door

8.6.1 Clearing paper jams

1) Remove any paper from the multi-purpose tray and close the extension tray. Leaving paper in the tray causes the paper position to shift while the jam is being cleared, resulting in further paper jams when printing is restarted.
2) If the face-up tray is installed, remove it.
3) Open the side cover by pressing the release button upwards.
4) If you have been printing with the multi-purpose tray, check the paper pick-up area first. If there is a paper jam here, remove the paper by pulling it in the direction of the arrow.

CAUTION: The fixing assembly area becomes extremely hot when the printer has been operated for some time. Never touch this area when the printer is open. Otherwise, personal injury may result.

5) If you are using the cassette paper feeder, pull out the cassette and check the paper pick-up area. If there is a paper jam here, remove the jammed paper before replacing the cassette.
6) If paper is jammed in the fixing assembly area, pull the paper back into the printer and remove it.
7) If the paper does not extend from the side of the fixing assembly area, pull it in the direction of the black arrow. If the paper is extracted in the opposite direction, unfused toner can be deposited inside the printer, staining the back of subsequently printed pages.
8) If the paper has passed completely through the fixing assembly area, pull it out in the direction of the arrow.
9) If a paper jam has occurred in the face-up delivery area, pull the paper straight upwards to remove it.
10) After checking all the areas described here, close the side cover, replace the face-up tray (if it is being used), pull out the extension tray and refill the multi-purpose tray. Printing will restart.
11) If the jam is still present, a small torn piece of paper may remain inside the printer. Check the upper side door for any such paper.

8.7 STREAKY PRINTS

8.7.1 White Streaks

White streaks occur if the toner level is low and the toner inside the EP-L cartridge is not distributed evenly. They can be eliminated by redistributing the toner.

1) Leave the printer power on, to prevent data corruption during a print operation. Remove any paper in the multi-purpose tray and close the extension tray. Press the release button upwards to open the side cover.
2) Remove the EP-L cartridge by pulling the tab.
3) Hold the EP-L cartridge as shown below, then rock it gently back and forth about 45 degrees in each direction, about 5 or 6 times. This will redistribute the toner inside the cartridge.
4) Replace the EP-L cartridge in the printer, then close the side cover.

If vertical white streaks still appear after the toner inside the cartridge has been redistributed, the cartridge needs to be replaced. See "2.2 Installing the EP-L cartridge" for replacing the cartridge.

8.7.2 Stains on Transparency Films

Transparency films may sometimes show stains at either the top or bottom of the films. These stains may be caused by a dirty paper path or by a flaw on the EP-L cartridge.

• Clean the fixing assembly. The fixing roller inside the fixing assembly may have been stained with toner. For cleaning, refer to "7.3 Cleaning the Fixing Assembly". If the stains still appear, use a different type of transparency film.
• Replace the EP-L cartridge. The photosensitive drum in the cartridge may have been damaged by exposure to bright light. For replacing the cartridge, see "2.2 Installing the EP-L cartridge".

Options

The following options are available for the printer. This chapter provides a brief explanation of these options. For details on installation and maintenance, please refer to their own manuals.

• Paper feeder
• Cassette (A4, Letter, Legal, Executive and Envelope)
• Expansion RAM board (2 or 4 MB)
• Font cartridge
• IBM emulation cartridge

9.1 PAPER FEEDER AND CASSETTES

The optional paper feeder is used with one of the optional paper cassettes, to feed paper from the paper cassette into the printer. The paper feeder can be attached easily to the bottom of the printer using screws. Cassettes are available in the following paper sizes: A4, Letter, Legal, Executive and Envelope. Once the paper feeder is installed, the paper size can be changed easily by replacing the cassette. Each cassette can hold up to 250 sheets of plain paper (paper weight: 80 g/m²), and the envelope cassette holds about a 25 mm stack of envelopes, or about 20 envelopes.

The types and sizes of paper that can be used with the cassettes are listed below.

Type | Size | Weight | Print delivery
Plain paper | Letter, Legal, A4, B5, Executive | 60-105 g/m² | Face-down/up
Transparency films | Letter, A4 | - | Face-up
Labels | Letter, A4 | - | Face-up
Envelopes | 98 x 190 - 162 x 250 mm | 60-90 g/m² | Face-up

9.2 EXPANSION RAM BOARD

Two kinds of memory expansion board are available for the printer: a 2 MB RAM board and a 4 MB RAM board. These RAM boards permit printing of full page graphics. To tell whether a board is installed in your printer, watch the printer display screen during power-up.
If no board has been installed, the display will read:

Memory Test 1 MB

If a memory board has already been installed, the display will give the total amount of memory available (for example):

Memory Test 5 MB

9.3 FONT CARTRIDGE

Fonts can be added to the printer by installing an optional font cartridge. The font cartridge is installed in the slot beneath the control panel of the Star LaserPrinter 4. Insert the font cartridge with the arrow label facing up. If the font cartridge is inserted upside down, the cartridge and/or printer may be damaged.

CAUTION: The Font Cartridges for the LaserPrinter 4 cannot be used with this printer.

Specifications

10.1 SPECIFICATIONS

• Type: Laser beam page printer
• Engine
  Resolution: 300 x 300 dots per inch
  Speed: 4 pages per minute (A4)
  Warm up: under 1 minute (at 68°F or 20°C)
• Emulation
  Resident: PostScript, HP LaserJet series IIP, Epson FX-850
  Optional: IBM Proprinter
• RAM
  Resident: 2 megabytes
  Optional: 2 or 4 megabytes on RAM Expansion Board
• Host Interface
  Serial: RS-232C, AppleTalk
  Parallel: Centronics
• Default Settings: changeable using the control panel
• Fonts
  Resident: Courier, LinePrinter
  Option: font cartridges
• Paper
  Manual feed: 97 x 148 mm - 216 x 356 mm
  Cassette feed (option): Legal/Letter/Executive/A4/B5/Envelope
  Output: face down, approx. 50 sheets; face up, approx. 20 sheets
  Weight: 60 - 105 g/m² (plain paper)
  Special: OHP film, adhesive labels, postal cards, envelopes
• Printing Area
  Top and left margins: 2.5 mm
  Bottom margin: 4.5 mm
  Right margin: 4.0 mm
• Acoustic noise
  Standby: max. 43 dB
  Printing: max. 53 dB
• Environment
  Operating: 50°F to 90°F (10°C to 32.5°C), 20 to 80% relative humidity, no condensation
  Standby: 32°F to 95°F (0°C to 35°C), 10 to 80% relative humidity, no condensation
  Storage: -4°F to 140°F (-20°C to 60°C), 10 to 95% relative humidity, no condensation
• Power Source: 100/115 V (50/60 Hz) for North America and Japan; 200/240 V (50 Hz) for Europe, Asia and Oceania
• Power Consumption: max. 550 W (when operating)
• Weight
  Print engine: 10 kg
  Cassette: 2.4 kg
  EP-L cartridge (toner and drum cartridge): 1 kg
• Dimensions: [shown as a drawing in the printed manual; not reproduced here]

10.2 RELIABILITY

Suggested monthly print volume: 2,500 prints

Definition of failure: any loss of operation that requires calling a service person. Operation errors and installation failures are not included.

Product life: five years or 150,000 prints, whichever comes first.

Mean time between failures: 4,000 hours
Mean time to repair: 30 minutes or less

Paper feed reliability:

Type | Frequency
Paper jam | 1/2000 or less
Multifeed | 1/1000 or less
Others | 1/2000 or less

Toner cartridge life (EP-L cartridge): 3,500 prints at 5% print duty (A4/Letter size)

10.3 PIN FUNCTIONS ON INTERFACES

• Parallel Interface

Pin No. | Signal Name | Direction | Function
1 | STROBE | IN | Goes from High to Low (for at least 0.5 microseconds) when data are valid
2-9 | DATA1-DATA8 | IN | Eight-bit character data. DATA8 is the most significant bit; DATA1 is the least significant bit. High is logic 1 and Low is logic 0
10 | ACK | OUT | A low pulse acknowledges receipt of data
11 | BUSY | OUT | Low when the printer is ready to accept data
12 | PAPER OUT | OUT | Goes High if the printer runs out of paper
13 | SELECTED | OUT | High when the printer is on line
18 | +5VDC | OUT | External supply of +5 VDC
19-30 | GND | - | Twisted pair return signal ground level
31 | RESET | IN | Low input resets the printer to its initial state
• Serial interface

[The serial interface pin assignments are shown as a diagram in the printed manual and are not reproduced here.]

Glossary

ASCII (as' kee) - a standardized and commonly accepted numerical code that represents letters, numbers, and symbols, as well as command sets. ASCII stands for American Standard Code for Information Interchange.

Baud - a measure of the speed at which information is transmitted. The baud rate, which is set for serial transmissions, is approximately the number of characters transmitted per second multiplied by 10.

Byte - a series of 8 bits that represents one character.

Centronics interface - a standard wiring configuration for parallel data transmission. Centronics is often used as a synonym for the parallel standard developed by Centronics Inc.

Character Set - the complete set of characters available in a font.

Data Bits - the number of bits in a transmitted byte that actually contains data.

EEPROM - Electrically Erasable Programmable Read Only Memory. It has an internal switch to permit a user to erase the contents and write new contents into it by means of electrical signals.

EP-L Cartridge - a disposable type of process cartridge which contains the photosensitive drum and toner.

Emulation - behaving like something else. The Star LaserPrinter 4 can emulate, or behave like, two other printers, the Epson FX-850 and the HP LaserJet IIP laser printer.

Expansion RAM board - an available option that is easily attached to the LaserPrinter 4 to expand its RAM to a maximum of 5 MB. Three kinds of memory expansion board are available: a 1 MB RAM board, a 2 MB RAM board and a 4 MB RAM board.

Face-down delivery - a type of paper delivery in which the paper is ejected from the printer with the printed side facing downward.

Face-up delivery - a type of paper delivery in which the paper is ejected from the printer with the printed side facing upward. Face-up delivery is used for printing envelopes, transparency films and labels.

Fixing assembly - the assembly used to fix the toner onto the paper or other material using heat and pressure in the printing process. It is located inside the printer and gets extremely hot during operation.

Font - a set of letters, numbers, and symbols of the same typeface design.

Font cartridge - a cartridge containing fonts which can supplement the available resident fonts of the printer, to increase the variety of typefaces.

Glyph - a graphic symbol that conveys information.

Handshaking - in data communication, the automatic acknowledging by the receiving device of information that has been sent to it, either by signals on the interface ("hard") or by software control ("soft").

Hex dump - conversion by the printer of ASCII symbols into their hexadecimal (base 16) equivalent. This is useful for programmers or other users who want to troubleshoot the communications between the computer and the printer.

HMI (Horizontal Motion Index) - refers to dot spacing, the smallest increment that can be made in the horizontal or x axis.

Interface - a hardware plug that allows cable connections of two devices. For instance, a cable from a computer parallel port to a printer's parallel port.

Landscape - printing on paper across its wider dimension, such as a spreadsheet that is 11 in. wide and 8.5 in. high; landscape pictures are usually wider than they are tall.

NV RAM - NV (Non-volatile) Random Access Memory is the permanent storage by the Star LaserPrinter 4 of settings programmed from the front panel, even when the power is turned off.
When the printer is turned on again, these settings become the current settings. 108 - Parallel - a communications interface that sends or receives information plus control signals at a time. Parity - 8 bits of a bit in a serial information that allows a computer and printer to automatically check for errors in transmission. Pitch - the width of a typeface measured in characters per inch. Point - the height of a type font, measured in points, or l/72 of an inch. Portrait-printing on paper across its narrower dimension, such as a letter on paper that is 8.5 in. wide and 11 in. high; portrait pictures are usually taller than they are wide. Proportional type - a typeface in which some letters take up more room on a printed line than others, such as W taking more space than Z. The spacing of the letters is also intended to enhance the esthetic quality of the printed page. RAM - Random Access Memory, or memory that can be used to store information temporarily, such as text or printing configurations. RAM-stored information is erased when the power is turned off. ROM-Read Only Memory, or memory in which information can be stored permanently, whether the power is on or off. RS-232 - a wiring configuration for sending and receiving serial transmissions, including data and acknowledgments between sending and receiving equipment (“handshaking”). Serial - a communications interface that sends or receives information at a time at a specified baud rate. 1 bit of Stop Bit - 1 or 2 bits used by the computer for the timing of a transmission of information. Stroke weight-the intensity of a printed character, such as light, medium, or hold. Normal printing is medium weight. Symbol Set - the entire list of ASCII letters, numbers, and symbols used with a particular language, such as English or Spanish. VMI (Vertical Motion Index) - refers to line spacing, the smallest increment that can be made in the vertical or y axis. 109 MEMO 110 -._ ,D ALARM light, 37 ASCII code, 107 /B( BAD CHAR CODE message, 8.5 BAD FONT CODE message, 85 BAD FONT DATA message, 85 BAUD rate, 5 1 Buttons, control panel, 3 1, 33-36 c CALL SERVICE message, 78-79 CANNOT ROTATE message, 85 CANNOT DOWNLOAD message, 85 CARTRIDGE BAD message, 79 CART. CHANGED message, 86 CARTRIDGE ERROR message, 8 1 CASS CHANGE message, 82 CASS REFILL message, 80 Centronics connector, 101 Character set, 57 CHAR NOT IN FONT message, 85 CLEANING DONE message, 87 CLEANING PAPER message, 39,87 Control panel (front panel), 4, 3 1,. 
33-37,41-43,48-49 / - Courier typeface, 60-61 Current settings, 54 Data bit, 52 DATA light, 37 Display screen, 3 1 DTR message, 52 EEPROM, 32, 107 Emulation modes, 40, 59, 63-64 ENGINE TEST message, 87 ENGINE RESET message, 87 EP-L Cartridge, 16-20, 107 Epson FX-850 emulation mode, 64 ERROR SKIP/< button, 33,48 IF Face-down delivery, 24 Face-down tray, 4 Face-up delivery, 25 Factory settings, 54-55 Fixing assembly, 74, 108 Font, 59-64 Font cartridges, 100 FONT LIST message, 38, 87 IHJ Hex dump, 38 HOLD FOR TEST message, 38 Host communication problems, 85 HP LaserJet BP emulation mode, 63-64 .._ 111 Initial settings, 54 INPUT OVERFLOW message, 85 INSUFFICIENT RAM message, 8 1 INTERFACE group, 40 INTERFACE message, 4 1,5 l-53 El LANDSCAPE message, 57 Landscape orientation, 40,57 Laser, 2 Laser printing, 2-3 LAYOUT group, 40 LAYOUT message, 43,57 LINE ERROR message, 85 LinePrinter typeface, 60-6 1 LOAD CASS message, 80 LOAD FACTORY SET message, 41, 54 Lotus l-2-3,66-68 jM/ Maintenance, 77-97 MEMORY TEST 2MB message, 32 M-FEED message, 84 Microsoft Windows, 69-7 1 Microsoft Word, 69 Multi-purpose tray (MP tray), 4 MP LOAD THE PAP message, 87 IN NO EMULATION CART message, 8 1 NO FONT PRESENT message, 79 NOW CLEANING message, 87 lol ON LINE button, 33,48 ON LINE light, 37,49 Operator call messages, 80-84 Operator information messages, 85-86 112 Optional font cartridges, See font cartridges ORIENTATION message, 57 1pI PAGE OVERFLOW message, 85 Paper, 2 1, 100 Paper cassette, 11,99- 100 PAPER FEED group, 40 PAPER JAMMING message, 80 Paper loading, 2 I-24 Parallel interface, 5, 27-28, 50-53 Parallel interface pin function, 105 PARITY message, 52 Pitch, 40 Portrait orientation, 40 Power receptacle (connector), 5, 26 Power switch, 5, 26 Power outlet, 7 Power-up settings, 54 PRINT button, 33,48 Print density, 29 PRINTER ACTIVE message, 88 PRINTER READY message, 88 PRINTER WARMUP message, 87 PRINT light, 37,49 Print quality, 29-30 Program mode, 3 1,33-35 PROGRAM/A button, 31, 35,48 m RAM expansion board, 11,100 READY light, 37,49 REINITIALIZE message, 87 REINSERT CART message, 8 1 REPRINT LOST PGS message, 88 Release button, 4, 14, 29,90 RESETmi button, 34,48 Resident font, 60-61 ROTATING FONT message, 88 RS-232C connector, tor See serial connec- Self test, 38-40 Self test status sheet, 38 Serial interface, 50-53 Serial interface connector, 5, 27-28 Serial interface pin function, 106 SET EP/CLOSE message, 80 Specifications, 10 I- 103 Status messages, 87-88 STATUS SHEET message, 38,87 STOP BIT message, 52 Stains, 97 Streaky prints, 95-96 /TI TEST/> button, 34,48 TEST PRINT MODE message, 39,88 TEST PRINT STOP message, 39, 88 TIMER ERROR message, 79 TRAY REFILL message, 80 TRAY CHANGE message, 82 u Unpacking, -_ WA WordPerfect, 8- 16 68 113 Consumer Response Star Micronics Co., Ltd. invites your suggestions and comments printer and this manual. Please address your correspondence to: Worldwide Headquarters: STAR MICRONICS CO., LTD. 194 Nakayoshida Shizuoka, JAPAN 422-9 1 Attn: Product Manager American Market: STAR MICRONICS AMERICA, INC. 420 Lexington Avenue, Suite 2702-25 New York, NY 10170 Attn: Product Manager European Market: STAR MICRONICS DEUTSCHLAND Westerbachstral3e 59 P.O. Box 940330 D-6000 Frankfurt/Main 90 F.R. of Germany Attn: Product Manager GMBH U.K. Market: STAR MICRONICS U.K., LTD. Star House Peregrine Business Park Gomm Road, High Wycombe Bucks. HP13 7DL, U.K. Attn: Product Manager French Market: STAR MICRONICS FRANCE S,A.R.L. 
25, rue Michael Faraday 78 180 Montigny-le-Bretonneux Attn: Product Manager Asian Market: STAR MICRONICS ASIA LTD. I Room 2408-10 Sincere Building; 173 Des Voeux Road, Central, HONG KONG Attn: Product Manager on your * Your assessment is very important for improving the work of artificial intelligence, which forms the content of this project
https://manualzz.com/doc/6965401/laserprinter-4-user-s-manual
CC-MAIN-2020-29
refinedweb
18,857
63.09
Google Cloud function to generate thumbnails for images in Google Storage.

Project description

Convention

The thumbnails are placed in a folder "thumbnails" at the same place as the original file. The thumbnail size is appended to the filename, right before the extension part. For example:

bucket/
└── folder/
    ├── photo.jpg
    └── thumbnails/
        ├── photo_128x128.jpg
        └── photo_512x512.jpg

The function expects these environment variables to be set:

- THUMB_SIZES: Sizes of the thumbnails to be generated. Example: 512x512,128x128.
- MONITORED_PATHS: Folders (and their children) where the function will process the uploaded images. Multiple paths are separated by ":", like user-docs:user-profiles. If you want to monitor the whole bucket, set it to /.

The variables can be passed via a .env file in the working directory.

Why Thunagen

I'm aware that there is already a Firebase extension which does the same thing. But that extension, when doing its job, needs to create a temporary file, and in many cases falls into a race condition when the temporary file is deleted by another execution of the same cloud function. Thunagen, on the other hand, generates the file and uploads it (back to Storage) on the fly (in memory), so it doesn't run into that issue.

Installation

Thunagen is distributed via PyPI. You can install it with pip:

pip install thunagen

Including it in your project

Thunagen is provided without a main.py file, so that you can incorporate it more easily into your project, where you may have your own way to configure the deployment environment (different buckets for "staging" and "production", for example). To include Thunagen, from your main.py, do:

from thunagen.functions import generate_gs_thumbnail
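The page above shows only the import, not the function body. Purely as an illustration of the convention it describes (resize in memory, write into a thumbnails/ subfolder, suffix the size onto the filename), a background Cloud Function along these lines might look like the sketch below. This is not thunagen's actual source: the trigger signature is the standard Storage-event one, MONITORED_PATHS handling is omitted, and every name and default is an assumption.

import io
import os

from google.cloud import storage
from PIL import Image

# Assumed default; thunagen reads this from the environment as described above.
THUMB_SIZES = os.environ.get("THUMB_SIZES", "512x512,128x128").split(",")
client = storage.Client()

def generate_thumbnails(event, context):
    """Hypothetical sketch, triggered by a Storage 'finalize' event."""
    name = event["name"]                      # e.g. "folder/photo.jpg"
    if "/thumbnails/" in name or name.startswith("thumbnails/"):
        return                                # skip our own output
    bucket = client.bucket(event["bucket"])
    folder, _, filename = name.rpartition("/")
    prefix = f"{folder}/" if folder else ""
    stem, ext = os.path.splitext(filename)
    data = bucket.blob(name).download_as_bytes()
    for size in THUMB_SIZES:
        width, height = (int(n) for n in size.split("x"))
        img = Image.open(io.BytesIO(data))
        fmt = img.format or "JPEG"
        img.thumbnail((width, height))        # resize in memory, no temp file
        buf = io.BytesIO()
        img.save(buf, format=fmt)
        target = f"{prefix}thumbnails/{stem}_{size}{ext}"
        bucket.blob(target).upload_from_string(buf.getvalue())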
https://pypi.org/project/thunagen/
CC-MAIN-2020-05
refinedweb
298
58.38
When you want to filter out duplicates in a list in Groovy you normally do something like:

def list = [2, 1, 2, 3]
def filtered = list as Set
assertEquals([1, 2, 3], filtered as List)

This kicks out all duplicates in a one-liner. But what if the list is sorted (e.g. in reverseOrder)?

def list = [3, 2, 2, 1]
def filtered = list as Set
assertEquals([3, 2, 1], filtered as List) // this fails!

One solution would be to use a small closure:

def list = [3, 2, 2, 1]
def filteredList = []
list.each {
    if (!filteredList.contains(it)) {
        filteredList << it
    }
}
assertEquals([3, 2, 1], filteredList)

This closure preserves the order and filters out the duplicates.

2 thoughts on "When as Set is not what you want"

Just use List#unique():

groovy:000> [3, 2, 2, 1].unique()
===> [3, 2, 1]

Well, just be aware that List#unique() is destructive. This can cause subtle bugs if you call unique on a method parameter.
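As an aside not found in the original post: the same order-preserving de-duplication idiom exists in Python, where insertion-ordered dicts (guaranteed since Python 3.7) make it a one-liner:

values = [3, 2, 2, 1]
# dict.fromkeys keeps the first occurrence of each element, in order,
# and unlike Groovy's unique() it does not mutate the input list.
deduped = list(dict.fromkeys(values))
assert deduped == [3, 2, 1]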
https://schneide.blog/2008/09/12/when-as-set-is-not-what-you-want/
CC-MAIN-2022-27
refinedweb
162
69.31
So I'm superbly confused: I notice that in the answer to the question he initializes a prime variable first. Why is this necessary? When I write the code as follows, it executes correctly despite not having that statement included... I'm just confused as to any pros/cons of including or not including that statement. This is my code that ran properly:

Using the prime variable isn't necessary. It was a stylistic choice that I employed to try and make each statement a bit simpler to understand. If I were writing this code for myself, I would have written it as you did -- without the intermediary variable.

Hi Alex, I don't think this quiz is fair, because you have not taught us yet about multiple "if"s.

Why would you assume that you can only use one if statement per program or function? A function is just a sequence of statements -- those statements can be if statements, or any other kinds of statements.

Hi :)! What does "trip up" mean? Is it to scare? And why don't you use endl in those?

Hi R310!
> What does "trip up" mean?
Where are you reading that?
> Why don't you use endl in those?
@std::cin's operator>> waits for the user to press enter. When the user presses enter, a line feed is inserted into the console already. Outputting a line break would result in an empty line.

I get it. In the article you wrote that it trips up new programmers!

"Trip up" means to "cause someone to get stuck or make a mistake".

Oh sorry, I saw this one afterwards... Hi, is this solution valid?

Hi Jörg! No it's not. The , operator is not an "or". Use ||.

Okay, thank you.

Hello! "Consequently, when we print boolean values with std::cout, std::cout prints 0 for false, and 1 for true." In the program that you wrote below the above statement, I noticed that you did not initialize any variables with the data type "bool". Then how does std::cout know that it has to print 1 when in the cout statement we only wrote true? Is it possible that if we wrote std::cout << "true" instead of just std::cout << true, std::cout would print true and not 1?

Hi Aditi! When you write std::cout << "true", the "true" is treated as a string and it will be printed as such. When you write std::cout << true, the true is a bool; @std::cout will print a bool, and it does so by printing the numeric value (0 or 1). As mentioned in the lesson, you can also use @std::boolalpha to print bools as "false" and "true".

Right! I get it now. Thank you!

Something interesting I've noted... It looks like I don't get garbage back when I enter a non-boolean value as input, using the latest (as at posting) version of MinGW/g++. It appears to be converting the input string into a number first, then doing the (zero-or-empty == false) thing. I'm somewhat experienced with PHP, and know that it is implemented in C/C++, so given the way that PHP casts strings into numbers, and non-booleans into booleans, these results are actually unsurprising to me...

Input == Output
0 == 0
1 == 1
2 == 1
a == 0
-5 == 1
+8 == 1
45674513 == 1
asdfasdl == 0
456asdf == 1
asdf456 == 0

Unrelated: it looks like the edit box on this site has a bug - it ignores the [code] tags, breaking the code formatting.

Hi Kage-Yami! In order to figure out what's actually happening and what's supposed to happen, one would have to dig through the standard and implementations; I don't feel like doing that now. A bool is an integral type, so number extractions into a bool work, but they leave the input stream in a failed state (you can check it with @std::cin.fail()).

Code tags display only after refreshing the site if they were added in an edit.

Interesting... thanks for the tip!

Here is how I solved the quiz. Any suggestions?

Hi Jhayar! Looks similar to others' solutions. It is good to have it here; some people may find it interesting.

@nascardriver, this time I did initialize all of them :), have a good day!

As you know, boolean expressions are expressions. You are checking whether the condition will be true, and if it is true you are returning false -- so why not try something like this? That way, if it is prime, it will be true. Then you can invert it. Or, if you like compact code, the test reduces to "if x divided by i returns 0, it's not prime/is false", which makes the code much more concise. And remember that comments at the code level should explain why, so a comment like "If its modulus is 0 it's not prime" makes the intent of the code clear.

I thought it'd be fun to make it check any user-inputted integer to see if it's a prime. It uses some not-yet-explained things like a for loop and modulo, but it should make enough sense. So here is my approach to the assignment (from my testing it works), and I hope it is of use to someone who is trying something similar :D It was also a fun little program to write.

Hi Levi! Some suggestions:

Thanks man, these are some sweet suggestions. It makes sense to only count up to the square root, as that is the largest division possible. Also, cleaning up some of the code was nice; it shows me how to make my code nicer in the future.

A small adjustment over nascardriver's code: I removed his comments to emphasize mine. After compilation it's the same.

I prefer the first variant, because it's easier to understand what the condition is.

@alex, @nascardriver, I wrote the below-mentioned program to tell whether a number is prime or not, and we can check any number smaller than the max size of int. I understand that nascardriver might not have used structs/recursion/functions in his program because these concepts are not introduced yet. But I thought that my code should have been faster at detecting large prime numbers; instead, @nascardriver's program is way faster than mine. I do not quite understand why.

The reason I think my code should have been faster for large prime numbers: I divide the large prime number by 2, 3, 5, 7 and so on, and calculate the largest prime number which could divide the number the user has input to be 1/2, 1/3, 1/5, 1/7 of it and so on, thereby reducing the number of checks that I do. Whereas nascardriver is checking up to number_to_be_checked/2, which can be a lot of numbers when number_to_be_checked is large.

So my question is: I do not understand why nascardriver's code takes less time than mine. Is it because I'm using many variables and function calls?

my code:

Hi Shri! The biggest slow-downs in your code are recursion and dynamic lists. Both can be terribly slow. If you're using dynamic lists, you should reserve enough space beforehand so the underlying array doesn't have to be resized.

Hey nascardriver, thanks for the reply! Thanks for clarifying that recursion and dynamic lists require a lot of time. One point I wanted to clarify with you is regarding your comment: "If you're using dynamic lists, you should reserve enough space beforehand so the underlying array doesn't have to be resized." From a bit of googling around, I found that std::list (which I have used) provides us with a doubly linked list.

I have not studied data structures yet, but from what little understanding I have, I think doubly linked lists do not require reserving memory beforehand, unlike, say, a dynamic array. So, by this comment of yours, which array are you referring to that I have used in my code?

> I found that std::list (which I have used) provides us with a doubly linked list
I didn't know that. In that case there's no need to reserve memory beforehand. Still, a doubly linked list needs to store the value and two pointers for each element, meaning that 20 bytes (assuming 64 bit) have to be allocated for every prime number found. Overall, your logic seems to be overly complex for the task you're trying to achieve.

I wouldn't have multiple return statements; I would rather have a variable, set that to false, and then break out of the loop. That way we have a single exit point, which makes debugging much easier. If there were multiple instances where false could be returned, it can be confusing as to which return statement was executed.

Here is mine:

Hi, I know we have not covered for loops yet, but I want to know if my code is OK, thank you. You are doing a great job!!

Hi Stefan! From lesson 2.1: "If you're using a C++11 compatible compiler, favor uniform initialization". Your loop only needs to run up to sqrt(x). @static_cast<int> converts the given value to an int; I do this because @std::sqrt returns a double. You need to include <cmath> to use @std::sqrt.
References:
* Lesson 2.1 - Fundamental variable definition, initialization, and assignment
* Lesson 4.4a - Explicit type conversion (casting)
* @std::sqrt

Thank you very much for your answer!

Do you teach system() commands in this tutorial??

Hi Max! You can check out the @std::system documentation. Other than that, there's not much to talk about, because all it does is forward a command to the OS, which has nothing to do with C++.

Thanks.

I just spent the past hour racking my brain for an answer to the quiz in this section. I built so many solutions, and none of them would function properly. It was only when I finally gave up and clicked to see the answer that I realized you had left a hint after the question... So TIL that you can add multi-part if statements to a bool function, and that you should always read the entire quiz before answering the first question. Thank you, Alex.

Why do you use return false in the bool isPrime function?

Hi Samira! @isPrime has to return a value. If @x isn't 2, 3, 5 or 7, @isPrime won't return true, and it does not return anything automatically.

Oh, I see! Thank you, nascardriver. But can I use an else statement instead of return? Which one is the better practice: return or else?

If you used else you'd still have to return. No matter what happens in a non-void function, you have to return (you'll learn about exceptions in the future).

Hi! I've been learning C++ here for the last 2 days and I think I'm really making good progress thanks to you. I wanted to share the code I did for the exercise, if you could tell me whether that's a good attempt for a first time.

Hi Bennhyon! @sayIfPrime looks good; I made some changes to your @main function: I think you should put the "you silly" comment right after they enter a number, and make them fix it. If you want to know how to do loops in code, you can look for "while loops c++" on Google, or directly on this site.

Thanks. I'm actually going through the whole tutorial on this page right now and just haven't gotten to loops yet, but I'll get there.

Hi Alex, what do you think of this solution?

bool isPrime(int x)
{
    return (x == 2 || x == 3 || x == 5 || x == 7);
}

int main()
{
    int x;
    std::cout << "Enter a single digit integer: ";
    std::cin >> x;

    if (isPrime(x))
        std::cout << x << " is a PRIME number!";
    else
        std::cout << x << " is NOT a prime number!";

    return 0;
}

I always forget that you can assign a function's result to a variable, so the only way I found to do the exercise was this one ^. Is that okay?

This code looks great. I didn't use logical OR in my solution because it's not covered until the next chapter.

Thanks for the fast reply, Alex! Great tutorials, by the way. The best for C++, since it explains everything well and gives so many exercises (which many tutorials don't offer). These exercises are a freaking crucial point in learning this seemingly hard programming language. :D

Hi, me again. I don't really know where to ask this, so... I want to be able to make my function return text, and I don't know how.

Hi Gabe! I'll have to use content from future lessons for this, since it hasn't been covered yet. Option 1: [...] Option 2: [...] Or don't return a string at all. General issues with your code: [...]

Hello, I'm back. I took a break from this because it was frustrating. I realized I could make my life easier and have qOne() return a boolean, and have that boolean decide what main would output. But I can't seem to make qOne run and then use the return value in main.

Is this what you're looking for?
References: Lesson 1.4 - A first look at functions and return values (section "Reusing functions")

Thanks!

Hi, my code looks somewhat different to the solution. Coming from a previous lesson, I thought that it was bad practice to have anything within the main function other than calls to other functions? I thought that it should be separated. Here is my code: what would be the best practice / what would be expected if you were a software engineer? Cheers, G

There's little point in having a main that does nothing but call a single other function. In that case, you might as well put the logic in main. Most often in production code I've seen, main looks something like this: [...] But since our programs don't tend to need initialization and cleanup, our main() can just be the contents of doWhatever().

Why do I have to return false at the end of the function?

Hi Ali! I assume you're talking about @isPrime. @isPrime has been declared to return a bool. No matter what happens inside the function, it has to return a bool in every case (except when an exception is thrown)! If we were to remove the "return false;" and none of the numbers were matched, @isPrime would not have a return value; this in turn would cause undefined behavior which, in the best case, causes a crash. Worst case, it works fine for you but crashes on other machines, and you'll have a hard time figuring out why it's working for you but not for others.

And why do I have to return true in every other case?

Because @isPrime is supposed to tell you if @x is a prime number. Returning true means that @x is a prime; returning false means @x is not a prime.

Oh, now I get it. Thank you :)

Hi siriusbuisness! Your compiler will generate the same output for both variants, so there's no difference. However, when using an intermediary value you can view the variable in a debugger, which is not as easy when using the return value of isPrime directly.

Hi Alex (and others), in my solution for this chapter's quiz I used [...], whereas you used an intermediary value. Is the intermediary value the better convention for if statements? Thank you for your time and the tutorials.

Please edit your posts instead of re-posting them; the reply to your question is above.

Thanks for your swift answer. I tried editing, but for some reason the code wouldn't format properly afterwards, so I re-posted.

No, in most cases I'd write it as you did (unless I needed to save the result of the isPrime() call for later). I used an intermediary bool here just because it's a bit easier for learners to understand.

Here! If that "if" condition has only one statement, then there is no need to use braces there. But let's assume that you have more than a single statement: if you want them to execute only if the condition is satisfied, then you should go for curly braces. For a single statement it is not mandatory to keep braces :D

My mistakes were:
1. In my for loop I wrote [...], but it should be [...].
2. I started at 0, which resulted in dividing by zero.
3. And I don't quite get this one: I made 'input' and 'i' integers, which for some reason made my program treat division1 like an integer (5/2 = 2, for example). I fixed that by making 'input' and 'i' floats. But could you explain why it was treating division1 like an integer?

So yeah, now it works :)

You can turn 'i' into a float and leave 'input' as it is :)

> I don't know how the result type is determined, so I wrote a test program.
I cover this in lesson 4.4 under the subsection "Evaluating arithmetic expressions". You're basically correct, but the list in that lesson is a little more comprehensive.
https://www.learncpp.com/cpp-tutorial/boolean-values/comment-page-6/
CC-MAIN-2021-04
refinedweb
2,939
72.36
09 December 2009 18:28 [Source: ICIS news]

MIAMI (ICIS news)--Developing methanol from the US' vast reserves of natural gas locked in shale is possible, but only if the cost of natural gas stays under $5/MMBtu, a methanol industry advocate said on Wednesday.

Speaking on the sidelines of the CMAI World Methanol Conference in Miami, [...] "At [a natural gas cost of] $8-9/MMBtu, no, but at a natgas cost of $3-4/MMBtu, the answer is yes," Dolan said.

However, some market players brought an attitude of resentment to the notion of domestic methanol production ramping up, hinting that market share would likely be cut. "The producers from [...]"

According to the US International Trade Commission (ITC), [...]

Consultant Darryl Rogers of Purvin & Gertz called the vast reserves of shale gas [...]
http://www.icis.com/Articles/2009/12/09/9317923/shale-gas-to-methanol-possible-if-natgas-cost-remains.html
CC-MAIN-2014-42
refinedweb
133
54.56
What's on this Page Page 11: Revised 05/142000 Top of Page Back to Page 1 | Go to Table of Contents | On this Page: [Great Links][Book Reviews][Rabbit Hole] Almost every home page that I visit has its list of favorite links. Well, I have my list too! I surmise that you got to this site because of an interest in Roman Catholicism in particular, or Christianity in general. So here goes! I am sure you will find some of these sites interesting, and even exciting! I have read a lot of books as part of my research for this site. I plan to supply a review of the best of the lot for your information. I must never forget what my daughter Lori reminds me of: "Pobody's Nerfect Pops!" She is right. Pobody's Nerfect, especially me! While I am doing my best to present a fair, accurate view, I may stumble and fail in places. I am open to suggestion and criticism, and will take your comments seriously. I just hope you don't reject the 'whole nine yards' because you find one mistake, or some statement you don't agree with. Better you should tell me about it and give me a shot at possible changes, right? So, if I expect others to cut me a little slack on the perfection issue, its only fair of me to do the same for them, right? Enough said. Enjoy! | Go to Great Links | Top of Page | Go to Book Reviews | Comments | Go to Table of Contents | Link problem? Links change a lot. If you find any link here that does not work, please email me with the information so I can correct it. Thank you! Click here to go to the most recently added links. | Back to Table of Links | Go To Book Reviews | Comments? | Go to Table of Contents | Book Reviews in this section: [A] Along The Way -[C] Catechism of the Catholic Church | Catholicism and Fundamentalism |Catholicism at the Dawn of the Third Millennium| The Catholic Myth | | Counterfeit Revival -|D| Dogmatic Cannons and Decrees -[E] Epiphany: A Theological Introduction to Catholicism | Escape from Purgatory -[F] Far From Rome, Near to God | Fifty Years in the Church of Rome | Formidable Truth | From Catholicism to Christ | -[I] Immaculate Misconceptions| -[M] A Modern Priest Looks at His Outdated Church | Mouth of the Lion | Myth of Mary | -[N] New Jerome Bible Commentary | -[Q] Quite Contrary | The Q+A Catholic Catechism [-R] Roman Catholicism | Reckless Faith | -[S]| Seduced By Error | St. Joseph Baltimore Catechism| Surprised by Truth |-[T]| The Two Babylons | -|[V] A View of Rome | Vicars of Christ: the Dark Side of the Papacy | -[W]| Windswept House | Other choices: | Top of Page | List of Links | Comments? | Go to Table of Contents | An excellent review of the current discussion about unity between the Roman Catholic Church and Evangelical Protestantism. While a scholarly volume, it is presented in easy-to-understand terms. Armstrong ask, then answers the question, "Does it still make a difference why we differ, or should we just sweep the Reformation and its doctrinal debates aside and get on with building a new coalition for unity?" Armstrong accurately portrays Roman Catholic dogma and doctrine vis-a-vis the Bible. He traces the historical development of the Church of Rome in light of Rome's self-definition as being the one, holy, catholic, apostolic church. He takes us through the period known as 'the dark ages' and into the medieval world, and shows us how the Roman Catholic Church gradually changed from pure faith in Christ to pure faith in her own sacramental system. 
A View of Rome graciously avoids emphasis on the incredible parallels between the Roman Catholic Church of today and the ancient pagan mystery religions. For a detailed review of that topic, check out my review of The Two Babylons. (To be supplied). Armstrong also makes a vivid case for the Evangelical doctrines of 'sola scriptura' (the Bible as the final authority for doctrine) and 'sola fides' (justification by faith in Christ alone). Several Roman Catholic email friends have hurled these two precepts at me as if they were plague-ridden rodents! This is the attitude of all of the Catholic apologists I have studied, and one would think that to have faith in Christ alone was some sort of aberration, and not the position called for by the entire Bible! If you are a Protestant, A View of Rome will give you a clear understanding of how your Roman Catholic friends think, and what they are told to believe. If you are a Catholic, A View of Rome will give you a clear understanding of how your Evangelical Protestant friends think, and what they choose to believe. | Read this Review Again | Read Next Review | Back to Top of Page | List of Books Reviewed | Comments? | Reckless Faith: When the Church Loses Its Will To Discern reveals how many American Evangelical churches have abandoned the legacy of Martin Luther and the Reformation, the true Gospel of Jesus Christ, and reason itself in favor of 'feel-good' religion. The concept of absolute truth in religion has been traded for the politically correct 'truth is relative' belief. MacArthur shows how a number of popular American Evangelicals have sold-out the Christian's directive to contend for the Faith, and to accept no other Gospel but that of Jesus Christ - in the name of 'Christian Unity' with the Roman Catholic Church. The five-hundred year legacy in which Protestants in general have views the Roman Catholic Church as an un-biblical, and an extra-biblical church long since departed from Gospel Truth has been relegated to an almost unimportant corner by these Evangelical sell-outs. All of the 'giving' in this contract with Romism is on the part of the Evangelicals; Rome has given nothing at all, save for a little of her sleight-of-words designed to make one think she has changed. When Jesus and the Apostles spoke, it was in absolute terms. The real Gospel is intended to divide the believer from the unbeliever, not unite them! Peace at any price was not the Gospel message! Jesus Himself said that He came to set people against each other in the name of Truth. (Matthew 10:34-40). In Reckless Faith, MacArthur also makes a good presentation of the need for discernment, and he reviews the fundamentals of Christianity. He shows us how to decide which doctrines are truly fundamental, or essential as declared by the Bible. MacArthur also tells us about the 'laughing revival' that began in 1994 at the Airport Vineyard church in Canada. He reveals its lack of Biblical support, and shows how it is making itself felt in many Evangelical and Pentecostal churches across America. If you have wondered if a church service that features laughing, barking like dogs, squealing like pigs and the like are of God, you should read MacArthur's analysis. | Read this Review Again | Read Next Review | Back to Top of Page | List of Books Reviewed | Along The Way Copyright 1996 by Robert F. O'Donnell. Along The Way describes one man's journey from false religion, through the occult to final salvation in Christ. 
Laughter and tears will accompany you as you share his joys and pains 'along the way.' You can agonize with Bob as he struggles to leave the church of his youth - a church he came to recognize as false to its core. After a painful parting from that church, Bob was an initiate in both the Saraswati and Giri orders of Yoga, and was heavily involved in the Edgar Cayce 'Search for God' program. Later, he spent several years as a disciple of Swami Nityananda Saraswati, ever seeking the truth, ever encountering deception and pain. It was during a visit with the Swami at his Texas ashram (retreat house) that Robert first met Jesus Christ - in a most unusual way! You may weep with him as he struggles to hold onto a marriage gone sour, or ponder the wisdom and mercy of God as Bob unexpectedly ministers to a dying stranger. From dancing to doctrine, this book can challenge your most dearly held notions. Copies available only from the author. | Read this Review Again | Read Next Review | Back to Top of Page | List of Books Reviewed | A Modern Priest Looks at His Outdated Church by James Kavanaugh, 25th Anniversary Edition. Copyright 1967, 1992 by Father James Kavanaugh. ISBN 1-878995-16-2, Steven J. Nash Publishing. When, in 1968, I first read this amazing book by a Roman Catholic priest, I was almost petrified as I faced one of the most critical decisions of my life: should I leave the Roman Catholic Church - or check my brains at the door and remain a Catholic? Kavanaugh's book was like having a window flung open - a brilliant light to my spiritual melancholy. Before reading A Modern Priest I was under the illusion that priests, to a man, were all cut from the same bolt of cloth: somberly-dressed, sexually repressed, unapproachable, arrogant dispensers of a party line more suited to medieval days of kings, knights and dungeons than to the world I inhabited. What a surprise to find a priest who was also a man! A man most aware of me, of my wife and children, and my milieu. Unlike the pedantic dogmatists pretending to be 'other Christs' (sic. Alter Christos - what Rome claims for her priests) who seemed totally out-of-touch with real life, here was a priest, a man, who was "touched by the spirit of our infirmities." (Hebrews 4:15) How refreshing! So popular was this book, so filled with heart-wrenching honesty, and accurate Biblical perspective, that Mr. Kavanaugh was asked to re-publish it twenty-five years later, in 1992. This anniversary edition reveals that Kavanaugh's unflattering picture of the Roman Catholic Church is as accurate today as ever it was. Kavanaugh shows how the so-called 'fresh air' let into the church by Vatican II is, in fact, merely stale, recycled five-hundred year old musings of long-dead ivory tower scholars. Here are a few words from A Modern Priest to whet your appetite: "I had taken leave from my priesthood because I knew that my Church was not a reflection of what Jesus had taught. I watched our bishops return from the II Vatican Council and saw them still too untrusting and terrified to make changes of real substance. Latin was changed to English, Gregorian chant gave way to hymns, and the altar faced the People. "But the control of human minds remained unchanged. The secret screams of spiritually battered men and women were not addressed! My Church still was invading bedrooms, abusing consciences, distorting sex, manufacturing sin, patronizing other sects, dishonoring women, turning custom to law and loving myths to angry dogmas. 
Sincere souls were condemned or threatened with a hell that never was. But most of all, it continued to discredit the voice of God that speaks to each of us in private revelation." (Page v). Who better knows the inner workings of the Roman Catholic Church than her own priests? Since my first encounter with the Rev. Kavanaugh back in 1968, I have learned that he was only one of the first to 'come out' about his leaving Rome. Actually, he was in the middle of a tidal wave - disillusioned Roman Catholic priests leaving their titular position behind to find and worship the God of the Bible rather than the cannon laws of a pharisaical Rome. Back then it was almost unheard of that a priest should resign and seek spiritual life elsewhere - thus my compliment on his courage to go public. Today, according to figures found at several internet sites, there are approximately 110,000 priest who have followed Kavanaugh's brave lead. At least half of them left Rome after the close of Vatican Council II! Thus the others must have left before or during that church council. Why did we never hear of disenfranchised ex-priests years ago? First because Rome suppressed the statistics, and second because those men were perhaps too honorable, or too afraid to speak out. It is not so today, thank God. Ex-priests no longer fear the condemnation of the church, nor of their relatives, neighbors and friends. Indeed, such censure from the community is but a shadow of its former self. The anathemas (curses) placed by Rome on her renegade priests and those who love and support them fall on too many deaf ears. Rome is losing control. Fear, her trump card, today motivates only the docile who prefer to let others do their thinking for them when it comes to spiritual matters. | Read this Review Again | Read Next Review | Back to Top of Page | List of Books Reviewed | Comments? | From Catholicism to Christ by Ralph Filicchia. Copyright 1991 by Ralph Filicchia, Revival Literature, P.O. Box 6068 Asheville, NC 28816: ISBN 1-56632-000-3. Ralph Filicchia is a man just like me. He was raised a nominal Roman Catholic, just like me. He attended a retreat at St. Gabriel's Monastery in a Boston suburb, just like me. And just like me, at that retreat he submitted a question to be answered by the resident priest who conducted the retreats for Catholic men. His question, a very serious one about salvation, was tossed aside and ignored. That's where our experiences vary. When I submitted my very serious question, it was answered - with a joke! In my case the retreat master ducked the issue by mocking the questioner - me! Mr. Filicchia visited a number of priests in his effort to find the truth. After one such visit, he recounts that, "I left, shaking my head. The moment you displayed even a meager knowledge of the Bible, the average Catholic priest would turn you right off. But why were they so unfamiliar with what I said? Surely they must have read these things for themselves, or at least debated them in seminary. Or did they?" (Page 36). Because I, too, sought the help of a number of priests, and even a Roman Catholic theologian, I can confirm Mr. Filicchia's experiences and observations. The typical priest displays an extremely limited awareness of Scripture. Don't you find this amazing? Should not the priest, above all others, be intimately familiar with the Word of God? Sadly, most are not. Some of you will easily identify with Mr.Filicchia's life. 
Certainly you will find in these pages the importance of sharing your faith in Christ, and in the Word of God. This author, like so many others, proves beyond doubt the power in that Word, the power in Christ, and the sad ignorance of so many spiritually dysfunctional robots created by the Church of Rome. You will find Mr. Filicchia's testimony more or less follows the pattern of many Catholics who search for spiritual truth, and end up departing from Rome. Click here to read some similar stories of 'recovering Catholics.' | Read this Review Again | Read Next Review | Back to Top of Page | List of Books Reviewed | Go to Table of Contents | Escape From Purgatory By Joseph R. Schofield, Jr. Copyright 1994 ISBN: 0-9639271-1-6 An outstanding feature of Escape From Purgatory is its synoptic table of contents, a rare but very useful tool that gives you a thematic outline of the book with a synopsis of each covered topic. The author starts by establishing the credibility of the Bible as the inerrant, inspired Word of God. This he does by appealing to fulfilled prophecy, archaeological evidence and scientific discoveries. Escape From Purgatory exposes Rome's lies about her traditions and papal succession, and demonstrates how beguiled Roman Catholics are unwitting prisoners of the laws of the worlds biggest cult. The book evaluates the cult of Mary worship, showing how it switches attention from Christ to Mary as one's hope of salvation. It concludes with a review of Bible prophecy on the return (sic., second coming) of Christ. Schofield suggests that Jesus will return very soon, perhaps in our own generation. Since his primary focus is on the church of Rome, Schofield's treatment of this prophetic scenario is understandably light. Schofield's book suggests that we make sure of our salvation through faith in Jesus Christ, and Him alone. This is the Biblical way, not the way of Rome. In the final analysis, you must choose whom you will serve - the Jesus of the Bible or the Cannon lawyers of the Roman Catholic Church. Your eternal salvation depends on your choice! | Read this Review Again | Read Next Review | Back to Top of Page | List of Books Reviewed | Surprised by Truth Edited by Patrick Madrid; Copyright 1994, Basilica Press, P.O. Box 85152, San Diego, CA 92186 ISBN 0-9642610-8-1. Forward by Scott Hann. Perhaps the biggest surprise in Surprised by Truth is the discovery that eleven (11) men and women from very different backgrounds, and with different educational levels, all write with exactly the same precision! As I review this book I am almost forced to violate one of my dearly-held principles, draw my colt .45 and take a shot at the messenger! I hate doing it, but this case deserves it. You see, I have been a professional writer, editor and publisher for many years. I have been the manager of such people as well. While Patrick Madrid's name embellishes the title page as editor of Surprised by Truth, by the time you have digested the first two or three 'testimonies' it becomes evident that he functioned not so much as the editor but as the writer! If any of the editors I worked with had done what Mr. Madrid has done with his source material, I would have fired him or her for professional incompetence. Its not that Mr. Madrid is a poor writer, he isn't. His writing is steady, concentrated, clear, concise, direct and easy to understand. The entire book carries the hallmark of a single person. 
The sentence structures, use of terms, arguments, and so forth are those of one person, not of the eleven contributors. Its as if the editor simply used his source material as a base from which to launch his personal agenda. This fact alone made me very suspicious as I read Surprised by Truth through to the end. And, sure enough, other markers appeared. For example, you will notice that eight of the eleven contributors came from, out of, or through Calvin's offspring, the Presbyterian Church, one was a non-observant Jew, one a former Baptist, and one seems to be a former Catholic who later returned to the Catholic Church. I can't help but wonder why this is true! I have no argument with the individual stories of struggle and painful decisions. But you could select any eleven people and you would find their testimonies very different in writing style, language, and theological arguments. Certainly you would expect to find a certain amount of common ground - but not eleven people who share similar religious backgrounds, theological arguments, and writing styles. I should know. This site contains the testimonies of a number of former Roman Catholics. I did the editing. I left their personalities and writing styles alone, save to correct misspelling and punctuation, and to add paragraph headings. You can see their individualism as well as their common points. Not so with Surprised by Truth. Surprised by Truth is also surprising in its apparent lack of understanding of Protestant theology. Given eight former Presbyterians, you would expect to see a fairly accurate representation of Calvinism. It isn't there. The contributors who passed-through an Evangelical church display remarkable ignorance of the basic principles of Evangelicalism too. What is displayed, however, is the stereotypical, Rome-defined straw men in their places. This is a typical Roman Catholic approach to apologetics - put forth an erroneous definition and attack it. Since it is wrong to begin with, such a straw man is easy to debunk. This little book seems to presume that when ten - count them - ten former Protestants convert to Roman Catholicism, the very number itself is evidence of the validity of such conversion. I don't really believe that numbers prove anything more than, well, numbers. If a significant number of people do or change something such as religions belief, it is sufficient evidence that something is happening, not that what is happening is right. To such an assumption, let me counter with another book, Far From Rome, Near to God. Here we have not ten, not eleven, but fifty former Roman Catholic priests who left the Church of Rome. If numbers lend credence, guess in which direction the greatest number are moving! Click here for more info on this book. I must say that I did enjoy reading the book. I always enjoy personal testimonies, especially when they contain elements of drama and human struggle. The stories themselves are most interesting from that point of view. Just don't rely on the 'theology' you find between the covers. For a review of a Christian counterpart of this volume, check out Far From Rome, Near to God, which documents the lives of fifty former Roman Catholic priests. Also see Seduced By Error for an in-depth analysis of "Surprised." | Read this Review Again | Read Next Review | Back to Top of Page | List of Books Reviewed | Go to Table of Contents | Comments? 
| Far From Rome, Near to God, Compiled by Richard Bennett and Martin Buckingham, copyright 1994, Published by Associated Publishers & Authors, Inc., P.O. Box 4998, Lafayette, IN 47903; ISBN 1-878422-72-4 Here is are fifty, often heart-rending accounts of former Roman Catholic priests who have exchanged the bondage of Rome for the freedom of Christ. The compilers have done an admirable job of letting each former priest give his own story in his own words. While this makes for some occasionally difficult text, the adherence to each author's original manuscript lends a sense of life and reality to each story. Who should best know the inner workings of the Roman Catholic Church than her own, very well educated priests? Hear, from their own lips the horrors of monastic life, where priests-to-be are required to whip themselves daily, clean the floor with their tongues, or lay across a doorway so that others will walk on them. Yes, this stuff is still going on! Thrill with each one as they encounter the wide gulf between the 'salvation-by-works' gospel of Rome and the 'salvation-by-grace' Gospel of the Bible. Struggle with them as they are forced to choose between the security and prestige of the priesthood and a potential future with no job, no clothing, no place to live. Be encouraged as you see the hand of God bless them for their faith in Christ, and Him alone. Here are the testimonies of fifty former priests to compare with the handful of 'new' Catholics who crossed the bridge in the other direction. You will find no editorial license in Far From Rome, Near to God as you find in Patrick Madrid's Surprised by Truth.. The compilers (not editors), Richard Bennett, former Dominican priest, and Martin Buckingham, simply present these testimonies as written. You will also find a list of resources available to Roman Catholics who may be considering following in the footsteps of these brave men. | Read this Review Again | Read Next Review | Back to Top of Page | List of Books Reviewed | Go to Table of Contents | Quite Contrary: A Biblical Reconsideration of the Apparitions of Mary, by Timothy F. Kauffman. Copyright 1997 ISBN 0-9637141-8-X Ever wonder how the many 'apparitions' of Mary stack up against the Word of God? Here is a well-researched analysis for your reading pleasure. Starting with the Scriptural premise that we are to 'test the spirits' ( 1 John 4: 2-4) to be sure they are of God, the author examines many supposed apparitions of Mary. Amazingly, he found that Roman Catholics never apply this God-given test. They prefer, instead, to go with their 'feelings' that the apparitions must be of God because they 'sound so holy' and often refer to God and Christ. Quite Contrary traces the Roman Catholic doctrines of the Immaculate Conception, the Assumption, and Papal Infallibility to their true source - the many 'apparitions' of Mary! He then shows how the 'infallible' Pope uses his 'infallible' authority to support those same apparitions! Truly a self-serving circular argument if ever there was one! In only one case was an apparition challenged with the test of 1 John 4: 2-4. And in that case the apparition changed the subject and immediately disappeared! That should tell you something! These apparitions fall under the umbrella of 'paranormal experience.' It is here that we need to understand that God is not the only one able to cause things to happen in this world, and in our minds. Satan has the same ability. 
Thus, when confronted with an apparition that claims to be Mary, the mother of Jesus, we must test it according to God's Word. The warm fuzzies is no test at all. Thus does Kauffman examine appearances at Lourdes France, Fatima Portugal, Guadalupe Mexico, Garrision, NY, Mt. Carmel Italy, Medjugorje Bosnia, and dozens of others. He evaluates the messages from each apparition and shows us how those messages directly contradict the Gospel of Jesus Christ. After tracing the historical development of several major Roman Catholic dogmas, the author suggests that still more major dogma's are about to emerge from Rome at the prompting of the 'apparitions.' In particular, he postulates that the next major doctrinal change to come from Rome will have to do with the position of Mary as the 'Mediatrix of all Grace.' While this is actually being promoted today, it has yet to reach the level of 'infallible dogma.' But with the apparitions pushing Rome to elevate Mary to a place in the Trinity, it probably won't be too long before the Trinity is no longer a trinity, but a quadrinity! Rome is quickly heading toward an official, 'ex-cathedra" (and therefore 'infallible') dogma presenting Mary as the only way Grace is passed out, and as our true 'high priestess' for salvation. | Read this Review Again | Read Next Review | Back to Top of Page | List of Books Reviewed | Go to Table of Contents | Comments? | Roman Catholicism, by Lorraine Boettner. Some Roman Catholic apologists refer to Roman Catholicism as 'the Protestants bible on Roman Catholicism.' That is a fair evaluation! Here in one volume you can find a detailed and accurate presentation of the Roman Catholic Church and its many extra-Biblical, non-Biblical, and at times anti-Biblical dogmas and doctrines. If I had only one book to commend, this is it. Available from your online book dealer. Copyright 1962 by Presbyterian and Reformed Publishing Co. ISBN: 0-87552-130-4. Windswept House, by Fr. Malachi Martin. Although written as a novel, one can readily see that Rev. Martin, a former Jesuit priest with intimate Vatican knowledge, bases his story on factual events and very real people-some of whom are very easy to recognize. I think you will be enthralled by the journey of a lone Catholic Priest, dedicated to the traditional Catholic Church, through the darkness of Vatican intrigue and connections with the occult. A very well written book. Vicars of Christ: The Dark Side of the Papacy, by Peter DeRosa. Here is perhaps the best portrayal of the papacy ever written, if not the most complimentary. Guaranteed to make Catholic apologists sputter and complain (but not to offer any noteworthy arguments against its content, for there seems to be little of that to draw upon). Strangely, this volume is out of print in the United States, and you'll have to use one of the book search internet sites to find a used copy. But the search is will worth the effort! Formidable Truth: A Vindication of Lorraine Boettner, by Robert M. Zins, Th.M. This excellent paperback is a response to the efforts of Roman Catholic apologist Karl Keating to debunk Mr. Boettner's monumental work, Roman Catholicism. In his book, Catholicism and Fundamentalism, Keating mounts a scathing attack on both Boettner and his work. If you harbor any doubts, I suggest that you first read Boettner's Roman Catholicism, then Keating's work, and then the present volume. 
Zins does an excellent job of revealing shallowness and and stereotypical Roman Catholic 'logic' which is at best, illogical, and at worst, downright deception. Zins responds lucidly to each of Keating's charges against Boettner, and brilliantly proves Boettner to be right on each point. Zins sheds much light on that favorite Roman Catholic apologists' methods: pull something out of context, make a straw-man of it, demolish the straw-man, and then proclaim that the Evangelicals' position has been proven wrong. One expects such machinations from politicians or businesspeople, but from the religious quarter too? Shame on you, Mr. Keating. Copyright 1997 by Robert M. Zins, Th.M. ISBN 0-09637141-5-5. Available from White Horse Publications, P.O. Box 2398 Huntsville, AL 35804-2398, or by calling toll free: 800-867-2398. This volume is a replication of Appendix II of Zin's Romanism: The Relentless Roman Catholic Assault on the Gospel of Jesus Christ, by Robert M. Zins, Th.M. which is available either from the above source, or from AMAZON.COM, the online bookstore at. The Two Babylons, by Rev. Alexander Hislop. This is a great book of Christian apologetics, written in a classical style that many modern readers may have difficulty with. If you can overcome some initial impatience with the lengthy and colorful writing style, you will find an incredibly detailed description of the true source of many Roman Catholic dogmas, doctrines, Traditions and customs. Catholicism and Fundamentalism, The Attack on Romanism by "Bible Christians", by Karl Keating, one of today's best-known Roman Catholic apologists. In a home page such as this, it is only fair to present both sides, right? Since Keating is so well known, and this work is considered one of the best Rome has to offer, I will recommend it to you. At the same time, I suggest that you get a copy of Robert M. Zins' Th.M. paperback entitled Formidable Truth, which analyzes Keatings' attack on Lorraine Boettner's Roman Catholicism. Copyright 1988 by Ignatius Press, San Fransisco. ISBN 0-89870-177-5(SB)/0-89870-195-3(HB). | Back to List of Books | Back to Top of Page | Go to Table of Contents | Comments? | Immaculate Misconceptions, a Self-help Book for former Catholics, by Sherry Bishop. Here is a serious book by a psychotherapist, teacher and consultant who considers herself a 'recovering Catholic.' The author deals extensively with the psychological ramifications of leaving the Roman Catholic Church. The Myth of Mary, by Cesar Vidal, Chick Publications. A very good analysis of two Mary's: the Mary of the Bible vs the Mary of the Roman Catholic Church. Read how Rome bases much of her teaching about her Mary on known-false "gospels," and how those teachings contradict the very "Fathers of the Church" and Scripture. The author also shows the amazing parallels between the Roman Catholic "Mary" and pagan goddesses such as Isis, Peresphone, Demeter, Cybele Astarte, and Athena to name a few. The Question and Answer Catholic Catechism, by John A. Hardon, S.J. Here is an official Roman Catholic catechism that will remind you of the old Saint Joseph Catechism in its style of presentation. For the cash-short Protestant who wants a good overview of what Catholics believe, this paperback will fit the bill. Be sure to examine the presentation of God's Ten Commandments; see if you can find the Second Commandment therein! The New Saint Joseph Baltimore Catechism, Official Revised Edition, by Father Bennet, C.P. A revival of the old-time favorite! 
Catechism of the Catholic Church, Libreria Editrice Vaticana, English translation. This is really the most comprehensive catechism available, and the one I quote most frequently at this site. Copyright 1994, United States Catholic Conference, Inc. Available from St. Paul Books & Media, and almost any Roman Catholic book store.

The New Jerome Bible Commentary, by Raymond Brown, S.S., Joseph A. Fitzmyer, S.J., and Roland E. Murphy, O. Carm. A hefty, useful Roman Catholic commentary on the Holy Bible. This is the kind of book you keep on hand as a reference, along with your concordances, lexicons, and so forth. Roman Catholics don't know much about lexicons and concordances simply because they are not really encouraged to do a serious study of Scripture.

Dogmatic Canons and Decrees of the Council of Trent and Vatican I, plus the Decree on the Immaculate Conception and the Syllabus of Errors of Pope Pius IX. Another official Catholic publication, in a low-cost paperback. A concise presentation of key Roman Catholic doctrines, and the curses that the Church places on all who disagree with Rome. Very interesting reading. Copyright 1977 by Tan Books and Publishers, Inc., P.O. Box 424, Rockford, IL 61105. Bears Imprimatur and Nihil Obstat. Available from your online book dealer.

Epiphany: A Theological Introduction to Catholicism, by the Dominican friar Aidan Nichols, O.P. From the Liturgical Press, copyright by the Order of St. Benedict, Inc. An 'insider's' view of the Catholic Church in modern times.

Catholicism at the Dawn of the Third Millennium, by Thomas P. Rausch, S.J., professor of theology at Loyola Marymount University. Copyright by the Order of St. Benedict, Inc. A Jesuit's view of the modern Catholic Church. Some of his facts and opinions might well surprise you.

Counterfeit Revival, by Hank Hanegraaff. An analysis of the numerous but false 'revival movements' in today's America. This book has nothing to do with Roman Catholicism, but can prove very helpful to any Christian interested in the subjects of worship and healing.

The Catholic Myth: The Behavior and Beliefs of American Catholics, by Andrew M. Greeley, Roman Catholic priest. This volume presents an insightful analysis of what Roman Catholics in America really believe and do, and it bears faint resemblance to the official teachings of the Roman Catholic Church. Get an insider's view of such issues as church attendance, sexuality, ordination of women, election of bishops, Catholic schools, Catholic contributions (very low per capita), the moral crisis in the priesthood, and others.

Reckless Faith: When the Church Loses Its Will to Discern, by John F. MacArthur. An analysis of current agreements between Evangelicals and Roman Catholics, and an evaluation of the 'laughing revival' that began in the Airport Vineyard church in Canada in 1994.

The Mouth of the Lion, by Dr. David Allen White. If you like biographies of interesting people, don't miss this one! Dr. White, in one of the most masterful pieces of writing I have ever had the pleasure to read, portrays the life of Roman Catholic Bishop de Castro Mayer, of the diocese of Campos, Brazil. By taking a heroic stance in support of the traditional Roman Catholic Mass (also called the Tridentine Mass), Bishop Mayer came under an incredible attack by his fellow bishops and officials in the Vatican itself.
All that Bishop Mayer wished to do was to fulfill the vows he made to God when he was ordained by one of those selfsame attackers. Like all priests of his time, Mayer had vowed never to say any Mass but the traditional, Tridentine Mass, and to stand firmly against 'modernism' as defined by Pius X's Syllabus of Errors. That vow was required by the same church that persecuted him for obeying it! Enter now a world of political machinations and intrigue, of physical assaults and public challenges, as proponents of the 'new order of the Mass' (also called the Novus Ordo) seek to humiliate and destroy a man who refuses to roll over and play dead. Thrill as you see this obscure and humble man finally bring numberless adversaries to ultimate confusion and failure.

The good news is that the good guy wins in the end! The bad news is that in the winning, the Roman Catholic Church entered another major schism (split). Today there are seemingly two different Roman Catholic churches: one based on the Tridentine Mass and Catholic Tradition, the other based on the changes from Vatican Council II. Each group has its own bishops and priests, its own seminaries, its own convents and nuns. Each claims the other to be false. Rome does her customary best to hide this split by publicly ignoring it, but I predict that this particular dilemma is not going to go away. Time will tell. Meanwhile, The Mouth of the Lion makes great reading, and gives a very good background into the modern Roman Catholic Church, by a dyed-in-the-wool Roman Catholic author. Copyright 1993 by Angelus Press. ISBN 0-935952-88-8. Available from your Internet book supplier.

Still under construction. Please visit again as we add to this list of book reviews!
GSM SIM900 Shield w/ Quad-Band GSM

RM185.00 - Product Code: GSM-SIM900-Shield - Availability: In Stock.

Package included: GPRS/GSM SIM900 Shield development board x 1

Note:

- Make sure your SIM card is unlocked.
- The product is provided as-is, without an insulating enclosure. Please observe ESD precautions, especially in dry (low-humidity) weather.
- This SIM900 board model does not include pins to mount directly on the Arduino; you can wire it up manually, since it is controlled over UART and only needs TX and RX.

Tutorial from:

First we enter the card's PIN to unlock it using AT commands, then give the module some time to connect to the network. The command we use is AT+CPIN="XXXX", where XXXX is replaced by the PIN of your card.

#include <SoftwareSerial.h>

SoftwareSerial SIM900(7, 8); // Serial port for the SIM900; use pins 10 and 11 on the Arduino Mega

void setup() {
  // digitalWrite(9, HIGH); // Uncomment to power the card on by software
  // delay(1000);
  // digitalWrite(9, LOW);
  delay(5000);                        // Give the GPRS module time to power up
  SIM900.begin(19200);                // Serial port speed for the SIM900
  Serial.begin(19200);                // Serial port speed for the Arduino
  Serial.println("OK");
  delay(1000);
  SIM900.println("AT+CPIN=\"XXXX\""); // AT command to enter the PIN of the card
  delay(25000);                       // Time to find a network
}

void call() {
  Serial.println("Calling...");
  SIM900.println("ATDXXXXXXXXX;");    // AT command to dial a number
  delay(30000);                       // Stay on the call for 30 seconds
  SIM900.println("ATH");              // Hang up the call
  delay(1000);
  Serial.println("Call completed");
}

void loop() {
  call();     // Make the call once
  while (1);  // Then halt
}

The code to send an SMS is almost identical, but we create another function that issues the AT commands for the message. First we use AT+CMGF=1\r to tell the GPRS module we are going to send a text message, then we give the destination number with AT+CMGS="XXXXXXXXX". Once this is done, we simply send the content of the message and terminate it with the ^Z character (Ctrl+Z, ASCII 26). The function looks like this:

void message_sms() {
  Serial.println("Sending SMS...");
  SIM900.print("AT+CMGF=1\r");              // Set text mode for sending/receiving messages
  delay(1000);
  SIM900.println("AT+CMGS=\"XXXXXXXXX\"");  // Number the message will be sent to
  delay(1000);
  SIM900.println("SMS sent from an Arduino. Greetings from Prometec."); // SMS text
  delay(100);
  SIM900.println((char)26);                 // ^Z ends the message
  delay(100);
  SIM900.println();
  delay(5000);                              // Give the module time to send the SMS
  Serial.println("SMS sent");
}
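The same AT conversation can also be driven from a PC if the shield's UART is attached through a USB-serial adapter, which is handy for testing commands before wiring up the Arduino. Below is a minimal Python sketch using the pyserial library; the port name /dev/ttyUSB0, the send_at helper, and the PIN and phone-number placeholders are illustrative assumptions, not part of the product documentation.

import time
import serial  # pyserial

def send_at(port, command, wait=1.0):
    """Send one AT command and return whatever the module replies."""
    port.write((command + "\r\n").encode("ascii"))
    time.sleep(wait)
    return port.read(port.in_waiting or 1).decode("ascii", errors="replace")

with serial.Serial("/dev/ttyUSB0", 19200, timeout=1) as sim900:
    print(send_at(sim900, 'AT+CPIN="XXXX"', wait=5.0))  # unlock the SIM with its PIN
    print(send_at(sim900, "AT+CMGF=1"))                 # switch to text mode
    print(send_at(sim900, 'AT+CMGS="XXXXXXXXX"'))       # destination number
    sim900.write(b"SMS sent from Python.\x1a")          # message body, ended by ^Z (0x1A)
    time.sleep(5)
    print(sim900.read(sim900.in_waiting or 1).decode("ascii", errors="replace"))

If the module answers OK to each command here, the same sequence will work unchanged from the Arduino sketch above.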
More server issues this morning. Be aware. On another note, I think we need a word that describes what happens when a piece of technology stops working until you call technical support, only to start working again perfectly when you finally reach the tech support dude. Because, no lie, this has happened to me twice in the last day with the server: the moment the dude said "hello, tech support," I was suddenly able to connect to my site. And then, this morning, the moment I hung up, I couldn't reach the site again. Now, that's no coincidence; that's cyber-karma messing with my head. Fortunately I made the tech dude run a diagnostic anyway. Who's the smart one now, balky server? Huh? Huh? Huh? Anyway, if one of you can come up with a word for that situation, I'd be appreciative. Yes, I know, I'm the writer here, I should be coining words, not you. But come on. You know you love the audience participation stuff.

There's already a perfectly good term for that: landlord syndrome. As in: your apartment AC obstinately refuses to work through the hottest weather in years, half a dozen attempts to repair it have failed, so you call your landlord and complain about his incompetent maintenance. The landlord comes over himself to see the situation. The very instant he walks in, the AC fires up and works perfectly thereafter. And the moment he leaves, it quits again.

"Murphyan synchronicity"?

"Heisenbug". (A term of art among programmers for some time now; there are entries in the Jargon File and Wikipedia.)

Technomockery. The device is laughing at you.

I vote for "Murphyan synchronicity." It's perfect.

It is called life. I have been on both sides of this. I have taken my car to the shop because it is making a weird noise. Shizam! The noise stops as soon as I get there. I have also done local PC support, and I don't know how many times someone came to me with computer issues only to have them disappear as soon as I got to their desk. By the way, the server still seems a bit laggy.

When I was a desktop tech I used to describe the phenomenon to users all the time. It's the technician chip. See, computers have a chip in them that detects when someone's about to fix them and causes them to work until that person goes away. Some brands of cars come with a technician chip, too.

I'd call it "typical."

Katherine @ 7: I think people have that chip, too. When I have a mild but persistent illness and finally decide to see a doctor, I usually start feeling better within an hour of making an appointment…

My cousin and I already have a word for that kind of situation: "Bitterman." Named after two brothers who, though not particularly impressive guys, broke our respective hearts anyway. Can it be coincidence that their last name contains the word "bitter"? I think not.

How about "Nerdgatory"? That's Nerd + Purgatory.

Eddie @ 9: The technician chip works in mysterious ways!

It's called "technology." A friend of mine once defined "technology" as "something that doesn't quite work." When it works, it's an appliance. Refrigerators used to be technology. Now they're appliances. Computers are technology.

I firmly believe that technology is cussed enough to know exactly when working and not working would be most disadvantageous to its users. It would certainly explain the last few elections.

Aura of Consultant. It's that the technology is afraid of what we'll do to it if it fails, so it plays nice while we're there.

Definitely a heisenbug, although it looks like the estimable Reverend Dodgson beat me to it.
We always called it Technician Proximity Syndrome back when I was working technical support.

It's the electronic anthropic principle. Sometimes they just need attention from the right people. Unless you pay per call to tech support. Then it's just a switch they set to flip at random intervals to take your money.

How about "spime"? (What? I -like- namespace collision…)

I like your cyber-karma idea.

Schrödinger's Help Desk.

I thought that was just the standard intermittent outage? The phone company and I had a three-month go-around with that (they would automatically cancel appointments when their diagnostics detected no problem). After many, many calls, all the way up to Albany and headquarters, while discussing my small baby and inability to reach 911 in emergencies, the senior tech discovered that a mouse was gnawing the wires, and whenever it rained or there was a heavy wind I would have an "intermittent outage" which never happened to coincide with when they would mosey out during good weather.

Techopause.

Intechruption.

Anu Garg's A.Word.A.Day email list once featured a word that fits this situation: resistentialism (ri-zis-TEN-shul-iz-um), noun: the theory that inanimate objects demonstrate hostile behavior toward us. It's my favorite word of all the ones I've learned from that list.

Heisenbuggery is definitely a subset of resistentialism.

Once, this happened to me, only it was 100% testable. I would pick up the phone and dial a number, and the internet would work. The moment I hung it up, the internet would die again. The culprit? A ground on my phone line that was being broken when I dialed out, which was keeping my DSL modem from making its proper connections. We ended up having to have some work done on our phone lines, removing the old alarm system from the loop, and it finally worked 100%. I thought I was going crazy, but luckily the scientific process helped me in the end.

Wikipedia provides a long list of weird computer bugs and their names. Their definition of Heisenbug isn't an exact match.

I think Leila @23 has hit the nail on the head. There's a corollary behavior, where something you're working on works great while you're tinkering with it, but then completely and utterly fails when you go to show it to someone. This is exemplified well by this strip from PhD Comics. Incidentally, I think another term for this is the 'perversity of the inanimate'? Am pretty sure Piers Anthony used/uses it a lot in his Xanth series?

This behavior is best explained by quantum bogodynamics. Obviously you or something in your vicinity is a bogon source. When you call tech support, they become part of the virtual local bogon field. Since the technically adept tend to be bogon sinks rather than sources, the net bogon flux is reduced, removing the cause of the problem. When they leave or hang up, the bogon field returns to its former condition, and the problem returns. Further reading:

As the person who gets to provide the tech support, my description of this phenomenon usually boils down to, "It fears me."

Ever since I was a user consultant in college, I have always called that "Voodoo problem solving." Of course, usually the problem stayed gone after the technical consultant was brought in and the user failed to repeat the problem and everything "just worked". This worked whether I was the consultant or the user.
At least once when I was in grad school, I remember calling someone over to watch me type in a command in Matlab that just *would not work* so that it *would* work. Which it did, as soon as they were watching.

There's another effect worth mentioning: sometimes the things you do to try to debug something can make the bug disappear. This is a real effect. I believe the term heisenbug is more aptly applied to this effect than to what John experienced. I don't think John was trying to suggest that somehow his reaching the technician was really affecting the server.

When I fixed computers for money, we'd occasionally get machines in that were allegedly bluescreening and making weird noises and showing various other horrible death omens. And we would be utterly unable to duplicate any of the alleged problems, even after hours and hours and hours of testing. And then the owner would take it home and it would be fine. We referred to such a device as "afraid of techs." We compared it to the kid who tries to get out of school by feigning ebola and, when told "okay, I'm making an appointment at the doctor," pops up all "NO NO NO IT'S COOL I FEEL BETTER NOW."

You could certainly put this in the general class of Heisenbugs. A Heisenbug is any bug where the act of observing it makes it go away, or at least change behavior. In programming they're typically caused by using uninitialized memory (which very well may have different contents under a debugger) or timing issues, since the timing is different under a debugger. With an outside chance of bad compiler optimizations that get turned off when you do a debug build.

From my tech support guys: "We just call that PEBKAC*." Which is all very well and good insofar as they're concerned, but probably doesn't describe your actual situation. Still, it is a glimpse into the logic of tech support! (*Problem Exists Between Keyboard and Chair.)

This is clearly related to the "demo effect": there, the thing will work perfectly until you go to show it off (with the hope that it will continue to work), and then it will break. Here, it won't work until you go to complain about it (with the hope that it will continue to not work), and then it will stop being broken.

Dystechxia. After all, it's an order thing. Similar to Karmic Dyslexia; good things happen to me… at the wrong time.

I second (third, fourth, whatever) "heisenbug". It's a bug that isn't there when you look for it.

Anon @32, we used to say "the problem is with the nut that connects the keyboard to the monitor."

That happens from time to time with me. I call it "I hate you". So what I do is this: commit extreme violence where the server can see it. Where "extreme" is just extreme to technology. Example: one day in the lab, one of the machines decided to refuse to boot up after we upgraded its kernel. After much tinkering from the more technical of us (and we were all pretty damn technical), I took a stack of 3.5″ floppies (ah, ancient days) and started tearing them apart. Violently. It's fun, actually: slam the case against the table until it cracks, then rip it apart, then tear up the fine white cloth inside into itty bits, then rip the plastic apart (works if you're mad enough), then crumble the magnetic disk and stomp on it. HARD. Pile the remains in front of the server. I restarted it. It worked. Flawlessly. Since then, that has been how I deal with mysterious technical funk.
Of course, ever since my servers moved to datacenters far away, I have not been able to exercise this as effectively.

The innate animosity of inanimate objects.

The main problem I see with computers is that there's nowhere you can kick 'em. Hey, that's what always worked with my mom's washing machine!

I'll take a fifth of Heisenbug, if you please. (Currently working on one of those, in fact. Thank you, cfengine.)

Suddendieuppedness.

technosham, v.i. 1. To fail or misfunction in the absence of technical support. 2. To resume normal function when observed by (or in the presence of) a Technically Clued Person.

technoshamming, n. 1. Machine dysfunction in the absence of technical support. 2. Resumption of normal machine function when in the proximity of technical support.

Skip @31: Sure, if it really was the act of observing it that made it go away. But if it was just coincidence that it started to work when John got hold of tech support, then is it still a heisenbug?

"feigning ebola" (with the lower case) is the name of my next band. Dr. Phil

Anon, yes, it is still a heisenbug. Calling tech support is an attempt to debug, so it counts. Granted, many heisenbugs are related to debugging, but the term also certainly covers things that really do just look random. Anybody who has worked as an admin for any amount of time will tell you that there are quantum effects we simply don't grok at play in modern computing. Why do you think we all have copies of the excuse generator? :)

"Meaningful Scalzonicity"

sng, leaving aside for the moment the actual terminology, I still think it is useful as a software developer to distinguish between situations where there was actually something about the observation that changed the behavior, and situations where I just want to indicate that there was an annoying, repeated sequence of events with no reasonably likely causal mechanism.
Mechanic’s Syndrome — the knocking stops the second your car pulls into his garage. I think the Hacker’s Dictionary refers to this sort of thing as a Dancing Frog, after the Warner Bros. cartoon in which the frog would only sing and dance in front of the one guy and was at all other times a perfectly ordinary frog. Though that may be specific to irreproducible software bugs and not used in a broader sense. Looks like not every version includes it, but here it is: Nincomfarction: A unit of measure used to quantify the innate tendency of any hardware or software to make an enduser look like a complete idiot in front of trained experts. ex: WordPress has a rating of 12 nincomfarctions.
link: sorry to be back on the forum asking another question so soon, but here it is…
def cho_han(guess, amount):
  roll1 = random.randint(1,6)
  roll2 = random.randint(1,6)
  roll_total = roll1 + roll2
  even_numbers = [2, 4, 6, 8, 10, 12]
  odd_numbers = [1, 3, 5, 7, 9, 11]
  if (roll1 + roll2 == even_numbers) and (guess == "Even"):
    result = "win"
    final_amount = money + amount
  elif (roll1 + roll2 == odd_numbers) and (guess == "Odd"):
    result = "win"
    final_amount = money + amount
  else:
    result = "lose"
    final_amount = money - amount
  print( "Your rolls were %.f and %.f for a total of %.f. Since you guessed %s, you %s and now have $%.2f" %(roll1, roll2, roll_total, guess, result, final_amount) )
  return final_amount

When running the game, it always gives the result that you have lost, no matter whether you guess "Even" or "Odd". This leads me to believe that both my if and elif statements are auto-failing, and the default through the else statement is to just say you lose. I have never used a list of numbers like this in Python before and figured I would try it here just to save time when checking whether the resulting dice toss is even or odd. I assume that is where I am going wrong? Can someone tell me how to correct it?
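Your suspicion is right, and the culprit is the comparison, not the list idea itself: roll1 + roll2 is a single integer, and in Python an integer compared to a whole list with == is always False, so both branches fail and the else always runs. You need a membership test (roll_total in even_numbers) or, simpler still, the modulo operator. Note also that money is never defined inside the function, so it must come from outside. The sketch below is one possible fix, and it assumes the bankroll is passed in as a parameter rather than read from the exercise's global variable:

import random

def cho_han(guess, amount, money):
    roll1 = random.randint(1, 6)
    roll2 = random.randint(1, 6)
    roll_total = roll1 + roll2
    # An int is never == to a list, so test parity directly.
    # (roll_total in even_numbers would also work.)
    if roll_total % 2 == 0 and guess == "Even":
        result = "win"
        final_amount = money + amount
    elif roll_total % 2 == 1 and guess == "Odd":
        result = "win"
        final_amount = money + amount
    else:
        result = "lose"
        final_amount = money - amount
    print("Your rolls were %d and %d for a total of %d. "
          "Since you guessed %s, you %s and now have $%.2f"
          % (roll1, roll2, roll_total, guess, result, final_amount))
    return final_amount

# Example: bet $10 of a $100 bankroll on an even total
print(cho_han("Even", 10, 100))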